Sample records for model correctly identified

  1. Towards process-informed bias correction of climate change simulations

    NASA Astrophysics Data System (ADS)

    Maraun, Douglas; Shepherd, Theodore G.; Widmann, Martin; Zappa, Giuseppe; Walton, Daniel; Gutiérrez, José M.; Hagemann, Stefan; Richter, Ingo; Soares, Pedro M. M.; Hall, Alex; Mearns, Linda O.

    2017-11-01

    Biases in climate model simulations introduce biases in subsequent impact simulations. Therefore, bias correction methods are operationally used to post-process regional climate projections. However, many problems have been identified, and some researchers question the very basis of the approach. Here we demonstrate that a typical cross-validation is unable to identify improper use of bias correction. Several examples show the limited ability of bias correction to correct and to downscale variability, and demonstrate that bias correction can cause implausible climate change signals. Bias correction cannot overcome major model errors, and naive application might result in ill-informed adaptation decisions. We conclude with a list of recommendations and suggestions for future research to reduce, post-process, and cope with climate model biases.

  2. Specification Search for Identifying the Correct Mean Trajectory in Polynomial Latent Growth Models

    ERIC Educational Resources Information Center

    Kim, Minjung; Kwok, Oi-Man; Yoon, Myeongsun; Willson, Victor; Lai, Mark H. C.

    2016-01-01

    This study investigated the optimal strategy for model specification search under the latent growth modeling (LGM) framework, specifically on searching for the correct polynomial mean or average growth model when there is no a priori hypothesized model in the absence of theory. In this simulation study, the effectiveness of different starting…

  3. Identifying the theory of dark matter with direct detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gluscevic, Vera; Gresham, Moira I.; McDermott, Samuel D.

    2015-12-01

    Identifying the true theory of dark matter depends crucially on accurately characterizing interactions of dark matter (DM) with other species. In the context of DM direct detection, we present a study of the prospects for correctly identifying the low-energy effective DM-nucleus scattering operators connected to UV-complete models of DM-quark interactions. We take a census of plausible UV-complete interaction models with different low-energy leading-order DM-nuclear responses. For each model (corresponding to different spin-, momentum-, and velocity-dependent responses), we create a large number of realizations of recoil-energy spectra, and use Bayesian methods to investigate the probability that experiments will be able to select the correct scattering model within a broad set of competing scattering hypotheses. We conclude that agnostic analysis of a strong signal (such as Generation-2 would see if cross sections are just below the current limits) seen on xenon and germanium experiments is likely to correctly identify momentum dependence of the dominant response, ruling out models with either 'heavy' or 'light' mediators, and enabling downselection of allowed models. However, a unique determination of the correct UV completion will critically depend on the availability of measurements from a wider variety of nuclear targets, including iodine or fluorine. We investigate how model-selection prospects depend on the energy window available for the analysis. In addition, we discuss accuracy of the DM particle mass determination under a wide variety of scattering models, and investigate impact of the specific types of particle-physics uncertainties on prospects for model selection.

  4. Identifying the theory of dark matter with direct detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gluscevic, Vera; Gresham, Moira I.; McDermott, Samuel D.

    2015-12-29

    Identifying the true theory of dark matter depends crucially on accurately characterizing interactions of dark matter (DM) with other species. In the context of DM direct detection, we present a study of the prospects for correctly identifying the low-energy effective DM-nucleus scattering operators connected to UV-complete models of DM-quark interactions. We take a census of plausible UV-complete interaction models with different low-energy leading-order DM-nuclear responses. For each model (corresponding to different spin-, momentum-, and velocity-dependent responses), we create a large number of realizations of recoil-energy spectra, and use Bayesian methods to investigate the probability that experiments will be able to select the correct scattering model within a broad set of competing scattering hypotheses. We conclude that agnostic analysis of a strong signal (such as Generation-2 would see if cross sections are just below the current limits) seen on xenon and germanium experiments is likely to correctly identify momentum dependence of the dominant response, ruling out models with either “heavy” or “light” mediators, and enabling downselection of allowed models. However, a unique determination of the correct UV completion will critically depend on the availability of measurements from a wider variety of nuclear targets, including iodine or fluorine. We investigate how model-selection prospects depend on the energy window available for the analysis. In addition, we discuss accuracy of the DM particle mass determination under a wide variety of scattering models, and investigate impact of the specific types of particle-physics uncertainties on prospects for model selection.

  5. A dual-process account of auditory change detection.

    PubMed

    McAnally, Ken I; Martin, Russell L; Eramudugolla, Ranmalee; Stuart, Geoffrey W; Irvine, Dexter R F; Mattingley, Jason B

    2010-08-01

    Listeners can be "deaf" to a substantial change in a scene comprising multiple auditory objects unless their attention has been directed to the changed object. It is unclear whether auditory change detection relies on identification of the objects in pre- and post-change scenes. We compared the rates at which listeners correctly identify changed objects with those predicted by change-detection models based on signal detection theory (SDT) and high-threshold theory (HTT). Detected changes were not identified as accurately as predicted by models based on either theory, suggesting that some changes are detected by a process that does not support change identification. Undetected changes were identified as accurately as predicted by the HTT model but much less accurately than predicted by the SDT models. The process underlying change detection was investigated further by determining receiver-operating characteristics (ROCs). ROCs did not conform to those predicted by either an SDT or an HTT model but were well modeled by a dual-process model that incorporated HTT and SDT components. The dual-process model also accurately predicted the rates at which detected and undetected changes were correctly identified.
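
    As an illustration of the kinds of predictions being compared above, the sketch below computes equal-variance SDT and dual-process (high-threshold detection plus SDT) ROC curves. It uses a generic textbook parameterization, not the authors' fitted model, and the parameter names (d_prime, recollection-style detection probability R) are illustrative.

```python
import numpy as np
from scipy.stats import norm

def sdt_roc(d_prime, fa_rates):
    """Equal-variance SDT ROC: hit rate as a function of false-alarm rate."""
    return norm.cdf(d_prime + norm.ppf(fa_rates))

def dual_process_roc(R, d_prime, fa_rates):
    """Dual-process ROC: an all-or-none (high-threshold) detection component R
    plus an SDT component for changes that are not detected outright."""
    return R + (1.0 - R) * norm.cdf(d_prime + norm.ppf(fa_rates))

fa = np.linspace(0.01, 0.99, 99)
print(sdt_roc(1.5, fa)[:3])                 # curvilinear SDT prediction
print(dual_process_roc(0.3, 1.0, fa)[:3])   # elevated intercept from the threshold component
```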

  6. Two-compartment modeling of tissue microcirculation revisited.

    PubMed

    Brix, Gunnar; Salehi Ravesh, Mona; Griebel, Jürgen

    2017-05-01

    Conventional two-compartment modeling of tissue microcirculation is used for tracer kinetic analysis of dynamic contrast-enhanced (DCE) computed tomography or magnetic resonance imaging studies although it is well-known that the underlying assumption of an instantaneous mixing of the administered contrast agent (CA) in capillaries is far from being realistic. It was thus the aim of the present study to provide theoretical and computational evidence in favor of a conceptually alternative modeling approach that makes it possible to characterize the bias inherent to compartment modeling and, moreover, to approximately correct for it. Starting from a two-region distributed-parameter model that accounts for spatial gradients in CA concentrations within blood-tissue exchange units, a modified lumped two-compartment exchange model was derived. It has the same analytical structure as the conventional two-compartment model, but indicates that the apparent blood flow identifiable from measured DCE data is substantially overestimated, whereas the three other model parameters (i.e., the permeability-surface area product as well as the volume fractions of the plasma and interstitial distribution space) are unbiased. Furthermore, a simple formula was derived to approximately compute a bias-corrected flow from the estimates of the apparent flow and permeability-surface area product obtained by model fitting. To evaluate the accuracy of the proposed modeling and bias correction method, representative noise-free DCE curves were analyzed. They were simulated for 36 microcirculation and four input scenarios by an axially distributed reference model. As analytically proven, the considered two-compartment exchange model is structurally identifiable from tissue residue data. The apparent flow values estimated for the 144 simulated tissue/input scenarios were considerably biased. After bias-correction, the deviations between estimated and actual parameter values were (11.2 ± 6.4) % (vs. (105 ± 21) % without correction) for the flow, (3.6 ± 6.1) % for the permeability-surface area product, (5.8 ± 4.9) % for the vascular volume and (2.5 ± 4.1) % for the interstitial volume; with individual deviations of more than 20% being the exception and just marginal. Increasing the duration of CA administration only had a statistically significant but opposite effect on the accuracy of the estimated flow (declined) and intravascular volume (improved). Physiologically well-defined tissue parameters are structurally identifiable and accurately estimable from DCE data by the conceptually modified two-compartment model in combination with the bias correction. The accuracy of the bias-corrected flow is nearly comparable to that of the three other (theoretically unbiased) model parameters. As compared to conventional two-compartment modeling, this feature constitutes a major advantage for tracer kinetic analysis of both preclinical and clinical DCE imaging studies. © 2017 American Association of Physicists in Medicine.

  7. Statistical bias correction modelling for seasonal rainfall forecast for the case of Bali island

    NASA Astrophysics Data System (ADS)

    Lealdi, D.; Nurdiati, S.; Sopaheluwakan, A.

    2018-04-01

    Rainfall is an element of climate that strongly influences the agricultural sector. Rainfall pattern and distribution largely determine the sustainability of agricultural activities. Therefore, information on rainfall is very useful for the agricultural sector and for farmers in anticipating possible extreme events, which often cause failures of agricultural production. This research aims to identify the biases in the seasonal rainfall forecast products of ECMWF (European Centre for Medium-Range Weather Forecasts) and to build a transfer function that corrects the distribution biases, forming a new prediction model based on a quantile mapping approach. We apply this approach to the case of Bali Island and find that correcting the systematic biases of the model gives better results: the bias-corrected prediction model outperforms the raw forecast. We also found that, in general, the bias correction approach performs better during the rainy season than during the dry season.
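
    A minimal sketch of the empirical quantile-mapping idea described above, assuming three hypothetical arrays (model climatology, observed climatology, and a new forecast); the actual transfer functions built in the study are not reproduced here.

```python
import numpy as np

def quantile_map(forecast, model_clim, obs_clim):
    """Empirical quantile mapping: find the quantile of each forecast value
    within the model climatology, then return the observed-climatology value
    at that same quantile."""
    model_sorted = np.sort(model_clim)
    q = np.searchsorted(model_sorted, forecast) / len(model_sorted)
    q = np.clip(q, 0.0, 1.0)
    return np.quantile(np.sort(obs_clim), q)

# Hypothetical example: a model climatology that is systematically too wet
rng = np.random.default_rng(0)
model_clim = rng.gamma(2.0, 60.0, size=1000)   # mm/month, biased high
obs_clim = rng.gamma(2.0, 40.0, size=1000)     # mm/month
print(quantile_map(np.array([50.0, 150.0, 300.0]), model_clim, obs_clim))
```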

  8. Phaser.MRage: automated molecular replacement

    PubMed Central

    Bunkóczi, Gábor; Echols, Nathaniel; McCoy, Airlie J.; Oeffner, Robert D.; Adams, Paul D.; Read, Randy J.

    2013-01-01

    Phaser.MRage is a molecular-replacement automation framework that implements a full model-generation workflow and provides several layers of model exploration to the user. It is designed to handle a large number of models and can distribute calculations efficiently onto parallel hardware. In addition, phaser.MRage can identify correct solutions and use this information to accelerate the search. Firstly, it can quickly score all alternative models of a component once a correct solution has been found. Secondly, it can perform extensive analysis of identified solutions to find protein assemblies and can employ assembled models for subsequent searches. Thirdly, it is able to use a priori assembly information (derived from, for example, homologues) to speculatively place and score molecules, thereby customizing the search procedure to a certain class of protein molecule (for example, antibodies) and incorporating additional biological information into molecular replacement. PMID:24189240

  9. Phaser.MRage: automated molecular replacement.

    PubMed

    Bunkóczi, Gábor; Echols, Nathaniel; McCoy, Airlie J; Oeffner, Robert D; Adams, Paul D; Read, Randy J

    2013-11-01

    Phaser.MRage is a molecular-replacement automation framework that implements a full model-generation workflow and provides several layers of model exploration to the user. It is designed to handle a large number of models and can distribute calculations efficiently onto parallel hardware. In addition, phaser.MRage can identify correct solutions and use this information to accelerate the search. Firstly, it can quickly score all alternative models of a component once a correct solution has been found. Secondly, it can perform extensive analysis of identified solutions to find protein assemblies and can employ assembled models for subsequent searches. Thirdly, it is able to use a priori assembly information (derived from, for example, homologues) to speculatively place and score molecules, thereby customizing the search procedure to a certain class of protein molecule (for example, antibodies) and incorporating additional biological information into molecular replacement.

  10. 76 FR 22308 - Airworthiness Directives; Airbus Model A340-541 and -642 Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-21

    ... airworthiness information (MCAI) originated by an aviation authority of another country to identify and correct... PB201 were de-validated starting from the SRM revision issued on January 2009. The terminology ``De... ``de-validated SRM'' repairs and, if necessary, to apply the associated corrective actions [repair...

  11. A CONCISE PANEL OF BIOMARKERS IDENTIFIES NEUROCOGNITIVE FUNCTIONING CHANGES IN HIV-INFECTED INDIVIDUALS

    PubMed Central

    Marcotte, Thomas D.; Deutsch, Reena; Michael, Benedict Daniel; Franklin, Donald; Cookson, Debra Rosario; Bharti, Ajay R.; Grant, Igor; Letendre, Scott L.

    2013-01-01

    Background Neurocognitive (NC) impairment (NCI) occurs commonly in people living with HIV. Despite substantial effort, no biomarkers have been sufficiently validated for diagnosis and prognosis of NCI in the clinic. The goal of this project was to identify diagnostic or prognostic biomarkers for NCI in a comprehensively characterized HIV cohort. Methods Multidisciplinary case review selected 98 HIV-infected individuals and categorized them into four NC groups using normative data: stably normal (SN), stably impaired (SI), worsening (Wo), or improving (Im). All subjects underwent comprehensive NC testing, phlebotomy, and lumbar puncture at two timepoints separated by a median of 6.2 months. Eight biomarkers were measured in CSF and blood by immunoassay. Results were analyzed using mixed model linear regression and staged recursive partitioning. Results At the first visit, subjects were mostly middle-aged (median 45) white (58%) men (84%) who had AIDS (70%). Of the 73% who took antiretroviral therapy (ART), 54% had HIV RNA levels below 50 c/mL in plasma. Mixed model linear regression identified that only MCP-1 in CSF was associated with neurocognitive change group. Recursive partitioning models aimed at diagnosis (i.e., correctly classifying neurocognitive status at the first visit) were complex and required most biomarkers to achieve misclassification limits. In contrast, prognostic models were more efficient. A combination of three biomarkers (sCD14, MCP-1, SDF-1α) correctly classified 82% of Wo and SN subjects, including 88% of SN subjects. A combination of two biomarkers (MCP-1, TNF-α) correctly classified 81% of Im and SI subjects, including 100% of SI subjects. Conclusions This analysis of well-characterized individuals identified concise panels of biomarkers associated with NC change. Across all analyses, the two most frequently identified biomarkers were sCD14 and MCP-1, indicators of monocyte/macrophage activation. While the panels differed depending on the outcome and on the degree of misclassification, nearly all stable patients were correctly classified. PMID:24101401

  12. 76 FR 2279 - Airworthiness Directives; Empresa Brasileira de Aeronautica S.A. (EMBRAER) Model ERJ 170 and ERJ...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-13

    ... de Aeronautica S.A. (EMBRAER) Model ERJ 170 and ERJ 190 Airplanes AGENCY: Federal Aviation... airworthiness information (MCAI) originated by an aviation authority of another country to identify and correct...., Monday through Friday, except Federal holidays. For service information identified in this proposed AD...

  13. Apparent resistivity for transient electromagnetic induction logging and its correction in radial layer identification

    NASA Astrophysics Data System (ADS)

    Meng, Qingxin; Hu, Xiangyun; Pan, Heping; Xi, Yufei

    2018-04-01

    We propose an algorithm for calculating all-time apparent resistivity from transient electromagnetic induction logging. The algorithm is based on the whole-space transient electric field expression of the uniform model and Halley's optimisation. In trial calculations for uniform models, the all-time algorithm is shown to have high accuracy. We use the finite-difference time-domain method to simulate the transient electromagnetic field in radial two-layer models without wall rock and convert the simulation results to apparent resistivity using the all-time algorithm. The time-varying apparent resistivity reflects the radially layered geoelectrical structure of the models, and the apparent resistivity of the earliest time channel follows the true resistivity of the inner layer; however, the apparent resistivity at larger times reflects the comprehensive electrical characteristics of the inner and outer layers. To identify the outer-layer resistivity more accurately, we also propose an apparent-resistivity correction algorithm based on a series-resistance model of the layered structure, in which the apparent resistivity and diffusion depth of the different time channels are approximately replaced by the corresponding model parameters. By correcting the time-varying apparent resistivity of radial two-layer models, we show that the correction results reflect the radially layered electrical structure and the corrected resistivities of the larger time channels follow the outer layer resistivity. The transient electromagnetic fields of radially layered models with wall rock are simulated to obtain the 2D time-varying profiles of the apparent resistivity and corrections. The results suggest that the time-varying apparent resistivity and correction results reflect the vertical and radial geoelectrical structures. For models with small wall-rock effect, the correction removes the effect of the low-resistance inner layer on the apparent resistivity of the larger time channels.
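
    The abstract above combines a whole-space forward response with Halley's optimisation to invert each time channel for an apparent resistivity. The sketch below shows only a generic Halley root-finding step with finite-difference derivatives; the forward model V_model is a placeholder, not the authors' whole-space electric-field expression.

```python
import numpy as np

def halley_solve(f, x0, tol=1e-9, max_iter=50):
    """Halley's method for f(x) = 0, using central-difference derivatives."""
    x = float(x0)
    for _ in range(max_iter):
        h = 1e-4 * max(abs(x), 1.0)
        fx = f(x)
        f1 = (f(x + h) - f(x - h)) / (2.0 * h)
        f2 = (f(x + h) - 2.0 * fx + f(x - h)) / h**2
        step = 2.0 * fx * f1 / (2.0 * f1**2 - fx * f2)
        x -= step
        if abs(step) < tol * max(abs(x), 1.0):
            break
    return x

# Placeholder forward response: voltage decaying with resistivity (illustrative only).
def V_model(rho, t):
    return rho**-1.5 * np.exp(-t / rho)

t, V_obs = 1.0e-3, V_model(120.0, 1.0e-3)
rho_a = halley_solve(lambda rho: V_model(rho, t) - V_obs, x0=50.0)
print(rho_a)  # recovers ~120 for this synthetic time channel
```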

  14. PET motion correction in context of integrated PET/MR: Current techniques, limitations, and future projections.

    PubMed

    Gillman, Ashley; Smith, Jye; Thomas, Paul; Rose, Stephen; Dowson, Nicholas

    2017-12-01

    Patient motion is an important consideration in modern PET image reconstruction. Advances in PET technology mean motion has an increasingly important influence on resulting image quality. Motion-induced artifacts can have adverse effects on clinical outcomes, including missed diagnoses and oversized radiotherapy treatment volumes. This review aims to summarize the wide variety of motion correction techniques available in PET and combined PET/CT and PET/MR, with a focus on the latter. A general framework for the motion correction of PET images is presented, consisting of acquisition, modeling, and correction stages. Methods for measuring, modeling, and correcting motion and associated artifacts, both in literature and commercially available, are presented, and their relative merits are contrasted. Identified limitations of current methods include modeling of aperiodic and/or unpredictable motion, attaining adequate temporal resolution for motion correction in dynamic kinetic modeling acquisitions, and maintaining availability of the MR in PET/MR scans for diagnostic acquisitions. Finally, avenues for future investigation are discussed, with a focus on improvements that could improve PET image quality, and that are practical in the clinical environment. © 2017 American Association of Physicists in Medicine.

  15. Analysis of Cricoid Pressure Force and Technique Among Anesthesiologists, Nurse Anesthetists, and Registered Nurses.

    PubMed

    Lefave, Melissa; Harrell, Brad; Wright, Molly

    2016-06-01

    The purpose of this project was to assess the ability of anesthesiologists, nurse anesthetists, and registered nurses to correctly identify anatomic landmarks of cricoid pressure and apply the correct amount of force. The project included an educational intervention with one group pretest-post-test design. Participants demonstrated cricoid pressure on a laryngotracheal model. After an educational intervention video, participants were asked to repeat cricoid pressure on the model. Participants with a nurse anesthesia background applied more appropriate force pretest than other participants; however, post-test results, while improved, showed no significant difference among providers. Participant identification of the correct anatomy of the cricoid cartilage and application of correct force were significantly improved after education. This study revealed that participants lacked prior knowledge of correct cricoid anatomy and pressure as well as the ability to apply correct force to the laryngotracheal model before an educational intervention. The intervention used in this study proved successful in educating health care providers. Copyright © 2016 American Society of PeriAnesthesia Nurses. Published by Elsevier Inc. All rights reserved.

  16. Corrective Action Decision Document/Corrective Action Plan for Corrective Action Unit 97: Yucca Flat/Climax Mine Nevada National Security Site, Nevada, Revision 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farnham, Irene

    This corrective action decision document (CADD)/corrective action plan (CAP) has been prepared for Corrective Action Unit (CAU) 97, Yucca Flat/Climax Mine, Nevada National Security Site (NNSS), Nevada. The Yucca Flat/Climax Mine CAU is located in the northeastern portion of the NNSS and comprises 720 corrective action sites. A total of 747 underground nuclear detonations took place within this CAU between 1957 and 1992 and resulted in the release of radionuclides (RNs) in the subsurface in the vicinity of the test cavities. The CADD portion describes the Yucca Flat/Climax Mine CAU data-collection and modeling activities completed during the corrective action investigation (CAI) stage, presents the corrective action objectives, and describes the actions recommended to meet the objectives. The CAP portion describes the corrective action implementation plan. The CAP presents CAU regulatory boundary objectives and initial use-restriction boundaries identified and negotiated by DOE and the Nevada Division of Environmental Protection (NDEP). The CAP also presents the model evaluation process designed to build confidence that the groundwater flow and contaminant transport modeling results can be used for the regulatory decisions required for CAU closure. The UGTA strategy assumes that active remediation of subsurface RN contamination is not feasible with current technology. As a result, the corrective action is based on a combination of characterization and modeling studies, monitoring, and institutional controls. The strategy is implemented through a four-stage approach that comprises the following: (1) corrective action investigation plan (CAIP), (2) CAI, (3) CADD/CAP, and (4) closure report (CR) stages.

  17. On the Limitations of Variational Bias Correction

    NASA Technical Reports Server (NTRS)

    Moradi, Isaac; Mccarty, Will; Gelaro, Ronald

    2018-01-01

    Satellite radiances are the largest dataset assimilated into Numerical Weather Prediction (NWP) models; however, the data are subject to errors and uncertainties that need to be accounted for before assimilation into the NWP models. Variational bias correction uses the time series of observation minus background to estimate the observation bias. This technique does not distinguish between background error, forward-operator error, and observation error, so all these errors are summed together and counted as observation error. We identify some sources of observation error (e.g., antenna emissivity, non-linearity in the calibration, and antenna pattern) and show the limitations of variational bias correction in estimating these errors.

  18. Linear optics measurements and corrections using an AC dipole in RHIC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, G.; Bai, M.; Yang, L.

    2010-05-23

    We report recent experimental results on linear optics measurements and corrections using an AC dipole. In the RHIC 2009 run, the concept of the SVD correction algorithm was tested at injection energy, both for identifying artificial gradient errors and for correcting them using the trim quadrupoles. The measured phase beatings were reduced by 30% and 40%, respectively, in two dedicated experiments. In the RHIC 2010 run, the AC dipole was used to measure β* and the chromatic β function. For the 0.65 m β* lattice, we observed a factor of 3 discrepancy between the model and measured chromatic β function in the Yellow ring.
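
    A generic sketch of an SVD-based correction of the kind mentioned above, assuming a hypothetical response matrix that maps trim-quadrupole strength changes to measured phase beating; it is not the RHIC implementation.

```python
import numpy as np

def svd_correction(response, measured, n_modes=None):
    """Solve response @ dk ≈ -measured in the least-squares sense with an
    (optionally truncated) SVD pseudo-inverse, returning corrector strengths dk."""
    U, s, Vt = np.linalg.svd(response, full_matrices=False)
    if n_modes is not None:            # keep only the strongest singular values
        U, s, Vt = U[:, :n_modes], s[:n_modes], Vt[:n_modes]
    return -Vt.T @ ((U.T @ measured) / s)

# Hypothetical example: 40 phase-beating readings, 8 trim quadrupoles
rng = np.random.default_rng(1)
R = rng.normal(size=(40, 8))
true_errors = rng.normal(scale=0.1, size=8)
beating = R @ true_errors + rng.normal(scale=0.01, size=40)
dk = svd_correction(R, beating, n_modes=6)
print(np.std(beating), np.std(beating + R @ dk))  # residual beating is reduced
```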

  19. Neutron Capture and the Antineutrino Yield from Nuclear Reactors.

    PubMed

    Huber, Patrick; Jaffke, Patrick

    2016-03-25

    We identify a new, flux-dependent correction to the antineutrino spectrum as produced in nuclear reactors. The abundance of certain nuclides, whose decay chains produce antineutrinos above the threshold for inverse beta decay, has a nonlinear dependence on the neutron flux, unlike the vast majority of antineutrino producing nuclides, whose decay rate is directly related to the fission rate. We have identified four of these so-called nonlinear nuclides and determined that they result in an antineutrino excess at low energies below 3.2 MeV, dependent on the reactor thermal neutron flux. We develop an analytic model for the size of the correction and compare it to the results of detailed reactor simulations for various real existing reactors, spanning 3 orders of magnitude in neutron flux. In a typical pressurized water reactor the resulting correction can reach ∼0.9% of the low energy flux which is comparable in size to other, known low-energy corrections from spent nuclear fuel and the nonequilibrium correction. For naval reactors the nonlinear correction may reach the 5% level by the end of cycle.

  20. A fractional Fourier transform analysis of a bubble excited by an ultrasonic chirp.

    PubMed

    Barlow, Euan; Mulholland, Anthony J

    2011-11-01

    The fractional Fourier transform is proposed here as a model-based signal-processing technique for determining the size of a bubble in a fluid. The bubble is insonified with an ultrasonic chirp and the radiated pressure field is recorded. This experimental bubble response is then compared with a series of theoretical model responses to identify the most accurate match between experiment and theory which allows the correct bubble size to be identified. The fractional Fourier transform is used to produce a more detailed description of each response, and two-dimensional cross correlation is then employed to identify the similarities between the experimental response and each theoretical response. In this paper the experimental bubble response is simulated by adding various levels of noise to the theoretical model output. The method is compared to the standard technique of using time-domain cross correlation. The proposed method is shown to be far more robust at correctly sizing the bubble and can cope with much lower signal to noise ratios.

  1. A graph edit dictionary for correcting errors in roof topology graphs reconstructed from point clouds

    NASA Astrophysics Data System (ADS)

    Xiong, B.; Oude Elberink, S.; Vosselman, G.

    2014-07-01

    In the task of 3D building model reconstruction from point clouds we face the problem of recovering a roof topology graph in the presence of noise, small roof faces and low point densities. Errors in roof topology graphs will seriously affect the final modelling results. The aim of this research is to automatically correct these errors. We define the graph correction as a graph-to-graph problem, similar to the spelling correction problem (also called the string-to-string problem). The graph correction is more complex than string correction, as the graphs are 2D while strings are only 1D. We design a strategy based on a dictionary of graph edit operations to automatically identify and correct the errors in the input graph. For each type of error the graph edit dictionary stores a representative erroneous subgraph as well as the corrected version. As an erroneous roof topology graph may contain several errors, a heuristic search is applied to find the optimum sequence of graph edits to correct the errors one by one. The graph edit dictionary can be expanded to include entries needed to cope with errors that were previously not encountered. Experiments show that the dictionary with only fifteen entries already properly corrects one quarter of erroneous graphs in about 4500 buildings, and even half of the erroneous graphs in one test area, achieving as high as a 95% acceptance rate of the reconstructed models.
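
    A toy sketch of the dictionary-lookup idea described above, using networkx subgraph matching; the entry, its stored edit, and the node labels are all hypothetical, and the paper's heuristic search over sequences of edits is not reproduced.

```python
import networkx as nx
from networkx.algorithms import isomorphism

def apply_dictionary_entry(graph, pattern, add_edges, remove_edges):
    """If `pattern` occurs as a subgraph of `graph`, apply the stored edit
    (edges to add/remove, expressed on pattern nodes) and return True."""
    gm = isomorphism.GraphMatcher(graph, pattern)
    for mapping in gm.subgraph_isomorphisms_iter():   # graph node -> pattern node
        inv = {p: g for g, p in mapping.items()}      # pattern node -> graph node
        for u, v in remove_edges:
            if graph.has_edge(inv[u], inv[v]):
                graph.remove_edge(inv[u], inv[v])
        for u, v in add_edges:
            graph.add_edge(inv[u], inv[v])
        return True
    return False

# Hypothetical entry: two ridge nodes wrongly left unconnected next to a hip node.
roof = nx.Graph([("ridge1", "hip"), ("ridge2", "hip")])
pattern = nx.Graph([("a", "c"), ("b", "c")])
apply_dictionary_entry(roof, pattern, add_edges=[("a", "b")], remove_edges=[])
print(roof.edges())
```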

  2. Characterization of Artifacts Introduced by the Empirical Volcano-Scan Atmospheric Correction Commonly Applied to CRISM and OMEGA Near-Infrared Spectra

    NASA Technical Reports Server (NTRS)

    Wiseman, S.M.; Arvidson, R.E.; Wolff, M. J.; Smith, M. D.; Seelos, F. P.; Morgan, F.; Murchie, S. L.; Mustard, J. F.; Morris, R. V.; Humm, D.

    2014-01-01

    The empirical volcano-scan atmospheric correction is widely applied to Martian near infrared CRISM and OMEGA spectra between 1000 and 2600 nanometers to remove prominent atmospheric gas absorptions with minimal computational investment. This correction method employs division by a scaled empirically-derived atmospheric transmission spectrum that is generated from observations of the Martian surface in which different path lengths through the atmosphere were measured and transmission calculated using the Beer-Lambert Law. Identifying and characterizing both artifacts and residual atmospheric features left by the volcano-scan correction is important for robust interpretation of CRISM and OMEGA volcano scan corrected spectra. In order to identify and determine the cause of spectral artifacts introduced by the volcano-scan correction, we simulated this correction using a multiple scattering radiative transfer algorithm (DISORT). Simulated transmission spectra that are similar to actual CRISM- and OMEGA-derived transmission spectra were generated from modeled Olympus Mons base and summit spectra. Results from the simulations were used to investigate the validity of assumptions inherent in the volcano-scan correction and to identify artifacts introduced by this method of atmospheric correction. We found that the most prominent artifact, a bowl-shaped feature centered near 2000 nanometers, is caused by the inaccurate assumption that absorption coefficients of CO2 in the Martian atmosphere are independent of column density. In addition, spectral albedo and slope are modified by atmospheric aerosols. Residual atmospheric contributions that are caused by variable amounts of dust aerosols, ice aerosols, and water vapor are characterized by the analysis of CRISM volcano-scan corrected spectra from the same location acquired at different times under variable atmospheric conditions.

  3. Characterization of artifacts introduced by the empirical volcano-scan atmospheric correction commonly applied to CRISM and OMEGA near-infrared spectra

    NASA Astrophysics Data System (ADS)

    Wiseman, S. M.; Arvidson, R. E.; Wolff, M. J.; Smith, M. D.; Seelos, F. P.; Morgan, F.; Murchie, S. L.; Mustard, J. F.; Morris, R. V.; Humm, D.; McGuire, P. C.

    2016-05-01

    The empirical 'volcano-scan' atmospheric correction is widely applied to martian near infrared CRISM and OMEGA spectra between ∼1000 and ∼2600 nm to remove prominent atmospheric gas absorptions with minimal computational investment. This correction method employs division by a scaled empirically-derived atmospheric transmission spectrum that is generated from observations of the martian surface in which different path lengths through the atmosphere were measured and transmission calculated using the Beer-Lambert Law. Identifying and characterizing both artifacts and residual atmospheric features left by the volcano-scan correction is important for robust interpretation of CRISM and OMEGA volcano-scan corrected spectra. In order to identify and determine the cause of spectral artifacts introduced by the volcano-scan correction, we simulated this correction using a multiple scattering radiative transfer algorithm (DISORT). Simulated transmission spectra that are similar to actual CRISM- and OMEGA-derived transmission spectra were generated from modeled Olympus Mons base and summit spectra. Results from the simulations were used to investigate the validity of assumptions inherent in the volcano-scan correction and to identify artifacts introduced by this method of atmospheric correction. We found that the most prominent artifact, a bowl-shaped feature centered near 2000 nm, is caused by the inaccurate assumption that absorption coefficients of CO2 in the martian atmosphere are independent of column density. In addition, spectral albedo and slope are modified by atmospheric aerosols. Residual atmospheric contributions that are caused by variable amounts of dust aerosols, ice aerosols, and water vapor are characterized by the analysis of CRISM volcano-scan corrected spectra from the same location acquired at different times under variable atmospheric conditions.
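
    The sketch below illustrates the basic volcano-scan operation discussed above (dividing an I/F spectrum by the transmission spectrum raised to a Beer-Lambert scaling exponent, with the exponent chosen to flatten the CO2 band near 2000 nm). The wavelength windows and the scan over the exponent are illustrative assumptions, not the CRISM/OMEGA processing pipeline.

```python
import numpy as np

def volcano_scan_correct(spectrum, transmission, wavelengths,
                         band=(1900.0, 2080.0), shoulder=100.0):
    """Divide by transmission**beta, choosing beta so that the mean corrected
    value inside the CO2 band matches the mean on the band shoulders."""
    in_band = (wavelengths >= band[0]) & (wavelengths <= band[1])
    in_shoulder = ((wavelengths >= band[0] - shoulder) & (wavelengths < band[0])) | \
                  ((wavelengths > band[1]) & (wavelengths <= band[1] + shoulder))

    def residual(beta):
        corrected = spectrum / transmission**beta
        return abs(corrected[in_band].mean() - corrected[in_shoulder].mean())

    betas = np.linspace(0.1, 5.0, 250)
    best = betas[np.argmin([residual(b) for b in betas])]
    return spectrum / transmission**best, best
```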

  4. Model-based aberration correction in a closed-loop wavefront-sensor-less adaptive optics system.

    PubMed

    Song, H; Fraanje, R; Schitter, G; Kroese, H; Vdovin, G; Verhaegen, M

    2010-11-08

    In many scientific and medical applications, such as laser systems and microscopes, wavefront-sensor-less (WFSless) adaptive optics (AO) systems are used to improve the laser beam quality or the image resolution by correcting the wavefront aberration in the optical path. The lack of direct wavefront measurement in WFSless AO systems imposes a challenge to achieve efficient aberration correction. This paper presents an aberration correction approach for WFSless AO systems based on the model of the WFSless AO system and a small number of intensity measurements, where the model is identified from the input-output data of the WFSless AO system by black-box identification. This approach is validated in an experimental setup with 20 static aberrations having Kolmogorov spatial distributions. By correcting N=9 Zernike modes (N is the number of aberration modes), an intensity improvement from 49% of the maximum value to 89% has been achieved on average based on N+5=14 intensity measurements. With the worst initial intensity, an improvement from 17% of the maximum value to 86% has been achieved based on N+4=13 intensity measurements.

  5. Sex Determination of Carolina Wrens (Thryothorus ludovicianus) in the Mississippi Alluvial Valley

    USGS Publications Warehouse

    Twedt, D.J.

    2004-01-01

    I identified sexual dimorphism in wing length (unflattened chord) of Carolina Wrens (Thryothorus ludovicianus) within the central Mississippi Alluvial Valley (northeast Louisiana and west-central Mississippi) and used this difference to assign a sex to captured wrens. Wrens were identified as female when wing length was less than 57.5 mm or male when wing length was greater than 58.5 mm. Verification of predicted sex was obtained from recaptures of banded individuals where sex was ascertained from the presence of a cloacal protuberance or brood patch. Correct prediction of sex was 81% for adult females and 95% for adult males. An alternative model, which categorized wrens with wing lengths of 58 and 59 mm as birds of unknown sex, increased correct prediction of females to 93% but reduced the number of individuals to which sex was assigned. These simple, predictive, wing-length-based models also correctly assigned sex for more than 88% of young (hatching-year) birds.
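
    The classification rule described above reduces to two wing-length cutoffs; a minimal sketch (function name illustrative) using the reported thresholds:

```python
def sex_from_wing_length(wing_mm, alternative_model=False):
    """Assign sex of a Carolina Wren from unflattened wing chord (mm).
    Base model: female if < 57.5 mm, male if > 58.5 mm.
    Alternative model: additionally leaves 58-59 mm birds as unknown."""
    if alternative_model and 58.0 <= wing_mm <= 59.0:
        return "unknown"
    if wing_mm < 57.5:
        return "female"
    if wing_mm > 58.5:
        return "male"
    return "unknown"

print(sex_from_wing_length(56))                            # female
print(sex_from_wing_length(60))                            # male
print(sex_from_wing_length(58, alternative_model=True))    # unknown
```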

  6. Exchange-Hole Dipole Dispersion Model for Accurate Energy Ranking in Molecular Crystal Structure Prediction.

    PubMed

    Whittleton, Sarah R; Otero-de-la-Roza, A; Johnson, Erin R

    2017-02-14

    Accurate energy ranking is a key facet to the problem of first-principles crystal-structure prediction (CSP) of molecular crystals. This work presents a systematic assessment of B86bPBE-XDM, a semilocal density functional combined with the exchange-hole dipole moment (XDM) dispersion model, for energy ranking using 14 compounds from the first five CSP blind tests. Specifically, the set of crystals studied comprises 11 rigid, planar compounds and 3 co-crystals. The experimental structure was correctly identified as the lowest in lattice energy for 12 of the 14 total crystals. One of the exceptions is 4-hydroxythiophene-2-carbonitrile, for which the experimental structure was correctly identified once a quasi-harmonic estimate of the vibrational free-energy contribution was included, evidencing the occasional importance of thermal corrections for accurate energy ranking. The other exception is an organic salt, where charge-transfer error (also called delocalization error) is expected to cause the base density functional to be unreliable. Provided the choice of base density functional is appropriate and an estimate of temperature effects is used, XDM-corrected density-functional theory is highly reliable for the energetic ranking of competing crystal structures.

  7. Dose-to-water conversion for the backscatter-shielded EPID: A frame-based method to correct for EPID energy response to MLC transmitted radiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zwan, Benjamin J., E-mail: benjamin.zwan@uon.edu.au; O’Connor, Daryl J.; King, Brian W.

    2014-08-15

    Purpose: To develop a frame-by-frame correction for the energy response of amorphous silicon electronic portal imaging devices (a-Si EPIDs) to radiation that has transmitted through the multileaf collimator (MLC) and to integrate this correction into the backscatter shielded EPID (BSS-EPID) dose-to-water conversion model. Methods: Individual EPID frames were acquired using a Varian frame grabber and iTools acquisition software then processed using in-house software developed in MATLAB. For each EPID image frame, the region below the MLC leaves was identified and all pixels in this region were multiplied by a factor of 1.3 to correct for the under-response of the imager to MLC transmitted radiation. The corrected frames were then summed to form a corrected integrated EPID image. This correction was implemented as an initial step in the BSS-EPID dose-to-water conversion model which was then used to compute dose planes in a water phantom for 35 IMRT fields. The calculated dose planes, with and without the proposed MLC transmission correction, were compared to measurements in solid water using a two-dimensional diode array. Results: It was observed that the integration of the MLC transmission correction into the BSS-EPID dose model improved agreement between modeled and measured dose planes. In particular, the MLC correction produced higher pass rates for almost all Head and Neck fields tested, yielding an average pass rate of 99.8% for 2%/2 mm criteria. A two-sample independent t-test and Fisher F-test were used to show that the MLC transmission correction resulted in a statistically significant reduction in the mean and the standard deviation of the gamma values, respectively, to give a more accurate and consistent dose-to-water conversion. Conclusions: The frame-by-frame MLC transmission response correction was shown to improve the accuracy and reduce the variability of the BSS-EPID dose-to-water conversion model. The correction may be applied as a preprocessing step in any pretreatment portal dosimetry calculation and has been shown to be beneficial for highly modulated IMRT fields.
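
    A numpy sketch of the frame-by-frame scaling step described in the Methods above; how the under-MLC region is segmented in each frame is outside this sketch, and the mask array here is a hypothetical input.

```python
import numpy as np

def correct_and_integrate(frames, mlc_masks, factor=1.3):
    """Multiply pixels that lie under the MLC leaves in each EPID frame by
    `factor` (compensating the under-response to MLC-transmitted radiation),
    then sum the corrected frames into a single integrated image.

    frames:    float array, shape (n_frames, ny, nx)
    mlc_masks: bool array of the same shape, True where a pixel is shielded
    """
    corrected = np.array(frames, dtype=float, copy=True)
    corrected[mlc_masks] *= factor
    return corrected.sum(axis=0)
```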

  8. Review and verification of CARE 3 mathematical model and code

    NASA Technical Reports Server (NTRS)

    Rose, D. M.; Altschul, R. E.; Manke, J. W.; Nelson, D. L.

    1983-01-01

    The CARE-III mathematical model and code verification performed by Boeing Computer Services were documented. The mathematical model was verified for permanent and intermittent faults. The transient fault model was not addressed. The code verification was performed on CARE-III, Version 3. A CARE III Version 4, which corrects deficiencies identified in Version 3, is being developed.

  9. Statistical Selection of Biological Models for Genome-Wide Association Analyses.

    PubMed

    Bi, Wenjian; Kang, Guolian; Pounds, Stanley B

    2018-05-24

    Genome-wide association studies have discovered many biologically important associations of genes with phenotypes. Typically, genome-wide association analyses formally test the association of each genetic feature (SNP, CNV, etc) with the phenotype of interest and summarize the results with multiplicity-adjusted p-values. However, very small p-values only provide evidence against the null hypothesis of no association without indicating which biological model best explains the observed data. Correctly identifying a specific biological model may improve the scientific interpretation and can be used to more effectively select and design a follow-up validation study. Thus, statistical methodology to identify the correct biological model for a particular genotype-phenotype association can be very useful to investigators. Here, we propose a general statistical method to summarize how accurately each of five biological models (null, additive, dominant, recessive, co-dominant) represents the data observed for each variant in a GWAS study. We show that the new method stringently controls the false discovery rate and asymptotically selects the correct biological model. Simulations of two-stage discovery-validation studies show that the new method has these properties and that its validation power is similar to or exceeds that of simple methods that use the same statistical model for all SNPs. Example analyses of three data sets also highlight these advantages of the new method. An R package is freely available at www.stjuderesearch.org/site/depts/biostats/maew. Copyright © 2018. Published by Elsevier Inc.
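
    For context, the sketch below fits the five genotype codings named above by ordinary least squares and ranks them with BIC. This is a simple illustration of selecting among codings, not the FDR-controlling procedure proposed in the paper; the data and function names are hypothetical.

```python
import numpy as np

def design(model, g):
    """Design matrix for a genotype vector g coded as 0/1/2 minor-allele counts."""
    cols = [np.ones(len(g))]
    if model == "additive":
        cols.append(g.astype(float))
    elif model == "dominant":
        cols.append((g >= 1).astype(float))
    elif model == "recessive":
        cols.append((g == 2).astype(float))
    elif model == "codominant":
        cols += [(g == 1).astype(float), (g == 2).astype(float)]
    return np.column_stack(cols)          # "null" keeps the intercept only

def best_model(g, y, models=("null", "additive", "dominant", "recessive", "codominant")):
    n, scores = len(y), {}
    for m in models:
        X = design(m, g)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        scores[m] = n * np.log(rss / n) + X.shape[1] * np.log(n)   # BIC up to a constant
    return min(scores, key=scores.get), scores

rng = np.random.default_rng(2)
g = rng.integers(0, 3, size=500)
y = 0.8 * (g == 2) + rng.normal(size=500)   # a truly recessive effect
print(best_model(g, y)[0])
```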

  10. Protein model discrimination using mutational sensitivity derived from deep sequencing.

    PubMed

    Adkar, Bharat V; Tripathi, Arti; Sahoo, Anusmita; Bajaj, Kanika; Goswami, Devrishi; Chakrabarti, Purbani; Swarnkar, Mohit K; Gokhale, Rajesh S; Varadarajan, Raghavan

    2012-02-08

    A major bottleneck in protein structure prediction is the selection of correct models from a pool of decoys. Relative activities of ∼1,200 individual single-site mutants in a saturation library of the bacterial toxin CcdB were estimated by determining their relative populations using deep sequencing. This phenotypic information was used to define an empirical score for each residue (RankScore), which correlated with the residue depth, and identify active-site residues. Using these correlations, ∼98% of correct models of CcdB (RMSD ≤ 4Å) were identified from a large set of decoys. The model-discrimination methodology was further validated on eleven different monomeric proteins using simulated RankScore values. The methodology is also a rapid, accurate way to obtain relative activities of each mutant in a large pool and derive sequence-structure-function relationships without protein isolation or characterization. It can be applied to any system in which mutational effects can be monitored by a phenotypic readout. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. Quantifying errors in trace species transport modeling.

    PubMed

    Prather, Michael J; Zhu, Xin; Strahan, Susan E; Steenrod, Stephen D; Rodriguez, Jose M

    2008-12-16

    One expectation when computationally solving an Earth system model is that a correct answer exists, that with adequate physical approximations and numerical methods our solutions will converge to that single answer. With such hubris, we performed a controlled numerical test of the atmospheric transport of CO2 using 2 models known for accurate transport of trace species. Resulting differences were unexpectedly large, indicating that in some cases, scientific conclusions may err because of lack of knowledge of the numerical errors in tracer transport models. By doubling the resolution, thereby reducing numerical error, both models show some convergence to the same answer. Now, under realistic conditions, we identify a practical approach for finding the correct answer and thus quantifying the advection error.

  12. Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.

    PubMed

    Samoli, Evangelia; Butland, Barbara K

    2017-12-01

    Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parameter bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be later applied, while methodological advances are needed under the multi-pollutants setting.
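
    As one concrete example of the correction methods listed above, the sketch below applies a basic SIMEX procedure to a simple linear exposure-response model: extra measurement error of known variance is added in increasing amounts, the slope is re-estimated, and the trend is extrapolated back to zero measurement error (lambda = -1). The data, error variance, and quadratic extrapolant are illustrative assumptions.

```python
import numpy as np

def simex_slope(x_err, y, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), n_rep=200, seed=0):
    """SIMEX for the slope of y ~ x when x is measured with error of s.d. sigma_u."""
    rng = np.random.default_rng(seed)
    lam = np.array([0.0, *lambdas])
    slopes = [np.polyfit(x_err, y, 1)[0]]          # naive (error-prone) slope
    for l in lambdas:
        reps = [np.polyfit(x_err + rng.normal(0.0, np.sqrt(l) * sigma_u, len(x_err)),
                           y, 1)[0] for _ in range(n_rep)]
        slopes.append(np.mean(reps))
    coef = np.polyfit(lam, slopes, 2)              # quadratic trend in lambda
    return np.polyval(coef, -1.0)                  # extrapolate to lambda = -1

rng = np.random.default_rng(3)
x_true = rng.normal(size=2000)
x_err = x_true + rng.normal(scale=0.5, size=2000)   # known sigma_u = 0.5
y = 1.0 * x_true + rng.normal(scale=0.5, size=2000)
print(np.polyfit(x_err, y, 1)[0], simex_slope(x_err, y, sigma_u=0.5))
```

    In this synthetic example the naive slope is attenuated toward zero by the measurement error, and the SIMEX extrapolation moves it back toward the true value of 1.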

  13. A Market-Basket Approach to Predict the Acute Aquatic Toxicity of Munitions and Energetic Materials.

    PubMed

    Burgoon, Lyle D

    2016-06-01

    An ongoing challenge in chemical production, including the production of insensitive munitions and energetics, is the ability to make predictions about potential environmental hazards early in the process. To address this challenge, a quantitative structure-activity relationship model was developed to predict acute fathead minnow toxicity of insensitive munitions and energetic materials. Computational predictive toxicology models like this one may be used to identify and prioritize environmentally safer materials early in their development. The developed model is based on the Apriori market-basket/frequent itemset mining approach to identify probabilistic prediction rules using chemical atom-pairs and the lethality data for 57 compounds from a fathead minnow acute toxicity assay. Lethality data were discretized into four categories based on the Globally Harmonized System of Classification and Labelling of Chemicals. Apriori identified toxicophores for categories two and three. The model classified 32 of the 57 compounds correctly, with a fivefold cross-validation classification rate of 74%. A structure-based surrogate approach classified the remaining 25 chemicals with 48% accuracy. This result is unsurprising as these 25 chemicals were fairly unique within the larger set.
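
    A small illustration of the market-basket idea described above: each compound is treated as a "basket" of atom-pair items plus its GHS category, and rules of the form "feature set implies category" are kept when support and confidence are high enough. All feature names, thresholds, and data are hypothetical, and the brute-force enumeration below stands in for the Apriori candidate-pruning used in the paper.

```python
from itertools import combinations

# Toy baskets: atom-pair features plus a GHS acute-toxicity category per compound.
baskets = [
    {"pair_C-N", "pair_N-O", "GHS_cat2"},
    {"pair_C-N", "pair_N-O", "pair_C-C", "GHS_cat2"},
    {"pair_N-O", "pair_C-C", "GHS_cat3"},
    {"pair_C-N", "pair_C-C", "GHS_cat3"},
    {"pair_C-C", "GHS_cat4"},
    {"pair_C-N", "pair_N-O", "GHS_cat2"},
]
categories = {"GHS_cat2", "GHS_cat3", "GHS_cat4"}

def support(itemset):
    return sum(itemset <= b for b in baskets) / len(baskets)

min_support, min_confidence = 0.3, 0.7
features = sorted(set().union(*baskets) - categories)
for size in (1, 2):
    for antecedent in combinations(features, size):
        for cat in categories:
            s_ant = support(set(antecedent))
            s_rule = support(set(antecedent) | {cat})
            if s_rule >= min_support and s_ant > 0 and s_rule / s_ant >= min_confidence:
                print(antecedent, "->", cat,
                      f"support={s_rule:.2f}", f"confidence={s_rule / s_ant:.2f}")
```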

  14. Validating the Use of Deep Learning Neural Networks for Correction of Large Hydrometric Datasets

    NASA Astrophysics Data System (ADS)

    Frazier, N.; Ogden, F. L.; Regina, J. A.; Cheng, Y.

    2017-12-01

    Collection and validation of Earth systems data can be time consuming and labor intensive. In particular, high resolution hydrometric data, including rainfall and streamflow measurements, are difficult to obtain due to a multitude of complicating factors. Measurement equipment is subject to clogs, environmental disturbances, and sensor drift. Manual intervention is typically required to identify, correct, and validate these data. Weirs can become clogged and the pressure transducer may float or drift over time. We typically employ a graphical tool called Time Series Editor to manually remove clogs and sensor drift from the data. However, this process is highly subjective and requires hydrological expertise. Two different people may produce two different data sets. To use this data for scientific discovery and model validation, a more consistent method is needed to processes this field data. Deep learning neural networks have proved to be excellent mechanisms for recognizing patterns in data. We explore the use of Recurrent Neural Networks (RNN) to capture the patterns in the data over time using various gating mechanisms (LSTM and GRU), network architectures, and hyper-parameters to build an automated data correction model. We also explore the required amount of manually corrected training data required to train the network for reasonable accuracy. The benefits of this approach are that the time to process a data set is significantly reduced, and the results are 100% reproducible after training is complete. Additionally, we train the RNN and calibrate a physically-based hydrological model against the same portion of data. Both the RNN and the model are applied to the remaining data using a split-sample methodology. Performance of the machine learning is evaluated for plausibility by comparing with the output of the hydrological model, and this analysis identifies potential periods where additional investigation is warranted.
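
    A minimal PyTorch sketch of the kind of recurrent sequence model described above (a GRU mapping raw stage or rainfall readings to manually corrected values); the architecture, layer sizes, and training step are illustrative assumptions, not the authors' network.

```python
import torch
import torch.nn as nn

class StageCorrector(nn.Module):
    """GRU sequence model: raw sensor series in, corrected series out."""
    def __init__(self, n_features=1, hidden=64, layers=2):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, time, n_features)
        out, _ = self.rnn(x)
        return self.head(out)              # (batch, time, 1)

model = StageCorrector()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One training step on a synthetic batch (stand-in for raw/manually-corrected pairs)
raw = torch.randn(8, 200, 1)
corrected = raw.clamp(min=-1.0)            # pretend the correction removes negative spikes
loss = loss_fn(model(raw), corrected)
loss.backward()
opt.step()
print(float(loss))
```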

  15. Non-Tidal Ocean Loading Correction for the Argentinean-German Geodetic Observatory Using an Empirical Model of Storm Surge for the Río de la Plata

    NASA Astrophysics Data System (ADS)

    Oreiro, F. A.; Wziontek, H.; Fiore, M. M. E.; D'Onofrio, E. E.; Brunini, C.

    2018-05-01

    The Argentinean-German Geodetic Observatory is located 13 km from the Río de la Plata, in an area that is frequently affected by storm surges that can change the level of the river by more than ±3 m. Water-level information from seven tide gauge stations located in the Río de la Plata is used to calculate, every hour, an empirical model of water heights (tidal + non-tidal component) and an empirical model of storm surge (non-tidal component) for the period 01/2016-12/2016. Using the SPOTL software, the gravimetric response of the models and the tidal response are calculated, showing that for the observatory location the range of the tidal component (3.6 nm/s2) is only 12% of the range of the non-tidal component (29.4 nm/s2). The gravimetric response of the storm surge model is subtracted from the superconducting gravimeter observations, after applying the traditional corrections, and a reduction of 7% in the RMS is obtained. The wavelet transform is applied to the same series, before and after the non-tidal correction, and a clear decrease in the spectral energy at periods between 2 and 12 days is identified between the series. Using the same software, East, North, and Up displacements are calculated, and ranges of 3, 2, and 11 mm are obtained, respectively. The residuals obtained after applying the non-tidal correction make it possible to clearly identify the influence of rain events in the superconducting gravimeter observations, indicating the need to analyze this and other hydrological and geophysical effects.

  16. Phase I Hydrologic Data for the Groundwater Flow and Contaminant Transport Model of Corrective Action Unit 97: Yucca Flat/Climax Mine, Nevada Test Site, Nye County, Nevada, Rev. No.: 0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John McCord

    2006-06-01

    The U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Site Office (NNSA/NSO) initiated the Underground Test Area (UGTA) Project to assess and evaluate the effects of the underground nuclear weapons tests on groundwater beneath the Nevada Test Site (NTS) and vicinity. The framework for this evaluation is provided in Appendix VI, Revision No. 1 (December 7, 2000) of the Federal Facility Agreement and Consent Order (FFACO, 1996). Section 3.0 of Appendix VI ''Corrective Action Strategy'' of the FFACO describes the process that will be used to complete corrective actions specifically for the UGTA Project. The objective of the UGTA corrective action strategy is to define contaminant boundaries for each UGTA corrective action unit (CAU) where groundwater may have become contaminated from the underground nuclear weapons tests. The contaminant boundaries are determined based on modeling of groundwater flow and contaminant transport. A summary of the FFACO corrective action process and the UGTA corrective action strategy is provided in Section 1.5. The FFACO (1996) corrective action process for the Yucca Flat/Climax Mine CAU 97 was initiated with the Corrective Action Investigation Plan (CAIP) (DOE/NV, 2000a). The CAIP included a review of existing data on the CAU and proposed a set of data collection activities to collect additional characterization data. These recommendations were based on a value of information analysis (VOIA) (IT, 1999), which evaluated the value of different possible data collection activities, with respect to reduction in uncertainty of the contaminant boundary, through simplified transport modeling. The Yucca Flat/Climax Mine CAIP identifies a three-step model development process to evaluate the impact of underground nuclear testing on groundwater to determine a contaminant boundary (DOE/NV, 2000a). The three steps are as follows: (1) Data compilation and analysis that provides the necessary modeling data that is completed in two parts: the first addressing the groundwater flow model, and the second the transport model. (2) Development of a groundwater flow model. (3) Development of a groundwater transport model. This report presents the results of the first part of the first step, documenting the data compilation, evaluation, and analysis for the groundwater flow model. The second part, documentation of transport model data will be the subject of a separate report. The purpose of this document is to present the compilation and evaluation of the available hydrologic data and information relevant to the development of the Yucca Flat/Climax Mine CAU groundwater flow model, which is a fundamental tool in the prediction of the extent of contaminant migration. Where appropriate, data and information documented elsewhere are summarized with reference to the complete documentation. The specific task objectives for hydrologic data documentation are as follows: (1) Identify and compile available hydrologic data and supporting information required to develop and validate the groundwater flow model for the Yucca Flat/Climax Mine CAU. (2) Assess the quality of the data and associated documentation, and assign qualifiers to denote levels of quality. (3) Analyze the data to derive expected values or spatial distributions and estimates of the associated uncertainty and variability.

  17. Aeroelastic modeling for the FIT team F/A-18 simulation

    NASA Technical Reports Server (NTRS)

    Zeiler, Thomas A.; Wieseman, Carol D.

    1989-01-01

    Some details of the aeroelastic modeling of the F/A-18 aircraft done for the Functional Integration Technology (FIT) team's research in integrated dynamics modeling and how these are combined with the FIT team's integrated dynamics model are described. Also described are mean axis corrections to elastic modes, the addition of nonlinear inertial coupling terms into the equations of motion, and the calculation of internal loads time histories using the integrated dynamics model in a batch simulation program. A video tape made of a loads time history animation was included as a part of the oral presentation. Also discussed is work done in one of the areas of unsteady aerodynamic modeling identified as needing improvement, specifically, in correction factor methodologies for improving the accuracy of stability derivatives calculated with a doublet lattice code.

  18. Tijeras Arroyo Groundwater Current Conceptual Model and Corrective Measures Evaluation Report - December 2016.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Copland, John R.

    This Tijeras Arroyo Groundwater Current Conceptual Model and Corrective Measures Evaluation Report (CCM/CME Report) has been prepared by the U.S. Department of Energy (DOE) and Sandia Corporation (Sandia) to meet requirements under the Sandia National Laboratories-New Mexico (SNL/NM) Compliance Order on Consent (the Consent Order). The Consent Order, entered into by the New Mexico Environment Department (NMED), DOE, and Sandia, became effective on April 29, 2004. The Consent Order identified the Tijeras Arroyo Groundwater (TAG) Area of Concern (AOC) as an area of groundwater contamination requiring further characterization and corrective action. This report presents an updated Conceptual Site Model (CSM) of the TAG AOC that describes the contaminant release sites, the geological and hydrogeological setting, and the distribution and migration of contaminants in the subsurface. The dataset used for this report includes the analytical results from groundwater samples collected through December 2015.

  19. 77 FR 74616 - Amendments and Correction to Petitions for Waiver and Interim Waiver for Consumer Products and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-17

    ... decision and order must be used for all future testing for any basic models covered by the decision and... require petitioners to: (1) Specify the basic model(s) to which the waiver applies; (2) identify other manufacturers of similar products; (3) include any known alternate test procedures of the basic model, with the...

  20. Corrective Action Decision Document/Corrective Action Plan for Corrective Action Unit 98: Frenchman Flat, Nevada National Security Site, Nevada, Revision 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Irene Farnham and Sam Marutzky

    2011-07-01

    This CADD/CAP follows the Corrective Action Investigation (CAI) stage, which results in development of a set of contaminant boundary forecasts produced from groundwater flow and contaminant transport modeling of the Frenchman Flat CAU. The Frenchman Flat CAU is located in the southeastern portion of the NNSS and comprises 10 underground nuclear tests. The tests were conducted between 1965 and 1971 and resulted in the release of radionuclides in the subsurface in the vicinity of the test cavities. Two important aspects of the corrective action process are presented within this CADD/CAP. The CADD portion describes the results of the Frenchman Flat CAU data-collection and modeling activities completed during the CAI stage. The corrective action objectives and the actions recommended to meet the objectives are also described. The CAP portion describes the corrective action implementation plan. The CAP begins with the presentation of CAU regulatory boundary objectives and initial use restriction boundaries that are identified and negotiated by NNSA/NSO and the Nevada Division of Environmental Protection (NDEP). The CAP also presents the model evaluation process designed to build confidence that the flow and contaminant transport modeling results can be used for the regulatory decisions required for CAU closure. The first two stages of the strategy have been completed for the Frenchman Flat CAU. A value of information analysis and a CAIP were developed during the CAIP stage. During the CAI stage, a CAIP addendum was developed, and the activities proposed in the CAIP and addendum were completed. These activities included hydrogeologic investigation of the underground testing areas, aquifer testing, isotopic and geochemistry-based investigations, and integrated geophysical investigations. After these investigations, a groundwater flow and contaminant transport model was developed to forecast contaminant boundaries that enclose areas potentially exceeding the Safe Drinking Water Act radiological standards at any time within 1,000 years. An external peer review of the groundwater flow and contaminant transport model was completed, and the model was accepted by NDEP to allow advancement to the CADD/CAP stage. The CADD/CAP stage focuses on model evaluation to ensure that existing models provide adequate guidance for the regulatory decisions regarding monitoring and institutional controls. Data-collection activities are identified and implemented to address key uncertainties in the flow and contaminant transport models. During the CR stage, final use restriction boundaries and CAU regulatory boundaries are negotiated and established; a long-term closure monitoring program is developed and implemented; and the approaches and policies for institutional controls are initiated. The model evaluation process described in this plan consists of an iterative series of five steps designed to build confidence in the site conceptual model and model forecasts. These steps are designed to identify data-collection activities (Step 1), document the data-collection activities in the CADD/CAP (Step 2), and perform the activities (Step 3). The new data are then assessed; the model is refined, if necessary; the modeling results are evaluated; and a model evaluation report is prepared (Step 4). The assessments are made by the modeling team and presented to the pre-emptive review committee. The decision is made by the modeling team with the assistance of the pre-emptive review committee and concurrence of NNSA/NSO to continue data and model assessment/refinement, recommend additional data collection, or recommend advancing to the CR stage. A recommendation to advance to the CR stage is based on whether the model is considered to be sufficiently reliable for designing a monitoring system and developing effective institutional controls. The decision to advance to the CR stage or to return to Step 1 of the process is then made by NDEP (Step 5).

  1. A Critical Meta-Analysis of Lens Model Studies in Human Judgment and Decision-Making

    PubMed Central

    Kaufmann, Esther; Reips, Ulf-Dietrich; Wittmann, Werner W.

    2013-01-01

    Achieving accurate judgment (‘judgmental achievement’) is of utmost importance in daily life across multiple domains. The lens model and the lens model equation provide useful frameworks for modeling components of judgmental achievement and for creating tools to help decision makers (e.g., physicians, teachers) reach better judgments (e.g., a correct diagnosis, an accurate estimation of intelligence). Previous meta-analyses of judgment and decision-making studies have attempted to evaluate overall judgmental achievement and have provided the basis for evaluating the success of bootstrapping (i.e., replacing judges by linear models that guide decision making). However, previous meta-analyses have failed to appropriately correct for a number of study design artifacts (e.g., measurement error, dichotomization), which may have potentially biased estimations (e.g., of the variability between studies) and led to erroneous interpretations (e.g., with regards to moderator variables). In the current study we therefore conduct the first psychometric meta-analysis of judgmental achievement studies that corrects for a number of study design artifacts. We identified 31 lens model studies (N = 1,151, k = 49) that met our inclusion criteria. We evaluated overall judgmental achievement as well as whether judgmental achievement depended on decision domain (e.g., medicine, education) and/or the level of expertise (expert vs. novice). We also evaluated whether using corrected estimates affected conclusions with regards to the success of bootstrapping with psychometrically-corrected models. Further, we introduce a new psychometric trim-and-fill method to estimate the effect sizes of potentially missing studies and to correct psychometric meta-analyses for the effects of publication bias. Comparison of the results of the psychometric meta-analysis with the results of a traditional meta-analysis (which only corrected for sampling error) indicated that artifact correction leads to a) an increase in the values of the lens model components, b) reduced heterogeneity between studies, and c) an increase in the success of bootstrapping. We argue that psychometric meta-analysis is useful for accurately evaluating human judgment and for demonstrating the success of bootstrapping. PMID:24391781
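
    A central artifact correction in psychometric meta-analysis of this kind is disattenuation of observed correlations for measurement error. The sketch below illustrates only that generic step; the reliability values and the example correlation are hypothetical, not figures from the study.

    ```python
    # Minimal sketch of psychometric artifact correction (disattenuation), assuming
    # the classical correction r_c = r_obs / sqrt(rxx * ryy).
    import math

    def disattenuate(r_obs: float, rxx: float, ryy: float) -> float:
        """Correct an observed correlation for unreliability in both measures."""
        return r_obs / math.sqrt(rxx * ryy)

    # Hypothetical example: observed achievement correlation of 0.40, with
    # criterion reliability 0.80 and judgment reliability 0.70 (assumed values).
    print(disattenuate(0.40, 0.80, 0.70))  # ~0.53
    ```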

  2. A critical meta-analysis of lens model studies in human judgment and decision-making.

    PubMed

    Kaufmann, Esther; Reips, Ulf-Dietrich; Wittmann, Werner W

    2013-01-01

    Achieving accurate judgment ('judgmental achievement') is of utmost importance in daily life across multiple domains. The lens model and the lens model equation provide useful frameworks for modeling components of judgmental achievement and for creating tools to help decision makers (e.g., physicians, teachers) reach better judgments (e.g., a correct diagnosis, an accurate estimation of intelligence). Previous meta-analyses of judgment and decision-making studies have attempted to evaluate overall judgmental achievement and have provided the basis for evaluating the success of bootstrapping (i.e., replacing judges by linear models that guide decision making). However, previous meta-analyses have failed to appropriately correct for a number of study design artifacts (e.g., measurement error, dichotomization), which may have potentially biased estimations (e.g., of the variability between studies) and led to erroneous interpretations (e.g., with regards to moderator variables). In the current study we therefore conduct the first psychometric meta-analysis of judgmental achievement studies that corrects for a number of study design artifacts. We identified 31 lens model studies (N = 1,151, k = 49) that met our inclusion criteria. We evaluated overall judgmental achievement as well as whether judgmental achievement depended on decision domain (e.g., medicine, education) and/or the level of expertise (expert vs. novice). We also evaluated whether using corrected estimates affected conclusions with regards to the success of bootstrapping with psychometrically-corrected models. Further, we introduce a new psychometric trim-and-fill method to estimate the effect sizes of potentially missing studies and to correct psychometric meta-analyses for the effects of publication bias. Comparison of the results of the psychometric meta-analysis with the results of a traditional meta-analysis (which only corrected for sampling error) indicated that artifact correction leads to a) an increase in the values of the lens model components, b) reduced heterogeneity between studies, and c) an increase in the success of bootstrapping. We argue that psychometric meta-analysis is useful for accurately evaluating human judgment and for demonstrating the success of bootstrapping.

  3. Parameter Variability and Distributional Assumptions in the Diffusion Model

    ERIC Educational Resources Information Center

    Ratcliff, Roger

    2013-01-01

    If the diffusion model (Ratcliff & McKoon, 2008) is to account for the relative speeds of correct responses and errors, it is necessary that the components of processing identified by the model vary across the trials of a task. In standard applications, the rate at which information is accumulated by the diffusion process is assumed to be normally…

  4. Operator Training: Who Is Responsible?

    ERIC Educational Resources Information Center

    Wubbena, Robert L.

    1979-01-01

    Summarized are the findings of a study to identify and correct water pollution control operator training deficiencies. Several models are presented to aid in developing a coordinated delivery system for operator training and certification. (CS)

  5. InSAR Unwrapping Error Correction Based on Quasi-Accurate Detection of Gross Errors (QUAD)

    NASA Astrophysics Data System (ADS)

    Kang, Y.; Zhao, C. Y.; Zhang, Q.; Yang, C. S.

    2018-04-01

    Unwrapping errors are common in InSAR processing and can seriously degrade the accuracy of the monitoring results. Based on a gross-error detection method, quasi-accurate detection (QUAD), a method for automatic correction of unwrapping errors is established in this paper. The method identifies and corrects the unwrapping errors by establishing a functional model between the true errors and the interferograms. The basic principle and processing steps are presented. The method is then compared with the L1-norm method using simulated data. Results show that both methods can effectively suppress unwrapping errors when the proportion of unwrapping errors is low, and that the two methods can complement each other when the proportion is relatively high. Finally, real SAR data are used to test the phase unwrapping error correction. Results show that this new method can correct phase unwrapping errors successfully in practical applications.
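
    The QUAD-based procedure itself is not reproduced here, but the underlying idea that unwrapping errors are integer multiples of 2π relative to a reference (modelled) phase can be illustrated with a minimal sketch; the reference phase and the injected jump below are synthetic.

    ```python
    # Simplified illustration (not the QUAD method): unwrapping errors are integer
    # multiples of 2*pi, so a residual against a smooth reference phase model can be
    # rounded to the nearest cycle and subtracted.
    import numpy as np

    def correct_unwrapping_errors(phase: np.ndarray, reference: np.ndarray) -> np.ndarray:
        """Remove 2*pi-cycle jumps relative to a reference (e.g., modelled) phase."""
        residual = phase - reference
        cycles = np.round(residual / (2 * np.pi))   # integer number of erroneous cycles
        return phase - 2 * np.pi * cycles

    # Hypothetical example: a phase ramp with a spurious 2*pi jump halfway along.
    ref = np.linspace(0, 5, 100)
    obs = ref.copy()
    obs[50:] += 2 * np.pi                           # injected unwrapping error
    print(np.allclose(correct_unwrapping_errors(obs, ref), ref))  # True
    ```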

  6. Robust, open-source removal of systematics in Kepler data

    NASA Astrophysics Data System (ADS)

    Aigrain, S.; Parviainen, H.; Roberts, S.; Reece, S.; Evans, T.

    2017-10-01

    We present ARC2 (Astrophysically Robust Correction 2), an open-source python-based systematics-correction pipeline for correcting Kepler prime-mission long-cadence light curves. The ARC2 pipeline identifies and corrects any isolated discontinuities in the light curves and then removes trends common to many light curves. These trends are modelled using the publicly available co-trending basis vectors, within an (approximate) Bayesian framework with 'shrinkage' priors to minimize the risk of overfitting and the injection of any additional noise into the corrected light curves, while keeping any astrophysical signals intact. We show that the ARC2 pipeline's performance matches that of the standard Kepler PDC-MAP data products using standard noise metrics, and demonstrate its ability to preserve astrophysical signals using injection tests with simulated stellar rotation and planetary transit signals. Although it is not identical, the ARC2 pipeline can thus be used as an open-source alternative to PDC-MAP, whenever the ability to model the impact of the systematics removal process on other kinds of signal is important.
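
    As a rough illustration of the general co-trending idea (fitting common basis vectors with a shrinkage penalty and subtracting the fitted trend), the following sketch uses plain ridge-regularised least squares; it is not the ARC2 implementation, and the basis vectors and light curve are synthetic.

    ```python
    # Minimal sketch of co-trending with shrinkage (ridge-regularised least squares),
    # illustrating the general idea rather than the actual ARC2 pipeline.
    import numpy as np

    def cotrend(flux: np.ndarray, basis: np.ndarray, shrinkage: float = 1.0) -> np.ndarray:
        """Remove trends spanned by the columns of `basis` from `flux`.

        The shrinkage term penalises large coefficients, reducing overfitting and
        the injection of extra noise into the corrected light curve.
        """
        ridge = shrinkage * np.eye(basis.shape[1])
        coeffs = np.linalg.solve(basis.T @ basis + ridge, basis.T @ flux)
        return flux - basis @ coeffs

    # Hypothetical example: 1000-sample light curve, 4 common-trend basis vectors.
    rng = np.random.default_rng(0)
    cbv = rng.standard_normal((1000, 4))
    lc = cbv @ np.array([0.5, -0.2, 0.1, 0.0]) + 0.01 * rng.standard_normal(1000)
    print(np.std(cotrend(lc, cbv)) < np.std(lc))  # True: common trends removed
    ```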

  7. Eliminating bias in rainfall estimates from microwave links due to antenna wetting

    NASA Astrophysics Data System (ADS)

    Fencl, Martin; Rieckermann, Jörg; Bareš, Vojtěch

    2014-05-01

    Commercial microwave links (MWLs) are point-to-point radio systems widely used in telecommunication networks. They operate at frequencies where the transmitted power is mainly disturbed by precipitation. Thus, signal attenuation from MWLs can be used to estimate path-averaged rain rates, which is conceptually very promising, since MWLs cover about 20 % of the surface area. Unfortunately, MWL rainfall estimates are often positively biased due to additional attenuation caused by antenna wetting. To correct MWL observations a posteriori for the wet antenna effect (WAE), both empirically and physically based models have been suggested. However, it is challenging to calibrate these models, because the wet antenna attenuation depends both on the MWL properties (frequency, type of antennas, shielding, etc.) and on different climatic factors (temperature, dew point, wind velocity and direction, etc.). Instead, it seems straightforward to keep antennas dry by shielding them. In this investigation we compare the effectiveness of antenna shielding to model-based corrections to reduce the WAE. The experimental setup, located in Dübendorf, Switzerland, consisted of a 1.85-km-long commercial dual-polarization microwave link at 38 GHz and 5 optical disdrometers. The MWL was operated without shielding in the period from March to October 2011 and with shielding from October 2011 to July 2012. This unique experimental design made it possible to identify the attenuation due to antenna wetting, which can be computed as the difference between the measured and theoretical attenuation. The theoretical path-averaged attenuation was calculated from the path-averaged drop size distribution. During the unshielded periods, the total bias caused by the WAE was 0.74 dB, which was reduced by shielding to 0.39 dB for the horizontal polarization (vertical: reduction from 0.96 dB to 0.44 dB). Interestingly, the model-based correction (Schleiss et al. 2013) was more effective because it reduced the bias of the unshielded periods to 0.07 dB for the horizontal polarization (vertical: 0.06 dB). Applying the same model-based correction to shielded periods reduced the bias even further, to -0.03 dB and -0.01 dB, respectively. This indicates that additional attenuation could also be caused by other effects, such as reflection of sidelobes from wet surfaces and other environmental factors. Further, model-based corrections do not correctly capture the nature of the WAE but more likely provide only an empirical correction. This claim is supported by the fact that detailed analysis of particular events reveals that the performance of both antenna shielding and the model-based correction differs substantially from event to event. Further investigation based on direct observation of antenna wetting and other environmental variables needs to be performed to identify the nature of the attenuation bias more properly. Schleiss, M., J. Rieckermann, and A. Berne, 2013: Quantification and modeling of wet-antenna attenuation for commercial microwave links. IEEE Geosci. Remote Sens. Lett., 10.1109/LGRS.2012.2236074.
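
    The bookkeeping behind the reported bias numbers (wet-antenna attenuation as measured minus DSD-derived theoretical attenuation, averaged into a bias) can be sketched as follows; the attenuation values are hypothetical placeholders, not the experiment's data.

    ```python
    # Minimal sketch: wet-antenna attenuation estimated as the difference between
    # measured path attenuation and the theoretical attenuation computed from the
    # drop size distribution; values below are hypothetical.
    import numpy as np

    measured_db = np.array([3.2, 5.1, 0.9, 7.4])      # MWL-measured attenuation (dB)
    theoretical_db = np.array([2.6, 4.5, 0.6, 6.3])   # DSD-derived rain attenuation (dB)

    wet_antenna_db = measured_db - theoretical_db      # per-sample wet antenna effect
    bias_db = wet_antenna_db.mean()                    # mean bias over the period
    corrected_db = measured_db - bias_db               # simple constant-offset correction
    print(bias_db, corrected_db)
    ```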

  8. Real-space post-processing correction of thermal drift and piezoelectric actuator nonlinearities in scanning tunneling microscope images.

    PubMed

    Yothers, Mitchell P; Browder, Aaron E; Bumm, Lloyd A

    2017-01-01

    We have developed a real-space method to correct distortion due to thermal drift and piezoelectric actuator nonlinearities on scanning tunneling microscope images using Matlab. The method uses the known structures typically present in high-resolution atomic and molecularly resolved images as an internal standard. Each image feature (atom or molecule) is first identified in the image. The locations of each feature's nearest neighbors are used to measure the local distortion at that location. The local distortion map across the image is simultaneously fit to our distortion model, which includes thermal drift in addition to piezoelectric actuator hysteresis and creep. The image coordinates of the features and image pixels are corrected using an inverse transform from the distortion model. We call this technique the thermal-drift, hysteresis, and creep transform. Performing the correction in real space allows defects, domain boundaries, and step edges to be excluded with a spatial mask. Additional real-space image analyses are now possible with these corrected images. Using graphite(0001) as a model system, we show lattice fitting to the corrected image, averaged unit cell images, and symmetry-averaged unit cell images. Statistical analysis of the distribution of the image features around their best-fit lattice sites measures the aggregate noise in the image, which can be expressed as feature confidence ellipsoids.
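
    The published correction fits a full drift/hysteresis/creep model; as a simplified illustration of the same workflow (fit a distortion from feature displacements, then invert it to correct coordinates), the sketch below uses only an affine distortion with synthetic lattice points.

    ```python
    # Simplified sketch: fit an affine distortion mapping observed feature positions
    # to their ideal lattice positions and apply it to correct image coordinates.
    # The published method uses a richer drift/hysteresis/creep model; this is only
    # the affine special case with synthetic data.
    import numpy as np

    def fit_affine(observed: np.ndarray, ideal: np.ndarray) -> np.ndarray:
        """Least-squares 2x3 affine transform mapping observed -> ideal (N x 2 arrays)."""
        ones = np.ones((observed.shape[0], 1))
        design = np.hstack([observed, ones])                 # N x 3
        params, *_ = np.linalg.lstsq(design, ideal, rcond=None)
        return params.T                                      # 2 x 3

    def apply_affine(points: np.ndarray, affine: np.ndarray) -> np.ndarray:
        ones = np.ones((points.shape[0], 1))
        return np.hstack([points, ones]) @ affine.T

    # Hypothetical example: a sheared/drifted lattice pulled back to its ideal form.
    ideal = np.array([[i, j] for i in range(5) for j in range(5)], dtype=float)
    distortion = np.array([[1.02, 0.03, 0.1], [0.00, 0.98, -0.2]])
    observed = apply_affine(ideal, distortion)
    correction = fit_affine(observed, ideal)
    print(np.allclose(apply_affine(observed, correction), ideal))  # True
    ```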

  9. Real-space post-processing correction of thermal drift and piezoelectric actuator nonlinearities in scanning tunneling microscope images

    NASA Astrophysics Data System (ADS)

    Yothers, Mitchell P.; Browder, Aaron E.; Bumm, Lloyd A.

    2017-01-01

    We have developed a real-space method to correct distortion due to thermal drift and piezoelectric actuator nonlinearities on scanning tunneling microscope images using Matlab. The method uses the known structures typically present in high-resolution atomic and molecularly resolved images as an internal standard. Each image feature (atom or molecule) is first identified in the image. The locations of each feature's nearest neighbors are used to measure the local distortion at that location. The local distortion map across the image is simultaneously fit to our distortion model, which includes thermal drift in addition to piezoelectric actuator hysteresis and creep. The image coordinates of the features and image pixels are corrected using an inverse transform from the distortion model. We call this technique the thermal-drift, hysteresis, and creep transform. Performing the correction in real space allows defects, domain boundaries, and step edges to be excluded with a spatial mask. Additional real-space image analyses are now possible with these corrected images. Using graphite(0001) as a model system, we show lattice fitting to the corrected image, averaged unit cell images, and symmetry-averaged unit cell images. Statistical analysis of the distribution of the image features around their best-fit lattice sites measures the aggregate noise in the image, which can be expressed as feature confidence ellipsoids.

  10. Mathematical modeling of erythrocyte chimerism informs genetic intervention strategies for sickle cell disease.

    PubMed

    Altrock, Philipp M; Brendel, Christian; Renella, Raffaele; Orkin, Stuart H; Williams, David A; Michor, Franziska

    2016-09-01

    Recent advances in gene therapy and genome-engineering technologies offer the opportunity to correct sickle cell disease (SCD), a heritable disorder caused by a point mutation in the β-globin gene. The developmental switch from fetal γ-globin to adult β-globin is governed in part by the transcription factor (TF) BCL11A. This TF has been proposed as a therapeutic target for reactivation of γ-globin and concomitant reduction of β-sickle globin. In this and other approaches, genetic alteration of a portion of the hematopoietic stem cell (HSC) compartment leads to a mixture of sickling and corrected red blood cells (RBCs) in the periphery. To reverse the sickling phenotype, a certain proportion of corrected RBCs is necessary; the degree of HSC alteration required to achieve a desired fraction of corrected RBCs remains unknown. To address this issue, we developed a mathematical model describing aging and survival of sickle-susceptible and normal RBCs; the former can have a selective survival advantage leading to their overrepresentation. We identified the level of bone marrow chimerism required for successful stem cell-based gene therapies in SCD. Our findings were further informed using an experimental mouse model, where we transplanted mixtures of Berkeley SCD and normal murine bone marrow cells to establish chimeric grafts in murine hosts. Our integrative theoretical and experimental approach identifies the target frequency of HSC alterations required for effective treatment of sickling syndromes in humans. Our work replaces episodic observations of such target frequencies with a mathematical modeling framework that covers a large and continuous spectrum of chimerism conditions. Am. J. Hematol. 91:931-937, 2016. © 2016 Wiley Periodicals, Inc.
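
    A toy calculation, not the authors' model, can convey why the peripheral fraction of longer-lived RBCs exceeds the HSC chimerism when one cell population survives longer; the lifespans used below are assumed round numbers, not values from the study.

    ```python
    # Toy illustration (not the published model): with HSC chimerism c and mean RBC
    # lifespans L_corrected > L_sickle, the steady-state peripheral fraction of
    # corrected RBCs exceeds c because longer-lived cells accumulate.
    def peripheral_corrected_fraction(chimerism: float,
                                      lifespan_corrected: float = 120.0,
                                      lifespan_sickle: float = 15.0) -> float:
        corrected = chimerism * lifespan_corrected
        sickle = (1.0 - chimerism) * lifespan_sickle
        return corrected / (corrected + sickle)

    # Hypothetical: 20% bone marrow chimerism with assumed lifespans of 120 vs 15 days.
    print(peripheral_corrected_fraction(0.20))  # ~0.67
    ```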

  11. Standoff Human Identification Using Body Shape

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matzner, Shari; Heredia-Langner, Alejandro; Amidan, Brett G.

    2015-09-01

    The ability to identify individuals is a key component of maintaining safety and security in public spaces and around critical infrastructure. Monitoring an open space is challenging because individuals must be identified and re-identified from a standoff distance nonintrusively, making methods like fingerprinting and even facial recognition impractical. We propose using body shape features as a means for identification from standoff sensing, either complementing other identifiers or as an alternative. An important challenge in monitoring open spaces is reconstructing identifying features when only a partial observation is available, because of the view-angle limitations and occlusion or subject pose changes. To address this challenge, we investigated the minimum number of features required for a high probability of correct identification, and we developed models for predicting a key body feature—height—from a limited set of observed features. We found that any set of nine randomly selected body measurements was sufficient to correctly identify an individual in a dataset of 4426 subjects. For predicting height, anthropometric measures were investigated for correlation with height. Their correlation coefficients and associated linear models were reported. These results—a sufficient number of features for identification and height prediction from a single feature—contribute to developing systems for standoff identification when views of a subject are limited.
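
    The height-prediction models referred to here are linear fits of height against single anthropometric measures; the sketch below shows a generic least-squares version with synthetic data rather than the study's measurements or coefficients.

    ```python
    # Generic sketch of predicting height from one body measurement with a linear
    # model; the data below are synthetic, not the study's anthropometric dataset.
    import numpy as np

    arm_span_cm = np.array([160.0, 168.0, 175.0, 182.0, 190.0])   # hypothetical predictor
    height_cm = np.array([158.0, 166.0, 172.0, 180.0, 187.0])     # hypothetical heights

    slope, intercept = np.polyfit(arm_span_cm, height_cm, deg=1)
    predicted = slope * 178.0 + intercept                          # predict for a new subject
    r = np.corrcoef(arm_span_cm, height_cm)[0, 1]                  # correlation with height
    print(round(predicted, 1), round(r, 3))
    ```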

  12. Phase I Flow and Transport Model Document for Corrective Action Unit 97: Yucca Flat/Climax Mine, Nevada National Security Site, Nye County, Nevada, Revision 1 with ROTCs 1 and 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, Robert

    The Underground Test Area (UGTA) Corrective Action Unit (CAU) 97, Yucca Flat/Climax Mine, in the northeast part of the Nevada National Security Site (NNSS) requires environmental corrective action activities to assess contamination resulting from underground nuclear testing. These activities are necessary to comply with the UGTA corrective action strategy (referred to as the UGTA strategy). The corrective action investigation phase of the UGTA strategy requires the development of groundwater flow and contaminant transport models whose purpose is to identify the lateral and vertical extent of contaminant migration over the next 1,000 years. In particular, the goal is to calculate the contaminant boundary, which is defined as a probabilistic model-forecast perimeter and a lower hydrostratigraphic unit (HSU) boundary that delineate the possible extent of radionuclide-contaminated groundwater from underground nuclear testing. Because of structural uncertainty in the contaminant boundary, a range of potential contaminant boundaries was forecast, resulting in an ensemble of contaminant boundaries. The contaminant boundary extent is determined by the volume of groundwater that has at least a 5 percent chance of exceeding the radiological standards of the Safe Drinking Water Act (SDWA) (CFR, 2012).

  13. Correction of the spectral calibration of the Joint European Torus core light detecting and ranging Thomson scattering diagnostic using ray tracing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hawke, J.; Scannell, R.; Maslov, M.

    2013-10-15

    This work isolated the cause of the observed discrepancy between the electron temperature (T_e) measurements before and after the JET Core LIDAR Thomson Scattering (TS) diagnostic was upgraded. In the upgrade process, stray light filters positioned just before the detectors were removed from the system. Modelling showed that the shift imposed on the stray light filters' transmission functions due to the variations in the incidence angles of the collected photons impacted plasma measurements. To correct for this identified source of error, correction factors were developed using ray tracing models for the calibration and operational states of the diagnostic. The application of these correction factors resulted in an increase in the observed T_e, resulting in the partial if not complete removal of the observed discrepancy in the measured T_e between the JET core LIDAR TS diagnostic, High Resolution Thomson Scattering, and the Electron Cyclotron Emission diagnostics.

  14. Energy considerations in the Community Atmosphere Model (CAM)

    DOE PAGES

    Williamson, David L.; Olson, Jerry G.; Hannay, Cécile; ...

    2015-06-30

    An error in the energy formulation in the Community Atmosphere Model (CAM) is identified and corrected. Ten-year AMIP simulations are compared using the correct and incorrect energy formulations. Statistics of selected primary variables all indicate physically insignificant differences between the simulations, comparable to differences with simulations initialized with rounding-sized perturbations. The two simulations are so similar mainly because of an inconsistency in the application of the incorrect energy formulation in the original CAM. CAM used the erroneous energy form to determine the states passed between the parameterizations, but used a form related to the correct formulation for the state passed from the parameterizations to the dynamical core. If the incorrect form is also used to determine the state passed to the dynamical core, the simulations are significantly different. In addition, CAM uses the incorrect form for the global energy fixer, but that seems to be less important. The difference of the magnitude of the fixers using the correct and incorrect energy definitions is very small.

  15. Clinical pharmacology of analgesics assessed with human experimental pain models: bridging basic and clinical research

    PubMed Central

    Oertel, Bruno Georg; Lötsch, Jörn

    2013-01-01

    The medical impact of pain is such that much effort is being applied to develop novel analgesic drugs directed towards new targets and to investigate the analgesic efficacy of known drugs. Ongoing research requires cost-saving tools to translate basic science knowledge into clinically effective analgesic compounds. In this review we have re-examined the prediction of clinical analgesia by human experimental pain models as a basis for model selection in phase I studies. The overall prediction of analgesic efficacy or failure of a drug correlated well between experimental and clinical settings. However, correct model selection requires more detailed information about which model predicts a particular clinical pain condition. We hypothesized that if an analgesic drug was effective in an experimental pain model and also a specific clinical pain condition, then that model might be predictive for that particular condition and should be selected for development as an analgesic for that condition. The validity of the prediction increases with an increase in the numbers of analgesic drug classes for which this agreement was shown. From available evidence, only five clinical pain conditions were correctly predicted by seven different pain models for at least three different drugs. Most of these models combine a sensitization method. The analysis also identified several models with low impact with respect to their clinical translation. Thus, the presently identified agreements and non-agreements between analgesic effects on experimental and on clinical pain may serve as a solid basis to identify complex sets of human pain models that bridge basic science with clinical pain research. PMID:23082949

  16. Phylogenetic Analysis and Classification of the Fungal bHLH Domain

    PubMed Central

    Sailsbery, Joshua K.; Atchley, William R.; Dean, Ralph A.

    2012-01-01

    The basic Helix-Loop-Helix (bHLH) domain is an essential, highly conserved DNA-binding domain found in many transcription factors in all eukaryotic organisms. The bHLH domain has been well studied in the Animal and Plant Kingdoms but has yet to be characterized within Fungi. Herein, we obtained and evaluated the phylogenetic relationship of 490 fungal-specific bHLH-containing proteins from 55 whole genome projects composed of 49 Ascomycota and 6 Basidiomycota organisms. We identified 12 major groupings within Fungi (F1–F12), identifying conserved motifs and functions specific to each group. Several classification models were built to distinguish the 12 groups and elucidate the most discerning sites in the domain. Performance testing of these models for correct group classification resulted in a maximum sensitivity and specificity of 98.5% and 99.8%, respectively. We identified 12 highly discerning sites and incorporated those into a set of rules (simplified model) to classify sequences into the correct group. Conservation of amino acid sites and phylogenetic analyses established that, like plant bHLH proteins, fungal bHLH–containing proteins are most closely related to animal Group B. The models used in these analyses were incorporated into a software package, the source code for which is available at www.fungalgenomics.ncsu.edu. PMID:22114358

  17. External Peer Review Team Report for Corrective Action Unit 97: Yucca Flat/Climax Mine, Nevada National Security Site, Nye County, Nevada, Revision 0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marutzky, Sam J.; Andrews, Robert

    The peer review team commends the Navarro-Intera, LLC (N-I), team for its efforts in using limited data to model the fate of radionuclides in groundwater at Yucca Flat. Recognizing the key uncertainties and related recommendations discussed in Section 6.0 of this report, the peer review team has concluded that U.S. Department of Energy (DOE) is ready for a transition to model evaluation studies in the corrective action decision document (CADD)/corrective action plan (CAP) stage. The DOE, National Nuclear Security Administration Nevada Field Office (NNSA/NFO) clarified the charge to the peer review team in a letter dated October 9, 2014, from Bill R. Wilborn, NNSA/NFO Underground Test Area (UGTA) Activity Lead, to Sam J. Marutzky, N-I UGTA Project Manager: “The model and supporting information should be sufficiently complete that the key uncertainties can be adequately identified such that they can be addressed by appropriate model evaluation studies. The model evaluation studies may include data collection and model refinements conducted during the CADD/CAP stage. One major input to identifying ‘key uncertainties’ is the detailed peer review provided by independent qualified peers.” The key uncertainties that the peer review team recognized and potential concerns associated with each are outlined in Section 6.0, along with recommendations corresponding to each uncertainty. The uncertainties, concerns, and recommendations are summarized in Table ES-1. The number associated with each concern refers to the section in this report where the concern is discussed in detail.

  18. Improved atmospheric correction and chlorophyll-a remote sensing models for turbid waters in a dusty environment

    NASA Astrophysics Data System (ADS)

    Al Shehhi, Maryam R.; Gherboudj, Imen; Zhao, Jun; Ghedira, Hosni

    2017-11-01

    This study presents a comprehensive assessment of the performance of the commonly used atmospheric correction models (NIR, SWIR, NIR-SWIR and FM) and ocean color products (OC3 and OC2) derived from MODIS images over the Arabian Gulf, Sea of Oman, and Arabian Sea. The considered atmospheric correction models have been used to derive MODIS normalized water-leaving radiances (nLw), which are compared to in situ water nLw(λ) data collected at different locations by Masdar Institute, United Arab Emirates, and from the AERONET-OC (the ocean color component of the Aerosol Robotic Network) database. From this comparison, the NIR model has been found to be the best-performing of the considered atmospheric correction models, although it shows disparity, especially at short wavelengths (400-500 nm), under high aerosol optical depth conditions (AOT(869) > 0.3) and over turbid waters. To reduce the error induced by these factors, a modified model taking into consideration the atmospheric and water turbidity conditions has been proposed. A turbidity index was used to identify turbid water, and a threshold of AOT(869) = 0.3 was used to identify a dusty atmosphere. Despite improved results in the MODIS nLw(λ) using the proposed approach, Chl-a models (OC3 and OC2) show low performance when compared to the in situ Chl-a measurements collected during several field campaigns organized by local, regional and international organizations. This discrepancy might be caused by the improper parametrization of these models and/or the improper selection of bands. Thus, an adaptive power fit algorithm (R2 = 0.95) has been proposed to improve the estimation of Chl-a concentration over the range 0.07 to 10 mg/m3 by using a new blue/red MODIS band ratio of (443,488)/645 instead of the default band ratio used for OC3, (443,488)/547. The selection of this new band ratio is based on the 645 nm band, which has been found to represent both water turbidity and algal absorption.
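
    The adaptive power fit mentioned above has the generic form Chl-a = a · (band ratio)^b, which is conveniently fitted in log-log space; the sketch below uses synthetic ratio/Chl-a pairs, not the paper's data or coefficients.

    ```python
    # Generic power-law fit Chl = a * ratio**b in log-log space, illustrating the
    # form of a band-ratio Chl-a algorithm; data and coefficients are synthetic.
    import numpy as np

    ratio = np.array([0.8, 1.0, 1.5, 2.0, 3.0])          # hypothetical (443,488)/645 ratios
    chl = np.array([5.2, 3.1, 1.2, 0.7, 0.3])            # hypothetical Chl-a (mg/m3)

    b, log_a = np.polyfit(np.log(ratio), np.log(chl), deg=1)
    a = np.exp(log_a)
    print(a, b)                                           # fitted power-law coefficients
    print(a * 1.8 ** b)                                   # predicted Chl-a at ratio = 1.8
    ```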

  19. A formal theory of feature binding in object perception.

    PubMed

    Ashby, F G; Prinzmetal, W; Ivry, R; Maddox, W T

    1996-01-01

    Visual objects are perceived correctly only if their features are identified and then bound together. Illusory conjunctions result when feature identification is correct but an error occurs during feature binding. A new model is proposed that assumes feature binding errors occur because of uncertainty about the location of visual features. This model accounted for data from 2 new experiments better than a model derived from A. M. Treisman and H. Schmidt's (1982) feature integration theory. The traditional method for detecting the occurrence of true illusory conjunctions is shown to be fundamentally flawed. A reexamination of 2 previous studies provided new insights into the role of attention and location information in object perception and a reinterpretation of the deficits in patients who exhibit attentional disorders.

  20. Classification and correction of the radar bright band with polarimetric radar

    NASA Astrophysics Data System (ADS)

    Hall, Will; Rico-Ramirez, Miguel; Kramer, Stefan

    2015-04-01

    The annular region of enhanced radar reflectivity, known as the Bright Band (BB), occurs when the radar beam intersects a layer of melting hydrometeors. Radar reflectivity is related to rainfall through a power-law equation, so this enhanced region can lead to overestimation of rainfall by a factor of up to 5, and it is important to correct for this. The BB region can be identified using several techniques, including hydrometeor classification and freezing level forecasts from mesoscale meteorological models. Advances in dual-polarisation radar measurements and continued research in the field have led to increased accuracy in the ability to identify the melting snow region. A method proposed by Kitchen et al. (1994), a form of which is currently used operationally in the UK, utilises idealised Vertical Profiles of Reflectivity (VPR) to correct for the BB enhancement. A simpler and more computationally efficient method involves forming an average VPR from multiple elevations for correction, which can still produce a significant decrease in error (Vignal 2000). The purpose of this research is to evaluate a method that relies only on analysis of measurements from an operational C-band polarimetric radar without the need for computationally expensive models. Initial results show that LDR is a strong classifier of melting snow, with a high Critical Success Index of 97% when compared to the other variables. An algorithm based on idealised VPRs resulted in the largest decrease in error when BB-corrected scans are compared to rain gauges and to lower-level scans, with a reduction in RMSE of 61% for rain-rate measurements. References: Kitchen, M., R. Brown, and A. G. Davies, 1994: Real-time correction of weather radar data for the effects of bright band, range and orographic growth in widespread precipitation. Q.J.R. Meteorol. Soc., 120, 1231-1254. Vignal, B. et al., 2000: Three methods to determine profiles of reflectivity from volumetric radar data to correct precipitation estimates. J. Appl. Meteor., 39(10), 1715-1726.

  1. Investigation of Primary Mirror Segment's Residual Errors for the Thirty Meter Telescope

    NASA Technical Reports Server (NTRS)

    Seo, Byoung-Joon; Nissly, Carl; Angeli, George; MacMynowski, Doug; Sigrist, Norbert; Troy, Mitchell; Williams, Eric

    2009-01-01

    The primary mirror segment aberrations after shape corrections with warping harness have been identified as the single largest error term in the Thirty Meter Telescope (TMT) image quality error budget. In order to better understand the likely errors and how they will impact the telescope performance we have performed detailed simulations. We first generated unwarped primary mirror segment surface shapes that met TMT specifications. Then we used the predicted warping harness influence functions and a Shack-Hartmann wavefront sensor model to determine estimates for the 492 corrected segment surfaces that make up the TMT primary mirror. Surface and control parameters, as well as the number of subapertures were varied to explore the parameter space. The corrected segment shapes were then passed to an optical TMT model built using the Jet Propulsion Laboratory (JPL) developed Modeling and Analysis for Controlled Optical Systems (MACOS) ray-trace simulator. The generated exit pupil wavefront error maps provided RMS wavefront error and image-plane characteristics like the Normalized Point Source Sensitivity (PSSN). The results have been used to optimize the segment shape correction and wavefront sensor designs as well as provide input to the TMT systems engineering error budgets.

  2. Rules based process window OPC

    NASA Astrophysics Data System (ADS)

    O'Brien, Sean; Soper, Robert; Best, Shane; Mason, Mark

    2008-03-01

    As a preliminary step towards Model-Based Process Window OPC we have analyzed the impact of correcting post-OPC layouts using rules-based methods. Image processing on the Brion Tachyon was used to identify sites where the OPC model/recipe failed to generate an acceptable solution. A set of rules for 65nm active and poly was generated by classifying these failure sites. The rules were based upon segment runlengths, figure spaces, and adjacent figure widths. 2.1 million sites for active were corrected in a small chip (comparing the pre- and post-rules-based operations), and 59 million were found at poly. Tachyon analysis of the final reticle layout found weak margin sites distinct from those sites repaired by rules-based corrections. For the active layer, more than 75% of the sites corrected by rules would have printed without a defect, indicating that most rules-based cleanups degrade the lithographic pattern. Some sites were missed by the rules-based cleanups due to either bugs in the DRC software or gaps in the rules table. In the end, dramatic changes to the reticle prevented catastrophic lithography errors, but this method is far too blunt. A more subtle, model-based procedure is needed that changes only those sites with unsatisfactory lithographic margin.

  3. Deterministic figure correction of piezoelectrically adjustable slumped glass optics

    NASA Astrophysics Data System (ADS)

    DeRoo, Casey T.; Allured, Ryan; Cotroneo, Vincenzo; Hertz, Edward; Marquez, Vanessa; Reid, Paul B.; Schwartz, Eric D.; Vikhlinin, Alexey A.; Trolier-McKinstry, Susan; Walker, Julian; Jackson, Thomas N.; Liu, Tianning; Tendulkar, Mohit

    2018-01-01

    Thin x-ray optics with high angular resolution (≤ 0.5 arcsec) over a wide field of view enable the study of a number of astrophysically important topics and feature prominently in Lynx, a next-generation x-ray observatory concept currently under NASA study. In an effort to address this technology need, piezoelectrically adjustable, thin mirror segments capable of figure correction after mounting and on-orbit are under development. We report on the fabrication and characterization of an adjustable cylindrical slumped glass optic. This optic has realized 100% piezoelectric cell yield and employs lithographically patterned traces and anisotropic conductive film connections to address the piezoelectric cells. In addition, the measured responses of the piezoelectric cells are found to be in good agreement with finite-element analysis models. While the optic as manufactured is outside the range of absolute figure correction, simulated corrections using the measured responses of the piezoelectric cells are found to improve 5 to 10 arcsec mirrors to 1 to 3 arcsec [half-power diameter (HPD), single reflection at 1 keV]. Moreover, a measured relative figure change which would correct the figure of a representative slumped glass piece from 6.7 to 1.2 arcsec HPD is empirically demonstrated. We employ finite-element analysis-modeled influence functions to understand the current frequency limitations of the correction algorithm employed and identify a path toward achieving subarcsecond corrections.

  4. 75 FR 69609 - Airworthiness Directives; Bombardier, Inc. Model CL-600-2B19 (Regional Jet Series 100 & 440...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-15

    .... Model CL-600-2B19 (Regional Jet Series 100 & 440) Airplanes AGENCY: Federal Aviation Administration (FAA... DEPARTMENT OF TRANSPORTATION Federal Aviation Administration 14 CFR Part 39 [Docket No. FAA-2010... airworthiness information (MCAI) originated by an aviation authority of another country to identify and correct...

  5. A Systematic Approach for Identifying Level-1 Error Covariance Structures in Latent Growth Modeling

    ERIC Educational Resources Information Center

    Ding, Cherng G.; Jane, Ten-Der; Wu, Chiu-Hui; Lin, Hang-Rung; Shen, Chih-Kang

    2017-01-01

    It has been pointed out in the literature that misspecification of the level-1 error covariance structure in latent growth modeling (LGM) has detrimental impacts on the inferences about growth parameters. Since correct covariance structure is difficult to specify by theory, the identification needs to rely on a specification search, which,…

  6. Hydrological modeling as an evaluation tool of EURO-CORDEX climate projections and bias correction methods

    NASA Astrophysics Data System (ADS)

    Hakala, Kirsti; Addor, Nans; Seibert, Jan

    2017-04-01

    Streamflow stemming from Switzerland's mountainous landscape will be influenced by climate change, which will pose significant challenges to the water management and policy sector. In climate change impact research, the determination of future streamflow is impeded by different sources of uncertainty, which propagate through the model chain. In this research, we explicitly considered the following sources of uncertainty: (1) climate models, (2) downscaling of the climate projections to the catchment scale, (3) bias correction method, and (4) parameterization of the hydrological model. We utilize climate projections at 0.11 degree (approximately 12.5 km) resolution from the EURO-CORDEX project, which are the most recent climate projections for the European domain. EURO-CORDEX comprises regional climate model (RCM) simulations, which have been downscaled from global climate models (GCMs) from the CMIP5 archive, using both dynamical and statistical techniques. Uncertainties are explored by applying a modeling chain involving 14 GCM-RCMs to ten Swiss catchments. We utilize the rainfall-runoff model HBV Light, which has been widely used in operational hydrological forecasting. The Lindström measure, a combination of model efficiency and volume error, was used as an objective function to calibrate HBV Light. The ten best parameter sets are then obtained by calibrating with the genetic algorithm and Powell (GAP) optimization method. The GAP optimization method is based on the evolution of parameter sets, selecting and recombining high-performing parameter sets. Once HBV is calibrated, we then perform a quantitative comparison of the influence of biases inherited from climate model simulations with the biases stemming from the hydrological model. The evaluation is conducted over two time periods: i) 1980-2009 to characterize the simulation realism under the current climate, and ii) 2070-2099 to identify the magnitude of the projected change of streamflow under the climate scenarios RCP4.5 and RCP8.5. We utilize two techniques for correcting biases in the climate model output: quantile mapping and a new method, frequency bias correction (FBC). The FBC method matches the frequencies between observed and GCM-RCM data. In this way, it can be used to correct all time scales, addressing a known limitation of quantile mapping. A novel approach for the evaluation of the climate simulations and bias correction methods was then applied. Streamflow can be thought of as the "great integrator" of uncertainties. The ability, or the lack thereof, to correctly simulate streamflow is a way to assess the realism of the bias-corrected climate simulations. Long-term monthly means as well as high- and low-flow metrics are used to evaluate the realism of the simulations under the current climate and to gauge the impacts of climate change on streamflow. Preliminary results show that under the present climate, calibration of the hydrological model contributes a much smaller band of uncertainty in the modeling chain compared to the bias correction of the GCM-RCMs. Therefore, for future time periods, we expect the bias correction of climate model data to have a greater influence on projected changes in streamflow than the calibration of the hydrological model.
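
    Of the two bias correction techniques named above, quantile mapping is the standard one: each simulated value is replaced by the observed value at the same empirical non-exceedance probability in the calibration period. A minimal empirical-CDF sketch with synthetic data follows; it is not the authors' implementation.

    ```python
    # Minimal empirical quantile mapping sketch: map each model value to the observed
    # value at the same empirical quantile. Data are synthetic placeholders.
    import numpy as np

    def quantile_map(model_future: np.ndarray,
                     model_hist: np.ndarray,
                     obs_hist: np.ndarray) -> np.ndarray:
        """Bias-correct model_future using the model_hist -> obs_hist quantile relation."""
        quantiles = np.linspace(0, 1, 101)
        model_q = np.quantile(model_hist, quantiles)
        obs_q = np.quantile(obs_hist, quantiles)
        # Non-exceedance probability of each future value under the historical model
        # CDF, then look up the observed value at that probability.
        p = np.interp(model_future, model_q, quantiles)
        return np.interp(p, quantiles, obs_q)

    rng = np.random.default_rng(1)
    obs = rng.gamma(2.0, 2.0, 3000)          # "observed" climate
    mod = rng.gamma(2.0, 3.0, 3000)          # biased (too wet) historical model climate
    fut = rng.gamma(2.0, 3.3, 3000)          # future model climate
    print(mod.mean(), quantile_map(fut, mod, obs).mean(), obs.mean())
    ```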

  7. Dynamic Black-Level Correction and Artifact Flagging for Kepler Pixel Time Series

    NASA Technical Reports Server (NTRS)

    Kolodziejczak, J. J.; Clarke, B. D.; Caldwell, D. A.

    2011-01-01

    Methods applied to the calibration (CAL) stage of Kepler pipeline data processing [1] do not currently use all of the information available to identify and correct several instrument-induced artifacts. These include time-varying crosstalk from the fine guidance sensor (FGS) clock signals, manifestations of drifting moiré patterns as locally correlated nonstationary noise, and rolling bands in the images, which find their way into the time series [2], [3]. As the Kepler Mission continues to improve the fidelity of its science data products, we are evaluating the benefits of adding pipeline steps to more completely model and dynamically correct the FGS crosstalk, and then use the residuals from these model fits to detect and flag spatial regions and time intervals of strong time-varying black level, which may complicate later processing or lead to misinterpretation of instrument behavior as stellar activity.

  8. Baseline Correction of Diffuse Reflection Near-Infrared Spectra Using Searching Region Standard Normal Variate (SRSNV).

    PubMed

    Genkawa, Takuma; Shinzawa, Hideyuki; Kato, Hideaki; Ishikawa, Daitaro; Murayama, Kodai; Komiyama, Makoto; Ozaki, Yukihiro

    2015-12-01

    An alternative baseline correction method for diffuse reflection near-infrared (NIR) spectra, searching region standard normal variate (SRSNV), was proposed. Standard normal variate (SNV) is an effective pretreatment method for baseline correction of diffuse reflection NIR spectra of powder and granular samples; however, its baseline correction performance depends on the NIR region used for SNV calculation. To search for an optimal NIR region for baseline correction using SNV, SRSNV employs moving window partial least squares regression (MWPLSR), and an optimal NIR region is identified based on the root mean square error (RMSE) of cross-validation of the partial least squares regression (PLSR) models with the first latent variable (LV). The performance of SRSNV was evaluated using diffuse reflection NIR spectra of mixture samples consisting of wheat flour and granular glucose (0-100% glucose at 5% intervals). From the obtained NIR spectra of the mixture in the 10 000-4000 cm(-1) region at 4 cm(-1) intervals (1501 spectral channels), a series of spectral windows consisting of 80 spectral channels was constructed, and then SNV spectra were calculated for each spectral window. Using these SNV spectra, a series of PLSR models with the first LV for glucose concentration was built. A plot of RMSE versus the spectral window position obtained using the PLSR models revealed that the 8680–8364 cm(-1) region was optimal for baseline correction using SNV. In the SNV spectra calculated using the 8680–8364 cm(-1) region (SRSNV spectra), a remarkable relative intensity change between a band due to wheat flour at 8500 cm(-1) and that due to glucose at 8364 cm(-1) was observed owing to successful baseline correction using SNV. A PLSR model with the first LV based on the SRSNV spectra yielded a determination coefficient (R2) of 0.999 and an RMSE of 0.70%, while a PLSR model with three LVs based on SNV spectra calculated in the full spectral region gave an R2 of 0.995 and an RMSE of 2.29%. Additional evaluation of SRSNV was carried out using diffuse reflection NIR spectra of marzipan and corn samples, and PLSR models based on SRSNV spectra showed good prediction results. These evaluation results indicate that SRSNV is effective in baseline correction of diffuse reflection NIR spectra and provides regression models with good prediction accuracy.
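
    SNV itself is simply per-spectrum centering and scaling; SRSNV repeats it over candidate wavenumber windows and keeps the window with the lowest cross-validation RMSE. The sketch below shows the SNV step applied to one chosen window; the spectra and window indices are illustrative, not the 8680-8364 cm(-1) region itself.

    ```python
    # Sketch of standard normal variate (SNV) applied to a selected wavenumber window,
    # the core operation that SRSNV repeats across candidate windows. The window
    # indices and spectra here are illustrative placeholders.
    import numpy as np

    def snv(spectra: np.ndarray) -> np.ndarray:
        """Row-wise SNV: subtract each spectrum's mean and divide by its std."""
        mean = spectra.mean(axis=1, keepdims=True)
        std = spectra.std(axis=1, keepdims=True)
        return (spectra - mean) / std

    rng = np.random.default_rng(2)
    spectra = rng.random((10, 1501)) + np.linspace(0, 2, 1501)   # synthetic sloped baselines
    window = slice(300, 380)                                      # hypothetical 80-channel window
    snv_window = snv(spectra[:, window])                          # SNV on the searched region
    print(snv_window.mean(axis=1).round(12))                      # ~0 for every spectrum
    ```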

  9. Authenticity assessment of banknotes using portable near infrared spectrometer and chemometrics.

    PubMed

    da Silva Oliveira, Vanessa; Honorato, Ricardo Saldanha; Honorato, Fernanda Araújo; Pereira, Claudete Fernandes

    2018-05-01

    Spectra recorded using a portable near infrared (NIR) spectrometer, together with Soft Independent Modeling of Class Analogy (SIMCA) and Linear Discriminant Analysis (LDA) models associated with the Successive Projections Algorithm (SPA), were applied to identify counterfeit and authentic Brazilian Real (R$20, R$50 and R$100) banknotes, enabling simple field analysis. NIR spectra (950-1650 nm) were recorded from seven different areas of the banknotes (two with fluorescent ink, one over the watermark, three with the intaglio printing process and one over the serial numbers with typography printing). SIMCA and SPA-LDA models were built using 1st derivative preprocessed spectral data from one of the intaglio areas. For the SIMCA models, all authentic (300) banknotes were correctly classified and the counterfeits (227) were not classified. For the two-class SPA-LDA models (authentic and counterfeit currencies), all test samples were correctly classified into their respective classes. The number of variables selected by SPA varied from two to nineteen for the R$20, R$50 and R$100 currencies. These results show that the use of the portable NIR spectrometer with SIMCA or SPA-LDA models can be an effective, fast, and non-destructive way to verify the authenticity of banknotes while permitting field analysis. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. Response latencies are alive and well for identifying fakers on a self-report personality inventory: A reconsideration of van Hooft and Born (2012).

    PubMed

    Holden, Ronald R; Lambert, Christine E

    2015-12-01

    Van Hooft and Born (Journal of Applied Psychology 97:301-316, 2012) presented data challenging both the correctness of a congruence model of faking on personality test items and the relative merit (i.e., effect size) of response latencies for identifying fakers. We suggest that their analysis of response times was suboptimal, and that it followed neither from a congruence model of faking nor from published protocols on appropriately filtering the noise in personality test item answering times. Using new data and following recommended analytic procedures, we confirmed the relative utility of response times for identifying personality test fakers, and our obtained results, again, reinforce a congruence model of faking.

  11. Decision Making Configurations: An Alternative to the Centralization/Decentralization Conceptualization.

    ERIC Educational Resources Information Center

    Cullen, John B.; Perrewe, Pamela L.

    1981-01-01

    Used factors identified in the literature as predictors of centralization/decentralization as potential discriminating variables among several decision making configurations in university affiliated professional schools. The model developed from multiple discriminant analysis had reasonable success in classifying correctly only the decentralized…

  12. Tested Demonstrations.

    ERIC Educational Resources Information Center

    Gilbert, George L., Ed.

    1980-01-01

    Two demonstrations are described: (1) a variant of preparing purple benzene by phase transfer catalysis with quaternary ammonium salts and potassium permanganate in which crown ethers are used; (2) a corridor or "hallway" demonstration in which unknown molecular models are displayed and prizes awarded to students correctly identifying the…

  13. Correcting for Sample Contamination in Genotype Calling of DNA Sequence Data

    PubMed Central

    Flickinger, Matthew; Jun, Goo; Abecasis, Gonçalo R.; Boehnke, Michael; Kang, Hyun Min

    2015-01-01

    DNA sample contamination is a frequent problem in DNA sequencing studies and can result in genotyping errors and reduced power for association testing. We recently described methods to identify within-species DNA sample contamination based on sequencing read data, showed that our methods can reliably detect and estimate contamination levels as low as 1%, and suggested strategies to identify and remove contaminated samples from sequencing studies. Here we propose methods to model contamination during genotype calling as an alternative to removal of contaminated samples from further analyses. We compare our contamination-adjusted calls to calls that ignore contamination and to calls based on uncontaminated data. We demonstrate that, for moderate contamination levels (5%–20%), contamination-adjusted calls eliminate 48%–77% of the genotyping errors. For lower levels of contamination, our contamination correction methods produce genotypes nearly as accurate as those based on uncontaminated data. Our contamination correction methods are useful generally, but are particularly helpful for sample contamination levels from 2% to 20%. PMID:26235984

  14. A path towards uncertainty assignment in an operational cloud-phase algorithm from ARM vertically pointing active sensors

    DOE PAGES

    Riihimaki, Laura D.; Comstock, Jennifer M.; Anderson, Kevin K.; ...

    2016-06-10

    Knowledge of cloud phase (liquid, ice, mixed, etc.) is necessary to describe the radiative impact of clouds and their lifetimes, but is a property that is difficult to simulate correctly in climate models. One step towards improving those simulations is to make observations of cloud phase with sufficient accuracy to help constrain model representations of cloud processes. In this study, we outline a methodology using a basic Bayesian classifier to estimate the probabilities of cloud-phase class from Atmospheric Radiation Measurement (ARM) vertically pointing active remote sensors. The advantage of this method over previous ones is that it provides uncertainty information on the phase classification. We also test the value of including higher moments of the cloud radar Doppler spectrum than are traditionally used operationally. Using training data of known phase from the Mixed-Phase Arctic Cloud Experiment (M-PACE) field campaign, we demonstrate a proof of concept for how the method can be used to train an algorithm that identifies ice, liquid, mixed phase, and snow. Over 95 % of data are identified correctly for pure ice and liquid cases used in this study. Mixed-phase and snow cases are more problematic to identify correctly. When lidar data are not available, including additional information from the Doppler spectrum provides substantial improvement to the algorithm. As a result, this is a first step towards an operational algorithm and can be expanded to include additional categories such as drizzle with additional training data.
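    The probabilistic phase classification can be illustrated with a Gaussian naive Bayes classifier standing in for the basic Bayesian classifier (this is not the operational ARM code). The four feature columns stand in for radar and lidar observables such as reflectivity, mean Doppler velocity, spectrum width and skewness; all data are synthetic placeholders.

      import numpy as np
      from sklearn.naive_bayes import GaussianNB

      rng = np.random.default_rng(2)
      classes = ["ice", "liquid", "mixed", "snow"]
      X_train = rng.normal(size=(400, 4))        # placeholder training observables
      y_train = rng.integers(0, 4, size=400)     # placeholder phase labels from M-PACE-like training data

      clf = GaussianNB().fit(X_train, y_train)

      X_new = rng.normal(size=(3, 4))            # placeholder new profiles
      for row in clf.predict_proba(X_new):       # per-class probabilities carry the uncertainty
          print({c: round(p, 2) for c, p in zip(classes, row)})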

  15. A path towards uncertainty assignment in an operational cloud-phase algorithm from ARM vertically pointing active sensors

    NASA Astrophysics Data System (ADS)

    Riihimaki, Laura D.; Comstock, Jennifer M.; Anderson, Kevin K.; Holmes, Aimee; Luke, Edward

    2016-06-01

    Knowledge of cloud phase (liquid, ice, mixed, etc.) is necessary to describe the radiative impact of clouds and their lifetimes, but is a property that is difficult to simulate correctly in climate models. One step towards improving those simulations is to make observations of cloud phase with sufficient accuracy to help constrain model representations of cloud processes. In this study, we outline a methodology using a basic Bayesian classifier to estimate the probabilities of cloud-phase class from Atmospheric Radiation Measurement (ARM) vertically pointing active remote sensors. The advantage of this method over previous ones is that it provides uncertainty information on the phase classification. We also test the value of including higher moments of the cloud radar Doppler spectrum than are traditionally used operationally. Using training data of known phase from the Mixed-Phase Arctic Cloud Experiment (M-PACE) field campaign, we demonstrate a proof of concept for how the method can be used to train an algorithm that identifies ice, liquid, mixed phase, and snow. Over 95 % of data are identified correctly for pure ice and liquid cases used in this study. Mixed-phase and snow cases are more problematic to identify correctly. When lidar data are not available, including additional information from the Doppler spectrum provides substantial improvement to the algorithm. This is a first step towards an operational algorithm and can be expanded to include additional categories such as drizzle with additional training data.

  16. Evaluation of performance of bacterial culture of feces and serum ELISA across stages of Johne's disease in cattle using a Bayesian latent class model.

    PubMed

    Espejo, L A; Zagmutt, F J; Groenendaal, H; Muñoz-Zanzi, C; Wells, S J

    2015-11-01

    The objective of this study was to evaluate the performance of bacterial culture of feces and serum ELISA to correctly identify cows with Mycobacterium avium ssp. paratuberculosis (MAP) at heavy, light, and non-fecal-shedding levels. A total of 29,785 parallel test results from bacterial culture of feces and serum ELISA were collected from 17 dairy herds in Minnesota, Pennsylvania, and Colorado. Samples were obtained from adult cows from dairy herds enrolled for up to 10 yr in the National Johne's Disease Demonstration Herd Project. A Bayesian latent class model was fitted to estimate the probabilities that bacterial culture of feces (using 72-h sedimentation or 30-min centrifugation methods) and serum ELISA results correctly identified cows as high positive, low positive, or negative given that cows were heavy, light, and non-shedders, respectively. The model assumed that no gold standard test was available and conditional independency existed between diagnostic tests. The estimated conditional probabilities that bacterial culture of feces correctly identified heavy shedders, light shedders, and non-shedders were 70.9, 32.0, and 98.5%, respectively. The same values for the serum ELISA were 60.6, 18.7, and 99.5%, respectively. Differences in diagnostic test performance were observed among states. These results improve the interpretation of results from bacterial culture of feces and serum ELISA for detection of MAP and MAP antibody (respectively), which can support on-farm infection control decisions and can be used to evaluate disease-testing strategies, taking into account the accuracy of these tests. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  17. Detecting signals of drug-drug interactions in a spontaneous reports database.

    PubMed

    Thakrar, Bharat T; Grundschober, Sabine Borel; Doessegger, Lucette

    2007-10-01

    The spontaneous reports database is widely used for detecting signals of ADRs. We have extended the methodology to include the detection of signals of ADRs that are associated with drug-drug interactions (DDI). In particular, we have investigated two different statistical assumptions for detecting signals of DDI. Using the FDA's spontaneous reports database, we investigated two models, a multiplicative and an additive model, to detect signals of DDI. We applied the models to four known DDIs (methotrexate-diclofenac and bone marrow depression, simvastatin-ciclosporin and myopathy, ketoconazole-terfenadine and torsades de pointes, and cisapride-erythromycin and torsades de pointes) and to four drug-event combinations where there is currently no evidence of a DDI (fexofenadine-ketoconazole and torsades de pointes, methotrexate-rofecoxib and bone marrow depression, fluvastatin-ciclosporin and myopathy, and cisapride-azithromycin and torsades de pointes) and estimated the measure of interaction on the two scales. The additive model correctly identified all four known DDIs by giving a statistically significant (P < 0.05) positive measure of interaction. The multiplicative model identified the first two of the known DDIs as having a statistically significant or borderline significant (P < 0.1) positive measure of interaction term, gave a nonsignificant positive trend for the third interaction (P = 0.27), and a negative trend for the last interaction. Both models correctly identified the four known non-interactions by estimating a negative measure of interaction. The spontaneous reports database is a valuable resource for detecting signals of DDIs. In particular, the additive model is more sensitive in detecting such signals. The multiplicative model may further help qualify the strength of the signal detected by the additive model.
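    The two no-interaction baselines can be illustrated with a toy calculation (the reporting rates below are invented, not taken from the FDA database). Here r00, r10 and r01 are event reporting rates with neither drug, drug A alone and drug B alone, and r11 is the observed rate when both drugs are co-reported.

      # Additive scale: no interaction means r11 = r10 + r01 - r00
      # Multiplicative scale: no interaction means r11 = r10 * r01 / r00
      r00, r10, r01, r11 = 0.001, 0.004, 0.003, 0.012

      expected_additive = r10 + r01 - r00
      expected_multiplicative = r10 * r01 / r00

      print("additive interaction term:", r11 - expected_additive)               # > 0 suggests a signal
      print("multiplicative interaction ratio:", r11 / expected_multiplicative)  # > 1 suggests a signal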

  18. Detecting signals of drug–drug interactions in a spontaneous reports database

    PubMed Central

    Thakrar, Bharat T; Grundschober, Sabine Borel; Doessegger, Lucette

    2007-01-01

    Aims The spontaneous reports database is widely used for detecting signals of ADRs. We have extended the methodology to include the detection of signals of ADRs that are associated with drug–drug interactions (DDI). In particular, we have investigated two different statistical assumptions for detecting signals of DDI. Methods Using the FDA's spontaneous reports database, we investigated two models, a multiplicative and an additive model, to detect signals of DDI. We applied the models to four known DDIs (methotrexate-diclofenac and bone marrow depression, simvastatin-ciclosporin and myopathy, ketoconazole-terfenadine and torsades de pointes, and cisapride-erythromycin and torsades de pointes) and to four drug-event combinations where there is currently no evidence of a DDI (fexofenadine-ketoconazole and torsades de pointes, methotrexate-rofecoxib and bone marrow depression, fluvastatin-ciclosporin and myopathy, and cisapride-azithromycin and torsades de pointes) and estimated the measure of interaction on the two scales. Results The additive model correctly identified all four known DDIs by giving a statistically significant (P < 0.05) positive measure of interaction. The multiplicative model identified the first two of the known DDIs as having a statistically significant or borderline significant (P < 0.1) positive measure of interaction term, gave a nonsignificant positive trend for the third interaction (P = 0.27), and a negative trend for the last interaction. Both models correctly identified the four known non-interactions by estimating a negative measure of interaction. Conclusions The spontaneous reports database is a valuable resource for detecting signals of DDIs. In particular, the additive model is more sensitive in detecting such signals. The multiplicative model may further help qualify the strength of the signal detected by the additive model. PMID:17506784

  19. Simultaneous Mean and Covariance Correction Filter for Orbit Estimation.

    PubMed

    Wang, Xiaoxu; Pan, Quan; Ding, Zhengtao; Ma, Zhengya

    2018-05-05

    This paper proposes a novel filtering design, from a viewpoint of identification instead of the conventional nonlinear estimation schemes (NESs), to improve the performance of orbit state estimation for a space target. First, a nonlinear perturbation is viewed or modeled as an unknown input (UI) coupled with the orbit state, to avoid the intractable nonlinear perturbation integral (INPI) required by NESs. Then, a simultaneous mean and covariance correction filter (SMCCF), based on a two-stage expectation maximization (EM) framework, is proposed to simply and analytically fit or identify the first two moments (FTM) of the perturbation (viewed as UI), instead of directly computing the INPI as in NESs. Orbit estimation performance is greatly improved by utilizing the fitted UI-FTM to simultaneously correct the state estimation and its covariance. Third, depending on whether enough information is mined, SMCCF should outperform existing NESs or the standard identification algorithms (which view the UI as a constant independent of the state and only utilize the identified UI-mean to correct the state estimation, regardless of its covariance), since it further incorporates the useful covariance information in addition to the mean of the UI. Finally, our simulations demonstrate the superior performance of SMCCF via an orbit estimation example.

  20. Bayesian Statistical Inference in Ion-Channel Models with Exact Missed Event Correction.

    PubMed

    Epstein, Michael; Calderhead, Ben; Girolami, Mark A; Sivilotti, Lucia G

    2016-07-26

    The stochastic behavior of single ion channels is most often described as an aggregated continuous-time Markov process with discrete states. For ligand-gated channels each state can represent a different conformation of the channel protein or a different number of bound ligands. Single-channel recordings show only whether the channel is open or shut: states of equal conductance are aggregated, so transitions between them have to be inferred indirectly. The requirement to filter noise from the raw signal further complicates the modeling process, as it limits the time resolution of the data. The consequence of the reduced bandwidth is that openings or shuttings that are shorter than the resolution cannot be observed; these are known as missed events. Postulated models fitted using filtered data must therefore explicitly account for missed events to avoid bias in the estimation of rate parameters and therefore assess parameter identifiability accurately. In this article, we present the first, to our knowledge, Bayesian modeling of ion-channels with exact missed events correction. Bayesian analysis represents uncertain knowledge of the true value of model parameters by considering these parameters as random variables. This allows us to gain a full appreciation of parameter identifiability and uncertainty when estimating values for model parameters. However, Bayesian inference is particularly challenging in this context as the correction for missed events increases the computational complexity of the model likelihood. Nonetheless, we successfully implemented a two-step Markov chain Monte Carlo method that we called "BICME", which performs Bayesian inference in models of realistic complexity. The method is demonstrated on synthetic and real single-channel data from muscle nicotinic acetylcholine channels. We show that parameter uncertainty can be characterized more accurately than with maximum-likelihood methods. Our code for performing inference in these ion channel models is publicly available. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  1. Relying on fin erosion to identify hatchery-reared brown trout in a Tennessee river

    USGS Publications Warehouse

    Meerbeek, Jonathan R.; Bettoli, Phillip William

    2012-01-01

    Hatchery-induced fin erosion can be used to identify recently stocked catchable-size brown trout Salmo trutta during annual surveys to qualitatively estimate contributions to a fishery. However, little is known about the longevity of this mark and its effectiveness as a short-term (≤ 1 year) mass-marking technique. We evaluated hatchery-induced pectoral fin erosion as a mass-marking technique for short-term stocking evaluations by stocking microtagged brown trout in a tailwater and repeatedly sampling those fish to observe and measure their pectoral fins. At Dale Hollow National Fish Hatchery, 99.1% (228 of 230) of microtagged brown trout in outdoor concrete raceways had eroded pectoral fins 1 d prior to stocking. Between 34 and 68 microtagged and 26-35 wild brown trout were collected during eight subsequent electrofishing samples. In a blind test based on visual examination of pectoral fins at up to 322 d poststocking, one observer correctly identified 91.7% to 100.0% (mean of 96.9%) of microtagged brown trout prior to checking for microtags. In the laboratory, pectoral fin length and width measurements were recorded to statistically compare the fin measurements of wild and microtagged hatchery brown trout. With only one exception, all pectoral fin measurements on each date averaged significantly larger for wild trout than for microtagged brown trout. Based on the number of pectoral fin measurements falling below 95% prediction intervals, 93.7% (148 of 158) of microtagged trout were correctly identified as hatchery fish based on regression models up to 160 d poststocking. Only 72.2% (70 of 97) of microtagged trout were identified correctly after 160 d based on pectoral fin measurements and the regression models. We concluded that visual examination of pectoral fin erosion was a very effective way to identify stocked brown trout for up to 322 d poststocking.
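    The prediction-interval rule can be sketched as follows (this is not the authors' fitted models): pectoral fin length is regressed on body length for wild fish, and a fish whose fin falls below the lower 95% prediction bound for its body length is flagged as hatchery-reared. All measurements are synthetic placeholders.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      body = rng.uniform(200, 400, size=50)                 # wild fish total length (mm), placeholder
      fin = 0.12 * body + rng.normal(0, 2.0, size=50)       # wild fish pectoral fin length (mm), placeholder

      n = body.size
      b1, b0 = np.polyfit(body, fin, 1)                     # OLS fit on the wild reference sample
      resid = fin - (b0 + b1 * body)
      s = np.sqrt(resid @ resid / (n - 2))                  # residual standard error
      xbar, sxx = body.mean(), ((body - body.mean()) ** 2).sum()

      def lower_pred_bound(x0, alpha=0.05):
          """Lower limit of the (1 - alpha) prediction interval at body length x0."""
          se = s * np.sqrt(1 + 1 / n + (x0 - xbar) ** 2 / sxx)
          return b0 + b1 * x0 - stats.t.ppf(1 - alpha / 2, df=n - 2) * se

      # A fish with a short (eroded) fin for its body length is classified as hatchery-reared
      print("lower 95% bound at 300 mm:", round(lower_pred_bound(300.0), 1))
      print("fin of 30 mm flagged as hatchery:", 30.0 < lower_pred_bound(300.0))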

  2. Testing of next-generation nonlinear calibration based non-uniformity correction techniques using SWIR devices

    NASA Astrophysics Data System (ADS)

    Lovejoy, McKenna R.; Wickert, Mark A.

    2017-05-01

    A known problem with infrared imaging devices is their non-uniformity. This non-uniformity is the result of dark current, amplifier mismatch as well as the individual photo response of the detectors. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration techniques use linear, or piecewise linear models to approximate the non-uniform gain and offset characteristics as well as the nonlinear response. Piecewise linear models perform better than the one- and two-point models, but in many cases require storing an unmanageable number of correction coefficients. Most nonlinear NUC algorithms use a second order polynomial to improve performance and allow for a minimal number of stored coefficients. However, advances in technology now make higher order polynomial NUC algorithms feasible. This study comprehensively tests higher order polynomial NUC algorithms targeted at short wave infrared (SWIR) imagers. Using data collected from actual SWIR cameras, the nonlinear techniques and corresponding performance metrics are compared with current linear methods including the standard one- and two-point algorithms. Machine learning, including principal component analysis, is explored for identifying and replacing bad pixels. The data sets are analyzed and the impact of hardware implementation is discussed. Average floating point results show 30% less non-uniformity, in post-corrected data, when using a third order polynomial correction algorithm rather than a second order algorithm. To maximize overall performance, a trade-off analysis on polynomial order and coefficient precision is performed. Comprehensive testing, across multiple data sets, provides next generation model validation and performance benchmarks for higher order polynomial NUC methods.
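    A per-pixel polynomial correction of the kind evaluated above can be sketched in a few lines (this is not the study's pipeline). Each pixel's raw response at several uniform calibration levels is mapped onto the frame-mean reference response with a third-order polynomial; the calibration stack below is synthetic.

      import numpy as np

      rng = np.random.default_rng(4)
      levels, rows, cols = 8, 32, 32
      source = np.linspace(100, 4000, levels)                    # uniform calibration source levels
      gain = rng.normal(1.0, 0.05, size=(rows, cols))            # placeholder per-pixel gain
      offset = rng.normal(0.0, 20.0, size=(rows, cols))          # placeholder per-pixel offset
      raw = source[:, None, None] * gain + offset                # calibration stack (levels, H, W)
      reference = raw.mean(axis=(1, 2))                          # frame-mean reference response

      flat = raw.reshape(levels, -1)                             # (levels, n_pixels)
      coeffs = np.stack([np.polyfit(flat[:, p], reference, deg=3)
                         for p in range(flat.shape[1])], axis=1)  # 3rd-order fit per pixel

      def correct(frame):
          """Apply the stored per-pixel polynomial to a raw frame."""
          f = frame.reshape(-1)
          out = np.array([np.polyval(coeffs[:, p], f[p]) for p in range(f.size)])
          return out.reshape(frame.shape)

      corrected = correct(raw[3])
      print("residual non-uniformity:", corrected.std() / corrected.mean())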

  3. Chroma intra prediction based on inter-channel correlation for HEVC.

    PubMed

    Zhang, Xingyu; Gisquet, Christophe; François, Edouard; Zou, Feng; Au, Oscar C

    2014-01-01

    In this paper, we investigate a new inter-channel coding mode called LM mode proposed for the next generation video coding standard called high efficiency video coding. This mode exploits inter-channel correlation using reconstructed luma to predict chroma linearly with parameters derived from neighboring reconstructed luma and chroma pixels at both encoder and decoder to avoid overhead signaling. In this paper, we analyze the LM mode and prove that the LM parameters for predicting original chroma and reconstructed chroma are statistically the same. We also analyze the error sensitivity of the LM parameters. We identify some LM mode problematic situations and propose three novel LM-like modes called LMA, LML, and LMO to address the situations. To limit the increase in complexity due to the LM-like modes, we propose some fast algorithms with the help of some new cost functions. We further identify some potentially-problematic conditions in the parameter estimation (including regression dilution problem) and introduce a novel model correction technique to detect and correct those conditions. Simulation results suggest that considerable BD-rate reduction can be achieved by the proposed LM-like modes and model correction technique. In addition, the performance gain of the two techniques appears to be essentially additive when combined.
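    The linear-parameter derivation at the core of the LM mode can be sketched as follows (this is not the HEVC reference implementation): alpha and beta are obtained by least squares from the neighbouring reconstructed luma and chroma samples, then applied to the current block's reconstructed luma. The sample arrays are synthetic placeholders.

      import numpy as np

      rng = np.random.default_rng(5)
      neigh_luma = rng.integers(0, 256, size=64).astype(float)       # reconstructed luma neighbours
      neigh_chroma = 0.5 * neigh_luma + 20 + rng.normal(0, 2, 64)    # reconstructed chroma neighbours

      # Closed-form least-squares parameters, derived at both encoder and decoder
      N = neigh_luma.size
      alpha = ((N * (neigh_luma * neigh_chroma).sum() - neigh_luma.sum() * neigh_chroma.sum())
               / (N * (neigh_luma ** 2).sum() - neigh_luma.sum() ** 2))
      beta = (neigh_chroma.sum() - alpha * neigh_luma.sum()) / N

      block_luma_rec = rng.integers(0, 256, size=(8, 8)).astype(float)  # current block, reconstructed luma
      chroma_pred = alpha * block_luma_rec + beta                       # chroma prediction, no signalling needed
      print("alpha:", round(alpha, 3), "beta:", round(beta, 3))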

  4. Quality Control Analysis of Selected Aspects of Programs Administered by the Bureau of Student Financial Assistance. Error-Prone Model Derived from 1978-1979 Quality Control Study. Data Report. [Task 3.

    ERIC Educational Resources Information Center

    Saavedra, Pedro; Kuchak, JoAnn

    An error-prone model (EPM) to predict financial aid applicants who are likely to misreport on Basic Educational Opportunity Grant (BEOG) applications was developed, based on interviews conducted with a quality control sample of 1,791 students during 1978-1979. The model was designed to identify corrective methods appropriate for different types of…

  5. A sampling bias in identifying children in foster care using Medicaid data.

    PubMed

    Rubin, David M; Pati, Susmita; Luan, Xianqun; Alessandrini, Evaline A

    2005-01-01

    Prior research identified foster care children using Medicaid eligibility codes specific to foster care, but it is unknown whether these codes capture all foster care children. To describe the sampling bias in relying on Medicaid eligibility codes to identify foster care children. Using foster care administrative files linked to Medicaid data, we describe the proportion of children whose Medicaid eligibility was correctly encoded as foster child during a 1-year follow-up period following a new episode of foster care. Sampling bias is described by comparing claims in mental health, emergency department (ED), and other ambulatory settings among correctly and incorrectly classified foster care children. Twenty-eight percent of the 5683 sampled children were incorrectly classified in Medicaid eligibility files. In a multivariate logistic regression model, correct classification was associated with duration of foster care (>9 vs <2 months, odds ratio [OR] 7.67, 95% confidence interval [CI] 7.17-7.97), number of placements (>3 vs 1 placement, OR 4.20, 95% CI 3.14-5.64), and placement in a group home among adjudicated dependent children (OR 1.87, 95% CI 1.33-2.63). Compared with incorrectly classified children, correctly classified foster care children were 3 times more likely to use any services, 2 times more likely to visit the ED, 3 times more likely to make ambulatory visits, and 4 times more likely to use mental health care services (P < .001 for all comparisons). Identifying children in foster care using Medicaid eligibility files is prone to sampling bias that over-represents children in foster care who use more services.

  6. Identification of pesticide varieties by detecting characteristics of Chlorella pyrenoidosa using Visible/Near infrared hyperspectral imaging and Raman microspectroscopy technology.

    PubMed

    Shao, Yongni; Li, Yuan; Jiang, Linjun; Pan, Jian; He, Yong; Dou, Xiaoming

    2016-11-01

    The main goal of this research is to examine the feasibility of applying Visible/Near-infrared hyperspectral imaging (Vis/NIR-HSI) and Raman microspectroscopy technology for non-destructive identification of pesticide varieties (glyphosate and butachlor). Both mentioned technologies were explored to investigate how internal elements or characteristics of Chlorella pyrenoidosa change when pesticides are applied, and in the meantime, to identify varieties of the pesticides during this procedure. Successive projections algorithm (SPA) was introduced to our study to identify the seven most effective wavelengths. With those wavelengths suggested by SPA, a model of the linear discriminant analysis (LDA) was established to classify the pesticide varieties, and the correct classification rate of the SPA-LDA model reached as high as 100%. For the Raman technique, a few partial least squares discriminant analysis models were established with different preprocessing methods from which we also identified one processing approach that achieved the optimal result. The sensitive wavelengths (SWs) which are related to the algae's pigments were chosen, and a model of LDA was established with the correct identification reaching a high level of 90.0%. The results showed that both Vis/NIR-HSI and Raman microspectroscopy techniques are capable of identifying pesticide varieties in an indirect but effective way, and SPA is an effective wavelength extracting method. The SWs corresponding to microalgae pigments, which were influenced by pesticides, could also help to characterize different pesticide varieties and benefit the variety identification. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. A downscaling method for the assessment of local climate change

    NASA Astrophysics Data System (ADS)

    Bruno, E.; Portoghese, I.; Vurro, M.

    2009-04-01

    The use of complementary models is necessary to study the impact of climate change scenarios on the hydrological response at different space-time scales. However, the structure of GCMs is such that their space resolution (hundreds of kilometres) is too coarse and not adequate to describe the variability of extreme events at basin scale (Burlando and Rosso, 2002). To bridge the space-time gap between the climate scenarios and the usual scale of the inputs for hydrological prediction models is a fundamental requisite for the evaluation of climate change impacts on water resources. Since models operate a simplification of a complex reality, their results cannot be expected to fit with climate observations. Identifying local climate scenarios for impact analysis implies the definition of more detailed local scenarios by downscaling GCM or RCM results. Among the output correction methods we consider the statistical approach by Déqué (2007), reported as a 'variable correction method', in which the correction of model outputs is obtained by a function built with the observation dataset and operating a quantile-quantile transformation (Q-Q transform). However, in the case of daily precipitation fields the Q-Q transform is not able to correct the temporal property of the model output concerning the dry-wet lacunarity process. An alternative correction method is proposed based on a stochastic description of the arrival-duration-intensity processes in coherence with the Poissonian Rectangular Pulse scheme (PRP) (Eagleson, 1972). In this proposed approach, the Q-Q transform is applied to the PRP variables derived from the daily rainfall datasets. Consequently the corrected PRP parameters are used for the synthetic generation of statistically homogeneous rainfall time series that mimic the persistency of daily observations for the reference period. Then the PRP parameters are forced through the GCM scenarios to generate local scale rainfall records for the 21st century. The statistical parameters characterizing daily storm occurrence, storm intensity and duration needed to apply the PRP scheme are considered among the STARDEX collection of extreme indices.
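    The quantile-quantile correction step referred to above can be sketched as an empirical quantile mapping (this shows only the Q-Q transform applied to a generic variable, not the full Poissonian Rectangular Pulse scheme). The observed and modelled series are synthetic placeholders.

      import numpy as np

      rng = np.random.default_rng(6)
      obs = rng.gamma(shape=2.0, scale=5.0, size=3000)   # observed values, reference period (placeholder)
      mod = rng.gamma(shape=2.0, scale=7.0, size=3000)   # model values, reference period (placeholder, biased)

      def qq_correct(x, mod_ref, obs_ref):
          """Map model values onto observed values with the same empirical quantile."""
          q = np.interp(x, np.sort(mod_ref), np.linspace(0.0, 1.0, mod_ref.size))  # F_mod(x)
          return np.quantile(obs_ref, q)                                            # F_obs^-1(F_mod(x))

      future_mod = rng.gamma(shape=2.0, scale=8.0, size=1000)   # scenario values to be corrected
      corrected = qq_correct(future_mod, mod, obs)
      print(round(mod.mean(), 2), round(obs.mean(), 2), round(corrected.mean(), 2))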

  8. Hand-writing motion tracking with vision-inertial sensor fusion: calibration and error correction.

    PubMed

    Zhou, Shengli; Fei, Fei; Zhang, Guanglie; Liu, Yunhui; Li, Wen J

    2014-08-25

    The purpose of this study was to improve the accuracy of real-time ego-motion tracking through inertial sensor and vision sensor fusion. Due to the low sampling rates supported by web-based vision sensors and the accumulation of errors in inertial sensors, ego-motion tracking with vision sensors is commonly afflicted by slow updating rates, while motion tracking with inertial sensors suffers from rapid deterioration in accuracy with time. This paper starts with a discussion of developed algorithms for calibrating two relative rotations of the system using only one reference image. Next, stochastic noises associated with the inertial sensor are identified using Allan Variance analysis, and modeled according to their characteristics. Finally, the proposed models are incorporated into an extended Kalman filter for inertial sensor and vision sensor fusion. Compared with results from conventional sensor fusion models, we have shown that ego-motion tracking can be greatly enhanced using the proposed error correction model.
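    The noise-identification step can be illustrated with a basic Allan-variance computation (this is not the paper's implementation; the gyro record below is a synthetic placeholder made of white noise plus a slow random-walk drift).

      import numpy as np

      rng = np.random.default_rng(7)
      fs = 100.0                                            # assumed sample rate (Hz)
      omega = rng.normal(0, 0.05, 200_000) + np.cumsum(rng.normal(0, 1e-5, 200_000))

      def allan_variance(x, fs, m_list):
          """Non-overlapping Allan variance for a list of cluster sizes m."""
          taus, avars = [], []
          for m in m_list:
              k = x.size // m
              means = x[:k * m].reshape(k, m).mean(axis=1)
              avars.append(0.5 * np.mean(np.diff(means) ** 2))
              taus.append(m / fs)
          return np.array(taus), np.array(avars)

      taus, avars = allan_variance(omega, fs, m_list=[1, 10, 100, 1000, 10000])
      # The slope of log(sqrt(avar)) versus log(tau) identifies the dominant noise term
      # (about -1/2 for angle random walk, +1/2 for rate random walk, 0 for bias instability).
      print(np.c_[taus, np.sqrt(avars)])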

  9. Observations and modeling of San Diego beaches during El Niño

    NASA Astrophysics Data System (ADS)

    Doria, André; Guza, R. T.; O'Reilly, William C.; Yates, M. L.

    2016-08-01

    Subaerial sand levels were observed at five southern California beaches for 16 years, including notable El Niños in 1997-98 and 2009-10. An existing, empirical shoreline equilibrium model, driven with wave conditions estimated using a regional buoy network, simulates well the seasonal changes in subaerial beach width (e.g. the cross-shore location of the MSL contour) during non-El Niño years, similar to previous results with a 5-year time series lacking an El Niño winter. The existing model correctly identifies the 1997-98 El Niño winter conditions as more erosive than 2009-10, but overestimates shoreline erosion during both El Niños. The good skill of the existing equilibrium model in typical conditions does not necessarily extrapolate to extreme erosion on these beaches, where a sand layer a few meters thick often overlies more resistant layers. The modest over-prediction of the 2009-10 El Niño is reduced by gradually decreasing the model mobility of highly eroded shorelines (simulating cobbles, kelp wrack, shell hash, or other stabilizing layers). Over-prediction during the more severe 1997-98 El Niño is corrected by stopping model erosion when resilient surfaces (identified with aerial imagery) are reached. The trained model provides a computationally simple (e.g. nonlinear first order differential equation) representation of the observed relationship between incident waves and shoreline change.
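    The equilibrium-shoreline idea can be shown schematically as a first-order model in which the shoreline moves in proportion to the wave-energy disequilibrium, with a hard floor standing in for the resistant layer. The coefficients and wave forcing below are invented placeholders, not the calibrated San Diego values.

      import numpy as np

      rng = np.random.default_rng(8)
      days = 365
      E = 0.5 + 0.4 * np.sin(2 * np.pi * np.arange(days) / 365) + 0.1 * rng.random(days)  # wave energy proxy

      a, b = -0.02, 1.0                  # equilibrium energy E_eq(S) = a*S + b (placeholder)
      C_erode, C_accrete = -1.5, -0.5    # change-rate coefficients (placeholder, both negative)
      S_floor = 5.0                      # shoreline position of the resistant layer (placeholder)

      S = np.empty(days)
      S[0] = 30.0                        # initial MSL shoreline position (m)
      for t in range(days - 1):
          dE = E[t] - (a * S[t] + b)                              # disequilibrium: > 0 erodes, < 0 accretes
          C = C_erode if dE > 0 else C_accrete
          S[t + 1] = max(S[t] + C * np.sqrt(E[t]) * dE, S_floor)  # Euler step with a hard floor
      print("shoreline range:", round(S.min(), 1), "to", round(S.max(), 1), "m")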

  10. Use of statistically and dynamically downscaled atmospheric model output for hydrologic simulations in three mountainous basins in the western United States

    USGS Publications Warehouse

    Hay, L.E.; Clark, M.P.

    2003-01-01

    This paper examines hydrologic model performance in three snowmelt-dominated basins in the western United States using dynamically and statistically downscaled output from the National Centers for Environmental Prediction/National Center for Atmospheric Research Reanalysis (NCEP). Runoff produced using a distributed hydrologic model is compared using daily precipitation and maximum and minimum temperature timeseries derived from the following sources: (1) NCEP output (horizontal grid spacing of approximately 210 km); (2) dynamically downscaled (DDS) NCEP output using a Regional Climate Model (RegCM2, horizontal grid spacing of approximately 52 km); (3) statistically downscaled (SDS) NCEP output; (4) spatially averaged measured data used to calibrate the hydrologic model (Best-Sta) and (5) spatially averaged measured data derived from stations located within the area of the RegCM2 model output used for each basin, but excluding the Best-Sta set (All-Sta). In all three basins the SDS-based simulations of daily runoff were as good as runoff produced using the Best-Sta timeseries. The NCEP, DDS, and All-Sta timeseries were able to capture the gross aspects of the seasonal cycles of precipitation and temperature. However, in all three basins, the NCEP-, DDS-, and All-Sta-based simulations of runoff showed little skill on a daily basis. When the precipitation and temperature biases were corrected in the NCEP, DDS, and All-Sta timeseries, the accuracy of the daily runoff simulations improved dramatically, but, with the exception of the bias-corrected All-Sta data set, these simulations were never as accurate as the SDS-based simulations. This need for a bias correction may be somewhat troubling, but in the case of the large station-timeseries (All-Sta), the bias correction did indeed 'correct' for the change in scale. It is unknown if bias corrections to model output will be valid in a future climate. Future work is warranted to identify the causes for (and removal of) systematic biases in DDS simulations, and improve DDS simulations of daily variability in local climate. Until then, SDS-based simulations of runoff appear to be the safer downscaling choice.

  11. 75 FR 80293 - Airworthiness Directives; Eurocopter France Model AS 350 B, BA, B1, B2, B3, and D, and Model...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-22

    ... Airworthiness Directives; Eurocopter France Model AS 350 B, BA, B1, B2, B3, and D, and Model AS355 E, F, F1, F2... identified in the Applicability section, Table 1, of the AD. As published, two part numbers shown in Table 1... corrected to read as follows: Table 1 Component Part No. (P/N) Serial No. (S/N) Main rotor servo-control...

  12. Precision calculations for h → WW/ZZ → 4 fermions in the Two-Higgs-Doublet Model with Prophecy4f

    NASA Astrophysics Data System (ADS)

    Altenkamp, Lukas; Dittmaier, Stefan; Rzehak, Heidi

    2018-03-01

    We have calculated the next-to-leading-order electroweak and QCD corrections to the decay processes h → WW/ZZ → 4 fermions of the light CP-even Higgs boson h of various types of Two-Higgs-Doublet Models (Types I and II, "lepton-specific" and "flipped" models). The input parameters are defined in four different renormalization schemes, where parameters that are not directly accessible by experiments are defined in the \\overline{MS} scheme. Numerical results are presented for the corrections to partial decay widths for various benchmark scenarios previously motivated in the literature, where we investigate the dependence on the \\overline{MS} renormalization scale and on the choice of the renormalization scheme in detail. We find that it is crucial to be precise with these issues in parameter analyses, since parameter conversions between different schemes can involve sizeable or large corrections, especially in scenarios that are close to experimental exclusion limits or theoretical bounds. It even turns out that some renormalization schemes are not applicable in specific regions of parameter space. Our investigation of differential distributions shows that corrections beyond the Standard Model are mostly constant offsets induced by the mixing between the light and heavy CP-even Higgs bosons, so that differential analyses of h→4 f decay observables do not help to identify Two-Higgs-Doublet Models. Moreover, the decay widths do not significantly depend on the specific type of those models. The calculations are implemented in the public Monte Carlo generator Prophecy4f and ready for application.

  13. Universal relations for range corrections to Efimov features

    DOE PAGES

    Ji, Chen; Braaten, Eric; Phillips, Daniel R.; ...

    2015-09-09

    In a three-body system of identical bosons interacting through a large S-wave scattering length a, there are several sets of features related to the Efimov effect that are characterized by discrete scale invariance. Effective field theory was recently used to derive universal relations between these Efimov features that include the first-order correction due to a nonzero effective range r_s. We reveal a simple pattern in these range corrections that had not been previously identified. The pattern is explained by the renormalization group for the effective field theory, which implies that the Efimov three-body parameter runs logarithmically with the momentum scale at a rate proportional to r_s/a. The running Efimov parameter also explains the empirical observation that range corrections can be largely taken into account by shifting the Efimov parameter by an adjustable parameter divided by a. Furthermore, the accuracy of universal relations that include first-order range corrections is verified by comparing them with various theoretical calculations using models with nonzero range.

  14. caCORRECT2: Improving the accuracy and reliability of microarray data in the presence of artifacts

    PubMed Central

    2011-01-01

    Background In previous work, we reported the development of caCORRECT, a novel microarray quality control system built to identify and correct spatial artifacts commonly found on Affymetrix arrays. We have made recent improvements to caCORRECT, including the development of a model-based data-replacement strategy and integration with typical microarray workflows via caCORRECT's web portal and caBIG grid services. In this report, we demonstrate that caCORRECT improves the reproducibility and reliability of experimental results across several common Affymetrix microarray platforms. caCORRECT represents an advance over state-of-the-art quality control methods such as Harshlighting, and acts to improve gene expression calculation techniques such as PLIER, RMA and MAS5.0, because it incorporates spatial information into outlier detection as well as outlier information into probe normalization. The ability of caCORRECT to recover accurate gene expressions from low quality probe intensity data is assessed using a combination of real and synthetic artifacts with PCR follow-up confirmation and the affycomp spike-in data. The caCORRECT tool can be accessed at the website: http://cacorrect.bme.gatech.edu. Results We demonstrate that (1) caCORRECT's artifact-aware normalization avoids the undesirable global data warping that happens when any damaged chips are processed without caCORRECT; (2) When used upstream of RMA, PLIER, or MAS5.0, the data imputation of caCORRECT generally improves the accuracy of microarray gene expression in the presence of artifacts more than using Harshlighting or not using any quality control; (3) Biomarkers selected from artifactual microarray data which have undergone the quality control procedures of caCORRECT are more likely to be reliable, as shown by both spike-in and PCR validation experiments. Finally, we present a case study of the use of caCORRECT to reliably identify biomarkers for renal cell carcinoma, yielding two diagnostic biomarkers with potential clinical utility, PRKAB1 and NNMT. Conclusions caCORRECT is shown to improve the accuracy of gene expression, and the reproducibility of experimental results in clinical application. This study suggests that caCORRECT will be useful to clean up possible artifacts in new as well as archived microarray data. PMID:21957981

  15. Student Understanding of the Boltzmann Factor

    ERIC Educational Resources Information Center

    Smith, Trevor I.; Mountcastle, Donald B.; Thompson, John R.

    2015-01-01

    We present results of our investigation into student understanding of the physical significance and utility of the Boltzmann factor in several simple models. We identify various justifications, both correct and incorrect, that students use when answering written questions that require application of the Boltzmann factor. Results from written data…

  16. Case-Deletion Diagnostics for Maximum Likelihood Multipoint Quantitative Trait Locus Linkage Analysis

    PubMed Central

    Mendoza, Maria C.B.; Burns, Trudy L.; Jones, Michael P.

    2009-01-01

    Objectives Case-deletion diagnostic methods are tools that allow identification of influential observations that may affect parameter estimates and model fitting conclusions. The goal of this paper was to develop two case-deletion diagnostics, the exact case deletion (ECD) and the empirical influence function (EIF), for detecting outliers that can affect results of sib-pair maximum likelihood quantitative trait locus (QTL) linkage analysis. Methods Subroutines to compute the ECD and EIF were incorporated into the maximum likelihood QTL variance estimation components of the linkage analysis program MAPMAKER/SIBS. Performance of the diagnostics was compared in simulation studies that evaluated the proportion of outliers correctly identified (sensitivity), and the proportion of non-outliers correctly identified (specificity). Results Simulations involving nuclear family data sets with one outlier showed EIF sensitivities approximated ECD sensitivities well for outlier-affected parameters. Sensitivities were high, indicating the outlier was identified a high proportion of the time. Simulations also showed the enormous computational time advantage of the EIF. Diagnostics applied to body mass index in nuclear families detected observations influential on the lod score and model parameter estimates. Conclusions The EIF is a practical diagnostic tool that has the advantages of high sensitivity and quick computation. PMID:19172086
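    The exact case-deletion idea can be illustrated in a far simpler setting than the sib-pair QTL model above: refit an ordinary least-squares regression with each observation removed and measure the shift in the parameter estimates. The data below are synthetic, with one planted outlier.

      import numpy as np

      rng = np.random.default_rng(9)
      n = 50
      x = rng.uniform(0, 10, n)
      y = 2.0 + 0.8 * x + rng.normal(0, 0.5, n)
      y[10] += 8.0                                          # planted outlier

      X = np.column_stack([np.ones(n), x])
      beta_full, *_ = np.linalg.lstsq(X, y, rcond=None)

      influence = np.empty(n)
      for i in range(n):                                    # exact case deletion: refit without case i
          keep = np.arange(n) != i
          beta_i, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
          influence[i] = np.linalg.norm(beta_full - beta_i)

      print("most influential case:", int(np.argmax(influence)))   # expected: index 10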

  17. Target mass effects in parton quasi-distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radyushkin, A. V.

    We study the impact of the non-zero (and apparently large) value of the nucleon mass M on the shape of parton quasi-distributions Q(y, p_3), in particular on its change with the change of the nucleon momentum p_3. We observe that the usual target-mass corrections induced by the M-dependence of the twist-2 operators are rather small. Moreover, we show that within the framework based on parametrizations by transverse momentum dependent distribution functions (TMDs) these corrections are canceled by higher-twist contributions. Lastly, we identify a novel source of kinematic target-mass dependence of TMDs and build models corrected for such dependence. We find that resulting changes may be safely neglected for p_3 ≳ 2M.

  18. Target mass effects in parton quasi-distributions

    DOE PAGES

    Radyushkin, A. V.

    2017-05-11

    We study the impact of the non-zero (and apparently large) value of the nucleon mass M on the shape of parton quasi-distributions Q(y, p_3), in particular on its change with the change of the nucleon momentum p_3. We observe that the usual target-mass corrections induced by the M-dependence of the twist-2 operators are rather small. Moreover, we show that within the framework based on parametrizations by transverse momentum dependent distribution functions (TMDs) these corrections are canceled by higher-twist contributions. Lastly, we identify a novel source of kinematic target-mass dependence of TMDs and build models corrected for such dependence. We find that resulting changes may be safely neglected for p_3 ≳ 2M.

  19. Isaac Newton and the astronomical refraction.

    PubMed

    Lehn, Waldemar H

    2008-12-01

    In a short interval toward the end of 1694, Isaac Newton developed two mathematical models for the theory of the astronomical refraction and calculated two refraction tables, but did not publish his theory. Much effort has been expended, starting with Biot in 1836, in the attempt to identify the methods and equations that Newton used. In contrast to previous work, a closed form solution is identified for the refraction integral that reproduces the table for his first model (in which density decays linearly with elevation). The parameters of his second model, which includes the exponential variation of pressure in an isothermal atmosphere, have also been identified by reproducing his results. The implication is clear that in each case Newton had derived exactly the correct equations for the astronomical refraction; furthermore, he was the first to do so.

  20. Utility of the serum C-reactive protein for detection of occult bacterial infection in children.

    PubMed

    Isaacman, Daniel J; Burke, Bonnie L

    2002-09-01

    To assess the utility of serum C-reactive protein (CRP) as a screen for occult bacterial infection in children. Febrile children ages 3 to 36 months who visited an urban children's hospital emergency department and received a complete blood cell count and blood culture as part of their evaluation were prospectively enrolled from February 2, 2000, through May 30, 2001. Informed consent was obtained for the withdrawal of an additional 1-mL aliquot of blood for use in CRP evaluation. Logistic regression and receiver operator characteristic (ROC) curves were modeled for each predictor to identify optimal test values, and were compared using likelihood ratio tests. Two hundred fifty-six patients were included in the analysis, with a median age of 15.3 months (range, 3.1-35.2 months) and median temperature at triage 40.0 degrees C (range, 39.0 degrees C-41.3 degrees C). Twenty-nine (11.3%) cases of occult bacterial infection (OBI) were identified, including 17 cases of pneumonia, 9 cases of urinary tract infection, and 3 cases of bacteremia. The median white blood cell count in this data set was 12.9 x 10(3)/µL [corrected] (range, 3.6-39.1 x10(3)/µL) [corrected], the median absolute neutrophil count (ANC) was 7.12 x 10(3)/L [corrected] (range, 0.56-28.16 x10(3)/L) [corrected], and the median CRP level was 1.7 mg/dL (range, 0.2-43.3 mg/dL). The optimal cut-off point for CRP in this data set (4.4 mg/dL) achieved a sensitivity of 63% and a specificity of 81% for detection of OBI in this population. Comparing models using cut-off values from individual laboratory predictors (ANC, white blood cell count, and CRP) that maximized sensitivity and specificity revealed that a model using an ANC of 10.6 x10(3)/L [corrected] (sensitivity, 69%; specificity, 79%) was the best predictive model. Adding CRP to the model insignificantly increased sensitivity to 79%, while significantly decreasing specificity to 50%. Active monitoring of emergency department blood cultures drawn during the study period from children between 3 and 36 months of age showed an overall bacteremia rate of 1.1% during this period. An ANC cut-off point of 10.6 x10(3)/L [corrected] offers the best predictive model for detection of occult bacterial infection using a single test. The addition of CRP to ANC adds little diagnostic utility. Furthermore, the lowered incidence of occult bacteremia in our population supports a decrease in the use of diagnostic screening in this population.
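    Reading an optimal single-test cut-off from an ROC curve (here via the Youden index) can be sketched as follows; the marker values and outcome labels are synthetic placeholders, not the study data.

      import numpy as np
      from sklearn.metrics import roc_curve

      rng = np.random.default_rng(10)
      n_neg, n_pos = 220, 30
      marker = np.concatenate([rng.gamma(2.0, 1.0, n_neg),    # e.g. CRP-like values without OBI
                               rng.gamma(4.0, 1.5, n_pos)])   # e.g. CRP-like values with OBI
      outcome = np.concatenate([np.zeros(n_neg), np.ones(n_pos)])

      fpr, tpr, thresholds = roc_curve(outcome, marker)
      best = np.argmax(tpr - fpr)                             # Youden index J = sensitivity + specificity - 1
      print("cut-off:", round(thresholds[best], 2),
            "sensitivity:", round(tpr[best], 2),
            "specificity:", round(1 - fpr[best], 2))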

  1. Mobile image based color correction using deblurring

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Xu, Chang; Boushey, Carol; Zhu, Fengqing; Delp, Edward J.

    2015-03-01

    Dietary intake, the process of determining what someone eats during the course of a day, provides valuable insights for mounting intervention programs for prevention of many chronic diseases such as obesity and cancer. The goal of the Technology Assisted Dietary Assessment (TADA) System, developed at Purdue University, is to automatically identify and quantify foods and beverages consumed by utilizing food images acquired with a mobile device. Color correction serves as a critical step to ensure accurate food identification and volume estimation. We make use of a specifically designed color checkerboard (i.e. a fiducial marker) to calibrate the imaging system so that the variations of food appearance under different lighting conditions can be determined. In this paper, we propose an image quality enhancement technique by combining image de-blurring and color correction. The contribution consists of introducing an automatic camera shake removal method using a saliency map and improving the polynomial color correction model using the LMS color space.

  2. Identification of ground-state spin ordering in antiferromagnetic transition metal oxides using the Ising model and a genetic algorithm

    PubMed Central

    Lee, Kyuhyun; Youn, Yong; Han, Seungwu

    2017-01-01

    We identify ground-state collinear spin ordering in various antiferromagnetic transition metal oxides by constructing the Ising model from first-principles results and applying a genetic algorithm to find its minimum energy state. The present method can correctly reproduce the ground state of well-known antiferromagnetic oxides such as NiO, Fe2O3, Cr2O3 and MnO2. Furthermore, we identify the ground-state spin ordering in more complicated materials such as Mn3O4 and CoCr2O4. PMID:28458746
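    The search described above can be sketched with a random symmetric coupling matrix standing in for first-principles exchange parameters (this is not the authors' code): the Ising energy of a collinear spin configuration is minimized with a basic genetic algorithm.

      import numpy as np

      rng = np.random.default_rng(11)
      n_sites = 16
      J = rng.normal(0, 1, size=(n_sites, n_sites))
      J = np.triu(J, 1) + np.triu(J, 1).T                    # symmetric couplings, zero diagonal

      def energy(s):
          """Ising energy E = -1/2 * sum_ij J_ij s_i s_j for collinear spins s_i = +/-1."""
          return -0.5 * s @ J @ s

      def genetic_search(pop_size=60, n_gen=200, p_mut=0.05):
          pop = rng.choice([-1, 1], size=(pop_size, n_sites))
          for _ in range(n_gen):
              fitness = np.array([energy(s) for s in pop])          # lower energy = fitter
              parents = pop[np.argsort(fitness)[:pop_size // 2]]    # truncation selection
              children = []
              while len(children) < pop_size - parents.shape[0]:
                  i, j = rng.integers(0, parents.shape[0], 2)
                  mask = rng.random(n_sites) < 0.5                  # uniform crossover
                  child = np.where(mask, parents[i], parents[j])
                  child[rng.random(n_sites) < p_mut] *= -1          # spin-flip mutation
                  children.append(child)
              pop = np.vstack([parents, np.array(children)])
          best = min(pop, key=energy)
          return best, energy(best)

      spins, e_min = genetic_search()
      print("ground-state energy estimate:", round(float(e_min), 3))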

  3. Parameter as a Switch Between Dynamical States of a Network in Population Decoding.

    PubMed

    Yu, Jiali; Mao, Hua; Yi, Zhang

    2017-04-01

    Population coding is a method to represent stimuli using the collective activities of a number of neurons. Nevertheless, it is difficult to extract information from these population codes with the noise inherent in neuronal responses. Moreover, it is a challenge to identify the right parameter of the decoding model, which plays a key role for convergence. To address the problem, a population decoding model is proposed for parameter selection. Our method successfully identified the key conditions for a nonzero continuous attractor. Both the theoretical analysis and the application studies demonstrate the correctness and effectiveness of this strategy.

  4. NASA Standard for Models and Simulations: Philosophy and Requirements Overview

    NASA Technical Reports Server (NTRS)

    Blattnig, Steve R.; Luckring, James M.; Morrison, Joseph H.; Sylvester, Andre J.; Tripathi, Ram K.; Zang, Thomas A.

    2013-01-01

    Following the Columbia Accident Investigation Board report, the NASA Administrator chartered an executive team (known as the Diaz Team) to identify those CAIB report elements with NASA-wide applicability and to develop corrective measures to address each element. One such measure was the development of a standard for the development, documentation, and operation of models and simulations. This report describes the philosophy and requirements overview of the resulting NASA Standard for Models and Simulations.

  5. NASA Standard for Models and Simulations: Philosophy and Requirements Overview

    NASA Technical Reports Server (NTRS)

    Blattnig, Steve R.; Luckring, James M.; Morrison, Joseph H.; Sylvester, Andre J.; Tripathi, Ram K.; Zang, Thomas A.

    2009-01-01

    Following the Columbia Accident Investigation Board report, the NASA Administrator chartered an executive team (known as the Diaz Team) to identify those CAIB report elements with NASA-wide applicability and to develop corrective measures to address each element. One such measure was the development of a standard for the development, documentation, and operation of models and simulations. This report describes the philosophy and requirements overview of the resulting NASA Standard for Models and Simulations.

  6. Investigating the potential influence of established multiple-choice test-taking cues on item response in a pharmacotherapy board certification examination preparatory manual: a pilot study.

    PubMed

    Gettig, Jacob P

    2006-04-01

    To determine the prevalence of established multiple-choice test-taking correct and incorrect answer cues in the American College of Clinical Pharmacy's Updates in Therapeutics: The Pharmacotherapy Preparatory Course, 2005 Edition, as an equal or lesser surrogate indication of the prevalence of such cues in the Pharmacotherapy board certification examination. All self-assessment and patient case question-and-answer sets were assessed individually to determine if they were subject to selected correct and incorrect answer cues commonly seen in multiple-choice question writing. If the question was considered evaluable, correct answer cues (longest answer, mid-range number, one of two similar choices, and one of two opposite choices) were tallied. In addition, incorrect answer cues (inclusionary language and grammatical mismatch) were also tallied. Each cue was counted if it did what was expected or did the opposite of what was expected. Multiple cues could be identified in each question. A total of 237 (47.7%) of 497 questions in the manual were deemed evaluable. A total of 325 correct answer cues and 35 incorrect answer cues were identified in the 237 evaluable questions. Most evaluable questions contained one to two correct and/or incorrect answer cue(s). Longest answer was the most frequently identified correct answer cue; however, it was the least likely to identify the correct answer. Inclusionary language was the most frequently identified incorrect answer cue. Incorrect answer cues were considerably more likely to identify incorrect answer choices than correct answer cues were able to identify correct answer choices. The use of established multiple-choice test-taking cues is unlikely to be of significant help when taking the Pharmacotherapy board certification examination, primarily because of the lack of questions subject to such cues and the inability of correct answer cues to accurately identify correct answers. Incorrect answer cues, especially the use of inclusionary language, almost always will accurately identify an incorrect answer choice. Assuming that questions in the preparatory course manual were equal or lesser surrogates of those in the board certification examination, it is unlikely that intuition alone can replace adequate preparation and studying as the sole determinant of examination success.

  7. Sex determination from the femur in Portuguese populations with classical and machine-learning classifiers.

    PubMed

    Curate, F; Umbelino, C; Perinha, A; Nogueira, C; Silva, A M; Cunha, E

    2017-11-01

    The assessment of sex is of paramount importance in the establishment of the biological profile of a skeletal individual. Femoral relevance for sex estimation is indisputable, particularly when other exceedingly dimorphic skeletal regions are missing. As such, this study intended to generate population-specific osteometric models for the estimation of sex with the femur and to compare the accuracy of the models obtained through classical and machine-learning classifiers. A set of 15 standard femoral measurements was acquired in a training sample (100 females; 100 males) from the Coimbra Identified Skeletal Collection (University of Coimbra, Portugal) and models for sex classification were produced with logistic regression (LR), linear discriminant analysis (LDA), support vector machines (SVM), and reduced error pruning trees (REPTree). Under cross-validation, univariable sectioning points generated with REPTree correctly estimated sex in 60.0-87.5% of cases (systematic error ranging from 0.0 to 37.0%), while multivariable models correctly classified sex in 84.0-92.5% of cases (bias from 0.0 to 7.0%). All models were assessed in a holdout sample (24 females; 34 males) from the 21st Century Identified Skeletal Collection (University of Coimbra, Portugal), with an allocation accuracy ranging from 56.9 to 86.2% (bias from 4.4 to 67.0%) in the univariable models, and from 84.5 to 89.7% (bias from 3.7 to 23.3%) in the multivariable models. This study makes available a detailed description of sexual dimorphism in femoral linear dimensions in two Portuguese identified skeletal samples, emphasizing the relevance of the femur for the estimation of sex in skeletal remains in diverse conditions of completeness and preservation. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  8. Flood Identification from Satellite Images Using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Chang, L.; Kao, I.; Shih, K.

    2011-12-01

    Typhoons and storms hit Taiwan several times every year and cause serious flood disasters. The rivers are short and steep, their flows are relatively fast, and floods last only a few hours, usually less than one day. Flood identification can provide flood disaster and extent information to disaster assistance and recovery centers. Because of the weather conditions, aircraft and traditional multispectral satellites are not suitable; hence, the most appropriate way to investigate flooding extent is to use Synthetic Aperture Radar (SAR) satellites. In this study, a back-propagation neural network (BPNN) model and a multivariate linear regression (MLR) model are built to identify the flooding extent from SAR satellite images. The input variables of the BPNN model are the Radar Cross Section (RCS) value of the pixel and the mean, standard deviation, minimum and maximum of RCS values among its adjacent 3×3 pixels. The MLR model uses two images from the non-flooding and flooding periods, and its inputs are the difference between the RCS values of the two images and the variances among its adjacent 3×3 pixels. The results show that the BPNN model performs much better than the MLR model, with correct percentages of more than 80% and 73% for training and testing data, respectively. Many misidentified areas are very fragmented and unrelated. To improve the correct percentage, morphological image analysis is used to modify the outputs of these identification models. Through morphological operations, most of the small, fragmented and misidentified areas can be correctly assigned to flooding or non-flooding areas. The final results show that the flood identification from satellite images is improved considerably, with correct percentages increasing to more than 90%.
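    A pixel-wise version of the approach can be sketched as follows (this is not the authors' model): 3×3 neighbourhood statistics of the RCS image feed a small neural network, and a morphological opening removes fragmented misclassifications. The image and labels are synthetic placeholders.

      import numpy as np
      from scipy import ndimage
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(12)
      rcs = rng.normal(-10, 3, size=(64, 64))
      rcs[20:40, 20:50] -= 8.0                               # a smooth "flooded" patch with low backscatter
      truth = np.zeros((64, 64), dtype=int)
      truth[20:40, 20:50] = 1

      def features(img):
          """Per-pixel features: RCS value plus 3x3 mean, standard deviation, minimum and maximum."""
          mean = ndimage.uniform_filter(img, size=3)
          sq_mean = ndimage.uniform_filter(img ** 2, size=3)
          std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
          mn = ndimage.minimum_filter(img, size=3)
          mx = ndimage.maximum_filter(img, size=3)
          return np.stack([img, mean, std, mn, mx], axis=-1).reshape(-1, 5)

      X, y = features(rcs), truth.ravel()
      clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500, random_state=0).fit(X, y)
      pred = clf.predict(X).reshape(64, 64)

      cleaned = ndimage.binary_opening(pred.astype(bool), structure=np.ones((3, 3)))  # drop small fragments
      print("raw accuracy:", (pred == truth).mean(), "after opening:", (cleaned == truth.astype(bool)).mean())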

  9. Modeling Student Software Testing Processes: Attitudes, Behaviors, Interventions, and Their Effects

    ERIC Educational Resources Information Center

    Buffardi, Kevin John

    2014-01-01

    Effective software testing identifies potential bugs and helps correct them, producing more reliable and maintainable software. As software development processes have evolved, incremental testing techniques have grown in popularity, particularly with introduction of test-driven development (TDD). However, many programmers struggle to adopt TDD's…

  10. A rank-based approach for correcting systematic biases in spatial disaggregation of coarse-scale climate simulations

    NASA Astrophysics Data System (ADS)

    Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish

    2017-07-01

    Use of General Circulation Model (GCM) precipitation and evapotranspiration sequences for hydrologic modelling can result in unrealistic simulations due to the coarse scales at which GCMs operate and the systematic biases they contain. The Bias Correction Spatial Disaggregation (BCSD) method is a popular statistical downscaling and bias correction method developed to address this issue. The advantage of BCSD is its ability to reduce biases in the distribution of precipitation totals at the GCM scale and then introduce more realistic variability at finer scales than simpler spatial interpolation schemes. Although BCSD corrects biases at the GCM scale before disaggregation, biases are re-introduced at finer spatial scales by the assumptions made in the spatial disaggregation process. Our study focuses on this limitation of BCSD and proposes a rank-based approach that aims to reduce the spatial disaggregation bias, especially for low and high precipitation extremes. BCSD requires the specification of a multiplicative bias correction anomaly field that represents the ratio of the fine scale precipitation to the disaggregated precipitation. It is shown that there is significant temporal variation in the anomalies, which is masked when a mean anomaly field is used. This can be improved by modelling the anomalies in rank-space. Results from the application of the rank-BCSD procedure improve the match between the distributions of observed and downscaled precipitation at the fine scale compared to the original BCSD approach. Further improvements in the distribution are identified when a scaling correction to preserve mass in the disaggregation process is implemented. An assessment of the approach using a single GCM over Australia shows clear advantages, especially in the simulation of particularly low and high downscaled precipitation amounts.
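    The core idea of a rank-dependent multiplicative anomaly, as opposed to a single time-mean anomaly field, can be sketched as follows; this is a simplified Python illustration, not the rank-BCSD code, and the function and variable names are placeholders.

```python
# Estimate the multiplicative anomaly (fine-scale / disaggregated precipitation)
# as a function of the rank of the disaggregated value, then apply it by rank.
import numpy as np

def rank_based_anomaly(fine_obs, coarse_disagg, n_bins=20):
    """fine_obs, coarse_disagg: 1-D time series for one fine-scale grid cell."""
    ratio = fine_obs / np.maximum(coarse_disagg, 1e-6)
    ranks = np.argsort(np.argsort(coarse_disagg)) / (len(coarse_disagg) - 1)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(ranks, edges) - 1, 0, n_bins - 1)
    anomaly = np.array([ratio[idx == b].mean() if np.any(idx == b) else 1.0
                        for b in range(n_bins)])
    return anomaly, edges

def apply_anomaly(coarse_disagg, anomaly, edges):
    ranks = np.argsort(np.argsort(coarse_disagg)) / (len(coarse_disagg) - 1)
    idx = np.clip(np.digitize(ranks, edges) - 1, 0, len(anomaly) - 1)
    return coarse_disagg * anomaly[idx]
```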

  11. Ocean Observations with EOS/MODIS: Algorithm Development and Post Launch Studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.

    1997-01-01

    Significant accomplishments made during the present reporting period are as follows: (1) We developed a new method for identifying the presence of absorbing aerosols and, simultaneously, performing atmospheric correction. The algorithm consists of optimizing the match between the top-of-atmosphere radiance spectrum and the result of models of both the ocean and aerosol optical properties; (2) We developed an algorithm for providing an accurate computation of the diffuse transmittance of the atmosphere given an aerosol model. A module for inclusion into the MODIS atmospheric-correction algorithm was completed; (3) We acquired reflectance data for oceanic whitecaps during a cruise on the RV Ka'imimoana in the Tropical Pacific (Manzanillo, Mexico to Honolulu, Hawaii). The reflectance spectrum of whitecaps was found to be similar to that for breaking waves in the surf zone measured by Frouin, Schwindling and Deschamps, however, the drop in augmented reflectance from 670 to 860 nm was not as great, and the magnitude of the augmented reflectance was significantly less than expected; and (4) We developed a method for the approximate correction for the effects of the MODIS polarization sensitivity. The correction, however, requires adequate characterization of the polarization sensitivity of MODIS prior to launch.

  12. Carotid Flow Time Test Performance for the Detection of Dehydration in Children With Diarrhea.

    PubMed

    Mackenzie, David C; Nasrin, Sabiha; Atika, Bita; Modi, Payal; Alam, Nur H; Levine, Adam C

    2018-06-01

    Unstructured clinical assessments of dehydration in children are inaccurate. Point-of-care ultrasound is a noninvasive diagnostic tool that can help evaluate the volume status; the corrected carotid artery flow time has been shown to predict volume depletion in adults. We sought to determine the ability of the corrected carotid artery flow time to identify dehydration in a population of children presenting with acute diarrhea in Dhaka, Bangladesh. Children presenting with acute diarrhea were recruited and rehydrated according to hospital protocols. The corrected carotid artery flow time was measured at the time of presentation. The percentage of weight change with rehydration was used to categorize each child's dehydration as severe (>9%), some (3%-9%), or none (<3%). A receiver operating characteristic curve was constructed to test the performance of the corrected carotid artery flow time for detecting severe dehydration. Linear regression was used to model the relationship between the corrected carotid artery flow time and percentage of dehydration. A total of 350 children (0-60 months) were enrolled. The mean corrected carotid artery flow time was 326 milliseconds (interquartile range, 295-351 milliseconds). The area under the receiver operating characteristic curve for the detection of severe dehydration was 0.51 (95% confidence interval, 0.42, 0.61). Linear regression modeling showed a weak association between the flow time and dehydration. The corrected carotid artery flow time was a poor predictor of severe dehydration in this population of children with diarrhea. © 2017 by the American Institute of Ultrasound in Medicine.
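    The reported analysis steps (a ROC curve for severe dehydration and a linear fit of flow time against percentage dehydration) follow a standard pattern; a minimal sketch with synthetic stand-in data is shown below and is not the study's code.

```python
import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
pct_dehydration = rng.uniform(0, 12, 350)                        # synthetic stand-in
flow_time = 330 - 0.5 * pct_dehydration + rng.normal(0, 30, 350)

severe = (pct_dehydration > 9).astype(int)                       # >9% = severe
# Lower flow times are assumed to indicate volume depletion, hence the sign flip.
auc = roc_auc_score(severe, -flow_time)
fit = stats.linregress(flow_time, pct_dehydration)
print(f"AUC = {auc:.2f}, R^2 = {fit.rvalue**2:.2f}")
```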

  13. Comprehensive and critical review of the predictive properties of the various mass models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haustein, P.E.

    1984-01-01

    Since the publication of the 1975 Mass Predictions, approximately 300 new atomic masses have been reported. These data come from a variety of experimental studies using diverse techniques, and they span a mass range from the lightest isotopes to the very heaviest. It is instructive to compare these data with the 1975 predictions and several others (Moeller and Nix; Monahan; Serduke; Uno and Yamada) which appeared later. Extensive numerical and graphical analyses have been performed to examine the quality of the mass predictions from the various models and to identify features in these models that require correction. In general, there is only a rough correlation between the ability of a particular model to reproduce the measured mass surface which had been used to refine its adjustable parameters and that model's ability to predict correctly the new masses. For some models, distinct systematic features appear when the new mass data are plotted as functions of relevant physical variables. Global intercomparisons of all the models are made first, followed by several examples of the types of analysis performed with individual mass models.

  14. Extinction of an instrumental response: a cognitive behavioral assay in Fmr1 knockout mice.

    PubMed

    Sidorov, M S; Krueger, D D; Taylor, M; Gisin, E; Osterweil, E K; Bear, M F

    2014-06-01

    Fragile X (FX) is the most common genetic cause of intellectual disability and autism. Previous studies have shown that partial inhibition of metabotropic glutamate receptor signaling is sufficient to correct behavioral phenotypes in a mouse model of FX, including audiogenic seizures, open-field hyperactivity and social behavior. These phenotypes model well the epilepsy (15%), hyperactivity (20%) and autism (30%) that are comorbid with FX in human patients. Identifying reliable and robust mouse phenotypes to model cognitive impairments is critical considering the 90% comorbidity of FX and intellectual disability. Recent work characterized a five-choice visuospatial discrimination assay testing cognitive flexibility, in which FX model mice show impairments associated with decreases in synaptic proteins in prefrontal cortex (PFC). In this study, we sought to determine whether instrumental extinction, another process requiring PFC, is altered in FX model mice, and whether downregulation of metabotropic glutamate receptor signaling pathways is sufficient to correct both visuospatial discrimination and extinction phenotypes. We report that instrumental extinction is consistently exaggerated in FX model mice. However, neither the extinction phenotype nor the visuospatial discrimination phenotype is corrected by approaches targeting metabotropic glutamate receptor signaling. This work describes a novel behavioral extinction assay to model impaired cognition in mouse models of neurodevelopmental disorders, provides evidence that extinction is exaggerated in the FX mouse model and suggests possible limitations of metabotropic glutamate receptor-based pharmacotherapy. © 2014 John Wiley & Sons Ltd and International Behavioural and Neural Genetics Society.

  15. A Point Kinetics Model for Estimating Neutron Multiplication of Bare Uranium Metal in Tagged Neutron Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.

    An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.

  16. A Point Kinetics Model for Estimating Neutron Multiplication of Bare Uranium Metal in Tagged Neutron Measurements

    DOE PAGES

    Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.

    2017-06-13

    An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.

  17. An automated curation procedure for addressing chemical errors and inconsistencies in public datasets used in QSAR modelling.

    PubMed

    Mansouri, K; Grulke, C M; Richard, A M; Judson, R S; Williams, A J

    2016-11-01

    The increasing availability of large collections of chemical structures and associated experimental data provides an opportunity to build robust QSAR models for applications in different fields. One common concern is the quality of both the chemical structure information and associated experimental data. Here we describe the development of an automated KNIME workflow to curate and correct errors in the structure and identity of chemicals using the publicly available PHYSPROP physicochemical properties and environmental fate datasets. The workflow first assembles structure-identity pairs using up to four provided chemical identifiers, including chemical name, CASRNs, SMILES, and MolBlock. Problems detected included errors and mismatches in chemical structure formats, identifiers and various structure validation issues, including hypervalency and stereochemistry descriptions. Subsequently, a machine learning procedure was applied to evaluate the impact of this curation process. The performance of QSAR models built on only the highest-quality subset of the original dataset was compared with the larger curated and corrected dataset. The latter showed statistically improved predictive performance. The final workflow was used to curate the full list of PHYSPROP datasets, and is being made publicly available for further usage and integration by the scientific community.

  18. Sensitivity analysis and calibration of a dynamic physically based slope stability model

    NASA Astrophysics Data System (ADS)

    Zieher, Thomas; Rutzinger, Martin; Schneider-Muntau, Barbara; Perzl, Frank; Leidinger, David; Formayer, Herbert; Geitner, Clemens

    2017-06-01

    Physically based modelling of slope stability on a catchment scale is still a challenging task. When applying a physically based model on such a scale (1 : 10 000 to 1 : 50 000), parameters with a high impact on the model result should be calibrated to account for (i) the spatial variability of parameter values, (ii) shortcomings of the selected model, (iii) uncertainties of laboratory tests and field measurements or (iv) parameters that cannot be derived experimentally or measured in the field (e.g. calibration constants). While systematic parameter calibration is a common task in hydrological modelling, this is rarely done using physically based slope stability models. In the present study a dynamic, physically based, coupled hydrological-geomechanical slope stability model is calibrated based on a limited number of laboratory tests and a detailed multitemporal shallow landslide inventory covering two landslide-triggering rainfall events in the Laternser valley, Vorarlberg (Austria). Sensitive parameters are identified based on a local one-at-a-time sensitivity analysis. These parameters (hydraulic conductivity, specific storage, angle of internal friction for effective stress, cohesion for effective stress) are systematically sampled and calibrated for a landslide-triggering rainfall event in August 2005. The identified model ensemble, including 25 behavioural model runs with the highest portion of correctly predicted landslides and non-landslides, is then validated with another landslide-triggering rainfall event in May 1999. The identified model ensemble correctly predicts the location and the supposed triggering timing of 73.0 % of the observed landslides triggered in August 2005 and 91.5 % of the observed landslides triggered in May 1999. Results of the model ensemble driven with raised precipitation input reveal a slight increase in areas potentially affected by slope failure. At the same time, the peak run-off increases more markedly, suggesting that precipitation intensities during the investigated landslide-triggering rainfall events were already close to or above the soil's infiltration capacity.
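    The local one-at-a-time sensitivity analysis mentioned above follows a generic recipe: perturb each parameter individually around a reference set and record the relative change in a scalar model output. A schematic Python sketch is given below; the model function is a toy surrogate, not the coupled hydrological-geomechanical slope stability model.

```python
import numpy as np

def model_output(params):
    """Stand-in for the slope stability model, returning a scalar performance
    measure (e.g. the fraction of correctly predicted landslide cells)."""
    k, s, phi, c = params
    return np.tanh(0.3 * k + 0.1 * s + 0.4 * phi + 0.2 * c)   # toy surrogate

names = ["K_sat", "S_s", "phi'", "c'"]        # conductivity, specific storage,
reference = np.array([1.0, 1.0, 1.0, 1.0])    # friction angle, cohesion (scaled)
base = model_output(reference)

for i, name in enumerate(names):
    for factor in (0.9, 1.1):                 # +/-10 % one-at-a-time perturbation
        p = reference.copy()
        p[i] *= factor
        rel_change = (model_output(p) - base) / base
        print(f"{name} x{factor}: relative output change = {rel_change:+.3f}")
```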

  19. Evaluation of MODFLOW-LGR in connection with a synthetic regional-scale model

    USGS Publications Warehouse

    Vilhelmsen, T.N.; Christensen, S.; Mehl, S.W.

    2012-01-01

    This work studies costs and benefits of utilizing local-grid refinement (LGR) as implemented in MODFLOW-LGR to simulate groundwater flow in a buried tunnel valley interacting with a regional aquifer. Two alternative LGR methods were used: the shared-node (SN) method and the ghost-node (GN) method. To conserve flows the SN method requires correction of sources and sinks in cells at the refined/coarse-grid interface. We found that the optimal correction method is case dependent and difficult to identify in practice. However, the results showed little difference and suggest that identifying the optimal method was of minor importance in our case. The GN method does not require corrections at the models' interface, and it uses a simpler head interpolation scheme than the SN method. The simpler scheme is faster but less accurate so that more iterations may be necessary. However, the GN method solved our flow problem more efficiently than the SN method. The MODFLOW-LGR results were compared with the results obtained using a globally coarse (GC) grid. The LGR simulations required one to two orders of magnitude longer run times than the GC model. However, the improvements of the numerical resolution around the buried valley substantially increased the accuracy of simulated heads and flows compared with the GC simulation. Accuracy further increased locally around the valley flanks when improving the geological resolution using the refined grid. Finally, comparing MODFLOW-LGR simulation with a globally refined (GR) grid showed that the refinement proportion of the model should not exceed 10% to 15% in order to secure method efficiency. © 2011, The Author(s). Ground Water © 2011, National Ground Water Association.

  20. STRIDE: Species Tree Root Inference from Gene Duplication Events.

    PubMed

    Emms, David M; Kelly, Steven

    2017-12-01

    The correct interpretation of any phylogenetic tree is dependent on that tree being correctly rooted. We present STRIDE, a fast, effective, and outgroup-free method for identification of gene duplication events and species tree root inference in large-scale molecular phylogenetic analyses. STRIDE identifies sets of well-supported in-group gene duplication events from a set of unrooted gene trees, and analyses these events to infer a probability distribution over an unrooted species tree for the location of its root. We show that STRIDE correctly identifies the root of the species tree in multiple large-scale molecular phylogenetic data sets spanning a wide range of timescales and taxonomic groups. We demonstrate that the novel probability model implemented in STRIDE can accurately represent the ambiguity in species tree root assignment for data sets where information is limited. Furthermore, application of STRIDE to outgroup-free inference of the origin of the eukaryotic tree resulted in a root probability distribution that provides additional support for leading hypotheses for the origin of the eukaryotes. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  1. Correcting Systemic Deficiencies in Our Scientific Infrastructure

    PubMed Central

    Doss, Mohan

    2014-01-01

    Scientific method is inherently self-correcting. When different hypotheses are proposed, their study would result in the rejection of the invalid ones. If the study of a competing hypothesis is prevented because of the faith in an unverified one, scientific progress is stalled. This has happened in the study of low dose radiation. Though radiation hormesis was hypothesized to reduce cancers in 1980, it could not be studied in humans because of the faith in the unverified linear no-threshold model hypothesis, likely resulting in over 15 million preventable cancer deaths worldwide during the past two decades, since evidence has accumulated supporting the validity of the phenomenon of radiation hormesis. Since our society has been guided by scientific advisory committees that ostensibly follow the scientific method, the long duration of such large casualties is indicative of systemic deficiencies in the infrastructure that has evolved in our society for the application of science. Some of these deficiencies have been identified in a few elements of the scientific infrastructure, and remedial steps suggested. Identifying and correcting such deficiencies may prevent similar tolls in the future. PMID:24910580

  2. Rapid Automated Aircraft Simulation Model Updating from Flight Data

    NASA Technical Reports Server (NTRS)

    Brian, Geoff; Morelli, Eugene A.

    2011-01-01

    Techniques to identify aircraft aerodynamic characteristics from flight measurements and compute corrections to an existing simulation model of a research aircraft were investigated. The purpose of the research was to develop a process enabling rapid automated updating of aircraft simulation models using flight data and apply this capability to all flight regimes, including flight envelope extremes. The process presented has the potential to improve the efficiency of envelope expansion flight testing, revision of control system properties, and the development of high-fidelity simulators for pilot training.

  3. Model-free estimation of the psychometric function

    PubMed Central

    Żychaluk, Kamila; Foster, David H.

    2009-01-01

    A subject's response to the strength of a stimulus is described by the psychometric function, from which summary measures, such as a threshold or slope, may be derived. Traditionally, this function is estimated by fitting a parametric model to the experimental data, usually the proportion of successful trials at each stimulus level. Common models include the Gaussian and Weibull cumulative distribution functions. This approach works well if the model is correct, but it can mislead if not. In practice, the correct model is rarely known. Here, a nonparametric approach based on local linear fitting is advocated. No assumption is made about the true model underlying the data, except that the function is smooth. The critical role of the bandwidth is identified, and its optimum value estimated by a cross-validation procedure. As a demonstration, seven vision and hearing data sets were fitted by the local linear method and by several parametric models. The local linear method frequently performed better and never worse than the parametric ones. Supplemental materials for this article can be downloaded from app.psychonomic-journals.org/content/supplemental. PMID:19633355
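    The local linear approach with a cross-validated bandwidth can be sketched in a few lines; this is a schematic Python illustration of the general technique (Gaussian kernel, leave-one-out bandwidth selection), not the published implementation or its supplemental code.

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear estimate of E[y | x = x0] with a Gaussian kernel of bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    W = np.diag(w)
    beta = np.linalg.lstsq(X.T @ W @ X, X.T @ W @ y, rcond=None)[0]
    return beta[0]

def loo_cv_bandwidth(x, y, candidates):
    """Choose the bandwidth minimising leave-one-out squared prediction error."""
    errs = [np.mean([(y[i] - local_linear(x[i], np.delete(x, i), np.delete(y, i), h)) ** 2
                     for i in range(len(x))]) for h in candidates]
    return candidates[int(np.argmin(errs))]

# x: stimulus levels, y: proportion of successful trials (synthetic example)
x = np.linspace(-2, 2, 15)
y = 1 / (1 + np.exp(-2 * x)) + np.random.default_rng(1).normal(0, 0.05, x.size)
h = loo_cv_bandwidth(x, y, np.linspace(0.2, 1.5, 14))
fitted = np.array([local_linear(xi, x, y, h) for xi in x])
```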

  4. Identifying a key physical factor sensitive to the performance of Madden-Julian oscillation simulation in climate models

    NASA Astrophysics Data System (ADS)

    Kim, Go-Un; Seo, Kyong-Hwan

    2018-01-01

    A key physical factor in regulating the performance of Madden-Julian oscillation (MJO) simulation is examined by using 26 climate model simulations from the World Meteorological Organization's Working Group for Numerical Experimentation/Global Energy and Water Cycle Experiment Atmospheric System Study (WGNE and MJO-Task Force/GASS) global model comparison project. For this, the intraseasonal moisture budget equation is analyzed and a simple, efficient physical quantity is developed. The result shows that MJO skill is most sensitive to vertically integrated intraseasonal zonal wind convergence (ZC). In particular, a specific threshold value of the strength of the ZC can be used to distinguish between good and poor models. An additional finding is that good models exhibit the correct simultaneous convection and large-scale circulation phase relationship. In poor models, however, the peak circulation response appears 3 days after peak rainfall, suggesting unfavorable coupling between convection and circulation. To improve the simulation of the MJO in climate models, we propose that this delay of the circulation response to convection needs to be corrected in the cumulus parameterization scheme.

  5. Hysteresis modeling of magnetic shape memory alloy actuator based on Krasnosel'skii-Pokrovskii model.

    PubMed

    Zhou, Miaolei; Wang, Shoubin; Gao, Wei

    2013-01-01

    As a new type of intelligent material, magnetic shape memory alloy (MSMA) has good performance in actuator manufacturing applications. Compared with traditional actuators, the MSMA actuator has the advantages of fast response and large deformation; however, the hysteresis nonlinearity of the MSMA actuator restricts further improvement of its control precision. In this paper, an improved Krasnosel'skii-Pokrovskii (KP) model is used to establish the hysteresis model of the MSMA actuator. To identify the weighting parameters of the KP operators, an improved gradient correction algorithm and a variable step-size recursive least squares estimation algorithm are proposed in this paper. In order to demonstrate the validity of the proposed modeling approach, simulation experiments are performed; simulations with the improved gradient correction algorithm and with the variable step-size recursive least squares estimation algorithm are studied, respectively. Simulation results of both identification algorithms demonstrate that the proposed modeling approach can establish an effective and accurate hysteresis model for the MSMA actuator, and it provides a foundation for improving the control precision of the MSMA actuator.
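    For orientation, a generic recursive least-squares (RLS) update for a model that is linear in its weights is sketched below in Python; the paper's KP operator construction and its variable step-size variant are not reproduced, and the regressor matrix is assumed to hold the operator outputs.

```python
# y[k] is approximated by phi[k]^T theta, with phi[k] the KP operator outputs
# (or any other regressors) at sample k.
import numpy as np

def rls_identify(Phi, y, lam=0.99, delta=1e3):
    """Phi: (N, p) regressor matrix, y: (N,) measured outputs."""
    n, p = Phi.shape
    theta = np.zeros(p)
    P = delta * np.eye(p)                      # large initial covariance
    for k in range(n):
        phi = Phi[k]
        gain = P @ phi / (lam + phi @ P @ phi)
        theta = theta + gain * (y[k] - phi @ theta)
        P = (P - np.outer(gain, phi) @ P) / lam
    return theta
```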

  6. Hysteresis Modeling of Magnetic Shape Memory Alloy Actuator Based on Krasnosel'skii-Pokrovskii Model

    PubMed Central

    Wang, Shoubin; Gao, Wei

    2013-01-01

    As a new type of intelligent material, magnetic shape memory alloy (MSMA) has good performance in actuator manufacturing applications. Compared with traditional actuators, the MSMA actuator has the advantages of fast response and large deformation; however, the hysteresis nonlinearity of the MSMA actuator restricts further improvement of its control precision. In this paper, an improved Krasnosel'skii-Pokrovskii (KP) model is used to establish the hysteresis model of the MSMA actuator. To identify the weighting parameters of the KP operators, an improved gradient correction algorithm and a variable step-size recursive least squares estimation algorithm are proposed in this paper. In order to demonstrate the validity of the proposed modeling approach, simulation experiments are performed; simulations with the improved gradient correction algorithm and with the variable step-size recursive least squares estimation algorithm are studied, respectively. Simulation results of both identification algorithms demonstrate that the proposed modeling approach can establish an effective and accurate hysteresis model for the MSMA actuator, and it provides a foundation for improving the control precision of the MSMA actuator. PMID:23737730

  7. Automated Error Detection in Physiotherapy Training.

    PubMed

    Jovanović, Marko; Seiffarth, Johannes; Kutafina, Ekaterina; Jonas, Stephan M

    2018-01-01

    Manual skills teaching, such as physiotherapy education, requires immediate teacher feedback for the students during the learning process, which to date can only be provided by expert trainers. We present a machine-learning system, trained only on correct performances, that classifies and scores performed movements, identifies sources of error in the movement, and gives feedback to the learner. We acquire IMU and sEMG sensor data from a commercial-grade wearable device and construct an HMM-based model for gesture classification, scoring, and feedback. We evaluate the model on publicly available and self-generated data of exemplary movement pattern executions. The model achieves an overall accuracy of 90.71% on the public dataset and 98.9% on our dataset. An AUC of 0.99 for the ROC of the scoring method was achieved in discriminating between correct and untrained incorrect executions. The proposed system demonstrated its suitability for scoring and feedback in manual skills training.

  8. Mixture of autoregressive modeling orders and its implication on single trial EEG classification

    PubMed Central

    Atyabi, Adham; Shic, Frederick; Naples, Adam

    2016-01-01

    Autoregressive (AR) models are among the most commonly utilized feature types in electroencephalogram (EEG) studies because they offer better resolution and smoother spectra and are applicable to short segments of data. Identifying the correct AR modeling order is an open challenge. Lower model orders poorly represent the signal, while higher orders increase noise. Conventional methods for estimating the modeling order include the Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC) and Final Prediction Error (FPE). This article assesses the hypothesis that an appropriate mixture of multiple AR orders is likely to better represent the true signal than any single order. Better spectral representation of underlying EEG patterns can increase the utility of AR features in Brain Computer Interface (BCI) systems by making such systems respond more promptly and correctly to the operator's thoughts. Two mechanisms, evolutionary-based fusion and ensemble-based mixture, are utilized for identifying such an appropriate mixture of modeling orders. The classification performance of the resultant AR mixtures is assessed against several conventional approaches used by the community, including 1) a well-known set of commonly used orders suggested by the literature, 2) conventional order estimation approaches (e.g., AIC, BIC and FPE), and 3) a blind mixture of AR features originating from a range of well-known orders. Five datasets from BCI competition III that contain 2, 3 and 4 motor imagery tasks are considered for the assessment. The results indicate the superiority of the ensemble-based modeling order mixture and evolutionary-based order fusion methods within all datasets. PMID:28740331
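    As a point of reference for the conventional single-order selection that the article argues against, the sketch below fits AR(p) models by ordinary least squares for a range of orders and picks the order with the lowest AIC; it is a generic illustration, not the mixture or fusion methods proposed in the abstract.

```python
import numpy as np

def fit_ar_ls(x, order):
    """Least-squares AR(p) fit; returns coefficients and residual variance."""
    X = np.column_stack([x[order - i - 1: len(x) - i - 1] for i in range(order)])
    y = x[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return coef, np.var(resid)

def aic_order(x, max_order=20):
    aics = []
    for p in range(1, max_order + 1):
        _, sigma2 = fit_ar_ls(x, p)
        n_eff = len(x) - p
        aics.append(n_eff * np.log(sigma2) + 2 * p)
    return int(np.argmin(aics)) + 1

eeg_segment = np.random.default_rng(2).standard_normal(512)   # placeholder signal
print("AIC-selected AR order:", aic_order(eeg_segment))
```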

  9. In-vivo viscous properties of the heel pad by stress-relaxation experiment based on a spherical indentation.

    PubMed

    Suzuki, Ryo; Ito, Kohta; Lee, Taeyong; Ogihara, Naomichi

    2017-12-01

    Identifying the viscous properties of the plantar soft tissue is crucial not only for understanding the dynamic interaction of the foot with the ground during locomotion, but also for development of improved footwear products and therapeutic footwear interventions. In the present study, the viscous and hyperelastic material properties of the plantar soft tissue were experimentally identified using a spherical indentation test and an analytical contact model of the spherical indentation test. Force-relaxation curves of the heel pads were obtained from the indentation experiment. The curves were fit to the contact model incorporating a five-element Maxwell model to identify the viscous material parameters. The finite element method with the experimentally identified viscoelastic parameters could successfully reproduce the measured force-relaxation curves, indicating the material parameters were correctly estimated using the proposed method. Although there are some methodological limitations, the proposed framework to identify the viscous material properties may facilitate the development of subject-specific finite element modeling of the foot and other biological materials. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
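    The force-relaxation fitting step lends itself to a short illustration: a Prony-series (generalized Maxwell) curve fitted with scipy. This is a schematic sketch with synthetic data, not the paper's contact model for spherical indentation.

```python
import numpy as np
from scipy.optimize import curve_fit

def relaxation(t, f_inf, f1, tau1, f2, tau2):
    """Two-term Prony series: equilibrium force plus two decaying Maxwell arms."""
    return f_inf + f1 * np.exp(-t / tau1) + f2 * np.exp(-t / tau2)

t = np.linspace(0.0, 60.0, 300)                                # time, s
rng = np.random.default_rng(3)
force = relaxation(t, 2.0, 1.0, 1.5, 0.5, 12.0) + rng.normal(0, 0.02, t.size)

p0 = [1.5, 1.0, 1.0, 0.5, 10.0]                                # initial guess
params, _ = curve_fit(relaxation, t, force, p0=p0)
print("fitted relaxation parameters:", np.round(params, 3))
```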

  10. Statistical tests and identifiability conditions for pooling and analyzing multisite datasets

    PubMed Central

    Zhou, Hao Henry; Singh, Vikas; Johnson, Sterling C.; Wahba, Grace

    2018-01-01

    When sample sizes are small, the ability to identify weak (but scientifically interesting) associations between a set of predictors and a response may be enhanced by pooling existing datasets. However, variations in acquisition methods and the distribution of participants or observations between datasets, especially due to the distributional shifts in some predictors, may obfuscate real effects when datasets are combined. We present a rigorous statistical treatment of this problem and identify conditions where we can correct the distributional shift. We also provide an algorithm for the situation where the correction is identifiable. We analyze various properties of the framework for testing model fit, constructing confidence intervals, and evaluating consistency characteristics. Our technical development is motivated by Alzheimer’s disease (AD) studies, and we present empirical results showing that our framework enables harmonizing of protein biomarkers, even when the assays across sites differ. Our contribution may, in part, mitigate a bottleneck that researchers face in clinical research when pooling smaller sized datasets and may offer benefits when the subjects of interest are difficult to recruit or when resources prohibit large single-site studies. PMID:29386387

  11. Predicting biological condition in southern California streams

    USGS Publications Warehouse

    Brown, Larry R.; May, Jason T.; Rehn, Andrew C.; Ode, Peter R.; Waite, Ian R.; Kennen, Jonathan G.

    2012-01-01

    As understanding of the complex relations among environmental stressors and biological responses improves, a logical next step is predictive modeling of biological condition at unsampled sites. We developed a boosted regression tree (BRT) model of biological condition, as measured by a benthic macroinvertebrate index of biotic integrity (B-IBI), for streams in urbanized Southern Coastal California. We also developed a multiple linear regression (MLR) model as a benchmark for comparison with the BRT model. The BRT model explained 66% of the variance in B-IBI, identifying watershed population density and combined percentage agricultural and urban land cover in the riparian buffer as the most important predictors of B-IBI, but with watershed mean precipitation and watershed density of manmade channels also important. The MLR model explained 48% of the variance in B-IBI and included watershed population density and combined percentage agricultural and urban land cover in the riparian buffer. For a verification data set, the BRT model correctly classified 75% of impaired sites (B-IBI < 40) and 78% of unimpaired sites (B-IBI ≥ 40). For the same verification data set, the MLR model correctly classified 69% of impaired sites and 87% of unimpaired sites. The BRT model should not be used to predict B-IBI for specific sites; however, the model can be useful for general applications such as identifying and prioritizing regions for monitoring, remediation or preservation, stratifying new bioassessments according to anticipated biological condition, or assessing the potential for change in stream biological condition based on anticipated changes in population density and development in stream buffers.
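    The modelling idea (a boosted regression tree predicting B-IBI, then thresholding at 40 to classify impairment) can be sketched with scikit-learn; the file and predictor names below are hypothetical placeholders and the hyperparameters are illustrative.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

data = pd.read_csv("socal_streams.csv")                  # hypothetical data file
predictors = ["pop_density", "ag_urban_buffer_pct",
              "mean_precip", "manmade_channel_density"]  # hypothetical columns
X_tr, X_te, y_tr, y_te = train_test_split(
    data[predictors], data["B_IBI"], test_size=0.3, random_state=0)

brt = GradientBoostingRegressor(n_estimators=500, learning_rate=0.01,
                                max_depth=3).fit(X_tr, y_tr)

# Classify verification sites as impaired (B-IBI < 40) from the predicted index.
pred_impaired = brt.predict(X_te) < 40
obs_impaired = (y_te < 40).to_numpy()
correct = (pred_impaired & obs_impaired).sum()
print("correctly classified impaired sites:", correct, "/", obs_impaired.sum())
```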

  12. Effect of the atmosphere on the classification of LANDSAT data. [Identifying sugar canes in Brazil

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Morimoto, T.; Kumar, R.; Molion, L. C. B.

    1979-01-01

    The author has identified the following significant results. In conjunction with Turner's model for the correction of satellite data for atmospheric interference, the LOWTRAN-3 computer program was used to calculate the atmospheric interference. Use of the program improved the contrast between different natural targets in the MSS LANDSAT data of Brasilia, Brazil. The classification accuracy of sugar canes was improved by about 9% in the multispectral data of Ribeirao Preto, Sao Paulo.

  13. Impact of numerical choices on water conservation in the E3SM Atmosphere Model Version 1 (EAM V1)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Kai; Rasch, Philip J.; Taylor, Mark A.

    The conservation of total water is an important numerical feature for global Earth system models. Even small conservation problems in the water budget can lead to systematic errors in century-long simulations for sea level rise projection. This study quantifies and reduces various sources of water conservation error in the atmosphere component of the Energy Exascale Earth System Model. Several sources of water conservation error have been identified during the development of the version 1 (V1) model. The largest errors result from the numerical coupling between the resolved dynamics and the parameterized sub-grid physics. A hybrid coupling using different methods for fluid dynamics and tracer transport provides a reduction of water conservation error by a factor of 50 at 1° horizontal resolution as well as consistent improvements at other resolutions. The second largest error source is the use of an overly simplified relationship between the surface moisture flux and latent heat flux at the interface between the host model and the turbulence parameterization. This error can be prevented by applying the same (correct) relationship throughout the entire model. Two additional types of conservation error that result from correcting the surface moisture flux and clipping negative water concentrations can be avoided by using mass-conserving fixers. With all four error sources addressed, the water conservation error in the V1 model is negligible and insensitive to the horizontal resolution. The associated changes in the long-term statistics of the main atmospheric features are small. A sensitivity analysis is carried out to show that the magnitudes of the conservation errors decrease strongly with temporal resolution but increase with horizontal resolution. The increased vertical resolution in the new model results in a very thin model layer at the Earth's surface, which amplifies the conservation error associated with the surface moisture flux correction. We note that for some of the identified error sources, the proposed fixers are remedies rather than solutions to the problems at their roots. Future improvements in time integration would be beneficial for this model.
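    The idea of a mass-conserving fixer for negative water concentrations can be illustrated with a small column example: negative values are clipped to zero and the remaining positive mass is rescaled so that the column-integrated total is preserved. This is a conceptual sketch only, not the EAM V1 implementation.

```python
import numpy as np

def conserve_clip(q, dp):
    """q: water mixing ratio per layer; dp: layer pressure thickness (mass weight)."""
    total = np.sum(q * dp)                 # column-integrated water before fixing
    q_fixed = np.clip(q, 0.0, None)        # remove negative concentrations
    positive = np.sum(q_fixed * dp)
    if positive > 0.0:
        q_fixed *= total / positive        # rescale so the column total is unchanged
    return q_fixed

q = np.array([3e-3, 1e-4, -2e-5, 5e-4])
dp = np.array([20.0, 30.0, 25.0, 25.0])
print(np.sum(q * dp), np.sum(conserve_clip(q, dp) * dp))   # totals match
```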

  14. Assessing the Assessment Methods: Climate Change and Hydrologic Impacts

    NASA Astrophysics Data System (ADS)

    Brekke, L. D.; Clark, M. P.; Gutmann, E. D.; Mizukami, N.; Mendoza, P. A.; Rasmussen, R.; Ikeda, K.; Pruitt, T.; Arnold, J. R.; Rajagopalan, B.

    2014-12-01

    The Bureau of Reclamation, the U.S. Army Corps of Engineers, and other water management agencies have an interest in developing reliable, science-based methods for incorporating climate change information into longer-term water resources planning. Such assessments must quantify projections of future climate and hydrology, typically relying on some form of spatial downscaling and bias correction to produce watershed-scale weather information that subsequently drives hydrology and other water resource management analyses (e.g., water demands, water quality, and environmental habitat). Water agencies continue to face challenging method decisions in these endeavors: (1) which downscaling method should be applied and at what resolution; (2) what observational dataset should be used to drive downscaling and hydrologic analysis; (3) what hydrologic model(s) should be used and how should these models be configured and calibrated? There is a critical need to understand the ramification of these method decisions, as they affect the signal and uncertainties produced by climate change assessments and, thus, adaptation planning. This presentation summarizes results from a three-year effort to identify strengths and weaknesses of widely applied methods for downscaling climate projections and assessing hydrologic conditions. Methods were evaluated from two perspectives: historical fidelity, and tendency to modulate a global climate model's climate change signal. On downscaling, four methods were applied at multiple resolutions: statistically using Bias Correction Spatial Disaggregation, Bias Correction Constructed Analogs, and Asynchronous Regression; dynamically using the Weather Research and Forecasting model. Downscaling results were then used to drive hydrologic analyses over the contiguous U.S. using multiple models (VIC, CLM, PRMS), with added focus placed on case study basins within the Colorado Headwaters. The presentation will identify which types of climate changes are expressed robustly across methods versus those that are sensitive to method choice; which method choices seem relatively more important; and where strategic investments in research and development can substantially improve guidance on climate change provided to water managers.

  15. Statistical analysis of geomagnetic field intensity differences between ASM and VFM instruments onboard Swarm constellation

    NASA Astrophysics Data System (ADS)

    De Michelis, Paola; Tozzi, Roberta; Consolini, Giuseppe

    2017-02-01

    From the very first measurements made by the magnetometers onboard the Swarm satellites launched by the European Space Agency (ESA) in late 2013, a discrepancy emerged between scalar and vector measurements. A careful analysis of this phenomenon led to an empirical model of the disturbance, which is highly correlated with the Sun incidence angle, and to a corresponding correction of the vector data. The empirical model adopted by ESA results in a significant decrease in the amplitude of the disturbance affecting VFM measurements, so greatly improving the vector magnetic data quality. This study is focused on the characterization of the difference between the magnetic field intensity measured by the absolute scalar magnetometer (ASM) and that reconstructed using the vector field magnetometer (VFM) installed on the Swarm constellation. Applying the empirical mode decomposition method, we find the intrinsic mode functions (IMFs) associated with ASM-VFM total intensity differences obtained with data both uncorrected and corrected for the disturbance correlated with the Sun incidence angle. Surprisingly, no differences are found in the nature of the IMFs embedded in the analyzed signals, these IMFs being characterized by the same dominant periodicities before and after correction. The effect of the correction manifests as a decrease in the energy associated with some IMFs contributing to the corrected data. Some IMFs identified by analyzing the ASM-VFM intensity discrepancy are characterized by the same dominant periodicities as those obtained by analyzing the temperature fluctuations of the VFM electronic unit. Thus, the disturbance correlated with the Sun incidence angle could still be present in the corrected magnetic data. Furthermore, the ASM-VFM total intensity difference and the VFM electronic unit temperature display maximal shared information with a time delay that depends on local time. Taken together, these findings may help to relate the features of the observed VFM-ASM total intensity difference to the physical characteristics of the real disturbance, thus contributing to improving the empirical model proposed for the correction of the data.

  16. Developing a quality assurance program for online services.

    PubMed Central

    Humphries, A W; Naisawald, G V

    1991-01-01

    A quality assurance (QA) program provides not only a mechanism for establishing training and competency standards, but also a method for continuously monitoring current service practices to correct shortcomings. The typical QA cycle includes these basic steps: select subject for review, establish measurable standards, evaluate existing services using the standards, identify problems, implement solutions, and reevaluate services. The Claude Moore Health Sciences Library (CMHSL) developed a quality assurance program for online services designed to evaluate services against specific criteria identified by research studies as being important to customer satisfaction. These criteria include reliability, responsiveness, approachability, communication, and physical factors. The application of these criteria to the library's existing online services in the quality review process is discussed with specific examples of the problems identified in each service area, as well as the solutions implemented to correct deficiencies. The application of the QA cycle to an online services program serves as a model of possible interventions. The use of QA principles to enhance online service quality can be extended to other library service areas. PMID:1909197

  17. Developing a quality assurance program for online services.

    PubMed

    Humphries, A W; Naisawald, G V

    1991-07-01

    A quality assurance (QA) program provides not only a mechanism for establishing training and competency standards, but also a method for continuously monitoring current service practices to correct shortcomings. The typical QA cycle includes these basic steps: select subject for review, establish measurable standards, evaluate existing services using the standards, identify problems, implement solutions, and reevaluate services. The Claude Moore Health Sciences Library (CMHSL) developed a quality assurance program for online services designed to evaluate services against specific criteria identified by research studies as being important to customer satisfaction. These criteria include reliability, responsiveness, approachability, communication, and physical factors. The application of these criteria to the library's existing online services in the quality review process is discussed with specific examples of the problems identified in each service area, as well as the solutions implemented to correct deficiencies. The application of the QA cycle to an online services program serves as a model of possible interventions. The use of QA principles to enhance online service quality can be extended to other library service areas.

  18. Identifying and Correcting Timing Errors at Seismic Stations in and around Iran

    DOE PAGES

    Syracuse, Ellen Marie; Phillips, William Scott; Maceira, Monica; ...

    2017-09-06

    A fundamental component of seismic research is the use of phase arrival times, which are central to event location, Earth model development, and phase identification, as well as derived products. Hence, the accuracy of arrival times is crucial. However, errors in the timing of seismic waveforms and the arrival times based on them may go unidentified by the end user, particularly when seismic data are shared between different organizations. Here, we present a method used to analyze travel-time residuals for stations in and around Iran to identify time periods that are likely to contain station timing problems. For the 14 stations with the strongest evidence of timing errors lasting one month or longer, timing corrections are proposed to address the problematic time periods. Finally, two additional stations are identified with incorrect locations in the International Registry of Seismograph Stations, and one is found to have erroneously reported arrival times in 2011.

  19. Application of Geographical Information System Arc/Info Grid-Based Surface Hydrologic Modeling to the Eastern Hellas Region, Mars

    NASA Astrophysics Data System (ADS)

    Mest, S. C.; Harbert, W.; Crown, D. A.

    2001-05-01

    Geographical Information System GRID-based raster modeling of surface water runoff in the eastern Hellas region of Mars has been completed. We utilized the 0.0625 by 0.0625 degree topographic map of Mars collected by the Mars Global Surveyor Mars Orbiter Laser Altimeter (MOLA) instrument to model watershed and surface runoff drainage systems. Scientific interpretation of these models with respect to ongoing geological mapping is presented in Mest et al. (2001). After importing a region of approximately 77,000,000 square kilometers into Arc/Info 8.0.2, we reprojected this digital elevation model (DEM) from a Mars sphere into a Mars ellipsoid. Using a simple cylindrical geographic projection and horizontal spatial units of decimal degrees and then an Albers projection with horizontal spatial units of meters, we completed basic hydrological modeling. Analysis of the raw DEM to determine slope, aspect, flow direction, watershed and flow accumulation grids demonstrated the need for correction of single pixel sink anomalies. After analysis of zonal elevation statistics associated with single pixel sinks, which identified 0.8 percent of the DEM points as having undefined surface water flow directions, we filled single pixel sink values of 89 meters or less. This correction is comparable with terrestrial DEMs that contain 0.9 to 4.7 percent of cells that are sinks (Tarboton et al., 1991). The fill-corrected DEM was then used to determine slope, aspect, surface water flow direction and surface water flow accumulation. Within the region of interest, 8,776 watersheds were identified. Using Arc/Info GRID flow direction and flow accumulation tools, regions of potential surface water flow accumulation were identified. These networks were then converted to a Strahler-ordered stream network. Surface modeling produced Strahler orders one through six. As presented in Mest et al. (2001), comparisons of mapped features may prove compatible with drainage networks and watersheds derived using this methodology. Mest, Scott C., Crown, David A., and Harbert, William, 2001, Highland drainage basins and valley networks in the eastern Hellas Region of Mars, Abstract 1419, Lunar and Planetary Science XXXII Meeting, Houston (CDROM). Tarboton, D. G., Bras, R. L., and Rodriguez-Iturbe, 1991, On the Extraction of Channel Networks from Digital Elevation Data, Hydrological Processes, v. 5, 81-100. http://viking.eps.pitt.edu
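    The single-pixel sink correction described above has a simple generic analogue: a cell lower than all eight neighbours by no more than a threshold (89 meters in this study) is raised to its lowest neighbouring elevation before flow directions are derived. The Python sketch below illustrates that step only; it is not the Arc/Info GRID implementation.

```python
import numpy as np
from scipy import ndimage

def fill_single_pixel_sinks(dem, max_fill=89.0):
    """Raise single-pixel sinks no deeper than max_fill to the lowest neighbour."""
    footprint = np.ones((3, 3), dtype=bool)
    footprint[1, 1] = False                              # the 8 neighbours only
    neigh_min = ndimage.minimum_filter(dem, footprint=footprint, mode="nearest")
    depth = neigh_min - dem                              # > 0 where the cell is a sink
    is_sink = (depth > 0) & (depth <= max_fill)
    filled = dem.copy()
    filled[is_sink] = neigh_min[is_sink]
    return filled
```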

  20. Effective connectivity associated with auditory error detection in musicians with absolute pitch

    PubMed Central

    Parkinson, Amy L.; Behroozmand, Roozbeh; Ibrahim, Nadine; Korzyukov, Oleg; Larson, Charles R.; Robin, Donald A.

    2014-01-01

    It is advantageous to study a wide range of vocal abilities in order to fully understand how vocal control measures vary across the full spectrum. Individuals with absolute pitch (AP) are able to assign a verbal label to musical notes and have enhanced abilities in pitch identification without reliance on an external referent. In this study we used dynamic causal modeling (DCM) to model effective connectivity of ERP responses to pitch perturbation in voice auditory feedback in musicians with relative pitch (RP), AP, and non-musician controls. We identified a network comprising left and right hemisphere superior temporal gyrus (STG), primary motor cortex (M1), and premotor cortex (PM). We specified nine models and compared two main factors examining various combinations of STG involvement in the feedback pitch error detection/correction process. Our results suggest that modulation of left to right STG connections is important in the identification of self-voice error and sensory motor integration in AP musicians. We also identify reduced connectivity of left hemisphere PM to STG connections in AP and RP groups during the error detection and correction process relative to non-musicians. We suggest that this suppression may allow for enhanced connectivity relating to pitch identification in the right hemisphere in those with more precise pitch matching abilities. Musicians with enhanced pitch identification abilities likely have an improved auditory error detection and correction system involving connectivity of STG regions. Our findings here also suggest that individuals with AP are more adept at using feedback related to pitch from the right hemisphere. PMID:24634644

  1. Spatiotemporal data visualisation for homecare monitoring of elderly people.

    PubMed

    Juarez, Jose M; Ochotorena, Jose M; Campos, Manuel; Combi, Carlo

    2015-10-01

    Elderly people who live alone can be assisted by home monitoring systems that identify risk scenarios such as falls, fatigue symptoms or burglary. Given that these systems have to manage spatiotemporal data, human intervention is required to validate automatic alarms due to the high number of false positives and the need for context interpretation. The goal of this work was to provide tools that support human action in identifying such potential risk scenarios, based on spatiotemporal data visualisation. We propose the MTA (multiple temporal axes) model, a visual representation of temporal information of the activity of a single person at different locations. The main goal of this model is to visualize the behaviour of a person in their home, facilitating the identification of health-risk scenarios and repetitive patterns. We evaluate the model's insight capacity compared with other models using a standard evaluation protocol. We also test the practical suitability of the MTA graphical model in a commercial home monitoring system. In particular, we implemented 8VISU, a visualization tool based on MTA. MTA proved to be more than 90% accurate in identifying non-risk scenarios, independently of the length of the record visualised. When the spatial complexity was increased (e.g. the number of rooms), the model provided good accuracy for up to 5 rooms. Therefore, user preferences and user performance seem to be balanced. Moreover, it also gave high sensitivity levels (over 90%) for 5-8 rooms. Falls are the most recurrent incident for elderly people. The MTA model outperformed the other models considered in identifying fall scenarios (66% correct) and was the second best for burglary and fatigue scenarios (36% correct). Our experiments also confirm the hypothesis that cyclic models are the most suitable for fatigue scenarios, with the Spiral and MTA models obtaining the most positive identifications. In home monitoring systems, spatiotemporal visualization is a useful tool for identifying risk and preventing home accidents in elderly people living alone. The MTA model helps visualisation in different stages of the temporal data analysis process. In particular, its explicit representation of space and movement is useful for identifying potential scenarios of risk, while the spiral structure can be used for the identification of recurrent patterns. The results of the experiments and the experience of using the visualization tool 8VISU prove the potential of the MTA graphical model to mine temporal data and to support caregivers using home monitoring infrastructures. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Learning versus correct models: influence of model type on the learning of a free-weight squat lift.

    PubMed

    McCullagh, P; Meyer, K N

    1997-03-01

    It has been assumed that demonstrating the correct movement is the best way to impart task-relevant information. However, empirical verification with simple laboratory skills has shown that using a learning model (showing an individual in the process of acquiring the skill to be learned) may accelerate skill acquisition and increase retention more than using a correct model. The purpose of the present study was to compare the effectiveness of viewing correct versus learning models on the acquisition of a sport skill (free-weight squat lift). Forty female participants were assigned to four learning conditions: physical practice receiving feedback, learning model with model feedback, correct model with model feedback, and learning model without model feedback. Results indicated that viewing either a correct or learning model was equally effective in learning correct form in the squat lift.

  3. Inversion Schemes to Retrieve Atmospheric and Oceanic Parameters from SeaWiFS Data

    NASA Technical Reports Server (NTRS)

    Frouin, Robert; Deschamps, Pierre-Yves

    1997-01-01

    Firstly, we have analyzed atmospheric transmittance and sky radiance data collected at the Scripps Institution of Oceanography pier, La Jolla, during the winters of 1993 and 1994. Aerosol optical thickness at 870 nm was generally low in La Jolla, with most values below 0.1 after correction for stratospheric aerosols. For such low optical thickness, variability in aerosol scattering properties cannot be determined, and a mean background model, specified regionally under a stable stratospheric component, may be sufficient for ocean color remote sensing from space. For optical thicknesses above 0.1, two modes of variability, characterized by Angstrom exponents of 1.2 and 0.5 and corresponding to Tropospheric and Maritime models, respectively, were identified in the measurements. The aerosol models selected for ocean color remote sensing allowed one to fit, within measurement inaccuracies, the derived values of the Angstrom exponent and 'pseudo' phase function (the product of single scattering albedo and phase function), key atmospheric correction parameters. Importantly, the 'pseudo' phase function can be derived from measurements of the Angstrom exponent. Shipborne sun photometer measurements at the time of satellite overpass are usually sufficient to verify atmospheric correction for ocean color.
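    The Angstrom exponent referred to above follows from the assumed power-law spectral dependence of aerosol optical thickness, tau(lambda) proportional to lambda**(-alpha); for two wavelengths it reduces to a ratio of logarithms. A one-line implementation is shown for illustration.

```python
import numpy as np

def angstrom_exponent(tau1, tau2, lam1, lam2):
    """alpha = -ln(tau1 / tau2) / ln(lam1 / lam2)."""
    return -np.log(tau1 / tau2) / np.log(lam1 / lam2)

# Example: optical thicknesses of 0.15 at 440 nm and 0.08 at 870 nm
print(angstrom_exponent(0.15, 0.08, 440.0, 870.0))   # about 0.9
```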

  4. 75 FR 68179 - Airworthiness Directives; Austro Engine GmbH Model E4 Diesel Piston Engines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-05

    ...We are adopting a new airworthiness directive (AD) for the products listed above. This AD results from mandatory continuing airworthiness information (MCAI) issued by an aviation authority of another country to identify and correct an unsafe condition on an aviation product. The MCAI describes the unsafe condition as:

  5. The Relationship Between Research by Women and Women's Experiential Roles

    ERIC Educational Resources Information Center

    Mead, Margaret

    1978-01-01

    Anthropological models can be used to study gender differences and identify biologically given differences, experientially given differences, and socially created differences in men and women. Research on gender-specific behavior should always be done by both men and women in cross-cultural contexts to correct for prejudice, bias, and myopia.…

  6. 75 FR 33162 - Airworthiness Directives; Microturbo Saphir 20 Model 095 Auxiliary Power Units (APUs)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-11

    ... information (MCAI) issued by the European Aviation Safety Agency (EASA) to identify and correct an unsafe... States Code specifies the FAA's authority to issue rules on aviation safety. Subtitle I, section 106... the AD docket. List of Subjects in 14 CFR Part 39 Air transportation, Aircraft, Aviation safety...

  7. The Origins of Force--Misconceptions and Classroom Controversy.

    ERIC Educational Resources Information Center

    Steinberg, Melvin S.

    Misconceptions associated with the origins of force and the effectiveness of a bridging strategy for developing correct conceptual models in mechanics are identified for high school physics teachers in this paper. The situation investigated was whether a table exerts an upward force on a book. Student misconceptions related to this phenomenon as…

  8. 75 FR 4477 - Airworthiness Directives; Airbus Model A330-201, -202, -203, -223, -243, -301, -302, -303, -321...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-28

    ... information (MCAI) originated by an aviation authority of another country to identify and correct an unsafe... FURTHER INFORMATION CONTACT: Vladimir Ulyanov, Aerospace Engineer, International Branch, ANM-116...) 227-1138; fax (425) 227-1149. SUPPLEMENTARY INFORMATION: Discussion We issued a notice of proposed...

  9. Systematic review of statistical approaches to quantify, or correct for, measurement error in a continuous exposure in nutritional epidemiology.

    PubMed

    Bennett, Derrick A; Landry, Denise; Little, Julian; Minelli, Cosetta

    2017-09-19

    Several statistical approaches have been proposed to assess and correct for exposure measurement error. We aimed to provide a critical overview of the most common approaches used in nutritional epidemiology. MEDLINE, EMBASE, BIOSIS and CINAHL were searched for reports published in English up to May 2016 in order to ascertain studies that described methods aimed to quantify and/or correct for measurement error for a continuous exposure in nutritional epidemiology using a calibration study. We identified 126 studies, 43 of which described statistical methods and 83 that applied any of these methods to a real dataset. The statistical approaches in the eligible studies were grouped into: a) approaches to quantify the relationship between different dietary assessment instruments and "true intake", which were mostly based on correlation analysis and the method of triads; b) approaches to adjust point and interval estimates of diet-disease associations for measurement error, mostly based on regression calibration analysis and its extensions. Two approaches (multiple imputation and moment reconstruction) were identified that can deal with differential measurement error. For regression calibration, the most common approach to correct for measurement error used in nutritional epidemiology, it is crucial to ensure that its assumptions and requirements are fully met. Analyses that investigate the impact of departures from the classical measurement error model on regression calibration estimates can be helpful to researchers in interpreting their findings. With regard to the possible use of alternative methods when regression calibration is not appropriate, the choice of method should depend on the measurement error model assumed, the availability of suitable calibration study data and the potential for bias due to violation of the classical measurement error model assumptions. On the basis of this review, we provide some practical advice for the use of methods to assess and adjust for measurement error in nutritional epidemiology.
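
    Regression calibration, named above as the most common correction approach, can be illustrated with a minimal sketch: in a calibration substudy, a reference measure is regressed on the error-prone instrument, and the calibrated prediction then replaces the instrument in the outcome model. The data and variable names below (ffq, reference_cal, risk) are hypothetical, and in practice the standard errors would also need adjustment (e.g. bootstrapping), which is omitted here.

    ```python
    # Minimal regression calibration sketch under the classical measurement
    # error model; all data are simulated and variable names are hypothetical.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)

    # --- simulated main study ----------------------------------------------
    n_main, n_cal = 2000, 300
    true_intake = rng.normal(50, 10, n_main)              # unobserved "true" intake
    ffq = true_intake + rng.normal(0, 8, n_main)          # error-prone instrument
    risk = 0.03 * true_intake + rng.normal(0, 1, n_main)  # continuous outcome

    # --- calibration substudy: FFQ plus a (nearly unbiased) reference measure
    true_cal = rng.normal(50, 10, n_cal)
    ffq_cal = true_cal + rng.normal(0, 8, n_cal)
    reference_cal = true_cal + rng.normal(0, 2, n_cal)

    # Step 1: calibration model E[true intake | FFQ]
    cal_model = sm.OLS(reference_cal, sm.add_constant(ffq_cal)).fit()

    # Step 2: replace FFQ by its calibrated prediction in the disease model
    predicted_intake = cal_model.predict(sm.add_constant(ffq))
    naive_fit = sm.OLS(risk, sm.add_constant(ffq)).fit()
    corrected_fit = sm.OLS(risk, sm.add_constant(predicted_intake)).fit()

    print("naive slope:    ", round(naive_fit.params[1], 3))      # attenuated
    print("corrected slope:", round(corrected_fit.params[1], 3))  # closer to 0.03
    ```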

  10. Evaluation Of Statistical Models For Forecast Errors From The HBV-Model

    NASA Astrophysics Data System (ADS)

    Engeland, K.; Kolberg, S.; Renard, B.; Stensland, I.

    2009-04-01

    Three statistical models for the forecast errors for inflow to the Langvatn reservoir in Northern Norway have been constructed and tested according to how well the distribution and median values of the forecast errors fit the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first-order autoregressive model was constructed for the forecast errors. The parameters were conditioned on climatic conditions. In the second model, the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first-order autoregressive model was constructed for the forecast errors. For the last model, positive and negative errors were modeled separately. The errors were first NQT-transformed before a model was constructed in which the mean values were conditioned on climate, forecasted inflow and yesterday's error. To test the three models we applied three criteria: we wanted a) the median values to be close to the observed values; b) the forecast intervals to be narrow; c) the distribution to be correct. The results showed that it is difficult to obtain a correct model for the forecast errors, and that the main challenge is to account for the auto-correlation in the errors. Models 1 and 2 gave similar results, and the main drawback is that the distributions are not correct. The 95% forecast intervals were well identified, but smaller forecast intervals were over-estimated, and larger intervals were under-estimated. Model 3 gave a distribution that fits better, but the median values do not fit well since the auto-correlation is not properly accounted for. If the 95% forecast interval is of interest, Model 2 is recommended. If the whole distribution is of interest, Model 3 is recommended.
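
    The first model type can be sketched in a few lines: Box-Cox transform the inflow series and fit a first-order autoregressive model to the transformed forecast errors. The sketch below uses simulated inflows rather than the Langvatn data and omits the conditioning on climatic conditions described above; function choices (scipy.stats.boxcox, statsmodels AutoReg) are one possible implementation, not the one used in the study.

    ```python
    # Illustrative sketch: Box-Cox transform of inflows followed by an AR(1)
    # model for the transformed forecast errors. Data are simulated.
    import numpy as np
    from scipy import stats
    from statsmodels.tsa.ar_model import AutoReg

    rng = np.random.default_rng(1)

    # Simulated daily inflows (positive) and imperfect forecasts
    observed = rng.gamma(shape=4.0, scale=20.0, size=500)
    forecast = observed * rng.lognormal(mean=0.0, sigma=0.15, size=500)

    # Box-Cox transform both series with a common lambda estimated from observations
    obs_bc, lam = stats.boxcox(observed)
    fc_bc = stats.boxcox(forecast, lmbda=lam)

    # First-order autoregressive model for the transformed forecast errors
    errors = obs_bc - fc_bc
    ar1 = AutoReg(errors, lags=1).fit()
    print("AR(1) coefficient:", round(ar1.params[1], 3))

    # One-step-ahead error prediction, usable to correct tomorrow's forecast
    next_error = ar1.params[0] + ar1.params[1] * errors[-1]
    print("predicted next transformed error:", round(next_error, 3))
    ```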

  11. Corrective Action Decision Document for Corrective Action Unit 340: Pesticide Release sites, Nevada Test Site, Nevada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DOE /NV

    This Corrective Action Decision Document has been prepared for Corrective Action Unit 340, the NTS Pesticide Release Sites, in accordance with the Federal Facility Agreement and Consent Order of 1996 (FFACO, 1996). Corrective Action Unit 340 is located at the Nevada Test Site, Nevada, and is comprised of the following Corrective Action Sites: 23-21-01, Area 23 Quonset Hut 800 Pesticide Release Ditch; 23-18-03, Area 23 Skid Huts Pesticide Storage; and 15-18-02, Area 15 Quonset Hut 15-11 Pesticide Storage. The purpose of this Corrective Action Decision Document is to identify and provide a rationale for the selection of a recommended corrective action alternative for each Corrective Action Site. The scope of this Corrective Action Decision Document consists of the following tasks: Develop corrective action objectives; Identify corrective action alternative screening criteria; Develop corrective action alternatives; Perform detailed and comparative evaluations of the corrective action alternatives in relation to the corrective action objectives and screening criteria; and Recommend and justify a preferred corrective action alternative for each Corrective Action Site.

  12. Analysis of interacting entropy-corrected holographic and new agegraphic dark energies

    NASA Astrophysics Data System (ADS)

    Ranjit, Chayan; Debnath, Ujjal

    In the present work, we assume a flat FRW universe filled with interacting dark matter and dark energy. For the dark energy, we consider the entropy-corrected HDE (ECHDE) model and the entropy-corrected NADE (ECNADE) model. For the entropy-corrected models, we assume both logarithmic and power-law corrections. For the ECHDE model, the length scale L is taken to be either the Hubble horizon or the future event horizon. The ωde-ωde′ analysis for the different horizons is discussed.

  13. Processor register error correction management

    DOEpatents

    Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.

    2016-12-27

    Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.

  14. Predictive modeling and reducing cyclic variability in autoignition engines

    DOEpatents

    Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob

    2016-08-30

    Methods and systems are provided for controlling a vehicle engine to reduce cycle-to-cycle combustion variation. A predictive model is applied to predict cycle-to-cycle combustion behavior of an engine based on observed engine performance variables. Conditions are identified, based on the predicted cycle-to-cycle combustion behavior, that indicate high cycle-to-cycle combustion variation. Corrective measures are then applied to prevent the predicted high cycle-to-cycle combustion variation.

  15. 77 FR 52035 - Public Information Collection Requirements Submitted to the Office of Management and Budget (OMB...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-28

    ... Identifier: CMS-10003] Public Information Collection Requirements Submitted to the Office of Management and Budget (OMB); Correction AGENCY: Centers for Medicare & Medicaid Services (CMS), HHS. ACTION: Correction of notice. SUMMARY: This document corrects a technical error in the notice [Document Identifier: CMS...

  16. 76 FR 33305 - Public Information Collection Requirements Submitted to the Office of Management and Budget (OMB...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-08

    ... Identifier: CMS-10379] Public Information Collection Requirements Submitted to the Office of Management and Budget (OMB); Correction AGENCY: Centers for Medicare & Medicaid Services (CMS), HHS. ACTION: Correction of notice. SUMMARY: This document corrects the information provided for [Document Identifier: CMS...

  17. Hand-Writing Motion Tracking with Vision-Inertial Sensor Fusion: Calibration and Error Correction

    PubMed Central

    Zhou, Shengli; Fei, Fei; Zhang, Guanglie; Liu, Yunhui; Li, Wen J.

    2014-01-01

    The purpose of this study was to improve the accuracy of real-time ego-motion tracking through inertial sensor and vision sensor fusion. Due to low sampling rates supported by web-based vision sensor and accumulation of errors in inertial sensors, ego-motion tracking with vision sensors is commonly afflicted by slow updating rates, while motion tracking with inertial sensor suffers from rapid deterioration in accuracy with time. This paper starts with a discussion of developed algorithms for calibrating two relative rotations of the system using only one reference image. Next, stochastic noises associated with the inertial sensor are identified using Allan Variance analysis, and modeled according to their characteristics. Finally, the proposed models are incorporated into an extended Kalman filter for inertial sensor and vision sensor fusion. Compared with results from conventional sensor fusion models, we have shown that ego-motion tracking can be greatly enhanced using the proposed error correction model. PMID:25157546
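
    The Allan variance step mentioned above, used to characterize the inertial sensor's stochastic noise before building the fusion filter, can be illustrated with a short sketch. The gyro signal below is simulated (white noise plus a slow random-walk bias), and the simple non-overlapping estimator shown is only one of several Allan variance variants.

    ```python
    # Non-overlapping Allan variance sketch for inertial sensor noise
    # characterization; the gyro data are simulated, not from the paper.
    import numpy as np

    def allan_variance(x, m):
        """Allan variance of signal x for cluster size m samples."""
        n_clusters = len(x) // m
        if n_clusters < 2:
            raise ValueError("need at least two clusters")
        means = x[: n_clusters * m].reshape(n_clusters, m).mean(axis=1)
        return 0.5 * np.mean(np.diff(means) ** 2)

    rng = np.random.default_rng(2)
    dt = 0.01                                          # 100 Hz sampling
    white = rng.normal(0.0, 0.05, 200_000)             # angle-random-walk component
    bias = np.cumsum(rng.normal(0.0, 1e-4, 200_000))   # bias-instability proxy
    gyro = white + bias

    for m in (1, 10, 100, 1000, 10000):
        tau = m * dt
        adev = np.sqrt(allan_variance(gyro, m))
        print(f"tau = {tau:8.2f} s   Allan deviation = {adev:.5f}")
    ```

    Plotting the Allan deviation against the averaging time tau on log-log axes is what lets the different noise terms be read off from the slopes, which is the information fed into the error model of the Kalman filter.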

  18. Aligning observed and modelled behaviour based on workflow decomposition

    NASA Astrophysics Data System (ADS)

    Wang, Lu; Du, YuYue; Liu, Wei

    2017-09-01

    When business processes are mostly supported by information systems, the availability of event logs generated from these systems, as well as the demand for appropriate process models, is increasing. Business processes can be discovered, monitored and enhanced by extracting process-related information. However, some events cannot be correctly identified because of the explosion in the volume of event logs. Therefore, a new process mining technique based on a workflow decomposition method is proposed in this paper. Petri nets (PNs) are used to describe business processes, and then conformance checking of event logs and process models is investigated. A decomposition approach is proposed to divide large process models and event logs into several separate parts that can be analysed independently, while an alignment approach based on a state equation method in PN theory enhances the performance of conformance checking. Both approaches are implemented in the process mining framework ProM. The correctness and effectiveness of the proposed methods are illustrated through experiments.

  19. A confidence building exercise in data and identifiability: Modeling cancer chemotherapy as a case study.

    PubMed

    Eisenberg, Marisa C; Jain, Harsh V

    2017-10-27

    Mathematical modeling has a long history in the field of cancer therapeutics, and there is increasing recognition that it can help uncover the mechanisms that underlie tumor response to treatment. However, making quantitative predictions with such models often requires parameter estimation from data, raising questions of parameter identifiability and estimability. Even in the case of structural (theoretical) identifiability, imperfect data and the resulting practical unidentifiability of model parameters can make it difficult to infer the desired information, and in some cases, to yield biologically correct inferences and predictions. Here, we examine parameter identifiability and estimability using a case study of two compartmental, ordinary differential equation models of cancer treatment with drugs that are cell cycle-specific (taxol) as well as non-specific (oxaliplatin). We proceed through model building, structural identifiability analysis, parameter estimation, practical identifiability analysis and its biological implications, as well as alternative data collection protocols and experimental designs that render the model identifiable. We use the differential algebra/input-output relationship approach for structural identifiability, and primarily the profile likelihood approach for practical identifiability. Despite the models being structurally identifiable, we show that without consideration of practical identifiability, incorrect cell cycle distributions can be inferred, that would result in suboptimal therapeutic choices. We illustrate the usefulness of estimating practically identifiable combinations (in addition to the more typically considered structurally identifiable combinations) in generating biologically meaningful insights. We also use simulated data to evaluate how the practical identifiability of the model would change under alternative experimental designs. These results highlight the importance of understanding the underlying mechanisms rather than purely using parsimony or information criteria/goodness-of-fit to decide model selection questions. The overall roadmap for identifiability testing laid out here can be used to help provide mechanistic insight into complex biological phenomena, reduce experimental costs, and optimize model-driven experimentation. Copyright © 2017. Published by Elsevier Ltd.

  20. Improvements in the spatial representation of lakes and reservoirs in the contiguous United States for the National Water Model

    NASA Astrophysics Data System (ADS)

    Khan, S.; Salas, F.; Sampson, K. M.; Read, L. K.; Cosgrove, B.; Li, Z.; Gochis, D. J.

    2017-12-01

    The representation of inland surface water bodies in distributed hydrologic models at the continental scale is a challenge. The National Water Model (NWM) utilizes the National Hydrography Dataset Plus Version 2 (NHDPlusV2) "waterbody" dataset to represent lakes and reservoirs. The "waterbody" layer is a comprehensive dataset that represents surface water bodies using common features like lakes, ponds, reservoirs, estuaries, playas and swamps/marshes. However, a major issue that remains unresolved even in the latest revision of NHDPlus Version 2 is the inconsistency in waterbody digitization and delineation errors. Manually correcting the water body polygons becomes tedious and quickly impossible for continental-scale hydrologic models such as the NWM. In this study, we improved the spatial representation of 6,802 lakes and reservoirs by analyzing 379,110 waterbodies in the contiguous United States (excluding the Laurentian Great Lakes). We performed a step-by-step process that integrates a set of geospatial analyses to identify, track, and correct the extent of lake and reservoir features that are larger than 0.75 km2. The following assumptions were applied while developing the new dataset: a) lakes and reservoirs cannot directly feed into each other; b) each waterbody must have one outlet; and c) a single lake or reservoir feature cannot have multiple parts. The majority of the NHDPlusV2 waterbody features in the original dataset are delineated correctly. However, approximately 3% of the lake and reservoir polygons were found to contain topological errors and were corrected accordingly. It is important to fix these digitizing errors because the waterbody features are closely linked to the river topology. This new waterbody dataset will ensure that model-simulated water is directed into and through the lakes and reservoirs in a manner that supports the NWM code base and assumptions. The improved dataset will facilitate more effective integration of lakes and reservoirs with correct spatial features into the updated NWM.

  1. Potential use of ionic species for identifying source land-uses of stormwater runoff.

    PubMed

    Lee, Dong Hoon; Kim, Jin Hwi; Mendoza, Joseph A; Lee, Chang-Hee; Kang, Joo-Hyon

    2017-02-01

    Identifying critical land-uses or source areas is important to prioritize resources for cost-effective stormwater management. This study investigated the use of information on ionic composition as a fingerprint to identify the source land-use of stormwater runoff. We used 12 ionic species in stormwater runoff monitored for a total of 20 storm events at five sites with different land-use compositions during the 2012-2014 wet seasons. A stepwise forward discriminant function analysis (DFA) with the jack-knifed cross validation approach was used to select ionic species that better discriminate the land-use of its source. Of the 12 ionic species, 9 species (K⁺, Mg²⁺, Na⁺, NH₄⁺, Br⁻, Cl⁻, F⁻, NO₂⁻, and SO₄²⁻) were selected for better performance of the DFA. The DFA successfully differentiated stormwater samples from urban, rural, and construction sites using concentrations of the ionic species (70%, 95%, and 91% of correct classification, respectively). Over 80% of the new data cases were correctly classified by the trained DFA model. When applied to data cases from a mixed land-use catchment and downstream, the DFA model showed the greater impact of urban areas and rural areas respectively in the earlier and later parts of a storm event.
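
    The core classification step can be sketched as linear discriminant analysis with leave-one-out ("jack-knifed") cross-validation on the ionic concentrations. The sketch below uses random placeholder data rather than the monitored storm-event samples, and it omits the stepwise forward variable selection described above.

    ```python
    # Hedged sketch of discriminant analysis for source land-use classification
    # from ionic fingerprints; data are synthetic placeholders.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    rng = np.random.default_rng(3)
    ions = ["K", "Mg", "Na", "NH4", "Br", "Cl", "F", "NO2", "SO4"]

    # 60 synthetic samples x 9 ion concentrations, 3 land-use classes
    X = rng.lognormal(mean=0.0, sigma=1.0, size=(60, len(ions)))
    y = rng.integers(0, 3, size=60)          # 0=urban, 1=rural, 2=construction

    lda = LinearDiscriminantAnalysis()
    scores = cross_val_score(lda, X, y, cv=LeaveOneOut())
    print("jack-knifed classification rate:", scores.mean())

    # Classify a new storm-event sample with the trained model
    lda.fit(X, y)
    new_sample = rng.lognormal(size=(1, len(ions)))
    print("predicted land-use class:", lda.predict(new_sample)[0])
    ```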

  2. Use of Linear Prediction Uncertainty Analysis to Guide Conditioning of Models Simulating Surface-Water/Groundwater Interactions

    NASA Astrophysics Data System (ADS)

    Hughes, J. D.; White, J.; Doherty, J.

    2011-12-01

    Linear prediction uncertainty analysis in a Bayesian framework was applied to guide the conditioning of an integrated surface water/groundwater model that will be used to predict the effects of groundwater withdrawals on surface-water and groundwater flows. Linear prediction uncertainty analysis is an effective approach for identifying (1) raw and processed data most effective for model conditioning prior to inversion, (2) specific observations and periods of time critically sensitive to specific predictions, and (3) additional observation data that would reduce model uncertainty relative to specific predictions. We present results for a two-dimensional groundwater model of a 2,186 km2 area of the Biscayne aquifer in south Florida implicitly coupled to a surface-water routing model of the actively managed canal system. The model domain includes 5 municipal well fields withdrawing more than 1 Mm3/day and 17 operable surface-water control structures that control freshwater releases from the Everglades and freshwater discharges to Biscayne Bay. More than 10 years of daily observation data from 35 groundwater wells and 24 surface water gages are available to condition model parameters. A dense parameterization was used to fully characterize the contribution of the inversion null space to predictive uncertainty and included bias-correction parameters. This approach allows better resolution of the boundary between the inversion null space and solution space. Bias-correction parameters (e.g., rainfall, potential evapotranspiration, and structure flow multipliers) absorb information that is present in structural noise that may otherwise contaminate the estimation of more physically-based model parameters. This allows greater precision in predictions that are entirely solution-space dependent, and reduces the propensity for bias in predictions that are not. Results show that application of this analysis is an effective means of identifying those surface-water and groundwater data, both raw and processed, that minimize predictive uncertainty, while simultaneously identifying the maximum solution-space dimensionality of the inverse problem supported by the data.

  3. Understanding the DayCent model: Calibration, sensitivity, and identifiability through inverse modeling

    USGS Publications Warehouse

    Necpálová, Magdalena; Anex, Robert P.; Fienen, Michael N.; Del Grosso, Stephen J.; Castellano, Michael J.; Sawyer, John E.; Iqbal, Javed; Pantoja, Jose L.; Barker, Daniel W.

    2015-01-01

    The ability of biogeochemical ecosystem models to represent agro-ecosystems depends on their correct integration with field observations. We report simultaneous calibration of 67 DayCent model parameters using multiple observation types through inverse modeling using the PEST parameter estimation software. Parameter estimation reduced the total sum of weighted squared residuals by 56% and improved model fit to crop productivity, soil carbon, volumetric soil water content, soil temperature, N2O, and soil NO3− compared to the default simulation. Inverse modeling substantially reduced predictive model error relative to the default model for all model predictions, except for soil NO3− and NH4+. Post-processing analyses provided insights into parameter–observation relationships based on parameter correlations, sensitivity and identifiability. Inverse modeling tools are shown to be a powerful way to systematize and accelerate the process of biogeochemical model interrogation, improving our understanding of model function and the underlying ecosystem biogeochemical processes that they represent.

  4. Monitoring asthma control in children with allergies by soft computing of lung function and exhaled nitric oxide.

    PubMed

    Pifferi, Massimo; Bush, Andrew; Pioggia, Giovanni; Di Cicco, Maria; Chinellato, Iolanda; Bodini, Alessandro; Macchia, Pierantonio; Boner, Attilio L

    2011-02-01

    Asthma control is emphasized by new guidelines but remains poor in many children. Evaluation of control relies on subjective patient recall and may be overestimated by health-care professionals. This study assessed the value of spirometry and fractional exhaled nitric oxide (FeNO) measurements, used alone or in combination, in models developed by a machine learning approach in the objective classification of asthma control according to Global Initiative for Asthma guidelines and tested the model in a second group of children with asthma. Fifty-three children with persistent atopic asthma underwent two to six evaluations of asthma control, including spirometry and FeNO. Soft computing evaluation was performed by means of artificial neural networks and principal component analysis. The model was then tested in a cross-sectional study in an additional 77 children with allergic asthma. The machine learning method was not able to distinguish different levels of control using either spirometry or FeNO values alone. However, their use in combination modeled by soft computing was able to discriminate levels of asthma control. In particular, the model is able to recognize all children with uncontrolled asthma and correctly identify 99.0% of children with totally controlled asthma. In the cross-sectional study, the model prospectively identified correctly all the uncontrolled children and 79.6% of the controlled children. Soft computing analysis of spirometry and FeNO allows objective categorization of asthma control status.

  5. Color correction strategies in optical design

    NASA Astrophysics Data System (ADS)

    Pfisterer, Richard N.; Vorndran, Shelby D.

    2014-12-01

    An overview of color correction strategies is presented. Starting with basic first-order aberration theory, we identify known color corrected solutions for doublets and triplets. Reviewing the modern approaches of Robb-Mercado, Rayces-Aguilar, and C. de Albuquerque et al., we find that they confirm the existence of glass combinations for doublets and triplets that yield color corrected solutions that we already know exist. Finally, we explore the use of the y, ȳ diagram in conjunction with aberration theory to identify the solution space of glasses capable of leading to color corrected solutions in arbitrary optical systems.

  6. A parametric approach for simultaneous bias correction and high-resolution downscaling of climate model rainfall

    NASA Astrophysics Data System (ADS)

    Mamalakis, Antonios; Langousis, Andreas; Deidda, Roberto; Marrocu, Marino

    2017-03-01

    Distribution mapping has been identified as the most efficient approach to bias-correct climate model rainfall, while reproducing its statistics at spatial and temporal resolutions suitable to run hydrologic models. Yet its implementation based on empirical distributions derived from control samples (referred to as nonparametric distribution mapping) makes the method's performance sensitive to sample length variations, the presence of outliers, and the spatial resolution of climate model results, and may lead to biases, especially in extreme rainfall estimation. To address these shortcomings, we propose a methodology for simultaneous bias correction and high-resolution downscaling of climate model rainfall products that uses: (a) a two-component theoretical distribution model (i.e., a generalized Pareto (GP) model for rainfall intensities above a specified threshold u*, and an exponential model for lower rain rates), and (b) proper interpolation of the corresponding distribution parameters on a user-defined high-resolution grid, using kriging for uncertain data. We assess the performance of the suggested parametric approach relative to the nonparametric one, using daily raingauge measurements from a dense network in the island of Sardinia (Italy), and rainfall data from four GCM/RCM model chains of the ENSEMBLES project. The obtained results shed light on the competitive advantages of the parametric approach, which proves more accurate and considerably less sensitive to the characteristics of the calibration period, independent of the GCM/RCM combination used. This is especially the case for extreme rainfall estimation, where the GP assumption allows for more accurate and robust estimates, also beyond the range of the available data.
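
    The two-component mapping idea can be sketched roughly as follows: fit an exponential distribution to wet-day intensities below a threshold and a generalized Pareto distribution to the excesses above it, separately for observations and model output, and then bias-correct by CDF matching within the corresponding component. The sketch below uses synthetic rainfall, treats the 95th percentile as the threshold for illustration, and omits the kriging interpolation of the fitted parameters onto a high-resolution grid described in the abstract.

    ```python
    # Two-component (exponential + GP) quantile mapping sketch on synthetic data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    obs = rng.gamma(0.8, 9.0, 5000)        # "observed" wet-day rainfall (mm)
    mod = rng.gamma(0.8, 6.0, 5000)        # biased climate-model rainfall (mm)
    u_obs, u_mod = np.quantile(obs, 0.95), np.quantile(mod, 0.95)

    # Fit the two components to each sample (location fixed at zero / threshold)
    obs_exp = stats.expon.fit(obs[obs <= u_obs], floc=0)
    mod_exp = stats.expon.fit(mod[mod <= u_mod], floc=0)
    obs_gp = stats.genpareto.fit(obs[obs > u_obs] - u_obs, floc=0)
    mod_gp = stats.genpareto.fit(mod[mod > u_mod] - u_mod, floc=0)

    def correct(x):
        """Map one model rainfall value onto the observed distribution."""
        if x <= u_mod:                                   # bulk of the distribution
            p = stats.expon.cdf(x, *mod_exp)
            return stats.expon.ppf(p, *obs_exp)
        p = stats.genpareto.cdf(x - u_mod, *mod_gp)      # tail excesses
        return u_obs + stats.genpareto.ppf(p, *obs_gp)

    corrected = np.array([correct(x) for x in mod])
    print("raw model mean:      ", mod.mean().round(2))
    print("corrected model mean:", corrected.mean().round(2))
    print("observed mean:       ", obs.mean().round(2))
    ```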

  7. Increased genomic prediction accuracy in wheat breeding through spatial adjustment of field trial data.

    PubMed

    Lado, Bettina; Matus, Ivan; Rodríguez, Alejandra; Inostroza, Luis; Poland, Jesse; Belzile, François; del Pozo, Alejandro; Quincke, Martín; Castro, Marina; von Zitzewitz, Jarislav

    2013-12-09

    In crop breeding, the interest of predicting the performance of candidate cultivars in the field has increased due to recent advances in molecular breeding technologies. However, the complexity of the wheat genome presents some challenges for applying new technologies in molecular marker identification with next-generation sequencing. We applied genotyping-by-sequencing, a recently developed method to identify single-nucleotide polymorphisms, in the genomes of 384 wheat (Triticum aestivum) genotypes that were field tested under three different water regimes in Mediterranean climatic conditions: rain-fed only, mild water stress, and fully irrigated. We identified 102,324 single-nucleotide polymorphisms in these genotypes, and the phenotypic data were used to train and test genomic selection models intended to predict yield, thousand-kernel weight, number of kernels per spike, and heading date. Phenotypic data showed marked spatial variation. Therefore, different models were tested to correct the trends observed in the field. A mixed-model using moving-means as a covariate was found to best fit the data. When we applied the genomic selection models, the accuracy of predicted traits increased with spatial adjustment. Multiple genomic selection models were tested, and a Gaussian kernel model was determined to give the highest accuracy. The best predictions between environments were obtained when data from different years were used to train the model. Our results confirm that genotyping-by-sequencing is an effective tool to obtain genome-wide information for crops with complex genomes, that these data are efficient for predicting traits, and that correction of spatial variation is a crucial ingredient to increase prediction accuracy in genomic selection models.
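
    The two-step idea described above — absorb spatial field trends with a moving-mean covariate, then fit a Gaussian-kernel genomic prediction model — can be sketched in simplified form. The study used a mixed model and several genomic selection models; the stand-ins below (ordinary least squares for the adjustment, kernel ridge regression with an RBF kernel for prediction) and all data are illustrative assumptions, not the published pipeline.

    ```python
    # Simplified sketch: moving-mean spatial adjustment followed by a Gaussian
    # (RBF) kernel prediction model on simulated marker data.
    import numpy as np
    import statsmodels.api as sm
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(5)
    n_lines, n_markers = 300, 1000
    markers = rng.integers(0, 3, size=(n_lines, n_markers)).astype(float)
    true_effects = rng.normal(0, 0.05, n_markers)
    field_trend = np.sin(np.linspace(0, 6, n_lines)) * 2.0   # spatial field trend
    yield_obs = markers @ true_effects + field_trend + rng.normal(0, 0.5, n_lines)

    # Step 1: moving mean of neighbouring plots as covariate, keep residuals
    window = 11
    moving_mean = np.convolve(yield_obs, np.ones(window) / window, mode="same")
    adj_fit = sm.OLS(yield_obs, sm.add_constant(moving_mean)).fit()
    yield_adj = adj_fit.resid

    # Step 2: Gaussian-kernel genomic prediction, accuracy via cross-validation
    gs_model = KernelRidge(kernel="rbf", alpha=1.0, gamma=1.0 / n_markers)
    raw_acc = cross_val_score(gs_model, markers, yield_obs, cv=5, scoring="r2")
    adj_acc = cross_val_score(gs_model, markers, yield_adj, cv=5, scoring="r2")
    print("mean r2 without spatial adjustment:", raw_acc.mean().round(3))
    print("mean r2 with spatial adjustment:   ", adj_acc.mean().round(3))
    ```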

  8. Continuous quantum error correction for non-Markovian decoherence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oreshkov, Ognyan; Brun, Todd A.; Communication Sciences Institute, University of Southern California, Los Angeles, California 90089

    2007-08-15

    We study the effect of continuous quantum error correction in the case where each qubit in a codeword is subject to a general Hamiltonian interaction with an independent bath. We first consider the scheme in the case of a trivial single-qubit code, which provides useful insights into the workings of continuous error correction and the difference between Markovian and non-Markovian decoherence. We then study the model of a bit-flip code with each qubit coupled to an independent bath qubit and subject to continuous correction, and find its solution. We show that for sufficiently large error-correction rates, the encoded state approximately follows an evolution of the type of a single decohering qubit, but with an effectively decreased coupling constant. The factor by which the coupling constant is decreased scales quadratically with the error-correction rate. This is compared to the case of Markovian noise, where the decoherence rate is effectively decreased by a factor which scales only linearly with the rate of error correction. The quadratic enhancement depends on the existence of a Zeno regime in the Hamiltonian evolution which is absent in purely Markovian dynamics. We analyze the range of validity of this result and identify two relevant time scales. Finally, we extend the result to more general codes and argue that the performance of continuous error correction will exhibit the same qualitative characteristics.

  9. The Deformity Angular Ratio: Does It Correlate With High-Risk Cases for Potential Spinal Cord Monitoring Alerts in Pediatric 3-Column Thoracic Spinal Deformity Corrective Surgery?

    PubMed

    Lewis, Noah D H; Keshen, Sam G N; Lenke, Lawrence G; Zywiel, Michael G; Skaggs, David L; Dear, Taylor E; Strantzas, Samuel; Lewis, Stephen J

    2015-08-01

    A retrospective analysis. The purpose of this study was to determine whether the deformity angular ratio (DAR) can reliably assess the neurological risks of patients undergoing deformity correction. Identifying high-risk patients and procedures can help ensure that appropriate measures are taken to minimize neurological complications during spinal deformity corrections. Subjectively, surgeons look at radiographs and evaluate the riskiness of the procedure. However, 2 curves of similar magnitude and location can have significantly different risks of neurological deficit during surgery. Whether the curve spans many levels or just a few can significantly influence surgical strategies. Lenke et al. have proposed the DAR, which is a measure of curve magnitude per level of deformity. The data from 35 pediatric spinal deformity correction procedures with thoracic 3-column osteotomies were reviewed. Measurements from preoperative radiographs were used to calculate the DAR. Binary logistic regression was used to model the relationship between DARs (independent variables) and presence or absence of an intraoperative alert (dependent variable). In patients undergoing 3-column osteotomies, sagittal curve magnitude and total curve magnitude were associated with increased incidence of transcranial motor evoked potential changes. Total DAR greater than 45° per level and sagittal DAR greater than 22° per level were associated with a 75% incidence of a motor evoked potential alert, with the incidence increasing to 90% with sagittal DAR of 28° per level. In patients undergoing 3-column osteotomies for severe spinal deformities, the DAR was predictive of patients developing intraoperative motor evoked potential alerts. Identifying accurate radiographical, patient, and procedural risk factors in the correction of severe deformities can help prepare the surgical team to improve safety and outcomes when carrying out complex spinal corrections. Level of Evidence: 3.

  10. Fanconi anemia gene editing by the CRISPR/Cas9 system.

    PubMed

    Osborn, Mark J; Gabriel, Richard; Webber, Beau R; DeFeo, Anthony P; McElroy, Amber N; Jarjour, Jordan; Starker, Colby G; Wagner, John E; Joung, J Keith; Voytas, Daniel F; von Kalle, Christof; Schmidt, Manfred; Blazar, Bruce R; Tolar, Jakub

    2015-02-01

    Genome engineering with designer nucleases is a rapidly progressing field, and the ability to correct human gene mutations in situ is highly desirable. We employed fibroblasts derived from a patient with Fanconi anemia as a model to test the ability of the clustered regularly interspaced short palindromic repeats/Cas9 nuclease system to mediate gene correction. We show that the Cas9 nuclease and nickase each resulted in gene correction, but the nickase, because of its ability to preferentially mediate homology-directed repair, resulted in a higher frequency of corrected clonal isolates. To assess the off-target effects, we used both a predictive software platform to identify intragenic sequences of homology as well as a genome-wide screen utilizing linear amplification-mediated PCR. We observed no off-target activity and show RNA-guided endonuclease candidate sites that do not possess low sequence complexity function in a highly specific manner. Collectively, we provide proof of principle for precision genome editing in Fanconi anemia, a DNA repair-deficient human disorder.

  11. Mobile Image Based Color Correction Using Deblurring

    PubMed Central

    Wang, Yu; Xu, Chang; Boushey, Carol; Zhu, Fengqing; Delp, Edward J.

    2016-01-01

    Dietary intake, the process of determining what someone eats during the course of a day, provides valuable insights for mounting intervention programs for prevention of many chronic diseases such as obesity and cancer. The goal of the Technology Assisted Dietary Assessment (TADA) System, developed at Purdue University, is to automatically identify and quantify foods and beverages consumed by utilizing food images acquired with a mobile device. Color correction serves as a critical step to ensure accurate food identification and volume estimation. We make use of a specifically designed color checkerboard (i.e. a fiducial marker) to calibrate the imaging system so that the variations of food appearance under different lighting conditions can be determined. In this paper, we propose an image quality enhancement technique by combining image de-blurring and color correction. The contribution consists of introducing an automatic camera shake removal method using a saliency map and improving the polynomial color correction model using the LMS color space. PMID:28572697
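
    The polynomial color correction step can be illustrated with a small least-squares sketch: fit a second-order polynomial mapping from captured checkerboard patch colors to their reference values, then apply it to new pixels. The patch values below are placeholders, and the fit is shown in RGB for simplicity rather than the LMS color space used in the paper.

    ```python
    # Second-order polynomial color correction fitted from checkerboard patches.
    import numpy as np

    def poly_features(rgb):
        """Second-order polynomial expansion of an (N, 3) array of colors."""
        r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
        return np.column_stack([
            np.ones_like(r), r, g, b,
            r * g, r * b, g * b, r**2, g**2, b**2,
        ])

    rng = np.random.default_rng(6)
    reference = rng.uniform(0, 1, size=(24, 3))                        # true checker colors
    captured = 0.8 * reference + 0.05 + rng.normal(0, 0.01, (24, 3))   # distorted capture

    # Least-squares fit: one column of coefficients per output channel
    A = poly_features(captured)
    coeffs, *_ = np.linalg.lstsq(A, reference, rcond=None)

    # Apply the correction (here simply re-applied to the captured patches)
    corrected = poly_features(captured) @ coeffs
    print("mean abs error before:", np.abs(captured - reference).mean().round(4))
    print("mean abs error after: ", np.abs(corrected - reference).mean().round(4))
    ```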

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williamson, David L.; Olson, Jerry G.; Hannay, Cécile

    An error in the energy formulation in the Community Atmosphere Model (CAM) is identified and corrected. Ten year AMIP simulations are compared using the correct and incorrect energy formulations. Statistics of selected primary variables all indicate physically insignificant differences between the simulations, comparable to differences with simulations initialized with rounding sized perturbations. The two simulations are so similar mainly because of an inconsistency in the application of the incorrect energy formulation in the original CAM. CAM used the erroneous energy form to determine the states passed between the parameterizations, but used a form related to the correct formulation for the state passed from the parameterizations to the dynamical core. If the incorrect form is also used to determine the state passed to the dynamical core the simulations are significantly different. In addition, CAM uses the incorrect form for the global energy fixer, but that seems to be less important. The difference of the magnitude of the fixers using the correct and incorrect energy definitions is very small.

  13. Impact of numerical choices on water conservation in the E3SM Atmosphere Model version 1 (EAMv1)

    NASA Astrophysics Data System (ADS)

    Zhang, Kai; Rasch, Philip J.; Taylor, Mark A.; Wan, Hui; Leung, Ruby; Ma, Po-Lun; Golaz, Jean-Christophe; Wolfe, Jon; Lin, Wuyin; Singh, Balwinder; Burrows, Susannah; Yoon, Jin-Ho; Wang, Hailong; Qian, Yun; Tang, Qi; Caldwell, Peter; Xie, Shaocheng

    2018-06-01

    The conservation of total water is an important numerical feature for global Earth system models. Even small conservation problems in the water budget can lead to systematic errors in century-long simulations. This study quantifies and reduces various sources of water conservation error in the atmosphere component of the Energy Exascale Earth System Model. Several sources of water conservation error have been identified during the development of the version 1 (V1) model. The largest errors result from the numerical coupling between the resolved dynamics and the parameterized sub-grid physics. A hybrid coupling using different methods for fluid dynamics and tracer transport provides a reduction of water conservation error by a factor of 50 at 1° horizontal resolution as well as consistent improvements at other resolutions. The second largest error source is the use of an overly simplified relationship between the surface moisture flux and latent heat flux at the interface between the host model and the turbulence parameterization. This error can be prevented by applying the same (correct) relationship throughout the entire model. Two additional types of conservation error that result from correcting the surface moisture flux and clipping negative water concentrations can be avoided by using mass-conserving fixers. With all four error sources addressed, the water conservation error in the V1 model becomes negligible and insensitive to the horizontal resolution. The associated changes in the long-term statistics of the main atmospheric features are small. A sensitivity analysis is carried out to show that the magnitudes of the conservation errors in early V1 versions decrease strongly with temporal resolution but increase with horizontal resolution. The increased vertical resolution in V1 results in a very thin model layer at the Earth's surface, which amplifies the conservation error associated with the surface moisture flux correction. We note that for some of the identified error sources, the proposed fixers are remedies rather than solutions to the problems at their roots. Future improvements in time integration would be beneficial for V1.
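
    One of the error sources named above, clipping negative water concentrations, is commonly handled with a mass-conserving "fixer": negatives are clipped and the remaining water in the column is rescaled so the column integral is unchanged. The toy sketch below illustrates that general idea only; it is not the EAMv1 implementation, and the layer weights and values are invented.

    ```python
    # Toy mass-conserving fixer for negative water concentrations in a column.
    import numpy as np

    def fix_negative_water(q, dp):
        """Clip negatives in specific humidity q and restore the column integral.

        q  : specific humidity per layer (kg/kg), may contain small negatives
        dp : layer pressure thickness (Pa), used as the mass weight
        """
        total_before = np.sum(q * dp)           # column water before the fix
        q_clipped = np.clip(q, 0.0, None)
        total_after = np.sum(q_clipped * dp)
        if total_after <= 0.0 or total_before <= 0.0:
            return q_clipped                    # nothing sensible to rescale
        return q_clipped * (total_before / total_after)

    q = np.array([3e-3, 1e-3, -2e-5, 5e-4, -1e-6])
    dp = np.array([2000.0, 3000.0, 4000.0, 5000.0, 6000.0])
    q_fixed = fix_negative_water(q, dp)
    print("column water conserved:",
          np.isclose(np.sum(q * dp), np.sum(q_fixed * dp)))
    ```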

  14. Comparison of Different Attitude Correction Models for ZY-3 Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Song, Wenping; Liu, Shijie; Tong, Xiaohua; Niu, Changling; Ye, Zhen; Zhang, Han; Jin, Yanmin

    2018-04-01

    ZY-3, launched in 2012, is the first civilian high-resolution stereo mapping satellite of China. This paper analyzed the positioning errors of ZY-3 satellite imagery and compensated for them to improve geo-positioning accuracy using different correction models, including attitude quaternion correction, attitude angle offset correction, and attitude angle linear correction. The experimental results revealed that systematic errors exist in the ZY-3 attitude observations and that the positioning accuracy can be improved after attitude correction with the aid of ground control points. There is no significant difference between the results of the attitude quaternion correction method and the attitude angle correction method. However, the attitude angle offset correction model produced a steadier improvement than the linear correction model when only limited ground control points are available for a single scene.

  15. A two-component Bayesian mixture model to identify implausible gestational age.

    PubMed

    Mohammadian-Khoshnoud, Maryam; Moghimbeigi, Abbas; Faradmal, Javad; Yavangi, Mahnaz

    2016-01-01

    Background: Birth weight and gestational age are two important variables in obstetric research. The primary measure of gestational age is based on a mother's recall of her last menstrual period. This recall may cause random or systematic errors. Therefore, the objective of this study is to utilize a Bayesian mixture model in order to identify implausible gestational ages. Methods: In this cross-sectional study, medical documents of 502 preterm infants born and hospitalized in Hamadan Fatemieh Hospital from 2009 to 2013 were gathered. Preterm infants were classified into less than 28 weeks and 28 to 31 weeks. A two-component Bayesian mixture model was utilized to identify implausible gestational age; the first component shows the probability of correct and the second one shows the probability of incorrect classification of gestational ages. The data were analyzed through OpenBUGS 3.2.2 and the 'coda' package of R 3.1.1. Results: The means (SD) of the second component for the less than 28 weeks and 28 to 31 weeks groups were 1179 (0.0123) and 1620 (0.0074), respectively. These values were larger than the means of the first component for both groups, which were 815.9 (0.0123) and 1061 (0.0074), respectively. Conclusion: Errors that occurred in recording the gestational ages of these two groups of preterm infants included recording the gestational age as less than the actual value at birth. Therefore, developing scientific methods to correct these errors is essential to providing desirable health services and adjusting accurate health indicators.
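
    The flagging idea behind the two-component model can be illustrated with a simple stand-in: a two-component Gaussian mixture fitted by maximum likelihood (not the Bayesian OpenBUGS fit used in the study) on simulated birth weights, where one component represents correctly classified gestational ages and the other implausible ones. All numbers below are invented for illustration.

    ```python
    # Illustrative two-component mixture as a non-Bayesian stand-in for the
    # model described above; birth weights are simulated.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(7)

    # Simulated birth weights (g) for infants recorded as < 28 weeks: most are
    # plausible for that gestational age, some reflect misreported dates.
    plausible = rng.normal(820, 150, 450)
    implausible = rng.normal(1600, 250, 50)
    weights = np.concatenate([plausible, implausible]).reshape(-1, 1)

    gmm = GaussianMixture(n_components=2, random_state=0).fit(weights)
    order = np.argsort(gmm.means_.ravel())        # identify the higher-mean component
    post = gmm.predict_proba(weights)[:, order[1]]

    print("component means (g):", np.round(np.sort(gmm.means_.ravel()), 1))
    print("records flagged as implausible:", int((post > 0.5).sum()))
    ```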

  16. Complex Osteotomies of Tibial Plateau Malunions Using Computer-Assisted Planning and Patient-Specific Surgical Guides.

    PubMed

    Fürnstahl, Philipp; Vlachopoulos, Lazaros; Schweizer, Andreas; Fucentese, Sandro F; Koch, Peter P

    2015-08-01

    The accurate reduction of tibial plateau malunions can be challenging without guidance. In this work, we report on a novel technique that combines 3-dimensional computer-assisted planning with patient-specific surgical guides for improving reliability and accuracy of complex intraarticular corrective osteotomies. Preoperative planning based on 3-dimensional bone models was performed to simulate fragment mobilization and reduction in 3 cases. Surgical implementation of the preoperative plan using patient-specific cutting and reduction guides was evaluated; benefits and limitations of the approach were identified and discussed. The preliminary results are encouraging and show that complex, intraarticular corrective osteotomies can be accurately performed with this technique. For selective patients with complex malunions around the tibia plateau, this method might be an attractive option, with the potential to facilitate achieving the most accurate correction possible.

  17. Knowledge bases built on web languages from the point of view of predicate logics

    NASA Astrophysics Data System (ADS)

    Vajgl, Marek; Lukasová, Alena; Žáček, Martin

    2017-06-01

    The article evaluates formal systems created on the basis of web (ontology/concept) languages that simplify the usual approach to knowledge representation within FOPL while sharing its expressiveness, semantic correctness, completeness and decidability. Evaluation of two of them - one based on description logic and one built on RDF model principles - identifies some shortcomings of those formal systems and, where possible, presents corrections. The possibility of building an inference system capable of deriving further knowledge from given knowledge bases, including those describing domains through giant linked domain databases, is also taken into account. Moreover, the directions towards simplifying the FOPL language discussed here are evaluated from the point of view of their potential to become a web language fulfilling the idea of the semantic web.

  18. Nonassociative plasticity model for cohesionless materials and its implementation in soil-structure interaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hashmi, Q.S.E.

    A constitutive model based on rate-independent elastoplasticity concepts is developed and used to simulate the behavior of geologic materials under arbitrary three-dimensional stress paths. The model accounts for various factors such as friction, stress path, and stress history that influence the behavior of geologic materials. A hierarchical approach is adopted whereby models of progressively increasing sophistication are developed from a basic isotropic-hardening associate model. Nonassociativeness is introduced as correction or perturbation to the basic model. Deviation of normality of the plastic-strain increments to the yield surface F is captured through nonassociativeness. The plastic potential Q is obtained by applying a correction to F. This simplified approach restricts the number of extra parameters required to define the plastic potential Q. The material constants associated with the model are identified, and they are evaluated for three different sands (Leighton Buzzard, Munich and McCormick Ranch). The model is then verified by comparing predictions with laboratory tests from which the constants were found, and typical tests not used for finding the constants. Based on the above findings, a soil-footing system is analyzed using finite-element techniques.

  19. Contact Stress Analysis of Spiral Bevel Gears Using Finite Element Analysis

    NASA Technical Reports Server (NTRS)

    Bibel, G. D.; Kumar, A; Reddy, S.; Handschuh, R.

    1995-01-01

    A procedure is presented for performing three-dimensional stress analysis of spiral bevel gears in mesh using the finite element method. The procedure involves generating a finite element model by solving equations that identify tooth surface coordinates. Coordinate transformations are used to orientate the gear and pinion for gear meshing. Contact boundary conditions are simulated with gap elements. A solution technique for correct orientation of the gap elements is given. Example models and results are presented.

  20. Derivation of revised formulae for eddy viscous forces used in the ocean general circulation model

    NASA Technical Reports Server (NTRS)

    Chou, Ru Ling

    1988-01-01

    Presented is a re-derivation of the eddy viscous dissipation tensor commonly used in present oceanographic general circulation models. When isotropy is imposed, the currently used form of the tensor fails to reduce to the Laplacian operator. In this paper, the source of this error is identified in a consistent derivation of the tensor in both rectangular and spherical Earth coordinates, and the correct form of the eddy viscous tensor is presented.

  1. A predictive model to allocate frequent service users of community-based mental health services to different packages of care.

    PubMed

    Grigoletti, Laura; Amaddeo, Francesco; Grassi, Aldrigo; Boldrini, Massimo; Chiappelli, Marco; Percudani, Mauro; Catapano, Francesco; Fiorillo, Andrea; Perris, Francesco; Bacigalupi, Maurizio; Albanese, Paolo; Simonetti, Simona; De Agostini, Paola; Tansella, Michele

    2010-01-01

    To develop predictive models to allocate patients into frequent and low service users groups within the Italian Community-based Mental Health Services (CMHSs). To allocate frequent users to different packages of care, identifying the costs of these packages. Socio-demographic and clinical data and GAF scores at baseline were collected for 1250 users attending five CMHSs. All psychiatric contacts made by these patients during six months were recorded. A logistic regression identified variables predictive of frequent service use. Multinomial logistic regression identified variables able to predict the most appropriate package of care. A cost function was utilised to estimate costs. Frequent service users accounted for 49% of patients and used nearly 90% of all contacts. The model correctly classified 80% of users into the frequent and low service user groups. Three packages of care were identified: Basic Community Treatment (4,133 Euro per six months); Intensive Community Treatment (6,180 Euro) and Rehabilitative Community Treatment (11,984 Euro) for 83%, 6% and 11% of frequent service users respectively. The model was found to be accurate for 85% of users. It is possible to develop predictive models to identify frequent service users and to assign them to pre-defined packages of care, and to use these models to inform the funding of psychiatric care.
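
    The two modelling steps described above can be sketched with standard tools: a binary logistic regression to separate frequent from low service users, followed by a multinomial logistic regression to assign frequent users to one of three packages of care. The feature set and labels below are synthetic and purely illustrative, not the CMHS data.

    ```python
    # Minimal two-step sketch: frequent-user classification, then package
    # assignment; all records are synthetic, so accuracies are illustrative only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(8)
    n = 1250
    X = np.column_stack([
        rng.integers(18, 80, n),        # age
        rng.integers(0, 2, n),          # living alone (0/1)
        rng.normal(55, 15, n),          # GAF score at baseline
    ])

    # Synthetic labels: frequent user yes/no, and package for frequent users
    # (0 = basic, 1 = intensive, 2 = rehabilitative community treatment)
    frequent = (rng.random(n) < 0.49).astype(int)
    package = rng.choice([0, 1, 2], size=n, p=[0.83, 0.06, 0.11])

    step1 = LogisticRegression(max_iter=1000).fit(X, frequent)
    print("frequent-user classification rate:", round(step1.score(X, frequent), 2))

    mask = frequent == 1
    step2 = LogisticRegression(max_iter=1000).fit(X[mask], package[mask])
    print("package assignment rate:", round(step2.score(X[mask], package[mask]), 2))
    ```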

  2. Photoinduced charge-transfer electronic excitation of tetracyanoethylene/tetramethylethylene complex in dichloromethane

    NASA Astrophysics Data System (ADS)

    Xu, Long-Kun; Bi, Ting-Jun; Ming, Mei-Jun; Wang, Jing-Bo; Li, Xiang-Yuan

    2017-07-01

    Based on the authors' previous work on the nonequilibrium solvation model, intermolecular charge-transfer electronic excitation of the tetracyanoethylene (TCE)/tetramethylethylene (TME) π-stacked complex in dichloromethane (DCM) has been investigated. For weak-interaction correction, the dispersion-corrected DFT-D3 functional is adopted for geometry optimization. In order to identify the excitation metric, dipole moment components in each Cartesian direction, atomic charge, charge separation and the Δr index are analyzed for the TCE/TME complex. Calculations show that the excitation energy depends on the choice of functional; when combined with a suitable time-dependent density functional, the modified nonequilibrium expression gives satisfactory results for intermolecular charge-transfer electronic excitation.

  3. [Risk factors negatively affecting on the formation of musculoskeletal system in children and adolescents in the present conditions].

    PubMed

    Mirskaya, N B

    2013-01-01

    Identifying risk factors affecting the formation of the musculoskeletal system (MSS) in children and adolescents is considered by the author as a necessary condition for the implementation of prevention, timely diagnosis and adequate correction of MSS disorders and diseases. Introduction into the educational process of a conceptual model of prevention and correction of MSS disorders and diseases in schoolchildren, developed by the author for the first time, made it possible to reduce the prevalence of functional disorders and early forms of MSS diseases among students of a number of comprehensive schools in Moscow by 50%.

  4. 75 FR 71538 - Airworthiness Directives; Airbus Model A340-500 and A340-600 Series Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-24

    ...We are adopting a new airworthiness directive (AD) for the products listed above. This AD results from mandatory continuing airworthiness information (MCAI) originated by an aviation authority of another country to identify and correct an unsafe condition on an aviation product. The MCAI describes the unsafe condition as:

  5. Classical, Generalizability, and Multifaceted Rasch Detection of Interrater Variability in Large, Sparse Data Sets.

    ERIC Educational Resources Information Center

    MacMillan, Peter D.

    2000-01-01

    Compared classical test theory (CTT), generalizability theory (GT), and multifaceted Rasch model (MFRM) approaches to detecting and correcting for rater variability using responses of 4,930 high school students graded by 3 raters on 9 scales. The MFRM approach identified far more raters as different than did the CTT analysis. GT and Rasch…

  6. 75 FR 2060 - Airworthiness Directives; Empresa Brasileira de Aeronautica S.A. (EMBRAER) Model ERJ 170 Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-14

    ...We are adopting a new airworthiness directive (AD) for the products listed above. This AD results from mandatory continuing airworthiness information (MCAI) originated by an aviation authority of another country to identify and correct an unsafe condition on an aviation product. The MCAI describes the unsafe condition as:

  7. Exposed and embedded corrections in aphasia therapy: issues of voice and identity.

    PubMed

    Simmons-Mackie, Nina; Damico, Jack S

    2008-01-01

    Because communication after the onset of aphasia can be fraught with errors, therapist corrections are pervasive in therapy for aphasia. Although corrections are designed to improve the accuracy of communication, some corrections can have social and emotional consequences during interactions. That is, exposure of errors can potentially silence the 'voice' of a speaker by orienting to an utterance as unacceptable. Although corrections can marginalize speakers with aphasia, the practice has not been widely investigated. A qualitative study of corrections during aphasia therapy was undertaken to describe corrections in therapy, identify patterns of occurrence, and develop hypotheses regarding the potential effects of corrections. Videotapes of six individual and five group aphasia therapy sessions were analysed. Sequences consistent with a definition of a therapist 'correction' were identified. Corrections were defined as instances when the therapist offered a 'fix' for a perceived error in the client's talk even though the intent was apparent. Two categories of correction were identified and were consistent with Jefferson's (1987) descriptions of exposed and embedded corrections. Exposed corrections involved explicit correcting by the therapist, while embedded corrections occurred implicitly within the ongoing talk. Patterns of occurrence appeared consistent with philosophical orientations of therapy sessions. Exposed corrections were more prevalent in sessions focusing on repairing deficits, while embedded corrections were prevalent in sessions focusing on natural communication events (e.g. conversation). In addition, exposed corrections were sometimes used when client offerings were plausible or appropriate, but were inconsistent with therapist expectations. The observation that some instances of exposed corrections effectively silenced the voice or self-expression of the person with aphasia has significant implications for outcomes from aphasia therapy. By focusing on accurate productions versus communicative intents, therapy runs the risk of reducing self-esteem and communicative confidence, as well as reinforcing a sense of 'helplessness' and disempowerment among people with aphasia. The results suggest that clinicians should carefully calibrate the use of exposed and embedded corrections to balance linguistic and psychosocial goals.

  8. Aspherical-atom modeling of coordination compounds by single-crystal X-ray diffraction allows the correct metal atom to be identified.

    PubMed

    Dittrich, Birger; Wandtke, Claudia M; Meents, Alke; Pröpper, Kevin; Mondal, Kartik Chandra; Samuel, Prinson P; Amin Sk, Nurul; Singh, Amit Pratap; Roesky, Herbert W; Sidhu, Navdeep

    2015-02-02

    Single-crystal X-ray diffraction (XRD) is often considered the gold standard in analytical chemistry, as it allows element identification as well as determination of atom connectivity and the solid-state structure of completely unknown samples. Element assignment is based on the number of electrons of an atom, so that a distinction of neighboring heavier elements in the periodic table by XRD is often difficult. A computationally efficient procedure for aspherical-atom least-squares refinement of conventional diffraction data of organometallic compounds is proposed. The iterative procedure is conceptually similar to Hirshfeld-atom refinement (Acta Crystallogr. Sect. A 2008, 64, 383-393; IUCrJ 2014, 1, 61-79), but it relies on tabulated invariom scattering factors (Acta Crystallogr. Sect. B 2013, 69, 91-104) and the Hansen/Coppens multipole model; disordered structures can be handled as well. Five linear-coordinate 3d metal complexes, for which the wrong element is found if standard independent-atom model scattering factors are relied upon, are studied, and it is shown that only aspherical-atom scattering factors allow a reliable assignment. The influence of anomalous dispersion in identifying the correct element is investigated and discussed. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. What is the evidence for retrieval problems in the elderly?

    PubMed

    White, N; Cunningham, W R

    1982-01-01

    To determine whether older adults experience particular problems with retrieval, groups of young and elderly adults were given free recall and recognition tests of supraspan lists of unrelated words. Analysis of number of words correctly recalled and recognized yielded a significant age by retention test interaction: greater age differences were observed for recall than for recognition. In a second analysis of words recalled and recognized, corrected for guessing, the interaction disappeared. It was concluded that previous interpretations that age by retention test interactions are indicative of retrieval problems of the elderly may have been confounded by methodological problems. Furthermore, it was suggested that researchers in aging and memory need to be explicit in identifying their underlying models of error processes when analyzing recognition scores: different error models may lead to different results and interpretations.

  10. Runner's knowledge of their foot type: do they really know?

    PubMed

    Hohmann, Erik; Reaburn, Peter; Imhoff, Andreas

    2012-09-01

    The use of correct individually selected running shoes may reduce the incidence of running injuries. However, the runner needs to be aware of their foot anatomy to ensure the "correct" footwear is chosen. The purpose of this study was to compare the individual runner's knowledge of their arch type to the arch index derived from a static footprint. We examined 92 recreational runners with a mean age of 35.4±11.4 (12-63) years. A questionnaire was used to investigate the knowledge of the runners about arch height and overpronation. A clinical examination was undertaken using defined criteria and the arch index was analysed using weight-bearing footprints. Forty-five runners (49%) identified their foot arch correctly. Eighteen of the 41 flat-arched runners (44%) identified their arch correctly. Twenty-four of the 48 normal-arched athletes (50%) identified their arch correctly. Three subjects with a high arch identified their arch correctly. Thirty-eight runners assessed themselves as overpronators; only four (11%) of these athletes were positively identified. Of the 34 athletes who did not categorize themselves as overpronators, four runners (12%) had clinical overpronation. The findings of this research suggest that runners possess poor knowledge of both their foot arch and dynamic pronation. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. Processing of the marine magnetic anomalies of the Caribbean region and the Gulf of Mexico (GOM)

    NASA Astrophysics Data System (ADS)

    Garcia, Andreina; Dyment, Jérôme; Thébault, Erwan

    2015-04-01

    Marine magnetic anomalies are useful to better understand the structure and age of the seafloor and constrain its nature and formation. In this work, we applied a dedicated processing of the NGDC marine magnetic measurements over the Caribbean region. The number of available surveys amounts to 516, representing 2,612,994 data points between epochs 1958 and 2012. The pre-processing was done survey by survey. First, data associated with velocities of less than 5 knots were rejected. Then, the data were corrected for the main internal field using the CM4 model for epochs ranging between 1960 and 2002.5 and the IGRF-11 model outside the time range of the CM4 model. A visual inspection of the anomalies allowed us to identify and remove evident outliers and to define a priority order for each survey. We evaluated the magnetic heading effect and corrected the data for it, although statistical analysis suggested that this correction brings only a marginal improvement. The cross-over differences were estimated using the x2sys package (Wessel, 2010) and then corrected using a Matlab code. The statistics confirmed the importance of this processing, which improved the internal cross-overs, in particular with a clear reduction of extreme values. This processing allows us to present a marine magnetic anomaly map of the Caribbean region and the Gulf of Mexico at 0.18 degree spatial resolution and to discuss the magnetic signature of some of the striking structures of the area.

  12. Defense Logistics Agency Disposition Services Afghanistan Disposal Process Needed Improvement

    DTIC Science & Technology

    2013-11-08

    audit, and management was proactive in correcting the deficiencies we identified. DLA DS eliminated backlogs, identified and corrected system ...problems, provided additional system training, corrected coding errors, added personnel to key positions, addressed scale issues, submitted debit...Service Automated Information System to the Reutilization Business Integration (RBI) solution. The implementation of RBI in Afghanistan occurred in

  13. Search-based model identification of smart-structure damage

    NASA Technical Reports Server (NTRS)

    Glass, B. J.; Macalou, A.

    1991-01-01

    This paper describes the use of a combined model and parameter identification approach, based on modal analysis and artificial intelligence (AI) techniques, for identifying damage or flaws in a rotating truss structure incorporating embedded piezoceramic sensors. This smart structure example is representative of a class of structures commonly found in aerospace systems and next generation space structures. Artificial intelligence techniques of classification, heuristic search, and an object-oriented knowledge base are used in an AI-based model identification approach. A finite model space is classified into a search tree, over which a variant of best-first search is used to identify the model whose stored response most closely matches that of the input. Newly-encountered models can be incorporated into the model space. This adaptiveness demonstrates the potential for learning control. Following this output-error model identification, numerical parameter identification is used to further refine the identified model. Given the rotating truss example in this paper, noisy data corresponding to various damage configurations are input to both this approach and a conventional parameter identification method. The combination of the AI-based model identification with parameter identification is shown to lead to smaller parameter corrections than required by the use of parameter identification alone.
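
    The abstract describes best-first search over a tree-structured model space, selecting the model whose stored response best matches the observed response. The sketch below illustrates that idea only; the node structure, the mean-squared-error scoring, the tolerance, and the toy "damage" models are illustrative assumptions, not the authors' modal-analysis implementation.

```python
# Minimal sketch (assumed details) of best-first search over a tree of candidate
# damage models: expand the most promising model first and stop when its stored
# response matches the observed response closely enough.
import heapq
import math

class ModelNode:
    def __init__(self, name, stored_response, children=None):
        self.name = name
        self.stored_response = stored_response      # e.g. modal frequencies
        self.children = children or []

def response_error(candidate, observed):
    """Mean-squared error between a model's stored response and the input."""
    return sum((c - o) ** 2 for c, o in zip(candidate, observed)) / len(observed)

def best_first_search(root, observed, tol=1e-2):
    counter = 0                                     # tie-breaker for the heap
    frontier = [(response_error(root.stored_response, observed), counter, root)]
    best_err, best_node = math.inf, root
    while frontier:
        err, _, node = heapq.heappop(frontier)
        if err < best_err:
            best_err, best_node = err, node
        if err <= tol:                              # close enough: accept this model
            break
        for child in node.children:
            counter += 1
            heapq.heappush(frontier,
                           (response_error(child.stored_response, observed),
                            counter, child))
    return best_node, best_err

# Toy model space: an undamaged truss and two hypothetical damage configurations.
root = ModelNode("undamaged", [10.1, 22.3, 35.0], [
    ModelNode("joint-3 damage", [9.6, 21.8, 34.1]),
    ModelNode("member-7 damage", [10.0, 20.9, 33.2]),
])
model, err = best_first_search(root, observed=[9.7, 21.7, 34.0])
print(model.name, round(err, 3))
```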

  14. Accurate and predictive antibody repertoire profiling by molecular amplification fingerprinting.

    PubMed

    Khan, Tarik A; Friedensohn, Simon; Gorter de Vries, Arthur R; Straszewski, Jakub; Ruscheweyh, Hans-Joachim; Reddy, Sai T

    2016-03-01

    High-throughput antibody repertoire sequencing (Ig-seq) provides quantitative molecular information on humoral immunity. However, Ig-seq is compromised by biases and errors introduced during library preparation and sequencing. By using synthetic antibody spike-in genes, we determined that primer bias from multiplex polymerase chain reaction (PCR) library preparation resulted in antibody frequencies with only 42 to 62% accuracy. Additionally, Ig-seq errors resulted in antibody diversity measurements being overestimated by up to 5000-fold. To rectify this, we developed molecular amplification fingerprinting (MAF), which uses unique molecular identifier (UID) tagging before and during multiplex PCR amplification, which enabled tagging of transcripts while accounting for PCR efficiency. Combined with a bioinformatic pipeline, MAF bias correction led to measurements of antibody frequencies with up to 99% accuracy. We also used MAF to correct PCR and sequencing errors, resulting in enhanced accuracy of full-length antibody diversity measurements, achieving 98 to 100% error correction. Using murine MAF-corrected data, we established a quantitative metric of recent clonal expansion-the intraclonal diversity index-which measures the number of unique transcripts associated with an antibody clone. We used this intraclonal diversity index along with antibody frequencies and somatic hypermutation to build a logistic regression model for prediction of the immunological status of clones. The model was able to predict clonal status with high confidence but only when using MAF error and bias corrected Ig-seq data. Improved accuracy by MAF provides the potential to greatly advance Ig-seq and its utility in immunology and biotechnology.
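
    The intraclonal diversity index described above counts the unique transcripts (UIDs) associated with an antibody clone, and UID counts also yield bias-corrected clonal frequencies. The sketch below shows only that counting step under assumed, simplified inputs; the record fields are hypothetical and the published MAF pipeline does considerably more (consensus building, error correction, regression modeling).

```python
# Sketch of UID-based clonal metrics in the spirit of MAF: unique molecular
# identifiers per clone as a proxy for the intraclonal diversity index, and
# UID-based clonal frequencies. Inputs are toy (clone, UID) pairs.
from collections import defaultdict

reads = [
    ("clone_A", "UID001"), ("clone_A", "UID001"), ("clone_A", "UID002"),
    ("clone_B", "UID003"), ("clone_B", "UID004"), ("clone_B", "UID005"),
]

uids_per_clone = defaultdict(set)
for clone, uid in reads:
    uids_per_clone[clone].add(uid)

total_uids = sum(len(u) for u in uids_per_clone.values())
for clone, uids in sorted(uids_per_clone.items()):
    diversity_index = len(uids)                 # unique transcripts per clone
    frequency = len(uids) / total_uids          # UID-based clonal frequency
    print(clone, diversity_index, round(frequency, 3))
```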

  15. Accurate and predictive antibody repertoire profiling by molecular amplification fingerprinting

    PubMed Central

    Khan, Tarik A.; Friedensohn, Simon; de Vries, Arthur R. Gorter; Straszewski, Jakub; Ruscheweyh, Hans-Joachim; Reddy, Sai T.

    2016-01-01

    High-throughput antibody repertoire sequencing (Ig-seq) provides quantitative molecular information on humoral immunity. However, Ig-seq is compromised by biases and errors introduced during library preparation and sequencing. By using synthetic antibody spike-in genes, we determined that primer bias from multiplex polymerase chain reaction (PCR) library preparation resulted in antibody frequencies with only 42 to 62% accuracy. Additionally, Ig-seq errors resulted in antibody diversity measurements being overestimated by up to 5000-fold. To rectify this, we developed molecular amplification fingerprinting (MAF), which uses unique molecular identifier (UID) tagging before and during multiplex PCR amplification, which enabled tagging of transcripts while accounting for PCR efficiency. Combined with a bioinformatic pipeline, MAF bias correction led to measurements of antibody frequencies with up to 99% accuracy. We also used MAF to correct PCR and sequencing errors, resulting in enhanced accuracy of full-length antibody diversity measurements, achieving 98 to 100% error correction. Using murine MAF-corrected data, we established a quantitative metric of recent clonal expansion—the intraclonal diversity index—which measures the number of unique transcripts associated with an antibody clone. We used this intraclonal diversity index along with antibody frequencies and somatic hypermutation to build a logistic regression model for prediction of the immunological status of clones. The model was able to predict clonal status with high confidence but only when using MAF error and bias corrected Ig-seq data. Improved accuracy by MAF provides the potential to greatly advance Ig-seq and its utility in immunology and biotechnology. PMID:26998518

  16. Prognostic models for predicting posttraumatic seizures during acute hospitalization, and at 1 and 2 years following traumatic brain injury.

    PubMed

    Ritter, Anne C; Wagner, Amy K; Szaflarski, Jerzy P; Brooks, Maria M; Zafonte, Ross D; Pugh, Mary Jo V; Fabio, Anthony; Hammond, Flora M; Dreer, Laura E; Bushnik, Tamara; Walker, William C; Brown, Allen W; Johnson-Greene, Doug; Shea, Timothy; Krellman, Jason W; Rosenthal, Joseph A

    2016-09-01

    Posttraumatic seizures (PTS) are well-recognized acute and chronic complications of traumatic brain injury (TBI). Risk factors have been identified, but considerable variability in who develops PTS remains. Existing PTS prognostic models are not widely adopted for clinical use and do not reflect current trends in injury, diagnosis, or care. We aimed to develop and internally validate preliminary prognostic regression models to predict PTS during acute care hospitalization, and at year 1 and year 2 postinjury. Prognostic models predicting PTS during acute care hospitalization and year 1 and year 2 post-injury were developed using a recent (2011-2014) cohort from the TBI Model Systems National Database. Potential PTS predictors were selected based on previous literature and biologic plausibility. Bivariable logistic regression identified variables with a p-value < 0.20 that were used to fit initial prognostic models. Multivariable logistic regression modeling with backward-stepwise elimination was used to determine reduced prognostic models and to internally validate using 1,000 bootstrap samples. Fit statistics were calculated, correcting for overfitting (optimism). The prognostic models identified sex, craniotomy, contusion load, and pre-injury limitation in learning/remembering/concentrating as significant PTS predictors during acute hospitalization. Significant predictors of PTS at year 1 were subdural hematoma (SDH), contusion load, craniotomy, craniectomy, seizure during acute hospitalization, duration of posttraumatic amnesia, preinjury mental health treatment/psychiatric hospitalization, and preinjury incarceration. Year 2 significant predictors were similar to those of year 1: SDH, intraparenchymal fragment, craniotomy, craniectomy, seizure during acute hospitalization, and preinjury incarceration. Corrected concordance (C) statistics were 0.599, 0.747, and 0.716 for acute hospitalization, year 1, and year 2 models, respectively. The prognostic model for PTS during acute hospitalization did not discriminate well. Year 1 and year 2 models showed fair to good predictive validity for PTS. Cranial surgery, although medically necessary, requires ongoing research regarding potential benefits of increased monitoring for signs of epileptogenesis, PTS prophylaxis, and/or rehabilitation/social support. Future studies should externally validate models and determine clinical utility. Wiley Periodicals, Inc. © 2016 International League Against Epilepsy.
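
    The internal validation described above uses bootstrap resampling to correct the concordance (C) statistic for optimism. The sketch below shows that optimism-bootstrap procedure on simulated data with scikit-learn; the predictors, sample size, and number of resamples are arbitrary assumptions, not the TBI Model Systems analysis.

```python
# Sketch of bootstrap optimism correction for a logistic model's concordance
# (C) statistic. Data are simulated; 200 resamples keep the example quick
# (the study used 1,000).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, p = 500, 4
X = rng.normal(size=(n, p))
y = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1]))))

def fit_auc(X_train, y_train, X_eval, y_eval):
    model = LogisticRegression().fit(X_train, y_train)
    return roc_auc_score(y_eval, model.predict_proba(X_eval)[:, 1])

apparent_c = fit_auc(X, y, X, y)                # C statistic on the original data

optimism = []
for _ in range(200):
    idx = rng.integers(0, n, n)                 # bootstrap resample
    boot_c = fit_auc(X[idx], y[idx], X[idx], y[idx])   # apparent C in the resample
    test_c = fit_auc(X[idx], y[idx], X, y)             # bootstrap model on original data
    optimism.append(boot_c - test_c)

corrected_c = apparent_c - np.mean(optimism)    # optimism-corrected C statistic
print(round(apparent_c, 3), round(corrected_c, 3))
```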

  17. Gradient nonlinearity calibration and correction for a compact, asymmetric magnetic resonance imaging gradient system.

    PubMed

    Tao, S; Trzasko, J D; Gunter, J L; Weavers, P T; Shu, Y; Huston, J; Lee, S K; Tan, E T; Bernstein, M A

    2017-01-21

    Due to engineering limitations, the spatial encoding gradient fields in conventional magnetic resonance imaging cannot be perfectly linear and always contain higher-order, nonlinear components. If ignored during image reconstruction, gradient nonlinearity (GNL) manifests as image geometric distortion. Given an estimate of the GNL field, this distortion can be corrected to a degree proportional to the accuracy of the field estimate. The GNL of a gradient system is typically characterized using a spherical harmonic polynomial model with model coefficients obtained from electromagnetic simulation. Conventional whole-body gradient systems are symmetric in design; typically, only odd-order terms up to the 5th-order are required for GNL modeling. Recently, a high-performance, asymmetric gradient system was developed, which exhibits more complex GNL that requires higher-order terms including both odd- and even-orders for accurate modeling. This work characterizes the GNL of this system using an iterative calibration method and a fiducial phantom used in ADNI (Alzheimer's Disease Neuroimaging Initiative). The phantom was scanned at different locations inside the 26 cm diameter-spherical-volume of this gradient, and the positions of fiducials in the phantom were estimated. An iterative calibration procedure was utilized to identify the model coefficients that minimize the mean-squared-error between the true fiducial positions and the positions estimated from images corrected using these coefficients. To examine the effect of higher-order and even-order terms, this calibration was performed using spherical harmonic polynomial of different orders up to the 10th-order including even- and odd-order terms, or odd-order only. The results showed that the model coefficients of this gradient can be successfully estimated. The residual root-mean-squared-error after correction using up to the 10th-order coefficients was reduced to 0.36 mm, yielding spatial accuracy comparable to conventional whole-body gradients. The even-order terms were necessary for accurate GNL modeling. In addition, the calibrated coefficients improved image geometric accuracy compared with the simulation-based coefficients.

  18. Statistical Evaluation of Causal Factors Associated with Astronaut Shoulder Injury in Space Suits.

    PubMed

    Anderson, Allison P; Newman, Dava J; Welsch, Roy E

    2015-07-01

    Shoulder injuries due to working inside the space suit are some of the most serious and debilitating injuries astronauts encounter. Space suit injuries occur primarily in the Neutral Buoyancy Laboratory (NBL) underwater training facility due to accumulated musculoskeletal stress. We quantitatively explored the underlying causal mechanisms of injury. Logistic regression was used to identify relevant space suit components, training environment variables, and anthropometric dimensions related to an increased propensity for space-suited injury. Two groups of subjects were analyzed: those whose reported shoulder incident is attributable to the NBL or working in the space suit, and those whose shoulder incidence began in active duty, meaning working in the suit could be a contributing factor. For both groups, percent of training performed in the space suit planar hard upper torso (HUT) was the most important predictor variable for injury. Frequency of training and recovery between training were also significant metrics. The most relevant anthropometric dimensions were bideltoid breadth, expanded chest depth, and shoulder circumference. Finally, record of previous injury was found to be a relevant predictor for subsequent injury. The first statistical model correctly identifies 39% of injured subjects, while the second model correctly identifies 68% of injured subjects. A review of the literature suggests this is the first work to quantitatively evaluate the hypothesized causal mechanisms of all space-suited shoulder injuries. Although limited in predictive capability, each of the identified variables can be monitored and modified operationally to reduce future impacts on an astronaut's health.

  19. On the importance of appropriate precipitation gauge catch correction for hydrological modelling at mid to high latitudes

    NASA Astrophysics Data System (ADS)

    Stisen, S.; Højberg, A. L.; Troldborg, L.; Refsgaard, J. C.; Christensen, B. S. B.; Olsen, M.; Henriksen, H. J.

    2012-11-01

    Precipitation gauge catch correction is often given very little attention in hydrological modelling compared to model parameter calibration. This is critical because significant precipitation biases often make the calibration exercise pointless, especially when supposedly physically-based models are in play. This study addresses the general importance of appropriate precipitation catch correction through a detailed modelling exercise. An existing precipitation gauge catch correction method addressing solid and liquid precipitation is applied, both as national mean monthly correction factors based on a historic 30 yr record and as gridded daily correction factors based on local daily observations of wind speed and temperature. The two methods, named the historic mean monthly (HMM) and the time-space variable (TSV) correction, resulted in different winter precipitation rates for the period 1990-2010. The resulting precipitation datasets were evaluated through the comprehensive Danish National Water Resources model (DK-Model), revealing major differences in both model performance and optimised model parameter sets. Simulated stream discharge is improved significantly when introducing the TSV correction, whereas the simulated hydraulic heads and multi-annual water balances performed similarly due to recalibration adjusting model parameters to compensate for input biases. The resulting optimised model parameters are much more physically plausible for the model based on the TSV correction of precipitation. A proxy-basin test where calibrated DK-Model parameters were transferred to another region without site specific calibration showed better performance for parameter values based on the TSV correction. Similarly, the performances of the TSV correction method were superior when considering two single years with a much dryer and a much wetter winter, respectively, as compared to the winters in the calibration period (differential split-sample tests). We conclude that TSV precipitation correction should be carried out for studies requiring a sound dynamic description of hydrological processes, and it is of particular importance when using hydrological models to make predictions for future climates when the snow/rain composition will differ from the past climate. This conclusion is expected to be applicable for mid to high latitudes, especially in coastal climates where winter precipitation types (solid/liquid) fluctuate significantly, causing climatological mean correction factors to be inadequate.
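
    To make the contrast between the two correction strategies concrete, the sketch below applies a fixed historic mean-monthly (HMM) factor and a time-space variable (TSV) factor driven by daily wind speed and temperature. The correction-factor formula and all coefficients are illustrative placeholders, not the operational Danish catch-correction model.

```python
# Sketch contrasting a fixed mean-monthly catch correction with a daily,
# wind- and temperature-dependent (TSV) correction. The factor function and
# its coefficients are hypothetical.
def catch_correction_factor(wind_speed, temperature):
    """Hypothetical factor: larger losses for windy, cold (snowy) days."""
    if temperature < 0.0:           # treat as solid precipitation
        return 1.0 + 0.10 * wind_speed
    return 1.0 + 0.02 * wind_speed  # liquid precipitation is undercaught less

daily = [
    # (gauge_precip_mm, wind_m_per_s, temp_C)
    (4.0, 6.0, -2.0),
    (7.5, 3.0,  5.0),
    (1.2, 8.0, -5.0),
]

HMM_FACTOR = 1.20                   # e.g. a fixed winter-month correction factor

tsv_total = sum(p * catch_correction_factor(w, t) for p, w, t in daily)
hmm_total = sum(p * HMM_FACTOR for p, _, _ in daily)
print(round(tsv_total, 2), round(hmm_total, 2))
```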

  20. CRISPR-Cas9: a promising genetic engineering approach in cancer research.

    PubMed

    Ratan, Zubair Ahmed; Son, Young-Jin; Haidere, Mohammad Faisal; Uddin, Bhuiyan Mohammad Mahtab; Yusuf, Md Abdullah; Zaman, Sojib Bin; Kim, Jong-Hoon; Banu, Laila Anjuman; Cho, Jae Youl

    2018-01-01

    Bacteria and archaea possess adaptive immunity against foreign genetic materials through clustered regularly interspaced short palindromic repeat (CRISPR) systems. The discovery of this intriguing bacterial system heralded a revolutionary change in the field of medical science. The CRISPR and CRISPR-associated protein 9 (Cas9) based molecular mechanism has been applied to genome editing. This CRISPR-Cas9 technique is now able to mediate precise genetic corrections or disruptions in in vitro and in vivo environments. The accuracy and versatility of CRISPR-Cas have been capitalized upon in biological and medical research and bring new hope to cancer research. Cancer involves complex alterations and multiple mutations, translocations and chromosomal losses and gains. The ability to identify and correct such mutations is an important goal in cancer treatment. In the context of this complex cancer genomic landscape, there is a need for a simple and flexible genetic tool that can easily identify functional cancer driver genes within a comparatively short time. The CRISPR-Cas system shows promising potential for modeling, repairing and correcting genetic events in different types of cancer. This article reviews the concept of CRISPR-Cas, its application and related advantages in oncology.

  1. Ecological changes and local knowledge in a giant honey bee (Apis dorsata F.) hunting community in Palawan, Philippines.

    PubMed

    Matias, Denise Margaret S; Borgemeister, Christian; von Wehrden, Henrik

    2018-02-24

    One of the traditional livelihood practices of indigenous Tagbanuas in Palawan, Philippines is wild honey hunting and gathering from the giant honey bee (Apis dorsata F.). In order to analyze the linkages of the social and ecological systems involved in this indigenous practice, we conducted spatial, quantitative, and qualitative analyses on field data gathered through mapping of global positioning system coordinates, community surveys, and key informant interviews. We found that only 24% of the 251 local community members surveyed could correctly identify the giant honey bee. Inferential statistics showed that a lower level of formal education strongly correlates with correct identification of the giant honey bee. Spatial analysis revealed that mean NDVI of sampled nesting tree areas has dropped from 0.61 in the year 1988 to 0.41 in 2015. However, those who correctly identified the giant honey bee lived in areas with high vegetation cover. Decreasing vegetation cover limits the presence of wild honey bees and this may also be limiting direct experience of the community with wild honey bees. However, with causality yet to be established, we recommend conducting further studies to concretely model feedbacks between ecological changes and local knowledge.

  2. A fundamental model of quasi-static wheelchair biomechanics.

    PubMed

    Leary, M; Gruijters, J; Mazur, M; Subic, A; Burton, M; Fuss, F K

    2012-11-01

    The performance of a wheelchair system is a function of user anatomy, including arm segment lengths and muscle parameters, and wheelchair geometry, in particular, seat position relative to the wheel hub. To quantify performance, researchers have proposed a number of predictive models. In particular, the model proposed by Richter is extremely useful for providing initial analysis as it is simple to apply and provides insight into the peak and transient joint torques required to achieve a given angular velocity. The work presented in this paper identifies and corrects a critical error; specifically that the Richter model incorrectly predicts that shoulder torque is due to an anteflexing muscle moment. This identified error was confirmed analytically, graphically and numerically. The authors have developed a corrected, fundamental model which identifies that the shoulder anteflexes only in the first half of the push phase and retroflexes in the second half. The fundamental model has been extended by the authors to obtain novel data on joint and net power as a function of push progress. These outcomes indicate that shoulder power is positive in the first half of the push phase (concentrically contracting anteflexors) and negative in the second half (eccentrically contracting retroflexors). As the eccentric contraction introduces adverse negative power, these considerations are essential when optimising wheelchair design in terms of the user's musculoskeletal system. The proposed fundamental model was applied to assess the effect of vertical seat position on joint torques and power. Increasing the seat height increases the peak positive (concentric) shoulder and elbow torques while reducing the associated (eccentric) peak negative torque. Furthermore, the transition from positive to negative shoulder torque (as well as from positive to negative power) occurs later in the push phase with increasing seat height. These outcomes will aid in the optimisation of manual wheelchair propulsion biomechanics by minimising adverse negative muscle power, and allow joint torques to be manipulated as required to minimise injury or aid in rehabilitation. Copyright © 2012. Published by Elsevier Ltd.

  3. Statistical tests and identifiability conditions for pooling and analyzing multisite datasets.

    PubMed

    Zhou, Hao Henry; Singh, Vikas; Johnson, Sterling C; Wahba, Grace

    2018-02-13

    When sample sizes are small, the ability to identify weak (but scientifically interesting) associations between a set of predictors and a response may be enhanced by pooling existing datasets. However, variations in acquisition methods and the distribution of participants or observations between datasets, especially due to the distributional shifts in some predictors, may obfuscate real effects when datasets are combined. We present a rigorous statistical treatment of this problem and identify conditions where we can correct the distributional shift. We also provide an algorithm for the situation where the correction is identifiable. We analyze various properties of the framework for testing model fit, constructing confidence intervals, and evaluating consistency characteristics. Our technical development is motivated by Alzheimer's disease (AD) studies, and we present empirical results showing that our framework enables harmonizing of protein biomarkers, even when the assays across sites differ. Our contribution may, in part, mitigate a bottleneck that researchers face in clinical research when pooling smaller sized datasets and may offer benefits when the subjects of interest are difficult to recruit or when resources prohibit large single-site studies. Copyright © 2018 the Author(s). Published by PNAS.

  4. Tracking linkage to HIV care for former prisoners

    PubMed Central

    Montague, Brian T.; Rosen, David L.; Solomon, Liza; Nunn, Amy; Green, Traci; Costa, Michael; Baillargeon, Jacques; Wohl, David A.; Paar, David P.; Rich, Josiah D.; Study Group, on behalf of the LINCS

    2012-01-01

    Improving testing and uptake of care among highly impacted populations is a critical element of Seek, Test, Treat and Retain strategies for reducing HIV incidence in the community. HIV disproportionately impacts prisoners. Although incarceration provides an opportunity to diagnose and initiate therapy, treatment is frequently disrupted after release. Although model programs exist to support linkage to care on release, there is a lack of scalable metrics with which to assess the adequacy of linkage to care after release. Linking Ryan White program Client Level Data (CLD) files reported to HRSA with corrections release data offers an attractive means of generating these metrics. Identified only by use of a confidential encrypted Unique Client Identifier (eUCI), these CLD files allow collection of key clinical indicators across the system of Ryan White funded providers. Using eUCIs generated from corrections release data sets as a linkage tool, the time to the first service at community providers, along with key clinical indicators of patient status at entry into care, can be determined as measures of linkage adequacy. Using this strategy, high- and low-performing sites can be identified and best practices can be identified to reproduce these successes in other settings. PMID:22561157
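
    The linkage metric described above amounts to joining release records to service records on an encrypted identifier and computing the time to the first post-release service. The sketch below shows that join; the hash used here is a stand-in for illustration only, not HRSA's actual eUCI algorithm, and the records are invented.

```python
# Sketch of matching corrections release records to Ryan White client-level
# data by an encrypted identifier and computing time to first post-release
# service. The identifier construction is a stand-in, not the real eUCI.
import hashlib
from datetime import date

def pseudo_euci(first, last, dob, sex):
    """Stand-in encrypted unique client identifier (hypothetical)."""
    key = f"{first[:1]}{last[:1]}{dob}{sex}".upper()
    return hashlib.sha256(key.encode()).hexdigest()

releases = [  # (identifier, release date)
    (pseudo_euci("Ann", "Smith", "1975-03-02", "F"), date(2011, 1, 10)),
]
services = [  # (identifier, first Ryan White service date)
    (pseudo_euci("Ann", "Smith", "1975-03-02", "F"), date(2011, 2, 21)),
]

first_service = {uid: d for uid, d in services}
for uid, released in releases:
    if uid in first_service:
        days = (first_service[uid] - released).days
        print(uid[:8], "linked to care after", days, "days")
    else:
        print(uid[:8], "no linkage to care recorded")
```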

  5. Cancer biomarker discovery is improved by accounting for variability in general levels of drug sensitivity in pre-clinical models.

    PubMed

    Geeleher, Paul; Cox, Nancy J; Huang, R Stephanie

    2016-09-21

    We show that variability in general levels of drug sensitivity in pre-clinical cancer models confounds biomarker discovery. However, using a very large panel of cell lines, each treated with many drugs, we could estimate a general level of sensitivity to all drugs in each cell line. By conditioning on this variable, biomarkers were identified that were more likely to be effective in clinical trials than those identified using a conventional uncorrected approach. We find that differences in general levels of drug sensitivity are driven by biologically relevant processes. We developed a gene expression based method that can be used to correct for this confounder in future studies.
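
    The key move described above is conditioning the drug-response/biomarker association on each cell line's general level of drug sensitivity. The sketch below illustrates the idea on simulated data, estimating the general level as the mean response across all drugs; the published method instead derives it from gene expression, and all numbers here are arbitrary.

```python
# Sketch of conditioning a biomarker association on general drug sensitivity.
# Simulated data: every drug's response is driven by a latent general
# sensitivity; only drug 0 is truly biomarker-driven.
import numpy as np

rng = np.random.default_rng(1)
n_lines, n_drugs = 100, 50
general = rng.normal(size=n_lines)                   # latent general sensitivity
biomarker = rng.binomial(1, 0.3, n_lines)            # candidate biomarker status
response = np.outer(general, np.ones(n_drugs)) + rng.normal(scale=0.5,
                                                            size=(n_lines, n_drugs))
response[:, 0] += 0.8 * biomarker                    # true effect for drug 0 only

gen_est = response.mean(axis=1)                      # estimate of general sensitivity
y = response[:, 0]

naive_effect = np.polyfit(biomarker, y, 1)[0]        # unconditioned association
X = np.column_stack([np.ones(n_lines), biomarker, gen_est])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)         # conditioned on general level
print(round(naive_effect, 2), round(beta[1], 2))
```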

  6. A realizable explicit algebraic Reynolds stress model for compressible turbulent flow with significant mean dilatation

    NASA Astrophysics Data System (ADS)

    Grigoriev, I. A.; Wallin, S.; Brethouwer, G.; Johansson, A. V.

    2013-10-01

    The explicit algebraic Reynolds stress model of Wallin and Johansson [J. Fluid Mech. 403, 89 (2000)] is extended to compressible and variable-density turbulent flows. This is achieved by correctly taking into account the influence of the mean dilatation on the rapid pressure-strain correlation. The resulting model is formally identical to the original model in the limit of constant density. For two-dimensional mean flows the model is analyzed and the physical root of the resulting quartic equation is identified. Using a fixed-point analysis of homogeneously sheared and strained compressible flows, we show that the new model is realizable, unlike the previous model. Application of the model together with a K - ω model to quasi one-dimensional plane nozzle flow, transcending from subsonic to supersonic regime, also demonstrates realizability. Negative "dilatational" production of turbulence kinetic energy competes with positive "incompressible" production, eventually making the total production negative during the spatial evolution of the nozzle flow. Finally, an approach to include the baroclinic effect into the dissipation equation is proposed and an algebraic model for density-velocity correlations is outlined to estimate the corrections associated with density fluctuations. All in all, the new model can become a significant tool for CFD (computational fluid dynamics) of compressible flows.

  7. Author Correction: Nitrogen-rich organic soils under warm well-drained conditions are global nitrous oxide emission hotspots.

    PubMed

    Pärn, Jaan; Verhoeven, Jos T A; Butterbach-Bahl, Klaus; Dise, Nancy B; Ullah, Sami; Aasa, Anto; Egorov, Sergey; Espenberg, Mikk; Järveoja, Järvi; Jauhiainen, Jyrki; Kasak, Kuno; Klemedtsson, Leif; Kull, Ain; Laggoun-Défarge, Fatima; Lapshina, Elena D; Lohila, Annalea; Lõhmus, Krista; Maddison, Martin; Mitsch, William J; Müller, Christoph; Niinemets, Ülo; Osborne, Bruce; Pae, Taavi; Salm, Jüri-Ott; Sgouridis, Fotis; Sohar, Kristina; Soosaar, Kaido; Storey, Kathryn; Teemusk, Alar; Tenywa, Moses M; Tournebize, Julien; Truu, Jaak; Veber, Gert; Villa, Jorge A; Zaw, Seint Sann; Mander, Ülo

    2018-04-26

    The original version of this Article contained an error in the first sentence of the Acknowledgements section, which incorrectly referred to the Estonian Research Council grant identifier as "PUTJD618". The correct version replaces the grant identifier with "PUTJD619". This has been corrected in both the PDF and HTML versions of the Article.

  8. Competency criteria and the class inclusion task: modeling judgments and justifications.

    PubMed

    Thomas, H; Horton, J J

    1997-11-01

    Preschool-age children's class inclusion task responses were modeled as mixtures of different probability distributions. The main idea: different response strategies are equivalent to different probability distributions. A child is taken to display cognitive strategy s if p(s) = P(child uses strategy s | the child's observed score X = x) is the largest such probability across strategies. The general approach is widely applicable to many settings. Both judgment and justification questions were asked. Judgment response strategies identified were subclass comparison, guessing, and inclusion logic. Children's justifications lagged their judgments in development. Although justification responses may be useful, C. J. Brainerd was largely correct: if a single response variable is to be selected, a judgments variable is likely the preferable one. But the process must be modeled to identify cognitive strategies, as B. Hodkin has demonstrated.
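
    The most-probable-strategy rule can be illustrated with a small mixture of score distributions. In the sketch below each strategy generates item scores from a binomial distribution and has a prior mixing proportion; both the success probabilities and the proportions are illustrative assumptions, not estimates from the study.

```python
# Sketch of assigning the most probable strategy given an observed score:
# P(s | x) is proportional to P(x | s) * P(s). Per-strategy binomial success
# probabilities and mixing weights are illustrative.
from math import comb

N_ITEMS = 8
strategies = {          # strategy: (P(correct item), prior mixing proportion)
    "guessing":        (0.50, 0.3),
    "subclass":        (0.15, 0.4),
    "inclusion_logic": (0.95, 0.3),
}

def binom_pmf(x, n, p):
    return comb(n, x) * p**x * (1 - p)**(n - x)

def most_probable_strategy(x):
    joint = {s: binom_pmf(x, N_ITEMS, p) * w for s, (p, w) in strategies.items()}
    total = sum(joint.values())
    posterior = {s: v / total for s, v in joint.items()}
    return max(posterior, key=posterior.get), posterior

strategy, post = most_probable_strategy(x=7)
print(strategy, {s: round(v, 3) for s, v in post.items()})
```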

  9. Contact stress analysis of spiral bevel gears using nonlinear finite element static analysis

    NASA Technical Reports Server (NTRS)

    Bibel, G. D.; Kumar, A.; Reddy, S.; Handschuh, R.

    1993-01-01

    A procedure is presented for performing three-dimensional stress analysis of spiral bevel gears in mesh using the finite element method. The procedure involves generating a finite element model by solving equations that identify tooth surface coordinates. Coordinate transformations are used to orientate the gear and pinion for gear meshing. Contact boundary conditions are simulated with gap elements. A solution technique for correct orientation of the gap elements is given. Example models and results are presented.

  10. Certainty grids for mobile robots

    NASA Technical Reports Server (NTRS)

    Moravec, H. P.

    1987-01-01

    A numerical representation of uncertain and incomplete sensor knowledge called Certainty Grids has been used successfully in several mobile robot control programs, and has proven itself to be a powerful and efficient unifying solution for sensor fusion, motion planning, landmark identification, and many other central problems. Researchers propose to build a software framework running on processors onboard the new Uranus mobile robot that will maintain a probabilistic, geometric map of the robot's surroundings as it moves. The certainty grid representation will allow this map to be incrementally updated in a uniform way from various sources including sonar, stereo vision, proximity and contact sensors. The approach can correctly model the fuzziness of each reading, while at the same time combining multiple measurements to produce sharper map features, and it can deal correctly with uncertainties in the robot's motion. The map will be used by planning programs to choose clear paths, identify locations (by correlating maps), identify well-known and insufficiently sensed terrain, and perhaps identify objects by shape. The certainty grid representation can be extended in the same dimension and used to detect and track moving objects.
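
    The incremental, multi-sensor map update described above is naturally expressed as a log-odds update on each grid cell, so that repeated readings sharpen features while disagreeing readings cancel. The sketch below shows that update on a tiny grid; the inverse sensor model (a single occupancy probability per reading) is a toy stand-in for the robot's actual sonar, stereo, and proximity models.

```python
# Sketch of an incremental certainty-grid update in log-odds form: each
# reading adjusts the occupancy evidence of a cell, and independent readings
# combine multiplicatively in odds (additively in log-odds).
import math

ROWS, COLS = 5, 5
log_odds = [[0.0] * COLS for _ in range(ROWS)]      # 0.0 corresponds to p = 0.5

def update(cell, p_occupied):
    """Fuse one reading into a cell using the log-odds update rule."""
    r, c = cell
    log_odds[r][c] += math.log(p_occupied / (1.0 - p_occupied))

def probability(cell):
    r, c = cell
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds[r][c]))

# Two independent readings suggest cell (2, 3) is occupied; one reading
# suggests cell (1, 1) is free.
update((2, 3), 0.7)
update((2, 3), 0.7)
update((1, 1), 0.3)
print(round(probability((2, 3)), 2), round(probability((1, 1)), 2))
```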

  11. Balanced Cortical Microcircuitry for Spatial Working Memory Based on Corrective Feedback Control

    PubMed Central

    2014-01-01

    A hallmark of working memory is the ability to maintain graded representations of both the spatial location and amplitude of a memorized stimulus. Previous work has identified a neural correlate of spatial working memory in the persistent maintenance of spatially specific patterns of neural activity. How such activity is maintained by neocortical circuits remains unknown. Traditional models of working memory maintain analog representations of either the spatial location or the amplitude of a stimulus, but not both. Furthermore, although most previous models require local excitation and lateral inhibition to maintain spatially localized persistent activity stably, the substrate for lateral inhibitory feedback pathways is unclear. Here, we suggest an alternative model for spatial working memory that is capable of maintaining analog representations of both the spatial location and amplitude of a stimulus, and that does not rely on long-range feedback inhibition. The model consists of a functionally columnar network of recurrently connected excitatory and inhibitory neural populations. When excitation and inhibition are balanced in strength but offset in time, drifts in activity trigger spatially specific negative feedback that corrects memory decay. The resulting networks can temporally integrate inputs at any spatial location, are robust against many commonly considered perturbations in network parameters, and, when implemented in a spiking model, generate irregular neural firing characteristic of that observed experimentally during persistent activity. This work suggests balanced excitatory–inhibitory memory circuits implementing corrective negative feedback as a substrate for spatial working memory. PMID:24828633

  12. Three-dimensional ray tracing for refractive correction of human eye ametropies

    NASA Astrophysics Data System (ADS)

    Jimenez-Hernandez, J. A.; Diaz-Gonzalez, G.; Trujillo-Romero, F.; Iturbe-Castillo, M. D.; Juarez-Salazar, R.; Santiago-Alvarado, A.

    2016-09-01

    Ametropies of the human eye are refractive defects that hamper correct imaging on the retina. The most common ways to correct them are spectacles, contact lenses, and modern methods such as laser surgery. In any case, it is very important to identify the ametropia grade in order to design the optimum correction. In the case of laser surgery, it is necessary to define a new shape of the cornea in order to obtain the desired refractive correction. Therefore, a computational tool to calculate the focal length of the optical system of the eye as its geometrical parameters vary is required. Additionally, a clear and understandable visualization of the evaluation process is desirable. In this work, a model of the human eye based on geometrical optics principles is presented. Simulations of light rays coming from a point source at six meters from the cornea are shown. We perform ray tracing in three dimensions in order to visualize the focusing regions and estimate the power of the optical system. The common parameters of ametropies can be easily modified and analyzed in the simulation through an intuitive graphical user interface.
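
    The core operation in such a 3D ray tracer is refracting each ray at an interface. The sketch below uses the vector form of Snell's law for a single refraction; the surface normal, refractive indices, and incoming ray are arbitrary example values, and the full eye model of course chains many such refractions over curved surfaces.

```python
# Sketch of refracting a ray at an interface with the vector form of
# Snell's law, the building block of a 3D ray tracer for the eye model.
import numpy as np

def refract(direction, normal, n1, n2):
    """Return the refracted unit direction, or None for total internal reflection."""
    d = direction / np.linalg.norm(direction)
    n = normal / np.linalg.norm(normal)
    cos_i = -np.dot(d, n)
    if cos_i < 0:                      # normal points the wrong way; flip it
        n, cos_i = -n, -cos_i
    eta = n1 / n2
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None                    # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

# Ray from air (n = 1.000) into the cornea (n ~ 1.376) at a flat interface.
incoming = np.array([0.0, -0.2, -1.0])
surface_normal = np.array([0.0, 0.0, 1.0])
print(refract(incoming, surface_normal, 1.000, 1.376))
```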

  13. 76 FR 7513 - Airworthiness Directives; The Boeing Company Model 747-400 and -400F Series Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-10

    .... Cracking in the MEC drip shield and exhaust plenum has been identified as part of the water leak path into the MEC. This condition, if not corrected, could result in water penetration into the MEC, which could... of cracked MEC drip shields. We are proposing this AD to prevent water penetration into the MEC...

  14. 75 FR 5695 - Airworthiness Directives; PIAGGIO AERO INDUSTRIES S.p.A Model PIAGGIO P-180 Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-04

    ... airworthiness information (MCAI) issued by an aviation authority of another country to identify and correct an... confirm the low stress level in that area, a reinforcement of the ``0'' pressure bulkhead is suggested to... aircraft from MSN 1106 up to 1189 could have the same cracks. Although calculations confirm the low stress...

  15. 77 FR 76356 - Privacy of Consumer Financial Information Under Title V of the Gramm-Leach-Bliley Act

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-28

    ... Under Title V of the Gramm-Leach-Bliley Act CFR Correction In Title 17 of the Code of Federal...). (2) Title. (3) Key frame (Why?, What?, How?). (4) Disclosure table (``Reasons we can share your... financial institution provides the model form and that institution is clearly identified in the title on...

  16. Increased Genomic Prediction Accuracy in Wheat Breeding Through Spatial Adjustment of Field Trial Data

    PubMed Central

    Lado, Bettina; Matus, Ivan; Rodríguez, Alejandra; Inostroza, Luis; Poland, Jesse; Belzile, François; del Pozo, Alejandro; Quincke, Martín; Castro, Marina; von Zitzewitz, Jarislav

    2013-01-01

    In crop breeding, the interest of predicting the performance of candidate cultivars in the field has increased due to recent advances in molecular breeding technologies. However, the complexity of the wheat genome presents some challenges for applying new technologies in molecular marker identification with next-generation sequencing. We applied genotyping-by-sequencing, a recently developed method to identify single-nucleotide polymorphisms, in the genomes of 384 wheat (Triticum aestivum) genotypes that were field tested under three different water regimes in Mediterranean climatic conditions: rain-fed only, mild water stress, and fully irrigated. We identified 102,324 single-nucleotide polymorphisms in these genotypes, and the phenotypic data were used to train and test genomic selection models intended to predict yield, thousand-kernel weight, number of kernels per spike, and heading date. Phenotypic data showed marked spatial variation. Therefore, different models were tested to correct the trends observed in the field. A mixed-model using moving-means as a covariate was found to best fit the data. When we applied the genomic selection models, the accuracy of predicted traits increased with spatial adjustment. Multiple genomic selection models were tested, and a Gaussian kernel model was determined to give the highest accuracy. The best predictions between environments were obtained when data from different years were used to train the model. Our results confirm that genotyping-by-sequencing is an effective tool to obtain genome-wide information for crops with complex genomes, that these data are efficient for predicting traits, and that correction of spatial variation is a crucial ingredient to increase prediction accuracy in genomic selection models. PMID:24082033
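
    The moving-mean spatial adjustment described above uses, for each field plot, the average yield of its neighbouring plots as a covariate. The sketch below computes that covariate on a toy plot grid; the grid layout, window radius, and yields are invented, and the actual analysis embeds the covariate in a mixed model.

```python
# Sketch of the moving-mean spatial covariate: average the yields of
# neighbouring plots (excluding the plot itself) for use as a covariate
# when modelling genotype effects.
import numpy as np

yields = np.array([                      # plots laid out on a field grid
    [5.1, 5.3, 4.8, 4.2],
    [5.0, 5.6, 4.5, 4.0],
    [4.7, 4.9, 4.3, 3.8],
])

def moving_mean_covariate(grid, radius=1):
    rows, cols = grid.shape
    cov = np.zeros_like(grid)
    for r in range(rows):
        for c in range(cols):
            r0, r1 = max(0, r - radius), min(rows, r + radius + 1)
            c0, c1 = max(0, c - radius), min(cols, c + radius + 1)
            window = grid[r0:r1, c0:c1]
            cov[r, c] = (window.sum() - grid[r, c]) / (window.size - 1)
    return cov

print(np.round(moving_mean_covariate(yields), 2))
```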

  17. Using Analysis Increments (AI) to Estimate and Correct Systematic Errors in the Global Forecast System (GFS) Online

    NASA Astrophysics Data System (ADS)

    Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.

    2017-12-01

    Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, (ii) implement an online correction (i.e., within the model) scheme to correct GFS following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis Increments represent the corrections that new observations make on, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6-hr, assuming that initial model errors grow linearly and first ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal and diurnal and semidiurnal model biases in GFS to reduce both systematic and random errors. As the error growth in the short-term is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with GFS, correcting temperature and specific humidity online show reduction in model bias in 6-hr forecast. This approach can then be used to guide and optimize the design of sub-grid scale physical parameterizations, more accurate discretization of the model dynamics, boundary conditions, radiative transfer codes, and other potential model improvements which can then replace the empirical correction scheme. The analysis increments also provide guidance in testing new physical parameterizations.
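
    The online correction described above estimates a bias tendency as the time-mean analysis increment divided by the 6-hour assimilation window and adds it as a forcing term to the model tendency. The sketch below shows that arithmetic with a scalar toy "model" standing in for a GFS state variable; the numbers are invented.

```python
# Sketch of online bias correction from analysis increments: the time-mean
# increment over the 6-hour window approximates the bias tendency, added as
# a forcing term when stepping the model forward.
import numpy as np

DT_HOURS = 6.0
increments = np.array([0.32, 0.28, 0.35, 0.30, 0.31])   # analysis minus 6-h forecast

bias_tendency = increments.mean() / DT_HOURS             # per-hour correction

def step_forecast(state, hours, model_tendency, corrected=True):
    """Advance a toy scalar model; optionally add the estimated bias correction."""
    correction = bias_tendency if corrected else 0.0
    return state + hours * (model_tendency + correction)

state0, model_tendency = 280.0, -0.05                     # e.g. a drifting temperature
print(step_forecast(state0, 6, model_tendency, corrected=False),
      step_forecast(state0, 6, model_tendency, corrected=True))
```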

  18. Identification and analysis of student conceptions used to solve chemical equilibrium problems

    NASA Astrophysics Data System (ADS)

    Voska, Kirk William

    This study identified and quantified chemistry conceptions students use when solving chemical equilibrium problems requiring the application of Le Chatelier's principle, and explored the feasibility of designing a paper and pencil test for this purpose. It also demonstrated the utility of conditional probabilities to assess test quality. A 10-item pencil-and-paper, two-tier diagnostic instrument, the Test to Identify Student Conceptualizations (TISC), was developed and administered to 95 second-semester university general chemistry students after they received regular course instruction concerning equilibrium in homogeneous aqueous, heterogeneous aqueous, and homogeneous gaseous systems. The content validity of TISC was established through a review of TISC by a panel of experts; construct validity was established through semi-structured interviews and conditional probabilities. Nine students were then selected from a stratified random sample for interviews to validate TISC. The probability that TISC correctly identified an answer given by a student in an interview was p = .64, while the probability that TISC correctly identified a reason given by a student in an interview was p = .49. Each TISC item contained two parts. In the first part the student selected the correct answer to a problem from a set of four choices. In the second part students wrote reasons for their answer to the first part. TISC questions were designed to identify students' conceptions concerning the application of Le Chatelier's principle, the constancy of the equilibrium constant, K, and the effect of a catalyst. Eleven prevalent incorrect conceptions were identified. This study found students consistently selected correct answers more frequently (53% of the time) than they provided correct reasons (33% of the time). The association between student answers and respective reasons on each TISC item was quantified using conditional probabilities calculated from logistic regression coefficients. The probability a student provided correct reasoning (B) when the student selected a correct answer (A) ranged from P(B|A) = .32 to P(B|A) = .82. However, the probability a student selected a correct answer when they provided correct reasoning ranged from P(A|B) = .96 to P(A|B) = 1. The K-R 20 reliability for TISC was found to be .79.
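
    Converting logistic-regression coefficients into conditional probabilities such as P(B|A) is a direct application of the logistic function. The sketch below shows the calculation with illustrative coefficients, not values from the TISC analysis.

```python
# Sketch of turning logistic-regression coefficients into conditional
# probabilities, e.g. P(correct reason B | correct answer A). Coefficients
# are illustrative.
import math

def logistic(b0, b1, x):
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))

b0, b1 = -1.2, 1.6            # intercept and slope for the "answer correct" predictor

p_reason_given_correct_answer = logistic(b0, b1, 1)   # P(B | A)
p_reason_given_wrong_answer   = logistic(b0, b1, 0)   # P(B | not A)
print(round(p_reason_given_correct_answer, 2),
      round(p_reason_given_wrong_answer, 2))
```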

  19. Topographic correction realization based on the CBERS-02B image

    NASA Astrophysics Data System (ADS)

    Qin, Hui-ping; Yi, Wei-ning; Fang, Yong-hua

    2011-08-01

    The special topography of mountain terrain induces retrieval distortion within the same land-cover species and in surface spectral lines. In order to improve the accuracy of studies of topographic surface characteristics, many researchers have focused on topographic correction. Topographic correction methods can be statistical-empirical or physical models, among which methods based on digital elevation model data are the most popular. Restricted by spatial resolution, previous models mostly corrected the topographic effect on Landsat TM images, whose 30 meter spatial resolution can easily be obtained from the internet or calculated from digital maps. Some researchers have also performed topographic correction on high spatial resolution images such as Quickbird and Ikonos, but there is little related research on the topographic correction of CBERS-02B images. In this study, Liaoning mountain terrain was taken as the study area. The 15 meter original digital elevation model data were interpolated to 2.36 meter to match the image resolution. The C correction, SCS+C correction, Minnaert correction and Ekstrand-r correction were applied to correct the topographic effect, and the corrected results were compared. Scatter diagrams between image digital number and the cosine of the solar incidence angle with respect to the surface normal are shown, and the mean value, standard variance, slope of the scatter diagram, and separation factor were statistically calculated. The results show that shadow is weaker in the corrected images than in the original images and the three-dimensional effect is removed, and the absolute slope of the fitted lines in the scatter diagrams is reduced. The Minnaert correction method gives the most effective result. These findings demonstrate that the above correction methods can be successfully adapted to CBERS-02B images, and that the DEM data can be interpolated step by step to approximate the corresponding spatial resolution when high spatial resolution elevation data are hard to obtain.
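
    Of the methods compared above, the C correction is the simplest to illustrate: regress the digital number on the cosine of the local solar incidence angle, form c as the intercept-to-slope ratio, and rescale each pixel. The sketch below assumes the DEM-derived incidence cosines are already computed and uses toy pixel values.

```python
# Sketch of the C topographic correction: fit DN = a*cos(i) + b, set c = b/a,
# and rescale each pixel by (cos(sz) + c) / (cos(i) + c), where sz is the
# solar zenith angle and i the local solar incidence angle from the DEM.
import numpy as np

cos_i = np.array([0.35, 0.55, 0.75, 0.90, 0.60])   # per-pixel local illumination
dn    = np.array([42.0, 60.0, 78.0, 95.0, 66.0])   # uncorrected digital numbers
cos_sz = 0.82                                      # cosine of the solar zenith angle

a, b = np.polyfit(cos_i, dn, 1)                    # slope and intercept of the fit
c = b / a
dn_corrected = dn * (cos_sz + c) / (cos_i + c)
print(np.round(dn_corrected, 1))
```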

  20. Bayesian semi-parametric analysis of Poisson change-point regression models: application to policy making in Cali, Colombia.

    PubMed

    Park, Taeyoung; Krafty, Robert T; Sánchez, Alvaro I

    2012-07-27

    A Poisson regression model with an offset assumes a constant baseline rate after accounting for measured covariates, which may lead to biased estimates of coefficients in an inhomogeneous Poisson process. To correctly estimate the effect of time-dependent covariates, we propose a Poisson change-point regression model with an offset that allows a time-varying baseline rate. When the nonconstant pattern of a log baseline rate is modeled with a nonparametric step function, the resulting semi-parametric model involves a model component of varying dimension and thus requires a sophisticated varying-dimensional inference to obtain correct estimates of model parameters of fixed dimension. To fit the proposed varying-dimensional model, we devise a state-of-the-art MCMC-type algorithm based on partial collapse. The proposed model and methods are used to investigate an association between daily homicide rates in Cali, Colombia and policies that restrict the hours during which the legal sale of alcoholic beverages is permitted. While simultaneously identifying the latent changes in the baseline homicide rate which correspond to the incidence of sociopolitical events, we explore the effect of policies governing the sale of alcohol on homicide rates and seek a policy that balances the economic and cultural dependencies on alcohol sales to the health of the public.
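
    To make the model structure concrete, the sketch below fits a Poisson GLM with an offset and a single known policy indicator on simulated daily counts. It only illustrates the offset and covariate structure; the model described above additionally lets the baseline rate change at unknown times via a nonparametric step function fitted by MCMC with partial collapse.

```python
# Sketch of a Poisson regression with an offset (log exposure) and a policy
# indicator, on simulated daily homicide counts. The latent baseline change
# at day 120 is deliberately not modelled, to mimic the misspecification the
# change-point model is designed to fix.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
days = 365
population = np.full(days, 2.2e6)                  # exposure (offset)
policy = (np.arange(days) >= 200).astype(float)    # policy in force from day 200
baseline_rate = np.where(np.arange(days) >= 120, 3.5e-6, 2.5e-6)  # latent change
counts = rng.poisson(population * baseline_rate * np.exp(-0.25 * policy))

X = sm.add_constant(policy)
model = sm.GLM(counts, X, family=sm.families.Poisson(),
               offset=np.log(population)).fit()
print(model.params)           # intercept ~ log baseline rate, policy coefficient
```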

  1. Improved techniques for thermomechanical testing in support of deformation modeling

    NASA Technical Reports Server (NTRS)

    Castelli, Michael G.; Ellis, John R.

    1992-01-01

    The feasibility of generating precise thermomechanical deformation data to support constitutive model development was investigated. Here, the requirement is for experimental data that is free from anomalies caused by less than ideal equipment and procedures. A series of exploratory tests conducted on Hastelloy X showed that generally accepted techniques for strain controlled tests were lacking in at least three areas. Specifically, problems were encountered with specimen stability, thermal strain compensation, and temperature/mechanical strain phasing. The source of these difficulties was identified and improved thermomechanical testing techniques to correct them were developed. These goals were achieved by developing improved procedures for measuring and controlling thermal gradients and by designing a specimen specifically for thermomechanical testing. In addition, innovative control strategies were developed to correctly proportion and phase the thermal and mechanical components of strain. Subsequently, the improved techniques were used to generate deformation data for Hastelloy X over the temperature range, 200 to 1000 C.

  2. The relationship between tree growth patterns and likelihood of mortality: A study of two tree species in the Sierra Nevada

    USGS Publications Warehouse

    Das, A.J.; Battles, J.J.; Stephenson, N.L.; van Mantgem, P.J.

    2007-01-01

    We examined mortality of Abies concolor (Gord. & Glend.) Lindl. (white fir) and Pinus lambertiana Dougl. (sugar pine) by developing logistic models using three growth indices obtained from tree rings: average growth, growth trend, and count of abrupt growth declines. For P. lambertiana, models with average growth, growth trend, and count of abrupt declines improved overall prediction (78.6% dead trees correctly classified, 83.7% live trees correctly classified) compared with a model with average recent growth alone (69.6% dead trees correctly classified, 67.3% live trees correctly classified). For A. concolor, counts of abrupt declines and longer time intervals improved overall classification (trees with DBH ≥20 cm: 78.9% dead trees correctly classified and 76.7% live trees correctly classified vs. 64.9% dead trees correctly classified and 77.9% live trees correctly classified; trees with DBH <20 cm: 71.6% dead trees correctly classified and 71.0% live trees correctly classified vs. 67.2% dead trees correctly classified and 66.7% live trees correctly classified). In general, count of abrupt declines improved live-tree classification. External validation of A. concolor models showed that they functioned well at stands not used in model development, and the development of size-specific models demonstrated important differences in mortality risk between understory and canopy trees. Population-level mortality-risk models were developed for A. concolor and generated realistic mortality rates at two sites. Our results support the contention that a more comprehensive use of the growth record yields a more robust assessment of mortality risk. © 2007 NRC.

  3. Non-adiabatic effects in thermochemistry, spectroscopy and kinetics: the general importance of all three Born-Oppenheimer breakdown corrections.

    PubMed

    Reimers, Jeffrey R; McKemmish, Laura K; McKenzie, Ross H; Hush, Noel S

    2015-10-14

    Using a simple model Hamiltonian, the three correction terms for Born-Oppenheimer (BO) breakdown, the adiabatic diagonal correction (DC), the first-derivative momentum non-adiabatic correction (FD), and the second-derivative kinetic-energy non-adiabatic correction (SD), are shown to all contribute to thermodynamic and spectroscopic properties as well as to thermal non-diabatic chemical reaction rates. While DC often accounts for >80% of thermodynamic and spectroscopic property changes, the commonly used practice of including only the FD correction in kinetics calculations is rarely found to be adequate. For electron-transfer reactions not in the inverted region, the common physical picture that diabatic processes occur because of surface hopping at the transition state is proven inadequate as the DC acts first to block access, increasing the transition state energy by (ℏω)²λ/(16J²) (where λ is the reorganization energy, J the electronic coupling and ω the vibration frequency). However, the rate constant in the weakly-coupled Golden-Rule limit is identified as being only inversely proportional to this change rather than exponentially damped, owing to the effects of tunneling and surface hopping. Such weakly-coupled long-range electron-transfer processes should therefore not be described as "non-adiabatic" processes as they are easily described by Born-Huang ground-state adiabatic surfaces made by adding the DC to the BO surfaces; instead, they should be called just "non-Born-Oppenheimer" processes. The model system studied consists of two diabatic harmonic potential-energy surfaces coupled linearly through a single vibration, the "two-site Holstein model". Analytical expressions are derived for the BO breakdown terms, and the model is solved over a large parameter space focusing on both the lowest-energy spectroscopic transitions and the quantum dynamics of coherent-state wavepackets. BO breakdown is investigated pertinent to: ammonia inversion, aromaticity in benzene, the Creutz-Taube ion, the bacterial photosynthetic reaction centre, BNB, the molecular conductor Alq3, and inverted-region charge recombination in a ferrocene-porphyrin-fullerene triad photosynthetic model compound. Throughout, the fundamental nature of BO breakdown is linked to the properties of the cusp catastrophe: the cusp diameter is shown to determine the magnitudes of all couplings, numerical basis-set and trajectory-integration requirements, and to determine the transmission coefficient κ used to understand deviations from transition-state theory.

  4. Knowledge of physical activity guidelines among adults in the United States, HealthStyles 2003-2005.

    PubMed

    Moore, Latetia V; Fulton, Janet; Kruger, Judy; McDivitt, Judith

    2010-03-01

    We estimated percentages of US adults (≥18 years) who knew that prior federal physical activity (PA) guidelines call for a minimum of 30 minutes of moderate-intensity PA most days (≥5)/week using 2003 to 2005 HealthStyles, an annual mail survey. 10,117 participants identified "the minimum amount of moderate-intensity PA the government recommends to get overall health benefits." Response options included 30/≥5, 20/≥3, 30/7, and 60/7 (minutes/days per week), "none of these," and "don't know." The odds of correctly identifying the guideline were modeled by participant sex, age, race/ethnicity, income, education, marital status, body mass index, physical activity level, and survey year using logistic regression. 25.6% of respondents correctly identified the guideline. Women were 30% more likely to identify the guideline than men (Odds Ratio [95% Confidence Limits] (OR) = 1.28 [1.15, 1.44]). Regular PA was positively associated with identifying the guideline versus inactivity (OR = 2.08 [1.73, 2.50]). Blacks and those earning <$15,000 annually were 24% to 32% less likely to identify the guideline than whites and those earning >$60,000, respectively. Most adults did not know the previous moderate-intensity PA recommendation, which indicates a need for effective communication strategies for the new 2008 Physical Activity Guidelines for Adults.

  5. Evaluation of the solubility constants of the hydrated solid phases in the H2O-Al2O3-SO3 ternary system

    NASA Astrophysics Data System (ADS)

    Teyssier, A.; Lagneau, V.; Schmitt, J. M.; Counioux, J. J.; Goutaudier, C.

    2017-04-01

    During the acid processing of aluminosilicate ores, the precipitation of a solid phase principally consisting of hydrated aluminium hydroxysulfates may be observed. The experimental study of the H2O-Al2O3-SO3 ternary system at 25 °C and 101 kPa made it possible to describe the solid-liquid equilibria and to identify the nature, the composition and the solubility of the solid phases which may form during the acid leaching. To predict the appearance of these aluminium hydroxysulfates in more complex systems, their solubility constants were calculated by modelling the experimental solubility results, using a geochemical reaction modelling software, CHESS. A model for non-ideality correction, based on the B-dot equation, was used as it was suitable for the considered ion concentration range. The solubility constants of three out of four solid phases were calculated: 10^4.08 for jurbanite (Al(SO4)(OH).5H2O), 10^28.09 for the solid T (Al8(SO4)5(OH)14.34H2O) and 10^27.28 for the solid V (Al10(SO4)3(OH)24.20H2O). However, the activity correction model was not suitable to determine the solubility constant of alunogen (Al2(SO4)3.15.8H2O), as the ion concentrations of the mixtures were too high and beyond the allowable limits of the model. Another ionic activity correction model, based on the Pitzer equation for example, must be applied to calculate the solubility constant of alunogen.
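
    The B-dot activity correction mentioned above can be sketched as follows; the Debye-Hückel parameters are the commonly quoted 25 °C values and the ion-size parameters are illustrative, so a real calculation would take both from the CHESS thermodynamic database.

```python
# Sketch of the B-dot (extended Debye-Hueckel) activity correction used to
# convert measured molalities into activities before solubility constants
# are fitted.  A, B, and B-dot are approximate 25 degC values and the ion
# size parameters are illustrative; a real calculation would take them from
# the CHESS thermodynamic database.
import math

A, B, BDOT = 0.5092, 0.3283, 0.041   # approximate values at 25 degC

def ionic_strength(molalities, charges):
    return 0.5 * sum(m * z ** 2 for m, z in zip(molalities, charges))

def log10_gamma(z, a0, ionic_str):
    """B-dot activity coefficient for an ion of charge z and size a0 (angstrom)."""
    s = math.sqrt(ionic_str)
    return -A * z ** 2 * s / (1.0 + B * a0 * s) + BDOT * ionic_str

# Illustrative Al2(SO4)3-like mixture: Al3+ and SO4(2-) molalities.
m, z = [0.05, 0.075], [3, -2]
I = ionic_strength(m, z)
print("ionic strength: %.3f mol/kg" % I)
print("gamma(Al3+)   ~ %.3f" % (10 ** log10_gamma(3, 9.0, I)))
print("gamma(SO4 2-) ~ %.3f" % (10 ** log10_gamma(-2, 4.0, I)))
```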

  6. The politics of nursing care: correcting deviance in accordance with the social contract.

    PubMed

    O'Byrne, Patrick; Holmes, Dave

    2009-05-01

    Despite numerous theories, models, and philosophies describing what nurses are and what they do, nursing care is often presented as an apolitical process which primarily focuses on patient needs and priorities. However, it is our position that nursing practice, in all regards, is political. To support this argument, we have drawn on works describing soft/hard power, pastoral power, stigma, deviance, and governmentality, in addition to explaining our institutional social contract conceptualization of politics. In using these concepts, our political perspective reframes nursing practice as a means by which an individual's potential or actual deviance (meaning a deviation from social norms) can be identified and then corrected.

  7. On the Yakhot-Orszag renormalization group method for deriving turbulence statistics and models

    NASA Technical Reports Server (NTRS)

    Smith, L. M.; Reynolds, W. C.

    1992-01-01

    An independent, comprehensive, critical review of the 'renormalization group' (RNG) theory of turbulence developed by Yakhot and Orszag (1986) is provided. Their basic theory for the Navier-Stokes equations is confirmed, and approximations in the scale removal procedure are discussed. The YO derivations of the velocity-derivative skewness and the transport equation for the energy dissipation rate are examined. An algebraic error in the derivation of the skewness is corrected. The corrected RNG skewness value of -0.59 is in agreement with experiments at moderate Reynolds numbers. Several problems are identified in the derivation of the energy dissipation rate equations which suggest that the derivation should be reformulated.

  8. Objective Model Selection for Identifying the Human Feedforward Response in Manual Control.

    PubMed

    Drop, Frank M; Pool, Daan M; van Paassen, Marinus Rene M; Mulder, Max; Bulthoff, Heinrich H

    2018-01-01

    Realistic manual control tasks typically involve predictable target signals and random disturbances. The human controller (HC) is hypothesized to use a feedforward control strategy for target-following, in addition to feedback control for disturbance-rejection. Little is known about human feedforward control, partly because common system identification methods have difficulty in identifying whether, and (if so) how, the HC applies a feedforward strategy. In this paper, an identification procedure is presented that aims at an objective model selection for identifying the human feedforward response, using linear time-invariant autoregressive with exogenous input models. A new model selection criterion is proposed to decide on the model order (number of parameters) and the presence of feedforward in addition to feedback. For a range of typical control tasks, it is shown by means of Monte Carlo computer simulations that the classical Bayesian information criterion (BIC) leads to selecting models that contain a feedforward path from data generated by a pure feedback model: "false-positive" feedforward detection. To eliminate these false-positives, the modified BIC includes an additional penalty on model complexity. The appropriate weighting is found through computer simulations with a hypothesized HC model prior to performing a tracking experiment. Experimental human-in-the-loop data will be considered in future work. With appropriate weighting, the method correctly identifies the HC dynamics in a wide range of control tasks, without false-positive results.
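
    The modified criterion can be sketched as a standard BIC carrying an extra weight on the complexity term; in the sketch below the weight `alpha`, the ARX regressor choices, and the simulated signals are assumptions standing in for the quantities the paper tunes through offline simulations.

```python
# Sketch of the selection step: fit candidate ARX models by least squares and
# compare them with a BIC carrying an extra penalty weight on complexity to
# suppress false-positive feedforward detection.  The weight `alpha`, the
# regressor choices, and the simulated signals stand in for the quantities
# the authors tune through offline simulations with a hypothesized HC model.
import numpy as np

def fit_arx(y, regressors):
    """Least-squares ARX fit; returns residual sum of squares and parameter count."""
    X = np.column_stack(regressors)
    theta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ theta
    return float(resid @ resid), X.shape[1]

def modified_bic(rss, n, k, alpha=1.0):
    # alpha = 1 recovers the classical BIC; alpha > 1 penalizes extra parameters harder.
    return n * np.log(rss / n) + alpha * k * np.log(n)

rng = np.random.default_rng(2)
n = 1000
target = np.sin(0.01 * np.arange(n))                           # predictable target signal
error = target - np.roll(target, 1)                            # stand-in tracking error
y = 0.8 * np.roll(error, 1) + 0.05 * rng.standard_normal(n)    # pure-feedback HC output

candidates = {
    "feedback only": [np.roll(error, 1), np.roll(y, 1)],
    "feedback + feedforward": [np.roll(error, 1), np.roll(y, 1), np.roll(target, 1)],
}
for name, regs in candidates.items():
    rss, k = fit_arx(y, regs)
    print("%-24s modified BIC = %.1f" % (name, modified_bic(rss, n, k, alpha=2.0)))
```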

  9. Evaluation of the Vitek MS v3.0 Matrix-Assisted Laser Desorption Ionization-Time of Flight Mass Spectrometry System for Identification of Mycobacterium and Nocardia Species.

    PubMed

    Body, Barbara A; Beard, Melodie A; Slechta, E Susan; Hanson, Kimberly E; Barker, Adam P; Babady, N Esther; McMillen, Tracy; Tang, Yi-Wei; Brown-Elliott, Barbara A; Iakhiaeva, Elena; Vasireddy, Ravikiran; Vasireddy, Sruthi; Smith, Terry; Wallace, Richard J; Turner, S; Curtis, L; Butler-Wu, Susan; Rychert, Jenna

    2018-06-01

    This multicenter study was designed to assess the accuracy and reproducibility of the Vitek MS v3.0 matrix-assisted laser desorption ionization-time of flight (MALDI-TOF) mass spectrometry system for identification of Mycobacterium and Nocardia species compared to DNA sequencing. A total of 963 clinical isolates representing 51 taxa were evaluated. In all, 663 isolates were correctly identified to the species level (69%), with another 231 (24%) correctly identified to the complex or group level. Fifty-five isolates (6%) could not be identified despite repeat testing. All of the tuberculous mycobacteria (45/45; 100%) and most of the nontuberculous mycobacteria (569/606; 94%) were correctly identified at least to the group or complex level. However, not all species or subspecies within the M. tuberculosis , M. abscessus , and M. avium complexes and within the M. fortuitum and M. mucogenicum groups could be differentiated. Among the 312 Nocardia isolates tested, 236 (76%) were correctly identified to the species level, with an additional 44 (14%) correctly identified to the complex level. Species within the N. nova and N. transvalensis complexes could not always be differentiated. Eleven percent of the isolates (103/963) underwent repeat testing in order to get a final result. Identification of a representative set of Mycobacterium and Nocardia species was highly reproducible, with 297 of 300 (99%) replicates correctly identified using multiple kit lots, instruments, analysts, and sites. These findings demonstrate that the system is robust and has utility for the routine identification of mycobacteria and Nocardia in clinical practice. Copyright © 2018 American Society for Microbiology.

  10. On the impact of power corrections in the prediction of B → K*μ+μ- observables

    NASA Astrophysics Data System (ADS)

    Descotes-Genon, Sébastien; Hofer, Lars; Matias, Joaquim; Virto, Javier

    2014-12-01

    The recent LHCb angular analysis of the exclusive decay B → K*μ+μ- has indicated significant deviations from the Standard Model expectations. Accurate predictions can be achieved at large K*-meson recoil for an optimised set of observables designed to have no sensitivity to hadronic input in the heavy-quark limit at leading order in α_s. However, hadronic uncertainties reappear through non-perturbative Λ_QCD/m_b power corrections, which must be assessed precisely. In the framework of QCD factorisation we present a systematic method to include factorisable power corrections and point out that their impact on angular observables depends on the scheme chosen to define the soft form factors. Associated uncertainties are found to be under control, contrary to earlier claims in the literature. We also discuss the impact of possible non-factorisable power corrections, including an estimate of charm-loop effects. We provide results for angular observables at large recoil for two different sets of inputs for the form factors, spelling out the different sources of theoretical uncertainties. Finally, we comment on a recent proposal to explain the anomaly in B → K*μ+μ- observables through charm-resonance effects, and we propose strategies to test this proposal, identifying observables and kinematic regions where either the charm-loop model can be disentangled from New Physics effects or the two options leave different imprints.

  11. Higher Flexibility and Better Immediate Spontaneous Correction May Not Gain Better Results for Nonstructural Thoracic Curve in Lenke 5C AIS Patients

    PubMed Central

    Zhang, Yanbin; Lin, Guanfeng; Wang, Shengru; Zhang, Jianguo; Shen, Jianxiong; Wang, Yipeng; Guo, Jianwei; Yang, Xinyu; Zhao, Lijuan

    2016-01-01

    Study Design. Retrospective study. Objective. To study the behavior of the unfused thoracic curve in Lenke type 5C during the follow-up and to identify risk factors for its correction loss. Summary of Background Data. Few studies have focused on the spontaneous behaviors of the unfused thoracic curve after selective thoracolumbar or lumbar fusion during the follow-up and the risk factors for spontaneous correction loss. Methods. We retrospectively reviewed 45 patients (41 females and 4 males) with AIS who underwent selective TL/L fusion from 2006 to 2012 in a single institution. The follow-up averaged 36 months (range, 24–105 months). Patients were divided into two groups. Thoracic curves in group A improved or maintained their curve magnitude after spontaneous correction, with a negative or no correction loss during the follow-up. Thoracic curves in group B deteriorated after spontaneous correction with a positive correction loss. Univariate and multivariate analyses were performed to identify the risk factors for correction loss of the unfused thoracic curves. Results. The minor thoracic curve was 26° preoperatively. It was corrected to 13° immediately with a spontaneous correction of 48.5%. At final follow-up it was 14° with a correction loss of 1°. Thoracic curves did not deteriorate after spontaneous correction in 23 cases in group A, while the thoracic curve progressed in the 22 cases in group B. In multivariate analysis, two risk factors were independently associated with thoracic correction loss: higher flexibility and better immediate spontaneous correction rate of the thoracic curve. Conclusion. Posterior selective TL/L fusion with pedicle screw constructs is an effective treatment for Lenke 5C AIS patients. Nonstructural thoracic curves with higher flexibility or better immediate correction are more likely to progress during the follow-up, and close attention must be paid to these patients in case of decompensation. Level of Evidence: 4 PMID:27831989

  12. How does bias correction of RCM precipitation affect modelled runoff?

    NASA Astrophysics Data System (ADS)

    Teng, J.; Potter, N. J.; Chiew, F. H. S.; Zhang, L.; Vaze, J.; Evans, J. P.

    2014-09-01

    Many studies bias correct daily precipitation from climate models to match the observed precipitation statistics, and the bias corrected data are then used for various modelling applications. This paper presents a review of recent methods used to bias correct precipitation from regional climate models (RCMs). The paper then assesses four bias correction methods applied to the weather research and forecasting (WRF) model simulated precipitation, and the follow-on impact on modelled runoff for eight catchments in southeast Australia. Overall, the best results are produced by either quantile mapping or a newly proposed two-state gamma distribution mapping method. However, the difference between the tested methods is small in the modelling experiments here (and as reported in the literature), mainly because of the substantial corrections required and inconsistent errors over time (non-stationarity). The errors remaining in bias corrected precipitation are typically amplified in modelled runoff. The tested methods cannot overcome limitations of the RCM in simulating precipitation sequences, which affects runoff generation. Results further show that whereas bias correction does not seem to alter change signals in precipitation means, it can introduce additional uncertainty to change signals in high precipitation amounts and, consequently, in runoff. Future climate change impact studies need to take this into account when deciding whether to use raw or bias corrected RCM results. Nevertheless, RCMs will continue to improve and will become increasingly useful for hydrological applications as the bias in RCM simulations reduces.
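
    Of the methods compared, empirical quantile mapping is the simplest to sketch; the code below maps raw RCM values onto observed quantiles using synthetic data and does not reproduce the two-state gamma distribution mapping proposed in the paper.

```python
# Sketch of empirical quantile mapping, one of the reviewed bias-correction
# methods: each raw RCM value is replaced by the observed value at the same
# empirical quantile.  The two-state gamma distribution mapping proposed in
# the paper is not reproduced here, and all data are synthetic.
import numpy as np

def quantile_map(raw, obs_ref, mod_ref):
    """Map raw model values onto the observed distribution via empirical CDFs."""
    q = np.searchsorted(np.sort(mod_ref), raw, side="right") / len(mod_ref)
    return np.quantile(obs_ref, np.clip(q, 0.0, 1.0))

rng = np.random.default_rng(3)
obs_ref = rng.gamma(0.6, 6.0, 5000)      # observed daily precipitation (mm)
mod_ref = rng.gamma(0.9, 3.0, 5000)      # RCM climatology: too many light-rain days
raw = rng.gamma(0.9, 3.3, 1000)          # raw RCM values to be corrected
corrected = quantile_map(raw, obs_ref, mod_ref)
print("raw mean %.2f mm, corrected mean %.2f mm, observed mean %.2f mm"
      % (raw.mean(), corrected.mean(), obs_ref.mean()))
```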

  13. Investigating bias in squared regression structure coefficients

    PubMed Central

    Nimon, Kim F.; Zientek, Linda R.; Thompson, Bruce

    2015-01-01

    The importance of structure coefficients and analogs of regression weights for analysis within the general linear model (GLM) has been well-documented. The purpose of this study was to investigate bias in squared structure coefficients in the context of multiple regression and to determine if a formula that had been shown to correct for bias in squared Pearson correlation coefficients and coefficients of determination could be used to correct for bias in squared regression structure coefficients. Using data from a Monte Carlo simulation, this study found that squared regression structure coefficients corrected with Pratt's formula produced less biased estimates and might be more accurate and stable estimates of population squared regression structure coefficients than estimates with no such corrections. While our findings are in line with prior literature that identified multicollinearity as a predictor of bias in squared regression structure coefficients but not coefficients of determination, the findings from this study are unique in that the level of predictive power, number of predictors, and sample size were also observed to contribute bias in squared regression structure coefficients. PMID:26217273
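
    A hedged sketch of the quantity under study follows: the squared structure coefficient is the squared correlation between a predictor and the predicted scores, and the adjustment shown is Pratt's (1964) formula as it is usually quoted in this literature; both the exact form of the formula and the simulated data should be treated as assumptions to verify against the original sources.

```python
# Sketch: squared structure coefficients (squared correlation between each
# predictor and the predicted scores) with a Pratt-type adjustment.  The
# adjustment is written in the form commonly quoted in this literature,
#   r2_adj = 1 - ((N-3)(1-r2)/(N-2)) * (1 + 2(1-r2)/(N-2.3)),
# and should be checked against the original source; data are simulated.
import numpy as np
from sklearn.linear_model import LinearRegression

def pratt_adjust(r2, n):
    return 1.0 - ((n - 3) * (1 - r2) / (n - 2)) * (1 + 2 * (1 - r2) / (n - 2.3))

rng = np.random.default_rng(4)
n = 60
X = rng.normal(size=(n, 3))
X[:, 2] = 0.7 * X[:, 0] + 0.3 * rng.normal(size=n)    # induce multicollinearity
y = X @ np.array([0.5, 0.3, 0.2]) + rng.normal(size=n)

yhat = LinearRegression().fit(X, y).predict(X)
for j in range(X.shape[1]):
    r2_s = np.corrcoef(X[:, j], yhat)[0, 1] ** 2       # squared structure coefficient
    print("predictor %d: r2_s = %.3f, adjusted = %.3f" % (j, r2_s, pratt_adjust(r2_s, n)))
```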

  14. Novel strategy for typing Mycoplasma pneumoniae isolates by use of matrix-assisted laser desorption ionization-time of flight mass spectrometry coupled with ClinProTools.

    PubMed

    Xiao, Di; Zhao, Fei; Zhang, Huifang; Meng, Fanliang; Zhang, Jianzhong

    2014-08-01

    The typing of Mycoplasma pneumoniae mainly relies on the detection of nucleic acid, which is limited by the use of a single gene target, complex operation procedures, and a lengthy assay time. Here, matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) coupled to ClinProTools was used to discover MALDI-TOF MS biomarker peaks and to generate a classification model based on a genetic algorithm (GA) to differentiate between type 1 and type 2 M. pneumoniae isolates. Twenty-five M. pneumoniae strains were used to construct an analysis model, and 43 Mycoplasma strains were used for validation. For the GA typing model, the cross-validation values, which reflect the ability of the model to handle variability among the test spectra, and the recognition capability value, which reflects the model's ability to correctly identify its component spectra, were all 100%. This model contained 7 biomarker peaks (m/z 3,318.8, 3,215.0, 5,091.8, 5,766.8, 6,337.1, 6,431.1, and 6,979.9) used to correctly identify 31 type 1 and 7 type 2 M. pneumoniae isolates from 43 Mycoplasma strains with a sensitivity and specificity of 100%. The strain distribution map and principal component analysis based on the GA classification model also clearly showed that the type 1 and type 2 M. pneumoniae isolates can be divided into two categories based on their peptide mass fingerprints. With the obvious advantages of being rapid, highly accurate, and highly sensitive and having a low cost and high throughput, MALDI-TOF MS coupled with ClinProTools is a powerful and reliable tool for M. pneumoniae typing. Copyright © 2014, American Society for Microbiology. All Rights Reserved.

  15. Examination of efficacious, efficient, and socially valid error-correction procedures to teach sight words and prepositions to children with autism spectrum disorder.

    PubMed

    Kodak, Tiffany; Campbell, Vincent; Bergmann, Samantha; LeBlanc, Brittany; Kurtz-Nelson, Eva; Cariveau, Tom; Haq, Shaji; Zemantic, Patricia; Mahon, Jacob

    2016-09-01

    Prior research shows that learners have idiosyncratic responses to error-correction procedures during instruction. Thus, assessments that identify error-correction strategies to include in instruction can aid practitioners in selecting individualized, efficacious, and efficient interventions. The current investigation conducted an assessment to compare 5 error-correction procedures that have been evaluated in the extant literature and are common in instructional practice for children with autism spectrum disorder (ASD). Results showed that the assessment identified efficacious and efficient error-correction procedures for all participants, and 1 procedure was efficient for 4 of the 5 participants. To examine the social validity of error-correction procedures, participants selected among efficacious and efficient interventions in a concurrent-chains assessment. We discuss the results in relation to prior research on error-correction procedures and current instructional practices for learners with ASD. © 2016 Society for the Experimental Analysis of Behavior.

  16. Quantifying sources of bias in National Healthcare Safety Network laboratory-identified Clostridium difficile infection rates.

    PubMed

    Haley, Valerie B; DiRienzo, A Gregory; Lutterloh, Emily C; Stricof, Rachel L

    2014-01-01

    Objective. To assess the effect of multiple sources of bias on state- and hospital-specific National Healthcare Safety Network (NHSN) laboratory-identified Clostridium difficile infection (CDI) rates. Design. Sensitivity analysis. Setting. A total of 124 New York hospitals in 2010. Methods. New York NHSN CDI events from audited hospitals were matched to New York hospital discharge billing records to obtain additional information on patient age, length of stay, and previous hospital discharges. "Corrected" hospital-onset (HO) CDI rates were calculated after (1) correcting inaccurate case reporting found during audits, (2) incorporating knowledge of laboratory results from outside hospitals, (3) excluding days when patients were not at risk from the denominator of the rates, and (4) adjusting for patient age. Data sets were simulated with each of these sources of bias reintroduced individually and combined. The simulated rates were compared with the corrected rates. Performance (i.e., better, worse, or average compared with the state average) was categorized, and misclassification compared with the corrected data set was measured. Results. Counting days patients were not at risk in the denominator reduced the state HO rate by 45% and resulted in 8% misclassification. Age adjustment and reporting errors also shifted rates (7% and 6% misclassification, respectively). Conclusions. Changing the NHSN protocol to require reporting of age-stratified patient-days and adjusting for patient-days at risk would improve comparability of rates across hospitals. Further research is needed to validate the risk-adjustment model before these data should be used as hospital performance measures.
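
    Bias source (3) is easy to illustrate with arithmetic: counting days on which patients are not yet at risk of a hospital-onset infection inflates the denominator and deflates the rate. The numbers below are invented, and the three-day rule is a simplification of the NHSN surveillance definitions.

```python
# Toy arithmetic for bias source (3): including days on which patients are
# not yet at risk of a hospital-onset event inflates the denominator and
# lowers the rate.  All numbers are invented and the three-day rule is a
# simplification of the NHSN surveillance definitions.
admissions = 5_000
mean_los_days = 5.0
ho_cdi_events = 60
not_at_risk_days_per_stay = 3        # HO events require onset after hospital day 3

all_days = admissions * mean_los_days
at_risk_days = admissions * max(mean_los_days - not_at_risk_days_per_stay, 0)
print("rate per 10,000 patient-days (all days):     %.1f" % (1e4 * ho_cdi_events / all_days))
print("rate per 10,000 patient-days (at-risk days): %.1f" % (1e4 * ho_cdi_events / at_risk_days))
```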

  17. A Proper-Motion Corrected, Cross-Matched Catalog Of M Dwarfs In SDSS And FIRST

    NASA Astrophysics Data System (ADS)

    Arai, Erin; West, A. A.; Thyagarajan, N.; Agüeros, M.; Helfand, D.

    2011-05-01

    We present a preliminary analysis of M dwarfs identified in both the Sloan Digital Sky Survey (SDSS) and the Very Large Array's (VLA) Faint Images of the Radio Sky at Twenty-centimeters survey (FIRST). The presence of magnetic fields is often associated with indirect magnetic activity measurements, such as H-alpha or X-ray emission. Radio emission, in contrast, is directly proportional to the magnetic field strength in addition to being another measure of activity. We search for stellar radio emission by cross-matching the SDSS DR7 M dwarf sample with the FIRST catalog. The SDSS data allow us to examine the spectra of our objects and correlate the magnetic activity (H-alpha) with the magnetic field strength (radio emission). Accurate positions and proper motions are important for obtaining a complete list of overlapping targets. Positions in FIRST and SDSS need to be proper motion corrected in order to ensure unique target matches since nearby M dwarfs can have significant proper motions (up to 1'' per year). Some previous studies have neglected the significance of proper motions in identifying overlapping targets between SDSS and FIRST; we correct for some of these previous oversights. In addition the FIRST data were taken in multiple epochs; individual images need to be proper motion corrected before the images can be co-added. Our cross-match catalog puts important constraints on models of magnetic field generation in low-mass stars in addition to the true habitability of attending planets.
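
    A first-order sketch of the epoch correction described above is given below; it applies linear proper motion only, ignores parallax and higher-order terms, and uses a made-up star, so it stands in for, rather than reproduces, the catalog pipeline.

```python
# First-order sketch of propagating a star's position between epochs before
# cross-matching.  It applies linear proper motion only (no parallax or
# higher-order terms), and the example M dwarf's coordinates and proper
# motion are invented.
import math

def propagate(ra_deg, dec_deg, pm_ra_cosdec_mas_yr, pm_dec_mas_yr, dt_yr):
    """Shift (RA, Dec) in degrees by proper motions in mas/yr over dt_yr years."""
    dec_new = dec_deg + pm_dec_mas_yr * dt_yr / 3.6e6
    ra_new = ra_deg + pm_ra_cosdec_mas_yr * dt_yr / 3.6e6 / math.cos(math.radians(dec_deg))
    return ra_new, dec_new

# Hypothetical nearby M dwarf with ~0.9"/yr total proper motion, propagated
# from an SDSS epoch (2003.2) back to a FIRST epoch (1997.6).
ra, dec = propagate(180.12345, 32.54321, 640.0, -610.0, 1997.6 - 2003.2)
print("epoch-corrected position: RA = %.6f deg, Dec = %.6f deg" % (ra, dec))
```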

  18. Egg embryo development detection with hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Lawrence, Kurt C.; Smith, Douglas P.; Windham, William R.; Heitschmidt, Gerald W.; Park, Bosoon

    2006-10-01

    In the U. S. egg industry, anywhere from 130 million to over one billion infertile eggs are incubated each year. Some of these infertile eggs explode in the hatching cabinet and can potentially spread molds or bacteria to all the eggs in the cabinet. A method to detect the embryo development of incubated eggs was developed. Twelve brown-shell hatching eggs from two replicates (n=24) were incubated and imaged to identify embryo development. A hyperspectral imaging system was used to collect transmission images from 420 to 840 nm of brown-shell eggs positioned with the air cell vertical and normal to the camera lens. Raw transmission images from about 400 to 900 nm were collected for every egg on days 0, 1, 2, and 3 of incubation. A total of 96 images were collected and eggs were broken out on day 6 to determine fertility. After breakout, all eggs were found to be fertile. Therefore, this paper presents results for egg embryo development, not fertility. The original hyperspectral data and spectral means for each egg were both used to create embryo development models. With the hyperspectral data range reduced to about 500 to 700 nm, a minimum noise fraction transformation was used, along with a Mahalanobis Distance classification model, to predict development. Days 2 and 3 were all correctly classified (100%), while day 0 and day 1 were classified at 95.8% and 91.7%, respectively. Alternatively, the mean spectra from each egg were used to develop a partial least squares regression (PLSR) model. First, a PLSR model was developed with all eggs and all days. The data were multiplicative scatter corrected, spectrally smoothed, and the wavelength range was reduced to 539 - 770 nm. With a one-out cross validation, all eggs for all days were correctly classified (100%). Second, a PLSR model was developed with data from day 0 and day 3, and the model was validated with data from day 1 and 2. For day 1, 22 of 24 eggs were correctly classified (91.7%) and for day 2, all eggs were correctly classified (100%). Although the results are based on relatively small sample sizes, they are encouraging. However, larger sample sizes, from multiple flocks, will be needed to fully validate and verify these models. Additionally, future experiments must also include non-fertile eggs so the fertile / non-fertile effect can be determined.
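
    The second modelling step can be sketched as a PLS regression on mean spectra with leave-one-out cross-validation; the simulated spectra, the number of latent components, and the round-to-nearest-day classification rule below are illustrative assumptions, not the study's settings.

```python
# Sketch of the second modelling step: a PLS regression on mean transmission
# spectra (539-770 nm) with leave-one-out cross-validation, predicting day of
# incubation and rounding to the nearest day as a simple classification rule.
# The spectra are simulated stand-ins, not measured egg data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(5)
wavelengths = np.arange(539, 771)                    # nm
days = np.repeat([0, 1, 2, 3], 24)                   # 24 eggs imaged on each of 4 days
# Fake spectra whose overall transmission drifts with embryo development.
spectra = (1.0 + 0.05 * days[:, None]) * np.exp(-((wavelengths - 650.0) / 80.0) ** 2)
spectra += 0.01 * rng.standard_normal(spectra.shape)

correct = 0
for train, test in LeaveOneOut().split(spectra):
    pls = PLSRegression(n_components=5).fit(spectra[train], days[train])
    pred_day = int(np.clip(np.rint(pls.predict(spectra[test])[0, 0]), 0, 3))
    correct += int(pred_day == days[test][0])
print("leave-one-out classification accuracy: %.1f%%" % (100.0 * correct / len(days)))
```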

  19. Extra-dimensional models on the lattice

    DOE PAGES

    Knechtli, Francesco; Rinaldi, Enrico

    2016-08-05

    In this paper we summarize the ongoing effort to study extra-dimensional gauge theories with lattice simulations. In these models the Higgs field is identified with extra-dimensional components of the gauge field. The Higgs potential is generated by quantum corrections and is protected from divergences by the higher dimensional gauge symmetry. Dimensional reduction to four dimensions can occur through compactification or localization. Gauge-Higgs unification models are often studied using perturbation theory. Numerical lattice simulations are used to go beyond these perturbative expectations and to include nonperturbative effects. We describe the known perturbative predictions and their fate in the strongly-coupled regime for various extra-dimensional models.

  20. Evaluation of the new Vitek 2 ANC card for identification of medically relevant anaerobic bacteria.

    PubMed

    Mory, Francine; Alauzet, Corentine; Matuszeswski, Céline; Riegel, Philippe; Lozniewski, Alain

    2009-06-01

    Of 261 anaerobic clinical isolates tested with the new Vitek 2 ANC card, 257 (98.5%) were correctly identified at the genus level. Among the 251 strains for which identification at the species level is possible with regard to the ANC database, 217 (86.5%) were correctly identified at the species level. Two strains (0.8%) were not identified, and eight were misidentified (3.1%). Of the 21 strains (8.1%) with low-level discrimination results, 14 were correctly identified at the species level by using the recommended additional tests. This system is a satisfactory new automated tool for the rapid identification of most anaerobic bacteria isolated in clinical laboratories.

  1. How does bias correction of regional climate model precipitation affect modelled runoff?

    NASA Astrophysics Data System (ADS)

    Teng, J.; Potter, N. J.; Chiew, F. H. S.; Zhang, L.; Wang, B.; Vaze, J.; Evans, J. P.

    2015-02-01

    Many studies bias correct daily precipitation from climate models to match the observed precipitation statistics, and the bias corrected data are then used for various modelling applications. This paper presents a review of recent methods used to bias correct precipitation from regional climate models (RCMs). The paper then assesses four bias correction methods applied to the weather research and forecasting (WRF) model simulated precipitation, and the follow-on impact on modelled runoff for eight catchments in southeast Australia. Overall, the best results are produced by either quantile mapping or a newly proposed two-state gamma distribution mapping method. However, the differences between the methods are small in the modelling experiments here (and as reported in the literature), mainly due to the substantial corrections required and inconsistent errors over time (non-stationarity). The errors in bias corrected precipitation are typically amplified in modelled runoff. The tested methods cannot overcome limitations of the RCM in simulating precipitation sequence, which affects runoff generation. Results further show that whereas bias correction does not seem to alter change signals in precipitation means, it can introduce additional uncertainty to change signals in high precipitation amounts and, consequently, in runoff. Future climate change impact studies need to take this into account when deciding whether to use raw or bias corrected RCM results. Nevertheless, RCMs will continue to improve and will become increasingly useful for hydrological applications as the bias in RCM simulations reduces.

  2. A Dirichlet process model for classifying and forecasting epidemic curves.

    PubMed

    Nsoesie, Elaine O; Leman, Scotland C; Marathe, Madhav V

    2014-01-09

    A forecast can be defined as an endeavor to quantitatively estimate a future event or probabilities assigned to a future occurrence. Forecasting stochastic processes such as epidemics is challenging since there are several biological, behavioral, and environmental factors that influence the number of cases observed at each point during an epidemic. However, accurate forecasts of epidemics would impact timely and effective implementation of public health interventions. In this study, we introduce a Dirichlet process (DP) model for classifying and forecasting influenza epidemic curves. The DP model is a nonparametric Bayesian approach that enables the matching of current influenza activity to simulated and historical patterns, identifies epidemic curves different from those observed in the past and enables prediction of the expected epidemic peak time. The method was validated using simulated influenza epidemics from an individual-based model and the accuracy was compared to that of the tree-based classification technique, Random Forest (RF), which has been shown to achieve high accuracy in the early prediction of epidemic curves using a classification approach. We also applied the method to forecasting influenza outbreaks in the United States from 1997-2013 using influenza-like illness (ILI) data from the Centers for Disease Control and Prevention (CDC). We made the following observations. First, the DP model performed as well as RF in identifying several of the simulated epidemics. Second, the DP model correctly forecasted the peak time several days in advance for most of the simulated epidemics. Third, the accuracy of identifying epidemics different from those already observed improved with additional data, as expected. Fourth, both methods correctly classified epidemics with higher reproduction numbers (R) with a higher accuracy compared to epidemics with lower R values. Lastly, in the classification of seasonal influenza epidemics based on ILI data from the CDC, the methods' performance was comparable. Although RF requires less computational time compared to the DP model, the algorithm is fully supervised implying that epidemic curves different from those previously observed will always be misclassified. In contrast, the DP model can be unsupervised, semi-supervised or fully supervised. Since both methods have their relative merits, an approach that uses both RF and the DP model could be beneficial.
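
    As a stand-in for the authors' DP model, the sketch below clusters simple epidemic-curve features with scikit-learn's truncated Dirichlet-process mixture; it illustrates only the nonparametric grouping step, not the matching of partial curves or the peak-time forecasting.

```python
# Stand-in sketch for the grouping step: cluster simple features of simulated
# epidemic curves with a truncated Dirichlet-process mixture (scikit-learn's
# BayesianGaussianMixture).  This is not the authors' DP model, which also
# matches partial curves and forecasts peak timing; curves are synthetic.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(6)
weeks = np.arange(30)

def curve(peak_week, height):
    return height * np.exp(-0.5 * ((weeks - peak_week) / 4.0) ** 2)

# Two families of simulated epidemics: early/low peaks and late/high peaks.
curves = np.vstack([curve(10, 200) + rng.normal(0, 5, 30) for _ in range(20)] +
                   [curve(18, 400) + rng.normal(0, 5, 30) for _ in range(20)])
features = np.column_stack([curves.argmax(axis=1), curves.max(axis=1)])  # peak week, peak height

dp = BayesianGaussianMixture(n_components=10, random_state=0,
                             weight_concentration_prior_type="dirichlet_process").fit(features)
labels = dp.predict(features)
print("clusters actually used:", sorted(set(labels)))
```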

  3. Roi-Orientated Sensor Correction Based on Virtual Steady Reimaging Model for Wide Swath High Resolution Optical Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Zhu, Y.; Jin, S.; Tian, Y.; Wang, M.

    2017-09-01

    To meet the requirement for high-accuracy, high-speed processing of wide-swath high-resolution optical satellite imagery under emergency situations, in both ground processing systems and on-board processing systems, this paper proposes an ROI-oriented sensor correction algorithm based on a virtual steady reimaging model. Firstly, the imaging time and spatial window of the ROI are determined by a dynamic search method. Then, the dynamic ROI sensor correction model based on the virtual steady reimaging model is constructed. Finally, the corrected image corresponding to the ROI is generated based on the coordinate mapping relationship established by the dynamic sensor correction model for the corrected image and the rigorous imaging model for the original image. Two experimental results show that image registration between panchromatic and multispectral images is achieved well and that image distortion caused by satellite jitter is also corrected efficiently.

  4. Modeling The Hydrology And Water Allocation Under Climate Change In Rural River Basins: A Case Study From Nam Ngum River Basin, Laos

    NASA Astrophysics Data System (ADS)

    Jayasekera, D. L.; Kaluarachchi, J.; Kim, U.

    2011-12-01

    Rural river basins with sufficient water availability to maintain economic livelihoods can be affected by seasonal fluctuations of precipitation and sometimes by droughts. In addition, climate change impacts can also alter future water availability. General Circulation Models (GCMs) provide credible quantitative estimates of future climate conditions, but such estimates are often characterized by bias and coarse-scale resolution, making it necessary to downscale the outputs for use in regional hydrologic models. This study develops a methodology to downscale and project future monthly precipitation in moderate-scale basins where data are limited. A stochastic framework for single-site and multi-site generation of weekly rainfall is developed while preserving the historical temporal and spatial correlation structures. The spatial correlations in the simulated occurrences and the amounts are induced using spatially correlated yet serially independent random numbers. This method is applied to generate weekly precipitation data for a 100-year period in the Nam Ngum River Basin (NNRB), which has a land area of 16,780 km2 located in Lao P.D.R. This method is developed and applied using precipitation data from 1961 to 2000 for 10 selected weather stations that represent the basin rainfall characteristics. A bias-correction method, based on fitted theoretical probability distribution transformations, was applied to improve the monthly mean frequency, intensity, and amount of raw GCM precipitation predicted at a given weather station using CGCM3.1 and ECHAM5 for the SRES A2 emission scenario. The bias-correction procedure adjusts GCM precipitation to approximate the long-term frequency and the intensity distribution observed at a given weather station. The index of agreement and mean absolute error were determined to assess the overall ability and performance of the bias correction method. The generated precipitation series, aggregated at a monthly time step, was perturbed by the change factors estimated using the corrected GCM and baseline scenarios for future time periods of 2011-2050 and 2051-2090. A network-based hydrologic and water resources model, WEAP, was used to simulate the current water allocation and management practices to identify the impacts of climate change in the 20th century. The results of this work are used to identify the multiple challenges faced by stakeholders and planners in water allocation for competing demands in the presence of climate change impacts.

  5. 30 CFR 250.1452 - What if I correct the violation?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    Continental Shelf Civil Penalties, Penalties After a Period to Correct: § 250.1452 What if I correct the violation? The matter will be closed if you correct all of the violations identified in the Notice of...

  6. Effects of transverse photon exchange in helium Rydberg states - Corrections beyond the Coulomb-Breit interaction

    NASA Technical Reports Server (NTRS)

    Au, C. K.

    1989-01-01

    The Breit correction only accounts for part of the transverse photon exchange correction in the calculation of the energy levels in helium Rydberg states. The remaining leading corrections are identified and each is expressed in an effective potential form. The relevance to the Casimir correction potential in various limits is also discussed.

  7. Modeling Self-Referencing Interferometers with Extended Beacons and Strong Turbulence

    DTIC Science & Technology

    2011-09-01

    identified, then typically compensated. These results not only serve to address problems when using adaptive optics to correct for strong turbulence ...compensating for distortions due to atmospheric turbulence with adaptive optics (AO) [70, 84]. AO typically compensates for atmospheric distortions... used in Chapter VII to discuss how strong atmospheric turbulence and extended beacons affect the performance of an SRI. Additionally, it enumerates the

  8. Defense Mapping Agency (DMA) Raster-to-Vector Analysis

    DTIC Science & Technology

    1984-11-30

    model) to pinpoint critical deficiencies and understand trade-offs between alternative solutions. This may be exemplified by the allocation of human ...process, prone to errors (i.e., human operator eye/motor control limitations), and its time-consuming nature (as a function of data density). It should...achieved through the facilities of computer interactive graphics. Each error or anomaly is individually identified by a human operator and corrected

  9. Training Correctional Educators: A Needs Assessment Study.

    ERIC Educational Resources Information Center

    Jurich, Sonia; Casper, Marta; Hull, Kim A.

    2001-01-01

    Focus groups and a training needs survey of Virginia correctional educators identified educational philosophy, communication skills, human behavior, and teaching techniques as topics of interest. Classroom observations identified additional areas: teacher isolation, multiple challenges, absence of grade structure, and safety constraints. (Contains…

  10. A correction method for systematic error in (1)H-NMR time-course data validated through stochastic cell culture simulation.

    PubMed

    Sokolenko, Stanislav; Aucoin, Marc G

    2015-09-04

    The growing ubiquity of metabolomic techniques has facilitated high frequency time-course data collection for an increasing number of applications. While the concentration trends of individual metabolites can be modeled with common curve fitting techniques, a more accurate representation of the data needs to consider effects that act on more than one metabolite in a given sample. To this end, we present a simple algorithm that uses nonparametric smoothing carried out on all observed metabolites at once to identify and correct systematic error from dilution effects. In addition, we develop a simulation of metabolite concentration time-course trends to supplement available data and explore algorithm performance. Although we focus on nuclear magnetic resonance (NMR) analysis in the context of cell culture, a number of possible extensions are discussed. Realistic metabolic data was successfully simulated using a 4-step process. Starting with a set of metabolite concentration time-courses from a metabolomic experiment, each time-course was classified as either increasing, decreasing, concave, or approximately constant. Trend shapes were simulated from generic functions corresponding to each classification. The resulting shapes were then scaled to simulated compound concentrations. Finally, the scaled trends were perturbed using a combination of random and systematic errors. To detect systematic errors, a nonparametric fit was applied to each trend and percent deviations calculated at every timepoint. Systematic errors could be identified at time-points where the median percent deviation exceeded a threshold value, determined by the choice of smoothing model and the number of observed trends. Regardless of model, increasing the number of observations over a time-course resulted in more accurate error estimates, although the improvement was not particularly large between 10 and 20 samples per trend. The presented algorithm was able to identify systematic errors as small as 2.5 % under a wide range of conditions. Both the simulation framework and error correction method represent examples of time-course analysis that can be applied to further developments in (1)H-NMR methodology and the more general application of quantitative metabolomics.
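
    The dilution-error screen described above can be sketched in a few lines: smooth each metabolite time-course nonparametrically, compute percent deviations from the smooth, and flag timepoints where the median deviation across metabolites exceeds a threshold. LOWESS as the smoother and the 2.5% threshold are choices made here for illustration only.

```python
# Sketch of the dilution-error screen: smooth every metabolite time-course
# nonparametrically, compute the percent deviation of each observation from
# its smooth, and flag timepoints where the median deviation across all
# metabolites exceeds a threshold.  LOWESS as the smoother and the 2.5%
# threshold are illustrative choices; the data are simulated.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(7)
t = np.linspace(0, 72, 20)                                   # hours
trends = np.vstack([5 + 0.05 * t,                            # increasing metabolite
                    8 * np.exp(-t / 30.0),                   # decreasing metabolite
                    2 + 0.001 * t ** 2])                     # concave-up metabolite
data = trends * (1 + 0.01 * rng.standard_normal(trends.shape))
data[:, 12] *= 1.05                                          # common 5% dilution error at one timepoint

pct_dev = np.empty_like(data)
for i, series in enumerate(data):
    smooth = lowess(series, t, frac=0.5, return_sorted=False)
    pct_dev[i] = 100.0 * (series - smooth) / smooth

median_dev = np.median(pct_dev, axis=0)
flagged = np.where(np.abs(median_dev) > 2.5)[0]
print("flagged timepoints:", flagged)
print("median deviations (%):", np.round(median_dev[flagged], 1))
```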

  11. Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes

    NASA Astrophysics Data System (ADS)

    Harrington, James William

    Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g. , factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. I also present a local classical processing scheme for correcting errors on toric codes, which demonstrates that quantum information can be maintained in two dimensions by purely local (quantum and classical) resources.

  12. Ambiguities in model-independent partial-wave analysis

    NASA Astrophysics Data System (ADS)

    Krinner, F.; Greenwald, D.; Ryabchikov, D.; Grube, B.; Paul, S.

    2018-06-01

    Partial-wave analysis is an important tool for analyzing large data sets in hadronic decays of light and heavy mesons. It commonly relies on the isobar model, which assumes multihadron final states originate from successive two-body decays of well-known undisturbed intermediate states. Recently, analyses of heavy-meson decays and diffractively produced states have attempted to overcome the strong model dependences of the isobar model. These analyses have overlooked that model-independent, or freed-isobar, partial-wave analysis can introduce mathematical ambiguities in results. We show how these ambiguities arise and present general techniques for identifying their presence and for correcting for them. We demonstrate these techniques with specific examples in both heavy-meson decay and pion-proton scattering.

  13. Testing the predictive value of peripheral gene expression for nonremission following citalopram treatment for major depression.

    PubMed

    Guilloux, Jean-Philippe; Bassi, Sabrina; Ding, Ying; Walsh, Chris; Turecki, Gustavo; Tseng, George; Cyranowski, Jill M; Sibille, Etienne

    2015-02-01

    Major depressive disorder (MDD) in general, and anxious-depression in particular, are characterized by poor rates of remission with first-line treatments, contributing to the chronic illness burden suffered by many patients. Prospective research is needed to identify the biomarkers predicting nonremission prior to treatment initiation. We collected blood samples from a discovery cohort of 34 adult MDD patients with co-occurring anxiety and 33 matched, nondepressed controls at baseline and after 12 weeks (of citalopram plus psychotherapy treatment for the depressed cohort). Samples were processed on gene arrays and group differences in gene expression were investigated. Exploratory analyses suggest that at pretreatment baseline, nonremitting patients differ from controls with gene function and transcription factor analyses potentially related to elevated inflammation and immune activation. In a second phase, we applied an unbiased machine learning prediction model and corrected for model-selection bias. Results show that baseline gene expression predicted nonremission with 79.4% corrected accuracy with a 13-gene model. The same gene-only model predicted nonremission after 8 weeks of citalopram treatment with 76% corrected accuracy in an independent validation cohort of 63 MDD patients treated with citalopram at another institution. Together, these results demonstrate the potential, but also the limitations, of baseline peripheral blood-based gene expression to predict nonremission after citalopram treatment. These results not only support their use in future prediction tools but also suggest that increased accuracy may be obtained with the inclusion of additional predictors (eg, genetics and clinical scales).

  14. Practical Bias Correction in Aerial Surveys of Large Mammals: Validation of Hybrid Double-Observer with Sightability Method against Known Abundance of Feral Horse (Equus caballus) Populations

    PubMed Central

    2016-01-01

    Reliably estimating wildlife abundance is fundamental to effective management. Aerial surveys are one of the only spatially robust tools for estimating large mammal populations, but statistical sampling methods are required to address detection biases that affect accuracy and precision of the estimates. Although various methods for correcting aerial survey bias are employed on large mammal species around the world, these have rarely been rigorously validated. Several populations of feral horses (Equus caballus) in the western United States have been intensively studied, resulting in identification of all unique individuals. This provided a rare opportunity to test aerial survey bias correction on populations of known abundance. We hypothesized that a hybrid method combining simultaneous double-observer and sightability bias correction techniques would accurately estimate abundance. We validated this integrated technique on populations of known size and also on a pair of surveys before and after a known number was removed. Our analysis identified several covariates across the surveys that explained and corrected biases in the estimates. All six tests on known populations produced estimates with deviations from the known value ranging from -8.5% to +13.7% and <0.7 standard errors. Precision varied widely, from 6.1% CV to 25.0% CV. In contrast, the pair of surveys conducted around a known management removal produced an estimated change in population between the surveys that was significantly larger than the known reduction. Although the deviation between was only 9.1%, the precision estimate (CV = 1.6%) may have been artificially low. It was apparent that use of a helicopter in those surveys perturbed the horses, introducing detection error and heterogeneity in a manner that could not be corrected by our statistical models. Our results validate the hybrid method, highlight its potentially broad applicability, identify some limitations, and provide insight and guidance for improving survey designs. PMID:27139732

  15. Practical Bias Correction in Aerial Surveys of Large Mammals: Validation of Hybrid Double-Observer with Sightability Method against Known Abundance of Feral Horse (Equus caballus) Populations.

    PubMed

    Lubow, Bruce C; Ransom, Jason I

    2016-01-01

    Reliably estimating wildlife abundance is fundamental to effective management. Aerial surveys are one of the only spatially robust tools for estimating large mammal populations, but statistical sampling methods are required to address detection biases that affect accuracy and precision of the estimates. Although various methods for correcting aerial survey bias are employed on large mammal species around the world, these have rarely been rigorously validated. Several populations of feral horses (Equus caballus) in the western United States have been intensively studied, resulting in identification of all unique individuals. This provided a rare opportunity to test aerial survey bias correction on populations of known abundance. We hypothesized that a hybrid method combining simultaneous double-observer and sightability bias correction techniques would accurately estimate abundance. We validated this integrated technique on populations of known size and also on a pair of surveys before and after a known number was removed. Our analysis identified several covariates across the surveys that explained and corrected biases in the estimates. All six tests on known populations produced estimates with deviations from the known value ranging from -8.5% to +13.7% and <0.7 standard errors. Precision varied widely, from 6.1% CV to 25.0% CV. In contrast, the pair of surveys conducted around a known management removal produced an estimated change in population between the surveys that was significantly larger than the known reduction. Although the deviation between was only 9.1%, the precision estimate (CV = 1.6%) may have been artificially low. It was apparent that use of a helicopter in those surveys perturbed the horses, introducing detection error and heterogeneity in a manner that could not be corrected by our statistical models. Our results validate the hybrid method, highlight its potentially broad applicability, identify some limitations, and provide insight and guidance for improving survey designs.
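
    The sightability half of the hybrid method can be sketched as a logistic detection model followed by a Horvitz-Thompson style inflation of observed group sizes; the covariates and values below are invented, and the simultaneous double-observer component is not shown.

```python
# Simplified sketch of the sightability component: fit a logistic detection
# model on groups with known detection outcome (possible here because the
# population is known), then inflate each observed group by 1/p, a
# Horvitz-Thompson style correction.  The double-observer component is not
# shown and all covariates and values are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)
n_groups = 300
group_size = rng.integers(1, 12, n_groups)
cover = rng.uniform(0.0, 1.0, n_groups)                 # vegetation cover fraction
p_true = 1.0 / (1.0 + np.exp(-(0.5 + 0.4 * group_size - 3.0 * cover)))
seen = rng.binomial(1, p_true)

X = np.column_stack([group_size, cover])
detection = LogisticRegression().fit(X, seen)
p_hat = detection.predict_proba(X[seen == 1])[:, 1]

print("true total:        ", group_size.sum())
print("raw count (seen):  ", group_size[seen == 1].sum())
print("corrected estimate: %.0f" % (group_size[seen == 1] / p_hat).sum())
```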

  16. The impact of covariance misspecification in group-based trajectory models for longitudinal data with non-stationary covariance structure.

    PubMed

    Davies, Christopher E; Glonek, Gary Fv; Giles, Lynne C

    2017-08-01

    One purpose of a longitudinal study is to gain a better understanding of how an outcome of interest changes among a given population over time. In what follows, a trajectory will be taken to mean the series of measurements of the outcome variable for an individual. Group-based trajectory modelling methods seek to identify subgroups of trajectories within a population, such that trajectories that are grouped together are more similar to each other than to trajectories in distinct groups. Group-based trajectory models generally assume a certain structure in the covariances between measurements, for example conditional independence, homogeneous variance between groups or stationary variance over time. Violations of these assumptions could be expected to result in poor model performance. We used simulation to investigate the effect of covariance misspecification on misclassification of trajectories in commonly used models under a range of scenarios. To do this we defined a measure of performance relative to the ideal Bayesian correct classification rate. We found that the more complex models generally performed better over a range of scenarios. In particular, incorrectly specified covariance matrices could significantly bias the results but using models with a correct but more complicated than necessary covariance matrix incurred little cost.

  17. Evaluation of respiratory and cardiac motion correction schemes in dual gated PET/CT cardiac imaging.

    PubMed

    Lamare, F; Le Maitre, A; Dawood, M; Schäfers, K P; Fernandez, P; Rimoldi, O E; Visvikis, D

    2014-07-01

    Cardiac imaging suffers from both respiratory and cardiac motion. One of the proposed solutions involves double gated acquisitions. Although such an approach may lead to both respiratory and cardiac motion compensation there are issues associated with (a) the combination of data from cardiac and respiratory motion bins, and (b) poor statistical quality images as a result of using only part of the acquired data. The main objective of this work was to evaluate different schemes of combining binned data in order to identify the best strategy to reconstruct motion free cardiac images from dual gated positron emission tomography (PET) acquisitions. A digital phantom study as well as seven human studies were used in this evaluation. PET data were acquired in list mode (LM). A real-time position management system and an electrocardiogram device were used to provide the respiratory and cardiac motion triggers registered within the LM file. Acquired data were subsequently binned considering four and six cardiac gates, or the diastole only in combination with eight respiratory amplitude gates. PET images were corrected for attenuation, but no randoms nor scatter corrections were included. Reconstructed images from each of the bins considered above were subsequently used in combination with an affine or an elastic registration algorithm to derive transformation parameters allowing the combination of all acquired data in a particular position in the cardiac and respiratory cycles. Images were assessed in terms of signal-to-noise ratio (SNR), contrast, image profile, coefficient-of-variation (COV), and relative difference of the recovered activity concentration. Regardless of the considered motion compensation strategy, the nonrigid motion model performed better than the affine model, leading to higher SNR and contrast combined with a lower COV. Nevertheless, when compensating for respiration only, no statistically significant differences were observed in the performance of the two motion models considered. Superior image SNR and contrast were seen using the affine respiratory motion model in combination with the diastole cardiac bin in comparison to the use of the whole cardiac cycle. In contrast, when simultaneously correcting for cardiac beating and respiration, the elastic respiratory motion model outperformed the affine model. In this context, four cardiac bins associated with eight respiratory amplitude bins seemed to be adequate. Considering the compensation of respiratory motion effects only, both affine and elastic based approaches led to an accurate resizing and positioning of the myocardium. The use of the diastolic phase combined with an affine model based respiratory motion correction may therefore be a simple approach leading to significant quality improvements in cardiac PET imaging. However, the best performance was obtained with the combined correction for both cardiac and respiratory movements considering all the dual-gated bins independently through the use of an elastic model based motion compensation.
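
    The dual-gating step itself can be sketched as assigning each list-mode event to one of 4 cardiac phase bins and 8 respiratory amplitude bins; the simulated triggers and traces below are illustrative, and the registration and reconstruction stages are not shown.

```python
# Sketch of the dual-gating step only: each list-mode event is assigned to
# one of 4 cardiac phase bins (from ECG trigger times) and 8 respiratory
# amplitude bins (from the respiratory trace).  Registration of the binned
# images and reconstruction are not shown; all signals are simulated.
import numpy as np

rng = np.random.default_rng(9)
t_events = np.sort(rng.uniform(0.0, 60.0, 200_000))        # event times (s)
ecg_triggers = np.arange(0.0, 60.5, 0.9)                   # R-peaks every 0.9 s
resp_amp = np.sin(2 * np.pi * t_events / 4.5)              # respiratory amplitude at each event

# Cardiac phase: fraction of the R-R interval elapsed since the last trigger, cut into 4 bins.
idx = np.searchsorted(ecg_triggers, t_events, side="right") - 1
phase = (t_events - ecg_triggers[idx]) / 0.9
cardiac_bin = np.minimum((phase * 4).astype(int), 3)

# Respiratory amplitude: 8 equal-count amplitude bins over the observed range.
edges = np.quantile(resp_amp, np.linspace(0.0, 1.0, 9))
resp_bin = np.clip(np.searchsorted(edges, resp_amp, side="right") - 1, 0, 7)

counts = np.zeros((4, 8), dtype=int)
np.add.at(counts, (cardiac_bin, resp_bin), 1)
print("events per (cardiac bin, respiratory bin):")
print(counts)
```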

  18. Towards a Clinical Decision Support System for External Beam Radiation Oncology Prostate Cancer Patients: Proton vs. Photon Radiotherapy? A Radiobiological Study of Robustness and Stability

    PubMed Central

    Walsh, Seán; Roelofs, Erik; Kuess, Peter; van Wijk, Yvonka; Lambin, Philippe; Jones, Bleddyn; Verhaegen, Frank

    2018-01-01

We present a methodology which can be utilized to select proton or photon radiotherapy in prostate cancer patients. Four state-of-the-art competing treatment modalities were compared (by way of an in silico trial) for a cohort of 25 prostate cancer patients, with and without correction strategies for prostate displacements. Metrics measured from clinical image guidance systems were used. Three correction strategies were investigated: no-correction, extended-no-action-limit, and online-correction. Clinical efficacy was estimated via radiobiological models incorporating robustness (how probable it was that a given treatment plan was delivered) and stability (the consistency between the probable best and worst delivered treatments at the 95% confidence limit). The results obtained at the cohort level enabled the determination of a threshold for likely clinical benefit at the individual level. Depending on the imaging system and correction strategy, 24%, 32% and 44% of patients were identified as suitable candidates for proton therapy. Within the constraints of this study, intensity-modulated proton therapy with online-correction was on average the most effective modality. Irrespective of the imaging system, each treatment modality is similar in terms of robustness, with and without the correction strategies. Conversely, there is substantial variation in stability between the treatment modalities, which is greatly reduced by correction strategies. This study provides a ‘proof-of-concept’ methodology to enable the prospective identification of individual patients that will most likely (above a certain threshold) benefit from proton therapy. PMID:29463018

  19. Balanced cortical microcircuitry for spatial working memory based on corrective feedback control.

    PubMed

    Lim, Sukbin; Goldman, Mark S

    2014-05-14

    A hallmark of working memory is the ability to maintain graded representations of both the spatial location and amplitude of a memorized stimulus. Previous work has identified a neural correlate of spatial working memory in the persistent maintenance of spatially specific patterns of neural activity. How such activity is maintained by neocortical circuits remains unknown. Traditional models of working memory maintain analog representations of either the spatial location or the amplitude of a stimulus, but not both. Furthermore, although most previous models require local excitation and lateral inhibition to maintain spatially localized persistent activity stably, the substrate for lateral inhibitory feedback pathways is unclear. Here, we suggest an alternative model for spatial working memory that is capable of maintaining analog representations of both the spatial location and amplitude of a stimulus, and that does not rely on long-range feedback inhibition. The model consists of a functionally columnar network of recurrently connected excitatory and inhibitory neural populations. When excitation and inhibition are balanced in strength but offset in time, drifts in activity trigger spatially specific negative feedback that corrects memory decay. The resulting networks can temporally integrate inputs at any spatial location, are robust against many commonly considered perturbations in network parameters, and, when implemented in a spiking model, generate irregular neural firing characteristic of that observed experimentally during persistent activity. This work suggests balanced excitatory-inhibitory memory circuits implementing corrective negative feedback as a substrate for spatial working memory. Copyright © 2014 the authors 0270-6474/14/346790-17$15.00/0.

  20. 9 CFR 417.3 - Corrective actions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 9 Animals and Animal Products 2 2011-01-01 2011-01-01 false Corrective actions. 417.3 Section 417... ANALYSIS AND CRITICAL CONTROL POINT (HACCP) SYSTEMS § 417.3 Corrective actions. (a) The written HACCP plan shall identify the corrective action to be followed in response to a deviation from a critical limit...

  1. Pattern of students' conceptual change on magnetic field based on students' mental models

    NASA Astrophysics Data System (ADS)

    Hamid, Rimba; Widodo, Ari; Sopandi, Wahyu

    2017-05-01

Students' understanding of natural phenomena can be identified by analyzing their mental models. Changes in students' mental models are a good indicator of students' conceptual change. This research aims at identifying students' conceptual change by analyzing changes in students' mental models. Participants of the study were twenty-five elementary school students. Data were collected throughout the lessons (prior to, during and after the lessons) based on students' written responses and individual interviews. Lessons were designed to facilitate students' conceptual change by allowing students to work in groups of students who held similar ideas. Therefore, lessons were student-directed. Changes of students' ideas in every stage of the lessons were identified and analyzed. The results showed that there are three patterns of students' mental models, namely scientific (44%), analogous to everyday life (52%), and intuitive (4%). Further analyses of the pattern of their conceptual change identify six different patterns, i.e. consistently correct (20%), consistently incomplete (16%), changing from incorrect to incomplete (8%), changing from incomplete to complete (32%), changing from complete to incorrect (4%), and changing from incorrect to complete (4%). This study suggests that the process of learning science does not move in a linear and progressive way; rather, it moves randomly and may shift backward and forward.

  2. Predicting waist circumference from body mass index.

    PubMed

    Bozeman, Samuel R; Hoaglin, David C; Burton, Tanya M; Pashos, Chris L; Ben-Joseph, Rami H; Hollenbeak, Christopher S

    2012-08-03

Being overweight or obese increases risk for cardiometabolic disorders. Although both body mass index (BMI) and waist circumference (WC) measure the level of overweight and obesity, WC may be more important because of its closer relationship to total body fat. Because WC is typically not assessed in clinical practice, this study sought to develop and verify a model to predict WC from BMI and demographic data, and to use the predicted WC to assess cardiometabolic risk. Data were obtained from the Third National Health and Nutrition Examination Survey (NHANES) and the Atherosclerosis Risk in Communities Study (ARIC). We developed linear regression models for men and women using NHANES data, fitting waist circumference as a function of BMI. For validation, those regressions were applied to ARIC data, assigning a predicted WC to each individual. We used the predicted WC to assess abdominal obesity and cardiometabolic risk. The model correctly classified 88.4% of NHANES subjects with respect to abdominal obesity. Median differences between actual and predicted WC were -0.07 cm for men and 0.11 cm for women. In ARIC, the model closely estimated the observed WC (median difference: -0.34 cm for men, +3.94 cm for women), correctly classifying 86.1% of ARIC subjects with respect to abdominal obesity and 91.5% to 99.5% with respect to cardiometabolic risk. The model is generalizable to Caucasian and African-American adult populations because it was constructed from data on a large, population-based sample of men and women in the United States, and then validated in a population with a larger representation of African-Americans. The model accurately estimates WC and identifies cardiometabolic risk. It should be useful for health care practitioners and public health officials who wish to identify individuals and populations at risk for cardiometabolic disease when WC data are unavailable.
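
    The regression step described above is straightforward to reproduce. The sketch below, assuming hypothetical predictor choices (BMI and age) and synthetic training data in place of NHANES, fits a least-squares model for WC and applies a standard abdominal-obesity cut-off; it illustrates the approach, not the published coefficients.

```python
import numpy as np

# Minimal sketch: fit WC ~ intercept + BMI + age by least squares, then
# predict WC for new individuals. Data and predictor choice are synthetic
# stand-ins for the NHANES-derived, sex-specific models described above.

def fit_wc_model(bmi, age, wc):
    X = np.column_stack([np.ones_like(bmi), bmi, age])
    coef, *_ = np.linalg.lstsq(X, wc, rcond=None)
    return coef

def predict_wc(coef, bmi, age):
    return coef[0] + coef[1] * bmi + coef[2] * age

rng = np.random.default_rng(0)
bmi = rng.normal(28.0, 5.0, 500)                   # synthetic "training" cohort
age = rng.uniform(20.0, 75.0, 500)
wc = 15.0 + 2.3 * bmi + 0.1 * age + rng.normal(0.0, 4.0, 500)

coef = fit_wc_model(bmi, age, wc)
wc_hat = predict_wc(coef, bmi=32.0, age=50.0)
abdominal_obesity = wc_hat >= 102.0                # common male cut-off, cm
print(f"predicted WC: {wc_hat:.1f} cm, abdominal obesity: {abdominal_obesity}")
```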

  3. A novel iterative mixed model to remap three complex orthopedic traits in dogs

    PubMed Central

    Huang, Meng; Hayward, Jessica J.; Corey, Elizabeth; Garrison, Susan J.; Wagner, Gabriela R.; Krotscheck, Ursula; Hayashi, Kei; Schweitzer, Peter A.; Lust, George; Boyko, Adam R.; Todhunter, Rory J.

    2017-01-01

Hip dysplasia (HD), elbow dysplasia (ED), and rupture of the cranial (anterior) cruciate ligament (RCCL) are the most common complex orthopedic traits of dogs and all result in debilitating osteoarthritis. We reanalyzed previously reported data: the Norberg angle (a quantitative measure of HD) in 921 dogs, ED in 113 cases and 633 controls, and RCCL in 271 cases and 399 controls, and their genotypes at ~185,000 single nucleotide polymorphisms. A novel fixed and random model with a circulating probability unification (FarmCPU) function, with marker-based principal components and a kinship matrix to correct for population stratification, was used. A Bonferroni correction at p < 0.01 resulted in a threshold of P < 6.96 × 10−8. Six loci were identified: three for HD and three for RCCL. An associated locus at CFA28:34,369,342 for HD was described previously in the same dogs using a conventional mixed model. No loci had been identified for RCCL in the previous report, and the two loci reported previously for ED did not reach genome-wide significance using the FarmCPU model. These results were supported by simulation, which demonstrated that the FarmCPU held no power advantage over the linear mixed model for the ED sample but provided additional power for the HD and RCCL samples. Candidate genes for HD and RCCL are discussed. When using FarmCPU software, we recommend a resampling test, the use of a positive control to determine the optimum pseudo quantitative trait nucleotide-based covariate structure of the model, and a negative control consisting of permutation testing and the same resampling test applied to the non-permuted phenotypes. PMID:28614352
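
    The genome-wide threshold quoted above follows the usual Bonferroni logic of dividing the familywise error rate by the number of (effective) independent tests. A small check follows, in which the effective test count is back-calculated from the reported threshold rather than taken from the paper.

```python
# Bonferroni-style threshold: alpha divided by the number of effective tests.
# The reported threshold (P < 6.96e-8 at alpha = 0.01) implies an effective
# test count of roughly 1.4e5; the raw marker count was ~185,000.
alpha = 0.01
n_effective = round(alpha / 6.96e-8)      # back-calculated, ~143,678 tests
threshold = alpha / n_effective
print(f"genome-wide significance: P < {threshold:.2e}")
```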

  4. MRI-alone radiation therapy planning for prostate cancer: Automatic fiducial marker detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghose, Soumya, E-mail: soumya.ghose@case.edu; Mitra, Jhimli; Rivest-Hénault, David

Purpose: The feasibility of radiation therapy treatment planning using substitute computed tomography (sCT) generated from magnetic resonance images (MRIs) has been demonstrated by a number of research groups. One challenge with an MRI-alone workflow is the accurate identification of intraprostatic gold fiducial markers, which are frequently used for prostate localization prior to each dose delivery fraction. This paper investigates a template-matching approach for the detection of these seeds in MRI. Methods: Two different gradient echo T1 and T2* weighted MRI sequences were acquired from fifteen prostate cancer patients and evaluated for seed detection. For training, seed templates from manual contours were selected in a spectral clustering manifold learning framework. This aids in clustering “similar” gold fiducial markers together. The marker with the minimum distance to a cluster centroid was selected as the representative template of that cluster during training. During testing, Gaussian mixture modeling followed by a Markovian model was used in automatic detection of the probable candidates. The probable candidates were rigidly registered to the templates identified from spectral clustering, and a similarity metric was computed for ranking and detection. Results: A fiducial detection accuracy of 95% was obtained compared to manual observations. Expert radiation therapist observers were able to correctly identify all three implanted seeds on 11 of the 15 scans (the proposed method correctly identified all seeds on 10 of the 15). Conclusions: A novel automatic framework for gold fiducial marker detection in MRI is proposed and evaluated with detection accuracies comparable to manual detection. When radiation therapists are unable to determine the seed location in MRI, they refer back to the planning CT (only available in the existing clinical framework); similarly, an automatic quality control is built into the automatic software to ensure that all gold seeds are either correctly detected or a warning is raised for further manual intervention.

  5. MRI-alone radiation therapy planning for prostate cancer: Automatic fiducial marker detection.

    PubMed

    Ghose, Soumya; Mitra, Jhimli; Rivest-Hénault, David; Fazlollahi, Amir; Stanwell, Peter; Pichler, Peter; Sun, Jidi; Fripp, Jurgen; Greer, Peter B; Dowling, Jason A

    2016-05-01

The feasibility of radiation therapy treatment planning using substitute computed tomography (sCT) generated from magnetic resonance images (MRIs) has been demonstrated by a number of research groups. One challenge with an MRI-alone workflow is the accurate identification of intraprostatic gold fiducial markers, which are frequently used for prostate localization prior to each dose delivery fraction. This paper investigates a template-matching approach for the detection of these seeds in MRI. Two different gradient echo T1 and T2* weighted MRI sequences were acquired from fifteen prostate cancer patients and evaluated for seed detection. For training, seed templates from manual contours were selected in a spectral clustering manifold learning framework. This aids in clustering "similar" gold fiducial markers together. The marker with the minimum distance to a cluster centroid was selected as the representative template of that cluster during training. During testing, Gaussian mixture modeling followed by a Markovian model was used in automatic detection of the probable candidates. The probable candidates were rigidly registered to the templates identified from spectral clustering, and a similarity metric was computed for ranking and detection. A fiducial detection accuracy of 95% was obtained compared to manual observations. Expert radiation therapist observers were able to correctly identify all three implanted seeds on 11 of the 15 scans (the proposed method correctly identified all seeds on 10 of the 15). A novel automatic framework for gold fiducial marker detection in MRI is proposed and evaluated with detection accuracies comparable to manual detection. When radiation therapists are unable to determine the seed location in MRI, they refer back to the planning CT (only available in the existing clinical framework); similarly, an automatic quality control is built into the automatic software to ensure that all gold seeds are either correctly detected or a warning is raised for further manual intervention.

  6. The importance of topographically corrected null models for analyzing ecological point processes.

    PubMed

    McDowall, Philip; Lynch, Heather J

    2017-07-01

    Analyses of point process patterns and related techniques (e.g., MaxEnt) make use of the expected number of occurrences per unit area and second-order statistics based on the distance between occurrences. Ecologists working with point process data often assume that points exist on a two-dimensional x-y plane or within a three-dimensional volume, when in fact many observed point patterns are generated on a two-dimensional surface existing within three-dimensional space. For many surfaces, however, such as the topography of landscapes, the projection from the surface to the x-y plane preserves neither area nor distance. As such, when these point patterns are implicitly projected to and analyzed in the x-y plane, our expectations of the point pattern's statistical properties may not be met. When used in hypothesis testing, we find that the failure to account for the topography of the generating surface may bias statistical tests that incorrectly identify clustering and, furthermore, may bias coefficients in inhomogeneous point process models that incorporate slope as a covariate. We demonstrate the circumstances under which this bias is significant, and present simple methods that allow point processes to be simulated with corrections for topography. These point patterns can then be used to generate "topographically corrected" null models against which observed point processes can be compared. © 2017 by the Ecological Society of America.
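
    One simple way to build the "topographically corrected" null models described above is to simulate complete spatial randomness on the terrain surface itself, weighting each DEM cell by its true (slope-inflated) surface area rather than its projected x-y area. The sketch below uses a synthetic DEM and a basic gradient-based slope estimate; it illustrates the idea, not the authors' implementation.

```python
import numpy as np

# Simulate CSR on a terrain surface: cell selection probability is proportional
# to true surface area = projected area / cos(slope). DEM is synthetic.
rng = np.random.default_rng(1)
cell = 30.0                                        # cell size (m)
xs = np.linspace(0.0, 3.0 * np.pi, 200)
dem = 1000.0 + 150.0 * np.sin(xs)[None, :] * np.cos(xs)[:, None]

gy, gx = np.gradient(dem, cell)                    # elevation gradients (m/m)
slope = np.arctan(np.hypot(gx, gy))                # slope angle per cell
surface_area = cell ** 2 / np.cos(slope)           # slope-inflated cell area

n_points = 1000
p = (surface_area / surface_area.sum()).ravel()
idx = rng.choice(p.size, size=n_points, p=p)       # sample cells by surface area
rows, cols = np.unravel_index(idx, dem.shape)
x = (cols + rng.random(n_points)) * cell           # jitter within each cell
y = (rows + rng.random(n_points)) * cell
```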

  7. An experimental comparison of ETM+ image geometric correction methods in the mountainous areas of Yunnan Province, China

    NASA Astrophysics Data System (ADS)

    Wang, Jinliang; Wu, Xuejiao

    2010-11-01

Geometric correction of imagery is a basic application of remote sensing technology. Its precision directly affects the accuracy and reliability of subsequent applications. The accuracy of geometric correction depends on many factors, including the correction model used, the accuracy of the reference map, the number and spatial distribution of ground control points (GCPs), and the resampling method. An ETM+ image of the Kunming Dianchi Lake Basin and 1:50000 geographical maps were used to compare different correction methods. The results showed that: (1) The correction errors were more than one pixel, and in some cases several pixels, when the polynomial model was used; the correction accuracy was not stable when the Delaunay model was used; and the correction errors were less than one pixel when the collinearity equation was used. (2) When 6, 9, 25 and 35 GCPs were selected randomly for geometric correction with the polynomial model, the best result was obtained with 25 GCPs. (3) Among the resampling methods, nearest neighbor gave the best image contrast and the fastest resampling but poor continuity of pixel gray values, while cubic convolution gave the worst contrast and the longest computation time. According to these results, bilinear resampling gave the best overall result.
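
    For reference, the polynomial correction discussed above amounts to a least-squares fit of map coordinates as polynomials of image coordinates at the GCPs. The sketch below uses a first-order (affine) polynomial and synthetic GCPs; higher orders simply add columns to the design matrix.

```python
import numpy as np

# First-order polynomial (affine) geometric correction fitted from GCPs.
# GCP coordinates are synthetic placeholders.
img_xy = np.array([[120, 340], [860, 220], [400, 910],
                   [700, 650], [150, 800], [950, 900]], dtype=float)
map_xy = np.array([[102350., 254120.], [124560., 257800.], [110900., 236400.],
                   [119850., 244300.], [103900., 240100.], [127200., 236900.]])

cols, rows = img_xy[:, 0], img_xy[:, 1]
A = np.column_stack([np.ones_like(cols), cols, rows])       # 1st-order design matrix
coef_e, *_ = np.linalg.lstsq(A, map_xy[:, 0], rcond=None)   # easting fit
coef_n, *_ = np.linalg.lstsq(A, map_xy[:, 1], rcond=None)   # northing fit

pred = np.column_stack([A @ coef_e, A @ coef_n])
rmse = np.sqrt(np.mean(np.sum((pred - map_xy) ** 2, axis=1)))
print(f"GCP RMSE of the affine fit: {rmse:.2f} map units")
```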

  8. Entanglement entropy at infinite-randomness fixed points in higher dimensions.

    PubMed

    Lin, Yu-Cheng; Iglói, Ferenc; Rieger, Heiko

    2007-10-05

    The entanglement entropy of the two-dimensional random transverse Ising model is studied with a numerical implementation of the strong-disorder renormalization group. The asymptotic behavior of the entropy per surface area diverges at, and only at, the quantum phase transition that is governed by an infinite-randomness fixed point. Here we identify a double-logarithmic multiplicative correction to the area law for the entanglement entropy. This contrasts with the pure area law valid at the infinite-randomness fixed point in the diluted transverse Ising model in higher dimensions.

  9. A compact quantum correction model for symmetric double gate metal-oxide-semiconductor field-effect transistor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, Edward Namkyu; Shin, Yong Hyeon; Yun, Ilgu, E-mail: iyun@yonsei.ac.kr

    2014-11-07

A compact quantum correction model for a symmetric double gate (DG) metal-oxide-semiconductor field-effect transistor (MOSFET) is investigated. The compact quantum correction model is proposed from the concepts of the threshold voltage shift (ΔV_TH^QM) and the gate capacitance (C_g) degradation. First of all, ΔV_TH^QM induced by quantum mechanical (QM) effects is modeled. The C_g degradation is then modeled by introducing the inversion layer centroid. With ΔV_TH^QM and the C_g degradation, the QM effects are implemented in the previously reported classical model and a comparison between the proposed quantum correction model and numerical simulation results is presented. Based on the results, the proposed quantum correction model can be applicable to the compact model of the DG MOSFET.

  10. Evaluation of Bias Correction Method for Satellite-Based Rainfall Data

    PubMed Central

    Bhatti, Haris Akram; Rientjes, Tom; Haile, Alemseged Tamiru; Habib, Emad; Verhoef, Wouter

    2016-01-01

With the advances in remote sensing technology, satellite-based rainfall estimates are gaining attraction in the field of hydrology, particularly in rainfall-runoff modeling. Since the estimates are affected by errors, correction is required. In this study, we tested the high resolution National Oceanic and Atmospheric Administration’s (NOAA) Climate Prediction Centre (CPC) morphing technique (CMORPH) satellite rainfall product in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km-30 min resolution were aggregated to daily totals to match in-situ observations for the period 2003–2010. Study objectives are to assess the bias of the satellite estimates, to identify the optimum window size for application of bias correction and to test the effectiveness of bias correction. Bias correction factors are calculated for moving window (MW) sizes and for sequential windows (SWs) of 3, 5, 7, 9, …, 31 days with the aim to assess the error distribution between the in-situ observations and CMORPH estimates. We tested forward, central and backward window (FW, CW and BW) schemes to assess the effect of time integration on accumulated rainfall. Accuracy of cumulative rainfall depth is assessed by Root Mean Squared Error (RMSE). To systematically correct all CMORPH estimates, station-based bias factors are spatially interpolated to yield a bias factor map. Reliability of the interpolation is assessed by cross validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to result in bias-corrected CMORPH estimates. Findings are evaluated by RMSE, correlation coefficient (r) and standard deviation (SD). Results showed the existence of bias in the CMORPH rainfall. It is found that the 7-day SW approach performs best for bias correction of CMORPH rainfall. The outcome of this study showed the efficiency of our bias correction approach. PMID:27314363
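
    The core of the sequential-window scheme described above is a multiplicative bias factor per station and window: the ratio of accumulated gauge rainfall to accumulated CMORPH rainfall. A minimal sketch with synthetic daily data and a 7-day sequential window follows; the spatial interpolation of station factors into a bias-factor map is omitted.

```python
import numpy as np

# Multiplicative bias factors for non-overlapping 7-day sequential windows,
# applied back to the satellite series. Data are synthetic.
def sequential_bias_factors(gauge, sat, window=7, eps=0.1):
    n = len(gauge) // window * window
    g = gauge[:n].reshape(-1, window).sum(axis=1)
    s = sat[:n].reshape(-1, window).sum(axis=1)
    return g / np.maximum(s, eps)          # guard against dry windows

rng = np.random.default_rng(2)
gauge = rng.gamma(shape=0.8, scale=6.0, size=365)                # daily gauge (mm)
cmorph = (0.7 * gauge + rng.normal(0.0, 1.5, 365)).clip(min=0)   # biased satellite

factors = sequential_bias_factors(gauge, cmorph, window=7)
corrected = cmorph.copy()
for k, f in enumerate(factors):            # apply each factor to its own window
    corrected[k * 7:(k + 1) * 7] *= f
```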

  11. Bias correction of satellite-based rainfall data

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Biswa; Solomatine, Dimitri

    2015-04-01

    Limitation in hydro-meteorological data availability in many catchments limits the possibility of reliable hydrological analyses especially for near-real-time predictions. However, the variety of satellite based and meteorological model products for rainfall provides new opportunities. Often times the accuracy of these rainfall products, when compared to rain gauge measurements, is not impressive. The systematic differences of these rainfall products from gauge observations can be partially compensated by adopting a bias (error) correction. Many of such methods correct the satellite based rainfall data by comparing their mean value to the mean value of rain gauge data. Refined approaches may also first find out a suitable time scale at which different data products are better comparable and then employ a bias correction at that time scale. More elegant methods use quantile-to-quantile bias correction, which however, assumes that the available (often limited) sample size can be useful in comparing probabilities of different rainfall products. Analysis of rainfall data and understanding of the process of its generation reveals that the bias in different rainfall data varies in space and time. The time aspect is sometimes taken into account by considering the seasonality. In this research we have adopted a bias correction approach that takes into account the variation of rainfall in space and time. A clustering based approach is employed in which every new data point (e.g. of Tropical Rainfall Measuring Mission (TRMM)) is first assigned to a specific cluster of that data product and then, by identifying the corresponding cluster of gauge data, the bias correction specific to that cluster is adopted. The presented approach considers the space-time variation of rainfall and as a result the corrected data is more realistic. Keywords: bias correction, rainfall, TRMM, satellite rainfall
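
    As a concrete example of the quantile-to-quantile correction mentioned above, each satellite value can be mapped to the gauge value at the same empirical quantile. In the clustering-based scheme this mapping would be built separately for each cluster; the sketch below shows a single mapping on synthetic data.

```python
import numpy as np

# Empirical quantile mapping: find the value's quantile in the satellite
# sample, then return the gauge value at that quantile. Data are synthetic.
def quantile_map(value, sat_sample, gauge_sample):
    q = np.searchsorted(np.sort(sat_sample), value) / len(sat_sample)
    return np.quantile(gauge_sample, np.clip(q, 0.0, 1.0))

rng = np.random.default_rng(3)
gauge_sample = rng.gamma(0.9, 7.0, 2000)         # historical gauge rainfall
sat_sample = 0.6 * rng.gamma(0.9, 7.0, 2000)     # systematically low product

corrected = [quantile_map(v, sat_sample, gauge_sample) for v in sat_sample[:10]]
print(np.round(corrected, 2))
```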

  12. Evaluation of Bias Correction Method for Satellite-Based Rainfall Data.

    PubMed

    Bhatti, Haris Akram; Rientjes, Tom; Haile, Alemseged Tamiru; Habib, Emad; Verhoef, Wouter

    2016-06-15

With the advances in remote sensing technology, satellite-based rainfall estimates are gaining attraction in the field of hydrology, particularly in rainfall-runoff modeling. Since the estimates are affected by errors, correction is required. In this study, we tested the high resolution National Oceanic and Atmospheric Administration's (NOAA) Climate Prediction Centre (CPC) morphing technique (CMORPH) satellite rainfall product in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km-30 min resolution were aggregated to daily totals to match in-situ observations for the period 2003-2010. Study objectives are to assess the bias of the satellite estimates, to identify the optimum window size for application of bias correction and to test the effectiveness of bias correction. Bias correction factors are calculated for moving window (MW) sizes and for sequential windows (SWs) of 3, 5, 7, 9, …, 31 days with the aim to assess the error distribution between the in-situ observations and CMORPH estimates. We tested forward, central and backward window (FW, CW and BW) schemes to assess the effect of time integration on accumulated rainfall. Accuracy of cumulative rainfall depth is assessed by Root Mean Squared Error (RMSE). To systematically correct all CMORPH estimates, station-based bias factors are spatially interpolated to yield a bias factor map. Reliability of the interpolation is assessed by cross validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to result in bias-corrected CMORPH estimates. Findings are evaluated by RMSE, correlation coefficient (r) and standard deviation (SD). Results showed the existence of bias in the CMORPH rainfall. It is found that the 7-day SW approach performs best for bias correction of CMORPH rainfall. The outcome of this study showed the efficiency of our bias correction approach.

  13. Empirical Derivation of Correction Factors for Human Spiral Ganglion Cell Nucleus and Nucleolus Count Units.

    PubMed

    Robert, Mark E; Linthicum, Fred H

    2016-01-01

The profile count method for estimating cell number in sectioned tissue applies a correction factor for double counting (resulting from transection during sectioning) of the count units selected to represent the cell. For human spiral ganglion cell counts, we attempted to address apparent confusion between published correction factors for nucleus and nucleolus count units, which are identical despite the role of count unit diameter in a commonly used correction factor formula. We examined a portion of human cochlea to empirically derive correction factors for the two count units, using 3-dimensional reconstruction software to identify double counts. The study was conducted at the Neurotology and House Histological Temporal Bone Laboratory at the University of California, Los Angeles. Using a fully sectioned and stained human temporal bone, we identified and generated digital images of sections of the modiolar region of the lower first turn of the cochlea, identified count units with a light microscope, labeled them on corresponding digital sections, and used 3-dimensional reconstruction software to identify double-counted count units. For 25 consecutive sections, we determined that the double-count correction factors for the nucleus count unit (0.91) and the nucleolus count unit (0.92) matched the published factors. We discovered that nuclei, and therefore spiral ganglion cells, were undercounted by 6.3% when using nucleolus count units. We determined that correction factors for count units must include an element for undercounting spiral ganglion cells as well as the double-count element. We recommend a correction factor of 0.91 for the nucleus count unit and 0.98 for the nucleolus count unit when using 20-µm sections. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.
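
    A small worked check of the recommended factors, as we read the abstract (not necessarily the authors' exact derivation): the final factor combines the double-count element with an element compensating for cells missed when the chosen count unit does not appear in any section.

```python
# Combine the double-count factor with an undercount compensation:
# estimated cells = raw count * double_count_factor / (1 - undercount_fraction)
def combined_factor(double_count_factor, undercount_fraction):
    return double_count_factor / (1.0 - undercount_fraction)

nucleus_factor = combined_factor(0.91, 0.000)     # nuclei: no undercount element
nucleolus_factor = combined_factor(0.92, 0.063)   # nucleoli miss ~6.3% of cells
print(round(nucleus_factor, 2), round(nucleolus_factor, 2))   # 0.91 and 0.98
```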

  14. Generation of future potential scenarios in an Alpine Catchment by applying bias-correction techniques, delta-change approaches and stochastic Weather Generators at different spatial scale. Analysis of their influence on basic and drought statistics.

    NASA Astrophysics Data System (ADS)

    Collados-Lara, Antonio-Juan; Pulido-Velazquez, David; Pardo-Iguzquiza, Eulogio

    2017-04-01

Assessing impacts of potential future climate change scenarios in precipitation and temperature is essential to design adaptive strategies in water resources systems. The objective of this work is to analyze the possibilities of different statistical downscaling methods to generate future potential scenarios in an Alpine Catchment from historical data and the available climate model simulations performed in the frame of the CORDEX EU project. The initial information employed to define these downscaling approaches is the historical climatic data (taken from the Spain02 project for the period 1971-2000 with a spatial resolution of 12.5 km) and the future series provided by climate models for the horizon period 2071-2100. We have used information coming from nine climate model simulations (obtained from five different regional climate models (RCMs) nested within four different global climate models (GCMs)) from the European CORDEX project. In our application we have focused on the Representative Concentration Pathway (RCP) 8.5 emissions scenario, which is the most unfavorable scenario considered in the Fifth Assessment Report (AR5) of the Intergovernmental Panel on Climate Change (IPCC). For each RCM we have generated future climate series for the period 2071-2100 by applying two different approaches, bias correction and delta change, and five different transformation techniques (first moment correction, first and second moment correction, regression functions, quantile mapping using distribution-derived transformation and quantile mapping using empirical quantiles) for both of them. Ensembles of the obtained series were proposed to obtain more representative potential future climate scenarios to be employed to study potential impacts. In this work we propose an unequally weighted combination of the future series, giving more weight to those coming from models (delta-change approaches) or combinations of models and techniques that provide a better approximation to the basic and drought statistics of the historical data. A multi-objective analysis using basic statistics (mean, standard deviation and asymmetry coefficient) and drought statistics (duration, magnitude and intensity) has been performed to identify which models are better in terms of goodness of fit in reproducing the historical series. The drought statistics have been obtained from the Standardized Precipitation Index (SPI) series using the Theory of Runs. This analysis allows us to discriminate the best RCM and the best combination of model and correction technique in the bias-correction method. We have also analyzed the possibilities of using different stochastic weather generators to approximate the basic and drought statistics of the historical series. These analyses have been performed in our case study in a lumped and in a distributed way in order to assess their sensitivity to the spatial scale. The statistics of the future temperature series obtained with the different ensemble options are quite homogeneous, but the precipitation shows a higher sensitivity to the adopted method and spatial scale. The global increments in the mean temperature values are 31.79%, 31.79%, 31.03% and 31.74% for the distributed bias-correction, distributed delta-change, lumped bias-correction and lumped delta-change ensembles respectively, and for precipitation they are -25.48%, -28.49%, -26.42% and -27.35% respectively.
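
    The difference between the two scenario-generation approaches named above is where the correction is applied. A minimal sketch using the simplest transformation (first-moment correction) on synthetic monthly precipitation follows; the full analysis works per RCM-GCM chain and also uses variance correction, regression functions and quantile mapping.

```python
import numpy as np

# Bias correction corrects the future model run with the historical model bias;
# delta change perturbs the observed series with the model's change signal.
rng = np.random.default_rng(4)
obs_hist = rng.gamma(2.0, 30.0, 360)   # observed control period (e.g. 1971-2000)
mod_hist = rng.gamma(2.0, 24.0, 360)   # RCM control run, same period
mod_fut = rng.gamma(2.0, 20.0, 360)    # RCM scenario run (e.g. 2071-2100, RCP8.5)

bias_corrected_future = mod_fut * (obs_hist.mean() / mod_hist.mean())
delta_change_future = obs_hist * (mod_fut.mean() / mod_hist.mean())

print(f"bias-corrected mean: {bias_corrected_future.mean():.1f}")
print(f"delta-change mean:   {delta_change_future.mean():.1f}")
```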
Acknowledgments: This research work has been partially supported by the GESINHIMPADAPT project (CGL2013-48424-C2-2-R) with Spanish MINECO funds. We would also like to thank Spain02 and CORDEX projects for the data provided for this study and the R package qmap.

  15. Analysis of different models for atmospheric correction of meteosat infrared images. A new approach

    NASA Astrophysics Data System (ADS)

    Pérez, A. M.; Illera, P.; Casanova, J. L.

A comparative study of several atmospheric correction models has been carried out. As primary data, atmospheric profiles of temperature and humidity obtained from radiosoundings on cloud-free days have been used. Special attention has been paid to the model used operationally at the European Space Operations Centre (ESOC) for sea temperature calculations. The atmospheric correction results are expressed in terms of the increase in the brightness temperature and the surface temperature. Differences of up to 1.4 degrees between the corrections obtained with the studied models have been observed. The radiances calculated by the models are also compared with those obtained directly from the satellite. The temperature corrections derived from the latter are greater than those from the former in practically every case. As a result, the operational calibration coefficients should first be recalculated if an atmospheric correction model is to be applied to the satellite data. Finally, a new simplified calculation scheme which may be introduced into any model is proposed.

  16. Caliber Corrected Markov Modeling (C2M2): Correcting Equilibrium Markov Models.

    PubMed

    Dixit, Purushottam D; Dill, Ken A

    2018-02-13

Rate processes are often modeled using Markov State Models (MSMs). Suppose you know a prior MSM and then learn that your prediction of some particular observable rate is wrong. What is the best way to correct the whole MSM? For example, molecular dynamics simulations of protein folding may sample many microstates, possibly giving correct pathways through them while also giving the wrong overall folding rate when compared to experiment. Here, we describe Caliber Corrected Markov Modeling (C2M2), an approach based on the principle of maximum entropy for updating a Markov model by imposing state- and trajectory-based constraints. We show that such corrections are equivalent to asserting position-dependent diffusion coefficients in continuous-time continuous-space Markov processes modeled by a Smoluchowski equation. We derive the functional form of the diffusion coefficient explicitly in terms of the trajectory-based constraints. We illustrate with examples of 2D particle diffusion and an overdamped harmonic oscillator.

  17. Using risk-adjustment models to identify high-cost risks.

    PubMed

    Meenan, Richard T; Goodman, Michael J; Fishman, Paul A; Hornbrook, Mark C; O'Keeffe-Rosetti, Maureen C; Bachman, Donald J

    2003-11-01

    We examine the ability of various publicly available risk models to identify high-cost individuals and enrollee groups using multi-HMO administrative data. Five risk-adjustment models (the Global Risk-Adjustment Model [GRAM], Diagnostic Cost Groups [DCGs], Adjusted Clinical Groups [ACGs], RxRisk, and Prior-expense) were estimated on a multi-HMO administrative data set of 1.5 million individual-level observations for 1995-1996. Models produced distributions of individual-level annual expense forecasts for comparison to actual values. Prespecified "high-cost" thresholds were set within each distribution. The area under the receiver operating characteristic curve (AUC) for "high-cost" prevalences of 1% and 0.5% was calculated, as was the proportion of "high-cost" dollars correctly identified. Results are based on a separate 106,000-observation validation dataset. For "high-cost" prevalence targets of 1% and 0.5%, ACGs, DCGs, GRAM, and Prior-expense are very comparable in overall discrimination (AUCs, 0.83-0.86). Given a 0.5% prevalence target and a 0.5% prediction threshold, DCGs, GRAM, and Prior-expense captured $963,000 (approximately 3%) more "high-cost" sample dollars than other models. DCGs captured the most "high-cost" dollars among enrollees with asthma, diabetes, and depression; predictive performance among demographic groups (Medicaid members, members over 64, and children under 13) varied across models. Risk models can efficiently identify enrollees who are likely to generate future high costs and who could benefit from case management. The dollar value of improved prediction performance of the most accurate risk models should be meaningful to decision-makers and encourage their broader use for identifying high costs.
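
    The two headline metrics above, discrimination at a fixed "high-cost" prevalence and the share of high-cost dollars captured, can be reproduced as follows on synthetic expense data; the risk scores here are placeholders for the model forecasts.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# AUC for identifying the top 1% most expensive members, plus the fraction of
# their dollars captured when flagging the same share by predicted expense.
rng = np.random.default_rng(5)
actual = rng.lognormal(mean=7.0, sigma=1.2, size=100_000)      # annual expense
forecast = actual * rng.lognormal(0.0, 0.8, size=actual.size)  # noisy risk score

prevalence = 0.01
is_high_cost = actual >= np.quantile(actual, 1.0 - prevalence)
auc = roc_auc_score(is_high_cost, forecast)

flagged = forecast >= np.quantile(forecast, 1.0 - prevalence)
dollars_captured = actual[flagged & is_high_cost].sum() / actual[is_high_cost].sum()
print(f"AUC: {auc:.2f}, high-cost dollars captured: {dollars_captured:.1%}")
```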

  18. Improving global estimates of syphilis in pregnancy by diagnostic test type: A systematic review and meta-analysis.

    PubMed

    Ham, D Cal; Lin, Carol; Newman, Lori; Wijesooriya, N Saman; Kamb, Mary

    2015-06-01

“Probable active syphilis” is defined as seroreactivity in both non-treponemal and treponemal tests. A correction factor of 65%, namely the proportion of pregnant women reactive in one syphilis test type that were likely reactive in the second, was applied to syphilis seropositivity data reported to WHO for global estimates of syphilis during pregnancy. The objective was to identify more accurate correction factors based on the test type reported. We performed a Medline search using: “Syphilis [Mesh] and Pregnancy [Mesh],” “Syphilis [Mesh] and Prenatal Diagnosis [Mesh],” and “Syphilis [Mesh] and Antenatal [Keyword].” Eligible studies must have reported results for pregnant or puerperal women for both non-treponemal and treponemal serology. We manually calculated the crude percent estimates of subjects with both reactive treponemal and reactive non-treponemal tests among subjects with reactive treponemal tests and among subjects with reactive non-treponemal tests. We summarized the percent estimates using random effects models. Countries reporting both reactive non-treponemal and reactive treponemal testing required no correction factor. Countries reporting non-treponemal testing or treponemal testing alone required a correction factor of 52.2% and 53.6%, respectively. Countries not reporting test type required a correction factor of 68.6%. Future estimates should adjust reported maternal syphilis seropositivity by test type to ensure accuracy. Published by Elsevier Ireland Ltd.
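
    Applying the correction factors above to a country's reported antenatal seropositivity is a single multiplication keyed on the reported test type; the reported prevalence in the sketch is illustrative.

```python
# Estimate "probable active syphilis" from reported seropositivity using the
# test-type-specific correction factors summarized above.
CORRECTION_BY_TEST_TYPE = {
    "both_reactive": 1.000,        # already dual-reactive: no correction
    "non_treponemal_only": 0.522,
    "treponemal_only": 0.536,
    "not_reported": 0.686,
}

def probable_active_syphilis(reported_seropositivity, test_type):
    return reported_seropositivity * CORRECTION_BY_TEST_TYPE[test_type]

print(probable_active_syphilis(0.024, "treponemal_only"))   # 2.4% reported
```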

  19. Refraction error correction for deformation measurement by digital image correlation at elevated temperature

    NASA Astrophysics Data System (ADS)

    Su, Yunquan; Yao, Xuefeng; Wang, Shen; Ma, Yinji

    2017-03-01

    An effective correction model is proposed to eliminate the refraction error effect caused by an optical window of a furnace in digital image correlation (DIC) deformation measurement under high-temperature environment. First, a theoretical correction model with the corresponding error correction factor is established to eliminate the refraction error induced by double-deck optical glass in DIC deformation measurement. Second, a high-temperature DIC experiment using a chromium-nickel austenite stainless steel specimen is performed to verify the effectiveness of the correction model by the correlation calculation results under two different conditions (with and without the optical glass). Finally, both the full-field and the divisional displacement results with refraction influence are corrected by the theoretical model and then compared to the displacement results extracted from the images without refraction influence. The experimental results demonstrate that the proposed theoretical correction model can effectively improve the measurement accuracy of DIC method by decreasing the refraction errors from measured full-field displacements under high-temperature environment.

  20. Users manual for an expert system (HSPEXP) for calibration of the hydrological simulation program; Fortran

    USGS Publications Warehouse

    Lumb, A.M.; McCammon, R.B.; Kittle, J.L.

    1994-01-01

    Expert system software was developed to assist less experienced modelers with calibration of a watershed model and to facilitate the interaction between the modeler and the modeling process not provided by mathematical optimization. A prototype was developed with artificial intelligence software tools, a knowledge engineer, and two domain experts. The manual procedures used by the domain experts were identified and the prototype was then coded by the knowledge engineer. The expert system consists of a set of hierarchical rules designed to guide the calibration of the model through a systematic evaluation of model parameters. When the prototype was completed and tested, it was rewritten for portability and operational use and was named HSPEXP. The watershed model Hydrological Simulation Program--Fortran (HSPF) is used in the expert system. This report is the users manual for HSPEXP and contains a discussion of the concepts and detailed steps and examples for using the software. The system has been tested on watersheds in the States of Washington and Maryland, and the system correctly identified the model parameters to be adjusted and the adjustments led to improved calibration.

  1. a Semi-Empirical Topographic Correction Model for Multi-Source Satellite Images

    NASA Astrophysics Data System (ADS)

    Xiao, Sa; Tian, Xinpeng; Liu, Qiang; Wen, Jianguang; Ma, Yushuang; Song, Zhenwei

    2018-04-01

Topographic correction of surface reflectance in rugged terrain areas is a prerequisite for the quantitative application of remote sensing in mountainous areas. A physics-based radiative transfer model can be applied to correct the topographic effect and accurately retrieve the reflectance of the slope surface from high quality satellite images such as Landsat 8 OLI. However, as more and more image data become available from a variety of sensors, the accurate sensor calibration parameters and atmospheric conditions needed by physics-based topographic correction models are sometimes unavailable. This paper proposes a semi-empirical atmospheric and topographic correction model for multi-source satellite images without accurate calibration parameters. Based on this model we can obtain topographically corrected surface reflectance from DN data, and we tested and verified the model with image data from the Chinese HJ and GF satellites. The results show that the correlation factor was reduced by almost 85% for the near-infrared bands and that the overall classification accuracy increased by 14% after correction for HJ. The reflectance difference between slopes facing toward and away from the sun was also reduced after correction.
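
    For context, one widely used semi-empirical topographic correction is the per-band C-correction, in which reflectance is regressed on the cosine of the local solar incidence angle and the offset-to-slope ratio damps the correction on weakly illuminated slopes. The sketch below illustrates that generic technique on synthetic data; it is not the specific model proposed in this paper.

```python
import numpy as np

# Generic per-band C-correction (illustrative, synthetic data).
def c_correction(refl, cos_i, cos_sz):
    slope, intercept = np.polyfit(cos_i, refl, 1)   # refl ~ a*cos_i + b
    c = intercept / slope
    return refl * (cos_sz + c) / (cos_i + c)

rng = np.random.default_rng(8)
cos_i = np.clip(rng.normal(0.7, 0.2, 10_000), 0.05, 1.0)        # illumination
refl = 0.05 + 0.25 * cos_i + rng.normal(0.0, 0.01, cos_i.size)  # one band

corrected = c_correction(refl, cos_i, cos_sz=np.cos(np.deg2rad(35.0)))
```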

  2. Design and Implementation of a Novel Web-Based E-Learning Tool for Education of Health Professionals on the Antibiotic Vancomycin.

    PubMed

    Bond, Stuart Evan; Crowther, Shelley P; Adhikari, Suman; Chubaty, Adriana J; Yu, Ping; Borchard, Jay P; Boutlis, Craig Steven; Yeo, Wilfred Winston; Miyakis, Spiros

    2017-03-30

    Traditional approaches to health professional education are being challenged by increased clinical demands and decreased available time. Web-based e-learning tools offer a convenient and effective method of delivering education, particularly across multiple health care facilities. The effectiveness of this model for health professional education needs to be explored in context. The study aimed to (1) determine health professionals' experience and knowledge of clinical use of vancomycin, an antibiotic used for treatment of serious infections caused by methicillin-resistant Staphylococcus aureus (MRSA) and (2) describe the design and implementation of a Web-based e-learning tool created to improve knowledge in this area. We conducted a study on the design and implementation of a video-enhanced, Web-based e-learning tool between April 2014 and January 2016. A Web-based survey was developed to determine prior experience and knowledge of vancomycin use among nurses, doctors, and pharmacists. The Vancomycin Interactive (VI) involved a series of video clips interspersed with question and answer scenarios, where a correct response allowed for progression. Dramatic tension and humor were used as tools to engage users. Health professionals' knowledge of clinical vancomycin use was obtained from website data; qualitative participant feedback was also collected. From the 577 knowledge survey responses, pharmacists (n=70) answered the greatest number of questions correctly (median score 4/5), followed by doctors (n=271; 3/5) and nurses (n=236; 2/5; P<.001). Survey questions on target trough concentration (75.0%, 433/577) and rate of administration (64.9%, 375/577) were answered most correctly, followed by timing of first level (49%, 283/577), maintenance dose (41.9%, 242/577), and loading dose (38.0%, 219/577). Self-reported "very" and "reasonably" experienced health professionals were also more likely to achieve correct responses. The VI was completed by 163 participants during the study period. The rate of correctly answered VI questions on first attempt was 65% for nurses (n=63), 68% for doctors (n=86), and 82% for pharmacists (n=14; P<.001), reflecting a similar pattern to the knowledge survey. Knowledge gaps were identified for loading dose (39.2% correct on first attempt; 64/163), timing of first trough level (50.3%, 82/163), and subsequent trough levels (47.9%, 78/163). Of the 163 participants, we received qualitative user feedback from 51 participants following completion of the VI. Feedback was predominantly positive with themes of "entertaining," "engaging," and "fun" identified; however, there were some technical issues identified relating to accessibility from different operating systems and browsers. A novel Web-based e-learning tool was successfully developed combining game design principles and humor to improve user engagement. Knowledge gaps were identified that allowed for targeting of future education strategies. The VI provides an innovative model for delivering Web-based education to busy health professionals in different locations. ©Stuart Evan Bond, Shelley P Crowther, Suman Adhikari, Adriana J Chubaty, Ping Yu, Jay P Borchard, Craig Steven Boutlis, Wilfred Winston Yeo, Spiros Miyakis. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 30.03.2017.

  3. Observation model and parameter partials for the JPL geodetic GPS modeling software GPSOMC

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.; Border, J. S.

    1988-01-01

    The physical models employed in GPSOMC and the modeling module of the GIPSY software system developed at JPL for analysis of geodetic Global Positioning Satellite (GPS) measurements are described. Details of the various contributions to range and phase observables are given, as well as the partial derivatives of the observed quantities with respect to model parameters. A glossary of parameters is provided to enable persons doing data analysis to identify quantities in the current report with their counterparts in the computer programs. There are no basic model revisions, with the exceptions of an improved ocean loading model and some new options for handling clock parametrization. Such misprints as were discovered were corrected. Further revisions include modeling improvements and assurances that the model description is in accord with the current software.

  4. Of mental models, assumptions and heuristics: The case of acids and acid strength

    NASA Astrophysics Data System (ADS)

    McClary, Lakeisha Michelle

This study explored what cognitive resources (i.e., units of knowledge necessary to learn) first-semester organic chemistry students used to make decisions about acid strength and how those resources guided the prediction, explanation and justification of trends in acid strength. We were specifically interested in identifying and characterizing the mental models, assumptions and heuristics that students relied upon to make their decisions, in most cases under time constraints. The views about acids and acid strength were investigated for twenty undergraduate students. Data sources for this study included written responses and individual interviews. The data were analyzed using a qualitative methodology to answer five research questions. Data analysis regarding these research questions was based on existing theoretical frameworks: problem representation (Chi, Feltovich & Glaser, 1981), mental models (Johnson-Laird, 1983), intuitive assumptions (Talanquer, 2006), and heuristics (Evans, 2008). These frameworks were combined to develop the framework from which our data were analyzed. Results indicated that first-semester organic chemistry students' use of cognitive resources was complex and dependent on their understanding of the behavior of acids. Expressed mental models were generated using prior knowledge and assumptions about acids and acid strength; these models were then employed to make decisions. Explicit and implicit features of the compounds in each task mediated participants' attention, which triggered the use of a very limited number of heuristics, or shortcut reasoning strategies. Many students, however, were able to apply more effortful analytic reasoning, though correct trends were predicted infrequently. Most students continued to use their mental models, assumptions and heuristics to explain a given trend in acid strength and to justify their predicted trends, but the tasks influenced a few students to shift from one model to another. An emergent finding from this project was that the problem representation greatly influenced students' ability to make correct predictions of acid strength.

  5. Assessment of Atmospheric Algorithms to Retrieve Vegetation in Natural Protected Areas Using Multispectral High Resolution Imagery

    PubMed Central

    Marcello, Javier; Eugenio, Francisco; Perdomo, Ulises; Medina, Anabella

    2016-01-01

The precise mapping of vegetation covers in semi-arid areas is a complex task as this type of environment consists of sparse vegetation mainly composed of small shrubs. The launch of high resolution satellites, with additional spectral bands and the ability to alter the viewing angle, offers a useful technology to focus on this objective. In this context, atmospheric correction is a fundamental step in the pre-processing of such remote sensing imagery and, consequently, different algorithms have been developed for this purpose over the years. They are commonly categorized as image-based methods or as more advanced physical models based on radiative transfer theory. Despite the relevance of this topic, few comparative studies covering several methods have been carried out using high resolution data or applied specifically to vegetation covers. In this work, the performance of five representative atmospheric correction algorithms (DOS, QUAC, FLAASH, ATCOR and 6S) has been assessed, using high resolution Worldview-2 imagery and field spectroradiometer data collected simultaneously, with the goal of identifying the most appropriate techniques. The study also included a detailed analysis of the influence of parameterization on the final results of the correction, the aerosol model and its optical thickness being important parameters to be properly adjusted. The effects of the corrections were studied at vegetation and soil sites belonging to different protected semi-arid ecosystems (high mountain and coastal areas). In summary, the superior performance of model-based algorithms, 6S in particular, has been demonstrated, achieving reflectance estimations very close to the in-situ measurements (RMSE of between 2% and 3%). Finally, an example of the importance of the atmospheric correction in the vegetation estimation in these natural areas is presented, allowing the robust mapping of species and the analysis of multitemporal variations related to human activity and climate change. PMID:27706064

  6. Assessment of Atmospheric Algorithms to Retrieve Vegetation in Natural Protected Areas Using Multispectral High Resolution Imagery.

    PubMed

    Marcello, Javier; Eugenio, Francisco; Perdomo, Ulises; Medina, Anabella

    2016-09-30

The precise mapping of vegetation covers in semi-arid areas is a complex task as this type of environment consists of sparse vegetation mainly composed of small shrubs. The launch of high resolution satellites, with additional spectral bands and the ability to alter the viewing angle, offers a useful technology to focus on this objective. In this context, atmospheric correction is a fundamental step in the pre-processing of such remote sensing imagery and, consequently, different algorithms have been developed for this purpose over the years. They are commonly categorized as image-based methods or as more advanced physical models based on radiative transfer theory. Despite the relevance of this topic, few comparative studies covering several methods have been carried out using high resolution data or applied specifically to vegetation covers. In this work, the performance of five representative atmospheric correction algorithms (DOS, QUAC, FLAASH, ATCOR and 6S) has been assessed, using high resolution Worldview-2 imagery and field spectroradiometer data collected simultaneously, with the goal of identifying the most appropriate techniques. The study also included a detailed analysis of the influence of parameterization on the final results of the correction, the aerosol model and its optical thickness being important parameters to be properly adjusted. The effects of the corrections were studied at vegetation and soil sites belonging to different protected semi-arid ecosystems (high mountain and coastal areas). In summary, the superior performance of model-based algorithms, 6S in particular, has been demonstrated, achieving reflectance estimations very close to the in-situ measurements (RMSE of between 2% and 3%). Finally, an example of the importance of the atmospheric correction in the vegetation estimation in these natural areas is presented, allowing the robust mapping of species and the analysis of multitemporal variations related to human activity and climate change.

  7. Gradient nonlinearity calibration and correction for a compact, asymmetric magnetic resonance imaging gradient system

    PubMed Central

    Tao, S; Trzasko, J D; Gunter, J L; Weavers, P T; Shu, Y; Huston, J; Lee, S K; Tan, E T; Bernstein, M A

    2017-01-01

    Due to engineering limitations, the spatial encoding gradient fields in conventional magnetic resonance imaging cannot be perfectly linear and always contain higher-order, nonlinear components. If ignored during image reconstruction, gradient nonlinearity (GNL) manifests as image geometric distortion. Given an estimate of the GNL field, this distortion can be corrected to a degree proportional to the accuracy of the field estimate. The GNL of a gradient system is typically characterized using a spherical harmonic polynomial model with model coefficients obtained from electromagnetic simulation. Conventional whole-body gradient systems are symmetric in design; typically, only odd-order terms up to the 5th-order are required for GNL modeling. Recently, a high-performance, asymmetric gradient system was developed, which exhibits more complex GNL that requires higher-order terms including both odd- and even-orders for accurate modeling. This work characterizes the GNL of this system using an iterative calibration method and a fiducial phantom used in ADNI (Alzheimer’s Disease Neuroimaging Initiative). The phantom was scanned at different locations inside the 26-cm diameter-spherical-volume of this gradient, and the positions of fiducials in the phantom were estimated. An iterative calibration procedure was utilized to identify the model coefficients that minimize the mean-squared-error between the true fiducial positions and the positions estimated from images corrected using these coefficients. To examine the effect of higher-order and even-order terms, this calibration was performed using spherical harmonic polynomial of different orders up to the 10th-order including even- and odd-order terms, or odd-order only. The results showed that the model coefficients of this gradient can be successfully estimated. The residual root-mean-squared-error after correction using up to the 10th-order coefficients was reduced to 0.36 mm, yielding spatial accuracy comparable to conventional whole-body gradients. The even-order terms were necessary for accurate GNL modeling. In addition, the calibrated coefficients improved image geometric accuracy compared with the simulation-based coefficients. PMID:28033119
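
    The calibration described above reduces to a least-squares problem once fiducial positions are available: choose the field coefficients that minimize the error between true and apparent positions. A much simplified one-dimensional polynomial analogue is sketched below with synthetic coefficients; the actual calibration uses a 3-D spherical-harmonic basis up to the 10th order and iterates with the image-domain correction.

```python
import numpy as np

# 1-D analogue: apparent position = true position + polynomial GNL field;
# fit the field coefficients by least squares, then correct.
rng = np.random.default_rng(6)
true_pos = np.linspace(-0.13, 0.13, 40)                 # fiducial positions (m)
true_coef = np.array([0.0, 0.0, 3.5, 0.0, -120.0])      # synthetic field
basis = np.vander(true_pos, len(true_coef), increasing=True)
measured = true_pos + basis @ true_coef + rng.normal(0.0, 1e-4, true_pos.size)

coef_hat, *_ = np.linalg.lstsq(basis, measured - true_pos, rcond=None)
corrected = measured - basis @ coef_hat
rmse = np.sqrt(np.mean((corrected - true_pos) ** 2))
print(f"residual RMSE after correction: {rmse * 1e3:.3f} mm")
```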

  8. Exploring the effects of transducer models when training convolutional neural networks to eliminate reflection artifacts in experimental photoacoustic images

    NASA Astrophysics Data System (ADS)

    Allman, Derek; Reiter, Austin; Bell, Muyinatu

    2018-02-01

    We previously proposed a method of removing reflection artifacts in photoacoustic images that uses deep learning. Our approach generally relies on using simulated photoacoustic channel data to train a convolutional neural network (CNN) that is capable of distinguishing sources from artifacts based on unique differences in their spatial impulse responses (manifested as depth-based differences in wavefront shapes). In this paper, we directly compare a CNN trained with our previous continuous transducer model to a CNN trained with an updated discrete acoustic receiver model that more closely matches an experimental ultrasound transducer. These two CNNs were trained with simulated data and tested on experimental data. The CNN trained using the continuous receiver model correctly classified 100% of sources and 70.3% of artifacts in the experimental data. In contrast, the CNN trained using the discrete receiver model correctly classified 100% of sources and 89.7% of artifacts in the experimental images. The 19.4% increase in artifact classification accuracy indicates that an acoustic receiver model that closely mimics the experimental transducer plays an important role in improving the classification of artifacts in experimental photoacoustic data. Results are promising for developing a method to display CNN-based images that remove artifacts in addition to only displaying network-identified sources as previously proposed.

  9. Folded Supersymmetry and the LEP Paradox

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burdman, Gustavo; Chacko, Z.; Goh, Hock-Seng

    2006-09-21

    We present a new class of models that stabilize the weak scale against radiative corrections up to scales of order 5 TeV without large corrections to precision electroweak observables. In these "folded supersymmetric" theories the one loop quadratic divergences of the Standard Model Higgs field are canceled by opposite spin partners, but the gauge quantum numbers of these new particles are in general different from those of the conventional superpartners. This class of models is built around the correspondence that exists in the large N limit between the correlation functions of supersymmetric theories and those of their non-supersymmetric orbifold daughters. By identifying the mechanism which underlies the cancellation of one loop quadratic divergences in these theories, we are able to construct simple extensions of the Standard Model which are radiatively stable at one loop. Ultraviolet completions of these theories can be obtained by imposing suitable boundary conditions on an appropriate supersymmetric higher dimensional theory compactified down to four dimensions. We construct a specific model based on these ideas which stabilizes the weak scale up to about 20 TeV and where the states which cancel the top loop are scalars not charged under Standard Model color. Its collider signatures are distinct from conventional supersymmetric theories and include characteristic events with hard leptons and missing energy.

  10. Identifying misbehaving models using baseline climate variance

    NASA Astrophysics Data System (ADS)

    Schultz, Colin

    2011-06-01

    The majority of projections made using general circulation models (GCMs) are conducted to help tease out the effects on a region, or on the climate system as a whole, of changing climate dynamics. Sun et al., however, used model runs from 20 different coupled atmosphere-ocean GCMs to try to understand a different aspect of climate projections: how bias correction, model selection, and other statistical techniques might affect the estimated outcomes. As a case study, the authors focused on predicting the potential change in precipitation for the Murray-Darling Basin (MDB), a 1-million-square-kilometer area in southeastern Australia that suffered a recent decade of drought that left many wondering about the potential impacts of climate change on this important agricultural region. The authors first compared the precipitation predictions made by the models with 107 years of observations, and they then made bias corrections to adjust the model projections to have the same statistical properties as the observations. They found that while the spread of the projected values was reduced, the average precipitation projection for the end of the 21st century barely changed. Further, the authors determined that interannual variations in precipitation for the MDB could be explained by random chance, where the precipitation in a given year was independent of that in previous years.
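
    The bias-correction step described here, adjusting model output so that it shares the statistical properties of the observations, can be illustrated with a simple mean-and-variance scaling. The sketch below is only a generic example of that idea; the variable names and synthetic data are made up and do not come from the study.

    ```python
    import numpy as np

    def bias_correct(model_hist, obs, model_future):
        """Rescale a model series so its historical period matches the observed mean and
        standard deviation, then apply the same transfer to the projected period."""
        mu_m, sd_m = model_hist.mean(), model_hist.std(ddof=1)
        mu_o, sd_o = obs.mean(), obs.std(ddof=1)
        return (model_future - mu_m) / sd_m * sd_o + mu_o

    rng = np.random.default_rng(1)
    obs = rng.gamma(4.0, 120.0, size=107)           # 107 years of observed annual precipitation (mm)
    model_hist = rng.gamma(3.0, 150.0, size=107)    # biased model output over the same period
    model_future = rng.gamma(3.0, 140.0, size=30)   # projected values to be corrected
    print(bias_correct(model_hist, obs, model_future).mean())
    ```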

  11. 76 FR 38069 - Airworthiness Directives; Airbus Model A300 B4-600, B4-600R, and F4-600R Series Airplanes, and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-29

    ... the skin and honeycomb core. Such reworks were also performed on some rudders fitted on A310 and A300... defects were the result of de-bonding between the skin and honeycomb core. Such reworks were also... application of corrective actions for those rudders where production reworks have been identified. This new...

  12. The radiometer transfer function for the AAFE composite two-frequency radiometer scatterometer. M.S. Thesis - Pennsylvania Univ.

    NASA Technical Reports Server (NTRS)

    Moore, J. H.

    1973-01-01

    A model was developed for the switching radiometer utilizing a continuous method of calibration. Sources of system degradation were identified and include losses and voltage standing wave ratios in front of the receiver input. After computing the three modes of operation, expressions were developed for the normalized radiometer output, the minimum detectable signal (normalized RMS temperature fluctuation), sensitivity, and accuracy correction factors.

  13. Relative Quantification and Higher-Order Modeling of the Plasma Glycan Cancer Burden Ratio in Ovarian Cancer Case-Control Samples

    PubMed Central

    Hecht, Elizabeth S.; Scholl, Elizabeth H.; Walker, S. Hunter; Taylor, Amber D.; Cliby, William A.; Motsinger-Reif, Alison A.; Muddiman, David C.

    2016-01-01

    An early-stage, population-wide biomarker for ovarian cancer (OVC) is essential to reverse its high mortality rate. Aberrant glycosylation by OVC has been reported, but studies have yet to identify an N-glycan with sufficiently high specificity. We curated a human biorepository of 82 case-control plasma samples, with 27%, 12%, 46%, and 15% falling across stages I–IV, respectively. For relative quantitation, glycans were analyzed by the individuality normalization when labeling with glycan hydrazide tags (INLIGHT) strategy for enhanced electrospray ionization, MS/MS analysis. Sixty-three glycan cancer burden ratios (GBRs), defined as the log10 ratio of the case-control extracted ion chromatogram abundances, were calculated above the limit of detection. The final GBR models, built using stepwise forward regression, included three significant terms: OVC stage, normalized mean GBR, and tag chemical purity; glycan class, fucosylation, or sialylation were not significant variables. After Bonferroni correction, seven N-glycans were identified as significant (p < 0.05), and after false discovery rate correction, an additional four glycans were determined to be significant (p < 0.05), with one borderline (p = 0.05). For all N-glycans, the vectors of the effects from stages II–IV were sequentially reversed, suggesting potential biological changes in OVC morphology or in host response. PMID:26347193
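
    The abstract applies both a Bonferroni and a false discovery rate correction to the per-glycan significance tests. The snippet below is a generic illustration of those two multiple-testing corrections on an arbitrary vector of p-values; it is not the study's code and the p-values shown are invented.

    ```python
    import numpy as np

    def bonferroni(p, alpha=0.05):
        p = np.asarray(p, dtype=float)
        return p * p.size <= alpha              # family-wise significance decisions

    def benjamini_hochberg(p, alpha=0.05):
        p = np.asarray(p, dtype=float)
        order = np.argsort(p)
        adjusted = p[order] * p.size / (np.arange(p.size) + 1)
        passed = adjusted <= alpha
        reject = np.zeros(p.size, dtype=bool)
        if passed.any():
            reject[order[: passed.nonzero()[0].max() + 1]] = True
        return reject                            # false-discovery-rate significance decisions

    pvals = [0.0003, 0.004, 0.03, 0.049, 0.05, 0.2, 0.6]
    print(bonferroni(pvals), benjamini_hochberg(pvals))
    ```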

  14. 76 FR 44010 - Medicare Program; Hospice Wage Index for Fiscal Year 2012; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-22

    .... 93.774, Medicare-- Supplementary Medical Insurance Program) Dated: July 15, 2011. Dawn L. Smalls... corrects technical errors that appeared in the notice of CMS ruling published in the Federal Register on... FR 26731), there were technical errors that are identified and corrected in the Correction of Errors...

  15. Predictor-corrector framework for the sequential assembly of optical systems based on wavefront sensing.

    PubMed

    Schindlbeck, Christopher; Pape, Christian; Reithmeier, Eduard

    2018-04-16

    Alignment of optical components is crucial for the assembly of optical systems to ensure their full functionality. In this paper we present a novel predictor-corrector framework for the sequential assembly of serial optical systems. Therein, we use a hybrid optical simulation model that comprises virtual and identified component positions. The hybrid model is constantly adapted throughout the assembly process with the help of nonlinear identification techniques and wavefront measurements. This enables prediction of the future wavefront at the detector plane and therefore allows for taking corrective measures accordingly during the assembly process if a user-defined tolerance on the wavefront error is violated. We present a novel notation for the so-called hybrid model and outline the work flow of the presented predictor-corrector framework. A beam expander is assembled as demonstrator for experimental verification of the framework. The optical setup consists of a laser, two bi-convex spherical lenses each mounted to a five degree-of-freedom stage to misalign and correct components, and a Shack-Hartmann sensor for wavefront measurements.

  16. Testing chemical carcinogenicity by using a transcriptomics HepaRG-based model?

    PubMed Central

    Doktorova, T. Y.; Yildirimman, Reha; Ceelen, Liesbeth; Vilardell, Mireia; Vanhaecke, Tamara; Vinken, Mathieu; Ates, Gamze; Heymans, Anja; Gmuender, Hans; Bort, Roque; Corvi, Raffaella; Phrakonkham, Pascal; Li, Ruoya; Mouchet, Nicolas; Chesne, Christophe; van Delft, Joost; Kleinjans, Jos; Castell, Jose; Herwig, Ralf; Rogiers, Vera

    2014-01-01

    The EU FP6 project carcinoGENOMICS explored the combination of toxicogenomics and in vitro cell culture models for identifying organotypical genotoxic- and non-genotoxic carcinogen-specific gene signatures. Here the performance of its gene classifier, derived from exposure of metabolically competent human HepaRG cells to prototypical non-carcinogens (10 compounds) and hepatocarcinogens (20 compounds), is reported. Analysis of the data at the gene and the pathway level by using independent biostatistical approaches showed a distinct separation of genotoxic from non-genotoxic hepatocarcinogens and non-carcinogens (up to 88% correct prediction). The most characteristic pathway responding to genotoxic exposure was DNA damage. Interlaboratory reproducibility was assessed by blind testing of three compounds, from the set of 30 compounds, by three independent laboratories. Subsequent classification of these compounds resulted in correct prediction of the genotoxicants. As expected, results on the non-genotoxic carcinogens and the non-carcinogens were less predictive. In conclusion, the combination of transcriptomics with the HepaRG in vitro cell model provides a potential weight of evidence approach for the evaluation of the genotoxic potential of chemical substances. PMID:26417288

  17. Correction of ultrasonic wave aberration with a time delay and amplitude filter.

    PubMed

    Måsøy, Svein-Erik; Johansen, Tonni F; Angelsen, Bjørn

    2003-04-01

    Two-dimensional simulations with propagation through two different heterogeneous human body wall models have been performed to analyze different correction filters for ultrasonic wave aberration due to forward wave propagation. The different models each produce most of the characteristic aberration effects such as phase aberration, relatively strong amplitude aberration, and waveform deformation. Simulations of wave propagation from a point source in the focus (60 mm) of a 20 mm transducer through the body wall models were performed. Center frequency of the pulse was 2.5 MHz. Corrections of the aberrations introduced by the two body wall models were evaluated with reference to the corrections obtained with the optimal filter: a generalized frequency-dependent phase and amplitude correction filter [Angelsen, Ultrasonic Imaging (Emantec, Norway, 2000), Vol. II]. Two correction filters were applied, a time delay filter, and a time delay and amplitude filter. Results showed that correction with a time delay filter produced substantial reduction of the aberration in both cases. A time delay and amplitude correction filter performed even better in both cases, and gave correction close to the ideal situation (no aberration). The results also indicated that the effect of the correction was very sensitive to the accuracy of the arrival time fluctuations estimate, i.e., the time delay correction filter.
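
    The time delay and amplitude correction filter described above amounts to undoing, element by element, an estimated arrival-time shift and amplitude screen before beamforming. The sketch below shows that per-element correction on synthetic array signals; the sampling rate, delay statistics, and array size are invented for illustration.

    ```python
    import numpy as np

    def correct_aberration(signals, delays_s, amplitudes, fs):
        """signals: (n_elements, n_samples); delays_s: estimated arrival-time fluctuation per element."""
        corrected = np.empty_like(signals)
        for i in range(signals.shape[0]):
            shift = int(round(delays_s[i] * fs))
            corrected[i] = np.roll(signals[i], -shift) / amplitudes[i]  # undo delay and amplitude screen
        return corrected

    fs = 20e6                                     # 20 MHz sampling
    t = np.arange(2000) / fs
    rng = np.random.default_rng(8)
    delays = rng.normal(0.0, 50e-9, 16)           # ~50 ns rms arrival-time fluctuations
    amps = rng.normal(1.0, 0.2, 16).clip(0.5)     # amplitude aberration per element
    clean = np.sin(2 * np.pi * 2.5e6 * t)         # 2.5 MHz reference signal
    received = np.array([amps[i] * np.roll(clean, int(round(delays[i] * fs))) for i in range(16)])
    aligned = correct_aberration(received, delays, amps, fs)
    print(np.abs(aligned - clean).max())          # residual after correction (ideally near zero)
    ```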

  18. Correction of Measured Taxicab Exhaust Emission Data Based on CMEM Model

    NASA Astrophysics Data System (ADS)

    Li, Q.; Jia, T.

    2017-09-01

    Carbon dioxide emissions from urban road traffic mainly come from automobile exhaust. However, the carbon dioxide emissions obtained by the instruments are unreliable due to time delay error. In order to improve the reliability of the data, we propose a method to correct the measured vehicles' carbon dioxide emissions from the instrument based on the CMEM model. Firstly, the synthetic time series of carbon dioxide emissions are simulated by the CMEM model and GPS velocity data. Then, taking the simulation data as the control group, the time delay error of the measured carbon dioxide emissions can be estimated by asynchronous correlation analysis, and the outliers can be automatically identified and corrected using the principle of the DTW algorithm. Taking the taxi trajectory data of Wuhan as an example, the results show that (1) the correlation coefficient between the measured data and the control group data can be improved from 0.52 to 0.59 by mitigating the systematic time delay error. Furthermore, by adjusting the outliers, which account for 4.73% of the total data, the correlation coefficient can rise to 0.63, which suggests a strong correlation. The construction of low carbon traffic has become the focus of the local government. In order to respond to the call for energy saving and emission reduction, the distribution of carbon emissions from motor vehicle exhaust was studied. So our corrected data can be used for further air quality analysis.
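
    The systematic time delay between the measured and CMEM-simulated series can be estimated by finding the lag that maximizes their correlation, which is the essence of the asynchronous correlation analysis mentioned above. The sketch below demonstrates that step on synthetic data; it is an illustration only and does not reproduce the paper's processing chain or its DTW-based outlier correction.

    ```python
    import numpy as np

    def estimate_delay(measured, simulated, max_lag=30):
        """Return the lag (in samples) that maximizes the correlation between the two series."""
        best_lag, best_r = 0, -np.inf
        for lag in range(max_lag + 1):
            m = measured[lag:]
            s = simulated[: len(simulated) - lag]
            n = min(len(m), len(s))
            r = np.corrcoef(m[:n], s[:n])[0, 1]
            if r > best_r:
                best_lag, best_r = lag, r
        return best_lag, best_r

    rng = np.random.default_rng(2)
    sim = np.abs(np.sin(np.linspace(0, 20, 600))) + 0.1 * rng.standard_normal(600)   # CMEM-like control series
    meas = np.r_[np.zeros(7), sim[:-7]] + 0.1 * rng.standard_normal(600)             # instrument series, delayed by 7 samples
    print(estimate_delay(meas, sim))
    ```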

  19. Retinoids and Retinal Diseases

    PubMed Central

    Kiser, Philip D.; Palczewski, Krzysztof

    2016-01-01

    Recent progress in molecular understanding of the retinoid cycle in mammalian retina stems from painstaking biochemical reconstitution studies supported by natural or engineered animal models with known genetic lesions and studies of humans with specific genetic blinding diseases. Structural and membrane biology have been used to detect critical retinal enzymes and proteins and their substrates and ligands, placing them in a cellular context. These studies have been supplemented by analytical chemistry methods that have identified small molecules by their spectral characteristics, often in conjunction with the evaluation of models of animal retinal disease. It is from this background that rational therapeutic interventions to correct genetic defects or environmental insults are identified. Thus, most presently accepted modulators of the retinoid cycle already have demonstrated promising results in animal models of retinal degeneration. These encouraging signs indicate that some human blinding diseases can be alleviated by pharmacological interventions. PMID:27917399

  20. Building an Ontology for Identity Resolution in Healthcare and Public Health.

    PubMed

    Duncan, Jeffrey; Eilbeck, Karen; Narus, Scott P; Clyde, Stephen; Thornton, Sidney; Staes, Catherine

    2015-01-01

    Integration of disparate information from electronic health records, clinical data warehouses, birth certificate registries and other public health information systems offers great potential for clinical care, public health practice, and research. Such integration, however, depends on correctly matching patient-specific records using demographic identifiers. Without standards for these identifiers, record linkage is complicated by issues of structural and semantic heterogeneity. Our objectives were to develop and validate an ontology to: 1) identify components of identity and events subsequent to birth that result in creation, change, or sharing of identity information; 2) develop an ontology to facilitate data integration from multiple healthcare and public health sources; and 3) validate the ontology's ability to model identity-changing events over time. We interviewed domain experts in area hospitals and public health programs and developed process models describing the creation and transmission of identity information among various organizations for activities subsequent to a birth event. We searched for existing relevant ontologies. We validated the content of our ontology with simulated identity information conforming to scenarios identified in our process models. We chose the Simple Event Model (SEM) to describe events in early childhood and integrated the Clinical Element Model (CEM) for demographic information. We demonstrated the ability of the combined SEM-CEM ontology to model identity events over time. The use of an ontology can overcome issues of semantic and syntactic heterogeneity to facilitate record linkage.

  1. Regional geoid computation by least squares modified Hotine's formula with additive corrections

    NASA Astrophysics Data System (ADS)

    Märdla, Silja; Ellmann, Artu; Ågren, Jonas; Sjöberg, Lars E.

    2018-03-01

    Geoid and quasigeoid modelling from gravity anomalies by the method of least squares modification of Stokes's formula with additive corrections is adapted for the usage with gravity disturbances and Hotine's formula. The biased, unbiased and optimum versions of least squares modification are considered. Equations are presented for the four additive corrections that account for the combined (direct plus indirect) effect of downward continuation (DWC), topographic, atmospheric and ellipsoidal corrections in geoid or quasigeoid modelling. The geoid or quasigeoid modelling scheme by the least squares modified Hotine formula is numerically verified, analysed and compared to the Stokes counterpart in a heterogeneous study area. The resulting geoid models and the additive corrections computed both for use with Stokes's or Hotine's formula differ most in high topography areas. Over the study area (reaching almost 2 km in altitude), the approximate geoid models (before the additive corrections) differ by 7 mm on average with a 3 mm standard deviation (SD) and a maximum of 1.3 cm. The additive corrections, out of which only the DWC correction has a numerically significant difference, improve the agreement between respective geoid or quasigeoid models to an average difference of 5 mm with a 1 mm SD and a maximum of 8 mm.

  2. Development of Corrections for Biomass Burning Effects in Version 2 of GEWEX/SRB Algorithm

    NASA Technical Reports Server (NTRS)

    Pinker, Rachel T.; Laszlo, I.; Dicus, Dennis L. (Technical Monitor)

    1999-01-01

    The objectives of this project were: (1) To incorporate into an existing version of the University of Maryland Surface Radiation Budget (SRB) model, optical parameters of forest fire aerosols, using best available information, as well as optical properties of other aerosols, identified as significant. (2) To run the model on regional scales with the new parametrization and information on forest fire occurrence and plume advection, as available from NASA LARC, and test improvements in inferring surface fluxes against daily values of measured fluxes. (3) Develop strategy how to incorporate the new parametrization on global scale and how to transfer modified model to NASA LARC.

  3. Study of subgrid-scale velocity models for reacting and nonreacting flows

    NASA Astrophysics Data System (ADS)

    Langella, I.; Doan, N. A. K.; Swaminathan, N.; Pope, S. B.

    2018-05-01

    A study is conducted to identify advantages and limitations of existing large-eddy simulation (LES) closures for the subgrid-scale (SGS) kinetic energy using a database of direct numerical simulations (DNS). The analysis is conducted for both reacting and nonreacting flows, different turbulence conditions, and various filter sizes. A model, based on dissipation and diffusion of momentum (LD-D model), is proposed in this paper based on the observed behavior of four existing models. Our model shows the best overall agreement with DNS statistics. Two main investigations are conducted for both reacting and nonreacting flows: (i) an investigation on the robustness of the model constants, showing that commonly used constants lead to a severe underestimation of the SGS kinetic energy and highlighting their dependence on Reynolds number and filter size; and (ii) an investigation on the statistical behavior of the SGS closures, which suggests that the dissipation of momentum is the key parameter to be considered in such closures and that the dilatation effect is important and must be captured correctly in reacting flows. Additional properties of SGS kinetic energy modeling are identified and discussed.

  4. Surface Temperature Anomalies Derived from Night Time ASTER Data Corrected for Solar and Topographic Effects, Archuleta County

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in Alamosa and Saguache Counties identified from ASTER thermal data and a spatially based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999). The temperature due to solar radiation was then calculated using emissivity derived from ASTER data. The residual temperature, i.e., the temperature due to solar radiation subtracted from the ASTER temperature, was used to identify thermally anomalous areas. Areas with residual temperatures greater than 2σ were classified as ASTER-modeled "very warm modeled surface temperature" and are shown in red on the map. Areas with residual temperatures between 1σ and 2σ were classified as ASTER-modeled "warm modeled surface temperature" and are shown in yellow on the map. This map also includes the locations of shallow temperature survey points, locations of springs or wells with favorable geochemistry, faults, transmission lines, and areas of modeled basement weakness ("fairways").
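
    The classification rule described in these map records, subtracting the solar-induced temperature from the ASTER temperature and flagging pixels whose residual exceeds one or two standard deviations, can be sketched as follows. The arrays and thresholds below are synthetic and purely illustrative of that workflow.

    ```python
    import numpy as np

    def classify_anomalies(aster_temp, solar_temp):
        residual = aster_temp - solar_temp                    # temperature not explained by insolation
        z = (residual - residual.mean()) / residual.std(ddof=1)
        very_warm = z > 2.0                                   # "very warm" pixels, shown in red
        warm = (z > 1.0) & ~very_warm                         # "warm" pixels, shown in yellow
        return warm, very_warm

    rng = np.random.default_rng(3)
    aster = 290.0 + 3.0 * rng.standard_normal((100, 100))     # night-time ASTER temperature (K)
    solar = 289.0 + 2.0 * rng.standard_normal((100, 100))     # modeled solar-induced temperature (K)
    warm, very_warm = classify_anomalies(aster, solar)
    print(warm.sum(), very_warm.sum())
    ```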

  5. Surface Temperature Anomalies Derived from Night Time ASTER Data Corrected for Solar and Topographic Effects, San Miguel County, Colorado

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in Alamosa and Saguache Counties identified from ASTER thermal data and a spatially based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999). The temperature due to solar radiation was then calculated using emissivity derived from ASTER data. The residual temperature, i.e., the temperature due to solar radiation subtracted from the ASTER temperature, was used to identify thermally anomalous areas. Areas with residual temperatures greater than 2σ were classified as ASTER-modeled "very warm modeled surface temperature" and are shown in red on the map. Areas with residual temperatures between 1σ and 2σ were classified as ASTER-modeled "warm modeled surface temperature" and are shown in yellow on the map. This map also includes the locations of shallow temperature survey points, locations of springs or wells with favorable geochemistry, faults, transmission lines, and areas of modeled basement weakness ("fairways").

  6. Surface Temperature Anomalies Derived from Night Time ASTER Data Corrected for Solar and Topographic Effects, Fremont County, Colorado

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in Alamosa and Saguache Counties identified from ASTER thermal data and a spatially based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999). The temperature due to solar radiation was then calculated using emissivity derived from ASTER data. The residual temperature, i.e., the temperature due to solar radiation subtracted from the ASTER temperature, was used to identify thermally anomalous areas. Areas with residual temperatures greater than 2σ were classified as ASTER-modeled "very warm modeled surface temperature" and are shown in red on the map. Areas with residual temperatures between 1σ and 2σ were classified as ASTER-modeled "warm modeled surface temperature" and are shown in yellow on the map. This map also includes the locations of shallow temperature survey points, locations of springs or wells with favorable geochemistry, faults, transmission lines, and areas of modeled basement weakness ("fairways").

  7. Surface Temperature Anomalies Derived from Night Time ASTER Data Corrected for Solar and Topographic Effects, Routt County, Colorado

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in Alamosa and Saguache Counties identified from ASTER thermal data and a spatially based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999). The temperature due to solar radiation was then calculated using emissivity derived from ASTER data. The residual temperature, i.e., the temperature due to solar radiation subtracted from the ASTER temperature, was used to identify thermally anomalous areas. Areas with residual temperatures greater than 2σ were classified as ASTER-modeled "very warm modeled surface temperature" and are shown in red on the map. Areas with residual temperatures between 1σ and 2σ were classified as ASTER-modeled "warm modeled surface temperature" and are shown in yellow on the map. This map also includes the locations of shallow temperature survey points, locations of springs or wells with favorable geochemistry, faults, transmission lines, and areas of modeled basement weakness ("fairways").

  8. Surface Temperature Anomalies Derived from Night Time ASTER Data Corrected for Solar and Topographic Effects, Alamosa and Saguache Counties, Colorado

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in Alamosa and Saguache Counties identified from ASTER thermal data and a spatially based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999). The temperature due to solar radiation was then calculated using emissivity derived from ASTER data. The residual temperature, i.e., the temperature due to solar radiation subtracted from the ASTER temperature, was used to identify thermally anomalous areas. Areas with residual temperatures greater than 2σ were classified as ASTER-modeled "very warm modeled surface temperature" and are shown in red on the map. Areas with residual temperatures between 1σ and 2σ were classified as ASTER-modeled "warm modeled surface temperature" and are shown in yellow on the map. This map also includes the locations of shallow temperature survey points, locations of springs or wells with favorable geochemistry, faults, transmission lines, and areas of modeled basement weakness ("fairways").

  9. Surface Temperature Anomalies Derived from Night Time ASTER Data Corrected for Solar and Topographic Effects, Dolores County

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in Alamosa and Saguache Counties identified from ASTER thermal data and a spatially based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999). The temperature due to solar radiation was then calculated using emissivity derived from ASTER data. The residual temperature, i.e., the temperature due to solar radiation subtracted from the ASTER temperature, was used to identify thermally anomalous areas. Areas with residual temperatures greater than 2σ were classified as ASTER-modeled "very warm modeled surface temperature" and are shown in red on the map. Areas with residual temperatures between 1σ and 2σ were classified as ASTER-modeled "warm modeled surface temperature" and are shown in yellow on the map. This map also includes the locations of shallow temperature survey points, locations of springs or wells with favorable geochemistry, faults, transmission lines, and areas of modeled basement weakness ("fairways").

  10. Assimilation of Satellite Data to Improve Cloud Simulation in the WRF Model

    NASA Astrophysics Data System (ADS)

    Park, Y. H.; Pour Biazar, A.; McNider, R. T.

    2012-12-01

    A simple approach has been introduced to improve cloud simulation spatially and temporally in a meteorological model. The first step of this approach is to use Geostationary Operational Environmental Satellite (GOES) observations to identify clouds and estimate the cloud structure. Then, by comparing GOES observations to the model cloud field, we identify areas in which the model has under-predicted or over-predicted clouds. Next, by introducing subsidence in areas with over-prediction and lifting in areas with under-prediction, erroneous clouds are removed and new clouds are formed. The technique estimates a vertical velocity needed for the cloud correction and then uses a one-dimensional variational scheme (1D-Var) to calculate the horizontal divergence components and the consequent horizontal wind components needed to sustain such vertical velocity. Finally, the new horizontal winds are provided as a nudging field to the model. This nudging provides the dynamical support needed to create/clear clouds in a sustainable manner. The technique was implemented and tested in the Weather Research and Forecasting (WRF) Model and resulted in substantial improvement in model simulated clouds. Some of the results are presented here.

  11. Self-consistent conversion of a viscous fluid to particles

    NASA Astrophysics Data System (ADS)

    Molnar, Denes; Wolff, Zack

    2017-02-01

    Comparison of hydrodynamic and "hybrid" hydrodynamics+transport calculations with heavy-ion data inevitably requires the conversion of the fluid to particles. For dissipative fluids the conversion is ambiguous without additional theory input complementing hydrodynamics. We obtain self-consistent shear viscous phase-space corrections from linearized Boltzmann transport theory for a gas of hadrons. These corrections depend on the particle species, and incorporating them in Cooper-Frye freeze-out affects identified particle observables. For example, with additive quark model cross sections, proton elliptic flow is larger than pion elliptic flow at moderately high pT in Au+Au collisions at the BNL Relativistic Heavy Ion Collider. This is in contrast to Cooper-Frye freeze-out with the commonly used "democratic Grad" ansatz that assumes no species dependence. Various analytic and numerical results are also presented for massless and massive two-component mixtures to better elucidate how species dependence arises. For convenient inclusion in pure hydrodynamic and hybrid calculations, Appendix G contains self-consistent viscous corrections for each species both in tabulated and parametrized form.

  12. Voidage correction algorithm for unresolved Euler-Lagrange simulations

    NASA Astrophysics Data System (ADS)

    Askarishahi, Maryam; Salehi, Mohammad-Sadegh; Radl, Stefan

    2018-04-01

    The effect of grid coarsening on the predicted total drag force and heat exchange rate in dense gas-particle flows is investigated using Euler-Lagrange (EL) approach. We demonstrate that grid coarsening may reduce the predicted total drag force and exchange rate. Surprisingly, exchange coefficients predicted by the EL approach deviate more significantly from the exact value compared to results of Euler-Euler (EE)-based calculations. The voidage gradient is identified as the root cause of this peculiar behavior. Consequently, we propose a correction algorithm based on a sigmoidal function to predict the voidage experienced by individual particles. Our correction algorithm can significantly improve the prediction of exchange coefficients in EL models, which is tested for simulations involving Euler grid cell sizes between 2d_p and 12d_p . It is most relevant in simulations of dense polydisperse particle suspensions featuring steep voidage profiles. For these suspensions, classical approaches may result in an error of the total exchange rate of up to 30%.
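
    The correction proposed in this abstract assigns each particle a voidage that transitions smoothly, via a sigmoidal function, between the values on either side of a steep voidage gradient. The snippet below is only a schematic of that idea; the blending form and parameters are assumptions for illustration, not the authors' published algorithm.

    ```python
    import numpy as np

    def corrected_voidage(cell_voidage, neighbour_voidage, distance_to_interface, width=1.5):
        """Blend the local and neighbouring cell voidage with a sigmoid of the particle's
        signed distance to the steep-gradient interface (distance in particle diameters)."""
        w = 1.0 / (1.0 + np.exp(-distance_to_interface / width))
        return w * cell_voidage + (1.0 - w) * neighbour_voidage

    # Particles far inside the dense cell stay close to 0.45; near the dilute side they approach 0.75.
    print(corrected_voidage(0.45, 0.75, np.array([3.0, 0.0, -3.0])))
    ```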

  13. An introduction to mixture item response theory models.

    PubMed

    De Ayala, R J; Santiago, S Y

    2017-02-01

    Mixture item response theory (IRT) allows one to address situations that involve a mixture of latent subpopulations that are qualitatively different but within which a measurement model based on a continuous latent variable holds. In this modeling framework, one can characterize students by both their location on a continuous latent variable as well as by their latent class membership. For example, in a study of risky youth behavior this approach would make it possible to estimate an individual's propensity to engage in risky youth behavior (i.e., on a continuous scale) and to use these estimates to identify youth who might be at the greatest risk given their class membership. Mixture IRT can be used with binary response data (e.g., true/false, agree/disagree, endorsement/not endorsement, correct/incorrect, presence/absence of a behavior), Likert response scales, partial correct scoring, nominal scales, or rating scales. In the following, we present mixture IRT modeling and two examples of its use. Data needed to reproduce analyses in this article are available as supplemental online materials at http://dx.doi.org/10.1016/j.jsp.2016.01.002. Copyright © 2016 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.

  14. Leveraging Hierarchical Population Structure in Discrete Association Studies

    PubMed Central

    Carlson, Jonathan; Kadie, Carl; Mallal, Simon; Heckerman, David

    2007-01-01

    Population structure can confound the identification of correlations in biological data. Such confounding has been recognized in multiple biological disciplines, resulting in a disparate collection of proposed solutions. We examine several methods that correct for confounding on discrete data with hierarchical population structure and identify two distinct confounding processes, which we call coevolution and conditional influence. We describe these processes in terms of generative models and show that these generative models can be used to correct for the confounding effects. Finally, we apply the models to three applications: identification of escape mutations in HIV-1 in response to specific HLA-mediated immune pressure, prediction of coevolving residues in an HIV-1 peptide, and a search for genotypes that are associated with bacterial resistance traits in Arabidopsis thaliana. We show that coevolution is a better description of confounding in some applications and conditional influence is better in others. That is, we show that no single method is best for addressing all forms of confounding. Analysis tools based on these models are available on the internet as both web based applications and downloadable source code at http://atom.research.microsoft.com/bio/phylod.aspx. PMID:17611623

  15. Robust hashing for 3D models

    NASA Astrophysics Data System (ADS)

    Berchtold, Waldemar; Schäfer, Marcel; Rettig, Michael; Steinebach, Martin

    2014-02-01

    3D models and applications are of utmost interest in both science and industry. As their usage increases, so do their number and thereby the challenge of correctly identifying them. Content identification is commonly done by cryptographic hashes. However, they fail as a solution in application scenarios such as computer aided design (CAD), scientific visualization or video games, because even the smallest alteration of the 3D model, e.g. conversion or compression operations, massively changes the cryptographic hash as well. Therefore, this work presents a robust hashing algorithm for 3D mesh data. The algorithm applies several different bit extraction methods. They are built to resist desired alterations of the model as well as malicious attacks intending to prevent correct allocation. The different bit extraction methods are tested against each other and, as far as possible, the hashing algorithm is compared to the state of the art. The parameters tested are robustness, security and runtime performance as well as False Acceptance Rate (FAR) and False Rejection Rate (FRR); the probability of hash collisions is also calculated. The introduced hashing algorithm is kept adaptive, e.g. in hash length, to serve as a proper tool for all applications in practice.
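
    As a rough illustration of what a robust (perceptual) hash for mesh data does, in contrast to a cryptographic hash, the sketch below derives hash bits from a coarse, scale-normalized geometric feature so that small perturbations leave most bits unchanged. This is a generic toy construction, not the bit extraction methods of the paper.

    ```python
    import numpy as np

    def robust_hash(vertices, n_bits=32):
        v = vertices - vertices.mean(axis=0)                  # translation invariance
        r = np.linalg.norm(v, axis=1)
        r = r / r.max()                                       # scale invariance
        hist, _ = np.histogram(r, bins=n_bits, range=(0.0, 1.0))
        return (hist > np.median(hist)).astype(np.uint8)      # one bit per radial bin

    rng = np.random.default_rng(7)
    mesh = rng.standard_normal((5000, 3))                     # stand-in for mesh vertices
    altered = mesh + 0.01 * rng.standard_normal(mesh.shape)   # small alteration, e.g. compression noise
    print((robust_hash(mesh) == robust_hash(altered)).mean()) # fraction of matching hash bits
    ```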

  16. Developmental model of static allometry in holometabolous insects.

    PubMed

    Shingleton, Alexander W; Mirth, Christen K; Bates, Peter W

    2008-08-22

    The regulation of static allometry is a fundamental developmental process, yet little is understood of the mechanisms that ensure organs scale correctly across a range of body sizes. Recent studies have revealed the physiological and genetic mechanisms that control nutritional variation in the final body and organ size in holometabolous insects. The implications these mechanisms have for the regulation of static allometry is, however, unknown. Here, we formulate a mathematical description of the nutritional control of body and organ size in Drosophila melanogaster and use it to explore how the developmental regulators of size influence static allometry. The model suggests that the slope of nutritional static allometries, the 'allometric coefficient', is controlled by the relative sensitivity of an organ's growth rate to changes in nutrition, and the relative duration of development when nutrition affects an organ's final size. The model also predicts that, in order to maintain correct scaling, sensitivity to changes in nutrition varies among organs, and within organs through time. We present experimental data that support these predictions. By revealing how specific physiological and genetic regulators of size influence allometry, the model serves to identify developmental processes upon which evolution may act to alter scaling relationships.

  17. Using a bias aware EnKF to account for unresolved structure in an unsaturated zone model

    NASA Astrophysics Data System (ADS)

    Erdal, D.; Neuweiler, I.; Wollschläger, U.

    2014-01-01

    When predicting flow in the unsaturated zone, any method for modeling the flow will have to define how, and to what level, the subsurface structure is resolved. In this paper, we use the Ensemble Kalman Filter to assimilate local soil water content observations from both a synthetic layered lysimeter and a real field experiment in layered soil in an unsaturated water flow model. We investigate the use of colored noise bias corrections to account for unresolved subsurface layering in a homogeneous model and compare this approach with a fully resolved model. In both models, we use a simplified model parameterization in the Ensemble Kalman Filter. The results show that the use of bias corrections can increase the predictive capability of a simplified homogeneous flow model if the bias corrections are applied to the model states. If correct knowledge of the layering structure is available, the fully resolved model performs best. However, if no, or erroneous, layering is used in the model, the use of a homogeneous model with bias corrections can be the better choice for modeling the behavior of the system.
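
    One common way to realize such a bias-aware filter is to augment the state vector with a colored-noise (AR(1)) bias term that is propagated and updated together with the physical states. The sketch below shows a single-observation ensemble update under that augmentation; the dimensions, noise levels, and AR coefficient are invented for illustration and do not correspond to the paper's setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_ens, n_state = 50, 10
    obs_idx, obs, obs_std = 3, 0.27, 0.01                      # one soil water content observation

    state = rng.normal(0.30, 0.02, size=(n_ens, n_state))      # ensemble of water-content profiles
    bias = np.zeros((n_ens, n_state))                          # colored-noise bias states

    for _ in range(20):                                        # assimilation cycles
        bias = 0.9 * bias + rng.normal(0.0, 0.005, bias.shape) # AR(1) evolution of the bias
        aug = np.hstack([state, bias])                         # augmented ensemble [state, bias]
        y_pred = state[:, obs_idx] + bias[:, obs_idx]          # biased model equivalent of the observation
        d = obs + obs_std * rng.standard_normal(n_ens)         # perturbed observations
        x_anom = aug - aug.mean(axis=0)
        y_anom = y_pred - y_pred.mean()
        P_xy = x_anom.T @ y_anom / (n_ens - 1)
        K = P_xy / (y_anom @ y_anom / (n_ens - 1) + obs_std**2)   # Kalman gain for the augmented state
        aug = aug + np.outer(d - y_pred, K)
        state, bias = aug[:, :n_state], aug[:, n_state:]

    print(state.mean(axis=0).round(3))
    ```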

  18. Coarse-Grained MD Simulations and Protein-Protein Interactions: The Cohesin-Dockerin System.

    PubMed

    Hall, Benjamin A; Sansom, Mark S P

    2009-09-08

    Coarse-grained molecular dynamics (CG-MD) may be applied as part of a multiscale modeling approach to protein-protein interactions. The cohesin-dockerin interaction provides a valuable test system for evaluation of the use of CG-MD, as structural (X-ray) data indicate a dual binding mode for the cohesin-dockerin pair. CG-MD simulations (of 5 μs duration) of the association of cohesin and dockerin identify two distinct binding modes, which resemble those observed in X-ray structures. For each binding mode, ca. 80% of interfacial residues are predicted correctly. Furthermore, each of the binding modes identified by CG-MD is conformationally stable when converted to an atomistic model and used as the basis of a conventional atomistic MD simulation of duration 20 ns.

  19. Towards parameter-free classification of sound effects in movies

    NASA Astrophysics Data System (ADS)

    Chu, Selina; Narayanan, Shrikanth; Kuo, C.-C. J.

    2005-08-01

    The problem of identifying intense events via multimedia data mining in films is investigated in this work. Movies are mainly characterized by dialog, music, and sound effects. We begin our investigation with detecting interesting events through sound effects. Sound effects are neither speech nor music, but are closely associated with interesting events such as car chases and gun shots. In this work, we utilize low-level audio features including MFCC and energy to identify sound effects. It was shown in previous work that the hidden Markov model (HMM) works well for speech/audio signals. However, this technique requires a careful choice of model design and parameters. In this work, we introduce a framework that avoids this requirement and works well with semi- and non-parametric learning algorithms.

  20. Validation of a mathematical model of the bovine estrous cycle for cows with different estrous cycle characteristics.

    PubMed

    Boer, H M T; Butler, S T; Stötzel, C; Te Pas, M F W; Veerkamp, R F; Woelders, H

    2017-11-01

    A recently developed mechanistic mathematical model of the bovine estrous cycle was parameterized to fit empirical data sets collected during one estrous cycle of 31 individual cows, with the main objective to further validate the model. The a priori criteria for validation were (1) the resulting model can simulate the measured data correctly (i.e. goodness of fit), and (2) this is achieved without needing extreme, probably non-physiological parameter values. We used a least squares optimization procedure to identify parameter configurations for the mathematical model to fit the empirical in vivo measurements of follicle and corpus luteum sizes, and the plasma concentrations of progesterone, estradiol, FSH and LH for each cow. The model was capable of accommodating normal variation in estrous cycle characteristics of individual cows. With the parameter sets estimated for the individual cows, the model behavior changed for 21 cows, with improved fit of the simulated output curves for 18 of these 21 cows. Moreover, the number of follicular waves was predicted correctly for 18 of the 25 two-wave and three-wave cows, without extreme parameter value changes. Estimation of specific parameters confirmed results of previous model simulations indicating that parameters involved in luteolytic signaling are very important for regulation of general estrous cycle characteristics, and are likely responsible for differences in estrous cycle characteristics between cows.

  1. Corrective Action Decision Document for Corrective Action Unit 204: Storage Bunkers, Nevada Test Site, Nevada, Rev. No. 0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robert Boehlecke

    2004-04-01

    The six bunkers included in CAU 204 were primarily used to monitor atmospheric testing or store munitions. The "Corrective Action Investigation Plan (CAIP) for Corrective Action Unit 204: Storage Bunkers, Nevada Test Site, Nevada" (NNSA/NV, 2002a) provides information relating to the history, planning, and scope of the investigation; therefore, it will not be repeated in this CADD. This CADD identifies potential corrective action alternatives and provides a rationale for the selection of a recommended corrective action alternative for each CAS within CAU 204. The evaluation of corrective action alternatives is based on process knowledge and the results of investigative activities conducted in accordance with the CAIP (NNSA/NV, 2002a) that was approved prior to the start of the Corrective Action Investigation (CAI). Record of Technical Change (ROTC) No. 1 to the CAIP (approval pending) documents changes to the preliminary action levels (PALs) agreed to by the Nevada Division of Environmental Protection (NDEP) and DOE, National Nuclear Security Administration Nevada Site Office (NNSA/NSO). This ROTC specifically discusses the radiological PALs and their application to the findings of the CAU 204 corrective action investigation. The scope of this CADD consists of the following: (1) Develop corrective action objectives; (2) Identify corrective action alternative screening criteria; (3) Develop corrective action alternatives; (4) Perform detailed and comparative evaluations of corrective action alternatives in relation to corrective action objectives and screening criteria; and (5) Recommend and justify a preferred corrective action alternative for each CAS within CAU 204.

  2. A Small-Scale Comparison of Iceland Scallop Size Distributions Obtained from a Camera Based Autonomous Underwater Vehicle and Dredge Survey

    PubMed Central

    Singh, Warsha; Örnólfsdóttir, Erla B.; Stefansson, Gunnar

    2014-01-01

    An approach is developed to estimate size of Iceland scallop shells from AUV photos. A small-scale camera based AUV survey of Iceland scallops was conducted at a defined site off West Iceland. Prior to height estimation of the identified shells, the distortions introduced by the vehicle orientation and the camera lens were corrected. The average AUV pitch and roll was 1.3 and 2.3 deg, which resulted in <2% error in ground distance, rendering these effects negligible. A quadratic polynomial model was identified for lens distortion correction. This model successfully predicted a theoretical grid from a frame photographed underwater, representing the inherent lens distortion. The predicted shell heights were scaled for the distance from the bottom at which the photos were taken. This approach was validated by height estimation of scallops of known sizes. An underestimation of approximately 0.5 cm was seen, which could be attributed to pixel error, where each pixel represented 0.24 x 0.27 cm. After correcting for this difference the estimated heights ranged from 3.8-9.3 cm. A comparison of the height-distribution from a small-scale dredge survey carried out in the vicinity showed non-overlapping peaks in size distribution, with scallops of a broader size range visible in the AUV survey. Further investigations are necessary to evaluate any underlying bias and to validate how representative these surveys are of the true population. The low resolution images made identification of smaller scallops difficult. Overall, the observations of very few small scallops in both surveys could be attributed to low recruitment levels in the recent years due to the known scallop parasite outbreak in the region. PMID:25303243

  3. A small-scale comparison of Iceland scallop size distributions obtained from a camera based autonomous underwater vehicle and dredge survey.

    PubMed

    Singh, Warsha; Örnólfsdóttir, Erla B; Stefansson, Gunnar

    2014-01-01

    An approach is developed to estimate size of Iceland scallop shells from AUV photos. A small-scale camera based AUV survey of Iceland scallops was conducted at a defined site off West Iceland. Prior to height estimation of the identified shells, the distortions introduced by the vehicle orientation and the camera lens were corrected. The average AUV pitch and roll was 1.3 and 2.3 deg that resulted in <2% error in ground distance rendering these effects negligible. A quadratic polynomial model was identified for lens distortion correction. This model successfully predicted a theoretical grid from a frame photographed underwater, representing the inherent lens distortion. The predicted shell heights were scaled for the distance from the bottom at which the photos were taken. This approach was validated by height estimation of scallops of known sizes. An underestimation of approximately 0.5 cm was seen, which could be attributed to pixel error, where each pixel represented 0.24 x 0.27 cm. After correcting for this difference the estimated heights ranged from 3.8-9.3 cm. A comparison of the height-distribution from a small-scale dredge survey carried out in the vicinity showed non-overlapping peaks in size distribution, with scallops of a broader size range visible in the AUV survey. Further investigations are necessary to evaluate any underlying bias and to validate how representative these surveys are of the true population. The low resolution images made identification of smaller scallops difficult. Overall, the observations of very few small scallops in both surveys could be attributed to low recruitment levels in the recent years due to the known scallop parasite outbreak in the region.
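
    The lens-distortion step in this record fits a quadratic polynomial that maps distorted image coordinates back onto an undistorted reference grid. The sketch below reproduces that kind of fit on a synthetic grid; the distortion strength and grid are made up, and the model is a generic quadratic surrogate rather than the authors' exact formulation.

    ```python
    import numpy as np

    def quad_features(xy):
        x, y = xy[:, 0], xy[:, 1]
        return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

    rng = np.random.default_rng(5)
    grid = np.stack(np.meshgrid(np.linspace(-1, 1, 9), np.linspace(-1, 1, 9)), axis=-1).reshape(-1, 2)
    r2 = (grid**2).sum(axis=1, keepdims=True)
    distorted = grid * (1.0 + 0.08 * r2) + 0.002 * rng.standard_normal(grid.shape)  # barrel-like distortion

    # Least-squares quadratic correction, one coefficient column per output coordinate
    A = quad_features(distorted)
    coeff, *_ = np.linalg.lstsq(A, grid, rcond=None)
    corrected = A @ coeff
    print(np.abs(corrected - grid).max())        # residual error of the correction
    ```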

  4. What's in a title? An assessment of whether randomized controlled trial in a title means that it is one.

    PubMed

    Koletsi, Despina; Pandis, Nikolaos; Polychronopoulou, Argy; Eliades, Theodore

    2012-06-01

    In this study, we aimed to investigate whether studies published in orthodontic journals and titled as randomized clinical trials are truly randomized clinical trials. A second objective was to explore the association of journal type and other publication characteristics on correct classification. American Journal of Orthodontics and Dentofacial Orthopedics, European Journal of Orthodontics, Angle Orthodontist, Journal of Orthodontics, Orthodontics and Craniofacial Research, World Journal of Orthodontics, Australian Orthodontic Journal, and Journal of Orofacial Orthopedics were hand searched for clinical trials labeled in the title as randomized from 1979 to July 2011. The data were analyzed by using descriptive statistics, and univariable and multivariable examinations of statistical associations via ordinal logistic regression modeling (proportional odds model). One hundred twelve trials were identified. Of the included trials, 33 (29.5%) were randomized clinical trials, 52 (46.4%) had an unclear status, and 27 (24.1%) were not randomized clinical trials. In the multivariable analysis among the included journal types, year of publication, number of authors, multicenter trial, and involvement of statistician were significant predictors of correctly classifying a study as a randomized clinical trial vs unclear and not a randomized clinical trial. From 112 clinical trials in the orthodontic literature labeled as randomized clinical trials, only 29.5% were identified as randomized clinical trials based on clear descriptions of appropriate random number generation and allocation concealment. The type of journal, involvement of a statistician, multicenter trials, greater numbers of authors, and publication year were associated with correct clinical trial classification. This study indicates the need of clear and accurate reporting of clinical trials and the need for educating investigators on randomized clinical trial methodology. Copyright © 2012 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.

  5. Evaluation of artificial neural network algorithms for predicting METs and activity type from accelerometer data: validation on an independent sample.

    PubMed

    Freedson, Patty S; Lyden, Kate; Kozey-Keadle, Sarah; Staudenmayer, John

    2011-12-01

    Previous work from our laboratory provided a "proof of concept" for use of artificial neural networks (nnets) to estimate metabolic equivalents (METs) and identify activity type from accelerometer data (Staudenmayer J, Pober D, Crouter S, Bassett D, Freedson P, J Appl Physiol 107: 1300-1307, 2009). The purpose of this study was to develop new nnets based on a larger, more diverse, training data set and apply these nnet prediction models to an independent sample to evaluate the robustness and flexibility of this machine-learning modeling technique. The nnet training data set (University of Massachusetts) included 277 participants who each completed 11 activities. The independent validation sample (n = 65) (University of Tennessee) completed one of three activity routines. Criterion measures were 1) measured METs assessed using open-circuit indirect calorimetry; and 2) observed activity to identify activity type. The nnet input variables included five accelerometer count distribution features and the lag-1 autocorrelation. The bias and root mean square errors for the nnet MET model trained on the University of Massachusetts data and applied to the University of Tennessee sample were +0.32 and 1.90 METs, respectively. Seventy-seven percent of the activities were correctly classified as sedentary/light, moderate, or vigorous intensity. For activity type, household and locomotion activities were correctly classified by the nnet activity type model 98.1 and 89.5% of the time, respectively, and sport was correctly classified 23.7% of the time. Use of this machine-learning technique operates reasonably well when applied to an independent sample. We propose the creation of an open-access activity dictionary, including accelerometer data from a broad array of activities, leading to further improvements in prediction accuracy for METs, activity intensity, and activity type.
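
    The network inputs named in this abstract, five count-distribution features plus the lag-1 autocorrelation, are straightforward to compute from an epoch of accelerometer counts. The snippet below sketches one plausible version of that feature extraction; the specific percentiles and epoch length are assumptions, not the authors' published definitions.

    ```python
    import numpy as np

    def nnet_features(counts):
        """Five count-distribution features plus the lag-1 autocorrelation of the series."""
        percentiles = np.percentile(counts, [10, 25, 50, 75, 90])
        lag1 = np.corrcoef(counts[:-1], counts[1:])[0, 1]
        return np.append(percentiles, lag1)

    rng = np.random.default_rng(6)
    counts = rng.poisson(200, size=60)            # e.g. one minute of 1-s epoch counts
    print(nnet_features(counts).round(2))
    ```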

  6. A Bayesian Framework for Generalized Linear Mixed Modeling Identifies New Candidate Loci for Late-Onset Alzheimer’s Disease

    PubMed Central

    Wang, Xulong; Philip, Vivek M.; Ananda, Guruprasad; White, Charles C.; Malhotra, Ankit; Michalski, Paul J.; Karuturi, Krishna R. Murthy; Chintalapudi, Sumana R.; Acklin, Casey; Sasner, Michael; Bennett, David A.; De Jager, Philip L.; Howell, Gareth R.; Carter, Gregory W.

    2018-01-01

    Recent technical and methodological advances have greatly enhanced genome-wide association studies (GWAS). The advent of low-cost, whole-genome sequencing facilitates high-resolution variant identification, and the development of linear mixed models (LMM) allows improved identification of putatively causal variants. While essential for correcting false positive associations due to sample relatedness and population stratification, LMMs have commonly been restricted to quantitative variables. However, phenotypic traits in association studies are often categorical, coded as binary case-control or ordered variables describing disease stages. To address these issues, we have devised a method for genomic association studies that implements a generalized LMM (GLMM) in a Bayesian framework, called Bayes-GLMM. Bayes-GLMM has four major features: (1) support of categorical, binary, and quantitative variables; (2) cohesive integration of previous GWAS results for related traits; (3) correction for sample relatedness by mixed modeling; and (4) model estimation by both Markov chain Monte Carlo sampling and maximal likelihood estimation. We applied Bayes-GLMM to the whole-genome sequencing cohort of the Alzheimer’s Disease Sequencing Project. This study contains 570 individuals from 111 families, each with Alzheimer’s disease diagnosed at one of four confidence levels. Using Bayes-GLMM we identified four variants in three loci significantly associated with Alzheimer’s disease. Two variants, rs140233081 and rs149372995, lie between PRKAR1B and PDGFA. The coded proteins are localized to the glial-vascular unit, and PDGFA transcript levels are associated with Alzheimer’s disease-related neuropathology. In summary, this work provides implementation of a flexible, generalized mixed-model approach in a Bayesian framework for association studies. PMID:29507048

  7. Pre-engineering Spaceflight Validation of Environmental Models and the 2005 HZETRN Simulation Code

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Cucinotta, Francis A.; Wilson, John W.; Badavi, Francis F.; Dachev, Ts. P.; Tomov, B. T.; Walker, Steven A.; DeAngelis, Giovanni; Blattnig, Steve R.; Atwell, William

    2006-01-01

    The HZETRN code has been identified by NASA for engineering design in the next phase of space exploration highlighting a return to the Moon in preparation for a Mars mission. In response, a new series of algorithms, beginning with 2005 HZETRN, will be issued, correcting some prior limitations and improving control of propagated errors, along with established code verification processes. Code validation processes will use new/improved low Earth orbit (LEO) environmental models with a recently improved International Space Station (ISS) shield model to validate computational models and procedures using measured data aboard ISS. These validated models will provide a basis for flight-testing the designs of future space vehicles and systems of the Constellation program in the LEO environment.

  8. Recalculating the quasar luminosity function of the extended Baryon Oscillation Spectroscopic Survey

    NASA Astrophysics Data System (ADS)

    Caditz, David M.

    2017-12-01

    Aims: The extended Baryon Oscillation Spectroscopic Survey (eBOSS) of the Sloan Digital Sky Survey provides a uniform sample of over 13 000 variability selected quasi-stellar objects (QSOs) in the redshift range 0.68

  9. Using deep RNA sequencing for the structural annotation of the laccaria bicolor mycorrhizal transcriptome.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larsen, P. E.; Trivedi, G.; Sreedasyam, A.

    2010-07-06

    Accurate structural annotation is important for prediction of function and required for in vitro approaches to characterize or validate the gene expression products. Despite significant efforts in the field, determination of the gene structure from genomic data alone is a challenging and inaccurate process. The ease of acquisition of transcriptomic sequence provides a direct route to identify expressed sequences and determine the correct gene structure. We developed methods to utilize RNA-seq data to correct errors in the structural annotation and extend the boundaries of current gene models using assembly approaches. The methods were validated with a transcriptomic data set derived from the fungus Laccaria bicolor, which develops a mycorrhizal symbiotic association with the roots of many tree species. Our analysis focused on the subset of 1501 gene models that are differentially expressed in the free living vs. mycorrhizal transcriptome and are expected to be important elements related to carbon metabolism, membrane permeability and transport, and intracellular signaling. Of the set of 1501 gene models, 1439 (96%) successfully generated modified gene models in which all error flags were successfully resolved and the sequences aligned to the genomic sequence. The remaining 4% (62 gene models) either had deviations from transcriptomic data that could not be spanned or generated sequence that did not align to genomic sequence. The outcome of this process is a set of high confidence gene models that can be reliably used for experimental characterization of protein function. 69% of expressed mycorrhizal JGI 'best' gene models deviated from the transcript sequence derived by this method. The transcriptomic sequence enabled correction of a majority of the structural inconsistencies and resulted in a set of validated models for 96% of the mycorrhizal genes. The method described here can be applied to improve gene structural annotation in other species, provided that there is a sequenced genome and a set of gene models.

  10. 21 CFR 820.100 - Corrective and preventive action.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ..., work operations, concessions, quality audit reports, quality records, service records, complaints, returned product, and other sources of quality data to identify existing and potential causes of... (CONTINUED) MEDICAL DEVICES QUALITY SYSTEM REGULATION Corrective and Preventive Action § 820.100 Corrective...

  11. 21 CFR 820.100 - Corrective and preventive action.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ..., work operations, concessions, quality audit reports, quality records, service records, complaints, returned product, and other sources of quality data to identify existing and potential causes of... (CONTINUED) MEDICAL DEVICES QUALITY SYSTEM REGULATION Corrective and Preventive Action § 820.100 Corrective...

  12. 21 CFR 820.100 - Corrective and preventive action.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ..., work operations, concessions, quality audit reports, quality records, service records, complaints, returned product, and other sources of quality data to identify existing and potential causes of... (CONTINUED) MEDICAL DEVICES QUALITY SYSTEM REGULATION Corrective and Preventive Action § 820.100 Corrective...

  13. 21 CFR 820.100 - Corrective and preventive action.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ..., work operations, concessions, quality audit reports, quality records, service records, complaints, returned product, and other sources of quality data to identify existing and potential causes of... (CONTINUED) MEDICAL DEVICES QUALITY SYSTEM REGULATION Corrective and Preventive Action § 820.100 Corrective...

  14. 21 CFR 820.100 - Corrective and preventive action.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ..., work operations, concessions, quality audit reports, quality records, service records, complaints, returned product, and other sources of quality data to identify existing and potential causes of... (CONTINUED) MEDICAL DEVICES QUALITY SYSTEM REGULATION Corrective and Preventive Action § 820.100 Corrective...

  15. [Is ultrasound equal to X-ray in pediatric fracture diagnosis?].

    PubMed

    Moritz, J D; Hoffmann, B; Meuser, S H; Sehr, D H; Caliebe, A; Heller, M

    2010-08-01

    Ultrasound is currently not established for the diagnosis of fractures. The aim of this study was to compare ultrasound and X-ray beyond their use solely for the identification of fractures, i.e., for the detection of fracture type and dislocation for pediatric fracture diagnosis. Limb bones of dead young pigs served as a model for pediatric bones. The fractured bones were examined with ultrasound, X-ray, and CT, which served as the gold standard. 162 of 248 bones were fractured. 130 fractures were identified using ultrasound, and 148 using X-ray. There were some advantages of X-ray over ultrasound in the detection of fracture type (80 correct results using X-ray, 66 correct results using ultrasound). Ultrasound, however, was superior to X-ray for dislocation identification (41 correct results using X-ray, 51 correct results using ultrasound). Neither difference was statistically significant after adjustment for multiple testing. Ultrasound not only has comparable sensitivity to that of X-ray for the identification of limb fractures but is also equally effective for the diagnosis of fracture type and dislocation. Thus, ultrasound can be used as an adequate alternative method to X-ray for pediatric fracture diagnosis. Georg Thieme Verlag KG Stuttgart, New York.

  16. Investigation of under-ascertainment in epidemiological studies based in general practice.

    PubMed

    Sethi, D; Wheeler, J; Rodrigues, L C; Fox, S; Roderick, P

    1999-02-01

    One of the aims of the Study of Infectious Intestinal Disease (IID) in England is to estimate the incidence of IID presenting to general practice. This sub-study aims to estimate and correct the degree of under-ascertainment in the national study. Cases of presumed IID which presented to general practice in the national study had been ascertained by their GP. In 26 general practices, cases with computerized diagnoses suggestive of IID were identified retrospectively. Cases which fulfilled the case definition of IID and should have been ascertained to the coordinating centre but were not, represented the under-ascertainment. Logistic regression modelling was used to identify independent factors which influenced under-ascertainment. The records of 2021 patients were examined, 1514 were eligible and should have been ascertained but only 974 (64%) were. There was variation in ascertainment between the practices (30% to 93%). Patient-related factors independently associated with ascertainment were: i) vomiting only as opposed to diarrhoea with and without vomiting (OR 0.37) and ii) consultation in the surgery as opposed to at home (OR 2.18). Practice-related factors independently associated with ascertainment were: i) participation in the enumeration study component (OR 1.78), ii) a larger number of partners (OR 0.3 for 7-8 partners); iii) rural location (OR 2.27) and iv) previous research experience (OR 1.92). Predicted ascertainment percentages were calculated according to practice characteristics. Under-ascertainment of IID was substantial (36%) and non-random and had to be corrected. Practice characteristics influencing variation in ascertainment were identified and a multivariate model developed to identify adjustment factors which could be applied to individual practices. Researchers need to be aware of factors which influence ascertainment in acute epidemiological studies based in general practice.
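
    A minimal sketch of this kind of analysis, using synthetic data and illustrative variable names (not the study's actual coding): fit a logistic regression for ascertainment and report exponentiated coefficients as odds ratios, mirroring the ORs quoted above.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Synthetic data with hypothetical patient- and practice-level factors.
    rng = np.random.default_rng(3)
    n = 1500
    df = pd.DataFrame({
        "vomiting_only": rng.integers(0, 2, n),
        "surgery_consult": rng.integers(0, 2, n),
        "rural_practice": rng.integers(0, 2, n),
    })
    # Invented "true" effects used only to generate the outcome.
    logit = -0.4 - 1.0 * df.vomiting_only + 0.8 * df.surgery_consult + 0.8 * df.rural_practice
    df["ascertained"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    X = sm.add_constant(df[["vomiting_only", "surgery_consult", "rural_practice"]])
    fit = sm.Logit(df["ascertained"], X).fit(disp=0)
    print(np.exp(fit.params))  # exponentiated coefficients = odds ratios
    ```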

  17. Factors Leading to Persistent Postsurgical Pain in Adolescents Undergoing Spinal Fusion: An Integrative Literature Review.

    PubMed

    Perry, Mallory; Starkweather, Angela; Baumbauer, Kyle; Young, Erin

    Adolescent idiopathic scoliosis (AIS) is the most common spinal deformity among children and adolescents and the most frequent reason for corrective spinal fusion (SF). Of the children and adolescents who undergo SF, a significant number will experience persistent postoperative pain (PPP). This integrative literature review was conducted to identify and synthesize perioperative factors that may contribute to risk of developing PPP. Articles that addressed PPP within the last 10 years and primary research on postoperative pain outcomes in adolescents after SF were selected for review. Fifteen articles that met eligibility criteria were included. Preoperative pain intensity was the most significant factor identified in the development of PPP and increased postoperative pain. Social function and psychological factors also have a role in the development of PPP. There were no theoretical models or frameworks for evaluating PPP incidence in adolescents with AIS after SF. Perioperative factors such as preoperative pain, correction magnitude, pain coping, anxiety, and social functioning are vital to understanding a child's risk of PPP following SF. There is a need for theoretically based studies to assess PPP among children and adolescents with AIS after SF surgery. The Biobehavioral Pain Network (BPN) model was proposed to encompass the biological, social, and psychological domains that may be responsible for the incidence of PPP in children undergoing SF. Such a model can be used to systematically develop and evaluate personalized postoperative pain management strategies for this patient population. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Construction of a multiple myeloma diagnostic model by magnetic bead-based MALDI-TOF mass spectrometry of serum and pattern recognition software.

    PubMed

    Wang, Qing-Tao; Li, Yong-Zhe; Liang, Yu-Fang; Hu, Chao-Jun; Zhai, Yu-Hua; Zhao, Guan-Fei; Zhang, Jian; Li, Ning; Ni, An-Ping; Chen, Wen-Ming; Xu, Yang

    2009-04-01

    A diagnosis of multiple myeloma (MM) is difficult to make on the basis of any single laboratory test result. Accurate diagnosis of MM generally results from a number of costly and invasive laboratory tests and medical procedures. The aim of this work is to find a new, highly specific and sensitive method for MM diagnosis. Serum samples were tested in groups representing MM (n = 54) and non-MM (n = 108). These included a subgroup of 17 plasma cell dyscrasias, a subgroup of 17 reactive plasmacytosis, 5 B cell lymphomas, and 7 other tumors with osseous metastasis, as well as 62 healthy donors as controls. Bioinformatic calculations associated with MM were performed. The decision algorithm, with a panel of three biomarkers, correctly identified 24 of 24 (100%) MM samples and 46 of 49 (93.88%) non-MM samples in the training set. During the masked test for the discriminatory model, 26 of 30 MM patients (sensitivity, 86.67%) were precisely recognized, and all 34 normal donors were successfully classified; patients with reactive plasmacytosis were also correctly classified into the non-MM group, and 11 of the other patients were incorrectly classified as MM. The results suggested that proteomic fingerprint technology combining magnetic beads with MALDI-TOF-MS has the potential for identifying individuals with MM. The biomarker classification model was suitable for preliminary assessment of MM and could potentially serve as a useful tool for MM diagnosis and differential diagnosis.

  19. Lipid correction model of carbon stable isotopes for a cosmopolitan predator, spiny dogfish Squalus acanthias.

    PubMed

    Reum, J C P

    2011-12-01

    Three lipid correction models were evaluated for liver and white dorsal muscle from Squalus acanthias. For muscle, all three models performed well, based on the Akaike Information Criterion value corrected for small sample sizes (AICc), and predicted similar lipid corrections to δ13C that were up to 2.8 ‰ higher than those predicted using previously published models based on multispecies data. For liver, which possessed higher bulk C:N values compared to that of white muscle, all three models performed poorly and lipid-corrected δ13C values were best approximated by simply adding 5.74 ‰ to bulk δ13C values. © 2011 The Author. Journal of Fish Biology © 2011 The Fisheries Society of the British Isles.
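
    For illustration, an AICc-based comparison of candidate correction models might look like the sketch below; the least-squares form of AIC, the residual sums of squares, and the parameter counts are assumed values rather than those of the study.

    ```python
    import numpy as np

    def aicc(rss, n, k):
        # Small-sample corrected AIC using the least-squares form AIC = n*ln(RSS/n) + 2k.
        aic = n * np.log(rss / n) + 2 * k
        return aic + (2 * k * (k + 1)) / (n - k - 1)

    # Hypothetical fits of three lipid-normalization candidates to the same n samples.
    n = 40
    candidates = {
        "model_A (k=2)": (12.3, 2),
        "model_B (k=3)": (11.8, 3),
        "constant +5.74 (k=1)": (14.0, 1),
    }
    for name, (rss, k) in candidates.items():
        print(f"{name}: AICc = {aicc(rss, n, k):.2f}")
    ```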

  20. Nondestructive application of laser-induced fluorescence spectroscopy for quantitative analyses of phenolic compounds in strawberry fruits (Fragaria x ananassa).

    PubMed

    Wulf, J S; Rühmann, S; Rego, I; Puhl, I; Treutter, D; Zude, M

    2008-05-14

    Laser-induced fluorescence spectroscopy (LIFS) was nondestructively applied on strawberries (EX = 337 nm, EM = 400-820 nm) to test the feasibility of quantitatively determining native phenolic compounds in strawberries. Eighteen phenolic compounds were identified in fruit skin by UV and MS spectroscopy and quantitatively determined by use of RP-HPLC for separation and diode-array or chemical reaction detection. Partial least-squares calibration models were built for single phenolic compounds by means of nondestructively recorded fluorescence spectra in the blue-green wavelength range using different data preprocessing methods. The direct orthogonal signal correction resulted in r² = 0.99 and RMSEP < 8% for p-coumaroyl-glucose, and r² = 0.99 and RMSEP < 24% for cinnamoyl-glucose. In comparison, the correction of the fluorescence spectral data with simultaneously recorded reflectance spectra did not further improve the calibration models. Results show the potential of LIFS for a rapid and nondestructive assessment of contents of p-coumaroyl-glucose and cinnamoyl-glucose in strawberry fruits.

  1. Zero-Echo-Time and Dixon Deep Pseudo-CT (ZeDD CT): Direct Generation of Pseudo-CT Images for Pelvic PET/MRI Attenuation Correction Using Deep Convolutional Neural Networks with Multiparametric MRI.

    PubMed

    Leynes, Andrew P; Yang, Jaewon; Wiesinger, Florian; Kaushik, Sandeep S; Shanbhag, Dattesh D; Seo, Youngho; Hope, Thomas A; Larson, Peder E Z

    2018-05-01

    Accurate quantification of uptake on PET images depends on accurate attenuation correction in reconstruction. Current MR-based attenuation correction methods for body PET use a fat and water map derived from a 2-echo Dixon MRI sequence in which bone is neglected. Ultrashort-echo-time or zero-echo-time (ZTE) pulse sequences can capture bone information. We propose the use of patient-specific multiparametric MRI consisting of Dixon MRI and proton-density-weighted ZTE MRI to directly synthesize pseudo-CT images with a deep learning model: we call this method ZTE and Dixon deep pseudo-CT (ZeDD CT). Methods: Twenty-six patients were scanned using an integrated 3-T time-of-flight PET/MRI system. Helical CT images of the patients were acquired separately. A deep convolutional neural network was trained to transform ZTE and Dixon MR images into pseudo-CT images. Ten patients were used for model training, and 16 patients were used for evaluation. Bone and soft-tissue lesions were identified, and the SUVmax was measured. The root-mean-squared error (RMSE) was used to compare the MR-based attenuation correction with the ground-truth CT attenuation correction. Results: In total, 30 bone lesions and 60 soft-tissue lesions were evaluated. The RMSE in PET quantification was reduced by a factor of 4 for bone lesions (10.24% for Dixon PET and 2.68% for ZeDD PET) and by a factor of 1.5 for soft-tissue lesions (6.24% for Dixon PET and 4.07% for ZeDD PET). Conclusion: ZeDD CT produces natural-looking and quantitatively accurate pseudo-CT images and reduces error in pelvic PET/MRI attenuation correction compared with standard methods. © 2018 by the Society of Nuclear Medicine and Molecular Imaging.

  2. Who's My Doctor? Using an Electronic Tool to Improve Team Member Identification on an Inpatient Pediatrics Team.

    PubMed

    Singh, Amit; Rhee, Kyung E; Brennan, Jesse J; Kuelbs, Cynthia; El-Kareh, Robert; Fisher, Erin S

    2016-03-01

    Increase parent/caregiver ability to correctly identify the attending in charge and define terminology of treatment team members (TTMs). We hypothesized that correct TTM identification would increase with use of an electronic communication tool. Secondary aims included assessing subjects' satisfaction with and trust of TTM and interest in computer activities during hospitalization. Two similar groups of parents/legal guardians/primary caregivers of children admitted to the Pediatric Hospital Medicine teaching service with an unplanned first admission were surveyed before (Phase 1) and after (Phase 2) implementation of a novel electronic medical record (EMR)-based tool with names, photos, and definitions of TTMs. Physicians were also surveyed only during Phase 1. Surveys assessed TTM identification, satisfaction, trust, and computer use. More subjects in Phase 2 correctly identified attending physicians by name (71% vs. 28%, P < .001) and correctly defined terms intern, resident, and attending (P ≤ .03) compared with Phase 1. Almost all subjects (>79%) and TTMs (>87%) reported that subjects' ability to identify TTMs moderately or strongly impacted satisfaction and trust. The majority of subjects expressed interest in using computers to understand TTMs in each phase. Subjects' ability to correctly identify attending physicians and define TTMs was significantly greater for those who used our tool. In our study, subjects reported that TTM identification impacted aspects of the TTM relationship, yet few could correctly identify TTMs before tool use. This pilot study showed early success in engaging subjects with the EMR in the hospital and suggests that families would engage in computer-based activities in this setting. Copyright © 2016 by the American Academy of Pediatrics.

  3. An integrated, ethically driven environmental model of clinical decision making in emergency settings.

    PubMed

    Wolf, Lisa

    2013-02-01

    To explore the relationship between multiple variables within a model of critical thinking and moral reasoning. A quantitative descriptive correlational design using a purposive sample of 200 emergency nurses. Measured variables were accuracy in clinical decision making, moral reasoning, perceived care environment, and demographics. Analysis was by bivariate correlation using Pearson's product-moment correlation coefficients, chi-square, and multiple linear regression analysis. The elements identified in the integrated, ethically driven environmental model of clinical decision making (IEDEM-CD) correctly depict moral reasoning and environment of care as factors significantly affecting accuracy in decision making. The integrated, ethically driven environmental model of clinical decision making is a framework useful for predicting clinical decision-making accuracy for emergency nurses in practice, with further implications in education, research, and policy. The model provides a diagnostic and therapeutic framework for identifying and remediating individual and environmental challenges to accurate clinical decision making. © 2012, The Author. International Journal of Nursing Knowledge © 2012, NANDA International.

  4. GOTHiC, a probabilistic model to resolve complex biases and to identify real interactions in Hi-C data.

    PubMed

    Mifsud, Borbala; Martincorena, Inigo; Darbo, Elodie; Sugar, Robert; Schoenfelder, Stefan; Fraser, Peter; Luscombe, Nicholas M

    2017-01-01

    Hi-C is one of the main methods for investigating spatial co-localisation of DNA in the nucleus. However, the raw sequencing data obtained from Hi-C experiments suffer from large biases and spurious contacts, making it difficult to identify true interactions. Existing methods use complex models to account for biases and do not provide a significance threshold for detecting interactions. Here we introduce a simple binomial probabilistic model that resolves complex biases and distinguishes between true and false interactions. The model corrects biases of known and unknown origin and yields a p-value for each interaction, providing a reliable threshold based on significance. We demonstrate this experimentally by testing the method against a random ligation dataset. Our method outperforms previous methods and provides a statistical framework for further data analysis, such as comparisons of Hi-C interactions between different conditions. GOTHiC is available as a BioConductor package (http://www.bioconductor.org/packages/release/bioc/html/GOTHiC.html).
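
    The core idea, testing whether an observed pair count exceeds what random ligation would produce, can be sketched with a one-sided binomial test. The null probability below (taken as proportional to the product of fragment coverages) and all numbers are illustrative assumptions, not GOTHiC's exact normalisation.

    ```python
    from scipy.stats import binomtest

    # Hypothetical fragment pair: relative coverages of the two fragments and the
    # total number of valid read pairs in the experiment.
    total_pairs = 50_000_000
    coverage_i = 0.002          # fraction of reads hitting fragment i
    coverage_j = 0.0015         # fraction of reads hitting fragment j
    p_random = 2 * coverage_i * coverage_j   # assumed null probability of an i-j pair
    observed = 450              # read pairs actually linking i and j

    # One-sided binomial test: is the observed count larger than expected by chance?
    result = binomtest(observed, n=total_pairs, p=p_random, alternative="greater")
    print(f"expected = {total_pairs * p_random:.1f}, observed = {observed}, p = {result.pvalue:.3g}")
    ```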

  5. Near resonant bubble acoustic cross-section corrections, including examples from oceanography, volcanology, and biomedical ultrasound.

    PubMed

    Ainslie, Michael A; Leighton, Timothy G

    2009-11-01

    The scattering cross-section σ_s of a gas bubble of equilibrium radius R_0 in liquid can be written in the form σ_s = 4πR_0²/[(ω_1²/ω² − 1)² + δ²], where ω is the excitation frequency, ω_1 is the resonance frequency, and δ is a frequency-dependent dimensionless damping coefficient. A persistent discrepancy in the frequency dependence of the contribution to δ from radiation damping, denoted δ_rad, is identified and resolved, as follows. Wildt's [Physics of Sound in the Sea (Washington, DC, 1946), Chap. 28] pioneering derivation predicts a linear dependence of δ_rad on frequency, a result which Medwin [Ultrasonics 15, 7-13 (1977)] reproduces using a different method. Weston [Underwater Acoustics, NATO Advanced Study Institute Series Vol. II, 55-88 (1967)], using ostensibly the same method as Wildt, predicts the opposite relationship, i.e., that δ_rad is inversely proportional to frequency. Weston's version of the derivation of the scattering cross-section is shown here to be the correct one, thus resolving the discrepancy. Further, a correction to Weston's model is derived that amounts to a shift in the resonance frequency. A new, corrected, expression for the extinction cross-section is also derived. The magnitudes of the corrections are illustrated using examples from oceanography, volcanology, planetary acoustics, neutron spallation, and biomedical ultrasound. The corrections become significant when the bulk modulus of the gas is not negligible relative to that of the surrounding liquid.
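
    A quick numerical reading of the cross-section expression above; the damping value and the Minnaert-style resonance estimate for a 1 mm bubble are illustrative assumptions.

    ```python
    import numpy as np

    def scattering_cross_section(R0, omega, omega1, delta):
        # sigma_s = 4*pi*R0^2 / ((omega1^2/omega^2 - 1)^2 + delta^2)
        return 4.0 * np.pi * R0**2 / ((omega1**2 / omega**2 - 1.0)**2 + delta**2)

    R0 = 1e-3                      # equilibrium radius [m]
    omega1 = 2 * np.pi * 3.26e3    # approximate Minnaert resonance for a 1 mm air bubble in water [rad/s]
    delta = 0.1                    # illustrative dimensionless damping coefficient
    for f in (1e3, 3.26e3, 10e3):  # excitation frequencies [Hz], below, at, and above resonance
        sigma = scattering_cross_section(R0, 2 * np.pi * f, omega1, delta)
        print(f"f = {f:7.0f} Hz  sigma_s = {sigma:.3e} m^2")
    ```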

  6. Impact of a statistical bias correction on the projected simulated hydrological changes obtained from three GCMs and two hydrology models

    NASA Astrophysics Data System (ADS)

    Hagemann, Stefan; Chen, Cui; Haerter, Jan O.; Gerten, Dieter; Heinke, Jens; Piani, Claudio

    2010-05-01

    Future climate model scenarios depend crucially on their adequate representation of the hydrological cycle. Within the European project "Water and Global Change" (WATCH) special care is taken to couple state-of-the-art climate model output to a suite of hydrological models. This coupling is expected to lead to a better assessment of changes in the hydrological cycle. However, due to the systematic model errors of climate models, their output is often not directly applicable as input for hydrological models. Thus, the methodology of a statistical bias correction has been developed, which can be used for correcting climate model output to produce internally consistent fields that have the same statistical intensity distribution as the observations. As observations, global re-analysed daily data of precipitation and temperature are used that are obtained in the WATCH project. We will apply the bias correction to global climate model data of precipitation and temperature from the GCMs ECHAM5/MPIOM, CNRM-CM3 and LMDZ-4, and intercompare the bias-corrected data to the original GCM data and the observations. Then, the original and the bias-corrected GCM data will be used to force two global hydrology models: (1) the hydrological model of the Max Planck Institute for Meteorology (MPI-HM) consisting of the Simplified Land surface (SL) scheme and the Hydrological Discharge (HD) model, and (2) the dynamic vegetation model LPJmL operated by the Potsdam Institute for Climate Impact Research. The impact of the bias correction on the projected simulated hydrological changes will be analysed, and the resulting behaviour of the two hydrology models will be compared.
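
    A minimal sketch of the kind of statistical (quantile-mapping) bias correction described above, assuming synthetic gamma-distributed daily precipitation; the WATCH methodology itself involves additional steps (for example, transfer functions fitted per grid cell and season) that are not shown here.

    ```python
    import numpy as np

    def quantile_map(model_hist, obs_hist, model_future):
        """Empirical quantile mapping: map each future model value to the observed
        value at the same quantile of the historical model distribution."""
        model_sorted = np.sort(model_hist)
        obs_sorted = np.sort(obs_hist)
        # Quantile of each future value within the historical model distribution
        q = np.clip(np.searchsorted(model_sorted, model_future) / len(model_sorted), 0.0, 1.0)
        # Read off the observed value at that quantile
        return np.quantile(obs_sorted, q)

    # Toy example with synthetic daily precipitation (gamma-distributed)
    rng = np.random.default_rng(0)
    obs = rng.gamma(2.0, 3.0, 10_000)      # "observations"
    mod = rng.gamma(2.0, 4.0, 10_000)      # biased "model", historical period
    fut = rng.gamma(2.0, 4.5, 10_000)      # future model climate
    corrected = quantile_map(mod, obs, fut)
    print(obs.mean(), mod.mean(), fut.mean(), corrected.mean())
    ```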

  7. Corrective Action Investigation Plan for Corrective Action Unit 204: Storage Bunkers, Nevada Test Site, Nevada (December 2002, Revision No.: 0), Including Record of Technical Change No. 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NNSA /NSO

    The Corrective Action Investigation Plan contains the U.S. Department of Energy, National Nuclear Security Administration Nevada Operations Office's approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 204 under the Federal Facility Agreement and Consent Order. Corrective Action Unit 204 is located on the Nevada Test Site approximately 65 miles northwest of Las Vegas, Nevada. This CAU is comprised of six Corrective Action Sites (CASs) which include: 01-34-01, Underground Instrument House Bunker; 02-34-01, Instrument Bunker; 03-34-01, Underground Bunker; 05-18-02, Chemical Explosives Storage; 05-33-01, Kay Blockhouse; 05-99-02, Explosive Storage Bunker. Based on site history, process knowledge, and previous field efforts, contaminants of potential concern for Corrective Action Unit 204 collectively include radionuclides, beryllium, high explosives, lead, polychlorinated biphenyls, total petroleum hydrocarbons, silver, warfarin, and zinc phosphide. The primary question for the investigation is: "Are existing data sufficient to evaluate appropriate corrective actions?" To address this question, resolution of two decision statements is required. Decision I is to "Define the nature of contamination" by identifying any contamination above preliminary action levels (PALs); Decision II is to "Determine the extent of contamination identified above PALs." If PALs are not exceeded, the investigation is completed. If PALs are exceeded, then Decision II must be resolved. In addition, data will be obtained to support waste management decisions. Field activities will include radiological land area surveys, geophysical surveys to identify any subsurface metallic and nonmetallic debris, field screening for applicable contaminants of potential concern, collection and analysis of surface and subsurface soil samples from biased locations, and step-out sampling to define the extent of contamination, as necessary. The results of this field investigation will support a defensible evaluation of corrective action alternatives in the corrective action decision document.

  8. Non-Uniformity Correction Using Nonlinear Characteristic Performance Curves for Calibration

    NASA Astrophysics Data System (ADS)

    Lovejoy, McKenna Roberts

    Infrared imaging is an expansive field with many applications. Advances in infrared technology have led to a greater demand from both commercial and military sectors. However, a known problem with infrared imaging is its non-uniformity. This non-uniformity stems from the fact that each pixel in an infrared focal plane array has its own photoresponse. Many factors such as exposure time, temperature, and amplifier choice affect how the pixels respond to incoming illumination and thus impact image uniformity. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration-based techniques commonly use a linear model to approximate the nonlinear response. This often leaves unacceptable levels of residual non-uniformity. Calibration techniques often have to be repeated during use to continually correct the image. In this dissertation, alternatives to linear NUC algorithms are investigated. The goal of this dissertation is to determine and compare nonlinear non-uniformity correction algorithms. Ideally the results will provide better NUC performance resulting in less residual non-uniformity as well as reduce the need for recalibration. This dissertation will consider new approaches to nonlinear NUC such as higher-order polynomials and exponentials. More specifically, a new gain equalization algorithm has been developed. The various nonlinear non-uniformity correction algorithms will be compared with common linear non-uniformity correction algorithms. Performance will be compared based on RMS errors, residual non-uniformity, and the impact quantization has on correction. Performance will be improved by identifying and replacing bad pixels prior to correction. Two bad pixel identification and replacement techniques will be investigated and compared. Performance will be presented in the form of simulation results as well as before and after images taken with short-wave infrared cameras. The initial results show, using a third-order polynomial with 16-bit precision, significant improvement over the one- and two-point correction algorithms. All algorithms have been implemented in software with satisfactory results, and the third-order gain equalization non-uniformity correction algorithm has been implemented in hardware.
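
    As a sketch of calibration-based polynomial NUC (an illustration of the general idea rather than the dissertation's gain-equalization algorithm), each pixel's response can be fitted with a third-order polynomial against known calibration levels and then evaluated on new frames:

    ```python
    import numpy as np

    def fit_nuc_polynomials(calib_frames, calib_targets, order=3):
        """Per-pixel polynomial fit.
        calib_frames : (K, H, W) raw detector responses at K known illumination levels (K > order)
        calib_targets: (K,) desired uniform output for each calibration level
        Returns an (order+1, H, W) array of coefficients, highest order first."""
        K, H, W = calib_frames.shape
        X = calib_frames.reshape(K, -1)
        coeffs = np.empty((order + 1, H * W))
        for i in range(H * W):                       # fit each pixel's response curve
            coeffs[:, i] = np.polyfit(X[:, i], calib_targets, order)
        return coeffs.reshape(order + 1, H, W)

    def apply_nuc(frame, coeffs):
        """Evaluate the per-pixel polynomial on a raw frame (Horner's scheme)."""
        out = np.zeros_like(frame, dtype=float)
        for c in coeffs:
            out = out * frame + c
        return out
    ```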

  9. Effect of tubing length on the dispersion correction of an arterially sampled input function for kinetic modeling in PET.

    PubMed

    O'Doherty, Jim; Chilcott, Anna; Dunn, Joel

    2015-11-01

    Arterial sampling with dispersion correction is routinely performed for kinetic analysis of PET studies. Because of the advent of PET-MRI systems, non-MR-safe instrumentation will be required to be kept outside the scan room, which requires the length of the tubing between the patient and detector to increase, thus worsening the effects of dispersion. We examined the effects of dispersion in idealized radioactive blood studies using various lengths of tubing (1.5, 3, and 4.5 m) and applied a well-known transmission-dispersion model to attempt to correct the resulting traces. A simulation study was also carried out to examine noise characteristics of the model. The model was applied to patient traces using a 1.5 m acquisition tubing and extended to its use at 3 m. Satisfactory dispersion correction of the blood traces was achieved in the 1.5 m line. Predictions on the basis of experimental measurements, numerical simulations and noise analysis of resulting traces show that corrections of blood data can also be achieved using the 3 m tubing. The effects of dispersion could not be corrected for the 4.5 m line by the selected transmission-dispersion model. On the basis of our setup, correction of dispersion in arterial sampling tubing up to 3 m by the transmission-dispersion model can be performed. The model could not dispersion-correct data acquired using a 4.5 m arterial tubing.

  10. Essays in energy economics: The electricity industry

    NASA Astrophysics Data System (ADS)

    Martinez-Chombo, Eduardo

    Electricity demand analysis using cointegration and error-correction models with time varying parameters: The Mexican case. In this essay we show how some flexibility can be allowed in modeling the parameters of the electricity demand function by employing the time varying coefficient (TVC) cointegrating model developed by Park and Hahn (1999). With the income elasticity of electricity demand modeled as a TVC, we perform tests to examine the adequacy of the proposed model against the cointegrating regression with fixed coefficients, as well as against the spuriousness of the regression with TVC. The results reject the specification of the model with fixed coefficients and favor the proposed model. We also show how some flexibility is gained in the specification of the error correction model based on the proposed TVC cointegrating model, by including more lags of the error correction term as predetermined variables. Finally, we present the results of some out-of-sample forecast comparison among competing models. Electricity demand and supply in Mexico. In this essay we present a simplified model of the Mexican electricity transmission network. We use the model to approximate the marginal cost of supplying electricity to consumers in different locations and at different times of the year. We examine how costs and system operations will be affected by proposed investments in generation and transmission capacity given a forecast of growth in regional electricity demands. Decomposing electricity prices with jumps. In this essay we propose a model that decomposes electricity prices into two independent stochastic processes: one that represents the "normal" pattern of electricity prices and the other that captures temporary shocks, or "jumps", with non-lasting effects in the market. Each contains specific mean reverting parameters to estimate. In order to identify such components we specify a state-space model with regime switching. Using Kim's (1994) filtering algorithm we estimate the parameters of the model, the transition probabilities and the unobservable components for the mean adjusted series of New South Wales' electricity prices. Finally, bootstrap simulations were performed to estimate the expected contribution of each of the components in the overall electricity prices.

  11. Solar array model corrections from Mars Pathfinder lander data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ewell, R.C.; Burger, D.R.

    1997-12-31

    The MESUR solar array power model initially assumed values for input variables. After landing, early surface variables such as array tilt and azimuth or early environmental variables such as array temperature can be corrected. Correction of later environmental variables such as tau versus time, spectral shift, dust deposition, and UV darkening is dependent upon time, on-board science instruments, and ability to separate effects of variables. Engineering estimates had to be made for additional shadow losses and Voc sensor temperature corrections. Some variations had not been expected, such as tau versus time of day, and spectral shift versus time of day. Additions needed to the model are thermal mass of lander petal and correction between Voc sensor and temperature sensor. Conclusions are: the model works well; good battery predictions are difficult; inclusion of Isc and Voc sensors was valuable; and the IMP and MAE science experiments greatly assisted the data analysis and model correction.

  12. Evaluation of respiratory and cardiac motion correction schemes in dual gated PET/CT cardiac imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lamare, F., E-mail: frederic.lamare@chu-bordeaux.fr; Fernandez, P.; CNRS, INCIA, UMR 5287, F-33400 Talence

    Purpose: Cardiac imaging suffers from both respiratory and cardiac motion. One of the proposed solutions involves double gated acquisitions. Although such an approach may lead to both respiratory and cardiac motion compensation, there are issues associated with (a) the combination of data from cardiac and respiratory motion bins, and (b) poor statistical quality images as a result of using only part of the acquired data. The main objective of this work was to evaluate different schemes of combining binned data in order to identify the best strategy to reconstruct motion free cardiac images from dual gated positron emission tomography (PET) acquisitions. Methods: A digital phantom study as well as seven human studies were used in this evaluation. PET data were acquired in list mode (LM). A real-time position management system and an electrocardiogram device were used to provide the respiratory and cardiac motion triggers registered within the LM file. Acquired data were subsequently binned considering four and six cardiac gates, or the diastole only in combination with eight respiratory amplitude gates. PET images were corrected for attenuation, but no randoms or scatter corrections were included. Reconstructed images from each of the bins considered above were subsequently used in combination with an affine or an elastic registration algorithm to derive transformation parameters allowing the combination of all acquired data in a particular position in the cardiac and respiratory cycles. Images were assessed in terms of signal-to-noise ratio (SNR), contrast, image profile, coefficient-of-variation (COV), and relative difference of the recovered activity concentration. Results: Regardless of the considered motion compensation strategy, the nonrigid motion model performed better than the affine model, leading to higher SNR and contrast combined with a lower COV. Nevertheless, when compensating for respiration only, no statistically significant differences were observed in the performance of the two motion models considered. Superior image SNR and contrast were seen using the affine respiratory motion model in combination with the diastole cardiac bin in comparison to the use of the whole cardiac cycle. In contrast, when simultaneously correcting for cardiac beating and respiration, the elastic respiratory motion model outperformed the affine model. In this context, four cardiac bins associated with eight respiratory amplitude bins seemed to be adequate. Conclusions: Considering the compensation of respiratory motion effects only, both affine and elastic based approaches led to an accurate resizing and positioning of the myocardium. The use of the diastolic phase combined with an affine model based respiratory motion correction may therefore be a simple approach leading to significant quality improvements in cardiac PET imaging. However, the best performance was obtained with the combined correction for both cardiac and respiratory movements considering all the dual-gated bins independently through the use of an elastic model based motion compensation.

  13. The Detection and Correction of Bias in Student Ratings of Instruction.

    ERIC Educational Resources Information Center

    Haladyna, Thomas; Hess, Robert K.

    1994-01-01

    A Rasch model was used to detect and correct bias in Likert rating scales used to assess student perceptions of college teaching, using a database of ratings. Statistical corrections were significant, supporting the model's potential utility. Recommendations are made for a theoretical rationale and further research on the model. (Author/MSE)

  14. An Evaluation of Information Criteria Use for Correct Cross-Classified Random Effects Model Selection

    ERIC Educational Resources Information Center

    Beretvas, S. Natasha; Murphy, Daniel L.

    2013-01-01

    The authors assessed correct model identification rates of Akaike's information criterion (AIC), corrected criterion (AICC), consistent AIC (CAIC), Hannan and Quinn's information criterion (HQIC), and Bayesian information criterion (BIC) for selecting among cross-classified random effects models. Performance of default values for the 5…
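
    For reference, the five criteria can be computed from a fitted model's maximized log-likelihood as in the sketch below; the log-likelihood values, parameter counts, and sample size are hypothetical, and the study applies these formulas to cross-classified random effects models rather than the generic case shown.

    ```python
    import numpy as np

    def information_criteria(log_lik, k, n):
        """AIC, AICC, CAIC, HQIC, and BIC from a model's maximized log-likelihood,
        with k estimated parameters and sample size n."""
        aic = -2 * log_lik + 2 * k
        aicc = aic + (2 * k * (k + 1)) / (n - k - 1)
        bic = -2 * log_lik + k * np.log(n)
        caic = -2 * log_lik + k * (np.log(n) + 1)
        hqic = -2 * log_lik + 2 * k * np.log(np.log(n))
        return {"AIC": aic, "AICC": aicc, "CAIC": caic, "HQIC": hqic, "BIC": bic}

    # Example: compare two hypothetical fitted models on the same data (n = 500)
    print(information_criteria(log_lik=-1234.5, k=6, n=500))
    print(information_criteria(log_lik=-1230.1, k=9, n=500))
    ```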

  15. Predicting introductory programming performance: A multi-institutional multivariate study

    NASA Astrophysics Data System (ADS)

    Bergin, Susan; Reilly, Ronan

    2006-12-01

    A model for predicting student performance on introductory programming modules is presented. The model uses attributes identified in a study carried out at four third-level institutions in the Republic of Ireland. Four instruments were used to collect the data and over 25 attributes were examined. A data reduction technique was applied and a logistic regression model using 10-fold stratified cross validation was developed. The model used three attributes: Leaving Certificate Mathematics result (final mathematics examination at second level), number of hours playing computer games while taking the module and programming self-esteem. Prediction success was significant with 80% of students correctly classified. The model also works well on a per-institution level. A discussion on the implications of the model is provided and future work is outlined.
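
    A minimal sketch of the modeling step described above, logistic regression evaluated with 10-fold stratified cross-validation, using synthetic stand-ins for the three predictors (mathematics result, hours playing computer games, programming self-esteem); the data and coefficients are invented for illustration.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    rng = np.random.default_rng(42)
    n = 200
    X = np.column_stack([
        rng.normal(60, 15, n),   # mathematics examination score
        rng.normal(5, 3, n),     # weekly hours playing computer games
        rng.normal(0, 1, n),     # programming self-esteem scale
    ])
    # Hypothetical pass/fail labels loosely tied to the predictors
    y = (0.05 * X[:, 0] + 0.5 * X[:, 2] - 0.1 * X[:, 1] + rng.normal(0, 1, n) > 2.5).astype(int)

    model = LogisticRegression(max_iter=1000)
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"10-fold stratified CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
    ```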

  16. CRISPR/Cas9-mediated somatic correction of a novel coagulator factor IX gene mutation ameliorates hemophilia in mouse.

    PubMed

    Guan, Yuting; Ma, Yanlin; Li, Qi; Sun, Zhenliang; Ma, Lie; Wu, Lijuan; Wang, Liren; Zeng, Li; Shao, Yanjiao; Chen, Yuting; Ma, Ning; Lu, Wenqing; Hu, Kewen; Han, Honghui; Yu, Yanhong; Huang, Yuanhua; Liu, Mingyao; Li, Dali

    2016-05-01

    The X-linked genetic bleeding disorder caused by deficiency of coagulator factor IX, hemophilia B, is a disease ideally suited for gene therapy with genome editing technology. Here, we identify a family with hemophilia B carrying a novel mutation, Y371D, in the human F9 gene. The CRISPR/Cas9 system was used to generate distinct genetically modified mouse models and confirmed that the novel Y371D mutation resulted in a more severe hemophilia B phenotype than the previously identified Y371S mutation. To develop therapeutic strategies targeting this mutation, we subsequently compared naked DNA constructs versus adenoviral vectors to deliver Cas9 components targeting the F9 Y371D mutation in adult mice. After treatment, hemophilia B mice receiving naked DNA constructs exhibited correction of over 0.56% of F9 alleles in hepatocytes, which was sufficient to restore hemostasis. In contrast, the adenoviral delivery system resulted in a higher corrective efficiency but no therapeutic effects due to severe hepatic toxicity. Our studies suggest that CRISPR/Cas-mediated in situ genome editing could be a feasible therapeutic strategy for human hereditary diseases, although an efficient and clinically relevant delivery system is required for further clinical studies. © 2016 The Authors. Published under the terms of the CC BY 4.0 license.

  17. Substance Abuse Treatment in Adult and Juvenile Correctional Facilities: Findings from the Uniform Facility Data Set 1997 Survey of Correctional Facilities.

    ERIC Educational Resources Information Center

    Marsden, Mary Ellen, Ed.; Straw, Richard S., Ed.

    This report presents methodology and findings from the Uniform Facility Data Set (UFDS) 1997 Survey of Correctional Facilities, which surveyed about 7,600 adult and juvenile correctional facilities to identify those that provide on-site substance abuse treatment to their inmates or residents. The survey assesses substance abuse treatment provided…

  18. Correcting for static shift of magnetotelluric data with airborne electromagnetic measurements: a case study from Rathlin Basin, Northern Ireland

    NASA Astrophysics Data System (ADS)

    Delhaye, Robert; Rath, Volker; Jones, Alan G.; Muller, Mark R.; Reay, Derek

    2017-05-01

    Galvanic distortions of magnetotelluric (MT) data, such as the static-shift effect, are a known problem that can lead to incorrect estimation of resistivities and erroneous modelling of geometries with resulting misinterpretation of subsurface electrical resistivity structure. A wide variety of approaches have been proposed to account for these galvanic distortions, some depending on the target area, with varying degrees of success. The natural laboratory for our study is a hydraulically permeable volume of conductive sediment at depth, the internal resistivity structure of which can be used to estimate reservoir viability for geothermal purposes; however, static-shift correction is required in order to ensure robust and precise modelling accuracy. We present here a possible method to employ frequency-domain electromagnetic data in order to correct static-shift effects, illustrated by a case study from Northern Ireland. In our survey area, airborne frequency domain electromagnetic (FDEM) data are regionally available with high spatial density. The spatial distributions of the derived static-shift corrections are analysed and applied to the uncorrected MT data prior to inversion. Two comparative inversion models are derived, one with and one without static-shift corrections, with instructive results. As expected from the one-dimensional analogy of static-shift correction, at shallow model depths, where the structure is controlled by a single local MT site, the correction of static-shift effects leads to vertical scaling of resistivity-thickness products in the model, with the corrected model showing improved correlation to existing borehole wireline resistivity data. In turn, as these vertical scalings are effectively independent of adjacent sites, lateral resistivity distributions are also affected, with up to half a decade of resistivity variation between the models estimated at depths down to 2000 m. Simple estimation of differences in bulk porosity, derived using Archie's Law, between the two models reinforces our conclusion that the sub-order-of-magnitude resistivity contrasts induced by the correction of static shifts correspond to similar contrasts in estimated porosities, and hence, for purposes of reservoir investigation or similar cases requiring accurate absolute resistivity estimates, galvanic distortion correction, especially static-shift correction, is essential.
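
    The porosity comparison mentioned above can be illustrated with Archie's Law; the cementation exponent, tortuosity factor, and pore-fluid resistivity below are assumed values, chosen only to show how a half-decade resistivity contrast maps to a porosity contrast.

    ```python
    import numpy as np

    def archie_porosity(rho_bulk, rho_water, a=1.0, m=2.0):
        """Porosity from bulk resistivity via Archie's Law:
        rho_bulk = a * rho_water * phi**(-m)  =>  phi = (a * rho_water / rho_bulk)**(1/m).
        a and m are illustrative values for a clean, fully water-saturated rock."""
        return (a * rho_water / rho_bulk) ** (1.0 / m)

    rho_water = 0.5                       # pore-fluid resistivity [ohm m], assumed
    for rho in (10.0, 10.0 * 10**0.5):    # e.g., uncorrected vs. static-shift-corrected model value
        print(f"rho = {rho:6.2f} ohm m -> phi = {archie_porosity(rho, rho_water):.3f}")
    ```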

  19. Regional ionospheric model for improvement of navigation position with EGNOS

    NASA Astrophysics Data System (ADS)

    Swiatek, Anna; Tomasik, Lukasz; Jaworski, Leszek

    The problem of insufficient accuracy of the EGNOS correction for the territory of Poland, located at the edge of the EGNOS range, is well known. The EEI PECS project (EGNOS EUPOS Integration) aimed to improve the EGNOS correction by using GPS observations from Polish ASG-EUPOS stations. An ionospheric delay parameter is part of the EGNOS correction. A comparative analysis of TEC values obtained from EGNOS and from regional permanent GNSS stations showed a systematic shift: the TEC from the EGNOS correction is underestimated relative to the computed regional TEC value. New, improved corrections computed from the regional model were substituted for the EGNOS corrections in the corresponding message. Dynamic measurements performed using the Mobile GPS Laboratory (MGL) showed an improvement of the navigation position with the regional TEC model.

  20. NASA Standard for Models and Simulations (M and S): Development Process and Rationale

    NASA Technical Reports Server (NTRS)

    Zang, Thomas A.; Blattnig, Steve R.; Green, Lawrence L.; Hemsch, Michael J.; Luckring, James M.; Morison, Joseph H.; Tripathi, Ram K.

    2009-01-01

    After the Columbia Accident Investigation Board (CAIB) report, the NASA Administrator at that time chartered an executive team (known as the Diaz Team) to identify the CAIB report elements with Agency-wide applicability, and to develop corrective measures to address each element. This report documents the chronological development and release of an Agency-wide Standard for Models and Simulations (M&S) (NASA Standard 7009) in response to Action #4 from the report, "A Renewed Commitment to Excellence: An Assessment of the NASA Agency-wide Applicability of the Columbia Accident Investigation Board Report, January 30, 2004".

  1. Statistical Properties of Global Precipitation in the NCEP GFS Model and TMPA Observations for Data Assimilation

    NASA Technical Reports Server (NTRS)

    Lien, Guo-Yuan; Kalnay, Eugenia; Miyoshi, Takemasa; Huffman, George J.

    2016-01-01

    Assimilation of satellite precipitation data into numerical models presents several difficulties, with two of the most important being the non-Gaussian error distributions associated with precipitation, and large model and observation errors. As a result, improving the model forecast beyond a few hours by assimilating precipitation has been found to be difficult. To identify the challenges and propose practical solutions to assimilation of precipitation, statistics are calculated for global precipitation in a low-resolution NCEP Global Forecast System (GFS) model and the TRMM Multisatellite Precipitation Analysis (TMPA). The samples are constructed using the same model with the same forecast period, observation variables, and resolution as in the follow-on GFS-TMPA precipitation assimilation experiments presented in the companion paper. The statistical results indicate that the T62 and T126 GFS models generally have positive bias in precipitation compared to the TMPA observations, and that the simulation of the marine stratocumulus precipitation is not realistic in the T62 GFS model. It is necessary to apply to precipitation either the commonly used logarithm transformation or the newly proposed Gaussian transformation to obtain a better relationship between the model and observational precipitation. When the Gaussian transformations are separately applied to the model and observational precipitation, they serve as a bias correction that corrects the amplitude-dependent biases. In addition, using a spatially and/or temporally averaged precipitation variable, such as the 6-h accumulated precipitation, should be advantageous for precipitation assimilation.
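
    A sketch of the two transformations named above, assuming a synthetic sample of 6-h precipitation accumulations; the empirical Gaussian transform shown is one plausible implementation of the idea, not necessarily the exact procedure used in the study.

    ```python
    import numpy as np
    from scipy.stats import norm, rankdata

    def log_transform(precip, alpha=1.0):
        # Commonly used logarithm transform; alpha avoids log of zero (value is illustrative).
        return np.log(precip + alpha)

    def gaussian_transform(precip):
        """Empirical Gaussian transform (Gaussian anamorphosis): map each value to the
        standard-normal quantile of its empirical CDF. Applying it separately to model
        and observed precipitation acts as an amplitude-dependent bias correction."""
        ranks = rankdata(precip, method="average")
        cdf = ranks / (len(precip) + 1.0)      # keep the CDF strictly inside (0, 1)
        return norm.ppf(cdf)

    rng = np.random.default_rng(1)
    precip = rng.gamma(0.5, 6.0, 1000)         # toy 6-h accumulated precipitation sample
    print(log_transform(precip).std(), gaussian_transform(precip).std())
    ```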

  2. A high speed model-based approach for wavefront sensorless adaptive optics systems

    NASA Astrophysics Data System (ADS)

    Lianghua, Wen; Yang, Ping; Shuai, Wang; Wenjing, Liu; Shanqiu, Chen; Xu, Bing

    2018-02-01

    To improve the temporal-frequency properties of wavefront sensorless adaptive optics (AO) systems, a fast general model-based aberration correction algorithm is presented. The approach is based on the approximately linear relation between the mean square of the aberration gradients and the second moment of the far-field intensity distribution. The presented model-based method can achieve an effective correction of a modal aberration by applying only one disturbance to the deformable mirror (one correction per disturbance); the mode set is reconstructed by singular value decomposition of the correlation matrix of the Zernike functions' gradients. Numerical simulations of AO correction under various random and dynamic aberrations were implemented. The simulation results indicate that the equivalent control bandwidth is 2-3 times that of the previous method, which performs one aberration correction only after applying N disturbances to the deformable mirror (one correction per N disturbances).

  3. CORRELATES OF INTERORGANIZATIONAL SERVICE COORDINATION IN COMMUNITY CORRECTIONS

    PubMed Central

    Welsh, Wayne N.; Prendergast, Michael; Knight, Kevin; Knudsen, Hannah; Monico, Laura; Gray, Julie; Abdel-Salam, Sami; Redden, Shawna Malvini; Link, Nathan; Hamilton, Leah; Shafer, Michael S.; Friedmann, Peter D.

    2016-01-01

    Because weak interagency coordination between community correctional agencies (e.g., probation and parole) and community-based treatment providers has been identified as a major barrier to the use of evidence-based practices (EBPs) for treating drug-involved offenders, this study sought to examine how key organizational (e.g., leadership, support, staffing) and individual (e.g., burnout, satisfaction) factors influence interagency relationships between these agencies. At each of 20 sites, probation/parole officials (n = 366) and community treatment providers (n = 204) were surveyed about characteristics of their agencies, themselves, and interorganizational relationships with each other. Key organizational and individual correlates of interagency relationships were examined using hierarchical linear model (HLM) analyses, supplemented by interview data. The strongest correlates included Adaptability, Efficacy, and Burnout. Implications for policy and practice are discussed. PMID:27546925

  4. Streamflow Bias Correction for Climate Change Impact Studies: Harmless Correction or Wrecking Ball?

    NASA Astrophysics Data System (ADS)

    Nijssen, B.; Chegwidden, O.

    2017-12-01

    Projections of the hydrologic impacts of climate change rely on a modeling chain that includes estimates of future greenhouse gas emissions, global climate models, and hydrologic models. The resulting streamflow time series are used in turn as input to impact studies. While these flows can sometimes be used directly in these impact studies, many applications require additional post-processing to remove model errors. Water resources models and regulation studies are a prime example of this type of application. These models rely on specific flows and reservoir levels to trigger reservoir releases and diversions and do not function well if the unregulated streamflow inputs are significantly biased in time and/or amount. This post-processing step is typically referred to as bias-correction, even though this step corrects not just the mean but the entire distribution of flows. Various quantile-mapping approaches have been developed that adjust the modeled flows to match a reference distribution for some historic period. Simulations of future flows are then post-processed using this same mapping to remove hydrologic model errors. These streamflow bias-correction methods have received far less scrutiny than the downscaling and bias-correction methods that are used for climate model output, mostly because they are less widely used. However, some of these methods introduce large artifacts in the resulting flow series, in some cases severely distorting the climate change signal that is present in future flows. In this presentation, we discuss our experience with streamflow bias-correction methods as part of a climate change impact study in the Columbia River basin in the Pacific Northwest region of the United States. To support this discussion, we present a novel way to assess whether a streamflow bias-correction method is merely a harmless correction or is more akin to taking a wrecking ball to the climate change signal.

  5. An Extreme-Value Approach to Anomaly Vulnerability Identification

    NASA Technical Reports Server (NTRS)

    Everett, Chris; Maggio, Gaspare; Groen, Frank

    2010-01-01

    The objective of this paper is to present a method for importance analysis in parametric probabilistic modeling where the result of interest is the identification of potential engineering vulnerabilities associated with postulated anomalies in system behavior. In the context of Accident Precursor Analysis (APA), under which this method has been developed, these vulnerabilities, designated as anomaly vulnerabilities, are conditions that produce high risk in the presence of anomalous system behavior. The method defines a parameter-specific Parameter Vulnerability Importance measure (PVI), which identifies anomaly risk-model parameter values that indicate the potential presence of anomaly vulnerabilities, and allows them to be prioritized for further investigation. This entails analyzing each uncertain risk-model parameter over its credible range of values to determine where it produces the maximum risk. A parameter that produces high system risk for a particular range of values suggests that the system is vulnerable to the modeled anomalous conditions, if indeed the true parameter value lies in that range. Thus, PVI analysis provides a means of identifying and prioritizing anomaly-related engineering issues that at the very least warrant improved understanding to reduce uncertainty, such that true vulnerabilities may be identified and proper corrective actions taken.
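
    A minimal sketch of the PVI idea: sweep each uncertain parameter across its credible range while holding the others at baseline, and record the maximum risk and where it occurs. The risk model, parameter names, and ranges below are hypothetical.

    ```python
    import numpy as np

    def parameter_vulnerability_importance(risk_model, base_params, credible_ranges, n_grid=50):
        """For each uncertain parameter, sweep its credible range (others at baseline)
        and record the maximum system risk and the parameter value producing it."""
        pvi = {}
        for name, (lo, hi) in credible_ranges.items():
            grid = np.linspace(lo, hi, n_grid)
            risks = np.array([risk_model(dict(base_params, **{name: v})) for v in grid])
            pvi[name] = {"max_risk": risks.max(), "argmax_value": grid[risks.argmax()]}
        return pvi

    # Toy risk model: risk grows sharply if a leak rate is high or a valve is unreliable.
    def toy_risk(p):
        return p["leak_rate"] ** 2 + 5.0 * (1.0 - p["valve_reliability"])

    base = {"leak_rate": 0.1, "valve_reliability": 0.99}
    ranges = {"leak_rate": (0.0, 1.0), "valve_reliability": (0.9, 1.0)}
    for name, res in parameter_vulnerability_importance(toy_risk, base, ranges).items():
        print(name, res)
    ```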

  6. A Dirichlet process model for classifying and forecasting epidemic curves

    PubMed Central

    2014-01-01

    Background A forecast can be defined as an endeavor to quantitatively estimate a future event or probabilities assigned to a future occurrence. Forecasting stochastic processes such as epidemics is challenging since there are several biological, behavioral, and environmental factors that influence the number of cases observed at each point during an epidemic. However, accurate forecasts of epidemics would impact timely and effective implementation of public health interventions. In this study, we introduce a Dirichlet process (DP) model for classifying and forecasting influenza epidemic curves. Methods The DP model is a nonparametric Bayesian approach that enables the matching of current influenza activity to simulated and historical patterns, identifies epidemic curves different from those observed in the past and enables prediction of the expected epidemic peak time. The method was validated using simulated influenza epidemics from an individual-based model and the accuracy was compared to that of the tree-based classification technique, Random Forest (RF), which has been shown to achieve high accuracy in the early prediction of epidemic curves using a classification approach. We also applied the method to forecasting influenza outbreaks in the United States from 1997–2013 using influenza-like illness (ILI) data from the Centers for Disease Control and Prevention (CDC). Results We made the following observations. First, the DP model performed as well as RF in identifying several of the simulated epidemics. Second, the DP model correctly forecasted the peak time several days in advance for most of the simulated epidemics. Third, the accuracy of identifying epidemics different from those already observed improved with additional data, as expected. Fourth, both methods correctly classified epidemics with higher reproduction numbers (R) with a higher accuracy compared to epidemics with lower R values. Lastly, in the classification of seasonal influenza epidemics based on ILI data from the CDC, the methods’ performance was comparable. Conclusions Although RF requires less computational time compared to the DP model, the algorithm is fully supervised implying that epidemic curves different from those previously observed will always be misclassified. In contrast, the DP model can be unsupervised, semi-supervised or fully supervised. Since both methods have their relative merits, an approach that uses both RF and the DP model could be beneficial. PMID:24405642
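
    The paper's Dirichlet process model is a bespoke Bayesian formulation, but the general idea of letting the data determine the number of epidemic-curve clusters can be illustrated with an off-the-shelf truncated Dirichlet-process mixture. The sketch below uses scikit-learn's BayesianGaussianMixture on synthetic weekly incidence curves; the curve shapes, features, and settings are assumptions for illustration only.

    ```python
    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    rng = np.random.default_rng(1)
    weeks = np.arange(30)

    def epidemic_curve(peak_week, height):
        """Bell-shaped weekly incidence curve (illustrative stand-in for an epidemic)."""
        return height * np.exp(-0.5 * ((weeks - peak_week) / 4.0) ** 2)

    curves = np.vstack(
        [epidemic_curve(12, 100) + rng.normal(0, 5, 30) for _ in range(30)] +
        [epidemic_curve(20, 60) + rng.normal(0, 5, 30) for _ in range(30)]
    )

    # Truncated Dirichlet-process mixture: the model decides how many clusters to use
    dpgmm = BayesianGaussianMixture(
        n_components=10,  # upper bound on the number of clusters
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="diag",
        max_iter=500,
        random_state=0,
    ).fit(curves)

    print("clusters actually used:", np.unique(dpgmm.predict(curves)))
    ```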

  7. Ionospheric propagation correction modeling for satellite altimeters

    NASA Technical Reports Server (NTRS)

    Nesterczuk, G.

    1981-01-01

    The theoretical basis and available accuracy verifications were reviewed and compared for ionospheric correction procedures based on a global ionospheric model driven by solar flux, and a technique in which the measured electron content (using Faraday rotation measurements) for one path is mapped into corrections for a hemisphere. For these two techniques, the RMS errors for correcting satellite altimeter data (at 14 GHz) are estimated to be 12 cm and 3 cm, respectively. On the basis of global accuracy and reliability after implementation, the solar flux model is recommended.

  8. Modeling boundary measurements of scattered light using the corrected diffusion approximation

    PubMed Central

    Lehtikangas, Ossi; Tarvainen, Tanja; Kim, Arnold D.

    2012-01-01

    We study the modeling and simulation of steady-state measurements of light scattered by a turbid medium taken at the boundary. In particular, we implement the recently introduced corrected diffusion approximation in two spatial dimensions to model these boundary measurements. This implementation uses expansions in plane wave solutions to compute boundary conditions and the additive boundary layer correction, and a finite element method to solve the diffusion equation. We show that this corrected diffusion approximation models boundary measurements substantially better than the standard diffusion approximation in comparison to numerical solutions of the radiative transport equation. PMID:22435102

  9. Detection and correction of laser induced breakdown spectroscopy spectral background based on spline interpolation method

    NASA Astrophysics Data System (ADS)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-12-01

    Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and the experimental environment. This continuous background significantly influences analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting, and model-free methods, but few of these have been applied in LIBS, particularly for qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its smoothness. A background-correction simulation indicated that the spline interpolation method achieved the largest signal-to-background ratio (SBR) compared with polynomial fitting, Lorentz fitting, and the model-free method. All of these background correction methods yield larger SBR values than before correction (the SBR before background correction is 10.0992, whereas the SBR values after correction by spline interpolation, polynomial fitting, Lorentz fitting, and the model-free method are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After random noise with different signal-to-noise ratios was added to the spectrum, the spline interpolation method still achieved a large SBR value, whereas polynomial fitting and the model-free method obtained low SBR values. All of the background correction methods improved the quantitative results for Cu relative to those obtained before correction (the linear correlation coefficient before background correction is 0.9776, whereas the values after correction using spline interpolation, polynomial fitting, Lorentz fitting, and the model-free method are 0.9998, 0.9915, 0.9895, and 0.9940, respectively). The proposed spline interpolation method exhibits better linear correlation and smaller error in the quantitative analysis of Cu than polynomial fitting, Lorentz fitting, and the model-free method. The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.
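
    A minimal sketch of baseline estimation by spline interpolation in the spirit of the method described: a cubic spline is fitted through local minima of the spectrum (a simple stand-in for the paper's node-selection rule, which is an assumption here), the background is subtracted, and the signal-to-background ratio is computed at an emission line. The spectrum is synthetic.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.signal import argrelmin

    def spline_background(wavelengths, intensity, order=30):
        """Estimate the smooth continuous background with a cubic spline through
        local minima of the spectrum (plus the two end points)."""
        idx = argrelmin(intensity, order=order)[0]
        idx = np.concatenate(([0], idx, [intensity.size - 1]))
        return CubicSpline(wavelengths[idx], intensity[idx])(wavelengths)

    def signal_to_background_ratio(intensity, background, peak_idx):
        """SBR at a chosen emission line: background-subtracted peak over background."""
        return (intensity[peak_idx] - background[peak_idx]) / background[peak_idx]

    # Synthetic LIBS-like spectrum: two emission lines on a broad continuum
    w = np.linspace(300, 600, 2000)
    continuum = 50 + 0.0005 * (w - 450) ** 2
    lines = (200 * np.exp(-0.5 * ((w - 400) / 0.3) ** 2)
             + 120 * np.exp(-0.5 * ((w - 521) / 0.3) ** 2))
    spectrum = continuum + lines + np.random.default_rng(2).normal(0, 2, w.size)

    bg = spline_background(w, spectrum)
    peak = int(np.argmin(np.abs(w - 400)))
    print("SBR near 400 nm:", signal_to_background_ratio(spectrum, bg, peak))
    ```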

  10. Improved PPP Ambiguity Resolution Considering the Stochastic Characteristics of Atmospheric Corrections from Regional Networks

    PubMed Central

    Li, Yihe; Li, Bofeng; Gao, Yang

    2015-01-01

    With the increased availability of regional reference networks, Precise Point Positioning (PPP) can achieve fast ambiguity resolution (AR) and precise positioning by assimilating the satellite fractional cycle biases (FCBs) and atmospheric corrections derived from these networks. In such processing, the atmospheric corrections are usually treated as deterministic quantities. This is however unrealistic since the estimated atmospheric corrections obtained from the network data are random and furthermore the interpolated corrections diverge from the realistic corrections. This paper is dedicated to the stochastic modelling of atmospheric corrections and analyzing their effects on the PPP AR efficiency. The random errors of the interpolated corrections are processed as two components: one is from the random errors of estimated corrections at reference stations, while the other arises from the atmospheric delay discrepancies between reference stations and users. The interpolated atmospheric corrections are then applied by users as pseudo-observations with the estimated stochastic model. Two data sets are processed to assess the performance of interpolated corrections with the estimated stochastic models. The results show that when the stochastic characteristics of interpolated corrections are properly taken into account, the successful fix rate reaches 93.3% within 5 min for a medium inter-station distance network and 80.6% within 10 min for a long inter-station distance network. PMID:26633400
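
    The sketch below illustrates the general idea of attaching a stochastic model to interpolated corrections before using them as pseudo-observations: the variance combines the propagated estimation variance at the reference stations with a distance-dependent discrepancy term. The inverse-distance weighting and the discrepancy model are assumptions for illustration, not the paper's exact formulation.

    ```python
    import numpy as np

    def interpolate_correction(user_xy, ref_xy, ref_corr, ref_var, disc_var_per_km=1e-4):
        """Inverse-distance interpolation of atmospheric corrections with a simple
        stochastic model for the interpolated value."""
        d = np.linalg.norm(ref_xy - user_xy, axis=1)   # distances to reference stations (km)
        w = (1.0 / d) / np.sum(1.0 / d)                # normalized inverse-distance weights
        corr = np.sum(w * ref_corr)                    # interpolated correction
        var_est = np.sum(w**2 * ref_var)               # propagated estimation variance
        var_disc = disc_var_per_km * np.min(d)         # discrepancy grows with distance to the network
        return corr, var_est + var_disc

    ref_xy = np.array([[0.0, 0.0], [60.0, 0.0], [30.0, 50.0]])  # reference stations (km)
    ref_corr = np.array([0.12, 0.15, 0.10])                     # zenith delay corrections (m)
    ref_var = np.array([1e-4, 1e-4, 2e-4])                      # their variances (m^2)

    corr, var = interpolate_correction(np.array([25.0, 20.0]), ref_xy, ref_corr, ref_var)
    print(f"pseudo-observation: {corr:.3f} m with std {np.sqrt(var) * 100:.1f} cm")
    ```

    The user would then add this pseudo-observation to the PPP filter with the estimated variance rather than treating the correction as error-free.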

  11. Improved PPP Ambiguity Resolution Considering the Stochastic Characteristics of Atmospheric Corrections from Regional Networks.

    PubMed

    Li, Yihe; Li, Bofeng; Gao, Yang

    2015-11-30

    With the increased availability of regional reference networks, Precise Point Positioning (PPP) can achieve fast ambiguity resolution (AR) and precise positioning by assimilating the satellite fractional cycle biases (FCBs) and atmospheric corrections derived from these networks. In such processing, the atmospheric corrections are usually treated as deterministic quantities. This is however unrealistic since the estimated atmospheric corrections obtained from the network data are random and furthermore the interpolated corrections diverge from the realistic corrections. This paper is dedicated to the stochastic modelling of atmospheric corrections and analyzing their effects on the PPP AR efficiency. The random errors of the interpolated corrections are processed as two components: one is from the random errors of estimated corrections at reference stations, while the other arises from the atmospheric delay discrepancies between reference stations and users. The interpolated atmospheric corrections are then applied by users as pseudo-observations with the estimated stochastic model. Two data sets are processed to assess the performance of interpolated corrections with the estimated stochastic models. The results show that when the stochastic characteristics of interpolated corrections are properly taken into account, the successful fix rate reaches 93.3% within 5 min for a medium inter-station distance network and 80.6% within 10 min for a long inter-station distance network.

  12. Station Correction Uncertainty in Multiple Event Location Algorithms and the Effect on Error Ellipses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson, Jason P.; Carlson, Deborah K.; Ortiz, Anne

    Accurate location of seismic events is crucial for nuclear explosion monitoring. There are several sources of error in seismic location that must be taken into account to obtain high confidence results. Most location techniques account for uncertainties in the phase arrival times (measurement error) and the bias of the velocity model (model error), but they do not account for the uncertainty of the velocity model bias. By determining and incorporating this uncertainty in the location algorithm we seek to improve the accuracy of the calculated locations and uncertainty ellipses. In order to correct for deficiencies in the velocity model, it is necessary to apply station-specific corrections to the predicted arrival times. Both master event and multiple event location techniques assume that the station corrections are known perfectly, when in reality there is an uncertainty associated with these corrections. For multiple event location algorithms that calculate station corrections as part of the inversion, it is possible to determine the variance of the corrections. The variance can then be used to weight the arrivals associated with each station, thereby giving more influence to stations with consistent corrections. We have modified an existing multiple event location program (based on PMEL, Pavlis and Booker, 1983). We are exploring weighting arrivals with the inverse of the station correction standard deviation, as well as using the conditional probability of the calculated station corrections. This is in addition to the weighting already given to the measurement and modeling error terms. We re-locate a group of mining explosions that occurred at Black Thunder, Wyoming, and compare the results to those generated without accounting for station correction uncertainty.
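
    A minimal sketch of the weighting idea described above, under the assumption that the measurement, model, and station-correction uncertainties simply add in quadrature: arrivals at stations with poorly determined (inconsistent) corrections receive less weight in the location inversion. The numbers are hypothetical.

    ```python
    import numpy as np

    def arrival_weights(sigma_measurement, sigma_model, sigma_station_corr):
        """Per-arrival weights for a weighted least-squares location: inverse of the
        combined variance of measurement, model, and station-correction errors."""
        total_var = sigma_measurement**2 + sigma_model**2 + sigma_station_corr**2
        return 1.0 / total_var

    # Hypothetical arrival-time uncertainties at four stations (seconds)
    sigma_measurement = np.array([0.05, 0.08, 0.05, 0.10])
    sigma_model       = np.array([0.20, 0.20, 0.20, 0.20])
    sigma_station     = np.array([0.02, 0.30, 0.05, 0.15])  # std of estimated station corrections

    w = arrival_weights(sigma_measurement, sigma_model, sigma_station)
    print(np.round(w / w.max(), 2))  # stations with consistent corrections get the most influence
    ```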

  13. Comparing the use of an online expert health network against common information sources to answer health questions.

    PubMed

    Rhebergen, Martijn D F; Lenderink, Annet F; van Dijk, Frank J H; Hulshof, Carel T J

    2012-02-02

    Many workers have questions about occupational safety and health (OSH). It is unknown whether workers are able to find correct, evidence-based answers to OSH questions when they use common information sources, such as websites, or whether they would benefit from using an easily accessible, free-of-charge online network of OSH experts providing advice. To assess the rate of correct, evidence-based answers to OSH questions in a group of workers who used an online network of OSH experts (intervention group) compared with a group of workers who used common information sources (control group). In a quasi-experimental study, workers in the intervention and control groups were randomly offered 2 questions from a pool of 16 standardized OSH questions. Both questions were sent by mail to all participants, who had 3 weeks to answer them. The intervention group was instructed to use only the online network ArboAntwoord, a network of about 80 OSH experts, to solve the questions. The control group was instructed that they could use all information sources available to them. To assess answer correctness as the main study outcome, 16 standardized correct model answers were constructed with the help of reviewers who performed literature searches. Subsequently, the answers provided by all participants in the intervention (n = 94 answers) and control groups (n = 124 answers) were blinded and compared with the correct model answers on the degree of correctness. Of the 94 answers given by participants in the intervention group, 58 were correct (62%), compared with 24 of the 124 answers (19%) in the control group, who mainly used informational websites found via Google. The difference between the 2 groups was significant (rate difference = 43%, 95% confidence interval [CI] 30%-54%). Additional analysis showed that the rate of correct main conclusions of the answers was 85 of 94 answers (90%) in the intervention group and 75 of 124 answers (61%) in the control group (rate difference = 29%, 95% CI 19%-40%). Remarkably, we could not identify differences between workers who provided correct answers and workers who did not on how they experienced the credibility, completeness, and applicability of the information found (P > .05). Workers are often unable to find correct answers to OSH questions when using common information sources, generally informational websites. Because workers frequently misjudge the quality of the information they find, other strategies are required to assist workers in finding correct answers. Expert advice provided through an online expert network can be effective for this purpose. As many people experience difficulties in finding correct answers to their health questions, expert networks may be an attractive new source of information for health fields in general.

  14. Comparing the Use of an Online Expert Health Network against Common Information Sources to Answer Health Questions

    PubMed Central

    Lenderink, Annet F; van Dijk, Frank JH; Hulshof, Carel TJ

    2012-01-01

    Background Many workers have questions about occupational safety and health (OSH). It is unknown whether workers are able to find correct, evidence-based answers to OSH questions when they use common information sources, such as websites, or whether they would benefit from using an easily accessible, free-of-charge online network of OSH experts providing advice. Objective To assess the rate of correct, evidence-based answers to OSH questions in a group of workers who used an online network of OSH experts (intervention group) compared with a group of workers who used common information sources (control group). Methods In a quasi-experimental study, workers in the intervention and control groups were randomly offered 2 questions from a pool of 16 standardized OSH questions. Both questions were sent by mail to all participants, who had 3 weeks to answer them. The intervention group was instructed to use only the online network ArboAntwoord, a network of about 80 OSH experts, to solve the questions. The control group was instructed that they could use all information sources available to them. To assess answer correctness as the main study outcome, 16 standardized correct model answers were constructed with the help of reviewers who performed literature searches. Subsequently, the answers provided by all participants in the intervention (n = 94 answers) and control groups (n = 124 answers) were blinded and compared with the correct model answers on the degree of correctness. Results Of the 94 answers given by participants in the intervention group, 58 were correct (62%), compared with 24 of the 124 answers (19%) in the control group, who mainly used informational websites found via Google. The difference between the 2 groups was significant (rate difference = 43%, 95% confidence interval [CI] 30%–54%). Additional analysis showed that the rate of correct main conclusions of the answers was 85 of 94 answers (90%) in the intervention group and 75 of 124 answers (61%) in the control group (rate difference = 29%, 95% CI 19%–40%). Remarkably, we could not identify differences between workers who provided correct answers and workers who did not on how they experienced the credibility, completeness, and applicability of the information found (P > .05). Conclusions Workers are often unable to find correct answers to OSH questions when using common information sources, generally informational websites. Because workers frequently misjudge the quality of the information they find, other strategies are required to assist workers in finding correct answers. Expert advice provided through an online expert network can be effective for this purpose. As many people experience difficulties in finding correct answers to their health questions, expert networks may be an attractive new source of information for health fields in general. PMID:22356848

  15. Clinical prediction model to identify vulnerable patients in ambulatory surgery: towards optimal medical decision-making.

    PubMed

    Mijderwijk, Herjan; Stolker, Robert Jan; Duivenvoorden, Hugo J; Klimek, Markus; Steyerberg, Ewout W

    2016-09-01

    Ambulatory surgery patients are at risk of adverse psychological outcomes such as anxiety, aggression, fatigue, and depression. We developed and validated a clinical prediction model to identify patients who were vulnerable to these psychological outcome parameters. We prospectively assessed 383 mixed ambulatory surgery patients for psychological vulnerability, defined as the presence of anxiety (state/trait), aggression (state/trait), fatigue, and depression seven days after surgery. Three psychological vulnerability categories were considered, i.e., none, one, or multiple poor scores, defined as a score exceeding one standard deviation above the mean for each single outcome according to normative data. The following determinants were assessed preoperatively: sociodemographic (age, sex, level of education, employment status, marital status, having children, religion, nationality), medical (heart rate and body mass index), and psychological variables (self-esteem and self-efficacy), in addition to anxiety, aggression, fatigue, and depression. A prediction model was constructed using ordinal polytomous logistic regression analysis, and bootstrapping was applied for internal validation. The ordinal c-index (ORC) quantified the discriminative ability of the model, in addition to measures for overall model performance (Nagelkerke's R²). In this population, 137 (36%) patients were identified as being psychologically vulnerable after surgery for at least one of the psychological outcomes. The most parsimonious and optimal prediction model combined sociodemographic variables (level of education, having children, and nationality) with psychological variables (trait anxiety, state/trait aggression, fatigue, and depression). Model performance was promising: R² = 30% and ORC = 0.76 after correction for optimism. This study identified a substantial group of vulnerable patients in ambulatory surgery. The proposed clinical prediction model could allow healthcare professionals the opportunity to identify vulnerable patients in ambulatory surgery, although additional modification and validation are needed. (ClinicalTrials.gov number, NCT01441843).
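
    The "correction for optimism" mentioned above refers to bootstrap internal validation. The sketch below shows the generic Harrell-style procedure on synthetic data, using a binary outcome and the AUC purely for illustration; the paper's model is ordinal polytomous and is scored with an ordinal c-index, so this is an analogy rather than a reproduction.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(4)

    # Synthetic stand-in: 383 patients, 8 preoperative predictors, binary "vulnerable" outcome
    X = rng.normal(size=(383, 8))
    y = (X @ rng.normal(size=8) + rng.normal(0, 1, 383) > 0.5).astype(int)

    apparent = roc_auc_score(y, LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1])

    # Bootstrap optimism: refit on resampled data, compare bootstrap vs original-sample performance
    optimism = []
    for _ in range(200):
        idx = rng.integers(0, len(y), len(y))
        m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
        optimism.append(roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
                        - roc_auc_score(y, m.predict_proba(X)[:, 1]))

    print("optimism-corrected AUC:", round(apparent - float(np.mean(optimism)), 3))
    ```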

  16. Implementing Training for Correctional Educators. Correctional/Special Education Training Project.

    ERIC Educational Resources Information Center

    Wolford, Bruce I., Ed.; And Others

    These papers represent the collected thoughts of the contributors to a national training and dissemination conference dealing with identifying and developing linkages between postsecondary special education and criminal justice preservice education programs in order to improve training for correctional educators working with disabled clients. The…

  17. Vocational Education in Corrections. Information Series No. 237.

    ERIC Educational Resources Information Center

    Day, Sherman R.; McCane, Mel R.

    Vocational education programs in America's correctional institutions have been financially handicapped, since security demands the greatest portion of resource allocations. Four eras in the development of the correctional system are generally identified: era of punishment and retribution, era of restraint or reform, era of rehabilitation and…

  18. 76 FR 419 - Airworthiness Directives; 328 Support Services GmbH (Type Certificate Previously Held by AvCraft...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-05

    ... cracking was identified as stress corrosion. This condition, if not corrected, could lead to in-flight... identified as stress corrosion. This condition, if not corrected, could lead to in-flight failure of the tab..., using a material that is more resistant to stress corrosion. The improved material rudder spring tab...

  19. Best Practices for Controlling Tuberculosis-Training in Correctional Facilities: A Mixed Methods Evaluation

    ERIC Educational Resources Information Center

    Murray, Ellen R.

    2016-01-01

    According to the literature, identifying and treating tuberculosis (TB) in correctional facilities have been problematic for the inmates and also for the communities into which inmates are released. The importance of training those who can identify this disease early into incarceration is vital to halt the transmission. Although some training has…

  20. Learning Style, Brain Modality, and Teaching Preferences of Incarcerated Females at the Pocatello Women's Correctional Center.

    ERIC Educational Resources Information Center

    Croker, Robert E.; And Others

    A study identified the learning style preferences and brain hemisphericity of female inmates at the Pocatello Women's Correctional Center in Pocatello, Idaho. It also identified teaching methodologies to which inmates were exposed while in a learning environment as well as preferred teaching methods. Data were gathered by the Learning Type Measure…

  1. A corrected model for static and dynamic electromechanical instability of narrow nanotweezers: Incorporation of size effect, surface layer and finite dimensions

    NASA Astrophysics Data System (ADS)

    Koochi, Ali; Hosseini-Toudeshky, Hossein; Abadyan, Mohamadreza

    2018-03-01

    Herein, a corrected theoretical model is proposed for the static and dynamic behavior of electrostatically actuated narrow-width nanotweezers, considering corrections due to finite dimensions, size dependency, and surface energy. Gurtin-Murdoch surface elasticity in conjunction with the modified couple stress theory is employed to capture the coupled effects of surface stresses and the size phenomenon. In addition, the model accounts for external force corrections by incorporating the impact of narrow width on the distribution of the Casimir attraction, the van der Waals (vdW) force, and the fringing field effect. The proposed model is beneficial for the precise modeling of narrow nanotweezers at the nano-scale.

  2. Improving Mixed Variable Optimization of Computational and Model Parameters Using Multiple Surrogate Functions

    DTIC Science & Technology

    2008-03-01

    multiplicative corrections as well as space mapping transformations for models defined over a lower dimensional space. A corrected surrogate model for the...correction functions used in [72]. If the low fidelity model g(x̃) is defined over a lower dimensional space then a space mapping transformation is...required. As defined in [21, 72], space mapping is a method of mapping between models of different dimensionality or fidelity. Let P denote the space

  3. Conjunctive and Disjunctive Extensions of the Least Squares Distance Model of Cognitive Diagnosis

    ERIC Educational Resources Information Center

    Dimitrov, Dimiter M.; Atanasov, Dimitar V.

    2012-01-01

    Many models of cognitive diagnosis, including the "least squares distance model" (LSDM), work under the "conjunctive" assumption that a correct item response occurs when all latent attributes required by the item are correctly performed. This article proposes a "disjunctive" version of the LSDM under which the correct item response occurs when "at…

  4. Microscopic vision modeling method by direct mapping analysis for micro-gripping system with stereo light microscope.

    PubMed

    Wang, Yuezong; Zhao, Zhizhong; Wang, Junshuai

    2016-04-01

    We present a novel and high-precision microscopic vision modeling method, which can be used for 3D data reconstruction in a micro-gripping system with a stereo light microscope. This method consists of four parts: image distortion correction, disparity distortion correction, an initial vision model, and a residual compensation model. First, the method of image distortion correction is proposed. The image data required by image distortion correction come from stereo images of a calibration sample. The geometric features of image distortions can be predicted through the shape deformation of lines constructed by grid points in stereo images. Linear and polynomial fitting methods are applied to correct image distortions. Second, shape deformation features of the disparity distribution are discussed, and the method of disparity distortion correction is proposed. A polynomial fitting method is applied to correct disparity distortion. Third, a microscopic vision model is derived, which consists of two models, i.e., the initial vision model and the residual compensation model. We derive the initial vision model by analysis of the direct mapping relationship between object and image points. The residual compensation model is derived based on the residual analysis of the initial vision model. The results show that, with a maximum reconstruction distance of 4.1 mm in the X direction, 2.9 mm in the Y direction, and 2.25 mm in the Z direction, our model achieves a precision of 0.01 mm in the X and Y directions and 0.015 mm in the Z direction. Comparison of our model with the traditional pinhole camera model shows that the two kinds of models have a similar reconstruction precision for X coordinates. However, the traditional pinhole camera model has a lower precision for Y and Z coordinates than our model. The method proposed in this paper is very helpful for micro-gripping systems based on SLM microscopic vision. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Current recommendations on the estimation of transition probabilities in Markov cohort models for use in health care decision-making: a targeted literature review.

    PubMed

    Olariu, Elena; Cadwell, Kevin K; Hancock, Elizabeth; Trueman, David; Chevrou-Severac, Helene

    2017-01-01

    Although Markov cohort models represent one of the most common forms of decision-analytic models used in health care decision-making, correct implementation of such models requires reliable estimation of transition probabilities. This study sought to identify consensus statements or guidelines that detail how such transition probability matrices should be estimated. A literature review was performed to identify relevant publications in the following databases: Medline, Embase, the Cochrane Library, and PubMed. Electronic searches were supplemented by manual searches of health technology assessment (HTA) websites in Australia, Belgium, Canada, France, Germany, Ireland, Norway, Portugal, Sweden, and the UK. One reviewer assessed studies for eligibility. Of the 1,931 citations identified in the electronic searches, no studies met the inclusion criteria for full-text review, and no guidelines on transition probabilities in Markov models were identified. Manual searching of the websites of HTA agencies identified ten guidelines on economic evaluations (Australia, Belgium, Canada, France, Germany, Ireland, Norway, Portugal, Sweden, and the UK). All identified guidelines provided general guidance on how to develop economic models, but none provided guidance on the calculation of transition probabilities. One relevant publication was identified following review of the reference lists of HTA agency guidelines: the International Society for Pharmacoeconomics and Outcomes Research taskforce guidance. This provided limited guidance on the use of rates and probabilities. There is limited formal guidance available on the estimation of transition probabilities for use in decision-analytic models. Given the increasing importance of cost-effectiveness analysis in the decision-making processes of HTA bodies and other medical decision-makers, there is a need for additional guidance to inform a more consistent approach to decision-analytic modeling. Further research should be done to develop more detailed guidelines on the estimation of transition probabilities.
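
    One of the few points where the cited ISPOR guidance does give direction is the distinction between rates and probabilities. A common conversion, shown below for a constant event rate, is p = 1 - exp(-r*t); this is standard background rather than content of the review, and the numbers are illustrative.

    ```python
    import numpy as np

    def rate_to_probability(rate, cycle_length):
        """Transition probability for a Markov cycle of length t under a constant
        event rate r: p = 1 - exp(-r * t)."""
        return 1.0 - np.exp(-rate * cycle_length)

    def probability_to_rate(prob, cycle_length):
        """Inverse conversion, e.g. to rescale a probability reported for a
        different cycle length: r = -ln(1 - p) / t."""
        return -np.log(1.0 - prob) / cycle_length

    # Example: an annual event probability of 0.20 converted to a monthly cycle
    rate = probability_to_rate(0.20, cycle_length=1.0)            # events per year
    monthly_p = rate_to_probability(rate, cycle_length=1.0 / 12)
    print(round(monthly_p, 4))  # note this is not simply 0.20 / 12
    ```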

  6. Consistency of QSAR models: Correct split of training and test sets, ranking of models and performance parameters.

    PubMed

    Rácz, A; Bajusz, D; Héberger, K

    2015-01-01

    Recent implementations of QSAR modelling software provide the user with numerous models and a wealth of information. In this work, we provide some guidance on how one should interpret the results of QSAR modelling, compare and assess the resulting models, and select the best and most consistent ones. Two QSAR datasets are applied as case studies for the comparison of model performance parameters and model selection methods. We demonstrate the capabilities of sum of ranking differences (SRD) in model selection and ranking, and identify the best performance indicators and models. While the exchange of the original training and (external) test sets does not affect the ranking of performance parameters, it provides improved models in certain cases (despite the lower number of molecules in the training set). Performance parameters for external validation are substantially separated from the other merits in SRD analyses, highlighting their value in data fusion.
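
    A minimal sketch of the sum of ranking differences (SRD) idea used above: objects (here, models) are ranked by each performance parameter and by a reference vector, and the absolute rank differences are summed per parameter. Using the row average as the reference is one common choice and is an assumption here, as are the toy scores.

    ```python
    import numpy as np
    from scipy.stats import rankdata

    def sum_of_ranking_differences(X, reference=None):
        """SRD per column: rank the rows by each column and by the reference,
        then sum the absolute rank differences. Smaller SRD = closer to the reference."""
        X = np.asarray(X, dtype=float)
        if reference is None:
            reference = X.mean(axis=1)  # row-average reference (one common choice)
        ref_ranks = rankdata(reference)
        return {j: float(np.sum(np.abs(rankdata(X[:, j]) - ref_ranks)))
                for j in range(X.shape[1])}

    # Toy example: 6 models (rows) scored by 3 performance parameters (columns)
    scores = np.array([[0.91, 0.88, 0.90],
                       [0.85, 0.86, 0.80],
                       [0.78, 0.80, 0.79],
                       [0.95, 0.84, 0.92],
                       [0.70, 0.72, 0.68],
                       [0.88, 0.90, 0.85]])
    print(sum_of_ranking_differences(scores))
    ```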

  7. An effective model for ergonomic optimization applied to a new automotive assembly line

    NASA Astrophysics Data System (ADS)

    Duraccio, Vincenzo; Elia, Valerio; Forcina, Antonio

    2016-06-01

    An efficient ergonomic optimization can lead to a significant improvement in production performance and a considerable reduction of costs. In the present paper a new model for ergonomic optimization is proposed. The new approach is based on the criteria defined by the National Institute for Occupational Safety and Health, adapted to Italian legislation. The proposed model provides an ergonomic optimization by analyzing the ergonomic relations of manual work performed under correct conditions. The model includes a schematic and systematic analysis method for the operations and identifies all possible ergonomic aspects to be evaluated. The proposed approach has been applied to an automotive assembly line, where the repeatability of operations makes the optimization fundamental. The application clearly demonstrates the effectiveness of the new approach.

  8. Reconstructing shifts in vital rates driven by long-term environmental change: a new demographic method based on readily available data.

    PubMed

    González, Edgar J; Martorell, Carlos

    2013-07-01

    Frequently, vital rates are driven by directional, long-term environmental changes. Many of these are of great importance, such as land degradation, climate change, and succession. Traditional demographic methods assume a constant or stationary environment, and thus are inappropriate to analyze populations subject to these changes. They also require repeat surveys of the individuals as change unfolds. Methods for reconstructing such lengthy processes are needed. We present a model that, based on a time series of population size structures and densities, reconstructs the impact of directional environmental changes on vital rates. The model uses integral projection models and maximum likelihood to identify the rates that best reconstruct the time series. The procedure was validated with artificial and real data. The former involved simulated species with widely different demographic behaviors. The latter used a chronosequence of populations of an endangered cactus subject to increasing anthropogenic disturbance. In our simulations, the vital rates and their change were always reconstructed accurately. Nevertheless, the model frequently produced alternative results. The use of coarse knowledge of the species' biology (whether vital rates increase or decrease with size or their plausible values) allowed the correct rates to be identified with a 90% success rate. With real data, the model correctly reconstructed the effects of disturbance on vital rates. These effects were previously known from two populations for which demographic data were available. Our procedure seems robust, as the data violated several of the model's assumptions. Thus, time series of size structures and densities contain the necessary information to reconstruct changing vital rates. However, additional biological knowledge may be required to provide reliable results. Because time series of size structures and densities are available for many species or can be rapidly generated, our model can contribute to understanding populations that face highly pressing environmental problems.

  9. Reconstructing shifts in vital rates driven by long-term environmental change: a new demographic method based on readily available data

    PubMed Central

    González, Edgar J; Martorell, Carlos

    2013-01-01

    Frequently, vital rates are driven by directional, long-term environmental changes. Many of these are of great importance, such as land degradation, climate change, and succession. Traditional demographic methods assume a constant or stationary environment, and thus are inappropriate to analyze populations subject to these changes. They also require repeat surveys of the individuals as change unfolds. Methods for reconstructing such lengthy processes are needed. We present a model that, based on a time series of population size structures and densities, reconstructs the impact of directional environmental changes on vital rates. The model uses integral projection models and maximum likelihood to identify the rates that best reconstruct the time series. The procedure was validated with artificial and real data. The former involved simulated species with widely different demographic behaviors. The latter used a chronosequence of populations of an endangered cactus subject to increasing anthropogenic disturbance. In our simulations, the vital rates and their change were always reconstructed accurately. Nevertheless, the model frequently produced alternative results. The use of coarse knowledge of the species' biology (whether vital rates increase or decrease with size or their plausible values) allowed the correct rates to be identified with a 90% success rate. With real data, the model correctly reconstructed the effects of disturbance on vital rates. These effects were previously known from two populations for which demographic data were available. Our procedure seems robust, as the data violated several of the model's assumptions. Thus, time series of size structures and densities contain the necessary information to reconstruct changing vital rates. However, additional biological knowledge may be required to provide reliable results. Because time series of size structures and densities are available for many species or can be rapidly generated, our model can contribute to understanding populations that face highly pressing environmental problems. PMID:23919169

  10. Estimating Uncertainties of Ship Course and Speed in Early Navigations using ICOADS3.0

    NASA Astrophysics Data System (ADS)

    Chan, D.; Huybers, P. J.

    2017-12-01

    Information on ship position and its uncertainty is potentially important for mapping out climatologies and changes in SSTs. Using the 2-hourly ship reports from the International Comprehensive Ocean Atmosphere Dataset 3.0 (ICOADS 3.0), we estimate the uncertainties of ship course, ship speed, and latitude/longitude corrections during 1870-1900. After reviewing the techniques used in early navigation, we build a forward navigation model that uses the dead-reckoning technique, celestial latitude corrections, and chronometer longitude corrections. The modeled ship tracks exhibit jumps in longitude and latitude when a position correction is applied. These jumps are also seen in ICOADS 3.0 observations. In this model, the position error at the end of each day increases following a 2D random walk; the latitude/longitude errors are reset when a latitude/longitude correction is applied. We fit the variance of the magnitude of latitude/longitude corrections in the observations against model outputs, and estimate that the standard deviation of uncertainty is 5.5 degrees for ship course, 32% for ship speed, 22 km for latitude corrections, and 27 km for longitude corrections. These estimates are informative priors for Bayesian methods that quantify the position errors of individual tracks.
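
    The following sketch reproduces the structure of the forward model described above: the daily position error grows as a 2D random walk under dead reckoning, and each component is reset when a latitude or longitude fix is taken. The step size and fix probabilities are illustrative assumptions, not the fitted values.

    ```python
    import numpy as np

    def simulate_position_error(n_days, sigma_step_km=15.0,
                                p_lat_fix=0.8, p_lon_fix=0.3, seed=0):
        """Daily position error under dead reckoning with intermittent celestial
        latitude fixes and chronometer longitude fixes."""
        rng = np.random.default_rng(seed)
        err = np.zeros(2)  # [longitude error, latitude error] in km
        history = []
        for _ in range(n_days):
            err = err + rng.normal(0.0, sigma_step_km, size=2)  # dead-reckoning drift
            if rng.random() < p_lat_fix:
                err[1] = 0.0  # latitude correction resets the latitude error
            if rng.random() < p_lon_fix:
                err[0] = 0.0  # longitude correction resets the longitude error
            history.append(err.copy())
        return np.array(history)

    track_error = simulate_position_error(90)
    print("RMS longitude error (km):", float(np.sqrt((track_error[:, 0] ** 2).mean())))
    ```

    Fitting the simulated jump-size distribution to the position jumps observed in ICOADS reports is what allows the step and correction uncertainties to be estimated.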

  11. An evaluation of fundus photography and fundus autofluorescence in the diagnosis of cuticular drusen.

    PubMed

    Høeg, Tracy B; Moldow, Birgitte; Klein, Ronald; La Cour, Morten; Klemp, Kristian; Erngaard, Ditte; Ellervik, Christina; Buch, Helena

    2016-03-01

    To examine non-mydriatic fundus photography (FP) and fundus autofluorescence (FAF) as alternative non-invasive imaging modalities to fluorescein angiography (FA) in the detection of cuticular drusen (CD). Among 2953 adults from the Danish Rural Eye Study (DRES) with gradable FP, three study groups were selected: (1) All those with suspected CD without age-related macular degeneration (AMD) on FP, (2) all those with suspected CD with AMD on FP and (3) a randomly selected group with early AMD. Groups 1, 2 and 3 underwent FA and FAF and group 4 underwent FAF only as part of DRES CD substudy. Main outcome measures included percentage of correct positive and correct negative diagnoses, Cohen's κ and prevalence-adjusted and bias-adjusted κ (PABAK) coefficients of test and grader reliability. CD was correctly identified on FP 88.9% of the time and correctly identified as not being present 83.3% of the time. CD was correctly identified on FAF 62.0% of the time and correctly identified as not being present 100.0% of the time. Compared with FA, FP has a PABAK of 0.75 (0.60 to 1.5) and FAF a PABAK of 0.44 (0.23 to 0.95). FP is a promising, non-invasive substitute for FA in the diagnosis of CD. FAF was less reliable than FP to detect CD. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
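
    For reference, PABAK for a 2x2 agreement table is simply 2*Po - 1, where Po is the observed agreement; the sketch below computes Cohen's kappa and PABAK from a hypothetical table (the counts are made up, not the study's data).

    ```python
    import numpy as np

    def kappa_and_pabak(table):
        """Cohen's kappa and prevalence-/bias-adjusted kappa (PABAK) for a 2x2
        agreement table between two diagnostic readings."""
        table = np.asarray(table, dtype=float)
        n = table.sum()
        po = np.trace(table) / n                                   # observed agreement
        pe = np.sum(table.sum(axis=0) * table.sum(axis=1)) / n**2  # chance agreement
        kappa = (po - pe) / (1.0 - pe)
        pabak = 2.0 * po - 1.0
        return round(kappa, 3), round(pabak, 3)

    # Hypothetical 2x2 table: rows = fundus photography, columns = fluorescein angiography
    table = [[32, 4],   # CD on both / CD on FP only
             [5, 30]]   # CD on FA only / CD on neither
    print(kappa_and_pabak(table))
    ```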

  12. Semiempirical evaluation of post-Hartree-Fock diagonal-Born-Oppenheimer corrections for organic molecules.

    PubMed

    Mohallem, José R

    2008-04-14

    Recent post-Hartree-Fock calculations of the diagonal Born-Oppenheimer correction empirically show that it behaves quite similarly to atomic nuclear mass corrections. An almost constant contribution per electron is identified, which converges with system size for specific series of organic molecules. This feature permits pocket-calculator evaluation of the corrections within thermochemical accuracy (10⁻¹ mhartree, or kcal/mol).

  13. Comparison of Bruker Biotyper Matrix-Assisted Laser Desorption Ionization–Time of Flight Mass Spectrometer to BD Phoenix Automated Microbiology System for Identification of Gram-Negative Bacilli▿

    PubMed Central

    Saffert, Ryan T.; Cunningham, Scott A.; Ihde, Sherry M.; Monson Jobe, Kristine E.; Mandrekar, Jayawant; Patel, Robin

    2011-01-01

    We compared the BD Phoenix automated microbiology system to the Bruker Biotyper (version 2.0) matrix-assisted laser desorption ionization–time of flight (MALDI-TOF) mass spectrometry (MS) system for identification of Gram-negative bacilli, using biochemical testing and/or genetic sequencing to resolve discordant results. The BD Phoenix correctly identified 363 (83%) and 330 (75%) isolates to the genus and species level, respectively. The Bruker Biotyper correctly identified 408 (93%) and 360 (82%) isolates to the genus and species level, respectively. The 440 isolates were grouped into common (308) and infrequent (132) isolates in the clinical laboratory. For the 308 common isolates, the BD Phoenix and Bruker Biotyper correctly identified 294 (95%) and 296 (96%) of the isolates to the genus level, respectively. For species identification, the BD Phoenix and Bruker Biotyper correctly identified 93% of the common isolates (285 and 286, respectively). In contrast, for the 132 infrequent isolates, the Bruker Biotyper correctly identified 112 (85%) and 74 (56%) isolates to the genus and species level, respectively, compared to the BD Phoenix, which identified only 69 (52%) and 45 (34%) isolates to the genus and species level, respectively. Statistically, the Bruker Biotyper overall outperformed the BD Phoenix for identification of Gram-negative bacilli to the genus (P < 0.0001) and species (P = 0.0005) level in this sample set. When isolates were categorized as common or infrequent isolates, there was statistically no difference between the instruments for identification of common Gram-negative bacilli (P > 0.05). However, the Bruker Biotyper outperformed the BD Phoenix for identification of infrequently isolated Gram-negative bacilli (P < 0.0001). PMID:21209160

  14. Comparison of methods for the identification of microorganisms isolated from blood cultures.

    PubMed

    Monteiro, Aydir Cecília Marinho; Fortaleza, Carlos Magno Castelo Branco; Ferreira, Adriano Martison; Cavalcante, Ricardo de Souza; Mondelli, Alessandro Lia; Bagagli, Eduardo; da Cunha, Maria de Lourdes Ribeiro de Souza

    2016-08-05

    Bloodstream infections are responsible for thousands of deaths each year. The rapid identification of the microorganisms causing these infections permits correct therapeutic management that will improve the prognosis of the patient. In an attempt to reduce the time spent on this step, microorganism identification devices have been developed, including the VITEK(®) 2 system, which is currently used in routine clinical microbiology laboratories. This study evaluated the accuracy of the VITEK(®) 2 system in the identification of 400 microorganisms isolated from blood cultures and compared the results to those obtained with conventional phenotypic and genotypic methods. In parallel to the phenotypic identification methods, the DNA of these microorganisms was extracted directly from the blood culture bottles for genotypic identification by the polymerase chain reaction (PCR) and DNA sequencing. The automated VITEK(®) 2 system correctly identified 94.7 % (379/400) of the isolates. The YST and GN cards resulted in 100 % correct identifications of yeasts (15/15) and Gram-negative bacilli (165/165), respectively. The GP card correctly identified 92.6 % (199/215) of Gram-positive cocci, while the ANC card was unable to correctly identify any Gram-positive bacilli (0/5). The performance of the VITEK(®) 2 system was considered acceptable and statistical analysis showed that the system is a suitable option for routine clinical microbiology laboratories to identify different microorganisms.

  15. The robust corrective action priority-an improved approach for selecting competing corrective actions in FMEA based on principle of robust design

    NASA Astrophysics Data System (ADS)

    Sutrisno, Agung; Gunawan, Indra; Vanany, Iwan

    2017-11-01

    In spite of being an integral part of risk-based quality improvement efforts, studies on improving the selection of corrective action priorities using the FMEA technique are still limited in the literature, and none of them considers robustness and risk when selecting among competing improvement initiatives. This study proposes a theoretical model for selecting among competing risk-based corrective actions by considering their robustness and risk. We incorporate the principle of robust design in computing the preference score among corrective action candidates. Along with the cost and benefit of competing corrective actions, we also incorporate their risk and robustness. An example is provided to demonstrate the applicability of the proposed model.

  16. Similarities in error processing establish a link between saccade prediction at baseline and adaptation performance.

    PubMed

    Wong, Aaron L; Shelhamer, Mark

    2014-05-01

    Adaptive processes are crucial in maintaining the accuracy of body movements and rely on error storage and processing mechanisms. Although classically studied with adaptation paradigms, evidence of these ongoing error-correction mechanisms should also be detectable in other movements. Despite this connection, current adaptation models are challenged when forecasting adaptation ability with measures of baseline behavior. On the other hand, we have previously identified an error-correction process present in a particular form of baseline behavior, the generation of predictive saccades. This process exhibits long-term intertrial correlations that decay gradually (as a power law) and are best characterized with the tools of fractal time series analysis. Since this baseline task and adaptation both involve error storage and processing, we sought to find a link between the intertrial correlations of the error-correction process in predictive saccades and the ability of subjects to alter their saccade amplitudes during an adaptation task. Here we find just such a relationship: the stronger the intertrial correlations during prediction, the more rapid the acquisition of adaptation. This reinforces the links found previously between prediction and adaptation in motor control and suggests that current adaptation models are inadequate to capture the complete dynamics of these error-correction processes. A better understanding of the similarities in error processing between prediction and adaptation might provide the means to forecast adaptation ability with a baseline task. This would have many potential uses in physical therapy and the general design of paradigms of motor adaptation. Copyright © 2014 the American Physiological Society.

  17. A transcriptome-wide association study of 229,000 women identifies new candidate susceptibility genes for breast cancer.

    PubMed

    Wu, Lang; Shi, Wei; Long, Jirong; Guo, Xingyi; Michailidou, Kyriaki; Beesley, Jonathan; Bolla, Manjeet K; Shu, Xiao-Ou; Lu, Yingchang; Cai, Qiuyin; Al-Ejeh, Fares; Rozali, Esdy; Wang, Qin; Dennis, Joe; Li, Bingshan; Zeng, Chenjie; Feng, Helian; Gusev, Alexander; Barfield, Richard T; Andrulis, Irene L; Anton-Culver, Hoda; Arndt, Volker; Aronson, Kristan J; Auer, Paul L; Barrdahl, Myrto; Baynes, Caroline; Beckmann, Matthias W; Benitez, Javier; Bermisheva, Marina; Blomqvist, Carl; Bogdanova, Natalia V; Bojesen, Stig E; Brauch, Hiltrud; Brenner, Hermann; Brinton, Louise; Broberg, Per; Brucker, Sara Y; Burwinkel, Barbara; Caldés, Trinidad; Canzian, Federico; Carter, Brian D; Castelao, J Esteban; Chang-Claude, Jenny; Chen, Xiaoqing; Cheng, Ting-Yuan David; Christiansen, Hans; Clarke, Christine L; Collée, Margriet; Cornelissen, Sten; Couch, Fergus J; Cox, David; Cox, Angela; Cross, Simon S; Cunningham, Julie M; Czene, Kamila; Daly, Mary B; Devilee, Peter; Doheny, Kimberly F; Dörk, Thilo; Dos-Santos-Silva, Isabel; Dumont, Martine; Dwek, Miriam; Eccles, Diana M; Eilber, Ursula; Eliassen, A Heather; Engel, Christoph; Eriksson, Mikael; Fachal, Laura; Fasching, Peter A; Figueroa, Jonine; Flesch-Janys, Dieter; Fletcher, Olivia; Flyger, Henrik; Fritschi, Lin; Gabrielson, Marike; Gago-Dominguez, Manuela; Gapstur, Susan M; García-Closas, Montserrat; Gaudet, Mia M; Ghoussaini, Maya; Giles, Graham G; Goldberg, Mark S; Goldgar, David E; González-Neira, Anna; Guénel, Pascal; Hahnen, Eric; Haiman, Christopher A; Håkansson, Niclas; Hall, Per; Hallberg, Emily; Hamann, Ute; Harrington, Patricia; Hein, Alexander; Hicks, Belynda; Hillemanns, Peter; Hollestelle, Antoinette; Hoover, Robert N; Hopper, John L; Huang, Guanmengqian; Humphreys, Keith; Hunter, David J; Jakubowska, Anna; Janni, Wolfgang; John, Esther M; Johnson, Nichola; Jones, Kristine; Jones, Michael E; Jung, Audrey; Kaaks, Rudolf; Kerin, Michael J; Khusnutdinova, Elza; Kosma, Veli-Matti; Kristensen, Vessela N; Lambrechts, Diether; Le Marchand, Loic; Li, Jingmei; Lindström, Sara; Lissowska, Jolanta; Lo, Wing-Yee; Loibl, Sibylle; Lubinski, Jan; Luccarini, Craig; Lux, Michael P; MacInnis, Robert J; Maishman, Tom; Kostovska, Ivana Maleva; Mannermaa, Arto; Manson, JoAnn E; Margolin, Sara; Mavroudis, Dimitrios; Meijers-Heijboer, Hanne; Meindl, Alfons; Menon, Usha; Meyer, Jeffery; Mulligan, Anna Marie; Neuhausen, Susan L; Nevanlinna, Heli; Neven, Patrick; Nielsen, Sune F; Nordestgaard, Børge G; Olopade, Olufunmilayo I; Olson, Janet E; Olsson, Håkan; Peterlongo, Paolo; Peto, Julian; Plaseska-Karanfilska, Dijana; Prentice, Ross; Presneau, Nadege; Pylkäs, Katri; Rack, Brigitte; Radice, Paolo; Rahman, Nazneen; Rennert, Gad; Rennert, Hedy S; Rhenius, Valerie; Romero, Atocha; Romm, Jane; Rudolph, Anja; Saloustros, Emmanouil; Sandler, Dale P; Sawyer, Elinor J; Schmidt, Marjanka K; Schmutzler, Rita K; Schneeweiss, Andreas; Scott, Rodney J; Scott, Christopher G; Seal, Sheila; Shah, Mitul; Shrubsole, Martha J; Smeets, Ann; Southey, Melissa C; Spinelli, John J; Stone, Jennifer; Surowy, Harald; Swerdlow, Anthony J; Tamimi, Rulla M; Tapper, William; Taylor, Jack A; Terry, Mary Beth; Tessier, Daniel C; Thomas, Abigail; Thöne, Kathrin; Tollenaar, Rob A E M; Torres, Diana; Truong, Thérèse; Untch, Michael; Vachon, Celine; Van Den Berg, David; Vincent, Daniel; Waisfisz, Quinten; Weinberg, Clarice R; Wendt, Camilla; Whittemore, Alice S; Wildiers, Hans; Willett, Walter C; Winqvist, Robert; Wolk, Alicja; Xia, Lucy; Yang, Xiaohong R; Ziogas, Argyrios; Ziv, Elad; Dunning, Alison 
M; Pharoah, Paul D P; Simard, Jacques; Milne, Roger L; Edwards, Stacey L; Kraft, Peter; Easton, Douglas F; Chenevix-Trench, Georgia; Zheng, Wei

    2018-06-18

    The breast cancer risk variants identified in genome-wide association studies explain only a small fraction of the familial relative risk, and the genes responsible for these associations remain largely unknown. To identify novel risk loci and likely causal genes, we performed a transcriptome-wide association study evaluating associations of genetically predicted gene expression with breast cancer risk in 122,977 cases and 105,974 controls of European ancestry. We used data from the Genotype-Tissue Expression Project to establish genetic models to predict gene expression in breast tissue and evaluated model performance using data from The Cancer Genome Atlas. Of the 8,597 genes evaluated, significant associations were identified for 48 at a Bonferroni-corrected threshold of P < 5.82 × 10⁻⁶, including 14 genes at loci not yet reported for breast cancer. We silenced 13 genes and showed an effect for 11 on cell proliferation and/or colony-forming efficiency. Our study provides new insights into breast cancer genetics and biology.

  18. Optimisation of near-infrared reflectance model in measuring protein and amylose content of rice flour.

    PubMed

    Xie, L H; Tang, S Q; Chen, N; Luo, J; Jiao, G A; Shao, G N; Wei, X J; Hu, P S

    2014-01-01

    Near-infrared reflectance spectroscopy (NIRS) has been used to predict the cooking quality parameters of rice, such as the protein content (PC) and amylose content (AC). Using brown and milled flours from 519 rice samples representing a wide range of grain qualities, this study compared the calibration models generated by different mathematical treatments, preprocessing treatments, and combinations of regression algorithms. A modified partial least squares (MPLS) model with the mathematical treatment "2, 8, 8, 2" (2nd order derivative computed based on 8 data points, and 8 and 2 data points in the 1st and 2nd smoothing, respectively) and inverse multiplicative scattering correction preprocessing was identified as the best model for the simultaneous measurement of PC and AC in brown flours. MPLS/"2, 8, 8, 2"/detrend preprocessing was identified as the best model for milled flours. The results indicated that NIRS could be useful for estimating the PC and AC of breeding lines in early generations of breeding programs, and for quality control in the food industry. Copyright © 2013 Elsevier Ltd. All rights reserved.
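
    The study's calibrations use modified PLS with derivative/smoothing and scatter-correction pretreatments. The sketch below illustrates the same style of pipeline with a Savitzky-Golay second derivative and ordinary PLS regression from scikit-learn on synthetic spectra; it is not the paper's "2, 8, 8, 2" treatment or MPLS algorithm, and the data are invented.

    ```python
    import numpy as np
    from scipy.signal import savgol_filter
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    wavelengths = np.linspace(1000, 2500, 700)
    n = 400

    # Synthetic NIR spectra: a sample-dependent linear baseline plus an absorption
    # band whose depth tracks the "protein content" y
    y = rng.uniform(5, 15, n)                                     # protein content (%)
    baseline = rng.normal(0, 1, (n, 1)) * np.linspace(0, 1, 700)
    band = np.exp(-0.5 * ((wavelengths - 1700) / 30) ** 2)
    X = baseline + np.outer(0.01 * y, band) + rng.normal(0, 0.002, (n, 700))

    # Pretreatment: Savitzky-Golay smoothing with a second derivative (analogous in
    # spirit to the derivative/smoothing treatments compared in the paper)
    X_d2 = savgol_filter(X, window_length=11, polyorder=2, deriv=2, axis=1)

    pls = PLSRegression(n_components=8)
    r2 = cross_val_score(pls, X_d2, y, cv=5, scoring="r2")
    print("cross-validated R^2:", round(float(r2.mean()), 3))
    ```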

  19. Rasch-Master's Partial Credit Model in the assessment of children's creativity in drawings.

    PubMed

    Nakano, Tatiana de Cássia; Primi, Ricardo

    2014-01-01

    The purpose of the present study was to use the Partial Credit Model to study the factors of the Test of Creativity in Children and to identify which characteristics of the creative person are more effective in differentiating subjects according to their ability level. A sample of 1426 students from first to eighth grade answered the instrument. The Partial Credit Model was used to estimate the ability of the subjects and the item difficulties on a common scale for each of the four factors, indicating which items require a higher level of creativity to be scored and which differentiate the more creative individuals. The results demonstrated that most of the characteristics showed good fit indices, with values between 0.80 and 1.30 for both infit and outfit, indicating a response pattern consistent with the model. The characteristics of Unusual Perspective, Expression of Emotion, and Originality were identified as better predictors of creative performance because they require a greater ability level (usually more than two standard deviations above the mean). These results may be used in the future development of a reduced form of the instrument or a simplification of the current correction model.

  20. Grohar: Automated Visualization of Genome-Scale Metabolic Models and Their Pathways.

    PubMed

    Moškon, Miha; Zimic, Nikolaj; Mraz, Miha

    2018-05-01

    Genome-scale metabolic models (GEMs) have become a powerful tool for the investigation of the entire metabolism of an organism in silico. These models are, however, often extremely hard to reconstruct and also difficult to apply to the selected problem. Visualization of a GEM allows us to comprehend the model more easily, to perform graphical analysis, to find and correct faulty relations, to identify the parts of the system with a designated function, etc. Even though several approaches for the automatic visualization of GEMs have been proposed, metabolic maps are still manually drawn or at least require a large amount of manual curation. We present Grohar, a computational tool for automatic identification and visualization of GEM (sub)networks and their metabolic fluxes. These (sub)networks can be specified directly by listing the metabolites of interest or indirectly by providing reference metabolic pathways from different sources, such as a KEGG, SBML, or Matlab file. These pathways are identified within the GEM using three different pathway alignment algorithms. Grohar also supports the visualization of model adjustments (e.g., activation or inhibition of metabolic reactions) after perturbations are induced.

  1. Compressibility Considerations for kappa-omega Turbulence Models in Hypersonic Boundary Layer Applications

    NASA Technical Reports Server (NTRS)

    Rumsey, C. L.

    2009-01-01

    The ability of kappa-omega models to predict compressible turbulent skin friction in hypersonic boundary layers is investigated. Although uncorrected two-equation models can agree well with correlations for hot-wall cases, they tend to perform progressively worse - particularly for cold walls - as the Mach number is increased in the hypersonic regime. Simple algebraic models such as Baldwin-Lomax agree better with experiments and correlations in these circumstances. Many of the compressibility corrections described in the literature are summarized here. These include corrections that have only a small influence for kappa-omega models, or that apply only in specific circumstances. The most widely used general corrections were designed for use with jet or mixing-layer free shear flows. A less well-known dilatation-dissipation correction intended for boundary layer flows is also tested, and is shown to agree reasonably well with the Baldwin-Lomax model at cold-wall conditions. It exhibits a less dramatic influence than the free shear type of correction. There is clearly a need for improved understanding and better overall physical modeling for turbulence models applied to hypersonic boundary layer flows.
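
    A hedged sketch of the general idea behind dilatation-dissipation compressibility corrections of the Sarkar/Wilcox type: the dissipation term in the k-equation is multiplied by (1 + xi * F(Mt)), where Mt is the turbulent Mach number. The coefficient values and the cutoff below are illustrative and may differ from the exact formulation evaluated in the paper.

      # Generic dilatation-dissipation multiplier for a k-omega model (sketch only).
      import numpy as np

      def turbulent_mach(k, a):
          """Turbulent Mach number Mt = sqrt(2k)/a, with a the local speed of sound."""
          return np.sqrt(2.0 * k) / a

      def dissipation_multiplier(Mt, xi=1.5, Mt0=0.25, form="wilcox"):
          if form == "sarkar":                       # F(Mt) = Mt^2, active at all Mt
              F = Mt**2
          else:                                      # Wilcox-style: active only above a cutoff Mt0
              F = np.maximum(Mt**2 - Mt0**2, 0.0)
          return 1.0 + xi * F

      k, a = 5000.0, 300.0        # turbulence kinetic energy [m^2/s^2], sound speed [m/s]
      Mt = turbulent_mach(k, a)
      print(Mt, dissipation_multiplier(Mt, form="sarkar"), dissipation_multiplier(Mt, form="wilcox"))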

  2. Use of modeling to identify vulnerabilities to human error in laparoscopy.

    PubMed

    Funk, Kenneth H; Bauer, James D; Doolen, Toni L; Telasha, David; Nicolalde, R Javier; Reeber, Miriam; Yodpijit, Nantakrit; Long, Myra

    2010-01-01

    This article describes an exercise to investigate the utility of modeling and human factors analysis in understanding surgical processes and their vulnerabilities to medical error. A formal method to identify error vulnerabilities was developed and applied to a test case of Veress needle insertion during closed laparoscopy. A team of 2 surgeons, a medical assistant, and 3 engineers used hierarchical task analysis and Integrated DEFinition language 0 (IDEF0) modeling to create rich models of the processes used in initial port creation. Using terminology from a standardized human performance database, detailed task descriptions were written for 4 tasks executed in the process of inserting the Veress needle. Key terms from the descriptions were used to extract from the database generic errors that could occur. Task descriptions with potential errors were translated back into surgical terminology. Referring to the process models and task descriptions, the team used a modified failure modes and effects analysis (FMEA) to consider each potential error for its probability of occurrence, its consequences if it should occur and be undetected, and its probability of detection. The resulting likely and consequential errors were prioritized for intervention. A literature-based validation study confirmed the significance of the top error vulnerabilities identified using the method. Ongoing work includes design and evaluation of procedures to correct the identified vulnerabilities and improvements to the modeling and vulnerability identification methods. Copyright 2010 AAGL. Published by Elsevier Inc. All rights reserved.
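
    As a generic illustration of the FMEA-style prioritization step described above (the study used a modified FMEA; the 1-10 scales and example tasks here are hypothetical), each potential error is scored for occurrence, severity, and detection and ranked by the resulting risk priority number.

      # Generic FMEA-style ranking sketch: RPN = occurrence x severity x detection.
      errors = [
          {"task": "verify needle angle",  "occurrence": 4, "severity": 8, "detection": 6},
          {"task": "confirm gas flow",     "occurrence": 2, "severity": 9, "detection": 3},
          {"task": "check abdominal lift", "occurrence": 5, "severity": 6, "detection": 7},
      ]

      for e in errors:
          e["rpn"] = e["occurrence"] * e["severity"] * e["detection"]

      # Highest-RPN items are the likely and consequential errors to address first.
      for e in sorted(errors, key=lambda e: e["rpn"], reverse=True):
          print(f'{e["task"]:<22} RPN = {e["rpn"]}')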

  3. Building an Ontology for Identity Resolution in Healthcare and Public Health

    PubMed Central

    Duncan, Jeffrey; Eilbeck, Karen; Narus, Scott P.; Clyde, Stephen; Thornton, Sidney; Staes, Catherine

    2015-01-01

    Integration of disparate information from electronic health records, clinical data warehouses, birth certificate registries and other public health information systems offers great potential for clinical care, public health practice, and research. Such integration, however, depends on correctly matching patient-specific records using demographic identifiers. Without standards for these identifiers, record linkage is complicated by issues of structural and semantic heterogeneity. Objectives: Our objectives were to develop and validate an ontology to: 1) identify components of identity and events subsequent to birth that result in creation, change, or sharing of identity information; 2) develop an ontology to facilitate data integration from multiple healthcare and public health sources; and 3) validate the ontology’s ability to model identity-changing events over time. Methods: We interviewed domain experts in area hospitals and public health programs and developed process models describing the creation and transmission of identity information among various organizations for activities subsequent to a birth event. We searched for existing relevant ontologies. We validated the content of our ontology with simulated identity information conforming to scenarios identified in our process models. Results: We chose the Simple Event Model (SEM) to describe events in early childhood and integrated the Clinical Element Model (CEM) for demographic information. We demonstrated the ability of the combined SEM-CEM ontology to model identity events over time. Conclusion: The use of an ontology can overcome issues of semantic and syntactic heterogeneity to facilitate record linkage. PMID:26392849

  4. Characterizing bias correction uncertainty in wheat yield predictions

    NASA Astrophysics Data System (ADS)

    Ortiz, Andrea Monica; Jones, Julie; Freckleton, Robert; Scaife, Adam

    2017-04-01

    Farming systems are under increased pressure due to current and future climate change, variability and extremes. Research on the impacts of climate change on crop production typically relies on the output of complex Global and Regional Climate Models, which are used as input to crop impact models. Yield predictions from these top-down approaches can have high uncertainty for several reasons, including diverse model construction and parameterization, future emissions scenarios, and inherent or response uncertainty. These uncertainties propagate down each step of the 'cascade of uncertainty' that flows from climate input to impact predictions, leading to yield predictions that may be too complex for their intended use in practical adaptation options. In addition to uncertainty from impact models, uncertainty can also stem from the intermediate steps that are used in impact studies to adjust climate model simulations to become more realistic when compared to observations, or to correct the spatial or temporal resolution of climate simulations, which are often not directly applicable as input into impact models. These important steps of bias correction or calibration also add uncertainty to final yield predictions, given the various approaches that exist to correct climate model simulations. In order to address how much uncertainty the choice of bias correction method can add to yield predictions, we use several evaluation runs from Regional Climate Models from the Coordinated Regional Downscaling Experiment over Europe (EURO-CORDEX) at different resolutions together with different bias correction methods (linear and variance scaling, power transformation, quantile-quantile mapping) as input to a statistical crop model for wheat, a staple European food crop. The objective of our work is to compare the resulting simulation-driven hindcasted wheat yields to climate observation-driven wheat yield hindcasts from the UK and Germany in order to determine ranges of yield uncertainty that result from different climate model simulation input and bias correction methods. We simulate wheat yields using a General Linear Model that includes the effects of seasonal maximum temperatures and precipitation, since wheat is sensitive to heat stress during important developmental stages. We use the same statistical model to predict future wheat yields using the recently available bias-corrected simulations of EURO-CORDEX-Adjust. While statistical models are often criticized for their lack of complexity, an advantage is that we are here able to consider only the effect of the choice of climate model, resolution or bias correction method on yield. Initial results using both past and future bias-corrected climate simulations with a process-based model will also be presented. Through these methods, we make recommendations for preparing climate model output for crop models.
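
    A minimal sketch of one of the bias correction methods named above, empirical quantile-quantile mapping: each simulated value is replaced by the observed value at the same quantile of the calibration-period distributions. The gamma-distributed "precipitation" series below are synthetic placeholders, not EURO-CORDEX data.

      # Empirical quantile mapping bias correction (illustrative data).
      import numpy as np

      def quantile_map(sim_future, sim_hist, obs_hist, n_quantiles=100):
          q = np.linspace(0.0, 1.0, n_quantiles)
          sim_q = np.quantile(sim_hist, q)
          obs_q = np.quantile(obs_hist, q)
          # Map each future simulated value onto the observed quantile curve.
          return np.interp(sim_future, sim_q, obs_q)

      rng = np.random.default_rng(1)
      obs_hist = rng.gamma(2.0, 3.0, size=3000)      # observed daily precipitation [mm]
      sim_hist = rng.gamma(2.0, 2.2, size=3000)      # biased model, calibration period
      sim_future = rng.gamma(2.0, 2.4, size=3000)    # biased model, projection period

      corrected = quantile_map(sim_future, sim_hist, obs_hist)
      print(sim_future.mean(), corrected.mean(), obs_hist.mean())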

  5. Radiographic detection of single-leg fracture in Björk-Shiley Convexo-Concave prosthetic valves: a phantom model study.

    PubMed

    Gilchrist, I C; Cardella, J F; Fox, P S; Pae, W E; el-Ghamry Sabe, A A; Landis, J R; Localio, A R; Kunselman, A R; Hopper, K D

    1997-02-01

    Cineradiography can identify patients with single-leg fractured Björk-Shiley Convexo-Concave valves, although little is known about the sensitivity and specificity of this technique. We evaluated three normal and six (0 microm gap) single-leg fractured Björk-Shiley valves that were placed in a working phantom model. Valves were randomly imaged a total of 33 times and duplicated into a 120-valve series with a 1:9 ratio of abnormal/normal valves. Six reviewers independently graded each valve and demonstrated markedly different rates of identifying the fractured valves. Average sensitivity at the grade that clinically results in valve explantation was 47%. Among the normal valves, a correct identification was made 96% (range 91% to 99%) of the time. Present radiographic technology may have significant difficulty in identifying true single-leg fracture in Björk-Shiley valves with limb separations that are common among clinically explanted valves.

  6. Tracking employment shocks using mobile phone data

    PubMed Central

    Toole, Jameson L.; Lin, Yu-Ru; Muehlegger, Erich; Shoag, Daniel; González, Marta C.; Lazer, David

    2015-01-01

    Can data from mobile phones be used to observe economic shocks and their consequences at multiple scales? Here we present novel methods to detect mass layoffs, identify individuals affected by them and predict changes in aggregate unemployment rates using call detail records (CDRs) from mobile phones. Using the closure of a large manufacturing plant as a case study, we first describe a structural break model to correctly detect the date of a mass layoff and estimate its size. We then use a Bayesian classification model to identify affected individuals by observing changes in calling behaviour following the plant's closure. For these affected individuals, we observe significant declines in social behaviour and mobility following job loss. Using the features identified at the micro level, we show that the same changes in these calling behaviours, aggregated at the regional level, can improve forecasts of macro unemployment rates. These methods and results highlight the promise of new data resources to measure microeconomic behaviour and improve estimates of critical economic indicators. PMID:26018965
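
    A hedged sketch of the structural-break idea (the paper's model is richer than this): pick the break date that minimises the residual sum of squares of a two-segment constant-mean fit to a daily call-volume series. The synthetic series and the day-60 drop are illustrative.

      # Simple mean-shift structural break search on a daily call-volume series.
      import numpy as np

      def find_break(y, min_seg=5):
          best_t, best_sse = None, np.inf
          for t in range(min_seg, len(y) - min_seg):
              sse = ((y[:t] - y[:t].mean())**2).sum() + ((y[t:] - y[t:].mean())**2).sum()
              if sse < best_sse:
                  best_t, best_sse = t, sse
          return best_t

      rng = np.random.default_rng(2)
      calls = np.concatenate([rng.normal(100, 5, 60), rng.normal(70, 5, 60)])  # drop at day 60
      print("estimated break day:", find_break(calls))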

  7. Near Infrared Spectroscopy Facilitates Rapid Identification of Both Young and Mature Amazonian Tree Species.

    PubMed

    Lang, Carla; Costa, Flávia Regina Capellotto; Camargo, José Luís Campana; Durgante, Flávia Machado; Vicentini, Alberto

    2015-01-01

    Precise identification of plant species requires a high level of knowledge by taxonomists and the presence of reproductive material. This represents a major limitation for those working with seedlings and juveniles, which differ morphologically from adults and do not bear reproductive structures. Near-infrared spectroscopy (FT-NIR) has previously been shown to be effective in species discrimination of adult plants, so if young and adult plants have similar spectral signatures, discriminant functions based on FT-NIR spectra of adults can be used to identify leaves from young plants. We tested this with a sample of 419 plants in 13 Amazonian species from the genera Protium and Crepidospermum (Burseraceae). We obtained 12 spectral readings per plant, from adaxial and abaxial surfaces of dried leaves, and compared the rate of correct predictions of species with discriminant functions for different combinations of readings. We showed that the best models for predicting species in early developmental stages are those containing spectral data from both young and adult plants (98% correct predictions of external samples), but even using only adult spectra it is still possible to attain good levels of identification of young plants. We obtained an average of 75% correct identifications of young plants by discriminant equations based only on adults, when the most informative wavelengths were selected. Most species were accurately predicted (75-100% correct identifications), and only three had poor predictions (27-60%). These results were obtained despite the fact that spectra of young individuals were distinct from those of adults when species were analyzed individually. We concluded that FT-NIR has a high potential in the identification of species even at different ontogenetic stages, and that young plants can be identified based on spectra of adults with reasonable confidence.
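
    A small sketch of spectra-based species discrimination with linear discriminant analysis, standing in for the discriminant functions used in the study; the synthetic "spectra", class separation, and sample sizes below are placeholders for FT-NIR readings.

      # Species discrimination from spectra with LDA (illustrative data only).
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(3)
      n_species, per_species, n_wavelengths = 13, 30, 200
      X = np.vstack([rng.normal(loc=i * 0.1, scale=1.0, size=(per_species, n_wavelengths))
                     for i in range(n_species)])
      y = np.repeat(np.arange(n_species), per_species)

      X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
      lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
      print("fraction of correct predictions on held-out samples:", lda.score(X_test, y_test))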

  8. Ocean bottom pressure observations near the source of the 2011 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Inazu, D.; Hino, R.; Suzuki, S.; Osada, Y.; Ohta, Y.; Iinuma, T.; Tsushima, H.; Ito, Y.; Kido, M.; Fujimoto, H.

    2011-12-01

    A Mw9.0 earthquake occurred off Miyagi, northeast Japan, on 11 March 2011 (hereafter mainshock). An earthquake of M7.3, considered to be the largest foreshock of the mainshock, occurred on 9 March 2011 near the mainshock hypocenter. A suite of seismic and geodetic variations related to these earthquakes was observed by autonomous, ocean bottom pressure (OBP) gauges at multiple sites (4 sites at present) near the sources within a distance of about 100 km. This paper presents the OBP records with a focus on the earthquakes. After correcting for tides, instrumental drifts, and non-tidal oceanic variations, we can detect OBP signals of tsunamis and vertical seafloor deformation of the order of centimeters with timescales of less than months. In the following we review the detected signals and how to correct the OBP data. The coseismic seafloor displacement and the tsunami accompanied by the mainshock were of the order of meters and large enough to be distinctly identified (Ito et al., 2011, GRL). Co- and post-seismic seafloor displacement and tsunami accompanied by the foreshock were of the order of centimeters, which is difficult to identify from the raw OBP records. The first evident pulses of these tsunamis in the deep sea have durations (periods) of ~20 minutes and ~10 minutes, for the mainshock and the foreshock, respectively. Amounts of seafloor vertical displacement due to post-mainshock deformation reached a few tens of centimeters in two months. It is worth noting that elevation and depression of seafloor were detected at rates of a couple of centimeters in a day after the largest foreshock. The seafloor displacement of centimeters between the largest foreshock and the mainshock can be reasonably identified after correcting non-tidal oceanic variations. The oceanic variations are simulated by a barotropic ocean model driven by atmospheric disturbances (Inazu et al., 2011, Ann. Rep. Earth Simulator Center 2011). The model enables residual OBP time series of non-tidal oceanic variations off Miyagi to be reduced by less than 2 cm. In order to accurately detect signals of centimeters, detiding should be done carefully by analyzing in-situ data rather than by using existing ocean tide models such as NAO.99Jb and FES2004. A BAYTAP-G program was used in the present study. Instrumental drifts are modeled by a widely used linear and exponential form (Watts and Kontoyiannis, 1990, J. Atmos. Oceanic Tech.). Seismological interpretations of the detected OBP signals of the seafloor displacement and the tsunamis will be demonstrated in separate papers presented at this meeting.
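
    A sketch of the linear-plus-exponential instrumental drift model mentioned above, fitted by nonlinear least squares; the synthetic pressure record, parameter values, and units below are illustrative only, not the actual OBP data.

      # Fit and remove a linear + exponential instrumental drift from a pressure record.
      import numpy as np
      from scipy.optimize import curve_fit

      def drift(t, a, b, c, tau):
          return a + b * t + c * np.exp(-t / tau)

      t = np.arange(0.0, 120.0)                      # days since deployment
      rng = np.random.default_rng(4)
      pressure = drift(t, 0.0, 0.01, 5.0, 20.0) + rng.normal(0, 0.2, t.size)  # synthetic record [hPa]

      popt, _ = curve_fit(drift, t, pressure, p0=[0.0, 0.0, 1.0, 10.0])
      residual = pressure - drift(t, *popt)          # drift-corrected record
      print("fitted drift parameters:", popt)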

  9. Client Accounts of Corrective Experiences in Psychotherapy: Implications for Clinical Practice.

    PubMed

    Angus, Lynne; Constantino, Michael J

    2017-02-01

    The Patient Perceptions of Corrective Experiences in Individual Therapy (PPCEIT; Constantino, Angus, Friedlander, Messer, & Moertl, 2011) posttreatment interview guide was developed to provide clinical researchers with an effective mode of inquiry to identify and further explore clients' firsthand accounts of corrective and transformative therapy experiences and their determinants. Not only do findings from the analysis of client corrective experience (CE) accounts help identify what and how CEs happen in or as a result of psychotherapy, but the measure itself may also provide therapists with an effective tool to further enhance clients' awareness, understanding, and integration of transformative change experiences. Accordingly, we discuss in this afterword to the series the implications for clinical practice arising from (a) the thematic analysis of client CE accounts, drawn from a range of clinical samples and international research programs and (b) the clinical effect of completing the PPCEIT posttreatment interview inquiry. We also identify directions for future clinical training and research. © 2016 Wiley Periodicals, Inc.

  10. Use of market segmentation to identify untapped consumer needs in vision correction surgery for future growth.

    PubMed

    Loarie, Thomas M; Applegate, David; Kuenne, Christopher B; Choi, Lawrence J; Horowitz, Diane P

    2003-01-01

    Market segmentation analysis identifies discrete segments of the population whose beliefs are consistent with exhibited behaviors such as purchase choice. This study applies market segmentation analysis to low myopes (-1 to -3 D with less than 1 D cylinder) in their consideration and choice of a refractive surgery procedure to discover opportunities within the market. A quantitative survey based on focus group research was sent to a demographically balanced sample of myopes using contact lenses and/or glasses. A variable reduction process followed by a clustering analysis was used to discover discrete belief-based segments. The resulting segments were validated both analytically and through in-market testing. Discontented individuals who wear contact lenses are the primary target for vision correction surgery. However, 81% of the target group is apprehensive about laser in situ keratomileusis (LASIK). They are nervous about the procedure and strongly desire reversibility and exchangeability. There exists a large untapped opportunity for vision correction surgery within the low myope population. Market segmentation analysis helped determine how to best meet this opportunity through repositioning existing procedures or developing new vision correction technology, and could also be applied to identify opportunities in other vision correction populations.
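
    A hedged sketch of a belief-based segmentation workflow of the kind described: reduce correlated survey items to a few components, then cluster respondents. The survey responses, item count, component number, and cluster number below are hypothetical, not the study's data or method settings.

      # Variable reduction followed by clustering into belief-based segments (sketch).
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.preprocessing import StandardScaler
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(5)
      responses = rng.integers(1, 8, size=(500, 40)).astype(float)   # 500 respondents, 40 belief items

      z = StandardScaler().fit_transform(responses)
      components = PCA(n_components=8, random_state=0).fit_transform(z)   # variable reduction
      segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(components)

      print(np.bincount(segments))   # respondents per belief-based segment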

  11. 15 CFR 30.9 - Transmitting and correcting Electronic Export Information.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... in the AES and transmitting any changes to that information as soon as they are known. Corrections, cancellations, or amendments to that information shall be electronically identified and transmitted to the AES... authorized agent has received an error message from AES, the corrections shall take place as required. Fatal...

  12. 15 CFR 30.9 - Transmitting and correcting Electronic Export Information.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... in the AES and transmitting any changes to that information as soon as they are known. Corrections, cancellations, or amendments to that information shall be electronically identified and transmitted to the AES... authorized agent has received an error message from AES, the corrections shall take place as required. Fatal...

  13. 15 CFR 30.9 - Transmitting and correcting Electronic Export Information.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... in the AES and transmitting any changes to that information as soon as they are known. Corrections, cancellations, or amendments to that information shall be electronically identified and transmitted to the AES... authorized agent has received an error message from AES, the corrections shall take place as required. Fatal...

  14. 15 CFR 30.9 - Transmitting and correcting Electronic Export Information.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... in the AES and transmitting any changes to that information as soon as they are known. Corrections, cancellations, or amendments to that information shall be electronically identified and transmitted to the AES... authorized agent has received an error message from AES, the corrections shall take place as required. Fatal...

  15. Calibration and prediction of removal function in magnetorheological finishing.

    PubMed

    Dai, Yifan; Song, Ci; Peng, Xiaoqiang; Shi, Feng

    2010-01-20

    A calibrated and predictive model of the removal function has been established based on the analysis of a magnetorheological finishing (MRF) process. By introducing an efficiency coefficient of the removal function, the model can be used to calibrate the removal function in an MRF figuring process and to accurately predict the removal function of a workpiece to be polished whose material is different from that of the spot part. Its correctness and feasibility have been validated by simulations. Furthermore, applying this model to the MRF figuring experiments, the efficiency coefficient of the removal function can be identified accurately to make the MRF figuring process deterministic and controllable. Therefore, all the results indicate that the calibrated and predictive model of the removal function can improve the finishing determinacy and increase the model applicability in an MRF process.

  16. An evaluation of three-dimensional photogrammetric and morphometric techniques for estimating volume and mass in Weddell seals Leptonychotes weddellii

    PubMed Central

    Ruscher-Hill, Brandi; Kirkham, Amy L.; Burns, Jennifer M.

    2018-01-01

    Body mass dynamics of animals can indicate critical associations between extrinsic factors and population vital rates. Photogrammetry can be used to estimate mass of individuals in species whose life histories make it logistically difficult to obtain direct body mass measurements. Such studies typically use equations to relate volume estimates from photogrammetry to mass; however, most fail to identify the sources of error between the estimated and actual mass. Our objective was to identify the sources of error that prevent photogrammetric mass estimation from directly predicting actual mass, and develop a methodology to correct this issue. To do this, we obtained mass, body measurements, and scaled photos for 56 sedated Weddell seals (Leptonychotes weddellii). After creating a three-dimensional silhouette in the image processing program PhotoModeler Pro, we used horizontal scale bars to define the ground plane, then removed the below-ground portion of the animal’s estimated silhouette. We then re-calculated body volume and applied an expected density to estimate animal mass. We compared the body mass estimates derived from this silhouette slice method with estimates derived from two other published methodologies: body mass calculated using photogrammetry coupled with a species-specific correction factor, and estimates using elliptical cones and measured tissue densities. The estimated mass values (mean ± standard deviation 345±71 kg for correction equation, 346±75 kg for silhouette slice, 343±76 kg for cones) were not statistically distinguishable from each other or from actual mass (346±73 kg) (ANOVA with Tukey HSD post-hoc, p>0.05 for all pairwise comparisons). We conclude that volume overestimates from photogrammetry are likely due to the inability of photo modeling software to properly render the ventral surface of the animal where it contacts the ground. Due to logistical differences between the “correction equation”, “silhouette slicing”, and “cones” approaches, researchers may find one technique more useful for certain study programs. In combination or exclusively, these three-dimensional mass estimation techniques have great utility in field studies with repeated measures sampling designs or where logistic constraints preclude weighing animals. PMID:29320573

  17. The Application of Social Characteristic and L1 Optimization in the Error Correction for Network Coding in Wireless Sensor Networks

    PubMed Central

    Zhang, Guangzhi; Cai, Shaobin; Xiong, Naixue

    2018-01-01

    One of the remarkable challenges in Wireless Sensor Networks (WSN) is how to transfer the collected data efficiently given the energy limitation of sensor nodes. Network coding can increase the network throughput of a WSN dramatically due to the broadcast nature of WSN. However, network coding usually propagates a single original error over the whole network. Due to this special property of error propagation in network coding, most error correction methods cannot correct more than C/2 corrupted errors, where C is the max-flow min-cut of the network. To maximize the effectiveness of network coding applied in WSN, a new error-correcting mechanism to confront the propagated error is urgently needed. Based on the social network characteristic inherent in WSN and L1 optimization, we propose a novel scheme which successfully corrects more than C/2 corrupted errors. What is more, even if errors occur on all the links of the network, our scheme can still correct them successfully. By introducing a secret channel and a specially designed matrix that can trap some errors, we improve John and Yi's model so that it can correct the propagated errors in network coding, which usually pollute exactly 100% of the received messages. Taking advantage of the social characteristic inherent in WSN, we propose a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify the informative upstream sensor nodes. Drawing on social network theory, the informative relay nodes are selected and marked with a high trust value. The two methods, L1 optimization and use of the social characteristic, coordinate with each other and can correct propagated errors even when the corrupted fraction is exactly 100% in a WSN where network coding is performed. The effectiveness of the error correction scheme is validated through simulation experiments. PMID:29401668

  18. The Application of Social Characteristic and L1 Optimization in the Error Correction for Network Coding in Wireless Sensor Networks.

    PubMed

    Zhang, Guangzhi; Cai, Shaobin; Xiong, Naixue

    2018-02-03

    One of the remarkable challenges in Wireless Sensor Networks (WSN) is how to transfer the collected data efficiently given the energy limitation of sensor nodes. Network coding can increase the network throughput of a WSN dramatically due to the broadcast nature of WSN. However, network coding usually propagates a single original error over the whole network. Due to this special property of error propagation in network coding, most error correction methods cannot correct more than C/2 corrupted errors, where C is the max-flow min-cut of the network. To maximize the effectiveness of network coding applied in WSN, a new error-correcting mechanism to confront the propagated error is urgently needed. Based on the social network characteristic inherent in WSN and L1 optimization, we propose a novel scheme which successfully corrects more than C/2 corrupted errors. What is more, even if errors occur on all the links of the network, our scheme can still correct them successfully. By introducing a secret channel and a specially designed matrix that can trap some errors, we improve John and Yi's model so that it can correct the propagated errors in network coding, which usually pollute exactly 100% of the received messages. Taking advantage of the social characteristic inherent in WSN, we propose a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify the informative upstream sensor nodes. Drawing on social network theory, the informative relay nodes are selected and marked with a high trust value. The two methods, L1 optimization and use of the social characteristic, coordinate with each other and can correct propagated errors even when the corrupted fraction is exactly 100% in a WSN where network coding is performed. The effectiveness of the error correction scheme is validated through simulation experiments.
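
    As a generic illustration of the L1-optimization idea in these two records (a sketch only, without the authors' secret channel, trap matrix, or trust mechanism): a sparse error vector e is recovered from y = G m + e by minimising its L1 norm subject to the measurement equations, posed as a linear program. The coding matrix, sizes, and error pattern below are made up.

      # Generic L1-minimisation error correction via linear programming (sketch).
      import numpy as np
      from scipy.optimize import linprog

      rng = np.random.default_rng(6)
      n_msg, n_recv = 5, 40
      G = rng.normal(size=(n_recv, n_msg))            # coding matrix
      m_true = rng.normal(size=n_msg)
      e_true = np.zeros(n_recv)
      e_true[rng.choice(n_recv, size=6, replace=False)] = rng.normal(5, 1, 6)  # corrupted links
      y = G @ m_true + e_true

      # Variables: [m, e_plus, e_minus] with e = e_plus - e_minus; minimise sum(e_plus + e_minus).
      c = np.concatenate([np.zeros(n_msg), np.ones(2 * n_recv)])
      A_eq = np.hstack([G, np.eye(n_recv), -np.eye(n_recv)])
      bounds = [(None, None)] * n_msg + [(0, None)] * (2 * n_recv)
      res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds)
      m_hat = res.x[:n_msg]
      print("max recovery error:", np.abs(m_hat - m_true).max())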

  19. Terahertz Technology and Molecular Interactions

    DTIC Science & Technology

    2010-12-16

    numerical identification algorithm, based on a simple threshold model, showed that the probability for false alarm (PFA) for the least favorable of... Briefly put, Phase I of MACS was to develop in 18 months a sensor system in a 1 cu ft volume that could correctly identify with a PFA < 10-4 gases in a... observe spectral lines that have fractional absorptions of 10-7 there are six orders of magnitude in sensitivity at stake. If spectral lines have

  20. Crosstalk in solar polarization measurements

    NASA Technical Reports Server (NTRS)

    West, E. A.; Balasubramaniam, K. S.

    1992-01-01

    The instrumental crosstalk associated with the Marshall Space Flight Center Vector Magnetograph and the solar crosstalk created by the magnetic field are described and their impact on the reconstruction of the solar vector magnetic field is analyzed. It is pointed out that identifying and correcting the crosstalk is important in the development of realistic models describing the solar atmosphere. Solar crosstalk is spatially dependent on the structure of the magnetic field while instrumental crosstalk is dependent on the position of the analyzer.

  1. Identifying potential effects of climate change on the development of water resources in Pinios River Basin, Central Greece

    NASA Astrophysics Data System (ADS)

    Arampatzis, G.; Panagopoulos, A.; Pisinaras, V.; Tziritis, E.; Wendland, F.

    2018-05-01

    The aim of the present study is to assess the future spatial and temporal distribution of precipitation and temperature, and relate the corresponding change to water resources' quantitative status in Pinios River Basin (PRB), Thessaly, Greece. For this purpose, data from four Regional Climate Models (RCMs) for the period 2021-2100 driven by several General Circulation Models (GCMs) were collected, and bias correction was performed based on the linear scaling method. The bias correction was based on monthly precipitation and temperature data collected for the period 1981-2000 from 57 meteorological stations in total. The results indicate a general trend according to which precipitation is decreasing whilst temperature is increasing to an extent that varies depending on each particular RCM-GCM output. On average, the annual precipitation change for the period 2021-2100 was about -80 mm, ranging between -149 and +35 mm, while the corresponding change for temperature was 2.81 °C, ranging between 1.48 and 3.72 °C. The investigation of potential impacts on the water resources demonstrates that water availability is expected to be significantly decreased in the already water-stressed PRB. The water stresses identified are related to the potential decreasing trend in groundwater recharge and the increasing trend in irrigation demand, which constitutes the major water consumer in PRB.

  2. Nomograms Predicting Progression-Free Survival, Overall Survival, and Pelvic Recurrence in Locally Advanced Cervical Cancer Developed From an Analysis of Identifiable Prognostic Factors in Patients From NRG Oncology/Gynecologic Oncology Group Randomized Trials of Chemoradiotherapy

    PubMed Central

    Rose, Peter G.; Java, James; Whitney, Charles W.; Stehman, Frederick B.; Lanciano, Rachelle; Thomas, Gillian M.; DiSilvestro, Paul A.

    2015-01-01

    Purpose To evaluate the prognostic factors in locally advanced cervical cancer limited to the pelvis and develop nomograms for 2-year progression-free survival (PFS), 5-year overall survival (OS), and pelvic recurrence. Patients and Methods We retrospectively reviewed 2,042 patients with locally advanced cervical carcinoma enrolled onto Gynecologic Oncology Group clinical trials of concurrent cisplatin-based chemotherapy and radiotherapy. Nomograms for 2-year PFS, 5-year OS, and pelvic recurrence were created as visualizations of Cox proportional hazards regression models. The models were validated by bootstrap-corrected, relatively unbiased estimates of discrimination and calibration. Results Multivariable analysis identified prognostic factors including histology, race/ethnicity, performance status, tumor size, International Federation of Gynecology and Obstetrics stage, tumor grade, pelvic node status, and treatment with concurrent cisplatin-based chemotherapy. PFS, OS, and pelvic recurrence nomograms had bootstrap-corrected concordance indices of 0.62, 0.64, and 0.73, respectively, and were well calibrated. Conclusion Prognostic factors were used to develop nomograms for 2-year PFS, 5-year OS, and pelvic recurrence for locally advanced cervical cancer clinically limited to the pelvis treated with concurrent cisplatin-based chemotherapy and radiotherapy. These nomograms can be used to better estimate individual and collective outcomes. PMID:25732170

  3. Accounting protesting and warm glow bidding in Contingent Valuation surveys considering the management of environmental goods--an empirical case study assessing the value of protecting a Natura 2000 wetland area in Greece.

    PubMed

    Grammatikopoulou, Ioanna; Olsen, Søren Bøye

    2013-11-30

    Based on a Contingent Valuation survey aiming to reveal the willingness to pay (WTP) for conservation of a wetland area in Greece, we show how protest and warm glow motives can be taken into account when modeling WTP. In a sample of more than 300 respondents, we find that 54% of the positive bids are rooted to some extent in warm glow reasoning while 29% of the zero bids can be classified as expressions of protest rather than preferences. In previous studies, warm glow bidders are only rarely identified while protesters are typically identified and excluded from further analysis. We test for selection bias associated with simple removal of both protesters and warm glow bidders in our data. Our findings show that removal of warm glow bidders does not significantly distort WTP whereas we find strong evidence of selection bias associated with removal of protesters. We show how to correct for such selection bias by using a sample selection model. In our empirical sample, using the typical approach of removing protesters from the analysis, the value of protecting the wetland is significantly underestimated by as much as 46% unless correcting for selection bias. Copyright © 2013 Elsevier Ltd. All rights reserved.
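
    A hedged sketch of a two-step (Heckman-type) sample selection correction of the kind used to test for bias from simply removing protest bids: a probit model of who states a valid (non-protest) bid, whose inverse Mills ratio then enters the WTP regression. The data, covariates (e.g. a "trust" attitude variable), and coefficients below are synthetic placeholders, not the study's model.

      # Two-step sample selection correction (illustrative, synthetic data).
      import numpy as np
      import statsmodels.api as sm
      from scipy.stats import norm

      rng = np.random.default_rng(7)
      n = 300
      income = rng.normal(size=n)
      trust = rng.normal(size=n)                           # hypothetical attitude covariate
      u = rng.normal(size=n)                               # shared unobservable -> selection bias
      valid = (0.5 + 1.0 * trust + u > 0).astype(int)      # 1 = non-protest response
      wtp = 20 + 5 * income + 3 * u + rng.normal(size=n)   # WTP, observed only if valid

      # Step 1: probit for selection, then the inverse Mills ratio (IMR).
      Z = sm.add_constant(np.column_stack([income, trust]))
      probit = sm.Probit(valid, Z).fit(disp=0)
      xb = Z @ probit.params
      imr = norm.pdf(xb) / norm.cdf(xb)

      # Step 2: WTP regression on the selected sample, including the IMR term.
      sel = valid == 1
      X = sm.add_constant(np.column_stack([income[sel], imr[sel]]))
      ols = sm.OLS(wtp[sel], X).fit()
      print(ols.params)    # a sizeable IMR coefficient signals selection bias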

  4. Impacts of correcting the inter-variable correlation of climate model outputs on hydrological modeling

    NASA Astrophysics Data System (ADS)

    Chen, Jie; Li, Chao; Brissette, François P.; Chen, Hua; Wang, Mingna; Essou, Gilles R. C.

    2018-05-01

    Bias correction is usually implemented prior to using climate model outputs for impact studies. However, bias correction methods that are commonly used treat climate variables independently and often ignore inter-variable dependencies. The effects of ignoring such dependencies on impact studies need to be investigated. This study aims to assess the impacts of correcting the inter-variable correlation of climate model outputs on hydrological modeling. To this end, a joint bias correction (JBC) method which corrects the joint distribution of two variables as a whole is compared with an independent bias correction (IBC) method; this is considered in terms of correcting simulations of precipitation and temperature from 26 climate models for hydrological modeling over 12 watersheds located in various climate regimes. The results show that the simulated precipitation and temperature are considerably biased not only in the individual distributions, but also in their correlations, which in turn result in biased hydrological simulations. In addition to reducing the biases of the individual characteristics of precipitation and temperature, the JBC method can also reduce the bias in precipitation-temperature (P-T) correlations. In terms of hydrological modeling, the JBC method performs significantly better than the IBC method for 11 out of the 12 watersheds over the calibration period. For the validation period, the advantages of the JBC method are greatly reduced as the performance becomes dependent on the watershed, GCM and hydrological metric considered. For arid/tropical and snowfall-rainfall-mixed watersheds, JBC performs better than IBC. For snowfall- or rainfall-dominated watersheds, however, the two methods behave similarly, with IBC performing somewhat better than JBC. Overall, the results emphasize the advantages of correcting the P-T correlation when using climate model-simulated precipitation and temperature to assess the impact of climate change on watershed hydrology. However, a thorough validation and a comparison with other methods are recommended before using the JBC method, since it may perform worse than the IBC method for some cases due to bias nonstationarity of climate model outputs.

  5. NTCP modelling of lung toxicity after SBRT comparing the universal survival curve and the linear quadratic model for fractionation correction.

    PubMed

    Wennberg, Berit M; Baumann, Pia; Gagliardi, Giovanna; Nyman, Jan; Drugge, Ninni; Hoyer, Morten; Traberg, Anders; Nilsson, Kristina; Morhed, Elisabeth; Ekberg, Lars; Wittgren, Lena; Lund, Jo-Åsmund; Levin, Nina; Sederholm, Christer; Lewensohn, Rolf; Lax, Ingmar

    2011-05-01

    In SBRT of lung tumours, no established relationship between dose-volume parameters and the incidence of lung toxicity has been found. The aim of this study is to compare the LQ model and the universal survival curve (USC) for calculating biologically equivalent doses in SBRT, to see if this will improve knowledge of this relationship. Toxicity data on radiation pneumonitis grade 2 or more (RP2+) from 57 patients were used; 10.5% were diagnosed with RP2+. The lung DVHs were corrected for fractionation (LQ and USC) and analysed with the Lyman-Kutcher-Burman (LKB) model. In the LQ correction α/β = 3 Gy was used, and the USC parameters used were: α/β = 3 Gy, D(0) = 1.0 Gy, [Formula: see text] = 10, α = 0.206 Gy(-1) and d(T) = 5.8 Gy. In order to understand the relative contribution of different dose levels to the calculated NTCP, the concept of fractional NTCP was used. This might give insight into the question of whether "high doses to small volumes" or "low doses to large volumes" are most important for lung toxicity. NTCP analysis with the LKB model using parameters m = 0.4 and D(50) = 30 Gy yielded a volume-dependence parameter (n) of 0.87 with LQ correction and 0.71 with USC correction. Using parameters m = 0.3 and D(50) = 20 Gy, n = 0.93 with LQ correction and n = 0.83 with USC correction. In SBRT of lung tumours, NTCP modelling of lung toxicity comparing models (LQ, USC) for fractionation correction shows that low doses contribute less and high doses more to the NTCP when the USC model is used. Comparing NTCP modelling of SBRT data with data from breast cancer, lung cancer and whole-lung irradiation implies that the response of the lung is treatment specific. More data are, however, needed for more reliable modelling.
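
    A worked sketch of the LQ-based fractionation correction referred to above: a physical DVH dose delivered in N fractions is converted to the equivalent dose in 2-Gy fractions (EQD2) with α/β = 3 Gy as in the study. The USC correction is not implemented here; it follows a linear survival slope above the transition dose d(T) = 5.8 Gy, which lowers the weight of the highest dose bins. The example dose of 45 Gy in 3 fractions is illustrative.

      # LQ-based equivalent dose in 2-Gy fractions (EQD2 = D * (d + a/b) / (2 + a/b)).
      def eqd2(total_dose, n_fractions, alpha_beta=3.0):
          d = total_dose / n_fractions                 # dose per fraction [Gy]
          return total_dose * (d + alpha_beta) / (2.0 + alpha_beta)

      # Example: a DVH bin receiving 45 Gy in 3 fractions (15 Gy per fraction):
      # 45 * (15 + 3) / (2 + 3) = 162 Gy (EQD2)
      print(eqd2(45.0, 3))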

  6. HESS Opinions "Should we apply bias correction to global and regional climate model data?"

    NASA Astrophysics Data System (ADS)

    Ehret, U.; Zehe, E.; Wulfmeyer, V.; Warrach-Sagi, K.; Liebert, J.

    2012-04-01

    Despite considerable progress in recent years, output of both Global and Regional Circulation Models is still afflicted with biases to a degree that precludes its direct use, especially in climate change impact studies. This is well known, and to overcome this problem bias correction (BC), i.e. the correction of model output towards observations in a post-processing step for its subsequent application in climate change impact studies, has now become a standard procedure. In this paper we argue that bias correction, which has a considerable influence on the results of impact studies, is not a valid procedure in the way it is currently used: it impairs the advantages of Circulation Models which are based on established physical laws by altering spatiotemporal field consistency, relations among variables and by violating conservation principles. Bias correction largely neglects feedback mechanisms and it is unclear whether bias correction methods are time-invariant under climate change conditions. Applying bias correction increases agreement of Climate Model output with observations in hindcasts and hence narrows the uncertainty range of simulations and predictions without, however, providing a satisfactory physical justification. This is in most cases not transparent to the end user. We argue that this masks rather than reduces uncertainty, which may lead to avoidable forejudging of end users and decision makers. We present here a brief overview of state-of-the-art bias correction methods, discuss the related assumptions and implications, draw conclusions on the validity of bias correction and propose ways to cope with biased output of Circulation Models in the short term and how to reduce the bias in the long term. The most promising strategy for improved future Global and Regional Circulation Model simulations is the increase in model resolution to the convection-permitting scale in combination with ensemble predictions based on sophisticated approaches for ensemble perturbation. With this article, we advocate communicating the entire uncertainty range associated with climate change predictions openly and hope to stimulate a lively discussion on bias correction among the atmospheric and hydrological community and end users of climate change impact studies.

  7. Modeling functional neuroanatomy for an anatomy information system.

    PubMed

    Niggemann, Jörg M; Gebert, Andreas; Schulz, Stefan

    2008-01-01

    Existing neuroanatomical ontologies, databases and information systems, such as the Foundational Model of Anatomy (FMA), represent outgoing connections from brain structures, but cannot represent the "internal wiring" of structures and as such, cannot distinguish between different independent connections from the same structure. Thus, a fundamental aspect of Neuroanatomy, the functional pathways and functional systems of the brain such as the pupillary light reflex system, is not adequately represented. This article identifies underlying anatomical objects which are the source of independent connections (collections of neurons) and uses these as basic building blocks to construct a model of functional neuroanatomy and its functional pathways. The basic representational elements of the model are unnamed groups of neurons or groups of neuron segments. These groups, their relations to each other, and the relations to the objects of macroscopic anatomy are defined. The resulting model can be incorporated into the FMA. The capabilities of the presented model are compared to the FMA and the Brain Architecture Management System (BAMS). Internal wiring as well as functional pathways can correctly be represented and tracked. This model bridges the gap between representations of single neurons and their parts on the one hand and representations of spatial brain structures and areas on the other hand. It is capable of drawing correct inferences on pathways in a nervous system. The object and relation definitions are related to the Open Biomedical Ontology effort and its relation ontology, so that this model can be further developed into an ontology of neuronal functional systems.

  8. Retinal image contrast obtained by a model eye with combined correction of chromatic and spherical aberrations

    PubMed Central

    Ohnuma, Kazuhiko; Kayanuma, Hiroyuki; Lawu, Tjundewo; Negishi, Kazuno; Yamaguchi, Takefumi; Noda, Toru

    2011-01-01

    Correcting spherical and chromatic aberrations in vitro in human eyes provides substantial visual acuity and contrast sensitivity improvements. We found the same improvement in the retinal images using a model eye with/without correction of longitudinal chromatic aberrations (LCAs) and spherical aberrations (SAs). The model eye included an intraocular lens (IOL) and artificial cornea with human ocular LCAs and average human SAs. The optotypes were illuminated using a D65 light source, and the images were obtained using two-dimensional luminance colorimeter. The contrast improvement from the SA correction was higher than the LCA correction, indicating the benefit of an aspheric achromatic IOL. PMID:21698008

  9. Use of the Ames Check Standard Model for the Validation of Wall Interference Corrections

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Amaya, M.; Flach, R.

    2018-01-01

    The new check standard model of the NASA Ames 11-ft Transonic Wind Tunnel was chosen for a future validation of the facility's wall interference correction system. The chosen validation approach takes advantage of the fact that test conditions experienced by a large model in the slotted part of the tunnel's test section will change significantly if a subset of the slots is temporarily sealed. Therefore, the model's aerodynamic coefficients have to be recorded, corrected, and compared for two different test section configurations in order to perform the validation. Test section configurations with highly accurate Mach number and dynamic pressure calibrations were selected for the validation. First, the model is tested with all test section slots in open configuration while keeping the model's center of rotation on the tunnel centerline. In the next step, slots on the test section floor are sealed and the model is moved to a new center of rotation that is 33 inches below the tunnel centerline. Then, the original angle of attack sweeps are repeated. Afterwards, wall interference corrections are applied to both test data sets and response surface models of the resulting aerodynamic coefficients in interference-free flow are generated. Finally, the response surface models are used to predict the aerodynamic coefficients for a family of angles of attack while keeping dynamic pressure, Mach number, and Reynolds number constant. The validation is considered successful if the corrected aerodynamic coefficients obtained from the related response surface model pair show good agreement. Residual differences between the corrected coefficient sets will be analyzed as well because they are an indicator of the overall accuracy of the facility's wall interference correction process.

  10. Study on the influence of stochastic properties of correction terms on the reliability of instantaneous network RTK

    NASA Astrophysics Data System (ADS)

    Próchniewicz, Dominik

    2014-03-01

    The reliability of precision GNSS positioning primarily depends on correct carrier-phase ambiguity resolution. Optimal estimation and correct validation of the ambiguities necessitate a proper definition of the mathematical positioning model. Of particular importance in the model definition is accounting for atmospheric errors (ionospheric and tropospheric refraction) as well as orbital errors. The use of a network of reference stations in kinematic positioning, known as the Network-based Real-Time Kinematic (Network RTK) solution, facilitates the modeling of such errors and their incorporation, in the form of correction terms, into the functional description of the positioning model. Lowered accuracy of the corrections, especially during atmospheric disturbances, results in unaccounted biases, the so-called residual errors. Such errors can be taken into account in the Network RTK positioning model by incorporating the accuracy characteristics of the correction terms into the stochastic model of observations. In this paper we investigate the impact of expanding the stochastic model to include correction-term variances on the reliability of the model solution. In particular, the instantaneous solution, which utilizes only a single epoch of GPS observations, is analyzed. Owing to the low number of degrees of freedom, such a solution mode is very sensitive to an inappropriate mathematical model definition, so a high level of solution reliability is very difficult to achieve. Numerical tests performed for a test network located in a mountainous area during ionospheric disturbances allow the described method to be verified under poor measurement conditions. The results of the ambiguity resolution as well as the rover positioning accuracy show that the proposed method of stochastic modeling can increase the reliability of instantaneous Network RTK performance.
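
    A minimal sketch of the idea of expanding the observation stochastic model with correction-term variances: the weight of each corrected observation is formed from the observation noise variance plus the variance of its network correction, and fed into a weighted least-squares solution. All matrices and numbers below are illustrative, not a full Network RTK processor.

      # Weighted least squares with an expanded stochastic model (sketch).
      import numpy as np

      A = np.array([[1.0, 0.5],          # simplified design matrix (position parameters only)
                    [0.8, 1.2],
                    [1.1, 0.3],
                    [0.6, 0.9]])
      y = np.array([0.012, 0.021, 0.008, 0.015])             # corrected observations [m]

      sigma_obs2 = np.full(4, 0.003**2)                      # carrier-phase noise variance [m^2]
      sigma_corr2 = np.array([1.0, 4.0, 1.5, 9.0]) * 1e-6    # correction-term variances [m^2]

      W = np.diag(1.0 / (sigma_obs2 + sigma_corr2))          # expanded stochastic model
      x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)      # weighted least-squares solution
      print(x_hat)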

  11. Solution to the spectral filter problem of residual terrain modelling (RTM)

    NASA Astrophysics Data System (ADS)

    Rexer, Moritz; Hirt, Christian; Bucha, Blažej; Holmes, Simon

    2018-06-01

    In physical geodesy, the residual terrain modelling (RTM) technique is frequently used for high-frequency gravity forward modelling. In the RTM technique, a detailed elevation model is high-pass-filtered in the topography domain, which is not equivalent to filtering in the gravity domain. This in-equivalence, denoted as spectral filter problem of the RTM technique, gives rise to two imperfections (errors). The first imperfection is unwanted low-frequency (LF) gravity signals, and the second imperfection is missing high-frequency (HF) signals in the forward-modelled RTM gravity signal. This paper presents new solutions to the RTM spectral filter problem. Our solutions are based on explicit modelling of the two imperfections via corrections. The HF correction is computed using spectral domain gravity forward modelling that delivers the HF gravity signal generated by the long-wavelength RTM reference topography. The LF correction is obtained from pre-computed global RTM gravity grids that are low-pass-filtered using surface or solid spherical harmonics. A numerical case study reveals maximum absolute signal strengths of ˜ 44 mGal (0.5 mGal RMS) for the HF correction and ˜ 33 mGal (0.6 mGal RMS) for the LF correction w.r.t. a degree-2160 reference topography within the data coverage of the SRTM topography model (56°S ≤ φ ≤ 60°N). Application of the LF and HF corrections to pre-computed global gravity models (here the GGMplus gravity maps) demonstrates the efficiency of the new corrections over topographically rugged terrain. Over Switzerland, consideration of the HF and LF corrections reduced the RMS of the residuals between GGMplus and ground-truth gravity from 4.41 to 3.27 mGal, which translates into ˜ 26% improvement. Over a second test area (Canada), our corrections reduced the RMS of the residuals between GGMplus and ground-truth gravity from 5.65 to 5.30 mGal (˜ 6% improvement). Particularly over Switzerland, geophysical signals (associated, e.g. with valley fillings) were found to stand out more clearly in the RTM-reduced gravity measurements when the HF and LF correction are taken into account. In summary, the new RTM filter corrections can be easily computed and applied to improve the spectral filter characteristics of the popular RTM approach. Benefits are expected, e.g. in the context of the development of future ultra-high-resolution global gravity models, smoothing of observed gravity data in mountainous terrain and geophysical interpretations of RTM-reduced gravity measurements.

  12. Building a new predictor for multiple linear regression technique-based corrective maintenance turnaround time.

    PubMed

    Cruz, Antonio M; Barr, Cameron; Puñales-Pozo, Elsa

    2008-01-01

    This research's main goals were to build a predictor for a turnaround time (TAT) indicator for estimating its values, and to use a numerical clustering technique for finding possible causes of undesirable TAT values. The following stages were used: domain understanding, data characterisation and sample reduction, and insight characterisation. A multiple linear regression predictor of the TAT indicator and clustering techniques were used to improve corrective maintenance task efficiency in a clinical engineering department (CED). The indicator being studied was turnaround time (TAT). Multiple linear regression was used to build a predictive TAT model. The variables contributing to this model were clinical engineering department response time (CE(rt), 0.415 positive coefficient), stock service response time (Stock(rt), 0.734 positive coefficient), priority level (0.21 positive coefficient) and service time (0.06 positive coefficient). The regression process showed heavy reliance on Stock(rt), CE(rt) and priority, in that order. Clustering techniques revealed the main causes of high TAT values. This examination has provided a means for analysing current technical service quality and effectiveness. In doing so, it has demonstrated a process for identifying areas and methods of improvement and a model against which to analyse these methods' effectiveness.
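
    An illustrative use of the reported regression structure: a TAT predictor built from the published coefficients. The intercept and the time units are not given in the abstract, so the intercept below is a placeholder assumption, and the example inputs are hypothetical.

      # TAT predictor from the reported regression coefficients (illustrative only).
      def predict_tat(ce_rt, stock_rt, priority, service_time, intercept=0.0):
          """Estimate turnaround time from the four reported predictors."""
          return (intercept
                  + 0.415 * ce_rt        # clinical engineering dept. response time
                  + 0.734 * stock_rt     # stock service response time
                  + 0.21  * priority     # priority level
                  + 0.06  * service_time)

      # Example: hypothetical task with CE(rt)=2, Stock(rt)=5, priority=3, service time=4.
      print(predict_tat(ce_rt=2, stock_rt=5, priority=3, service_time=4))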

  13. Clinically Significant Thromboembolic Disease in Adult Spinal Deformity Surgery: Incidence and Risk Factors in 737 Patients.

    PubMed

    Kim, Han Jo; Iyer, Sravisht; Diebo, Basel G; Kelly, Michael P; Sciubba, Daniel; Schwab, Frank; Lafage, Virginie; Mundis, Gregory M; Shaffrey, Christopher I; Smith, Justin S; Hart, Robert; Burton, Douglas; Bess, Shay; Klineberg, Eric O

    2018-05-01

    Retrospective cohort study. Describe the rate and risk factors for venous thromboembolic events (VTEs; defined as deep venous thrombosis [DVT] and/or pulmonary embolism [PE]) in adult spinal deformity (ASD) surgery. ASD patients with VTE were identified in a prospective, multicenter database. Complications, revision, and mortality rate were examined. Patient demographics, operative details, and radiographic and clinical outcomes were compared with a non-VTE group. Multivariate binary regression model was used to identify predictors of VTE. A total of 737 patients were identified, 32 (4.3%) had VTE (DVT = 14; PE = 18). At baseline, VTE patients were less likely to be employed in jobs requiring physical labor (59.4% vs 79.7%, P < .01) and more likely to have osteoporosis (29% vs 15.1%, P = .037) and liver disease (6.5% vs 1.4%, P = .027). Patients with VTE had a larger preoperative sagittal vertical axis (SVA; 93 mm vs 55 mm, P < .01) and underwent larger SVA corrections. VTE was associated with a combined anterior/posterior approach (45% vs 25%, P = .028). VTE patients had a longer hospital stay (10 vs 7 days, P < .05) and higher mortality rate (6.3% vs 0.7%, P < .01). Multivariate analysis demonstrated osteoporosis, lack of physical labor, and increased SVA correction were independent predictors of VTE ( r 2 = .11, area under the curve = 0.74, P < .05). The incidence of VTE in ASD is 4.3% with a DVT rate of 1.9% and PE rate of 2.4%. Osteoporosis, lack of physical labor, and increased SVA correction were independent predictors of VTE. Patients with VTE had a higher mortality rate compared with non-VTE patients.

  14. Influences on women's decision making about intrauterine device use in Madagascar.

    PubMed

    Gottert, Ann; Jacquin, Karin; Rahaivondrafahitra, Bakoly; Moracco, Kathryn; Maman, Suzanne

    2015-04-01

    We explored influences on decision making about intrauterine device (IUD) use among women in the Women's Health Project (WHP), managed by Population Services International in Madagascar. We conducted six small group photonarrative discussions (n=18 individuals) and 12 individual in-depth interviews with women who were IUD users and nonusers. All participants had had contact with WHP counselors in three sites in Madagascar. Data analysis involved creating summaries of each transcript, coding in Atlas.ti and then synthesizing findings in a conceptual model. We identified three stages of women's decision making about IUD use, and specific forms of social support that seemed helpful at each stage. During the first stage, receiving correct information from a trusted source such as a counselor conveys IUD benefits and corrects misinformation, but lingering fears about the method often appeared to delay method adoption among interested women. During the second stage, hearing testimony from satisfied users and receiving ongoing emotional support appeared to help alleviate these fears. During the third stage, accompaniment by a counselor or peer seemed to help some women gain confidence to go to the clinic to receive the IUD. Identifying and supplying the types of social support women find helpful at different stages of the decision-making process could help program managers better respond to women's staged decision-making process about IUD use. This qualitative study suggests that women in Madagascar perceive multiple IUD benefits but also fear the method even after misinformation is corrected, leading to a staged decision-making process about IUD use. Programs should identify and supply the types of social support that women find helpful at each stage of decision making. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. New Swift UVOT data reduction tools and AGN variability studies

    NASA Astrophysics Data System (ADS)

    Gelbord, Jonathan; Edelson, Rick

    2017-08-01

    The efficient slewing and flexible scheduling of the Swift observatory have made it possible to conduct monitoring campaigns that are both intensive and prolonged, with multiple visits per day sustained over weeks and months. Recent Swift monitoring campaigns of a handful of AGN provide simultaneous optical, UV and X-ray light curves that can be used to measure variability and interband correlations on timescales from hours to months, providing new constraints for the structures within AGN and the relationships between them. However, the first of these campaigns, thrice-per-day observations of NGC 5548 through four months, revealed anomalous dropouts in the UVOT light curves (Edelson, Gelbord, et al. 2015). We identified the cause as localized regions of reduced detector sensitivity that are not corrected by standard processing. Properly interpreting the light curves required identifying and screening out the affected measurements. We are now using archival Swift data to better characterize these low sensitivity regions. Our immediate goal is to produce a more complete mapping of their locations so that affected measurements can be identified and screened before further analysis. Our longer-term goal is to build a more quantitative model of the effect in order to define a correction for measured fluxes, if possible, or at least to put limits on the impact upon any observation. We will combine data from numerous background stars in well-monitored fields in order to quantify the strength of the effect as a function of filter as well as location on the detector, and to test for other dependencies such as evolution over time or sensitivity to the count rate of the target. Our UVOT sensitivity maps and any correction tools will be provided to the community of Swift users.

  16. Statistical Downscaling and Bias Correction of Climate Model Outputs for Climate Change Impact Assessment in the U.S. Northeast

    NASA Technical Reports Server (NTRS)

    Ahmed, Kazi Farzan; Wang, Guiling; Silander, John; Wilson, Adam M.; Allen, Jenica M.; Horton, Radley; Anyah, Richard

    2013-01-01

    Statistical downscaling can be used to efficiently downscale a large number of General Circulation Model (GCM) outputs to a fine temporal and spatial scale. To facilitate regional impact assessments, this study statistically downscales (to 1/8deg spatial resolution) and corrects the bias of daily maximum and minimum temperature and daily precipitation data from six GCMs and four Regional Climate Models (RCMs) for the northeast United States (US) using the Statistical Downscaling and Bias Correction (SDBC) approach. Based on these downscaled data from multiple models, five extreme indices were analyzed for the future climate to quantify future changes of climate extremes. For a subset of models and indices, results based on raw and bias corrected model outputs for the present-day climate were compared with observations, which demonstrated that bias correction is important not only for GCM outputs, but also for RCM outputs. For future climate, bias correction led to a higher level of agreements among the models in predicting the magnitude and capturing the spatial pattern of the extreme climate indices. We found that the incorporation of dynamical downscaling as an intermediate step does not lead to considerable differences in the results of statistical downscaling for the study domain.
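
    As a rough illustration of the bias-correction step, the sketch below applies empirical quantile mapping to synthetic daily temperatures; it is a generic example of the technique, not the SDBC code used in the study, and all values are invented.

      import numpy as np

      def quantile_map(model_hist, obs_hist, model_future):
          """Map model values onto the observed distribution by matching empirical quantiles."""
          quantiles = np.linspace(0.0, 1.0, 101)
          model_q = np.quantile(model_hist, quantiles)
          obs_q = np.quantile(obs_hist, quantiles)
          # For each future value, find where it falls in the historical model climate,
          # then read off the observed value at that same quantile.
          return np.interp(model_future, model_q, obs_q)

      # Illustrative daily maximum temperatures (degrees C), synthetic data.
      rng = np.random.default_rng(1)
      obs_hist = rng.normal(25.0, 5.0, 3000)
      model_hist = rng.normal(27.0, 6.0, 3000)      # model runs warm and too variable
      model_future = rng.normal(29.5, 6.0, 3000)    # future projection from the same model

      corrected = quantile_map(model_hist, obs_hist, model_future)
      print("raw future mean:", model_future.mean().round(2),
            "bias-corrected mean:", corrected.mean().round(2))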

  17. Sandmeier model based topographic correction to lunar spectral profiler (SP) data from KAGUYA satellite.

    PubMed

    Chen, Sheng-Bo; Wang, Jing-Ran; Guo, Peng-Ju; Wang, Ming-Chang

    2014-09-01

    The Moon may be considered the frontier base for deep space exploration. Spectral analysis is one of the key techniques for determining lunar surface rock and mineral compositions, but the lunar topographic relief is more pronounced than that of the Earth, so topographic correction of lunar spectral data is necessary before the data are used to retrieve compositions. In the present paper, a lunar Sandmeier model is proposed that considers the radiance effect of the macro and ambient topographic relief, and a reflectance correction model is derived from it. The Spectral Profiler (SP) data from the KAGUYA satellite in the Sinus Iridum quadrangle are taken as an example, and digital elevation data from the Lunar Orbiter Laser Altimeter are used to calculate the slope, aspect, incidence and emergence angles, and terrain-viewing factor for the topographic correction. The lunar surface reflectance from the SP data was then corrected with the proposed model after the direct component of irradiance on a horizontal surface was derived. As a result, high reflectance on sun-facing slopes is decreased and low reflectance on slopes facing away from the sun is compensated. The statistical histogram of reflectance-corrected pixel values follows a Gaussian distribution. The model is therefore robust for correcting the lunar topographic effect and estimating lunar surface reflectance.
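
    For orientation, the sketch below shows only the direct (cosine) part of a slope-aspect topographic correction; the full Sandmeier-type model described above also accounts for diffuse irradiance and the terrain-viewing factor. All angles and the reflectance value are invented.

      import numpy as np

      def local_incidence_angle(slope, aspect, solar_zenith, solar_azimuth):
          """All angles in radians; returns the angle between the surface normal and the sun."""
          cos_i = (np.cos(solar_zenith) * np.cos(slope)
                   + np.sin(solar_zenith) * np.sin(slope) * np.cos(solar_azimuth - aspect))
          return np.arccos(np.clip(cos_i, -1.0, 1.0))

      def cosine_correction(reflectance, slope, aspect, solar_zenith, solar_azimuth):
          """Scale sun-facing slopes down and shaded slopes up relative to a horizontal surface."""
          i = local_incidence_angle(slope, aspect, solar_zenith, solar_azimuth)
          cos_i = np.clip(np.cos(i), 1e-3, None)    # avoid blow-up near grazing incidence
          return reflectance * np.cos(solar_zenith) / cos_i

      # Illustrative values: a 10-degree slope facing the sun.
      r_corr = cosine_correction(reflectance=0.12,
                                 slope=np.radians(10), aspect=np.radians(135),
                                 solar_zenith=np.radians(60), solar_azimuth=np.radians(135))
      print(round(float(r_corr), 4))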

  18. Reduction of wafer-edge overlay errors using advanced correction models, optimized for minimal metrology requirements

    NASA Astrophysics Data System (ADS)

    Kim, Min-Suk; Won, Hwa-Yeon; Jeong, Jong-Mun; Böcker, Paul; Vergaij-Huizer, Lydia; Kupers, Michiel; Jovanović, Milenko; Sochal, Inez; Ryan, Kevin; Sun, Kyu-Tae; Lim, Young-Wan; Byun, Jin-Moo; Kim, Gwang-Gon; Suh, Jung-Joon

    2016-03-01

    In order to optimize yield in DRAM semiconductor manufacturing for 2x nodes and beyond, the (processing induced) overlay fingerprint towards the edge of the wafer needs to be reduced. Traditionally, this is achieved by acquiring denser overlay metrology at the edge of the wafer, to feed field-by-field corrections. Although field-by-field corrections can be effective in reducing localized overlay errors, the requirement for dense metrology to determine the corrections can become a limiting factor due to a significant increase of metrology time and cost. In this study, a more cost-effective solution has been found in extending the regular correction model with an edge-specific component. This new overlay correction model can be driven by an optimized, sparser sampling especially at the wafer edge area, and also allows for a reduction of noise propagation. Lithography correction potential has been maximized, with significantly less metrology needs. Evaluations have been performed, demonstrating the benefit of edge models in terms of on-product overlay performance, as well as cell based overlay performance based on metrology-to-cell matching improvements. Performance can be increased compared to POR modeling and sampling, which can contribute to (overlay based) yield improvement. Based on advanced modeling including edge components, metrology requirements have been optimized, enabling integrated metrology which drives down overall metrology fab footprint and lithography cycle time.

  19. Generalization of the normal-exponential model: exploration of a more accurate parametrisation for the signal distribution on Illumina BeadArrays.

    PubMed

    Plancade, Sandra; Rozenholc, Yves; Lund, Eiliv

    2012-12-11

    Illumina BeadArray technology includes nonspecific negative control features that allow a precise estimation of the background noise. As an alternative to the background subtraction proposed in BeadStudio, which leads to a substantial loss of information by generating negative values, a background correction method modeling the observed intensities as the sum of an exponentially distributed signal and normally distributed noise has been developed. Nevertheless, Wang and Ye (2012) display a kernel-based estimator of the signal distribution on Illumina BeadArrays and suggest that a gamma distribution would represent a better model of the signal density. Hence, the normal-exponential model may not be appropriate for Illumina data, and background corrections derived from this model may lead to incorrect estimates. We propose a more flexible model based on a gamma-distributed signal and a normally distributed background noise and develop the associated background correction, implemented in the R package NormalGamma. Our model proves to be markedly more accurate in modeling Illumina BeadArrays: on the one hand, it is shown on two types of Illumina BeadChips that this model offers a better fit of the observed intensities. On the other hand, the comparison of the operating characteristics of several background correction procedures on spike-in and on normal-gamma simulated data shows high similarities, reinforcing the validation of the normal-gamma modeling. The performance of the background corrections based on the normal-gamma and normal-exponential models is compared on two dilution data sets, through testing procedures representing various experimental designs. Surprisingly, we observe that the implementation of a more accurate parametrisation in the model-based background correction does not increase the sensitivity. These results may be explained by the operating characteristics of the estimators: the normal-gamma background correction offers an improvement in terms of bias, but at the cost of a loss in precision. This paper addresses the lack of fit of the usual normal-exponential model by proposing a more flexible parametrisation of the signal distribution as well as the associated background correction. This new model proves to be considerably more accurate for Illumina microarrays, but the improvement in terms of modeling does not lead to a higher sensitivity in differential analysis. Nevertheless, this realistic modeling paves the way for future investigations, in particular to examine the characteristics of pre-processing strategies.
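
    A small method-of-moments illustration of the normal-gamma convolution model described above (observed intensity = gamma signal + normal background), with the background estimated from simulated negative controls; this is a sketch of the model's structure, not the NormalGamma R package.

      import numpy as np

      rng = np.random.default_rng(2)
      k_true, theta_true, mu, sigma = 1.5, 200.0, 400.0, 50.0

      signal = rng.gamma(k_true, theta_true, 20000)
      observed = signal + rng.normal(mu, sigma, 20000)   # regular probes
      neg_ctrl = rng.normal(mu, sigma, 2000)             # negative controls: background only

      mu_hat, sigma_hat = neg_ctrl.mean(), neg_ctrl.std(ddof=1)
      excess_mean = observed.mean() - mu_hat             # = k * theta for the gamma signal
      excess_var = observed.var(ddof=1) - sigma_hat**2   # = k * theta**2

      theta_hat = excess_var / excess_mean
      k_hat = excess_mean / theta_hat
      print(f"k ~ {k_hat:.2f} (true {k_true}), theta ~ {theta_hat:.1f} (true {theta_true})")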

  20. Rate equation model of laser induced bias in uranium isotope ratios measured by resonance ionization mass spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isselhardt, B. H.; Prussin, S. G.; Savina, M. R.

    2016-01-01

    Resonance Ionization Mass Spectrometry (RIMS) has been developed as a method to measure uranium isotope abundances. In this approach, RIMS is used as an element-selective ionization process between uranium atoms and potential isobars without the aid of chemical purification and separation. The use of broad bandwidth lasers with automated feedback control of wavelength was applied to the measurement of the U-235/U-238 ratio to decrease laser-induced isotopic fractionation. In application, isotope standards are used to identify and correct bias in measured isotope ratios, but understanding laser-induced bias from first principles can improve the precision and accuracy of experimental measurements. A rate equation model for predicting the relative ionization probability has been developed to study the effect of variations in laser parameters on the measured isotope ratio. The model uses atomic data and empirical descriptions of laser performance to estimate the laser-induced bias expected in experimental measurements of the U-235/U-238 ratio. Empirical corrections are also included to account for ionization processes that are difficult to calculate from first principles with the available atomic data. Development of this model has highlighted several important considerations for properly interpreting experimental results.

  1. H++ 3.0: automating pK prediction and the preparation of biomolecular structures for atomistic molecular modeling and simulations.

    PubMed

    Anandakrishnan, Ramu; Aguilar, Boris; Onufriev, Alexey V

    2012-07-01

    The accuracy of atomistic biomolecular modeling and simulation studies depends on the accuracy of the input structures. Preparing these structures for an atomistic modeling task, such as molecular dynamics (MD) simulation, can involve a variety of different tools for correcting errors, adding missing atoms, filling valences with hydrogens, predicting pK values for titratable amino acids, assigning predefined partial charges and radii to all atoms, and generating force field parameter/topology files for MD. Identifying, installing and effectively using the appropriate tools for each of these tasks can be difficult for novice users and time-consuming for experienced ones. H++ (http://biophysics.cs.vt.edu/) is a free open-source web server that automates the above key steps in the preparation of biomolecular structures for molecular modeling and simulations. H++ also performs extensive error and consistency checking, providing error/warning messages together with suggested corrections. In addition to numerous minor improvements, the latest version of H++ includes several new capabilities and options: fixing erroneous (flipped) side-chain conformations for HIS, GLN and ASN, including a ligand in the input structure, processing nucleic acid structures, and generating a solvent box with a specified number of common ions for explicit solvent MD.

  2. Rate equation model of laser induced bias in uranium isotope ratios measured by resonance ionization mass spectrometry

    DOE PAGES

    Isselhardt, B. H.; Prussin, S. G.; Savina, M. R.; ...

    2015-12-07

    Resonance Ionization Mass Spectrometry (RIMS) has been developed as a method to measure uranium isotope abundances. In this approach, RIMS is used as an element-selective ionization process between uranium atoms and potential isobars without the aid of chemical purification and separation. The use of broad bandwidth lasers with automated feedback control of wavelength was applied to the measurement of the 235U/238U ratio to decrease laser-induced isotopic fractionation. In application, isotope standards are used to identify and correct bias in measured isotope ratios, but understanding laser-induced bias from first principles can improve the precision and accuracy of experimental measurements. A rate equation model for predicting the relative ionization probability has been developed to study the effect of variations in laser parameters on the measured isotope ratio. The model uses atomic data and empirical descriptions of laser performance to estimate the laser-induced bias expected in experimental measurements of the 235U/238U ratio. Empirical corrections are also included to account for ionization processes that are difficult to calculate from first principles with the available atomic data. As a result, development of this model has highlighted several important considerations for properly interpreting experimental results.

  3. Regulatory controls on the hydrogeological characterization of a mixed waste disposal site, Radioactive Waste Management Complex, Idaho National Engineering Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruebelmann, K.L.

    1990-01-01

    Following the detection of chlorinated volatile organic compounds in the groundwater beneath the Subsurface Disposal Area (SDA) in the summer of 1987, hydrogeological characterization of the Radioactive Waste Management Complex (RWMC), Idaho National Engineering Laboratory (INEL), was required by the Resource Conservation and Recovery Act (RCRA). The waste site, the SDA, is the subject of a RCRA Corrective Action Program. Regulatory requirements for the Corrective Action Program dictate a phased approach to evaluation of the SDA. In the first phase of the program, the SDA is the subject of a RCRA Facility Investigation (RFI), which will obtain information to fully characterize the physical properties of the site, determine the nature and extent of contamination, and identify pathways for migration of contaminants. If the need for corrective measures is identified during the RFI, a Corrective Measures Study (CMS) will be performed as the second phase. Information generated during the RFI will be used to aid in the selection and implementation of appropriate corrective measures to address the release. Following the CMS, the final phase is the implementation of the selected corrective measures. 4 refs., 1 fig.

  4. Spherical aberration correction with an in-lens N-fold symmetric line currents model.

    PubMed

    Hoque, Shahedul; Ito, Hiroyuki; Nishi, Ryuji

    2018-04-01

    In our previous works, we have proposed N-SYLC (N-fold symmetric line currents) models for aberration correction. In this paper, we propose "in-lens N-SYLC" model, where N-SYLC overlaps rotationally symmetric lens. Such overlap is possible because N-SYLC is free of magnetic materials. We analytically prove that, if certain parameters of the model are optimized, an in-lens 3-SYLC (N = 3) doublet can correct 3rd order spherical aberration. By computer simulation, we show that the required excitation current for correction is less than 0.25 AT for beam energy 5 keV, and the beam size after correction is smaller than 1 nm at the corrector image plane for initial slope less than 4 mrad. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. An Accurate Temperature Correction Model for Thermocouple Hygrometers 1

    PubMed Central

    Savage, Michael J.; Cass, Alfred; de Jager, James M.

    1982-01-01

    Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241

  6. The accuracy of climate models' simulated season lengths and the effectiveness of grid scale correction factors

    DOE PAGES

    Winterhalter, Wade E.

    2011-09-01

    Global climate change is expected to impact biological populations through a variety of mechanisms, including increases in the length of their growing season. Climate models are useful tools for predicting how season length might change in the future. However, the accuracy of these models tends to be rather low at regional geographic scales. Here, I determined the ability of several atmosphere-ocean general circulation models (AOGCMs) to accurately simulate historical season lengths for a temperate ectotherm across the continental United States. I also evaluated the effectiveness of regional-scale correction factors to improve the accuracy of these models. I found that both the accuracy of simulated season lengths and the effectiveness of the correction factors in improving model accuracy varied geographically and across models. These results suggest that region-specific correction factors do not always adequately remove potential discrepancies between simulated and historically observed environmental parameters. As such, an explicit evaluation of the correction factors' effectiveness should be included in future studies of global climate change's impact on biological populations.

  7. The Swiss cheese model of safety incidents: are there holes in the metaphor?

    PubMed Central

    Perneger, Thomas V

    2005-01-01

    Background Reason's Swiss cheese model has become the dominant paradigm for analysing medical errors and patient safety incidents. The aim of this study was to determine if the components of the model are understood in the same way by quality and safety professionals. Methods Survey of a volunteer sample of persons who claimed familiarity with the model, recruited at a conference on quality in health care, and on the internet through quality-related websites. The questionnaire proposed several interpretations of components of the Swiss cheese model: a) slice of cheese, b) hole, c) arrow, d) active error, e) how to make the system safer. Eleven interpretations were compatible with this author's interpretation of the model; 12 were not. Results Eighty-five respondents stated that they were very or quite familiar with the model. They gave on average 15.3 (SD 2.3, range 10 to 21) "correct" answers out of 23 (66.5%) – significantly more than the 11.5 "correct" answers that would be expected by chance (p < 0.001). Respondents gave on average 2.4 "correct" answers regarding the slice of cheese (out of 4), 2.7 "correct" answers about holes (out of 5), 2.8 "correct" answers about the arrow (out of 4), 3.3 "correct" answers about the active error (out of 5), and 4.1 "correct" answers about improving safety (out of 5). Conclusion The interpretations of specific features of the Swiss cheese model varied considerably among quality and safety professionals. Reaching consensus about concepts of patient safety requires further work. PMID:16280077

  8. A Model-Based Approach to Support Validation of Medical Cyber-Physical Systems.

    PubMed

    Silva, Lenardo C; Almeida, Hyggo O; Perkusich, Angelo; Perkusich, Mirko

    2015-10-30

    Medical Cyber-Physical Systems (MCPS) are context-aware, life-critical systems with patient safety as the main concern, demanding rigorous processes for validation to guarantee user requirement compliance and specification-oriented correctness. In this article, we propose a model-based approach for early validation of MCPS, focusing on promoting reusability and productivity. It enables system developers to build MCPS formal models based on a library of patient and medical device models, and simulate the MCPS to identify undesirable behaviors at design time. Our approach has been applied to three different clinical scenarios to evaluate its reusability potential for different contexts. We have also validated our approach through an empirical evaluation with developers to assess productivity and reusability. Finally, our models have been formally verified considering functional and safety requirements and model coverage.

  9. A Model-Based Approach to Support Validation of Medical Cyber-Physical Systems

    PubMed Central

    Silva, Lenardo C.; Almeida, Hyggo O.; Perkusich, Angelo; Perkusich, Mirko

    2015-01-01

    Medical Cyber-Physical Systems (MCPS) are context-aware, life-critical systems with patient safety as the main concern, demanding rigorous processes for validation to guarantee user requirement compliance and specification-oriented correctness. In this article, we propose a model-based approach for early validation of MCPS, focusing on promoting reusability and productivity. It enables system developers to build MCPS formal models based on a library of patient and medical device models, and simulate the MCPS to identify undesirable behaviors at design time. Our approach has been applied to three different clinical scenarios to evaluate its reusability potential for different contexts. We have also validated our approach through an empirical evaluation with developers to assess productivity and reusability. Finally, our models have been formally verified considering functional and safety requirements and model coverage. PMID:26528982

  10. Using a knowledge-based planning solution to select patients for proton therapy.

    PubMed

    Delaney, Alexander R; Dahele, Max; Tol, Jim P; Kuijper, Ingrid T; Slotman, Ben J; Verbakel, Wilko F A R

    2017-08-01

    Patient selection for proton therapy by comparing proton/photon treatment plans is time-consuming and prone to bias. RapidPlan™, a knowledge-based-planning solution, uses plan libraries to model and predict organ-at-risk (OAR) dose-volume histograms (DVHs). We investigated whether RapidPlan, utilizing an algorithm based only on photon beam characteristics, could generate proton DVH predictions and whether these could correctly identify patients for proton therapy. Model PROT and Model PHOT comprised 30 head-and-neck cancer proton and photon plans, respectively. Proton and photon knowledge-based plans (KBPs) were made for ten evaluation patients. DVH prediction accuracy was analyzed by comparing predicted-vs-achieved mean OAR doses. KBPs and manual plans were compared using salivary gland and swallowing muscle mean doses. For illustration, patients were selected for protons if predicted Model PHOT mean dose minus predicted Model PROT mean dose (ΔPrediction) for combined OARs was ≥6 Gy, and benchmarked using achieved KBP doses. Achieved and predicted Model PROT/Model PHOT mean dose R2 was 0.95/0.98. Generally, achieved mean dose for Model PHOT/Model PROT KBPs was respectively lower/higher than predicted. Comparing Model PROT/Model PHOT KBPs with manual plans, salivary and swallowing mean doses increased/decreased by <2 Gy, on average. ΔPrediction ≥6 Gy correctly selected 4 of 5 patients for protons. Knowledge-based DVH predictions can provide efficient, patient-specific selection for protons. A proton-specific RapidPlan solution could improve results. Copyright © 2017 Elsevier B.V. All rights reserved.
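
    A hypothetical sketch of the selection rule described above: the photon-model minus proton-model predicted mean doses are averaged over the combined OARs and compared against the 6 Gy threshold. The OAR names and dose values are invented, and averaging over OARs is an assumption about how the combined ΔPrediction is formed.

      from typing import Dict

      def select_for_protons(pred_photon: Dict[str, float],
                             pred_proton: Dict[str, float],
                             threshold_gy: float = 6.0) -> bool:
          """pred_* map OAR names to predicted mean doses (Gy) for one patient."""
          oars = pred_photon.keys() & pred_proton.keys()
          delta = sum(pred_photon[o] - pred_proton[o] for o in oars) / len(oars)
          return delta >= threshold_gy

      patient = {
          "photon": {"parotid_L": 26.0, "parotid_R": 24.0, "constrictors": 45.0},
          "proton": {"parotid_L": 18.0, "parotid_R": 17.0, "constrictors": 37.0},
      }
      print(select_for_protons(patient["photon"], patient["proton"]))  # True: mean gain ~7.7 Gy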

  11. Satellite SAR geocoding with refined RPC model

    NASA Astrophysics Data System (ADS)

    Zhang, Lu; Balz, Timo; Liao, Mingsheng

    2012-04-01

    Recent studies have proved that the Rational Polynomial Camera (RPC) model is able to act as a reliable replacement of the rigorous Range-Doppler (RD) model for the geometric processing of satellite SAR datasets. But its capability in absolute geolocation of SAR images has not been evaluated quantitatively. Therefore, in this article the problems of error analysis and refinement of SAR RPC model are primarily investigated to improve the absolute accuracy of SAR geolocation. Range propagation delay and azimuth timing error are identified as two major error sources for SAR geolocation. An approach based on SAR image simulation and real-to-simulated image matching is developed to estimate and correct these two errors. Afterwards a refined RPC model can be built from the error-corrected RD model and then used in satellite SAR geocoding. Three experiments with different settings are designed and conducted to comprehensively evaluate the accuracies of SAR geolocation with both ordinary and refined RPC models. All the experimental results demonstrate that with RPC model refinement the absolute location accuracies of geocoded SAR images can be improved significantly, particularly in Easting direction. In another experiment the computation efficiencies of SAR geocoding with both RD and RPC models are compared quantitatively. The results show that by using the RPC model such efficiency can be remarkably improved by at least 16 times. In addition the problem of DEM data selection for SAR image simulation in RPC model refinement is studied by a comparative experiment. The results reveal that the best choice should be using the proper DEM datasets of spatial resolution comparable to that of the SAR images.

  12. Contaminant Boundary at the Faultless Underground Nuclear Test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greg Pohll; Karl Pohlmann; Jeff Daniels

    The U.S. Department of Energy (DOE) and the Nevada Division of Environmental Protection (NDEP) have reached agreement on a corrective action strategy to address the extent and potential impact of radionuclide contamination of groundwater at underground nuclear test locations. This strategy is described in detail in the Federal Facility Agreement and Consent Order (FFACO, 2000). As part of the corrective action strategy, the nuclear detonations that occurred underground were identified as geographically distinct corrective action units (CAUs). The strategic objective for each CAU is to estimate, over a 1,000-yr time period and with uncertainty quantified, the three-dimensional extent of groundwater contamination that would be considered unsafe for domestic and municipal use. Two types of boundaries (contaminant and compliance) are discussed in the FFACO that will map the three-dimensional extent of radionuclide contamination. The contaminant boundary will identify the region with 95 percent certainty that contaminants do not exist above a threshold value. It will be prepared by the DOE and presented to NDEP. The compliance boundary will be produced as a result of negotiation between the DOE and NDEP, and can be coincident with, or differ from, the contaminant boundary. Two different thresholds are considered for the contaminant boundary. One is based on the enforceable National Primary Drinking Water Regulations for radionuclides, which were developed as a requirement of the Safe Drinking Water Act. The other is a risk-based threshold considering applicable lifetime excess cancer-risk-based criteria. The contaminant boundary for the Faultless underground nuclear test at the Central Nevada Test Area (CNTA) is calculated using a newly developed groundwater flow and radionuclide transport model that incorporates aspects of both the original three-dimensional model (Pohlmann et al., 1999) and the two-dimensional model developed for the Faultless data decision analysis (DDA) (Pohll and Mihevc, 2000). This new model includes the uncertainty in the three-dimensional spatial distribution of lithology and hydraulic conductivity from the 1999 model as well as the uncertainty in the other flow and transport parameters from the 2000 DDA model. Additionally, the new model focuses on a much smaller region than was included in the earlier models, that is, the subsurface within the UC-1 land withdrawal area where the 1999 model predicted radionuclide transport will occur over the next 1,000 years. The purpose of this unclassified document is to present the modifications to the CNTA groundwater flow and transport model, to present the methodology used to calculate contaminant boundaries, and to present the Safe Drinking Water Act and risk-derived contaminant boundaries for the Faultless underground nuclear test CAU.

  13. Dynamical generation of a repulsive vector contribution to the quark pressure

    NASA Astrophysics Data System (ADS)

    Restrepo, Tulio E.; Macias, Juan Camilo; Pinto, Marcus Benghi; Ferrari, Gabriel N.

    2015-03-01

    Lattice QCD results for the coefficient c2 appearing in the Taylor expansion of the pressure show that this quantity increases with the temperature towards the Stefan-Boltzmann limit. On the other hand, model approximations predict that when a vector repulsion, parametrized by GV, is present this coefficient reaches a maximum just after Tc and then deviates from the lattice predictions. Recently, this discrepancy has been used as a guide to constrain the (presently unknown) value of GV within the framework of effective models at large Nc (LN). In the present investigation we show that, due to finite Nc effects, c2 may also develop a maximum even when GV=0, since a vector repulsive term can be dynamically generated by exchange-type radiative corrections. Here we apply the optimized perturbation theory (OPT) method to the two-flavor Polyakov-Nambu-Jona-Lasinio model (at GV=0) and compare the results with those furnished by lattice simulations and by the LN approximation at GV=0 and also at GV≠0. The OPT numerical results for c2 are impressively accurate for T ≲ 1.2 Tc but, as expected, they predict that this quantity develops a maximum at high T. After identifying the mathematical origin of this extremum, we argue that such discrepant behavior may naturally arise within this type of effective quark theory (at GV=0) whenever the first 1/Nc corrections are taken into account. We then interpret this hypothesis as an indication that beyond the large-Nc limit the correct high-temperature (perturbative) behavior of c2 will be faithfully described by effective models only if they also mimic the asymptotic freedom phenomenon.

  14. Ice Cores Dating With a New Inverse Method Taking Account of the Flow Modeling Errors

    NASA Astrophysics Data System (ADS)

    Lemieux-Dudon, B.; Parrenin, F.; Blayo, E.

    2007-12-01

    Deep ice cores extracted from Antarctica and Greenland record a wide range of past climatic events. To contribute to the understanding of the Quaternary climate system, the calculation of an accurate depth-age relationship is crucial. Until now, ice chronologies for deep ice cores estimated with inverse approaches have been based on quite simplified ice-flow models that fail to reproduce flow irregularities and, consequently, to respect all available age markers. We describe in this paper a new inverse method that takes the model uncertainty into account in order to circumvent the restrictions linked to the use of simplified flow models. The method uses first guesses on two physical flow quantities, the ice thinning function and the accumulation rate, and then identifies correction functions on both. We highlight two major benefits of the new method: the ability to respect a large set of observations and, as a consequence, the feasibility of estimating a synchronized common chronology for several cores at the same time. The inverse approach relies on a Bayesian framework. To respect the positivity constraint on the correction functions being sought, we assume lognormal probability distributions for the background errors and for one particular set of the observation errors. We test this new inversion method on three cores simultaneously (the two EPICA cores, DC and DML, and the Vostok core) and assimilate more than 150 observations (e.g., age markers and stratigraphic links). We analyze the sensitivity of the solution with respect to the background information, especially the prior error covariance matrix. Confidence intervals based on the posterior covariance matrix are estimated for the correction functions and, for the first time, for the overall output chronologies.

  15. High-resolution dynamical downscaling of the future Alpine climate

    NASA Astrophysics Data System (ADS)

    Bozhinova, Denica; José Gómez-Navarro, Juan; Raible, Christoph

    2017-04-01

    The Alpine region, including Switzerland, is a challenging area for simulating and analysing Global Climate Model (GCM) results. This is mostly due to the combination of very complex topography and the still rather coarse horizontal resolution of current GCMs, in which not all of the multi-scale processes that drive local weather and climate can be resolved. In our study, the Weather Research and Forecasting (WRF) model is used to dynamically downscale a GCM simulation to a resolution as high as 2 km x 2 km. WRF is driven by initial and boundary conditions produced with the Community Earth System Model (CESM) for the recent past (control run) and until 2100 under the RCP8.5 climate scenario (future run). The control run downscaled with WRF covers the period 1976-2005, while the future run investigates a 20-year slice simulated for 2080-2099. We compare the control WRF-CESM simulations to an observational product provided by MeteoSwiss and to an additional WRF simulation driven by the ERA-Interim reanalysis, to estimate the bias introduced by the extra modelling step of our framework. Several bias-correction methods are evaluated, including a quantile mapping technique, to ameliorate the bias in the control WRF-CESM simulation. In the next step of our study, these corrections are applied to our future WRF-CESM run. The resulting downscaled and bias-corrected data are analysed for the properties of precipitation and wind speed in the future climate. Our particular interest is in the absolute values simulated for these meteorological variables, as these are used to identify extreme events such as wind storms and situations that can lead to floods.

  16. Multiple balance tests improve the assessment of postural stability in subjects with Parkinson's disease

    PubMed Central

    Jacobs, J V; Horak, F B; Tran, V K; Nutt, J G

    2006-01-01

    Objectives Clinicians often base the implementation of therapies on the presence of postural instability in subjects with Parkinson's disease (PD). These decisions are frequently based on the pull test from the Unified Parkinson's Disease Rating Scale (UPDRS). We sought to determine whether combining the pull test, the one‐leg stance test, the functional reach test, and UPDRS items 27–29 (arise from chair, posture, and gait) predicts balance confidence and falling better than any test alone. Methods The study included 67 subjects with PD. Subjects performed the one‐leg stance test, the functional reach test, and the UPDRS motor exam. Subjects also responded to the Activities‐specific Balance Confidence (ABC) scale and reported how many times they fell during the previous year. Regression models determined the combination of tests that optimally predicted mean ABC scores or categorised fall frequency. Results When all tests were included in a stepwise linear regression, only gait (UPDRS item 29), the pull test (UPDRS item 30), and the one‐leg stance test, in combination, represented significant predictor variables for mean ABC scores (r2 = 0.51). A multinomial logistic regression model including the one‐leg stance test and gait represented the model with the fewest significant predictor variables that correctly identified the most subjects as fallers or non‐fallers (85% of subjects were correctly identified). Conclusions Multiple balance tests (including the one‐leg stance test, and the gait and pull test items of the UPDRS) that assess different types of postural stress provide an optimal assessment of postural stability in subjects with PD. PMID:16484639
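
    A schematic sketch of this kind of faller classification, using a binary logistic regression on the one-leg stance test and the UPDRS gait item; the data are synthetic and the binary outcome is a simplification of the multinomial model reported above.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(3)
      n = 67
      one_leg_stance_s = rng.uniform(1, 30, n)      # seconds the stance can be held
      updrs_gait = rng.integers(0, 4, n)            # UPDRS gait item score (0-3)

      # Assumed relationship: poorer balance and worse gait raise the odds of falling.
      lin = 1.0 - 0.12 * one_leg_stance_s + 0.9 * updrs_gait
      faller = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))

      X = np.column_stack([one_leg_stance_s, updrs_gait])
      clf = LogisticRegression().fit(X, faller)
      print("classification accuracy:", round(clf.score(X, faller), 2))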

  17. Corrective Action Decision Document for Corrective Action Unit 204: Storage Bunkers, Nevada Test Site, Nevada, Revision 0 with ROTC 1, 2, and Errata

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wickline, Alfred

    2004-04-01

    This Corrective Action Decision Document (CADD) has been prepared for Corrective Action Unit (CAU) 204, Storage Bunkers, Nevada Test Site (NTS), Nevada, in accordance with the "Federal Facility Agreement and Consent Order" (FFACO) that was agreed to by the State of Nevada; U.S. Department of Energy (DOE); and the U.S. Department of Defense (FFACO, 1996). The NTS is approximately 65 miles (mi) north of Las Vegas, Nevada (Figure 1-1). The Corrective Action Sites (CASs) within CAU 204 are located in Areas 1, 2, 3, and 5 of the NTS, in Nye County, Nevada (Figure 1-2). Corrective Action Unit 204 is comprised of the six CASs identified in Table 1-1. As shown in Table 1-1, the FFACO describes four of these CASs as bunkers, one as chemical exchange storage, and one as a blockhouse. Subsequent investigations have identified four of these structures as instrumentation bunkers (CASs 01-34-01, 02-34-01, 03-34-01, 05-33-01), one as an explosives storage bunker (CAS 05-99-02), and one as both (CAS 05-18-02). The six bunkers included in CAU 204 were primarily used to monitor atmospheric testing or to store munitions. The "Corrective Action Investigation Plan (CAIP) for Corrective Action Unit 204: Storage Bunkers, Nevada Test Site, Nevada" (NNSA/NV, 2002a) provides information relating to the history, planning, and scope of the investigation; therefore, it will not be repeated in this CADD. This CADD identifies potential corrective action alternatives and provides a rationale for the selection of a recommended corrective action alternative for each CAS within CAU 204. The evaluation of corrective action alternatives is based on process knowledge and the results of investigative activities conducted in accordance with the CAIP (NNSA/NV, 2002a) that was approved prior to the start of the Corrective Action Investigation (CAI). Record of Technical Change (ROTC) No. 1 to the CAIP (approval pending) documents changes to the preliminary action levels (PALs) agreed to by the Nevada Division of Environmental Protection (NDEP) and the DOE, National Nuclear Security Administration Nevada Site Office (NNSA/NSO). This ROTC specifically discusses the radiological PALs and their application to the findings of the CAU 204 corrective action investigation.

  18. Correctional Officers and Workplace Adversity: Identifying Interpersonal, Cognitive, and Behavioral Response Tendencies.

    PubMed

    Trounson, Justin S; Pfeifer, Jeffrey E

    2017-10-01

    This study explored correctional officers' response tendencies (i.e., cognitive, interpersonal, and behavioral response patterns they engage in) when managing workplace adversity. In total, 53 Australian correctional officers participated in the study. Eight exploratory focus group discussions (n = 42) were conducted to identify a set of officer-endorsed response tendencies. Thematic analysis of group data revealed that correctional officers engage in a range of response tendencies when facing workplace adversity and that these tendencies may be categorized as interpersonally, cognitively, or behaviorally based. Semistructured interviews (n = 11) were then conducted to provide further depth of information regarding officer response tendency usage. Results are discussed in terms of common themes, future research, and implications for developing training programs designed to ameliorate the effects of workplace adversity.

  19. Handheld ultrasound versus physical examination in patients referred for transthoracic echocardiography for a suspected cardiac condition.

    PubMed

    Mehta, Manish; Jacobson, Timothy; Peters, Dawn; Le, Elizabeth; Chadderdon, Scott; Allen, Allison J; Caughey, Aaron B; Kaul, Sanjiv

    2014-10-01

    The purpose of this study was to test the hypothesis that handheld ultrasound (HHU) provides a more accurate diagnosis than physical examination in patients with suspected cardiovascular abnormalities and that its use thus reduces additional testing and overall costs. Despite the limitations of physical examination and the demonstrated superiority of HHU for detecting cardiac abnormalities, it is not routinely used for the bedside diagnosis of cardiac conditions. Patients referred for a standard echocardiogram for common indications (cardiac function, murmur, stroke, arrhythmias, and miscellaneous) underwent physical examination and HHU by different cardiologists, who filled out a form that also included suggestions for additional testing, if necessary, based on their findings. Of 250 patients, 142 had an abnormal finding on standard echocardiogram. Of these, HHU correctly identified 117 patients (82%), and physical examination correctly identified 67 (47%, p < 0.0001). HHU was superior to physical examination (p < 0.0001) for both normal and abnormal cardiac function. It was also superior to physical examination in correctly identifying the presence of substantial valve disease (71% vs. 31%, p = 0.0003) and in identifying miscellaneous findings (47% vs. 3%, p < 0.0001). Of 108 patients without any abnormalities on standard echocardiography, further testing was suggested for 89 (82%) undergoing physical examination versus only 60 (56%) undergoing HHU (p < 0.0001). Cost modeling showed that HHU had an average cost of $644.43 versus an average cost of $707.44 for physical examination. This yielded a savings of $63.01 per patient when HHU was used versus physical examination. When used by cardiologists, HHU provides a more accurate diagnosis than physical examination for the majority of common cardiovascular abnormalities. The finding of no significant abnormality on HHU is also likely to result in less downstream testing and thus potentially reduce the overall cost for patients being evaluated for a cardiovascular diagnosis. Copyright © 2014 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  20. Identifying pollination service hotspots and coldspots using citizen science data from the Great Sunflower Project

    NASA Astrophysics Data System (ADS)

    LeBuhn, G.; Schmucki, R.

    2016-12-01

    Identifying the spatial patterns of pollinator visitation rates is key to identifying the drivers of differences in pollination service and the areas where pollinator conservation will provide the highest return on investment. However, gathering pollinator abundance data at the appropriate regional and national scales is untenable. As a surrogate, habitat models have been developed to identify areas of pollinator losses but these models have been developed using expert opinion based on foraging and nesting requirements. Thousands of citizen scientists across the United States participating in The Great Sunflower Project (www.GreatSunflower.org) contribute timed counts of pollinator visits to a focal sunflower variety planted in local gardens and green spaces. While these data provide a more direct measure of pollination service to a standardized plant and include a measure of effort, the data are complicated. Each location is sampled at different dates, times and frequencies as well as different points across the local flight season. To overcome this complication, we have used a generalized additive model to generate regional flight curves to calibrate each individual data point and to attain better estimates of pollination service at each site. Using these flight season corrected data, we identify hotspots and cold spots in pollinator service across the United States, evaluate the drivers shaping the spatial patterns and observe how these data align with the results obtained from predictive models that are based on expert knowledge on foraging and nesting habitats.
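
    A minimal sketch in the spirit of the flight-curve calibration described above: a smooth regional activity curve is fit over the season and each raw count is divided by the curve value on its sampling date, so sites sampled at different points in the flight season become comparable. The spline smoother stands in for the generalized additive model, and the counts are synthetic.

      import numpy as np
      from scipy.interpolate import UnivariateSpline

      rng = np.random.default_rng(4)
      day_of_year = rng.integers(120, 270, 500)                        # sampling dates across sites
      latent = np.exp(-((day_of_year - 200) ** 2) / (2 * 25.0 ** 2))   # true regional activity
      counts = rng.poisson(10.0 * latent + 0.5)                        # visits per timed count

      # Regional flight curve: daily mean counts smoothed with a cubic spline.
      days = np.arange(120, 270)
      daily_mean = np.array([counts[day_of_year == d].mean() if np.any(day_of_year == d) else np.nan
                             for d in days])
      ok = ~np.isnan(daily_mean)
      curve = UnivariateSpline(days[ok], daily_mean[ok], k=3, s=float(ok.sum()))

      expected = np.clip(curve(day_of_year), 0.5, None)   # avoid dividing by near-zero activity
      calibrated = counts / expected                      # season-corrected visitation index
      print("mean calibrated index:", round(float(calibrated.mean()), 2))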

  1. Understanding the dynamics of correct and error responses in free recall: evidence from externalized free recall.

    PubMed

    Unsworth, Nash; Brewer, Gene A; Spillers, Gregory J

    2010-06-01

    The dynamics of correct and error responses in a variant of delayed free recall were examined in the present study. In the externalized free recall paradigm, participants were presented with lists of words and were instructed to subsequently recall not only the words that they could remember from the most recently presented list, but also any other words that came to mind during the recall period. Externalized free recall is useful for elucidating both sampling and postretrieval editing processes, thereby yielding more accurate estimates of the total number of error responses, which are typically sampled and subsequently edited during free recall. The results indicated that the participants generally sampled correct items early in the recall period and then transitioned to sampling more erroneous responses. Furthermore, the participants generally terminated their search after sampling too many errors. An examination of editing processes suggested that the participants were quite good at identifying errors, but this varied systematically on the basis of a number of factors. The results from the present study are framed in terms of generate-edit models of free recall.

  2. Assessments of higher-order ionospheric effects on GPS coordinate time series: A case study of CMONOC with longer time series

    NASA Astrophysics Data System (ADS)

    Jiang, Weiping; Deng, Liansheng; Zhou, Xiaohui; Ma, Yifang

    2014-05-01

    Higher-order ionospheric (HIO) corrections have been proposed as a standard part of precise GPS data analysis. In this study, we investigate the impact of HIO corrections on coordinate time series by re-processing GPS data from the Crustal Movement Observation Network of China (CMONOC). Nearly 13 years of data are used in three processing runs: (a) run NO, without HIO corrections; (b) run IG, with both second- and third-order corrections modeled using the International Geomagnetic Reference Field 11 (IGRF11) magnetic field model; and (c) run ID, the same as IG but with a dipole magnetic field model. Both spectral analysis and noise analysis are used to assess these effects. Results show that, for CMONOC stations, HIO corrections bring an overall improvement. After the corrections are applied, the noise amplitudes decrease, with the white noise amplitudes showing the more remarkable change. Low-latitude sites are more affected, and the impacts vary by coordinate component. An analysis of stacked periodograms shows a good match between the seasonal amplitudes and the HIO corrections, indicating that observed variations in the coordinate time series are related to HIO effects. HIO delays partially explain the seasonal amplitudes in the coordinate time series, especially for the U component. The annual amplitudes of all components decrease for over one-half of the selected CMONOC sites, and the semi-annual amplitudes are affected even more strongly. However, when the dipole model is used, the results are not as favorable as with the IGRF model: the dipole-based corrections increase the noise amplitudes and can generate spurious periodic signals. When a dipole model is used for the HIO terms, larger residuals and noise are introduced rather than effective improvements.
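
    For context, annual and semi-annual amplitudes of a coordinate component can be estimated with a simple least-squares harmonic fit, as sketched below on a synthetic series; this is a generic illustration of seasonal-amplitude estimation, not the processing chain used for CMONOC.

      import numpy as np

      rng = np.random.default_rng(5)
      t = np.arange(0, 13, 1 / 365.25)                          # ~13 years, in years
      series_mm = (3.0 * np.sin(2 * np.pi * t + 0.4)            # annual term
                   + 1.2 * np.sin(4 * np.pi * t + 1.1)          # semi-annual term
                   + rng.normal(0, 2.0, t.size))                # white noise

      # Design matrix: offset, trend, annual and semi-annual sine/cosine pairs.
      A = np.column_stack([np.ones_like(t), t,
                           np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
                           np.sin(4 * np.pi * t), np.cos(4 * np.pi * t)])
      coef, *_ = np.linalg.lstsq(A, series_mm, rcond=None)

      annual_amp = np.hypot(coef[2], coef[3])
      semiannual_amp = np.hypot(coef[4], coef[5])
      print(f"annual ~ {annual_amp:.2f} mm, semi-annual ~ {semiannual_amp:.2f} mm")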

  3. [Investigation of color vision in acute unilateral optic neuritis using a web-based color vision test].

    PubMed

    Kuchenbecker, J; Blum, M; Paul, F

    2016-03-01

    In acute unilateral optic neuritis (ON) color vision defects combined with a decrease in visual acuity and contrast sensitivity frequently occur. This study investigated whether a web-based color vision test is a reliable detector of acquired color vision defects in ON and, if so, which charts are particularly suitable. In 12 patients with acute unilateral ON, a web-based color vision test (www.farbsehtest.de) with 25 color plates (16 Velhagen/Broschmann and 9 Ishihara color plates) was performed. For each patient the affected eye was tested first and then the unaffected eye. The mean best-corrected distance visual acuity (BCDVA) in the ON eye was 0.36 ± 0.20 and 1.0 ± 0.1 in the contralateral eye. The number of incorrectly read plates correlated with the visual acuity. For the ON eye a total of 134 plates were correctly identified and 166 plates were incorrectly identified, while for the disease-free fellow eye, 276 plates were correctly identified and 24 plates were incorrectly identified. Both of the blue/yellow plates were identified correctly 14 times and incorrectly 10 times using the ON eye and exclusively correctly (24 times) using the fellow eye. The Velhagen/Broschmann plates were incorrectly identified significantly more frequently in comparison with the Ishihara plates. In 4 out of 16 Velhagen/Broschmann plates and 5 out of 9 Ishihara plates, no statistically significant differences between the ON eye and the fellow eye could be detected. The number of incorrectly identified plates correlated with a decrease in visual acuity. Red/green and blue/yellow plates were incorrectly identified significantly more frequently with the ON eye, while the Velhagen/Broschmann color plates were incorrectly identified significantly more frequently than the Ishihara color plates. Thus, under defined test conditions the web-based color vision test can also be used to detect acquired color vision defects, such as those caused by ON. Optimization of the test by altering the combination of plates may be a useful next step.

  4. Bias-Corrected Estimation of Noncentrality Parameters of Covariance Structure Models

    ERIC Educational Resources Information Center

    Raykov, Tenko

    2005-01-01

    A bias-corrected estimator of noncentrality parameters of covariance structure models is discussed. The approach represents an application of the bootstrap methodology for purposes of bias correction, and utilizes the relation between average of resample conventional noncentrality parameter estimates and their sample counterpart. The…
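
    A generic sketch of bootstrap bias correction of the kind the abstract refers to: the estimated bias (mean of the resample estimates minus the sample estimate) is subtracted from the sample estimate. A simple biased statistic stands in here for the noncentrality parameter of a covariance structure model.

      import numpy as np

      rng = np.random.default_rng(6)
      x = rng.normal(5.0, 2.0, 40)

      def estimator(sample):
          return sample.mean() ** 2          # biased estimator of the squared population mean

      theta_hat = estimator(x)
      boot = np.array([estimator(rng.choice(x, size=x.size, replace=True)) for _ in range(2000)])

      bias_hat = boot.mean() - theta_hat
      theta_bc = theta_hat - bias_hat        # equivalently 2 * theta_hat - boot.mean()
      print(f"plug-in: {theta_hat:.2f}, bias-corrected: {theta_bc:.2f}, true value: 25.00")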

  5. Contribution of energy values to the analysis of global searching molecular dynamics simulations of transmembrane helical bundles.

    PubMed Central

    Torres, Jaume; Briggs, John A G; Arkin, Isaiah T

    2002-01-01

    Molecular interactions between transmembrane alpha-helices can be explored using global searching molecular dynamics simulations (GSMDS), a method that produces a group of probable low energy structures. We have shown previously that the correct model in various homooligomers is always located at the bottom of one of various possible energy basins. Unfortunately, the correct model is not necessarily the one with the lowest energy according to the computational protocol, which has resulted in overlooking of this parameter in favor of experimental data. In an attempt to use energetic considerations in the aforementioned analysis, we used global searching molecular dynamics simulations on three homooligomers of different sizes, the structures of which are known. As expected, our results show that even when the conformational space searched includes the correct structure, taking together simulations using both left and right handedness, the correct model does not necessarily have the lowest energy. However, for the models derived from the simulation that uses the correct handedness, the lowest energy model is always at, or very close to, the correct orientation. We hypothesize that this should also be true when simulations are performed using homologous sequences, and consequently lowest energy models with the right handedness should produce a cluster around a certain orientation. In contrast, using the wrong handedness the lowest energy structures for each sequence should appear at many different orientations. The rationale behind this is that, although more than one energy basin may exist, basins that do not contain the correct model will shift or disappear because they will be destabilized by at least one conservative (i.e. silent) mutation, whereas the basin containing the correct model will remain. This not only allows one to point to the possible handedness of the bundle, but can be used to overcome ambiguities arising from the use of homologous sequences in the analysis of global searching molecular dynamics simulations. In addition, because clustering of lowest energy models arising from homologous sequences only happens when the estimation of the helix tilt is correct, it may provide a validation for the helix tilt estimate. PMID:12023229

  6. Temperature effects on pitfall catches of epigeal arthropods: a model and method for bias correction.

    PubMed

    Saska, Pavel; van der Werf, Wopke; Hemerik, Lia; Luff, Martin L; Hatten, Timothy D; Honek, Alois; Pocock, Michael

    2013-02-01

    Carabids and other epigeal arthropods make important contributions to biodiversity, food webs and biocontrol of invertebrate pests and weeds. Pitfall trapping is widely used for sampling carabid populations, but this technique yields biased estimates of abundance ('activity-density') because individual activity - which is affected by climatic factors - affects the rate of catch. To date, the impact of temperature on pitfall catches, while suspected to be large, has not been quantified, and no method is available to account for it. This lack of knowledge and the unavailability of a method for bias correction affect the confidence that can be placed on results of ecological field studies based on pitfall data. Here, we develop a simple model for the effect of temperature, assuming a constant proportional change in the rate of catch per °C change in temperature, r, consistent with an exponential Q10 response to temperature. We fit this model to 38 time series of pitfall catches and accompanying temperature records from the literature, using first differences and other detrending methods to account for seasonality. We use meta-analysis to assess consistency of the estimated parameter r among studies. The mean rate of increase in total catch across data sets was 0.0863 ± 0.0058 per °C of maximum temperature and 0.0497 ± 0.0107 per °C of minimum temperature. Multiple regression analyses of 19 data sets showed that temperature is the key climatic variable affecting total catch. Relationships between temperature and catch were also identified at species level. Correction for temperature bias had substantial effects on seasonal trends of carabid catches. Synthesis and Applications. The effect of temperature on pitfall catches is shown here to be substantial and worthy of consideration when interpreting results of pitfall trapping. The exponential model can be used both for effect estimation and for bias correction of observed data. Correcting for temperature-related trapping bias is straightforward and enables population estimates to be more comparable. It may thus improve data interpretation in ecological, conservation and monitoring studies, and assist in better management and conservation of habitats and ecosystem services. Nevertheless, field ecologists should remain vigilant for other sources of bias.
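
    A small sketch of the correction implied by the model above: if the catch rate changes by a constant proportion r per °C, an observed catch can be rescaled to a common reference temperature. The value of r for total catch versus maximum temperature is taken from the abstract; the reference temperature, catches and temperatures below are invented.

      import numpy as np

      r_per_degC = 0.0863              # proportional change in catch per degC of maximum temperature
      t_ref = 20.0                     # reference maximum temperature, an arbitrary choice

      t_max = np.array([14.0, 18.0, 22.0, 26.0])       # trapping-period maximum temperatures
      catch = np.array([3, 7, 12, 21], dtype=float)    # observed activity-density

      # Rescale each catch to what would be expected at the reference temperature.
      catch_corrected = catch * np.exp(-r_per_degC * (t_max - t_ref))
      print(np.round(catch_corrected, 1))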

  7. Error Detection/Correction in Collaborative Writing

    ERIC Educational Resources Information Center

    Pilotti, Maura; Chodorow, Martin

    2009-01-01

    In the present study, we examined error detection/correction during collaborative writing. Subjects were asked to identify and correct errors in two contexts: a passage written by the subject (familiar text) and a passage written by a person other than the subject (unfamiliar text). A computer program inserted errors in function words prior to the…

  8. 75 FR 70013 - Medicare Program; Inpatient Rehabilitation Facility Prospective Payment System for Federal Fiscal...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-16

    ... Insurance; and Program No. 93.774, Medicare-- Supplementary Medical Insurance Program) Dated: November 9...: Correction notice. SUMMARY: This document corrects a technical error that appeared in the notice published in... of July 22, 2010 (75 FR 42836), there was a technical error that we are identifying and correcting in...

  9. 78 FR 39730 - Medicare Program; Notification of Closure of Teaching Hospitals and Opportunity To Apply for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-02

    ..., Medicare--Hospital Insurance; and Program No. 93.774, Medicare-- Supplementary Medical Insurance Program.... SUMMARY: This document corrects a typographical error that appeared in the notice published in the Federal... typographical error that is identified and corrected in the Correction of Errors section below. II. Summary of...

  10. Modeling crustal deformation near active faults and volcanic centers: a catalog of deformation models and modeling approaches

    USGS Publications Warehouse

    Battaglia, Maurizio; Peter, F.; Murray, Jessica R.

    2013-01-01

    This manual provides the physical and mathematical concepts for selected models used to interpret deformation measurements near active faults and volcanic centers. The emphasis is on analytical models of deformation that can be compared with data from Global Positioning System (GPS) receivers, interferometric synthetic aperture radar (InSAR), leveling surveys, tiltmeters and strainmeters. Source models include pressurized spherical, ellipsoidal, and horizontal penny-shaped geometries in an elastic, homogeneous, flat half-space. Vertical dikes and faults are described following the mathematical notation for rectangular dislocations in an elastic, homogeneous, flat half-space. All the analytical expressions were verified against numerical models developed with COMSOL Multiphysics, a finite element analysis software package (http://www.comsol.com). In this way, typographical errors in the original expressions were identified and corrected. Matlab scripts are also provided to facilitate the application of these models.
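
    As an example of the kind of analytical source covered by such catalogs, the sketch below implements the classic Mogi point-source approximation for surface displacement above a pressurized spherical source in an elastic half-space, written in terms of the source volume change; this is a generic textbook form, not necessarily the exact notation used in the manual.

      import numpy as np

      def mogi_surface_displacement(r, depth, dV, nu=0.25):
          """Surface displacement from a point pressure (Mogi) source in an elastic,
          homogeneous half-space, expressed via the source volume change dV.
          r: radial distance from the source axis (m), depth: source depth (m),
          dV: volume change (m^3), nu: Poisson's ratio.
          Returns (u_r, u_z) in metres; positive u_z is uplift."""
          r = np.asarray(r, dtype=float)
          R3 = (depth**2 + r**2) ** 1.5
          coeff = (1.0 - nu) * dV / np.pi
          return coeff * r / R3, coeff * depth / R3

      # Example: 1e6 m^3 of inflation at 3 km depth, profile out to 10 km
      r = np.linspace(0.0, 10e3, 5)
      u_r, u_z = mogi_surface_displacement(r, depth=3e3, dV=1e6)
      print(np.round(u_z * 1e3, 2))  # vertical displacement in mm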

  11. Super-global distortion correction for a rotational C-arm x-ray image intensifier.

    PubMed

    Liu, R R; Rudin, S; Bednarek, D R

    1999-09-01

    Image intensifier (II) distortion changes as a function of C-arm rotation angle because of changes in the orientation of the II with respect to the earth's or other stray magnetic fields. For cone-beam computed tomography (CT), distortion correction for all angles is essential. The new super-global distortion correction consists of a model to continuously correct II distortion not only at each location in the image but for every rotational angle of the C arm. Calibration bead images were acquired with a standard C arm in 9 in. II mode. The super-global (SG) model is obtained from the single-plane global correction of the selected calibration images with a given sampling-angle interval. The fifth-order single-plane global corrections yielded a residual rms error of 0.20 pixels, while the SG model yielded an rms error of 0.21 pixels, a negligibly small difference. We evaluated the dependence of the accuracy of the SG model on various factors, such as the single-plane global fitting order, the SG order, and the angular sampling interval. We found that a good SG model can be obtained using a sixth-order SG polynomial fit based on the fifth-order single-plane global correction, and that a 10 degree sampling interval was sufficient. Thus, the SG model saves processing resources and storage space. The residual errors arising from the mechanical errors of the x-ray system were also investigated and found to be comparable with the SG residual error. Additionally, a single-plane global correction was done in the cylindrical coordinate system, and physical information about pincushion distortion and S distortion was observed and analyzed; however, this method is not recommended due to a lack of computational efficiency. In conclusion, the SG model provides an accurate, fast, and simple correction for rotational C-arm images, which may be used for cone-beam CT.
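
    A minimal sketch of the single-plane global correction step is given below: a 2D polynomial of chosen order is fitted by least squares to map distorted calibration-bead positions onto their true grid positions (synthetic data; function names are illustrative, not from the paper). The super-global step would then fit each of these coefficients across C-arm angles with a further (e.g. sixth-order) polynomial.

      import numpy as np

      def poly2d_design(x, y, order):
          """Design matrix with all monomials x^i * y^j for i + j <= order."""
          cols = [(x**i) * (y**j) for i in range(order + 1)
                                  for j in range(order + 1 - i)]
          return np.column_stack(cols)

      def fit_global_correction(xd, yd, xt, yt, order=5):
          """Least-squares 2D polynomial mapping distorted bead coordinates
          (xd, yd) to their true grid coordinates (xt, yt)."""
          A = poly2d_design(np.asarray(xd, float), np.asarray(yd, float), order)
          cx, *_ = np.linalg.lstsq(A, np.asarray(xt, float), rcond=None)
          cy, *_ = np.linalg.lstsq(A, np.asarray(yt, float), rcond=None)
          return cx, cy

      def apply_correction(x, y, cx, cy, order=5):
          A = poly2d_design(np.asarray(x, float), np.asarray(y, float), order)
          return A @ cx, A @ cy

      # Synthetic example: beads on a grid with a mild pincushion-like distortion
      gx, gy = np.meshgrid(np.linspace(-1, 1, 9), np.linspace(-1, 1, 9))
      xt, yt = gx.ravel(), gy.ravel()
      r2 = xt**2 + yt**2
      xd, yd = xt * (1 + 0.05 * r2), yt * (1 + 0.05 * r2)   # distorted positions
      cx, cy = fit_global_correction(xd, yd, xt, yt, order=5)
      xc, yc = apply_correction(xd, yd, cx, cy, order=5)
      print(np.max(np.hypot(xc - xt, yc - yt)))             # residual error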

  12. A model-free method for mass spectrometer response correction. [for oxygen consumption and cardiac output calculation

    NASA Technical Reports Server (NTRS)

    Shykoff, Barbara E.; Swanson, Harvey T.

    1987-01-01

    A new method for correction of mass spectrometer output signals is described. Response-time distortion is reduced independently of any model of mass spectrometer behavior. The delay of the system is found first from the cross-correlation function of a step change and its response. A two-sided time-domain digital correction filter (deconvolution filter) is generated next from the same step response data using a regression procedure. Other data are corrected using the filter and delay. The mean squared error between a step response and a step is reduced considerably more after the use of a deconvolution filter than after the application of a second-order model correction. O2 consumption and CO2 production values calculated from data corrupted by a simulated dynamic process return to near the uncorrupted values after correction. Although a clean step response or the ensemble average of several responses contaminated with noise is needed for the generation of the filter, random noise of magnitude not above 0.5 percent added to the response to be corrected does not impair the correction severely.
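
    The sketch below illustrates the general approach described: the delay is estimated from the cross-correlation of an ideal step with the measured step response, and a two-sided FIR deconvolution filter is fitted by regression so that the filtered response approximates the step (a synthetic first-order lag is used as the distorting process; details are illustrative, not the authors' implementation).

      import numpy as np

      def estimate_delay(step_input, response):
          # Lag (in samples) of the response relative to the ideal step,
          # from the peak of their cross-correlation.
          xc = np.correlate(response - response.mean(),
                            step_input - step_input.mean(), mode="full")
          return int(xc.argmax()) - (len(step_input) - 1)

      def fit_deconvolution_filter(response, target, half_width=15):
          # Least-squares two-sided FIR filter h with (h * response)[n] ~ target[n].
          rows, rhs = [], []
          for n in range(half_width, len(response) - half_width):
              rows.append(response[n - half_width:n + half_width + 1][::-1])
              rhs.append(target[n])
          h, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
          return h

      # Synthetic example: a first-order lag distorts an ideal step
      t = np.arange(200)
      step = (t >= 50).astype(float)
      resp = np.zeros_like(step)
      for i in range(1, len(step)):
          resp[i] = resp[i - 1] + (step[i] - resp[i - 1]) / 8.0   # lag, tau ~ 8 samples

      print("estimated delay (samples):", estimate_delay(step, resp))
      h = fit_deconvolution_filter(resp, step)
      corrected = np.convolve(resp, h, mode="same")               # apply the filter
      print("rms error after correction:",
            float(np.sqrt(np.mean((corrected - step) ** 2))))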

  13. Detection and correction of prescription errors by an emergency department pharmacy service.

    PubMed

    Stasiak, Philip; Afilalo, Marc; Castelino, Tanya; Xue, Xiaoqing; Colacone, Antoinette; Soucy, Nathalie; Dankoff, Jerrald

    2014-05-01

    Emergency departments (EDs) are recognized as a high-risk setting for prescription errors. Pharmacist involvement may be important in reviewing prescriptions to identify and correct errors. The objectives of this study were to describe the frequency and type of prescription errors detected by pharmacists in EDs, determine the proportion of errors that could be corrected, and identify factors associated with prescription errors. This prospective observational study was conducted in a tertiary care teaching ED on 25 consecutive weekdays. Pharmacists reviewed all documented prescriptions and flagged and corrected errors for patients in the ED. We collected information on patient demographics, details on prescription errors, and the pharmacists' recommendations. A total of 3,136 ED prescriptions were reviewed. The proportion of prescriptions in which a pharmacist identified an error was 3.2% (99 of 3,136; 95% confidence interval [CI] 2.5-3.8). The types of identified errors were wrong dose (28 of 99, 28.3%), incomplete prescription (27 of 99, 27.3%), wrong frequency (15 of 99, 15.2%), wrong drug (11 of 99, 11.1%), wrong route (1 of 99, 1.0%), and other (17 of 99, 17.2%). The pharmacy service intervened and corrected 78 (78 of 99, 78.8%) errors. Factors associated with prescription errors were patient age over 65 (odds ratio [OR] 2.34; 95% CI 1.32-4.13), prescriptions with more than one medication (OR 5.03; 95% CI 2.54-9.96), and those written by emergency medicine residents compared to attending emergency physicians (OR 2.21, 95% CI 1.18-4.14). Pharmacists in a tertiary ED are able to correct the majority of prescriptions in which they find errors. Errors are more likely to be identified in prescriptions written for older patients, those containing multiple medication orders, and those prescribed by emergency residents.

  14. A concise guide to clinical reasoning.

    PubMed

    Daly, Patrick

    2018-04-30

    What constitutes clinical reasoning is a disputed subject regarding the processes underlying accurate diagnosis, the importance of patient-specific versus population-based data, and the relation between virtue and expertise in clinical practice. In this paper, I present a model of clinical reasoning that identifies and integrates the processes of diagnosis, prognosis, and therapeutic decision making. The model is based on the generalized empirical method of Bernard Lonergan, which approaches inquiry with equal attention to the subject who investigates and the object under investigation. After identifying the structured operations of knowing and doing and relating these to a self-correcting cycle of learning, I correlate levels of inquiry regarding what-is-going-on and what-to-do to the practical and theoretical elements of clinical reasoning. I conclude that this model provides a methodical way to study questions regarding the operations of clinical reasoning as well as what constitute significant clinical data, clinical expertise, and virtuous health care practice. © 2018 John Wiley & Sons, Ltd.

  15. Off-target model based OPC

    NASA Astrophysics Data System (ADS)

    Lu, Mark; Liang, Curtis; King, Dion; Melvin, Lawrence S., III

    2005-11-01

    Model-based optical proximity correction (OPC) has become an indispensable tool for achieving wafer pattern to design fidelity at current manufacturing process nodes. Most model-based OPC is performed considering the nominal process condition, with limited consideration of through-process manufacturing robustness. This study examines the use of off-target process models - models that represent non-nominal process states such as would occur with a dose or focus variation - to understand and manipulate the final pattern correction toward a more process-robust configuration. The study will first examine and validate the process of generating an off-target model, then examine the quality of the off-target model. Once the off-target model is proven, it will be used to demonstrate methods of generating process-robust corrections. The concepts are demonstrated using a 0.13 μm logic gate process. Preliminary indications show success in both off-target model production and process-robust corrections. With these off-target models as tools, mask production cycle times can be reduced.

  16. Corrective Action Plan in response to the March 1992 Tiger Team Assessment of the Ames Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1992-11-20

    On March 5, 1992, a Department of Energy (DOE) Tiger Team completed an assessment of the Ames Laboratory, located in Ames, Iowa. The purpose of the assessment was to provide the Secretary of Energy with a report on the status and performance of Environment, Safety and Health (ES&H) programs at Ames Laboratory. Detailed findings of the assessment are presented in the report, DOE/EH-0237, Tiger Team Assessment of the Ames Laboratory. This document, the Ames Laboratory Corrective Action Plan (ALCAP), presents corrective actions to overcome deficiencies cited in the Tiger Team Assessment. The Tiger Team identified 53 Environmental findings, from which the Team derived four key findings. In the Safety and Health (S&H) area, 126 concerns were identified, eight of which were designated Category II (there were no Category I concerns). Seven key concerns were derived from the 126 concerns. The Management Subteam developed 19 findings which have been summarized in four key findings. The eight S&H Category II concerns identified in the Tiger Team Assessment were given prompt management attention. Actions to address these deficiencies have been described in individual corrective action plans, which were submitted to DOE Headquarters on March 20, 1992. The ALCAP includes actions described in this early response, as well as a long-term strategy and framework for correcting all remaining deficiencies. Accordingly, the ALCAP presents the organizational structure, management systems, and specific responses that are being developed to implement corrective actions and to resolve root causes identified in the Tiger Team Assessment. The Chicago Field Office (CH), Iowa State University (ISU), the Institute for Physical Research and Technology (IPRT), and Ames Laboratory prepared the ALCAP with input from the DOE Headquarters, Office of Energy Research (ER).

  18. CORRECTIVE ACTION DECISION DOCUMENT FOR CORRECTIVE ACTION UNIT 427: AREA 3 SEPTIC WASTE SYSTEMS 2 AND 6, TONOPAH TEST RANGE, NEVADA, REVISION 0, JUNE 1998

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    ITLV.

    1998-06-01

    This Corrective Action Decision Document has been prepared for the Area 3 Septic Waste Systems 2 and 6 (Corrective Action Unit 427) in accordance with the Federal Facility Agreement and Consent Order of 1996 (FFACO, 1996). Corrective Action Unit 427 is located at the Tonopah Test Range, Nevada, and is comprised of the following Corrective Action Sites, each an individual septic waste system (DOE/NV, 1996a): Septic Waste System 2 is Corrective Action Site Number 03-05-002-SW02. Septic Waste System 6 is Corrective Action Site Number 03-05-002-SW06. The purpose of this Corrective Action Decision Document is to identify and provide a rationale for the selection of a recommended corrective action alternative for each Corrective Action Site. The scope of this Corrective Action Decision Document consists of the following tasks: Develop corrective action objectives. Identify corrective action alternative screening criteria. Develop corrective action alternatives. Perform detailed and comparative evaluations of the corrective action alternatives in relation to the corrective action objectives and screening criteria. Recommend and justify a preferred corrective action alternative for each CAS. From November 1997 through January 1998, a corrective action investigation was performed as set forth in the Corrective Action Investigation Plan for Corrective Action Unit No. 427: Area 3 Septic Waste System Numbers 2 and 6, Tonopah Test Range, Nevada (DOE/NV, 1997b). Details can be found in Appendix A of this document. The results indicated that contamination is present in some portions of the CAU and not in others as described in Table ES-1 and shown in Figure A.2-2 of Appendix A. Based on the potential exposure pathways, the following corrective action objectives have been identified for Corrective Action Unit 427: Prevent or mitigate human exposure to subsurface soils containing TPH at concentrations greater than 100 milligrams per kilogram (NAC, 1996b). Close Septic Tank 33-5 in accordance with Nevada Administrative Code 459 (NAC, 1996c). Prevent adverse impacts to groundwater quality. Based on the review of existing data, future land use, and current operations at the Tonopah Test Range, the following alternatives were developed for consideration at the Area 3 Septic Waste Systems 2 and 6: Alternative 1 - No Further Action; Alternative 2 - Closure of Septic Tank 33-5 and Administrative Controls; Alternative 3 - Closure of Septic Tank 33-5, Excavation, and Disposal. The corrective action alternatives were evaluated based on four general corrective action standards and five remedy selection decision factors. Based on the results of this evaluation, the preferred alternative for Corrective Action Unit 427 is Alternative 2, Closure of Septic Tank 33-5 and Administrative Controls. The preferred corrective action alternative was evaluated on technical merit, focusing on performance, reliability, feasibility, and safety. The alternative was judged to meet all requirements for the technical components evaluated. The alternative meets all applicable state and federal regulations for closure of the site and will reduce potential future exposure pathways to the contaminated soils. During corrective action implementation, this alternative will present minimal potential threat to site workers who come in contact with the waste. However, procedures will be developed and implemented to ensure worker health and safety.

  19. Discrimination of biological and chemical threat simulants in residue mixtures on multiple substrates.

    PubMed

    Gottfried, Jennifer L

    2011-07-01

    The potential of laser-induced breakdown spectroscopy (LIBS) to discriminate biological and chemical threat simulant residues prepared on multiple substrates and in the presence of interferents has been explored. The simulant samples tested include Bacillus atrophaeus spores, Escherichia coli, MS-2 bacteriophage, α-hemolysin from Staphylococcus aureus, 2-chloroethyl ethyl sulfide, and dimethyl methylphosphonate. The residue samples were prepared on polycarbonate, stainless steel and aluminum foil substrates by Battelle Eastern Science and Technology Center. LIBS spectra were collected by Battelle on a portable LIBS instrument developed by A3 Technologies. This paper presents the chemometric analysis of the LIBS spectra using partial least-squares discriminant analysis (PLS-DA). The performance of PLS-DA models developed from the full LIBS spectra and from selected emission intensities and ratios has been compared. The full-spectra models generally provided better classification results owing to the inclusion of substrate emission features; however, the intensity/ratio models were able to correctly identify more types of simulant residues in the presence of interferents. The fusion of the two types of PLS-DA models resulted in a significant improvement in classification performance for models built using multiple substrates. In addition to identifying the major components of residue mixtures, minor components such as growth media and solvents can be identified with an appropriately designed PLS-DA model.
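
    PLS-DA is commonly implemented by regressing one-hot encoded class labels on the spectra with partial least squares and assigning each spectrum to the class with the largest predicted response. The sketch below shows that generic scheme on synthetic spectra (scikit-learn assumed); it is an illustration of the technique, not the authors' software.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.preprocessing import LabelBinarizer
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)

      # Synthetic "spectra": 3 residue classes, 300 spectral channels each
      n_per_class, n_channels = 40, 300
      X, y = [], []
      for label in range(3):
          center = rng.uniform(50, 250)                 # class-specific emission line
          chans = np.arange(n_channels)
          base = np.exp(-0.5 * ((chans - center) / 5.0) ** 2)
          X.append(base + 0.05 * rng.standard_normal((n_per_class, n_channels)))
          y += [f"simulant_{label}"] * n_per_class
      X, y = np.vstack(X), np.array(y)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                random_state=1, stratify=y)

      # PLS-DA: regress one-hot class labels on spectra, classify by argmax
      lb = LabelBinarizer()
      Y_tr = lb.fit_transform(y_tr)
      pls = PLSRegression(n_components=5).fit(X_tr, Y_tr)
      pred = lb.classes_[np.argmax(pls.predict(X_te), axis=1)]
      print("accuracy:", np.mean(pred == y_te))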

  20. An evidence-based patient-centered method makes the biopsychosocial model scientific.

    PubMed

    Smith, Robert C; Fortin, Auguste H; Dwamena, Francesca; Frankel, Richard M

    2013-06-01

    To review the scientific status of the biopsychosocial (BPS) model and to propose a way to improve it. Engel's BPS model added patients' psychological and social health concerns to the highly successful biomedical model. He proposed that the BPS model could make medicine more scientific, but its use in education, clinical care, and, especially, research remains minimal. Many aver correctly that the present model cannot be defined in a consistent way for the individual patient, making it untestable and non-scientific. This stems from not obtaining relevant BPS data systematically, where one interviewer obtains the same information another would. Recent research by two of the authors has produced similar patient-centered interviewing methods that are repeatable and elicit just the relevant patient information needed to define the model at each visit. We propose that the field adopt these evidence-based methods as the standard for identifying the BPS model. Identifying a scientific BPS model in each patient with an agreed-upon, evidence-based patient-centered interviewing method can produce a quantum leap ahead in both research and teaching. A scientific BPS model can give us more confidence in being humanistic. In research, we can conduct more rigorous studies to inform better practices. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  1. Joint use of over- and under-sampling techniques and cross-validation for the development and assessment of prediction models.

    PubMed

    Blagus, Rok; Lusa, Lara

    2015-11-04

    Prediction models are used in clinical research to develop rules that can accurately predict the outcome of patients based on some of their characteristics. They represent a valuable tool in the decision making process of clinicians and health policy makers, as they enable them to estimate the probability that patients have or will develop a disease, will respond to a treatment, or that their disease will recur. The interest devoted to prediction models in the biomedical community has been growing in the last few years. Often the data used to develop the prediction models are class-imbalanced, as only a few patients experience the event (and therefore belong to the minority class). Prediction models developed using class-imbalanced data tend to achieve sub-optimal predictive accuracy in the minority class. This problem can be diminished by using sampling techniques aimed at balancing the class distribution. These techniques include under- and oversampling, where a fraction of the majority class samples are retained in the analysis or new samples from the minority class are generated. The correct assessment of how the prediction model is likely to perform on independent data is of crucial importance; in the absence of an independent data set, cross-validation is normally used. While the importance of correct cross-validation is well documented in the biomedical literature, the challenges posed by the joint use of sampling techniques and cross-validation have not been addressed. We show that care must be taken to ensure that cross-validation is performed correctly on sampled data, and that the risk of overestimating the predictive accuracy is greater when oversampling techniques are used. Examples based on the re-analysis of real datasets and simulation studies are provided. We identify some results from the biomedical literature where cross-validation was performed incorrectly and where we expect that the performance of oversampling techniques was heavily overestimated.
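
    The pitfall highlighted in the abstract can be made concrete with a small sketch: oversampling before cross-validation leaks copies of minority-class samples into the test folds, whereas oversampling inside each training fold does not. The example below assumes scikit-learn and imbalanced-learn and uses synthetic data; it is a generic illustration, not the authors' code.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score, StratifiedKFold
      from imblearn.over_sampling import RandomOverSampler
      from imblearn.pipeline import Pipeline   # sampler-aware pipeline

      X, y = make_classification(n_samples=500, weights=[0.95, 0.05], random_state=0)
      cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

      # WRONG: oversample first, then cross-validate -- copies of minority samples
      # end up in both training and test folds, inflating the estimated accuracy.
      X_os, y_os = RandomOverSampler(random_state=0).fit_resample(X, y)
      wrong = cross_val_score(LogisticRegression(max_iter=1000), X_os, y_os,
                              cv=cv, scoring="roc_auc").mean()

      # CORRECT: oversample inside each training fold only.
      pipe = Pipeline([("oversample", RandomOverSampler(random_state=0)),
                       ("clf", LogisticRegression(max_iter=1000))])
      right = cross_val_score(pipe, X, y, cv=cv, scoring="roc_auc").mean()

      print(f"AUC with oversampling before CV (optimistic): {wrong:.3f}")
      print(f"AUC with oversampling inside CV folds:        {right:.3f}")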

  2. Designing multifocal corneal models to correct presbyopia by laser ablation

    NASA Astrophysics Data System (ADS)

    Alarcón, Aixa; Anera, Rosario G.; Del Barco, Luis Jiménez; Jiménez, José R.

    2012-01-01

    Two multifocal corneal models and an aspheric model designed to correct presbyopia by corneal photoablation were evaluated. The design of each model was optimized to achieve the best possible visual quality for both near and distance vision. In addition, we evaluated the effect of miosis and pupil decentration on visual quality. The model with the central zone for near vision provides better results, since it requires less ablated corneal surface area, permits higher addition values, presents more stable visual quality with pupil-size variations, and produces lower higher-order aberrations.

  3. Corrective Action Decision Document for Corrective Action Unit 563: Septic Systems, Nevada Test Site, Nevada, Revision 0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grant Evenson

    2008-02-01

    This Corrective Action Decision Document has been prepared for Corrective Action Unit (CAU) 563, Septic Systems, in accordance with the Federal Facility Agreement and Consent Order (FFACO, 1996; as amended January 2007). The corrective action sites (CASs) for CAU 563 are located in Areas 3 and 12 of the Nevada Test Site, Nevada, and are comprised of the following four sites: • 03-04-02, Area 3 Subdock Septic Tank; • 03-59-05, Area 3 Subdock Cesspool; • 12-59-01, Drilling/Welding Shop Septic Tanks; • 12-60-01, Drilling/Welding Shop Outfalls. The purpose of this Corrective Action Decision Document is to identify and provide the rationale for the recommendation of a corrective action alternative (CAA) for the four CASs within CAU 563. Corrective action investigation (CAI) activities were performed from July 17 through November 19, 2007, as set forth in the CAU 563 Corrective Action Investigation Plan (NNSA/NSO, 2007). Analytes detected during the CAI were evaluated against appropriate final action levels (FALs) to identify the contaminants of concern (COCs) for each CAS. The results of the CAI identified COCs at one of the four CASs in CAU 563 and required the evaluation of CAAs. Assessment of the data generated from investigation activities conducted at CAU 563 revealed the following: • CASs 03-04-02, 03-59-05, and 12-60-01 do not contain contamination at concentrations exceeding the FALs. • CAS 12-59-01 contains arsenic and chromium contamination above FALs in surface and near-surface soils surrounding a stained location within the site. Based on the evaluation of analytical data from the CAI, review of future and current operations at CAS 12-59-01, and the detailed and comparative analysis of the potential CAAs, the following corrective actions are recommended for CAU 563.

  4. Ability of ICU Health-Care Professionals to Identify Patient-Ventilator Asynchrony Using Waveform Analysis.

    PubMed

    Ramirez, Ivan I; Arellano, Daniel H; Adasme, Rodrigo S; Landeros, Jose M; Salinas, Francisco A; Vargas, Alvaro G; Vasquez, Francisco J; Lobos, Ignacio A; Oyarzun, Magdalena L; Restrepo, Ruben D

    2017-02-01

    Waveform analysis by visual inspection can be a reliable, noninvasive, and useful tool for detecting patient-ventilator asynchrony. However, it is a skill that requires a properly trained professional. This observational study was conducted in 17 urban ICUs. Health-care professionals (HCPs) working in these ICUs were asked to recognize different types of asynchrony shown in 3 evaluation videos. The health-care professionals were categorized according to years of experience, prior training in mechanical ventilation, profession, and number of asynchronies identified correctly. A total of 366 HCPs were evaluated. Statistically significant differences were found when HCPs with and without prior training in mechanical ventilation (trained vs non-trained HCPs) were compared according to the number of asynchronies detected correctly (of the HCPs who identified 3 asynchronies, 63 [81%] trained vs 15 [19%] non-trained, P < .001; 2 asynchronies, 72 [65%] trained vs 39 [35%] non-trained, P = .034; 1 asynchrony, 55 [47%] trained vs 61 [53%] non-trained, P = .02; 0 asynchronies, 17 [28%] trained vs 44 [72%] non-trained, P < .001). HCPs who had prior training in mechanical ventilation also increased, nearly 4-fold, their odds of identifying ≥2 asynchronies correctly (odds ratio 3.67, 95% CI 1.93-6.96, P < .001). However, neither years of experience nor profession were associated with the ability of HCPs to identify asynchrony. HCPs who have specific training in mechanical ventilation increase their ability to identify asynchrony using waveform analysis. Neither experience nor profession proved to be a relevant factor to identify asynchrony correctly using waveform analysis. Copyright © 2017 by Daedalus Enterprises.

  5. Reduced atomic pair-interaction design (RAPID) model for simulations of proteins.

    PubMed

    Ni, Boris; Baumketner, Andrij

    2013-02-14

    Increasingly, theoretical studies of proteins focus on large systems. This trend demands the development of computational models that are fast, to overcome the growing complexity, and accurate, to capture the physically relevant features. To address this demand, we introduce a protein model that uses all-atom architecture to ensure the highest level of chemical detail while employing effective pair potentials to represent the effect of solvent to achieve the maximum speed. The effective potentials are derived for amino acid residues based on the condition that the solvent-free model matches the relevant pair-distribution functions observed in explicit solvent simulations. As a test, the model is applied to alanine polypeptides. For the chain with 10 amino acid residues, the model is found to reproduce properly the native state and its population. Small discrepancies are observed for other folding properties and can be attributed to the approximations inherent in the model. The transferability of the generated effective potentials is investigated in simulations of a longer peptide with 25 residues. A minimal set of potentials is identified that leads to qualitatively correct results in comparison with the explicit solvent simulations. Further tests, conducted for multiple peptide chains, show that the transferable model correctly reproduces the experimentally observed tendency of polyalanines to aggregate into β-sheets more strongly with the growing length of the peptide chain. Taken together, the reported results suggest that the proposed model could be used to successfully simulate folding and aggregation of small peptides in atomic detail. Further tests are needed to assess the strengths and limitations of the model more thoroughly.
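
    The matching condition described above (effective potentials chosen so that the solvent-free model reproduces pair-distribution functions from explicit-solvent simulations) is commonly realized with (iterative) Boltzmann inversion. The sketch below shows that generic scheme on a toy g(r); it is not necessarily the exact procedure used for the RAPID model.

      import numpy as np

      kT = 0.593  # kcal/mol at ~298 K

      def boltzmann_inversion(g_target):
          """Initial effective pair potential from a target pair-distribution
          function g(r): U0(r) = -kT * ln g(r)."""
          return -kT * np.log(np.clip(g_target, 1e-8, None))

      def ibi_update(U, g_model, g_target):
          """One iterative Boltzmann inversion step: nudge the potential so the
          model g(r) moves toward the target g(r):
              U_new(r) = U(r) + kT * ln(g_model(r) / g_target(r))."""
          ratio = np.clip(g_model, 1e-8, None) / np.clip(g_target, 1e-8, None)
          return U + kT * np.log(ratio)

      # Toy target g(r): short-range depletion, a solvation peak, then ~1
      r = np.linspace(2.0, 12.0, 200)
      g_target = 1.0 + 0.8 * np.exp(-((r - 4.5) / 0.6) ** 2) - np.exp(-(r / 2.5) ** 4)
      U0 = boltzmann_inversion(g_target)
      print(U0[:5])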

  6. Problems and Limitations of Satellite Image Orientation for Determination of Height Models

    NASA Astrophysics Data System (ADS)

    Jacobsen, K.

    2017-05-01

    The usual satellite image orientation is based on bias-corrected rational polynomial coefficients (RPC). The RPC describe the direct sensor orientation of the satellite images. The locations of the projection centres are today determined without problems, but the attitudes impose an accuracy limit. Very high resolution satellites today are very agile, able to shift the pointed area by over 200 km within 10 to 11 seconds. The corresponding fast attitude acceleration of the satellite may cause a jitter which cannot be expressed by the third-order RPC, even if it is recorded by the gyros. Only a correction of the image geometry would help, but usually this is not done. The first indication of jitter problems is given by systematic errors of the y-parallaxes (py) in the intersection of corresponding points during the computation of ground coordinates. These y-parallaxes have a limited influence on the ground coordinates, but similar problems can be expected for the x-parallaxes, which directly determine the object height. Systematic y-parallaxes are shown for Ziyuan-3 (ZY3), WorldView-2 (WV2), Pleiades, Cartosat-1, IKONOS and GeoEye. Some of them show clear jitter effects. In addition, linear trends of py can be seen. Linear trends in py and tilts of the computed height models may be caused by limited accuracy of the attitude registration, but also by bias correction with affinity transformation. The bias correction is based on ground control points (GCPs). The accuracy of the GCPs usually does not cause limitations, but the identification of the GCPs in the images may be difficult. A 2-dimensional bias-corrected RPC orientation by affinity transformation may cause tilts of the generated height models, but due to large affine image deformations some satellites, such as Cartosat-1, have to be handled with bias correction by affinity transformation. Instead of a 2-dimensional RPC orientation, a 3-dimensional orientation is also possible, which respects the object height more than the 2-dimensional orientation. The 3-dimensional orientation showed advantages for orientation based on a limited number of GCPs, but in the case of poor GCP distribution it may also have negative effects. For some of the satellites used, the bias correction by affinity transformation showed advantages, but for others the bias correction by shift led to a better levelling of the generated height models, even if the root mean square (RMS) differences at the GCPs were larger than for bias correction by affinity transformation. The generated height models can be analyzed and corrected with reference height models. For the data sets used, accurate reference height models are available, but an analysis and correction with the freely available SRTM digital surface model (DSM) or ALOS World 3D (AW3D30) is also possible and leads to similar results. The comparison of the generated height models with the reference DSM shows some height undulations, but the major accuracy influence is caused by tilts of the height models. Some height model undulations reach up to 50% of the ground sampling distance (GSD); this is not negligible, but it is not strongly visible in the standard deviations of the height. In any case, an improvement of the generated height models is possible with reference height models. If such corrections are applied, they compensate for possible negative effects of the type of bias correction or of 2-dimensional versus 3-dimensional orientation handling.
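
    The two bias-correction variants discussed above (shift versus affinity transformation) can be summarized with a small sketch: both are estimated from the discrepancies between GCP positions measured in the image and the positions predicted by the RPCs. The numbers and function names below are synthetic and illustrative only.

      import numpy as np

      def fit_shift_bias(img_obs, img_rpc):
          """Bias correction by shift: mean discrepancy between measured GCP image
          positions and the positions projected with the RPCs."""
          return (np.asarray(img_obs) - np.asarray(img_rpc)).mean(axis=0)

      def fit_affine_bias(img_obs, img_rpc):
          """Bias correction by affinity transformation: least-squares affine mapping
          observed ~= [x_rpc, y_rpc, 1] @ A, estimated from the GCPs."""
          X = np.column_stack([img_rpc, np.ones(len(img_rpc))])
          A, *_ = np.linalg.lstsq(X, img_obs, rcond=None)      # shape (3, 2)
          return A

      # Synthetic GCPs: RPC-projected positions plus a small affine discrepancy
      rng = np.random.default_rng(0)
      img_rpc = rng.uniform(0, 1000, size=(8, 2))
      true_A = np.array([[1.0002, -0.0001], [0.0001, 0.9999], [1.5, -2.0]])
      img_obs = np.column_stack([img_rpc, np.ones(8)]) @ true_A

      shift = fit_shift_bias(img_obs, img_rpc)
      A = fit_affine_bias(img_obs, img_rpc)
      res_shift = img_obs - (img_rpc + shift)
      res_affine = img_obs - np.column_stack([img_rpc, np.ones(8)]) @ A
      print(np.abs(res_shift).max(), np.abs(res_affine).max())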

  7. [Baseline correction of spectra for the inversion of chlorophyll-a concentration in turbid water].

    PubMed

    Wei, Yu-Chun; Wang, Guo-Xiang; Cheng, Chun-Mei; Zhang, Jing; Sun, Xiao-Peng

    2012-09-01

    Suspended particle material is the main factor affecting the remote sensing inversion of chlorophyll-a concentration (Chla) in turbid water. Based on the optical properties of suspended material in water, the present paper proposes a linear baseline correction method to reduce the suspended-particle contribution to the spectrum measured above the water surface. The linear baseline is defined as the straight line connecting the reflectance at 450 and 750 nm, and the baseline correction subtracts this line from the spectral reflectance. Analysis of in situ field data from Meiliangwan, Taihu Lake, collected in April 2011 and March 2010, shows that the linear baseline correction can improve the inversion precision of Chla and produce better model diagnostics. For the March 2010 data, the RMSE of the band-ratio model built from the original spectra is 4.11 mg x m(-3), while that built from the baseline-corrected spectra is 3.58 mg x m(-3). Meanwhile, the residual distribution and homoscedasticity of the model built from the baseline-corrected spectra are clearly improved. The model RMSE for April 2011 shows a similar result. The authors suggest using linear baseline correction as a spectral preprocessing step to improve Chla inversion accuracy in turbid water without algal blooms.
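
    The baseline correction described above is straightforward to express in code: subtract from the spectrum the straight line connecting the reflectance at 450 and 750 nm. The sketch below uses a synthetic spectrum and illustrative function names.

      import numpy as np

      def linear_baseline_correction(wavelengths, reflectance, w1=450.0, w2=750.0):
          """Subtract the straight line connecting the reflectance at w1 and w2 nm
          from the spectrum (the result is only meaningful within [w1, w2])."""
          wavelengths = np.asarray(wavelengths, float)
          reflectance = np.asarray(reflectance, float)
          r1 = np.interp(w1, wavelengths, reflectance)
          r2 = np.interp(w2, wavelengths, reflectance)
          baseline = r1 + (r2 - r1) * (wavelengths - w1) / (w2 - w1)
          return reflectance - baseline

      # Example with a synthetic above-water spectrum (reflectance peak near 700 nm)
      wl = np.linspace(400, 900, 251)
      refl = 0.02 + 1e-5 * wl + 0.01 * np.exp(-0.5 * ((wl - 700) / 15) ** 2)
      corrected = linear_baseline_correction(wl, refl)
      print(corrected[np.argmin(np.abs(wl - 700))])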

  8. Statistical Calibration and Validation of a Homogeneous Ventilated Wall-Interference Correction Method for the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.

    2005-01-01

    Wind tunnel experiments will continue to be a primary source of validation data for many types of mathematical and computational models in the aerospace industry. The increased emphasis on accuracy of data acquired from these facilities requires understanding of the uncertainty of not only the measurement data but also any correction applied to the data. One of the largest and most critical corrections made to these data is due to wall interference. In an effort to understand the accuracy and suitability of these corrections, a statistical validation process for wall interference correction methods has been developed. This process is based on the use of independent cases which, after correction, are expected to produce the same result. Comparison of these independent cases with respect to the uncertainty in the correction process establishes a domain of applicability based on the capability of the method to provide reasonable corrections with respect to customer accuracy requirements. The statistical validation method was applied to the version of the Transonic Wall Interference Correction System (TWICS) recently implemented in the National Transonic Facility at NASA Langley Research Center. The TWICS code generates corrections for solid and slotted wall interference in the model pitch plane based on boundary pressure measurements. Before validation could be performed on this method, it was necessary to calibrate the ventilated wall boundary condition parameters. Discrimination comparisons are used to determine the most representative of three linear boundary condition models which have historically been used to represent longitudinally slotted test section walls. Of the three linear boundary condition models implemented for ventilated walls, the general slotted wall model was the most representative of the data. The TWICS code using the calibrated general slotted wall model was found to be valid to within the process uncertainty for test section Mach numbers less than or equal to 0.60. The scatter among the mean corrected results of the bodies of revolution validation cases was within one count of drag on a typical transport aircraft configuration for Mach numbers at or below 0.80 and two counts of drag for Mach numbers at or below 0.90.

  9. Validation of the Two-Layer Model for Correcting Clear Sky Reflectance Near Clouds

    NASA Technical Reports Server (NTRS)

    Wen, Guoyong; Marshak, Alexander; Evans, K. Frank; Vamal, Tamas

    2014-01-01

    A two-layer model was developed in our earlier studies to estimate the clear sky reflectance enhancement near clouds. This simple model accounts for the radiative interaction between boundary layer clouds and the molecular layer above, the major contribution to the reflectance enhancement near clouds at short wavelengths. We use LES/SHDOM simulated 3D radiation fields to validate the two-layer model for reflectance enhancement at 0.47 micrometer. We find: (a) the simple model captures the viewing angle dependence of the reflectance enhancement near clouds, suggesting the physics of this model is correct; and (b) the magnitude of the two-layer modeled enhancement agrees reasonably well with the "truth", with some expected underestimation. We further extend our model to include cloud-surface interaction using the Poisson model for broken clouds. We found that including cloud-surface interaction improves the correction, though it can introduce some overcorrection for large cloud albedo, large cloud optical depth, large cloud fraction, and large cloud aspect ratio. This overcorrection can be reduced by excluding scenes (10 km x 10 km) with large cloud fraction, for which the Poisson model is not designed. Further research is underway to account for the contribution of cloud-aerosol radiative interaction to the enhancement.

  10. Getting the justification for research ethics review right.

    PubMed

    Dunn, Michael

    2013-08-01

    Dyck and Allen claim that the current model for mandatory ethical review of research involving human participants is unethical once the harms that accrue from the review process are identified. However, the assumptions upon which the authors assert that this model of research ethics governance is justified are false. In this commentary, I aim to correct these assumptions, and provide the right justificatory account of the requirement for research ethics review. This account clarifies why the subsequent arguments that Dyck and Allen make in the paper lack force, and why the 'governance problem' in research ethics that they allude to ought to be explained differently.

  11. Response to Germann's "Comment on 'theory for source-responsive and free-surface film modeling of unsaturated flow'"

    USGS Publications Warehouse

    Nimmo, J.R.

    2010-01-01

    Germann's (2010) comment helpfully presents supporting evidence that I have missed, notes items that need clarification or correction, and stimulates discussion of what is needed for improved theory of unsaturated flow. Several points from this comment relate not only to specific features of the content of my paper (Nimmo, 2010), but also to the broader question of what methodology is appropriate for developing an applied earth science. Accordingly, before addressing specific points that Germann identified, I present here some considerations of purpose and background relevant to evaluation of the unsaturated flow model of Nimmo (2010).

  12. An intelligent subtitle detection model for locating television commercials.

    PubMed

    Huang, Yo-Ping; Hsu, Liang-Wei; Sandnes, Frode-Eika

    2007-04-01

    A strategy for locating television (TV) commercials in TV programs is proposed. Based on the observation that most TV commercials do not have subtitles, the first stage exploits six subtitle constraints and an adaptive neurofuzzy inference system model to determine whether a frame contains a subtitle or not. The second stage involves locating the mark-in/mark-out points using a genetic algorithm. An interactive user interface allows users to efficiently identify and fine-tune the exact boundaries separating the commercials from the program content. Furthermore, erroneous boundaries are manually corrected. Experimental results show that the precision rate and recall rates exceed 90%.

  13. Ocean observations with EOS/MODIS: Algorithm Development and Post Launch Studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.

    1998-01-01

    Significant accomplishments made during the present reporting period: (1) We expanded our "spectral-matching" algorithm (SMA), for identifying the presence of absorbing aerosols and simultaneously performing atmospheric correction and derivation of the ocean's bio-optical parameters, to the point where it could be added as a subroutine to the MODIS water-leaving radiance algorithm; (2) A modification to the SMA that does not require detailed aerosol models has been developed. This is important as the requirement for realistic aerosol models has been a weakness of the SMA; and (3) We successfully acquired micro pulse lidar data in a Saharan dust outbreak during ACE-2 in the Canary Islands.

  14. Knowledge modeling in image-guided neurosurgery: application in understanding intraoperative brain shift

    NASA Astrophysics Data System (ADS)

    Cohen-Adad, Julien; Paul, Perrine; Morandi, Xavier; Jannin, Pierre

    2006-03-01

    During an image-guided neurosurgery procedure, the neuronavigation system is subject to inaccuracy because of anatomical deformations which induce a gap between the preoperative images and their anatomical reality. Thus, the objective of many research teams is to quantify these deformations in order to update preoperative images. Anatomical intraoperative deformations correspond to a complex spatio-temporal phenomenon. Our objective is to identify the parameters implicated in these deformations and to use these parameters as constraints for systems dedicated to updating preoperative images. In order to identify these parameters of deformation, we followed the iterative methodology used for cognitive system design: identification, conceptualization, formalization, implementation and validation. A review of the state of the art on cortical deformations was carried out in order to identify relevant parameters probably involved in the deformations. As a first step, 30 parameters have been identified and described following an ontological approach. They were formalized into a Unified Modeling Language (UML) class diagram. We implemented that model in a web-based application in order to populate a database. Two surgical cases have been studied so far. After having entered enough surgical cases for data mining purposes, we expect to identify the most relevant and influential parameters and to gain a better ability to understand the deformation phenomenon. This original approach is part of a global system aiming at quantifying and correcting anatomical deformations.

  15. Quantum corrections to quasi-periodic solution of Sine-Gordon model and periodic solution of phi4 model

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, G.; Leble, S.

    2014-03-01

    Analytical form of quantum corrections to quasi-periodic solution of Sine-Gordon model and periodic solution of phi4 model is obtained through zeta function regularisation with account of all rest variables of a d-dimensional theory. Qualitative dependence of quantum corrections on parameters of the classical systems is also evaluated for a much broader class of potentials u(x) = b²f(bx) + C with b and C as arbitrary real constants.

  16. Assessment of a Bidirectional Reflectance Distribution Correction of Above-Water and Satellite Water-Leaving Radiance in Coastal Waters

    DTIC Science & Technology

    2012-01-10

    A model is proposed to correct above-water and satellite water-leaving radiance data for bidirectional effects. The proposed model is first validated with a one-year time series of in situ ... proposed model over the current one, demonstrating the need for a specific case 2 water BRDF correction algorithm as well as the feasibility of enhancing ...

  17. Integrating Model-Based Transmission Reduction into a multi-tier architecture

    NASA Astrophysics Data System (ADS)

    Straub, J.

    A multi-tier architecture consists of numerous craft as part of the system's orbital, aerial, and surface tiers. Each tier is able to collect progressively greater levels of information. Generally, craft from lower-level tiers are deployed to a target of interest based on its identification by a higher-level craft. While the architecture promotes significant amounts of science being performed in parallel, this may overwhelm the computational and transmission capabilities of higher-tier craft and links (particularly the deep space link back to Earth). Because of this, a new paradigm in in-situ data processing is required. Model-based transmission reduction (MBTR) is such a paradigm. Under MBTR, each node (whether a single spacecraft in orbit of the Earth or another planet or a member of a multi-tier network) is given an a priori model of the phenomenon that it is assigned to study. It performs activities to validate this model. If the model is found to be erroneous, corrective changes are identified, assessed to ensure their significance for being passed on, and prioritized for transmission. A limited amount of verification data is sent with each MBTR assertion message to allow those that might rely on the data to validate the correct operation of the spacecraft and MBTR engine onboard. Integrating MBTR with a multi-tier framework creates an MBTR hierarchy. Higher levels of the MBTR hierarchy task lower levels with data collection and assessment tasks that are required to validate or correct elements of its model. A model of the expected conditions is sent to the lower level craft, which then engages its own MBTR engine to validate or correct the model. This may include tasking a yet lower level of craft to perform activities. When the MBTR engine at a given level receives all of its component data (whether directly collected or from delegation), it randomly chooses some to validate (by reprocessing the validation data), performs analysis and sends its own results (validation and/or changes of model elements and supporting validation data) to its upstream node. This constrains data transmission to only significant (either because it includes a change or is validation data critical for assessing overall performance) information and reduces the processing requirements (by not having to process insignificant data) at higher-level nodes. This paper presents a framework for multi-tier MBTR and two demonstration mission concepts: an Earth sensornet and a mission to Mars. These multi-tier MBTR concepts are compared to a traditional mission approach.
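
    As a purely hypothetical sketch of the message flow described above, an MBTR assertion could carry only the corrected model elements plus a small sample of the raw data used, and the upstream node could randomly re-verify a fraction of incoming assertions; all names and structures below are invented for illustration and are not from the paper.

      import random
      from dataclasses import dataclass

      @dataclass
      class Assertion:
          """Hypothetical MBTR assertion message: only model elements found to be
          erroneous are reported, with a small amount of raw data for spot checks."""
          node_id: str
          corrections: dict      # model element name -> corrected value
          validation_data: dict  # model element name -> raw observation(s)

      def spot_check(assertions, reprocess, fraction=0.2, seed=0):
          """Upstream node randomly re-verifies a fraction of incoming assertions by
          reprocessing the attached validation data and comparing with the claimed
          corrections."""
          rng = random.Random(seed)
          k = max(1, int(fraction * len(assertions)))
          for a in rng.sample(assertions, k):
              if reprocess(a.validation_data) != a.corrections:
                  return False   # downstream node (or its MBTR engine) is suspect
          return True

      # Toy usage: the "reprocessing" just rounds raw slope measurements
      msgs = [Assertion("rover-1", {"crater_slope_deg": 12.0},
                        {"crater_slope_deg": 12.04})]
      print(spot_check(msgs, lambda raw: {k: round(v) * 1.0 for k, v in raw.items()}))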

  18. High-end climate change impact on European runoff and low flows – exploring the effects of forcing biases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papadimitriou, Lamprini V.; Koutroulis, Aristeidis G.; Grillakis, Manolis G.

    Climate models project a much more substantial warming than the 2 °C target under the more probable emission scenarios, making higher-end scenarios increasingly plausible. Freshwater availability under such conditions is a key issue of concern. In this study, an ensemble of Euro-CORDEX projections under RCP8.5 is used to assess the mean and low hydrological states under +4 °C of global warming for the European region. Five major European catchments were analysed in terms of future drought climatology, and the impact of +2 °C versus +4 °C global warming was investigated. The effect of bias correction of the climate model outputs, and of the observations used for this adjustment, was also quantified. Projections indicate an intensification of the water cycle at higher levels of warming. Even for areas where the average state may not be considerably affected, low flows are expected to decrease, leading to changes in the number of dry days and thus in drought climatology. The identified increasing or decreasing runoff trends are substantially intensified when moving from +2 to +4 °C of global warming. Bias correction resulted in an improved representation of the historical hydrology. Moreover, it is also found that the selection of the observational data set used for the bias correction has an impact on the projected signal that can be of the same order of magnitude as the selection of the Global Climate Model (GCM).
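
    Bias correction of climate model output is commonly done by empirical quantile mapping, sketched below on synthetic data; this is a generic illustration of the technique, not necessarily the specific adjustment method used in the study. The choice of the observational data set enters through obs_hist, which is why it can influence the corrected signal.

      import numpy as np

      def quantile_mapping(model_hist, obs_hist, model_future, n_quantiles=100):
          """Empirical quantile mapping: adjust model values so their distribution
          over the historical period matches the observations, then apply the same
          quantile-wise correction to the future simulation."""
          q = np.linspace(0.0, 1.0, n_quantiles)
          mq = np.quantile(model_hist, q)      # model quantiles (historical)
          oq = np.quantile(obs_hist, q)        # observed quantiles (historical)
          # position of each future value within the historical model distribution
          pos = np.interp(model_future, mq, q)
          return np.interp(pos, q, oq)

      # Synthetic example: model precipitation biased high relative to observations
      rng = np.random.default_rng(0)
      obs_hist = rng.gamma(2.0, 2.0, 5000)
      model_hist = rng.gamma(2.0, 2.6, 5000)        # wet bias
      model_future = rng.gamma(2.0, 3.0, 5000)      # future simulation
      corrected = quantile_mapping(model_hist, obs_hist, model_future)
      print(obs_hist.mean(), model_future.mean(), corrected.mean())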

  19. Strain accumulation across the Prince William Sound asperity, Southcentral Alaska

    NASA Astrophysics Data System (ADS)

    Savage, J. C.; Svarc, J. L.; Lisowski, M.

    2015-03-01

    The surface velocities predicted by the conventional subduction model are compared to velocities measured in a GPS array (surveyed in 1993, 1995, 1997, 2000, and 2004) spanning the Prince William Sound asperity. The observed velocities in the comparison have been corrected to remove the contributions from postseismic (1964 Alaska earthquake) mantle relaxation. Except at the most seaward monument (located on Middleton Island at the seaward edge of the continental shelf, just 50 km landward of the deformation front in the Aleutian Trench), the corrected velocities qualitatively agree with those predicted by an improved, two-dimensional, back slip, subduction model in which the locked megathrust coincides with the plate interface identified by seismic refraction surveys, and the back slip rate is equal to the plate convergence rate. A better fit to the corrected velocities is furnished by either a back slip rate 20% greater than the plate convergence rate or a 30% shallower megathrust. The shallow megathrust in the latter fit may be an artifact of the uniform half-space Earth model used in the inversion. Backslip at the plate convergence rate on the megathrust mapped by refraction surveys would fit the data as well if the rigidity of the underthrust plate was twice that of the overlying plate, a rigidity contrast higher than expected. The anomalous motion at Middleton Island is attributed to continuous slip at near the plate convergence rate on a postulated, listric fault that splays off the megathrust at depth of about 12 km and outcrops on the continental slope south-southeast of Middleton Island.

  20. Optimally Repeatable Kinetic Model Variant for Myocardial Blood Flow Measurements with 82Rb PET.

    PubMed

    Ocneanu, Adrian F; deKemp, Robert A; Renaud, Jennifer M; Adler, Andy; Beanlands, Rob S B; Klein, Ran

    2017-01-01

    Purpose. Myocardial blood flow (MBF) quantification with 82Rb positron emission tomography (PET) is gaining clinical adoption, but improvements in precision are desired. This study aims to identify analysis variants producing the most repeatable MBF measures. Methods. 12 volunteers underwent same-day test-retest rest and dipyridamole stress imaging with dynamic 82Rb PET, from which MBF was quantified using 1-tissue-compartment kinetic model variants: (1) blood-pool versus uptake region sampled input function (Blood/Uptake-ROI), (2) dual spillover correction (SOC-On/Off), (3) right blood correction (RBC-On/Off), (4) arterial blood transit delay (Delay-On/Off), and (5) distribution volume (DV) constraint (Global/Regional-DV). Repeatability of MBF, stress/rest myocardial flow reserve (MFR), and stress/rest MBF difference (ΔMBF) was assessed using nonparametric reproducibility coefficients (RPCnp = 1.45 × interquartile range). Results. MBF using SOC-On, RBC-Off, Blood-ROI, Global-DV, and Delay-Off was most repeatable for combined rest and stress: RPCnp = 0.21 mL/min/g (15.8%). Corresponding MFR and ΔMBF RPCnp were 0.42 (20.2%) and 0.24 mL/min/g (23.5%). MBF repeatability improved with SOC-On at stress (p < 0.001) and tended to improve with RBC-Off at both rest and stress (p < 0.08). DV and ROI did not significantly influence repeatability. The Delay-On model was overdetermined and did not reliably converge. Conclusion. MBF and MFR test-retest repeatability were best with dual spillover correction, left atrium blood input function, and global DV.
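
    The repeatability metric used above is easy to reproduce: the nonparametric repeatability coefficient is 1.45 times the interquartile range of the test-retest differences. The sketch below applies it to synthetic MBF values (illustrative only).

      import numpy as np

      def rpc_np(test, retest):
          """Nonparametric repeatability coefficient: 1.45 times the interquartile
          range of the test-retest differences (as defined in the abstract)."""
          d = np.asarray(test, float) - np.asarray(retest, float)
          q75, q25 = np.percentile(d, [75, 25])
          return 1.45 * (q75 - q25)

      # Synthetic rest MBF values (mL/min/g) for 12 subjects, test and retest
      rng = np.random.default_rng(1)
      test = rng.normal(0.9, 0.15, 12)
      retest = test + rng.normal(0.0, 0.07, 12)
      print(f"RPCnp = {rpc_np(test, retest):.2f} mL/min/g")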
