Sample records for complementary error function

  1. Intelligent complementary sliding-mode control for LUSMS-based X-Y-theta motion control stage.

    PubMed

    Lin, Faa-Jeng; Chen, Syuan-Yi; Shyu, Kuo-Kai; Liu, Yen-Hung

    2010-07-01

    An intelligent complementary sliding-mode control (ICSMC) system using a recurrent wavelet-based Elman neural network (RWENN) estimator is proposed in this study to control the mover position of a linear-ultrasonic-motor (LUSM)-based X-Y-theta motion control stage for the tracking of various contours. By the addition of a complementary generalized error transformation, the complementary sliding-mode control (CSMC) can efficiently reduce the guaranteed ultimate bound of the tracking error by half compared with the sliding-mode control (SMC) while using the saturation function. To estimate a lumped uncertainty on-line and replace the hitting control of the CSMC directly, the RWENN estimator is adopted in the proposed ICSMC system. In the RWENN, each hidden neuron employs a different wavelet function as an activation function to improve both the convergence precision and the convergence time compared with the conventional Elman neural network (ENN). The estimation laws of the RWENN are derived using the Lyapunov stability theorem to train the network parameters on-line. A robust compensator is also proposed to confront the uncertainties, including the approximation error, optimal parameter vectors, and higher-order terms in the Taylor series. Finally, experimental results for the tracking of various contours show that the tracking performance of the ICSMC system is significantly improved compared with the SMC and CSMC systems.

  2. The Generation, Radiation and Prediction of Supersonic Jet Noise. Volume 1

    DTIC Science & Technology

    1978-10-01

    standard, Gaussian correlation function model can yield a good noise spectrum prediction (at 90°), but the corresponding axial source distributions do not...forms for the turbulence cross-correlation function. Good agreement was obtained between measured and calculated far-field noise spectra. However, the...complementary error function profile (3.63) was found to provide a good fit to the axial velocity distribution for a wide range of Mach numbers in the initial

  3. Error suppression via complementary gauge choices in Reed-Muller codes

    NASA Astrophysics Data System (ADS)

    Chamberland, Christopher; Jochym-O'Connor, Tomas

    2017-09-01

    Concatenation of two quantum error-correcting codes with complementary sets of transversal gates can provide a means toward universal fault-tolerant quantum computation. We first show that it is generally preferable to choose the inner code with the higher pseudo-threshold to achieve lower logical failure rates. We then explore the threshold properties of a wide range of concatenation schemes. Notably, we demonstrate that the concatenation of complementary sets of Reed-Muller codes can increase the code capacity threshold under depolarizing noise when compared to extensions of previously proposed concatenation models. We also analyze the properties of logical errors under circuit-level noise, showing that smaller codes perform better for all sampled physical error rates. Our work provides new insights into the performance of universal concatenated quantum codes for both code capacity and circuit-level noise.

  4. WE-G-213CD-03: A Dual Complementary Verification Method for Dynamic Tumor Tracking on Vero SBRT.

    PubMed

    Poels, K; Depuydt, T; Verellen, D; De Ridder, M

    2012-06-01

    to use complementary cine EPID and gimbals log file analysis for in-vivo tracking accuracy monitoring. A clinical prototype of dynamic tracking (DT) was installed on the Vero SBRT system. This prototype version allowed tumor tracking by gimballed linac rotations using an internal-external correspondence model. The DT prototype software allowed detailed logging of all applied gimbals rotations during tracking. The integration of an EPID on the Vero system allowed the acquisition of cine EPID images during DT. We quantified the tracking error on cine EPID (E-EPID) by subtracting the target center (fiducial marker detection) from the field centroid. Dynamic gimbals log file information was combined with orthogonal x-ray verification images to calculate the in-vivo tracking error (E-kVLog). The correlation between E-kVLog and E-EPID was calculated for validation of the gimbals log file. Further, we investigated the sensitivity of the log file tracking error by introducing predefined systematic tracking errors. As an application, we calculated the gimbals log file tracking error for dynamic hidden target tests to investigate gravity effects and the decoupling of gimbals rotation from gantry rotation. Finally, complementary cine EPID and log file tracking errors were calculated to evaluate the clinical accuracy of dynamic tracking. A strong correlation was found between the log file and cine EPID tracking error distributions during concurrent measurements (R=0.98). The gimbals log files were sensitive enough to detect systematic tracking errors as small as 0.5 mm. Dynamic hidden target tests showed no gravity influence on tracking performance and a high degree of decoupling between gimbals and gantry rotation during dynamic-arc dynamic tracking. Submillimetric agreement between the clinical complementary tracking error measurements was found.
Redundancy of the internal gimbals log file combined with x-ray verification images, together with complementary independent cine EPID images, was implemented to monitor the accuracy of gimballed tumor tracking on Vero SBRT. Research was financially supported by the Flemish government (FWO), Hercules Foundation and BrainLAB AG. © 2012 American Association of Physicists in Medicine.

  5. Ensemble Data Mining Methods

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.

    2004-01-01

    Ensemble Data Mining Methods, also known as Committee Methods or Model Combiners, are machine learning methods that leverage the power of multiple models to achieve better prediction accuracy than any of the individual models could on their own. The basic goal when designing an ensemble is the same as when establishing a committee of people: each member of the committee should be as competent as possible, but the members should be complementary to one another. If the members are not complementary, i.e., if they always agree, then the committee is unnecessary---any one member is sufficient. If the members are complementary, then when one or a few members make an error, the probability is high that the remaining members can correct this error. Research in ensemble methods has largely revolved around designing ensembles consisting of competent yet complementary models.
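
The committee intuition above can be made concrete with a toy simulation. This is a hypothetical sketch, not from the record: it assumes members err independently (the ideal "complementary" case) with an arbitrary per-member error rate, and combines their predictions by majority vote.

```python
import random

def majority_vote_error(p_err=0.2, n_members=5, trials=10_000, seed=0):
    """Estimate the ensemble error rate when each of n_members (odd)
    errs independently with probability p_err and predictions are
    combined by majority vote."""
    rng = random.Random(seed)
    wrong = 0
    for _ in range(trials):
        n_errors = sum(rng.random() < p_err for _ in range(n_members))
        if n_errors > n_members // 2:  # a majority of members is wrong
            wrong += 1
    return wrong / trials

# A committee of complementary (independent) members beats any single member:
print(majority_vote_error(p_err=0.2, n_members=1))  # ~0.20, the lone model
print(majority_vote_error(p_err=0.2, n_members=5))  # ~0.06, the committee
```

If the members always agreed, every committee size would give the same ~0.20 error rate, which is the sense in which agreement makes the committee unnecessary.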

  6. Prenatal drug exposure and selective attention in preschoolers.

    PubMed

    Noland, Julia S; Singer, Lynn T; Short, Elizabeth J; Minnes, Sonia; Arendt, Robert E; Kirchner, H Lester; Bearer, Cynthia

    2005-01-01

    Deficits in sustained attention and impulsivity have previously been demonstrated in preschoolers prenatally exposed to cocaine. We assessed an additional component of attention, selective attention, in a large, poly-substance cocaine-exposed cohort of 4 year olds and their at-risk comparison group. Employing postpartum maternal report and biological assay, we assigned children to overlapping exposed and complementary control groups for maternal use of cocaine, alcohol, marijuana, and cigarettes. Maternal pregnancy use of cocaine and use of cigarettes were both associated with increased commission errors, indicative of inferior selective attention. Severity of maternal use of marijuana during pregnancy was positively correlated with omission errors, suggesting impaired sustained attention. Substance exposure effects were independent of maternal postpartum psychological distress, birth mother cognitive functioning, current caregiver functioning, other substance exposures and child concurrent verbal IQ.

  7. Optimal joint measurements of complementary observables by a single trapped ion

    NASA Astrophysics Data System (ADS)

    Xiong, T. P.; Yan, L. L.; Ma, Z. H.; Zhou, F.; Chen, L.; Yang, W. L.; Feng, M.; Busch, P.

    2017-06-01

    The uncertainty relations, pioneered by Werner Heisenberg nearly 90 years ago, set a fundamental limitation on the joint measurability of complementary observables. This limitation has long been a subject of debate, which has been reignited recently due to new proposed forms of measurement uncertainty relations. The present work is associated with a new error trade-off relation for compatible observables approximating two incompatible observables, in keeping with the spirit of Heisenberg’s original ideas of 1927. We report the first direct test and confirmation of the tight bounds prescribed by such an error trade-off relation, based on an experimental realisation of optimal joint measurements of complementary observables using a single ultracold ⁴⁰Ca⁺ ion trapped in a harmonic potential. Our work provides a prototypical determination of ultimate joint measurement error bounds with potential applications in quantum information science for high-precision measurement and information security.

  8. A two-phase sampling survey for nonresponse and its paradata to correct nonresponse bias in a health surveillance survey.

    PubMed

    Santin, G; Bénézet, L; Geoffroy-Perez, B; Bouyer, J; Guéguen, A

    2017-02-01

    The decline in participation rates in surveys, including epidemiological surveillance surveys, has become a real concern since it may increase nonresponse bias. The aim of this study is to estimate the contribution of a complementary survey among a subsample of nonrespondents, and the additional contribution of paradata, in correcting for nonresponse bias in an occupational health surveillance survey. In 2010, 10,000 workers were randomly selected and sent a postal questionnaire. Sociodemographic data were available for the whole sample. After data collection of the questionnaires, a complementary survey among a random subsample of 500 nonrespondents was performed using a questionnaire administered by an interviewer. Paradata were collected for the complete subsample of the complementary survey. Nonresponse bias in the initial sample and in the combined samples was assessed using variables from administrative databases available for the whole sample and not subject to differential measurement errors. Corrected prevalences were estimated by a reweighting technique, first using the initial survey alone and then the initial and complementary surveys combined, under several assumptions regarding the missing-data process. Results were compared by computing relative errors. The response rates of the initial and complementary surveys were 23.6% and 62.6%, respectively. For both the initial and the combined surveys, the relative errors decreased after correction for nonresponse on sociodemographic variables. For the combined surveys without paradata, relative errors decreased compared with the initial survey. The contribution of the paradata was weak. When a complex descriptive survey has a low response rate, a short complementary survey among nonrespondents, with a protocol that aims to maximize the response rate, is useful.
The contribution of sociodemographic variables in correcting for nonresponse bias is important whereas the additional contribution of paradata in correcting for nonresponse bias is questionable. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
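
The reweighting technique mentioned above can be sketched in a few lines. This is a generic illustration of nonresponse weighting within adjustment cells, with made-up groups and response patterns; it is not the authors' estimation code.

```python
from collections import Counter

def nonresponse_weights(sample, responded):
    """Weight each respondent by the inverse of its group's response rate,
    so the reweighted respondents match the drawn sample's composition.
    sample: group label for every drawn unit; responded: parallel booleans."""
    drawn = Counter(sample)
    answered = Counter(g for g, r in zip(sample, responded) if r)
    return [drawn[g] / answered[g] for g, r in zip(sample, responded) if r]

# Two adjustment cells: group 'a' responds at 50%, group 'b' at 100%.
sample    = ['a', 'a', 'a', 'a', 'b', 'b']
responded = [True, True, False, False, True, True]
print(nonresponse_weights(sample, responded))  # [2.0, 2.0, 1.0, 1.0]
```

Each responding 'a' unit then counts for two drawn 'a' units, which is what corrects the bias when response propensity differs across groups.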

  9. Ratiometric, filter-free optical sensor based on a complementary metal oxide semiconductor buried double junction photodiode.

    PubMed

    Yung, Ka Yi; Zhan, Zhiyong; Titus, Albert H; Baker, Gary A; Bright, Frank V

    2015-07-16

    We report a complementary metal oxide semiconductor integrated circuit (CMOS IC) with a buried double junction (BDJ) photodiode that (i) provides a real-time output signal that is related to the intensity ratio at two emission wavelengths and (ii) simultaneously eliminates the need for an optical filter to block Rayleigh scatter. We demonstrate the BDJ platform performance for gaseous NH3 and aqueous pH detection. We also compare the BDJ performance to parallel results obtained by using a slew scanned fluorimeter (SSF). The BDJ results are functionally equivalent to the SSF results without the need for any wavelength filtering or monochromators and the BDJ platform is not prone to errors associated with source intensity fluctuations or sensor signal drift. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Direct measurement of the poliovirus RNA polymerase error frequency in vitro

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ward, C.D.; Stokes, M.A.M.; Flanegan, J.B.

    1988-02-01

    The fidelity of RNA replication by the poliovirus RNA-dependent RNA polymerase was examined by copying homopolymeric RNA templates in vitro. The poliovirus RNA polymerase was extensively purified and used to copy poly(A), poly(C), or poly(I) templates with equimolar concentrations of noncomplementary and complementary ribonucleotides. The error frequency was expressed as the amount of a noncomplementary nucleotide incorporated divided by the total amount of complementary and noncomplementary nucleotide incorporated. The polymerase error frequencies were very high, depending on the specific reaction conditions. The activity of the polymerase on poly(U) and poly(G) was too low to measure error frequencies on these templates. A fivefold increase in the error frequency was observed when the reaction conditions were changed from 3.0 mM Mg²⁺ (pH 7.0) to 7.0 mM Mg²⁺ (pH 8.0). This increase in the error frequency correlates with an eightfold increase in the elongation rate that was observed under the same conditions in a previous study.

  11. Closed-form integrator for the quaternion (Euler angle) kinematics equations

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A. (Inventor)

    2000-01-01

    The invention is embodied in a method of integrating kinematics equations for updating a set of vehicle attitude angles of a vehicle using 3-dimensional angular velocities of the vehicle, which includes computing an integrating factor matrix from quantities corresponding to the 3-dimensional angular velocities, computing a total integrated angular rate from the quantities corresponding to the 3-dimensional angular velocities, computing a state transition matrix as a sum of (a) a first complementary function of the total integrated angular rate and (b) the integrating factor matrix multiplied by a second complementary function of the total integrated angular rate, and updating the set of vehicle attitude angles using the state transition matrix. Preferably, the method further includes computing a quaternion vector from the quantities corresponding to the 3-dimensional angular velocities, in which case the updating of the set of vehicle attitude angles using the state transition matrix is carried out by (a) updating the quaternion vector by multiplying the quaternion vector by the state transition matrix to produce an updated quaternion vector and (b) computing an updated set of vehicle attitude angles from the updated quaternion vector. The first and second complementary functions are trigonometric, such as a sine and a cosine. The quantities corresponding to the 3-dimensional angular velocities include respective averages of the 3-dimensional angular velocities over plural time frames. The updating of the quaternion vector preserves the norm of the vector, whereby the updated set of vehicle attitude angles is virtually error-free.
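
A minimal sketch of this kind of closed-form update, assuming a scalar-first quaternion, body rates held constant over the frame, and the standard 4x4 kinematic rate matrix; the patent's exact formulation may differ. The state transition matrix is the sum of a cosine term and the integrating-factor matrix times a sine term, the complementary pair described above.

```python
import math

def quat_update(q, w, dt):
    """Closed-form quaternion propagation over one frame, assuming the
    angular rate w = (wx, wy, wz) is constant during dt.  The state
    transition matrix is cos(phi/2)*I + (sin(phi/2)/|w|)*Omega, where
    Omega is the standard 4x4 skew matrix of the rates and phi = |w|*dt."""
    wx, wy, wz = w
    n = math.sqrt(wx * wx + wy * wy + wz * wz)
    if n < 1e-12:  # no rotation: identity transition
        return list(q)
    c = math.cos(0.5 * n * dt)
    s = math.sin(0.5 * n * dt) / n
    q0, q1, q2, q3 = q
    # Phi @ q, with the rows of Omega written out explicitly
    return [c * q0 + s * (-wx * q1 - wy * q2 - wz * q3),
            c * q1 + s * ( wx * q0 + wz * q2 - wy * q3),
            c * q2 + s * ( wy * q0 - wz * q1 + wx * q3),
            c * q3 + s * ( wz * q0 + wy * q1 - wx * q2)]

# Half a turn about the z axis in one frame:
q = quat_update([1.0, 0.0, 0.0, 0.0], (0.0, 0.0, math.pi), 1.0)
print(q)  # 180-degree yaw: [~0, 0, 0, ~1]
```

Because the transition matrix is orthogonal (cos² + sin² = 1), the quaternion norm is preserved exactly at every step, which is the norm-preservation property the abstract highlights.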

  12. A new model integrating short- and long-term aging of copper added to soils

    PubMed Central

    Zeng, Saiqi; Li, Jumei; Wei, Dongpu

    2017-01-01

    Aging refers to the processes by which the bioavailability/toxicity, isotopic exchangeability, and extractability of metals added to soils decline over time. We studied the characteristics of the aging process for copper (Cu) added to soils and the factors that affect this process. We then developed a semi-mechanistic model to predict the lability of Cu during the aging process, with the diffusion process described by the complementary error function. In previous studies, two semi-mechanistic models predicting short-term and long-term aging of Cu added to soils were developed separately, with individual descriptions of the diffusion process. In the short-term model, the diffusion process was linearly related to the square root of incubation time (t1/2), and in the long-term model, the diffusion process was linearly related to the natural logarithm of incubation time (lnt). Each model could predict the short-term or the long-term aging process separately, but neither could predict both. By analyzing and combining the two models, we found that both the short- and long-term behavior of the diffusion process could be described adequately using the complementary error function. The effect of temperature on the diffusion process was also incorporated in this model. The model can predict the aging process continuously based on four factors: soil pH, incubation time, soil organic matter content, and temperature. PMID:28820888
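
As a purely illustrative sketch of the role the complementary error function plays here (a hypothetical one-parameter curve, collapsing the model's four factors into a single made-up rate constant k; not the published parameterization):

```python
import math

def labile_fraction(t, k=0.1):
    """Hypothetical aging curve: fraction of added Cu still labile after
    incubation time t, with the diffusive loss carried by erfc."""
    return math.erfc(k * math.sqrt(t))

# Short-term behavior is linear in sqrt(t), since erf(x) ~ 2x/sqrt(pi)
# for small x: quadrupling t roughly doubles the early loss.
loss_1 = 1.0 - labile_fraction(1.0)
loss_4 = 1.0 - labile_fraction(4.0)
print(round(loss_4 / loss_1, 2))  # ~2
```

This reproduces the short-term t1/2 dependence the abstract mentions within a single smooth function, rather than stitching separate short- and long-term fits together.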

  13. Physical fault tolerance of nanoelectronics.

    PubMed

    Szkopek, Thomas; Roychowdhury, Vwani P; Antoniadis, Dimitri A; Damoulakis, John N

    2011-04-29

    The error rate in complementary transistor circuits is suppressed exponentially in electron number, arising from an intrinsic physical implementation of fault-tolerant error correction. Contrariwise, explicit assembly of gates into the most efficient known fault-tolerant architecture is characterized by a subexponential suppression of error rate with electron number, and incurs significant overhead in wiring and complexity. We conclude that it is more efficient to prevent logical errors with physical fault tolerance than to correct logical errors with fault-tolerant architecture.

  14. Mesolimbic Dopamine Signals the Value of Work

    PubMed Central

    Hamid, Arif A.; Pettibone, Jeffrey R.; Mabrouk, Omar S.; Hetrick, Vaughn L.; Schmidt, Robert; Vander Weele, Caitlin M.; Kennedy, Robert T.; Aragona, Brandon J.; Berke, Joshua D.

    2015-01-01

    Dopamine cell firing can encode errors in reward prediction, providing a learning signal to guide future behavior. Yet dopamine is also a key modulator of motivation, invigorating current behavior. Existing theories propose that fast (“phasic”) dopamine fluctuations support learning, while much slower (“tonic”) dopamine changes are involved in motivation. We examined dopamine release in the nucleus accumbens across multiple time scales, using complementary microdialysis and voltammetric methods during adaptive decision-making. We first show that minute-by-minute dopamine levels covary with reward rate and motivational vigor. We then show that second-by-second dopamine release encodes an estimate of temporally-discounted future reward (a value function). We demonstrate that changing dopamine immediately alters willingness to work, and reinforces preceding action choices by encoding temporal-difference reward prediction errors. Our results indicate that dopamine conveys a single, rapidly-evolving decision variable, the available reward for investment of effort, that is employed for both learning and motivational functions. PMID:26595651

  15. Digital Photon Correlation Data Processing Techniques

    DTIC Science & Technology

    1976-07-01

    velocimeter signals. During the conduct of the contract a complementary theoretical effort with the NASA Langley Research Center was in progress (NAS1-13140)...6.3.2 Variability Error In an earlier very brief contract with NASA Langley (NAS1-13140) a simplified variability error analysis was performed

  16. Quantum information density scaling and qubit operation time constraints of CMOS silicon-based quantum computer architectures

    NASA Astrophysics Data System (ADS)

    Rotta, Davide; Sebastiano, Fabio; Charbon, Edoardo; Prati, Enrico

    2017-06-01

    Even the quantum simulation of an apparently simple molecule such as Fe2S2 requires a considerable number of qubits, of the order of 10^6, while more complex molecules such as alanine (C3H7NO2) require about a hundred times more. In order to assess such a multimillion scale of identical qubits and control lines, the silicon platform seems to be one of the most promising routes as it naturally provides, together with qubit functionalities, the capability of nanometric, serial, and industrial-quality fabrication. The scaling trend of microelectronic devices predicting that computing power would double every 2 years, known as Moore's law, according to the new slope set after the 32-nm node of 2009, suggests that the technology roadmap will achieve the 3-nm manufacturability limit proposed by Kelly around 2020. Today, circuital quantum information processing architectures are predicted to take advantage from the scalability ensured by silicon technology. However, the maximum amount of quantum information per unit surface that can be stored in silicon-based qubits and the consequent space constraints on qubit operations have never been addressed so far. This represents one of the key parameters toward the implementation of quantum error correction for fault-tolerant quantum information processing and its dependence on the features of the technology node. The maximum quantum information per unit surface virtually storable and controllable in the compact exchange-only silicon double quantum dot qubit architecture is expressed as a function of the complementary metal-oxide-semiconductor technology node, so the size scale optimizing both physical qubit operation time and quantum error correction requirements is assessed by reviewing the physical and technological constraints.
According to the requirements imposed by the quantum error correction method and the constraints given by the typical strength of the exchange coupling, we determine the workable operation frequency range of a silicon complementary metal-oxide-semiconductor quantum processor to lie between 1 and 100 GHz. Such a constraint limits the feasibility of fault-tolerant quantum information processing with complementary metal-oxide-semiconductor technology to only the most advanced nodes. The compatibility with classical complementary metal-oxide-semiconductor control circuitry is discussed, focusing on the cryogenic complementary metal-oxide-semiconductor operation required to bring the classical controller as close as possible to the quantum processor and to enable interfacing thousands of qubits on the same chip via time-division, frequency-division, and space-division multiplexing. The operation time range prospected for cryogenic control electronics is found to be compatible with the operation time expected for qubits. By combining the forecast of the development of scaled technology nodes with operation time and classical circuitry constraints, we derive a maximum quantum information density for logical qubits of 2.8 and 4 Mqb/cm2 for the 10- and 7-nm technology nodes, respectively, for the Steane code. The density is one and two orders of magnitude less for surface codes and for concatenated codes, respectively. Such values provide a benchmark for the development of fault-tolerant quantum algorithms by circuital quantum information based on silicon platforms and a guideline for other technologies in general.

  17. Complementary roles for amygdala and periaqueductal gray in temporal-difference fear learning.

    PubMed

    Cole, Sindy; McNally, Gavan P

    2009-01-01

    Pavlovian fear conditioning is not a unitary process. At the neurobiological level multiple brain regions and neurotransmitters contribute to fear learning. At the behavioral level many variables contribute to fear learning including the physical salience of the events being learned about, the direction and magnitude of predictive error, and the rate at which these are learned about. These experiments used a serial compound conditioning design to determine the roles of basolateral amygdala (BLA) NMDA receptors and ventrolateral midbrain periaqueductal gray (vlPAG) mu-opioid receptors (MOR) in predictive fear learning. Rats received a three-stage design, which arranged for both positive and negative prediction errors producing bidirectional changes in fear learning within the same subjects during the test stage. Intra-BLA infusion of the NR2B receptor antagonist Ifenprodil prevented all learning. In contrast, intra-vlPAG infusion of the MOR antagonist CTAP enhanced learning in response to positive predictive error but impaired learning in response to negative predictive error--a pattern similar to Hebbian learning and an indication that fear learning had been divorced from predictive error. These findings identify complementary but dissociable roles for amygdala NMDA receptors and vlPAG MOR in temporal-difference predictive fear learning.
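
For readers unfamiliar with the temporal-difference formalism behind "prediction error" here, a minimal TD(0) sketch (hypothetical values and learning rate; not the authors' model): the same update rule drives learning up after a positive error and down after a negative one, the bidirectional change the design tests for.

```python
def td_update(value, reward, next_value, alpha=0.5, gamma=0.9):
    """One temporal-difference step: delta = r + gamma*V(s') - V(s).
    Returns the updated value and the prediction error delta."""
    delta = reward + gamma * next_value - value
    return value + alpha * delta, delta

v = 0.0
v, d_pos = td_update(v, reward=1.0, next_value=0.0)  # positive error: fear grows
v, d_neg = td_update(v, reward=0.0, next_value=0.0)  # negative error: fear shrinks
print(d_pos > 0, d_neg < 0)  # True True
```

Divorcing learning from predictive error, as the vlPAG manipulation did, would correspond to updating on the raw reinforcer rather than on delta.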

  18. Hyperspectral Analysis of Soil Total Nitrogen in Subsided Land Using the Local Correlation Maximization-Complementary Superiority (LCMCS) Method.

    PubMed

    Lin, Lixin; Wang, Yunjia; Teng, Jiyao; Xi, Xiuxiu

    2015-07-23

    The measurement of soil total nitrogen (TN) by hyperspectral remote sensing provides an important tool for soil restoration programs in areas with subsided land caused by the extraction of natural resources. This study used the local correlation maximization-complementary superiority (LCMCS) method to establish TN prediction models by considering the relationship between spectral reflectance (measured by an ASD FieldSpec 3 spectroradiometer) and TN, based on spectral reflectance curves of soil samples collected from subsided land determined by synthetic aperture radar interferometry (InSAR) technology. Based on the 1655 selected effective bands of the optimal spectrum (OSP) of the first derivative of the reciprocal logarithm ([log{1/R}]'), (correlation coefficients, p < 0.01), the optimal model of the LCMCS method was obtained to determine the final model, which produced lower prediction errors (root mean square error of validation [RMSEV] = 0.89, mean relative error of validation [MREV] = 5.93%) when compared with models built by the local correlation maximization (LCM), complementary superiority (CS) and partial least squares regression (PLS) methods. The predictive effect of the LCMCS model was optimal in the Cangzhou, Renqiu and Fengfeng districts. Results indicate that the LCMCS method has great potential to monitor TN in subsided lands caused by the extraction of natural resources, including groundwater, oil and coal.

  19. Characterization of near-stoichiometric Ti:LiNbO3 strip waveguides with varied substrate refractive index in the guiding layer.

    PubMed

    Zhang, De-Long; Zhang, Pei; Zhou, Hao-Jiang; Pun, Edwin Yue-Bun

    2008-10-01

    We have demonstrated that near-stoichiometric Ti:LiNbO3 strip waveguides can be fabricated by carrying out vapor transport equilibration at 1060 °C for 12 h on a congruent LiNbO3 substrate with photolithographically patterned 4-8 μm wide, 115 nm thick Ti strips. Optical characterization shows that these waveguides are single mode at 1.5 μm, with a waveguide loss of 1.3 dB/cm for the TM mode and 1.1 dB/cm for the TE mode. In the width/depth direction of the waveguide, the mode field follows a Gauss/Hermite-Gauss function. Secondary-ion-mass spectrometry (SIMS) was used to study the Ti-concentration profiles in the depth direction and on the surface of the 6 μm wide waveguide. The results show that the Ti profile follows a sum of two error functions along the width direction and a complementary error function in the depth direction. The surface Ti concentration, 1/e width and depth, and mean diffusivities along the width and depth directions of the guide are about 3.0 × 10^21 cm^-3, 3.8 μm, 2.6 μm, 0.30 μm^2/h, and 0.14 μm^2/h, respectively. Micro-Raman analysis was carried out on the waveguide end face to characterize the depth profile of the Li composition in the guiding layer. The results show that the depth profile of the Li composition also follows a complementary error function, with a 1/e depth of 3.64 μm. The mean ([Li(Li)]+[Ti(Li)])/([Nb(Nb)]+[Ti(Nb)]) ratio in the waveguide layer is about 0.98. The inhomogeneous Li-composition profile results in a varied substrate index in the guiding layer. A two-dimensional refractive-index profile model of the waveguide is proposed by taking into consideration the varied substrate index and assuming linearity between the Ti-induced index change and the Ti concentration. The net waveguide surface index increments at 1545 nm are 0.0114 and 0.0212 for ordinary and extraordinary rays, respectively.
Based upon the constructed index model, the fundamental mode field profile was calculated using the beam propagation method, and the mode sizes and effective indices versus Ti-strip width were calculated for the three lowest-order TM and TE modes using the variational method. Agreement between theory and experiment is obtained.
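
The reported profile shapes can be combined into a simple concentration model. The width, depth, and surface-concentration values below are the ones quoted in the abstract, but the overall functional form and its normalization are an illustrative assumption, not the paper's fitted model.

```python
import math

def ti_profile(x_um, z_um, c0=3.0e21, w=6.0, dx=3.8, dz=2.6):
    """Ti concentration (cm^-3) at lateral position x_um and depth z_um
    (micrometres) for a strip of width w: a sum of two error functions
    across the width and a complementary error function in depth."""
    lateral = 0.5 * (math.erf((w / 2 - x_um) / dx) + math.erf((w / 2 + x_um) / dx))
    return c0 * lateral * math.erfc(z_um / dz)

# The profile is symmetric about the strip centre and decays with depth:
print(ti_profile(1.0, 0.0) == ti_profile(-1.0, 0.0))  # True
print(ti_profile(0.0, 0.0) > ti_profile(0.0, 2.6))    # True
```

The sum of two error functions is the standard result of diffusion from a finite-width source, which is why it appears across the strip width while the semi-infinite depth direction gives erfc.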

  20. Fusion of 4D echocardiography and cine cardiac magnetic resonance volumes using a salient spatio-temporal analysis

    NASA Astrophysics Data System (ADS)

    Atehortúa, Angélica; Garreau, Mireille; Romero, Eduardo

    2017-11-01

    Accurate left ventricular (LV) and right ventricular (RV) function quantification is important to support the evaluation, diagnosis and prognosis of cardiac pathologies such as the cardiomyopathies. Currently, diagnosis by ultrasound is the most cost-effective examination. However, this modality is highly noisy and operator dependent, hence prone to errors. Therefore, fusion with other cardiac modalities may provide complementary information and improve the analysis of specific pathologies like the cardiomyopathies. This paper proposes an automatic registration between two complementary modalities, 4D echocardiography and cine cardiac magnetic resonance (MRI) volumes, by mapping both modalities to a common saliency space where an optimal registration between them is estimated. The obtained transformation matrix is then applied to the MRI volume, which is superimposed on the 4D echocardiography. Manually selected marks in both modalities are used to evaluate the precision of the superimposition. Preliminary results in three evaluation cases show that the distance between these marked points and those estimated with the transformation is about 2 mm.

  1. Processing TES Level-2 Data

    NASA Technical Reports Server (NTRS)

    Poosti, Sassaneh; Akopyan, Sirvard; Sakurai, Regina; Yun, Hyejung; Saha, Pranjit; Strickland, Irina; Croft, Kevin; Smith, Weldon; Hoffman, Rodney; Koffend, John

    2006-01-01

    TES Level 2 Subsystem is a set of computer programs that performs functions complementary to those of the program summarized in the immediately preceding article. TES Level-2 data pertain to retrieved species (or temperature) profiles, and errors thereof. Geolocation, quality, and other data (e.g., surface characteristics for nadir observations) are also included. The subsystem processes gridded meteorological information and extracts parameters that can be interpolated to the appropriate latitude, longitude, and pressure level based on the date and time. Radiances are simulated using the aforementioned meteorological information for initial guesses, and spectroscopic-parameter tables are generated. At each step of the retrieval, a nonlinear-least-squares- solving routine is run over multiple iterations, retrieving a subset of atmospheric constituents, and error analysis is performed. Scientific TES Level-2 data products are written in a format known as Hierarchical Data Format Earth Observing System 5 (HDF-EOS 5) for public distribution.

  2. Hyperspectral Analysis of Soil Total Nitrogen in Subsided Land Using the Local Correlation Maximization-Complementary Superiority (LCMCS) Method

    PubMed Central

    Lin, Lixin; Wang, Yunjia; Teng, Jiyao; Xi, Xiuxiu

    2015-01-01

    The measurement of soil total nitrogen (TN) by hyperspectral remote sensing provides an important tool for soil restoration programs in areas with land subsidence caused by the extraction of natural resources. This study used the local correlation maximization-complementary superiority (LCMCS) method to establish TN prediction models by considering the relationship between TN and the spectral reflectance (measured with an ASD FieldSpec 3 spectroradiometer) of soil samples collected from subsided land delineated by synthetic aperture radar interferometry (InSAR). Based on the 1655 selected effective bands of the optimal spectrum (OSP) of the first derivative of the reciprocal logarithm ([log(1/R)]'; correlation coefficients, p < 0.01), the optimal LCMCS model was obtained as the final model. It produced lower prediction errors (root mean square error of validation [RMSEV] = 0.89, mean relative error of validation [MREV] = 5.93%) than models built with the local correlation maximization (LCM), complementary superiority (CS) and partial least squares regression (PLS) methods. The predictive performance of the LCMCS model was optimal in Cangzhou, Renqiu and Fengfeng District. Results indicate that the LCMCS method has great potential for monitoring TN in lands subsided by the extraction of natural resources, including groundwater, oil and coal. PMID:26213935
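    The spectral pre-processing named above, the first derivative of the reciprocal-logarithm transform, can be sketched with finite differences (a minimal sketch with made-up wavelengths and reflectances, not the LCMCS code):

```python
import math

# [log(1/R)]' approximated by first differences over wavelength.
wavelengths = [400.0, 410.0, 420.0, 430.0, 440.0]   # nm, hypothetical
reflectance = [0.30, 0.28, 0.25, 0.24, 0.22]        # hypothetical soil spectrum

log_inv_r = [math.log10(1.0 / r) for r in reflectance]
derivative = [
    (log_inv_r[i + 1] - log_inv_r[i]) / (wavelengths[i + 1] - wavelengths[i])
    for i in range(len(log_inv_r) - 1)
]
```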

  3. Mathematical and field analysis of longitudinal reservoir infill

    NASA Astrophysics Data System (ADS)

    Ke, W. T.; Capart, H.

    2016-12-01

    In reservoirs, infilled sediment deposits cause severe problems. In the long term, sediment accumulation reduces reservoir storage capacity and flood-control benefits. In the short term, the deposits affect water-supply and hydroelectric intakes. For reservoir management it is therefore important to understand the deposition process and to predict sedimentation in the reservoir. To investigate the behavior of sediment deposits, we propose a simplified one-dimensional theory, derived from the Exner equation, to predict the longitudinal distribution of sedimentation in idealized reservoirs. The theory models reservoir-infill geomorphic action for three scenarios: delta progradation, near-dam bottom deposition, and final infill. These yield three kinds of self-similar analytical solutions for the reservoir bed profiles under different boundary conditions, composed of the error function, the complementary error function, and the imaginary error function, respectively. The theory is also computed by a finite volume method to test the analytical solutions. The theoretical and numerical predictions are in good agreement with a one-dimensional small-scale laboratory experiment. As the theory is simple to apply, with analytical solutions and numerical computation, we present applications that simulate the long-profile evolution of field reservoirs, focusing on the infill deposit volume that results in the uplift of the near-dam bottom elevation. The field reservoirs introduced here are Wushe Reservoir, Tsengwen Reservoir and Mudan Reservoir in Taiwan, Lago Dos Bocas in Puerto Rico, and Sakuma Dam in Japan.
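    A complementary-error-function bed profile of the kind described is easy to evaluate numerically. The profile form, coefficients, and parameter values below are assumptions for illustration, not the paper's exact solution:

```python
import math

# Self-similar diffusive bed profile of complementary-error-function form:
# eta(x, t) = eta0 * erfc(x / (2 * sqrt(D * t))).
def bed_elevation(x, t, eta0=10.0, diffusivity=1.0e-3):
    """Bed elevation above the initial profile at distance x from the dam wall."""
    return eta0 * math.erfc(x / (2.0 * math.sqrt(diffusivity * t)))

# Sample the profile along the reservoir at one instant.
profile = [bed_elevation(x, t=1.0e4) for x in range(0, 50, 10)]
```

    At x = 0 the profile equals eta0 (erfc(0) = 1) and it decays monotonically downstream, matching the qualitative shape of near-dam bottom deposition.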

  4. Comparison of complementary and Kalman filter based data fusion for attitude heading reference system

    NASA Astrophysics Data System (ADS)

    Islam, Tariqul; Islam, Md. Saiful; Shajid-Ul-Mahmud, Md.; Hossam-E-Haider, Md

    2017-12-01

    An Attitude Heading Reference System (AHRS) provides the 3D orientation of an aircraft (roll, pitch, and yaw) together with instantaneous position and heading information. For a low-cost AHRS implementation, micro-electro-mechanical system (MEMS) sensors are used, such as an accelerometer, gyroscope, and magnetometer. Accelerometers suffer from errors caused by external accelerations that add to gravity and make accelerometer-based rotation estimates inaccurate. Gyroscopes can remove such errors but introduce drift. To obtain precise data, two very common and well-known filters, complementary and Kalman, are therefore introduced to the system. This paper compares system performance using these two filters separately, so that one can select the filter with better performance for a given system.
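    The complementary filter in this setting has a one-line core: high-pass the integrated gyro rate and low-pass the accelerometer angle. A minimal sketch (assumed textbook form with made-up signals, not the paper's implementation):

```python
# Blend integrated gyro rate (trusted short-term) with the accelerometer
# angle (trusted long-term) using blending coefficient alpha.
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Constant true roll of 0.5 rad: the gyro reads zero rate and the
# accelerometer reads the true angle (noise and drift omitted).
angle = 0.0
for _ in range(500):
    angle = complementary_filter(angle, gyro_rate=0.0, accel_angle=0.5, dt=0.01)
```

    The estimate converges to the accelerometer angle at a rate set by alpha; in practice alpha trades gyro drift rejection against sensitivity to external accelerations.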

  5. Multimodal Image Registration through Simultaneous Segmentation.

    PubMed

    Aganj, Iman; Fischl, Bruce

    2017-11-01

    Multimodal image registration facilitates the combination of complementary information from images acquired with different modalities. Most existing methods require computation of the joint histogram of the images, while some perform joint segmentation and registration in alternate iterations. In this work, we introduce a new non-information-theoretical method for pairwise multimodal image registration, in which the segmentation error, computed using both images, serves as the registration cost function. We empirically evaluate our method via rigid registration of multi-contrast brain magnetic resonance images, and demonstrate that the proposed technique often achieves higher registration accuracy than several existing methods.

  6. Revisiting the Fundamental Analytical Solutions of Heat and Mass Transfer: The Kernel of Multirate and Multidimensional Diffusion

    NASA Astrophysics Data System (ADS)

    Zhou, Quanlin; Oldenburg, Curtis M.; Rutqvist, Jonny; Birkholzer, Jens T.

    2017-11-01

    There are two types of analytical solutions of temperature/concentration in, and heat/mass transfer through, boundaries of regularly shaped 1-D, 2-D, and 3-D blocks. These infinite-series solutions with either error functions or exponentials exhibit highly irregular but complementary convergence at different dimensionless times, td. In this paper, approximate solutions were developed by combining the error-function-series solutions for early times and the exponential-series solutions for late times and by using time partitioning at the switchover time, td0. The combined solutions contain either the leading term of both series for normal-accuracy approximations (with less than 0.003 relative error) or the first two terms for high-accuracy approximations (with less than 10^-7 relative error) for 1-D isotropic (spheres, cylinders, slabs) and 2-D/3-D rectangular blocks (squares, cubes, rectangles, and rectangular parallelepipeds). This rapid and uniform convergence for rectangular blocks was achieved by employing the same time partitioning with individual dimensionless times for different directions and the product of their combined 1-D slab solutions. The switchover dimensionless time was determined to minimize the maximum approximation errors. Furthermore, the analytical solutions of first-order heat/mass flux for 2-D/3-D rectangular blocks were derived for normal-accuracy approximations. These flux equations contain the early-time solution with a three-term polynomial in √td and the late-time solution with the limited-term exponentials for rectangular blocks. The heat/mass flux equations and the combined temperature/concentration solutions form the ultimate kernel for fast simulations of multirate and multidimensional heat/mass transfer in porous/fractured media with millions of low-permeability blocks of varying shapes and sizes.
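    The early/late-time combination can be demonstrated numerically for the 1-D slab (plane sheet) fractional uptake. The series below are the standard diffusion-text forms, assumed comparable to (not copied from) the paper's solutions; the switchover time td0 = 0.2 is an arbitrary choice:

```python
import math

def uptake_late(td, terms=200):
    """Exponential-series solution; converges fastest at late dimensionless time."""
    s = sum(math.exp(-(2 * n + 1) ** 2 * math.pi ** 2 * td / 4.0)
            / (2 * n + 1) ** 2 for n in range(terms))
    return 1.0 - (8.0 / math.pi ** 2) * s

def uptake_early(td):
    """Leading error-function-series term; accurate at early time."""
    return 2.0 * math.sqrt(td / math.pi)

def uptake_combined(td, td0=0.2):
    """Partition time: early branch below td0, leading exponential term above."""
    return uptake_early(td) if td < td0 else uptake_late(td, terms=1)
```

    Each branch uses only its leading term, yet agrees with the fully converged exponential series on its own side of td0, which is the convergence complementarity the abstract describes.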

  7. Revisiting the Fundamental Analytical Solutions of Heat and Mass Transfer: The Kernel of Multirate and Multidimensional Diffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Quanlin; Oldenburg, Curtis M.; Rutqvist, Jonny

    There are two types of analytical solutions of temperature/concentration in, and heat/mass transfer through, boundaries of regularly shaped 1D, 2D, and 3D blocks. These infinite-series solutions with either error functions or exponentials exhibit highly irregular but complementary convergence at different dimensionless times, td. In this paper, approximate solutions were developed by combining the error-function-series solutions for early times and the exponential-series solutions for late times and by using time partitioning at the switchover time, td0. The combined solutions contain either the leading term of both series for normal-accuracy approximations (with less than 0.003 relative error) or the first two terms for high-accuracy approximations (with less than 10^-7 relative error) for 1D isotropic (spheres, cylinders, slabs) and 2D/3D rectangular blocks (squares, cubes, rectangles, and rectangular parallelepipeds). This rapid and uniform convergence for rectangular blocks was achieved by employing the same time partitioning with individual dimensionless times for different directions and the product of their combined 1D slab solutions. The switchover dimensionless time was determined to minimize the maximum approximation errors. Furthermore, the analytical solutions of first-order heat/mass flux for 2D/3D rectangular blocks were derived for normal-accuracy approximations. These flux equations contain the early-time solution with a three-term polynomial in √td and the late-time solution with the limited-term exponentials for rectangular blocks. The heat/mass flux equations and the combined temperature/concentration solutions form the ultimate kernel for fast simulations of multirate and multidimensional heat/mass transfer in porous/fractured media with millions of low-permeability blocks of varying shapes and sizes.

  8. Revisiting the Fundamental Analytical Solutions of Heat and Mass Transfer: The Kernel of Multirate and Multidimensional Diffusion

    DOE PAGES

    Zhou, Quanlin; Oldenburg, Curtis M.; Rutqvist, Jonny; ...

    2017-10-24

    There are two types of analytical solutions of temperature/concentration in, and heat/mass transfer through, boundaries of regularly shaped 1D, 2D, and 3D blocks. These infinite-series solutions with either error functions or exponentials exhibit highly irregular but complementary convergence at different dimensionless times, td. In this paper, approximate solutions were developed by combining the error-function-series solutions for early times and the exponential-series solutions for late times and by using time partitioning at the switchover time, td0. The combined solutions contain either the leading term of both series for normal-accuracy approximations (with less than 0.003 relative error) or the first two terms for high-accuracy approximations (with less than 10^-7 relative error) for 1D isotropic (spheres, cylinders, slabs) and 2D/3D rectangular blocks (squares, cubes, rectangles, and rectangular parallelepipeds). This rapid and uniform convergence for rectangular blocks was achieved by employing the same time partitioning with individual dimensionless times for different directions and the product of their combined 1D slab solutions. The switchover dimensionless time was determined to minimize the maximum approximation errors. Furthermore, the analytical solutions of first-order heat/mass flux for 2D/3D rectangular blocks were derived for normal-accuracy approximations. These flux equations contain the early-time solution with a three-term polynomial in √td and the late-time solution with the limited-term exponentials for rectangular blocks. The heat/mass flux equations and the combined temperature/concentration solutions form the ultimate kernel for fast simulations of multirate and multidimensional heat/mass transfer in porous/fractured media with millions of low-permeability blocks of varying shapes and sizes.

  9. A Reconfigurable Readout Integrated Circuit for Heterogeneous Display-Based Multi-Sensor Systems

    PubMed Central

    Park, Kyeonghwan; Kim, Seung Mok; Eom, Won-Jin; Kim, Jae Joon

    2017-01-01

    This paper presents a reconfigurable multi-sensor interface and its readout integrated circuit (ROIC) for display-based multi-sensor systems, which build up multi-sensor functions by utilizing touch screen panels. In addition to inherent touch detection, physiological and environmental sensor interfaces are incorporated. The reconfigurable feature is implemented with two basic readout topologies, amplifier-based and oscillator-based circuits. For noise immunity against the various noises arising from inherent human-touch operation, an alternate-sampling error-correction scheme is proposed and integrated inside the ROIC, achieving 12-bit resolution in successive-approximation-register (SAR) analog-to-digital conversion without additional calibration. A ROIC prototype that includes all of the proposed functions and data converters was fabricated in a 0.18 μm complementary metal oxide semiconductor (CMOS) process, and its feasibility was experimentally verified to support multiple heterogeneous sensing functions of touch, electrocardiogram, body impedance, and environmental sensors. PMID:28368355
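    The SAR conversion principle named in the abstract is a bit-by-bit binary search. A toy software model (reference voltage, resolution, and names are assumptions; this is not the ROIC's circuit behavior):

```python
# Successive-approximation ADC: test each bit from MSB to LSB against
# the input, keeping the bit if the trial DAC voltage does not exceed it.
def sar_convert(v_in, v_ref=1.8, bits=12):
    code = 0
    for bit in reversed(range(bits)):
        trial = code | (1 << bit)
        # keep this bit if the DAC output at `trial` stays at or below v_in
        if (trial / float(1 << bits)) * v_ref <= v_in:
            code = trial
    return code

code = sar_convert(0.9)  # half of the reference voltage
```

    Twelve comparisons resolve one 12-bit sample, which is why SAR topologies suit low-power sensor readout.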

  10. A Reconfigurable Readout Integrated Circuit for Heterogeneous Display-Based Multi-Sensor Systems.

    PubMed

    Park, Kyeonghwan; Kim, Seung Mok; Eom, Won-Jin; Kim, Jae Joon

    2017-04-03

    This paper presents a reconfigurable multi-sensor interface and its readout integrated circuit (ROIC) for display-based multi-sensor systems, which build up multi-sensor functions by utilizing touch screen panels. In addition to inherent touch detection, physiological and environmental sensor interfaces are incorporated. The reconfigurable feature is implemented with two basic readout topologies, amplifier-based and oscillator-based circuits. For noise immunity against the various noises arising from inherent human-touch operation, an alternate-sampling error-correction scheme is proposed and integrated inside the ROIC, achieving 12-bit resolution in successive-approximation-register (SAR) analog-to-digital conversion without additional calibration. A ROIC prototype that includes all of the proposed functions and data converters was fabricated in a 0.18 μm complementary metal oxide semiconductor (CMOS) process, and its feasibility was experimentally verified to support multiple heterogeneous sensing functions of touch, electrocardiogram, body impedance, and environmental sensors.

  11. Parallel Processing of Broad-Band PPM Signals

    NASA Technical Reports Server (NTRS)

    Gray, Andrew; Kang, Edward; Lay, Norman; Vilnrotter, Victor; Srinivasan, Meera; Lee, Clement

    2010-01-01

    A parallel-processing algorithm, and a hardware architecture to implement it, have been devised for time-slot synchronization in the reception of pulse-position-modulated (PPM) optical or radio signals. As in some prior algorithms and architectures for parallel, discrete-time, digital processing of signals other than PPM, an incoming broadband signal is divided into multiple parallel narrower-band signals by means of sub-sampling and filtering. The number of parallel streams is chosen so that the frequency content of the narrower-band signals is low enough to enable processing by relatively low-speed complementary metal oxide semiconductor (CMOS) electronic circuitry. The algorithm and architecture are intended to satisfy requirements for time-varying time-slot synchronization and post-detection filtering, with correction of timing errors independent of estimation of timing errors. They are also intended to afford flexibility for dynamic reconfiguration and upgrading. The architecture is implemented in a reconfigurable CMOS processor in the form of a field-programmable gate array. The algorithm and its hardware implementation incorporate three separate time-varying filter banks for three distinct functions: correction of sub-sample timing errors, post-detection filtering, and post-detection estimation of timing errors. The design of the filter bank for correction of timing errors, the method of estimating timing errors, and the design of a feedback-loop filter are governed by a host of parameters; the most critical one, with regard to processing very broadband signals with CMOS hardware, is the number of parallel streams (equivalently, the rate-reduction parameter).
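    The sub-sampling step that creates the parallel streams can be sketched as simple de-interleaving (the filtering and timing-correction stages are omitted, and the stream count M = 4 is an arbitrary assumption):

```python
# Split one broadband sample stream into M parallel narrower-band streams
# by taking every M-th sample, then re-interleave to recover the original.
def deinterleave(samples, m):
    return [samples[k::m] for k in range(m)]

def reinterleave(streams):
    return [s for group in zip(*streams) for s in group]

samples = list(range(16))          # stand-in for broadband input samples
streams = deinterleave(samples, 4) # four parallel streams at 1/4 the rate
restored = reinterleave(streams)
```

    Each parallel stream runs at 1/M of the input sample rate, which is what lets the downstream filter banks fit in low-speed CMOS logic.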

  12. Conditional Entropy and Location Error in Indoor Localization Using Probabilistic Wi-Fi Fingerprinting.

    PubMed

    Berkvens, Rafael; Peremans, Herbert; Weyn, Maarten

    2016-10-02

    Localization systems are increasingly valuable, but their location estimates are only useful when the uncertainty of the estimate is known. This uncertainty is currently calculated as the location error given a ground truth, which is then used as a static measure in sometimes very different environments. In contrast, we propose the use of the conditional entropy of a posterior probability distribution as a complementary measure of uncertainty. This measure has the advantage of being dynamic, i.e., it can be calculated during localization based on individual sensor measurements, does not require a ground truth, and can be applied to discrete localization algorithms. Furthermore, for every consistent location estimation algorithm, both the location error and the conditional entropy measures must be related, i.e., a low entropy should always correspond with a small location error, while a high entropy can correspond with either a small or large location error. We validate this relationship experimentally by calculating both measures of uncertainty in three publicly available datasets using probabilistic Wi-Fi fingerprinting with eight different implementations of the sensor model. We show that the discrepancy between these measures, i.e., many location estimates having a high location error while simultaneously having a low conditional entropy, is largest for the least realistic implementations of the probabilistic sensor model. Based on the results presented in this paper, we conclude that conditional entropy, being dynamic, complementary to location error, and applicable to both continuous and discrete localization, provides an important extra means of characterizing a localization method.
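    The proposed uncertainty measure is the entropy of the discrete posterior over candidate locations. A minimal sketch (the posterior values are hypothetical, not taken from the paper's Wi-Fi datasets):

```python
import math

# Shannon entropy (bits) of a discrete posterior over candidate locations.
def entropy(posterior):
    return -sum(p * math.log2(p) for p in posterior if p > 0.0)

def normalize(weights):
    total = sum(weights)
    return [w / total for w in weights]

peaked = normalize([0.94, 0.02, 0.02, 0.02])   # confident location estimate
uniform = normalize([1.0, 1.0, 1.0, 1.0])      # maximally uncertain estimate
```

    A peaked posterior has low entropy, a uniform one has the maximum (here log2(4) = 2 bits); unlike location error, this can be computed online without ground truth.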

  13. Conditional Entropy and Location Error in Indoor Localization Using Probabilistic Wi-Fi Fingerprinting

    PubMed Central

    Berkvens, Rafael; Peremans, Herbert; Weyn, Maarten

    2016-01-01

    Localization systems are increasingly valuable, but their location estimates are only useful when the uncertainty of the estimate is known. This uncertainty is currently calculated as the location error given a ground truth, which is then used as a static measure in sometimes very different environments. In contrast, we propose the use of the conditional entropy of a posterior probability distribution as a complementary measure of uncertainty. This measure has the advantage of being dynamic, i.e., it can be calculated during localization based on individual sensor measurements, does not require a ground truth, and can be applied to discrete localization algorithms. Furthermore, for every consistent location estimation algorithm, both the location error and the conditional entropy measures must be related, i.e., a low entropy should always correspond with a small location error, while a high entropy can correspond with either a small or large location error. We validate this relationship experimentally by calculating both measures of uncertainty in three publicly available datasets using probabilistic Wi-Fi fingerprinting with eight different implementations of the sensor model. We show that the discrepancy between these measures, i.e., many location estimates having a high location error while simultaneously having a low conditional entropy, is largest for the least realistic implementations of the probabilistic sensor model. Based on the results presented in this paper, we conclude that conditional entropy, being dynamic, complementary to location error, and applicable to both continuous and discrete localization, provides an important extra means of characterizing a localization method. PMID:27706099

  14. Use of complementary and alternative medicine by pediatric patients with functional and organic gastrointestinal diseases: results from a multicenter survey.

    PubMed

    Vlieger, Arine M; Blink, Marjolein; Tromp, Ellen; Benninga, Marc A

    2008-08-01

    Many pediatric patients use complementary and alternative medicine, especially when facing a chronic illness for which treatment options are limited. So far, research on the use of complementary and alternative medicine in patients with functional gastrointestinal disease has been scarce. This study was designed to assess complementary and alternative medicine use in children with different gastrointestinal diseases, including functional disorders, to determine which factors predicted complementary and alternative medicine use and to assess the willingness of parents to participate in future studies on complementary and alternative medicine efficacy and safety. The prevalence of complementary and alternative medicine use was assessed by using a questionnaire for 749 children visiting pediatric gastroenterology clinics of 9 hospitals in the Netherlands. The questionnaire consisted of 35 questions on the child's gastrointestinal disease, medication use, health status, past and future complementary and alternative medicine use, reasons for its use, and the necessity of complementary and alternative medicine research. In this study population, the frequency of complementary and alternative medicine use was 37.6%. A total of 60.3% of this group had used complementary and alternative medicine specifically for their gastrointestinal disease. This specific complementary and alternative medicine use was higher in patients with functional disorders than organic disorders (25.3% vs 17.2%). Adverse effects of allopathic medication, school absenteeism, age

  15. Complementary and alternative medicine used by persons with functional gastrointestinal disorders to alleviate symptom distress.

    PubMed

    Stake-Nilsson, Kerstin; Hultcrantz, Rolf; Unge, Peter; Wengström, Yvonne

    2012-03-01

    The aim of this study was to describe the complementary and alternative medicine methods most commonly used to alleviate symptom distress in persons with functional gastrointestinal disorders. People with functional gastrointestinal disorders face many challenges in their everyday lives, and each individual has his or her own way of dealing with the illness. The experience of illness often leads persons with functional gastrointestinal disorders to complementary and alternative medicine as a viable healthcare choice. The study used a quantitative, descriptive design. A study-specific complementary and alternative medicine questionnaire was used, including questions about the methods used and the perceived effects of each method. Efficacy assessments for each method were preventive effect, partial symptom relief, total symptom relief or no effect. A total of 137 persons with functional gastrointestinal disorders answered the questionnaire: 62% (n = 85) women and 38% (n = 52) men. A total of 28 different complementary and alternative medicine methods were identified and grouped into four categories: nutritional, drug/biological, psychological activity and physical activity. All persons had tried at least one method, and most methods provided partial symptom relief. Persons with functional gastrointestinal disorders commonly use complementary and alternative medicine methods to alleviate symptoms. Nurses have a unique opportunity to expand their roles in this group of patients. Increased knowledge of complementary and alternative medicine practices would enable a more comprehensive patient assessment and a better plan for meaningful interventions that meet the needs of individual patients. © 2011 Blackwell Publishing Ltd.

  16. Who cares about the history of science?

    PubMed Central

    Chang, Hasok

    2017-01-01

    The history of science has many functions. Historians should consider how their work contributes to various functions, going beyond a simple desire to understand the past correctly. There are both internal and external functions of the history of science in relation to science itself; I focus here on the internal, as they tend to be neglected these days. The internal functions can be divided into orthodox and complementary. The orthodox function is to assist with the understanding of the content and methods of science as it is now practised. The complementary function is to generate and improve scientific knowledge where current science itself fails to do so. Complementary functions of the history of science include the raising of critical awareness, and the recovery and extension of past scientific knowledge that has become forgotten or neglected. These complementary functions are illustrated with some concrete examples.

  17. Navigation in Difficult Environments: Multi-Sensor Fusion Techniques

    DTIC Science & Technology

    2010-03-01

    Brown and Hwang, Introduction to Random Signals and Applied Kalman Filtering, 3rd ed., John Wiley & Sons, Inc., New York, 1997. [17] J. L. Farrell, "GPS/INS..." [block-diagram labels: nav solution, navigation outputs, estimation of inertial errors (Kalman filter), error estimates, core sensor, incoming signal, INS, estimates of signal...] Estimation of the INS drift terms is performed using the mechanism of a complementary Kalman filter. The idea is that a signal parameter can be generally

  18. FMLRC: Hybrid long read error correction using an FM-index.

    PubMed

    Wang, Jeremy R; Holt, James; McMillan, Leonard; Jones, Corbin D

    2018-02-09

    Long read sequencing is changing the landscape of genomic research, especially de novo assembly. Despite the high error rate inherent to long read technologies, increased read lengths dramatically improve the continuity and accuracy of genome assemblies. However, the cost and throughput of these technologies limit their application to complex genomes. One solution is to decrease the cost and time to assemble novel genomes by leveraging "hybrid" assemblies that use long reads for scaffolding and short reads for accuracy. We describe a novel method that leverages a multi-string Burrows-Wheeler Transform with an auxiliary FM-index to correct errors in long read sequences using a set of complementary short reads. We demonstrate that our method efficiently produces significantly more high-quality corrected sequence than existing hybrid error-correction methods. We also show that our method produces more contiguous assemblies, in many cases, than existing state-of-the-art hybrid and long-read-only de novo assembly methods. Our method accurately corrects long read sequence data using complementary short reads. We demonstrate higher total throughput of corrected long reads and a corresponding increase in contiguity of the resulting de novo assemblies. Improved throughput and computational efficiency compared with existing methods will help make better economic use of emerging long read sequencing technologies.
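    The FM-index at the heart of this approach supports counting pattern occurrences by backward search over a BWT. A naive, single-string sketch (FMLRC itself uses a multi-string BWT and far more efficient rank structures; everything here is simplified for illustration):

```python
# Build the BWT of a text (with sentinel) by sorting all rotations.
def bwt(text):
    text += "$"
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(rot[-1] for rot in rotations)

# Count occurrences of `pattern` via FM-index backward search (naive ranks).
def fm_count(bwt_str, pattern):
    alphabet = sorted(set(bwt_str))
    c = {}  # characters in bwt_str strictly smaller than each symbol
    total = 0
    for ch in alphabet:
        c[ch] = total
        total += bwt_str.count(ch)
    lo, hi = 0, len(bwt_str)
    for ch in reversed(pattern):
        if ch not in c:
            return 0
        lo = c[ch] + bwt_str[:lo].count(ch)
        hi = c[ch] + bwt_str[:hi].count(ch)
        if lo >= hi:
            return 0
    return hi - lo

count = fm_count(bwt("banana"), "ana")
```

    Error correction then amounts to querying k-mer counts from the short-read index to choose the best-supported substitution in a long read.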

  19. A novel diagnosis method for a Hall plates-based rotary encoder with a magnetic concentrator.

    PubMed

    Meng, Bumin; Wang, Yaonan; Sun, Wei; Yuan, Xiaofang

    2014-07-31

    In the last few years, rotary encoders based on two-dimensional complementary metal oxide semiconductor (CMOS) Hall plates with a magnetic concentrator have been developed to measure absolute angle contactlessly. Various error factors influence the measuring accuracy and are difficult to locate after the encoder is assembled. In this paper, a model-based rapid diagnosis method is presented. Based on an analysis of the error mechanism, an error model is built to compare the minimum residual angle error and to quantify the error factors. Additionally, a modified particle swarm optimization (PSO) algorithm is used to reduce the computational load. Simulation and experimental results show that this diagnosis method is feasible for quantifying the causes of the error and reduces the number of iterations significantly.
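    A compact PSO loop illustrates the model-fitting step generically (the objective, coefficients, and swarm settings are assumptions; the paper's modified PSO and its encoder error model are not reproduced here):

```python
import random

random.seed(0)

def residual(params):
    """Toy objective: squared distance to an assumed optimum (1.0, -2.0)."""
    return (params[0] - 1.0) ** 2 + (params[1] + 2.0) ** 2

n, dims, iters = 20, 2, 200
pos = [[random.uniform(-5, 5) for _ in range(dims)] for _ in range(n)]
vel = [[0.0] * dims for _ in range(n)]
pbest = [p[:] for p in pos]               # personal best positions
gbest = min(pbest, key=residual)          # global best position

for _ in range(iters):
    for i in range(n):
        for d in range(dims):
            # inertia + cognitive pull (pbest) + social pull (gbest)
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                         + 1.5 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if residual(pos[i]) < residual(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=residual)
```

    In the diagnosis setting, `residual` would instead be the angle error predicted by the encoder error model for a candidate set of error-factor parameters.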

  20. Enhanced intercarrier interference mitigation based on encoded bit-sequence distribution inside optical superchannels

    NASA Astrophysics Data System (ADS)

    Torres, Jhon James Granada; Soto, Ana María Cárdenas; González, Neil Guerrero

    2016-10-01

    In the context of gridless optical multicarrier systems, we propose a method for intercarrier interference (ICI) mitigation that allows bit error correction in scenarios of non-flat spectra between the subcarriers composing the multicarrier system and sub-Nyquist carrier spacing. The proposed hybrid ICI mitigation technique exploits the advantages of signal equalization at two levels: the physical level, for any digital or analog pulse shaping; and the bit-data level, with its ability to incorporate advanced correcting codes. The concatenation of these two complementary techniques consists of a non-data-aided equalizer applied to each optical subcarrier and a hard-decision forward error correction applied to the sequence of bits distributed along the optical subcarriers, regardless of prior subchannel quality assessment as performed in orthogonal frequency-division multiplexing for the bit-loading technique. The impact of the ICI is systematically evaluated in terms of bit error rate as a function of the carrier frequency spacing and the roll-off factor of the digital pulse-shaping filter for a simulated 3×32-Gbaud single-polarization quadrature phase shift keying Nyquist-wavelength division multiplexing system. After ICI mitigation, back-to-back error-free decoding was obtained for sub-Nyquist carrier spacings of 28.5 and 30 GHz with roll-off values of 0.1 and 0.4, respectively.
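    Hard-decision forward error correction can be illustrated with the smallest classical code: Hamming(7,4), which corrects any single bit error per codeword. This is a generic stand-in for the (unspecified) code in the paper, not its actual FEC:

```python
# Hamming(7,4): positions 1..7 hold p1, p2, d1, p3, d2, d3, d4.
def hamming74_encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4   # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4   # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of a flipped bit
    if syndrome:
        c[syndrome - 1] ^= 1         # hard-decision correction
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
codeword = hamming74_encode(data)
corrupted = codeword[:]
corrupted[3] ^= 1                    # single bit error in the channel
recovered = hamming74_decode(corrupted)
```

    Production systems use much stronger codes, but the hard-decision principle is the same: the decoder sees only sliced bits, computes a syndrome, and flips the indicated positions.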

  1. Precision, Reliability, and Effect Size of Slope Variance in Latent Growth Curve Models: Implications for Statistical Power Analysis

    PubMed Central

    Brandmaier, Andreas M.; von Oertzen, Timo; Ghisletta, Paolo; Lindenberger, Ulman; Hertzog, Christopher

    2018-01-01

    Latent Growth Curve Models (LGCM) have become a standard technique to model change over time. Prediction and explanation of inter-individual differences in change are major goals in lifespan research. The major determinants of statistical power to detect individual differences in change are the magnitude of true inter-individual differences in linear change (LGCM slope variance), design precision, alpha level, and sample size. Here, we show that design precision can be expressed as the inverse of effective error. Effective error is determined by instrument reliability and the temporal arrangement of measurement occasions. However, it also depends on another central LGCM component, the variance of the latent intercept and its covariance with the latent slope. We derive a new reliability index for LGCM slope variance—effective curve reliability (ECR)—by scaling slope variance against effective error. ECR is interpretable as a standardized effect size index. We demonstrate how effective error, ECR, and statistical power for a likelihood ratio test of zero slope variance formally relate to each other and how they function as indices of statistical power. We also provide a computational approach to derive ECR for arbitrary intercept-slope covariance. With practical use cases, we argue for the complementary utility of the proposed indices of a study's sensitivity to detect slope variance when making a priori longitudinal design decisions or communicating study designs. PMID:29755377

  2. Regional Variation in Use of Complementary Health Approaches by U.S. Adults

    MedlinePlus

    ... part of their yoga exercise. Data sources and methods Data from the 2012 NHIS were used for ... sampling design of NHIS. The Taylor series linearization method was chosen for estimation of standard errors. Differences ...

  3. Deriving stellar parameters with the SME software package

    NASA Astrophysics Data System (ADS)

    Piskunov, N.

    2017-09-01

    Photometry and spectroscopy are complementary tools for deriving accurate stellar parameters. Here I present one of the popular packages for stellar spectroscopy called SME with the emphasis on the latest developments and error assessment for the derived parameters.

  4. Orthogonal Polynomials Associated with Complementary Chain Sequences

    NASA Astrophysics Data System (ADS)

    Behera, Kiran Kumar; Sri Ranga, A.; Swaminathan, A.

    2016-07-01

    Using the minimal parameter sequence of a given chain sequence, we introduce the concept of complementary chain sequences, which we view as perturbations of chain sequences. Using the relation between these complementary chain sequences and the corresponding Verblunsky coefficients, the para-orthogonal polynomials and the associated Szegő polynomials are analyzed. Two illustrations, one involving Gaussian hypergeometric functions and the other involving Carathéodory functions are also provided. A connection between these two illustrations by means of complementary chain sequences is also observed.

  5. Improving real-time inflow forecasting into hydropower reservoirs through a complementary modelling framework

    NASA Astrophysics Data System (ADS)

    Gragne, A. S.; Sharma, A.; Mehrotra, R.; Alfredsen, K.

    2015-08-01

    Accuracy of reservoir inflow forecasts is instrumental for maximizing the value of water resources and benefits gained through hydropower generation. Improving hourly reservoir inflow forecasts over a 24 h lead time is considered within the day-ahead (Elspot) market of the Nordic exchange market. A complementary modelling framework presents an approach for improving real-time forecasting without needing to modify the pre-existing forecasting model, but instead formulating an independent additive or complementary model that captures the structure the existing operational model may be missing. We present here the application of this principle for issuing improved hourly inflow forecasts into hydropower reservoirs over extended lead times, and the parameter estimation procedure reformulated to deal with bias, persistence and heteroscedasticity. The procedure presented comprises an error model added on top of an unalterable constant parameter conceptual model. This procedure is applied in the 207 km² Krinsvatn catchment in central Norway. The structure of the error model is established based on attributes of the residual time series from the conceptual model. Besides improving forecast skills of operational models, the approach estimates the uncertainty in the complementary model structure and produces probabilistic inflow forecasts that entrain suitable information for reducing uncertainty in the decision-making processes in hydropower systems operation. Deterministic and probabilistic evaluations revealed an overall significant improvement in forecast accuracy for lead times up to 17 h. Evaluation of the percentage of observations bracketed in the forecasted 95% confidence interval indicated that the degree of success in containing 95% of the observations varies across seasons and hydrologic years.
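
    The complementary error-model principle can be illustrated in miniature. The sketch below assumes a simple AR(1)-style persistence correction with invented numbers; the paper's actual error model (handling bias, persistence and heteroscedasticity jointly) is more elaborate.

```python
# Minimal complementary-model sketch: the base conceptual model is left
# untouched, and an additive correction is built from its latest residual.
# The persistence parameter phi and all numbers are illustrative assumptions.
base = [10.0, 12.0, 11.0, 13.0]   # hypothetical base-model inflow forecasts
obs = [11.0, 13.5, 12.0]          # observed inflows (one step behind)
phi = 0.7                         # assumed residual persistence

corrected = [base[0]]             # no residual available at the first step
for t in range(1, len(base)):
    residual = obs[t - 1] - base[t - 1]
    corrected.append(base[t] + phi * residual)
print(corrected)
```

    Because the correction rides on top of the unaltered base model, the operational forecasting chain needs no modification, which is the core appeal of the complementary framework.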

  6. Differential Dopamine Release Dynamics in the Nucleus Accumbens Core and Shell Reveal Complementary Signals for Error Prediction and Incentive Motivation

    PubMed Central

    Cacciapaglia, Fabio; Wightman, R. Mark; Carelli, Regina M.

    2015-01-01

    Mesolimbic dopamine (DA) is phasically released during appetitive behaviors, though there is substantive disagreement about the specific purpose of these DA signals. For example, prediction error (PE) models suggest a role in learning, while incentive salience (IS) models argue that the DA signal imbues stimuli with value and thereby stimulates motivated behavior. However, within the nucleus accumbens (NAc) patterns of DA release can strikingly differ between subregions, and as such, it is possible that these patterns differentially contribute to aspects of PE and IS. To assess this, we measured DA release in subregions of the NAc during a behavioral task that spatiotemporally separated sequential goal-directed stimuli. Electrochemical methods were used to measure subsecond NAc dopamine release in the core and shell during a well-learned instrumental chain schedule in which rats were trained to press one lever (seeking; SL) to gain access to a second lever (taking; TL) linked with food delivery, and again during extinction. In the core, phasic DA release was greatest following initial SL presentation, but minimal for the subsequent TL and reward events. In contrast, phasic shell DA showed robust release at all task events. Signaling decreased between the beginning and end of sessions in the shell, but not core. During extinction, peak DA release in the core showed a graded decrease for the SL and pauses in release during omitted expected rewards, whereas shell DA release decreased predominantly during the TL. These release dynamics suggest parallel DA signals capable of supporting distinct theories of appetitive behavior. SIGNIFICANCE STATEMENT Dopamine signaling in the brain is important for a variety of cognitive functions, such as learning and motivation. Typically, it is assumed that a single dopamine signal is sufficient to support these cognitive functions, though competing theories disagree on how dopamine contributes to reward-based behaviors.
Here, we have found that real-time dopamine release within the nucleus accumbens (a primary target of midbrain dopamine neurons) strikingly varies between core and shell subregions. In the core, dopamine dynamics are consistent with learning-based theories (such as reward prediction error) whereas in the shell, dopamine is consistent with motivation-based theories (e.g., incentive salience). These findings demonstrate that dopamine plays multiple and complementary roles based on discrete circuits that help animals optimize rewarding behaviors. PMID:26290234

  7. The Effect of Detector Nonlinearity on WFIRST PSF Profiles for Weak Gravitational Lensing Measurements

    NASA Astrophysics Data System (ADS)

    Plazas, A. A.; Shapiro, C.; Kannawadi, A.; Mandelbaum, R.; Rhodes, J.; Smith, R.

    2016-10-01

    Weak gravitational lensing (WL) is one of the most powerful techniques to learn about the dark sector of the universe. To extract the WL signal from astronomical observations, galaxy shapes must be measured and corrected for the point-spread function (PSF) of the imaging system with extreme accuracy. Future WL missions—such as NASA’s Wide-Field Infrared Survey Telescope (WFIRST)—will use a family of hybrid near-infrared complementary metal-oxide-semiconductor detectors (HAWAII-4RG) that are untested for accurate WL measurements. Like all image sensors, these devices are subject to conversion gain nonlinearities (voltage response to collected photo-charge) that bias the shape and size of bright objects such as reference stars that are used in PSF determination. We study this type of detector nonlinearity (NL) and show how to derive requirements on it from WFIRST PSF size and ellipticity requirements. We simulate the PSF optical profiles expected for WFIRST and measure the fractional error in the PSF size (ΔR/R) and the absolute error in the PSF ellipticity (Δe) as a function of star magnitude and the NL model. For our nominal NL model (a quadratic correction), we find that, uncalibrated, NL can induce an error of ΔR/R = 1 × 10⁻² and Δe₂ = 1.75 × 10⁻³ in the H158 bandpass for the brightest unsaturated stars in WFIRST. In addition, our simulations show that to limit the bias of ΔR/R and Δe in the H158 band to ~10% of the estimated WFIRST error budget, the quadratic NL model parameter β must be calibrated to ~1% and ~2.4%, respectively. We present a fitting formula that can be used to estimate WFIRST detector NL requirements once a true PSF error budget is established.
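
    The size and ellipticity biases above stem from a flux-dependent response deficit. A minimal sketch of a quadratic NL model under a common sign convention (measured = S − βS²); the β value here is illustrative, not the paper's calibrated number.

```python
def measured(signal, beta=1e-6):
    """Hypothetical quadratic detector nonlinearity: the response deficit
    grows with collected charge, so bright PSF cores are suppressed more
    than faint wings, biasing the apparent PSF size and shape."""
    return signal - beta * signal ** 2

core, wing = 50000.0, 500.0   # illustrative pixel signals (electrons)
print(measured(core) / core)  # bright core retains only ~95% of its flux
print(measured(wing) / wing)  # faint wing is nearly unaffected
```

    Since the core is depressed relative to the wings, the star profile appears slightly broader, which is qualitatively the ΔR/R bias the paper quantifies.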

  8. Robust recognition of degraded machine-printed characters using complementary similarity measure and error-correction learning

    NASA Astrophysics Data System (ADS)

    Hagita, Norihiro; Sawaki, Minako

    1995-03-01

    Most conventional methods in character recognition extract geometrical features such as stroke direction, connectivity of strokes, etc., and compare them with reference patterns in a stored dictionary. Unfortunately, geometrical features are easily degraded by blurs, stains and the graphical background designs used in Japanese newspaper headlines. This noise must be removed before recognition commences, but no preprocessing method is completely accurate. This paper proposes a method for recognizing degraded characters and characters printed on graphical background designs. This method is based on the binary image feature method and uses binary images as features. A new similarity measure, called the complementary similarity measure, is used as a discriminant function. It compares the similarity and dissimilarity of binary patterns with reference dictionary patterns. Experiments are conducted using the standard character database ETL-2, which consists of machine-printed Kanji, Hiragana, Katakana, alphanumeric, and special characters. The results show that this method is much more robust against noise than the conventional geometrical feature method. It also achieves high recognition rates of over 92% for characters with textured foregrounds, over 98% for characters with textured backgrounds, over 98% for outline fonts, and over 99% for reverse contrast characters.

  9. Using the Abstraction Network in Complement to Description Logics for Quality Assurance in Biomedical Terminologies - A Case Study in SNOMED CT

    PubMed Central

    Wei, Duo; Bodenreider, Olivier

    2015-01-01

    Objectives: To investigate errors identified in SNOMED CT by human reviewers with help from the Abstraction Network methodology and examine why they had escaped detection by the Description Logic (DL) classifier. Methods: Case study; two examples of errors are presented in detail (one missing IS-A relation and one duplicate concept). After correction, SNOMED CT is reclassified to ensure that no new inconsistency was introduced. Conclusions: DL-based auditing techniques built in terminology development environments ensure the logical consistency of the terminology. However, complementary approaches are needed for identifying and addressing other types of errors. PMID:20841848

  10. Using the abstraction network in complement to description logics for quality assurance in biomedical terminologies - a case study in SNOMED CT.

    PubMed

    Wei, Duo; Bodenreider, Olivier

    2010-01-01

    To investigate errors identified in SNOMED CT by human reviewers with help from the Abstraction Network methodology and examine why they had escaped detection by the Description Logic (DL) classifier. In this case study, two examples of errors are presented in detail (one missing IS-A relation and one duplicate concept). After correction, SNOMED CT is reclassified to ensure that no new inconsistency was introduced. DL-based auditing techniques built in terminology development environments ensure the logical consistency of the terminology. However, complementary approaches are needed for identifying and addressing other types of errors.

  11. Testing large aspheric surfaces with complementary annular subaperture interferometric method

    NASA Astrophysics Data System (ADS)

    Hou, Xi; Wu, Fan; Lei, Baiping; Fan, Bin; Chen, Qiang

    2008-07-01

    Annular subaperture interferometric method has provided an alternative solution to testing rotationally symmetric aspheric surfaces with low cost and flexibility. However, some new challenges, particularly in the motion and algorithm components, appear when applied to large aspheric surfaces with large departure in the practical engineering. Based on our previously reported annular subaperture reconstruction algorithm with Zernike annular polynomials and matrix method, and the experimental results for an approximate 130-mm diameter and f/2 parabolic mirror, an experimental investigation by testing an approximate 302-mm diameter and f/1.7 parabolic mirror with the complementary annular subaperture interferometric method is presented. We have focused on full-aperture reconstruction accuracy, and discuss some error effects and limitations of testing larger aspheric surfaces with the annular subaperture method. Some considerations about testing sector segment with complementary sector subapertures are provided.

  12. An error-based micro-sensor capture system for real-time motion estimation

    NASA Astrophysics Data System (ADS)

    Yang, Lin; Ye, Shiwei; Wang, Zhibo; Huang, Zhipei; Wu, Jiankang; Kong, Yongmei; Zhang, Li

    2017-10-01

    A wearable micro-sensor motion capture system with 16 IMUs and an error-compensatory complementary filter algorithm for real-time motion estimation has been developed to acquire accurate 3D orientation and displacement in real life activities. In the proposed filter algorithm, the gyroscope bias error, orientation error and magnetic disturbance error are estimated and compensated, significantly reducing the orientation estimation error due to sensor noise and drift. Displacement estimation, especially for activities such as jumping, has been the challenge in micro-sensor motion capture. An adaptive gait phase detection algorithm has been developed to accommodate accurate displacement estimation in different types of activities. The performance of this system is benchmarked with respect to the results of the VICON optical capture system. The experimental results have demonstrated the effectiveness of the system in daily activities tracking, with estimation errors of 0.16 ± 0.06 m for normal walking and 0.13 ± 0.11 m for jumping motions. Research supported by the National Natural Science Foundation of China (Nos. 61431017, 81272166).
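
    The filtering idea underlying such systems can be illustrated with a textbook one-axis complementary filter. This is a generic sketch, not the paper's error-compensatory algorithm, and all parameter values are assumptions.

```python
import math

def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """One update: integrate the gyroscope for high-frequency detail and
    blend in the accelerometer-derived angle to cancel low-frequency drift."""
    return alpha * (angle_prev + gyro_rate * dt) + (1.0 - alpha) * accel_angle

def accel_to_pitch(ax, az):
    """Pitch (rad) inferred from the gravity vector seen by the accelerometer."""
    return math.atan2(ax, az)

# A stationary sensor: zero gyro rate, accelerometer sees pure gravity.
pitch = 0.5  # deliberately wrong initial estimate (rad)
for _ in range(500):
    pitch = complementary_filter(pitch, 0.0, accel_to_pitch(0.0, 9.81), dt=0.01)
print(round(pitch, 3))  # the estimate decays toward the true 0.0 rad
```

    The full system extends this idea with explicit estimates of gyroscope bias, orientation error and magnetic disturbance, which is what keeps drift bounded during long recordings.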

  13. Complementary Hand Responses Occur in Both Peri- and Extrapersonal Space.

    PubMed

    Faber, Tim W; van Elk, Michiel; Jonas, Kai J

    2016-01-01

    Human beings have a strong tendency to imitate. Evidence from motor priming paradigms suggests that people automatically tend to imitate observed actions such as hand gestures by performing mirror-congruent movements (e.g., lifting one's right finger upon observing a left finger movement; from a mirror perspective). Many observed actions however, do not require mirror-congruent responses but afford complementary (fitting) responses instead (e.g., handing over a cup; shaking hands). Crucially, whereas mirror-congruent responses don't require physical interaction with another person, complementary actions often do. Given that most experiments studying motor priming have used stimuli devoid of contextual information, this space or interaction-dependency of complementary responses has not yet been assessed. To address this issue, we let participants perform a task in which they had to mirror or complement a hand gesture (fist or open hand) performed by an actor depicted either within or outside of reach. In three studies, we observed faster reaction times and less response errors for complementary relative to mirrored hand movements in response to open hand gestures (i.e., 'hand-shaking') irrespective of the perceived interpersonal distance of the actor. This complementary effect could not be accounted for by a low-level spatial cueing effect. These results demonstrate that humans have a strong and automatic tendency to respond by performing complementary actions. In addition, our findings underline the limitations of manipulations of space in modulating effects of motor priming and the perception of affordances.

  14. Experimental Comparison Between Mahoney and Complementary Sensor Fusion Algorithm for Attitude Determination by Raw Sensor Data of Xsens Imu on Buoy

    NASA Astrophysics Data System (ADS)

    Jouybari, A.; Ardalan, A. A.; Rezvani, M.-H.

    2017-09-01

    The accurate measurement of platform orientation plays a critical role in a range of applications including marine, aerospace, robotics, navigation, human motion analysis, and machine interaction. We used the Mahoney filter, the complementary filter, and the Xsens Kalman filter to obtain the Euler angles of a dynamic platform by integration of gyroscope, accelerometer, and magnetometer measurements. The field test was performed in Kish Island using an IMU sensor (Xsens MTi-G-700) installed onboard a buoy, providing raw gyroscope, accelerometer, and magnetometer measurements for about 25 minutes. These raw data were used to calculate the Euler angles with the Mahoney and complementary filters, while the Euler angles collected by the Xsens IMU sensor served as the reference. We then compared the Euler angles calculated by the Mahoney and complementary filters with those recorded by the Xsens IMU sensor. The standard deviations of the differences from the reference were about 0.5644, 0.3872, and 0.4990 degrees for the Mahoney filter and about 0.6349, 0.2621, and 2.3778 degrees for the complementary filter, for roll, pitch, and heading, respectively. The numerical results thus indicate that the Mahoney filter is more precise for roll and heading determination, whereas the complementary filter is more precise only for pitch determination; heading determination by the complementary filter shows a larger error than the Mahoney filter.

  15. What Information is Stored in DNA: Does it Contain Digital Error Correcting Codes?

    NASA Astrophysics Data System (ADS)

    Liebovitch, Larry

    1998-03-01

    The longest term correlations in living systems are the information stored in DNA which reflects the evolutionary history of an organism. The 4 bases (A,T,G,C) encode sequences of amino acids as well as locations of binding sites for proteins that regulate DNA. The fidelity of this important information is maintained by ANALOG error check mechanisms. When a single strand of DNA is replicated the complementary base is inserted in the new strand. Sometimes the wrong base is inserted that sticks out, disrupting the phosphate backbone. The new base is not yet methylated, so repair enzymes, that slide along the DNA, can tear out the wrong base and replace it with the right one. The bases in DNA form a sequence of 4 different symbols and so the information is encoded in a DIGITAL form. All the digital codes in our society (ISBN book numbers, UPC product codes, bank account numbers, airline ticket numbers) use error checking codes, where some digits are functions of other digits to maintain the fidelity of transmitted information. Does DNA also utilize a DIGITAL error checking code to maintain the fidelity of its information and increase the accuracy of replication? That is, are some bases in DNA functions of other bases upstream or downstream? This raises an interesting mathematical problem: how does one determine whether some symbols in a sequence of symbols are a function of other symbols? It also bears on the issue of determining algorithmic complexity: what is the function that generates the shortest algorithm for reproducing the symbol sequence? The error checking codes most used in our technology are linear block codes. We developed an efficient method to test for the presence of such codes in DNA. We coded the 4 bases as (0,1,2,3) and used Gaussian elimination, modified for modulus 4, to test if some bases are linear combinations of other bases. We used this method to analyze the base sequence in the genes from the lac operon and cytochrome C.
We did not find evidence for such error correcting codes in these genes. However, we analyzed only a small amount of DNA, and if digital error correcting schemes are present in DNA, they may be more subtle than such simple linear block codes. The basic issue we raise here is how information is stored in DNA, and an appreciation that digital symbol sequences, such as DNA, admit of interesting schemes to store and protect the fidelity of their information content. Liebovitch, Tao, Todorov, Levine. 1996. Biophys. J. 71:1539-1544. Supported by NIH grant EY6234.
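
    The "digits as functions of other digits" idea is easy to see in one of the everyday codes the abstract lists. Below is a sketch of the standard ISBN-10 check, a linear redundancy in which the weighted digit sum must vanish modulo 11; the example numbers are an arbitrary valid/invalid ISBN pair.

```python
def isbn10_valid(isbn):
    """ISBN-10 check: the sum of weight*digit (weights 10 down to 1) must
    be 0 mod 11. The final symbol 'X' stands for the value 10."""
    digits = [10 if c == 'X' else int(c) for c in isbn if c not in '- ']
    if len(digits) != 10:
        return False
    return sum(w * d for w, d in zip(range(10, 0, -1), digits)) % 11 == 0

print(isbn10_valid("0-306-40615-2"))  # True: the check digit is consistent
print(isbn10_valid("0-306-40615-3"))  # False: a single-digit error is caught
```

    A digital error-checking code in DNA would play the same role: certain bases would be constrained functions of others, which is exactly the linear dependence the Gaussian-elimination test looks for.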

  16. An Ensemble Method for Spelling Correction in Consumer Health Questions

    PubMed Central

    Kilicoglu, Halil; Fiszman, Marcelo; Roberts, Kirk; Demner-Fushman, Dina

    2015-01-01

    Orthographic and grammatical errors are a common feature of informal texts written by lay people. Health-related questions asked by consumers are a case in point. Automatic interpretation of consumer health questions is hampered by such errors. In this paper, we propose a method that combines techniques based on edit distance and frequency counts with a contextual similarity-based method for detecting and correcting orthographic errors, including misspellings, word breaks, and punctuation errors. We evaluate our method on a set of spell-corrected questions extracted from the NLM collection of consumer health questions. Our method achieves an F1 score of 0.61, compared to an informed baseline of 0.29, achieved using ESpell, a spelling correction system developed for biomedical queries. Our results show that orthographic similarity is most relevant in spelling error correction in consumer health questions and that frequency and contextual information are complementary to orthographic features. PMID:26958208
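
    The edit-distance component of such an ensemble can be sketched directly. This is the textbook Levenshtein recurrence with a toy vocabulary, not the paper's full pipeline.

```python
def levenshtein(a, b):
    """Dynamic-programming edit distance (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def best_correction(word, vocabulary):
    """Pick the vocabulary entry closest to the misspelled word."""
    return min(vocabulary, key=lambda v: levenshtein(word, v))

print(best_correction("diabetis", ["diabetes", "dialysis", "diagnosis"]))
# -> diabetes (edit distance 1)
```

    In the ensemble described above, such a candidate list would then be re-ranked with frequency counts and contextual similarity, since orthographic closeness alone cannot disambiguate equally distant candidates.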

  17. Novel processor architecture for onboard infrared sensors

    NASA Astrophysics Data System (ADS)

    Hihara, Hiroki; Iwasaki, Akira; Tamagawa, Nobuo; Kuribayashi, Mitsunobu; Hashimoto, Masanori; Mitsuyama, Yukio; Ochi, Hiroyuki; Onodera, Hidetoshi; Kanbara, Hiroyuki; Wakabayashi, Kazutoshi; Tada, Munehiro

    2016-09-01

    Infrared sensor systems are a major concern for inter-planetary missions that investigate the nature and the formation processes of planets and asteroids. The infrared sensor system requires signal preprocessing functions that compensate for the intensity of infrared image sensors to get high quality data and a high compression ratio through the limited capacity of transmission channels towards ground stations. For those implementations, combinations of Field Programmable Gate Arrays (FPGAs) and microprocessors are employed by AKATSUKI, the Venus Climate Orbiter, and HAYABUSA2, the asteroid probe. On the other hand, much smaller size and lower power consumption are demanded for future missions to accommodate more sensors. To fulfill this future demand, we developed a novel processor architecture which consists of reconfigurable cluster cores and programmable-logic cells with complementary atom switches. The complementary atom switches enable hardware programming without configuration memories, and thus soft errors on logic circuit connections are completely eliminated. This is a noteworthy advantage for space applications which cannot be found in conventional re-writable FPGAs. Almost one-tenth the power consumption of conventional re-writable FPGAs is expected because of the elimination of configuration memories. The proposed processor architecture can be reconfigured by behavioral synthesis with higher level language specification. Consequently, compensation functions are implemented in a single chip, without the program memories that accompany conventional microprocessors, while maintaining comparable performance. This enables us to embed a processor element on each infrared signal detector output channel.

  18. PAPR reduction in CO-OFDM systems using IPTS and modified clipping and filtering

    NASA Astrophysics Data System (ADS)

    Tong, Zheng-rong; Hu, Ya-nong; Zhang, Wei-hua

    2018-05-01

    Aiming at the problem of the peak-to-average power ratio (PAPR) in coherent optical orthogonal frequency division multiplexing (CO-OFDM), a hybrid PAPR reduction technique for the CO-OFDM system, combining the iterative partial transmit sequence (IPTS) scheme with modified clipping and filtering (MCF), is proposed. The simulation results show that at a complementary cumulative distribution function (CCDF) of 10⁻⁴, the PAPR of the proposed scheme is improved by 1.86 dB and 2.13 dB compared with those of the IPTS and CF schemes, respectively. Meanwhile, when the bit error rate (BER) is 10⁻³, the optical signal-to-noise ratio (OSNR) is improved by 1.57 dB and 0.66 dB compared with those of the CF and IPTS-CF schemes, respectively.
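
    The PAPR and CCDF quantities compared above are straightforward to compute. A minimal sketch for plain QPSK-OFDM symbols with no PAPR reduction applied; the sizes and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_symbols, n_subcarriers = 2000, 64

# Random QPSK data on each subcarrier, then an IFFT per OFDM symbol.
qpsk = (rng.choice([-1.0, 1.0], (n_symbols, n_subcarriers)) +
        1j * rng.choice([-1.0, 1.0], (n_symbols, n_subcarriers))) / np.sqrt(2)
x = np.fft.ifft(qpsk, axis=1)

# PAPR per symbol in dB: peak instantaneous power over mean power.
power = np.abs(x) ** 2
papr_db = 10 * np.log10(power.max(axis=1) / power.mean(axis=1))

# CCDF: fraction of symbols whose PAPR exceeds each threshold.
thresholds = np.arange(4.0, 12.0, 0.5)
ccdf = [(papr_db > t).mean() for t in thresholds]
```

    A PAPR-reduction scheme such as IPTS or clipping-and-filtering is judged by how far it shifts this CCDF curve to the left at a given probability level.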

  19. Author Correction: Smac mimetics and oncolytic viruses synergize in driving anticancer T-cell responses through complementary mechanisms.

    PubMed

    Kim, Dae-Sun; Dastidar, Himika; Zhang, Chunfen; Zemp, Franz J; Lau, Keith; Ernst, Matthias; Rakic, Andrea; Sikdar, Saif; Rajwani, Jahanara; Naumenko, Victor; Balce, Dale R; Ewanchuk, Ben W; Tailor, Pankaj; Yates, Robin M; Jenne, Craig; Gafuik, Chris; Mahoney, Douglas J

    2018-05-24

    The originally published version of this article contained an error in the spelling of the author Pankaj Tailor, which was incorrectly given as Pankaj Taylor. This has now been corrected in both the PDF and HTML versions of the article.

  20. Why Current Statistics of Complementary Alternative Medicine Clinical Trials is Invalid.

    PubMed

    Pandolfi, Maurizio; Carreras, Giulia

    2018-06-07

    It is not sufficiently known that frequentist statistics cannot provide direct information on the probability that the research hypothesis tested is correct. The error resulting from this misunderstanding is compounded when the hypotheses under scrutiny have precarious scientific bases, as those of complementary alternative medicine (CAM) generally do. In such cases, it is mandatory to use inferential statistics that take into account the prior probability that the hypothesis tested is true, such as Bayesian statistics. The authors show that, under such circumstances, no real statistical significance can be achieved in CAM clinical trials. In this respect, CAM trials involving human material are also hardly defensible from an ethical viewpoint.
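
    The core argument can be made concrete with Bayes' rule: the probability that a tested hypothesis is true given a significant result depends heavily on its prior probability. A sketch with assumed (illustrative) power and alpha values.

```python
def posterior_given_significant(prior, power=0.8, alpha=0.05):
    """P(H true | p < alpha) via Bayes' rule, assuming the stated power
    and type-I error rate (both values are illustrative assumptions)."""
    return (power * prior) / (power * prior + alpha * (1 - prior))

# An implausible CAM hypothesis (1% prior) versus a well-grounded one (50%):
print(round(posterior_given_significant(0.01), 3))  # ~0.139
print(round(posterior_given_significant(0.50), 3))  # ~0.941
```

    With a precarious prior, even a nominally significant trial leaves the hypothesis more likely false than true, which is exactly the invalidity the authors point to.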

  1. TU-H-CAMPUS-IeP3-01: Simultaneous PET Restoration and PET/CT Co-Segmentation Using a Variational Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, L; Tan, S; Lu, W

    Purpose: PET images are usually blurred due to the finite spatial resolution, while CT images suffer from low contrast. Segmenting a tumor from either a single PET or CT image is thus challenging. To make full use of the complementary information between PET and CT, we propose a novel variational method for simultaneous PET image restoration and PET/CT image co-segmentation. Methods: The proposed model was constructed based on the Γ-convergence approximation of the Mumford-Shah (MS) segmentation model for PET/CT co-segmentation. Moreover, a PET de-blur process was integrated into the MS model to improve the segmentation accuracy. An interaction edge constraint term over the two modalities was specially designed to share the complementary information. The energy functional was iteratively optimized using an alternate minimization (AM) algorithm. The performance of the proposed method was validated on ten lung cancer cases and five esophageal cancer cases. The ground truth was manually delineated by an experienced radiation oncologist using the complementary visual features of PET and CT. The segmentation accuracy was evaluated by the Dice similarity index (DSI) and volume error (VE). Results: The proposed method achieved an expected restoration result for the PET image and satisfactory segmentation results for both PET and CT images. For the lung cancer dataset, the average DSI (0.72) increased by 0.17 and 0.40 compared with single PET and CT segmentation. For the esophageal cancer dataset, the average DSI (0.85) increased by 0.07 and 0.43 compared with single PET and CT segmentation. Conclusion: The proposed method took full advantage of the complementary information from PET and CT images. This work was supported in part by the National Cancer Institute Grant R01CA172638. Shan Tan and Laquan Li were supported in part by the National Natural Science Foundation of China, under Grant Nos. 60971112 and 61375018.
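
    The Dice similarity index used for evaluation above has a simple definition: twice the overlap of two masks divided by the sum of their sizes. A minimal sketch on toy voxel sets (the masks are invented for illustration).

```python
def dice(mask_a, mask_b):
    """Dice similarity index between binary masks given as voxel sets:
    DSI = 2|A ∩ B| / (|A| + |B|), from 0 (disjoint) to 1 (identical)."""
    a, b = set(mask_a), set(mask_b)
    return 2 * len(a & b) / (len(a) + len(b))

truth = {(0, 0), (0, 1), (1, 0), (1, 1)}   # hypothetical expert contour
seg = {(0, 1), (1, 0), (1, 1), (2, 1)}     # hypothetical algorithm output
print(dice(truth, seg))  # 0.75: three of four voxels agree
```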

  2. Complementary functions of the two brain hemispheres: comparisons with earlier conceptions and implications for individual and society.

    PubMed

    Zeier, H

    1989-07-01

    The concept of different functions for the left and right cerebral hemispheres coincides in an astonishing way with earlier philosophical and psychological work which divided the human mind into two complementary functions without having a neurophysiological explanation. Representative are the ideas of Fichte, Hegel and Jung. The latter postulated the two subsystems Ego and Self and associated the conscious functions of the Ego with the intellect, the capacity for rational thought, and the Self with the mind, which also includes the emotional feelings. For the harmonic development and self-realization of man the functions of both systems in complementary interaction are required. Therefore, the current overaccentuation of the intellect and of progress directed technical-scientific thinking should be corrected by making better use of the much neglected functions of the right hemisphere.

  3. A Complementary Note to 'A Lag-1 Smoother Approach to System-Error Estimation': The Intrinsic Limitations of Residual Diagnostics

    NASA Technical Reports Server (NTRS)

    Todling, Ricardo

    2015-01-01

    Recently, this author studied an approach to the estimation of system error based on combining observation residuals derived from a sequential filter and fixed lag-1 smoother. While extending the methodology to a variational formulation, experimenting with simple models and making sure consistency was found between the sequential and variational formulations, the limitations of the residual-based approach came clearly to the surface. This note uses the sequential assimilation application to simple nonlinear dynamics to highlight the issue. Only when some of the underlying error statistics are assumed known is it possible to estimate the unknown component. In general, when considerable uncertainties exist in the underlying statistics as a whole, attempts to obtain separate estimates of the various error covariances are bound to lead to misrepresentation of errors. The conclusions are particularly relevant to present-day attempts to estimate observation-error correlations from observation residual statistics. A brief illustration of the issue is also provided by comparing estimates of error correlations derived from a quasi-operational assimilation system and a corresponding Observing System Simulation Experiments framework.

  4. Fusing metabolomics data sets with heterogeneous measurement errors

    PubMed Central

    Waaijenborg, Sandra; Korobko, Oksana; Willems van Dijk, Ko; Lips, Mirjam; Hankemeier, Thomas; Wilderjans, Tom F.; Smilde, Age K.

    2018-01-01

    Combining different metabolomics platforms can contribute significantly to the discovery of complementary processes expressed under different conditions. However, analysing the fused data might be hampered by the difference in their quality. In metabolomics data, one often observes that measurement errors increase with increasing measurement level and that different platforms have different measurement error variance. In this paper we compare three different approaches to correct for the measurement error heterogeneity, by transformation of the raw data, by weighted filtering before modelling and by a modelling approach using a weighted sum of residuals. For an illustration of these different approaches we analyse data from healthy obese and diabetic obese individuals, obtained from two metabolomics platforms. Concluding, the filtering and modelling approaches that both estimate a model of the measurement error did not outperform the data transformation approaches for this application. This is probably due to the limited difference in measurement error and the fact that estimation of measurement error models is unstable due to the small number of repeats available. A transformation of the data improves the classification of the two groups. PMID:29698490

  5. Triangle network motifs predict complexes by complementing high-error interactomes with structural information.

    PubMed

    Andreopoulos, Bill; Winter, Christof; Labudde, Dirk; Schroeder, Michael

    2009-06-27

    Many high-throughput studies produce protein-protein interaction networks (PPINs) with numerous errors and missing information. Even for genome-wide approaches, there is often a low overlap between PPINs produced by different studies. Second-level neighbors separated by two protein-protein interactions (PPIs) were previously used for predicting protein function and finding complexes in high-error PPINs. We retrieve second-level neighbors in PPINs, and complement these with structural domain-domain interactions (SDDIs) representing binding evidence on proteins, forming PPI-SDDI-PPI triangles. We find low overlap between PPINs, SDDIs and known complexes, all well below 10%. We evaluate the overlap of PPI-SDDI-PPI triangles with known complexes from the Munich Information Center for Protein Sequences (MIPS). PPI-SDDI-PPI triangles have ~20 times higher overlap with MIPS complexes than second-level neighbors in PPINs without SDDIs. The biological interpretation for triangles is that an SDDI causes two proteins to be observed with common interaction partners in high-throughput experiments. The relatively few SDDIs overlapping with PPINs are part of highly connected SDDI components, and are more likely to be detected in experimental studies. We demonstrate the utility of PPI-SDDI-PPI triangles by reconstructing myosin-actin processes in the nucleus, cytoplasm, and cytoskeleton, which were not obvious in the original PPIN. Using other complementary datatypes in place of SDDIs to form triangles, such as PubMed co-occurrences or threading information, results in a similar ability to find protein complexes. Given high-error PPINs with missing information, triangles of mixed datatypes are a promising direction for finding protein complexes. Integrating PPINs with SDDIs improves finding complexes. Structural SDDIs partially explain the high functional similarity of second-level neighbors in PPINs. We estimate that relatively little structural information would be sufficient for finding complexes involving most of the proteins and interactions in a typical PPIN.
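The triangle construction itself is simple to state in code. The sketch below uses toy, hypothetical edge sets (the real SDDI and PPIN data come from MIPS and the high-throughput studies): for each SDDI pair (a, b), any common PPI partner c closes a PPI-SDDI-PPI triangle.

```python
from collections import defaultdict

# Toy, hypothetical networks: proteins A and B share a structural
# domain-domain interaction (SDDI) and both interact with a common
# partner C in the noisy PPI network, so A and B are second-level
# neighbors "explained" by binding evidence.
ppi = {("A", "C"), ("B", "C"), ("D", "E")}
sddi = {("A", "B")}

neighbors = defaultdict(set)
for u, v in ppi:
    neighbors[u].add(v)
    neighbors[v].add(u)

def triangles(ppi_neighbors, sddi_edges):
    """Yield (a, b, c) where a-b is an SDDI and c is a shared PPI partner."""
    for a, b in sddi_edges:
        for c in ppi_neighbors[a] & ppi_neighbors[b]:
            yield (a, b, c)

print(list(triangles(neighbors, sddi)))  # [('A', 'B', 'C')]
```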

  6. Triangle network motifs predict complexes by complementing high-error interactomes with structural information

    PubMed Central

    Andreopoulos, Bill; Winter, Christof; Labudde, Dirk; Schroeder, Michael

    2009-01-01

    Background Many high-throughput studies produce protein-protein interaction networks (PPINs) with numerous errors and missing information. Even for genome-wide approaches, there is often a low overlap between PPINs produced by different studies. Second-level neighbors separated by two protein-protein interactions (PPIs) were previously used for predicting protein function and finding complexes in high-error PPINs. We retrieve second-level neighbors in PPINs, and complement these with structural domain-domain interactions (SDDIs) representing binding evidence on proteins, forming PPI-SDDI-PPI triangles. Results We find low overlap between PPINs, SDDIs and known complexes, all well below 10%. We evaluate the overlap of PPI-SDDI-PPI triangles with known complexes from the Munich Information Center for Protein Sequences (MIPS). PPI-SDDI-PPI triangles have ~20 times higher overlap with MIPS complexes than second-level neighbors in PPINs without SDDIs. The biological interpretation for triangles is that an SDDI causes two proteins to be observed with common interaction partners in high-throughput experiments. The relatively few SDDIs overlapping with PPINs are part of highly connected SDDI components, and are more likely to be detected in experimental studies. We demonstrate the utility of PPI-SDDI-PPI triangles by reconstructing myosin-actin processes in the nucleus, cytoplasm, and cytoskeleton, which were not obvious in the original PPIN. Using other complementary datatypes in place of SDDIs to form triangles, such as PubMed co-occurrences or threading information, results in a similar ability to find protein complexes. Conclusion Given high-error PPINs with missing information, triangles of mixed datatypes are a promising direction for finding protein complexes. Integrating PPINs with SDDIs improves finding complexes. Structural SDDIs partially explain the high functional similarity of second-level neighbors in PPINs. We estimate that relatively little structural information would be sufficient for finding complexes involving most of the proteins and interactions in a typical PPIN. PMID:19558694

  7. Differential Dopamine Release Dynamics in the Nucleus Accumbens Core and Shell Reveal Complementary Signals for Error Prediction and Incentive Motivation.

    PubMed

    Saddoris, Michael P; Cacciapaglia, Fabio; Wightman, R Mark; Carelli, Regina M

    2015-08-19

    Mesolimbic dopamine (DA) is phasically released during appetitive behaviors, though there is substantive disagreement about the specific purpose of these DA signals. For example, prediction error (PE) models suggest a role in learning, while incentive salience (IS) models argue that the DA signal imbues stimuli with value and thereby stimulates motivated behavior. However, within the nucleus accumbens (NAc), patterns of DA release can strikingly differ between subregions, and as such, it is possible that these patterns differentially contribute to aspects of PE and IS. To assess this, we measured DA release in subregions of the NAc during a behavioral task that spatiotemporally separated sequential goal-directed stimuli. Electrochemical methods were used to measure subsecond NAc dopamine release in the core and shell during a well-learned instrumental chain schedule in which rats were trained to press one lever (seeking; SL) to gain access to a second lever (taking; TL) linked with food delivery, and again during extinction. In the core, phasic DA release was greatest following initial SL presentation, but minimal for the subsequent TL and reward events. In contrast, phasic shell DA showed robust release at all task events. Signaling decreased between the beginning and end of sessions in the shell, but not the core. During extinction, peak DA release in the core showed a graded decrease for the SL and pauses in release during omitted expected rewards, whereas shell DA release decreased predominantly during the TL. These release dynamics suggest parallel DA signals capable of supporting distinct theories of appetitive behavior. Dopamine signaling in the brain is important for a variety of cognitive functions, such as learning and motivation. Typically, it is assumed that a single dopamine signal is sufficient to support these cognitive functions, though competing theories disagree on how dopamine contributes to reward-based behaviors. Here, we have found that real-time dopamine release within the nucleus accumbens (a primary target of midbrain dopamine neurons) strikingly varies between core and shell subregions. In the core, dopamine dynamics are consistent with learning-based theories (such as reward prediction error), whereas in the shell, dopamine is consistent with motivation-based theories (e.g., incentive salience). These findings demonstrate that dopamine plays multiple and complementary roles based on discrete circuits that help animals optimize rewarding behaviors. Copyright © 2015 the authors 0270-6474/15/3511572-11$15.00/0.

  8. Complementary Roles for Amygdala and Periaqueductal Gray in Temporal-Difference Fear Learning

    ERIC Educational Resources Information Center

    Cole, Sindy; McNally, Gavan P.

    2009-01-01

    Pavlovian fear conditioning is not a unitary process. At the neurobiological level multiple brain regions and neurotransmitters contribute to fear learning. At the behavioral level many variables contribute to fear learning including the physical salience of the events being learned about, the direction and magnitude of predictive error, and the…

  9. Nature, Nurture, and Attention Deficit Hyperactivity Disorder.

    ERIC Educational Resources Information Center

    Faraone, Stephen V.; Biederman, Joseph

    2000-01-01

    Comments on Joseph's review of the genetics of attention deficit disorder, demonstrating errors of scientific logic and oversight of relevant research in Joseph's argument. Argues for the validity of twin studies in supporting a genetic link for ADHD and for the complementary role of nature and nurture in the etiology of the disorder. (JPB)

  10. Improving the complementary methods to estimate evapotranspiration under diverse climatic and physical conditions

    NASA Astrophysics Data System (ADS)

    Anayah, F. M.; Kaluarachchi, J. J.

    2014-06-01

    Reliable estimation of evapotranspiration (ET) is important for water resources planning and management. Complementary methods, including complementary relationship areal evapotranspiration (CRAE), advection aridity (AA) and Granger and Gray (GG), have been used to estimate ET because these methods are simple and practical, estimating regional ET from meteorological data only. However, prior studies have found limitations in these methods, especially in contrasting climates. This study aims to develop a calibration-free universal method using the complementary relationships to compute regional ET under contrasting climatic and physical conditions with meteorological data only. The proposed methodology consists of a systematic sensitivity analysis using the existing complementary methods. This work used 34 global FLUXNET sites where eddy covariance (EC) fluxes of ET are available for validation. A total of 33 alternative model variations from the original complementary methods were proposed. Further analysis using statistical methods and simplified climatic class definitions produced one distinctly improved GG-model-based alternative. The proposed model produced a single-step ET formulation with results equal to or better than those of recent studies using data-intensive, classical methods. Average root mean square error (RMSE), mean absolute bias (BIAS) and coefficient of determination (R²) across the 34 global sites were 20.57 mm month⁻¹, 10.55 mm month⁻¹ and 0.64, respectively. The proposed model is a step toward predicting ET in large river basins with limited data and no need for calibration.
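The reported skill scores are the standard definitions; a minimal sketch with illustrative numbers (not the study's data) shows how they are computed, reading BIAS as the absolute value of the mean error (the paper's exact convention may differ):

```python
import math

def rmse(obs, sim):
    """Root mean square error."""
    return math.sqrt(sum((s - o) ** 2 for o, s in zip(obs, sim)) / len(obs))

def bias(obs, sim):
    """Absolute value of the mean error (one reading of 'mean absolute bias')."""
    return abs(sum(s - o for o, s in zip(obs, sim)) / len(obs))

def r2(obs, sim):
    """Coefficient of determination."""
    m = sum(obs) / len(obs)
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - m) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

obs = [60.0, 80.0, 120.0, 90.0]   # illustrative observed EC fluxes, mm/month
sim = [55.0, 85.0, 110.0, 95.0]   # illustrative model estimates, mm/month

print(rmse(obs, sim), bias(obs, sim), r2(obs, sim))
```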

  11. Application of Molecular Dynamics Simulations in Molecular Property Prediction I: Density and Heat of Vaporization

    PubMed Central

    Wang, Junmei; Hou, Tingjun

    2011-01-01

    Molecular mechanical force field (FF) methods are useful in studying condensed phase properties. They are complementary to experiment and can often go beyond experiment in atomic details. Even if a FF is designed specifically for studying the structures, dynamics, and functions of biomolecules, it is still important for the FF to accurately reproduce the experimental liquid properties of small molecules that represent the chemical moieties of biomolecules. Otherwise, the force field may not describe the structures and energies of macromolecules in aqueous solutions properly. In this work, we have carried out a systematic study to evaluate the General AMBER Force Field (GAFF) in studying densities and heats of vaporization for a large set of organic molecules that covers the most common chemical functional groups. The latest techniques, such as the particle mesh Ewald (PME) for calculating electrostatic energies, and Langevin dynamics for scaling temperatures, have been applied in the molecular dynamics (MD) simulations. For density, the average percent error (APE) of 71 organic compounds is 4.43% when compared to the experimental values. More encouragingly, the APE drops to 3.43% after the exclusion of two outliers and four other compounds for which the experimental densities have been measured at pressures higher than 1.0 atm. For heat of vaporization, several protocols have been investigated and the best one, P4/ntt0, achieves an average unsigned error (AUE) and a root-mean-square error (RMSE) of 0.93 and 1.20 kcal/mol, respectively. How to reduce the prediction errors through proper van der Waals (vdW) parameterization is discussed. An encouraging finding in vdW parameterization is that both densities and heats of vaporization approach their "ideal" values in a synchronous fashion when vdW parameters are tuned. The subsequent hydration free energy calculation using thermodynamic integration further justifies the vdW refinement. We conclude that simple vdW parameterization can significantly reduce the prediction errors. We believe that GAFF can greatly improve its performance in predicting liquid properties of organic molecules after a systematic vdW parameterization, which will be reported in a separate paper. PMID:21857814

  12. Improving inflow forecasting into hydropower reservoirs through a complementary modelling framework

    NASA Astrophysics Data System (ADS)

    Gragne, A. S.; Sharma, A.; Mehrotra, R.; Alfredsen, K.

    2014-10-01

    The accuracy of reservoir inflow forecasts is instrumental for maximizing the value of water resources and the benefits gained through hydropower generation. Improving hourly reservoir inflow forecasts over a 24 h lead-time is considered within the day-ahead (Elspot) market of the Nordic exchange market. We present here a new approach for issuing hourly reservoir inflow forecasts that aims to improve on existing forecasting models that are in place operationally, without needing to modify the pre-existing approach; instead, it formulates an additive or complementary model that is independent and captures the structure the existing model may be missing. Besides improving the forecast skill of operational models, the approach estimates the uncertainty in the complementary model structure and produces probabilistic inflow forecasts that carry information suitable for reducing uncertainty in decision-making in hydropower systems operation. The procedure comprises an error model added on top of an unalterable constant-parameter conceptual model, demonstrated with reference to the 207 km² Krinsvatn catchment in central Norway. The structure of the error model is established based on attributes of the residual time series from the conceptual model. Deterministic and probabilistic evaluations revealed an overall significant improvement in forecast accuracy for lead-times up to 17 h. Season-based evaluations indicated that the improvement in inflow forecasts varies across seasons, and inflow forecasts in autumn and spring are less successful, with the 95% prediction interval bracketing less than 95% of the observations for lead-times beyond 17 h.
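The paper derives its error-model structure from the residual time series itself; as a stand-in, a minimal sketch with a hypothetical AR(1) error model illustrates the additive idea: leave the conceptual model untouched, fit a model to its past residuals, and add the predicted error to the base forecast.

```python
# Minimal sketch of the complementary-modelling idea (not the paper's
# actual model): an unalterable base model produces a forecast, and an
# additive AR(1) error model, fitted to the base model's past residuals,
# corrects it.
def fit_ar1(residuals):
    """Least-squares AR(1) coefficient for e_t ~ phi * e_{t-1}."""
    num = sum(residuals[t] * residuals[t - 1] for t in range(1, len(residuals)))
    den = sum(e * e for e in residuals[:-1])
    return num / den

# Synthetic residual history that is exactly AR(1), so the sketch is clean.
phi_true = 0.8
resid = [1.0]
for _ in range(50):
    resid.append(phi_true * resid[-1])

phi = fit_ar1(resid)

# One-step-ahead complementary forecast: base forecast plus predicted error.
base_forecast = 100.0  # hypothetical inflow from the conceptual model
corrected = base_forecast + phi * resid[-1]
print(phi, corrected)
```

In the real setting the residuals are noisy, and the fitted error model also yields a predictive distribution, which is what makes the probabilistic forecasts possible.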

  13. Complementary aspects of diffusion imaging and fMRI; I: structure and function.

    PubMed

    Mulkern, Robert V; Davis, Peter E; Haker, Steven J; Estepar, Raul San Jose; Panych, Lawrence P; Maier, Stephan E; Rivkin, Michael J

    2006-05-01

    Studying the intersection of brain structure and function is an important aspect of modern neuroscience. The development of magnetic resonance imaging (MRI) over the last 25 years has provided new and powerful tools for the study of brain structure and function. Two tools in particular, diffusion imaging and functional MRI (fMRI), are playing increasingly important roles in elucidating the complementary aspects of brain structure and function. In this work, we review basic technical features of diffusion imaging and fMRI for studying the integrity of white matter structural components and for determining the location and extent of cortical activation in gray matter, respectively. We then review a growing body of literature in which the complementary aspects of diffusion imaging and fMRI, applied as separate examinations but analyzed in tandem, have been exploited to enhance our knowledge of brain structure and function.

  14. Satellite altimetric measurements of the ocean. Report of the TOPEX Science Working Group

    NASA Technical Reports Server (NTRS)

    Stewart, R.

    1981-01-01

    The scientific usefulness of satellite measurements of ocean topography for the study of ocean circulation was investigated. The following topics were studied: (1) scientific problems which use altimetric measurements of ocean topography; (2) the extent in which in situ measurements are complementary or required; (3) accuracy, precision, and spatial and temporal resolutions which are required of the topographic measurements; (4) errors associated with measurement techniques; and (5) influences of these errors on scientific problems. An operational system for measuring ocean topography, was defined and the cost of conducting such a topographic experiment, was estimated.

  15. Differential wide temperature range CMOS interface circuit for capacitive MEMS pressure sensors.

    PubMed

    Wang, Yucai; Chodavarapu, Vamsy P

    2015-02-12

    We describe a Complementary Metal-Oxide Semiconductor (CMOS) differential interface circuit for capacitive Micro-Electro-Mechanical Systems (MEMS) pressure sensors that is functional over a wide temperature range between -55 °C and 225 °C. The circuit is implemented using IBM 0.13 μm CMOS technology with 2.5 V power supply. A constant-gm biasing technique is used to mitigate performance degradation at high temperatures. The circuit offers the flexibility to interface with MEMS sensors with a wide range of the steady-state capacitance values from 0.5 pF to 10 pF. Simulation results show that the circuitry has excellent linearity and stability over the wide temperature range. Experimental results confirm that the temperature effects on the circuitry are small, with an overall linearity error around 2%.

  16. Differential Wide Temperature Range CMOS Interface Circuit for Capacitive MEMS Pressure Sensors

    PubMed Central

    Wang, Yucai; Chodavarapu, Vamsy P.

    2015-01-01

    We describe a Complementary Metal-Oxide Semiconductor (CMOS) differential interface circuit for capacitive Micro-Electro-Mechanical Systems (MEMS) pressure sensors that is functional over a wide temperature range between −55 °C and 225 °C. The circuit is implemented using IBM 0.13 μm CMOS technology with 2.5 V power supply. A constant-gm biasing technique is used to mitigate performance degradation at high temperatures. The circuit offers the flexibility to interface with MEMS sensors with a wide range of the steady-state capacitance values from 0.5 pF to 10 pF. Simulation results show that the circuitry has excellent linearity and stability over the wide temperature range. Experimental results confirm that the temperature effects on the circuitry are small, with an overall linearity error around 2%. PMID:25686312

  17. Photonics-based microwave frequency measurement using a double-sideband suppressed-carrier modulation and an InP integrated ring-assisted Mach-Zehnder interferometer filter.

    PubMed

    Fandiño, Javier S; Muñoz, Pascual

    2013-11-01

    A photonic system capable of estimating the unknown frequency of a CW microwave tone is presented. The core of the system is a complementary optical filter monolithically integrated in InP, consisting of a ring-assisted Mach-Zehnder interferometer with a second-order elliptic response. By simultaneously measuring the different optical powers produced by a double-sideband suppressed-carrier modulation at the outputs of the photonic integrated circuit, an amplitude comparison function that depends on the input tone frequency is obtained. Using this technique, a frequency measurement range of 10 GHz (5-15 GHz) with a root mean square value of frequency error lower than 200 MHz is experimentally demonstrated. Moreover, simulations showing the impact of a residual optical carrier on system performance are also provided.

  18. The mirror neuron system is more active during complementary compared with imitative action.

    PubMed

    Newman-Norlund, Roger D; van Schie, Hein T; van Zuijlen, Alexander M J; Bekkering, Harold

    2007-07-01

    We assessed the role of the human mirror neuron system (MNS) in complementary actions using functional magnetic resonance imaging while participants prepared to execute imitative or complementary actions. The BOLD signal in the right inferior frontal gyrus and bilateral inferior parietal lobes was greater during preparation of complementary than during imitative actions, suggesting that the MNS may be essential in dynamically coupling action observation to action execution.

  19. How to compute isomerization energies of organic molecules with quantum chemical methods.

    PubMed

    Grimme, Stefan; Steinmetz, Marc; Korth, Martin

    2007-03-16

    The reaction energies for 34 typical organic isomerizations including oxygen and nitrogen heteroatoms are investigated with modern quantum chemical methods that have the perspective of also being applicable to large systems. The experimental reaction enthalpies are corrected for vibrational and thermal effects, and the thus derived "experimental" reaction energies are compared to corresponding theoretical data. A series of standard AO basis sets in combination with second-order perturbation theory (MP2, SCS-MP2), conventional density functionals (e.g., PBE, TPSS, B3-LYP, MPW1K, BMK), and new perturbative functionals (B2-PLYP, mPW2-PLYP) are tested. In three cases, obvious errors of the experimental values could be detected, and accurate coupled-cluster [CCSD(T)] reference values have been used instead. It is found that only triple-zeta quality AO basis sets provide results close enough to the basis set limit and that sets like the popular 6-31G(d) should be avoided in accurate work. Augmentation of small basis sets with diffuse functions has a notable effect in B3-LYP calculations that is attributed to intramolecular basis set superposition error and covers basic deficiencies of the functional. The new methods based on perturbation theory (SCS-MP2, X2-PLYP) are found to be clearly superior to many other approaches; that is, they provide mean absolute deviations of less than 1.2 kcal mol-1 and only a few (<10%) outliers. The best performance in the group of conventional functionals is found for the highly parametrized BMK hybrid meta-GGA. Contrary to accepted opinion, hybrid density functionals offer no real advantage over simple GGAs. For reasonably large AO basis sets, results of poor quality are obtained with the popular B3-LYP functional that cannot be recommended for thermochemical applications in organic chemistry. 
The results of this study are complementary to often used benchmarks based on atomization energies and should guide chemists in their search for accurate and efficient computational thermochemistry methods.

  20. A Golay complementary TS-based symbol synchronization scheme in variable rate LDPC-coded MB-OFDM UWBoF system

    NASA Astrophysics Data System (ADS)

    He, Jing; Wen, Xuejie; Chen, Ming; Chen, Lin

    2015-09-01

    In this paper, a Golay complementary training sequence (TS)-based symbol synchronization scheme is proposed and experimentally demonstrated in a multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband over fiber (UWBoF) system with a variable-rate low-density parity-check (LDPC) code. Meanwhile, the coding gain and spectral efficiency of the variable-rate LDPC-coded MB-OFDM UWBoF system are investigated. By utilizing the non-periodic auto-correlation property of the Golay complementary pair, the start point of the LDPC-coded MB-OFDM UWB signal can be estimated accurately. After 100 km standard single-mode fiber (SSMF) transmission, at a bit error rate of 1×10⁻³, the experimental results show that the short block length 64QAM-LDPC coding provides a coding gain of 4.5 dB, 3.8 dB and 2.9 dB for code rates of 62.5%, 75% and 87.5%, respectively.
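The non-periodic (aperiodic) auto-correlation property being exploited is that, for a Golay complementary pair, the two sequences' autocorrelation sidelobes cancel exactly at every non-zero lag, leaving a single sharp peak that marks the symbol start. A minimal check with a length-4 pair:

```python
def acorr(seq, k):
    """Aperiodic autocorrelation of seq at lag k."""
    return sum(seq[i] * seq[i + k] for i in range(len(seq) - k))

# A length-4 Golay complementary pair of +/-1 sequences.
a = [1, 1, 1, -1]
b = [1, 1, -1, 1]

# Sidelobes cancel at every non-zero lag ...
for k in range(1, len(a)):
    assert acorr(a, k) + acorr(b, k) == 0

# ... leaving a single peak of height 2N at lag 0, which a correlator in
# the receiver can use to locate the start of the symbol.
print(acorr(a, 0) + acorr(b, 0))  # 8
```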

  1. A general transfer-function approach to noise filtering in open-loop quantum control

    NASA Astrophysics Data System (ADS)

    Viola, Lorenza

    2015-03-01

    Hamiltonian engineering via unitary open-loop quantum control provides a versatile and experimentally validated framework for manipulating a broad class of non-Markovian open quantum systems of interest, with applications ranging from dynamical decoupling and dynamically corrected quantum gates, to noise spectroscopy and quantum simulation. In this context, transfer-function techniques directly motivated by control engineering have proved invaluable for obtaining a transparent picture of the controlled dynamics in the frequency domain and for quantitatively analyzing performance. In this talk, I will show how to identify a computationally tractable set of ``fundamental filter functions,'' out of which arbitrary filter functions may in principle be assembled up to arbitrarily high order. Besides avoiding the infinite recursive hierarchy of filter functions that arises in general control scenarios, this fundamental set suffices to characterize the error suppression capabilities of the control protocol in both the time and frequency domain. I will show, in particular, how the resulting notion of ``filtering order'' reveals conceptually distinct, albeit complementary, features of the controlled dynamics as compared to the ``cancellation order,'' traditionally defined in the Magnus sense. Implications for current quantum control experiments will be discussed. Work supported by the U.S. Army Research Office under Contract No. W911NF-14-1-0682.

  2. Suppressing relaxation in superconducting qubits by quasiparticle pumping.

    PubMed

    Gustavsson, Simon; Yan, Fei; Catelani, Gianluigi; Bylander, Jonas; Kamal, Archana; Birenbaum, Jeffrey; Hover, David; Rosenberg, Danna; Samach, Gabriel; Sears, Adam P; Weber, Steven J; Yoder, Jonilyn L; Clarke, John; Kerman, Andrew J; Yoshihara, Fumiki; Nakamura, Yasunobu; Orlando, Terry P; Oliver, William D

    2016-12-23

    Dynamical error suppression techniques are commonly used to improve coherence in quantum systems. They reduce dephasing errors by applying control pulses designed to reverse erroneous coherent evolution driven by environmental noise. However, such methods cannot correct for irreversible processes such as energy relaxation. We investigate a complementary, stochastic approach to reducing errors: Instead of deterministically reversing the unwanted qubit evolution, we use control pulses to shape the noise environment dynamically. In the context of superconducting qubits, we implement a pumping sequence to reduce the number of unpaired electrons (quasiparticles) in close proximity to the device. A 70% reduction in the quasiparticle density results in a threefold enhancement in qubit relaxation times and a comparable reduction in coherence variability. Copyright © 2016, American Association for the Advancement of Science.

  3. DNA Repair Mechanisms and the Bypass of DNA Damage in Saccharomyces cerevisiae

    PubMed Central

    Boiteux, Serge; Jinks-Robertson, Sue

    2013-01-01

    DNA repair mechanisms are critical for maintaining the integrity of genomic DNA, and their loss is associated with cancer predisposition syndromes. Studies in Saccharomyces cerevisiae have played a central role in elucidating the highly conserved mechanisms that promote eukaryotic genome stability. This review will focus on repair mechanisms that involve excision of a single strand from duplex DNA with the intact, complementary strand serving as a template to fill the resulting gap. These mechanisms are of two general types: those that remove damage from DNA and those that repair errors made during DNA synthesis. The major DNA-damage repair pathways are base excision repair and nucleotide excision repair, which, in the most simple terms, are distinguished by the extent of single-strand DNA removed together with the lesion. Mistakes made by DNA polymerases are corrected by the mismatch repair pathway, which also corrects mismatches generated when single strands of non-identical duplexes are exchanged during homologous recombination. In addition to the true repair pathways, the postreplication repair pathway allows lesions or structural aberrations that block replicative DNA polymerases to be tolerated. There are two bypass mechanisms: an error-free mechanism that involves a switch to an undamaged template for synthesis past the lesion and an error-prone mechanism that utilizes specialized translesion synthesis DNA polymerases to directly synthesize DNA across the lesion. A high level of functional redundancy exists among the pathways that deal with lesions, which minimizes the detrimental effects of endogenous and exogenous DNA damage. PMID:23547164

  4. Investigating Systematic Errors of the Interstellar Flow Longitude Derived from the Pickup Ion Cutoff

    NASA Astrophysics Data System (ADS)

    Taut, A.; Berger, L.; Drews, C.; Bower, J.; Keilbach, D.; Lee, M. A.; Moebius, E.; Wimmer-Schweingruber, R. F.

    2017-12-01

    Complementary to the direct neutral particle measurements performed by e.g. IBEX, the measurement of PickUp Ions (PUIs) constitutes a diagnostic tool to investigate the local interstellar medium. PUIs are former neutral particles that have been ionized in the inner heliosphere. Subsequently, they are picked up by the solar wind and its frozen-in magnetic field. Due to this process, a characteristic Velocity Distribution Function (VDF) with a sharp cutoff evolves, which carries information about the PUI's injection speed and thus the former neutral particle velocity. The symmetry of the injection speed about the interstellar flow vector is used to derive the interstellar flow longitude from PUI measurements. Using He PUI data obtained by the PLASTIC sensor on STEREO A, we investigate how this concept may be affected by systematic errors. The PUI VDF strongly depends on the orientation of the local interplanetary magnetic field. Recently injected PUIs with speeds just below the cutoff speed typically form a highly anisotropic torus distribution in velocity space, which leads to longitudinal transport for certain magnetic field orientations. Therefore, we investigate how the selection of magnetic field configurations in the data affects the result for the interstellar flow longitude that we derive from the PUI cutoff. Indeed, we find that the results follow a systematic trend with the filtered magnetic field angles that can shift the result by up to 5°. In turn, this means that every value for the interstellar flow longitude derived from the PUI cutoff is affected by a systematic error depending on the utilized magnetic field orientations. Here, we present our observations, discuss possible reasons for the systematic trend we discovered, and indicate selections that may minimize the systematic errors.

  5. Convergence in parameters and predictions using computational experimental design.

    PubMed

    Hagen, David R; White, Jacob K; Tidor, Bruce

    2013-08-06

    Typically, biological models fitted to experimental data suffer from significant parameter uncertainty, which can lead to inaccurate or uncertain predictions. One school of thought holds that accurate estimation of the true parameters of a biological system is inherently problematic. Recent work, however, suggests that optimal experimental design techniques can select sets of experiments whose members probe complementary aspects of a biochemical network that together can account for its full behaviour. Here, we implemented an experimental design approach for selecting sets of experiments that constrain parameter uncertainty. We demonstrated with a model of the epidermal growth factor-nerve growth factor pathway that, after synthetically performing a handful of optimal experiments, the uncertainty in all 48 parameters converged below 10 per cent. Furthermore, the fitted parameters converged to their true values with a small error consistent with the residual uncertainty. When untested experimental conditions were simulated with the fitted models, the predicted species concentrations converged to their true values with errors that were consistent with the residual uncertainty. This paper suggests that accurate parameter estimation is achievable with complementary experiments specifically designed for the task, and that the resulting parametrized models are capable of accurate predictions.

  6. Measuring upper limb function in children with hemiparesis with 3D inertial sensors.

    PubMed

    Newman, Christopher J; Bruchez, Roselyn; Roches, Sylvie; Jequier Gygax, Marine; Duc, Cyntia; Dadashi, Farzin; Massé, Fabien; Aminian, Kamiar

    2017-12-01

    Upper limb assessments in children with hemiparesis rely on clinical measurements, which despite standardization are prone to error. Recently, 3D movement analysis using optoelectronic setups has been used to measure upper limb movement, but generalization is hindered by time and cost. Body worn inertial sensors may provide a simple, cost-effective alternative. We instrumented a subset of 30 participants in a mirror therapy clinical trial at baseline, post-treatment, and follow-up clinical assessments, with wireless inertial sensors positioned on the arms and trunk to monitor motion during reaching tasks. Inertial sensor measurements distinguished paretic and non-paretic limbs with significant differences (P < 0.01) in movement duration, power, range of angular velocity, elevation, and smoothness (normalized jerk index and spectral arc length). Inertial sensor measurements correlated with functional clinical tests (Melbourne Assessment 2); movement duration and complexity (Higuchi fractal dimension) showed moderate to strong negative correlations with clinical measures of amplitude, accuracy, and fluency. Inertial sensor measurements reliably identify paresis and correlate with clinical measurements; they can therefore provide a complementary dimension of assessment in clinical practice and during clinical trials aimed at improving upper limb function.
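    The jerk-based smoothness measures mentioned above can be sketched as follows. The normalization convention and the synthetic velocity traces are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np

def normalized_jerk(velocity, dt):
    """Dimensionless jerk-based smoothness index: the squared-jerk integral
    scaled by movement duration and path length (one common convention)."""
    accel = np.gradient(velocity, dt)
    jerk = np.gradient(accel, dt)
    duration = dt * (len(velocity) - 1)
    length = np.sum(np.abs(velocity)) * dt
    return np.sqrt(0.5 * np.sum(jerk**2) * dt * duration**5 / length**2)

dt = 0.01
t = np.arange(0.0, 1.0 + dt, dt)
smooth = 30 * t**2 * (1 - t)**2                  # minimum-jerk speed profile
shaky = smooth + 0.05 * np.sin(40 * np.pi * t)   # same reach with tremor
print(normalized_jerk(smooth, dt), normalized_jerk(shaky, dt))
```

    A tremulous reach scores a much larger normalized jerk than a smooth one, which is why such indices can separate paretic from non-paretic limbs.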

  7. A Note on Complementary Medicines

    MedlinePlus

    ... Herbal supplements, meditation, chiropractic manipulation, and acupuncture are types of complementary and alternative medicine (CAM) ... effective. For example, NCCAM studies have shown that: Acupuncture can provide pain relief and improve function for ...

  8. PET guidance for liver radiofrequency ablation: an evaluation

    NASA Astrophysics Data System (ADS)

    Lei, Peng; Dandekar, Omkar; Mahmoud, Faaiza; Widlus, David; Malloy, Patrick; Shekhar, Raj

    2007-03-01

    Radiofrequency ablation (RFA) is emerging as the primary mode of treatment of unresectable malignant liver tumors. With current intraoperative imaging modalities, quick, precise, and complete localization of lesions remains a challenge for liver RFA. Fusion of intraoperative CT and preoperative PET images, which relies on PET and CT registration, can produce a new image with complementary metabolic and anatomic data and thus greatly improve the targeting accuracy. Unlike neurological images, alignment of abdominal images by a combined PET/CT scanner is prone to errors as a result of large nonrigid misalignment in abdominal images. Our use of a normalized mutual information-based 3D nonrigid registration technique has proven powerful for whole-body PET and CT registration. We demonstrate here that this technique is capable of acceptable abdominal PET and CT registration as well. In five clinical cases, both qualitative and quantitative validation showed that the registration is robust and accurate. Quantitative accuracy was evaluated by comparing the algorithm's results with assessments by clinical experts. The registration error is well within the allowable margin for liver RFA. Study findings show the technique's potential to enable the augmentation of intraoperative CT with preoperative PET to reduce procedure time, avoid repeated procedures, provide clinicians with complementary functional/anatomic maps, avoid omitting dispersed small lesions, and improve the accuracy of tumor targeting in liver RFA.

  9. Synergistic Allocation of Flight Expertise on the Flight Deck (SAFEdeck): A Design Concept to Combat Mode Confusion, Complacency, and Skill Loss in the Flight Deck

    NASA Technical Reports Server (NTRS)

    Schutte, Paul; Goodrich, Kenneth; Williams, Ralph

    2016-01-01

    This paper presents a new design and function allocation philosophy between pilots and automation that seeks to support the human in mitigating innate weaknesses (e.g., memory, vigilance) while enhancing their strengths (e.g., adaptability, resourcefulness). In this new allocation strategy, called Synergistic Allocation of Flight Expertise in the Flight Deck (SAFEdeck), the automation and the human provide complementary support and backup for each other. Automation is designed to be compliant with the practices of Crew Resource Management. The human takes a more active role in the normal operation of the aircraft without adversely increasing workload over the current automation paradigm. This designed involvement encourages the pilot to be engaged and ready to respond to unexpected situations. As such, the human may be less prone to error than the current automation paradigm.

  10. Precision Pointing Control System (PPCS) system design and analysis. [for gimbaled experiment platforms

    NASA Technical Reports Server (NTRS)

    Frew, A. M.; Eisenhut, D. F.; Farrenkopf, R. L.; Gates, R. F.; Iwens, R. P.; Kirby, D. K.; Mann, R. J.; Spencer, D. J.; Tsou, H. S.; Zaremba, J. G.

    1972-01-01

    The precision pointing control system (PPCS) is an integrated system for precision attitude determination and orientation of gimbaled experiment platforms. The PPCS concept configures the system to perform orientation of up to six independent gimbaled experiment platforms to design goal accuracy of 0.001 degrees, and to operate in conjunction with a three-axis stabilized earth-oriented spacecraft in orbits ranging from low altitude (200-2500 n.m., sun synchronous) to 24 hour geosynchronous, with a design goal life of 3 to 5 years. The system comprises two complementary functions: (1) attitude determination where the attitude of a defined set of body-fixed reference axes is determined relative to a known set of reference axes fixed in inertial space; and (2) pointing control where gimbal orientation is controlled, open-loop (without use of payload error/feedback) with respect to a defined set of body-fixed reference axes to produce pointing to a desired target.

  11. Risk-Taking and the Feedback Negativity Response to Loss among At-Risk Adolescents

    PubMed Central

    Crowley, Michael J.; Wu, Jia; Crutcher, Clifford; Bailey, Christopher A.; Lejuez, C.W.; Mayes, Linda C.

    2009-01-01

    Event-related brain potentials were examined in 32 adolescents (50% female) from a high-risk sample, who were exposed to cocaine and other drugs prenatally. Adolescents were selected for extreme high- or low-risk behavior on the Balloon Analog Risk Task, a measure of real-world risk-taking propensity. The feedback error-related negativity (fERN), an event-related potential (ERP) that occurs when an expected reward does not occur, was examined in a game in which choices lead to monetary gains and losses with feedback delayed 1 or 2 s. The fERN was clearly visible in the fronto-central scalp region in this adolescent sample. Feedback type, feedback delay, risk status, and sex were all associated with fERN variability. Monetary feedback also elicited a P300-like component, moderated by delay and sex. Delaying reward feedback may provide a means for studying complementary functioning of dopamine and norepinephrine systems. PMID:19372694

  12. Segment density profiles of polyelectrolyte brushes determined by Fourier transform ellipsometry

    NASA Astrophysics Data System (ADS)

    Biesalski, Markus; Rühe, Jürgen; Johannsmann, Diethelm

    1999-10-01

    We describe a method for the explicit determination of the segment density profile φ(z) of surface-attached polymer brushes with multiple angle of incidence null-ellipsometry. Because the refractive index contrast between the brush layer and the solvent is weak, multiple reflections are of minor influence and the ellipsometric spectrum is closely related to the Fourier transform of the refractive index profile, thereby allowing for explicit inversion of the ellipsometric data. We chose surface-attached monolayers of polymethacrylic acid (PMAA), a weak polyelectrolyte, as a model system and determined the segment density profile of this system as a function of the pH value of the surrounding medium by the Fourier method. Complementary to the Fourier analysis, fits with error functions are given as well. The brushes were prepared on the bases of high refractive index prisms with the "grafting-from" technique. In water, the brushes swell by more than a factor of 30. The swelling increases with increasing pH because of a growing fraction of dissociated acidic groups leading to a larger electrostatic repulsion.
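    The error-function fits mentioned as a complement to the Fourier analysis can be sketched by least-squares fitting an erfc-shaped profile to noisy synthetic data; the profile parameters and length scales below are hypothetical, not the paper's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

# Hypothetical brush segment-density profile: a plateau phi0 that decays
# over a width w around z0, modeled with the complementary error function.
def erfc_profile(z, phi0, z0, w):
    return 0.5 * phi0 * erfc((z - z0) / w)

rng = np.random.default_rng(1)
z = np.linspace(0, 200, 120)                 # nm, assumed scale
true = erfc_profile(z, 0.30, 80.0, 25.0)
noisy = true + rng.normal(0, 0.005, z.size)

popt, _ = curve_fit(erfc_profile, z, noisy, p0=(0.2, 100.0, 10.0))
print(popt)  # recovered (phi0, z0, w)
```

    The fitted width w then characterizes how sharply the brush/solvent interface decays, e.g. as a function of pH.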

  13. Statistics of Dark Matter Halos from Gravitational Lensing.

    PubMed

    Jain; Van Waerbeke L

    2000-02-10

    We present a new approach to measure the mass function of dark matter halos and to discriminate models with differing values of Omega through weak gravitational lensing. We measure the distribution of peaks from simulated lensing surveys and show that the lensing signal due to dark matter halos can be detected for a wide range of peak heights. Even when the signal-to-noise ratio is well below the limit for detection of individual halos, projected halo statistics can be constrained for halo masses spanning galactic to cluster halos. The use of peak statistics relies on an analytical model of the noise due to the intrinsic ellipticities of source galaxies. The noise model has been shown to accurately describe simulated data for a variety of input ellipticity distributions. We show that the measured peak distribution has distinct signatures of gravitational lensing, and its non-Gaussian shape can be used to distinguish models with different values of Omega. The use of peak statistics is complementary to the measurement of field statistics, such as the ellipticity correlation function, and may not be susceptible to the same systematic errors.

  14. Quantum subsystems: Exploring the complementarity of quantum privacy and error correction

    NASA Astrophysics Data System (ADS)

    Jochym-O'Connor, Tomas; Kribs, David W.; Laflamme, Raymond; Plosker, Sarah

    2014-09-01

    This paper addresses and expands on the contents of the recent Letter [Phys. Rev. Lett. 111, 030502 (2013), 10.1103/PhysRevLett.111.030502] discussing private quantum subsystems. Here we prove several previously presented results, including a condition for a given random unitary channel to not have a private subspace (although this does not mean that private communication cannot occur, as was previously demonstrated via private subsystems) and algebraic conditions that characterize when a general quantum subsystem or subspace code is private for a quantum channel. These conditions can be regarded as the private analog of the Knill-Laflamme conditions for quantum error correction, and we explore how the conditions simplify in some special cases. The bridge between quantum cryptography and quantum error correction provided by complementary quantum channels motivates the study of a new, more general definition of quantum error-correcting code, and we initiate this study here. We also consider the concept of complementarity for the general notion of a private quantum subsystem.

  15. Application of chemical reaction mechanistic domains to an ecotoxicity QSAR model, the KAshinhou Tool for Ecotoxicity (KATE).

    PubMed

    Furuhama, A; Hasunuma, K; Aoki, Y; Yoshioka, Y; Shiraishi, H

    2011-01-01

    The validity of chemical reaction mechanistic domains defined by skin sensitisation in the Quantitative Structure-Activity Relationship (QSAR) ecotoxicity system, the KAshinhou Tool for Ecotoxicity (KATE), March 2009 version, has been assessed and an external validation of the current KATE system carried out. In the case of the fish end-point, the group of chemicals with substructures reactive to skin sensitisation always exhibited higher root mean square errors (RMSEs) than chemicals without reactive substructures under identical C- or log P-judgements in KATE. However, in the case of the Daphnia end-point this was not so, and the group of chemicals with reactive substructures did not always have higher RMSEs: the Schiff base mechanism did not function as a high error detector. In addition to the RMSE findings, the presence of outliers suggested that the KATE classification rules need to be reconsidered, particularly for the amine group. Examination of the dependency of the organism on the toxic action of chemicals in fish and Daphnia revealed that some of the reactive substructures could be applied to the improvement of the KATE system. It was concluded that the reaction mechanistic domains of toxic action for skin sensitisation could provide useful complementary information in predicting acute aquatic ecotoxicity, especially at the fish end-point.

  16. Use of machine learning methods to reduce predictive error of groundwater models.

    PubMed

    Xu, Tianfang; Valocchi, Albert J; Choi, Jaesik; Amir, Eyal

    2014-01-01

    Quantitative analyses of groundwater flow and transport typically rely on a physically-based model, which is inherently subject to error. Errors in model structure, parameter and data lead to both random and systematic error even in the output of a calibrated model. We develop complementary data-driven models (DDMs) to reduce the predictive error of physically-based groundwater models. Two machine learning techniques, the instance-based weighting and support vector regression, are used to build the DDMs. This approach is illustrated using two real-world case studies of the Republican River Compact Administration model and the Spokane Valley-Rathdrum Prairie model. The two groundwater models have different hydrogeologic settings, parameterization, and calibration methods. In the first case study, cluster analysis is introduced for data preprocessing to make the DDMs more robust and computationally efficient. The DDMs reduce the root-mean-square error (RMSE) of the temporal, spatial, and spatiotemporal prediction of piezometric head of the groundwater model by 82%, 60%, and 48%, respectively. In the second case study, the DDMs reduce the RMSE of the temporal prediction of piezometric head of the groundwater model by 77%. It is further demonstrated that the effectiveness of the DDMs depends on the existence and extent of the structure in the error of the physically-based model. © 2013, National GroundWater Association.
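    The complementary data-driven idea, learning the structured error of a physically-based model and then adding the learned correction back to its predictions, can be sketched with support vector regression as follows; the stand-in "physical" model, data, and hyperparameters are synthetic assumptions, not the case-study models:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)

# Synthetic "truth" and a biased physically-based model of it (hypothetical):
# the physical model misses a slow trend, leaving a structured residual.
x = rng.uniform(0, 10, size=(300, 1))
truth = np.sin(x[:, 0]) + 0.1 * x[:, 0]
physical = np.sin(x[:, 0])
residual = truth - physical

# Complementary data-driven model: train SVR on the residual, then use it
# to correct the physical model's prediction on held-out points.
train, test = np.arange(200), np.arange(200, 300)
ddm = SVR(C=10.0, epsilon=0.01).fit(x[train], residual[train])
corrected = physical[test] + ddm.predict(x[test])

rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print(rmse(physical[test], truth[test]), rmse(corrected, truth[test]))
```

    As the abstract notes, this only helps to the extent that the physical model's error has structure the learner can capture; purely random error is irreducible by such a correction.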

  17. Renyi entropy measures of heart rate Gaussianity.

    PubMed

    Lake, Douglas E

    2006-01-01

    Sample entropy and approximate entropy are measures that have been successfully utilized to study the deterministic dynamics of heart rate (HR). A complementary stochastic point of view and a heuristic argument using the Central Limit Theorem suggests that the Gaussianity of HR is a complementary measure of the physiological complexity of the underlying signal transduction processes. Renyi entropy (or q-entropy) is a widely used measure of Gaussianity in many applications. Particularly important members of this family are differential (or Shannon) entropy (q = 1) and quadratic entropy (q = 2). We introduce the concepts of differential and conditional Renyi entropy rate and, in conjunction with Burg's theorem, develop a measure of the Gaussianity of a linear random process. Robust algorithms for estimating these quantities are presented along with estimates of their standard errors.
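    A minimal sketch of estimating the quadratic (q = 2) Renyi entropy from data and comparing it with the Gaussian closed form; the pairwise-kernel plug-in estimator and Silverman bandwidth are common choices assumed here for illustration, not necessarily the author's algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma = 2.0
x = rng.normal(0.0, sigma, 2000)

# Plug-in estimate of the quadratic Renyi entropy H2 = -ln ∫ p(x)^2 dx:
# with a Gaussian kernel density estimate, ∫ p̂^2 reduces to a mean of
# Gaussian kernels over all sample pairs.
h = 1.06 * x.std() * x.size ** (-1 / 5)          # Silverman bandwidth
d = x[:, None] - x[None, :]
info_potential = np.mean(np.exp(-d**2 / (4 * h**2)) / np.sqrt(4 * np.pi * h**2))
h2_est = -np.log(info_potential)

# Closed form for a Gaussian N(0, sigma^2): H2 = 0.5 * ln(4 * pi * sigma^2)
h2_true = 0.5 * np.log(4 * np.pi * sigma**2)
print(h2_est, h2_true)
```

    Departures of such an estimate from the Gaussian closed form (evaluated at the sample variance) are one way to quantify the non-Gaussianity of a signal.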

  18. Advancing the research agenda for diagnostic error reduction.

    PubMed

    Zwaan, Laura; Schiff, Gordon D; Singh, Hardeep

    2013-10-01

    Diagnostic errors remain an underemphasised and understudied area of patient safety research. We briefly summarise the methods that have been used to conduct research on epidemiology, contributing factors and interventions related to diagnostic error and outline directions for future research. Research methods that have studied epidemiology of diagnostic error provide some estimate on diagnostic error rates. However, there appears to be a large variability in the reported rates due to the heterogeneity of definitions and study methods used. Thus, future methods should focus on obtaining more precise estimates in different settings of care. This would lay the foundation for measuring error rates over time to evaluate improvements. Research methods have studied contributing factors for diagnostic error in both naturalistic and experimental settings. Both approaches have revealed important and complementary information. Newer conceptual models from outside healthcare are needed to advance the depth and rigour of analysis of systems and cognitive insights of causes of error. While the literature has suggested many potentially fruitful interventions for reducing diagnostic errors, most have not been systematically evaluated and/or widely implemented in practice. Research is needed to study promising intervention areas such as enhanced patient involvement in diagnosis, improving diagnosis through the use of electronic tools and identification and reduction of specific diagnostic process 'pitfalls' (eg, failure to conduct appropriate diagnostic evaluation of a breast lump after a 'normal' mammogram). The last decade of research on diagnostic error has made promising steps and laid a foundation for more rigorous methods to advance the field.

  19. Predicting ambient aerosol Thermal Optical Reflectance (TOR) measurements from infrared spectra: organic carbon

    NASA Astrophysics Data System (ADS)

    Dillner, A. M.; Takahama, S.

    2014-11-01

    Organic carbon (OC) can constitute 50% or more of the mass of atmospheric particulate matter. Typically, the organic carbon concentration is measured using thermal methods such as Thermal-Optical Reflectance (TOR) from quartz fiber filters. Here, methods are presented whereby Fourier Transform Infrared (FT-IR) absorbance spectra from polytetrafluoroethylene (PTFE or Teflon) filters are used to accurately predict TOR OC. Transmittance FT-IR analysis is rapid, inexpensive, and non-destructive to the PTFE filters. To develop and test the method, FT-IR absorbance spectra are obtained from 794 samples from seven Interagency Monitoring of PROtected Visual Environments (IMPROVE) sites sampled during 2011. Partial least squares regression is used to calibrate sample FT-IR absorbance spectra to artifact-corrected TOR OC. The FT-IR spectra are divided into calibration and test sets by sampling site and date, which leads to precise and accurate OC predictions by FT-IR as indicated by a high coefficient of determination (R2; 0.96), low bias (0.02 μg m-3; all μg m-3 values are based on the nominal IMPROVE sample volume of 32.8 m3), low error (0.08 μg m-3) and low normalized error (11%). These performance metrics can be achieved with various degrees of spectral pretreatment (e.g., including or excluding substrate contributions to the absorbances) and are comparable in precision and accuracy to collocated TOR measurements. FT-IR spectra are also divided into calibration and test sets by OC mass and by OM / OC, which reflects the organic composition of the particulate matter and is obtained from organic functional group composition; this division also leads to precise and accurate OC predictions. Low OC concentrations have higher bias and normalized error due to TOR analytical errors and artifact correction errors, not due to the range of OC mass of the samples in the calibration set. However, samples with low OC mass can be used to predict samples with high OC mass, indicating that the calibration is linear. Using samples in the calibration set that have different OM / OC or ammonium / OC distributions than the test set leads to only a modest increase in bias and normalized error in the predicted samples. We conclude that FT-IR analysis with partial least squares regression is a robust method for accurately predicting TOR OC in IMPROVE network samples, providing complementary information to the organic functional group composition and organic aerosol mass estimated previously from the same set of sample spectra (Ruthenburg et al., 2014).

  20. Cognitive Adaptability: The Role of Metacognition and Feedback in Entrepreneural Decision Policies

    DTIC Science & Technology

    2005-01-01

    their environments in such a way as to facilitate effective and dynamic cognitive functioning. In this dissertation, I present three complementary studies ... the study of metacognition (Jost, Kruglanski, and Nelson, 1998; Mischel, 1998; Schwarz, 1998b). This research has three goals, specifically to ...

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Somayaji, Anil B.; Amai, Wendy A.; Walther, Eleanor A.

    This report describes the successful extension of artificial immune systems from the domain of computer security to the domain of real-time control systems for robotic vehicles. A biologically inspired computer immune system was added to the control system of two different mobile robots. As an additional layer in a multi-layered approach, the immune system is complementary to traditional error detection and error handling techniques. This can be thought of as biologically inspired defense in depth. We demonstrated that an immune system can be added with very little application developer effort, resulting in little to no performance impact. The methods described here are extensible to any system that processes a sequence of data through a software interface.

  2. [Analysis of Conformational Features of Watson-Crick Duplex Fragments by Molecular Mechanics and Quantum Mechanics Methods].

    PubMed

    Poltev, V I; Anisimov, V M; Sanchez, C; Deriabina, A; Gonzalez, E; Garcia, D; Rivas, F; Polteva, N A

    2016-01-01

    It is generally accepted that the important characteristic features of the Watson-Crick duplex originate from the molecular structure of its subunits. However, it still remains to elucidate what properties of each subunit are responsible for the significant characteristic features of the DNA structure. Computations of complexes of deoxydinucleoside monophosphates with Na ions using density functional theory revealed a pivotal role of the conformational properties of minimal single-chain DNA fragments in the development of the unique features of the Watson-Crick duplex. We found that the directionality of the sugar-phosphate backbone and the preferred ranges of its torsion angles, combined with the differences between purine and pyrimidine bases, define the dependence of the three-dimensional structure of the Watson-Crick duplex on the nucleotide base sequence. In this work, we extended these density functional theory computations to the minimal fragments of the DNA duplex, complexes of complementary deoxydinucleoside monophosphates with Na ions. Using several computational methods and various functionals, we performed a search for energy minima of the BI conformation of complementary deoxydinucleoside monophosphate complexes with different nucleoside sequences. Two sequences were optimized using the ab initio method at the MP2/6-31++G** level of theory. The analysis of torsion angles, sugar ring puckering, and mutual base positions of the optimized structures demonstrates that the conformational characteristics of complementary deoxydinucleoside monophosphate complexes with Na ions remain within BI ranges and become closer to the corresponding characteristics of Watson-Crick duplex crystals. 
    Qualitatively, the main characteristics of each studied complementary deoxydinucleoside monophosphate complex remain invariant when different computational methods are used, although the quantitative values of some conformational parameters may vary within the limits typical for the corresponding family. We observe that popular functionals in density functional theory calculations lead to overestimated distances between base pairs, while MP2 computations and newer complex functionals produce structures with too-close atom-atom contacts. A detailed study of some complexes of complementary deoxydinucleoside monophosphates with Na ions highlights the existence of several energy minima corresponding to BI conformations, in other words, the complexity of the relief of the potential energy surface of these complexes. This accounts for the variability of the conformational parameters of duplex fragments with the same base sequence. The popular molecular mechanics force fields AMBER and CHARMM reproduce most of the conformational characteristics of deoxydinucleoside monophosphates and their complementary complexes with Na ions but fail to reproduce some details of the dependence of the Watson-Crick duplex conformation on the nucleotide sequence.

  3. Transversal Clifford gates on folded surface codes

    DOE PAGES

    Moussa, Jonathan E.

    2016-10-12

    Surface and color codes are two forms of topological quantum error correction in two spatial dimensions with complementary properties. Surface codes have lower-depth error detection circuits and well-developed decoders to interpret and correct errors, while color codes have transversal Clifford gates and better code efficiency in the number of physical qubits needed to achieve a given code distance. A formal equivalence exists between color codes and folded surface codes, but it does not guarantee the transferability of any of these favorable properties. However, the equivalence does imply the existence of constant-depth circuit implementations of logical Clifford gates on folded surface codes. We achieve and improve this result by constructing two families of folded surface codes with transversal Clifford gates. This construction is presented generally for qudits of any dimension. Lastly, the specific application of these codes to universal quantum computation based on qubit fusion is also discussed.

  4. Complementary and alternative treatment in functional dyspepsia

    PubMed Central

    Chiarioni, Giuseppe; Pesce, Marcella; Fantin, Alberto; Sarnelli, Giovanni

    2017-01-01

    Introduction and aim The popularity of complementary and alternative medicine (CAM) in treating functional gastrointestinal disorders (FGIDs) has steadily increased in Western countries. We aimed at analyzing available data on CAM effectiveness in functional dyspepsia (FD) patients. Methods A bibliographical search was performed in PubMed using the following keywords: “complementary/alternative medicine,” “hypnosis,” “acupuncture” and/or “functional dyspepsia.” Results In community settings, almost 50% of patients with FGIDs used CAM therapies. Herbal remedies consist of multi-component preparations, whose mechanisms of action have not been systematically clarified. Few studies analyzed the effectiveness of acupuncture in Western countries, yielding conflicting results and possibly reflecting a population bias of this treatment. Hypnosis has been extensively used in irritable bowel syndrome, but few data support its role in treating FD. Conclusions Although some supporting well-designed studies have been recently performed, additional randomized, controlled trials are needed before stating any recommendation on CAM effectiveness in treating FD. PMID:29435308

  5. General Solutions for Hydromagnetic Free Convection Flow over an Infinite Plate with Newtonian Heating, Mass Diffusion and Chemical Reaction

    NASA Astrophysics Data System (ADS)

    Fetecau, Constatin; Shah, Nehad Ali; Vieru, Dumitru

    2017-12-01

    The problem of hydromagnetic free convection flow over a moving infinite vertical plate with Newtonian heating, mass diffusion and chemical reaction in the presence of a heat source is completely solved. Radiative and porous effects are not taken into consideration, but they can be immediately included by a simple rescaling of the Prandtl number and the magnetic parameter. Exact general solutions for the dimensionless velocity and concentration fields and the corresponding Sherwood number and skin friction coefficient are determined in integral form in terms of the Gauss error function and the complementary error function. They satisfy all imposed initial and boundary conditions and can generate exact solutions for any problem of this type with technical relevance. As an interesting completion, uncommon in the literature, the differential equations that describe the thermal, concentration and momentum boundary layers, as well as exact expressions for the thicknesses of the thermal, concentration and velocity boundary layers, are determined. Numerical results show that the thermal boundary layer thickness decreases for increasing values of the Prandtl number and that the concentration boundary layer thickness decreases with the Schmidt number. Finally, for illustration, three special cases are considered and the influence of the physical parameters on some fundamental motions is graphically underlined and discussed. The time required to reach the flow corresponding to the post-transient (steady-state) solution, for cosine/sine oscillating concentrations on the boundary, is determined graphically. It is found that the presence of a destructive chemical reaction shortens this time for increasing values of the chemical reaction parameter.
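    A quick numerical check, not from the paper, of the kind of complementary-error-function solution this class of problems produces: the similarity profile u(y, t) = U erfc(y / (2 sqrt(nu t))) satisfies the diffusion equation u_t = nu u_yy with constant wall value U and decay far from the plate:

```python
import numpy as np
from scipy.special import erf, erfc

nu, U, t = 1.0e-2, 1.0, 2.0        # illustrative parameter values
y = np.linspace(0.0, 1.0, 201)
dy = y[1] - y[0]

u = lambda tt: U * erfc(y / (2.0 * np.sqrt(nu * tt)))

# PDE residual u_t - nu * u_yy via finite differences (interior points only)
dt = 1e-4
u_t = (u(t + dt) - u(t - dt)) / (2 * dt)
u_yy = np.gradient(np.gradient(u(t), dy), dy)
residual = np.max(np.abs(u_t[2:-2] - nu * u_yy[2:-2]))

# erf and erfc are complementary by definition: erf(x) + erfc(x) = 1
identity_err = np.max(np.abs(erf(y) + erfc(y) - 1.0))
print(residual, identity_err)
```

    The wall value u(0, t) = U follows from erfc(0) = 1, and the far-field decay from erfc(x) → 0 as x → ∞, which is why erfc profiles arise naturally in such boundary-layer solutions.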

  6. Shape reconstruction of irregular bodies with multiple complementary data sources

    NASA Astrophysics Data System (ADS)

    Kaasalainen, M.; Viikinkoski, M.

    2012-07-01

    We discuss inversion methods for shape reconstruction with complementary data sources. The current main sources are photometry, adaptive optics or other images, occultation timings, and interferometry, and the procedure can readily be extended to include range-Doppler radar and thermal infrared data as well. We introduce the octantoid, a generally applicable shape support that can be automatically used for surface types encountered in planetary research, including strongly nonconvex or non-starlike shapes. We present models of Kleopatra and Hermione from multimodal data as examples of this approach. An important concept in this approach is the optimal weighting of the various data modes. We define the maximum compatibility estimate, a multimodal generalization of the maximum likelihood estimate, for this purpose. We also present a specific version of the procedure for asteroid flyby missions, with which one can reconstruct the complete shape of the target by using the flyby-based map of a part of the surface together with other available data. Finally, we show that the relative volume error of a shape solution is usually approximately equal to the relative shape error rather than its multiple. Our algorithms are trivially parallelizable, so running the code on a CUDA-enabled graphics processing unit is some two orders of magnitude faster than the usual single-processor mode.

  7. Complementary π-π interactions induce multicomponent coassembly into functional fibrils.

    PubMed

    Ryan, Derek M; Doran, Todd M; Nilsson, Bradley L

    2011-09-06

    Noncovalent self-assembled materials inspired by amyloid architectures are useful for biomedical applications ranging from regenerative medicine to drug delivery. The selective coassembly of complementary monomeric units to provide ordered multicomponent fibrils is a possible strategy for enhancing the sophistication of these noncovalent materials. Herein we report that complementary π-π interactions can be exploited to promote the coassembly of phenylalanine (Phe) derivatives that possess complementary aromatic side-chain functionality. Specifically, equimolar mixtures of Fmoc-Phe and Fmoc-F(5)-Phe, which possess side-chain groups with complementary quadrupole electronics, readily coassemble to form two-component fibrils and hydrogels under conditions where Fmoc-Phe alone fails to self-assemble. In addition, it was found that equimolar mixtures of Fmoc-Phe with monohalogenated (F, Cl, and Br) Fmoc-Phe derivatives also coassembled into two-component fibrils. These results collectively indicate that face-to-face quadrupole stacking between benzyl side-chain groups does not account for the molecular recognition between Phe and halogenated Phe derivatives that promote cofibrillization but that coassembly is mediated by more subtle π-π effects arising from the halogenation of the benzyl side chain. The use of complementary π-π interactions to promote the coassembly of two distinct monomeric units into ordered two-component fibrils dramatically expands the repertoire of noncovalent interactions that can be used in the development of sophisticated noncovalent materials. © 2011 American Chemical Society

  8. Investigation into the use of complementary and alternative medicine and affecting factors in Turkish asthmatic patients.

    PubMed

    Tokem, Yasemin; Aytemur, Zeynep Ayfer; Yildirim, Yasemin; Fadiloglu, Cicek

    2012-03-01

    The purpose of this study was to examine the frequency of complementary and alternative medicine use among asthmatic patients living in western Turkey, the methods most frequently used, and the socio-demographic and disease-related factors affecting this use. The rate of complementary and alternative medicine use in asthmatic patients and the reasons for it vary, so practices specific to different countries and regions are of interest; differing cultural and social factors, even in geographically similar regions, can affect the type of complementary and alternative medicine used. Two hundred asthmatic patients registered in the asthma outpatient clinic of a large hospital in Turkey who had undergone pulmonary function tests within the previous six months were included in this study, which followed a descriptive design. The patients completed a questionnaire on their demographic characteristics and complementary and alternative medicine use. The proportion of patients who reported using one or more complementary and alternative medicine methods was 63.0%. Of these patients, 61.9% were using plants and herbal treatments, 53.2% were doing exercises and 36.5% said that they prayed. The objectives of their use of complementary and alternative medicine were to reduce asthma-related complaints (58%) and to feel better (37.8%). The proportion of patients experiencing adverse effects was 3.3% (n = 4). Factors motivating asthmatic patients to use complementary and alternative medicine were the existence of comorbid diseases and a long period since diagnosis (p < 0.05). No statistically significant relationship was found between the use of complementary and alternative medicine and the severity of the disease, pulmonary function test parameters, or the number of asthma attacks or hospitalisations because of asthma within the last year (p > 0.05). Nurses' understanding of the causes and patterns of complementary and alternative medicine use in asthmatic patients helps them direct patient care and safeguard patient safety. Nurses should conduct comprehensive assessments in light of complementary and alternative medicine use and should be aware of the potential risks. © 2011 Blackwell Publishing Ltd.

  9. Maximum likelihood method for estimating airplane stability and control parameters from flight data in frequency domain

    NASA Technical Reports Server (NTRS)

    Klein, V.

    1980-01-01

    A frequency domain maximum likelihood method is developed for the estimation of airplane stability and control parameters from measured data. The model of an airplane is represented by a discrete-type steady state Kalman filter with time variables replaced by their Fourier series expansions. The likelihood function of innovations is formulated, and by its maximization with respect to unknown parameters the estimation algorithm is obtained. This algorithm is then simplified to the output error estimation method with the data in the form of transformed time histories, frequency response curves, or spectral and cross-spectral densities. The development is followed by a discussion on the equivalence of the cost function in the time and frequency domains, and on advantages and disadvantages of the frequency domain approach. The algorithm developed is applied in four examples to the estimation of longitudinal parameters of a general aviation airplane using computer generated and measured data in turbulent and still air. The cost functions in the time and frequency domains are shown to be equivalent; therefore, both approaches are complementary and not contradictory. Despite some computational advantages of parameter estimation in the frequency domain, this approach is limited to linear equations of motion with constant coefficients.
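    The output-error idea in this record, fitting model parameters so that the model's frequency response matches transformed measurements, can be sketched in a few lines. The first-order model G(jω) = b/(jω + a), the parameter values, and the noise level below are invented for illustration only; the actual method estimates full airplane stability and control derivatives from flight data.

```python
import numpy as np

# Hedged sketch: fit a toy first-order model G(jw) = b / (jw + a) to noisy
# frequency-response "measurements". All names and values are hypothetical.
rng = np.random.default_rng(0)
a_true, b_true = 2.0, 3.0
w = np.linspace(0.1, 10.0, 50)               # rad/s evaluation grid
G_meas = b_true / (1j * w + a_true)
G_meas = G_meas + 0.001 * (rng.standard_normal(50) + 1j * rng.standard_normal(50))

# Rearranging G*(jw + a) = b gives jw*G = -a*G + b, which is linear in (a, b).
# Stack real and imaginary parts so the least-squares solver sees real data.
lhs = 1j * w * G_meas
A = np.column_stack([-G_meas, np.ones_like(G_meas)])
A_ri = np.vstack([A.real, A.imag])
lhs_ri = np.concatenate([lhs.real, lhs.imag])
(a_hat, b_hat), *_ = np.linalg.lstsq(A_ri, lhs_ri, rcond=None)

print(round(a_hat, 2), round(b_hat, 2))      # close to the true (2.0, 3.0)
```

    This linear rearrangement is an equation-error simplification of the output-error cost; the record's iterative maximum likelihood formulation additionally weights the residuals by their covariance.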

  10. Unsupervised discovery of microbial population structure within metagenomes using nucleotide base composition

    PubMed Central

    Saeed, Isaam; Tang, Sen-Lin; Halgamuge, Saman K.

    2012-01-01

    An approach to infer the unknown microbial population structure within a metagenome is to cluster nucleotide sequences based on common patterns in base composition, otherwise referred to as binning. When functional roles are assigned to the identified populations, a deeper understanding of microbial communities can be attained, more so than gene-centric approaches that explore overall functionality. In this study, we propose an unsupervised, model-based binning method with two clustering tiers, which uses a novel transformation of the oligonucleotide frequency-derived error gradient and GC content to generate coarse groups at the first tier of clustering; and tetranucleotide frequency to refine these groups at the secondary clustering tier. The proposed method has a demonstrated improvement over PhyloPythia, S-GSOM, TACOA and TaxSOM on all three benchmarks that were used for evaluation in this study. The proposed method is then applied to a pyrosequenced metagenomic library of mud volcano sediment sampled in southwestern Taiwan, with the inferred population structure validated against complementary sequencing of 16S ribosomal RNA marker genes. Finally, the proposed method was further validated against four publicly available metagenomes, including a highly complex Antarctic whale-fall bone sample, which was previously assumed to be too complex for binning prior to functional analysis. PMID:22180538
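    The two feature types the abstract builds on, GC content for a coarse first-tier grouping and tetranucleotide frequencies for refinement, can be computed directly from sequence. The sketch below uses invented synthetic contigs and an illustrative GC threshold; the paper's two-tier model-based clustering and its error-gradient transformation are not reproduced.

```python
from collections import Counter
from itertools import product

KMERS = ["".join(p) for p in product("ACGT", repeat=4)]  # 256 tetranucleotides

def gc_content(seq):
    """Fraction of G and C bases in a sequence."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def tetra_freq(seq):
    """Normalized 256-dimensional tetranucleotide frequency vector."""
    counts = Counter(seq[i:i + 4] for i in range(len(seq) - 3))
    total = max(len(seq) - 3, 1)
    return [counts[k] / total for k in KMERS]

# Synthetic contigs, invented for illustration.
contigs = {
    "gc_rich": "GCGCGGCCGCGGGCCGCGCC" * 5,
    "at_rich": "ATATAATTATAAATTATATT" * 5,
}

# First tier: coarse grouping on GC content (the 0.5 threshold is illustrative;
# the paper fits a model rather than thresholding).
groups = {name: ("high_gc" if gc_content(s) > 0.5 else "low_gc")
          for name, s in contigs.items()}
print(groups)
```

    In a second tier, the `tetra_freq` vectors of contigs within each coarse group would be clustered again to resolve populations with similar GC but distinct oligonucleotide signatures.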

  11. Flexible methods for segmentation evaluation: results from CT-based luggage screening.

    PubMed

    Karimi, Seemeen; Jiang, Xiaoqian; Cosman, Pamela; Martz, Harry

    2014-01-01

    Imaging systems used in aviation security include segmentation algorithms in an automatic threat recognition pipeline. The segmentation algorithms evolve in response to emerging threats and changing performance requirements. Analysis of segmentation algorithms' behavior, including the nature of errors and feature recovery, facilitates their development. However, evaluation methods from the literature provide limited characterization of the segmentation algorithms. Our objective was to develop segmentation evaluation methods that measure systematic errors such as oversegmentation and undersegmentation, outliers, and overall errors; the methods must also measure feature recovery and allow us to prioritize segments. We developed two complementary evaluation methods using statistical techniques and information theory. We also created a semi-automatic method to define ground truth from 3D images. We applied our methods to evaluate five segmentation algorithms developed for CT luggage screening, and we validated our methods with synthetic problems and an observer evaluation. Both methods selected the same best segmentation algorithm, and human evaluation confirmed the finding. The measurement of systematic errors and prioritization helped in understanding the behavior of each segmentation algorithm. Our evaluation methods allow us to measure and explain the accuracy of segmentation algorithms.
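    One standard information-theoretic way to compare two segmentations, offered here as a plausible instance of the kind of measure the abstract mentions rather than the paper's exact metric, is the variation of information VI(X, Y) = H(X|Y) + H(Y|X), which is zero exactly when the two labelings agree up to relabeling and grows with over- or undersegmentation.

```python
import numpy as np

def variation_of_information(x, y):
    """VI(X, Y) = H(X|Y) + H(Y|X) between two labelings of the same voxels."""
    x, y = np.asarray(x), np.asarray(y)
    vi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))   # joint label probability
            if pxy == 0:
                continue
            px, py = np.mean(x == xv), np.mean(y == yv)
            # -sum pxy*[log(pxy/px) + log(pxy/py)] = H(Y|X) + H(X|Y)
            vi -= pxy * (np.log(pxy / px) + np.log(pxy / py))
    return vi

truth = [0, 0, 1, 1, 2, 2]      # toy ground-truth segment labels
pred  = [0, 0, 1, 2, 2, 2]      # one element reassigned (undersegmentation)
print(variation_of_information(truth, truth))   # 0.0 for identical labelings
print(variation_of_information(truth, pred))    # positive disagreement
```

    Because VI decomposes into the two conditional entropies, the H(X|Y) and H(Y|X) terms can be reported separately to distinguish oversegmentation from undersegmentation.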

  12. Self-Interaction Error in Density Functional Theory: An Appraisal.

    PubMed

    Bao, Junwei Lucas; Gagliardi, Laura; Truhlar, Donald G

    2018-05-03

    Self-interaction error (SIE) is considered to be one of the major sources of error in most approximate exchange-correlation functionals for Kohn-Sham density-functional theory (KS-DFT), and it is large with all local exchange-correlation functionals and with some hybrid functionals. In this work, we consider systems conventionally considered to be dominated by SIE. For these systems, we demonstrate that by using multiconfiguration pair-density functional theory (MC-PDFT), the error of a translated local density-functional approximation is significantly reduced (by a factor of 3) when using an MCSCF density and on-top density, as compared to using KS-DFT with the parent functional; the error in MC-PDFT with local on-top functionals is even lower than the error in some popular KS-DFT hybrid functionals. Density-functional theory, either in MC-PDFT form with local on-top functionals or in KS-DFT form with some functionals having 50% or more nonlocal exchange, has smaller errors for SIE-prone systems than does CASSCF, which has no SIE.

  13. Charge-transfer optical absorption mechanism of DNA:Ag-nanocluster complexes

    NASA Astrophysics Data System (ADS)

    Longuinhos, R.; Lúcio, A. D.; Chacham, H.; Alexandre, S. S.

    2016-05-01

    Optical properties of DNA:Ag-nanocluster complexes have been successfully applied experimentally in chemistry, physics, and biology. Nevertheless, the mechanisms behind their optical activity remain unresolved. In this work, we present a time-dependent density functional study of optical absorption in DNA:Ag4. In all 23 different complexes investigated, we obtain new absorption peaks in the visible region that are not found in either the isolated Ag4 or the isolated DNA base pairs. Absorption from red to green is predominantly of charge-transfer character, from the Ag4 to the DNA fragment, while absorption in the blue-violet range is mostly associated with electronic transitions of a mixed character, involving either DNA-Ag4 hybrid orbitals or intracluster orbitals. We also investigate the role of exchange-correlation functionals in the calculated optical spectra. Significant differences are observed between the calculations using the PBE functional (without exact exchange) and the CAM-B3LYP functional (which partly includes exact exchange). Specifically, we observe a tendency of charge-transfer excitations to involve purine bases, and the PBE spectral error is more pronounced in the complexes where the Ag cluster is bound to the purines. Finally, our results also highlight the importance of including both the complementary base pair and the sugar-phosphate backbone in order to properly characterize the absorption spectrum of DNA:Ag complexes.

  14. Charge-transfer optical absorption mechanism of DNA:Ag-nanocluster complexes.

    PubMed

    Longuinhos, R; Lúcio, A D; Chacham, H; Alexandre, S S

    2016-05-01

    Optical properties of DNA:Ag-nanocluster complexes have been successfully applied experimentally in chemistry, physics, and biology. Nevertheless, the mechanisms behind their optical activity remain unresolved. In this work, we present a time-dependent density functional study of optical absorption in DNA:Ag4. In all 23 different complexes investigated, we obtain new absorption peaks in the visible region that are not found in either the isolated Ag4 or the isolated DNA base pairs. Absorption from red to green is predominantly of charge-transfer character, from the Ag4 to the DNA fragment, while absorption in the blue-violet range is mostly associated with electronic transitions of a mixed character, involving either DNA-Ag4 hybrid orbitals or intracluster orbitals. We also investigate the role of exchange-correlation functionals in the calculated optical spectra. Significant differences are observed between the calculations using the PBE functional (without exact exchange) and the CAM-B3LYP functional (which partly includes exact exchange). Specifically, we observe a tendency of charge-transfer excitations to involve purine bases, and the PBE spectral error is more pronounced in the complexes where the Ag cluster is bound to the purines. Finally, our results also highlight the importance of including both the complementary base pair and the sugar-phosphate backbone in order to properly characterize the absorption spectrum of DNA:Ag complexes.

  15. Surface characterization protocol for precision aspheric optics

    NASA Astrophysics Data System (ADS)

    Sarepaka, RamaGopal V.; Sakthibalan, Siva; Doodala, Somaiah; Panwar, Rakesh S.; Kotaria, Rajendra

    2017-10-01

    In advanced optical instrumentation, aspherics provide an effective performance alternative. Aspheric fabrication and surface metrology, followed by aspheric design, are complementary iterative processes in precision aspheric development. As in fabrication, a holistic approach to aspheric surface characterization is adopted to evaluate the actual surface error and to deliver aspheric optics with the desired surface quality. Precision optical surfaces are characterized by profilometry or by interferometry. Aspheric profiles are characterized by contact profilometers, through linear surface scans, to analyze their Form, Figure and Finish errors. One must ensure that the surface characterization procedure does not add to the resident profile errors (generated during aspheric surface fabrication). This presentation examines the errors introduced after surface generation and during profilometry of aspheric profiles, with the aim of identifying sources of error and optimizing the metrology process. The sources of error during profilometry may include: profilometer settings, work-piece placement on the profilometer stage, selection of zenith/nadir points of aspheric profiles, metrology protocols, clear-aperture diameter analysis, computational limitations of the profiler, software issues, etc. At OPTICA, a PGI 1200 FTS contact profilometer (Taylor Hobson) is used for this study. Precision optics of various profiles are studied, with due attention to possible sources of error during characterization, using a multi-directional scan approach for uniformity and repeatability of error estimation. This study provides insight into aspheric surface characterization and helps establish an optimal aspheric surface production methodology.

  16. Complementary and Alternative Therapies for Down Syndrome

    ERIC Educational Resources Information Center

    Roizen, Nancy J.

    2005-01-01

    In their role as committed advocates, parents of children with Down syndrome have always sought alternative therapies, mainly to enhance cognitive function but also to improve their appearance. Nutritional supplements have been the most frequent type of complementary and alternative therapy used. Cell therapy, plastic surgery, hormonal therapy,…

  17. The hadronic vacuum polarization contribution to the muon g - 2 from lattice QCD

    NASA Astrophysics Data System (ADS)

    Morte, M. Della; Francis, A.; Gülpers, V.; Herdoíza, G.; von Hippel, G.; Horch, H.; Jäger, B.; Meyer, H. B.; Nyffeler, A.; Wittig, H.

    2017-10-01

    We present a calculation of the hadronic vacuum polarization contribution to the muon anomalous magnetic moment, a_μ^hvp, in lattice QCD employing dynamical up and down quarks. We focus on controlling the infrared regime of the vacuum polarization function. To this end we employ several complementary approaches, including Padé fits, time moments and the time-momentum representation. We correct our results for finite-volume effects by combining the Gounaris-Sakurai parameterization of the timelike pion form factor with the Lüscher formalism. On a subset of our ensembles we have derived an upper bound on the magnitude of quark-disconnected diagrams and found that they decrease the estimate for a_μ^hvp by at most 2%. Our final result is a_μ^hvp = (654 ± 32 +21/-23) × 10^-10, where the first error is statistical, and the second denotes the combined systematic uncertainty. Based on our findings we discuss the prospects for determining a_μ^hvp with sub-percent precision.

  18. Exchange-Correlation Effects for Noncovalent Interactions in Density Functional Theory.

    PubMed

    Otero-de-la-Roza, A; DiLabio, Gino A; Johnson, Erin R

    2016-07-12

    In this article, we develop an understanding of how errors from exchange-correlation functionals affect the modeling of noncovalent interactions in dispersion-corrected density-functional theory. Computed CCSD(T) reference binding energies for a collection of small-molecule clusters are decomposed via a molecular many-body expansion and are used to benchmark density-functional approximations, including the effect of semilocal approximation, exact-exchange admixture, and range separation. Three sources of error are identified. Repulsion error arises from the choice of semilocal functional approximation. This error affects intermolecular repulsions and is present in all n-body exchange-repulsion energies with a sign that alternates with the order n of the interaction. Delocalization error is independent of the choice of semilocal functional but does depend on the exact exchange fraction. Delocalization error misrepresents the induction energies, leading to overbinding in all induction n-body terms, and underestimates the electrostatic contribution to the 2-body energies. Deformation error affects only monomer relaxation (deformation) energies and behaves similarly to bond-dissociation energy errors. Delocalization and deformation errors affect systems with significant intermolecular orbital interactions (e.g., hydrogen- and halogen-bonded systems), whereas repulsion error is ubiquitous. Many-body errors from the underlying exchange-correlation functional greatly exceed in general the magnitude of the many-body dispersion energy term. A functional built to accurately model noncovalent interactions must contain a dispersion correction, semilocal exchange, and correlation components that minimize the repulsion error independently and must also incorporate exact exchange in such a way that delocalization error is absent.

  19. Electronic Inventory Systems and Barcode Technology: Impact on Pharmacy Technical Accuracy and Error Liability

    PubMed Central

    Oldland, Alan R.; May, Sondra K.; Barber, Gerard R.; Stolpman, Nancy M.

    2015-01-01

    Purpose: To measure the effects associated with sequential implementation of electronic medication storage and inventory systems and product verification devices on pharmacy technical accuracy and rates of potential medication dispensing errors in an academic medical center. Methods: During four 28-day periods of observation, pharmacists recorded all technical errors identified at the final visual check of pharmaceuticals prior to dispensing. Technical filling errors involving deviations from order-specific selection of product, dosage form, strength, or quantity were documented when dispensing medications using (a) a conventional unit dose (UD) drug distribution system, (b) an electronic storage and inventory system utilizing automated dispensing cabinets (ADCs) within the pharmacy, (c) ADCs combined with barcode (BC) verification, and (d) ADCs and BC verification utilized with changes in product labeling and individualized personnel training in systems application. Results: Using a conventional UD system, the overall incidence of technical error was 0.157% (24/15,271). Following implementation of ADCs, the comparative overall incidence of technical error was 0.135% (10/7,379; P = .841). Following implementation of BC scanning, the comparative overall incidence of technical error was 0.137% (27/19,708; P = .729). Subsequent changes in product labeling and intensified staff training in the use of BC systems was associated with a decrease in the rate of technical error to 0.050% (13/26,200; P = .002). Conclusions: Pharmacy ADCs and BC systems provide complementary effects that improve technical accuracy and reduce the incidence of potential medication dispensing errors if this technology is used with comprehensive personnel training. PMID:25684799

  20. Electronic inventory systems and barcode technology: impact on pharmacy technical accuracy and error liability.

    PubMed

    Oldland, Alan R; Golightly, Larry K; May, Sondra K; Barber, Gerard R; Stolpman, Nancy M

    2015-01-01

    To measure the effects associated with sequential implementation of electronic medication storage and inventory systems and product verification devices on pharmacy technical accuracy and rates of potential medication dispensing errors in an academic medical center. During four 28-day periods of observation, pharmacists recorded all technical errors identified at the final visual check of pharmaceuticals prior to dispensing. Technical filling errors involving deviations from order-specific selection of product, dosage form, strength, or quantity were documented when dispensing medications using (a) a conventional unit dose (UD) drug distribution system, (b) an electronic storage and inventory system utilizing automated dispensing cabinets (ADCs) within the pharmacy, (c) ADCs combined with barcode (BC) verification, and (d) ADCs and BC verification utilized with changes in product labeling and individualized personnel training in systems application. Using a conventional UD system, the overall incidence of technical error was 0.157% (24/15,271). Following implementation of ADCs, the comparative overall incidence of technical error was 0.135% (10/7,379; P = .841). Following implementation of BC scanning, the comparative overall incidence of technical error was 0.137% (27/19,708; P = .729). Subsequent changes in product labeling and intensified staff training in the use of BC systems was associated with a decrease in the rate of technical error to 0.050% (13/26,200; P = .002). Pharmacy ADCs and BC systems provide complementary effects that improve technical accuracy and reduce the incidence of potential medication dispensing errors if this technology is used with comprehensive personnel training.
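    The abstract's final comparison, a drop from 27/19,708 to 13/26,200 with P = .002, is reproducible with a two-sided two-proportion z-test (an assumption on our part; the abstract does not state which test was used). The normal tail probability is conveniently written with the complementary error function, P(Z > z) = erfc(z/√2)/2.

```python
from math import erfc, sqrt

def two_proportion_p(k1, n1, k2, n2):
    """Two-sided two-proportion z-test p-value, via the pooled estimate."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    return erfc(z / sqrt(2))       # = 2 * P(Z > z)

# ADC+BC period vs. the period after relabeling and intensified training.
p = two_proportion_p(27, 19708, 13, 26200)
print(f"{p:.4f}")                  # about 0.0017, consistent with P = .002
```

    The same function applied to the earlier, non-significant transitions (e.g. 24/15,271 vs. 10/7,379) yields p-values well above 0.05, in line with the reported P = .841 and P = .729 to within the choice of test.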

  1. Views and experiences of healthcare professionals towards the use of African traditional, complementary and alternative medicines among patients with HIV infection: the case of eThekwini health district, South Africa.

    PubMed

    Nlooto, Manimbulu

    2015-06-06

    Many patients with human immunodeficiency virus infection use traditional, complementary, and alternative medicines and other practices to combat the disease, with some also using prescribed antiretroviral therapy provided by the public health sector. This study aimed to establish the awareness of public sector biomedical health care providers on the use of traditional, complementary and alternative medicines by HIV-infected patients who also used highly active antiretroviral therapy, and to determine whether this was based on patients seen or cases being reported to them. Potential risks of interactions between the prescribed antiretroviral and non-prescribed medication therapies may pose safety and effectiveness issues in patients using both types of treatment. A descriptive cross-sectional study, using a researcher administered semi-structured questionnaire, was conducted from June to August 2013 at ten public sector antiretroviral clinics in five regional, three specialised and two district hospitals in eThekwini Health District, South Africa. Questionnaires were administered through face-to face interview to 120 eligible participants consisting of doctors, nurses, pharmacists and post-basic pharmacist assistants in HIV clinical practice. The results are presented as percent or proportion with standard error (SE), or as frequency. Ninety-four respondents completed the questionnaire, yielding a response rate of 78.3 %. Almost half (48/94) were aware of patients using African traditional herbal medicines, over-the-counter supplements, unnamed complementary Ayurveda medicines and acupuncture. Twenty-three of the 94 respondents (24.4 %) said they had consulted patients who were using both antiretroviral therapy and certain types of non-prescribed medication in the previous three months. Awareness among healthcare providers on patient use of traditional, complementary and alternative medicines was relatively high. 
Few respondents had seen patients who used mostly African traditional medicines, over-the counter supplements, and negligible complementary Ayurveda medicines and acupuncture, with caution being advised in the interpretation of the former. Further research is needed to investigate communication between healthcare providers and patients in this regard, and levels of acceptance of traditional, complementary and alternative medicines by biomedical health care workers in HIV public sector practice.

  2. Phenol-enriched olive oils improve HDL antioxidant content in hypercholesterolemic subjects. A randomized, double-blind, cross-over, controlled trial.

    PubMed

    Farràs, Marta; Fernández-Castillejo, Sara; Rubió, Laura; Arranz, Sara; Catalán, Úrsula; Subirana, Isaac; Romero, Mari-Paz; Castañer, Olga; Pedret, Anna; Blanchart, Gemma; Muñoz-Aguayo, Daniel; Schröder, Helmut; Covas, Maria-Isabel; de la Torre, Rafael; Motilva, Maria-José; Solà, Rosa; Fitó, Montserrat

    2018-01-01

    At present, high-density lipoprotein (HDL) function is thought to be more relevant than HDL cholesterol quantity. Consumption of olive oil phenolic compounds (PCs) has beneficial effects on HDL-related markers. Enriched food with complementary antioxidants could be a suitable option to obtain additional protective effects. Our aim was to ascertain whether virgin olive oils (VOOs) enriched with (a) their own PC (FVOO) and (b) their own PC plus complementary ones from thyme (FVOOT) could improve HDL status and function. Thirty-three hypercholesterolemic individuals ingested (25 ml/day, 3 weeks) (a) VOO (80 ppm), (b) FVOO (500 ppm) and (c) FVOOT (500 ppm) in a randomized, double-blind, controlled, crossover trial. A rise in HDL antioxidant compounds was observed after both functional olive oil interventions. Nevertheless, α-tocopherol, the main HDL antioxidant, was only augmented after FVOOT versus its baseline. In conclusion, long-term consumption of phenol-enriched olive oils induced a better HDL antioxidant content, the complementary phenol-enriched olive oil being the one which increased the main HDL antioxidant, α-tocopherol. Complementary phenol-enriched olive oil could be a useful dietary tool for improving HDL richness in antioxidants. Copyright © 2017. Published by Elsevier Inc.

  3. Complementary and alternative medicine contacts by persons with mental disorders in 25 countries: results from the World Mental Health Surveys.

    PubMed

    de Jonge, P; Wardenaar, K J; Hoenders, H R; Evans-Lacko, S; Kovess-Masfety, V; Aguilar-Gaxiola, S; Al-Hamzawi, A; Alonso, J; Andrade, L H; Benjet, C; Bromet, E J; Bruffaerts, R; Bunting, B; Caldas-de-Almeida, J M; Dinolova, R V; Florescu, S; de Girolamo, G; Gureje, O; Haro, J M; Hu, C; Huang, Y; Karam, E G; Karam, G; Lee, S; Lépine, J-P; Levinson, D; Makanjuola, V; Navarro-Mateu, F; Pennell, B-E; Posada-Villa, J; Scott, K; Tachimori, H; Williams, D; Wojtyniak, B; Kessler, R C; Thornicroft, G

    2017-12-28

    A substantial proportion of persons with mental disorders seek treatment from complementary and alternative medicine (CAM) professionals. However, data on how CAM contacts vary across countries, mental disorders and their severity, and health care settings is largely lacking. The aim was therefore to investigate the prevalence of contacts with CAM providers in a large cross-national sample of persons with 12-month mental disorders. In the World Mental Health Surveys, the Composite International Diagnostic Interview was administered to determine the presence of past 12 month mental disorders in 138 801 participants aged 18-100 derived from representative general population samples. Participants were recruited between 2001 and 2012. Rates of self-reported CAM contacts for each of the 28 surveys across 25 countries and 12 mental disorder groups were calculated for all persons with past 12-month mental disorders. Mental disorders were grouped into mood disorders, anxiety disorders or behavioural disorders, and further divided by severity levels. Satisfaction with conventional care was also compared with CAM contact satisfaction. An estimated 3.6% (standard error 0.2%) of persons with a past 12-month mental disorder reported a CAM contact, which was two times higher in high-income countries (4.6%; standard error 0.3%) than in low- and middle-income countries (2.3%; standard error 0.2%). CAM contacts were largely comparable for different disorder types, but particularly high in persons receiving conventional care (8.6-17.8%). CAM contacts increased with increasing mental disorder severity. Among persons receiving specialist mental health care, CAM contacts were reported by 14.0% for severe mood disorders, 16.2% for severe anxiety disorders and 22.5% for severe behavioural disorders. Satisfaction with care was comparable with respect to CAM contacts (78.3%) and conventional care (75.6%) in persons that received both. 
    CAM contacts are common in persons with severe mental disorders, in high-income countries, and in persons receiving conventional care. Our findings support the notion of CAM as largely complementary, and they contrast with suggestions that CAM use concerns only persons with mild, transient complaints. There was no indication that persons were less satisfied with CAM visits than with conventional care. We encourage health care professionals in conventional settings to openly discuss the care patients are receiving, whether conventional or not, and their reasons for doing so.

  4. Functional Regression Models for Epistasis Analysis of Multiple Quantitative Traits.

    PubMed

    Zhang, Futao; Xie, Dan; Liang, Meimei; Xiong, Momiao

    2016-04-01

    To date, most genetic analyses of phenotypes have focused on single traits or have analyzed each phenotype independently. However, joint epistasis analysis of multiple complementary traits will increase statistical power and improve our understanding of the complicated genetic structure of complex diseases. Despite their importance in uncovering the genetic structure of complex traits, statistical methods for identifying epistasis in multiple phenotypes remain fundamentally unexplored. To fill this gap, we formulate a test for interaction between two genes in multiple quantitative trait analysis as a multiple functional regression (MFRG) in which the genotype functions (genetic variant profiles) are defined as a function of the genomic position of the genetic variants. We use large-scale simulations to calculate Type I error rates for testing interaction between two genes with multiple phenotypes and to compare the power with multivariate pairwise interaction analysis and with single-trait interaction analysis by a single-variate functional regression model. To further evaluate performance, the MFRG for epistasis analysis is applied to five phenotypes of exome sequence data from the NHLBI's Exome Sequencing Project (ESP) to detect pleiotropic epistasis. A total of 267 pairs of genes that formed a genetic interaction network showed significant evidence of epistasis influencing the five traits. The results demonstrate that joint interaction analysis of multiple phenotypes has much higher power to detect interaction than interaction analysis of a single trait, and it may open a new direction toward fully uncovering the genetic structure of multiple phenotypes.
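    The core construction, treating each gene's variant profile as a function of genomic position, projecting it onto a small basis, and testing interaction via products of the two genes' basis scores, can be sketched for a single trait. Everything below (sample sizes, cosine basis, effect sizes) is a toy assumption; the paper's MFRG models multiple traits jointly and uses formal asymptotics for the test.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 400, 30, 2                       # subjects, variants per gene, basis size
pos = np.linspace(0, 1, m)                 # normalized genomic positions
B = np.column_stack([np.cos(np.pi * j * pos) for j in range(1, k + 1)])

G1 = rng.integers(0, 3, (n, m)).astype(float)   # genotype profiles, gene 1
G2 = rng.integers(0, 3, (n, m)).astype(float)   # genotype profiles, gene 2
S1, S2 = G1 @ B, G2 @ B                         # functional (basis) scores

# Simulated trait with a genuine gene-gene interaction term.
y = 0.2 * S1[:, 0] + 0.2 * S2[:, 0] + 0.1 * S1[:, 0] * S2[:, 0] \
    + rng.standard_normal(n)

inter = np.column_stack([S1[:, i] * S2[:, j]
                         for i in range(k) for j in range(k)])

def rss(X, y):
    """Residual sum of squares and column count of a least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r, X.shape[1]

ones = np.ones((n, 1))
rss0, p0 = rss(np.hstack([ones, S1, S2]), y)          # main effects only
rss1, p1 = rss(np.hstack([ones, S1, S2, inter]), y)   # plus interaction
F = ((rss0 - rss1) / (p1 - p0)) / (rss1 / (n - p1))   # nested-model F-test
print("F =", round(F, 1))   # a large F flags epistasis between the two genes
```

    The basis projection is what keeps the test low-dimensional: the interaction block has k² columns regardless of how many rare variants each gene contains.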

  5. Towards fully automated structure-based function prediction in structural genomics: a case study.

    PubMed

    Watson, James D; Sanderson, Steve; Ezersky, Alexandra; Savchenko, Alexei; Edwards, Aled; Orengo, Christine; Joachimiak, Andrzej; Laskowski, Roman A; Thornton, Janet M

    2007-04-13

    As the global Structural Genomics projects have picked up pace, the number of structures annotated in the Protein Data Bank as hypothetical protein or unknown function has grown significantly. A major challenge now involves the development of computational methods to assign functions to these proteins accurately and automatically. As part of the Midwest Center for Structural Genomics (MCSG) we have developed a fully automated functional analysis server, ProFunc, which performs a battery of analyses on a submitted structure. The analyses combine a number of sequence-based and structure-based methods to identify functional clues. After the first stage of the Protein Structure Initiative (PSI), we review the success of the pipeline and the importance of structure-based function prediction. As a dataset, we have chosen all structures solved by the MCSG during the 5 years of the first PSI. Our analysis suggests that two of the structure-based methods are particularly successful and provide examples of local similarity that is difficult to identify using current sequence-based methods. No one method is successful in all cases, so, through the use of a number of complementary sequence and structural approaches, the ProFunc server increases the chances that at least one method will find a significant hit that can help elucidate function. Manual assessment of the results is a time-consuming process and subject to individual interpretation and human error. We present a method based on the Gene Ontology (GO) schema using GO-slims that can allow the automated assessment of hits with a success rate approaching that of expert manual assessment.
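    The GO-slim assessment step can be illustrated with a toy ontology: each predicted GO term is mapped up the ontology graph to its ancestors, and agreement is scored against a small "slim" subset of broad terms. The terms and parentage below are invented for illustration; real identifiers and relationships come from the Gene Ontology release files.

```python
# Toy child -> parents map standing in for the GO DAG (invented terms).
PARENTS = {
    "GO:kinase": ["GO:catalytic"],
    "GO:catalytic": ["GO:molecular_function"],
    "GO:dna_binding": ["GO:binding"],
    "GO:binding": ["GO:molecular_function"],
    "GO:molecular_function": [],
}
SLIM = {"GO:catalytic", "GO:binding"}      # illustrative GO-slim subset

def ancestors(term):
    """All ancestors of a term in the DAG, the term itself included."""
    seen, stack = set(), [term]
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(PARENTS.get(t, []))
    return seen

def map_to_slim(term):
    """Slim terms subsuming the given term; the unit of automated comparison."""
    return sorted(ancestors(term) & SLIM)

print(map_to_slim("GO:kinase"))       # ['GO:catalytic']
print(map_to_slim("GO:dna_binding"))  # ['GO:binding']
```

    Comparing the slim terms of a ProFunc hit against those of the accepted annotation turns "is this hit functionally right?" into a set-overlap question that needs no manual interpretation.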

  6. Wings: A New Paradigm in Human-Centered Design

    NASA Technical Reports Server (NTRS)

    Schutte, Paul C.

    1997-01-01

    Many aircraft accident/incident investigations cite crew error as a causal factor (Boeing Commercial Airplane Group 1996). Human factors experts suggest that crew error has many underlying causes and should be the start of an accident investigation, not the end. One of those causes, the flight deck design, is correctable. If a flight deck design does not accommodate the human's unique abilities and deficits, crew error may simply be the manifestation of this mismatch. Pilots repeatedly report that they are "behind the aircraft", i.e., they do not know what the automated aircraft is doing or how the aircraft is doing it until after the fact. Billings (1991) promotes the concept of "human-centered automation," calling on designers to allocate appropriate control and information to the human. However, there is much ambiguity regarding what it means to be human-centered. What are often labeled as "human-centered designs" are actually designs where a human factors expert has been involved in the design process or designs where tests have shown that humans can operate them. While such designs may be excellent, they do not represent designs that are systematically produced according to some set of prescribed methods and procedures. This paper describes a design concept, called Wings, that offers a clearer definition of human-centered design. This new design concept is radically different from current design processes in that the design begins with the human and uses the human body as a metaphor for designing the aircraft. This is not because the human is the most important part of the aircraft (certainly the aircraft would be useless without lift and thrust), but because he is the least understood, the least programmable, and one of the more critical elements.
The Wings design concept has three properties: a reversal in the design process, from aerodynamics-, structures-, and propulsion-centered to truly human-centered; a design metaphor that guides function allocation and control and display design; and a deliberate distinction between two fundamental functions of design, to complement and to interpret human performance. The complementary function extends the human's capabilities beyond his or her current limitations - this includes sensing, computation, memory, physical force, and human decision making styles and skills. The interpretive (or hermeneutic, Hollnagel 1991) function translates information, functionality, and commands between the human and the aircraft. The Wings design concept allows the human to remain aware of the aircraft through natural interpretation. It also affords great improvements in system performance by maximizing the human's natural abilities and complementing the human's skills in a natural way. This paper will discuss the Wings design concept by describing the reversal in the traditional design process, the function allocation strategy of Wings, and the functions of complementing and interpreting the human.

  7. Multiconfiguration Pair-Density Functional Theory Is Free From Delocalization Error.

    PubMed

    Bao, Junwei Lucas; Wang, Ying; He, Xiao; Gagliardi, Laura; Truhlar, Donald G

    2017-11-16

    Delocalization error has been singled out by Yang and co-workers as the dominant error in Kohn-Sham density functional theory (KS-DFT) with conventional approximate functionals. In this Letter, by computing the vertical first ionization energy for well separated He clusters, we show that multiconfiguration pair-density functional theory (MC-PDFT) is free from delocalization error. To put MC-PDFT in perspective, we also compare it with some Kohn-Sham density functionals, including both traditional and modern functionals. Whereas large delocalization errors are almost universal in KS-DFT (the only exception being the very recent corrected functionals of Yang and co-workers), delocalization error is removed by MC-PDFT, which bodes well for its future as a step forward from KS-DFT.

  8. Functions of Turkish Complementary Schools in the UK: Official vs. Insider Discourses

    ERIC Educational Resources Information Center

    Çavusoglu, Çise

    2014-01-01

    Complementary schools in the United Kingdom (UK) are community organised schools with the general aim of teaching younger generations their "native" languages and cultures. However, the aims and practices of these schools are predominantly dependent on changes in the social and political contexts both in the host country (in this case…

  9. [New analgesics in paediatrics].

    PubMed

    Avez-Couturier, Justine; Wood, Chantal

    2016-01-01

    There are a number of different types of analgesics in paediatrics. They must be used in accordance with the situation, the type of pain and the characteristics of the child. In all cases, strict compliance with the posology and the instructions for use is essential to avoid any risk of error. Finally, pharmacological, physical and psychological treatments are employed in a complementary manner, for the biopsychosocial management of the child's care.

  10. Investigation of smoothness-increasing accuracy-conserving filters for improving streamline integration through discontinuous fields.

    PubMed

    Steffen, Michael; Curtis, Sean; Kirby, Robert M; Ryan, Jennifer K

    2008-01-01

    Streamline integration of fields produced by computational fluid mechanics simulations is a commonly used tool for the investigation and analysis of fluid flow phenomena. Integration is often accomplished through the application of ordinary differential equation (ODE) integrators--integrators whose error characteristics are predicated on the smoothness of the field through which the streamline is being integrated--smoothness which is not available at the inter-element level of finite volume and finite element data. Adaptive error control techniques are often used to ameliorate the challenge posed by inter-element discontinuities. As the root of the difficulties is the discontinuous nature of the data, we present a complementary approach of applying smoothness-enhancing accuracy-conserving filters to the data prior to streamline integration. We investigate whether such an approach applied to uniform quadrilateral discontinuous Galerkin (high-order finite volume) data can be used to augment current adaptive error control approaches. We discuss and demonstrate through numerical example the computational trade-offs exhibited when one applies such a strategy.

  11. Superficial vessel reconstruction with a multiview camera system

    PubMed Central

    Marreiros, Filipe M. M.; Rossitti, Sandro; Karlsson, Per M.; Wang, Chunliang; Gustafsson, Torbjörn; Carleberg, Per; Smedby, Örjan

    2016-01-01

    We aim at reconstructing superficial vessels of the brain. Ultimately, they will serve to guide the deformation methods that compensate for brain shift. A pipeline for three-dimensional (3-D) vessel reconstruction using three mono-CMOS (complementary metal-oxide-semiconductor) cameras has been developed. Vessel centerlines are manually selected in the images. Using the properties of the Hessian matrix, the centerline points are assigned direction information. For correspondence matching, a combination of methods was used. The process starts with epipolar and spatial coherence constraints (geometrical constraints), followed by relaxation labeling and an iterative filtering in which the 3-D points are compared to surfaces obtained using the thin-plate spline with a decreasing relaxation parameter. Finally, the points are shifted to their local centroid position. Evaluation in virtual, phantom, and experimental images, including intraoperative data from patient experiments, shows that, with appropriate camera positions, the error estimates (root-mean-square error and mean error) are ∼1 mm. PMID:26759814

  12. Characterizing error distributions for MISR and MODIS optical depth data

    NASA Astrophysics Data System (ADS)

    Paradise, S.; Braverman, A.; Kahn, R.; Wilson, B.

    2008-12-01

    The Multi-angle Imaging SpectroRadiometer (MISR) and Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's EOS satellites collect massive, long-term data records on aerosol amounts and particle properties. MISR and MODIS have different but complementary sampling characteristics. In order to realize maximum scientific benefit from these data, the nature of their error distributions must be quantified and understood so that discrepancies between them can be rectified and their information combined in the most beneficial way. By 'error' we mean all sources of discrepancies between the true value of the quantity of interest and the measured value, including instrument measurement errors, artifacts of retrieval algorithms, and differential spatial and temporal sampling characteristics. Previously in [Paradise et al., Fall AGU 2007: A12A-05] we presented a unified, global analysis and comparison of MISR and MODIS measurement biases and variances over the lives of the missions. We used AErosol RObotic NETwork (AERONET) data as ground truth and evaluated MISR and MODIS optical depth distributions relative to AERONET using simple linear regression. However, AERONET data are themselves instrumental measurements subject to sources of uncertainty. In this talk, we discuss results from an improved analysis of MISR and MODIS error distributions that uses errors-in-variables regression, accounting for uncertainties in both the dependent and independent variables. We demonstrate on optical depth data, but the method is generally applicable to other aerosol properties as well.
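
The errors-in-variables fit described above can be illustrated with the classical Deming estimator, which accounts for noise in both variables. This is a generic sketch of the technique, not the analysis actually run on the MISR/MODIS/AERONET data; the input points are invented.

```python
import math

def deming(x, y, delta=1.0):
    """Errors-in-variables (Deming) regression: both x and y are noisy.

    delta is the assumed ratio of y-error variance to x-error variance
    (delta = 1 gives orthogonal regression). Returns (slope, intercept).
    """
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    syy = sum((yi - ybar) ** 2 for yi in y)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    slope = (syy - delta * sxx
             + math.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)
             ) / (2 * sxy)
    return slope, ybar - slope * xbar

# Noise-free line y = 2x + 1: the estimator recovers it exactly.
slope, intercept = deming([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

In a satellite-versus-AERONET comparison, delta would be set from the two instruments' uncertainty estimates rather than left at its default.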

  13. Error minimization algorithm for comparative quantitative PCR analysis: Q-Anal.

    PubMed

    O'Connor, William; Runquist, Elizabeth A

    2008-07-01

    Current methods for comparative quantitative polymerase chain reaction (qPCR) analysis, the threshold and extrapolation methods, either make assumptions about PCR efficiency that require an arbitrary threshold selection process or extrapolate to estimate relative levels of messenger RNA (mRNA) transcripts. Here we describe an algorithm, Q-Anal, that blends elements from current methods to bypass assumptions regarding PCR efficiency and improve the threshold selection process to minimize error in comparative qPCR analysis. This algorithm uses iterative linear regression to identify the exponential phase for both target and reference amplicons and then selects, by minimizing linear regression error, a fluorescence threshold where efficiencies for both amplicons have been defined. From this defined fluorescence threshold, cycle time (Ct) and the error for both amplicons are calculated and used to determine the expression ratio. Ratios in complementary DNA (cDNA) dilution assays from qPCR data were analyzed by the Q-Anal method and compared with the threshold method and an extrapolation method. Dilution ratios determined by the Q-Anal and threshold methods were 86 to 118% of the expected cDNA ratios, but relative errors for the Q-Anal method were 4 to 10% in comparison with 4 to 34% for the threshold method. In contrast, ratios determined by an extrapolation method were 32 to 242% of the expected cDNA ratios, with relative errors of 67 to 193%. Q-Anal will be a valuable and quick method for minimizing error in comparative qPCR analysis.
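
The core idea of picking the exponential-phase window that minimizes linear-regression error on log fluorescence can be sketched roughly as follows. This is a toy single-amplicon illustration, not the published Q-Anal algorithm; the trace, window width, and efficiency cutoff are invented.

```python
import math

def window_efficiency(fluor, start, width):
    """Fit log2(fluorescence) vs. cycle over a window by least squares;
    return (amplification efficiency, sum of squared residuals)."""
    xs = list(range(start, start + width))
    ys = [math.log2(fluor[c]) for c in xs]
    xbar = sum(xs) / width
    ybar = sum(ys) / width
    sxx = sum((x - xbar) ** 2 for x in xs)
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    resid = sum((y - (ybar + slope * (x - xbar))) ** 2
                for x, y in zip(xs, ys))
    return 2.0 ** slope, resid

# Idealized trace: perfect doubling for 10 cycles, then a plateau.
trace = [1.0 * 2 ** c for c in range(10)] + [512.0] * 5

# Slide the window; keep only clearly exponential windows, then choose
# the one with the smallest regression error.
candidates = []
for s in range(len(trace) - 5):
    e, resid = window_efficiency(trace, s, 5)
    if e > 1.5:  # crude guard against flat (plateau/baseline) windows
        candidates.append((resid, e, s))
_, eff, start = min(candidates)
```

For the ideal doubling trace the selected window lies in the exponential phase and the estimated efficiency is 2 (a perfect doubling per cycle).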

  14. A Bayesian framework for infrasound location

    NASA Astrophysics Data System (ADS)

    Modrak, Ryan T.; Arrowsmith, Stephen J.; Anderson, Dale N.

    2010-04-01

    We develop a framework for location of infrasound events using back azimuths and infrasonic arrival times from multiple arrays. Bayesian infrasonic source location (BISL), developed here, estimates event location and associated credibility regions. BISL accounts for unknown source-to-array path or phase by treating infrasonic group velocity as random. Differences between observed and predicted source-to-array travel times are partitioned into two additive Gaussian sources, measurement error and model error, the second of which accounts for the unknown influence of wind and temperature on the path. By applying the technique to both synthetic tests and ground-truth events, we highlight the complementary nature of back azimuths and arrival times for estimating well-constrained event locations. BISL is an extension of methods developed earlier by Arrowsmith et al. that provided simple bounds on location using a grid-search technique.
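
A minimal grid-search location sketch in the spirit of the abstract, using only arrival times with a Gaussian error model. The fixed group velocity, known origin time, and station geometry are simplifying assumptions for illustration; BISL itself also treats group velocity as random and incorporates back azimuths.

```python
import math

def locate(stations, arrivals, v, t0, cell=1.0, extent=20):
    """Grid search: maximize a Gaussian log-likelihood of the observed
    arrival times given group velocity v and origin time t0."""
    sigma = 1.0  # combined measurement + model error (s), assumed known
    best, best_ll = None, -math.inf
    for i in range(extent):
        for j in range(extent):
            x, y = i * cell, j * cell
            ll = 0.0
            for (sx, sy), t in zip(stations, arrivals):
                pred = t0 + math.hypot(x - sx, y - sy) / v
                ll += -0.5 * ((t - pred) / sigma) ** 2
            if ll > best_ll:
                best, best_ll = (x, y), ll
    return best

stations = [(0.0, 0.0), (19.0, 0.0), (0.0, 19.0), (19.0, 19.0)]  # km
source = (7.0, 12.0)
v = 0.34  # km/s, a nominal infrasonic group velocity
arrivals = [0.0 + math.hypot(source[0] - sx, source[1] - sy) / v
            for sx, sy in stations]
est = locate(stations, arrivals, v, t0=0.0)
```

With noise-free synthetic arrivals the grid search recovers the true source cell; in a Bayesian treatment the same likelihood surface, normalized, yields the credibility regions the abstract describes.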

  15. Comparative High Voltage Impulse Measurement

    PubMed Central

    FitzPatrick, Gerald J.; Kelley, Edward F.

    1996-01-01

    A facility has been developed for the determination of the ratio of pulse high voltage dividers over the range from 10 kV to 300 kV using comparative techniques with Kerr electro-optic voltage measurement systems and reference resistive voltage dividers. Pulse voltage ratios of test dividers can be determined with relative expanded uncertainties of 0.4 % (coverage factor k = 2 and thus a two standard deviation estimate) or less using the complementary resistive divider/Kerr cell reference systems. This paper describes the facility and specialized procedures used at NIST for the determination of test voltage divider ratios through comparative techniques. The error sources and special considerations in the construction and use of reference voltage dividers to minimize errors are discussed, and estimates of the measurement uncertainties are presented. PMID:27805083

  16. Inferring the temperature dependence of population parameters: the effects of experimental design and inference algorithm

    PubMed Central

    Palamara, Gian Marco; Childs, Dylan Z; Clements, Christopher F; Petchey, Owen L; Plebani, Marco; Smith, Matthew J

    2014-01-01

    Understanding and quantifying the temperature dependence of population parameters, such as intrinsic growth rate and carrying capacity, is critical for predicting the ecological responses to environmental change. Many studies provide empirical estimates of such temperature dependencies, but a thorough investigation of the methods used to infer them has not been performed yet. We created artificial population time series using a stochastic logistic model parameterized with the Arrhenius equation, so that activation energy drives the temperature dependence of population parameters. We simulated different experimental designs and used different inference methods, varying the likelihood functions and other aspects of the parameter estimation methods. Finally, we applied the best performing inference methods to real data for the species Paramecium caudatum. The relative error of the estimates of activation energy varied between 5% and 30%. The fraction of habitat sampled played the most important role in determining the relative error; sampling at least 1% of the habitat kept it below 50%. We found that methods that simultaneously use all time series data (direct methods) and methods that estimate population parameters separately for each temperature (indirect methods) are complementary. Indirect methods provide a clearer insight into the shape of the functional form describing the temperature dependence of population parameters; direct methods enable a more accurate estimation of the parameters of such functional forms. Using both methods, we found that growth rate and carrying capacity of Paramecium caudatum scale with temperature according to different activation energies. Our study shows how careful choice of experimental design and inference methods can increase the accuracy of the inferred relationships between temperature and population parameters. 
The comparison of estimation methods provided here can increase the accuracy of model predictions, with important implications in understanding and predicting the effects of temperature on the dynamics of populations. PMID:25558365
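
The indirect method described above, estimating rates at separate temperatures and then recovering the activation energy from the Arrhenius equation, can be sketched as follows. The pre-exponential factor, temperatures, and activation energy are invented for illustration.

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant, eV/K

def arrhenius_rate(a, e_act, temp):
    """Population parameter (e.g. growth rate) under the Arrhenius equation."""
    return a * math.exp(-e_act / (K_B * temp))

def activation_energy(r1, t1, r2, t2):
    """Recover E from rates at two temperatures (indirect method):
    ln(r2/r1) = -(E/k_B) * (1/t2 - 1/t1)."""
    return -K_B * math.log(r2 / r1) / (1.0 / t2 - 1.0 / t1)

e_true = 0.65  # eV, a hypothetical activation energy
r1 = arrhenius_rate(1e10, e_true, 288.0)
r2 = arrhenius_rate(1e10, e_true, 298.0)
e_est = activation_energy(r1, 288.0, r2, 298.0)
```

With noise-free rates the two-temperature estimate is exact; the study's point is that with sampled, stochastic time series the error in such estimates depends strongly on the experimental design and the inference method.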

  17. Characterizing genomic alterations in cancer by complementary functional associations.

    PubMed

    Kim, Jong Wook; Botvinnik, Olga B; Abudayyeh, Omar; Birger, Chet; Rosenbluh, Joseph; Shrestha, Yashaswi; Abazeed, Mohamed E; Hammerman, Peter S; DiCara, Daniel; Konieczkowski, David J; Johannessen, Cory M; Liberzon, Arthur; Alizad-Rahvar, Amir Reza; Alexe, Gabriela; Aguirre, Andrew; Ghandi, Mahmoud; Greulich, Heidi; Vazquez, Francisca; Weir, Barbara A; Van Allen, Eliezer M; Tsherniak, Aviad; Shao, Diane D; Zack, Travis I; Noble, Michael; Getz, Gad; Beroukhim, Rameen; Garraway, Levi A; Ardakani, Masoud; Romualdi, Chiara; Sales, Gabriele; Barbie, David A; Boehm, Jesse S; Hahn, William C; Mesirov, Jill P; Tamayo, Pablo

    2016-05-01

    Systematic efforts to sequence the cancer genome have identified large numbers of mutations and copy number alterations in human cancers. However, elucidating the functional consequences of these variants, and their interactions to drive or maintain oncogenic states, remains a challenge in cancer research. We developed REVEALER, a computational method that identifies combinations of mutually exclusive genomic alterations correlated with functional phenotypes, such as the activation or gene dependency of oncogenic pathways or sensitivity to a drug treatment. We used REVEALER to uncover complementary genomic alterations associated with the transcriptional activation of β-catenin and NRF2, MEK-inhibitor sensitivity, and KRAS dependency. REVEALER successfully identified both known and new associations, demonstrating the power of combining functional profiles with extensive characterization of genomic alterations in cancer genomes.

  18. Comparison of various error functions in predicting the optimum isotherm by linear and non-linear regression analysis for the sorption of basic red 9 by activated carbon.

    PubMed

    Kumar, K Vasanth; Porkodi, K; Rocha, F

    2008-01-15

    A comparison of linear and non-linear regression methods in selecting the optimum isotherm was made for the experimental equilibrium data of basic red 9 sorption by activated carbon. The r² was used to select the best-fit linear theoretical isotherm. In the case of the non-linear regression method, six error functions, namely the coefficient of determination (r²), hybrid fractional error function (HYBRID), Marquardt's percent standard deviation (MPSD), average relative error (ARE), sum of the errors squared (ERRSQ) and sum of the absolute errors (EABS), were used to predict the parameters involved in the two- and three-parameter isotherms and also to predict the optimum isotherm. Non-linear regression was found to be a better way to obtain the parameters involved in the isotherms and also the optimum isotherm. For the two-parameter isotherms, MPSD was found to be the best error function for minimizing the error distribution between the experimental equilibrium data and predicted isotherms. In the case of the three-parameter isotherms, r² was found to be the best error function to minimize the error distribution between experimental equilibrium data and theoretical isotherms. The present study showed that the size of the error function alone is not a deciding factor in choosing the optimum isotherm. In addition to the size of the error function, the theory behind the predicted isotherm should be verified with the help of experimental data when selecting the optimum isotherm. A coefficient of non-determination, K², was explained and found to be very useful in identifying the best error function while selecting the optimum isotherm.
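
Commonly used forms of five of the error functions named in the abstract can be sketched as below; exact definitions vary slightly across the sorption literature, so treat these as representative rather than the authors' precise formulas. The sample data are invented.

```python
def error_functions(q_exp, q_calc, n_params):
    """ERRSQ, EABS, ARE, HYBRID and MPSD between experimental and
    calculated sorption capacities, for an isotherm with n_params
    fitted parameters."""
    n = len(q_exp)
    dev = [e - c for e, c in zip(q_exp, q_calc)]
    errsq = sum(d * d for d in dev)                       # sum of squares
    eabs = sum(abs(d) for d in dev)                       # sum of abs errors
    are = 100.0 / n * sum(abs(d / e) for d, e in zip(dev, q_exp))
    hybrid = 100.0 / (n - n_params) * sum(d * d / e
                                          for d, e in zip(dev, q_exp))
    mpsd = 100.0 * ((1.0 / (n - n_params)) *
                    sum((d / e) ** 2 for d, e in zip(dev, q_exp))) ** 0.5
    return {"ERRSQ": errsq, "EABS": eabs, "ARE": are,
            "HYBRID": hybrid, "MPSD": mpsd}

# Hypothetical equilibrium data vs. a fitted two-parameter isotherm.
errs = error_functions([1.0, 2.0, 4.0], [1.1, 1.9, 4.2], n_params=2)
```

Because each function weights the deviations differently (absolute vs. relative, squared vs. unsquared), minimizing each one generally yields a different parameter set, which is exactly why the choice of error function matters in isotherm selection.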

  19. Accuracy Improvement of Multi-Axis Systems Based on Laser Correction of Volumetric Geometric Errors

    NASA Astrophysics Data System (ADS)

    Teleshevsky, V. I.; Sokolov, V. A.; Pimushkin, Ya I.

    2018-04-01

    The article describes a volumetric geometric error correction method for CNC-controlled multi-axis systems (machine tools, CMMs, etc.). Kalman's concept of "Control and Observation" is used. A versatile multi-function laser interferometer serves as the Observer, measuring the machine's error functions. A systematic error map of the machine's workspace is produced from the error-function measurements, and the error map yields the error correction strategy. The article proposes a new method of forming the error correction strategy, based on the error distribution within the machine's workspace and a CNC program postprocessor. The postprocessor provides minimal error values within the maximal workspace zone. The results are confirmed by error correction of precision CNC machine tools.
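
The error-map idea, measuring errors at grid nodes and interpolating a correction at each commanded position, might be sketched as follows. This is a one-axis bilinear toy, not the article's postprocessor; the grid pitch and error values are invented.

```python
def corrected_target(cmd_x, cmd_y, err_map, pitch):
    """Look up the positioning error at a commanded (x, y) point by
    bilinear interpolation of a measured error map and pre-subtract it,
    so the axis lands on the nominal position. err_map[i][j] is the
    error (mm) measured at grid node (i * pitch, j * pitch); the point
    must lie inside the mapped patch."""
    i, j = int(cmd_x // pitch), int(cmd_y // pitch)
    fx, fy = cmd_x / pitch - i, cmd_y / pitch - j
    e = ((1 - fx) * (1 - fy) * err_map[i][j] +
         fx * (1 - fy) * err_map[i + 1][j] +
         (1 - fx) * fy * err_map[i][j + 1] +
         fx * fy * err_map[i + 1][j + 1])
    return cmd_x - e  # correction applied along one axis only, for brevity

# 2x2 patch of x-axis error values (mm) on a 10 mm grid pitch.
err_map = [[0.00, 0.02],
           [0.04, 0.10]]
target = corrected_target(5.0, 5.0, err_map, pitch=10.0)
```

A real volumetric correction interpolates a full error vector (and angular errors) over a 3-D map and emits the adjusted coordinates through the CNC postprocessor.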

  20. Adjoint-Based, Three-Dimensional Error Prediction and Grid Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.

    2002-01-01

    Engineering computational fluid dynamics (CFD) analysis and design applications focus on output functions (e.g., lift, drag). Errors in these output functions are generally unknown and conservatively accurate solutions may be computed. Computable error estimates can offer the possibility to minimize computational work for a prescribed error tolerance. Such an estimate can be computed by solving the flow equations and the linear adjoint problem for the functional of interest. The computational mesh can be modified to minimize the uncertainty of a computed error estimate. This robust mesh-adaptation procedure automatically terminates when the simulation is within a user specified error tolerance. This procedure for estimating and adapting to error in a functional is demonstrated for three-dimensional Euler problems. An adaptive mesh procedure that links to a Computer Aided Design (CAD) surface representation is demonstrated for wing, wing-body, and extruded high lift airfoil configurations. The error estimation and adaptation procedure yielded corrected functions that are as accurate as functions calculated on uniformly refined grids with ten times as many grid points.

  1. Tumour functional sphericity from PET images: prognostic value in NSCLC and impact of delineation method.

    PubMed

    Hatt, Mathieu; Laurent, Baptiste; Fayad, Hadi; Jaouen, Vincent; Visvikis, Dimitris; Le Rest, Catherine Cheze

    2018-04-01

    Sphericity has been proposed as a parameter for characterizing PET tumour volumes, with complementary prognostic value with respect to SUVmax and volume in both head and neck cancer and lung cancer. The objective of the present study was to investigate its dependency on tumour delineation and the resulting impact on its prognostic value. Five segmentation methods were considered: two thresholds (40% and 50% of SUVmax), ant colony optimization, fuzzy locally adaptive Bayesian (FLAB), and gradient-aided region-based active contour. The accuracy of each method in extracting sphericity was evaluated using a dataset of 176 simulated, phantom and clinical PET images of tumours with associated ground truth. The prognostic value of sphericity and its complementary value with respect to volume for each segmentation method was evaluated in a cohort of 87 patients with stage II/III lung cancer. Volume and associated sphericity values were dependent on the segmentation method. The correlation between segmentation accuracy and sphericity error was moderate (|ρ| from 0.24 to 0.57). The accuracy in measuring sphericity was not dependent on volume (|ρ| < 0.4). In the patients with lung cancer, sphericity had prognostic value, although lower than that of volume, except for that derived using FLAB, which when combined with volume showed a small improvement over volume alone (hazard ratio 2.67, compared with 2.5). Substantial differences in patient prognosis stratification were observed depending on the segmentation method used. Tumour functional sphericity was found to be dependent on the segmentation method, although the accuracy in retrieving the true sphericity was not dependent on tumour volume. In addition, even accurate segmentation can lead to an inaccurate sphericity value, and vice versa. Sphericity had similar or lower prognostic value than volume alone in the patients with lung cancer, except when determined using the FLAB method, for which there was a small improvement in stratification when the parameters were combined.
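
Sphericity is conventionally computed from the segmented volume V and surface area A as π^(1/3)·(6V)^(2/3)/A, which equals 1 for a perfect sphere and decreases as the shape becomes more irregular. A quick sketch (the radius is arbitrary):

```python
import math

def sphericity(volume, surface_area):
    """Sphericity = pi^(1/3) * (6V)^(2/3) / A.

    Equals 1 for a perfect sphere; lower for irregular shapes. In PET
    analysis, V and A come from the segmented tumour mesh, which is why
    the value depends on the delineation method."""
    return (math.pi ** (1.0 / 3.0)
            * (6.0 * volume) ** (2.0 / 3.0)
            / surface_area)

r = 17.5  # mm, a hypothetical spherical tumour radius
v = 4.0 / 3.0 * math.pi * r ** 3
a = 4.0 * math.pi * r ** 2
s = sphericity(v, a)
```

Since both V and A are read off the segmentation, two delineations with similar volume error can still produce quite different sphericity values, consistent with the abstract's finding.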

  2. Primary status, complementary status, and organizational survival in the U.S. venture capital industry.

    PubMed

    Bothner, Matthew S; Kim, Young-Kyu; Lee, Wonjae

    2015-07-01

    We introduce a distinction between two kinds of status and examine their effects on the exit rates of organizations investing in the U.S. venture capital industry. Extending past work on status-based competition, we start with a simple baseline: we describe primary status as a network-related signal of an organization's quality in a leadership role, that is, as a function of the degree to which an organization leads others that are themselves well regarded as lead organizations in the context of investment syndicates. Combining Harary's (1959) image of the elite consultant with Goffman's (1956) concept of "capacity-esteem," we then discuss complementary status as an affiliation-based signal of an organization's quality in a supporting role. We measure complementary status as a function of the extent to which an organization is invited into syndicates by well-regarded lead organizations, that is, by those possessing high levels of primary status. Findings show that, conditioning on primary status, complementary status reduces the rate at which venture capital organizations exit the industry. Consistent with the premise that these kinds of status correspond to different roles and market identities, we also find that complementary status attenuates (and ultimately reverses) the otherwise favorable effect of primary status on an organization's life chances. Theoretically and methodologically oriented scope conditions, as well as implications for future research, are discussed.

  3. DNA Photo Lithography with Cinnamate-based Photo-Bio-Nano-Glue

    NASA Astrophysics Data System (ADS)

    Feng, Lang; Li, Minfeng; Romulus, Joy; Sha, Ruojie; Royer, John; Wu, Kun-Ta; Xu, Qin; Seeman, Nadrian; Weck, Marcus; Chaikin, Paul

    2013-03-01

    We present a technique to make patterned functional surfaces, using a cinnamate photo cross-linker and photolithography. We have designed and modified a complementary set of single DNA strands to incorporate a pair of opposing cinnamate molecules. On exposure to 360 nm UV, the cinnamate makes a highly specific covalent bond permanently linking only the complementary strands containing the cinnamates. We have studied this specific and efficient crosslinking with cinnamate-containing DNA in solution and on particles. UV addressability allows us to pattern surfaces functionally. The entire surface is coated with a DNA sequence A incorporating cinnamate. DNA strands A'B, with one end containing a complementary cinnamated sequence A' attached to another sequence B, are then hybridized to the surface. UV photolithography is used to bind the A'B strand in a specific pattern. The system is heated and the unbound DNA is washed away. The pattern is then observed by thermo-reversibly hybridizing either fluorescently dyed B' strands complementary to B, or colloids coated with B' strands. Our techniques can be used to reversibly and/or permanently bind, via DNA linkers, an assortment of molecules, proteins and nanostructures. Potential applications range from advanced self-assembly, such as templated self-replication schemes recently reported, to designed physical and chemical patterns, to high-resolution multi-functional DNA surfaces for genetic detection or DNA computing.

  4. Flexible methods for segmentation evaluation: Results from CT-based luggage screening

    PubMed Central

    Karimi, Seemeen; Jiang, Xiaoqian; Cosman, Pamela; Martz, Harry

    2017-01-01

    BACKGROUND Imaging systems used in aviation security include segmentation algorithms in an automatic threat recognition pipeline. The segmentation algorithms evolve in response to emerging threats and changing performance requirements. Analysis of segmentation algorithms’ behavior, including the nature of errors and feature recovery, facilitates their development. However, evaluation methods from the literature provide limited characterization of the segmentation algorithms. OBJECTIVE To develop segmentation evaluation methods that measure systematic errors such as oversegmentation and undersegmentation, outliers, and overall errors. The methods must measure feature recovery and allow us to prioritize segments. METHODS We developed two complementary evaluation methods using statistical techniques and information theory. We also created a semi-automatic method to define ground truth from 3D images. We applied our methods to evaluate five segmentation algorithms developed for CT luggage screening. We validated our methods with synthetic problems and an observer evaluation. RESULTS Both methods selected the same best segmentation algorithm. Human evaluation confirmed the findings. The measurement of systematic errors and prioritization helped in understanding the behavior of each segmentation algorithm. CONCLUSIONS Our evaluation methods allow us to measure and explain the accuracy of segmentation algorithms. PMID:24699346

  5. Non-flickering 100 m RGB visible light communication transmission based on a CMOS image sensor.

    PubMed

    Chow, Chi-Wai; Shiu, Ruei-Jie; Liu, Yen-Chun; Liu, Yang; Yeh, Chien-Hung

    2018-03-19

    We demonstrate a non-flickering 100 m long-distance RGB visible light communication (VLC) transmission based on a complementary-metal-oxide-semiconductor (CMOS) camera. Experimental bit-error rate (BER) measurements under different camera ISO values and different transmission distances are evaluated. Here, we also experimentally reveal that the rolling-shutter-effect (RSE) based VLC system cannot work at long transmission distances, and that the under-sampled-modulation (USM) based VLC system is a good choice.

  6. Efficient demodulation scheme for rolling-shutter-patterning of CMOS image sensor based visible light communications.

    PubMed

    Chen, Chia-Wei; Chow, Chi-Wai; Liu, Yang; Yeh, Chien-Hung

    2017-10-02

    Recently even the low-end mobile-phones are equipped with a high-resolution complementary-metal-oxide-semiconductor (CMOS) image sensor. This motivates using a CMOS image sensor for visible light communication (VLC). Here we propose and demonstrate an efficient demodulation scheme to synchronize and demodulate the rolling shutter pattern in image sensor based VLC. The implementation algorithm is discussed. The bit-error-rate (BER) performance and processing latency are evaluated and compared with other thresholding schemes.
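
A rolling-shutter demodulation with a single global threshold, the simplest of the thresholding schemes such work compares against, can be sketched as follows. The synthetic row-brightness trace, rows-per-bit count, and brightness levels are invented.

```python
def demodulate_rows(row_means, rows_per_bit):
    """Decode an on-off-keyed rolling-shutter pattern: threshold each
    image row's mean brightness at the global mean, then majority-vote
    over the rows that each bit occupies."""
    thresh = sum(row_means) / len(row_means)
    bits = []
    for k in range(len(row_means) // rows_per_bit):
        chunk = row_means[k * rows_per_bit:(k + 1) * rows_per_bit]
        ones = sum(1 for v in chunk if v > thresh)
        bits.append(1 if ones * 2 > len(chunk) else 0)
    return bits

# Synthetic pattern: bits 1,0,1,1,0 with 4 rows per bit and mild
# brightness ripple on the bright stripes.
pattern = [1, 0, 1, 1, 0]
rows = [200 + (5 if i % 2 else -5) if b else 50
        for b in pattern for i in range(4)]
decoded = demodulate_rows(rows, rows_per_bit=4)
```

A single global threshold fails when illumination varies across the frame, which is why practical schemes move to adaptive or polynomial thresholds and why synchronization of the stripe boundaries matters.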

  7. [Ecologic evaluation in the cognitive assessment of brain injury patients: generation and execution of script].

    PubMed

    Baguena, N; Thomas-Antérion, C; Sciessere, K; Truche, A; Extier, C; Guyot, E; Paris, N

    2006-06-01

    To assess executive functions in an everyday-life activity, we evaluated brain-injured subjects with script generation and execution tasks. We compared a script generation task with a script execution task in which subjects had to prepare a cooked dish. Two scoring grids, one qualitative and one quantitative, were used, together with the calculation of an anosognosia score. We checked whether the execution task was more sensitive to dysexecutive disorder than the script generation task and compared the scores obtained in this evaluation with those from classical frontal tests. Twelve subjects with a brain injury sustained 6 +/- 4.79 years earlier and 12 healthy control subjects were tested. The subjects carried out a script generation task in which they had to explain the stages necessary to make a chocolate cake. They also performed a script execution task corresponding to actually making the cake. The two scoring grids were operational and complementary. The quantitative grid was more sensitive to dysexecutive disorder. The brain-injured subjects made more errors in the execution task. It is important to evaluate the executive functions of subjects with brain injury in everyday-life tasks, not just in psychometric or script-generation tests. Indeed, the ecological performance of a very simple task can reveal executive function difficulties, such as the planning or sequencing of actions, which are under-evaluated in laboratory tests.

  8. PI controller design of a wind turbine: evaluation of the pole-placement method and tuning using constrained optimization

    NASA Astrophysics Data System (ADS)

    Mirzaei, Mahmood; Tibaldi, Carlo; Hansen, Morten H.

    2016-09-01

    PI/PID controllers are the most common wind turbine controllers. Normally a first tuning is obtained using methods such as pole placement or Ziegler-Nichols, and extensive aeroelastic simulations are then used to obtain the best tuning in terms of regulation of the outputs and reduction of the loads. In traditional tuning approaches, the properties of the different open-loop and closed-loop transfer functions of the system are not normally considered. In this paper, an assessment of the pole-placement tuning method is presented based on robustness measures. Then a constrained optimization setup is suggested to automatically tune the wind turbine controller subject to robustness constraints. The properties of the system, such as the maximum sensitivity and complementary sensitivity functions (Ms and Mt), along with some of the responses of the system, are used to investigate the controller performance and formulate the optimization problem. The cost function is the integral absolute error (IAE) of the rotational speed from a disturbance modeled as a step in wind speed. A linearized model of the DTU 10-MW reference wind turbine is obtained using HAWCStab2. Thereafter, the model is reduced with model order reduction. Trade-off curves are given to assess the tunings of the pole-placement method, and a constrained optimization problem is solved to find the best tuning.
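
    The IAE-based tuning loop can be illustrated in a toy setting. This is a sketch only, not the paper's HAWCStab2 model: a first-order plant dy/dt = (-y + u + d)/tau stands in for the turbine, and a small grid search stands in for the constrained optimization.

```python
# Illustrative sketch: tune a PI controller on a stand-in first-order plant by
# minimizing the IAE of the regulated output after a unit step disturbance d.
# A grid search stands in for the paper's constrained optimization.

def iae_for_gains(kp, ki, tau=2.0, dt=0.01, t_end=20.0):
    """Integral absolute error of y for a unit step disturbance under PI control."""
    y, integ, iae = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        error = -y                        # setpoint is zero (speed regulation)
        integ += error * dt
        u = kp * error + ki * integ       # PI control law
        y += dt * (-y + u + 1.0) / tau    # Euler step; d = 1.0 step disturbance
        iae += abs(error) * dt
    return iae

grid = [(kp, ki) for kp in (0.5, 1.0, 2.0, 4.0) for ki in (0.1, 0.5, 1.0, 2.0)]
best = min(grid, key=lambda g: iae_for_gains(*g))
```

    In the paper the search is additionally constrained by robustness measures such as bounds on Ms and Mt, which a plain IAE minimization like this one does not enforce.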

  9. Complementary and alternative medicine for multiple sclerosis.

    PubMed

    Schwarz, S; Knorr, C; Geiger, H; Flachenecker, P

    2008-09-01

    We analyzed characteristics, motivation, and effectiveness of complementary and alternative medicine in a large sample of people with multiple sclerosis. A 53-item survey was mailed to the members of the German Multiple Sclerosis Society, chapter of Baden-Wuerttemberg. Surveys of 1573 patients (48.5 +/- 11.7 years, 74% women, duration of illness 18.1 +/- 10.5 years) were analyzed. In comparison with conventional medicine, more patients displayed a positive attitude toward complementary and alternative medicine (44% vs 38%, P < 0.05), with 70% reporting lifetime use of at least one method. Among a wide variety of complementary and alternative medicine, diet modification (41%), omega-3 fatty acids (37%), removal of amalgam fillings (28%), vitamins E (28%), B (36%), and C (28%), homeopathy (26%), and selenium (24%) were cited most frequently. Most respondents (69%) were satisfied with the effects of complementary and alternative medicine. Use of complementary and alternative medicine was associated with religiosity, functional independence, female sex, white-collar job, and higher education (P < 0.05). Compared with conventional therapies, complementary and alternative medicine rarely showed unwanted side effects (9% vs 59%, P < 0.00001). A total of 52% stated that the initial consultation with their physician lasted less than 15 min. To conclude, the main reasons for the use of complementary and alternative medicine include the high rate of side effects of, and low satisfaction with, conventional treatments, as well as brief patient-physician contacts.

  10. Biomimetic enzyme nanocomplexes and their use as antidotes and preventive measures for alcohol intoxication

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Du, Juanjuan; Yan, Ming; Lau, Mo Yin; Hu, Jay; Han, Hui; Yang, Otto O.; Liang, Sheng; Wei, Wei; Wang, Hui; Li, Jianmin; Zhu, Xinyuan; Shi, Linqi; Chen, Wei; Ji, Cheng; Lu, Yunfeng

    2013-03-01

    Organisms have sophisticated subcellular compartments containing enzymes that function in tandem. These confined compartments ensure effective chemical transformation and transport of molecules, and the elimination of toxic metabolic wastes. Creating functional enzyme complexes that are confined in a similar way remains challenging. Here we show that two or more enzymes with complementary functions can be assembled and encapsulated within a thin polymer shell to form enzyme nanocomplexes. These nanocomplexes exhibit improved catalytic efficiency and enhanced stability when compared with free enzymes. Furthermore, the co-localized enzymes display complementary functions, whereby toxic intermediates generated by one enzyme can be promptly eliminated by another enzyme. We show that nanocomplexes containing alcohol oxidase and catalase could reduce blood alcohol levels in intoxicated mice, offering an alternative antidote and prophylactic for alcohol intoxication.

  11. A modified adjoint-based grid adaptation and error correction method for unstructured grid

    NASA Astrophysics Data System (ADS)

    Cui, Pengcheng; Li, Bin; Tang, Jing; Chen, Jiangtao; Deng, Youqi

    2018-05-01

    Grid adaptation is an important strategy for improving the accuracy of output functions (e.g. drag, lift, etc.) in computational fluid dynamics (CFD) analysis and design applications. This paper presents a modified robust grid adaptation and error correction method for reducing simulation errors in integral outputs. The procedure is based on discrete adjoint optimization theory, in which the estimated global error of output functions can be directly related to the local residual error. According to this relationship, the local residual error contribution can be used as an indicator in a grid adaptation strategy designed to generate refined grids for accurately estimating the output functions. This grid adaptation and error correction method is applied to subsonic and supersonic simulations around three-dimensional configurations. Numerical results demonstrate that grid regions sensitive to the output functions are detected and refined after grid adaptation, and that the accuracy of the output functions is clearly improved after error correction. The proposed grid adaptation and error correction method compares very favorably in terms of output accuracy and computational efficiency with traditional feature-based grid adaptation.
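
    The adjoint error relation at the heart of such methods can be shown on a minimal linear-algebra stand-in (not the paper's CFD discretization): for A u = f and output J = g·u, solving the adjoint system Aᵀv = g gives exactly J(u) − J(u_h) = v·(f − A u_h), so the adjoint-weighted residual corrects the output computed from an inexact solution.

```python
# Minimal sketch of adjoint-based output error correction on a 2x2 linear
# system: for A u = f and output J = g.u, the adjoint solve A^T v = g gives
# J(u) - J(u_h) = v . (f - A u_h), exact in the linear case.

def solve2(a, b, c, d, f1, f2):
    """Solve [[a, b], [c, d]] x = [f1, f2] by Cramer's rule."""
    det = a * d - b * c
    return ((f1 * d - b * f2) / det, (a * f2 - f1 * c) / det)

A = (4.0, 1.0, 2.0, 3.0)                  # row-major 2x2 matrix
f = (1.0, 2.0)
g = (1.0, 1.0)                            # output functional J = u1 + u2

u_exact = solve2(*A, *f)
u_h = (0.3, 0.5)                          # some inexact ("coarse") solution
v = solve2(A[0], A[2], A[1], A[3], *g)    # adjoint: solve with A transposed

residual = (f[0] - (A[0]*u_h[0] + A[1]*u_h[1]),
            f[1] - (A[2]*u_h[0] + A[3]*u_h[1]))
correction = v[0]*residual[0] + v[1]*residual[1]
J_h = g[0]*u_h[0] + g[1]*u_h[1]
J_corrected = J_h + correction            # recovers g . u_exact
```

    In a nonlinear CFD setting the relation holds only to leading order, and the size of each local adjoint-weighted residual term serves as the refinement indicator.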

  12. CMOS plasmonics in WDM data transmission: 200 Gb/s (8 × 25Gb/s) transmission over aluminum plasmonic waveguides.

    PubMed

    Dabos, G; Manolis, A; Papaioannou, S; Tsiokos, D; Markey, L; Weeber, J-C; Dereux, A; Giesecke, A L; Porschatis, C; Chmielak, B; Pleros, N

    2018-05-14

    We demonstrate wavelength-division-multiplexed (WDM) 200 Gb/s (8 × 25 Gb/s) data transmission over 100 μm long aluminum (Al) surface-plasmon-polariton (SPP) waveguides on a Si3N4 waveguide platform at telecom wavelengths. The Al SPP waveguide was evaluated in terms of signal integrity by performing bit-error-rate (BER) measurements that revealed error-free operation for all eight 25 Gb/s non-return-to-zero (NRZ) modulated data channels with power penalties not exceeding 0.2 dB at a BER of 10^-9. To the best of our knowledge, this is the first demonstration of WDM-enabled data transmission over complementary-metal-oxide-semiconductor (CMOS) SPP waveguides, fueling future development of CMOS-compatible plasmo-photonic devices for on-chip optical interconnections.

  13. Integrated polarization-dependent sensor for autonomous navigation

    NASA Astrophysics Data System (ADS)

    Liu, Ze; Zhang, Ran; Wang, Zhiwen; Guan, Le; Li, Bin; Chu, Jinkui

    2015-01-01

    Based on the navigation strategy of insects utilizing polarized skylight, an integrated polarization-dependent sensor for autonomous navigation is presented. The navigation sensor features a compact structure, high precision, strong robustness, and a simple manufacturing technique. The sensor is constructed by integrating a complementary-metal-oxide-semiconductor sensor with a multiorientation nanowire grid polarizer. By nanoimprint lithography, the multiorientation nanowire polarizer is fabricated in one step and the alignment error is eliminated. Statistical theory is added to the interval-division algorithm to calculate the polarization angle of the incident light. Laboratory and outdoor tests of the navigation sensor are implemented, and the errors of the measured angle are ±0.02 deg and ±1.3 deg, respectively. The results show that the proposed sensor has potential for application in autonomous navigation.
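
    The basic angle recovery behind a multiorientation polarizer can be sketched with the standard four-orientation Stokes estimate; the paper's interval-division algorithm with statistical weighting is more involved, so this is only a baseline illustration.

```python
import math

def polarization_angle(i0, i45, i90, i135):
    """Angle of linear polarization (radians) from intensities measured through
    analyzers at 0, 45, 90 and 135 degrees, via the linear Stokes parameters
    S1 = I0 - I90 and S2 = I45 - I135."""
    return 0.5 * math.atan2(i45 - i135, i0 - i90)
```

    With ideal Malus-law intensities I(θ) = cos²(φ − θ), this recovers the polarization angle φ exactly; real pixels add noise, which is where the statistical treatment in the abstract comes in.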

  14. Interpretation of physiological indicators of motivation: Caveats and recommendations.

    PubMed

    Richter, Michael; Slade, Kate

    2017-09-01

    Motivation scientists employing physiological measures to gather information about motivation-related states are at risk of committing two fundamental errors: overstating the inferences that can be drawn from their physiological measures and circular reasoning. We critically discuss two complementary approaches, Cacioppo and colleagues' model of psychophysiological relations and construct validation theory, to highlight the conditions under which these errors are committed and provide guidance on how to avoid them. In particular, we demonstrate that the direct inference from changes in a physiological measure to changes in a motivation-related state requires the demonstration that the measure is not related to other relevant psychological states. We also point out that circular reasoning can be avoided by separating the definition of the motivation-related state from the hypotheses that are empirically tested. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Discrete wavelet transform: a tool in smoothing kinematic data.

    PubMed

    Ismail, A R; Asfour, S S

    1999-03-01

    Motion analysis systems typically introduce noise to the displacement data recorded. Butterworth digital filters have been used to smooth the displacement data in order to obtain smoothed velocities and accelerations. However, this technique does not yield satisfactory results, especially when dealing with complex kinematic motions that occupy the low- and high-frequency bands. The use of the discrete wavelet transform, as an alternative to digital filters, is presented in this paper. The transform passes the original signal through two complementary low- and high-pass FIR filters and decomposes the signal into an approximation function and a detail function. Further decomposition of the signal results in transforming the signal into a hierarchy set of orthogonal approximation and detail functions. A reverse process is employed to perfectly reconstruct the signal (inverse transform) back from its approximation and detail functions. The discrete wavelet transform was applied to the displacement data recorded by Pezzack et al., 1977. The smoothed displacement data were twice differentiated and compared to Pezzack et al.'s acceleration data in order to choose the most appropriate filter coefficients and decomposition level on the basis of maximizing the percentage of retained energy (PRE) and minimizing the root mean square error (RMSE). Daubechies wavelet of the fourth order (Db4) at the second decomposition level showed better results than both the biorthogonal and Coiflet wavelets (PRE = 97.5%, RMSE = 4.7 rad s-2). The Db4 wavelet was then used to compress complex displacement data obtained from a noisy mathematically generated function. Results clearly indicate superiority of this new smoothing approach over traditional filters.
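
    The abstract's Db4 transform at level two requires a wavelet library; a minimal one-level Haar decomposition illustrates the same idea of splitting a signal into complementary approximation and detail bands with perfect reconstruction.

```python
import math

R = math.sqrt(2.0)

def haar_decompose(x):
    """One-level Haar DWT: split x (even length) into approximation and detail."""
    approx = [(x[2*i] + x[2*i + 1]) / R for i in range(len(x) // 2)]
    detail = [(x[2*i] - x[2*i + 1]) / R for i in range(len(x) // 2)]
    return approx, detail

def haar_reconstruct(approx, detail):
    """Inverse transform: perfect reconstruction from the two bands."""
    x = []
    for a, d in zip(approx, detail):
        x.extend([(a + d) / R, (a - d) / R])
    return x
```

    Smoothing in this framework amounts to shrinking or zeroing detail coefficients before reconstruction; the smoothed displacement can then be differentiated twice to obtain acceleration, as done in the study.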

  16. Sustained attention to response task (SART) shows impaired vigilance in a spectrum of disorders of excessive daytime sleepiness.

    PubMed

    Van Schie, Mojca K M; Thijs, Roland D; Fronczek, Rolf; Middelkoop, Huub A M; Lammers, Gert Jan; Van Dijk, J Gert

    2012-08-01

    The sustained attention to response task comprises withholding key presses to one in nine of 225 target stimuli; it proved to be a sensitive measure of vigilance in a small group of narcoleptics. We studied sustained attention to response task results in 96 patients from a tertiary narcolepsy referral centre. Diagnoses according to ICSD-2 criteria were narcolepsy with (n=42) and without cataplexy (n=5), idiopathic hypersomnia without long sleep time (n=37), and obstructive sleep apnoea syndrome (n=12). The sustained attention to response task was administered prior to each of five multiple sleep latency test sessions. Analysis concerned error rates, mean reaction time, reaction time variability and post-error slowing, as well as the correlation of sustained attention to response task results with mean latency of the multiple sleep latency test and possible time of day influences. Median sustained attention to response task error scores ranged from 8.4 to 11.1, and mean reaction times from 332 to 366ms. Sustained attention to response task error score and mean reaction time did not differ significantly between patient groups. Sustained attention to response task error score did not correlate with multiple sleep latency test sleep latency. Reaction time was more variable as the error score was higher. Sustained attention to response task error score was highest for the first session. We conclude that a high sustained attention to response task error rate reflects vigilance impairment in excessive daytime sleepiness irrespective of its cause. The sustained attention to response task and the multiple sleep latency test reflect different aspects of sleep/wakefulness and are complementary. © 2011 European Sleep Research Society.

  17. Applications of CRISPR/Cas9 technology for targeted mutagenesis, gene replacement and stacking of genes in higher plants.

    PubMed

    Luo, Ming; Gilbert, Brian; Ayliffe, Michael

    2016-07-01

    Mutagenesis continues to play an essential role in understanding plant gene function and, in some instances, provides an opportunity for plant improvement. The development of gene editing technologies such as TALENs and zinc fingers has revolutionised the targeted mutation specificity that can now be achieved. The CRISPR/Cas9 system is the most recent addition to gene editing technologies and arguably the simplest, requiring only two components: a small guide RNA molecule (sgRNA) and a Cas9 endonuclease protein, which complex to recognise and cleave a specific 20 bp target site present in a genome. Target specificity is determined by complementary base pairing between the sgRNA and the target site sequence, enabling highly specific, targeted mutations to be readily engineered. Upon target site cleavage, error-prone endogenous repair mechanisms produce small insertions/deletions at the target site, usually resulting in loss of gene function. CRISPR/Cas9 gene editing has been rapidly adopted in plants and successfully undertaken in numerous species, including major crop species. Its applications are not restricted to mutagenesis, and target site cleavage can be exploited to promote sequence insertion or replacement by recombination. The multiple applications of this technology in plants are described.
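
    The target-recognition rule described above can be sketched as a simple sequence scan: a 20 bp protospacer matching the sgRNA spacer, followed on its 3' side by an NGG PAM. All sequences here are hypothetical, and real Cas9 targeting tolerates some mismatches that this exact-match sketch ignores.

```python
# Illustrative sketch of CRISPR/Cas9 target-site recognition: scan one genome
# strand for an exact 20 bp protospacer match followed by an NGG PAM.
# Sequences are hypothetical; real targeting tolerates some mismatches.

def find_target_sites(genome, spacer):
    """Return start indices of spacer matches that are followed by an NGG PAM."""
    sites = []
    for i in range(len(genome) - len(spacer) - 2):
        pam = genome[i + len(spacer): i + len(spacer) + 3]
        if genome[i:i + len(spacer)] == spacer and pam[1:] == "GG":
            sites.append(i)
    return sites
```

    A practical guide-design tool would also scan the reverse complement strand and score near-matches to flag potential off-target sites.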

  18. A function space approach to smoothing with applications to model error estimation for flexible spacecraft control

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1981-01-01

    A function space approach to smoothing is used to obtain a set of model error estimates inherent in a reduced-order model. By establishing knowledge of inevitable deficiencies in the truncated model, the error estimates provide a foundation for updating the model and thereby improving system performance. The function space smoothing solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for spacecraft attitude control.

  19. Investigating mode errors on automated flight decks: illustrating the problem-driven, cumulative, and interdisciplinary nature of human factors research.

    PubMed

    Sarter, Nadine

    2008-06-01

    The goal of this article is to illustrate the problem-driven, cumulative, and highly interdisciplinary nature of human factors research by providing a brief overview of the work on mode errors on modern flight decks over the past two decades. Mode errors on modern flight decks were first reported in the late 1980s. Poor feedback, inadequate mental models of the automation, and the high degree of coupling and complexity of flight deck systems were identified as main contributors to these breakdowns in human-automation interaction. Various improvements of design, training, and procedures were proposed to address these issues. The author describes when and why the problem of mode errors surfaced, summarizes complementary research activities that helped identify and understand the contributing factors to mode errors, and describes some countermeasures that have been developed in recent years. This brief review illustrates how one particular human factors problem in the aviation domain enabled various disciplines and methodological approaches to contribute to a better understanding of, as well as provide better support for, effective human-automation coordination. Converging operations and interdisciplinary collaboration over an extended period of time are hallmarks of successful human factors research. The reported body of research can serve as a model for future research and as a teaching tool for students in this field of work.

  20. Rail-to-rail differential input amplification stage with main and surrogate differential pairs

    DOEpatents

    Britton, Jr., Charles Lanier; Smith, Stephen Fulton

    2007-03-06

    An operational amplifier input stage provides a symmetrical rail-to-rail input common-mode voltage without turning off either pair of complementary differential input transistors. Secondary, or surrogate, transistor pairs assume the function of the complementary differential transistors. The circuit also maintains essentially constant transconductance, constant slew rate, and constant signal-path supply current as it provides rail-to-rail operation.

  1. Advancing biodiversity-ecosystem functioning science using high-density tree-based experiments over functional diversity gradients.

    PubMed

    Tobner, Cornelia M; Paquette, Alain; Reich, Peter B; Gravel, Dominique; Messier, Christian

    2014-03-01

    Increasing concern about loss of biodiversity and its effects on ecosystem functioning has triggered a series of manipulative experiments worldwide, which have demonstrated a general trend for ecosystem functioning to increase with diversity. General mechanisms proposed to explain diversity effects include complementary resource use and invoke a key role for species' functional traits. The actual mechanisms by which complementary resource use occurs remain, however, poorly understood, as well as whether they apply to tree-dominated ecosystems. Here we present an experimental approach offering multiple innovative aspects to the field of biodiversity-ecosystem functioning (BEF) research. The International Diversity Experiment Network with Trees (IDENT) allows research to be conducted at several hierarchical levels within individuals, neighborhoods, and communities. The network investigates questions related to intraspecific trait variation, complementarity, and environmental stress. The goal of IDENT is to identify some of the mechanisms through which individuals and species interact to promote coexistence and the complementary use of resources. IDENT includes several implemented and planned sites in North America and Europe, and uses a replicated design of high-density tree plots of fixed species-richness levels varying in functional diversity (FD). The design reduces the space and time needed for trees to interact allowing a thorough set of mixtures varying over different diversity gradients (specific, functional, phylogenetic) and environmental conditions (e.g., water stress) to be tested in the field. The intention of this paper is to share the experience in designing FD-focused BEF experiments with trees, to favor collaborations and expand the network to different conditions.

  2. Accurate Attitude Estimation Using ARS under Conditions of Vehicle Movement Based on Disturbance Acceleration Adaptive Estimation and Correction

    PubMed Central

    Xing, Li; Hang, Yijun; Xiong, Zhi; Liu, Jianye; Wan, Zhong

    2016-01-01

    This paper describes a disturbance acceleration adaptive estimate and correction approach for an attitude reference system (ARS) so as to improve the attitude estimate precision under vehicle movement conditions. The proposed approach depends on a Kalman filter, where the attitude error, the gyroscope zero offset error and the disturbance acceleration error are estimated. By switching the filter decay coefficient of the disturbance acceleration model in different acceleration modes, the disturbance acceleration is adaptively estimated and corrected, and then the attitude estimate precision is improved. The filter was tested in three different disturbance acceleration modes (non-acceleration, vibration-acceleration and sustained-acceleration mode, respectively) by digital simulation. Moreover, the proposed approach was tested in a kinematic vehicle experiment as well. Using the designed simulations and kinematic vehicle experiments, it has been shown that the disturbance acceleration of each mode can be accurately estimated and corrected. Moreover, compared with the complementary filter, the experimental results have explicitly demonstrated the proposed approach further improves the attitude estimate precision under vehicle movement conditions. PMID:27754469

  3. Accurate Attitude Estimation Using ARS under Conditions of Vehicle Movement Based on Disturbance Acceleration Adaptive Estimation and Correction.

    PubMed

    Xing, Li; Hang, Yijun; Xiong, Zhi; Liu, Jianye; Wan, Zhong

    2016-10-16

    This paper describes a disturbance acceleration adaptive estimate and correction approach for an attitude reference system (ARS) so as to improve the attitude estimate precision under vehicle movement conditions. The proposed approach depends on a Kalman filter, where the attitude error, the gyroscope zero offset error and the disturbance acceleration error are estimated. By switching the filter decay coefficient of the disturbance acceleration model in different acceleration modes, the disturbance acceleration is adaptively estimated and corrected, and then the attitude estimate precision is improved. The filter was tested in three different disturbance acceleration modes (non-acceleration, vibration-acceleration and sustained-acceleration mode, respectively) by digital simulation. Moreover, the proposed approach was tested in a kinematic vehicle experiment as well. Using the designed simulations and kinematic vehicle experiments, it has been shown that the disturbance acceleration of each mode can be accurately estimated and corrected. Moreover, compared with the complementary filter, the experimental results have explicitly demonstrated the proposed approach further improves the attitude estimate precision under vehicle movement conditions.

  4. Force-Time Entropy of Isometric Impulse.

    PubMed

    Hsieh, Tsung-Yu; Newell, Karl M

    2016-01-01

    The relation between force and temporal variability in discrete impulse production has been viewed as independent (R. A. Schmidt, H. Zelaznik, B. Hawkins, J. S. Frank, & J. T. Quinn, 1979) or dependent on the rate of force (L. G. Carlton & K. M. Newell, 1993). Two experiments in an isometric single-finger force task investigated the joint force-time entropy with (a) fixed time to peak force and different percentages of force level and (b) fixed percentage of force level and different times to peak force. The results showed that peak force variability increased either with force level or with a shorter time to peak force, the latter of which also reduced timing error variability. The peak force entropy and the entropy of time to peak force increased on their respective dimensions as the parameter conditions approached either maximum force or a minimum rate of force production. The findings show that force error and timing error are dependent but complementary when considered in the same framework, with the joint force-time entropy at a minimum in the middle parameter range of discrete impulse production.
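
    A joint force-time entropy of the kind analyzed here can be estimated from trial data by binning (peak force, time-to-peak) pairs and computing the Shannon entropy of the joint histogram. This is a generic sketch; the bin widths and units are hypothetical, not the authors' procedure.

```python
import math
from collections import Counter

def joint_entropy(samples, bin_width_f, bin_width_t):
    """Shannon entropy (bits) of (peak force, time-to-peak) pairs after binning."""
    bins = Counter((int(f // bin_width_f), int(t // bin_width_t))
                   for f, t in samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in bins.values())
```

    Identical trials give zero entropy, while variability spread over more force-time bins raises it, which is how a minimum in the middle of the parameter range can be detected.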

  5. Systemic errors in quantitative polymerase chain reaction titration of self-complementary adeno-associated viral vectors and improved alternative methods.

    PubMed

    Fagone, Paolo; Wright, J Fraser; Nathwani, Amit C; Nienhuis, Arthur W; Davidoff, Andrew M; Gray, John T

    2012-02-01

    Self-complementary AAV (scAAV) vector genomes contain a covalently closed hairpin derived from a mutated inverted terminal repeat that connects the two monomer single-stranded genomes into a head-to-head or tail-to-tail dimer. We found that during quantitative PCR (qPCR) this structure inhibits the amplification of proximal amplicons and causes the systemic underreporting of copy number by as much as 10-fold. We show that cleavage of scAAV vector genomes with restriction endonuclease to liberate amplicons from the covalently closed terminal hairpin restores quantitative amplification, and we implement this procedure in a simple, modified qPCR titration method for scAAV vectors. In addition, we developed and present an AAV genome titration procedure based on gel electrophoresis that requires minimal sample processing and has low interassay variability, and as such is well suited for the rigorous quality control demands of clinical vector production facilities.

  6. Radiation-hardened MRAM-based LUT for non-volatile FPGA soft error mitigation with multi-node upset tolerance

    NASA Astrophysics Data System (ADS)

    Zand, Ramtin; DeMara, Ronald F.

    2017-12-01

    In this paper, we have developed a radiation-hardened non-volatile lookup table (LUT) circuit utilizing spin Hall effect (SHE)-magnetic random access memory (MRAM) devices. The design is motivated by modeling the effect of radiation particles striking hybrid complementary metal oxide semiconductor/spin based circuits, and the resistive behavior of SHE-MRAM devices via established and precise physics equations. The models developed are leveraged in the SPICE circuit simulator to verify the functionality of the proposed design. The proposed hardening technique is based on using feedback transistors, as well as increasing the radiation capacity of the sensitive nodes. Simulation results show that our proposed LUT circuit can achieve multiple node upset (MNU) tolerance with more than 38% and 60% power-delay product improvement as well as 26% and 50% reduction in device count compared to the previous energy-efficient radiation-hardened LUT designs. Finally, we have performed a process variation analysis showing that the MNU immunity of our proposed circuit is realized at the cost of increased susceptibility to transistor and MRAM variations compared to an unprotected LUT design.

  7. An alternative approach based on artificial neural networks to study controlled drug release.

    PubMed

    Reis, Marcus A A; Sinisterra, Rubén D; Belchior, Jadson C

    2004-02-01

    An alternative methodology based on artificial neural networks is proposed to be a complementary tool to other conventional methods to study controlled drug release. Two systems are used to test the approach; namely, hydrocortisone in a biodegradable matrix and rhodium (II) butyrate complexes in a bioceramic matrix. Two well-established mathematical models are used to simulate different release profiles as a function of fundamental properties; namely, diffusion coefficient (D), saturation solubility (C(s)), drug loading (A), and the height of the device (h). The models were tested, and the results show that these fundamental properties can be predicted after learning the experimental or model data for controlled drug release systems. The neural network results obtained after the learning stage can be considered to quantitatively predict ideal experimental conditions. Overall, the proposed methodology was shown to be efficient for ideal experiments, with a relative average error of <1% in both tests. This approach can be useful for the experimental analysis to simulate and design efficient controlled drug-release systems. Copyright 2004 Wiley-Liss, Inc. and the American Pharmacists Association

  8. Generation and tooth contact analysis of spiral bevel gears with predesigned parabolic functions of transmission errors

    NASA Technical Reports Server (NTRS)

    Litvin, Faydor L.; Lee, Hong-Tao

    1989-01-01

    A new approach for the determination of machine-tool settings for spiral bevel gears is proposed. The proposed settings provide a predesigned parabolic function of transmission errors and the desired location and orientation of the bearing contact. The predesigned parabolic function of transmission errors is able to absorb the piecewise linear functions of transmission errors that are caused by gear misalignment, and thereby reduce gear noise. The gears are face-milled by head cutters with conical surfaces or surfaces of revolution. A computer program for the simulation of meshing and bearing contact and the determination of transmission errors for misaligned gears has been developed.

  9. Predicting ambient aerosol thermal-optical reflectance (TOR) measurements from infrared spectra: organic carbon

    NASA Astrophysics Data System (ADS)

    Dillner, A. M.; Takahama, S.

    2015-03-01

    Organic carbon (OC) can constitute 50% or more of the mass of atmospheric particulate matter. Typically, organic carbon is measured from a quartz fiber filter that has been exposed to a volume of ambient air and analyzed using thermal methods such as thermal-optical reflectance (TOR). Here, methods are presented that show the feasibility of using Fourier transform infrared (FT-IR) absorbance spectra from polytetrafluoroethylene (PTFE or Teflon) filters to accurately predict TOR OC. This work marks an initial step in proposing a method that can reduce the operating costs of large air quality monitoring networks with an inexpensive, non-destructive analysis technique using routinely collected PTFE filter samples which, in addition to OC concentrations, can concurrently provide information regarding the composition of organic aerosol. This feasibility study suggests that the minimum detection limit and errors (or uncertainty) of FT-IR predictions are on par with TOR OC such that evaluation of long-term trends and epidemiological studies would not be significantly impacted. To develop and test the method, FT-IR absorbance spectra are obtained from 794 samples from seven Interagency Monitoring of PROtected Visual Environment (IMPROVE) sites collected during 2011. Partial least-squares regression is used to calibrate sample FT-IR absorbance spectra to TOR OC. The FT-IR spectra are divided into calibration and test sets by sampling site and date. The calibration produces precise and accurate TOR OC predictions of the test set samples by FT-IR, as indicated by a high coefficient of determination (R2 = 0.96), low bias (0.02 μg m-3; the nominal IMPROVE sample volume is 32.8 m3), low error (0.08 μg m-3) and low normalized error (11%). These performance metrics can be achieved with various degrees of spectral pretreatment (e.g., including or excluding substrate contributions to the absorbances) and are comparable in precision to collocated TOR measurements. FT-IR spectra are also divided into calibration and test sets by OC mass and by OM / OC ratio, which reflects the organic composition of the particulate matter and is obtained from organic functional group composition; these divisions also lead to precise and accurate OC predictions. Low OC concentrations have higher bias and normalized error due to TOR analytical errors and artifact-correction errors, not due to the range of OC mass of the samples in the calibration set. However, samples with low OC mass can be used to predict samples with high OC mass, indicating that the calibration is linear. Using samples in the calibration set that have different OM / OC or ammonium / OC distributions than the test set leads to only a modest increase in bias and normalized error in the predicted samples. We conclude that FT-IR analysis with partial least-squares regression is a robust method for accurately predicting TOR OC in IMPROVE network samples - providing complementary information to the organic functional group composition and organic aerosol mass estimated previously from the same set of sample spectra (Ruthenburg et al., 2014).

  10. Research on the attitude of small UAV based on MEMS devices

    NASA Astrophysics Data System (ADS)

    Shi, Xiaojie; Lu, Libin; Jin, Guodong; Tan, Lining

    2017-05-01

    This paper introduces the principles and implementation of a small-UAV attitude reference system based on MEMS devices. A Gauss-Newton method based on least squares is used to calibrate the MEMS accelerometer and gyroscope, and a modified complementary filter corrects the attitude angle error to improve attitude accuracy. The experimental data show that the attitude system designed in this paper meets the attitude-accuracy requirements of a small UAV while remaining compact and low in cost.
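A minimal sketch of the kind of complementary filter the abstract describes: the integrated gyro rate is trusted at high frequency and the accelerometer-derived angle at low frequency. The gain, bias, and noise levels below are illustrative assumptions, not values from the paper.

```python
import random

def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One filter step: blend the gyro integral (high-pass path) with the
    accelerometer angle (low-pass path); alpha sets the crossover."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Stationary-vehicle simulation: true pitch 10 deg, gyro with a constant
# bias, noisy but unbiased accelerometer angle (all values hypothetical).
random.seed(1)
true_pitch, gyro_bias, dt = 10.0, 0.5, 0.01   # deg, deg/s, s
angle = 0.0
for _ in range(5000):
    gyro_meas = 0.0 + gyro_bias                        # biased rate, deg/s
    accel_angle = true_pitch + random.gauss(0.0, 2.0)  # noisy angle, deg
    angle = complementary_filter(angle, gyro_meas, accel_angle, dt)
print(abs(angle - true_pitch) < 1.0)  # settles near the true pitch
```

The filter suppresses both the gyro's drift and the accelerometer's noise, which is why it is a common low-cost alternative to a Kalman filter on MEMS hardware.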

  11. Direct Nanoscale Conversion of Biomolecular Signals into Electronic Information

    DTIC Science & Technology

    2008-09-22

    the electrode surface. In this experiment, the single free cysteine group featured in the GOx structure was exploited to demonstrate that orientation...first with GOx-ssDNA conjugates featuring a sequence complementary to the address strand, then with a non-complementary conjugate and finally with...fully-functional for an enzyme that features a free thiol group, or that can be engineered to incorporate a thiol onto its outer shell

  12. Terahertz transmission resonances in complementary multilayered metamaterial with deep subwavelength interlayer spacing

    NASA Astrophysics Data System (ADS)

    Choi, Muhan; Kang, Byungsoo; Yi, Yoonsik; Lee, Seung Hoon; Kim, Inbo; Han, Jae-Hyung; Yi, Minwoo; Ahn, Jaewook; Choi, Choon-Gi

    2016-05-01

    We introduce a flexible multilayered THz metamaterial, designed using Babinet's principle, that functions as a narrow band-pass filter. The metamaterial provides a systematic way to design frequency selective surfaces working at intended frequencies and bandwidths. It shows highly enhanced transmission of 80% for normally incident THz waves due to the strong coupling of the two mutually complementary metamaterial layers.

  13. Comparison of PSF maxima and minima of multiple annuli coded aperture (MACA) and complementary multiple annuli coded aperture (CMACA) systems

    NASA Astrophysics Data System (ADS)

    Ratnam, Challa; Lakshmana Rao, Vadlamudi; Lachaa Goud, Sivagouni

    2006-10-01

    In the present paper, and a series of papers to follow, the Fourier analytical properties of multiple annuli coded aperture (MACA) and complementary multiple annuli coded aperture (CMACA) systems are investigated. First, the transmission function for MACA and CMACA is derived using Fourier methods and, based on Fresnel-Kirchhoff diffraction theory, expressions for the point spread function (PSF) are formulated. The PSF maxima and minima are calculated for both the MACA and CMACA systems. The dependence of these properties on the number of zones is studied and reported in this paper.

  14. Complementary and partially complementary DNA duplexes tethered to a functionalized substrate: a molecular dynamics approach to biosensing.

    PubMed

    Monti, Susanna; Cacelli, Ivo; Ferretti, Alessandro; Prampolini, Giacomo; Barone, Vincenzo

    2011-07-21

    Molecular dynamics simulations (90 ns) of different DNA complexes attached to a functionalized substrate in solution were performed in order to clarify the behavior of mismatched DNA sequences captured by a tethered DNA probe (biochip). Examination of the trajectories revealed that the substrate influence and a series of cooperative events, including recognition, reorientation and reorganization of the bases, could induce the formation of stable duplexes having non-canonical arrangements. Major adjustment of the structures was observed when the mutated base was located in the end region of the chain close to the surface. This journal is © the Owner Societies 2011

  15. Balancing Hole and Electron Conduction in Ambipolar Split-Gate Thin-Film Transistors.

    PubMed

    Yoo, Hocheon; Ghittorelli, Matteo; Lee, Dong-Kyu; Smits, Edsger C P; Gelinck, Gerwin H; Ahn, Hyungju; Lee, Han-Koo; Torricelli, Fabrizio; Kim, Jae-Joon

    2017-07-10

    Complementary organic electronics is a key enabling technology for the development of new applications including smart ubiquitous sensors, wearable electronics, and healthcare devices. High-performance, high-functionality and reliable complementary circuits require n- and p-type thin-film transistors with balanced characteristics. Recent advancements in ambipolar organic transistors in terms of semiconductor and device engineering demonstrate the great potential of this route but, unfortunately, the actual development of ambipolar organic complementary electronics is currently hampered by the uneven electron (n-type) and hole (p-type) conduction in ambipolar organic transistors. Here we show ambipolar organic thin-film transistors with balanced n-type and p-type operation. By manipulating air exposure and vacuum annealing conditions, we show that well-balanced electron and hole transport properties can be easily obtained. The method is used to control hole and electron conductions in split-gate transistors based on a solution-processed donor-acceptor semiconducting polymer. Complementary logic inverters with balanced charging and discharging characteristics are demonstrated. These findings may open up new opportunities for the rational design of complementary electronics based on ambipolar organic transistors.

  16. Gradient ascent pulse engineering approach to CNOT gates in donor electron spin quantum computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsai, D.-B.; Goan, H.-S.

    2008-11-07

    In this paper, we demonstrate how gradient ascent pulse engineering (GRAPE) optimal control methods can be implemented on donor electron spin qubits in semiconductors with an architecture complementary to the original Kane's proposal. We focus on the high-fidelity controlled-NOT (CNOT) gate and explicitly find the digitized control sequences for a CNOT gate by optimizing its fidelity, using the effective, reduced donor electron spin Hamiltonian with external controls over the hyperfine A and exchange J interactions. We then simulate the CNOT-gate sequence with the full spin Hamiltonian and find that it has an error of 10^-6, below the error threshold of 10^-4 required for fault-tolerant quantum computation. The CNOT gate operation time of 100 ns is also about 3 times faster than the 297 ns of the proposed global control scheme.

  17. A Dissimilarity Measure for Clustering High- and Infinite Dimensional Data that Satisfies the Triangle Inequality

    NASA Technical Reports Server (NTRS)

    Socolovsky, Eduardo A.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The cosine or correlation measures of similarity used to cluster high dimensional data are interpreted as projections, and the orthogonal components are used to define a complementary dissimilarity measure to form a similarity-dissimilarity measure pair. Using a geometrical approach, a number of properties of this pair are established. This approach is also extended to general inner-product spaces of any dimension. These properties include the triangle inequality for the defined dissimilarity measure, error estimates for the triangle inequality and bounds on both measures that can be obtained with a few floating-point operations from previously computed values of the measures. The bounds and error estimates for the similarity and dissimilarity measures can be used to reduce the computational complexity of clustering algorithms and enhance their scalability, and the triangle inequality allows the design of clustering algorithms for high dimensional distributed data.
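The projection idea can be sketched directly: cosine similarity is the normalized projection of one vector onto the other, and the complementary dissimilarity is the normalized length of the orthogonal component, i.e. the sine of the angle between them. The vectors below are illustrative nonnegative data (for which the angle stays within [0, pi/2] and the triangle inequality holds as the paper establishes).

```python
import math

def sim_dissim(x, y):
    """Similarity-dissimilarity pair for nonzero vectors: cosine of the
    angle (projection) and sine of the angle (orthogonal component)."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    s = dot / (nx * ny)
    d = math.sqrt(max(0.0, 1.0 - s * s))  # ||orthogonal part|| / (||x|| ||y||)
    return s, d

# Triangle inequality check d(x, z) <= d(x, y) + d(y, z) on sample vectors
x, y, z = [1.0, 0.0, 0.0], [1.0, 1.0, 0.0], [0.0, 1.0, 1.0]
dxy = sim_dissim(x, y)[1]
dyz = sim_dissim(y, z)[1]
dxz = sim_dissim(x, z)[1]
print(dxz <= dxy + dyz)  # True
```

Because both measures come from the same dot product and norms, computing one after the other costs only a few extra floating-point operations, which is what enables the bounds mentioned in the abstract.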

  18. High data rate Reed-Solomon encoding and decoding using VLSI technology

    NASA Technical Reports Server (NTRS)

    Miller, Warner; Morakis, James

    1987-01-01

    Presented is an implementation of a Reed-Solomon encoder and decoder that corrects up to 16 symbol errors, with each symbol 8 bits. This Reed-Solomon (RS) code is an efficient error correcting code that the National Aeronautics and Space Administration (NASA) will use in future space communications missions. A Very Large Scale Integration (VLSI) implementation of the encoder and decoder accepts data rates up to 80 Mbps. A total of seven chips are needed for the decoder (four of the seven decoding chips are customized using 3-micron Complementary Metal Oxide Semiconductor (CMOS) technology) and one chip is required for the encoder. The decoder operates with the symbol clock being the system clock for the chip set. Approximately 1.65 billion Galois Field (GF) operations per second are achieved with the decoder chip set and 640 MOPS are achieved with the encoder chip.
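A GF(2^8) multiply, the basic Galois-field operation counted in the throughput figures above, can be sketched in software as follows. The field polynomial 0x11D is a common Reed-Solomon choice and an assumption here; the abstract does not specify the polynomial used by the NASA code.

```python
def gf256_mul(a, b, poly=0x11D):
    """Carry-less shift-and-add multiply in GF(2^8), reducing by the field
    polynomial whenever the intermediate product overflows 8 bits.
    poly=0x11D (x^8+x^4+x^3+x^2+1) is an assumed, commonly used choice."""
    r = 0
    while b:
        if b & 1:
            r ^= a          # "add" is XOR in characteristic 2
        a <<= 1
        if a & 0x100:
            a ^= poly       # reduce mod the field polynomial
        b >>= 1
    return r

print(gf256_mul(2, 128))    # x * x^7 = x^8, reduced -> 29
```

Hardware implements the same arithmetic with XOR trees, which is how the chip set reaches billions of GF operations per second.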

  19. Double dissociation of value computations in orbitofrontal and anterior cingulate neurons

    PubMed Central

    Kennerley, Steven W.; Behrens, Timothy E. J.; Wallis, Jonathan D.

    2011-01-01

    Damage to prefrontal cortex (PFC) impairs decision-making, but the underlying value computations that might cause such impairments remain unclear. Here we report that value computations are doubly dissociable within PFC neurons. While many PFC neurons encoded chosen value, they used opponent encoding schemes such that averaging the neuronal population eliminated value coding. However, a special population of neurons in anterior cingulate cortex (ACC) - but not orbitofrontal cortex (OFC) - multiplexed chosen value across decision parameters using a unified encoding scheme and encoded reward prediction errors. In contrast, neurons in OFC - but not ACC - encoded chosen value relative to the recent history of choice values. Together, these results suggest complementary valuation processes across PFC areas: OFC neurons dynamically evaluate current choices relative to recent choice values, while ACC neurons encode choice predictions and prediction errors using a common valuation currency reflecting the integration of multiple decision parameters. PMID:22037498

  20. Effects of a cochlear implant simulation on immediate memory in normal-hearing adults

    PubMed Central

    Burkholder, Rose A.; Pisoni, David B.; Svirsky, Mario A.

    2012-01-01

    This study assessed the effects of stimulus misidentification and memory processing errors on immediate memory span in 25 normal-hearing adults exposed to degraded auditory input simulating signals provided by a cochlear implant. The identification accuracy of degraded digits in isolation was measured before digit span testing. Forward and backward digit spans were shorter when digits were degraded than when they were normal. Participants’ normal digit spans and their accuracy in identifying isolated digits were used to predict digit spans in the degraded speech condition. The observed digit spans in degraded conditions did not differ significantly from predicted digit spans. This suggests that the decrease in memory span is related primarily to misidentification of digits rather than memory processing errors related to cognitive load. These findings provide complementary information to earlier research on auditory memory span of listeners exposed to degraded speech either experimentally or as a consequence of a hearing impairment. PMID:16317807

  1. Inkjet printed circuits based on ambipolar and p-type carbon nanotube thin-film transistors

    NASA Astrophysics Data System (ADS)

    Kim, Bongjun; Geier, Michael L.; Hersam, Mark C.; Dodabalapur, Ananth

    2017-02-01

    Ambipolar and p-type single-walled carbon nanotube (SWCNT) thin-film transistors (TFTs) are reliably integrated into various complementary-like circuits on the same substrate by inkjet printing. We describe the fabrication and characteristics of inverters, ring oscillators, and NAND gates based on complementary-like circuits fabricated with such TFTs as building blocks. We also show that complementary-like circuits have potential use as chemical sensors in ambient conditions since changes to the TFT characteristics of the p-channel TFTs in the circuit alter the overall operating characteristics of the circuit. The use of circuits rather than individual devices as sensors integrates sensing and signal processing functions, thereby simplifying overall system design.

  2. Influence of combined visual and vestibular cues on human perception and control of horizontal rotation

    NASA Technical Reports Server (NTRS)

    Zacharias, G. L.; Young, L. R.

    1981-01-01

    Measurements are made of manual control performance in the closed-loop task of nulling perceived self-rotation velocity about an earth-vertical axis. Self-velocity estimation is modeled as a function of the simultaneous presentation of vestibular and peripheral visual field motion cues. Based on measured low-frequency operator behavior in three visual field environments, a parallel channel linear model is proposed which has separate visual and vestibular pathways summing in a complementary manner. A dual-input describing function analysis supports the complementary model; vestibular cues dominate sensation at higher frequencies. The describing function model is extended by the proposal of a nonlinear cue conflict model, in which cue weighting depends on the level of agreement between visual and vestibular cues.

  3. [News items about clinical errors and safety perceptions in hospital patients].

    PubMed

    Mira, José Joaquín; Guilabert, Mercedes; Ortíz, Lidia; Navarro, Isabel María; Pérez-Jover, María Virtudes; Aranaz, Jesús María

    2010-01-01

    To analyze how news items about clinical errors are treated by the press in Spain and their influence on patients. We performed a quantitative and qualitative study. Firstly, news items published between April and November 2007 in six newspapers were analyzed. Secondly, 829 patients from five hospitals in four autonomous regions were surveyed. We analyzed 90 cases generating 128 news items, representing a mean of 16 items per month. In 91 news items (71.1%) the source was checked. In 78 items (60.9%) the author could be identified. The impact of these news items was -4.86 points (95% confidence interval [95%CI]: -4.15-5.57). In 59 cases (57%) the error was attributed to the system, in 27 (21.3%) to health professionals, and in 41 (32.3%) to both. Neither the number of columns (p=0.702), nor the inclusion of a sub-header (p=0.195), nor a complementary image (p=0.9) were found to be related to the effect of the error on safety perceptions. Of the 829 patients, 515 (62.1%; 95%CI: 58.8-65.4%) claimed to have recently seen or heard news about clinical errors in the press, on the radio or on television. The perception of safety decreased when the same person was worried about being the victim of a clinical error and had seen a recent news item about such adverse events (chi(2)=15.17; p=0.001). Every week news items about clinical errors are published or broadcast. The way in which newspapers report legal claims over alleged medical errors is similar to the way they report judicial sentences for negligence causing irreparable damage or harm. News about errors generates insecurity in patients. It is advisable to create interfaces between journalists and health professionals. Copyright 2009 SESPAS. Published by Elsevier Espana. All rights reserved.

  4. Roles for Msx and Dlx homeoproteins in vertebrate development.

    PubMed

    Bendall, A J; Abate-Shen, C

    2000-04-18

    This review provides a comparative analysis of the expression patterns, functions, and biochemical properties of Msx and Dlx homeobox genes. These comprise multi-gene families that are closely related with respect to sequence features as well as expression patterns during vertebrate development. Thus, members of the Msx and Dlx families are expressed in overlapping, but distinct, patterns and display complementary or antagonistic functions, depending upon the context. A common theme shared among Msx and Dlx genes is that they are required during early, middle, and late phases of development where their differential expression mediates patterning, morphogenesis, and histogenesis of tissues in which they are expressed. With respect to their biochemical properties, Msx proteins function as transcriptional repressors, while Dlx proteins are transcriptional activators. Moreover, their ability to oppose each other's transcriptional actions implies a mechanism underlying their complementary or antagonistic functions during development.

  5. Continually emerging mechanistic complexity of the multi-enzyme cellulosome complex.

    PubMed

    Smith, Steven P; Bayer, Edward A; Czjzek, Mirjam

    2017-06-01

    The robust plant cell wall polysaccharide-degrading properties of anaerobic bacteria are harnessed within elegant, macromolecular assemblages called cellulosomes, in which proteins of complementary activities amass on scaffold protein networks. Research efforts have focused and continue to focus on providing detailed mechanistic insights into cellulosomal complex assembly, topology, and function. The accumulated information is expanding our fundamental understanding of the lignocellulosic biomass decomposition process and enhancing the potential of engineered cellulosomal systems for biotechnological purposes. Ongoing biochemical studies continue to reveal unexpected functional diversity within traditional cellulase families. Genomic, proteomic, and functional analyses have uncovered unanticipated cellulosomal proteins that augment the function of the native and designer cellulosomes. In addition, complementary structural and computational methods are continuing to provide much needed insights on the influence of cellulosomal interdomain linker regions on cellulosomal assembly and activity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Thyroid paraganglioma. Report of 3 cases and description of an immunohistochemical profile useful in the differential diagnosis with medullary thyroid carcinoma, based on complementary DNA array results.

    PubMed

    Castelblanco, Esmeralda; Gallel, Pilar; Ros, Susana; Gatius, Sonia; Valls, Joan; De-Cubas, Aguirre A; Maliszewska, Agnieszka; Yebra-Pimentel, M Teresa; Menarguez, Javier; Gamallo, Carlos; Opocher, Giuseppe; Robledo, Mercedes; Matias-Guiu, Xavier

    2012-07-01

    Thyroid paraganglioma is a rare disorder that sometimes poses problems in differential diagnosis with medullary thyroid carcinoma. So far, differential diagnosis is solved with the help of some markers that are frequently expressed in medullary thyroid carcinoma (thyroid transcription factor 1, calcitonin, and carcinoembryonic antigen). However, some of these markers are not absolutely specific of medullary thyroid carcinoma and may be expressed in other tumors. Here we report 3 new cases of thyroid paraganglioma and describe our strategy to design a diagnostic immunohistochemical battery. First, we performed a comparative analysis of the expression profile of head and neck paragangliomas and medullary thyroid carcinoma, obtained after complementary DNA array analysis of 2 series of fresh-frozen samples of paragangliomas and medullary thyroid carcinoma, respectively. Seven biomarkers showing differential expression were selected (nicotinamide adenine dinucleotide dehydrogenase 1 alpha subcomplex, 4-like 2, NDUFA4L2; cytochrome c oxidase subunit IV isoform 2; vesicular monoamine transporter 2; calcitonin gene-related protein/calcitonin; carcinoembryonic antigen; and thyroid transcription factor 1) for immunohistochemical analysis. Two tissue microarrays were constructed from 2 different series of paraffin-embedded samples of paragangliomas and medullary thyroid carcinoma. We provide a classifying rule for differential diagnosis that combines negativity or low staining for calcitonin gene-related protein (histologic score, <10) or calcitonin (histologic score, <50) together with positivity of any of NADH dehydrogenase 1 alpha subcomplex, 4-like 2; cytochrome c oxidase subunit IV isoform 2; or vesicular monoamine transporter 2 to predict paragangliomas, showing a prediction error of 0%. 
Finally, the immunohistochemical battery was checked in paraffin-embedded blocks from 4 examples of thyroid paraganglioma (1 previously reported case and 3 new cases), also showing a prediction error of 0%. Our results suggest that the comparative expression profile, obtained by complementary DNA arrays, is a good tool for designing immunohistochemical batteries used in differential diagnosis. Copyright © 2012 Elsevier Inc. All rights reserved.
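The classifying rule quoted above is simple enough to state as code. Marker names are abbreviated, and the example inputs are hypothetical H-scores and positivity calls; only the thresholds follow the abstract.

```python
def predict_paraganglioma(cgrp_hscore, calcitonin_hscore,
                          ndufa4l2_pos, cox4i2_pos, vmat2_pos):
    """Rule from the abstract: low CGRP staining (H-score < 10) or low
    calcitonin (H-score < 50), together with positivity of any of
    NDUFA4L2, COX4I2, or VMAT2, predicts paraganglioma."""
    low_secretory = cgrp_hscore < 10 or calcitonin_hscore < 50
    any_marker_positive = ndufa4l2_pos or cox4i2_pos or vmat2_pos
    return low_secretory and any_marker_positive

# Hypothetical cases: a paraganglioma-like profile and an MTC-like profile
print(predict_paraganglioma(5, 100, True, False, False))   # True
print(predict_paraganglioma(150, 200, True, True, True))   # False
```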

  7. The deficit of joint position sense in the chronic unstable ankle as measured by inversion angle replication error.

    PubMed

    Nakasa, Tomoyuki; Fukuhara, Kohei; Adachi, Nobuo; Ochi, Mitsuo

    2008-05-01

    Functional instability is defined as a repeated ankle inversion sprain and a giving way sensation. Previous studies have described the damage of sensori-motor control in ankle sprain as being a possible cause of functional instability. The aim of this study was to evaluate the inversion angle replication errors in patients with functional instability after ankle sprain. The difference between the index angle and replication angle was measured in 12 subjects with functional instability, with the aim of evaluating the replication error. As a control group, the replication errors of 17 healthy volunteers were investigated. The side-to-side differences of the replication errors were compared between both the groups, and the relationship between the side-to-side differences of the replication errors and the mechanical instability were statistically analyzed in the unstable group. The side-to-side difference of the replication errors was 1.0 +/- 0.7 degrees in the unstable group and 0.2 +/- 0.7 degrees in the control group. There was a statistically significant difference between both the groups. The side-to-side differences of the replication errors in the unstable group did not statistically correlate to the anterior talar translation and talar tilt. The patients with functional instability had the deficit of joint position sense in comparison with healthy volunteers. The replication error did not correlate to the mechanical instability. The patients with functional instability should be treated appropriately in spite of having less mechanical instability.

  8. The MSFC complementary metal oxide semiconductor (including multilevel interconnect metallization) process handbook

    NASA Technical Reports Server (NTRS)

    Bouldin, D. L.; Eastes, R. W.; Feltner, W. R.; Hollis, B. R.; Routh, D. E.

    1979-01-01

    The fabrication techniques for creation of complementary metal oxide semiconductor integrated circuits at George C. Marshall Space Flight Center are described. Examples of C-MOS integrated circuits manufactured at MSFC are presented with functional descriptions of each. Typical electrical characteristics of both p-channel metal oxide semiconductor and n-channel metal oxide semiconductor discrete devices under given conditions are provided. Procedures for design, mask making, packaging, and testing are included.

  9. Ketamine Effects on Memory Reconsolidation Favor a Learning Model of Delusions

    PubMed Central

    Gardner, Jennifer M.; Piggot, Jennifer S.; Turner, Danielle C.; Everitt, Jessica C.; Arana, Fernando Sergio; Morgan, Hannah L.; Milton, Amy L.; Lee, Jonathan L.; Aitken, Michael R. F.; Dickinson, Anthony; Everitt, Barry J.; Absalom, Anthony R.; Adapa, Ram; Subramanian, Naresh; Taylor, Jane R.; Krystal, John H.; Fletcher, Paul C.

    2013-01-01

    Delusions are the persistent and often bizarre beliefs that characterise psychosis. Previous studies have suggested that their emergence may be explained by disturbances in prediction error-dependent learning. Here we set up complementary studies in order to examine whether such a disturbance also modulates memory reconsolidation and hence explains their remarkable persistence. First, we quantified individual brain responses to prediction error in a causal learning task in 18 human subjects (8 female). Next, a placebo-controlled within-subjects study of the impact of ketamine was set up on the same individuals. We determined the influence of this NMDA receptor antagonist (previously shown to induce aberrant prediction error signal and lead to transient alterations in perception and belief) on the evolution of a fear memory over a 72 hour period: they initially underwent Pavlovian fear conditioning; 24 hours later, during ketamine or placebo administration, the conditioned stimulus (CS) was presented once, without reinforcement; memory strength was then tested again 24 hours later. Re-presentation of the CS under ketamine led to a stronger subsequent memory than under placebo. Moreover, the degree of strengthening correlated with individual vulnerability to ketamine's psychotogenic effects and with prediction error brain signal. This finding was partially replicated in an independent sample with an appetitive learning procedure (in 8 human subjects, 4 female). These results suggest a link between altered prediction error, memory strength and psychosis. They point to a core disruption that may explain not only the emergence of delusional beliefs but also their persistence. PMID:23776445

  10. Integrated Sachs-Wolfe map reconstruction in the presence of systematic errors

    NASA Astrophysics Data System (ADS)

    Weaverdyck, Noah; Muir, Jessica; Huterer, Dragan

    2018-02-01

    The decay of gravitational potentials in the presence of dark energy leads to an additional, late-time contribution to anisotropies in the cosmic microwave background (CMB) at large angular scales. The imprint of this so-called integrated Sachs-Wolfe (ISW) effect to the CMB angular power spectrum has been detected and studied in detail, but reconstructing its spatial contributions to the CMB map, which would offer the tantalizing possibility of separating the early- from the late-time contributions to CMB temperature fluctuations, is more challenging. Here, we study the technique for reconstructing the ISW map based on information from galaxy surveys and focus in particular on how its accuracy is impacted by the presence of photometric calibration errors in input galaxy maps, which were previously found to be a dominant contaminant for ISW signal estimation. We find that both including tomographic information from a single survey and using data from multiple, complementary galaxy surveys improve the reconstruction by mitigating the impact of spurious power contributions from calibration errors. A high-fidelity reconstruction further requires one to account for the contribution of calibration errors to the observed galaxy power spectrum in the model used to construct the ISW estimator. We find that if the photometric calibration errors in galaxy surveys can be independently controlled at the level required to obtain unbiased dark energy constraints, then it is possible to reconstruct ISW maps with excellent accuracy using a combination of maps from two galaxy surveys with properties similar to Euclid and SPHEREx.

  11. STAMP-Based HRA Considering Causality Within a Sociotechnical System: A Case of Minuteman III Missile Accident.

    PubMed

    Rong, Hao; Tian, Jin

    2015-05-01

    The study contributes to human reliability analysis (HRA) by proposing a method that focuses more on human error causality within a sociotechnical system, illustrating its rationality and feasibility with a case study of the Minuteman (MM) III missile accident. Due to the complexity and dynamics within a sociotechnical system, previous analyses of accidents involving human and organizational factors clearly demonstrated that methods using a sequential accident model are inadequate to analyze human error within a sociotechnical system. System-theoretic accident model and processes (STAMP) was used to develop a universal framework of human error causal analysis. To elaborate the causal relationships and demonstrate the dynamics of human error, system dynamics (SD) modeling was conducted based on the framework. A total of 41 contributing factors, categorized into four types of human error, were identified through the STAMP-based analysis. All factors are related to a broad view of sociotechnical systems and are more comprehensive than the causation presented in the official accident investigation report. Recommendations regarding both technical and managerial improvement for a lower risk of the accident are proposed. An interdisciplinary approach provides complementary support between system safety and human factors, and the integrated method based on the STAMP and SD models contributes to HRA effectively. The proposed method will be beneficial to HRA, risk assessment, and control of the MM III operating process, as well as other sociotechnical systems. © 2014, Human Factors and Ergonomics Society.

  12. Repeat analysis of intraoral digital imaging performed by undergraduate students using a complementary metal oxide semiconductor sensor: An institutional case study.

    PubMed

    Yusof, Mohd Yusmiaidil Putera Mohd; Rahman, Nur Liyana Abdul; Asri, Amiza Aqiela Ahmad; Othman, Noor Ilyani; Wan Mokhtar, Ilham

    2017-12-01

    This study was performed to quantify the repeat rate of imaging acquisitions based on different clinical examinations, and to assess the prevalence of error types in intraoral bitewing and periapical imaging using a digital complementary metal-oxide-semiconductor (CMOS) intraoral sensor. A total of 8,030 intraoral images were retrospectively collected from 3 groups of undergraduate clinical dental students. The type of examination, stage of the procedure, and reasons for repetition were analysed and recorded. The repeat rate was calculated as the total number of repeated images divided by the total number of examinations. The weighted Cohen's kappa for inter- and intra-observer agreement was used after calibration and prior to image analysis. The overall repeat rate on intraoral periapical images was 34.4%. A total of 1,978 repeated periapical images were from endodontic assessment, which included working length estimation (WLE), trial gutta-percha (tGP), obturation, and removal of gutta-percha (rGP). In the endodontic imaging, the highest repeat rate was from WLE (51.9%) followed by tGP (48.5%), obturation (42.2%), and rGP (35.6%). In bitewing images, the repeat rate was 15.1% and poor angulation was identified as the most common cause of error. A substantial level of intra- and interobserver agreement was achieved. The repeat rates in this study were relatively high, especially for certain clinical procedures, warranting training in optimization techniques and radiation protection. Repeat analysis should be performed from time to time to enhance quality assurance and hence deliver high-quality health services to patients.
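The repeat-rate definition used in the study can be made concrete; the counts below are hypothetical, chosen only to show the calculation at the reported order of magnitude.

```python
def repeat_rate(n_repeated, n_examinations):
    """Repeat rate as defined in the study: the total number of repeated
    images divided by the total number of examinations, as a percentage."""
    return 100.0 * n_repeated / n_examinations

# Hypothetical counts, not the study's raw data
print(round(repeat_rate(344, 1000), 1))  # 34.4
```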

  13. Computation of Standard Errors

    PubMed Central

    Dowd, Bryan E; Greene, William H; Norton, Edward C

    2014-01-01

    Objectives We discuss the problem of computing the standard errors of functions involving estimated parameters and provide the relevant computer code for three different computational approaches using two popular computer packages. Study Design We show how to compute the standard errors of several functions of interest: the predicted value of the dependent variable for a particular subject, and the effect of a change in an explanatory variable on the predicted value of the dependent variable for an individual subject and average effect for a sample of subjects. Empirical Application Using a publicly available dataset, we explain three different methods of computing standard errors: the delta method, Krinsky–Robb, and bootstrapping. We provide computer code for Stata 12 and LIMDEP 10/NLOGIT 5. Conclusions In most applications, choice of the computational method for standard errors of functions of estimated parameters is a matter of convenience. However, when computing standard errors of the sample average of functions that involve both estimated parameters and nonstochastic explanatory variables, it is important to consider the sources of variation in the function's values. PMID:24800304
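Of the three approaches the paper compares, bootstrapping is the easiest to sketch without package-specific code. Applied to the sample mean, the bootstrap standard error can be checked against the analytic s/sqrt(n) formula; the data and replication count below are illustrative, not from the paper.

```python
import math
import random
import statistics

def bootstrap_se(data, stat, n_boot=2000, seed=0):
    """Bootstrap standard error of an arbitrary statistic: resample the data
    with replacement, recompute the statistic, take the SD of the replicates."""
    rng = random.Random(seed)
    n = len(data)
    reps = [stat([data[rng.randrange(n)] for _ in range(n)])
            for _ in range(n_boot)]
    return statistics.stdev(reps)

data = [2.1, 3.4, 1.9, 4.0, 2.8, 3.1, 2.5, 3.7, 2.2, 3.3]
se_mean = bootstrap_se(data, statistics.mean)
analytic = statistics.stdev(data) / math.sqrt(len(data))  # s / sqrt(n)
print(abs(se_mean - analytic) < 0.1)  # the two estimates agree closely
```

The same `bootstrap_se` call works unchanged for statistics with no closed-form standard error, which is the situation the paper addresses for functions of estimated parameters.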

  14. A signal detection-item response theory model for evaluating neuropsychological measures.

    PubMed

    Thomas, Michael L; Brown, Gregory G; Gur, Ruben C; Moore, Tyler M; Patt, Virginie M; Risbrough, Victoria B; Baker, Dewleen G

    2018-02-05

    Models from signal detection theory are commonly used to score neuropsychological test data, especially tests of recognition memory. Here we show that certain item response theory models can be formulated as signal detection theory models, thus linking two complementary but distinct methodologies. We then use the approach to evaluate the validity (construct representation) of commonly used research measures, demonstrate the impact of conditional error on neuropsychological outcomes, and evaluate measurement bias. Signal detection-item response theory (SD-IRT) models were fitted to recognition memory data for words, faces, and objects. The sample consisted of U.S. Infantry Marines and Navy Corpsmen participating in the Marine Resiliency Study. Data comprised item responses to the Penn Face Memory Test (PFMT; N = 1,338), Penn Word Memory Test (PWMT; N = 1,331), and Visual Object Learning Test (VOLT; N = 1,249), and self-report of past head injury with loss of consciousness. SD-IRT models adequately fitted recognition memory item data across all modalities. Error varied systematically with ability estimates, and distributions of residuals from the regression of memory discrimination onto self-report of past head injury were positively skewed towards regions of larger measurement error. Analyses of differential item functioning revealed little evidence of systematic bias by level of education. SD-IRT models benefit from the measurement rigor of item response theory-which permits the modeling of item difficulty and examinee ability-and from signal detection theory-which provides an interpretive framework encompassing the experimentally validated constructs of memory discrimination and response bias. We used this approach to validate the construct representation of commonly used research measures and to demonstrate how nonoptimized item parameters can lead to erroneous conclusions when interpreting neuropsychological test data. 
Future work might include the development of computerized adaptive tests and integration with mixture and random-effects models.
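
    As background to the signal detection side of the SD-IRT link described above, the classical equal-variance indices of memory discrimination and response bias can be computed in a few lines; the hit and false-alarm rates below are hypothetical, not values from the study.

```python
from statistics import NormalDist

# Classical equal-variance signal detection indices: memory discrimination
# d' = z(hit rate) - z(false-alarm rate), and response bias (criterion)
# c = -(z(hit) + z(fa)) / 2, where z is the inverse standard normal CDF.
def sdt_indices(hit_rate, fa_rate):
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical recognition-memory performance: 85% hits, 20% false alarms.
d_prime, c = sdt_indices(0.85, 0.20)
print(d_prime, c)
```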

  15. Polymeric lithography editor: Editing lithographic errors with nanoporous polymeric probes

    PubMed Central

    Rajasekaran, Pradeep Ramiah; Zhou, Chuanhong; Dasari, Mallika; Voss, Kay-Obbe; Trautmann, Christina; Kohli, Punit

    2017-01-01

    A new lithographic editing system with an ability to erase and rectify errors in microscale with real-time optical feedback is demonstrated. The erasing probe is a conically shaped hydrogel (tip size, ca. 500 nm) template-synthesized from track-etched conical glass wafers. The “nanosponge” hydrogel probe “erases” patterns by hydrating and absorbing molecules into a porous hydrogel matrix via diffusion analogous to a wet sponge. The presence of an interfacial liquid water layer between the hydrogel tip and the substrate during erasing enables frictionless, uninterrupted translation of the eraser on the substrate. The erasing capacity of the hydrogel is extremely high because of the large free volume of the hydrogel matrix. The fast frictionless translocation and interfacial hydration resulted in an extremely high erasing rate (~785 μm2/s), which is two to three orders of magnitude higher in comparison with the atomic force microscopy–based erasing (~0.1 μm2/s) experiments. The high precision and accuracy of the polymeric lithography editor (PLE) system stemmed from coupling piezoelectric actuators to an inverted optical microscope. Subsequently after erasing the patterns using agarose erasers, a polydimethylsiloxane probe fabricated from the same conical track-etched template was used to precisely redeposit molecules of interest at the erased spots. PLE also provides a continuous optical feedback throughout the entire molecular editing process—writing, erasing, and rewriting. To demonstrate its potential in device fabrication, we used PLE to electrochemically erase metallic copper thin film, forming an interdigitated array of microelectrodes for the fabrication of a functional microphotodetector device. High-throughput dot and line erasing, writing with the conical “wet nanosponge,” and continuous optical feedback make PLE complementary to the existing catalog of nanolithographic/microlithographic and three-dimensional printing techniques. 
This new PLE technique will potentially open up many new and exciting avenues in lithography, which remain unexplored due to the inherent limitations in error rectification capabilities of the existing lithographic techniques. PMID:28630898

  16. Impact of toroidal and poloidal mode spectra on the control of non-axisymmetric fields in tokamaks

    NASA Astrophysics Data System (ADS)

    Lanctot, Matthew J.

    2016-10-01

    In several tokamaks, non-axisymmetric magnetic field studies show applied n=2 fields can lead to disruptive n=1 locked modes, suggesting nonlinear mode coupling. A multimode plasma response to n=2 fields can be observed in H-mode plasmas, in contrast to the single-mode response found in Ohmic plasmas. These effects highlight a role for n > 1 error field correction in disruption avoidance, and identify additional degrees of freedom for 3D field optimization at high plasma pressure. In COMPASS, EAST, and DIII-D Ohmic plasmas, n=2 magnetic reconnection thresholds in otherwise stable discharges are readily accessed at edge safety factors q 3 and low density. Similar to previous studies, the thresholds are correlated with the "overlap" field for the dominant linear ideal MHD plasma mode calculated with the IPEC code. The overlap field measures the plasma-mediated coupling of the external field to the resonant field. Remarkably, the critical overlap fields are similar for n=1 and 2 fields, with m > nq harmonics dominating the drive for resonant fields. Complementary experiments in RFX-Mod show fields with m 1 control, including the need for multiple rows of coils to control selected plasma parameters for specific functions (e.g., rotation control or ELM suppression). Optimal multi-harmonic (n=1 and n=2) error field control may be achieved using control algorithms that continuously respond to time-varying 3D field sources and plasma parameters. Supported by the US DOE under DE-FC02-04ER54698.

  17. Error Propagation in a System Model

    NASA Technical Reports Server (NTRS)

    Schloegel, Kirk (Inventor); Bhatt, Devesh (Inventor); Oglesby, David V. (Inventor); Madl, Gabor (Inventor)

    2015-01-01

    Embodiments of the present subject matter can enable the analysis of signal value errors for system models. In an example, signal value errors can be propagated through the functional blocks of a system model to analyze possible effects as the signal value errors impact incident functional blocks. This propagation of the errors can be applicable to many models of computation including avionics models, synchronous data flow, and Kahn process networks.

  18. Correlating behavioral responses to FMRI signals from human prefrontal cortex: examining cognitive processes using task analysis.

    PubMed

    DeSouza, Joseph F X; Ovaysikia, Shima; Pynn, Laura

    2012-06-20

    The aim of this methods paper is to describe how to implement a neuroimaging technique to examine complementary brain processes engaged by two similar tasks. Participants' behavior during task performance in an fMRI scanner can then be correlated with brain activity using the blood-oxygen-level-dependent signal. We measure behavior so that we can sort correct trials, in which the subject performed the task as instructed, and then examine the brain signals related to correct performance. Conversely, if error trials were included in the same analysis as correct trials, the resulting activity would no longer reflect correct performance alone. Thus, in many cases the error trials themselves can be correlated with brain activity. We describe two complementary tasks used in our lab to examine the brain during suppression of automatic responses: the Stroop(1) and anti-saccade tasks. The emotional Stroop paradigm instructs participants to report either the emotional 'word' superimposed across the affective faces or the facial 'expressions' of the face stimuli(1,2). When the word and the facial expression refer to different emotions, a conflict arises between what must be said and what is automatically read. The participant has to resolve the conflict between two simultaneously competing processes: word reading and facial expression recognition. Our urge to read a word creates strong stimulus-response (SR) associations; inhibiting these strong SR associations is therefore difficult, and participants are prone to making errors. Overcoming this conflict and directing attention away from the face or the word requires the subject to inhibit bottom-up processes, which typically direct attention to the more salient stimulus. 
Similarly, in the anti-saccade task(3,4,5,6), an instruction cue directs attention to a peripheral stimulus location, but the eye movement must be made to the mirror-opposite position. Here again we measure behavior by recording participants' eye movements, which allows the responses to be sorted into correct and error trials(7) that can then be correlated with brain activity. Neuroimaging thus allows researchers to measure the distinct behaviors of correct and error trials, which are indicative of different cognitive processes, and to pinpoint the different neural networks involved.

  19. Waking and Dreaming Need Profiles: An Exploratory Study of Adaptive Functioning.

    ERIC Educational Resources Information Center

    Hutchinson, Robert Linton, II

    Research has defined the various adaptive, compensatory and complementary functions of dreams. To investigate the evidence of adaptive functioning in the dream state, 30 medical students (21 males, 9 females) from St. George's University, Grenada, completed personal surveys, a waking psychological profile, and a dreaming psychological profile…

  20. The development of performance-monitoring function in the posterior medial frontal cortex

    PubMed Central

    Fitzgerald, Kate Dimond; Perkins, Suzanne C.; Angstadt, Mike; Johnson, Timothy; Stern, Emily R.; Welsh, Robert C.; Taylor, Stephan F.

    2009-01-01

    Background Despite its critical role in performance-monitoring, the development of posterior medial prefrontal cortex (pMFC) in goal-directed behaviors remains poorly understood. Performance monitoring depends on distinct, but related functions that may differentially activate the pMFC, such as monitoring response conflict and detecting errors. Developmental differences in conflict- and error-related activations, coupled with age-related changes in behavioral performance, may confound attempts to map the maturation of pMFC functions. To characterize the development of pMFC-based performance monitoring functions, we segregated interference and error-processing, while statistically controlling for performance. Methods Twenty-one adults and 23 youth performed an event-related version of the Multi-Source Interference Task during functional magnetic resonance imaging (fMRI). Interference and error contrast estimates derived from the pMFC were regressed on age using linear modeling, while covarying for performance. Results Interference- and error-processing were associated with robust activation of the pMFC in both youth and adults. Among youth, interference- and error-related activation of the pMFC increased with age, independent of performance. Greater accuracy was associated with greater pMFC activity during error commission in both groups. Discussion Increasing pMFC response to interference and errors occurs with age, likely contributing to the improvement of performance monitoring capacity during development. PMID:19913101

  1. Error bounds of adaptive dynamic programming algorithms for solving undiscounted optimal control problems.

    PubMed

    Liu, Derong; Li, Hongliang; Wang, Ding

    2015-06-01

    In this paper, we establish error bounds of adaptive dynamic programming algorithms for solving undiscounted infinite-horizon optimal control problems of discrete-time deterministic nonlinear systems. We consider approximation errors in the update equations of both value function and control policy. We utilize a new assumption instead of the contraction assumption in discounted optimal control problems. We establish the error bounds for approximate value iteration based on a new error condition. Furthermore, we also establish the error bounds for approximate policy iteration and approximate optimistic policy iteration algorithms. It is shown that the iterative approximate value function can converge to a finite neighborhood of the optimal value function under some conditions. To implement the developed algorithms, critic and action neural networks are used to approximate the value function and control policy, respectively. Finally, a simulation example is given to demonstrate the effectiveness of the developed algorithms.
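
    The flavor of such bounds can be illustrated on a toy discounted problem (a simplification: the paper's contribution concerns the undiscounted case, where the contraction argument is unavailable). With per-iteration approximation error bounded by epsilon, approximate value iteration stays within epsilon/(1 - gamma) of exact value iteration; the MDP below is hypothetical.

```python
import random

random.seed(1)

# Tiny 2-state, 2-action discounted MDP (illustration only).
gamma = 0.9
P = {  # P[s][a] = list of (next_state, probability)
    0: {0: [(0, 0.7), (1, 0.3)], 1: [(1, 1.0)]},
    1: {0: [(0, 0.4), (1, 0.6)], 1: [(0, 1.0)]},
}
R = {0: {0: 1.0, 1: 0.0}, 1: {0: 0.5, 1: 2.0}}
eps = 0.05  # per-iteration approximation error bound

def backup(V, noise):
    # One Bellman optimality backup, with an additive approximation error.
    return [max(R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                for a in (0, 1)) + noise(s) for s in (0, 1)]

V_exact, V_approx = [0.0, 0.0], [0.0, 0.0]
for _ in range(200):
    V_exact = backup(V_exact, lambda s: 0.0)
    V_approx = backup(V_approx, lambda s: random.uniform(-eps, eps))

gap = max(abs(a - b) for a, b in zip(V_exact, V_approx))
bound = eps / (1 - gamma)  # classical neighborhood bound for approximate VI
print(gap, bound)
```

    Because each backup is a gamma-contraction, the recursion d_k <= gamma*d_{k-1} + eps keeps the gap below eps/(1 - gamma) = 0.5, regardless of the noise realization.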

  2. Use of complementary and alternative medicine for physical performance, energy, immune function, and general health among older women and men in the United States.

    PubMed

    Tait, Elizabeth M; Laditka, Sarah B; Laditka, James N; Nies, Mary A; Racine, Elizabeth F

    2012-01-01

    We examined use of complementary and alternative medicine (CAM) for health and well-being by older women and men. Data were from the 2007 National Health Interview Survey, representing 89.5 million Americans ages 50+. Multivariate logistic regression accounted for the survey design. For general health, 52 million people used CAM. The numbers for immune function, physical performance, and energy were 21.6, 15.9, and 10.1 million respectively. In adjusted results, women were much more likely than men to use CAM for all four reasons, especially energy. Older adults, particularly women, could benefit from research on CAM benefits and risks.

  3. Spatial Statistical Data Fusion (SSDF)

    NASA Technical Reports Server (NTRS)

    Braverman, Amy J.; Nguyen, Hai M.; Cressie, Noel

    2013-01-01

    As remote sensing for scientific purposes has transitioned from an experimental technology to an operational one, the selection of instruments has become more coordinated, so that the scientific community can exploit complementary measurements. However, technological and scientific heterogeneity across devices means that the statistical characteristics of the data they collect are different. The challenge addressed here is how to combine heterogeneous remote sensing data sets in a way that yields optimal statistical estimates of the underlying geophysical field, and provides rigorous uncertainty measures for those estimates. Different remote sensing data sets may have different spatial resolutions, different measurement error biases and variances, and other disparate characteristics. A state-of-the-art spatial statistical model was used to relate the true, but not directly observed, geophysical field to noisy, spatial aggregates observed by remote sensing instruments. The spatial covariances of the true field and the covariances of the true field with the observations were modeled. The observations are spatial averages of the true field values, over pixels, with different measurement noise superimposed. A kriging framework is used to infer optimal (minimum mean squared error and unbiased) estimates of the true field at point locations from pixel-level, noisy observations. A key feature of the spatial statistical model is the spatial mixed effects model that underlies it. The approach models the spatial covariance function of the underlying field using linear combinations of basis functions of fixed size. Approaches based on kriging require the inversion of very large spatial covariance matrices, and this is usually done by making simplifying assumptions about spatial covariance structure that simply do not hold for geophysical variables. In contrast, this method does not require these assumptions, and is also computationally much faster. 
This method is fundamentally different from other approaches to data fusion for remote sensing data because it is inferential rather than merely descriptive. All approaches combine data in a way that minimizes some specified loss function; most, however, rely on more or less ad hoc criteria based on what looks good to the eye, or on criteria that relate only to the data at hand.
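
    A drastically simplified, non-spatial analogue of this fusion principle, with hypothetical numbers and assuming known error variances: the minimum-MSE unbiased combination of two noisy measurements of the same quantity weights each by its inverse error variance, and the fused estimate carries its own, smaller variance as a rigorous uncertainty measure.

```python
# Scalar sketch of statistical data fusion (the paper's spatial mixed-effects
# kriging generalizes this to whole fields with spatial covariances).
def fuse(y1, var1, y2, var2):
    # Inverse-variance weights: the more precise instrument gets more weight.
    w1 = (1.0 / var1) / (1.0 / var1 + 1.0 / var2)
    w2 = 1.0 - w1
    estimate = w1 * y1 + w2 * y2
    variance = 1.0 / (1.0 / var1 + 1.0 / var2)  # smaller than either input
    return estimate, variance

# Instrument A: precise reading; instrument B: noisier reading of same field.
est, var = fuse(10.2, 0.25, 9.4, 1.0)
print(est, var)  # estimate is pulled toward the more precise instrument
```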

  4. Interfacial Shear Strength and Adhesive Behavior of Silk Ionomer Surfaces.

    PubMed

    Kim, Sunghan; Geryak, Ren D; Zhang, Shuaidi; Ma, Ruilong; Calabrese, Rossella; Kaplan, David L; Tsukruk, Vladimir V

    2017-09-11

    The interfacial shear strength between different layers in multilayered structures of layer-by-layer (LbL) microcapsules is a crucial mechanical property to ensure their robustness. In this work, we investigated the interfacial shear strength of modified silk fibroin (SF) ionomers utilized in LbL shells: an anionic-cationic pair with complementary ionic pairing, SF-poly-l-glutamic acid (SF-Glu) and SF-poly-l-lysine (SF-Lys), and a complementary pair with Coulombic interactions partially screened by poly(ethylene glycol) (PEG) segments, the SF-Glu/SF-Lys[PEG] pair. Shearing and adhesive behavior between these silk ionomer surfaces in the swollen state were probed at different spatial scales and pressure ranges by using functionalized atomic force microscopy (AFM) tips as well as functionalized colloidal probes. The results show that both approaches were consistent in analyzing the interfacial shear strength of LbL silk ionomers at different spatial scales, from the nanoscale to a fraction of a micron. Surprisingly, the interfacial shear strength of the SF-Glu and SF-Lys[PEG] pair with partially screened ionic pairing was greater than that of the SF-Glu and SF-Lys pair with a high density of complementary ionic groups. The difference in interfacial shear strength and adhesive strength is suggested to be predominantly facilitated by the interlayer hydrogen bonding of complementary amino acids and the overlap of highly swollen PEG segments.

  5. Case study of Bell's palsy applying complementary treatment within an occupational therapy model.

    PubMed

    Haltiwanger, Emily; Huber, Theresa; Chang, Joe C; Gonzalez-Stuart, Armando; Gonzales-Stuart, Armando

    2009-01-01

    For 7% of people with Bell's palsy, facial impairment is permanent. The case study patient was a 48-year-old female who had no recovery from paralysis 12 weeks after onset. The goals were to restore facial sensory-motor functions and functional abilities and to reduce depression. Facial paralysis was assessed by clinical observations, the Facial Disability Index, and the Beck Depression Index. Complementary interventions of aromatherapy, reflexology, and electro-acupuncture were used with common physical agent modalities in an intensive home activity and exercise programme. The patient had 100% return of function and resolution of depression after 10 days of intervention. The limitation of this study is that it was a retrospective case study and the investigators reconstructed the case from clinical notes. Further research using a prospective approach is recommended to replicate this study.

  6. Inkjet printed circuits based on ambipolar and p-type carbon nanotube thin-film transistors

    PubMed Central

    Kim, Bongjun; Geier, Michael L.; Hersam, Mark C.; Dodabalapur, Ananth

    2017-01-01

    Ambipolar and p-type single-walled carbon nanotube (SWCNT) thin-film transistors (TFTs) are reliably integrated into various complementary-like circuits on the same substrate by inkjet printing. We describe the fabrication and characteristics of inverters, ring oscillators, and NAND gates based on complementary-like circuits fabricated with such TFTs as building blocks. We also show that complementary-like circuits have potential use as chemical sensors in ambient conditions since changes to the TFT characteristics of the p-channel TFTs in the circuit alter the overall operating characteristics of the circuit. The use of circuits rather than individual devices as sensors integrates sensing and signal processing functions, thereby simplifying overall system design. PMID:28145438

  7. Reducing measurement errors during functional capacity tests in elders.

    PubMed

    da Silva, Mariane Eichendorf; Orssatto, Lucas Bet da Rosa; Bezerra, Ewertton de Souza; Silva, Diego Augusto Santos; Moura, Bruno Monteiro de; Diefenthaeler, Fernando; Freitas, Cíntia de la Rocha

    2018-06-01

    Accuracy is essential to the validity of functional capacity measurements. The aim was to evaluate the measurement error of functional capacity tests for elders and to suggest the use of the technical error of measurement and the credibility coefficient. Twenty elders (65.8 ± 4.5 years) completed six functional capacity tests that were simultaneously filmed and timed by four evaluators using a chronometer. A fifth evaluator timed the tests by analyzing the videos (reference data). The means of most evaluators for most tests differed from the reference (p < 0.05), except for two evaluators on two different tests. The technical error of measurement differed between tests and evaluators. The Bland-Altman test showed differences in the concordance of results between methods. Short-duration tests showed a higher technical error of measurement than longer tests. In summary, tests timed by a chronometer underestimate the real results of the functional capacity tests. Differences in the evaluators' reaction time and in their perception of the start and end of the tests would explain the errors of measurement. Calculating the technical error of measurement or using the camera can increase data validity.
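
    One widely used formulation of the technical error of measurement (TEM) for paired timings, which may differ in detail from the authors' computation, is TEM = sqrt(sum(d_i^2) / 2n) over the n paired differences d_i, often also reported as a percentage of the grand mean. A sketch with hypothetical timings:

```python
import math

# TEM for two measurement series of the same n trials: absolute TEM in the
# units of the measurement, and relative TEM as a percentage of the grand mean.
def tem(times_a, times_b):
    n = len(times_a)
    d2 = sum((a - b) ** 2 for a, b in zip(times_a, times_b))
    absolute = math.sqrt(d2 / (2 * n))
    grand_mean = (sum(times_a) + sum(times_b)) / (2 * n)
    return absolute, 100.0 * absolute / grand_mean

# Hypothetical timings (seconds) of the same four trials: one evaluator with
# a chronometer versus the video-based reference.
evaluator = [5.1, 4.8, 6.0, 5.5]
reference = [5.0, 5.0, 5.8, 5.4]
abs_tem, rel_tem = tem(evaluator, reference)
print(abs_tem, rel_tem)
```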

  8. Probability and Statistics in Sensor Performance Modeling

    DTIC Science & Technology

    2010-12-01

    language software program is called Environmental Awareness for Sensor and Emitter Employment. Some important numerical issues in the implementation...3 Statistical analysis for measuring sensor performance...complementary cumulative distribution function cdf cumulative distribution function DST decision-support tool EASEE Environmental Awareness of

  9. A climatology of visible surface reflectance spectra

    NASA Astrophysics Data System (ADS)

    Zoogman, Peter; Liu, Xiong; Chance, Kelly; Sun, Qingsong; Schaaf, Crystal; Mahr, Tobias; Wagner, Thomas

    2016-09-01

    We present a high spectral resolution climatology of visible surface reflectance as a function of wavelength for use in satellite measurements of ozone and other atmospheric species. The Tropospheric Emissions: Monitoring of Pollution (TEMPO) instrument is planned to measure backscattered solar radiation in the 290-740 nm range, including the ultraviolet and visible Chappuis ozone bands. Observation in the weak Chappuis band takes advantage of the relative transparency of the atmosphere in the visible to achieve sensitivity to near-surface ozone. However, due to the weakness of the ozone absorption features this measurement is more sensitive to errors in visible surface reflectance, which is highly variable. We utilize reflectance measurements of individual plant, man-made, and other surface types to calculate the primary modes of variability of visible surface reflectance at a high spectral resolution, comparable to that of TEMPO (0.6 nm). Using the Moderate-resolution Imaging Spectroradiometer (MODIS) Bidirectional Reflectance Distribution Function (BRDF)/albedo product and our derived primary modes we construct a high spatial resolution climatology of wavelength-dependent surface reflectance over all viewing scenes and geometries. The Global Ozone Monitoring Experiment-2 (GOME-2) Lambertian Equivalent Reflectance (LER) product provides complementary information over water and snow scenes. Preliminary results using this approach in multispectral ultraviolet+visible ozone retrievals from the GOME-2 instrument show significant improvement to the fitting residuals over vegetated scenes.

  10. Knowledge, Attitude and Practice of General Practitioners toward Complementary and Alternative Medicine: a Cross-Sectional Study.

    PubMed

    Barikani, Ameneh; Beheshti, Akram; Javadi, Maryam; Yasi, Marzieh

    2015-08-01

    Orientation of the public and physicians toward complementary and alternative medicine (CAM) is one of the most prominent signs of structural change in the health service system. The aim of this study was to determine the knowledge, attitude, and practice of general practitioners regarding complementary and alternative medicine. This cross-sectional study was conducted in Qazvin, Iran in 2013. A self-administered questionnaire with four parts was used for collecting data: demographic information; physicians' attitude and knowledge; methods of obtaining information; and practice. According to the report of the deputy of treatment of Qazvin University of Medical Sciences, 228 physicians in Qazvin comprised the study population. A total of 150 physicians were selected randomly, and SPSS was used to enter the questionnaire data. Results were analyzed with descriptive statistics and statistical tests. Sixty percent of all responders were male. About 60 percent (59.4%) of participating practitioners had worked less than 10 years. 96.4 percent had a positive attitude towards complementary and alternative medicine. Knowledge of traditional medicine was good in 11 percent of practitioners, while 36.3% and 52.7% had average and little information, respectively. 17.9% of practitioners offered their patients complementary and alternative medicine for treatment. Although practitioners had little knowledge of traditional medicine and complementary approaches, a significant percentage of them held a positive attitude.

  11. Two-dimensional optoelectronic interconnect-processor and its operational bit error rate

    NASA Astrophysics Data System (ADS)

    Liu, J. Jiang; Gollsneider, Brian; Chang, Wayne H.; Carhart, Gary W.; Vorontsov, Mikhail A.; Simonis, George J.; Shoop, Barry L.

    2004-10-01

    Two-dimensional (2-D) multi-channel 8x8 optical interconnect and processor system were designed and developed using complementary metal-oxide-semiconductor (CMOS) driven 850-nm vertical-cavity surface-emitting laser (VCSEL) arrays and the photodetector (PD) arrays with corresponding wavelengths. We performed operation and bit-error-rate (BER) analysis on this free-space integrated 8x8 VCSEL optical interconnects driven by silicon-on-sapphire (SOS) circuits. Pseudo-random bit stream (PRBS) data sequence was used in operation of the interconnects. Eye diagrams were measured from individual channels and analyzed using a digital oscilloscope at data rates from 155 Mb/s to 1.5 Gb/s. Using a statistical model of Gaussian distribution for the random noise in the transmission, we developed a method to compute the BER instantaneously with the digital eye-diagrams. Direct measurements on this interconnects were also taken on a standard BER tester for verification. We found that the results of two methods were in the same order and within 50% accuracy. The integrated interconnects were investigated in an optoelectronic processing architecture of digital halftoning image processor. Error diffusion networks implemented by the inherently parallel nature of photonics promise to provide high quality digital halftoned images.
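
    The instantaneous BER computation from Gaussian eye-diagram statistics mentioned above is conventionally written with the complementary error function; the sketch below (hypothetical eye statistics, not the authors' measurements) uses the standard Q-factor form BER ~= (1/2) erfc(Q / sqrt(2)), with Q = (mu1 - mu0) / (sigma1 + sigma0).

```python
import math

# BER estimate from eye-diagram statistics under a Gaussian noise model:
# mu1/mu0 are the mean "one"/"zero" levels, sigma1/sigma0 their noise spreads.
def ber_from_eye(mu1, mu0, sigma1, sigma0):
    q = (mu1 - mu0) / (sigma1 + sigma0)
    return 0.5 * math.erfc(q / math.sqrt(2.0))

# Hypothetical eye-diagram statistics for one channel.
ber = ber_from_eye(mu1=1.0, mu0=0.0, sigma1=0.08, sigma0=0.06)
print(ber)  # Q ~ 7.1, so the BER lands around 5e-13
```

    This is why the eye diagram alone suffices for an instantaneous BER estimate: only the level means and noise spreads enter the formula.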

  12. The many places of frequency: evidence for a novel locus of the lexical frequency effect in word production.

    PubMed

    Knobel, Mark; Finkbeiner, Matthew; Caramazza, Alfonso

    2008-03-01

    The effect of lexical frequency on language-processing tasks is exceptionally reliable. For example, pictures with higher frequency names are named faster and more accurately than those with lower frequency names. Experiments with normal participants and patients strongly suggest that this production effect arises at the level of lexical access. Further work has suggested that within lexical access this effect arises at the level of lexical representations. Here we present patient E.C. who shows an effect of lexical frequency on his nonword error rate. The best explanation of his performance is that there is an additional locus of frequency at the interface of lexical and segmental representational levels. We confirm this hypothesis by showing that only computational models with frequency at this new locus can produce a similar error pattern to that of patient E.C. Finally, in an analysis of a large group of Italian patients, we show that there exist patients who replicate E.C.'s pattern of results and others who show the complementary pattern of frequency effects on semantic error rates. Our results combined with previous findings suggest that frequency plays a role throughout the process of lexical access.

  13. A cis-antisense RNA acts in trans in Staphylococcus aureus to control translation of a human cytolytic peptide.

    PubMed

    Sayed, Nour; Jousselin, Ambre; Felden, Brice

    2011-12-25

    Antisense RNAs (asRNAs) pair to RNAs expressed from the complementary strand, and their functions are thought to depend on nucleotide overlap with genes on the opposite strand. There is little information on the roles and mechanisms of asRNAs. We show that a cis asRNA acts in trans, using a domain outside its target complementary sequence. SprA1 small regulatory RNA (sRNA) and SprA1(AS) asRNA are concomitantly expressed in S. aureus. SprA1(AS) forms a complex with SprA1, preventing translation of the SprA1-encoded open reading frame by occluding translation initiation signals through pairing interactions. The SprA1 peptide sequence is within two RNA pseudoknots. SprA1(AS) represses production of the SprA1-encoded cytolytic peptide in trans, as its overlapping region is dispensable for regulation. These findings demonstrate that sometimes asRNA functional domains are not their gene-target complementary sequences, suggesting there is a need for mechanistic re-evaluation of asRNAs expressed in prokaryotes and eukaryotes.

  14. Stiff upper lip: Labrum deformity and functionality in bees (Hymenoptera: Apoidea)

    USDA-ARS?s Scientific Manuscript database

    In hyper-diverse groups such as Hymenoptera, a variety of structures with different, complementary functions are used for feeding. Although the function of the parts such as the mandibles is obvious, the use of others, like the labrum, is more difficult to discern. Here, we discuss the labrum’s func...

  15. Nematode Damage Functions: The Problems of Experimental and Sampling Error

    PubMed Central

    Ferris, H.

    1984-01-01

    The development and use of pest damage functions involves measurement and experimental errors associated with cultural, environmental, and distributional factors. Damage predictions are more valuable if considered with associated probability. Collapsing population densities into a geometric series of population classes allows a pseudo-replication removal of experimental and sampling error in damage function development. Recognition of the nature of sampling error for aggregated populations allows assessment of probability associated with the population estimate. The product of the probabilities incorporated in the damage function and in the population estimate provides a basis for risk analysis of the yield loss prediction and the ensuing management decision. PMID:19295865

  16. Rollovers during play: Complementary perspectives.

    PubMed

    Smuts, Barbara; Bauer, Erika; Ward, Camille

    2015-07-01

    In this commentary, we compare and contrast Norman et al.'s findings on rollovers during dog play (Norman et al., 2015; the "target article") with our work on dog play fighting (Bauer and Smuts, 2007; Ward et al., 2008). We first review our major findings and then correct some errors in the target article's descriptions of our work. We then further explore the concept of "defensive" rollovers proposed in the target article. We conclude that a combination of the target article's approach and ours should inform future investigations of dog rollovers. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Engraftment of gene-modified umbilical cord blood cells in neonates with adenosine deaminase deficiency

    PubMed Central

    Kohn, Donald B.; Weinberg, Kenneth I.; Nolta, Jan A.; Heiss, Linda N.; Lenarsky, Carl; Crooks, Gay M.; Hanley, Mary E.; Annett, Geralyn; Brooks, Judith S.; El-Khoureiy, Anthony; Lawrence, Kim; Wells, Susie; Moen, Robert C.; Bastian, John; Williams-Herman, Debora E.; Elder, Melissa; Wara, Diane; Bowen, Thomas; Hershfield, Michael S.; Mullen, Craig A.; Blaese, R. Michael; Parkman, Robertson

    2010-01-01

    Haematopoietic stem cells in umbilical cord blood are an attractive target for gene therapy of inborn errors of metabolism. Three neonates with severe combined immunodeficiency were treated by retroviral-mediated transduction of the CD34+ cells from their umbilical cord blood with a normal human adenosine deaminase complementary DNA followed by autologous transplantation. The continued presence and expression of the introduced gene in leukocytes from bone marrow and peripheral blood for 18 months demonstrates that umbilical cord blood cells may be genetically modified with retroviral vectors and engrafted in neonates for gene therapy. PMID:7489356

  18. Complementary Reliability-Based Decodings of Binary Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Fossorier, Marc P. C.; Lin, Shu

    1997-01-01

    This correspondence presents a hybrid reliability-based decoding algorithm which combines the reprocessing method based on the most reliable basis and a generalized Chase-type algebraic decoder based on the least reliable positions. It is shown that reprocessing with a simple additional algebraic decoding effort achieves significant coding gain. For long codes, the order of reprocessing required to achieve asymptotic optimum error performance is reduced by approximately 1/3. This significantly reduces the computational complexity, especially for long codes. Also, a more efficient criterion for stopping the decoding process is derived based on the knowledge of the algebraic decoding solution.
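The least-reliable-positions half of this hybrid can be illustrated with a small Chase-type decoder. The sketch below uses a (7,4) Hamming code as a stand-in algebraic decoder (the correspondence targets long block codes, and the most-reliable-basis reprocessing stage is omitted): the p least reliable positions are found from the soft values, 2^p hard-decision test patterns are algebraically decoded, and the candidate closest to the received vector wins.

```python
import itertools
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column j is j in binary,
# so the syndrome directly names the position of a single error.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def hamming_decode(hard):
    """Bounded-distance decoding: flip the one bit named by the syndrome."""
    c = hard.copy()
    syndrome = H @ c % 2
    pos = int("".join(map(str, syndrome)), 2)  # 0 means already a codeword
    if pos:
        c[pos - 1] ^= 1
    return c

def chase2_decode(received, p=2):
    """Enumerate 2**p test patterns on the p least reliable positions."""
    hard = (received < 0).astype(int)       # BPSK mapping: 0 -> +1, 1 -> -1
    lrp = np.argsort(np.abs(received))[:p]  # least reliable positions
    best, best_metric = None, np.inf
    for flips in itertools.product([0, 1], repeat=p):
        trial = hard.copy()
        trial[lrp] ^= np.array(flips)
        cand = hamming_decode(trial)
        tx = 1 - 2 * cand                   # back to +/-1 symbols
        metric = np.sum((received - tx) ** 2)  # squared Euclidean distance
        if metric < best_metric:
            best, best_metric = cand, metric
    return best

# Noisy reception of the all-zeros codeword, with one hard error (index 4):
r = np.array([1.0, 0.9, 0.1, 1.1, -0.2, 0.8, 1.2])
print(chase2_decode(r))  # -> [0 0 0 0 0 0 0]
```

The hard decision alone already misreads position 4; the soft metric over the test-pattern candidates recovers the transmitted codeword.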

  19. From feedback- to response-based performance monitoring in active and observational learning.

    PubMed

    Bellebaum, Christian; Colosio, Marco

    2014-09-01

    Humans can adapt their behavior by learning from the consequences of their own actions or by observing others. Gradual active learning of action-outcome contingencies is accompanied by a shift from feedback- to response-based performance monitoring. This shift is reflected by complementary learning-related changes of two ACC-driven ERP components, the feedback-related negativity (FRN) and the error-related negativity (ERN), which have both been suggested to signal events "worse than expected," that is, a negative prediction error. Although recent research has identified comparable components for observed behavior and outcomes (observational ERN and FRN), it is as yet unknown, whether these components are similarly modulated by prediction errors and thus also reflect behavioral adaptation. In this study, two groups of 15 participants learned action-outcome contingencies either actively or by observation. In active learners, FRN amplitude for negative feedback decreased and ERN amplitude in response to erroneous actions increased with learning, whereas observational ERN and FRN in observational learners did not exhibit learning-related changes. Learning performance, assessed in test trials without feedback, was comparable between groups, as was the ERN following actively performed errors during test trials. In summary, the results show that action-outcome associations can be learned similarly well actively and by observation. The mechanisms involved appear to differ, with the FRN in active learning reflecting the integration of information about own actions and the accompanying outcomes.

  20. Spelling Errors in French-speaking Children with Dyslexia: Phonology May Not Provide the Best Evidence.

    PubMed

    Daigle, Daniel; Costerg, Agnès; Plisson, Anne; Ruberto, Noémia; Varin, Joëlle

    2016-05-01

    For children with dyslexia, learning to write constitutes a great challenge. There has been consensus that the explanation for these learners' delay is related to a phonological deficit. Results from studies designed to describe dyslexic children's spelling errors are not always as clear concerning the role of phonological processes as those found in reading studies. In irregular languages like French, spelling abilities involve other processes than phonological processes. The main goal of this study was to describe the relative contribution of these other processes in dyslexic children's spelling ability. In total, 32 francophone dyslexic children with a mean age of 11.4 years were compared with 24 reading-age matched controls (RA) and 24 chronological-age matched controls (CA). All had to write a text that was analysed at the graphemic level. All errors were classified as either phonological, morphological, visual-orthographic or lexical. Results indicated that dyslexic children's spelling ability lagged behind not only that of the CA group but also of the RA group. Because the majority of errors, in all groups, could not be explained by inefficiency of phonological processing, the importance of visual knowledge/processes will be discussed as a complementary explanation of dyslexic children's delay in writing. Copyright © 2016 John Wiley & Sons, Ltd.

  1. Accuracy Enhancement of Inertial Sensors Utilizing High Resolution Spectral Analysis

    PubMed Central

    Noureldin, Aboelmagd; Armstrong, Justin; El-Shafie, Ahmed; Karamat, Tashfeen; McGaughey, Don; Korenberg, Michael; Hussain, Aini

    2012-01-01

    In both military and civilian applications, the inertial navigation system (INS) and the global positioning system (GPS) are two complementary technologies that can be integrated to provide reliable positioning and navigation information for land vehicles. The accuracy enhancement of INS sensors and the integration of INS with GPS are the subjects of widespread research. Wavelet de-noising of INS sensors has had limited success in removing the long-term (low-frequency) inertial sensor errors. The primary objective of this research is to develop a novel inertial sensor accuracy enhancement technique that can remove both short-term and long-term error components from inertial sensor measurements prior to INS mechanization and INS/GPS integration. A high resolution spectral analysis technique called the fast orthogonal search (FOS) algorithm is used to accurately model the low frequency range of the spectrum, which includes the vehicle motion dynamics and inertial sensor errors. FOS models the spectral components with the most energy first and uses an adaptive threshold to stop adding frequency terms when fitting a term does not reduce the mean squared error more than fitting white noise. The proposed method was developed, tested and validated through road test experiments involving both low-end tactical grade and low cost MEMS-based inertial systems. The results demonstrate that in most cases the position accuracy during GPS outages using FOS de-noised data is superior to the position accuracy using wavelet de-noising.
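The adaptive stopping rule described in this abstract (fit the highest-energy spectral terms first; stop when a term reduces the mean squared error no more than fitting white noise would) can be sketched as a greedy frequency-term search. This is a simplified stand-in for FOS, with a hypothetical test signal and a conservative noise threshold:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
t = np.arange(n)
# Hypothetical "vehicle dynamics": one low-frequency sinusoid in white noise.
signal = 2.0 * np.sin(2 * np.pi * 8 * t / n) + rng.normal(0.0, 0.5, n)

freqs = np.arange(1, n // 2) / n          # candidate frequency grid
residual = signal.copy()
model = np.zeros(n)
# Conservative estimate of the MSE reduction from fitting white noise:
noise_floor = 2 * np.var(signal) / n

for _ in range(20):                        # cap the number of fitted terms
    best_gain, best_fit = 0.0, None
    for f in freqs:
        basis = np.column_stack([np.sin(2 * np.pi * f * t),
                                 np.cos(2 * np.pi * f * t)])
        coef, *_ = np.linalg.lstsq(basis, residual, rcond=None)
        fit = basis @ coef
        gain = np.mean(fit ** 2)           # exact MSE reduction for LS fits
        if gain > best_gain:
            best_gain, best_fit = gain, fit
    if best_gain <= noise_floor:           # adaptive stopping criterion
        break
    residual = residual - best_fit
    model = model + best_fit

print(round(float(np.mean(model ** 2)), 2))  # energy captured by the model
```

With this seed the search fits the sinusoid (mean-square energy near 2) and then stops, leaving a residual close to the injected white noise.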

  2. Model error estimation for distributed systems described by elliptic equations

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1983-01-01

    A function space approach is used to develop a theory for estimation of the errors inherent in an elliptic partial differential equation model for a distributed parameter system. By establishing knowledge of the inevitable deficiencies in the model, the error estimates provide a foundation for updating the model. The function space solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for static shape determination of large flexible systems.

  3. A new method to evaluate human-robot system performance

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.; Weisbin, C. R.

    2003-01-01

    One of the key issues in space exploration is that of deciding what space tasks are best done with humans, with robots, or a suitable combination of each. In general, human and robot skills are complementary. Humans provide as yet unmatched capabilities to perceive, think, and act when faced with anomalies and unforeseen events, but there can be huge potential risks to human safety in getting these benefits. Robots provide complementary skills in being able to work in extremely risky environments, but their ability to perceive, think, and act by themselves is currently not error-free, although these capabilities are continually improving with the emergence of new technologies. Substantial past experience validates these generally qualitative notions. However, there is a need for more rigorously systematic evaluation of human and robot roles, in order to optimize the design and performance of human-robot system architectures using well-defined performance evaluation metrics. This article summarizes a new analytical method to conduct such quantitative evaluations. While the article focuses on evaluating human-robot systems, the method is generally applicable to a much broader class of systems whose performance needs to be evaluated.

  4. Fine-Granularity Functional Interaction Signatures for Characterization of Brain Conditions

    PubMed Central

    Hu, Xintao; Zhu, Dajiang; Lv, Peili; Li, Kaiming; Han, Junwei; Wang, Lihong; Shen, Dinggang; Guo, Lei; Liu, Tianming

    2014-01-01

    In the human brain, functional activity occurs at multiple spatial scales. Current studies on functional brain networks and their alterations in brain diseases via resting-state functional magnetic resonance imaging (rs-fMRI) are generally either at local scale (regionally confined analysis and inter-regional functional connectivity analysis) or at global scale (graph theoretic analysis). In contrast, inferring functional interaction at fine-granularity sub-network scale has not been adequately explored yet. Here our hypothesis is that functional interaction measured at fine-granularity sub-network scale can provide new insight into the neural mechanisms of neurological and psychological conditions, thus offering complementary information for healthy and diseased population classification. In this paper, we derived fine-granularity functional interaction (FGFI) signatures in subjects with Mild Cognitive Impairment (MCI) and Schizophrenia by diffusion tensor imaging (DTI) and rs-fMRI, and used patient-control classification experiments to evaluate the distinctiveness of the derived FGFI features. Our experimental results have shown that the FGFI features alone can achieve comparable classification performance compared with the commonly used inter-regional connectivity features. However, the classification performance can be substantially improved when FGFI features and inter-regional connectivity features are integrated, suggesting the complementary information achieved from the FGFI signatures. PMID:23319242

  5. Exploring the Phenotype of Phonological Reading Disability as a Function of the Phonological Deficit Severity: Evidence from the Error Analysis Paradigm in Arabic

    ERIC Educational Resources Information Center

    Taha, Haitham; Ibrahim, Raphiq; Khateb, Asaid

    2014-01-01

    The dominant error types were investigated as a function of phonological processing (PP) deficit severity in four groups of impaired readers. For this aim, an error analysis paradigm distinguishing between four error types was used. The findings revealed that the different types of impaired readers were characterized by differing predominant error…

  6. Average symbol error rate for M-ary quadrature amplitude modulation in generalized atmospheric turbulence and misalignment errors

    NASA Astrophysics Data System (ADS)

    Sharma, Prabhat Kumar

    2016-11-01

    A framework is presented for the analysis of average symbol error rate (SER) for M-ary quadrature amplitude modulation in a free-space optical communication system. The standard probability density function (PDF)-based approach is extended to evaluate the average SER by representing the Q-function through its Meijer's G-function equivalent. Specifically, a converging power series expression for the average SER is derived considering the zero-boresight misalignment errors in the receiver side. The analysis presented here assumes a unified expression for the PDF of channel coefficient which incorporates the M-distributed atmospheric turbulence and Rayleigh-distributed radial displacement for the misalignment errors. The analytical results are compared with the results obtained using Q-function approximation. Further, the presented results are supported by the Monte Carlo simulations.
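The Q-function at the heart of this analysis is just a scaled complementary error function, Q(x) = (1/2) erfc(x/√2). As a point of reference, the sketch below evaluates the standard exact SER of square M-QAM over a plain AWGN channel; this is the non-turbulent baseline, not the paper's turbulence-averaged result:

```python
import math

def qfunc(x):
    """Gaussian Q-function via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def mqam_ser_awgn(M, snr_db):
    """Exact SER of square M-QAM on AWGN:
    SER = 1 - (1 - 2(1 - 1/sqrt(M)) Q(sqrt(3*Es/N0/(M-1))))**2."""
    snr = 10 ** (snr_db / 10.0)            # Es/N0 as a linear ratio
    a = 2.0 * (1.0 - 1.0 / math.sqrt(M))
    p = a * qfunc(math.sqrt(3.0 * snr / (M - 1)))
    return 1.0 - (1.0 - p) ** 2

print(f"{mqam_ser_awgn(16, 15):.3e}")      # 16-QAM at 15 dB Es/N0
```

Averaging this conditional SER over the fading/misalignment channel distribution is what leads to the Meijer G-function machinery in the paper.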

  7. High-resolution nerve ultrasound and magnetic resonance neurography as complementary neuroimaging tools for chronic inflammatory demyelinating polyneuropathy

    PubMed Central

    Pitarokoili, Kalliopi; Kronlage, Moritz; Bäumer, Philip; Schwarz, Daniel; Gold, Ralf; Bendszus, Martin; Yoon, Min-Suk

    2018-01-01

    Background: We present a clinical, electrophysiological, sonographical and magnetic resonance neurography (MRN) study examining the complementary role of two neuroimaging methods of the peripheral nervous system for patients with chronic inflammatory demyelinating polyneuropathy (CIDP). Furthermore, we explore the significance of cross-sectional area (CSA) increase through correlations with MRN markers of nerve integrity. Methods: A total of 108 nerve segments on the median, ulnar, radial, tibial and fibular nerve, as well as the lumbar and cervical plexus of 18 CIDP patients were examined with high-resolution nerve ultrasound (HRUS) and MRN additionally to the nerve conduction studies. Results: We observed a fair degree of correlation of the CSA values for all nerves/nerve segments between the two methods, with a low random error in Bland–Altman analysis (bias = HRUS-CSA − MRN-CSA, −0.61 to −3.26 mm). CSA in HRUS correlated with the nerve T2-weighted (nT2) signal increase as well as with diffusion tensor imaging parameters such as fractional anisotropy, a marker of microstructural integrity. HRUS-CSA of the interscalene brachial plexus correlated significantly with the MRN-CSA and nT2 signal of the L5 and S1 roots of the lumbar plexus. Conclusions: HRUS allows for reliable CSA imaging of all peripheral nerves and the cervical plexus, and CSA correlates with markers of nerve integrity. Imaging of proximal segments as well as the estimation of nerve integrity require MRN as a complementary method. PMID:29552093

  8. Performance of concatenated Reed-Solomon/Viterbi channel coding

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Yuen, J. H.

    1982-01-01

    The concatenated Reed-Solomon (RS)/Viterbi coding system is reviewed. The performance of the system is analyzed and results are derived with a new simple approach. A functional model for the input RS symbol error probability is presented. Based on this new functional model, we compute the performance of a concatenated system in terms of RS word error probability, output RS symbol error probability, bit error probability due to decoding failure, and bit error probability due to decoding error. Finally we analyze the effects of the noisy carrier reference and the slow fading on the system performance.
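The word- and symbol-error quantities analyzed here follow from the standard bounded-distance decoding formulas for an (n, k) RS code correcting t = (n − k)/2 symbol errors. The sketch below uses these generic formulas, not the paper's functional model for the input symbol error probability, and assumes independent input symbol errors:

```python
import math

def rs_word_error(n, k, ps):
    """P(word error) under bounded-distance decoding: more than
    t = (n - k)//2 of the n symbols arrive in error."""
    t = (n - k) // 2
    return sum(math.comb(n, i) * ps ** i * (1.0 - ps) ** (n - i)
               for i in range(t + 1, n + 1))

def rs_output_symbol_error(n, k, ps):
    """Common approximation: an uncorrectable word leaves roughly
    i + t erroneous symbols after a miscorrection attempt."""
    t = (n - k) // 2
    return sum(min(i + t, n) / n * math.comb(n, i) * ps ** i
               * (1.0 - ps) ** (n - i) for i in range(t + 1, n + 1))

# A widely used deep-space outer code, at 1% input symbol error rate:
print(f"{rs_word_error(255, 223, 0.01):.2e}")
```

The steep drop of the binomial tail past t + 1 errors is what gives the concatenated system its large coding gain over the inner Viterbi decoder alone.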

  9. High-speed logic integrated circuits with solution-processed self-assembled carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Han, Shu-Jen; Tang, Jianshi; Kumar, Bharat; Falk, Abram; Farmer, Damon; Tulevski, George; Jenkins, Keith; Afzali, Ali; Oida, Satoshi; Ott, John; Hannon, James; Haensch, Wilfried

    2017-09-01

    As conventional monolithic silicon technology struggles to meet the requirements for the 7-nm technology node, there has been tremendous progress in demonstrating the scalability of carbon nanotube field-effect transistors down to the size that satisfies the 3-nm node and beyond. However, to date, circuits built with carbon nanotubes have overlooked key aspects of a practical logic technology and have stalled at simple functionality demonstrations. Here, we report high-performance complementary carbon nanotube ring oscillators using fully manufacturable processes, with a stage switching frequency of 2.82 GHz. The circuit was built on solution-processed, self-assembled carbon nanotube arrays with over 99.9% semiconducting purity, and the complementary feature was achieved by employing two different work function electrodes.

  10. High-speed logic integrated circuits with solution-processed self-assembled carbon nanotubes.

    PubMed

    Han, Shu-Jen; Tang, Jianshi; Kumar, Bharat; Falk, Abram; Farmer, Damon; Tulevski, George; Jenkins, Keith; Afzali, Ali; Oida, Satoshi; Ott, John; Hannon, James; Haensch, Wilfried

    2017-09-01

    As conventional monolithic silicon technology struggles to meet the requirements for the 7-nm technology node, there has been tremendous progress in demonstrating the scalability of carbon nanotube field-effect transistors down to the size that satisfies the 3-nm node and beyond. However, to date, circuits built with carbon nanotubes have overlooked key aspects of a practical logic technology and have stalled at simple functionality demonstrations. Here, we report high-performance complementary carbon nanotube ring oscillators using fully manufacturable processes, with a stage switching frequency of 2.82 GHz. The circuit was built on solution-processed, self-assembled carbon nanotube arrays with over 99.9% semiconducting purity, and the complementary feature was achieved by employing two different work function electrodes.

  11. 5 Things You Should Know: The Science of Chronic Pain and Complementary Health Practices

    MedlinePlus

    ... some evidence that mindfulness-based stress reduction and cognitive-behavioral therapy improves pain and functional limitation compared to usual ... pain found that mindfulness-based stress reduction and cognitive-behavioral therapy resulted in greater improvement in pain and functional ...

  12. A randomized controlled trial of qigong exercise on fatigue symptoms, functioning, and telomerase activity in persons with chronic fatigue or chronic fatigue syndrome.

    PubMed

    Ho, Rainbow T H; Chan, Jessie S M; Wang, Chong-Wen; Lau, Benson W M; So, Kwok Fai; Yuen, Li Ping; Sham, Jonathan S T; Chan, Cecilia L W

    2012-10-01

    Chronic fatigue is common in the general population. Complementary therapies are often used by patients with chronic fatigue or chronic fatigue syndrome to manage their symptoms. This study aimed to assess the effect of a 4-month qigong intervention program among patients with chronic fatigue or chronic fatigue syndrome. Sixty-four participants were randomly assigned to either an intervention group or a wait list control group. Outcome measures included fatigue symptoms, physical functioning, mental functioning, and telomerase activity. Fatigue symptoms and mental functioning were significantly improved in the qigong group compared to controls. Telomerase activity increased in the qigong group from 0.102 to 0.178 arbitrary units (p < 0.05). The change was statistically significant when compared to the control group (p < 0.05). Qigong exercise may be used as an alternative and complementary therapy or rehabilitative program for chronic fatigue and chronic fatigue syndrome.

  13. Regulation of T-cell receptor signalling by membrane microdomains

    PubMed Central

    Razzaq, Tahir M; Ozegbe, Patricia; Jury, Elizabeth C; Sembi, Phupinder; Blackwell, Nathan M; Kabouridis, Panagiotis S

    2004-01-01

    There is now considerable evidence suggesting that the plasma membrane of mammalian cells is compartmentalized by functional lipid raft microdomains. These structures are assemblies of specialized lipids and proteins and have been implicated in diverse biological functions. Analysis of their protein content using proteomics and other methods revealed enrichment of signalling proteins, suggesting a role for these domains in intracellular signalling. In T lymphocytes, structure/function experiments and complementary pharmacological studies have shown that raft microdomains control the localization and function of proteins which are components of signalling pathways regulated by the T-cell antigen receptor (TCR). Based on these studies, a model for TCR phosphorylation in lipid rafts is presented. However, despite substantial progress in the field, critical questions remain. For example, it is unclear if membrane rafts represent a homogeneous population and if their structure is modified upon TCR stimulation. In the future, proteomics and the parallel development of complementary analytical methods will undoubtedly contribute in further delineating the role of lipid rafts in signal transduction mechanisms. PMID:15554919

  14. Numerical simulation of terahertz transmission of bilayer metallic meshes with different thickness of substrates

    NASA Astrophysics Data System (ADS)

    Zhang, Gaohui; Zhao, Guozhong; Zhang, Shengbo

    2012-12-01

    The terahertz transmission characteristics of bilayer metallic meshes are studied using the finite-difference time-domain method. Three structures were investigated: the bilayer well-shaped grid, the bilayer array of complementary square metallic pills, and the bilayer cross wire-hole array. The results show that the bilayer well-shaped grid acts as a high-pass filter, the bilayer array of complementary square metallic pills acts as a low-pass filter, and the bilayer cross wire-hole array acts as a band-pass filter. A dielectric medium must be deposited between the two metallic microstructures, and its thickness clearly influences the terahertz transmission characteristics. Simulation results show that as the medium thickness increases, the cut-off frequencies of the high-pass and low-pass filters shift to lower frequency, while the bilayer cross wire-hole array exhibits two transmission peaks that display a competition effect.

  15. Regression-assisted deconvolution.

    PubMed

    McIntyre, Julie; Stefanski, Leonard A

    2011-06-30

    We present a semi-parametric deconvolution estimator for the density function of a random variable X that is measured with error, a common challenge in many epidemiological studies. Traditional deconvolution estimators rely only on assumptions about the distribution of X and the error in its measurement, and ignore information available in auxiliary variables. Our method assumes the availability of a covariate vector statistically related to X by a mean-variance function regression model, where regression errors are normally distributed and independent of the measurement errors. Simulations suggest that the estimator achieves a much lower integrated squared error than the observed-data kernel density estimator when models are correctly specified and the assumption of normal regression errors is met. We illustrate the method using anthropometric measurements of newborns to estimate the density function of newborn length. Copyright © 2011 John Wiley & Sons, Ltd.
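A toy version of the regression-assisted idea can be simulated under strong simplifying assumptions (linear mean function, homoscedastic normal errors, known measurement-error variance; every name and value below is hypothetical, not the paper's estimator): only W = X + U is observed, but a covariate Z related to X through a regression model lets us estimate the density of X as an average of normals centred at the fitted means.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
z = rng.uniform(0.0, 1.0, n)
x = 1.0 + 2.0 * z + rng.normal(0.0, 0.3, n)  # X = m(Z) + regression error
u = rng.normal(0.0, 0.5, n)                  # measurement error
w = x + u                                    # observed surrogate of X

# Regress W on Z: E[W|Z] = E[X|Z] because U has mean zero.
A = np.column_stack([np.ones(n), z])
beta, *_ = np.linalg.lstsq(A, w, rcond=None)
resid_var = np.var(w - A @ beta)
sigma2_u = 0.25                               # assumed-known Var(U)
sigma2_eps = max(resid_var - sigma2_u, 1e-6)  # regression-error variance

def density_estimate(x0):
    """f_X(x0) as an average of normal pdfs centred at the fitted means."""
    mu = A @ beta
    return float(np.mean(np.exp(-(x0 - mu) ** 2 / (2.0 * sigma2_eps))
                         / np.sqrt(2.0 * np.pi * sigma2_eps)))

print(round(density_estimate(2.0), 2))  # density near the centre of X's support
```

The measurement error never enters the density directly; it is subtracted out of the residual variance, which is the sense in which the covariate "assists" the deconvolution.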

  16. Calibration of Kinect for Xbox One and Comparison between the Two Generations of Microsoft Sensors

    PubMed Central

    Pagliari, Diana; Pinto, Livio

    2015-01-01

    In recent years, the videogame industry has been characterized by a great boost in gesture recognition and motion tracking, following the increasing request of creating immersive game experiences. The Microsoft Kinect sensor allows acquiring RGB, IR and depth images with a high frame rate. Because of the complementary nature of the information provided, it has proved an attractive resource for researchers with very different backgrounds. In summer 2014, Microsoft launched a new generation of Kinect on the market, based on time-of-flight technology. This paper proposes a calibration of Kinect for Xbox One imaging sensors, focusing on the depth camera. The mathematical model that describes the error committed by the sensor as a function of the distance between the sensor itself and the object has been estimated. All the analyses presented here have been conducted for both generations of Kinect, in order to quantify the improvements that characterize every single imaging sensor. Experimental results show that the quality of the delivered model improved applying the proposed calibration procedure, which is applicable to both point clouds and the mesh model created with the Microsoft Fusion Libraries. PMID:26528979

  17. Neuro-fuzzy controller to navigate an unmanned vehicle.

    PubMed

    Selma, Boumediene; Chouraqui, Samira

    2013-12-01

    A neuro-fuzzy control method for an unmanned vehicle (UV) simulation is described. The objective is to guide an autonomous vehicle to a desired destination along a desired path in an environment characterized by a terrain and a set of distinct objects, such as obstacles like donkeys, traffic lights, and cars circulating in the trajectory. The autonomous navigation ability and road-following precision are mainly influenced by the control strategy and real-time control performance. A fuzzy logic controller can describe the desired system behavior well with simple "if-then" relations, but it requires the designer to derive the rules manually by trial and error. Neural networks, on the other hand, can approximate the behavior of a system but can neither interpret the solution obtained nor check whether it is plausible. The two approaches are complementary: combined, neural networks contribute learning capability while fuzzy logic brings knowledge representation (neuro-fuzzy). In this paper, an adaptive neuro-fuzzy inference system (ANFIS) controller is described and implemented to navigate the autonomous vehicle. Results show several improvements in the control system adjusted by neuro-fuzzy techniques in comparison to previous methods such as artificial neural networks (ANN).

  18. Calibration of Kinect for Xbox One and Comparison between the Two Generations of Microsoft Sensors.

    PubMed

    Pagliari, Diana; Pinto, Livio

    2015-10-30

    In recent years, the videogame industry has been characterized by a great boost in gesture recognition and motion tracking, following the increasing request of creating immersive game experiences. The Microsoft Kinect sensor allows acquiring RGB, IR and depth images with a high frame rate. Because of the complementary nature of the information provided, it has proved an attractive resource for researchers with very different backgrounds. In summer 2014, Microsoft launched a new generation of Kinect on the market, based on time-of-flight technology. This paper proposes a calibration of Kinect for Xbox One imaging sensors, focusing on the depth camera. The mathematical model that describes the error committed by the sensor as a function of the distance between the sensor itself and the object has been estimated. All the analyses presented here have been conducted for both generations of Kinect, in order to quantify the improvements that characterize every single imaging sensor. Experimental results show that the quality of the delivered model improved applying the proposed calibration procedure, which is applicable to both point clouds and the mesh model created with the Microsoft Fusion Libraries.

  19. Model for threading dislocations in metamorphic tandem solar cells on GaAs (001) substrates

    NASA Astrophysics Data System (ADS)

    Song, Yifei; Kujofsa, Tedi; Ayers, John E.

    2018-02-01

    We present an approximate model for the threading dislocations in III-V heterostructures and have applied this model to study the defect behavior in metamorphic triple-junction solar cells. This model represents a new approach in which the coefficient for second-order threading dislocation annihilation and coalescence reactions is considered to be determined by the length of misfit dislocations, LMD, in the structure, and we therefore refer to it as the LMD model. On the basis of this model we have compared the average threading dislocation densities in the active layers of triple junction solar cells using linearly-graded buffers of varying thicknesses as well as S-graded (complementary error function) buffers with varying thicknesses and standard deviation parameters. We have shown that the threading dislocation densities in the active regions of metamorphic tandem solar cells depend not only on the thicknesses of the buffer layers but on their compositional grading profiles. The use of S-graded buffer layers instead of linear buffers resulted in lower threading dislocation densities. Moreover, the threading dislocation densities depended strongly on the standard deviation parameters used in the S-graded buffers, with smaller values providing lower threading dislocation densities.
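The linear and S-graded (complementary error function) composition profiles compared in this model can be sketched directly. The layer thickness, final composition, and standard deviation parameter below are hypothetical illustrations, not values from the paper:

```python
import math

def linear_profile(z, h, x_f):
    """Linearly graded composition from 0 to x_f across thickness h."""
    return x_f * min(max(z / h, 0.0), 1.0)

def s_graded_profile(z, h, x_f, sigma):
    """S-graded composition x(z) = (x_f/2) * erfc((h/2 - z)/(sqrt(2)*sigma)),
    centred at the middle of the buffer, with std-dev parameter sigma."""
    return 0.5 * x_f * math.erfc((h / 2.0 - z) / (math.sqrt(2.0) * sigma))

# Hypothetical buffer: 1000 nm thick, graded to alloy fraction 0.3,
# with sigma = 150 nm controlling the steepness of the S-shape.
h, x_f, sigma = 1000.0, 0.3, 150.0
for z in (0.0, 250.0, 500.0, 750.0, 1000.0):
    print(f"z={z:6.1f}  linear={linear_profile(z, h, x_f):.3f}  "
          f"s-graded={s_graded_profile(z, h, x_f, sigma):.3f}")
```

Both profiles reach the same endpoints, but the erfc grade concentrates the compositional change (and hence the misfit) near the middle of the buffer; the paper's finding is that the choice of sigma strongly affects the resulting threading dislocation density.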

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kicker, Dwayne Curtis; Herrick, Courtney G; Zeitler, Todd

    The numerical code DRSPALL (from direct release spallings) is written to calculate the volume of Waste Isolation Pilot Plant solid waste subject to material failure and transport to the surface (i.e., spallings) as a result of a hypothetical future inadvertent drilling intrusion into the repository. An error in the implementation of the DRSPALL finite difference equations was discovered and documented in a software problem report in accordance with the quality assurance procedure for software requirements. This paper describes the corrections to DRSPALL and documents the impact of the new spallings data from the modified DRSPALL on previous performance assessment calculations. Updated performance assessments result in more simulations with spallings, which generally translates to an increase in spallings releases to the accessible environment. Total normalized radionuclide releases using the modified DRSPALL data were determined by forming the summation of releases across each potential release pathway, namely borehole cuttings and cavings releases, spallings releases, direct brine releases, and transport releases. Because spallings releases are not a major contributor to the total releases, the updated performance assessment calculations of overall mean complementary cumulative distribution functions for total releases are virtually unchanged. Therefore, the corrections to the spallings volume calculation did not impact Waste Isolation Pilot Plant performance assessment calculation results.

  1. Promoting Student Autonomy through the Use of the European Language Portfolio

    ERIC Educational Resources Information Center

    Gonzalez, Jesus Angel

    2009-01-01

    The European Language Portfolio (ELP) is a document launched by the Council of Europe in 2001 which consists of three sections: the Passport, the Language Biography, and the Dossier. It has two complementary functions: a pedagogic function (helping students to reflect on their learning and objectives) and a reporting function (providing a record…

  2. Accelerating calculations of RNA secondary structure partition functions using GPUs

    PubMed Central

    2013-01-01

    Background RNA performs many diverse functions in the cell in addition to its role as a messenger of genetic information. These functions depend on its ability to fold to a unique three-dimensional structure determined by the sequence. The conformation of RNA is in part determined by its secondary structure, or the particular set of contacts between pairs of complementary bases. Prediction of the secondary structure of RNA from its sequence is therefore of great interest, but can be computationally expensive. In this work we accelerate computations of base-pair probabilities using parallel graphics processing units (GPUs). Results Calculation of the probabilities of base pairs in RNA secondary structures using nearest-neighbor standard free energy change parameters has been implemented using CUDA to run on hardware with multiprocessor GPUs. A modified set of recursions was introduced, which reduces memory usage by about 25%. GPUs are fastest in single precision, and for some hardware, restricted to single precision. This may introduce significant roundoff error. However, deviations in base-pair probabilities calculated using single precision were found to be negligible compared to those resulting from shifting the nearest-neighbor parameters by a random amount of magnitude similar to their experimental uncertainties. For large sequences running on our particular hardware, the GPU implementation reduces execution time by a factor of close to 60 compared with an optimized serial implementation, and by a factor of 116 compared with the original code. Conclusions Using GPUs can greatly accelerate computation of RNA secondary structure partition functions, allowing calculation of base-pair probabilities for large sequences in a reasonable amount of time, with a negligible compromise in accuracy due to working in single precision. The source code is integrated into the RNAstructure software package and available for download at http://rna.urmc.rochester.edu.
PMID:24180434

  3. Summary goodness-of-fit statistics for binary generalized linear models with noncanonical link functions.

    PubMed

    Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J

    2016-05-01

    Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLM with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of the link function chosen. We generalize the Tsiatis GOF statistic (TG), originally developed for logistic GLMCCs, so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J(2)) statistics can be applied directly. In a simulation study, TG, HL, and J(2) were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J(2) were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC. In this case, TG had more power than HL or J(2). © 2015 John Wiley & Sons Ltd/London School of Economics.

  4. Empirically Defined Patterns of Executive Function Deficits in Schizophrenia and Their Relation to Everyday Functioning: A Person-Centered Approach

    PubMed Central

    Iampietro, Mary; Giovannetti, Tania; Drabick, Deborah A. G.; Kessler, Rachel K.

    2013-01-01

    Executive function (EF) deficits in schizophrenia (SZ) are well documented, although much less is known about patterns of EF deficits and their association to differential impairments in everyday functioning. The present study empirically defined SZ groups based on measures of various EF abilities and then compared these EF groups on everyday action errors. Participants (n=45) completed various subtests from the Delis–Kaplan Executive Function System (D-KEFS) and the Naturalistic Action Test (NAT), a performance-based measure of everyday action that yields scores reflecting total errors and a range of different error types (e.g., omission, perseveration). Results of a latent class analysis revealed three distinct EF groups, characterized by (a) multiple EF deficits, (b) relatively spared EF, and (c) perseverative responding. Follow-up analyses revealed that the classes differed significantly on NAT total errors, total commission errors, and total perseveration errors; the two classes with EF impairment performed comparably on the NAT but performed worse than the class with relatively spared EF. In sum, people with SZ demonstrate variable patterns of EF deficits, and distinct aspects of these EF deficit patterns (i.e., poor mental control abilities) may be associated with everyday functioning capabilities. PMID:23035705

  5. Adaptive Constructive Processes and the Future of Memory

    ERIC Educational Resources Information Center

    Schacter, Daniel L.

    2012-01-01

    Memory serves critical functions in everyday life but is also prone to error. This article examines adaptive constructive processes, which play a functional role in memory and cognition but can also produce distortions, errors, and illusions. The article describes several types of memory errors that are produced by adaptive constructive processes…

  6. High-Precision Attitude Estimation Method of Star Sensors and Gyro Based on Complementary Filter and Unscented Kalman Filter

    NASA Astrophysics Data System (ADS)

    Guo, C.; Tong, X.; Liu, S.; Liu, S.; Lu, X.; Chen, P.; Jin, Y.; Xie, H.

    2017-07-01

    Determining the attitude of the satellite at the time of imaging, and then establishing the mathematical relationship between image points and ground points, is essential in high-resolution remote sensing image mapping. A star tracker is insensitive to high-frequency attitude variation because of measurement noise and satellite jitter, but low-frequency attitude motion can be determined with high accuracy. A gyro, as a short-term reference for the satellite's attitude, is sensitive to high-frequency attitude change, but because of gyro drift and integration error, its attitude determination error increases with time. Based on the opposite noise-frequency characteristics of the two kinds of attitude sensors, this paper proposes an on-orbit attitude estimation method for star sensors and gyro based on a Complementary Filter (CF) and an Unscented Kalman Filter (UKF). In this study, the principle and implementation of the proposed method are described. First, gyro attitude quaternions are acquired from the attitude kinematics equation. An attitude information fusion method is then introduced, which applies high-pass filtering to the gyro and low-pass filtering to the star tracker. Second, the attitude fusion data based on the CF are introduced as the observed values of the UKF system in the measurement-update step. The accuracy and effectiveness of the method are validated using simulated sensor attitude data. The obtained results indicate that the proposed method can suppress gyro drift and the measurement noise of the attitude sensors, significantly improving the accuracy of attitude determination compared with the simulated on-orbit attitude and with the attitude estimates of a UKF defined by the same simulation parameters.
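
    The complementary-filter idea (trust the gyro over short horizons, the star tracker over long ones) can be sketched for a single axis with a first-order filter. The gain, time step, sensor noise, and gyro bias below are invented for illustration, and the paper's full CF-plus-UKF pipeline is not reproduced here.

```python
def complementary_filter(gyro_rates, star_angles, dt, alpha):
    """Single-axis first-order complementary filter: the integrated gyro
    rate is effectively high-pass filtered, while the star-tracker angle
    is low-pass filtered with weight (1 - alpha)."""
    angle = star_angles[0]
    fused = [angle]
    for rate, star in zip(gyro_rates[1:], star_angles[1:]):
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * star
        fused.append(angle)
    return fused

# Hypothetical data: constant 0.1 deg/s rotation over 50 samples
dt, alpha = 0.1, 0.98
truth = [0.1 * dt * k for k in range(50)]
star = [t + ((-1) ** k) * 0.05 for k, t in enumerate(truth)]  # +/-0.05 deg jitter
gyro = [0.1 + 0.002 for _ in range(50)]                       # small constant bias
fused = complementary_filter(gyro, star, dt, alpha)
err_fused = abs(fused[-1] - truth[-1])
err_star = abs(star[-1] - truth[-1])
print(f"final-step error: star-only {err_star:.3f} deg, fused {err_fused:.3f} deg")
```

    The filter attenuates the star tracker's high-frequency jitter while the star measurements bound the drift that pure gyro integration would accumulate.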

  7. Repeat analysis of intraoral digital imaging performed by undergraduate students using a complementary metal oxide semiconductor sensor: An institutional case study

    PubMed Central

    Rahman, Nur Liyana Abdul; Asri, Amiza Aqiela Ahmad; Othman, Noor Ilyani; Wan Mokhtar, Ilham

    2017-01-01

    Purpose This study was performed to quantify the repeat rate of imaging acquisitions based on different clinical examinations, and to assess the prevalence of error types in intraoral bitewing and periapical imaging using a digital complementary metal-oxide-semiconductor (CMOS) intraoral sensor. Materials and Methods A total of 8,030 intraoral images were retrospectively collected from 3 groups of undergraduate clinical dental students. The type of examination, stage of the procedure, and reasons for repetition were analysed and recorded. The repeat rate was calculated as the total number of repeated images divided by the total number of examinations. The weighted Cohen's kappa for inter- and intra-observer agreement was used after calibration and prior to image analysis. Results The overall repeat rate on intraoral periapical images was 34.4%. A total of 1,978 repeated periapical images were from endodontic assessment, which included working length estimation (WLE), trial gutta-percha (tGP), obturation, and removal of gutta-percha (rGP). In the endodontic imaging, the highest repeat rate was from WLE (51.9%) followed by tGP (48.5%), obturation (42.2%), and rGP (35.6%). In bitewing images, the repeat rate was 15.1% and poor angulation was identified as the most common cause of error. A substantial level of intra- and interobserver agreement was achieved. Conclusion The repeat rates in this study were relatively high, especially for certain clinical procedures, warranting training in optimization techniques and radiation protection. Repeat analysis should be performed from time to time to enhance quality assurance and hence deliver high-quality health services to patients. PMID:29279822
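
    The repeat rate used here is simply the number of repeated images divided by the total number of examinations. A minimal sketch with hypothetical counts (not the study's data):

```python
def repeat_rate(repeated, examinations):
    """Repeat rate as a percentage of examinations."""
    return 100.0 * repeated / examinations

# Hypothetical endodontic counts per procedure stage: (repeats, examinations)
stages = {"WLE": (415, 800), "tGP": (388, 800), "obturation": (338, 800)}
for stage, (rep, exams) in stages.items():
    print(f"{stage}: {repeat_rate(rep, exams):.1f}%")
```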

  8. Numerical optimization in Hilbert space using inexact function and gradient evaluations

    NASA Technical Reports Server (NTRS)

    Carter, Richard G.

    1989-01-01

    Trust region algorithms provide a robust iterative technique for solving non-convex unconstrained optimization problems, but in many instances it is prohibitively expensive to compute high-accuracy function and gradient values for the method. Of particular interest are inverse and parameter estimation problems, since function and gradient evaluations involve numerically solving large systems of differential equations. A global convergence theory is presented for trust region algorithms in which neither function nor gradient values are known exactly. The theory is formulated in a Hilbert space setting so that it can be applied to variational problems as well as the finite-dimensional problems normally seen in the trust region literature. The conditions concerning allowable error are remarkably relaxed: in particular, the relative-error condition on the gradient is automatically satisfied if the error is orthogonal to the gradient approximation. A technique for estimating gradient error and improving the approximation is also presented.

  9. Gaussian copula as a likelihood function for environmental models

    NASA Astrophysics Data System (ADS)

    Wani, O.; Espadas, G.; Cecinati, F.; Rieckermann, J.

    2017-12-01

    Parameter estimation of environmental models always comes with uncertainty. To formally quantify this parametric uncertainty, a likelihood function needs to be formulated, which is defined as the probability of the observations given fixed values of the parameter set. A likelihood function allows us to infer parameter values from observations using Bayes' theorem. The challenge is to formulate a likelihood function that reliably describes the error-generating processes which lead to the observed monitoring data, such as rainfall and runoff. If the likelihood function is not representative of the error statistics, the parameter inference will give biased parameter values. Several uncertainty estimation methods currently in use employ Gaussian processes as a likelihood function because of their favourable analytical properties. A Box-Cox transformation is suggested to deal with non-symmetric and heteroscedastic errors, e.g., for flow data, which are typically more uncertain in high flows than in periods with low flows. A problem with transformations is that the results are conditional on hyper-parameters, for which it is difficult to formulate the analyst's belief a priori. In an attempt to address this problem, in this research work we suggest learning the nature of the error distribution from the errors made by the model in "past" forecasts. We use a Gaussian copula to generate semiparametric error distributions. 1) We show that this copula can then be used as a likelihood function to infer parameters, breaking away from the practice of using multivariate normal distributions. Based on the results from a didactical example of predicting rainfall runoff, 2) we demonstrate that the copula captures the predictive uncertainty of the model. 3) Finally, we find that the properties of autocorrelation and heteroscedasticity of errors are captured well by the copula, eliminating the need to use transforms. In summary, our findings suggest that copulas are an interesting departure from the usage of fully parametric distributions as likelihood functions, and they could help us to better capture the statistical properties of errors and make more reliable predictions.
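
    As a sketch of the central object, the bivariate Gaussian copula density can be evaluated with the standard library alone. The correlation value and the uniform scores below are illustrative; a real application would estimate the correlation (and the semiparametric margins) from the model's past forecast errors.

```python
import math
from statistics import NormalDist

def gaussian_copula_density(u, v, rho):
    """Density of the bivariate Gaussian copula at uniform scores (u, v):
    the scores are mapped to standard-normal quantiles, where the
    dependence is a bivariate normal with correlation rho."""
    x = NormalDist().inv_cdf(u)
    y = NormalDist().inv_cdf(v)
    det = 1.0 - rho * rho
    expo = -(rho * rho * (x * x + y * y) - 2.0 * rho * x * y) / (2.0 * det)
    return math.exp(expo) / math.sqrt(det)

print(gaussian_copula_density(0.5, 0.5, 0.0))  # independence copula: density 1
print(gaussian_copula_density(0.9, 0.9, 0.7))  # positive dependence in the upper tail
```

    With rho = 0 the copula reduces to independence (density 1 everywhere); with positive rho, jointly large errors become more likely, which is how autocorrelated errors in consecutive forecasts can be represented.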

  10. From plane waves to local Gaussians for the simulation of correlated periodic systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Booth, George H., E-mail: george.booth@kcl.ac.uk; Tsatsoulis, Theodoros; Grüneis, Andreas, E-mail: a.grueneis@fkf.mpg.de

    2016-08-28

    We present a simple, robust, and black-box approach to the implementation and use of local, periodic, atom-centered Gaussian basis functions within a plane wave code, in a computationally efficient manner. The procedure outlined is based on the representation of the Gaussians within a finite bandwidth by their underlying plane wave coefficients. The core region is handled within the projected augment wave framework, by pseudizing the Gaussian functions within a cutoff radius around each nucleus, smoothing the functions so that they are faithfully represented by a plane wave basis with only moderate kinetic energy cutoff. To mitigate the effects of the basis set superposition error and incompleteness at the mean-field level introduced by the Gaussian basis, we also propose a hybrid approach, whereby the complete occupied space is first converged within a large plane wave basis, and the Gaussian basis used to construct a complementary virtual space for the application of correlated methods. We demonstrate that these pseudized Gaussians yield compact and systematically improvable spaces with an accuracy comparable to their non-pseudized Gaussian counterparts. A key advantage of the described method is its ability to efficiently capture and describe electronic correlation effects of weakly bound and low-dimensional systems, where plane waves are not sufficiently compact or able to be truncated without unphysical artifacts. We investigate the accuracy of the pseudized Gaussians for the water dimer interaction, neon solid, and water adsorption on a LiH surface, at the level of second-order Møller–Plesset perturbation theory.

  11. Functionalized C-Glycoside Ketohydrazones: Carbohydrate Derivatization that Retains the Ring Integrity of the Terminal Reducing Sugar

    USDA-ARS?s Scientific Manuscript database

    Glycosylation often mediates important biological processes through the interaction of carbohydrates with complementary proteins. Most chemical tools for the functional analysis of glycans are highly dependent upon various linkage chemistries that involve the reducing-terminus of carbohydrates. Ho...

  12. A Method of Calculating Motion Error in a Linear Motion Bearing Stage

    PubMed Central

    Khim, Gyungho; Park, Chun Hong; Oh, Jeong Seok

    2015-01-01

    We report a method of calculating the motion error of a linear motion bearing stage. The transfer function method, which exploits reaction forces of individual bearings, is effective for estimating motion errors; however, it requires the rail-form errors. This is not suitable for a linear motion bearing stage because obtaining the rail-form errors is not straightforward. In the method described here, we use the straightness errors of a bearing block to calculate the reaction forces on the bearing block. The reaction forces were compared with those of the transfer function method. Parallelism errors between two rails were considered, and the motion errors of the linear motion bearing stage were measured and compared with the results of the calculations, revealing good agreement. PMID:25705715

  13. Integrated navigation fusion strategy of INS/UWB for indoor carrier attitude angle and position synchronous tracking.

    PubMed

    Fan, Qigao; Wu, Yaheng; Hui, Jing; Wu, Lei; Yu, Zhenzhong; Zhou, Lijuan

    2014-01-01

    In some GPS failure conditions, positioning a mobile target is difficult. This paper proposes a new INS/UWB-based method for synchronous tracking of the attitude angle and position of an indoor carrier. First, an error model of the INS/UWB integrated system is built, including the error equations of the INS and UWB, and a combined filtering model of INS/UWB is developed. Simulation results show that the two subsystems are complementary. Second, an integrated navigation data fusion strategy for INS/UWB based on Kalman filtering theory is proposed. Simulation results show that the FAKF method outperforms conventional Kalman filtering. Finally, an indoor experimental platform, geared to the needs of the coal mine working environment, is established to verify the INS/UWB integrated navigation theory. Static and dynamic positioning results show that the INS/UWB integrated navigation system is stable and operates in real time; its positioning precision meets the requirements of the working conditions and is better than that of any independent subsystem.

  14. A plasmid-based lacZα gene assay for DNA polymerase fidelity measurement

    PubMed Central

    Keith, Brian J.; Jozwiakowski, Stanislaw K.; Connolly, Bernard A.

    2013-01-01

    A significantly improved DNA polymerase fidelity assay, based on a gapped plasmid containing the lacZα reporter gene in a single-stranded region, is described. Nicking at two sites flanking lacZα, and removing the excised strand by thermocycling in the presence of complementary competitor DNA, is used to generate the gap. Simple methods are presented for preparing the single-stranded competitor. The gapped plasmid can be purified, in high amounts and in a very pure state, using benzoylated–naphthoylated DEAE–cellulose, resulting in a low background mutation frequency (∼1 × 10−4). Two key parameters, the number of detectable sites and the expression frequency, necessary for measuring polymerase error rates have been determined. DNA polymerase fidelity is measured by gap filling in vitro, followed by transformation into Escherichia coli and scoring of blue/white colonies and converting the ratio to error rate. Several DNA polymerases have been used to fully validate this straightforward and highly sensitive system. PMID:23098700
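
    A commonly used conversion in lacZ-based fidelity assays divides the background-corrected mutant fraction by the product of the number of detectable sites and the expression frequency, the two parameters the abstract reports determining. The colony counts and parameter values below are hypothetical, and the exact correction used in the paper may differ.

```python
def polymerase_error_rate(mutant_freq, background_freq,
                          detectable_sites, expression_freq):
    """Convert a blue/white mutant frequency into an error rate per
    nucleotide, correcting for the assay's background mutation frequency
    and for mutations at sites that do not change the colony phenotype."""
    return (mutant_freq - background_freq) / (detectable_sites * expression_freq)

# Hypothetical counts: 120 mutant (white) colonies out of 40,000 total
mf = 120 / 40000
rate = polymerase_error_rate(mf, background_freq=1e-4,
                             detectable_sites=125, expression_freq=0.6)
print(f"error rate ~= {rate:.2e} per nucleotide")
```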

  15. A Hybrid dasymetric and machine learning approach to high-resolution residential electricity consumption modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morton, April M; Nagle, Nicholas N; Piburn, Jesse O

    As urban areas continue to grow and evolve in a world of increasing environmental awareness, the need for detailed information regarding residential energy consumption patterns has become increasingly important. Though current modeling efforts mark significant progress in the effort to better understand the spatial distribution of energy consumption, the majority of techniques are highly dependent on region-specific data sources and often require building- or dwelling-level details that are not publicly available for many regions in the United States. Furthermore, many existing methods do not account for errors in input data sources and may not accurately reflect inherent uncertainties in model outputs. We propose an alternative and more general hybrid approach to high-resolution residential electricity consumption modeling by merging a dasymetric model with a complementary machine learning algorithm. The method's flexible data requirement and statistical framework ensure that the model both is applicable to a wide range of regions and considers errors in input data sources.

  16. Error analysis of the Golay3 optical imaging system.

    PubMed

    Wu, Quanying; Fan, Junliu; Wu, Feng; Zhao, Jun; Qian, Lin

    2013-05-01

    We use aberration theory to derive a generalized pupil function of the Golay3 imaging system when astigmatisms exist in its submirrors. Theoretical analysis and numerical simulation using ZEMAX show that the point spread function (PSF) and the modulation transfer function (MTF) of the Golay3 sparse-aperture system change periodically when piston errors are present. When the peak-to-valley value of the wavefront (PV(tilt)) due to the tilt error increases from zero to λ, the PSF and the MTF change significantly, and the direction of the change is determined by the location of the submirror with the tilt error. When PV(tilt) becomes larger than λ, the PSF and the MTF remain unvaried. We calculate the peak signal-to-noise ratio (PSNR) resulting from the piston and tilt errors according to the Strehl ratio, and show that the PSNR decreases as the errors increase.

  17. Dopamine neurons share common response function for reward prediction error

    PubMed Central

    Eshel, Neir; Tian, Ju; Bukwich, Michael; Uchida, Naoshige

    2016-01-01

    Dopamine neurons are thought to signal reward prediction error, or the difference between actual and predicted reward. How dopamine neurons jointly encode this information, however, remains unclear. One possibility is that different neurons specialize in different aspects of prediction error; another is that each neuron calculates prediction error in the same way. We recorded from optogenetically-identified dopamine neurons in the lateral ventral tegmental area (VTA) while mice performed classical conditioning tasks. Our tasks allowed us to determine the full prediction error functions of dopamine neurons and compare them to each other. We found striking homogeneity among individual dopamine neurons: their responses to both unexpected and expected rewards followed the same function, just scaled up or down. As a result, we could describe both individual and population responses using just two parameters. Such uniformity ensures robust information coding, allowing each dopamine neuron to contribute fully to the prediction error signal. PMID:26854803

  18. Programmable display of DNA-protein chimeras for controlling cell-hydrogel interactions via reversible intermolecular hybridization.

    PubMed

    Zhang, Zhaoyang; Li, Shihui; Chen, Niancao; Yang, Cheng; Wang, Yong

    2013-04-08

    Extensive studies have been recently carried out to achieve dynamic control of cell-material interactions primarily through physicochemical stimulation. The purpose of this study was to apply reversible intermolecular hybridization to program cell-hydrogel interactions in physiological conditions based on DNA-antibody chimeras and complementary oligonucleotides. The results showed that DNA oligonucleotides could be captured to and released from the immobilizing DNA-functionalized hydrogels with high specificity via DNA hybridization. Accordingly, DNA-antibody chimeras were captured to the hydrogels, successfully inducing specific cell attachment. The cell attachment to the hydrogels reached the plateau at approximately half an hour after the functionalized hydrogels and the cells were incubated together. The attached cells were rapidly released from the bound hydrogels when triggering complementary oligonucleotides were introduced to the system. However, the capability of the triggering complementary oligonucleotides in releasing cells was affected by the length of intermolecular hybridization. The length needed to be more than 20 base pairs in the current experimental setting. Notably, because the procedure of intermolecular hybridization did not involve any harsh condition, the released cells maintained the same viability as that of the cultured cells. The functionalized hydrogels also exhibited the potential to catch and release cells repeatedly. Therefore, this study demonstrates that it is promising to regulate cell-material interactions dynamically through the DNA-programmed display of DNA-protein chimeras.

  19. Test functions for three-dimensional control-volume mixed finite-element methods on irregular grids

    USGS Publications Warehouse

    Naff, R.L.; Russell, T.F.; Wilson, J.D.; ,; ,; ,; ,; ,

    2000-01-01

    Numerical methods based on unstructured grids, with irregular cells, usually require discrete shape functions to approximate the distribution of quantities across cells. For control-volume mixed finite-element methods, vector shape functions are used to approximate the distribution of velocities across cells and vector test functions are used to minimize the error associated with the numerical approximation scheme. For a logically cubic mesh, the lowest-order shape functions are chosen in a natural way to conserve intercell fluxes that vary linearly in logical space. Vector test functions, while somewhat restricted by the mapping into the logical reference cube, admit a wider class of possibilities. Ideally, an error minimization procedure to select the test function from an acceptable class of candidates would be the best procedure. Lacking such a procedure, we first investigate the effect of possible test functions on the pressure distribution over the control volume; specifically, we look for test functions that allow for the elimination of intermediate pressures on cell faces. From these results, we select three forms for the test function for use in a control-volume mixed method code and subject them to an error analysis for different forms of grid irregularity; errors are reported in terms of the discrete L2 norm of the velocity error. Of these three forms, one appears to produce optimal results for most forms of grid irregularity.

  20. B-spline goal-oriented error estimators for geometrically nonlinear rods

    DTIC Science & Technology

    2011-04-01

    Goal-oriented error estimates are reported for the output functionals q2–q4 (linear, and nonlinear with the trigonometric functions sine and cosine) in all of the tests considered, including the errors resulting from the linear, quadratic, and nonlinear outputs for B-spline orders p = 1, 2.

  1. Probabilistic performance estimators for computational chemistry methods: The empirical cumulative distribution function of absolute errors

    NASA Astrophysics Data System (ADS)

    Pernot, Pascal; Savin, Andreas

    2018-06-01

    Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, because the distributions of model errors are neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
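
    Both advocated statistics can be read directly off the empirical cumulative distribution function of unsigned errors. The error sample and threshold below are invented for illustration.

```python
def prob_below(abs_errors, threshold):
    """Statistic (1): P(|error| < threshold), the fraction of absolute
    errors in the benchmark sample that fall under the threshold."""
    return sum(1 for e in abs_errors if e < threshold) / len(abs_errors)

def error_at_confidence(abs_errors, level):
    """Statistic (2): the maximal error amplitude expected at the given
    confidence level, i.e. an empirical quantile of the unsigned errors."""
    s = sorted(abs_errors)
    k = min(len(s) - 1, int(level * len(s)))
    return s[k]

errors = [0.2, 0.5, 0.1, 1.3, 0.8, 0.4, 2.1, 0.3, 0.6, 0.9]  # e.g. kcal/mol
print(f"P(|error| < 1.0) = {prob_below(errors, 1.0):.2f}")
print(f"95% error bound  = {error_at_confidence(errors, 0.95):.2f}")
```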

  2. Orbit-determination performance of Doppler data for interplanetary cruise trajectories. Part 1: Error analysis methodology

    NASA Technical Reports Server (NTRS)

    Ulvestad, J. S.; Thurman, S. W.

    1992-01-01

    An error covariance analysis methodology is used to investigate different weighting schemes for two-way (coherent) Doppler data in the presence of transmission-media and observing-platform calibration errors. The analysis focuses on orbit-determination performance in the interplanetary cruise phase of deep-space missions. Analytical models for the Doppler observable and for transmission-media and observing-platform calibration errors are presented, drawn primarily from previous work. Previously published analytical models were improved upon by the following: (1) considering the effects of errors in the calibration of radio signal propagation through the troposphere and ionosphere as well as station-location errors; (2) modelling the spacecraft state transition matrix using a more accurate piecewise-linear approximation to represent the evolution of the spacecraft trajectory; and (3) incorporating Doppler data weighting functions that are functions of elevation angle, which reduce the sensitivity of the estimated spacecraft trajectory to troposphere and ionosphere calibration errors. The analysis is motivated by the need to develop suitable weighting functions for two-way Doppler data acquired at 8.4 GHz (X-band) and 32 GHz (Ka-band). This weighting is likely to be different from that in the weighting functions currently in use; the current functions were constructed originally for use with 2.3 GHz (S-band) Doppler data, which are affected much more strongly by the ionosphere than are the higher frequency data.
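
    A weighting function of the qualitative kind described, which down-weights low-elevation Doppler points where troposphere and ionosphere calibration errors are largest, might be sketched as follows. The 1/sin(elevation) form and the coefficients are illustrative assumptions, not the weights actually used for Deep Space Network data.

```python
import math

def doppler_sigma(elevation_deg, sigma_zenith=0.05, tropo_coeff=0.02):
    """Assumed one-sigma Doppler noise (mm/s) vs. elevation: a flat noise
    floor plus a term growing like the tropospheric slant path, which
    scales roughly as 1/sin(elevation) for a flat-layer atmosphere."""
    return sigma_zenith + tropo_coeff / math.sin(math.radians(elevation_deg))

def doppler_weight(elevation_deg):
    """Least-squares weight for a Doppler point: 1 / sigma^2."""
    return 1.0 / doppler_sigma(elevation_deg) ** 2

for elev in (10, 30, 90):
    print(f"elev {elev:2d} deg: sigma {doppler_sigma(elev):.3f} mm/s, "
          f"weight {doppler_weight(elev):.1f}")
```

    Down-weighting low-elevation data in this way reduces the sensitivity of the estimated trajectory to troposphere and ionosphere calibration errors, at the cost of extracting less geometric information from the ends of each tracking pass.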

  3. Optimal estimation of large structure model errors. [in Space Shuttle controller design

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1979-01-01

    In-flight estimation of large structure model errors is usually required as a means of detecting inevitable deficiencies in large structure controller/estimator models. The present paper deals with a least-squares formulation which seeks to minimize a quadratic functional of the model errors. The properties of these error estimates are analyzed. It is shown that an arbitrary model error can be decomposed as the sum of two components that are orthogonal in a suitably defined function space. Relations between true and estimated errors are defined. The estimates are found to be approximations that retain many of the significant dynamics of the true model errors. Current efforts are directed toward application of the analytical results to a reference large structure model.

  4. Generalized Variance Function Applications in Forestry

    Treesearch

    James Alegria; Charles T. Scott

    1991-01-01

    Adequately predicting the sampling errors of tabular data can reduce printing costs by eliminating the need to publish separate sampling error tables. Two generalized variance functions (GVFs) found in the literature and three GVFs derived for this study were evaluated for their ability to predict the sampling error of tabular forestry estimates. The recommended GVFs...
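A common GVF form relates the relative variance of an estimate to its size, relvar = a + b/x. A minimal sketch of fitting and applying such a curve follows; the functional form and the data points are illustrative, not the GVFs evaluated in the study:

```python
def fit_gvf(estimates, relvars):
    """Fit relvar = a + b/x by ordinary least squares on the predictor 1/x."""
    xs = [1.0 / x for x in estimates]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(relvars) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, relvars))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def predicted_se(x, a, b):
    """Sampling error implied by the fitted GVF for a tabular estimate x."""
    return x * (a + b / x) ** 0.5

# Invented (estimate, relative-variance) pairs lying exactly on a = 0.01, b = 4.
a, b = fit_gvf([100.0, 400.0, 1600.0], [0.05, 0.02, 0.0125])
```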

  5. Research on Standard Errors of Equating Differences. Research Report. ETS RR-10-25

    ERIC Educational Resources Information Center

    Moses, Tim; Zhang, Wenmin

    2010-01-01

    In this paper, the "standard error of equating difference" (SEED) is described in terms of originally proposed kernel equating functions (von Davier, Holland, & Thayer, 2004) and extended to incorporate traditional linear and equipercentile functions. These derivations expand on prior developments of SEEDs and standard errors of equating and…

  6. A new view for nanoparticle assemblies: from crystalline to binary cooperative complementarity.

    PubMed

    Yan, Cong; Wang, Tie

    2017-03-06

    Studies on nanoparticle assemblies and their applications have been research frontiers in nanoscience in the past few decades, and remarkable progress has been made in the synthetic strategies and techniques. Recently, the design and fabrication of nanoparticle-based nanomaterials and nanodevices with integrated and enhanced properties relative to those of the individual components has gradually become the mainstream. However, a systematic solution that provides a big picture for future development and guides the investigation of different aspects of the study of nanoparticle assemblies remains a challenge. The binary cooperative complementary principle could be an answer. The binary cooperative complementary principle is a universal discipline and can describe the fundamental properties of matter from subatomic particles to the universe. According to its definition, a variety of nanoparticle assemblies, which represent the cutting-edge work in nanoparticle studies, are naturally binary cooperative complementary materials. Therefore, the introduction of the binary cooperative complementary principle in the studies of nanoparticle assemblies could provide a unique perspective for reviewing this field and help in the design and fabrication of novel functional nanoparticle assemblies.

  7. Dissociating error-based and reinforcement-based loss functions during sensorimotor learning

    PubMed Central

    McGregor, Heather R.; Mohatarem, Ayman

    2017-01-01

    It has been proposed that the sensorimotor system uses a loss (cost) function to evaluate potential movements in the presence of random noise. Here we test this idea in the context of both error-based and reinforcement-based learning. In a reaching task, we laterally shifted a cursor relative to true hand position using a skewed probability distribution. This skewed probability distribution had its mean and mode separated, allowing us to dissociate the optimal predictions of an error-based loss function (corresponding to the mean of the lateral shifts) and a reinforcement-based loss function (corresponding to the mode). We then examined how the sensorimotor system uses error feedback and reinforcement feedback, in isolation and combination, when deciding where to aim the hand during a reach. We found that participants compensated differently to the same skewed lateral shift distribution depending on the form of feedback they received. When provided with error feedback, participants compensated based on the mean of the skewed noise. When provided with reinforcement feedback, participants compensated based on the mode. Participants receiving both error and reinforcement feedback continued to compensate based on the mean while repeatedly missing the target, despite receiving auditory, visual and monetary reinforcement feedback that rewarded hitting the target. Our work shows that reinforcement-based and error-based learning are separable and can occur independently. Further, when error and reinforcement feedback are in conflict, the sensorimotor system heavily weights error feedback over reinforcement feedback. PMID:28753634

  8. Dissociating error-based and reinforcement-based loss functions during sensorimotor learning.

    PubMed

    Cashaback, Joshua G A; McGregor, Heather R; Mohatarem, Ayman; Gribble, Paul L

    2017-07-01

    It has been proposed that the sensorimotor system uses a loss (cost) function to evaluate potential movements in the presence of random noise. Here we test this idea in the context of both error-based and reinforcement-based learning. In a reaching task, we laterally shifted a cursor relative to true hand position using a skewed probability distribution. This skewed probability distribution had its mean and mode separated, allowing us to dissociate the optimal predictions of an error-based loss function (corresponding to the mean of the lateral shifts) and a reinforcement-based loss function (corresponding to the mode). We then examined how the sensorimotor system uses error feedback and reinforcement feedback, in isolation and combination, when deciding where to aim the hand during a reach. We found that participants compensated differently to the same skewed lateral shift distribution depending on the form of feedback they received. When provided with error feedback, participants compensated based on the mean of the skewed noise. When provided with reinforcement feedback, participants compensated based on the mode. Participants receiving both error and reinforcement feedback continued to compensate based on the mean while repeatedly missing the target, despite receiving auditory, visual and monetary reinforcement feedback that rewarded hitting the target. Our work shows that reinforcement-based and error-based learning are separable and can occur independently. Further, when error and reinforcement feedback are in conflict, the sensorimotor system heavily weights error feedback over reinforcement feedback.
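The dissociation between the two loss functions can be sketched numerically: for a skewed shift distribution, the aim point that minimizes squared error is the mean, while the aim point that maximizes hit rate is the mode. The distribution and target tolerance below are invented for illustration:

```python
# A skewed lateral-shift population (cm): mode at 0, long tail to the right.
shifts = [0.0] * 5000 + [1.0] * 2000 + [2.0] * 1000 + [3.0] * 1000 + [4.0] * 1000

mean_shift = sum(shifts) / len(shifts)            # error-based prediction
mode_shift = max(set(shifts), key=shifts.count)   # reinforcement-based prediction

def expected_sq_loss(aim):
    """Average squared miss if the hand compensates by `aim` (error-based loss)."""
    return sum((s - aim) ** 2 for s in shifts) / len(shifts)

def hit_rate(aim, tol=0.5):
    """Fraction of trials landing within the target (reinforcement-based loss)."""
    return sum(abs(s - aim) <= tol for s in shifts) / len(shifts)
```

Because the distribution is skewed, the mean and mode separate, and the two criteria recommend different aim points, which is exactly the dissociation the task exploits.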

  9. Neural Prediction Errors Distinguish Perception and Misperception of Speech.

    PubMed

    Blank, Helen; Spangenberg, Marlene; Davis, Matthew H

    2018-06-11

    Humans use prior expectations to improve perception, especially of sensory signals that are degraded or ambiguous. However, if sensory input deviates from prior expectations, correct perception depends on adjusting or rejecting prior expectations. Failure to adjust or reject the prior leads to perceptual illusions, especially if there is partial overlap (hence partial mismatch) between expectations and input. With speech, "slips of the ear" occur when expectations lead to misperception. For instance, an entomologist might be more susceptible to hearing "The ants are my friends" for "The answer, my friend" (in the Bob Dylan song "Blowin' in the Wind"). Here, we contrast two mechanisms by which prior expectations may lead to misperception of degraded speech. First, clear representations of the sounds common to the prior and the input (i.e., expected sounds) may lead to incorrect confirmation of the prior. Second, insufficient representations of sounds that deviate between prior and input (i.e., prediction errors) could lead to deception. We used cross-modal predictions from written words that partially match degraded speech to compare neural responses when male and female human listeners were deceived into accepting the prior or correctly rejected it. Combined behavioural and multivariate representational similarity analysis of functional magnetic resonance imaging data shows that veridical perception of degraded speech is signalled by representations of prediction error in the left superior temporal sulcus. Instead of using top-down processes to support perception of expected sensory input, our findings suggest that the strength of neural prediction error representations distinguishes correct perception from misperception. SIGNIFICANCE STATEMENT Misperceiving spoken words is an everyday experience with outcomes that range from shared amusement to serious miscommunication. 
For hearing-impaired individuals, frequent misperception can lead to social withdrawal and isolation with severe consequences for well-being. In this work, we specify the neural mechanisms by which prior expectations - which are so often helpful for perception - can lead to misperception of degraded sensory signals. Most descriptive theories of illusory perception explain misperception as arising from a clear sensory representation of features or sounds that the prior expectations and the sensory input have in common. Our work instead provides support for a complementary proposal; namely, that misperception occurs when there is an insufficient sensory representation of the deviation between expectations and sensory signals. Copyright © 2018 the authors.

  10. Molecular radiotherapy: the NUKFIT software for calculating the time-integrated activity coefficient.

    PubMed

    Kletting, P; Schimmel, S; Kestler, H A; Hänscheid, H; Luster, M; Fernández, M; Bröer, J H; Nosske, D; Lassmann, M; Glatting, G

    2013-10-01

    Calculation of the time-integrated activity coefficient (residence time) is a crucial step in dosimetry for molecular radiotherapy. However, available software is deficient in that it is either not tailored for use in molecular radiotherapy and/or does not include all required estimation methods. The aim of this work was therefore the development and programming of an algorithm which allows for an objective and reproducible determination of the time-integrated activity coefficient and its standard error. The algorithm includes the selection of a set of fitting functions from predefined sums of exponentials and the choice of an error model for the used data. To estimate the values of the adjustable parameters, an objective function, depending on the data, the parameters of the error model, the fitting function and (if required and available) Bayesian information, is minimized. To increase reproducibility and user-friendliness, the starting values are automatically determined using a combination of curve stripping and random search. Visual inspection, the coefficient of determination, the standard error of the fitted parameters, and the correlation matrix are provided to evaluate the quality of the fit. The functions most supported by the data are determined using the corrected Akaike information criterion. The time-integrated activity coefficient is estimated by analytically integrating the fitted functions. Its standard error is determined assuming Gaussian error propagation. The software was implemented using MATLAB. To validate the proper implementation of the objective function and the fit functions, the results of NUKFIT and SAAM numerical, a commercially available software tool, were compared. The automatic search for starting values was successfully tested for reproducibility. The quality criteria applied in conjunction with the Akaike information criterion allowed the selection of suitable functions. 
Function fit parameters and their standard error estimated by using SAAM numerical and NUKFIT showed differences of <1%. The differences for the time-integrated activity coefficients were also <1% (standard error between 0.4% and 3%). In general, the application of the software is user-friendly and the results are mathematically correct and reproducible. An application of NUKFIT is presented for three different clinical examples. The software tool with its underlying methodology can be employed to objectively and reproducibly estimate the time integrated activity coefficient and its standard error for most time activity data in molecular radiotherapy.
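The underlying computation can be sketched minimally with a single-exponential fit rather than NUKFIT's model selection over sums of exponentials; the data below are invented and noise-free:

```python
import math

def fit_monoexp(times, activities):
    """Fit A * exp(-lam * t) by linear least squares on the log-activities."""
    logs = [math.log(a) for a in activities]
    n = len(times)
    mt = sum(times) / n
    ml = sum(logs) / n
    slope = (sum((t - mt) * (l - ml) for t, l in zip(times, logs))
             / sum((t - mt) ** 2 for t in times))
    lam = -slope
    A = math.exp(ml - slope * mt)
    return A, lam

def time_integrated_coefficient(A, lam):
    """Analytic integral of A*exp(-lam*t) from 0 to infinity: A / lam."""
    return A / lam

# Synthetic time-activity data: A = 100 (percent injected activity), lam = 0.1 / h.
ts = [1.0, 2.0, 4.0, 8.0, 24.0]
acts = [100.0 * math.exp(-0.1 * t) for t in ts]
A, lam = fit_monoexp(ts, acts)
```

Analytic integration of the fitted function is what makes the coefficient reproducible: no numerical quadrature or extrapolation beyond the last time point is needed.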

  11. Traditional, complementary and alternative medicine use by HIV patients a decade after public sector antiretroviral therapy roll out in South Africa: a cross sectional study.

    PubMed

    Nlooto, Manimbulu; Naidoo, Panjasaram

    2016-05-17

    The roll out of antiretroviral therapy in the South African public health sector in 2004 was preceded by the politicisation of HIV-infection, which was used to promote traditional medicine for the management of HIV/AIDS. One decade has passed since; however, questions remain on the extent of the use of traditional, complementary and alternative medicine (TCAM) by HIV-infected patients. This study therefore aimed at investigating the prevalence of the use of African traditional medicine (ATM) and complementary and alternative medicines (CAM) by adult patients in the eThekwini and UThukela Health Districts, South Africa. A cross-sectional study was carried out at 8 public health sector antiretroviral clinics using interviewer-administered semi-structured questionnaires. These were completed from April to October 2014 by adult patients who had been on antiretroviral therapy (ART) for at least three months. Use of TCAM by patients was analysed by descriptive statistics using frequencies and percentages with standard errors. Where the associated relative error was equal to or greater than 0.50, the percentage was rejected as unstable. A p-value < 0.05 was considered statistically significant. The majority of the 1748 participants were Black Africans (1685/1748, 96.40 %, SE: 0.00045), followed by Coloured (39/1748, 2.23 %, SE: 0.02364), Indian (17/1748, 0.97 %, SE: 0.02377), and White (4/1748, 0.23 %, SE: 0.02324) participants, p < 0.05. The prevalence of ATM use varied prior to (382/1748, 21.85 %) and after ART initiation (142/1748, 8.12 %), p < 0.05, specifically by Black African females both before (14.41 %) and after uptake (5.49 %), p < 0.05. Overall, 35 Black Africans, one Coloured and one Indian (37/1748, 2.12 %) reported visiting CAM practitioners for their HIV condition and related symptoms post ART. 
Despite a progressive implementation of a successful antiretroviral programme over the first decade of free antiretroviral therapy in the South African public health sector, the use of TCAM is still prevalent amongst a small percentage of HIV infected patients attending public healthcare sector antiretroviral clinics. Further research is needed to explore reasons for use and health benefits or risks experienced by the minority that uses both conventional antiretroviral therapy with TCAM.
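The screening rule for unstable percentages can be sketched as follows; this assumes the simple-random-sampling formula for the standard error of a proportion, and the counts in the example are generic rather than taken from the study:

```python
def proportion_stats(count, n):
    """Proportion, standard error, relative error, and an instability flag.

    Mirrors the rule described above: a cell whose relative error
    (SE / proportion) is equal to or greater than 0.50 is rejected as
    unstable. SE here uses the simple-random-sampling formula; the survey's
    actual design-based SE may differ.
    """
    p = count / n
    se = (p * (1 - p) / n) ** 0.5
    rel = se / p if p > 0 else float("inf")
    return p, se, rel, rel >= 0.5

# A moderately sized cell: stable. A very small cell: flagged.
p, se, rel, unstable = proportion_stats(12, 400)
```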

  12. Nonlinear optical imaging for sensitive detection of crystals in bulk amorphous powders.

    PubMed

    Kestur, Umesh S; Wanapun, Duangporn; Toth, Scott J; Wegiel, Lindsay A; Simpson, Garth J; Taylor, Lynne S

    2012-11-01

    The primary aim of this study was to evaluate the utility of second-order nonlinear imaging of chiral crystals (SONICC) to quantify crystallinity in drug-polymer blends, including solid dispersions. Second harmonic generation (SHG) can potentially exhibit scaling with crystallinity between linear and quadratic depending on the nature of the source, and thus, it is important to determine the response of pharmaceutical powders. Physical mixtures containing different proportions of crystalline naproxen and hydroxypropyl methylcellulose acetate succinate (HPMCAS) were prepared by blending, and a dispersion was produced by solvent evaporation. A custom-built SONICC instrument was used to characterize the SHG intensity as a function of the crystalline drug fraction in the various samples. Powder X-ray diffraction (PXRD) and Raman spectroscopy were used as complementary methods known to exhibit linear scaling. SONICC was able to detect crystalline drug even in the presence of 99.9 wt % HPMCAS in the binary mixtures. The calibration curve revealed a linear dynamic range with an R(2) value of 0.99 spanning the range from 0.1 to 100 wt % naproxen, with a root mean square error of prediction of 2.7%. Using the calibration curve, the errors in the validation samples were in the range of 5%-10%. Analysis of a 75 wt % HPMCAS-naproxen solid dispersion with SONICC revealed the presence of crystallites at an earlier time point than could be detected with PXRD and Raman spectroscopy. In addition, results from the crystallization kinetics experiment using SONICC were in good agreement with Raman spectroscopy and PXRD. In conclusion, SONICC has been found to be a sensitive technique for detecting low levels (0.1% or lower) of crystallinity, even in the presence of large quantities of a polymer. Copyright © 2012 Wiley-Liss, Inc.

  13. Facial emotion recognition in childhood-onset bipolar I disorder: an evaluation of developmental differences between youths and adults.

    PubMed

    Wegbreit, Ezra; Weissman, Alexandra B; Cushman, Grace K; Puzia, Megan E; Kim, Kerri L; Leibenluft, Ellen; Dickstein, Daniel P

    2015-08-01

    Bipolar disorder (BD) is a severe mental illness with high healthcare costs and poor outcomes. Increasing numbers of youths are diagnosed with BD, and many adults with BD report that their symptoms started in childhood, suggesting that BD can be a developmental disorder. Studies advancing our understanding of BD have shown alterations in facial emotion recognition both in children and adults with BD compared to healthy comparison (HC) participants, but none have evaluated the development of these deficits. To address this, we examined the effect of age on facial emotion recognition in a sample that included children and adults with confirmed childhood-onset type-I BD, with the adults having been diagnosed and followed since childhood by the Course and Outcome in Bipolar Youth study. Using the Diagnostic Analysis of Non-Verbal Accuracy, we compared facial emotion recognition errors among participants with BD (n = 66; ages 7-26 years) and HC participants (n = 87; ages 7-25 years). Complementary analyses investigated errors for child and adult faces. A significant diagnosis-by-age interaction indicated that younger BD participants performed worse than expected relative to HC participants their own age. The deficits occurred both for child and adult faces and were particularly strong for angry child faces, which were most often mistaken as sad. Our results were not influenced by medications, comorbidities/substance use, or mood state/global functioning. Younger individuals with BD are worse than their peers at this important social skill. This deficit may be an important developmentally salient treatment target - that is, for cognitive remediation to improve BD youths' emotion recognition abilities. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  14. Facial emotion recognition in childhood-onset bipolar I disorder: an evaluation of developmental differences between youths and adults

    PubMed Central

    Wegbreit, Ezra; Weissman, Alexandra B; Cushman, Grace K; Puzia, Megan E; Kim, Kerri L; Leibenluft, Ellen; Dickstein, Daniel P

    2015-01-01

    Objectives Bipolar disorder (BD) is a severe mental illness with high healthcare costs and poor outcomes. Increasing numbers of youths are diagnosed with BD, and many adults with BD report their symptoms started in childhood, suggesting BD can be a developmental disorder. Studies advancing our understanding of BD have shown alterations in facial emotion recognition in both children and adults with BD compared to healthy comparison (HC) participants, but none have evaluated the development of these deficits. To address this, we examined the effect of age on facial emotion recognition in a sample that included children and adults with confirmed childhood-onset type-I BD, with the adults having been diagnosed and followed since childhood by the Course and Outcome in Bipolar Youth study. Methods Using the Diagnostic Analysis of Non-Verbal Accuracy, we compared facial emotion recognition errors among participants with BD (n = 66; ages 7–26 years) and HC participants (n = 87; ages 7–25 years). Complementary analyses investigated errors for child and adult faces. Results A significant diagnosis-by-age interaction indicated that younger BD participants performed worse than expected relative to HC participants their own age. The deficits occurred for both child and adult faces and were particularly strong for angry child faces, which were most often mistaken as sad. Our results were not influenced by medications, comorbidities/substance use, or mood state/global functioning. Conclusions Younger individuals with BD are worse than their peers at this important social skill. This deficit may be an important developmentally salient treatment target, i.e., for cognitive remediation to improve BD youths’ emotion recognition abilities. PMID:25951752

  15. A Posteriori Error Estimation for Discontinuous Galerkin Approximations of Hyperbolic Systems

    NASA Technical Reports Server (NTRS)

    Larson, Mats G.; Barth, Timothy J.

    1999-01-01

    This article considers a posteriori error estimation of specified functionals for first-order systems of conservation laws discretized using the discontinuous Galerkin (DG) finite element method. Using duality techniques, we derive exact error representation formulas for both linear and nonlinear functionals given an associated bilinear or nonlinear variational form. Weighted residual approximations of the exact error representation formula are then proposed and numerically evaluated for Ringleb flow, an exact solution of the 2-D Euler equations.
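For linear problems the duality argument yields an exact error representation, which a small algebraic analogue can demonstrate. Here a 2x2 system stands in for the discretized conservation law, and all numbers are invented:

```python
def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x0 = (b[0] * A[1][1] - A[0][1] * b[1]) / det
    x1 = (A[0][0] * b[1] - b[0] * A[1][0]) / det
    return [x0, x1]

# Primal problem A u = f with an inexact approximation u_h, and a linear
# output functional J(u) = g . u (a toy stand-in for the PDE setting).
A = [[4.0, 1.0], [2.0, 3.0]]
f = [1.0, 2.0]
g = [1.0, 1.0]
u = solve2(A, f)
u_h = [0.0, 0.5]

# Adjoint (dual) problem A^T z = g. For linear problems the representation
# J(u) - J(u_h) = z . (f - A u_h) holds exactly; weighted-residual estimators
# approximate the analogous identity in the finite element setting.
At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
z = solve2(At, g)
residual = [f[i] - sum(A[i][j] * u_h[j] for j in range(2)) for i in range(2)]
estimate = sum(z[i] * residual[i] for i in range(2))
true_err = sum(g[i] * (u[i] - u_h[i]) for i in range(2))
```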

  16. Use of complementary and alternative medicine among US adults with and without functional limitations.

    PubMed

    Okoro, Catherine A; Zhao, Guixiang; Li, Chaoyang; Balluz, Lina S

    2012-01-01

    This study characterizes the use of complementary and alternative medicine (CAM) among adults with and without functional limitations. We also examine the reasons for using CAM and for disclosing its use to conventional medical professionals. Data were obtained from the 2007 adult CAM supplement and components of the National Health Interview Survey (n = 20,710). Adults with functional limitations used CAM more frequently than those without (48.7% vs. 35.4%; p < 0.001). Adults with functional limitations used mind-body therapies the most (27.4%) and alternative medical systems the least (4.8%). Relaxation techniques were the most common therapy used by adults with functional limitations, and they used it more often than those without limitations (24.6% vs. 13.7%; P < 0.001). More than half of the adults with functional limitations (51.3%) discussed CAM use with conventional medical professionals, compared with 37.9% of adults without limitations (p < 0.001). The main reason for CAM use was general wellness/disease prevention among adults with and without functional limitations (59.8% vs. 63.1%; P = 0.051). CAM use among adults with functional limitations is high. Health practitioners should screen for and discuss the safety and efficacy of CAM when providing health care.

  17. Role of point defects and HfO2/TiN interface stoichiometry on effective work function modulation in ultra-scaled complementary metal-oxide-semiconductor devices

    NASA Astrophysics Data System (ADS)

    Pandey, R. K.; Sathiyanarayanan, Rajesh; Kwon, Unoh; Narayanan, Vijay; Murali, K. V. R. M.

    2013-07-01

    We investigate the physical properties of a portion of the gate stack of an ultra-scaled complementary metal-oxide-semiconductor (CMOS) device. The effects of point defects, such as oxygen vacancy, oxygen, and aluminum interstitials at the HfO2/TiN interface, on the effective work function of TiN are explored using density functional theory. We compute the diffusion barriers of such point defects in the bulk TiN and across the HfO2/TiN interface. Diffusion of these point defects across the HfO2/TiN interface occurs during the device integration process. This results in variation of the effective work function and hence in the threshold voltage variation in the devices. Further, we simulate the effects of varying the HfO2/TiN interface stoichiometry on the effective work function modulation in these extremely-scaled CMOS devices. Our results show that the interface rich in nitrogen gives higher effective work function, whereas the interface rich in titanium gives lower effective work function, compared to a stoichiometric HfO2/TiN interface. This theoretical prediction is confirmed by the experiment, demonstrating over 700 meV modulation in the effective work function.

  18. Hyperactive error responses and altered connectivity in ventromedial and frontoinsular cortices in obsessive-compulsive disorder.

    PubMed

    Stern, Emily R; Welsh, Robert C; Fitzgerald, Kate D; Gehring, William J; Lister, Jamey J; Himle, Joseph A; Abelson, James L; Taylor, Stephan F

    2011-03-15

    Patients with obsessive-compulsive disorder (OCD) show abnormal functioning in ventral frontal brain regions involved in emotional/motivational processes, including anterior insula/frontal operculum (aI/fO) and ventromedial frontal cortex (VMPFC). While OCD has been associated with an increased neural response to errors, the influence of motivational factors on this effect remains poorly understood. To investigate the contribution of motivational factors to error processing in OCD and to examine functional connectivity between regions involved in the error response, functional magnetic resonance imaging data were measured in 39 OCD patients (20 unmedicated, 19 medicated) and 38 control subjects (20 unmedicated, 18 medicated) during an error-eliciting interference task where motivational context was varied using monetary incentives (null, loss, and gain). Across all errors, OCD patients showed reduced deactivation of VMPFC and greater activation in left aI/FO compared with control subjects. For errors specifically resulting in a loss, patients further hyperactivated VMPFC, as well as right aI/FO. Independent of activity associated with task events, OCD patients showed greater functional connectivity between VMPFC and regions of bilateral aI/FO and right thalamus. Obsessive-compulsive disorder patients show greater activation in neural regions associated with emotion and valuation when making errors, which could be related to altered intrinsic functional connectivity between brain networks. These results highlight the importance of emotional/motivational responses to mistakes in OCD and point to the need for further study of network interactions in the disorder. Copyright © 2011 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  19. High-accuracy process based on the corrective calibration of removal function in the magnetorheological finishing

    NASA Astrophysics Data System (ADS)

    Zhong, Xianyun; Fan, Bin; Wu, Fan

    2017-08-01

    The corrective calibration of the removal function plays an important role in the magnetorheological finishing (MRF) high-accuracy process. This paper mainly investigates the asymmetrical characteristic of the MRF removal function shape and further analyzes its influence on the surface residual error by means of an iteration algorithm and simulations. By comparing the ripple errors and convergence ratios based on the ideal MRF tool function and the deflected tool function, mathematical models for calibrating the deviation in the horizontal and flowing directions are presented. Meanwhile, revised mathematical models for the coordinate transformation of an MRF machine are also established. Furthermore, a Ø140-mm fused silica plane and a Ø196 mm, f/1:1, fused silica concave sphere sample are taken as the experiments. After two runs, the plane mirror's final surface error reaches PV 17.7 nm, RMS 1.75 nm, with a total polishing time of 16 min; after three runs, the sphere mirror's final surface error reaches RMS 2.7 nm with a total polishing time of 70 min. The convergence ratios are 96.2% and 93.5%, respectively. The spherical simulation error and the polishing result are almost consistent, which fully validates the efficiency and feasibility of the MRF removal-function error calibration method used for high-accuracy subaperture optical manufacturing.
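The convergence ratio can be read as the fraction of surface error removed by the process. A minimal sketch, assuming the common definition 1 - RMS_after/RMS_before (the paper's exact definition may differ):

```python
def convergence_ratio(rms_before, rms_after):
    """Fraction of the RMS surface error removed between two measurements."""
    return 1.0 - rms_after / rms_before

# E.g. a process that takes a surface from RMS 50 nm down to RMS 2 nm
# has removed 96% of the error.
ratio = convergence_ratio(50.0, 2.0)
```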

  20. [Algorithms of artificial neural networks--practical application in medical science].

    PubMed

    Stefaniak, Bogusław; Cholewiński, Witold; Tarkowska, Anna

    2005-12-01

    Artificial Neural Networks (ANN) may be a tool alternative and complementary to typical statistical analysis. However, in spite of many computer applications of various ANN algorithms ready for use, artificial intelligence is relatively rarely applied to data processing. This paper presents practical aspects of scientific application of ANN in medicine using widely available algorithms. Several main steps of analysis with ANN were discussed starting from material selection and dividing it into groups, to the quality assessment of obtained results at the end. The most frequent, typical reasons for errors as well as the comparison of ANN method to the modeling by regression analysis were also described.

  1. Cluster mislocation in kinematic Sunyaev-Zel'dovich effect extraction

    NASA Astrophysics Data System (ADS)

    Calafut, Victoria; Bean, Rachel; Yu, Byeonghee

    2017-12-01

    We investigate the impact of a variety of analysis assumptions that influence cluster identification and location on the kinematic Sunyaev-Zel'dovich (kSZ) pairwise momentum signal and covariance estimation. Photometric and spectroscopic galaxy tracers from SDSS, WISE, and DECaLs, spanning redshifts 0.05

  2. The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems.

    PubMed

    White, Andrew; Tolman, Malachi; Thames, Howard D; Withers, Hubert Rodney; Mason, Kathy A; Transtrum, Mark K

    2016-12-01

    We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes it is unknown which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying the underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than focusing on parameter estimation in a single model.

  3. The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems

    PubMed Central

    Tolman, Malachi; Thames, Howard D.; Mason, Kathy A.

    2016-01-01

    We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes it is unknown which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying the underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than focusing on parameter estimation in a single model. PMID:27923060

  4. Functional language shift to the right hemisphere in patients with language-eloquent brain tumors.

    PubMed

    Krieg, Sandro M; Sollmann, Nico; Hauck, Theresa; Ille, Sebastian; Foerschler, Annette; Meyer, Bernhard; Ringel, Florian

    2013-01-01

    Language function is mainly located within the left hemisphere of the brain, especially in right-handed subjects. However, functional MRI (fMRI) has demonstrated shifts of language organization to the right hemisphere in patients with left-sided perisylvian lesions. Because intracerebral lesions can impair fMRI, this study was designed to investigate human language plasticity with a virtual lesion model using repetitive navigated transcranial magnetic stimulation (rTMS). Fifteen patients with lesions of left-sided language-eloquent brain areas and 50 healthy, purely right-handed participants underwent bilateral rTMS language mapping via an object-naming task. All patients were proven to have left-sided language function during awake surgery. The rTMS-induced language errors were categorized into 6 different error types. The error ratio (induced errors/number of stimulations) was determined for each brain region on both hemispheres. A hemispheric dominance ratio was then defined for each region as the quotient of the error ratios (left/right) of the corresponding areas of both hemispheres (ratio >1 = left dominant; ratio <1 = right dominant). Patients with language-eloquent lesions showed a statistically significantly lower ratio than healthy participants for "all errors" and "all errors without hesitations", indicating a higher participation of the right hemisphere in language function. Yet, there was no cortical region with a pronounced difference in language dominance compared to the whole hemisphere. This is the first study to show, by means of an anatomically accurate virtual lesion model, that a shift of language function to the non-dominant hemisphere can occur.
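The error ratio and hemispheric dominance ratio described above reduce to simple quotients; a minimal sketch (the function names and example counts are illustrative, not the study's):

```python
def error_ratio(induced_errors, stimulations):
    """Error ratio for one brain region: induced errors per stimulation."""
    return induced_errors / stimulations

def dominance_ratio(left_errors, left_stims, right_errors, right_stims):
    """Quotient of left/right error ratios for corresponding regions;
    >1 suggests left dominance, <1 right dominance."""
    return (error_ratio(left_errors, left_stims)
            / error_ratio(right_errors, right_stims))

# Hypothetical counts: 12 errors in 100 left stimulations, 6 in 100 right.
ratio = dominance_ratio(12, 100, 6, 100)
print("left dominant" if ratio > 1 else "right dominant")  # prints "left dominant"
```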

  5. Functional heterogeneity of conflict, error, task-switching, and unexpectedness effects within medial prefrontal cortex.

    PubMed

    Nee, Derek Evan; Kastner, Sabine; Brown, Joshua W

    2011-01-01

    The last decade has seen considerable discussion regarding a theoretical account of medial prefrontal cortex (mPFC) function with particular focus on the anterior cingulate cortex. The proposed theories have included conflict detection, error likelihood prediction, volatility monitoring, and several distinct theories of error detection. Arguments for and against particular theories often treat mPFC as functionally homogeneous, or at least nearly so, despite some evidence for distinct functional subregions. Here we used functional magnetic resonance imaging (fMRI) to simultaneously contrast multiple effects of error, conflict, and task-switching that have been individually construed in support of various theories. We found overlapping yet functionally distinct subregions of mPFC, with activations related to dominant error, conflict, and task-switching effects successively found along a rostral-ventral to caudal-dorsal gradient within medial prefrontal cortex. Activations in the rostral cingulate zone (RCZ) were strongly correlated with the unexpectedness of outcomes suggesting a role in outcome prediction and preparing control systems to deal with anticipated outcomes. The results as a whole support a resolution of some ongoing debates in that distinct theories may each pertain to corresponding distinct yet overlapping subregions of mPFC. Copyright © 2010 Elsevier Inc. All rights reserved.

  6. Entropy of space-time outcome in a movement speed-accuracy task.

    PubMed

    Hsieh, Tsung-Yu; Pacheco, Matheus Maia; Newell, Karl M

    2015-12-01

    The experiment reported here was set up to investigate the space-time entropy of movement outcome as a function of a range of spatial (10, 20 and 30 cm) and temporal (250-2500 ms) criteria in a discrete aiming task. The variability and information entropy of the spatial and temporal movement errors, considered separately, increased and decreased on their respective dimensions as movement velocity increased. However, the joint space-time entropy was lowest when the relative contributions of the spatial and temporal task criteria were comparable (i.e., the mid-range of space-time constraints), and it increased with a greater trade-off between spatial and temporal task demands, revealing a U-shaped function across space-time task criteria. The traditional speed-accuracy functions of spatial error and temporal error considered independently mapped onto this joint space-time U-shaped entropy function. The trade-off in movement tasks with joint space-time criteria is between spatial error and timing error, rather than between movement speed and accuracy. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Pre-University Students' Errors in Integration of Rational Functions and Implications for Classroom Teaching

    ERIC Educational Resources Information Center

    Yee, Ng Kin; Lam, Toh Tin

    2008-01-01

    This paper reports on students' errors in performing integration of rational functions, a topic of calculus in the pre-university mathematics classrooms. Generally the errors could be classified as those due to the students' weak algebraic concepts and their lack of understanding of the concept of integration. With the students' inability to link…

  8. Error Patterns with Fraction Calculations at Fourth Grade as a Function of Students' Mathematics Achievement Status

    ERIC Educational Resources Information Center

    Schumacher, Robin F.; Malone, Amelia S.

    2017-01-01

    The goal of this study was to describe fraction-calculation errors among fourth-grade students and to determine whether error patterns differed as a function of problem type (addition vs. subtraction; like vs. unlike denominators), orientation (horizontal vs. vertical), or mathematics-achievement status (low-, average-, or high-achieving). We…

  9. Error rate information in attention allocation pilot models

    NASA Technical Reports Server (NTRS)

    Faulkner, W. H.; Onstott, E. D.

    1977-01-01

    The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed to create both symmetric and asymmetric two-axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve the performance of the full model, whose attention-shifting algorithm included both error and error-rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.
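The contrast between the two urgency formulations can be sketched as follows (the linear form and weights are hypothetical; the Northrop model's actual urgency functions are not given in the abstract):

```python
def urgency(error, error_rate, w_e=1.0, w_edot=0.0):
    """Urgency of attending to one axis; w_edot = 0 gives the error-only
    variant, w_edot > 0 adds the error-rate term of the full model."""
    return w_e * abs(error) + w_edot * abs(error_rate)

def allocate_attention(axes, w_edot=0.0):
    """Pick the axis with the highest urgency; axes maps
    axis name -> (error, error_rate)."""
    return max(axes, key=lambda a: urgency(*axes[a], w_edot=w_edot))

axes = {"pitch": (0.2, 1.5), "roll": (0.3, 0.1)}
# The error-only model attends to roll (larger error); adding the
# error-rate term shifts attention to pitch, whose error is growing fast.
print(allocate_attention(axes))              # prints "roll"
print(allocate_attention(axes, w_edot=1.0))  # prints "pitch"
```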

  10. Cluster mislocation in kinematic Sunyaev-Zel'dovich (kSZ) effect extraction

    NASA Astrophysics Data System (ADS)

    Calafut, Victoria Rose; Bean, Rachel; Yu, Byeonghee

    2018-01-01

    We investigate the impact of a variety of analysis assumptions that influence cluster identification and location on the kSZ pairwise momentum signal and covariance estimation. Photometric and spectroscopic galaxy tracers from SDSS, WISE, and DECaLs, spanning redshifts 0.05

  11. Uses of tuberculosis mortality surveillance to identify programme errors and improve database reporting.

    PubMed

    Selig, L; Guedes, R; Kritski, A; Spector, N; Lapa E Silva, J R; Braga, J U; Trajman, A

    2009-08-01

    In 2006, 848 persons died from tuberculosis (TB) in Rio de Janeiro, Brazil, corresponding to a mortality rate of 5.4 per 100 000 population. No specific TB death surveillance actions are currently in place in Brazil. Two public general hospitals with large open emergency rooms in Rio de Janeiro City. To evaluate the contribution of TB death surveillance in detecting gaps in TB control. We conducted a survey of TB deaths from September 2005 to August 2006. Records of TB-related deaths and deaths due to undefined causes were investigated. Complementary data were gathered from the mortality and TB notification databases. Seventy-three TB-related deaths were investigated. Transmission hazards were identified among firefighters, health care workers and in-patients. Management errors included failure to isolate suspected cases, to confirm TB, to correct drug doses in underweight patients and to trace contacts. Following the survey, 36 cases that had not previously been notified were included in the national TB notification database and the outcome of 29 notified cases was corrected. TB mortality surveillance can contribute to TB monitoring and evaluation by detecting correctable and specific programme- and hospital-based care errors, and by improving the accuracy of TB database reporting. Specific local and programmatic interventions can be proposed as a result.

  12. Sensory training with vibration-induced kinesthetic illusions improves proprioceptive integration in patients with Parkinson's disease.

    PubMed

    Ribot-Ciscar, Edith; Aimonetti, Jean-Marc; Azulay, Jean-Philippe

    2017-12-15

    The present study investigates whether proprioceptive training, based on kinesthetic illusions, can help re-educate the processing of muscle proprioceptive input, which is impaired in patients with Parkinson's disease (PD). The processing of proprioceptive input before and after training was evaluated by determining the error in the amplitude of a voluntary dorsiflexion ankle movement (20°) induced by applying vibration to the tendon of the gastrocnemius-soleus muscle (a vibration-induced movement error). The training consisted of the subjects focusing their attention on a series of illusory movements of the ankle. Eleven PD patients and eleven age-matched control subjects were tested. Before training, vibration reduced dorsiflexion amplitude in controls by 4.3° (P<0.001); conversely, vibration had no significant effect on movement amplitude in PD patients (reduction of 2.1°, P=0.20). After training, vibration significantly reduced the estimated movement amplitude in PD patients, by 5.3° (P=0.01). This re-emergence of a vibration-induced error leads us to conclude that proprioceptive training based on kinesthetic illusions is a simple means of re-educating the processing of muscle proprioceptive input in PD patients. Such complementary training should be included in rehabilitation programs that presently focus on improving balance and motor performance. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Dependence of the compensation error on the error of a sensor and corrector in an adaptive optics phase-conjugating system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kiyko, V V; Kislov, V I; Ofitserov, E N

    2015-08-31

    In the framework of a statistical model of an adaptive optics system (AOS) of phase conjugation, three algorithms based on an integrated mathematical approach are considered, each of them intended for minimisation of one of the following characteristics: the sensor error (in the case of an ideal corrector), the corrector error (in the case of ideal measurements) and the compensation error (with regard to discreteness and measurement noises and to incompleteness of the system of response functions of the corrector actuators). Functional and statistical relationships between the algorithms are studied, and a relation is derived that allows the mean-square compensation error to be calculated as a function of the errors of the sensor and corrector with an accuracy better than 10%. Because in adjusting the AOS parameters it is reasonable to proceed from the equality of the sensor and corrector errors, in the case where the Hartmann sensor is used as a wavefront sensor, the required number of actuators in the absence of a noise component in the sensor error turns out to be 1.5-2.5 times less than the number of counts, and that difference grows with increasing measurement noise.

  14. Goal-oriented explicit residual-type error estimates in XFEM

    NASA Astrophysics Data System (ADS)

    Rüter, Marcus; Gerasimov, Tymofiy; Stein, Erwin

    2013-08-01

    A goal-oriented a posteriori error estimator is derived to control the error obtained while approximately evaluating a quantity of engineering interest, represented in terms of a given linear or nonlinear functional, using extended finite elements of Q1 type. The same approximation method is used to solve the dual problem as required for the a posteriori error analysis. It is shown that for both problems to be solved numerically the same singular enrichment functions can be used. The goal-oriented error estimator presented can be classified as explicit residual type, i.e. the residuals of the approximations are used directly to compute upper bounds on the error of the quantity of interest. This approach therefore extends the explicit residual-type error estimator for classical energy norm error control as recently presented in Gerasimov et al. (Int J Numer Meth Eng 90:1118-1155, 2012a). Without loss of generality, the a posteriori error estimator is applied to the model problem of linear elastic fracture mechanics. Thus, emphasis is placed on the fracture criterion, here the J-integral, as the chosen quantity of interest. Finally, various illustrative numerical examples are presented where, on the one hand, the error estimator is compared to its finite element counterpart and, on the other hand, improved enrichment functions, as introduced in Gerasimov et al. (2012b), are discussed.

  15. Comprehensive Anti-error Study on Power Grid Dispatching Based on Regional Regulation and Integration

    NASA Astrophysics Data System (ADS)

    Zhang, Yunju; Chen, Zhongyi; Guo, Ming; Lin, Shunsheng; Yan, Yinyang

    2018-01-01

    With the growing capacity of the power system and the trend toward large units and high voltages, dispatching operations are becoming more frequent and complicated, and the probability of operation errors increases. To address the lack of anti-error functions, the single scheduling function, and the low working efficiency of the technical support system under regional regulation and integration, this paper proposes an integrated, cloud-computing-based architecture for power-grid dispatching anti-error checking. An integrated error-prevention system spanning the Energy Management System (EMS) and the Operation Management System (OMS) has also been constructed. The system architecture has good scalability and adaptability; it can improve computational efficiency, reduce the cost of system operation and maintenance, and enhance the capability of regional regulation and anti-error checking, with broad development prospects.

  16. Bidirectional transfer between joint and individual actions in a task of discrete force production.

    PubMed

    Masumoto, Junya; Inui, Nobuyuki

    2017-07-01

    The present study examined bidirectional learning transfer between joint and individual actions involving discrete isometric force production with the right index finger. To examine the effects of practice of the joint action on performance of the individual action, participants performed a pre-test (individual condition), practice blocks (joint condition), and a post-test (individual condition) (IJI task). To examine the effects of practice of the individual action on performance of the joint action, participants performed a pre-test (joint condition), practice blocks (individual condition), and a post-test (joint condition) (JIJ task). Whereas in the individual condition one participant made pressing movements with a target peak force of 10% maximum voluntary contraction (MVC), in the joint condition two participants together produced a target force equal to the sum of their individual 10% MVC values. In both the IJI and JIJ tasks, absolute errors and standard deviations of peak force were smaller at post-test than at pre-test, indicating bidirectional transfer between individual and joint conditions for force accuracy and variability. Although the negative correlation between the forces produced by the two participants (complementary force production) became stronger over the practice blocks in the IJI task, there was no pre- to post-test difference in the negative correlation in the JIJ task. In the JIJ task, the improvement in force accuracy and variability during the individual action did not facilitate complementary force production during the joint action. This indicates that practice performed by two people is essential for complementary force production in joint action.

  17. Evaluating the validity of the Work Role Functioning Questionnaire (Canadian French version) using classical test theory and item response theory.

    PubMed

    Hong, Quan Nha; Coutu, Marie-France; Berbiche, Djamal

    2017-01-01

    The Work Role Functioning Questionnaire (WRFQ) was developed to assess workers' perceived ability to perform job demands and is used to monitor presenteeism. Still, few studies on its validity can be found in the literature. The purpose of this study was to assess the items and factorial composition of the Canadian French version of the WRFQ (WRFQ-CF). Two measurement approaches were used to test the WRFQ-CF: Classical Test Theory (CTT) and non-parametric Item Response Theory (IRT). A total of 352 completed questionnaires were analyzed. Four-factor and three-factor models were tested and showed good fit with 14 items (Root Mean Square Error of Approximation (RMSEA) = 0.06, Standardized Root Mean Square Residual (SRMR) = 0.04, Bentler Comparative Fit Index (CFI) = 0.98) and 17 items (RMSEA = 0.059, SRMR = 0.048, CFI = 0.98), respectively. Using IRT, 13 problematic items were identified, of which 9 were common with CTT. This study tested different models, with fewer problematic items found in a three-factor model. Using non-parametric IRT and CTT for item purification gave complementary results. IRT is still scarcely used and can be an interesting alternative method to enhance the quality of a measurement instrument. More studies are needed on the WRFQ-CF to refine its items and factorial composition.

  18. Impaired driving from medical conditions: A 70-year-old man trying to decide if he should continue driving

    PubMed Central

    Rizzo, Matthew

    2012-01-01

    Some medical disorders can impair performance, increasing the risk of driving safety errors that can lead to vehicle crashes. The causal pathway often involves a concatenation of factors or events, some of which can be prevented or controlled. Effective interventions can operate before, during, or after a crash occurs at the levels of driver capacity, vehicle and road design, and public policy. A variety of systemic, neurological, psychiatric, and developmental disorders put drivers at potential increased risk of a car crash in the short or long term. Medical diagnosis and age alone are usually insufficient criteria for determining fitness to drive. Strategies are needed for determining what types and levels of reduced function provide a threshold for disqualification in drivers with medical disorders. Evidence of decreased mileage, self-restriction to driving in certain situations, collisions, moving violations, aggressive driving, sleepiness, alcohol abuse, metabolic disorders, and multiple medications may trigger considerations of driver safety. A general framework for evaluating driver fitness relies on a functional evaluation of multiple domains (cognitive, motor, perceptual, and psychiatric) that are important for safe driving and can be applied across many disorders, including conditions that have rarely been studied with respect to driving, and in patients with multiple conditions and medications. Neurocognitive tests, driving simulation, and road tests provide complementary sources of evidence to evaluate driver safety. No single test is sufficient to determine who should drive and who should not. PMID:21364126

  19. Impaired driving from medical conditions: a 70-year-old man trying to decide if he should continue driving.

    PubMed

    Rizzo, Matthew

    2011-03-09

    Some medical disorders can impair performance, increasing the risk of driving safety errors that can lead to vehicle crashes. The causal pathway often involves a concatenation of factors or events, some of which can be prevented or controlled. Effective interventions can operate before, during, or after a crash occurs at the levels of driver capacity, vehicle and road design, and public policy. A variety of systemic, neurological, psychiatric, and developmental disorders put drivers at potential increased risk of a car crash in the short or long term. Medical diagnosis and age alone are usually insufficient criteria for determining fitness to drive. Strategies are needed for determining what types and levels of reduced function provide a threshold for disqualification in drivers with medical disorders. Evidence of decreased mileage, self-restriction to driving in certain situations, collisions, moving violations, aggressive driving, sleepiness, alcohol abuse, metabolic disorders, and multiple medications may trigger considerations of driver safety. A general framework for evaluating driver fitness relies on a functional evaluation of multiple domains (cognitive, motor, perceptual, and psychiatric) that are important for safe driving and can be applied across many disorders, including conditions that have rarely been studied with respect to driving, and in patients with multiple conditions and medications. Neurocognitive tests, driving simulation, and road tests provide complementary sources of evidence to evaluate driver safety. No single test is sufficient to determine who should drive and who should not.

  20. Decision-Making under Risk of Loss in Children

    PubMed Central

    Steelandt, Sophie; Broihanne, Marie-Hélène; Romain, Amélie; Thierry, Bernard; Dufour, Valérie

    2013-01-01

    In human adults, judgment errors are known to often lead to irrational decision-making in risky contexts. While these errors can affect the accuracy of profit evaluation, they may have once enhanced survival in dangerous contexts following a “better be safe than sorry” rule of thumb. Such a rule can be critical for children, and it could develop early on. Here, we investigated the rationality of choices and the possible occurrence of judgment errors in children aged 3 to 9 years when exposed to a risky trade. Children were allocated a piece of cookie that they could either keep or risk in exchange for the content of one cup among 6, visible in front of them. In the cups, cookies could be of larger, equal or smaller size than the initial allocation. Chances of losing or winning were manipulated by presenting different combinations of cookie sizes in the cups (for example, 3 large, 2 equal and 1 small cookie). We investigated the rationality of children's responses using the theoretical models of Expected Utility Theory (EUT) and Cumulative Prospect Theory (CPT). Children aged 3 to 4 years were unable to discriminate the profitability of exchanging in the different combinations. From 5 years, children were better at maximizing their benefit in each combination, their decisions were negatively influenced by the probability of losing, and they exhibited a framing effect, a judgment error found in adults. Comparing the data with EUT indicated that children aged over 5 were risk-seekers but also revealed inconsistencies in their choices. According to the complementary model, CPT, they exhibited loss aversion, a pattern also found in adults. These findings confirm that adult-like judgment errors occur in children, which suggests that they possess a survival value. PMID:23349682

  1. Decision-making under risk of loss in children.

    PubMed

    Steelandt, Sophie; Broihanne, Marie-Hélène; Romain, Amélie; Thierry, Bernard; Dufour, Valérie

    2013-01-01

    In human adults, judgment errors are known to often lead to irrational decision-making in risky contexts. While these errors can affect the accuracy of profit evaluation, they may have once enhanced survival in dangerous contexts following a "better be safe than sorry" rule of thumb. Such a rule can be critical for children, and it could develop early on. Here, we investigated the rationality of choices and the possible occurrence of judgment errors in children aged 3 to 9 years when exposed to a risky trade. Children were allocated a piece of cookie that they could either keep or risk in exchange for the content of one cup among 6, visible in front of them. In the cups, cookies could be of larger, equal or smaller size than the initial allocation. Chances of losing or winning were manipulated by presenting different combinations of cookie sizes in the cups (for example, 3 large, 2 equal and 1 small cookie). We investigated the rationality of children's responses using the theoretical models of Expected Utility Theory (EUT) and Cumulative Prospect Theory (CPT). Children aged 3 to 4 years were unable to discriminate the profitability of exchanging in the different combinations. From 5 years, children were better at maximizing their benefit in each combination, their decisions were negatively influenced by the probability of losing, and they exhibited a framing effect, a judgment error found in adults. Comparing the data with EUT indicated that children aged over 5 were risk-seekers but also revealed inconsistencies in their choices. According to the complementary model, CPT, they exhibited loss aversion, a pattern also found in adults. These findings confirm that adult-like judgment errors occur in children, which suggests that they possess a survival value.
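The profitability of exchanging in a given cup combination can be illustrated with a simple expected-value calculation under risk neutrality (the cookie values below are hypothetical assumptions, not the study's stimuli):

```python
def expected_value(cup_values):
    """Expected cookie value from exchanging: the mean over the
    equally likely cups."""
    return sum(cup_values) / len(cup_values)

# Hypothetical values: large = 2, equal = 1 (the initial allocation),
# small = 0.5. Combination "3 large, 2 equal and 1 small":
cups = [2, 2, 2, 1, 1, 0.5]
keep = 1  # value of keeping the initial allocation
# A risk-neutral EUT maximizer exchanges whenever the expected value
# of the cups exceeds the value of keeping.
print(expected_value(cups) > keep)  # prints "True"
```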

  2. Sensitivity analysis of hybrid thermoelastic techniques

    Treesearch

    W.A. Samad; J.M. Considine

    2017-01-01

    Stress functions have been used as a complementary tool to support experimental techniques, such as thermoelastic stress analysis (TSA) and digital image correlation (DIC), in an effort to evaluate the complete and separate full-field stresses of loaded structures. The need for such coupling between experimental data and stress functions is due to the fact that...

  3. Analysing Symbolic Expressions in Secondary School Chemistry: Their Functions and Implications for Pedagogy

    ERIC Educational Resources Information Center

    Liu, Yu; Taber, Keith S.

    2016-01-01

    Symbolic expressions are essential resources for producing knowledge, yet they are a source of learning difficulties in chemistry education. This study aims to employ social semiotics to analyse the symbolic representation of chemistry from two complementary perspectives, referred to here as contextual (i.e., historical) and functional. First, the…

  4. Clinical application of optical coherence tomography in combination with functional diagnostics: advantages and limitations for diagnosis and assessment of therapy outcome in central serous chorioretinopathy.

    PubMed

    Schliesser, Joshua A; Gallimore, Gary; Kunjukunju, Nancy; Sabates, Nelson R; Koulen, Peter; Sabates, Felix N

    2014-01-01

    While identifying functional and structural parameters of the retina in central serous chorioretinopathy (CSCR) patients, this study investigated how an optical coherence tomography (OCT)-based diagnosis can be significantly supplemented with functional diagnostic tools and to what degree the determination of disease severity and therapy outcome can benefit from diagnostics complementary to OCT. CSCR patients were evaluated prospectively with microperimetry (MP) and spectral domain optical coherence tomography (SD-OCT) to determine retinal sensitivity function and retinal thickness as outcome measures along with measures of visual acuity (VA). Patients received clinical care that involved focal laser photocoagulation or pharmacotherapy targeting inflammation and neovascularization. Correlation of clinical parameters with a focus on functional parameters, VA, and mean retinal sensitivity, as well as on the structural parameter mean retinal thickness, showed that functional measures were similar in diagnostic power. A moderate correlation was found between OCT data and the standard functional assessment of VA; however, a strong correlation between OCT and MP data showed that diagnostic measures cannot always be used interchangeably, but that complementary use is of higher clinical value. The study indicates that integrating SD-OCT with MP provides a more complete diagnosis with high clinical relevance for complex, difficult to quantify diseases such as CSCR.

  5. Quantization-Based Adaptive Actor-Critic Tracking Control With Tracking Error Constraints.

    PubMed

    Fan, Quan-Yong; Yang, Guang-Hong; Ye, Dan

    2018-04-01

    In this paper, the problem of adaptive actor-critic (AC) tracking control is investigated for a class of continuous-time nonlinear systems with unknown nonlinearities and quantized inputs. Different from the existing results based on reinforcement learning, the tracking error constraints are considered and new critic functions are constructed to improve the performance further. To ensure that the tracking errors keep within the predefined time-varying boundaries, a tracking error transformation technique is used to constitute an augmented error system. Specific critic functions, rather than the long-term cost function, are introduced to supervise the tracking performance and tune the weights of the AC neural networks (NNs). A novel adaptive controller with a special structure is designed to reduce the effect of the NN reconstruction errors, input quantization, and disturbances. Based on the Lyapunov stability theory, the boundedness of the closed-loop signals and the desired tracking performance can be guaranteed. Finally, simulations on two connected inverted pendulums are given to illustrate the effectiveness of the proposed method.

  6. Characteristics of Single-Event Upsets in a Fabric Switch (AD8151)

    NASA Technical Reports Server (NTRS)

    Buchner, Stephen; Carts, Martin A.; McMorrow, Dale; Kim, Hak; Marshall, Paul W.; LaBel, Kenneth A.

    2003-01-01

    Two types of single-event effects, bit errors and single-event functional interrupts, were observed during heavy-ion testing of the AD8151 crosspoint switch. Bit errors occurred in bursts, with the average number of bits in a burst depending on both the ion LET and the data rate. A pulsed laser was used to identify the locations on the chip where the bit errors and single-event functional interrupts occurred. Bit errors originated in the switches, drivers, and output buffers. Single-event functional interrupts occurred when the laser was focused on the second-rank latch containing the data specifying the state of each switch in the 33x17 matrix.

  7. An optimized method to calculate error correction capability of tool influence function in frequency domain

    NASA Astrophysics Data System (ADS)

    Wang, Jia; Hou, Xi; Wan, Yongjian; Shi, Chunyan

    2017-10-01

    An optimized method to calculate the error correction capability of the tool influence function (TIF) under given polishing conditions is proposed, based on a smoothing spectral function. The basic mathematical model for this method is established in theory. A set of polishing experimental data obtained with a rigid conformal tool is used to validate the optimized method. The calculated results quantitatively indicate the error correction capability of the TIF for errors of different spatial frequencies under given polishing conditions. Comparative analysis shows that the optimized method is simpler in form and achieves the same accuracy as the previous method in less computing time.

  8. Functional Assays for Neurotoxicity Testing

    EPA Science Inventory

    Neurobehavioral and pathological evaluations of the nervous system are complementary components of basic research and toxicity testing of pharmaceutical and environmental chemicals. While neuropathological assessments provide insight as to cellular changes in neurons, behavioral ...

  10. Error Patterns with Fraction Calculations at Fourth Grade as a Function of Students' Mathematics Achievement Status

    ERIC Educational Resources Information Center

    Schumacher, Robin F.; Malone, Amelia S.

    2017-01-01

    The goal of the present study was to describe fraction-calculation errors among 4th-grade students and determine whether error patterns differed as a function of problem type (addition vs. subtraction; like vs. unlike denominators), orientation (horizontal vs. vertical), or mathematics-achievement status (low- vs. average- vs. high-achieving). We…

  11. Development of a scale of executive functioning for the RBANS.

    PubMed

    Spencer, Robert J; Kitchen Andren, Katherine A; Tolle, Kathryn A

    2018-01-01

    The Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) is a cognitive battery that contains scales of several cognitive abilities, but no scale in the instrument is exclusively dedicated to executive functioning. Although the subtests allow for observation of executive-type errors, each error has a fairly low base rate, and healthy and clinical normative data are lacking on the frequency of these types of errors, making their significance difficult to interpret in isolation. The aim of this project was to create an RBANS executive errors scale (RBANS EE) with items comprising qualitatively dysexecutive errors committed throughout the test. Participants included Veterans referred for outpatient neuropsychological testing. Items were initially selected based on theoretical literature and were retained based on item-total correlations. The RBANS EE (a percentage calculated by dividing the number of dysexecutive errors by the total number of responses) was moderately related to each of seven established measures of executive functioning and was strongly predictive of dichotomous classification of executive impairment. Thus, the scale had solid concurrent validity, justifying its use as a supplementary scale. The RBANS EE requires no additional administration time and can provide a quantified measure of otherwise unmeasured aspects of executive functioning.

  12. Evaluate error correction ability of magnetorheological finishing by smoothing spectral function

    NASA Astrophysics Data System (ADS)

    Wang, Jia; Fan, Bin; Wan, Yongjian; Shi, Chunyan; Zhuo, Bin

    2014-08-01

    Power Spectral Density (PSD) is entrenched in optics design and manufacturing as a characterization of mid-to-high spatial frequency (MHSF) errors. The Smoothing Spectral Function (SSF) is a newly proposed parameter, based on the PSD, for evaluating the error correction ability of computer controlled optical surfacing (CCOS) technologies. As a typical deterministic, sub-aperture finishing technology based on CCOS, magnetorheological finishing (MRF) inevitably introduces MHSF errors. SSF is employed to study the correction ability of the MRF process for errors of different spatial frequencies. The surface figures and PSD curves of a work-piece machined by MRF are presented. By calculating the SSF curve, the correction ability of MRF for different spatial frequency errors is indicated as a normalized numerical value.
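
    As a rough illustration of how such a normalized value could be computed, one can compare the PSD of the surface error before and after an MRF run at each spatial frequency. The "fraction of PSD removed" definition below is an assumption for illustration; the paper's exact normalization is not given in the abstract and may differ:

```python
def smoothing_spectral_function(psd_before, psd_after):
    """Per-spatial-frequency error-correction capability, normalized so that
    1 means the error power at that frequency was fully removed and 0 means
    it was untouched. Illustrative definition only."""
    return [1.0 - after / before if before > 0.0 else 0.0
            for before, after in zip(psd_before, psd_after)]
```

    A value near 1 at low spatial frequencies and near 0 at high ones would quantify the usual observation that sub-aperture tools correct low-frequency errors well but leave MHSF errors behind.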

  13. Testing the generalized complementary relationship of evaporation with continental-scale long-term water-balance data

    NASA Astrophysics Data System (ADS)

    Szilagyi, Jozsef; Crago, Richard; Qualls, Russell J.

    2016-09-01

    The original and revised versions of the generalized complementary relationship (GCR) of evaporation (ET) were tested with six-digit Hydrologic Unit Code (HUC6) level long-term (1981-2010) water-balance data (sample size of 334). The two versions of the GCR were calibrated with Parameter-Elevation Regressions on Independent Slopes Model (PRISM) mean annual precipitation (P) data and validated against water-balance ET (ETwb) as the difference of mean annual HUC6-averaged P and United States Geological Survey HUC6 runoff (Q) rates. The original GCR overestimates P in about 18% of the PRISM grid points covering the contiguous United States, compared with 12% for the revised version. With HUC6-averaged data the original version has a bias of -25 mm yr-1 vs the revised version's -17 mm yr-1, and it tends to underestimate ETwb at high values more strongly than the revised one (slope of the best-fit line is 0.78 vs 0.91). At the same time it slightly outperforms the revised version in terms of the linear correlation coefficient (0.94 vs 0.93) and the root-mean-square error (90 vs 92 mm yr-1).
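
    The validation statistics quoted above (bias, RMSE, linear correlation, and the slope of the best-fit line) can be recomputed from paired modeled and water-balance ET values, where ETwb = P - Q for each HUC6 basin. A minimal sketch of that re-computation (data handling is hypothetical):

```python
import math

def validation_stats(model, obs):
    """Bias, RMSE, Pearson correlation, and least-squares slope of obs on
    model - an illustrative re-computation of the statistics in the abstract."""
    n = len(model)
    mean_m = sum(model) / n
    mean_o = sum(obs) / n
    bias = mean_m - mean_o                      # model minus observed (mm/yr)
    rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / n)
    cov = sum((m - mean_m) * (o - mean_o) for m, o in zip(model, obs))
    var_m = sum((m - mean_m) ** 2 for m in model)
    var_o = sum((o - mean_o) ** 2 for o in obs)
    corr = cov / math.sqrt(var_m * var_o)
    slope = cov / var_m                         # best-fit line of obs vs model
    return bias, rmse, corr, slope
```

    A slope below 1 with near-perfect correlation, as reported for the original GCR, indicates a systematic underestimate at the high end rather than random scatter.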

  14. Complementary nonparametric analysis of covariance for logistic regression in a randomized clinical trial setting.

    PubMed

    Tangen, C M; Koch, G G

    1999-03-01

    In the randomized clinical trial setting, controlling for covariates is expected to produce variance reduction for the treatment parameter estimate and to adjust for random imbalances of covariates between the treatment groups. However, for the logistic regression model, variance reduction is not obviously obtained. This can lead to concerns about the assumptions of the logistic model. We introduce a complementary nonparametric method for covariate adjustment. It provides results that are usually compatible with expectations for analysis of covariance. The only assumptions required are based on randomization and sampling arguments. The resulting treatment parameter is an (unconditional) population average log-odds ratio that has been adjusted for random imbalance of covariates. Data from a randomized clinical trial are used to compare results from the traditional maximum likelihood logistic method with those from the nonparametric logistic method. We examine treatment parameter estimates, corresponding standard errors, and significance levels in models with and without covariate adjustment. In addition, we discuss differences between unconditional population average treatment parameters and conditional subpopulation average treatment parameters. Additional features of the nonparametric method, including stratified (multicenter) and multivariate (multivisit) analyses, are illustrated. Extensions of this methodology to the proportional odds model are also made.

  15. An evaluation of complementary relationship assumptions

    NASA Astrophysics Data System (ADS)

    Pettijohn, J. C.; Salvucci, G. D.

    2004-12-01

    Complementary relationship (CR) models, based on Bouchet's (1963) somewhat heuristic CR hypothesis, are advantageous in their sole reliance on readily available climatological data. While Bouchet's CR hypothesis requires a number of questionable assumptions, CR models have been evaluated on variable time and length scales with relative success. Bouchet's hypothesis is grounded on the assumption that a change in potential evapotranspiration (Ep) is equal and opposite in sign to a change in actual evapotranspiration (Ea), i.e., -dEp / dEa = 1. In his mathematical rationalization of the CR, Morton (1965) similarly assumes that a change in potential sensible heat flux (Hp) is equal and opposite in sign to a change in actual sensible heat flux (Ha), i.e., -dHp / dHa = 1. CR models have maintained these assumptions while focusing on defining Ep and equilibrium evapotranspiration (Epo). We question Bouchet and Morton's aforementioned assumptions by revisiting CR derivation in light of a proposed variable, φ = -dEp/dEa. We evaluate φ in a simplified Monin-Obukhov surface similarity framework and demonstrate how previous error in the application of CR models may be explained in part by previous assumptions that φ = 1. Finally, we discuss the various time and length scales to which φ may be evaluated.
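
    Integrating Bouchet's assumption from the fully wet state, where Ep = Ea = Epo, makes the role of φ explicit. The symmetric form is the standard derivation restated; the constant-φ generalization is a sketch consistent with the abstract's proposal:

```latex
% Bouchet assumes -dE_p/dE_a = 1; integrating from the wet state
% E_p = E_a = E_{po} to the actual state gives the symmetric CR:
E_p - E_{po} = -\left(E_a - E_{po}\right)
\quad\Longrightarrow\quad
E_p + E_a = 2\,E_{po}.
% If instead \varphi = -dE_p/dE_a is treated as a constant other than 1,
% the same integration yields a generalized, asymmetric form:
E_p + \varphi\,E_a = \left(1 + \varphi\right)E_{po}.
```

    Under this reading, errors in symmetric CR applications correspond to forcing φ = 1 where the surface-layer physics implies φ ≠ 1.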

  16. Preventing medical errors by designing benign failures.

    PubMed

    Grout, John R

    2003-07-01

    One way to successfully reduce medical errors is to design health care systems that are more resistant to the tendencies of human beings to err. One interdisciplinary approach entails creating design changes, mitigating human errors, and making human error irrelevant to outcomes. This approach is intended to facilitate the creation of benign failures, which have been called mistake-proofing devices and forcing functions elsewhere. USING FAULT TREES TO DESIGN FORCING FUNCTIONS: A fault tree is a graphical tool used to understand the relationships that either directly cause or contribute to the cause of a particular failure. A careful analysis of a fault tree enables the analyst to anticipate how the process will behave after the change. EXAMPLE OF AN APPLICATION: A scenario in which a patient is scalded while bathing can serve as an example of how multiple fault trees can be used to design forcing functions. The first fault tree shows the undesirable event--patient scalded while bathing. The second fault tree has a benign event--no water. Adding a scald valve changes the outcome from the undesirable event ("patient scalded while bathing") to the benign event ("no water"). Analysis of fault trees does not ensure or guarantee that changes necessary to eliminate error actually occur. Most mistake-proofing is used to prevent simple errors and to create well-defended processes, but complex errors can also result. The utilization of mistake-proofing or forcing functions can be thought of as changing the logic of a process. Errors that formerly caused undesirable failures can be converted into the causes of benign failures. The use of fault trees can provide a variety of insights into the design of forcing functions that will improve patient safety.
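
    The scald example can be sketched as a pair of Boolean gate evaluations. The event names below are hypothetical simplifications of the article's fault trees, not taken from it:

```python
def and_gate(*events):
    """AND gate of a fault tree: the output event occurs only if all inputs do."""
    return all(events)

def patient_scalded(water_too_hot, patient_in_bath, scald_valve_fitted):
    """Top event of the first tree. The scald valve shuts off the flow when
    the water is too hot, so the hazardous branch now also requires the
    valve to be absent."""
    hot_water_reaches_bath = and_gate(water_too_hot, not scald_valve_fitted)
    return and_gate(hot_water_reaches_bath, patient_in_bath)

def no_water(water_too_hot, scald_valve_fitted):
    """Top event of the second tree: the benign failure the valve creates."""
    return and_gate(water_too_hot, scald_valve_fitted)
```

    Comparing the two trees shows the design intent: the same root cause (water too hot) now triggers the benign top event instead of the undesirable one.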

  17. Carbon nanotube-based three-dimensional monolithic optoelectronic integrated system

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Wang, Sheng; Liu, Huaping; Peng, Lian-Mao

    2017-06-01

    Single material-based monolithic optoelectronic integration with complementary metal oxide semiconductor-compatible signal processing circuits is one of the most pursued approaches in the post-Moore era to realize rapid data communication and functional diversification in a limited three-dimensional space. Here, we report an electrically driven carbon nanotube-based on-chip three-dimensional optoelectronic integrated circuit. We demonstrate that photovoltaic receivers, electrically driven transmitters and on-chip electronic circuits can all be fabricated using carbon nanotubes via a complementary metal oxide semiconductor-compatible low-temperature process, providing a seamless integration platform for realizing monolithic three-dimensional optoelectronic integrated circuits with diversified functionality such as heterogeneous AND gates. These circuits can be vertically scaled down to sub-30 nm and operate in photovoltaic mode at room temperature. Parallel optical communication between functional layers, for example, bottom-layer digital circuits and top-layer memory, has been demonstrated by mapping data using a 2 × 2 transmitter/receiver array, which could be extended as the next generation energy-efficient signal processing paradigm.

  18. Dynamic covalent chemistry enables formation of antimicrobial peptide quaternary assemblies in a completely abiotic manner

    NASA Astrophysics Data System (ADS)

    Reuther, James F.; Dees, Justine L.; Kolesnichenko, Igor V.; Hernandez, Erik T.; Ukraintsev, Dmitri V.; Guduru, Rusheel; Whiteley, Marvin; Anslyn, Eric V.

    2018-01-01

    Naturally occurring peptides and proteins often use dynamic disulfide bonds to impart defined tertiary/quaternary structures for the formation of binding pockets with uniform size and function. Although peptide synthesis and modification are well established, controlling quaternary structure formation remains a significant challenge. Here, we report the facile incorporation of aryl aldehyde and acyl hydrazide functionalities into peptide oligomers via solid-phase copper-catalysed azide-alkyne cycloaddition (SP-CuAAC) click reactions. When mixed, these complementary functional groups rapidly react in aqueous media at neutral pH to form peptide-peptide intermolecular macrocycles with highly tunable ring sizes. Moreover, sequence-specific figure-of-eight, dumbbell-shaped, zipper-like and multi-loop quaternary structures were formed selectively. Controlling the proportions of reacting peptides with mismatched numbers of complementary reactive groups results in the formation of higher-molecular-weight sequence-defined ladder polymers. This also amplified antimicrobial effectiveness in select cases. This strategy represents a general approach to the creation of complex abiotic peptide quaternary structures.

  19. Pediatric irritable bowel syndrome and other functional abdominal pain disorders: an update of non-pharmacological treatments.

    PubMed

    Gupta, Shivani; Schaffer, Gilda; Saps, Miguel

    2018-05-01

    Functional abdominal pain disorders, including irritable bowel syndrome, are common in children and treatment can often be difficult. Pharmacological therapies and complementary treatments are widely used, despite the limited data in pediatrics. Areas covered: This review provides an overview of the available data for the use of diet, probiotics, percutaneous electrical nerve stimulation, and psychosocial interventions, including hypnotherapy, yoga, cognitive and behavioral therapy, and mind-body interventions for the treatment of functional abdominal pain disorders in children. The literature review included PubMed searches combining each therapy with children, abdominal pain, and irritable bowel syndrome. Relevant articles are discussed in this review. Expert commentary: The decision on the use of pharmacological and complementary therapies should be based on clinical findings, evidence, availability, and in-depth discussion with the patient and family. The physician should provide education on the different interventions and their role in treatment in an empathetic and warm manner, providing ample time for the family to ask questions.

  20. Manual control of yaw motion with combined visual and vestibular cues

    NASA Technical Reports Server (NTRS)

    Zacharias, G. L.; Young, L. R.

    1977-01-01

    Measurements are made of manual control performance in the closed-loop task of nulling perceived self-rotation velocity about an earth-vertical axis. Self-velocity estimation was modelled as a function of the simultaneous presentation of vestibular and peripheral visual field motion cues. Based on measured low-frequency operator behavior in three visual field environments, a parallel channel linear model is proposed which has separate visual and vestibular pathways summing in a complementary manner. A correction to the frequency responses is provided by a separate measurement of manual control performance in an analogous visual pursuit nulling task. The resulting dual-input describing function for motion perception dependence on combined cue presentation supports the complementary model, in which vestibular cues dominate sensation at frequencies above 0.05 Hz. The describing function model is extended by the proposal of a non-linear cue conflict model, in which cue weighting depends on the level of agreement between visual and vestibular cues.

  1. Non-invasive mapping of calculation function by repetitive navigated transcranial magnetic stimulation.

    PubMed

    Maurer, Stefanie; Tanigawa, Noriko; Sollmann, Nico; Hauck, Theresa; Ille, Sebastian; Boeckh-Behrens, Tobias; Meyer, Bernhard; Krieg, Sandro M

    2016-11-01

    Studies have already reported on localizing calculation function in patients and volunteers by functional magnetic resonance imaging and transcranial magnetic stimulation (TMS). However, the development of accurate repetitive navigated TMS (rTMS) with considerably higher spatial resolution opens a new field in cognitive neuroscience. This study was therefore designed to evaluate the feasibility of rTMS for locating cortical calculation function in healthy volunteers, and to establish this technique for future scientific applications as well as preoperative mapping in brain tumor patients. Twenty healthy subjects underwent rTMS calculation mapping using 5 Hz/10 pulses. Fifty-two previously determined cortical spots of the whole hemispheres were stimulated on both sides. The subjects were instructed to perform a calculation task composed of 80 simple arithmetic operations while rTMS pulses were applied. The highest error rate across all errors and all subjects (80%) was observed in the right ventral precentral gyrus. For the division task, a 45% error rate occurred in the left middle frontal gyrus. The subtraction task showed its highest error rate (40%) in the right angular gyrus (anG). In the addition task, a 35% error rate was observed in the left anterior superior temporal gyrus. Lastly, the multiplication task induced a maximum error rate of 30% in the left anG. rTMS seems feasible as a way to locate cortical calculation function. Besides language function, the cortical localizations are well in accordance with the current literature for other modalities and lesion studies.

  2. Open quantum systems and error correction

    NASA Astrophysics Data System (ADS)

    Shabani Barzegar, Alireza

    Quantum effects can be harnessed to manipulate information in a desired way. Quantum systems designed for this purpose suffer from harmful interactions with their surrounding environment and from inaccuracies in control forces. Engineering methods to combat errors in quantum devices is highly demanding. In this thesis, I focus on realistic formulations of quantum error correction methods. A realistic formulation is one that incorporates experimental challenges. This thesis is presented in two parts: open quantum systems and quantum error correction. Chapters 2 and 3 cover the material on open quantum system theory. It is essential to first study a noise process and then to contemplate methods to cancel its effect. In the second chapter, I present the non-completely positive formulation of quantum maps. Most of these results are published in [Shabani and Lidar, 2009b,a], except a subsection on the geometric characterization of the positivity domain of a quantum map. The real-time formulation of the dynamics is the topic of the third chapter. After introducing the concept of the Markovian regime, a new post-Markovian quantum master equation is derived, published in [Shabani and Lidar, 2005a]. The quantum error correction part is presented in chapters 4 through 7. In chapter 4, we introduce a generalized theory of decoherence-free subspaces and subsystems (DFSs), which do not require accurate initialization (published in [Shabani and Lidar, 2005b]). In Chapter 5, we present a semidefinite program optimization approach to quantum error correction that yields codes and recovery procedures that are robust against significant variations in the noise channel. Our approach allows us to optimize the encoding, recovery, or both, and is amenable to approximations that significantly improve computational cost while retaining fidelity (see [Kosut et al., 2008] for a published version). 
Chapter 6 is devoted to a theory of quantum error correction (QEC) that applies to any linear map, in particular maps that are not completely positive (CP). This is complementary to the second chapter and is published in [Shabani and Lidar, 2007]. In chapter 7, the last before the conclusion, a formulation for evaluating the performance of quantum error correcting codes under a general error model is presented, also published in [Shabani, 2005]. In this formulation, the correlation between errors is quantified by a Hamiltonian description of the noise process. In particular, we consider Calderbank-Shor-Steane codes and observe a better performance in the presence of correlated errors depending on the timing of the error recovery.
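
    The thesis's codes are beyond a short sketch, but the core QEC idea of trading redundancy for error tolerance can be illustrated classically with a 3-bit repetition code and majority-vote recovery (an illustrative toy, not one of the codes studied in the thesis):

```python
def encode(bit):
    """Encode one logical bit as three physical bits."""
    return [bit] * 3

def apply_bit_flips(codeword, flips):
    """Flip each bit of the codeword where the error pattern has a 1."""
    return [b ^ f for b, f in zip(codeword, flips)]

def decode(codeword):
    """Majority vote: recovers the logical bit when at most one flip occurred."""
    return 1 if sum(codeword) >= 2 else 0
```

    Two or more flips defeat the code, which is the classical analogue of why correlated errors, as studied in chapter 7, can degrade QEC performance.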

  3. Selling Complementary Patents: Experimental Investigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bjornstad, David J; Santore, Rudy; McKee, Michael

    2010-02-01

    Production that requires licensing groups of complementary patents implements a coordination game among patent holders, who can price patents by choosing among combinations of fixed and royalty fees. Summed across patents, these fees become the total producer cost of the package of patents. Royalties, because they function as excise taxes, add to marginal costs, resulting in higher prices and reduced quantities of the downstream product and lower payoffs to the patent holders. Using fixed fees eliminates this inefficiency but yields a more complex coordination game in which there are multiple equilibria, which are very fragile in that small mistakes can lead the downstream firm to not license the technology, resulting in inefficient outcomes. We report on a laboratory market investigation of the efficiency effects of coordinated pricing of patents in a patent pool. We find that pool-like pricing agreements can yield fewer coordination failures in the pricing of complementary patents.
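
    The excise-tax effect of royalties can be reproduced with a textbook linear-demand monopoly model (parameter values are hypothetical; this is a stylized sketch, not the experimental design of the paper):

```python
def downstream_outcome(a=10.0, c=2.0, royalty=0.0, fixed_fee=0.0):
    """Downstream monopolist facing inverse demand p = a - q and marginal
    cost c plus a per-unit royalty (hypothetical textbook parameters)."""
    q = max(0.0, (a - c - royalty) / 2.0)       # profit-maximizing quantity
    p = a - q
    downstream_profit = (p - c - royalty) * q - fixed_fee
    licensor_revenue = royalty * q + fixed_fee
    return q, p, downstream_profit + licensor_revenue   # joint surplus
```

    A positive royalty shrinks quantity and joint surplus, while a fixed fee is a pure transfer that leaves both unchanged, mirroring the inefficiency argument in the abstract.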

  4. Optimal model of PDIG based microgrid and design of complementary stabilizer using ICA.

    PubMed

    Amini, R Mohammad; Safari, A; Ravadanegh, S Najafi

    2016-09-01

    The generalized Heffron-Phillips model (GHPM) for a microgrid containing a photovoltaic (PV)-diesel machine (DM)-induction motor (IM)-governor (GV) (PDIG) has been developed at the low voltage level. The GHPM is obtained by linearization about a loading condition. An effective Maximum Power Point Tracking (MPPT) approach for the PV network, using sliding mode control (SMC), maximizes output power. Additionally, to improve the stability of the microgrid under greater penetration of renewable energy resources with nonlinear load, a complementary stabilizer is presented. The imperialist competitive algorithm (ICA) is utilized to design the gains of the complementary stabilizer with a multiobjective function. The stability analysis of the PDIG system is completed with eigenvalue analysis and nonlinear simulations. The robustness and validity of the proposed controllers in damping electromechanical modes are examined through time-domain simulation under input mechanical torque disturbances. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  5. Learning from nature: binary cooperative complementary nanomaterials.

    PubMed

    Su, Bin; Guo, Wei; Jiang, Lei

    2015-03-01

    In this Review, nature-inspired binary cooperative complementary nanomaterials (BCCNMs), consisting of two components with entirely opposite physiochemical properties at the nanoscale, are presented as a novel concept for the building of promising materials. Once the distance between the two nanoscopic components is comparable to the characteristic length of some physical interactions, the cooperation between these complementary building blocks becomes dominant and endows the macroscopic materials with novel and superior properties. The first implementation of the BCCNMs is the design of bio-inspired smart materials with superwettability and their reversible switching between different wetting states in response to various kinds of external stimuli. Coincidentally, recent studies on other types of functional nanomaterials contribute more examples to support the idea of BCCNMs, which suggests a potential yet comprehensive range of future applications in both materials science and engineering. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Complementary effect of patient volume and quality of care on hospital cost efficiency.

    PubMed

    Choi, Jeong Hoon; Park, Imsu; Jung, Ilyoung; Dey, Asoke

    2017-06-01

    This study explores the direct effect of an increase in patient volume in a hospital and the complementary effect of quality of care on the cost efficiency of U.S. hospitals in terms of patient volume. The simultaneous equation model with three-stage least squares is used to measure the direct effect of patient volume and the complementary effect of quality of care and volume. Cost efficiency is measured with a data envelopment analysis method. Patient volume has a U-shaped relationship with hospital cost efficiency and an inverted U-shaped relationship with quality of care. Quality of care functions as a moderator for the relationship between patient volume and efficiency. This paper addresses the economically important question of the relationship of volume with quality of care and hospital cost efficiency. The three-stage least square simultaneous equation model captures the simultaneous effects of patient volume on hospital quality of care and cost efficiency.

  7. [Biological characteristics of an enteroinvasive Escherichia coli strain with tatABC deletion].

    PubMed

    Gong, Zhaolong; Ye, Changyun; Liu, Xiaobing; Zhang, Min; Zhuo, Qin

    2013-05-04

    To study the relationship between the twin-arginine translocation (Tat) system and the biological characteristics of enteroinvasive Escherichia coli (EIEC), we constructed, through homologous recombination, an EIEC tatABC gene deletion strain and a complementary strain, and explored their impact on bacterial form, substrate transport function, and invasiveness toward HeLa cells and guinea pig corneas. The tatABC deletion strain showed apparent changes in bacterial form, loss of substrate transporter function, and significantly weakened invasiveness (the number of deletion-strain bacteria invading HeLa cells decreased significantly, and its capacity to produce corneal lesions in the guinea pig was significantly weakened), while the complementary strain was similar to the wild-type strain in the above respects. EIEC's Tat protein transport system is closely related to the biological characteristics of EIEC.

  8. Methods for Probabilistic Radiological Dose Assessment at a High-Level Radioactive Waste Repository.

    NASA Astrophysics Data System (ADS)

    Maheras, Steven James

    Methods were developed to assess and evaluate the uncertainty in offsite and onsite radiological dose at a high-level radioactive waste repository to show reasonable assurance that compliance with applicable regulatory requirements will be achieved. Uncertainty in offsite dose was assessed by employing a stochastic precode in conjunction with Monte Carlo simulation using an offsite radiological dose assessment code. Uncertainty in onsite dose was assessed by employing a discrete-event simulation model of repository operations in conjunction with an occupational radiological dose assessment model. Complementary cumulative distribution functions of offsite and onsite dose were used to illustrate reasonable assurance. Offsite dose analyses were performed for iodine-129, cesium-137, strontium-90, and plutonium-239. Complementary cumulative distribution functions of offsite dose were constructed; offsite dose was lognormally distributed with a range of two orders of magnitude. However, plutonium-239 results were not lognormally distributed and exhibited a range of less than one order of magnitude. Onsite dose analyses were performed for the preliminary inspection, receiving and handling, and underground areas of the repository. Complementary cumulative distribution functions of onsite dose were constructed and exhibited a range of less than one order of magnitude. A preliminary sensitivity analysis of the receiving and handling areas was conducted using a regression metamodel. Sensitivity coefficients and partial correlation coefficients were used as measures of sensitivity. Model output was most sensitive to parameters related to cask handling operations. Model output showed little sensitivity to parameters related to cask inspections.
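
    A complementary cumulative distribution function simply reports, for each dose threshold, the fraction of Monte Carlo realizations that exceed it. A minimal empirical version (sample values are made up for illustration):

```python
def ccdf(samples, threshold):
    """Empirical complementary cumulative distribution function: the
    fraction of Monte Carlo dose realizations exceeding the threshold."""
    return sum(1 for s in samples if s > threshold) / len(samples)

# Hypothetical simulated dose realizations; sweeping thresholds traces the curve.
doses = [1.2, 0.8, 2.5, 3.1, 0.4, 1.9]
curve = [(d, ccdf(doses, d)) for d in (0.5, 1.0, 2.0, 3.0)]
```

    Plotting such a curve against a regulatory dose limit is what the abstract means by using CCDFs to illustrate reasonable assurance of compliance.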

  9. Genome-Wide Spectra of Transcription Insertions and Deletions Reveal That Slippage Depends on RNA:DNA Hybrid Complementarity.

    PubMed

    Traverse, Charles C; Ochman, Howard

    2017-08-29

    Advances in sequencing technologies have enabled direct quantification of genome-wide errors that occur during RNA transcription. These errors occur at rates that are orders of magnitude higher than rates during DNA replication, but due to technical difficulties such measurements have been limited to single-base substitutions and have not yet quantified the scope of transcription insertions and deletions. Previous reporter gene assay findings suggested that transcription indels are produced exclusively by elongation complex slippage at homopolymeric runs, so we enumerated indels across the protein-coding transcriptomes of Escherichia coli and Buchnera aphidicola, which differ widely in their genomic base compositions and incidence of repeat regions. As anticipated from prior assays, transcription insertions prevailed in homopolymeric runs of A and T; however, transcription deletions arose in much more complex sequences and were rarely associated with homopolymeric runs. By reconstructing the relocated positions of the elongation complex as inferred from the sequences inserted or deleted during transcription, we show that continuation of transcription after slippage hinges on the degree of nucleotide complementarity within the RNA:DNA hybrid at the new DNA template location. IMPORTANCE The high level of mistakes generated during transcription can result in the accumulation of malfunctioning and misfolded proteins which can alter global gene regulation and in the expenditure of energy to degrade these nonfunctional proteins. The transcriptome-wide occurrence of base substitutions has been elucidated in bacteria, but information on transcription insertions and deletions-errors that potentially have more dire effects on protein function-is limited to reporter gene constructs. 
Here, we capture the transcriptome-wide spectrum of insertions and deletions in Escherichia coli and Buchnera aphidicola and show that they occur at rates approaching those of base substitutions. Knowledge of the full extent of sequences subject to transcription indels supports a new model of bacterial transcription slippage, one that relies on the number of complementary bases between the transcript and the DNA template to which it slipped. Copyright © 2017 Traverse and Ochman.

  10. Fast Bayesian approach for modal identification using free vibration data, Part I - Most probable value

    NASA Astrophysics Data System (ADS)

    Zhang, Feng-Liang; Ni, Yan-Chun; Au, Siu-Kui; Lam, Heung-Fai

    2016-03-01

    The identification of modal properties from field testing of civil engineering structures is becoming economically viable, thanks to the advent of modern sensor and data acquisition technology. Its demand is driven by innovative structural designs and increased performance requirements of dynamic-prone structures that call for a close cross-checking or monitoring of their dynamic properties and responses. Existing instrumentation capabilities and modal identification techniques allow structures to be tested under free vibration, forced vibration (known input) or ambient vibration (unknown broadband loading). These tests can be considered complementary rather than competing as they are based on different modeling assumptions in the identification model and have different implications on costs and benefits. Uncertainty arises naturally in the dynamic testing of structures due to measurement noise, sensor alignment error, modeling error, etc. This is especially relevant in field vibration tests because the test condition in the field environment can hardly be controlled. In this work, a Bayesian statistical approach is developed for modal identification using the free vibration response of structures. A frequency domain formulation is proposed that makes statistical inference based on the Fast Fourier Transform (FFT) of the data in a selected frequency band. This significantly simplifies the identification model because only the modes dominating the frequency band need to be included. It also legitimately ignores the information in the excluded frequency bands that are either irrelevant or difficult to model, thereby significantly reducing modeling error risk. The posterior probability density function (PDF) of the modal parameters is derived rigorously from modeling assumptions and Bayesian probability logic. 
Computational difficulties associated with calculating the posterior statistics, including the most probable value (MPV) and the posterior covariance matrix, are addressed. Fast computational algorithms for determining the MPV are proposed so that the method can be practically implemented. In the companion paper (Part II), analytical formulae are derived for the posterior covariance matrix so that it can be evaluated without resorting to the finite difference method. The proposed method is verified using synthetic data. It is also applied to modal identification of full-scale field structures.

  11. GDF v2.0, an enhanced version of GDF

    NASA Astrophysics Data System (ADS)

    Tsoulos, Ioannis G.; Gavrilis, Dimitris; Dermatas, Evangelos

    2007-12-01

    An improved version of the function estimation program GDF is presented. The main enhancements of the new version include: multi-output function estimation, capability of defining custom functions in the grammar and selection of the error function. The new version has been evaluated on a series of classification and regression datasets, that are widely used for the evaluation of such methods. It is compared to two known neural networks and outperforms them in 5 (out of 10) datasets. Program summaryTitle of program: GDF v2.0 Catalogue identifier: ADXC_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADXC_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 98 147 No. of bytes in distributed program, including test data, etc.: 2 040 684 Distribution format: tar.gz Programming language: GNU C++ Computer: The program is designed to be portable in all systems running the GNU C++ compiler Operating system: Linux, Solaris, FreeBSD RAM: 200000 bytes Classification: 4.9 Does the new version supersede the previous version?: Yes Nature of problem: The technique of function estimation tries to discover from a series of input data a functional form that best describes them. This can be performed with the use of parametric models, whose parameters can adapt according to the input data. Solution method: Functional forms are being created by genetic programming which are approximations for the symbolic regression problem. Reasons for new version: The GDF package was extended in order to be more flexible and user customizable than the old package. The user can extend the package by defining his own error functions and he can extend the grammar of the package by adding new functions to the function repertoire. 
Also, the new version can perform function estimation of multi-output functions and it can be used for classification problems. Summary of revisions: The following features have been added to the package GDF: Multi-output function approximation. The package can now approximate any function f: R^n → R^m. This feature also gives the package the capability of performing classification and not only regression. User-defined functions can be added to the repertoire of the grammar, extending the regression capabilities of the package. This feature is limited to 3 functions, but this number can easily be increased. Capability of selecting the error function. Apart from the mean square error, the package now offers the user other error functions, such as the mean absolute error and the maximum square error. Also, user-defined error functions can be added to the set of error functions. More verbose output. The main program displays more information to the user as well as the default values for the parameters. The package also gives the user the capability to define an output file, where the output of the gdf program for the testing set will be stored after the termination of the process. Additional comments: A technical report describing the revisions, experiments and test runs is packaged with the source code. Running time: Depends on the training data.
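The selectable and user-definable error functions described above can be pictured as a lookup table of interchangeable functions. The sketch below is purely illustrative; the names and signatures are hypothetical, not GDF's actual C++ identifiers:

```python
# Illustrative, pluggable error functions (hypothetical names, not GDF's API).
def mean_square_error(pred, target):
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def mean_absolute_error(pred, target):
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def max_square_error(pred, target):
    return max((p - t) ** 2 for p, t in zip(pred, target))

# Selecting the error function then amounts to a dictionary lookup;
# a user extends the set by registering a new callable.
ERROR_FUNCTIONS = {
    "mse": mean_square_error,
    "mae": mean_absolute_error,
    "maxse": max_square_error,
}
```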

  12. Altered Functional Connectivity of Fronto-Cingulo-Striatal Circuits during Error Monitoring in Adolescents with a History of Childhood Abuse

    PubMed Central

    Hart, Heledd; Lim, Lena; Mehta, Mitul A.; Curtis, Charles; Xu, Xiaohui; Breen, Gerome; Simmons, Andrew; Mirza, Kah; Rubia, Katya

    2018-01-01

Childhood maltreatment is associated with error hypersensitivity. We examined the effect of childhood abuse and abuse-by-gene (5-HTTLPR, MAOA) interaction on functional brain connectivity during error processing in medication/drug-free adolescents. Functional connectivity was compared, using generalized psychophysiological interaction (gPPI) analysis of functional magnetic resonance imaging (fMRI) data, between 22 age- and gender-matched medication-naïve and substance abuse-free adolescents exposed to severe childhood abuse and 27 healthy controls, while they performed an individually adjusted tracking stop-signal task, designed to elicit 50% inhibition failures. During inhibition failures, abused participants relative to healthy controls exhibited reduced connectivity between right and left putamen, bilateral caudate and anterior cingulate cortex (ACC), and between right supplementary motor area (SMA) and right inferior and dorsolateral prefrontal cortex. Abuse-related connectivity abnormalities were associated with longer abuse duration. No group differences in connectivity were observed for successful inhibition. The findings suggest that childhood abuse is associated with decreased functional connectivity in fronto-cingulo-striatal networks during error processing, and that the severity of these connectivity abnormalities increases with abuse duration. Reduced connectivity of error detection networks in maltreated individuals may be linked to constant monitoring of errors in order to avoid mistakes which, in abusive contexts, are often associated with harsh punishment. PMID:29434543

  13. Improving the Glucose Meter Error Grid With the Taguchi Loss Function.

    PubMed

    Krouwer, Jan S

    2016-07-01

Glucose meters often have similar performance when compared by error grid analysis. This is one reason that other statistics such as mean absolute relative deviation (MARD) are used to further differentiate performance. The problem with MARD is that too much information is lost. But additional information is available within the A zone of an error grid by using the Taguchi loss function. Applying the Taguchi loss function assigns each glucose meter difference from reference a value ranging from 0 (no error) to 1 (the error reaches the A zone limit). These values are averaged over all data, which provides an indication of the risk of an incorrect medical decision. This allows one to differentiate glucose meter performance for the common case where meters have a high percentage of values in the A zone and no values beyond the B zone. Examples are provided using simulated data. © 2015 Diabetes Technology Society.
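The scaling described in the abstract can be sketched as follows, assuming a symmetric quadratic (Taguchi) loss and a hypothetical 15% relative-error A-zone boundary; the limit and the paired readings are illustrative, not taken from the paper:

```python
import statistics

A_ZONE_LIMIT = 0.15  # hypothetical A-zone boundary (15% relative error)

def taguchi_loss(meter, reference):
    """Quadratic (Taguchi) loss: 0 for no error, 1 when the relative
    error reaches the A-zone limit; capped at 1 beyond the limit."""
    rel_err = abs(meter - reference) / reference
    return min((rel_err / A_ZONE_LIMIT) ** 2, 1.0)

# Averaging over all paired (meter, reference) readings gives an
# indication of the risk of an incorrect medical decision.
readings = [(102, 100), (95, 100), (110, 100), (100, 100)]
mean_loss = statistics.mean(taguchi_loss(m, r) for m, r in readings)
```

Two meters with identical A-zone percentages can still be separated by their mean loss, which is the point the abstract makes.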

  14. Error of the slanted edge method for measuring the modulation transfer function of imaging systems.

    PubMed

    Xie, Xufen; Fan, Hongda; Wang, Hongyuan; Wang, Zebin; Zou, Nianyu

    2018-03-01

The slanted edge method is a basic approach for measuring the modulation transfer function (MTF) of imaging systems; however, its measurement accuracy is limited in practice. Theoretical analysis of the slanted edge MTF measurement method performed in this paper reveals that inappropriate edge angles and random noise reduce this accuracy. The error caused by edge angles is analyzed using sampling and reconstruction theory. Furthermore, an error model combining noise and edge angles is proposed. We verify the analyses and model with respect to (i) the edge angle, (ii) a statistical analysis of the measurement error, (iii) the full width at half-maximum of a point spread function, and (iv) the error model. The experimental results verify the theoretical findings. This research can serve as a reference for applications of the slanted edge MTF measurement method.
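A one-dimensional sketch of the underlying edge-based MTF estimate may help: differentiate the edge-spread function (ESF) to get the line-spread function (LSF), then take the normalized FFT magnitude. This deliberately omits the slanted-edge super-resolution binning, edge-angle estimation, and noise effects that the paper analyzes:

```python
import numpy as np

def mtf_from_esf(esf):
    """MTF estimate from a 1-D edge-spread function (ESF)."""
    lsf = np.diff(esf)             # line-spread function = derivative of ESF
    lsf = lsf / lsf.sum()          # normalize to unit area
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]            # force MTF(0) = 1

# Ideal knife edge blurred into a linear ramp (a crude blur model).
x = np.linspace(-5.0, 5.0, 256)
esf = np.clip(x + 0.5, 0.0, 1.0)
mtf = mtf_from_esf(esf)            # contrast falls off with spatial frequency
```

In the real method the edge is tilted a few degrees so that rows sample the ESF at sub-pixel phases; a poorly chosen angle degrades exactly this resampling step, which is the error source the paper models.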

  15. Adaptive optics system performance approximations for atmospheric turbulence correction

    NASA Astrophysics Data System (ADS)

    Tyson, Robert K.

    1990-10-01

Analysis of adaptive optics system behavior often can be reduced to a few approximations and scaling laws. For atmospheric turbulence correction, the deformable mirror (DM) fitting error is most often used to determine a priori the interactuator spacing and the total number of correction zones required. This paper examines the mirror fitting error in terms of its most commonly used exponential form. The explicit constant in the error term is dependent on deformable mirror influence function shape and actuator geometry. The method of least squares fitting of discrete influence functions to the turbulent wavefront is compared to the linear spatial filtering approximation of system performance. It is found that the spatial filtering method overestimates the correctability of the adaptive optics system by a small amount. By evaluating the fitting error for a number of DM configurations, actuator geometries, and influence functions, fitting error constants are obtained that verify some earlier investigations.

  16. Computerized Design and Generation of Low-Noise Gears with Localized Bearing Contact

    NASA Technical Reports Server (NTRS)

    Litvin, Faydor L.; Chen, Ningxin; Chen, Jui-Sheng; Lu, Jian; Handschuh, Robert F.

    1995-01-01

    The results of research projects directed at the reduction of noise caused by misalignment of the following gear drives: double-circular arc helical gears, modified involute helical gears, face-milled spiral bevel gears, and face-milled formate cut hypoid gears are presented. Misalignment in these types of gear drives causes periodic, almost linear discontinuous functions of transmission errors. The period of such functions is the cycle of meshing when one pair of teeth is changed for the next. Due to the discontinuity of such functions of transmission errors high vibration and noise are inevitable. A predesigned parabolic function of transmission errors that is able to absorb linear discontinuous functions of transmission errors and change the resulting function of transmission errors into a continuous one is proposed. The proposed idea was successfully tested using spiral bevel gears and the noise was reduced a substantial amount in comparison with the existing design. The idea of a predesigned parabolic function is applied for the reduction of noise of helical and hypoid gears. The effectiveness of the proposed approach has been investigated by developed TCA (tooth contact analysis) programs. The bearing contact for the mentioned gears is localized. Conditions that avoid edge contact for the gear drives have been determined. Manufacturing of helical gears with new topology by hobs and grinding worms has been investigated.

  17. Local Improvement Results for Anderson Acceleration with Inaccurate Function Evaluations

    DOE PAGES

    Toth, Alex; Ellis, J. Austin; Evans, Tom; ...

    2017-10-26

    Here, we analyze the convergence of Anderson acceleration when the fixed point map is corrupted with errors. We also consider uniformly bounded errors and stochastic errors with infinite tails. We prove local improvement results which describe the performance of the iteration up to the point where the accuracy of the function evaluation causes the iteration to stagnate. We illustrate the results with examples from neutronics.
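For readers unfamiliar with the method, a minimal depth-m Anderson acceleration for a fixed-point problem x = g(x) can be sketched as below. This is the standard textbook formulation with an exact function evaluation, i.e. without the bounded/stochastic error model that the paper analyzes:

```python
import numpy as np

def anderson(g, x0, m=3, iters=50):
    """Depth-m Anderson acceleration for the fixed point x = g(x)."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    X, G = [], []                                   # iterate and g-value history
    for _ in range(iters):
        gx = np.atleast_1d(g(x))
        X.append(x); G.append(gx)
        X, G = X[-(m + 1):], G[-(m + 1):]           # keep at most m+1 entries
        F = np.array([gk - xk for xk, gk in zip(X, G)])  # residuals g(x) - x
        if len(F) == 1:                             # not enough history yet
            x = gx
            continue
        dF = np.diff(F, axis=0)                     # residual differences
        # least-squares weights minimizing the current residual norm
        gamma, *_ = np.linalg.lstsq(dF.T, F[-1], rcond=None)
        x = G[-1] - np.diff(np.array(G), axis=0).T @ gamma
    return x

root = anderson(np.cos, [0.0])    # solves x = cos(x)
```

When g is corrupted with errors, as in the paper, the residual history F becomes noisy and the iteration stagnates at an accuracy set by the evaluation error, which is precisely the regime the local improvement results describe.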

  18. Local Improvement Results for Anderson Acceleration with Inaccurate Function Evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toth, Alex; Ellis, J. Austin; Evans, Tom

    Here, we analyze the convergence of Anderson acceleration when the fixed point map is corrupted with errors. We also consider uniformly bounded errors and stochastic errors with infinite tails. We prove local improvement results which describe the performance of the iteration up to the point where the accuracy of the function evaluation causes the iteration to stagnate. We illustrate the results with examples from neutronics.

  19. The Cut-Score Operating Function: A New Tool to Aid in Standard Setting

    ERIC Educational Resources Information Center

    Grabovsky, Irina; Wainer, Howard

    2017-01-01

    In this essay, we describe the construction and use of the Cut-Score Operating Function in aiding standard setting decisions. The Cut-Score Operating Function shows the relation between the cut-score chosen and the consequent error rate. It allows error rates to be defined by multiple loss functions and will show the behavior of each loss…

  20. Assessment of Functional Change and Cognitive Correlates in the Progression from Healthy Cognitive Aging to Dementia

    PubMed Central

    Schmitter-Edgecombe, Maureen; Parsey, Carolyn M.

    2014-01-01

Objective There is currently limited understanding of the course of change in everyday functioning that occurs with normal aging and dementia. To better characterize the nature of this change, we evaluated the types of errors made by participants as they performed everyday tasks in a naturalistic environment. Method Participants included cognitively healthy younger adults (YA; N = 55) and older adults (OA; N = 88), and individuals with mild cognitive impairment (MCI; N = 55) and dementia (N = 18). Participants performed eight scripted everyday activities (e.g., filling a medication dispenser) while under direct observation in a campus apartment. Task performances were coded for the following errors: inefficient actions, omissions, substitutions, and irrelevant actions. Results Performance accuracy decreased with age and level of cognitive impairment. Relative to the YAs, the OA group exhibited more inefficient actions which were linked to performance on neuropsychological measures of executive functioning. Relative to the OAs, the MCI group committed significantly more omission errors which were strongly linked to performance on memory measures. All error types were significantly more prominent in individuals with dementia. Omission errors uniquely predicted everyday functional status as measured by both informant-report and a performance-based measure. Conclusions These findings suggest that in the progression from healthy aging to MCI, everyday task difficulties may evolve from task inefficiencies to task omission errors, leading to inaccuracies in task completion that are recognized by knowledgeable informants. Continued decline in cognitive functioning then leads to more substantial everyday errors, which compromise ability to live independently. PMID:24933485

  1. Micro-electro-mechanically switchable near infrared complementary metamaterial absorber

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pitchappa, Prakash; Pei Ho, Chong; Institute of Microelectronics

    2014-05-19

We experimentally demonstrate a micro-electro-mechanically switchable near infrared complementary metamaterial absorber by integrating the metamaterial layer to serve as an out-of-plane movable microactuator. The metamaterial layer is electrostatically actuated by applying voltage across the suspended complementary metamaterial layer and the stationary bottom metallic reflector. Thus, the effective spacing between the metamaterial layer and bottom metal reflector is varied as a function of applied voltage. With the reduction of effective spacing between the metamaterial and reflector layers, a strong spectral blue shift in the peak absorption wavelength can be achieved. With a spacing change of 300 nm, a spectral shift of 0.7 μm in the peak absorption wavelength was obtained in the near infrared spectral region. The electro-optic switching performance of the device was characterized, and a striking switching contrast of 1500% was achieved at 2.1 μm. The reported micro-electro-mechanically tunable complementary metamaterial absorber device can potentially enable a wide range of high performance electro-optical devices, such as continuously tunable filters, modulators, and electro-optic switches that form the key components to facilitate future photonic circuit applications.

  2. Effect of atmospheric turbulence on the bit error probability of a space to ground near infrared laser communications link using binary pulse position modulation and an avalanche photodiode detector

    NASA Technical Reports Server (NTRS)

    Safren, H. G.

    1987-01-01

The effect of atmospheric turbulence on the bit error rate of a space-to-ground near infrared laser communications link is investigated, for a link using binary pulse position modulation and an avalanche photodiode detector. Formulas are presented for the mean and variance of the bit error rate as a function of signal strength. Because these formulas require numerical integration, they are of limited practical use. Approximate formulas are derived which are easy to compute and sufficiently accurate for system feasibility studies, as shown by numerical comparison with the exact formulas. A very simple formula is derived for the bit error rate as a function of signal strength, which requires only the evaluation of an error function. It is shown by numerical calculations that, for realistic values of the system parameters, the increase in the bit error rate due to turbulence does not exceed about thirty percent for signal strengths of four hundred photons per bit or less. The increase in signal strength required to maintain an error rate of one in 10 million is about one or two tenths of a dB.
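The "evaluation of an error function" mentioned above can be illustrated with the standard Gaussian-noise bit-error-rate expression built on the complementary error function erfc. This is a generic textbook form, not the paper's exact formula for the APD/PPM link:

```python
import math

def ber_gaussian(q):
    """Bit error rate of a binary decision in additive Gaussian noise,
    written in terms of the complementary error function erfc.
    q is the decision-threshold signal-to-noise ratio (the Q factor)."""
    return 0.5 * math.erfc(q / math.sqrt(2.0))

# A Q factor near 5.2 gives roughly the one-in-ten-million error rate
# mentioned in the abstract.
ber = ber_gaussian(5.2)
```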

  3. Tolerance analysis of optical telescopes using coherent addition of wavefront errors

    NASA Technical Reports Server (NTRS)

    Davenport, J. W.

    1982-01-01

A near diffraction-limited telescope requires that tolerance analysis be done on the basis of system wavefront error. One method of analyzing the wavefront error is to represent the wavefront error function in terms of its Zernike polynomial expansion. A Ramsey-Korsch ray trace package, a computer program that simulates the tracing of rays through an optical telescope system, was expanded to include the Zernike polynomial expansion up through the fifth-order spherical term. An option to determine a 3-dimensional plot of the wavefront error function was also included in the Ramsey-Korsch package. Several simulation runs were analyzed to determine the particular set of coefficients in the Zernike expansion that are affected by various errors such as tilt, decenter and despace. A 3-dimensional plot of each error up through the fifth-order spherical term was also included in the study. Tolerance analysis data are presented.

  4. Selective Individual Primary Cell Capture Using Locally Bio-Functionalized Micropores

    PubMed Central

    Liu, Jie; Bombera, Radoslaw; Leroy, Loïc; Roupioz, Yoann; Baganizi, Dieudonné R.; Marche, Patrice N.; Haguet, Vincent; Mailley, Pascal; Livache, Thierry

    2013-01-01

    Background Solid-state micropores have been widely employed for 6 decades to recognize and size flowing unlabeled cells. However, the resistive-pulse technique presents limitations when the cells to be differentiated have overlapping dimension ranges such as B and T lymphocytes. An alternative approach would be to specifically capture cells by solid-state micropores. Here, the inner wall of 15-µm pores made in 10 µm-thick silicon membranes was covered with antibodies specific to cell surface proteins of B or T lymphocytes. The selective trapping of individual unlabeled cells in a bio-functionalized micropore makes them recognizable just using optical microscopy. Methodology/Principal Findings We locally deposited oligodeoxynucleotide (ODN) and ODN-conjugated antibody probes on the inner wall of the micropores by forming thin films of polypyrrole-ODN copolymers using contactless electro-functionalization. The trapping capabilities of the bio-functionalized micropores were validated using optical microscopy and the resistive-pulse technique by selectively capturing polystyrene microbeads coated with complementary ODN. B or T lymphocytes from a mouse splenocyte suspension were specifically immobilized on micropore walls functionalized with complementary ODN-conjugated antibodies targeting cell surface proteins. Conclusions/Significance The results showed that locally bio-functionalized micropores can isolate target cells from a suspension during their translocation throughout the pore, including among cells of similar dimensions in complex mixtures. PMID:23469221

  5. Baryon acoustic oscillations in the Ly α forest of BOSS DR11 quasars

    DOE PAGES

    Delubac, Timothée; Bautista, Julian E.; Busca, Nicolás G.; ...

    2015-01-26

We report a detection of the baryon acoustic oscillation (BAO) feature in the flux-correlation function of the Lyα forest of high-redshift quasars with a statistical significance of five standard deviations. The study uses 137,562 quasars in the redshift range 2.1 ≤ z ≤ 3.5 from the data release 11 (DR11) of the Baryon Oscillation Spectroscopic Survey (BOSS) of SDSS-III. This sample contains three times the number of quasars used in previous studies. The measured position of the BAO peak determines the angular distance, D_A(z = 2.34), and expansion rate, H(z = 2.34), both on a scale set by the sound horizon at the drag epoch, r_d. We find D_A/r_d = 11.28 ± 0.65 (1σ) +2.8/−1.2 (2σ) and D_H/r_d = 9.18 ± 0.28 (1σ) ± 0.6 (2σ), where D_H = c/H. The optimal combination, ~D_H^0.7 D_A^0.3/r_d, is determined with a precision of ~2%. For the value r_d = 147.4 Mpc, consistent with the cosmic microwave background power spectrum measured by Planck, we find D_A(z = 2.34) = 1662 ± 96 (1σ) Mpc and H(z = 2.34) = 222 ± 7 (1σ) km s⁻¹ Mpc⁻¹. Tests with mock catalogs and variations of our analysis procedure have revealed no systematic uncertainties comparable to our statistical errors. Our results agree with the previously reported BAO measurement at the same redshift using the quasar-Lyα forest cross-correlation. The autocorrelation and cross-correlation approaches are complementary because of the quite different impact of redshift-space distortion on the two measurements. The combined constraints from the two correlation functions imply values of D_A/r_d that are 7% lower, and values of D_H/r_d that are 7% higher, than the predictions of a flat ΛCDM cosmological model with the best-fit Planck parameters. With our estimated statistical errors, the significance of this discrepancy is ≈2.5σ.

  6. Hip joint center localisation: A biomechanical application to hip arthroplasty population

    PubMed Central

    Bouffard, Vicky; Begon, Mickael; Champagne, Annick; Farhadnia, Payam; Vendittoli, Pascal-André; Lavigne, Martin; Prince, François

    2012-01-01

AIM: To determine hip joint center (HJC) location on a hip arthroplasty population, comparing predictive and functional approaches with radiographic measurements. METHODS: The distance between the HJC and the mid-pelvis was calculated and compared between the three approaches. The localisation error between the predictive and functional approach was compared using the radiographic measurements as the reference. The operated leg was compared to the non-operated leg. RESULTS: A significant difference was found for the distance between the HJC and the mid-pelvis when comparing the predictive and functional methods. The functional method leads to fewer errors. A statistical difference was found for the localization error between the predictive and functional methods. The functional method is twice as precise. CONCLUSION: Although less individualized, the predictive method is outperformed by the functional method, which improves HJC localization and should be used in three-dimensional gait analysis. PMID:22919569

  7. An alternative medicine, Agaricus blazei, may have induced severe hepatic dysfunction in cancer patients.

    PubMed

    Mukai, Hirofumi; Watanabe, Toru; Ando, Masashi; Katsumata, Noriyuki

    2006-12-01

We report three cases of patients with advanced cancer who showed severe hepatic damage, two of whom died of fulminant hepatitis. All the patients were taking Agaricus blazei (Himematsutake) extract, one of the most popular complementary and alternative medicines among Japanese cancer patients. In one patient, liver function recovered gradually after she stopped taking the Agaricus blazei, but she restarted taking it, which resulted in deterioration of the liver function again. The other patients who were admitted for severe liver damage had started taking the Agaricus blazei several days before admission. Although several other factors cannot be completely ruled out as the causes of liver damage, a strong causal relationship between the Agaricus blazei extract and liver damage was suggested and, at the least, taking the Agaricus blazei extract made the clinical decision-making process much more complicated. Doctors who are aware that their patients are taking the extract may accept it, probably because they believe there is no harm in a complementary and alternative medicine. When unexpected liver damage is documented, however, doctors should consider the use of the Agaricus blazei extract as one of its causal factors. It is necessary to evaluate many modes of complementary and alternative medicines, including the Agaricus blazei extract, in rigorous, scientifically designed and peer-reviewed clinical trials.

  8. Anti-complementary activity of enzyme-treated traditional Korean rice wine (Makgeolli) hydrolysates.

    PubMed

    Bae, Song Hwan; Choi, Jang Won; Ra, Kyung Soo; Yu, Kwang-Won; Shin, Kwang-Soon; Park, Sung Sun; Suh, Hyung Joo

    2012-06-01

    Makgeolli brewed from rice contains about 150 g kg(-1) alcohol and has a fragrance as well as an acidic and sweet taste. During the brewing process, by-products such as rice bran and brewery cake are produced. At the end of fermentation the matured mash is transferred to a filter cloth and the Makgeolli is squeezed out from the cake, leaving the lees of the mash. These by-products have continued to increase every year, resulting in an ecological problem. It is therefore important to develop new uses for them. The objective of this study was to use the by-products from the brewing of Makgeolli as a valuable functional food or nutraceutical. The anti-complementary activities of crude polysaccharides isolated from Cytolase hydrolysates of Makgeolli lees at concentrations of 1000 and 500 µg mL(-1) were 84.15 and 78.70% respectively. The activity of polysaccharide krestin (PSK) was 60.00% at 1000 µg mL(-1). The active polysaccharide obtained with Cytolase comprised mainly glucose and mannose (molar ratio 1.00:0.62). Glucose- and mannose-rich crude polysaccharides were isolated from the Cytolase hydrolysate of Makgeolli lees. The polysaccharides retain anti-complementary activity to enhance the immune system as a functional food or nutraceutical. Copyright © 2011 Society of Chemical Industry.

  9. Ligand-Controlled Integration of Zn and Tb by Photoactive Terpyridyl-Functionalized Tricarboxylates as Highly Selective and Sensitive Sensors for Nitrofurans.

    PubMed

    Zhou, Zhi-Hang; Dong, Wen-Wen; Wu, Ya-Pan; Zhao, Jun; Li, Dong-Sheng; Wu, Tao; Bu, Xian-Hui

    2018-04-02

The integration of terpyridyl and tricarboxylate functionality in a novel ligand allows concerted 3:1 stoichiometric assembly of size- and charge-complementary Zn2+/Tb3+ ions into a water-stable 3D luminescent framework (CTGU-8) capable of highly selective, sensitive, and recyclable sensing of nitrofurans.

  10. A Management Information System Design for a General Museum. Museum Data Bank Research Report No. 12.

    ERIC Educational Resources Information Center

    Scholtz, Sandra

    A management information system (MIS) is applied to a medium sized general museum to reflect the actual curatorial/registration functions. The recordkeeping functions of loan and conservation activities are examined since they too can be effectively handled by computer and constitute a complementary data base to the accession/catalog information.…

  11. Examining the Relations between Executive Function, Math, and Literacy during the Transition to Kindergarten: A Multi-Analytic Approach

    ERIC Educational Resources Information Center

    Schmitt, Sara A.; Geldhof, G. John; Purpura, David J.; Duncan, Robert; McClelland, Megan M.

    2017-01-01

    The present study explored the bidirectional and longitudinal associations between executive function (EF) and early academic skills (math and literacy) across 4 waves of measurement during the transition from preschool to kindergarten using 2 complementary analytical approaches: cross-lagged panel modeling and latent growth curve modeling (LCGM).…

  12. Estimating errors in least-squares fitting

    NASA Technical Reports Server (NTRS)

    Richter, P. H.

    1995-01-01

While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of the antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.
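The propagation of parameter uncertainty into a standard error of the fitted function itself can be sketched for the linear (straight-line) case. This is a generic illustration of the idea using NumPy's parameter covariance, not Richter's closed-form expressions:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, x.size)   # noisy straight line

# polyfit returns the parameter covariance matrix along with the fit
coeffs, cov = np.polyfit(x, y, deg=1, cov=True)
param_se = np.sqrt(np.diag(cov))                   # std. errors of slope, intercept

def fit_se(x0):
    """Standard error of the fitted value p(x0), by linear propagation
    of the parameter covariance through the gradient [x0, 1]."""
    v = np.array([x0, 1.0])
    return float(np.sqrt(v @ cov @ v))
```

Note that the standard error of the fit is smallest near the centroid of the data and grows toward the edges, which is one of the qualitative rules such analyses produce.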

  13. Errors in radial velocity variance from Doppler wind lidar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, H.; Barthelmie, R. J.; Doubrawa, P.

A high-fidelity lidar turbulence measurement technique relies on accurate estimates of radial velocity variance that are subject to both systematic and random errors determined by the autocorrelation function of radial velocity, the sampling rate, and the sampling duration. Our paper quantifies the effect of the volumetric averaging in lidar radial velocity measurements on the autocorrelation function and the dependence of the systematic and random errors on the sampling duration, using both statistically simulated and observed data. For current-generation scanning lidars and sampling durations of about 30 min and longer, during which the stationarity assumption is valid for atmospheric flows, the systematic error is negligible but the random error exceeds about 10%.

  14. Errors in radial velocity variance from Doppler wind lidar

    DOE PAGES

    Wang, H.; Barthelmie, R. J.; Doubrawa, P.; ...

    2016-08-29

A high-fidelity lidar turbulence measurement technique relies on accurate estimates of radial velocity variance that are subject to both systematic and random errors determined by the autocorrelation function of radial velocity, the sampling rate, and the sampling duration. Our paper quantifies the effect of the volumetric averaging in lidar radial velocity measurements on the autocorrelation function and the dependence of the systematic and random errors on the sampling duration, using both statistically simulated and observed data. For current-generation scanning lidars and sampling durations of about 30 min and longer, during which the stationarity assumption is valid for atmospheric flows, the systematic error is negligible but the random error exceeds about 10%.
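The dependence of the random error on the autocorrelation function and the record length can be illustrated with a textbook scaling: the relative random error of a sample variance is roughly sqrt(2τ/T), where τ is the integral time scale of the autocorrelation and T the record length. This is a generic sketch on synthetic correlated data, not the paper's lidar-specific error model:

```python
import numpy as np

def variance_random_error(signal, dt):
    """Relative random error of the sample variance of a stationary
    record: sqrt(2 * tau / T), with tau the integral time scale
    estimated from the autocorrelation up to its first zero crossing."""
    sig = signal - signal.mean()
    acf = np.correlate(sig, sig, mode="full")[sig.size - 1:]
    acf = acf / acf[0]                            # normalized autocorrelation
    first_zero = np.argmax(acf < 0) if (acf < 0).any() else acf.size
    tau = acf[:first_zero].sum() * dt             # integral time scale
    T = sig.size * dt                             # record length
    return np.sqrt(2.0 * tau / T)

# Correlated AR(1) synthetic series standing in for a radial velocity record.
rng = np.random.default_rng(1)
n, dt = 4000, 1.0
v = np.empty(n)
v[0] = 0.0
for i in range(1, n):
    v[i] = 0.9 * v[i - 1] + rng.normal()
err = variance_random_error(v, dt)
```

Because τ is fixed by the flow, the only way to shrink this random error is a longer record T, which is in tension with the stationarity assumption the abstract mentions.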

  15. Variability in Post-Error Behavioral Adjustment Is Associated with Functional Abnormalities in the Temporal Cortex in Children with ADHD

    ERIC Educational Resources Information Center

    Spinelli, Simona; Vasa, Roma A.; Joel, Suresh; Nelson, Tess E.; Pekar, James J.; Mostofsky, Stewart H.

    2011-01-01

    Background: Error processing is reflected, behaviorally, by slower reaction times (RT) on trials immediately following an error (post-error). Children with attention-deficit hyperactivity disorder (ADHD) fail to show RT slowing and demonstrate increased intra-subject variability (ISV) on post-error trials. The neural correlates of these behavioral…

  16. Inhibition of BRCA2 and Thymidylate Synthase Creates Multidrug Sensitive Tumor Cells via the Induction of Combined "Complementary Lethality".

    PubMed

    Rytelewski, Mateusz; Ferguson, Peter J; Maleki Vareki, Saman; Figueredo, Rene; Vincent, Mark; Koropatnick, James

    2013-03-12

    A high mutation rate leading to tumor cell heterogeneity is a driver of malignancy in human cancers. Paradoxically, however, genomic instability can also render tumors vulnerable to therapeutic attack. Thus, targeting DNA repair may induce an intolerable level of DNA damage in tumor cells. BRCA2 mediates homologous recombination repair, and BRCA2 polymorphisms increase cancer risk. However, tumors with BRCA2 mutations respond better to chemotherapy and are associated with improved patient prognosis. Thymidylate synthase (TS) is also involved in DNA maintenance and generates cellular thymidylate. We determined that antisense downregulation of BRCA2 synergistically potentiated drugs with mechanisms of action related to BRCA2 function (cisplatin, melphalan), a phenomenon we named "complementary lethality." TS knockdown induced complementary lethality to TS-targeting drugs (5-FUdR and pemetrexed) but not DNA cross-linking agents. Combined targeting of BRCA2 and TS induced complementary lethality to both DNA-damaging and TS-targeting agents, thus creating multidrug sensitive tumors. In addition, we demonstrated for the first time that simultaneous downregulation of both targets induced combined complementary lethality to multiple mechanistically different drugs in the same cell population. In this study, we propose and define the concept of "complementary lethality" and show that actively targeting BRCA2 and TS is of potential therapeutic benefit in multidrug treatment of human tumors. This work has contributed to the development of a BRCA2-targeting antisense oligodeoxynucleotide (ASO) "BR-1" which we will test in vivo in combination with our TS-targeting ASO "SARI 83" and attempt early clinical trials in the future. Molecular Therapy - Nucleic Acids (2013) 2, e78; doi:10.1038/mtna.2013.7; published online 12 March 2013.

  17. Approximating Exponential and Logarithmic Functions Using Polynomial Interpolation

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.; Yang, Yajun

    2017-01-01

    This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is…

  18. Standard Errors of Equating Differences: Prior Developments, Extensions, and Simulations

    ERIC Educational Resources Information Center

    Moses, Tim; Zhang, Wenmin

    2011-01-01

    The purpose of this article was to extend the use of standard errors for equated score differences (SEEDs) to traditional equating functions. The SEEDs are described in terms of their original proposal for kernel equating functions and extended so that SEEDs for traditional linear and traditional equipercentile equating functions can be computed.…

  19. Refractive errors.

    PubMed

    Schiefer, Ulrich; Kraus, Christina; Baumbach, Peter; Ungewiß, Judith; Michels, Ralf

    2016-10-14

    All over the world, refractive errors are among the most frequently occurring treatable disturbances of visual function. Ametropias have a prevalence of nearly 70% among adults in Germany and are thus of great epidemiologic and socio-economic relevance. In the light of their own clinical experience, the authors review pertinent articles retrieved by a selective literature search employing the terms "ametropia," "anisometropia," "refraction," "visual acuity," and "epidemiology." In 2011, only 31% of persons over age 16 in Germany did not use any kind of visual aid; 63.4% wore eyeglasses and 5.3% wore contact lenses. Refractive errors were the most common reason for consulting an ophthalmologist, accounting for 21.1% of all outpatient visits. A pinhole aperture (stenopeic slit) is a suitable instrument for the basic diagnostic evaluation of impaired visual function due to optical factors. Spherical refractive errors (myopia and hyperopia), cylindrical refractive errors (astigmatism), unequal refractive errors in the two eyes (anisometropia), and the typical optical disturbance of old age (presbyopia) cause specific functional limitations and can be detected by a physician who does not need to be an ophthalmologist. Simple functional tests can be used in everyday clinical practice to determine quickly, easily, and safely whether the patient is suffering from a benign and easily correctable type of visual impairment, or whether there are other, more serious underlying causes.

  20. Analyzing the errors of DFT approximations for compressed water systems

    NASA Astrophysics Data System (ADS)

    Alfè, D.; Bartók, A. P.; Csányi, G.; Gillan, M. J.

    2014-07-01

    We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm^3, where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mE_h ≃ 15 meV/monomer for the liquid and the clusters.

  1. Analyzing the errors of DFT approximations for compressed water systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alfè, D.; London Centre for Nanotechnology, UCL, London WC1H 0AH; Thomas Young Centre, UCL, London WC1H 0AH

    We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm^3, where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mE_h ≃ 15 meV/monomer for the liquid and the clusters.

  2. Modelling default and likelihood reasoning as probabilistic

    NASA Technical Reports Server (NTRS)

    Buntine, Wray

    1990-01-01

    A probabilistic analysis of plausible reasoning about defaults and about likelihood is presented. 'Likely' and 'by default' are in fact treated as duals in the same sense as 'possibility' and 'necessity'. To model these four forms probabilistically, a logic QDP and its quantitative counterpart DP are derived that allow qualitative and corresponding quantitative reasoning. Consistency and consequence results for subsets of the logics are given that require at most a quadratic number of satisfiability tests in the underlying propositional logic. The quantitative logic shows how to track the propagation error inherent in these reasoning forms. The methodology and sound framework of the system highlights their approximate nature, the dualities, and the need for complementary reasoning about relevance.

  3. A random access memory immune to single event upset using a T-Resistor

    DOEpatents

    Ochoa, A. Jr.

    1987-10-28

    In a random access memory cell, a resistance "T" decoupling network in each leg of the cell reduces random errors caused by the interaction of energetic ions with the semiconductor material forming the cell. The cell comprises two parallel legs each containing a series pair of complementary MOS transistors having a common gate connected to the node between the transistors of the opposite leg. The decoupling network in each leg is formed by a series pair of resistors between the transistors together with a third resistor interconnecting the junction between the pair of resistors and the gate of the transistor pair forming the opposite leg of the cell. 4 figs.

  4. [The heuristics of reaching a diagnosis].

    PubMed

    Wainstein, Eduardo

    2009-12-01

    Making a diagnosis in medicine is a complex process in which many cognitive and psychological issues are involved. After the first encounter with the patient, an unconscious process ensues to suspect the presence of a particular disease. Usually, complementary tests are requested to confirm the clinical suspicion. The interpretation of requested tests can be biased by the clinical diagnosis that was considered in the first encounter with the patient. The awareness of these sources of error is essential in the interpretation of the findings that will eventually lead to a final diagnosis. This article discusses some aspects of the heuristics involved in the adjudication of priory probabilities and provides a brief review of current concepts of the reasoning process.

  5. SNR characteristics of 850-nm OEIC receiver with a silicon avalanche photodetector.

    PubMed

    Youn, Jin-Sung; Lee, Myung-Jae; Park, Kang-Yeob; Rücker, Holger; Choi, Woo-Young

    2014-01-13

    We investigate signal-to-noise ratio (SNR) characteristics of an 850-nm optoelectronic integrated circuit (OEIC) receiver fabricated with standard 0.25-µm SiGe bipolar complementary metal-oxide-semiconductor (BiCMOS) technology. The OEIC receiver is composed of a Si avalanche photodetector (APD) and BiCMOS analog circuits including a transimpedance amplifier with DC-balanced buffer, a tunable equalizer, a limiting amplifier, and an output buffer with 50-Ω loads. We measure APD SNR characteristics dependence on the reverse bias voltage as well as BiCMOS circuit noise characteristics. From these, we determine the SNR characteristics of the entire OEIC receiver, and finally, the results are verified with bit-error rate measurement.

  6. Mathematics in chemistry: indeterminate forms and their meaning

    NASA Astrophysics Data System (ADS)

    Segurado, Manuel A. P.; Silva, Margarida F. B.; Castro, Rita

    2011-07-01

    The mathematical language and its tools are complementary to the formalism of chemistry, particularly at an advanced level. It is thus crucial for its understanding that students acquire a solid knowledge of Calculus and know how to apply it. The frequent occurrence of indeterminate forms in multiple areas, particularly in Physical Chemistry, justifies the need to properly understand the limiting process in such cases. This article emphasizes the importance of L'Hôpital's rule as a practical, though often neglected, tool for obtaining the more common indeterminate limits, through specific examples such as radioactive decay, spectrophotometric error, Planck's radiation law, second-order kinetics, and consecutive reactions.
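One of the indeterminate forms mentioned above (consecutive reactions) can be checked symbolically. For consecutive first-order reactions A → B → C, the intermediate concentration B(t) = A0·k1·(e^(-k1·t) − e^(-k2·t))/(k2 − k1) becomes 0/0 as k2 → k1; the limit, obtainable by L'Hôpital's rule, is A0·k1·t·e^(-k1·t). A quick symbolic verification, with all symbol names chosen for this sketch:

```python
import sympy as sp

t, k1, k2, A0 = sp.symbols("t k1 k2 A0", positive=True)

# Intermediate-species concentration for A -> B -> C with rate constants k1, k2;
# the expression is indeterminate (0/0) when k2 -> k1.
B = A0 * k1 * (sp.exp(-k1 * t) - sp.exp(-k2 * t)) / (k2 - k1)
limit_B = sp.limit(B, k2, k1)
print(sp.simplify(limit_B))
```

The computed limit agrees with the L'Hôpital result A0·k1·t·e^(-k1·t), the well-known equal-rate-constants case.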

  7. Random access memory immune to single event upset using a T-resistor

    DOEpatents

    Ochoa, Jr., Agustin

    1989-01-01

    In a random access memory cell, a resistance "T" decoupling network in each leg of the cell reduces random errors caused by the interaction of energetic ions with the semiconductor material forming the cell. The cell comprises two parallel legs each containing a series pair of complementary MOS transistors having a common gate connected to the node between the transistors of the opposite leg. The decoupling network in each leg is formed by a series pair of resistors between the transistors together with a third resistor interconnecting the junction between the pair of resistors and the gate of the transistor pair forming the opposite leg of the cell.

  8. Study of a co-designed decision feedback equalizer, deinterleaver, and decoder

    NASA Technical Reports Server (NTRS)

    Peile, Robert E.; Welch, Loyd

    1990-01-01

    A technique that promises better-quality data from band-limited channels at lower received power in digital transmission systems is presented. Data transmission in such systems often suffers from intersymbol interference (ISI) and noise. Two separate techniques, channel coding and equalization, have driven considerable advances in communication systems, and both concern themselves with removing the undesired effects of a communication channel. Equalizers mitigate ISI, whereas coding schemes are used to incorporate error correction. In the past, most of the research in these two areas has been carried out separately. However, the individual techniques have strengths and weaknesses that are complementary in many applications: an integrated approach realizes gains in excess of those of a simple juxtaposition. Coding schemes have been successfully used in cascade with linear equalizers, which in the absence of ISI provide excellent performance. However, when both ISI and the noise level are relatively high, nonlinear receivers like the decision feedback equalizer (DFE) perform better. The DFE has its drawbacks: it suffers from error propagation. The technique presented here takes advantage of interleaving to integrate the two approaches so that the error propagation in the DFE can be reduced with the help of the error correction provided by the decoder. The results of simulations carried out for both binary and non-binary channels confirm that significant gain can be obtained by codesigning the equalizer and decoder. Although only systems with time-invariant channels and a simple DFE with linear filters were examined, the technique is fairly general and can easily be modified for more sophisticated equalizers to obtain even larger gains.

  9. Design of a Sensitive and Selective Electrochemical Aptasensor for the Determination of the Complementary cDNA of miRNA-145 Based on the Intercalation and Electrochemical Reduction of Doxorubicin.

    PubMed

    Mohamadi, Maryam; Mostafavi, Ali; Torkzadeh-Mahani, Masoud

    2017-11-01

    The aim of this research was the determination of a microRNA (miRNA) using a DNA electrochemical aptasensor. In this biosensor, the complementary cDNA of miRNA-145 (a sense RNA transcript) was the target strand and the cDNA of miRNA-145 was the probe strand. Both cDNAs can be the product of the reverse transcriptase-polymerase chain reaction of miRNA. The proposed aptasensor's function was based on the hybridization of target strands with probes immobilized on the surface of a working electrode and the subsequent intercalation of doxorubicin (DOX) molecules functioning as the electroactive indicators of any double strands that formed. Electrochemical transduction was performed by measuring the cathodic current resulting from the electrochemical reduction of the intercalated molecules at the electrode surface. In the experiment, because many DOX molecules accumulated on each target strand on the electrode surface, amplification was inherently easy, without a need for enzymatic or complicated amplification strategies. The proposed aptasensor also had the excellent ability to regenerate as a result of the melting of the DNA duplex. Moreover, the use of DNA probe strands obviated the challenges of working with an RNA probe, such as sensitivity to RNase enzyme. In addition to the linear relationship between the electrochemical signal and the concentration of the target strands that ranged from 2.0 to 80.0 nM with an LOD of 0.27 nM, the proposed biosensor was clearly capable of distinguishing between complementary (target strand) and noncomplementary sequences. The presented biosensor was successfully applied for the quantification of DNA strands corresponding to miRNA-145 in human serum samples.

  10. A Content Analysis of Infant and Toddler Food Advertisements in Taiwanese Popular Pregnancy and Early Parenting Magazines.

    PubMed

    Chen, Yi-Chun; Chang, Jung-Su; Gong, Yu-Tang

    2015-08-01

    Mothers who are exposed to formula advertisements (ads) are less likely to initiate breastfeeding and more likely to breastfeed for a shorter duration than other mothers. The purpose of this study was to examine infant and toddler food ads in pregnancy and early parenting magazines. A content analysis of infant and toddler food ads printed in 12 issues of 4 magazines published in 2011 was performed. Coding categories of ads included product category, advertisement category, marketing information, and advertising appeal. The target age and health-related message of each product were coded. The researchers identified 756 infant and toddler food ads in the magazines. Compared with complementary food ads, formula product ads used more marketing strategies such as antenatal classes and baby contests to influence consumers and promote products. Nutritional quality and child health benefits were the two most frequently used advertising appeals. In addition, this study identified 794 formula products and 400 complementary food products; 42.8% of the complementary food products were intended for 4-month-old infants. Furthermore, 91.9% of the ads for formula products and 81% of the ads for complementary food products contained claims concerning health function or nutrient content. Taiwanese pregnancy and early parenting magazines contain numerous infant and toddler food ads. These ads generally use health-related claims regarding specific nutrient content and health functions to promote infant and toddler foods. Health professionals should provide more information to parents on the differences between breast milk and formula milk, and they should be aware of the potential effect of infant and toddler food ads on parents' infant feeding decisions. © The Author(s) 2015.

  11. A Multi-Objective Decision Making Approach for Solving the Image Segmentation Fusion Problem.

    PubMed

    Khelifi, Lazhar; Mignotte, Max

    2017-08-01

    Image segmentation fusion is defined as the set of methods which aim at merging several image segmentations in a manner that takes full advantage of the complementarity of each one. Previous relevant research in this field has been impeded by the difficulty of identifying an appropriate single segmentation fusion criterion providing the best possible, i.e., the most informative, result of fusion. In this paper, we propose a new model of image segmentation fusion based on multi-objective optimization which can mitigate this problem, to obtain a final improved result of segmentation. Our fusion framework incorporates the dominance concept in order to efficiently combine and optimize two complementary segmentation criteria, namely, the global consistency error and the F-measure (precision-recall) criterion. To this end, we present a hierarchical and efficient way to optimize the multi-objective consensus energy function related to this fusion model, which exploits a simple and deterministic iterative relaxation strategy combining the different image segments. This step is followed by a decision-making task based on the so-called "technique for order preference by similarity to ideal solution" (TOPSIS). Results obtained on two publicly available databases with manual ground-truth segmentations clearly show that our multi-objective energy-based model gives better results than the classical mono-objective one.

  12. SPIRE: Systematic protein investigative research environment.

    PubMed

    Kolker, Eugene; Higdon, Roger; Morgan, Phil; Sedensky, Margaret; Welch, Dean; Bauman, Andrew; Stewart, Elizabeth; Haynes, Winston; Broomall, William; Kolker, Natali

    2011-12-10

    The SPIRE (Systematic Protein Investigative Research Environment) provides web-based experiment-specific mass spectrometry (MS) proteomics analysis (https://www.proteinspire.org). Its emphasis is on usability and integration of the best analytic tools. SPIRE provides an easy to use web-interface and generates results in both interactive and simple data formats. In contrast to run-based approaches, SPIRE conducts the analysis based on the experimental design. It employs novel methods to generate false discovery rates and local false discovery rates (FDR, LFDR) and integrates the best and complementary open-source search and data analysis methods. The SPIRE approach of integrating X!Tandem, OMSSA and SpectraST can produce an increase in protein IDs (52-88%) over current combinations of scoring and single search engines while also providing accurate multi-faceted error estimation. One of SPIRE's primary assets is combining the results with data on protein function, pathways and protein expression from model organisms. We demonstrate some of SPIRE's capabilities by analyzing mitochondrial proteins from the wild type and 3 mutants of C. elegans. SPIRE also connects results to publicly available proteomics data through its Model Organism Protein Expression Database (MOPED). SPIRE can also provide analysis and annotation for user supplied protein ID and expression data. Copyright © 2011. Published by Elsevier B.V.

  13. Time domain reshuffling for OFDM based indoor visible light communication systems.

    PubMed

    You, Xiaodi; Chen, Jian; Yu, Changyuan; Zheng, Huanhuan

    2017-05-15

    For orthogonal frequency division multiplexing (OFDM) based indoor visible light communication (VLC) systems, partial non-ideal transmission conditions such as insufficient guard intervals and a dispersive channel can result in severe inter-symbol crosstalk (ISC). By deriving from the inverse Fourier transform, we present a novel time domain reshuffling (TDR) concept for both DC-biased optical (DCO-) and asymmetrically clipped optical (ACO-) OFDM VLC systems. By using only simple operations in the frequency domain, potential high peaks can be relocated within each OFDM symbol to alleviate ISC. To simplify the system, we also propose an effective unified design of the TDR schemes for both DCO- and ACO-OFDM. Based on Monte-Carlo simulations, we demonstrate the statistical distribution of the signal high peak values and the complementary cumulative distribution function of the peak-to-average power ratio under different cases for comparison. Simulation results indicate improved bit error rate (BER) performance by adopting TDR to counteract ISC deterioration. For example, for binary phase shift keying at a BER of 10^-3, the signal to noise ratio gains are ~1.6 dB and ~6.6 dB for DCO- and ACO-OFDM, respectively, with ISC of 1/64. We also show a reliable transmission by adopting TDR for rectangle 8-quadrature amplitude modulation with ISC of < 1/64.
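The complementary cumulative distribution function (CCDF) of the peak-to-average power ratio (PAPR) mentioned above is straightforward to estimate by Monte-Carlo simulation. The sketch below uses generic QPSK OFDM symbols with an illustrative subcarrier count and threshold; it is not the paper's TDR scheme, only the baseline statistic it evaluates.

```python
import numpy as np

# Empirical CCDF of the PAPR of random QPSK OFDM symbols.
rng = np.random.default_rng(1)
n_sym, n_sub = 2000, 64   # number of OFDM symbols and subcarriers (illustrative)

# Unit-power QPSK constellation points on each subcarrier.
qpsk = (rng.choice([-1, 1], (n_sym, n_sub))
        + 1j * rng.choice([-1, 1], (n_sym, n_sub))) / np.sqrt(2)

time_sig = np.fft.ifft(qpsk, axis=1)     # time-domain OFDM symbols
power = np.abs(time_sig) ** 2
papr_db = 10 * np.log10(power.max(axis=1) / power.mean(axis=1))

threshold_db = 8.0
ccdf = np.mean(papr_db > threshold_db)   # CCDF value: P(PAPR > threshold)
print(ccdf)
```

Sweeping `threshold_db` over a grid yields the full CCDF curve; peak-relocation schemes such as the paper's TDR aim to push this curve leftward (lower PAPR at a given probability).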

  14. Estimation of hydrolysis rate constants for carbamates ...

    EPA Pesticide Factsheets

    Cheminformatics-based tools, such as the Chemical Transformation Simulator under development in EPA’s Office of Research and Development, are being increasingly used to evaluate chemicals for their potential to degrade in the environment or be transformed through metabolism. Hydrolysis represents a major environmental degradation pathway; unfortunately, only a small fraction of hydrolysis rates for about 85,000 chemicals on the Toxic Substances Control Act (TSCA) inventory are in the public domain, making it critical to develop in silico approaches to estimate hydrolysis rate constants. In this presentation, we compare three complementary approaches to estimate hydrolysis rates for carbamates, an important chemical class widely used in agriculture as pesticides, herbicides and fungicides. Fragment-based Quantitative Structure Activity Relationships (QSARs) using Hammett-Taft sigma constants are widely published and implemented for relatively simple functional groups such as carboxylic acid esters, phthalate esters, and organophosphate esters, and we extend these to carbamates. We also develop a pKa based model and a quantitative structure property relationship (QSPR) model, and evaluate them against measured rate constants using R² and root-mean-square (RMS) error. Our work shows that for our relatively small sample size of carbamates, a Hammett-Taft based fragment model performs best, followed by a pKa and a QSPR model. This presentation compares three comp

  15. Impact of Corrections to the Spallings Volume Calculation on Waste Isolation Pilot Plant Performance Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kicker, Dwayne Curtis; Herrick, Courtney G; Zeitler, Todd

    2015-11-01

    The numerical code DRSPALL (from direct release spallings) is written to calculate the volume of Waste Isolation Pilot Plant solid waste subject to material failure and transport to the surface (i.e., spallings) as a result of a hypothetical future inadvertent drilling intrusion into the repository. An error in the implementation of the DRSPALL finite difference equations was discovered and documented in a software problem report in accordance with the quality assurance procedure for software requirements. This paper describes the corrections to DRSPALL and documents the impact of the new spallings data from the modified DRSPALL on previous performance assessment calculations. Updated performance assessments result in more simulations with spallings, which generally translates to an increase in spallings releases to the accessible environment. Total normalized radionuclide releases using the modified DRSPALL data were determined by forming the summation of releases across each potential release pathway, namely borehole cuttings and cavings releases, spallings releases, direct brine releases, and transport releases. Because spallings releases are not a major contributor to the total releases, the updated performance assessment calculations of overall mean complementary cumulative distribution functions for total releases are virtually unchanged. Therefore, the corrections to the spallings volume calculation did not impact Waste Isolation Pilot Plant performance assessment calculation results.

  16. Characterisation of case depth in induction-hardened medium carbon steels based on magnetic minor hysteresis loop measurement technique

    NASA Astrophysics Data System (ADS)

    He, Cunfu; Yang, Meng; Liu, Xiucheng; Wang, Xueqian; Wu, Bin

    2017-11-01

    The magnetic hysteresis behaviours of ferromagnetic materials vary with heat treatment-induced microstructural changes. In this study, the minor hysteresis loop measurement technique was used to quantitatively characterise the case depth in two types of medium carbon steels. Firstly, high-frequency induction quenching was applied to rod samples to increase the volume fraction of hard martensite relative to the soft ferrite/pearlite (or sorbite) in the sample surface. In order to determine the effective and total case depth, a complementary error function was employed to fit the measured hardness-depth profiles of the induction-hardened samples. The cluster of minor hysteresis loops together with the tangential magnetic field (TMF) were recorded from all the samples, and a comparative study was conducted among three kinds of magnetic parameters sensitive to the variation of case depth. Compared to the parameters extracted from an individual minor loop and the distortion factor of the TMF, the magnitude of the third-order harmonic of the TMF was more suitable for indicating the variation in case depth. Two new minor-loop coefficients were introduced by combining two magnetic parameters with cumulative statistics of the cluster of minor loops. The experimental results showed that the two coefficients varied linearly and monotonically with the case depth within the carefully selected magnetisation region.
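A complementary-error-function fit to a hardness-depth profile, as used above to determine case depth, can be sketched as follows. The model form, parameter names (`h_core`, `h_surf`, `d0`, `w`), and all numbers are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def hardness(d, h_core, h_surf, d0, w):
    """erfc-shaped profile: h_surf near the surface, h_core deep inside.
    d0 sets the transition depth and w its width."""
    return h_core + 0.5 * (h_surf - h_core) * erfc((d - d0) / w)

# Synthetic hardness-vs-depth measurements (depth in mm, hardness in HV).
depth = np.linspace(0.0, 4.0, 40)
rng = np.random.default_rng(2)
hv = hardness(depth, 250.0, 650.0, 1.5, 0.4) + rng.normal(0.0, 5.0, depth.size)

popt, pcov = curve_fit(hardness, depth, hv, p0=[300.0, 600.0, 1.0, 0.5])
print(popt)   # fitted [h_core, h_surf, d0, w]
```

The effective case depth then follows from the fitted curve, e.g. the depth at which `hardness(d, *popt)` crosses a chosen hardness threshold.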

  17. Novel approaches to estimating the turbulent kinetic energy dissipation rate from low- and moderate-resolution velocity fluctuation time series

    NASA Astrophysics Data System (ADS)

    Wacławczyk, Marta; Ma, Yong-Feng; Kopeć, Jacek M.; Malinowski, Szymon P.

    2017-11-01

    In this paper we propose two approaches to estimating the turbulent kinetic energy (TKE) dissipation rate, based on the zero-crossing method by Sreenivasan et al. (1983). The original formulation requires a fine resolution of the measured signal, down to the smallest dissipative scales. However, due to finite sampling frequency, as well as measurement errors, velocity time series obtained from airborne experiments are characterized by the presence of effective spectral cutoffs. In contrast to the original formulation, the new approaches are suitable for use with signals originating from airborne experiments. Their suitability is tested using measurement data obtained during the Physics of Stratocumulus Top (POST) airborne research campaign as well as synthetic turbulence data. They appear useful and complementary to existing methods. We show that the number-of-crossings-based approaches respond differently to errors due to finite sampling and finite averaging than the classical power-spectral method. Hence, their application to short signals and small sampling frequencies is particularly interesting, as it can increase the robustness of turbulent kinetic energy dissipation rate retrieval.
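The raw ingredient of zero-crossing methods is simply the number of sign changes of the velocity fluctuation signal. A minimal sketch with a synthetic, mildly smoothed signal (the smoothing stands in for the effective spectral cutoff discussed above; all parameters are illustrative):

```python
import numpy as np

# Synthetic velocity-fluctuation signal: white noise with a short moving
# average, mimicking a low-pass (spectral-cutoff) filtered measurement.
rng = np.random.default_rng(3)
u = rng.normal(0.0, 1.0, 10_000)
u = np.convolve(u, np.ones(5) / 5, mode="same")

# Count zero crossings: positions where the sign bit changes between samples.
sign = np.signbit(u).astype(np.int8)
crossings = np.count_nonzero(np.diff(sign))
rate = crossings / u.size   # crossings per sample
print(crossings, rate)
```

In the Sreenivasan et al. framework the crossing rate is tied to a characteristic length scale of the turbulence, from which the dissipation rate is inferred; low-pass filtering of the signal visibly reduces the crossing rate, which is exactly the finite-resolution effect the paper's modified approaches must account for.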

  18. A New Survey on Self-Tuning Integrated Low-Cost GPS/INS Vehicle Navigation System in Harsh Environment

    NASA Astrophysics Data System (ADS)

    Navidi, N.; Landry, R., Jr.

    2015-08-01

    Nowadays, Global Positioning System (GPS) receivers are aided by complementary radio navigation systems and Inertial Navigation Systems (INS) to obtain more accuracy and robustness in land vehicular navigation. The Extended Kalman Filter (EKF) is the conventional method for estimating the position, velocity, and attitude of the navigation system when INS measurements are fused with GPS data. However, the use of low-cost Inertial Measurement Units (IMUs) based on Micro-Electro-Mechanical Systems (MEMS) in land navigation systems reduces the precision and stability of the navigation solution because of their inherent errors. The main goal of this paper is to provide a new model for fusing low-cost IMU and GPS measurements. The proposed model is based on an EKF aided by Fuzzy Inference Systems (FIS), a promising way to solve the problems mentioned above: it uses the parameters of the measurement noise to adjust the measurement and process noise covariances. The simulation results show the efficiency of the proposed method in reducing navigation system errors compared with the EKF.
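The abstract does not specify the FIS rules, but a common innovation-based adaptation scheme, assumed here purely for illustration, compares the observed innovation variance over a sliding window with its predicted value and scales the measurement-noise covariance through simple triangular fuzzy memberships:

```python
import numpy as np

def membership(ratio):
    """Triangular memberships for the mismatch ratio
    (observed innovation variance / predicted innovation variance).
    The breakpoints 1.0 and 1.5 are illustrative choices."""
    low = np.clip((1.5 - ratio) / 0.5, 0.0, 1.0)    # ratio ~ 1: R is fine
    high = np.clip((ratio - 1.0) / 0.5, 0.0, 1.0)   # ratio >> 1: R too small
    return low, high

def adapt_r(innov_window, s_pred, r, gain=2.0):
    """Scale the measurement-noise variance r by a defuzzified factor:
    weighted average of 'keep' (x1) and 'inflate' (x gain) actions."""
    ratio = np.var(innov_window) / s_pred
    low, high = membership(ratio)
    return r * (low * 1.0 + high * gain) / (low + high + 1e-12)
```

Inside an EKF loop, the adapted r would replace the fixed measurement-noise entry at each step, de-weighting the GPS updates whenever the innovations are larger than the filter expects.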

  19. Inductive creation of an annotation schema for manually indexing clinical conditions from emergency department reports

    PubMed Central

    Chapman, Wendy W.; Dowling, John N.

    2006-01-01

    Evaluating automated indexing applications requires comparing automatically indexed terms against manual reference standard annotations. However, there are no standard guidelines for determining which words from a textual document to include in manual annotations, and the vague task can result in substantial variation among manual indexers. We applied grounded theory to emergency department reports to create an annotation schema representing syntactic and semantic variables that could be annotated when indexing clinical conditions. We describe the annotation schema, which includes variables representing medical concepts (e.g., symptom, demographics), linguistic form (e.g., noun, adjective), and modifier types (e.g., anatomic location, severity). We measured the schema’s quality and found: (1) the schema was comprehensive enough to be applied to 20 unseen reports without changes to the schema; (2) agreement between author annotators applying the schema was high, with an F measure of 93%; and (3) an error analysis showed that the authors made complementary errors when applying the schema, demonstrating that the schema incorporates both linguistic and medical expertise. PMID:16230050

  20. Experimental studies on the effect of automation on pilot situational awareness in the datalink ATC environment

    NASA Technical Reports Server (NTRS)

    Hahn, Edward C.; Hansman, R. J., Jr.

    1992-01-01

    An experiment to study how automation, when used in conjunction with datalink for the delivery of ATC clearance amendments, affects the situational awareness of aircrews was conducted. The study was focused on the relationship of situational awareness to automated Flight Management System (FMS) programming of datalinked clearances and the readback of ATC clearances. Situational awareness was tested by issuing nominally unacceptable ATC clearances and measuring whether the error was detected by the subject pilots. The experiment also varied the mode of clearance delivery: Verbal, Textual, and Graphical. The error detection performance and pilot preference results indicate that the automated programming of the FMS may be superior to manual programming. It is believed that automated FMS programming may relieve some of the cognitive load, allowing pilots to concentrate on the strategic implications of a clearance amendment. Also, readback appears to have value, but the small sample size precludes a definite conclusion. Furthermore, because textual and graphical modes of delivery offer different but complementary advantages for cognitive processing, a combination of these modes of delivery may be advantageous in a datalink presentation.

  1. A Circuit-Based Neural Network with Hybrid Learning of Backpropagation and Random Weight Change Algorithms

    PubMed Central

    Yang, Changju; Kim, Hyongsuk; Adhikari, Shyam Prasad; Chua, Leon O.

    2016-01-01

    A hybrid learning method combining software-based backpropagation (BP) learning and hardware-based random weight change (RWC) learning is proposed for the development of circuit-based neural networks. Backpropagation is known as one of the most efficient learning algorithms, but its weak point is that its hardware implementation is extremely difficult. The RWC algorithm, which is very easy to implement in hardware, requires too many iterations to learn. The proposed algorithm is a hybrid of the two: the main learning is first performed with a software version of the BP algorithm, and the learned weights are then transplanted onto a hardware version of the neural circuit. At the time of weight transplantation, a significant output error can occur owing to characteristic differences between the software and the hardware. In the proposed method, this error is reduced by complementary learning with the RWC algorithm, implemented in simple hardware. The usefulness of the proposed hybrid learning system is verified via simulations on several classical learning problems. PMID:28025566

  2. Assessment of the Accuracy of the Bethe-Salpeter (BSE/GW) Oscillator Strengths.

    PubMed

    Jacquemin, Denis; Duchemin, Ivan; Blondel, Aymeric; Blase, Xavier

    2016-08-09

    Aiming to assess the accuracy of the oscillator strengths determined at the BSE/GW level, we performed benchmark calculations using three complementary sets of molecules. In the first, we considered ∼80 states in Thiel's set of compounds and compared the BSE/GW oscillator strengths to recently determined ADC(3/2) and CC3 reference values. The second set includes the oscillator strengths of the low-lying states of 80 medium to large dyes for which we have determined CC2/aug-cc-pVTZ values. The third set contains 30 anthraquinones for which experimental oscillator strengths are available. We find that BSE/GW accurately reproduces the trends for all series with excellent correlation coefficients to the benchmark data and generally very small errors. Indeed, for Thiel's sets, the BSE/GW values are more accurate (using CC3 references) than both CC2 and ADC(3/2) values on both absolute and relative scales. For all three sets, BSE/GW errors also tend to be nicely spread with almost equal numbers of positive and negative deviations as compared to reference values.

  3. Multi-physics modelling contributions to investigate the atmospheric cosmic rays on the single event upset sensitivity along the scaling trend of CMOS technologies.

    PubMed

    Hubert, G; Regis, D; Cheminet, A; Gatti, M; Lacoste, V

    2014-10-01

    Particles originating from primary cosmic radiation that hit the Earth's atmosphere give rise to a complex field of secondary particles, including neutrons, protons, muons, pions, etc. Since the 1980s it has been known that terrestrial cosmic rays can penetrate the natural shielding of buildings, equipment and circuit packages and induce soft errors in integrated circuits. Recently, research has shown that commercial static random access memories are now so small and sufficiently sensitive that single event upsets (SEUs) may be induced by the electronic stopping of a proton. With continued advances in process scaling, this downward trend in sensitivity is expected to continue, and muon-induced soft errors have been predicted for nano-electronics. This paper describes these effects in specific cases such as neutron-, proton- and muon-induced SEUs observed in complementary metal-oxide semiconductor devices. The results make it possible to investigate the sensitivity of technology nodes along the scaling trend.

  4. Focusing in Arthurs-Kelly-type joint measurements with correlated probes.

    PubMed

    Bullock, Thomas J; Busch, Paul

    2014-09-19

    Joint approximate measurement schemes of position and momentum provide us with a means of inferring pieces of complementary information if we allow for the irreducible noise required by quantum theory. One such scheme is given by the Arthurs-Kelly model, where information about a system is extracted via indirect probe measurements, assuming separable uncorrelated probes. Here, following Di Lorenzo [Phys. Rev. Lett. 110, 120403 (2013)], we extend this model to both entangled and classically correlated probes, achieving full generality. We show that correlated probes can produce more precise joint measurement outcomes than the same probes can achieve if applied alone to realize a position or momentum measurement. This phenomenon of focusing may be useful where one tries to optimize measurements with limited physical resources. Contrary to Di Lorenzo's claim, we find that there are no violations of Heisenberg's error-disturbance relation in these generalized Arthurs-Kelly models. This is simply due to the fact that, as we show, the measured observable of the system under consideration is covariant under phase space translations and as such is known to obey a tight joint measurement error relation.

  5. X/Ka Celestial Frame Improvements: Vision to Reality

    NASA Technical Reports Server (NTRS)

    Jacobs, C. S.; Bagri, D. S.; Britcliffe, M. J.; Clark, J. E.; Franco, M. M.; Garcia-Miro, C.; Goodhart, C. E.; Horiuchi, S.; Lowe, S. T.; Moll, V. E.; hide

    2010-01-01

    In order to extend the International Celestial Reference Frame from its S/X-band (2.3/8.4 GHz) basis to a complementary frame at X/Ka-band (8.4/32 GHz), we began in mid-2005 an ongoing series of X/Ka observations using NASA's Deep Space Network (DSN) radio telescopes. Over the course of 47 sessions, we have detected 351 extra-galactic radio sources covering the full 24 hours of right ascension and declinations down to -45 degrees. Angular source position accuracy is at the part-per-billion level. We developed an error budget which shows that the main errors arise from limited sensitivity, mismodeling of the troposphere, uncalibrated instrumental effects, and the lack of a southern baseline. Recent work has improved sensitivity by improving pointing calibrations and by increasing the data rate four-fold. Troposphere calibration has been demonstrated at the mm-level. Construction of instrumental phase calibrators and new digital baseband filtering electronics began in recent months. We will discuss the expected effect of these improvements on the X/Ka frame.

  6. An Experimental Study of the Effects of Automation on Pilot Situational Awareness in the Datalink ATC Environment

    NASA Technical Reports Server (NTRS)

    Hahn, Edward C.; Hansman, R. John, Jr.

    1992-01-01

    An experiment to study how automation, when used in conjunction with datalink for the delivery of air traffic control (ATC) clearance amendments, affects the situational awareness of aircrews was conducted. The study was focused on the relationship of situational awareness to automated Flight Management System (FMS) programming and the readback of ATC clearances. Situational awareness was tested by issuing nominally unacceptable ATC clearances and measuring whether the error was detected by the subject pilots. The experiment also varied the mode of clearance delivery: Verbal, Textual, and Graphical. The error detection performance and pilot preference results indicate that the automated programming of the FMS may be superior to manual programming. It is believed that automated FMS programming may relieve some of the cognitive load, allowing pilots to concentrate on the strategic implications of a clearance amendment. Also, readback appears to have value, but the small sample size precludes a definite conclusion. Furthermore, because textual and graphical modes of delivery offer different but complementary advantages for cognitive processing, a combination of these modes of delivery may be advantageous in a datalink presentation.

  7. Performance of Reclassification Statistics in Comparing Risk Prediction Models

    PubMed Central

    Paynter, Nina P.

    2012-01-01

    Concerns have been raised about the use of traditional measures of model fit in evaluating risk prediction models for clinical use, and reclassification tables have been suggested as an alternative means of assessing the clinical utility of a model. Several measures based on the table have been proposed, including the reclassification calibration (RC) statistic, the net reclassification improvement (NRI), and the integrated discrimination improvement (IDI), but the performance of these in practical settings has not been fully examined. We used simulations to estimate the type I error and power for these statistics in a number of scenarios, as well as the impact of the number and type of categories, when adding a new marker to an established or reference model. The type I error was found to be reasonable in most settings, and power was highest for the IDI, which was similar to the test of association. The relative power of the RC statistic, a test of calibration, and the NRI, a test of discrimination, varied depending on the model assumptions. These tools provide unique but complementary information. PMID:21294152

  8. Single event upset susceptibilities of latchup immune CMOS process programmable gate arrays

    NASA Astrophysics Data System (ADS)

    Koga, R.; Crain, W. R.; Crawford, K. B.; Hansel, S. J.; Lau, D. D.; Tsubota, T. K.

    Single event upset (SEU) and latchup susceptibilities of complementary metal oxide semiconductor programmable gate arrays (CMOS PPGA's) were measured at the Lawrence Berkeley Laboratory 88-in. cyclotron facility with Xe (603 MeV), Cu (290 MeV), and Ar (180 MeV) ion beams. The PPGA devices tested were those which may be used in space. Most of the SEU measurements were taken with a newly constructed tester called the Bus Access Storage and Comparison System (BASACS) operating via a Macintosh II computer. When BASACS finds that an output does not match a prerecorded pattern, the state of all outputs, the position in the test cycle, and other necessary information are transmitted to and stored in the Macintosh. The upset rate was kept between 1 and 3 per second. After a sufficient number of errors were stored, the test was stopped and the total fluence of particles and total errors were recorded. The device power supply current was closely monitored to check for the occurrence of latchup. Results of the tests are presented, indicating that some of the PPGA's are good candidates for selected space applications.

  9. Lipophilic oligonucleotides spontaneously insert into lipid membranes, bind complementary DNA strands, and sequester into lipid-disordered domains.

    PubMed

    Bunge, Andreas; Kurz, Anke; Windeck, Anne-Kathrin; Korte, Thomas; Flasche, Wolfgang; Liebscher, Jürgen; Herrmann, Andreas; Huster, Daniel

    2007-04-10

    For the development of surface-functionalized bilayers, we have synthesized lipophilic oligonucleotides to combine the molecular recognition mechanism of nucleic acids and the self-assembly characteristics of lipids in planar membranes. A lipophilic oligonucleotide consisting of 21 thymidine units and two lipophilic nucleotides with an alpha-tocopherol moiety as a lipophilic anchor was synthesized using solid-phase methods with a phosphoramidite strategy. The interaction of the water-soluble lipophilic oligonucleotide with vesicular lipid membranes and its capability to bind complementary DNA strands was studied using complementary methods such as NMR, EPR, DSC, fluorescence spectroscopy, and fluorescence microscopy. This oligonucleotide inserted stably into preformed membranes from the aqueous phase, with no significant perturbation of the lipid bilayer or its stability. The non-lipidated end of the oligonucleotide is exposed to the aqueous environment, is relatively mobile, and is free to interact with complementary DNA strands. Binding of the complementary single-stranded DNA molecules is fast and accomplished by the formation of Watson-Crick base pairs, which was confirmed by 1H NMR chemical shift analysis and fluorescence resonance energy transfer. The molecular structure of the membrane-bound DNA double helix is very similar to that of the free double-stranded DNA. Further, the membrane-bound DNA double strands also undergo regular melting. Finally, in raft-like membrane mixtures, the lipophilic oligonucleotide was shown to preferentially sequester into liquid-disordered membrane domains.

  10. [Balneotherapy and osteoarthritis treatment].

    PubMed

    Latrille, Christian Roques

    2012-09-01

    Balneotherapy is a complementary form of medicine which uses natural thermal mineral resources in situ. It provides patients with osteoarthritis with a full course of treatment to ease pain and improve function in the long term, without any significant therapeutic risk.

  11. Electronic switching circuit uses complementary non-linear components

    NASA Technical Reports Server (NTRS)

    Zucker, O. S.

    1972-01-01

    Inherent switching properties of saturable inductors and storage diodes are combined to perform large variety of electronic functions, such as pulse shaping, gating, and multiplexing. Passive elements replace active switching devices in generation of complex waveforms.

  12. Monte Carlo errors with less errors

    NASA Astrophysics Data System (ADS)

    Wolff, Ulli; Alpha Collaboration

    2004-01-01

    We explain in detail how to estimate mean values and assess statistical errors for arbitrary functions of elementary observables in Monte Carlo simulations. The method is to estimate and sum the relevant autocorrelation functions, which is argued to produce more certain error estimates than binning techniques and hence to help toward a better exploitation of expensive simulations. An effective integrated autocorrelation time is computed which is suitable for benchmarking the efficiencies of simulation algorithms with regard to specific observables of interest. A Matlab code that implements the method is offered for download. It can also combine independent runs (replica), allowing one to judge their consistency.
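A minimal sketch of the autocorrelation-summing estimator described above. The automatic windowing rule (truncate the sum at the first lag W with W >= c * tau) is a simplified version of the self-consistent criterion used in this line of work, not a reimplementation of the paper's Matlab code:

```python
import numpy as np

def tau_int(x, c=6.0):
    """Integrated autocorrelation time of a Monte Carlo series, with an
    automatic window: sum rho(t) until the lag exceeds c * tau."""
    x = np.asarray(x, float)
    n = len(x)
    x = x - x.mean()
    # unnormalized autocovariance via FFT (zero-padded to avoid wrap-around)
    f = np.fft.rfft(x, 2 * n)
    acf = np.fft.irfft(f * np.conj(f))[:n]
    rho = acf / acf[0]                 # normalized: rho[0] == 1
    tau = 0.5
    for w in range(1, n // 2):
        tau += rho[w]
        if w >= c * tau:               # window grows with the estimate
            break
    return tau

def mc_error(x):
    """Error of the mean of a correlated series: naive error inflated
    by sqrt(2 * tau_int)."""
    return np.sqrt(2.0 * tau_int(x) / len(x)) * np.std(x)
```

For an AR(1) process with coefficient phi, the exact value is tau_int = 0.5 + phi / (1 - phi), which makes a convenient sanity check (9.5 for phi = 0.9).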

  13. Caffeine enhances real-world language processing: evidence from a proofreading task.

    PubMed

    Brunyé, Tad T; Mahoney, Caroline R; Rapp, David N; Ditman, Tali; Taylor, Holly A

    2012-03-01

    Caffeine has become the most prevalently consumed psychostimulant in the world, but its influences on daily real-world functioning are relatively unknown. The present work investigated the effects of caffeine (0 mg, 100 mg, 200 mg, 400 mg) on a commonplace language task that required readers to identify and correct 4 error types in extended discourse: simple local errors (misspelling 1- to 2-syllable words), complex local errors (misspelling 3- to 5-syllable words), simple global errors (incorrect homophones), and complex global errors (incorrect subject-verb agreement and verb tense). In 2 placebo-controlled, double-blind studies using repeated-measures designs, we found higher detection and repair rates for complex global errors, asymptoting at 200 mg in low consumers (Experiment 1) and peaking at 400 mg in high consumers (Experiment 2). In both cases, covariate analyses demonstrated that arousal state mediated the relationship between caffeine consumption and the detection and repair of complex global errors. Detection and repair rates for the other 3 error types were not affected by caffeine consumption. Taken together, we demonstrate that caffeine has differential effects on error detection and repair as a function of dose and error type, and this relationship is closely tied to caffeine's effects on subjective arousal state. These results support the notion that central nervous system stimulants may enhance global processing of language-based materials and suggest that such effects may originate in caffeine-related right hemisphere brain processes. Implications for understanding the relationships between caffeine consumption and real-world cognitive functioning are discussed.

  14. Modified complementary ensemble empirical mode decomposition and intrinsic mode functions evaluation index for high-speed train gearbox fault diagnosis

    NASA Astrophysics Data System (ADS)

    Chen, Dongyue; Lin, Jianhui; Li, Yanping

    2018-06-01

    Complementary ensemble empirical mode decomposition (CEEMD) was developed to address the mode-mixing problem of the empirical mode decomposition (EMD) method. Compared with ensemble empirical mode decomposition (EEMD), the CEEMD method reduces residue noise in the signal reconstruction. However, both CEEMD and EEMD need a sufficiently large ensemble number to reduce the residue noise, and hence incur a high computational cost. Moreover, the selection of intrinsic mode functions (IMFs) for further analysis usually depends on experience. A modified CEEMD method and an IMF evaluation index are proposed with the aim of reducing the computational cost and selecting IMFs automatically. A simulated signal and in-service high-speed train gearbox vibration signals are employed to validate the proposed method. The results demonstrate that the modified CEEMD can decompose the signal efficiently at lower computational cost, and that the IMF evaluation index can select the meaningful IMFs automatically.
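The complementary-pair mechanism at the heart of CEEMD can be illustrated with a stand-in linear decomposer (a moving-average trend/detail split, not EMD sifting): decompose the signal plus and minus the same white-noise realization, then average, so the added noise cancels in the reconstruction. With real EMD the sifting is nonlinear, so the cancellation is only approximate, which is why residue noise still shrinks with ensemble size:

```python
import numpy as np

def moving_average_split(x, width=11):
    """Stand-in two-component decomposer (NOT an EMD implementation):
    splits a signal into the detail around a smooth trend and the trend
    itself.  Any decomposer with this signature could be plugged in."""
    kernel = np.ones(width) / width
    trend = np.convolve(x, kernel, mode="same")
    return np.array([x - trend, trend])

def ceemd_style(x, decompose=moving_average_split,
                noise_std=0.2, ensemble=50, seed=0):
    """Complementary-ensemble idea in miniature: decompose x + n and
    x - n for each noise realization n and average the pair, so the
    injected noise cancels."""
    rng = np.random.default_rng(seed)
    acc = None
    for _ in range(ensemble):
        n = rng.normal(0.0, noise_std, x.size)
        pair = decompose(x + n) + decompose(x - n)   # complementary pair
        acc = pair if acc is None else acc + pair
    return acc / (2 * ensemble)
```

Because the stand-in decomposer is linear, the ensemble average here recovers the noise-free decomposition exactly, which isolates the cancellation property the method relies on.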

  15. Fractional Brownian motion and the critical dynamics of zipping polymers.

    PubMed

    Walter, J-C; Ferrantini, A; Carlon, E; Vanderzande, C

    2012-03-01

    We consider two complementary polymer strands of length L attached by a common-end monomer. The two strands bind through complementary monomers and at low temperatures form a double-stranded conformation (zipping), while at high temperature they dissociate (unzipping). This is a simple model of DNA (or RNA) hairpin formation. Here we investigate the dynamics of the strands at the equilibrium critical temperature T = T_c using Monte Carlo Rouse dynamics. We find that the dynamics is anomalous, with a characteristic time scaling as τ ∼ L^{2.26(2)}, exceeding the Rouse time ∼ L^{2.18}. We investigate the probability distribution function, velocity autocorrelation function, survival probability, and boundary behavior of the underlying stochastic process. These quantities scale as expected from a fractional Brownian motion with a Hurst exponent H = 0.44(1). We discuss similarities to and differences from unbiased polymer translocation.

  16. Water soluble nano-scale transient material germanium oxide for zero toxic waste based environmentally benign nano-manufacturing

    NASA Astrophysics Data System (ADS)

    Almuslem, A. S.; Hanna, A. N.; Yapici, T.; Wehbe, N.; Diallo, E. M.; Kutbee, A. T.; Bahabry, R. R.; Hussain, M. M.

    2017-02-01

    In the recent past, with the advent of transient electronics for mostly implantable and secured electronic applications, whole field effect transistor structures have been dissolved in a variety of chemicals. Here, we show simple water-soluble nano-scale (sub-10 nm) germanium oxide (GeO2) as the dissolvable component used to remove the functional structures of metal oxide semiconductor devices and then reuse the expensive germanium substrate for further functional device fabrication. This way, in addition to transiency, we also show an environmentally friendly manufacturing process for complementary metal oxide semiconductor (CMOS) technology. Every year, trillions of CMOS devices are manufactured and billions are disposed of, extending the harmful impact on our environment. Therefore, this is a key study showing a pragmatic approach to water-soluble high performance electronics for environmentally friendly manufacturing and bioresorbable electronic applications.

  17. Highly sensitive self-complementary DNA nanoswitches triggered by polyelectrolytes.

    PubMed

    Wu, Jincai; Yu, Feng; Zhang, Zheng; Chen, Yong; Du, Jie; Maruyama, Atsushi

    2016-01-07

    Dimerization of two homologous strands of genomic DNA/RNA is an essential feature of retroviral replication. Herein we show that a cationic comb-type copolymer (CCC), poly(L-lysine)-graft-dextran, accelerates the dimerization of self-complementary stem-loop DNA, frequently found in functional DNA/RNA molecules such as aptamers. Furthermore, an anionic polymer, poly(sodium vinylsulfonate) (PVS), dissociates CCC from the duplex within a few seconds, after which the single stem-loop DNA spontaneously re-forms from its dimer. Thus the dimer and stem-loop states can easily be controlled by switching CCC activity on and off. Both polyelectrolyte and DNA concentrations are in the nanomole-per-liter range. The polyelectrolyte-assisted transconformation and sequence design strategy ensures reversible state control with rapid response and effective switching under physiologically relevant conditions. A further application of this sensitive assembly is to construct an aptamer-type drug delivery system that binds or releases functional molecules in response to its transconformation.

  18. Hybrid online sensor error detection and functional redundancy for systems with time-varying parameters.

    PubMed

    Feng, Jianyuan; Turksoy, Kamuran; Samadi, Sediqeh; Hajizadeh, Iman; Littlejohn, Elizabeth; Cinar, Ali

    2017-12-01

    Supervision and control systems rely on signals from sensors to receive information to monitor the operation of a system and adjust manipulated variables to achieve the control objective. However, sensor performance is often limited by working conditions, and sensors may also be subjected to interference from other devices. Many different types of sensor errors, such as outliers, missing values, drifts and corruption with noise, may occur during process operation. A hybrid online sensor error detection and functional redundancy system is developed to detect errors in online signals and replace erroneous or missing values with model-based estimates. The proposed hybrid system relies on two techniques, an outlier-robust Kalman filter (ORKF) and a locally-weighted partial least squares (LW-PLS) regression model, which leverage the advantages of automatic measurement error elimination with the ORKF and data-driven prediction with LW-PLS. The system includes a nominal angle analysis (NAA) method to distinguish between signal faults and large changes in sensor values caused by real dynamic changes in process operation. The performance of the system is illustrated with clinical data from continuous glucose monitoring (CGM) sensors used by people with type 1 diabetes. More than 50,000 CGM sensor errors were added to original CGM signals from 25 clinical experiments, and the performance of the error detection and functional redundancy algorithms was then analyzed. The results indicate that the proposed system can successfully detect most of the erroneous signals and substitute them with reasonable estimated values computed by the functional redundancy system.
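A much-simplified stand-in for the detect-and-substitute idea above: a one-dimensional Kalman filter with innovation gating, where a measurement whose innovation exceeds a few standard deviations of the predicted innovation is flagged as a sensor error and replaced by the model prediction (functional redundancy). The random-walk model, gate width, and noise levels are illustrative assumptions, not the paper's ORKF + LW-PLS pipeline:

```python
import numpy as np

def gated_kalman(z, q=1e-4, r=0.25, gate=4.0):
    """1-D random-walk Kalman filter with innovation gating.
    q: process noise variance, r: measurement noise variance,
    gate: rejection threshold in innovation standard deviations.
    Returns the filtered estimates and a per-sample error flag."""
    xhat, p = z[0], 1.0
    out, flags = [], []
    for zi in z:
        p = p + q                        # predict (random-walk model)
        s = p + r                        # predicted innovation variance
        innov = zi - xhat
        bad = abs(innov) > gate * np.sqrt(s)
        if bad:
            innov = 0.0                  # substitute: keep model estimate
        k = p / s                        # Kalman gain
        xhat = xhat + k * innov
        p = (1.0 - k) * p
        out.append(xhat)
        flags.append(bool(bad))
    return np.array(out), np.array(flags)
```

Injecting large spikes into an otherwise steady signal shows the two roles separately: the flags do the error detection, and the unchanged state estimate does the functional redundancy.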

  19. Flux control coefficients determined by inhibitor titration: the design and analysis of experiments to minimize errors.

    PubMed Central

    Small, J R

    1993-01-01

    This paper is a study into the effects of experimental error on the estimated values of flux control coefficients obtained using specific inhibitors. Two possible techniques for analysing the experimental data are compared: a simple extrapolation method (the so-called graph method) and a non-linear function fitting method. For these techniques, the sources of systematic errors are identified and the effects of systematic and random errors are quantified, using both statistical analysis and numerical computation. It is shown that the graph method is very sensitive to random errors and, under all conditions studied, that the fitting method, even under conditions where the assumptions underlying the fitted function do not hold, outperformed the graph method. Possible ways of designing experiments to minimize the effects of experimental errors are analysed and discussed. PMID:8257434

  20. Noise-Enhanced Eversion Force Sense in Ankles With or Without Functional Instability.

    PubMed

    Ross, Scott E; Linens, Shelley W; Wright, Cynthia J; Arnold, Brent L

    2015-08-01

    Force sense impairments are associated with functional ankle instability. Stochastic resonance stimulation (SRS) may have implications for correcting these force sense deficits. To determine if SRS improved force sense. Case-control study. Research laboratory. Twelve people with functional ankle instability (age = 23 ± 3 years, height = 174 ± 8 cm, mass = 69 ± 10 kg) and 12 people with stable ankles (age = 22 ± 2 years, height = 170 ± 7 cm, mass = 64 ± 10 kg). The eversion force sense protocol required participants to reproduce a targeted muscle tension (10% of maximum voluntary isometric contraction). This protocol was assessed under SRSon and SRSoff (control) conditions. During SRSon, random subsensory mechanical noise was applied to the lower leg at a customized optimal intensity for each participant. Constant error, absolute error, and variable error measures quantified accuracy, overall performance, and consistency of force reproduction, respectively. With SRS, we observed main effects for force sense absolute error (SRSoff = 1.01 ± 0.67 N, SRSon = 0.69 ± 0.42 N) and variable error (SRSoff = 1.11 ± 0.64 N, SRSon = 0.78 ± 0.56 N) (P < .05). No other main effects or treatment-by-group interactions were found (P > .05). Although SRS reduced the overall magnitude (absolute error) and variability (variable error) of force sense errors, it had no effect on the directionality (constant error). Clinically, SRS may enhance muscle tension ability, which could have treatment implications for ankle stability.

  1. Exploring the Function Space of Deep-Learning Machines

    NASA Astrophysics Data System (ADS)

    Li, Bo; Saad, David

    2018-06-01

    The function space of deep-learning machines is investigated by studying growth in the entropy of functions of a given error with respect to a reference function, realized by a deep-learning machine. Using physics-inspired methods we study both sparsely and densely connected architectures to discover a layerwise convergence of candidate functions, marked by a corresponding reduction in entropy when approaching the reference function, gain insight into the importance of having a large number of layers, and observe phase transitions as the error increases.

  2. Use of localized performance-based functions for the specification and correction of hybrid imaging systems

    NASA Astrophysics Data System (ADS)

    Lisson, Jerold B.; Mounts, Darryl I.; Fehniger, Michael J.

    1992-08-01

    Localized wavefront performance analysis (LWPA) is a system that allows full utilization of the system optical transfer function (OTF) for the specification and acceptance of hybrid imaging systems. We show that LWPA dictates the correction of the wavefront errors with the greatest impact on critical imaging spatial frequencies. This is accomplished by the generation of an imaging performance map, analogous to a map of the optic pupil error, using a local OTF. The resulting performance map, a function of transfer-function spatial frequency, is directly relatable to the primary viewing condition of the end user. In addition to optimizing quality for the viewer, the system has the potential for improved matching of the optical and electronic bandpasses of the imager and for the development of more realistic acceptance specifications. The LWPA system generates a local optical quality factor (LOQF) in the form of a map analogous to that used for the presentation and evaluation of wavefront errors. In conjunction with the local phase transfer function (LPTF), it can be used for maximally efficient specification and correction of imaging-system pupil errors. The LOQF and LPTF are respectively equivalent to the global modulation transfer function (MTF) and phase transfer function (PTF) parts of the OTF. The LPTF is related to the difference of the averages of the errors in separated regions of the pupil.

  3. Insights into DNA-mediated interparticle interactions from a coarse-grained model

    NASA Astrophysics Data System (ADS)

    Ding, Yajun; Mittal, Jeetain

    2014-11-01

    DNA-functionalized particles have great potential for the design of complex self-assembled materials. The major hurdle in realizing crystal structures from DNA-functionalized particles is expected to be kinetic barriers that trap the system in metastable amorphous states. Therefore, it is vital to explore the molecular details of particle assembly processes in order to understand the underlying mechanisms. Molecular simulations based on coarse-grained models can provide a convenient route to explore these details. Most of the currently available coarse-grained models of DNA-functionalized particles ignore key chemical and structural details of DNA behavior. These models therefore are limited in scope for studying experimental phenomena. In this paper, we present a new coarse-grained model of DNA-functionalized particles which incorporates some of the desired features of DNA behavior. The coarse-grained DNA model used here provides explicit DNA representation (at the nucleotide level) and complementary interactions between Watson-Crick base pairs, which lead to the formation of single-stranded hairpin and double-stranded DNA. Aggregation between multiple complementary strands is also prevented in our model. We study interactions between two DNA-functionalized particles as a function of DNA grafting density, lengths of the hybridizing and non-hybridizing parts of DNA, and temperature. The calculated free energies as a function of pair distance between particles qualitatively resemble experimental measurements of DNA-mediated pair interactions.

  4. Piece-wise quadratic approximations of arbitrary error functions for fast and robust machine learning.

    PubMed

    Gorban, A N; Mirkes, E M; Zinovyev, A

    2016-12-01

    Most machine learning approaches stem from applying the principle of minimizing the mean squared distance, which allows the use of computationally efficient quadratic optimization methods. However, when faced with high-dimensional, noisy data, quadratic error functionals exhibit many weaknesses, including high sensitivity to contaminating factors and the curse of dimensionality. Many recent applications in machine learning have therefore exploited the properties of non-quadratic error functionals based on the L1 norm, or even sub-linear potentials corresponding to quasinorms Lp (0 < p < 1).
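    The piece-wise quadratic idea from this abstract can be illustrated concretely: approximate an arbitrary monotone error potential u(|x|) by the minimum of quadratic pieces that interpolate u at a set of knots. This is a minimal sketch of the general construction, not the authors' exact PQSQ implementation; the knot values below are arbitrary.

```python
import numpy as np

def pqsq(u, knots):
    """Piecewise-quadratic approximation p(x) = min_k q_k(x) of a monotone
    error potential u(|x|).  Each piece q_k(x) = u(r_k) + b_k*(x^2 - r_k^2)
    is quadratic in x and interpolates u at consecutive knots r_k, r_{k+1}."""
    r = np.asarray(knots, dtype=float)
    ur = u(r)
    # slope of each piece in the x^2 coordinate (linear interpolation of u in x^2)
    b = (ur[1:] - ur[:-1]) / (r[1:]**2 - r[:-1]**2)

    def p(x):
        x = np.atleast_1d(np.abs(np.asarray(x, dtype=float)))
        q = ur[:-1][:, None] + b[:, None] * (x[None, :]**2 - r[:-1][:, None]**2)
        return q.min(axis=0)
    return p

# approximate the L1 potential u(x) = |x| with a handful of knots
p = pqsq(np.abs, knots=[0.0, 0.5, 1.0, 2.0, 4.0])
vals = p([0.5, 1.0, 2.0])   # matches |x| exactly at the knots
```

    Because each piece is quadratic, minimization of the approximated functional can reuse fast quadratic solvers piece by piece, which is the computational point of the approach.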

  5. Crosslinking EEG time-frequency decomposition and fMRI in error monitoring.

    PubMed

    Hoffmann, Sven; Labrenz, Franziska; Themann, Maria; Wascher, Edmund; Beste, Christian

    2014-03-01

    Recent studies implicate a common response monitoring system, being active during erroneous and correct responses. Converging evidence from time-frequency decompositions of the response-related ERP revealed that evoked theta activity at fronto-central electrode positions differentiates correct from erroneous responses in simple tasks, but also in more complex tasks. However, up to now it is unclear how different electrophysiological parameters of error processing, especially at the level of neural oscillations, are related to, or predictive of, BOLD signal changes reflecting error processing at a functional-neuroanatomical level. The present study aims to provide crosslinks between time domain information, time-frequency information, MRI BOLD signal and behavioral parameters in a task examining error monitoring due to mistakes in a mental rotation task. The results show that BOLD signal changes reflecting error processing on a functional-neuroanatomical level are best predicted by evoked oscillations in the theta frequency band. Although the fMRI results in this study account for an involvement of the anterior cingulate cortex, middle frontal gyrus, and the insula in error processing, the correlation of evoked oscillations and BOLD signal was restricted to a coupling of evoked theta and anterior cingulate cortex BOLD activity. The current results indicate that although there is a distributed functional-neuroanatomical network mediating error processing, only distinct parts of this network seem to modulate electrophysiological properties of error monitoring.

  6. Orbit-determination performance of Doppler data for interplanetary cruise trajectories. Part 2: 8.4-GHz performance and data-weighting strategies

    NASA Technical Reports Server (NTRS)

    Ulvestad, J. S.

    1992-01-01

    A consider error covariance analysis was performed in order to investigate the orbit-determination performance attainable using two-way (coherent) 8.4-GHz (X-band) Doppler data for two segments of the planned Mars Observer trajectory. The analysis includes the effects of the current level of calibration errors in tropospheric delay, ionospheric delay, and station locations, with particular emphasis placed on assessing the performance of several candidate elevation-dependent data-weighting functions. One weighting function was found that yields good performance for a variety of tracking geometries. This weighting function is simple and robust; it reduces the danger of error that might exist if an analyst had to select one of several different weighting functions that are highly sensitive to the exact choice of parameters and to the tracking geometry. Orbit-determination accuracy improvements that may be obtained through the use of calibration data derived from Global Positioning System (GPS) satellites also were investigated, and can be as much as a factor of three in some components of the spacecraft state vector. Assuming that both station-location errors and troposphere calibration errors are reduced simultaneously, the recommended data-weighting function need not be changed when GPS calibrations are incorporated in the orbit-determination process.
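    The report's recommended weighting function is not reproduced in this abstract; as a generic illustration of how elevation-dependent weighting enters an orbit-determination least-squares fit, one common assumption is that Doppler noise grows roughly as 1/sin(elevation) because low-elevation signals traverse more troposphere. The function and numbers below are illustrative assumptions, not the paper's result.

```python
import numpy as np

def doppler_sigma(elev_deg, sigma_zenith=0.1):
    """Assumed one-sigma Doppler noise (arbitrary units) vs. elevation:
    zenith noise inflated by 1/sin(el), a simple stand-in for an
    elevation-dependent data-weighting function."""
    return sigma_zenith / np.sin(np.radians(elev_deg))

# least-squares weights: low-elevation passes are down-weighted
elev = np.array([10.0, 30.0, 90.0])
w = 1.0 / doppler_sigma(elev) ** 2
```

    In a consider covariance analysis, these weights scale each observation's contribution to the normal equations, so a robust choice of the weighting function directly controls sensitivity to troposphere calibration errors.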

  7. An Astronomical Test of CCD Photometric Precision

    NASA Technical Reports Server (NTRS)

    Koch, David; Dunham, Edward; Borucki, William; Jenkins, Jon; DeVingenzi, D. (Technical Monitor)

    1998-01-01

    This article considers a posteriori error estimation of specified functionals for first-order systems of conservation laws discretized using the discontinuous Galerkin (DG) finite element method. Using duality techniques, we derive exact error representation formulas for both linear and nonlinear functionals given an associated bilinear or nonlinear variational form. Weighted residual approximations of the exact error representation formula are then proposed and numerically evaluated for Ringleb flow, an exact solution of the 2-D Euler equations.

  8. The effect of errors in the assignment of the transmission functions on the accuracy of the thermal sounding of the atmosphere

    NASA Technical Reports Server (NTRS)

    Timofeyev, Y. M.

    1979-01-01

    In order to test the effect of errors in the assumed values of the transmission function for Soviet and American radiometers sounding the atmosphere thermally from orbiting satellites, the assumptions of the transmission calculation are varied with respect to atmospheric CO2 content, transmission frequency, and atmospheric absorption. The error resulting from these departures from the standard basic model is calculated.

  9. Functional analysis of a frame-shift mutant of the dihydropyridine receptor pore subunit (α1S) expressing two complementary protein fragments

    PubMed Central

    Ahern, Chris A; Vallejo, Paola; Mortenson, Lindsay; Coronado, Roberto

    2001-01-01

    Background The L-type Ca2+ channel formed by the dihydropyridine receptor (DHPR) of skeletal muscle senses the membrane voltage and opens the ryanodine receptor (RyR1). This channel-to-channel coupling is essential for Ca2+ signaling but poorly understood. We characterized a single-base frame-shift mutant of α1S, the pore subunit of the DHPR, that has the unusual ability to function as a voltage sensor for excitation-contraction (EC) coupling by virtue of expressing two complementary hemi-Ca2+ channel fragments. Results Functional analyses of cDNA-transfected dysgenic myotubes lacking α1S were carried out using voltage clamp, confocal Ca2+ indicator fluorescence, epitope immunofluorescence and immunoblots of expressed proteins. The frame-shift mutant (fs-α1S) expressed the N-terminal half of α1S (M1 to L670) and the C-terminal half starting at M701 separately. The C-terminal fragment was generated by an unexpected restart of translation of the fs-α1S message at M701 and was eliminated by a M701I mutation. Protein-protein complementation between the two fragments produced recovery of skeletal-type EC coupling but not L-type Ca2+ current. Discussion A premature stop codon in the II-III loop may not necessarily cause a loss of DHPR function due to a restart of translation within the II-III loop, presumably by a mechanism involving leaky ribosomal scanning. In these cases, function is recovered by expression of complementary protein fragments from the same cDNA. DHPR-RyR1 interactions can be achieved via protein-protein complementation between hemi-Ca2+ channel proteins; hence an intact II-III loop is not essential for coupling the DHPR voltage sensor to the opening of the RyR1 channel. PMID:11806762

  10. Total energy based flight control system

    NASA Technical Reports Server (NTRS)

    Lambregts, Antonius A. (Inventor)

    1985-01-01

    An integrated aircraft longitudinal flight control system uses a generalized thrust and elevator command computation (38), which accepts flight path angle and longitudinal acceleration command signals, along with associated feedback signals, to form energy rate error (20) and energy rate distribution error (18) signals. The engine thrust command is developed (22) as a function of the energy rate error, and the elevator position command is developed (26) as a function of the energy rate distribution error. For any vertical flight path and speed mode the outer-loop errors are normalized (30, 34) to produce flight path angle and longitudinal acceleration commands. The system provides decoupled flight path and speed control for all control modes previously provided by the longitudinal autopilot, autothrottle and flight management systems.
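    The total-energy idea can be sketched in a few lines: the specific total-energy rate of the aircraft is (flight path angle + acceleration/g), thrust corrects its error, and the elevator redistributes energy between path and speed. The function and signal names below are illustrative, not the patent's notation.

```python
G = 9.81  # gravitational acceleration, m/s^2

def tecs_errors(gamma_cmd, accel_cmd, gamma, accel, g=G):
    """Sketch of total-energy-based error signals: the total-energy-rate
    error (sum of path-angle and normalized-acceleration errors, driving
    thrust) and the energy-rate distribution error (their difference,
    driving the elevator)."""
    e_rate = (gamma_cmd + accel_cmd / g) - (gamma + accel / g)
    e_dist = (gamma_cmd - accel_cmd / g) - (gamma - accel / g)
    return e_rate, e_dist

# example: climbing too shallowly while decelerating -> add thrust (e_rate > 0)
e_rate, e_dist = tecs_errors(gamma_cmd=0.05, accel_cmd=0.0, gamma=0.02, accel=-0.5)
```

    Because thrust responds only to the total-energy error and the elevator only to its distribution, path and speed control are decoupled, which is the claim of the abstract.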

  11. Precise orbit determination for the most recent altimeter missions: towards the 1 mm/y stability of the radial orbit error at regional scales

    NASA Astrophysics Data System (ADS)

    Couhert, Alexandre

    The reference Ocean Surface Topography Mission/Jason-2 satellite (CNES/NASA) has been in orbit for six years (since June 2008). It extends the continuous record of highly accurate sea surface height measurements begun in 1992 by the Topex/Poseidon mission and continued in 2001 by the Jason-1 mission. The complementary missions CryoSat-2 (ESA), HY-2A (CNSA) and SARAL/AltiKa (CNES/ISRO), with lower altitudes and higher inclinations, were launched in April 2010, August 2011 and February 2013, respectively. Although the three last satellites fly in different orbits, they contribute to the altimeter constellation while enhancing the global coverage. The CNES Precision Orbit Determination (POD) Group delivers precise and homogeneous orbit solutions for these independent altimeter missions. The focus of this talk will be on the long-term stability of the orbit time series for mean sea level applications on a regional scale. We discuss various issues related to the assessment of radial orbit error trends, in particular orbit errors dependent on the tracking technique, the reference frame accuracy and stability, and the modeling of the temporal variations of the geopotential. Strategies are then explored to meet a 1 mm/y radial orbit stability over decadal periods at regional scales, and the challenge of evaluating such an improvement is discussed.

  12. Multi-qubit gates protected by adiabaticity and dynamical decoupling applicable to donor qubits in silicon

    DOE PAGES

    Witzel, Wayne; Montano, Ines; Muller, Richard P.; ...

    2015-08-19

    In this paper, we present a strategy for producing multiqubit gates that promise high fidelity with minimal tuning requirements. Our strategy combines gap protection from the adiabatic theorem with dynamical decoupling in a complementary manner. Energy-level transition errors are protected by adiabaticity and remaining phase errors are mitigated via dynamical decoupling. This is a powerful way to divide and conquer the various error channels. In order to accomplish this without violating a no-go theorem regarding black-box dynamically corrected gates [Phys. Rev. A 80, 032314 (2009)], we require a robust operating point (sweet spot) in control space where the qubits interact with little sensitivity to noise. There are also energy gap requirements for effective adiabaticity. We apply our strategy to an architecture in Si with P donors where we assume we can shuttle electrons between different donors. Electron spins act as mobile ancillary qubits and P nuclear spins act as long-lived data qubits. Furthermore, this system can have a very robust operating point where the electron spin is bound to a donor in the quadratic Stark shift regime. High fidelity single qubit gates may be performed using well-established global magnetic resonance pulse sequences. Single electron-spin preparation and measurement has also been demonstrated. Thus, putting this all together, we present a robust universal gate set for quantum computation.

  13. Forensic surface metrology: tool mark evidence.

    PubMed

    Gambino, Carol; McLaughlin, Patrick; Kuo, Loretta; Kammerman, Frani; Shenkin, Peter; Diaczuk, Peter; Petraco, Nicholas; Hamby, James; Petraco, Nicholas D K

    2011-01-01

    Over the last several decades, forensic examiners of impression evidence have come under scrutiny in the courtroom due to analysis methods that rely heavily on subjective morphological comparisons. Currently, there is no universally accepted system that generates numerical data to independently corroborate visual comparisons. Our research attempts to develop such a system for tool mark evidence, proposing a methodology that objectively evaluates the association of striated tool marks with the tools that generated them. In our study, 58 primer shear marks on 9 mm cartridge cases, fired from four Glock model 19 pistols, were collected using high-resolution white light confocal microscopy. The resulting three-dimensional surface topographies were filtered to extract all "waviness surfaces"-the essential "line" information that firearm and tool mark examiners view under a microscope. Extracted waviness profiles were processed with principal component analysis (PCA) for dimension reduction. Support vector machines (SVM) were used to make the profile-gun associations, and conformal prediction theory (CPT) was used to establish confidence levels. At the 95% confidence level, CPT coupled with PCA-SVM yielded an empirical error rate of 3.5%. Complementary bootstrap-based computations estimated error rates of 0%, indicating that the error rate for the algorithmic procedure is likely to remain low on larger data sets. Finally, suggestions are made for practical courtroom application of CPT for assigning levels of confidence to SVM identifications of tool marks recorded with confocal microscopy. Copyright © 2011 Wiley Periodicals, Inc.
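    The conformal-prediction step can be sketched independently of the paper's PCA-SVM machinery: compute a nonconformity score for a new sample under each hypothesized class, convert it to a p-value against calibration scores, and keep every class whose p-value exceeds the significance level. The 1-D toy data and distance-to-class-mean score below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy stand-in for striation-profile features: two "guns", 1-D scores
cal_x = np.concatenate([rng.normal(0, 1, 50), rng.normal(4, 1, 50)])
cal_y = np.array([0] * 50 + [1] * 50)

def conformal_pvalue(x_new, label, cal_x, cal_y):
    """Conformal p-value with a simple nonconformity score: distance to
    the mean of the hypothesised class (illustrative only)."""
    mu = cal_x[cal_y == label].mean()
    scores = np.abs(cal_x[cal_y == label] - mu)
    s_new = abs(x_new - mu)
    return (np.sum(scores >= s_new) + 1) / (len(scores) + 1)

# 95%-confidence prediction set: labels whose p-value exceeds 0.05
x_new = 3.8
pred_set = [c for c in (0, 1) if conformal_pvalue(x_new, c, cal_x, cal_y) > 0.05]
```

    By construction, the probability that the true class is excluded from the prediction set is at most the significance level, which is the guarantee the paper leans on for the 95% confidence statements.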

  14. Software error data collection and categorization

    NASA Technical Reports Server (NTRS)

    Ostrand, T. J.; Weyuker, E. J.

    1982-01-01

    Software errors detected during development of an interactive special purpose editor system were studied. This product was followed during nine months of coding, unit testing, function testing, and system testing. A new error categorization scheme was developed.

  15. Quantitative assessment of paretic limb dexterity and interlimb coordination during bilateral arm rehabilitation training.

    PubMed

    Xu, Chang; Li, Siyi; Wang, Kui; Hou, Zengguang; Yu, Ningbo

    2017-07-01

    In neuro-rehabilitation after stroke, conventional constraint-induced movement therapy (CIMT) has been well accepted. Existing bilateral training is mostly based on mirrored, symmetrical motion. However, complementary bilateral movements are dominantly involved in activities of daily living (ADLs), and functional bilateral therapies may bring better skill transfer from training to daily life. Neurophysiological evidence is also growing. In this work, we first introduce our bilateral arm training system, realized with a haptic interface and a motion sensor, as well as the tasks that have been designed to train both the manipulation function of the paretic arm and the coordination of bilateral upper limbs. Then, we propose quantitative measures for functional assessment of complementary bilateral training performance, including kinematic behavior indices, smoothness, submovement and bimanual coordination. After that, we describe the experiments with healthy subjects and the results with respect to these quantitative measures. Feasibility and sensitivity of the proposed indices were evaluated through comparison of unilateral and bilateral training outcomes. The proposed bilateral training system and tasks, as well as the quantitative measures, have been demonstrated effective for training and assessment of unilateral and bilateral arm functions.
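    Movement smoothness, one of the quantitative measures mentioned, is commonly computed from a speed profile; one widely used index is the log dimensionless jerk (more negative means less smooth). The code below is an assumed example metric of this kind, not necessarily the paper's exact index.

```python
import numpy as np

def log_dimensionless_jerk(v, dt):
    """Log dimensionless jerk of a 1-D speed profile v sampled at step dt:
    integral of squared jerk, scaled by duration^3 / peak_speed^2, then
    negated log so that larger values mean smoother movement."""
    duration = len(v) * dt
    jerk = np.gradient(np.gradient(v, dt), dt)      # second derivative of speed
    djerk = np.sum(jerk**2) * dt * duration**3 / np.max(v)**2
    return -np.log(djerk)

# a smooth bell-shaped reach vs. the same reach with tremor-like ripple
t = np.linspace(0.0, 1.0, 201)
dt = t[1] - t[0]
v_smooth = np.sin(np.pi * t)
v_noisy = v_smooth + 0.05 * np.sin(40 * np.pi * t)
ldj_smooth = log_dimensionless_jerk(v_smooth, dt)
ldj_noisy = log_dimensionless_jerk(v_noisy, dt)
```

    Because jerk is a second derivative of speed, even small high-frequency ripple inflates the integral sharply, which is what makes this index sensitive to paretic-limb dexterity deficits.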

  16. Causal Evidence from Humans for the Role of Mediodorsal Nucleus of the Thalamus in Working Memory.

    PubMed

    Peräkylä, Jari; Sun, Lihua; Lehtimäki, Kai; Peltola, Jukka; Öhman, Juha; Möttönen, Timo; Ogawa, Keith H; Hartikainen, Kaisa M

    2017-12-01

    The mediodorsal nucleus of the thalamus (MD), with its extensive connections to the lateral pFC, has been implicated in human working memory and executive functions. However, this understanding is based solely on indirect evidence from human lesion and imaging studies and animal studies. Direct, causal evidence from humans is missing. To obtain direct evidence for MD's role in humans, we studied patients treated with deep brain stimulation (DBS) for refractory epilepsy. This treatment is thought to prevent the generalization of a seizure by disrupting the functioning of the patient's anterior nuclei of the thalamus (ANT) with high-frequency electric stimulation. This structure is located superior and anterior to MD, and when the DBS lead is implanted in ANT, tip contacts of the lead typically penetrate through ANT into the adjoining MD. To study the role of MD in human executive functions and working memory, we periodically disrupted and recovered MD's function with high-frequency electric stimulation using DBS contacts reaching MD while participants performed a cognitive task engaging several aspects of executive functions. We hypothesized that the efficacy of executive functions, specifically working memory, is impaired when the functioning of MD is perturbed by high-frequency stimulation. Eight participants treated with ANT-DBS for refractory epilepsy performed a computer-based test of executive functions while DBS was repeatedly switched ON and OFF at MD and at the control location (ANT). In comparison to stimulation of the control location, when MD was stimulated, participants committed 2.26 times more errors in general (total errors; OR = 2.26, 95% CI [1.69, 3.01]) and 2.86 times more working memory-related errors specifically (incorrect button presses; OR = 2.88, CI [1.95, 4.24]). Similarly, participants committed 1.81 times more errors in general (OR = 1.81, CI [1.45, 2.24]) and 2.08 times more working memory-related errors (OR = 2.08, CI [1.57, 2.75]) in comparison to the no-stimulation condition. "Total errors" is a composite score consisting of basic error types and was mostly driven by working memory-related errors. The facts that MD and a control location, ANT, are only a few millimeters away from each other and that their stimulation produces very different results highlight the location-specific effect of DBS rather than a regionally unspecific general effect. In conclusion, disrupting and recovering MD's function with high-frequency electric stimulation modulated participants' online working memory performance, providing causal, in vivo evidence from humans for the role of MD in human working memory.

  17. Response Surface Modeling Using Multivariate Orthogonal Functions

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; DeLoach, Richard

    2001-01-01

    A nonlinear modeling technique was used to characterize response surfaces for non-dimensional longitudinal aerodynamic force and moment coefficients, based on wind tunnel data from a commercial jet transport model. Data were collected using two experimental procedures - one based on modern design of experiments (MDOE), and one using a classical one factor at a time (OFAT) approach. The nonlinear modeling technique used multivariate orthogonal functions generated from the independent variable data as modeling functions in a least squares context to characterize the response surfaces. Model terms were selected automatically using a prediction error metric. Prediction error bounds computed from the modeling data alone were found to be a good measure of actual prediction error for prediction points within the inference space. Root-mean-square model fit error and prediction error were less than 4 percent of the mean response value in all cases. Efficacy and prediction performance of the response surface models identified from both MDOE and OFAT experiments were investigated.
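    The core of the technique is orthogonalizing candidate modeling functions so each term's least-squares coefficient can be computed independently. This is a bare-bones Gram-Schmidt sketch of that idea on a toy response surface, not the paper's full algorithm (which also automates term selection with a prediction error metric).

```python
import numpy as np

def orthogonalize(X):
    """Classical Gram-Schmidt: make the columns of X (candidate modeling
    functions evaluated on the data) mutually orthogonal."""
    Q = np.empty_like(X, dtype=float)
    for j in range(X.shape[1]):
        q = X[:, j].astype(float)
        for k in range(j):
            q -= (Q[:, k] @ X[:, j]) / (Q[:, k] @ Q[:, k]) * Q[:, k]
        Q[:, j] = q
    return Q

# toy response surface y = 1 + 2a - 3b with candidate terms [1, a, b, a*b]
rng = np.random.default_rng(1)
a, b = rng.uniform(-1, 1, 50), rng.uniform(-1, 1, 50)
X = np.column_stack([np.ones(50), a, b, a * b])
y = 1 + 2 * a - 3 * b
Q = orthogonalize(X)
# orthogonality decouples the normal equations: one scalar fit per term
coef = (Q.T @ y) / np.einsum('ij,ij->j', Q, Q)
fit = Q @ coef
```

    Decoupling is what makes automatic term selection cheap: adding or dropping an orthogonal term does not change the other coefficients, so candidate models can be ranked quickly by a prediction error metric.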

  18. Uncertainties in the cluster-cluster correlation function

    NASA Astrophysics Data System (ADS)

    Ling, E. N.; Frenk, C. S.; Barrow, J. D.

    1986-12-01

    The bootstrap resampling technique is applied to estimate sampling errors and significance levels of the two-point correlation functions determined for a subset of the CfA redshift survey of galaxies and a redshift sample of 104 Abell clusters. The angular correlation function for a sample of 1664 Abell clusters is also calculated. The standard errors in xi(r) for the Abell data are found to be considerably larger than quoted 'Poisson errors'. The best estimate for the ratio of the correlation length of Abell clusters (richness class R greater than or equal to 1, distance class D less than or equal to 4) to that of CfA galaxies is 4.2 (+1.4/-1.0, 68th-percentile error). The enhancement of cluster clustering over galaxy clustering is statistically significant in the presence of resampling errors. The uncertainties found do not include the effects of possible systematic biases in the galaxy and cluster catalogs and could be regarded as lower bounds on the true uncertainty range.
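    The bootstrap procedure used here generalizes to any statistic: resample the data with replacement, recompute the statistic on each resample, and take the spread of the resampled values as the sampling error. A minimal sketch (with the sample mean standing in for the correlation function, which requires a full pair-counting pipeline):

```python
import numpy as np

def bootstrap_se(data, statistic, n_boot=2000, seed=0):
    """Bootstrap standard error of an arbitrary statistic: the standard
    deviation of the statistic over resamples drawn with replacement."""
    rng = np.random.default_rng(seed)
    n = len(data)
    reps = np.array([statistic(data[rng.integers(0, n, n)])
                     for _ in range(n_boot)])
    return reps.std(ddof=1)

# toy check: error bar on a sample mean
x = np.random.default_rng(42).normal(0, 1, 100)
se = bootstrap_se(x, np.mean)
# Poisson-style analytic counterpart for the mean: sigma / sqrt(n)
analytic = x.std(ddof=1) / np.sqrt(len(x))
```

    For clustered data the two estimates diverge, which is the paper's point: bootstrap errors on xi(r) exceed the naive 'Poisson errors' because the pairs are not independent.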

  19. A new approach to the characterization of subtle errors in everyday action: implications for mild cognitive impairment.

    PubMed

    Seligman, Sarah C; Giovannetti, Tania; Sestito, John; Libon, David J

    2014-01-01

    Mild functional difficulties have been associated with early cognitive decline in older adults and increased risk for conversion to dementia in mild cognitive impairment, but our understanding of this decline has been limited by a dearth of objective methods. This study evaluated the reliability and validity of a new system to code subtle errors on an established performance-based measure of everyday action and described preliminary findings within the context of a theoretical model of action disruption. Here 45 older adults completed the Naturalistic Action Test (NAT) and neuropsychological measures. NAT performance was coded for overt errors, and subtle action difficulties were scored using a novel coding system. An inter-rater reliability coefficient was calculated. Validity of the coding system was assessed using a repeated-measures ANOVA with NAT task (simple versus complex) and error type (overt versus subtle) as within-group factors. Correlation/regression analyses were conducted among overt NAT errors, subtle NAT errors, and neuropsychological variables. The coding of subtle action errors was reliable and valid, and episodic memory breakdown predicted subtle action disruption. Results suggest that the NAT can be useful in objectively assessing subtle functional decline. Treatments targeting episodic memory may be most effective in addressing early functional impairment in older age.

  20. A combined confocal and magnetic resonance microscope for biological studies

    NASA Astrophysics Data System (ADS)

    Majors, Paul D.; Minard, Kevin R.; Ackerman, Eric J.; Holtom, Gary R.; Hopkins, Derek F.; Parkinson, Christopher I.; Weber, Thomas J.; Wind, Robert A.

    2002-12-01

    Complementary data acquired with different microscopy techniques provide a basis for establishing a more comprehensive understanding of cell function in health and disease, particularly when results acquired with different methodologies can be correlated in time and space. In this article, a novel microscope is described for studying live cells simultaneously with both confocal scanning laser fluorescence optical microscopy and magnetic resonance microscopy. The various design considerations necessary for integrating these two complementary techniques are discussed, the layout and specifications of the instrument are given, and examples of confocal and magnetic resonance images of large frog cells and model tumor spheroids obtained with the compound microscope are presented.

  1. Complementary frequency shifter based on polarization modulator used for generation of a high-quality frequency-locked multicarrier.

    PubMed

    Li, Jianping; Yu, Changyuan; Li, Zhaohui

    2014-03-15

    A novel polarization-modulator-based complementary frequency shifter (PCFS) has been proposed and then used to generate a frequency-locked multicarrier with single- and dual-recirculating frequency-shifting loops, respectively. The transfer functions and output properties of the PCFS and the PCFS-based multicarrier generator have been studied theoretically. Based on our simulation results obtained with VPItransmissionMaker software, 100 stable carriers have been obtained with acceptable flatness while no DC bias control is required. The results show that the proposed PCFS has the potential to become a commercial product and to be used in various scenarios.

  2. Accounting for uncertainty in pedotransfer functions in vulnerability assessments of pesticide leaching to groundwater.

    PubMed

    Stenemo, Fredrik; Jarvis, Nicholas

    2007-09-01

    A simulation tool for site-specific vulnerability assessments of pesticide leaching to groundwater was developed, based on the pesticide fate and transport model MACRO, parameterized using pedotransfer functions and reasonable worst-case parameter values. The effects of uncertainty in the pedotransfer functions on simulation results were examined for 48 combinations of soils, pesticides and application timings, by sampling pedotransfer function regression errors and propagating them through the simulation model in a Monte Carlo analysis. An uncertainty factor, f(u), was derived, defined as the ratio of the 80th percentile concentration for the scenario to the concentration simulated with no errors, c(sim). The pedotransfer function errors caused a large variation in simulation results, with f(u) ranging from 1.14 to 1440, with a median of 2.8. A non-linear relationship was found between f(u) and c(sim), which can be used to account for parameter uncertainty by correcting the simulated concentration, c(sim), to an estimated 80th percentile value. For fine-textured soils, the predictions were most sensitive to errors in the pedotransfer functions for two parameters regulating macropore flow (the saturated matrix hydraulic conductivity, K(b), and the effective diffusion pathlength, d) and two water retention function parameters (van Genuchten's N and alpha parameters). For coarse-textured soils, the model was also sensitive to errors in the exponent in the degradation water response function and the dispersivity, in addition to K(b), but showed little sensitivity to d. To reduce uncertainty in model predictions, improved pedotransfer functions for K(b), d, N and alpha would therefore be most useful. 2007 Society of Chemical Industry
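    The Monte Carlo procedure behind f(u) can be sketched generically: sample the pedotransfer regression errors, push each sample through the model, and compare the 80th percentile of the resulting concentrations with the no-error run. The leaching function, parameter names and error magnitudes below are made-up stand-ins, not MACRO or the paper's regressions.

```python
import numpy as np

rng = np.random.default_rng(7)

def leach_conc(k_b, d):
    """Stand-in leaching model: a made-up monotone function of two
    'pedotransfer' parameters; NOT the MACRO model."""
    return 0.1 * np.exp(-d) / k_b

# nominal parameter values and assumed pedotransfer regression errors
k_b0, d0 = 2.0, 1.5
c_sim = leach_conc(k_b0, d0)                      # simulation with no errors
k_b = k_b0 * np.exp(rng.normal(0, 0.5, 10_000))   # multiplicative (lognormal) errors
d = d0 + rng.normal(0, 0.3, 10_000)               # additive errors
c = leach_conc(k_b, d)
p80 = np.percentile(c, 80)
f_u = p80 / c_sim   # uncertainty factor in the sense of the paper
```

    Correcting a deterministic prediction then amounts to multiplying c(sim) by f(u), which is how the tool folds parameter uncertainty into a worst-case vulnerability estimate.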

  3. Phase transition of Boolean networks with partially nested canalizing functions

    NASA Astrophysics Data System (ADS)

    Jansen, Kayse; Matache, Mihaela Teodora

    2013-07-01

    We generate the critical condition for the phase transition of a Boolean network governed by partially nested canalizing functions for which a fraction of the inputs are canalizing, while the remaining non-canalizing inputs obey a complementary threshold Boolean function. Past studies have considered the stability of fully or partially nested canalizing functions paired with random choices of the complementary function. In some of those studies conflicting results were found with regard to the presence of chaotic behavior. Moreover, those studies focus mostly on ergodic networks in which initial states are assumed equally likely. We relax that assumption and find the critical condition for the sensitivity of the network under a non-ergodic scenario. We use the proposed mathematical model to determine parameter values for which phase transitions from order to chaos occur. We generate Derrida plots to show that the mathematical model matches the actual network dynamics. The phase transition diagrams indicate that both order and chaos can occur, and that certain parameters induce a larger range of values leading to order versus chaos. The edge-of-chaos curves are identified analytically and numerically. It is shown that the depth of canalization does not cause major dynamical changes once certain thresholds are reached; these thresholds are fairly small in comparison to the connectivity of the nodes.
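    A Derrida plot of the kind mentioned here can be estimated numerically: perturb a few nodes, advance both trajectories one step, and average the resulting Hamming distance. The sketch below uses plain random truth tables rather than the paper's partially nested canalizing functions, so it illustrates only the generic method; for random functions with K inputs the expected slope at the origin is K/2.

```python
import numpy as np

rng = np.random.default_rng(3)
N, K = 200, 3

# random Boolean network: each node has K inputs and a random truth table
inputs = np.array([rng.choice(N, K, replace=False) for _ in range(N)])
tables = rng.integers(0, 2, (N, 2**K))

def step(state):
    """Synchronous update: look up each node's table entry for its inputs."""
    idx = np.zeros(N, dtype=int)
    for k in range(K):
        idx = (idx << 1) | state[inputs[:, k]]
    return tables[np.arange(N), idx]

def derrida_point(h0, trials=200):
    """Average one-step Hamming distance after flipping h0 random nodes.
    A slope > 1 near the origin indicates the chaotic phase."""
    total = 0.0
    for _ in range(trials):
        s = rng.integers(0, 2, N)
        t = s.copy()
        t[rng.choice(N, h0, replace=False)] ^= 1
        total += np.sum(step(s) != step(t))
    return total / trials

d1 = derrida_point(1)   # expected near K/2 = 1.5 for random functions
```

    Replacing `tables` with partially nested canalizing functions, as in the paper, lowers the effective sensitivity and shifts the order-chaos boundary.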

  4. A Posteriori Comparison of Natural and Surgical Destabilization Models of Canine Osteoarthritis

    PubMed Central

    Pelletier, Jean-Pierre; d'Anjou, Marc-André; Blond, Laurent; Pelletier, Johanne-Martel; del Castillo, Jérôme R. E.

    2013-01-01

    For many years Canis familiaris, the domestic dog, has drawn particular interest as a model of osteoarthritis (OA). Here, we optimized the dog model of experimental OA induced by cranial cruciate ligament sectioning. The usefulness of noninvasive complementary outcome measures, such as gait analysis for the limb function and magnetic resonance imaging for structural changes, was demonstrated in this model. Relationships were established between the functional impairment and the severity of structural changes, including the measurement of cartilage thinning. In the dog model of naturally occurring OA, excellent test-retest reliability was denoted for the measurement of the limb function. A criterion to identify clinically meaningful responders to therapy was determined for privately owned dogs undergoing clinical trials. In addition, the recording of accelerometer-based duration of locomotor activity showed strong and complementary agreement with the biomechanical limb function. The translation potential of these models to the human OA condition is underlined. A preclinical testing protocol which combines the dog model of experimental OA induced by cranial cruciate ligament transection and the dog model of naturally occurring OA offers the opportunity to further investigate the structural and functional benefits of disease-modifying strategies. Ultimately, this would bring better prediction of outcomes for human clinical trials. PMID:24288664

  5. Thermometric sensing of nitrofurantoin by noncovalently imprinted polymers containing two complementary functional monomers.

    PubMed

    Athikomrattanakul, Umporn; Gajovic-Eichelmann, Nenad; Scheller, Frieder W

    2011-10-15

    Molecularly imprinted polymers (MIPs) for nitrofurantoin (NFT) recognition that address two complementary functional groups in parallel were created using a noncovalent imprinting approach. Specific tailor-made functional monomers were synthesized: a diaminopyridine derivative as the receptor for the imide residue and three (thio)urea derivatives for the interaction with the nitro group of NFT. A significantly improved binding of NFT to the new MIPs was revealed by the imprinting factor, efficiency of binding, affinity constants, and maximum binding number as compared to previously reported MIPs, which addressed either the imide or the nitro residue. Substances possessing only one functionality (either the imide group or the nitro group) showed significantly weaker binding to the new imprinted polymers than NFT. However, compounds lacking both functionalities bound extremely weakly to all imprinted polymers. The new imprinted polymers were applied in a flow-through thermistor in organic solvent for the first time. The MIP-thermistor allows the detection of NFT down to a concentration of 5 μM in acetonitrile + 0.2% dimethyl sulfoxide (DMSO). The imprinting factor of 3.91 at 0.1 mM NFT obtained by thermistor measurements compares well with the value obtained by batch binding experiments. © 2011 American Chemical Society

  6. Functional Language Shift to the Right Hemisphere in Patients with Language-Eloquent Brain Tumors

    PubMed Central

    Krieg, Sandro M.; Sollmann, Nico; Hauck, Theresa; Ille, Sebastian; Foerschler, Annette; Meyer, Bernhard; Ringel, Florian

    2013-01-01

    Objectives Language function is mainly located within the left hemisphere of the brain, especially in right-handed subjects. However, functional MRI (fMRI) has demonstrated shifts of language organization to the right hemisphere in patients with left-sided perisylvian lesions. Because intracerebral lesions can impair fMRI, this study was designed to investigate human language plasticity with a virtual lesion model using repetitive navigated transcranial magnetic stimulation (rTMS). Experimental design Fifteen patients with lesions of left-sided language-eloquent brain areas and 50 healthy and purely right-handed participants underwent bilateral rTMS language mapping via an object-naming task. All patients were proven to have left-sided language function during awake surgery. The rTMS-induced language errors were categorized into 6 different error types. The error ratio (induced errors/number of stimulations) was determined for each brain region on both hemispheres. A hemispheric dominance ratio was then defined for each region as the quotient of the error ratio (left/right) of the corresponding area of both hemispheres (ratio >1 = left dominant; ratio <1 = right dominant). Results Patients with language-eloquent lesions showed a statistically significantly lower ratio than healthy participants concerning “all errors” and “all errors without hesitations”, which indicates a higher participation of the right hemisphere in language function. Yet, there was no cortical region with a pronounced difference in language dominance compared to the whole hemisphere. Conclusions This is the first study that shows by means of an anatomically accurate virtual lesion model that a shift of language function to the non-dominant hemisphere can occur. PMID:24069410
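    The hemispheric dominance ratio defined above reduces to a simple quotient of regional error ratios; a minimal sketch, where the error and stimulation counts are hypothetical and not data from the study:

```python
def error_ratio(induced_errors, stimulations):
    """Error ratio = rTMS-induced naming errors / number of stimulations."""
    return induced_errors / stimulations

def dominance_ratio(left_errors, left_stims, right_errors, right_stims):
    """Quotient of left over right error ratios for a cortical region:
    > 1 suggests left-hemispheric dominance, < 1 right-hemispheric."""
    return (error_ratio(left_errors, left_stims)
            / error_ratio(right_errors, right_stims))

# Hypothetical counts for one region:
# 12 errors in 100 left-hemisphere stimulations, 6 in 100 right.
print(dominance_ratio(12, 100, 6, 100))  # 2.0 -> left dominant
```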

  7. Clinical Outcomes of an Optimized Prolate Ablation Procedure for Correcting Residual Refractive Errors Following Laser Surgery.

    PubMed

    Chung, Byunghoon; Lee, Hun; Choi, Bong Joon; Seo, Kyung Ryul; Kim, Eung Kwon; Kim, Dae Yune; Kim, Tae-Im

    2017-02-01

    The purpose of this study was to investigate the clinical efficacy of an optimized prolate ablation procedure for correcting residual refractive errors following laser surgery. We analyzed 24 eyes of 15 patients who underwent an optimized prolate ablation procedure for the correction of residual refractive errors following laser in situ keratomileusis, laser-assisted subepithelial keratectomy, or photorefractive keratectomy surgeries. Preoperative ophthalmic examinations were performed, and uncorrected distance visual acuity, corrected distance visual acuity, manifest refraction values (sphere, cylinder, and spherical equivalent), point spread function, modulation transfer function, corneal asphericity (Q value), ocular aberrations, and corneal haze measurements were obtained postoperatively at 1, 3, and 6 months. Uncorrected distance visual acuity improved and refractive errors decreased significantly at 1, 3, and 6 months postoperatively. Total coma aberration increased at 3 and 6 months postoperatively, while changes in all other aberrations were not statistically significant. Similarly, no significant changes in point spread function were detected, but modulation transfer function increased significantly at the postoperative time points measured. The optimized prolate ablation procedure was effective in terms of improving visual acuity and objective visual performance for the correction of persistent refractive errors following laser surgery.

  8. Error Detection/Correction in Collaborative Writing

    ERIC Educational Resources Information Center

    Pilotti, Maura; Chodorow, Martin

    2009-01-01

    In the present study, we examined error detection/correction during collaborative writing. Subjects were asked to identify and correct errors in two contexts: a passage written by the subject (familiar text) and a passage written by a person other than the subject (unfamiliar text). A computer program inserted errors in function words prior to the…

  9. Optimization of Aimpoints for Coordinate Seeking Weapons

    DTIC Science & Technology

    2015-09-01

    aiming) and independent (ballistic) errors are taken into account, before utilizing each of the three damage functions representing the weapon. A Monte...characteristics such as the radius of the circle containing the weapon aimpoint, impact angle, dependent (aiming) and independent (ballistic) errors are taken...Dependent (Aiming) Error .................................8 2. Single Weapon Independent (Ballistic) Error .............................9 3

  10. The decline and fall of Type II error rates

    Treesearch

    Steve Verrill; Mark Durst

    2005-01-01

    For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.
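    As a concrete (if simpler) instance of this decline, the Type II error of a one-sided z-test can be written in closed form through the complementary error function; the effect size, sigma, and alpha below are illustrative, not values from the record:

```python
import math
from statistics import NormalDist

def normal_cdf(x):
    """Standard normal CDF Phi(x) via the complementary error function."""
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

def type_ii_error(effect, sigma, n, alpha=0.05):
    """Type II error (beta) of a one-sided z-test detecting a mean
    shift `effect` with known standard deviation and sample size n."""
    z_alpha = NormalDist().inv_cdf(1.0 - alpha)
    return normal_cdf(z_alpha - effect / sigma * math.sqrt(n))
```

    Since Phi(-t) is bounded by exp(-t**2 / 2) for t > 0, beta shrinks roughly like exp(-n * (effect / sigma)**2 / 2) for large n, which is the exponential decline in sample size noted above.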

  11. Errors and conflict at the task level and the response level.

    PubMed

    Desmet, Charlotte; Fias, Wim; Hartstra, Egbert; Brass, Marcel

    2011-01-26

    In the last decade, research on error and conflict processing has become one of the most influential research areas in the domain of cognitive control. There is now converging evidence that a specific part of the posterior frontomedian cortex (pFMC), the rostral cingulate zone (RCZ), is crucially involved in the processing of errors and conflict. However, error-related research has focused primarily on a specific error type, namely, response errors. The aim of the present study was to investigate whether errors on the task level rely on the same neural and functional mechanisms. Here we report a dissociation of both error types in the pFMC: whereas response errors activate the RCZ, task errors activate the dorsal frontomedian cortex. Although the latter region shows an overlap in activation for task and response errors at the group level, a closer inspection of the single-subject data is more in accordance with a functional anatomical dissociation. When investigating brain areas related to conflict at the task and response levels, a clear dissociation was observed between areas associated with response conflict and those associated with task conflict. Overall, our data support a dissociation between response and task levels of processing in the pFMC. In addition, we provide additional evidence for a dissociation between conflict and errors both at the response level and at the task level.

  12. Discrete-Time Stable Generalized Self-Learning Optimal Control With Approximation Errors.

    PubMed

    Wei, Qinglai; Li, Benkai; Song, Ruizhuo

    2018-04-01

    In this paper, a generalized policy iteration (GPI) algorithm with approximation errors is developed for solving infinite horizon optimal control problems for nonlinear systems. The developed stable GPI algorithm provides a general structure of discrete-time iterative adaptive dynamic programming algorithms, by which most of the discrete-time reinforcement learning algorithms can be described using the GPI structure. For the first time, approximation errors are explicitly considered in the GPI algorithm. The properties of the stable GPI algorithm with approximation errors are analyzed. The admissibility of the approximate iterative control law can be guaranteed if the approximation errors satisfy the admissibility criteria. The convergence of the developed algorithm is established, which shows that the iterative value function converges to a finite neighborhood of the optimal performance index function if the approximation errors satisfy the convergence criterion. Finally, numerical examples and comparisons are presented.

  13. A theoretical basis for the analysis of multiversion software subject to coincident errors

    NASA Technical Reports Server (NTRS)

    Eckhardt, D. E., Jr.; Lee, L. D.

    1985-01-01

    Fundamental to the development of redundant software techniques (known as fault-tolerant software) is an understanding of the impact of multiple joint occurrences of errors, referred to here as coincident errors. A theoretical basis for the study of redundant software is developed which: (1) provides a probabilistic framework for empirically evaluating the effectiveness of a general multiversion strategy when component versions are subject to coincident errors, and (2) permits an analytical study of the effects of these errors. An intensity function, called the intensity of coincident errors, has a central role in this analysis. This function describes the propensity of programmers to introduce design faults in such a way that software components fail together when executing in the application environment. A condition under which a multiversion system is a better strategy than relying on a single version is given.
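    The role of the intensity function can be illustrated with a toy three-version majority-vote calculation; the input distribution and intensity values below are invented for illustration, not taken from the paper. Conditional on an input x, versions fail independently with probability theta(x), and coincidence enters through how theta varies across inputs:

```python
def majority_fails(theta):
    """P(at least 2 of 3 versions fail), versions independent given theta."""
    return 3.0 * theta ** 2 - 2.0 * theta ** 3

def system_failure(intensities, weights):
    """Majority-vote failure probability averaged over the input
    distribution: intensities[i] = theta(x_i), weights[i] = P(x_i)."""
    return sum(w * majority_fails(t) for t, w in zip(intensities, weights))

# Same average intensity 0.1, very different outcomes:
flat = system_failure([0.1], [1.0])              # independent-like: ~0.028
spiky = system_failure([0.0, 1.0], [0.9, 0.1])   # coincident: 0.1
```

    With the flat intensity the three-version system beats a single version (0.028 versus 0.1 failure probability), while with the spiky, coincident intensity it is no better than a single version, illustrating the record's point that multiversion redundancy helps only under certain conditions on the intensity of coincident errors.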

  14. Frequency-domain Green's functions for radar waves in heterogeneous 2.5D media

    USGS Publications Warehouse

    Ellefsen, K.J.; Croize, D.; Mazzella, A.T.; McKenna, J.R.

    2009-01-01

    Green's functions for radar waves propagating in heterogeneous 2.5D media can be calculated in the frequency domain using a hybrid method. The model is defined in the Cartesian coordinate system, and its electromagnetic properties may vary in the x- and z-directions, but not in the y-direction. Wave propagation in the x- and z-directions is simulated with the finite-difference method, and wave propagation in the y-direction is simulated with an analytic function. The absorbing boundaries on the finite-difference grid are perfectly matched layers that have been modified to make them compatible with the hybrid method. The accuracy of these numerical Green's functions is assessed by comparing them with independently calculated Green's functions. For a homogeneous model, the magnitude errors range from -4.16% through 0.44%, and the phase errors range from -0.06% through 4.86%. For a layered model, the magnitude errors range from -2.60% through 2.06%, and the phase errors range from -0.49% through 2.73%. These numerical Green's functions can be used for forward modeling and full waveform inversion. © 2009 Society of Exploration Geophysicists. All rights reserved.

  15. How Prediction Errors Shape Perception, Attention, and Motivation

    PubMed Central

    den Ouden, Hanneke E. M.; Kok, Peter; de Lange, Floris P.

    2012-01-01

    Prediction errors (PE) are a central notion in theoretical models of reinforcement learning, perceptual inference, decision-making and cognition, and prediction error signals have been reported across a wide range of brain regions and experimental paradigms. Here, we will make an attempt to see the forest for the trees and consider the commonalities and differences of reported PE signals in light of recent suggestions that the computation of PE forms a fundamental mode of brain function. We discuss where different types of PE are encoded, how they are generated, and the different functional roles they fulfill. We suggest that while encoding of PE is a common computation across brain regions, the content and function of these error signals can be very different and are determined by the afferent and efferent connections within the neural circuitry in which they arise. PMID:23248610

  16. Structural correlates of affinity in fetal versus adult endplate nicotinic receptors

    NASA Astrophysics Data System (ADS)

    Nayak, Tapan Kumar; Chakraborty, Srirupa; Zheng, Wenjun; Auerbach, Anthony

    2016-04-01

    Adult-type nicotinic acetylcholine receptors (AChRs) mediate signalling at mature neuromuscular junctions and fetal-type AChRs are necessary for proper synapse development. Each AChR has two neurotransmitter binding sites located at the interface of a principal and a complementary subunit. Although all agonist binding sites have the same core of five aromatic amino acids, the fetal site has ~30-fold higher affinity for the neurotransmitter ACh. Here we use molecular dynamics simulations of adult versus fetal homology models to identify complementary-subunit residues near the core that influence affinity, and use single-channel electrophysiology to corroborate the results. Four residues in combination determine adult versus fetal affinity. Simulations suggest that at lower-affinity sites, one of these unsettles the core directly and the others (in loop E) increase backbone flexibility to unlock a key, complementary tryptophan from the core. Swapping only four amino acids is necessary and sufficient to exchange function between adult and fetal AChRs.

  17. Immunochemical Proof that a Novel Rearranging Gene Encodes the T Cell Receptor δ Subunit

    NASA Astrophysics Data System (ADS)

    Band, Hamid; Hochstenbach, Frans; McLean, Joanne; Hata, Shingo; Krangel, Michael S.; Brenner, Michael B.

    1987-10-01

    The T cell receptor (TCR) δ protein is expressed as part of a heterodimer with TCR γ, in association with the CD3 polypeptides on a subset of functional peripheral blood T lymphocytes, thymocytes, and certain leukemic T cell lines. A monoclonal antibody directed against TCR δ was produced that binds specifically to the surface of several TCR γδ cell lines and immunoprecipitates the TCR γδ as a heterodimer from Triton X-100 detergent lysates and also immunoprecipitates the TCR δ subunit alone after chain separation. A candidate human TCR δ complementary DNA clone (IDP2 O-240/38), reported in a companion paper, was isolated by the subtractive library approach from a TCR γδ cell line. This complementary DNA clone was used to direct the synthesis of a polypeptide that is specifically recognized by the monoclonal antibody to TCR δ. This complementary DNA clone thus corresponds to the gene that encodes the TCR δ subunit.

  18. Use of total electron content data to analyze ionosphere electron density gradients

    NASA Astrophysics Data System (ADS)

    Nava, B.; Radicella, S. M.; Leitinger, R.; Coïsson, P.

    In the presence of electron density gradients, the thin shell approximation for the ionosphere, used together with a simple mapping function to convert slant total electron content (TEC) to vertical TEC, can lead to TEC conversion errors. These "mapping function errors" can therefore be used to detect electron density gradients in the ionosphere. In the present work, GPS-derived slant TEC data have been used to investigate the effects of electron density gradients in the middle- and low-latitude ionosphere under geomagnetically quiet and disturbed conditions. In particular, the data corresponding to the geographic area of the American Sector for the days 5-7 April 2000 have been used to perform a complete analysis of mapping function errors based on the "coinciding pierce point technique". The results clearly illustrate the electron density gradient effects according to the locations considered and to the actual levels of disturbance of the ionosphere. In addition, the possibility of assessing an ionospheric shell height able to minimize the mapping function errors has been verified.
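    The thin-shell slant-to-vertical conversion in question uses a single-layer mapping function; a minimal sketch, where the 350 km shell height and mean Earth radius are typical assumptions rather than the values used in the study:

```python
import math

R_EARTH_KM = 6371.0  # mean Earth radius

def thin_shell_mapping(zenith_deg, shell_height_km=350.0):
    """Single-layer mapping function M(z): slant TEC ~= M(z) * vertical
    TEC, from the geometry of a ray piercing a thin shell at height h."""
    z = math.radians(zenith_deg)
    sin_zp = R_EARTH_KM * math.sin(z) / (R_EARTH_KM + shell_height_km)
    return 1.0 / math.sqrt(1.0 - sin_zp ** 2)

def slant_to_vertical(stec, zenith_deg, shell_height_km=350.0):
    """Convert slant to vertical TEC. Horizontal gradients in the real
    ionosphere make this conversion err, which the study exploits."""
    return stec / thin_shell_mapping(zenith_deg, shell_height_km)
```

    At two coinciding pierce points, differing vertical TEC values obtained from this conversion signal a gradient (or a poorly chosen shell height), which is the basis of the analysis described above.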

  19. Quantitative Tomography for Continuous Variable Quantum Systems

    NASA Astrophysics Data System (ADS)

    Landon-Cardinal, Olivier; Govia, Luke C. G.; Clerk, Aashish A.

    2018-03-01

    We present a continuous variable tomography scheme that reconstructs the Husimi Q function (Wigner function) by Lagrange interpolation, using measurements of the Q function (Wigner function) at the Padua points, conjectured to be optimal sampling points for two-dimensional reconstruction. Our approach drastically reduces the number of measurements required compared to using equidistant points on a regular grid, although reanalysis of such experiments is possible. The reconstruction algorithm produces a reconstructed function with exponentially decreasing error and quasilinear runtime in the number of Padua points. Moreover, using the interpolating polynomial of the Q function, we present a technique to directly estimate the density matrix elements of the continuous variable state, with only a linear propagation of input measurement error. Furthermore, we derive a state-independent analytical bound on this error, such that our estimate of the density matrix is accompanied by a measure of its uncertainty.
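    The Padua points mentioned above pair two Chebyshev-Lobatto grids with a parity constraint; this sketch generates one of the four (essentially equivalent) Padua families and is illustrative, not the paper's sampling code:

```python
import math

def padua_points(n):
    """One family of Padua points of degree n on [-1, 1]^2:
    (cos(j*pi/n), cos(k*pi/(n+1))) for 0 <= j <= n, 0 <= k <= n + 1,
    j + k even. Yields exactly (n + 1) * (n + 2) // 2 points."""
    if n < 1:
        raise ValueError("degree must be >= 1")
    return [(math.cos(j * math.pi / n), math.cos(k * math.pi / (n + 1)))
            for j in range(n + 1)
            for k in range(n + 2)
            if (j + k) % 2 == 0]
```

    The point count (n + 1)(n + 2) / 2 — roughly half of an (n + 1) x (n + 1) grid — is what cuts the number of measurements, while the points still determine a degree-n interpolating polynomial uniquely.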

  20. Variational bounds on the temperature distribution

    NASA Astrophysics Data System (ADS)

    Kalikstein, Kalman; Spruch, Larry; Baider, Alberto

    1984-02-01

    Upper and lower stationary or variational bounds are obtained for functions which satisfy parabolic linear differential equations. (The error in the bound, that is, the difference between the bound on the function and the function itself, is of second order in the error in the input function, and the error is of known sign.) The method is applicable to a range of functions associated with equalization processes, including heat conduction, mass diffusion, electric conduction, fluid friction, the slowing down of neutrons, and certain limiting forms of the random walk problem, under conditions which are not unduly restrictive: in heat conduction, for example, we do not allow the thermal coefficients or the boundary conditions to depend upon the temperature, but the thermal coefficients can be functions of space and time and the geometry is unrestricted. The variational bounds follow from a maximum principle obeyed by the solutions of these equations.

  1. Punishment sensitivity modulates the processing of negative feedback but not error-induced learning.

    PubMed

    Unger, Kerstin; Heintz, Sonja; Kray, Jutta

    2012-01-01

    Accumulating evidence suggests that individual differences in punishment and reward sensitivity are associated with functional alterations in neural systems underlying error and feedback processing. In particular, individuals highly sensitive to punishment have been found to be characterized by larger mediofrontal error signals as reflected in the error negativity/error-related negativity (Ne/ERN) and the feedback-related negativity (FRN). By contrast, reward sensitivity has been shown to relate to the error positivity (Pe). Given that Ne/ERN, FRN, and Pe have been functionally linked to flexible behavioral adaptation, the aim of the present research was to examine how these electrophysiological reflections of error and feedback processing vary as a function of punishment and reward sensitivity during reinforcement learning. We applied a probabilistic learning task that involved three different conditions of feedback validity (100%, 80%, and 50%). In contrast to prior studies using response competition tasks, we did not find reliable correlations between punishment sensitivity and the Ne/ERN. Instead, higher punishment sensitivity predicted larger FRN amplitudes, irrespective of feedback validity. Moreover, higher reward sensitivity was associated with a larger Pe. However, only reward sensitivity was related to better overall learning performance and higher post-error accuracy, whereas highly punishment sensitive participants showed impaired learning performance, suggesting that larger negative feedback-related error signals were not beneficial for learning or even reflected maladaptive information processing in these individuals. Thus, although our findings indicate that individual differences in reward and punishment sensitivity are related to electrophysiological correlates of error and feedback processing, we found less evidence for influences of these personality characteristics on the relation between performance monitoring and feedback-based learning.

  2. Measurement uncertainty and feasibility study of a flush airdata system for a hypersonic flight experiment

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.; Moes, Timothy R.

    1994-01-01

    Presented is a feasibility and error analysis for a hypersonic flush airdata system on a hypersonic flight experiment (HYFLITE). HYFLITE heating loads make intrusive airdata measurement impractical. Although this analysis is specifically for the HYFLITE vehicle and trajectory, the problems analyzed are generally applicable to hypersonic vehicles. A layout of the flush-port matrix is shown. Surface pressures are related to airdata parameters using a simple aerodynamic model. The model is linearized using small perturbations and inverted using nonlinear least-squares. Effects of various error sources on the overall uncertainty are evaluated using an error simulation. Error sources modeled include boundary-layer/viscous interactions, pneumatic lag, thermal transpiration in the sensor pressure tubing, misalignment in the matrix layout, thermal warping of the vehicle nose, sampling resolution, and transducer error. Using simulated pressure data for input to the estimation algorithm, effects caused by various error sources are analyzed by comparing estimator outputs with the original trajectory. To obtain ensemble averages, the simulation is run repeatedly and output statistics are compiled. Output errors resulting from the various error sources are presented as a function of Mach number. Final uncertainties with all modeled error sources included are presented as a function of Mach number.
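    The ensemble-averaging step described above can be sketched generically: corrupt the truth with the modeled error sources, re-run the estimator, and compile output statistics over many runs. The bias/noise error model, run count, and identity estimator here are placeholders, not the HYFLITE error budget:

```python
import random
import statistics

def run_ensemble(true_value, corrupt, estimate, runs, rng):
    """Repeatedly corrupt the truth with the modeled error sources,
    re-run the estimator, and compile output-error statistics."""
    errors = [estimate(corrupt(true_value, rng)) - true_value
              for _ in range(runs)]
    return statistics.fmean(errors), statistics.pstdev(errors)

# Placeholder error model: fixed transducer bias plus Gaussian noise.
def corrupt(value, rng):
    return value + 0.5 + rng.gauss(0.0, 0.1)

bias, spread = run_ensemble(10.0, corrupt, lambda m: m, 5000,
                            random.Random(1))
```

    Repeating this per error source, and per Mach number along the trajectory, yields the output-error-versus-Mach curves the record describes.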

  3. Derivation and Application of a Global Albedo yielding an Optical Brightness To Physical Size Transformation Free of Systematic Errors

    NASA Technical Reports Server (NTRS)

    Mulrooney, Dr. Mark K.; Matney, Dr. Mark J.

    2007-01-01

    Orbital object data acquired via optical telescopes can play a crucial role in accurately defining the space environment. Radar systems probe the characteristics of small debris by measuring the reflected electromagnetic energy from an object of the same order of size as the wavelength of the radiation. This signal is affected by electrical conductivity of the bulk of the debris object, as well as its shape and orientation. Optical measurements use reflected solar radiation with wavelengths much smaller than the size of the objects. Just as with radar, the shape and orientation of an object are important, but we only need to consider the surface electrical properties of the debris material (i.e., the surface albedo), not the bulk electromagnetic properties. As a result, these two methods are complementary in that they measure somewhat independent physical properties to estimate the same thing, debris size. Short arc optical observations such as are typical of NASA's Liquid Mirror Telescope (LMT) give enough information to estimate an Assumed Circular Orbit (ACO) and an associated range. This information, combined with the apparent magnitude, can be used to estimate an "absolute" brightness (scaled to a fixed range and phase angle). This absolute magnitude is what is used to estimate debris size. However, the shape and surface albedo effects make the size estimates subject to systematic and random errors, such that it is impossible to ascertain the size of an individual object with any certainty. However, as has been shown with radar debris measurements, that does not preclude the ability to estimate the size distribution of a number of objects statistically. After systematic errors have been eliminated (range errors, phase function assumptions, photometry) there remains a random geometric albedo distribution that relates object size to absolute magnitude. 
Measurements by the LMT of a subset of tracked debris objects with sizes estimated from their radar cross sections indicate that the random variations in the albedo follow a log-normal distribution quite well. In addition, this distribution appears to be independent of object size over a considerable range in size. Note that this relation appears to hold for debris only, where the shapes and other properties are not primarily the result of human manufacture, but of random processes. With this information in hand, it now becomes possible to estimate the actual size distribution we are sampling from. We have identified two characteristics of the space debris population that make this process tractable and by extension have developed a methodology for performing the transformation.
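    The effect of a log-normal albedo on inferred size can be sketched with the asteroid-style photometric relation D = 1329 * p**-0.5 * 10**(-H/5) km, used here only as a stand-in for the debris relation; the median albedo and scatter values are invented for illustration:

```python
import math
import random

def diameter_km(abs_mag, albedo):
    """Asteroid-style size from absolute magnitude H and geometric
    albedo p: D = 1329 / sqrt(p) * 10^(-H/5), in km."""
    return 1329.0 / math.sqrt(albedo) * 10.0 ** (-abs_mag / 5.0)

def sampled_sizes(abs_mag, median_albedo, sigma_ln, n, rng):
    """Log-normal albedo scatter maps to log-normal scatter in the
    size inferred from a fixed absolute magnitude."""
    return [diameter_km(abs_mag,
                        median_albedo * math.exp(rng.gauss(0.0, sigma_ln)))
            for _ in range(n)]

rng = random.Random(2)
sizes = sampled_sizes(15.0, 0.1, 0.5, 20001, rng)
```

    An individual size is therefore uncertain by the full width of this distribution, but, as the record argues, the ensemble size distribution can still be recovered statistically by deconvolving the known log-normal spread.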

  4. An approach to the analysis of performance of quasi-optimum digital phase-locked loops.

    NASA Technical Reports Server (NTRS)

    Polk, D. R.; Gupta, S. C.

    1973-01-01

    An approach to the analysis of performance of quasi-optimum digital phase-locked loops (DPLL's) is presented. An expression for the characteristic function of the prior error in the state estimate is derived, and from this expression an infinite dimensional equation for the prior error variance is obtained. The prior error-variance equation is a function of the communication system model and the DPLL gain and is independent of the method used to derive the DPLL gain. Two approximations are discussed for reducing the prior error-variance equation to finite dimension. The effectiveness of one approximation in analyzing DPLL performance is studied.

  5. General error analysis in the relationship between free thyroxine and thyrotropin and its clinical relevance.

    PubMed

    Goede, Simon L; Leow, Melvin Khee-Shing

    2013-01-01

    This treatise investigates error sources in measurements applicable to analysis of the hypothalamus-pituitary-thyroid (HPT) system for homeostatic set point computation. The hypothalamus-pituitary transfer characteristic (HP curve) describes the relationship between plasma free thyroxine [FT4] and thyrotropin [TSH]. We define the origin, types, causes, and effects of errors that are commonly encountered in thyroid function test (TFT) measurements and examine how we can interpret these to construct a reliable HP function for set point establishment. The error sources in the clinical measurement procedures are identified and analyzed in relation to the constructed HP model. The main sources of measurement and interpretation uncertainty are (1) diurnal variations in [TSH], (2) TFT measurement variations influenced by the timing of thyroid medications, (3) error sensitivity in ranges of [TSH] and [FT4] (laboratory assay dependent), (4) rounding/truncation of decimals in [FT4], which in turn amplifies curve-fitting errors in the [TSH] domain in the lower [FT4] range, and (5) memory effects (rate-independent hysteresis). When the main TFT uncertainties are identified and analyzed, we can find the most acceptable model space with which to construct the best HP function and the related set point area.

  6. Network Dynamics Underlying Speed-Accuracy Trade-Offs in Response to Errors

    PubMed Central

    Agam, Yigal; Carey, Caitlin; Barton, Jason J. S.; Dyckman, Kara A.; Lee, Adrian K. C.; Vangel, Mark; Manoach, Dara S.

    2013-01-01

    The ability to dynamically and rapidly adjust task performance based on its outcome is fundamental to adaptive, flexible behavior. Over trials of a task, responses speed up until an error is committed and after the error responses slow down. These dynamic adjustments serve to optimize performance and are well-described by the speed-accuracy trade-off (SATO) function. We hypothesized that SATOs based on outcomes reflect reciprocal changes in the allocation of attention between the internal milieu and the task-at-hand, as indexed by reciprocal changes in activity between the default and dorsal attention brain networks. We tested this hypothesis using functional MRI to examine the pattern of network activation over a series of trials surrounding and including an error. We further hypothesized that these reciprocal changes in network activity are coordinated by the posterior cingulate cortex (PCC) and would rely on the structural integrity of its white matter connections. Using diffusion tensor imaging, we examined whether fractional anisotropy of the posterior cingulum bundle correlated with the magnitude of reciprocal changes in network activation around errors. As expected, reaction time (RT) in trials surrounding errors was consistent with predictions from the SATO function. Activation in the default network was: (i) inversely correlated with RT, (ii) greater on trials before than after an error and (iii) maximal at the error. In contrast, activation in the right intraparietal sulcus of the dorsal attention network was (i) positively correlated with RT and showed the opposite pattern: (ii) less activation before than after an error and (iii) the least activation on the error. Greater integrity of the posterior cingulum bundle was associated with greater reciprocity in network activation around errors. 
These findings suggest that dynamic changes in attention to the internal versus external milieu in response to errors underlie SATOs in RT and are mediated by the PCC. PMID:24069223

  7. Numerically accurate computational techniques for optimal estimator analyses of multi-parameter models

    NASA Astrophysics Data System (ADS)

    Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.

    2018-05-01

    Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. 
The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
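
    The error decomposition described above can be sketched numerically. The following toy example (not the paper's implementation; the synthetic data, bin count, and "crude model" are invented for illustration) builds a histogram-based optimal estimator, i.e. the conditional mean of the target given the input, and splits the total modelling error of a deliberately poor model into irreducible and functional parts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: target tau depends on input phi plus unresolved variability.
phi = rng.uniform(0.0, 1.0, 100_000)
tau = np.sin(2.0 * np.pi * phi) + 0.1 * rng.standard_normal(phi.size)

# Histogram-based optimal estimator: bin the input and take the conditional
# mean E[tau | phi] in each bin (the best possible model using phi alone).
bins = np.linspace(0.0, 1.0, 65)
idx = np.digitize(phi, bins) - 1
cond_mean = np.array([tau[idx == k].mean() for k in range(len(bins) - 1)])

# Irreducible error: variance of tau around the optimal estimator.
irreducible = np.mean((tau - cond_mean[idx]) ** 2)

# A deliberately crude model (linear fit) for comparison; its total error
# is the irreducible error plus the functional error of the linear form.
a, b = np.polyfit(phi, tau, 1)
total = np.mean((tau - (a * phi + b)) ** 2)
functional = total - irreducible
print(irreducible, functional, total)
```

    With a single input parameter the histogram technique works well; the paper's point is that this binning approach degrades as the number of input parameters grows, which is where regression-based estimators become necessary.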

  8. Complementary effects of cereal and pulse polyphenols and dietary fiber on chronic inflammation and gut health.

    PubMed

    Awika, Joseph M; Rose, Devin J; Simsek, Senay

    2018-03-01

Cereal grains and grain pulses are primary staples often consumed together, and contribute a major portion of daily human calorie and protein intake globally. Protective effects of consuming whole grain cereals and grain pulses against various inflammation-related chronic diseases are well documented. However, the potential benefits of combined intake of whole cereals and pulses beyond their complementary amino acid nutrition are rarely considered in the literature. There is ample evidence that key bioactive components of whole grain cereals and pulses are structurally different and thus may be optimized to provide synergistic/complementary health benefits. Among the most important whole grain bioactive components are polyphenols and dietary fiber, not only because of their demonstrated biological function, but also their major impact on consumer choice of whole grain/pulse products. This review highlights the distinct structural differences between key cereal grain and pulse polyphenols and non-starch polysaccharides (dietary fiber), and the evidence on specific synergistic/complementary benefits of combining the bioactive components from the two commodities. Interactive effects of the polyphenols and fiber on gut microbiota and associated benefits to colon health, and against systemic inflammation, are discussed. Processing technologies that can be used to further enhance the interactive benefits of combined cereal-pulse bioactive compounds are highlighted.

  9. Determining relative error bounds for the CVBEM

    USGS Publications Warehouse

    Hromadka, T.V.

    1985-01-01

The Complex Variable Boundary Element Method (CVBEM) provides a measure of relative error which can be utilized to subsequently reduce the error or provide information for further modeling analysis. By maximizing the relative error norm on each boundary element, a bound on the total relative error for each boundary element can be evaluated. This bound can be utilized to test CVBEM convergence, to analyze the effects of additional boundary nodal points in reducing the modeling error, and to evaluate the sensitivity of resulting modeling error within a boundary element from the error produced in another boundary element as a function of geometric distance. © 1985.

  10. A cerebellar thalamic cortical circuit for error-related cognitive control.

    PubMed

    Ide, Jaime S; Li, Chiang-shan R

    2011-01-01

    Error detection and behavioral adjustment are core components of cognitive control. Numerous studies have focused on the anterior cingulate cortex (ACC) as a critical locus of this executive function. Our previous work showed greater activation in the dorsal ACC and subcortical structures during error detection, and activation in the ventrolateral prefrontal cortex (VLPFC) during post-error slowing (PES) in a stop signal task (SST). However, the extent of error-related cortical or subcortical activation across subjects was not correlated with VLPFC activity during PES. So then, what causes VLPFC activation during PES? To address this question, we employed Granger causality mapping (GCM) and identified regions that Granger caused VLPFC activation in 54 adults performing the SST during fMRI. These brain regions, including the supplementary motor area (SMA), cerebellum, a pontine region, and medial thalamus, represent potential targets responding to errors in a way that could influence VLPFC activation. In confirmation of this hypothesis, the error-related activity of these regions correlated with VLPFC activation during PES, with the cerebellum showing the strongest association. The finding that cerebellar activation Granger causes prefrontal activity during behavioral adjustment supports a cerebellar function in cognitive control. Furthermore, multivariate GCA described the "flow of information" across these brain regions. Through connectivity with the thalamus and SMA, the cerebellum mediates error and post-error processing in accord with known anatomical projections. Taken together, these new findings highlight the role of the cerebello-thalamo-cortical pathway in an executive function that has heretofore largely been ascribed to the anterior cingulate-prefrontal cortical circuit. Copyright © 2010 Elsevier Inc. All rights reserved.

  11. Spacetime geodesy and the LAGEOS-3 satellite experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, W.A.; Chen, Kaiyou; Habib, S.

    1996-04-01

This is the final report of a three-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). LAGEOS-1 is a dense spherical satellite whose tracking accuracy is such as to yield a medium-term inertial reference frame and that is used as an adjunct to more difficult and more data-intensive absolute frame measurements. LAGEOS-3, an identical satellite to be launched into an orbit complementary to that of LAGEOS-1, would experience an equal and opposite classical precession to that of LAGEOS-1. Besides providing a more accurate real-time measurement of the earth's length of day and polar wobble, this paired-satellite system would provide the first direct measurement of the general relativistic frame-dragging effect. Of the five dominant error sources in this experiment, the largest one involves surface forces on the satellite and their consequent impact on the orbital nodal precession. The surface forces are a function of the spin dynamics of the satellite. We have modeled the spin dynamics of a LAGEOS-type satellite and used this spin model to estimate the impact of the thermal rocketing effect on the LAGEOS-3 experiment. We have also performed an analytic tensor expansion of Synge's world function to better reveal the nature of the predicted frame-dragging effect. We showed that this effect is not due to the Riemann curvature tensor, but rather is a "potential effect" arising from the acceleration of the world lines in the Kerr spacetime geometry.

  12. Real-Time In-Situ Measurements for Earthquake Early Warning and Space-Borne Deformation Measurement Mission Support

    NASA Astrophysics Data System (ADS)

    Kedar, S.; Bock, Y.; Webb, F.; Clayton, R. W.; Owen, S. E.; Moore, A. W.; Yu, E.; Dong, D.; Fang, P.; Jamason, P.; Squibb, M. B.; Crowell, B. W.

    2010-12-01

In situ geodetic networks for observing crustal motion have proliferated over the last two decades and are now recognized as indispensable tools in geophysical research, alongside more traditional seismic networks. The 2007 National Research Council’s Decadal Survey recognizes that space-borne and in situ observations, such as Interferometric Synthetic Aperture Radar (InSAR) and ground-based continuous GPS (CGPS) are complementary in forecasting, in assessing, and in mitigating natural hazards. However, the information content and timeliness of in situ geodetic observations have not been fully exploited, particularly at higher frequencies than traditional daily CGPS position time series. Nor have scientists taken full advantage of the complementary natures of geodetic and seismic data, as well as those of space-based and in situ observations. To address these deficits we are developing real-time CGPS data products for earthquake early warning and for space-borne deformation measurement mission support. Our primary mission objective is in situ verification and validation for DESDynI, but our work is also applicable to other international missions (Sentinel 1a/1b, SAOCOM, ALOS 2). Our project is developing new capabilities to continuously observe and mitigate earthquake-related hazards (direct seismic damage, tsunamis, landslides, volcanoes) in near real-time with high spatial-temporal resolution, to improve the planning and accuracy of space-borne observations. We also are using GPS estimates of tropospheric zenith delay combined with water vapor data from weather models to generate tropospheric calibration maps for mitigating the largest source of error, atmospheric artifacts, in InSAR interferograms. These functions will be fully integrated into a Geophysical Resource Web Services and interactive GPS Explorer data portal environment being developed as part of an ongoing MEaSUREs project and NASA’s contribution to the EarthScope project. 
GPS Explorer, originally designed for web-based dissemination of long-term Solid Earth Science Data Records (ESDRs) such as deformation time series, tectonic velocity vectors, and strain maps, provides the framework for seamless inclusion of the high rate data products. Detection and preliminary modeling of interesting signals by dense real-time high-rate ground networks will allow mission planners and decision makers to fully exploit the less-frequent but higher resolution InSAR observations. Fusion of in situ elements into an advanced observation system will significantly improve the scientific value of extensive surface displacement data, provide scientists with improved access to modern software tools to manipulate and model these data, increase the data’s accuracy and timeliness at higher frequencies than available from space-based observations, and increase the accuracy of space-based observations through calibration of atmospheric and other systematic errors.

  13. Improvement and comparison of likelihood functions for model calibration and parameter uncertainty analysis within a Markov chain Monte Carlo scheme

    NASA Astrophysics Data System (ADS)

    Cheng, Qin-Bo; Chen, Xi; Xu, Chong-Yu; Reinhardt-Imjela, Christian; Schulte, Achim

    2014-11-01

In this study, the likelihood functions for uncertainty analysis of hydrological models are compared and improved through the following steps: (1) the equivalent relationship between the Nash-Sutcliffe Efficiency coefficient (NSE) and the likelihood function with Gaussian independent and identically distributed residuals is proved; (2) a new estimation method of the Box-Cox transformation (BC) parameter is developed to improve the effective elimination of the heteroscedasticity of model residuals; and (3) three likelihood functions-NSE, Generalized Error Distribution with BC (BC-GED) and Skew Generalized Error Distribution with BC (BC-SGED)-are applied for SWAT-WB-VSA (Soil and Water Assessment Tool - Water Balance - Variable Source Area) model calibration in the Baocun watershed, Eastern China. Performances of calibrated models are compared using the observed river discharges and groundwater levels. The result shows that the minimum variance constraint can effectively estimate the BC parameter. The form of the likelihood function significantly impacts the calibrated parameters and the simulated results of high and low flow components. SWAT-WB-VSA with the NSE approach simulates floods well but baseflow poorly, owing to the assumption of a Gaussian error distribution, under which large errors have low probability but small errors near zero are nearly equiprobable. By contrast, SWAT-WB-VSA with the BC-GED or BC-SGED approach mimics baseflow well, as confirmed by the groundwater level simulation. The assumption of skewness of the error distribution may be unnecessary, because all the results of the BC-SGED approach are nearly the same as those of the BC-GED approach.
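
    The equivalence claimed in step (1) holds because, with Gaussian iid residuals, the log-likelihood is a monotone function of the sum of squared errors, which also determines the NSE, so both criteria rank candidate simulations identically. A minimal sketch (synthetic data; the Box-Cox exponent `lam` is an arbitrary illustrative value, not a calibrated one):

```python
import numpy as np

rng = np.random.default_rng(1)
obs = np.sin(np.linspace(0, 6, 200)) + 2.0 + 0.05 * rng.standard_normal(200)

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 - SSE / total variance of observations."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def gauss_loglik(sim, obs):
    """Gaussian iid log-likelihood with sigma^2 profiled out (sigma^2 = SSE/n)."""
    n = obs.size
    sse = np.sum((obs - sim) ** 2)
    return -0.5 * n * (np.log(2 * np.pi * sse / n) + 1.0)

def boxcox(q, lam=0.3):
    """Box-Cox transform, used to reduce heteroscedasticity of residuals."""
    return np.log(q) if lam == 0 else (q ** lam - 1.0) / lam

# Two candidate simulations: ranking by NSE matches ranking by log-likelihood,
# since both are monotone in the sum of squared errors.
sim_good = obs + 0.02 * rng.standard_normal(200)
sim_bad = obs + 0.20 * rng.standard_normal(200)
print(nse(sim_good, obs) > nse(sim_bad, obs))
print(gauss_loglik(sim_good, obs) > gauss_loglik(sim_bad, obs))
```

    Applying `boxcox` to flows before computing residuals is what turns the NSE-style criterion into the BC-GED family discussed in the abstract.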

  14. Counting-backward test for executive function in idiopathic normal pressure hydrocephalus.

    PubMed

    Kanno, S; Saito, M; Hayashi, A; Uchiyama, M; Hiraoka, K; Nishio, Y; Hisanaga, K; Mori, E

    2012-10-01

The aim of this study was to develop and validate a bedside test for executive function in patients with idiopathic normal pressure hydrocephalus (INPH). Twenty consecutive patients with INPH and 20 patients with Alzheimer's disease (AD) were enrolled in this study. We developed the counting-backward test for evaluating executive function in patients with INPH. Two indices, considered to reflect the attention deficits and response suppression underlying executive dysfunction in INPH, were calculated: the first-error score and the reverse-effect index. Performance on both the counting-backward test and standard neuropsychological tests for executive function was assessed in INPH and AD patients. The first-error score, reverse-effect index and the scores from the standard neuropsychological tests for executive function were significantly lower for individuals in the INPH group than in the AD group. The two indices for the counting-backward test in the INPH group were strongly correlated with the total scores for the Frontal Assessment Battery and Phonemic Verbal Fluency. The first-error score was also significantly correlated with the error rate of the Stroop colour-word test and the score of the go/no-go test. In addition, we found that the first-error score reliably distinguished patients with INPH from those with AD. The counting-backward test is useful for evaluating executive dysfunction in INPH and for differentiating between INPH and AD patients. In particular, the first-error score may reflect deficits in the response suppression related to executive dysfunction in INPH. © 2012 John Wiley & Sons A/S.

  15. Selection of neural network structure for system error correction of electro-optical tracker system with horizontal gimbal

    NASA Astrophysics Data System (ADS)

    Liu, Xing-fa; Cen, Ming

    2007-12-01

The neural network system error correction method is more precise than the least-squares and spherical harmonics function correction methods. The accuracy of the neural network method depends mainly on the network architecture. Analysis and simulation show that both BP and RBF neural network system error correction methods achieve high correction accuracy; for small training sample sets, the RBF network is preferable to the BP network when training speed and network scale are taken into account.
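
    A hedged sketch of the RBF approach: with fixed basis centres, the output weights of an RBF network are solved by linear least squares in a single step, needing no iterative training, which is one reason it can be preferable to a BP network for small training sets. The gimbal pointing-error model and all numbers below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic pointing errors of a gimbal as a function of azimuth (radians):
# a smooth systematic error plus measurement noise.
az = rng.uniform(0.0, 2 * np.pi, 400)
err = 0.5 * np.sin(az) + 0.2 * np.cos(2 * az) + 0.01 * rng.standard_normal(az.size)

# RBF network: Gaussian basis functions on fixed centres, linear output
# weights solved by least squares (no iterative training, unlike BP).
centres = np.linspace(0.0, 2 * np.pi, 20)
width = 0.5

def design(x):
    # design matrix of Gaussian basis responses, shape (n_samples, n_centres)
    return np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2 * width ** 2))

w, *_ = np.linalg.lstsq(design(az), err, rcond=None)

# Residual after correction should be near the noise floor.
resid = err - design(az) @ w
print(np.std(err), np.std(resid))
```

    The centre count and basis width here stand in for the "neural network scale" trade-off the abstract mentions.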

  16. A frame selective dynamic programming approach for noise robust pitch estimation.

    PubMed

    Yarra, Chiranjeevi; Deshmukh, Om D; Ghosh, Prasanta Kumar

    2018-04-01

    The principles of the existing pitch estimation techniques are often different and complementary in nature. In this work, a frame selective dynamic programming (FSDP) method is proposed which exploits the complementary characteristics of two existing methods, namely, sub-harmonic to harmonic ratio (SHR) and sawtooth-wave inspired pitch estimator (SWIPE). Using variants of SHR and SWIPE, the proposed FSDP method classifies all the voiced frames into two classes-the first class consists of the frames where a confidence score maximization criterion is used for pitch estimation, while for the second class, a dynamic programming (DP) based approach is proposed. Experiments are performed on speech signals separately from KEELE, CSLU, and PaulBaghsaw corpora under clean and additive white Gaussian noise at 20, 10, 5, and 0 dB SNR conditions using four baseline schemes including SHR, SWIPE, and two DP based techniques. The pitch estimation performance of FSDP, when averaged over all SNRs, is found to be better than those of the baseline schemes suggesting the benefit of applying smoothness constraint using DP in selected frames in the proposed FSDP scheme. The VuV classification error from FSDP is also found to be lower than that from all four baseline schemes in almost all SNR conditions on three corpora.
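
    The DP stage used for the second class of frames can be illustrated with a toy Viterbi-style tracker (not the authors' exact cost function; the candidates and scores are invented): per-frame confidence scores are combined with a smoothness penalty on log-pitch jumps, which suppresses isolated octave errors that a per-frame maximization would accept:

```python
import numpy as np

def dp_pitch_track(candidates, scores, smooth=1.0):
    """Pick one pitch candidate per frame by dynamic programming:
    maximise the sum of per-frame confidence scores minus a smoothness
    penalty on log-pitch jumps between consecutive frames."""
    n_frames, n_cand = scores.shape
    cost = scores[0].copy()
    back = np.zeros((n_frames, n_cand), dtype=int)
    for t in range(1, n_frames):
        # transition penalty: squared jump in log-pitch, shape (prev, cur)
        jump = (np.log(candidates[t][None, :]) - np.log(candidates[t - 1][:, None])) ** 2
        total = cost[:, None] - smooth * jump
        back[t] = np.argmax(total, axis=0)
        cost = scores[t] + np.max(total, axis=0)
    path = [int(np.argmax(cost))]
    for t in range(n_frames - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return np.array(path[::-1])

# Toy example: true pitch 120 Hz; the middle frame has a spurious octave
# candidate with a slightly higher raw score, which DP smoothing overrides.
candidates = np.array([[120.0, 240.0]] * 5)
scores = np.array([[0.9, 0.1], [0.9, 0.1], [0.4, 0.5], [0.9, 0.1], [0.9, 0.1]])
path = dp_pitch_track(candidates, scores)
track = candidates[np.arange(len(path)), path]
print(track)  # all frames stay on the 120 Hz track
```

    A pure confidence-score maximization would jump to 240 Hz in the middle frame; this is the trade-off that motivates restricting DP to the frames where it helps.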

  17. Uprobe: a genome-wide universal probe resource for comparative physical mapping in vertebrates.

    PubMed

    Kellner, Wendy A; Sullivan, Robert T; Carlson, Brian H; Thomas, James W

    2005-01-01

Interspecies comparisons are important for deciphering the functional content and evolution of genomes. The expansive array of >70 public vertebrate genomic bacterial artificial chromosome (BAC) libraries can provide a means of comparative mapping, sequencing, and functional analysis of targeted chromosomal segments that is independent and complementary to whole-genome sequencing. However, at the present time, no complementary resource exists for the efficient targeted physical mapping of the majority of these BAC libraries. Universal overgo-hybridization probes, designed from regions of sequenced genomes that are highly conserved between species, have been demonstrated to be an effective resource for the isolation of orthologous regions from multiple BAC libraries in parallel. Here we report the application of the universal probe design principle across entire genomes, and the subsequent creation of a complementary probe resource, Uprobe, for screening vertebrate BAC libraries. Uprobe currently consists of whole-genome sets of universal overgo-hybridization probes designed for screening mammalian or avian/reptilian libraries. Retrospective analysis, experimental validation of the probe design process on a panel of representative BAC libraries, and estimates of probe coverage across the genome indicate that the majority of all eutherian and avian/reptilian genes or regions of interest can be isolated using Uprobe. Future implementation of the universal probe design strategy will be used to create an expanded number of whole-genome probe sets that will encompass all vertebrate genomes.

  18. Dual Roles of RNF2 in Melanoma Progression | Office of Cancer Genomics

    Cancer.gov

    Epigenetic regulators have emerged as critical factors governing the biology of cancer. Here, in the context of melanoma, we show that RNF2 is prognostic, exhibiting progression-correlated expression in human melanocytic neoplasms. Through a series of complementary gain-of-function and loss-of-function studies in mouse and human systems, we establish that RNF2 is oncogenic and prometastatic.

  19. Approximating exponential and logarithmic functions using polynomial interpolation

    NASA Astrophysics Data System (ADS)

    Gordon, Sheldon P.; Yang, Yajun

    2017-04-01

    This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is analysed. The results of interpolating polynomials are compared with those of Taylor polynomials.
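
    The comparison can be reproduced numerically. The sketch below (degree and interval chosen arbitrarily for illustration) contrasts the maximum error of a degree-3 interpolating polynomial for e^x on [0, 1], with nodes spread across the interval, against the degree-3 Taylor polynomial centred at 0, whose error concentrates at the far end of the interval:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1000)
f = np.exp(x)

# Degree-3 interpolating polynomial through 4 equally spaced nodes
# (polyfit with degree = points - 1 interpolates exactly at the nodes).
nodes = np.linspace(0.0, 1.0, 4)
interp = np.polyval(np.polyfit(nodes, np.exp(nodes), 3), x)

# Degree-3 Taylor polynomial of e^x about x = 0.
taylor = 1 + x + x**2 / 2 + x**3 / 6

err_interp = np.max(np.abs(f - interp))
err_taylor = np.max(np.abs(f - taylor))
print(err_interp, err_taylor)
```

    The interpolant's error is spread across the interval and is more than an order of magnitude smaller in the maximum norm, while the Taylor polynomial is only accurate near its centre; this is the behaviour of the error function the article analyses.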

  20. Optimizer convergence and local minima errors and their clinical importance

    NASA Astrophysics Data System (ADS)

    Jeraj, Robert; Wu, Chuan; Mackie, Thomas R.

    2003-09-01

    Two of the errors common in the inverse treatment planning optimization have been investigated. The first error is the optimizer convergence error, which appears because of non-perfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second error is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing and deterministic gradient method were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., convergence errors due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed leading into optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2) indicating clinical importance of the local minima produced by physical optimization.
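
    The two error types can be demonstrated on a toy non-convex objective (this one-dimensional example is illustrative only, not a treatment-planning objective): a gradient method converges to whichever local minimum its starting point selects, and the stopping tolerance sets the size of the convergence error:

```python
def f(x):
    # non-convex objective with two local minima of different depth
    return x**4 - 3 * x**2 + x

def grad(x):
    return 4 * x**3 - 6 * x + 1

def gradient_descent(x0, lr=0.01, tol=1e-10, max_iter=100_000):
    x = x0
    for _ in range(max_iter):
        step = lr * grad(x)
        x -= step
        if abs(step) < tol:  # non-zero stopping criterion -> convergence error
            break
    return x

x_left = gradient_descent(-0.5)   # converges to the global minimum
x_right = gradient_descent(+0.5)  # trapped in the shallower local minimum
print(x_left, f(x_left))
print(x_right, f(x_right))  # the gap f(x_right) - f(x_left) is the local-minima error
```

    Tightening `tol` shrinks the convergence error at the cost of iterations, which mirrors the paper's observation that stopping criteria can often be relaxed; no tolerance, however, rescues the right-hand start from its local minimum.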

  1. Optimizer convergence and local minima errors and their clinical importance.

    PubMed

    Jeraj, Robert; Wu, Chuan; Mackie, Thomas R

    2003-09-07

    Two of the errors common in the inverse treatment planning optimization have been investigated. The first error is the optimizer convergence error, which appears because of non-perfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second error is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing and deterministic gradient method were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., convergence errors due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed leading into optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2) indicating clinical importance of the local minima produced by physical optimization.

  2. Temporal lobe stimulation reveals anatomic distinction between auditory naming processes.

    PubMed

    Hamberger, M J; Seidel, W T; Goodman, R R; Perrine, K; McKhann, G M

    2003-05-13

Language errors induced by cortical stimulation can provide insight into function(s) supported by the area stimulated. The authors observed that some stimulation-induced errors during auditory description naming were characterized by tip-of-the-tongue responses or paraphasic errors, suggesting expressive difficulty, whereas others were qualitatively different, suggesting receptive difficulty. They hypothesized that these two response types reflected disruption at different stages of auditory verbal processing and that these "subprocesses" might be supported by anatomically distinct cortical areas. The aim was to explore the topographic distribution of error types in auditory verbal processing. Twenty-one patients requiring left temporal lobe surgery underwent preresection language mapping using direct cortical stimulation. Auditory naming was tested at temporal sites extending from 1 cm from the anterior tip to the parietal operculum. Errors were dichotomized as either "expressive" or "receptive." The topographic distribution of error types was explored. Sites associated with the two error types were topographically distinct from one another. Most receptive sites were located in the middle portion of the superior temporal gyrus (STG), whereas most expressive sites fell outside this region, scattered along lateral temporal and temporoparietal cortex. Results raise clinical questions regarding the inclusion of the STG in temporal lobe epilepsy surgery and suggest that more detailed cortical mapping might enable better prediction of postoperative language decline. From a theoretical perspective, results carry implications regarding the understanding of structure-function relations underlying temporal lobe mediation of auditory language processing.

  3. Segmented Mirror Image Degradation Due to Surface Dust, Alignment and Figure

    NASA Technical Reports Server (NTRS)

    Schreur, Julian J.

    1999-01-01

In 1996 an algorithm was developed to include the effects of surface roughness in the calculation of the point spread function of a telescope mirror. This algorithm has been extended to include the effects of alignment errors and figure errors for the individual elements, and an overall contamination by surface dust. The final algorithm builds an array for a guard-banded pupil function of a mirror that may or may not have a central hole, a central reflecting segment, or an outer ring of segments. The central hole, central reflecting segment, and outer ring may be circular or polygonal, and the outer segments may have trimmed corners. The modeled point spread functions show that x-tilt and y-tilt, or the corresponding R-tilt and theta-tilt for a segment in an outer ring, are readily apparent for maximum wavefront errors of 0.1 lambda. A similarly sized piston error is also apparent, but integral wavelength piston errors are not. Severe piston error introduces a focus error of the opposite sign, so piston could be adjusted to compensate for segments with varying focal lengths. Dust affects the image principally by decreasing the Strehl ratio, or peak intensity of the image. For an eight-meter telescope a 25% coverage by dust produced a scattered light intensity of 10(exp -9) of the peak intensity, a level well below detectability.

  4. Facial motion parameter estimation and error criteria in model-based image coding

    NASA Astrophysics Data System (ADS)

    Liu, Yunhai; Yu, Lu; Yao, Qingdong

    2000-04-01

Model-based image coding has been given extensive attention due to its high subjective image quality and low bit-rates. But the estimation of object motion parameters is still a difficult problem, and there are no proper error criteria for quality assessment that are consistent with visual properties. This paper presents an algorithm for facial motion parameter estimation based on feature point correspondence and gives motion parameter error criteria. The facial motion model comprises three parts. The first part is the global 3-D rigid motion of the head, the second part is non-rigid translation motion in the jaw area, and the third part consists of local non-rigid expression motion in the eye and mouth areas. The feature points are automatically selected by a function of edges, brightness and end-node outside the blocks of eyes and mouth. The number of feature points is adjusted adaptively. The jaw translation motion is tracked by the changes in the feature point positions of the jaw. The areas of non-rigid expression motion can be rebuilt by using a block-pasting method. An approach for estimating motion parameter error based on the quality of the reconstructed image is suggested, and an area error function and the error function of contour transition-turn rate are used as quality criteria. The criteria properly reflect the geometric image distortion caused by errors in the estimated motion parameters.

  5. Simulation of the Effects of Random Measurement Errors

    ERIC Educational Resources Information Center

    Kinsella, I. A.; Hannaidh, P. B. O.

    1978-01-01

Describes a simulation method for studying measurement errors that requires only calculators and tables of random digits. Each student simulates the random behaviour of the component variables in the function, and by combining the results of all students, the outline of the sampling distribution of the function can be obtained. (GA)
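
    The classroom procedure amounts to Monte Carlo error propagation. A sketch with an invented example function (the density of a measured cylinder; all measurement values and standard deviations are hypothetical) compares the simulated spread with the first-order propagation formula:

```python
import numpy as np

rng = np.random.default_rng(3)

# Each "student" draws the component variables with random measurement error
# and evaluates the derived quantity; pooling the draws outlines its sampling
# distribution. Example: density rho = m / (pi r^2 h) of a measured cylinder.
n = 10_000
m = rng.normal(250.0, 2.0, n)   # mass in g, sigma = 2 g
r = rng.normal(1.50, 0.01, n)   # radius in cm, sigma = 0.01 cm
h = rng.normal(10.0, 0.05, n)   # height in cm, sigma = 0.05 cm

rho = m / (np.pi * r**2 * h)
print(rho.mean(), rho.std())

# First-order (quadrature) error propagation for comparison.
rho0 = 250.0 / (np.pi * 1.5**2 * 10.0)
rel = np.sqrt((2.0 / 250.0) ** 2 + (2 * 0.01 / 1.5) ** 2 + (0.05 / 10.0) ** 2)
print(rho0, rho0 * rel)
```

    The simulated standard deviation agrees with the analytic propagation to within sampling noise, which is the point the classroom exercise makes empirically.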

  6. All-digital duty-cycle corrector with synchronous and high-accuracy output for double data rate synchronous dynamic random-access memory application

    NASA Astrophysics Data System (ADS)

    Tsai, Chih-Wei; Lo, Yu-Lung; Chang, Chia-Chen; Liu, Han-Ying; Yang, Wei-Bin; Cheng, Kuo-Hsing

    2017-04-01

    A synchronous and highly accurate all-digital duty-cycle corrector (ADDCC), which uses simplified dual-loop architecture, is presented in this paper. To explain the operational principle, a detailed circuit description and formula derivation are provided. To verify the proposed design, a chip was fabricated through the 0.18-µm standard complementary metal oxide semiconductor process with a core area of 0.091 mm2. The measurement results indicate that the proposed ADDCC can operate between 300 and 600 MHz with an input duty-cycle range of 40-60%, and that the output duty-cycle error is less than 1% with a root-mean-square jitter of 3.86 ps.

  7. Modelling default and likelihood reasoning as probabilistic reasoning

    NASA Technical Reports Server (NTRS)

    Buntine, Wray

    1990-01-01

    A probabilistic analysis of plausible reasoning about defaults and about likelihood is presented. Likely and by default are in fact treated as duals in the same sense as possibility and necessity. To model these four forms probabilistically, a qualitative default probabilistic (QDP) logic and its quantitative counterpart DP are derived that allow qualitative and corresponding quantitative reasoning. Consistency and consequent results for subsets of the logics are given that require at most a quadratic number of satisfiability tests in the underlying propositional logic. The quantitative logic shows how to track the propagation error inherent in these reasoning forms. The methodology and sound framework of the system highlights their approximate nature, the dualities, and the need for complementary reasoning about relevance.

  8. Surface geometry and optical aberrations of ex-vivo crystalline lenses

    NASA Astrophysics Data System (ADS)

    Bueno, Juan M.; Schwarz, Christina; Acosta, Eva; Artal, Pablo

    2010-02-01

    The shape of the surfaces of ex-vivo human crystalline lenses was measured using a shadow photography technique. From these data, the back-focal distance and the contribution of each surface to the main optical aberrations of the lenses were estimated. The aberrations of the lenses were measured separately with two complementary techniques: a Hartmann-Shack wavefront sensor and a point-diffraction interferometer. A laser scanning set-up was also used to measure the actual back-focal length as well as the phase aberration in one meridian section of the lenses. Measured and predicted back-focal length agreed well within the experimental errors. The lens aberrations computed with a ray-tracing approach from the measured surfaces and geometrical data only reproduce quantitatively the measured aberrations.

  9. CP function: an alpha spending function based on conditional power.

    PubMed

    Jiang, Zhiwei; Wang, Ling; Li, Chanjuan; Xia, Jielai; Wang, William

    2014-11-20

Alpha spending functions and stochastic curtailment are two frequently used methods in group sequential design. In the stochastic curtailment approach, the actual type I error probability cannot be well controlled within the specified significance level, but conditional power (CP) in stochastic curtailment is easier for clinicians to accept and understand. In this paper, we develop a spending function based on the concept of conditional power, named the CP function, which combines desirable features of alpha spending and stochastic curtailment. Like other two-parameter functions, the CP function is flexible enough to fit the needs of the trial. A simulation study is conducted to explore the choice of CP boundary in the CP function that maximizes the trial power. The CP function is equivalent to, or even better than, the classical Pocock, O'Brien-Fleming, and quadratic spending functions, as long as a proper ρ0 (the pre-specified CP threshold for efficacy) is given. It also controls the overall type I error rate well and overcomes the disadvantage of stochastic curtailment. Copyright © 2014 John Wiley & Sons, Ltd.
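
    For context, the two classical Lan-DeMets spending functions against which the CP function is compared can be evaluated directly; the O'Brien-Fleming-type form is expressed through the normal CDF (and hence the complementary error function, via Phi(x) = erfc(-x/sqrt(2))/2). The three equally spaced looks below are an arbitrary illustrative schedule, not from the paper:

```python
from statistics import NormalDist
import math

# Lan-DeMets alpha spending functions; t is the information fraction in (0, 1].
N = NormalDist()
alpha = 0.05

def obf_spend(t):
    # O'Brien-Fleming-type spending: 2 * (1 - Phi(z_{alpha/2} / sqrt(t)))
    z = N.inv_cdf(1 - alpha / 2)
    return 2.0 * (1.0 - N.cdf(z / math.sqrt(t)))

def pocock_spend(t):
    # Pocock-type spending: alpha * ln(1 + (e - 1) * t)
    return alpha * math.log(1.0 + (math.e - 1.0) * t)

# Incremental alpha available at each of three equally spaced looks.
looks = [1 / 3, 2 / 3, 1.0]
for spend in (obf_spend, pocock_spend):
    cum = [spend(t) for t in looks]
    inc = [cum[0]] + [b - a for a, b in zip(cum, cum[1:])]
    print([round(v, 4) for v in inc])
```

    Both functions spend exactly the full alpha at t = 1; the O'Brien-Fleming type spends very little early, while the Pocock type spends more evenly, which is the design space a two-parameter family like the CP function interpolates.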

  10. Assimilation of flood extent data with 2D flood inundation models for localised intense rainfall events

    NASA Astrophysics Data System (ADS)

    Neal, J. C.; Wood, M.; Bermúdez, M.; Hostache, R.; Freer, J. E.; Bates, P. D.; Coxon, G.

    2017-12-01

    Remote sensing of flood inundation extent has long been a potential source of data for constraining and correcting simulations of floodplain inundation. Hydrodynamic models and the computing resources to run them have developed to the extent that simulation of flood inundation in two-dimensional space is now feasible over large river basins in near real-time. However, despite substantial evidence that there is useful information content within inundation extent data, even from low-resolution SAR such as that gathered by Envisat ASAR in wide swath mode, making use of the information in a data assimilation system has proved difficult. Here we review recent applications of the Ensemble Kalman Filter (EnKF) and Particle Filter for assimilating SAR data, with a focus on the River Severn, UK, and compare these with complementary research that has examined internal error sources and boundary condition errors using detailed terrestrial data that are not available in most locations. Previous applications of the EnKF to this reach have focused on upstream boundary conditions as the source of flow error; however, this description of errors was too simplistic for the simulation of summer flood events where localised intense rainfall can be substantial. Therefore, we evaluate the introduction of uncertain lateral inflows to the ensemble. A further limitation of the existing EnKF-based methods is the need to convert flood extent to water surface elevations by intersecting the shoreline location with a high-quality digital elevation model (e.g., LiDAR). To simplify this data processing step, we evaluate a method to directly assimilate inundation extent as an EnKF model state rather than assimilating water heights, potentially allowing the scheme to be used where high-quality terrain data are sparse.
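    The EnKF analysis step at the heart of these schemes is standard; below is a minimal stochastic (perturbed-observation) sketch, with a generic linear observation operator H standing in for the extent-to-elevation mapping. All names and dimensions here are illustrative, not taken from the paper:

```python
import numpy as np

def enkf_analysis(ens, y, H, R, rng):
    """Stochastic EnKF update with perturbed observations.

    ens: (n_state, n_ens) forecast ensemble
    y:   (n_obs,) observation vector
    H:   (n_obs, n_state) linear observation operator
    R:   (n_obs, n_obs) observation-error covariance
    """
    n_obs, n_ens = y.size, ens.shape[1]
    X = ens - ens.mean(axis=1, keepdims=True)      # state anomalies
    HX = H @ ens
    HA = HX - HX.mean(axis=1, keepdims=True)       # observed anomalies
    Pyy = HA @ HA.T / (n_ens - 1) + R              # innovation covariance
    Pxy = X @ HA.T / (n_ens - 1)                   # state-observation cross covariance
    K = Pxy @ np.linalg.inv(Pyy)                   # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
    return ens + K @ (Y - HX)

rng = np.random.default_rng(0)
prior = rng.normal(0.0, 1.0, size=(3, 200))        # 3 states, 200 members
H = np.array([[1.0, 0.0, 0.0]])                    # observe state 0 only
post = enkf_analysis(prior, np.array([5.0]), H, np.array([[0.01]]), rng)
print(post[0].mean())                              # pulled from ~0 toward 5
```

    With a small observation error variance relative to the prior spread, the gain for the observed component approaches 1 and the updated ensemble mean moves close to the observation, while unobserved components move only through sampled cross-covariances.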

  11. An optomechanical model eye for ophthalmological refractive studies.

    PubMed

    Arianpour, Ashkan; Tremblay, Eric J; Stamenov, Igor; Ford, Joseph E; Schanzlin, David J; Lo, Yuhwa

    2013-02-01

    To create an accurate, low-cost optomechanical model eye for investigation of refractive errors in clinical and basic research studies. An optomechanical fluid-filled eye model with dimensions consistent with the human eye was designed and fabricated. Optical simulations were performed on the optomechanical eye model, and the quantified resolution and refractive errors were compared with the widely used Navarro eye model using the ray-tracing software ZEMAX (Radiant Zemax, Redmond, WA). The resolution of the physical optomechanical eye model was then quantified with a complementary metal-oxide semiconductor imager using the image resolution software SFR Plus (Imatest, Boulder, CO). Refractive, manufacturing, and assembling errors were also assessed. A refractive intraocular lens (IOL) and a diffractive IOL were added to the optomechanical eye model for tests and analyses of a 1951 U.S. Air Force target chart. Resolution and aberrations of the optomechanical eye model and the Navarro eye model were qualitatively similar in ZEMAX simulations. Experimental testing found that the optomechanical eye model reproduced properties pertinent to human eyes, including resolution better than 20/20 visual acuity and a decrease in resolution as the field of view increased in size. The IOLs were also integrated into the optomechanical eye model to image objects at distances of 15, 10, and 3 feet, and they indicated a resolution of 22.8 cycles per degree at 15 feet. A life-sized optomechanical eye model with the flexibility to be patient-specific was designed and constructed. The model had the resolution of a healthy human eye and recreated normal refractive errors. This model may be useful in the evaluation of IOLs for cataract surgery. Copyright 2013, SLACK Incorporated.

  12. Ultraaccurate genome sequencing and haplotyping of single human cells.

    PubMed

    Chu, Wai Keung; Edge, Peter; Lee, Ho Suk; Bansal, Vikas; Bafna, Vineet; Huang, Xiaohua; Zhang, Kun

    2017-11-21

    Accurate detection of variants and long-range haplotypes in genomes of single human cells remains very challenging. Common approaches require extensive in vitro amplification of genomes of individual cells using DNA polymerases and high-throughput short-read DNA sequencing. These approaches have two notable drawbacks. First, polymerase replication errors could generate tens of thousands of false-positive calls per genome. Second, relatively short sequence reads contain little to no haplotype information. Here we report a method, dubbed SISSOR (single-stranded sequencing using microfluidic reactors), for accurate single-cell genome sequencing and haplotyping. A microfluidic processor is used to separate the Watson and Crick strands of the double-stranded chromosomal DNA in a single cell and to randomly partition megabase-size DNA strands into multiple nanoliter compartments for amplification and construction of barcoded libraries for sequencing. The separation and partitioning of large single-stranded DNA fragments of the homologous chromosome pairs allows for the independent sequencing of each of the complementary and homologous strands. This enables the assembly of long haplotypes and the reduction of sequence errors by using the redundant sequence information and haplotype-based error removal. We demonstrated the ability to sequence single-cell genomes with error rates as low as 10^-8 and average 500-kb-long DNA fragments that can be assembled into haplotype contigs with N50 greater than 7 Mb. The performance could be further improved with more uniform amplification and more accurate sequence alignment. The ability to obtain accurate genome sequences and haplotype information from single cells will enable applications of genome sequencing for diverse clinical needs. Copyright © 2017 the Author(s). Published by PNAS.

  13. Co-Transplantation of Olfactory Ensheathing Cells from Mucosa and Bulb Origin Enhances Functional Recovery after Peripheral Nerve Lesion

    PubMed Central

    Bon-Mardion, Nicolas; Duclos, Célia; Genty, Damien; Jean, Laetitia; Boyer, Olivier; Marie, Jean-Paul

    2011-01-01

    Olfactory ensheathing cells (OECs) represent an interesting candidate for cell therapy and can be obtained from the olfactory mucosa (OM-OECs) or the olfactory bulbs (OB-OECs). Recent reports suggest that, depending on their origin, OECs display different functional properties. We show here the complementary and additive effects of co-transplanting OM-OECs and OB-OECs after lesion of a peripheral nerve. For this, a selective motor denervation of the laryngeal muscles was performed by a section/anastomosis of the recurrent laryngeal nerve (RLN). Two months after surgery, recovery of the laryngeal movements and synkinesis phenomena were analyzed by videolaryngoscopy. To complete these assessments, measures of latency and potential duration were determined by electrophysiological recordings, and myelinated nerve fiber profiles were defined based on toluidine blue staining. To explain some of the mechanisms involved, tracking of GFP-positive OECs was performed. It appears that transplantation of OM-OECs or OB-OECs displayed opposite abilities to improve functional recovery. Indeed, OM-OECs increased recovery of laryngeal muscle activity without appropriate functional recovery. In contrast, OB-OECs induced some functional recovery by enhancing axonal regrowth. Importantly, co-transplantation of OM-OECs and OB-OECs supported a major functional recovery, with reduction of synkinesis phenomena. This study is the first to clearly demonstrate the complementary and additive properties of OECs obtained from the olfactory mucosa and olfactory bulb to improve functional recovery after transplantation in a nerve lesion model. PMID:21826209

  14. Discrimination of plant-parasitic nematodes from complex soil communities using ecometagenetics.

    PubMed

    Porazinska, Dorota L; Morgan, Matthew J; Gaspar, John M; Court, Leon N; Hardy, Christopher M; Hodda, Mike

    2014-07-01

    Many plant pathogens are microscopic, cryptic, and difficult to diagnose. The new approach of ecometagenetics, involving ultrasequencing, bioinformatics, and biostatistics, has the potential to improve diagnoses of plant pathogens such as nematodes from the complex mixtures found in many agricultural and biosecurity situations. We tested this approach on a gradient of complexity ranging from a few individuals from a few species of known nematode pathogens in a relatively defined substrate to a complex and poorly known suite of nematode pathogens in a complex forest soil, including its associated biota of unknown protists, fungi, and other microscopic eukaryotes. We added three known but contrasting species (Pratylenchus neglectus, the closely related P. thornei, and Heterodera avenae) to half the set of substrates, leaving the other half without them. We then tested whether all nematode pathogens, known and unknown, indigenous and experimentally added, were consistently detected as present or absent. We always detected the Pratylenchus spp. correctly and with the number of sequence reads proportional to the numbers added. However, a single cyst of H. avenae was only identified approximately half the time it was present. Other plant-parasitic nematodes and nematodes from other trophic groups were detected well, but other eukaryotes were detected less consistently. DNA sampling errors or informatic errors or both were involved in misidentification of H. avenae; however, the proportions of each varied in the different bioinformatic pipelines and with different parameters used. To a large extent, false-positive and false-negative errors were complementary: pipelines and parameters with the highest false-positive rates had the lowest false-negative rates and vice versa.
Sources of error identified included assumptions in the bioinformatic pipelines, slight differences in primer regions, the number of sequence reads regarded as the minimum threshold for inclusion in analysis, and inaccessible DNA in resistant life stages. Identification of the sources of error allows us to suggest ways to improve identification using ecometagenetics.

  15. The Power of the Spectrum: Combining Numerical Proxy System Models with Analytical Error Spectra to Better Understand Timescale Dependent Proxy Uncertainty

    NASA Astrophysics Data System (ADS)

    Dolman, A. M.; Laepple, T.; Kunz, T.

    2017-12-01

    Understanding the uncertainties associated with proxy-based reconstructions of past climate is critical if they are to be used to validate climate models and contribute to a comprehensive understanding of the climate system. Here we present two related and complementary approaches to quantifying proxy uncertainty. The proxy forward model (PFM) "sedproxy" (bitbucket.org/ecus/sedproxy) numerically simulates the creation, archiving and observation of marine sediment archived proxies such as Mg/Ca in foraminiferal shells and the alkenone unsaturation index UK'37. It includes the effects of bioturbation, bias due to seasonality in the rate of proxy creation, aliasing of the seasonal temperature cycle into lower frequencies, and error due to cleaning, processing and measurement of samples. Numerical PFMs have the advantage of being very flexible, allowing many processes to be modelled and assessed for their importance. However, as more and more proxy-climate data become available, their use in advanced data products necessitates rapid estimates of uncertainties for both the raw reconstructions, and their smoothed/derived products, where individual measurements have been aggregated to coarser time scales or time-slices. To address this, we derive closed-form expressions for power spectral density of the various error sources. The power spectra describe both the magnitude and autocorrelation structure of the error, allowing timescale dependent proxy uncertainty to be estimated from a small number of parameters describing the nature of the proxy, and some simple assumptions about the variance of the true climate signal. We demonstrate and compare both approaches for time-series of the last millennia, Holocene, and the deglaciation. 
While the numerical forward model can create pseudoproxy records driven by climate model simulations, the analytical model of proxy error allows for a comprehensive exploration of parameter space and mapping of climate signal re-constructability, conditional on the climate and sampling conditions.
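    The central practical point, that the error's autocorrelation structure controls how uncertainty shrinks when measurements are aggregated to coarser timescales, can be illustrated with the simplest correlated-error model. A sketch assuming an AR(1) error (an illustration, not the paper's closed-form spectra):

```python
import numpy as np

def var_of_mean_ar1(sigma2, rho, n):
    # variance of the mean of n consecutive AR(1) samples with marginal
    # variance sigma2 and lag-1 autocorrelation rho
    k = np.arange(1, n)
    return sigma2 / n * (1.0 + 2.0 * np.sum((1.0 - k / n) * rho**k))

# Monte Carlo check: correlated errors average down more slowly than white ones
rng = np.random.default_rng(1)
rho, n, n_blocks = 0.5, 10, 20000
x = np.empty(n * n_blocks)
x[0] = rng.standard_normal()
innov = rng.standard_normal(x.size) * np.sqrt(1.0 - rho**2)
for t in range(1, x.size):
    x[t] = rho * x[t - 1] + innov[t]
empirical = x.reshape(n_blocks, n).mean(axis=1).var()
print(var_of_mean_ar1(1.0, rho, n), empirical)  # both ~0.26, versus 0.10 for white noise
```

    For white noise the variance of a 10-sample mean is sigma2/10; with lag-1 autocorrelation 0.5 it is roughly 2.6 times larger, which is exactly the kind of timescale-dependent inflation the analytical error spectra are designed to capture.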

  16. Density Functional Theory Calculation of pKa's of Thiols in Aqueous Solution Using Explicit Water Molecules and the Polarizable Continuum Model.

    PubMed

    Thapa, Bishnu; Schlegel, H Bernhard

    2016-07-21

    The pKa's of substituted thiols are important for understanding their properties and reactivities in applications in chemistry, biochemistry, and material chemistry. For a collection of 175 different density functionals and the SMD implicit solvation model, the average errors in the calculated pKa's of methanethiol and ethanethiol are almost 10 pKa units higher than for imidazole. A test set of 45 substituted thiols with pKa's ranging from 4 to 12 has been used to assess the performance of 8 functionals with 3 different basis sets. As expected, the basis set needs to include polarization functions on the hydrogens and diffuse functions on the heavy atoms. Solvent cavity scaling was ineffective in correcting the errors in the calculated pKa's. Inclusion of an explicit water molecule that is hydrogen bonded with the H of the thiol group (in neutral) or S(-) (in thiolates) lowers error by an average of 3.5 pKa units. With one explicit water and the SMD solvation model, pKa's calculated with the M06-2X, PBEPBE, BP86, and LC-BLYP functionals are found to deviate from the experimental values by about 1.5-2.0 pKa units whereas pKa's with the B3LYP, ωB97XD and PBEVWN5 functionals are still in error by more than 3 pKa units. The inclusion of three explicit water molecules lowers the calculated pKa further by about 4.5 pKa units. With the B3LYP and ωB97XD functionals, the calculated pKa's are within one unit of the experimental values whereas most other functionals used in this study underestimate the pKa's. This study shows that the ωB97XD functional with the 6-31+G(d,p) and 6-311++G(d,p) basis sets, and the SMD solvation model with three explicit water molecules hydrogen bonded to the sulfur produces the best result for the test set (average error -0.11 ± 0.50 and +0.15 ± 0.58, respectively). The B3LYP functional also performs well (average error -1.11 ± 0.82 and -0.78 ± 0.79, respectively).

  17. Time to Talk: 5 Things to Know about Complementary Health Practices for Cognitive Function, Dementia, and Alzheimer's ...

    MedlinePlus

    ... as several mind and body practices such as music therapy and mental imagery, which have shown promise ... of some mind and body practices such as music therapy suggest they may be helpful for some ...

  18. The rose of Sharon: what is the ideal timing for palliative care consultation versus ethics consultation?

    PubMed

    La Via, Jennifer; Schiedermayer, David

    2012-01-01

    Ethics committees and palliative care consultants can function in a complementary fashion, seamlessly and effectively. Ethics committees can "air" and help resolve issues, and palliative care consultants can use a low-key, longitudinal approach.

  19. Synthesis and optimization of four bar mechanism with six design parameters

    NASA Astrophysics Data System (ADS)

    Jaiswal, Ankur; Jawale, H. P.

    2018-04-01

    Function generation is the synthesis of a mechanism for a specific task; it becomes complex when synthesizing for more than five precision points of the coupler, and thus entails large structural error. A methodology for arriving at a better-precision solution is to use optimization techniques. The work presented herein considers methods for the optimization of structural error in a closed kinematic chain with a single degree of freedom, generating functions such as log(x), e^x, tan(x), and sin(x) with five precision points. The Freudenstein-Chebyshev equation is used to develop the five-point synthesis of the mechanism. An extended formulation is proposed, and results are obtained to verify existing results in the literature. Optimization of the structural error is carried out using a least-squares approach. A comparative structural error analysis is presented for the error optimized by the least-squares method and by the extended Freudenstein-Chebyshev method.
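    The linear least-squares step can be sketched with the Freudenstein equation written in one common sign convention. The paper's extended formulation is not reproduced in this summary, so the convention and the precision-point values below are illustrative:

```python
import numpy as np

def freudenstein_lstsq(theta, phi):
    """Least-squares Freudenstein synthesis for a four-bar function generator.

    theta, phi: input/output crank angles (radians) at the precision points.
    Uses the convention K1*cos(phi) - K2*cos(theta) + K3 = cos(theta - phi),
    where K1 = d/a, K2 = d/c, K3 = (a^2 - b^2 + c^2 + d^2) / (2*a*c)
    for crank a, coupler b, rocker c, and ground link d.
    Returns the link ratios K and the structural error at each point.
    """
    theta, phi = np.asarray(theta), np.asarray(phi)
    A = np.column_stack([np.cos(phi), -np.cos(theta), np.ones_like(theta)])
    b = np.cos(theta - phi)
    K, *_ = np.linalg.lstsq(A, b, rcond=None)
    return K, A @ K - b

# three precision points: the fit interpolates, so structural error is ~0;
# with five or more points the residuals are the structural error being minimized
K, err = freudenstein_lstsq([0.2, 1.0, 1.8], [0.5, 1.4, 2.3])
print(np.abs(err).max())  # ~0 (machine precision)
```

    Because the equation is linear in K1, K2, K3, adding precision points beyond three turns the synthesis into an ordinary linear least-squares problem, which is what makes the structural-error optimization tractable.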

  20. A neural fuzzy controller learning by fuzzy error propagation

    NASA Technical Reports Server (NTRS)

    Nauck, Detlef; Kruse, Rudolf

    1992-01-01

    In this paper, we describe a procedure to integrate techniques for the adaptation of membership functions in a linguistic-variable-based fuzzy control environment by using neural network learning principles. This is an extension of our previous work. We solve this problem by defining a fuzzy error that is propagated back through the architecture of our fuzzy controller. According to this fuzzy error and the strength of its antecedent, each fuzzy rule determines its amount of error. Depending on the current state of the controlled system and the control action derived from the conclusion, each rule tunes the membership functions of its antecedent and its conclusion. In this way we obtain an unsupervised learning technique that enables a fuzzy controller to adapt to a control task by knowing only the global state and the fuzzy error.

  1. Reflections on human error - Matters of life and death

    NASA Technical Reports Server (NTRS)

    Wiener, Earl L.

    1989-01-01

    The last two decades have witnessed a rapid growth in the introduction of automatic devices into aircraft cockpits, and elsewhere in human-machine systems. This was motivated in part by the assumption that when human functioning is replaced by machine functioning, human error is eliminated. Experience to date shows that this is far from true, and that automation does not replace humans, but changes their role in the system, as well as the types and severity of the errors they make. This altered role may lead to fewer, but more critical, errors. Intervention strategies to prevent these errors, or ameliorate their consequences, include basic human factors engineering of the interface, enhanced warning and alerting systems, and more intelligent interfaces that understand the strategic intent of the crew and can detect and trap inconsistent or erroneous input before it affects the system.

  2. Relay-aided free-space optical communications using α - μ distribution over atmospheric turbulence channels with misalignment errors

    NASA Astrophysics Data System (ADS)

    Upadhya, Abhijeet; Dwivedi, Vivek K.; Singh, G.

    2018-06-01

    In this paper, we have analyzed the performance of a dual-hop radio frequency (RF)/free-space optical (FSO) fixed-gain relay environment confined by an atmospheric-turbulence-induced fading channel over the FSO link and modeled using the α - μ distribution. The RF hop of the amplify-and-forward scheme undergoes Rayleigh fading, and the proposed system model also considers the pointing error effect on the FSO link. A novel and accurate mathematical expression of the probability density function for an FSO link experiencing α - μ distributed atmospheric turbulence in the presence of pointing error is derived. Further, we have presented analytical expressions of the outage probability and bit error rate in terms of the Meijer G-function. In addition, a useful and mathematically tractable closed-form expression for the end-to-end ergodic capacity of the dual-hop scheme in terms of the bivariate Fox H-function is derived. The atmospheric turbulence, misalignment errors, and various binary modulation schemes for intensity modulation on the optical wireless link are considered to yield the results. Finally, we have analyzed each of the three performance metrics at high SNR in order to represent them in terms of elementary functions; the analytical results are supported by computer-based simulations.

  3. Does the cost function matter in Bayes decision rule?

    PubMed

    Schlüter, Ralf; Nussbaum-Thom, Markus; Ney, Hermann

    2012-02-01

    In many tasks in pattern recognition, such as automatic speech recognition (ASR), optical character recognition (OCR), part-of-speech (POS) tagging, and other string recognition tasks, we are faced with a well-known inconsistency: The Bayes decision rule is usually used to minimize string (symbol sequence) error, whereas, in practice, we want to minimize symbol (word, character, tag, etc.) error. When comparing different recognition systems, we do indeed use symbol error rate as an evaluation measure. The topic of this work is to analyze the relation between string (i.e., 0-1) and symbol error (i.e., metric, integer valued) cost functions in the Bayes decision rule, for which fundamental analytic results are derived. Simple conditions are derived for which the Bayes decision rule with integer-valued metric cost function and with 0-1 cost gives the same decisions or leads to classes with limited cost. The corresponding conditions can be tested with complexity linear in the number of classes. The results obtained do not make any assumption w.r.t. the structure of the underlying distributions or the classification problem. Nevertheless, the general analytic results are analyzed via simulations of string recognition problems with Levenshtein (edit) distance cost function. The results support earlier findings that considerable improvements are to be expected when initial error rates are high.
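    The inconsistency between the two cost functions can be made concrete with a toy posterior over candidate strings, where the 0-1 (MAP) decision and the minimum-expected-edit-distance decision disagree. A self-contained sketch (the candidate strings and probabilities are invented for illustration):

```python
def levenshtein(a, b):
    # classic dynamic-programming edit distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def map_decision(posterior):
    # Bayes rule under 0-1 (string error) cost: pick the most probable string
    return max(posterior, key=posterior.get)

def min_expected_edit(posterior):
    # Bayes rule under edit-distance (symbol error) cost
    return min(posterior, key=lambda h: sum(p * levenshtein(h, r)
                                            for r, p in posterior.items()))

post = {"aaa": 0.35, "bbb": 0.33, "bbc": 0.32}
print(map_decision(post), min_expected_edit(post))  # → aaa bbb
```

    Here "aaa" is the single most probable string, but "bbb" has lower expected edit cost because the probability mass of the similar strings "bbb" and "bbc" adds up; this is exactly the gap between string error rate and symbol error rate that the paper analyzes.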

  4. Selecting a restoration technique to minimize OCR error.

    PubMed

    Cannon, M; Fugate, M; Hush, D R; Scovel, C

    2003-01-01

    This paper introduces a learning problem related to the task of converting printed documents to ASCII text files. The goal of the learning procedure is to produce a function that maps documents to restoration techniques in such a way that on average the restored documents have minimum optical character recognition error. We derive a general form for the optimal function and use it to motivate the development of a nonparametric method based on nearest neighbors. We also develop a direct method of solution based on empirical error minimization for which we prove a finite sample bound on estimation error that is independent of distribution. We show that this empirical error minimization problem is an extension of the empirical optimization problem for traditional M-class classification with general loss function and prove computational hardness for this problem. We then derive a simple iterative algorithm called generalized multiclass ratchet (GMR) and prove that it produces an optimal function asymptotically (with probability 1). To obtain the GMR algorithm we introduce a new data map that extends Kesler's construction for the multiclass problem and then apply an algorithm called Ratchet to this mapped data, where Ratchet is a modification of the Pocket algorithm. Finally, we apply these methods to a collection of documents and report on the experimental results.

  5. Nanomoulding of Functional Materials, a Versatile Complementary Pattern Replication Method to Nanoimprinting

    PubMed Central

    Battaglia, Corsin; Söderström, Karin; Escarré, Jordi; Haug, Franz-Josef; Despeisse, Matthieu; Ballif, Christophe

    2013-01-01

    We describe a nanomoulding technique which allows low-cost nanoscale patterning of functional materials, materials stacks and full devices. Nanomoulding combined with layer transfer enables the replication of arbitrary surface patterns from a master structure onto the functional material. Nanomoulding can be performed on any nanoimprinting setup and can be applied to a wide range of materials and deposition processes. In particular we demonstrate the fabrication of patterned transparent zinc oxide electrodes for light trapping applications in solar cells. PMID:23380874

  6. Characterizing genomic alterations in cancer by complementary functional associations | Office of Cancer Genomics

    Cancer.gov

    Systematic efforts to sequence the cancer genome have identified large numbers of mutations and copy number alterations in human cancers. However, elucidating the functional consequences of these variants, and their interactions to drive or maintain oncogenic states, remains a challenge in cancer research. We developed REVEALER, a computational method that identifies combinations of mutually exclusive genomic alterations correlated with functional phenotypes, such as the activation or gene dependency of oncogenic pathways or sensitivity to a drug treatment.

  7. Uncertainty quantification and propagation in dynamic models using ambient vibration measurements, application to a 10-story building

    NASA Astrophysics Data System (ADS)

    Behmanesh, Iman; Yousefianmoghadam, Seyedsina; Nozari, Amin; Moaveni, Babak; Stavridis, Andreas

    2018-07-01

    This paper investigates the application of Hierarchical Bayesian model updating for uncertainty quantification and response prediction of civil structures. In this updating framework, structural parameters of an initial finite element (FE) model (e.g., stiffness or mass) are calibrated by minimizing error functions between the identified modal parameters and the corresponding parameters of the model. These error functions are assumed to have Gaussian probability distributions with unknown parameters to be determined. The estimated parameters of error functions represent the uncertainty of the calibrated model in predicting building's response (modal parameters here). The focus of this paper is to answer whether the quantified model uncertainties using dynamic measurement at building's reference/calibration state can be used to improve the model prediction accuracies at a different structural state, e.g., damaged structure. Also, the effects of prediction error bias on the uncertainty of the predicted values is studied. The test structure considered here is a ten-story concrete building located in Utica, NY. The modal parameters of the building at its reference state are identified from ambient vibration data and used to calibrate parameters of the initial FE model as well as the error functions. Before demolishing the building, six of its exterior walls were removed and ambient vibration measurements were also collected from the structure after the wall removal. These data are not used to calibrate the model; they are only used to assess the predicted results. The model updating framework proposed in this paper is applied to estimate the modal parameters of the building at its reference state as well as two damaged states: moderate damage (removal of four walls) and severe damage (removal of six walls). Good agreement is observed between the model-predicted modal parameters and those identified from vibration tests. 
Moreover, it is shown that including prediction error bias in the updating process instead of commonly-used zero-mean error function can significantly reduce the prediction uncertainties.

  8. Performance of correlation receivers in the presence of impulse noise.

    NASA Technical Reports Server (NTRS)

    Moore, J. D.; Houts, R. C.

    1972-01-01

    An impulse noise model, which assumes that each noise burst contains a randomly weighted version of a basic waveform, is used to derive the performance equations for a correlation receiver. The expected number of bit errors per noise burst is expressed as a function of the average signal energy, signal-set correlation coefficient, bit time, noise-weighting-factor variance and probability density function, and a time range function which depends on the crosscorrelation of the signal-set basis functions and the noise waveform. Unlike the performance results for additive white Gaussian noise, it is shown that the error performance for impulse noise is affected by the choice of signal-set basis function, and that Orthogonal signaling is not equivalent to On-Off signaling with the same average energy. Furthermore, it is demonstrated that the correlation-receiver error performance can be improved by inserting a properly specified nonlinear device prior to the receiver input.

  9. Research on error control and compensation in magnetorheological finishing.

    PubMed

    Dai, Yifan; Hu, Hao; Peng, Xiaoqiang; Wang, Jianmin; Shi, Feng

    2011-07-01

    Although magnetorheological finishing (MRF) is a deterministic finishing technology, the machining results always fall short of simulation precision in the actual process, and it cannot meet the precision requirements just through a single treatment but after several iterations. We investigate the reasons for this problem through simulations and experiments. Through controlling and compensating the chief errors in the manufacturing procedure, such as removal function calculation error, positioning error of the removal function, and dynamic performance limitation of the CNC machine, the residual error convergence ratio (ratio of figure error before and after processing) in a single process is obviously increased, and higher figure precision is achieved. Finally, an improved technical process is presented based on these researches, and the verification experiment is accomplished on the experimental device we developed. The part is a circular plane mirror of fused silica material, and the surface figure error is improved from the initial λ/5 [peak-to-valley (PV) λ=632.8 nm], λ/30 [root-mean-square (rms)] to the final λ/40 (PV), λ/330 (rms) just through one iteration in 4.4 min. Results show that a higher convergence ratio and processing precision can be obtained by adopting error control and compensation techniques in MRF.

  10. A nonlinear HP-type complementary resistive switch

    NASA Astrophysics Data System (ADS)

    Radtke, Paul K.; Schimansky-Geier, Lutz

    2016-05-01

    Resistive Switching (RS) is the change in resistance of a dielectric under the influence of an external current or electric field. This change is non-volatile, and the basis of both the memristor and resistive random access memory. In the latter, high integration densities favor the anti-serial combination of two RS-elements to a single cell, termed the complementary resistive switch (CRS). Motivated by the irregular shape of the filament protruding into the device, we suggest a nonlinearity in the resistance-interpolation function, characterized by a single parameter p. Thereby the original HP-memristor is expanded upon. We numerically simulate and analytically solve this model. Further, the nonlinearity allows for its application to the CRS.
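    The abstract does not give the paper's interpolation function, but the idea can be sketched with the standard HP drift equation and a power-law resistance interpolation whose exponent p is the single nonlinearity parameter. The power-law form and all device values below are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

def hp_memristor_iv(p=2.0, r_on=100.0, r_off=16e3, d=10e-9, mu=1e-14,
                    w0=0.5, amp=1.0, freq=1.0, dt=1e-4, cycles=2):
    """Drive an HP-type memristor with a sinusoidal voltage.

    The normalized state w in [0, 1] follows the HP drift dw/dt = mu*r_on/d^2 * i,
    and the resistance interpolates nonlinearly between r_on and r_off:
    R(w) = r_on * w**p + r_off * (1 - w**p); p = 1 recovers the linear HP model.
    """
    t = np.arange(0.0, cycles / freq, dt)
    v = amp * np.sin(2.0 * np.pi * freq * t)
    w, i_hist = w0, np.empty_like(t)
    for k, vk in enumerate(v):
        i = vk / (r_on * w**p + r_off * (1.0 - w**p))
        w = float(np.clip(w + dt * mu * r_on / d**2 * i, 0.0, 1.0))
        i_hist[k] = i
    return t, v, i_hist

# pinched hysteresis: equal voltages at t = 0.1 s and t = 0.4 s give different currents
t, v, i = hp_memristor_iv()
k1, k2 = int(0.1 / 1e-4), int(0.4 / 1e-4)
print(abs(v[k1] - v[k2]) < 1e-9, i[k1] != i[k2])  # → True True
```

    During the positive half-cycle the state w drifts upward, lowering R(w), so the later of two equal applied voltages drives a larger current; this history dependence is the memristive hysteresis, and a CRS would place two such elements anti-serially.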

  11. General method for labeling siRNA by click chemistry with fluorine-18 for the purpose of PET imaging.

    PubMed

    Mercier, Frédéric; Paris, Jérôme; Kaisin, Geoffroy; Thonon, David; Flagothier, Jessica; Teller, Nathalie; Lemaire, Christian; Luxen, André

    2011-01-19

    The alkyne-azide Cu(I)-catalyzed Huisgen cycloaddition, a click-type reaction, was used to label a double-stranded oligonucleotide (siRNA) with fluorine-18. An alkyne solid support CPG for the preparation of monostranded oligonucleotides functionalized with alkyne has been developed. Two complementary azide labeling agents (1-(azidomethyl)-4-[(18)F]fluorobenzene) and 1-azido-4-(3-[(18)F]fluoropropoxy)benzene have been produced with 41% and 35% radiochemical yields (decay-corrected), respectively. After annealing with the complementary strand, the siRNA was directly labeled by click chemistry with [(18)F]fluoroazide to produce the [(18)F]-radiolabeled siRNA with excellent radiochemical yield and purity.

  12. Signal enhancement based on complex curvelet transform and complementary ensemble empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Dong, Lieqian; Wang, Deying; Zhang, Yimeng; Zhou, Datong

    2017-09-01

    Signal enhancement is a necessary step in seismic data processing. In this paper we combine the complementary ensemble empirical mode decomposition (CEEMD) and complex curvelet transform (CCT) methods to separate signal from random noise and thereby improve the signal-to-noise (S/N) ratio. First, the noisy data are decomposed into a series of intrinsic mode function (IMF) profiles with the aid of CEEMD. The noisy IMFs are then transformed into the CCT domain, where choosing a different threshold for each IMF profile, based on its noise level, suppresses the noise in the original data. Finally, we demonstrate the effectiveness of the approach on synthetic and field datasets.
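
    A full CEEMD plus complex-curvelet implementation is beyond a short sketch, but the per-component thresholding logic can be outlined. In the sketch below the FFT stands in for the CCT, the components are assumed to be given (in the paper they come from CEEMD), and the MAD-based threshold rule is an assumption of this sketch, not the authors' exact choice:

```python
import numpy as np

def mad_sigma(v):
    """Robust scale estimate (median absolute deviation / 0.6745)."""
    return np.median(np.abs(v - np.median(v))) / 0.6745

def denoise_component(x, k=3.5):
    """Hard-threshold one component in a transform domain; the FFT
    is a stand-in for the complex curvelet transform."""
    C = np.fft.rfft(x)
    thresh = k * mad_sigma(np.abs(C))   # per-component noise level
    C[np.abs(C) < thresh] = 0.0
    return np.fft.irfft(C, n=len(x))

def enhance(components, k=3.5):
    """Threshold each IMF-like component with its own noise-adapted
    threshold, then recombine."""
    return sum(denoise_component(c, k) for c in components)

# synthetic trace: a 12 Hz tone buried in white noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
clean = np.sin(2 * np.pi * 12 * t)
noisy = clean + 0.5 * rng.standard_normal(t.size)
denoised = enhance([noisy])   # degenerate single-component case
```

    The point of the per-IMF thresholds is that each decomposition level carries a different noise level, so a single global threshold would either leave noise or clip signal.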

  13. State transfer in highly connected networks and a quantum Babinet principle

    NASA Astrophysics Data System (ADS)

    Tsomokos, D. I.; Plenio, M. B.; de Vega, I.; Huelga, S. F.

    2008-12-01

    The transfer of a quantum state between distant nodes in two-dimensional networks is considered. The fidelity of state transfer is calculated as a function of the number of interactions in networks that are described by regular graphs. It is shown that perfect state transfer is achieved in a network of size N, whose structure is that of an (N/2)-cross-polytope graph, if N is a multiple of 4. The result is reminiscent of the Babinet principle of classical optics. A quantum Babinet principle is derived, which allows for the identification of complementary graphs leading to the same fidelity of state transfer, in analogy with complementary screens providing identical diffraction patterns.
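
    This claim can be checked numerically under a common convention (single-excitation subspace, Hamiltonian equal to the adjacency matrix, ħ = 1); the transfer time t = π/2 used below is an assumption of this sketch, not stated in the abstract. The (N/2)-cross-polytope (cocktail-party) graph joins every pair of vertices except antipodal pairs:

```python
import numpy as np
from scipy.linalg import expm

def cross_polytope_adjacency(n_half):
    """Adjacency matrix of the n_half-cross-polytope graph: vertices
    2k and 2k+1 are antipodes and the only non-adjacent pairs."""
    N = 2 * n_half
    A = np.ones((N, N)) - np.eye(N)
    for k in range(n_half):
        A[2 * k, 2 * k + 1] = A[2 * k + 1, 2 * k] = 0.0
    return A

# N = 8 (a multiple of 4): evolve |0> under exp(-iAt) and read the
# amplitude on the antipodal vertex 1 at the assumed time t = pi/2
A = cross_polytope_adjacency(4)
U = expm(-1j * A * np.pi / 2)
fidelity = abs(U[1, 0]) ** 2   # -> 1.0 (perfect state transfer)
```

    For N not a multiple of 4 the eigenphases at t = π/2 no longer align, and the antipodal fidelity drops below 1.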

  14. Local blur analysis and phase error correction method for fringe projection profilometry systems.

    PubMed

    Rao, Li; Da, Feipeng

    2018-05-20

    We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of local blur phenomenon. Local blur caused by global light transport such as camera defocus, projector defocus, and subsurface scattering will cause significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate the phase errors. For defocus phenomenon, this method can be directly applied. With the aid of spatially varying point spread functions and local frontal plane assumption, experiments show that the proposed method can effectively alleviate the system errors and improve the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also be applied to correct systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.

  15. A suggestion for computing objective function in model calibration

    USGS Publications Warehouse

    Wu, Yiping; Liu, Shuguang

    2014-01-01

    A parameter-optimization process (model calibration) is usually required for numerical model applications, which involves the use of an objective function to determine the model cost (model-data errors). The sum of squared errors (SSR) has been widely adopted as the objective function in various optimization procedures. However, the ‘square error’ calculation was found to be more sensitive to extreme or high values. Thus, we proposed that the sum of absolute errors (SAR) may be a better option than SSR for model calibration. To test this hypothesis, we used two case studies—a hydrological model calibration and a biogeochemical model calibration—to investigate the behavior of a group of potential objective functions: SSR, SAR, sum of squared relative deviations (SSRD), and sum of absolute relative deviations (SARD). Mathematical evaluation of model performance demonstrates that the ‘absolute error’ measures (SAR and SARD) are superior to the ‘square error’ measures (SSR and SSRD) as calibration objective functions, with SAR behaving best (least error and highest efficiency). This study suggests that SSR may be overused in real applications, and that SAR is a reasonable choice in common optimization implementations that do not emphasize either high or low values (e.g., modeling to support resources management).
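
    The four candidate objective functions are one-liners. The toy example below (values invented for illustration) shows why squaring is outlier-sensitive: a single large error contributes roughly 99.8% of SSR but only about 94% of SAR.

```python
import numpy as np

def objective(obs, sim, kind="SAR"):
    """The four candidate objective functions compared in the study."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    err = sim - obs
    if kind == "SSR":    # sum of squared errors
        return float(np.sum(err**2))
    if kind == "SAR":    # sum of absolute errors
        return float(np.sum(np.abs(err)))
    if kind == "SSRD":   # sum of squared relative deviations
        return float(np.sum((err / obs)**2))
    if kind == "SARD":   # sum of absolute relative deviations
        return float(np.sum(np.abs(err / obs)))
    raise ValueError(f"unknown objective: {kind}")

# one large outlier (invented data) dominates SSR far more than SAR
obs = np.array([1.0, 1.0, 1.0, 10.0])
sim = np.array([1.1, 0.9, 1.0, 13.0])
ssr = objective(obs, sim, "SSR")   # 9.02, of which 9.0 is the outlier
sar = objective(obs, sim, "SAR")   # 3.2, of which 3.0 is the outlier
```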

  16. LOGISTIC FUNCTION PROFILE FIT: A least-squares program for fitting interface profiles to an extended logistic function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirchhoff, William H.

    2012-09-15

    The extended logistic function provides a physically reasonable description of interfaces such as depth profiles or line scans of surface topological or compositional features. It describes these interfaces with the minimum number of parameters, namely, position, width, and asymmetry. Logistic Function Profile Fit (LFPF) is a robust, least-squares fitting program in which the nonlinear extended logistic function is linearized by a Taylor series expansion (equivalent to a Newton-Raphson approach) with no apparent introduction of bias in the analysis. The program provides reliable confidence limits for the parameters when systematic errors are minimal and provides a display of the residuals from the fit for the detection of systematic errors. The program will aid researchers in applying ASTM E1636-10, 'Standard practice for analytically describing sputter-depth-profile and linescan-profile data by an extended logistic function,' and may also prove useful in applying ISO 18516:2006, 'Surface chemical analysis-Auger electron spectroscopy and x-ray photoelectron spectroscopy-determination of lateral resolution.' Examples are given of LFPF fits to a secondary ion mass spectrometry depth profile, an Auger surface line scan, and synthetic data generated to exhibit known systematic errors for examining the significance of such errors to the extrapolation of partial profiles.
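
    ASTM E1636 defines the exact extended-logistic parameterization; the sketch below uses a simplified stand-in in which the width itself varies exponentially with position (a = 0 recovers the symmetric logistic), fit by ordinary nonlinear least squares rather than LFPF's linearized scheme:

```python
import numpy as np
from scipy.optimize import curve_fit

def ext_logistic(x, y0, y1, x0, w, a):
    """Asymmetric logistic profile: y0 and y1 are the limiting
    values, x0 the position, w the width, and a the asymmetry.
    Illustrative stand-in for the exact ASTM E1636 form."""
    width = w * np.exp(a * (x - x0))   # position-dependent width
    return y0 + (y1 - y0) / (1.0 + np.exp(-(x - x0) / width))

# synthetic noisy "depth profile" and a least-squares fit
rng = np.random.default_rng(1)
x = np.linspace(-5.0, 5.0, 200)
y = ext_logistic(x, 0.0, 1.0, 0.3, 0.8, 0.1)
y += 0.01 * rng.standard_normal(x.size)
popt, pcov = curve_fit(ext_logistic, x, y, p0=(0.0, 1.0, 0.0, 1.0, 0.0))
stderr = np.sqrt(np.diag(pcov))   # parameter standard errors
```

    The square roots of the covariance diagonal are what parameter confidence limits, like those LFPF reports, are built from.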

  17. Cognitive Deficits Underlying Error Behavior on a Naturalistic Task after Severe Traumatic Brain Injury

    PubMed Central

    Hendry, Kathryn; Ownsworth, Tamara; Beadle, Elizabeth; Chevignard, Mathilde P.; Fleming, Jennifer; Griffin, Janelle; Shum, David H. K.

    2016-01-01

    People with severe traumatic brain injury (TBI) often make errors on everyday tasks that compromise their safety and independence. Such errors potentially arise from the breakdown or failure of multiple cognitive processes. This study aimed to investigate cognitive deficits underlying error behavior on a home-based version of the Cooking Task (HBCT) following TBI. Participants included 45 adults (9 females, 36 males) with severe TBI aged 18–64 years (M = 37.91, SD = 13.43). Participants were administered the HBCT in their home kitchens, with audiovisual recordings taken to enable scoring of total errors and error subtypes (Omissions, Additions, Estimations, Substitutions, Commentary/Questions, Dangerous Behavior, Goal Achievement). Participants also completed a battery of neuropsychological tests, including the Trail Making Test, Hopkins Verbal Learning Test-Revised, Digit Span, Zoo Map test, Modified Stroop Test, and Hayling Sentence Completion Test. After controlling for cooking experience, greater Omissions and Estimation errors, lack of goal achievement, and longer completion time were significantly associated with poorer attention, memory, and executive functioning. These findings indicate that errors on naturalistic tasks arise from deficits in multiple cognitive domains. Assessment of error behavior in a real life setting provides insight into individuals' functional abilities which can guide rehabilitation planning and lifestyle support. PMID:27790099

  18. Toward isolating the role of dopamine in the acquisition of incentive salience attribution.

    PubMed

    Chow, Jonathan J; Nickell, Justin R; Darna, Mahesh; Beckmann, Joshua S

    2016-10-01

    Stimulus-reward learning has been heavily linked to the reward-prediction error learning hypothesis and dopaminergic function. However, some evidence suggests dopaminergic function may not strictly underlie reward-prediction error learning, but may be specific to incentive salience attribution. Utilizing a Pavlovian conditioned approach procedure consisting of two stimuli that were equally reward-predictive (both undergoing reward-prediction error learning) but functionally distinct in regard to incentive salience (levers that elicited sign-tracking and tones that elicited goal-tracking), we tested the differential role of D1 and D2 dopamine receptors and nucleus accumbens dopamine in the acquisition of sign- and goal-tracking behavior and their associated conditioned reinforcing value within individuals. Overall, the results revealed that both D1 and D2 inhibition disrupted performance of sign- and goal-tracking. However, D1 inhibition specifically prevented the acquisition of sign-tracking to a lever, instead promoting goal-tracking and decreasing its conditioned reinforcing value, while neither D1 nor D2 signaling was required for goal-tracking in response to a tone. Likewise, nucleus accumbens dopaminergic lesions disrupted acquisition of sign-tracking to a lever, while leaving goal-tracking in response to a tone unaffected. Collectively, these results are the first evidence of an intraindividual dissociation of dopaminergic function in incentive salience attribution from reward-prediction error learning, indicating that incentive salience, reward-prediction error, and their associated dopaminergic signaling exist within individuals and are stimulus-specific. Thus, individual differences in incentive salience attribution may be reflective of a differential balance in dopaminergic function that may bias toward the attribution of incentive salience, relative to reward-prediction error learning only. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. A strategy for reducing gross errors in the generalized Born models of implicit solvation

    PubMed Central

    Onufriev, Alexey V.; Sigalov, Grigori

    2011-01-01

    The “canonical” generalized Born (GB) formula [C. Still, A. Tempczyk, R. C. Hawley, and T. Hendrickson, J. Am. Chem. Soc. 112, 6127 (1990)] is known to provide accurate estimates for total electrostatic solvation energies ΔGel of biomolecules if the corresponding effective Born radii are accurate. Here we show that even if the effective Born radii are perfectly accurate, the canonical formula still exhibits significant number of gross errors (errors larger than 2kBT relative to numerical Poisson equation reference) in pairwise interactions between individual atomic charges. Analysis of exact analytical solutions of the Poisson equation (PE) for several idealized nonspherical geometries reveals two distinct spatial modes of the PE solution; these modes are also found in realistic biomolecular shapes. The canonical GB Green function misses one of two modes seen in the exact PE solution, which explains the observed gross errors. To address the problem and reduce gross errors of the GB formalism, we have used exact PE solutions for idealized nonspherical geometries to suggest an alternative analytical Green function to replace the canonical GB formula. The proposed functional form is mathematically nearly as simple as the original, but depends not only on the effective Born radii but also on their gradients, which allows for better representation of details of nonspherical molecular shapes. In particular, the proposed functional form captures both modes of the PE solution seen in nonspherical geometries. Tests on realistic biomolecular structures ranging from small peptides to medium size proteins show that the proposed functional form reduces gross pairwise errors in all cases, with the amount of reduction varying from more than an order of magnitude for small structures to a factor of 2 for the largest ones. PMID:21528947
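
    For reference, the "canonical" Still et al. formula under discussion is compact; the sketch below implements only that canonical form (the abstract does not give the proposed gradient-dependent replacement). Units assumed for illustration: charges in e, distances in Å, energies in kcal/mol.

```python
import numpy as np

def f_gb(r, Ri, Rj):
    """Canonical Still et al. GB denominator: interpolates between
    the Born self-term (r -> 0) and the Coulomb limit (r -> inf)."""
    return np.sqrt(r**2 + Ri * Rj * np.exp(-r**2 / (4.0 * Ri * Rj)))

def gb_pair_energy(qi, qj, r, Ri, Rj, eps_in=1.0, eps_out=78.5):
    """Pairwise GB electrostatic solvation term; 332.06 is the
    Coulomb constant in e/Angstrom/kcal-per-mol units."""
    tau = 1.0 / eps_in - 1.0 / eps_out
    return -332.06 * tau * qi * qj / f_gb(r, Ri, Rj)

# limiting behavior of the canonical Green function
born_limit = f_gb(0.0, 2.0, 2.0)       # -> 2.0 (Born self term)
coulomb_limit = f_gb(100.0, 2.0, 2.0)  # -> ~100.0 (Coulomb limit)
```

    The abstract's point is that this single interpolating function misses one of the two spatial modes present in exact Poisson solutions for nonspherical shapes, which is why gross pairwise errors persist even with perfect Born radii.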

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Zongtang; Both, Johan; Li, Shenggang

    The heats of formation and the normalized clustering energies (NCEs) for the group 4 and group 6 transition metal oxide (TMO) trimers and tetramers have been calculated by the Feller-Peterson-Dixon (FPD) method. The heats of formation predicted by the FPD method do not differ much from those previously derived from the NCEs at the CCSD(T)/aT level except for the CrO3 nanoclusters. New and improved heats of formation for Cr3O9 and Cr4O12 were obtained using PW91 orbitals instead of Hartree-Fock (HF) orbitals. Diffuse functions are necessary to predict accurate heats of formation. The fluoride affinities (FAs) are calculated with the CCSD(T) method. The relative energies (REs) of different isomers, NCEs, electron affinities (EAs), and FAs of (MO2)n (M = Ti, Zr, Hf; n = 1–4) and (MO3)n (M = Cr, Mo, W; n = 1–3) clusters have been benchmarked with 55 exchange-correlation DFT functionals, including both pure and hybrid types. The absolute errors of the DFT results are mostly less than ±10 kcal/mol for the NCEs and the EAs, and less than ±15 kcal/mol for the FAs. Hybrid functionals usually perform better than the pure functionals for the REs and NCEs. The performance of the two types of functionals in predicting EAs and FAs is comparable. The B1B95 and PBE1PBE functionals provide reliable energetic properties for most isomers. Long-range-corrected pure functionals usually give poor FAs. The standard deviation of the absolute error is always close to the mean error, and the probability distributions of the DFT errors are often not Gaussian (normal). The breadth of the distribution of errors and the maximum probability are dependent on the energy property and the isomer.
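
    The abstract's summary statistics (mean and spread of absolute errors, non-Gaussian error distributions) can be reproduced in a few lines; the Shapiro-Wilk test below is one common normality check, chosen here for illustration rather than taken from the paper:

```python
import numpy as np
from scipy import stats

def summarize_errors(calc, ref):
    """Benchmark-style error summary for calculated vs. reference
    values: mean signed error, mean absolute error, spread of the
    absolute errors, and a normality p-value for the errors."""
    err = np.asarray(calc, float) - np.asarray(ref, float)
    abs_err = np.abs(err)
    _, p_normal = stats.shapiro(err)   # small p: errors not Gaussian
    return {"ME": float(err.mean()),
            "MAE": float(abs_err.mean()),
            "SD_abs": float(abs_err.std(ddof=1)),
            "p_normal": float(p_normal)}

# invented numbers purely to exercise the helper
summary = summarize_errors([1.0, 2.0, 3.0], [0.0, 0.0, 0.0])
```

    A standard deviation of the absolute errors comparable to their mean, as the abstract reports, is itself a hint that the error distribution is far from a narrow Gaussian.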
