Sample records for sensitive microfluorimetric method

  1. Isolation and analysis of a mammalian temperature-sensitive mutant defective in G2 functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mineo, C.; Murakami, Y.; Ishimi, Y.

    1986-11-01

    A temperature-sensitive (ts) mutant, designated tsFT210, was isolated from a mouse mammary carcinoma cell line, FM3A. The tsFT210 cells grew normally at 33°C (permissive temperature), but more than 80% of the cells were arrested at the G2 phase at 39°C (non-permissive temperature) as revealed by flow-microfluorimetric analysis. DNA replication and synthesis of other macromolecules by this mutant seemed to be normal at 39°C for at least 10 h. However, in this mutant, hyperphosphorylation of H1 histone from the G2 to M phase, which occurs in the normal cell cycle, could not be detected at the non-permissive temperature. This suggests that a gene product which is temperature-sensitive in tsFT210 cells is necessary for hyperphosphorylation of H1 histone and that this gene product may be related to chromosome condensation.
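    The G2 arrest above was quantified by flow-microfluorimetric DNA-content analysis. As a rough illustration of that kind of readout, the sketch below gates per-cell DNA-content fluorescence into G1, S and G2/M fractions; the gate placement and the peak values in the usage comment are assumptions, not parameters from the study.

    ```python
    import numpy as np

    def phase_fractions(dna_signal, g1_peak, g2_peak, width=0.15):
        """Estimate cell-cycle phase fractions from per-cell DNA-content fluorescence.

        dna_signal : per-cell intensities of a DNA-binding dye (arbitrary units)
        g1_peak    : modal intensity of the 2N (G1) population
        g2_peak    : modal intensity of the 4N (G2/M) population
        width      : half-width of the G1 and G2/M gates, as a fraction of the peak
        """
        dna_signal = np.asarray(dna_signal, dtype=float)
        g1 = dna_signal < g1_peak * (1 + width)
        g2m = dna_signal > g2_peak * (1 - width)
        s = ~(g1 | g2m)                       # everything between the two gates
        n = dna_signal.size
        return {"G1": g1.sum() / n, "S": s.sum() / n, "G2/M": g2m.sum() / n}

    # hypothetical usage: >80% of tsFT210 cells at 39°C would fall in the G2/M gate
    # fractions = phase_fractions(measured_intensities, g1_peak=100.0, g2_peak=200.0)
    ```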

  2. Preferential Zn2+ influx through Ca2+-permeable AMPA/kainate channels triggers prolonged mitochondrial superoxide production

    PubMed Central

    Sensi, Stefano L.; Yin, Hong Z.; Carriedo, Sean G.; Rao, Shyam S.; Weiss, John H.

    1999-01-01

    Synaptically released Zn2+ can enter and cause injury to postsynaptic neurons. Microfluorimetric studies using the Zn2+-sensitive probe, Newport green, examined levels of [Zn2+]i attained in cultured cortical neurons on exposure to N-methyl-D-aspartate, kainate, or high K+ (to activate voltage-sensitive Ca2+ channels) in the presence of 300 μM Zn2+. Indicating particularly high permeability through Ca2+-permeable α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid/kainate (Ca-A/K) channels, micromolar [Zn2+]i rises were observed only after kainate exposures and only in neurons expressing these channels [Ca-A/K(+) neurons]. Further studies using the oxidation-sensitive dye, hydroethidine, revealed Zn2+-dependent reactive oxygen species (ROS) generation that paralleled the [Zn2+]i rises, with rapid oxidation observed only in the case of Zn2+ entry through Ca-A/K channels. Indicating a mitochondrial source of this ROS generation, hydroethidine oxidation was inhibited by the mitochondrial electron transport blocker, rotenone. Additional evidence for a direct interaction between Zn2+ and mitochondria was provided by the observation that the Zn2+ entry through Ca-A/K channels triggered rapid mitochondrial depolarization, as assessed by using the potential-sensitive dye tetramethylrhodamine ethylester. Whereas Ca2+ influx through Ca-A/K channels also triggers ROS production, the [Zn2+]i rises and subsequent ROS production are of more prolonged duration. PMID:10051656
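    For orientation, non-ratiometric indicators such as Newport green are commonly calibrated with the single-wavelength relation [Zn2+]i = Kd*(F - Fmin)/(Fmax - F). A minimal sketch is given below; the Kd and F values are experiment-specific inputs assumed by the user, not values reported in the abstract.

    ```python
    def ion_concentration(F, F_min, F_max, Kd):
        """Single-wavelength indicator calibration: [ion] = Kd * (F - Fmin) / (Fmax - F).

        F     : background-corrected fluorescence of the loaded cell
        F_min : fluorescence of the ion-free indicator (e.g. after heavy-metal chelation)
        F_max : fluorescence of the ion-saturated indicator
        Kd    : dissociation constant of the indicator for the ion of interest
        """
        return Kd * (F - F_min) / (F_max - F)

    # hypothetical example; all numbers are placeholders in the Kd's concentration units
    # zn_i = ion_concentration(F=850.0, F_min=200.0, F_max=2000.0, Kd=1.0)
    ```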

  3. A LED-based method for monitoring NAD(P)H and FAD fluorescence in cell cultures and brain slices.

    PubMed

    Rösner, Jörg; Liotta, Agustin; Schmitz, Dietmar; Heinemann, Uwe; Kovács, Richard

    2013-01-30

    Nicotinamide and flavin adenine dinucleotides (NAD(P)H and FADH₂) are electron carriers involved in cellular energy metabolism and in a multitude of enzymatic processes. As reduced NAD(P)H and oxidised FAD molecules are fluorescent, changes in tissue auto-fluorescence provide valuable information on the cellular redox state and energy metabolism. Since fluorescence excitation by mercury arc lamps (HBO) is inherently coupled to photo-bleaching and photo-toxicity, microfluorimetric monitoring of energy metabolism might benefit from the replacement of HBO lamps by light emitting diodes (LEDs). Here we describe an LED-based custom-built setup for monitoring NAD(P)H and FAD fluorescence at the level of single cells (HEK293) and of brain slices. We compared NAD(P)H bleaching characteristics with two light sources (HBO lamp and LED) as well as sensitivity and signal to noise ratio of three different detector types (multi-pixel photon counter (MPPC), photomultiplier tube (PMT) and photodiode). LED excitation resulted in reduced photo-bleaching at the same fluorescence output in comparison to excitation with the HBO lamp. Transiently increasing LED power resulted in reversible bleaching of NAD(P)H fluorescence. Recovery kinetics were dependent on metabolic substrates, indicating coupling of NAD(P)H fluorescence to metabolism. Electrical stimulation of brain slices induced biphasic redox changes, as indicated by NAD(P)H/FAD fluorescence transients. Increasing the gain of the PMT and decreasing the LED power resulted in similar sensitivity as obtained with the MPPC and the photodiode, without worsening the signal to noise ratio. In conclusion, replacement of the HBO lamp with an LED might improve conventional PMT-based microfluorimetry of tissue auto-fluorescence. Copyright © 2012 Elsevier B.V. All rights reserved.

  4. The cytotoxic mechanism of karlotoxin 2 (KmTx 2) from Karlodinium veneficum (Dinophyceae)

    PubMed Central

    Deeds, Jonathan R.; Hoesch, Robert E.; Place, Allen R.; Kao, Joseph P.Y.

    2015-01-01

    This study demonstrates that the polyketide toxin karlotoxin 2 (KmTx 2) produced by Karlodinium veneficum, a dinoflagellate associated with fish kills in temperate estuaries worldwide, alters vertebrate cell membrane permeability. Microfluorimetric and electrophysiological measurements were used to determine that vertebrate cellular toxicity occurs through non-selective permeabilization of plasma membranes, leading to osmotic cell lysis. Previous studies showed that KmTx 2 is lethal to fish at naturally-occurring concentrations measured during fish kills, while sub-lethal doses severely damage gill epithelia. This study provides a mechanistic explanation for the association between K. veneficum blooms and fish kills that has long been observed in temperate estuaries worldwide. PMID:25546005

  5. Calcium Signaling in Intact Dorsal Root Ganglia

    PubMed Central

    Gemes, Geza; Rigaud, Marcel; Koopmeiners, Andrew S.; Poroli, Mark J.; Zoga, Vasiliki; Hogan, Quinn H.

    2013-01-01

    Background Ca2+ is the dominant second messenger in primary sensory neurons. In addition, disrupted Ca2+ signaling is a prominent feature in pain models involving peripheral nerve injury. Standard cytoplasmic Ca2+ recording techniques use high K+ or field stimulation and dissociated neurons. To compare findings in intact dorsal root ganglia, we used a method of simultaneous electrophysiologic and microfluorimetric recording. Methods Dissociated neurons were loaded by bath-applied Fura-2-AM and subjected to field stimulation. Alternatively, we adapted a technique in which neuronal somata of intact ganglia were loaded with Fura-2 through an intracellular microelectrode that provided simultaneous membrane potential recording during activation by action potentials (APs) conducted from attached dorsal roots. Results Field stimulation at levels necessary to activate neurons generated bath pH changes through electrolysis and failed to predictably drive neurons with AP trains. In the intact ganglion technique, single APs produced measurable Ca2+ transients that were fourfold larger in presumed nociceptive C-type neurons than in nonnociceptive Aβ-type neurons. Unitary Ca2+ transients summated during AP trains, forming transients with amplitudes that were highly dependent on stimulation frequency. Each neuron was tuned to a preferred frequency at which transient amplitude was maximal. Transients predominantly exhibited monoexponential recovery and had sustained plateaus during recovery only with trains of more than 100 APs. Nerve injury decreased Ca2+ transients in C-type neurons, but increased transients in Aβ-type neurons. Conclusions Refined observation of Ca2+ signaling is possible through natural activation by conducted APs in undissociated sensory neurons and reveals features distinct to neuronal types and injury state. PMID:20526180
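    The microfluorimetric Ca2+ measurements described above rely on the ratiometric Fura-2 indicator; intracellular Ca2+ is conventionally computed with the Grynkiewicz calibration. The sketch below implements that standard relation; the calibration constants (Rmin, Rmax, Kd, beta) are experiment-specific assumptions that must be determined for the particular setup.

    ```python
    def fura2_calcium(R, R_min, R_max, Kd, beta):
        """Grynkiewicz ratiometric calibration for Fura-2:
        [Ca2+]i = Kd * beta * (R - Rmin) / (Rmax - R)

        R     : background-corrected 340/380 nm excitation ratio
        R_min : ratio at zero Ca2+ (e.g. EGTA-clamped)
        R_max : ratio at saturating Ca2+ (e.g. ionomycin plus high Ca2+)
        Kd    : Fura-2 dissociation constant for Ca2+ (often taken as ~224 nM in vitro)
        beta  : 380 nm fluorescence of the Ca2+-free dye divided by that of the Ca2+-bound dye
        """
        return Kd * beta * (R - R_min) / (R_max - R)
    ```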

  6. Minimizing photodecomposition of flavin adenine dinucleotide fluorescence by the use of pulsed LEDs.

    PubMed

    Rösner, J; Liotta, A; Angamo, E A; Spies, C; Heinemann, U; Kovács, R

    2016-11-01

    Dynamic alterations in flavin adenine dinucleotide (FAD) fluorescence permit insight into energy metabolism-dependent changes of intramitochondrial redox potential. Monitoring FAD fluorescence in living tissue is impeded by photobleaching, restricting the length of microfluorimetric recordings. In addition, photodecomposition of these essential electron carriers negatively interferes with energy metabolism and viability of the biological specimen. Taking advantage of pulsed LED illumination, here we determined the optimal excitation settings giving the largest fluorescence yield with the lowest photobleaching and interference with metabolism in hippocampal brain slices. The effects of FAD bleaching on energy metabolism and viability were studied by monitoring tissue pO2, field potentials and changes in extracellular potassium concentration ([K+]o). Photobleaching with continuous illumination consisted of an initial exponential decrease followed by a nearly linear decay. The exponential decay was significantly decelerated with pulsed illumination. A pulse length of 5 ms was sufficient to reach a fluorescence output comparable to continuous illumination, whereas further increasing the duration increased photobleaching. Similarly, photobleaching increased with shortening of the interpulse interval. Photobleaching was partially reversible, indicating the existence of a transient nonfluorescent flavin derivative. Pulsed illumination decreased FAD photodecomposition and improved slice viability and the reproducibility of stimulus-induced FAD, field potential, [K+]o and pO2 changes as compared to continuous illumination. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
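    The abstract describes continuous-illumination photobleaching as an initial exponential decrease followed by a nearly linear decay. A minimal curve-fitting sketch for such a trace is shown below; the model form and starting guesses are illustrative assumptions, not the authors' analysis pipeline.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def bleach_model(t, A, tau, c, F0):
        """Exponential-plus-linear photobleaching: F(t) = A*exp(-t/tau) - c*t + F0."""
        return A * np.exp(-t / tau) - c * t + F0

    def fit_bleaching(t, F, p0=(1.0, 10.0, 0.001, 0.5)):
        """Fit the two-component bleaching model to a fluorescence trace.

        t, F : time points (s) and background-corrected FAD fluorescence
        p0   : initial guesses for (A, tau, c, F0); tau characterises the fast phase
        """
        params, _ = curve_fit(bleach_model, t, F, p0=p0, maxfev=10000)
        return dict(zip(("A", "tau", "c", "F0"), params))
    ```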

  7. High probability neurotransmitter release sites represent an energy efficient design

    PubMed Central

    Lu, Zhongmin; Chouhan, Amit K.; Borycz, Jolanta A.; Lu, Zhiyuan; Rossano, Adam J; Brain, Keith L.; Zhou, You; Meinertzhagen, Ian A.; Macleod, Gregory T.

    2016-01-01

    Nerve terminals contain multiple sites specialized for the release of neurotransmitters. Release usually occurs with low probability, a design thought to confer many advantages. High probability release sites are not uncommon but their advantages are not well understood. Here we test the hypothesis that high probability release sites represent an energy efficient design. We examined release site probabilities and energy efficiency at the terminals of two glutamatergic motor neurons synapsing on the same muscle fiber in Drosophila larvae. Through electrophysiological and ultrastructural measurements we calculated release site probabilities to differ considerably between terminals (0.33 vs. 0.11). We estimated the energy required to release and recycle glutamate from the same measurements. The energy required to remove calcium and sodium ions subsequent to nerve excitation was estimated through microfluorimetric and morphological measurements. We calculated energy efficiency as the number of glutamate molecules released per ATP molecule hydrolyzed, and high probability release site terminals were found to be more efficient (0.13 vs. 0.06). Our analytical model indicates that energy efficiency is optimal (~0.15) at high release site probabilities (~0.76). As limitations in energy supply constrain neural function, high probability release sites might ameliorate such constraints by demanding less energy. Energy efficiency can be viewed as one aspect of nerve terminal function, in balance with others, because high efficiency terminals depress significantly during episodic bursts of activity. PMID:27593375

  8. Substrate-dependent changes in mitochondrial function, intracellular free calcium concentration and membrane channels in pancreatic beta-cells.

    PubMed

    Duchen, M R; Smith, P A; Ashcroft, F M

    1993-08-15

    Microfluorimetric and patch-clamp techniques have been combined to determine the relationship between changes in mitochondrial metabolism, the activity of KATP channels and changes in intracellular free calcium concentration ([Ca2+]i) in isolated pancreatic beta-cells in response to glucose, ketoisocaproic acid (KIC) and the electron donor couple tetramethyl p-phenylenediamine (TMPD) and ascorbate. Exposure of cells to 20 mM glucose raised NAD(P)H autofluorescence after a delay of 28 +/- 1 s (mean +/- S.E.M., n = 30). The mitochondrial inner membrane potential, Δψm (monitored using rhodamine 123 fluorescence), hyperpolarized with a latency of 49 +/- 6 s (n = 17), and the [Ca2+]i rose after 129 +/- 13 s (n = 5). The amplitudes of the metabolic changes were graded appropriately with glucose concentration over the range 2.5-20 mM. All variables responded to KIC with shorter latencies: NAD(P)H autofluorescence rose after a delay of 20 +/- 3 s (n = 5) and rhodamine 123 changed after 21 +/- 3 s (n = 6). The electron donor couple, TMPD with ascorbate, rapidly hyperpolarized Δψm and raised [Ca2+]i. When [Ca2+]i was raised by sustained exposure to 20 mM glucose, TMPD had no further effect. TMPD also decreased whole-cell KATP currents and depolarized the cell membrane, measured with the perforated patch configuration. These data are consistent with a central role for mitochondrial oxidative phosphorylation in coupling changes in glucose concentration with the secretion of insulin.

  9. Lung Beractant Increases Free Cytosolic Levels of Ca2+ in Human Lung Fibroblasts

    PubMed Central

    Guzmán-Silva, Alejandro; Vázquez de Lara, Luis G.; Torres-Jácome, Julián; Vargaz-Guadarrama, Ajelet; Flores-Flores, Marycruz; Pezzat Said, Elias; Lagunas-Martínez, Alfredo; Mendoza-Milla, Criselda; Tanzi, Franco; Moccia, Francesco; Berra-Romani, Roberto

    2015-01-01

    Beractant, a natural surfactant, induces an antifibrogenic phenotype and apoptosis in normal human lung fibroblasts (NHLF). As intracellular Ca2+ signalling has been related to programmed cell death, we aimed to assess the effect of beractant on intracellular Ca2+ concentration ([Ca2+]i) in NHLF in vitro. Cultured NHLF were loaded with Fura-2 AM (3 μM) and Ca2+ signals were recorded by microfluorimetric techniques. Beractant causes a concentration-dependent increase in [Ca2+]i with an EC50 of 0.82 μg/ml. The application of beractant, at a concentration of 500 μg/ml, which has been shown to exert an apoptotic effect in human fibroblasts, elicited different patterns of Ca2+ signals in NHLF: a) a single Ca2+ spike which could be followed by b) Ca2+ oscillations, c) a sustained Ca2+ plateau or d) a sustained plateau overlapped by Ca2+ oscillations. The amplitude and pattern of Ca2+ transients evoked by beractant were dependent on the resting [Ca2+]i. Pharmacological manipulation revealed that beractant activates a Ca2+ signal through Ca2+ release from intracellular stores mediated by phospholipase Cβ (PLCβ), Ca2+ release from inositol 1,4,5-trisphosphate receptors (IP3Rs) and Ca2+ influx via a store-operated pathway. Moreover, beractant-induced Ca2+ release was abolished by preventing membrane depolarization upon removal of extracellular Na+ and Ca2+. Finally, the inhibition of store-operated channels prevented beractant-induced NHLF apoptosis and downregulation of α1(I) procollagen expression. Therefore, beractant utilizes store-operated Ca2+ entry (SOCE) to exert its pro-apoptotic and antifibrogenic effect on NHLF. PMID:26230503
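    The EC50 quoted above comes from a concentration-response relationship. One common way to extract such a value is a Hill (four-parameter logistic) fit, sketched below; the function and initial guesses are generic assumptions, not the fitting procedure used by the authors.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def hill(c, bottom, top, ec50, n):
        """Four-parameter Hill concentration-response curve (concentrations must be > 0)."""
        return bottom + (top - bottom) / (1.0 + (ec50 / c) ** n)

    def fit_ec50(conc, response):
        """Estimate (bottom, top, EC50, Hill slope) from paired concentration/response data."""
        p0 = (min(response), max(response), float(np.median(conc)), 1.0)
        params, _ = curve_fit(hill, conc, response, p0=p0, maxfev=10000)
        return params
    ```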

  10. Microfluorimetric analysis of a purinergic receptor (P2X7) in GH4C1 rat pituitary cells: effects of a bioactive substance produced by Pfiesteria piscicida.

    PubMed Central

    Melo, A C; Moeller, P D; Glasgow, H; Burkholder, J M; Ramsdell, J S

    2001-01-01

    Pfiesteria piscicida Steidinger & Burkholder is a toxic dinoflagellate that causes fish and human toxicity. It produces a bioactive substance that leads to cytotoxicity of GH4C1 rat pituitary cells. Extracellular adenosine 5'-triphosphate (ATP) acting on P2X7 purinergic receptors induces the formation of a nonselective cation channel, causing elevation of the cytosolic free calcium followed by a characteristic permeabilization of the cell to progressively larger ions and subsequent cell lysis. We investigated whether GH4C1 rat pituitary cells express functional P2X7 receptors and, if so, whether they are activated by a bioactive substance isolated from toxic P. piscicida cultures. We tested the selective agonist 2'-3'-O-(benzoyl-4-benzoyl)-ATP (BzATP) and the antagonists pyridoxalphosphate-6-azophenyl-2',4'-disulfonic acid (PPADS) and oxidized ATP (oxATP), using elevation of cytosolic free calcium in Fura-2-loaded cells and induced permeability of these cells to the fluorescent dye YO-PRO-1 as end points. We demonstrated that in GH4C1 cells, BzATP induces both the elevation of cytosolic free calcium and the permeabilization of the cell membrane. ATP-induced membrane permeabilization was inhibited by PPADS reversibly and by oxATP irreversibly. The putative Pfiesteria toxin (pPfTx) also elevated cytosolic free calcium in Fura-2-loaded GH4C1 cells and increased the permeability to YO-PRO-1 in a manner inhibited fully by oxATP. This study indicates that GH4C1 cells express a purinoceptor with characteristics consistent with the P2X7 subtype, and that pPfTx mimics the kinetics of cell permeabilization by ATP. PMID:11677182

  11. Zn2+ currents are mediated by calcium-permeable AMPA/Kainate channels in cultured murine hippocampal neurones

    PubMed Central

    Jia, Yousheng; Jeng, Jade-Ming; Sensi, Stefano L; Weiss, John H

    2002-01-01

    Permeation of the endogenous cation Zn2+ through calcium-permeable AMPA/kainate receptor-gated (Ca-A/K) channels might subserve pathological and/or physiological signalling roles. Voltage-clamp recording was used to directly assess Zn2+ flux through these channels on cultured murine hippocampal neurones. Ca-A/K channels were present in large numbers only on a minority of neurones (Ca-A/K(+) neurones), many of which were GABAergic. The presence of these channels was assessed in whole-cell or outside-out patch recording as the degree of inward rectification of kainate-activated currents, quantified via a rectification index (RI = G+40/G-60), which ranged from <0.4 (strongly inwardly rectifying) to >2 (outwardly rectifying). The specificity of a low RI as an indication of robust Ca-A/K channel expression was verified by two other techniques, kainate-stimulated cobalt-uptake labelling, and fluorescence imaging of kainate-induced increases in intracellular Ca2+. In addition, the degree of inward rectification of kainate-activated currents correlated strongly with the positive shift of the reversal potential (Vrev) upon switching to a sodium-free, 10 mM Ca2+ buffer. With Zn2+ (3 mM) as the only permeant extracellular cation, kainate-induced inward currents were only observed in neurones that had previously been identified as Ca-A/K(+). A comparison between the Vrev observed with 3 mM Zn2+ and that observed with Ca2+ as the permeant cation revealed a PCa/PZn of ≈1.8. Inward currents recorded in 3 mM Ca2+ were unaffected by the addition of 0.3 mM Zn2+, while microfluorimetrically detected increases in the intracellular concentration of Zn2+ in Ca-A/K(+) neurones upon kainate exposure in the presence of 0.3 mM Zn2+ were only mildly attenuated by the addition of 1.8 mM Ca2+. These results provide direct evidence that Zn2+ can carry currents through Ca-A/K channels, and that there is little interference between Ca2+ and Zn2+ in permeating these channels. PMID:12181280

  12. NO3−-induced pH Changes in Mammalian Cells

    PubMed Central

    Chow, Chung-Wai; Kapus, Andras; Romanek, Robert; Grinstein, Sergio

    1997-01-01

    The effect of NO3− on intracellular pH (pHi) was assessed microfluorimetrically in mammalian cells in culture. In cells of human, hamster, and murine origin, addition of extracellular NO3− induced an intracellular acidification. This acidification was eliminated when the cytosolic pH was clamped using ionophores or by perfusing the cytosol with highly buffered solutions using patch-pipettes, ruling out spectroscopic artifacts. The NO3−-induced pH change was not due to modulation of Na+/H+ exchange, since it was also observed in Na+/H+ antiport-deficient mutants. Though NO3− is known to inhibit vacuolar-type (V) H+-ATPases, this effect was not responsible for the acidification since it persisted in the presence of the potent V-ATPase inhibitor bafilomycin A1. NO3−/HCO3− exchange as the underlying mechanism was ruled out because acidification occurred despite nominal removal of HCO3−, despite inhibition of the anion exchanger with disulfonic stilbenes, and in HEK 293 cells, which seemingly lack anion exchangers (Lee, B.S., R.B. Gunn, and R.R. Kopito. 1991. J. Biol. Chem. 266:11448–11454). Accumulation of intracellular NO3−, measured by the Griess method after reduction to NO2−, indicated that the anion is translocated into the cells along with the movement of acid equivalents. The simplest model to explain these observations is the cotransport of NO3− with H+ (or the equivalent counter-transport of NO3− for OH−). The transporter appears to be bi-directional, operating in the forward as well as reverse directions. A rough estimate of the fluxes of NO3− and acid equivalents suggests a one-to-one stoichiometry. Accordingly, the rate of transport was unaffected by sizable changes in transmembrane potential. The cytosolic acidification was a saturable function of the extracellular concentration of NO3− and was accentuated by acidification of the extracellular space. The putative NO3−-H+ cotransport was inhibited markedly by ethacrynic acid and by α-cyano-4-hydroxycinnamate, but only marginally by 4,4′-diisothiocyanostilbene-2,2′-disulfonate or by p-chloromercuribenzene sulfonate. The transporter responsible for NO3−-induced pH changes in mammalian cells may be related, though not identical, to the NO3−-H+ cotransporter described in Arabidopsis and Aspergillus. The mammalian cotransporter may be important in eliminating the products of NO metabolism, particularly in cells that generate vast amounts of this messenger. By cotransporting NO3− with H+ the cells would additionally eliminate acid equivalents from activated cells that are metabolizing actively, without added energetic investment and with minimal disruption of the transmembrane potential, inasmuch as the cotransporter is likely electroneutral. PMID:9236211

  13. Continuous-energy eigenvalue sensitivity coefficient calculations in TSUNAMI-3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, C. M.; Rearden, B. T.

    2013-07-01

    Two methods for calculating eigenvalue sensitivity coefficients in continuous-energy Monte Carlo applications were implemented in the KENO code within the SCALE code package. The methods were used to calculate sensitivity coefficients for several test problems and produced sensitivity coefficients that agreed well with both reference sensitivities and multigroup TSUNAMI-3D sensitivity coefficients. The newly developed CLUTCH method was observed to produce sensitivity coefficients with high figures of merit and a low memory footprint, and both continuous-energy sensitivity methods met or exceeded the accuracy of the multigroup TSUNAMI-3D calculations. (authors)
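    For orientation only: an eigenvalue sensitivity coefficient is conventionally the fractional change in k-eff per fractional change in a data parameter, S = (sigma/k)(dk/dsigma). The sketch below estimates such a coefficient by direct perturbation with a central difference; it is a brute-force illustration, not the adjoint-weighted CLUTCH or IFP estimators implemented in TSUNAMI-3D, which avoid rerunning the transport calculation for every parameter.

    ```python
    def keff_sensitivity(run_keff, sigma0, rel_step=0.01):
        """Direct-perturbation estimate of S = (sigma/k) * dk/dsigma.

        run_keff : callable returning k-eff for a given value of the data parameter
                   sigma (e.g. a wrapped transport calculation); assumed by this sketch
        sigma0   : nominal parameter value
        rel_step : relative perturbation used for the central difference
        """
        h = rel_step * sigma0
        k_plus = run_keff(sigma0 + h)
        k_minus = run_keff(sigma0 - h)
        k0 = run_keff(sigma0)
        dk_dsigma = (k_plus - k_minus) / (2.0 * h)
        return (sigma0 / k0) * dk_dsigma
    ```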

  14. Development of a SCALE Tool for Continuous-Energy Eigenvalue Sensitivity Coefficient Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M; Rearden, Bradley T

    2013-01-01

    Two methods for calculating eigenvalue sensitivity coefficients in continuous-energy Monte Carlo applications were implemented in the KENO code within the SCALE code package. The methods were used to calculate sensitivity coefficients for several criticality safety problems and produced sensitivity coefficients that agreed well with both reference sensitivities and multigroup TSUNAMI-3D sensitivity coefficients. The newly developed CLUTCH method was observed to produce sensitivity coefficients with high figures of merit and low memory requirements, and both continuous-energy sensitivity methods met or exceeded the accuracy of the multigroup TSUNAMI-3D calculations.

  15. An investigation of new methods for estimating parameter sensitivities

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1989-01-01

    The method proposed for estimating sensitivity derivatives is based on the Recursive Quadratic Programming (RQP) method used in conjunction with a differencing formula to produce estimates of the sensitivities. This method is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivities.

  16. Comparison of Two Global Sensitivity Analysis Methods for Hydrologic Modeling over the Columbia River Basin

    NASA Astrophysics Data System (ADS)

    Hameed, M.; Demirel, M. C.; Moradkhani, H.

    2015-12-01

    The Global Sensitivity Analysis (GSA) approach helps identify the effectiveness of model parameters or inputs and thus provides essential information about the model performance. In this study, the effects of the Sacramento Soil Moisture Accounting (SAC-SMA) model parameters, forcing data, and initial conditions are analysed by using two GSA methods: Sobol' and the Fourier Amplitude Sensitivity Test (FAST). The simulations are carried out over five sub-basins within the Columbia River Basin (CRB) for three different periods: one-year, four-year, and seven-year. Four factors are considered and evaluated by using the two sensitivity analysis methods: the simulation length, parameter range, model initial conditions, and the reliability of the global sensitivity analysis methods. The reliability of the sensitivity analysis results is compared based on 1) the agreement between the two sensitivity analysis methods (Sobol' and FAST) in terms of highlighting the same parameters or inputs as the most influential and 2) how consistently the methods rank these sensitive parameters under the same conditions (sub-basins and simulation length). The results show coherence between the Sobol' and FAST sensitivity analysis methods. Additionally, it is found that the FAST method is sufficient to evaluate the main effects of the model parameters and inputs. Another conclusion of this study is that the smaller the parameter or initial-condition ranges, the greater the consistency and coherence between the results of the sensitivity analysis methods.
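    As a concrete picture of the variance-based side of this comparison, the sketch below estimates first-order Sobol' indices with a generic Saltelli-style pick-freeze estimator; it is not the software or sampling design used in the study, and the toy model in the usage comment is purely hypothetical.

    ```python
    import numpy as np

    def sobol_first_order(model, bounds, n=10000, seed=0):
        """First-order Sobol' indices via a Saltelli-style pick-freeze estimator.

        model  : vectorised function f(X) accepting an (n, d) array and returning n outputs
        bounds : list of (low, high) tuples, one per input factor (uniform sampling assumed)
        n      : number of base samples (total model runs = n * (d + 2))
        """
        rng = np.random.default_rng(seed)
        d = len(bounds)
        lo = np.array([b[0] for b in bounds], dtype=float)
        hi = np.array([b[1] for b in bounds], dtype=float)
        A = lo + (hi - lo) * rng.random((n, d))
        B = lo + (hi - lo) * rng.random((n, d))
        fA, fB = model(A), model(B)
        var_y = np.var(np.concatenate([fA, fB]))
        S = np.empty(d)
        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]              # resample only factor i
            S[i] = np.mean(fB * (model(ABi) - fA)) / var_y
        return S

    # hypothetical toy usage; a hydrologic model run would replace the lambda
    # S1 = sobol_first_order(lambda X: X[:, 0] + 2.0 * X[:, 1] ** 2, [(0, 1), (0, 1)])
    ```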

  17. Are LOD and LOQ Reliable Parameters for Sensitivity Evaluation of Spectroscopic Methods?

    PubMed

    Ershadi, Saba; Shayanfar, Ali

    2018-03-22

    The limit of detection (LOD) and the limit of quantification (LOQ) are common parameters to assess the sensitivity of analytical methods. In this study, the LOD and LOQ of previously reported terbium-sensitized analysis methods were calculated by different methods, and the results were compared with the sensitivity parameter [lower limit of quantification (LLOQ)] of U.S. Food and Drug Administration guidelines. The details of the calibration curve and standard deviation of blank samples of three different terbium-sensitized luminescence methods for the quantification of mycophenolic acid, enrofloxacin, and silibinin were used for the calculation of LOD and LOQ. A comparison of LOD and LOQ values calculated by the various methods with the LLOQ shows a considerable difference. These significant differences between the LOD and LOQ values obtained with various calculation methods and the LLOQ should be considered in the sensitivity evaluation of spectroscopic methods.
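    One widely used convention, illustrated below, computes LOD = 3.3*sigma/S and LOQ = 10*sigma/S from the calibration slope S and the standard deviation sigma of blank replicates; other choices of sigma (for example, the residual standard deviation of the fit) give different values, which is part of the discrepancy discussed above. The sketch is a generic illustration, not the exact calculations of the cited study.

    ```python
    import numpy as np

    def lod_loq(conc, signal, blank_signals):
        """LOD and LOQ from a linear calibration curve and replicate blank measurements.

        conc, signal  : calibration standards (concentration, instrument response)
        blank_signals : repeated responses of the blank sample
        Returns (LOD, LOQ) in concentration units, using LOD = 3.3*sigma/S and LOQ = 10*sigma/S.
        """
        slope, intercept = np.polyfit(conc, signal, 1)
        sigma = np.std(blank_signals, ddof=1)
        return 3.3 * sigma / slope, 10.0 * sigma / slope
    ```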

  18. Experimental Validation of Model Updating and Damage Detection via Eigenvalue Sensitivity Methods with Artificial Boundary Conditions

    DTIC Science & Technology

    2017-09-01

    Experimental Validation of Model Updating and Damage Detection via Eigenvalue Sensitivity Methods with Artificial Boundary Conditions, by Matthew D. Bouwense (report documentation page excerpt; distribution unlimited).

  19. Sensitivity of Lumped Constraints Using the Adjoint Method

    NASA Technical Reports Server (NTRS)

    Akgun, Mehmet A.; Haftka, Raphael T.; Wu, K. Chauncey; Walsh, Joanne L.

    1999-01-01

    Adjoint sensitivity calculation of stress, buckling and displacement constraints may be much less expensive than direct sensitivity calculation when the number of load cases is large. Adjoint stress and displacement sensitivities are available in the literature. Expressions for local buckling sensitivity of isotropic plate elements are derived in this study. Computational efficiency of the adjoint method is sensitive to the number of constraints and, therefore, the method benefits from constraint lumping. A continuum version of the Kreisselmeier-Steinhauser (KS) function is chosen to lump constraints. The adjoint and direct methods are compared for three examples: a truss structure, a simple HSCT wing model, and a large HSCT model. These sensitivity derivatives are then used in optimization.
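    The constraint-lumping step mentioned above replaces many constraints by one smooth envelope. A minimal sketch of the discrete KS aggregate is given below (the continuum version used in the paper integrates over the structure instead of summing); the draw-down parameter rho is an assumed tuning value.

    ```python
    import numpy as np

    def ks_aggregate(g, rho=50.0):
        """Kreisselmeier-Steinhauser envelope of constraint values g_i (g_i <= 0 feasible).

        KS(g) = max(g) + (1/rho) * ln(sum(exp(rho * (g_i - max(g)))))
        approaches max(g) from above as rho grows; shifting by max(g) avoids overflow.
        """
        g = np.asarray(g, dtype=float)
        g_max = g.max()
        return g_max + np.log(np.sum(np.exp(rho * (g - g_max)))) / rho
    ```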

  20. Evaluation and recommendation of sensitivity analysis methods for application to Stochastic Human Exposure and Dose Simulation models.

    PubMed

    Mokhtari, Amirhossein; Christopher Frey, H; Zheng, Junyu

    2006-11-01

    Sensitivity analyses of exposure or risk models can help identify the most significant factors to aid in risk management or to prioritize additional research to reduce uncertainty in the estimates. However, sensitivity analysis is challenged by non-linearity, interactions between inputs, and multiple days or time scales. Selected sensitivity analysis methods are evaluated with respect to their applicability to human exposure models with such features using a testbed. The testbed is a simplified version of the US Environmental Protection Agency's Stochastic Human Exposure and Dose Simulation (SHEDS) model. The methods evaluated include the Pearson and Spearman correlation, sample and rank regression, analysis of variance, Fourier amplitude sensitivity test (FAST), and Sobol's method. The first five methods are known as "sampling-based" techniques, whereas the latter two methods are known as "variance-based" techniques. The main objective of the test cases was to identify the main and total contributions of individual inputs to the output variance. Sobol's method and FAST directly quantified these measures of sensitivity. Results show that sensitivity of an input typically changed when evaluated under different time scales (e.g., daily versus monthly). All methods provided similar insights regarding less important inputs; however, Sobol's method and FAST provided more robust insights with respect to sensitivity of important inputs compared to the sampling-based techniques. Thus, the sampling-based methods can be used in a screening step to identify unimportant inputs, followed by application of more computationally intensive refined methods to a smaller set of inputs. The implications of time variation in sensitivity results for risk management are briefly discussed.
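    As a small illustration of the sampling-based screening step recommended above, the sketch below ranks inputs by the magnitude of their Spearman rank correlation with the model output (Pearson shown alongside); it is a generic example, not the SHEDS testbed analysis itself.

    ```python
    import numpy as np
    from scipy.stats import pearsonr, spearmanr

    def correlation_screening(X, y, names):
        """Rank model inputs by sampling-based correlation sensitivity measures.

        X     : (n_samples, n_inputs) Monte Carlo sample of model inputs
        y     : corresponding model outputs (e.g. simulated daily exposure)
        names : labels for the inputs
        Returns (name, spearman_rho, pearson_r) tuples sorted by |spearman_rho|.
        """
        scores = []
        for j, name in enumerate(names):
            rho, _ = spearmanr(X[:, j], y)
            r, _ = pearsonr(X[:, j], y)
            scores.append((name, rho, r))
        return sorted(scores, key=lambda s: abs(s[1]), reverse=True)
    ```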

  1. Variational Methods in Design Optimization and Sensitivity Analysis for Two-Dimensional Euler Equations

    NASA Technical Reports Server (NTRS)

    Ibrahim, A. H.; Tiwari, S. N.; Smith, R. E.

    1997-01-01

    Variational methods (VM) sensitivity analysis is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations together with the converged solution of the costate equations are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational methods show a substantial gain in computational efficiency, i.e., computer time and memory, when compared with finite difference sensitivity analysis.

  2. Sensitivity analysis of infectious disease models: methods, advances and their application

    PubMed Central

    Wu, Jianyong; Dhingra, Radhika; Gambhir, Manoj; Remais, Justin V.

    2013-01-01

    Sensitivity analysis (SA) can aid in identifying influential model parameters and optimizing model structure, yet infectious disease modelling has yet to adopt advanced SA techniques that are capable of providing considerable insights over traditional methods. We investigate five global SA methods—scatter plots, the Morris and Sobol’ methods, Latin hypercube sampling-partial rank correlation coefficient and the sensitivity heat map method—and detail their relative merits and pitfalls when applied to a microparasite (cholera) and macroparasite (schistosomiasis) transmission model. The methods investigated yielded similar results with respect to identifying influential parameters, but offered specific insights that vary by method. The classical methods differed in their ability to provide information on the quantitative relationship between parameters and model output, particularly over time. The heat map approach provides information about the group sensitivity of all model state variables, and the parameter sensitivity spectrum obtained using this method reveals the sensitivity of all state variables to each parameter over the course of the simulation period, which is especially valuable for expressing the dynamic sensitivity of a microparasite epidemic model to its parameters. A summary comparison is presented to aid infectious disease modellers in selecting appropriate methods, with the goal of improving model performance and design. PMID:23864497
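    To make the screening side of this comparison concrete, the sketch below computes Morris-style elementary effects with a simple radial one-at-a-time design and summarises them as mu* (mean absolute effect) and sigma (spread); the design and step size are simplified assumptions rather than the exact trajectory scheme applied in the review.

    ```python
    import numpy as np

    def elementary_effects(model, bounds, r=50, delta=0.1, seed=0):
        """Morris-style screening with a radial one-at-a-time design.

        model  : scalar function f(x) of a 1-D parameter vector
        bounds : list of (low, high) tuples for each parameter
        r      : number of random base points
        delta  : perturbation size as a fraction of each parameter's range
        Returns (mu_star, sigma) per parameter: mean absolute effect and its spread.
        """
        rng = np.random.default_rng(seed)
        d = len(bounds)
        lo = np.array([b[0] for b in bounds], dtype=float)
        hi = np.array([b[1] for b in bounds], dtype=float)
        ee = np.empty((r, d))
        for k in range(r):
            x = lo + (hi - lo) * rng.random(d) * (1.0 - delta)   # keep x + step inside bounds
            fx = model(x)
            for i in range(d):
                xp = x.copy()
                xp[i] += delta * (hi[i] - lo[i])
                ee[k, i] = (model(xp) - fx) / delta              # effect per unit of scaled range
        return np.abs(ee).mean(axis=0), ee.std(axis=0)
    ```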

  3. SCALE Continuous-Energy Eigenvalue Sensitivity Coefficient Calculations

    DOE PAGES

    Perfetti, Christopher M.; Rearden, Bradley T.; Martin, William R.

    2016-02-25

    Sensitivity coefficients describe the fractional change in a system response that is induced by changes to system parameters and nuclear data. The Tools for Sensitivity and UNcertainty Analysis Methodology Implementation (TSUNAMI) code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, including quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications have motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Tracklength importance CHaracterization (CLUTCH) and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE-KENO framework of the SCALE code system to enable TSUNAMI-3D to perform eigenvalue sensitivity calculations using continuous-energy Monte Carlo methods. This work provides a detailed description of the theory behind the CLUTCH method and describes in detail its implementation. This work explores the improvements in eigenvalue sensitivity coefficient accuracy that can be gained through the use of continuous-energy sensitivity methods and also compares several sensitivity methods in terms of computational efficiency and memory requirements.

  4. An integrated molecular docking and rescoring method for predicting the sensitivity spectrum of various serine hydrolases to organophosphorus pesticides.

    PubMed

    Yang, Ling-Ling; Yang, Xiao; Li, Guo-Bo; Fan, Kai-Ge; Yin, Peng-Fei; Chen, Xiang-Gui

    2016-04-01

    The enzymatic chemistry method is currently the most widely used method for the rapid detection of organophosphorus (OP) pesticides, but the enzymes used, such as cholinesterases, lack sufficient sensitivity to detect low concentrations of OP pesticides present in given samples. Serine hydrolase is considered an ideal enzyme source in seeking high-sensitivity enzymes used for OP pesticide detection. However, it is difficult to systematically evaluate sensitivities of various serine hydrolases to OP pesticides by in vitro experiments. This study aimed to establish an in silico method to predict the sensitivity spectrum of various serine hydrolases to OP pesticides. A serine hydrolase database containing 219 representative serine hydrolases was constructed. Based on this database, an integrated molecular docking and rescoring method was established, in which the AutoDock Vina program was used to produce the binding poses of OP pesticides to various serine hydrolases and the ID-Score method developed recently by us was adopted as a rescoring method to predict their binding affinities. In retrospective case studies, this method showed good performance in predicting the sensitivities of known serine hydrolases to two OP pesticides: paraoxon and diisopropyl fluorophosphate. The sensitivity spectrum of the 219 collected serine hydrolases to 37 commonly used OP pesticides was finally obtained using this method. Overall, this study presented a promising in silico tool to predict the sensitivity spectrum of various serine hydrolases to OP pesticides, which will help in finding high-sensitivity serine hydrolases for OP pesticide detection. © 2015 Society of Chemical Industry.

  5. A novel on-line spatial-temporal k-anonymity method for location privacy protection from sequence rules-based inference attacks.

    PubMed

    Zhang, Haitao; Wu, Chenxue; Chen, Zewei; Liu, Zhao; Zhu, Yunhong

    2017-01-01

    Analyzing large-scale spatial-temporal k-anonymity datasets recorded in location-based service (LBS) application servers can benefit some LBS applications. However, such analyses can allow adversaries to make inference attacks that cannot be handled by spatial-temporal k-anonymity methods or other methods for protecting sensitive knowledge. In response to this challenge, first we defined a destination location prediction attack model based on privacy-sensitive sequence rules mined from large scale anonymity datasets. Then we proposed a novel on-line spatial-temporal k-anonymity method that can resist such inference attacks. Our anti-attack technique generates new anonymity datasets with awareness of privacy-sensitive sequence rules. The new datasets extend the original sequence database of anonymity datasets to hide the privacy-sensitive rules progressively. The process includes two phases: off-line analysis and on-line application. In the off-line phase, sequence rules are mined from an original sequence database of anonymity datasets, and privacy-sensitive sequence rules are developed by correlating privacy-sensitive spatial regions with spatial grid cells among the sequence rules. In the on-line phase, new anonymity datasets are generated upon LBS requests by adopting specific generalization and avoidance principles to hide the privacy-sensitive sequence rules progressively from the extended sequence anonymity datasets database. We conducted extensive experiments to test the performance of the proposed method, and to explore the influence of the parameter K value. The results demonstrated that our proposed approach is faster and more effective for hiding privacy-sensitive sequence rules in terms of hiding sensitive rules ratios to eliminate inference attacks. Our method also had fewer side effects in terms of generating new sensitive rules ratios than the traditional spatial-temporal k-anonymity method, and had basically the same side effects in terms of non-sensitive rules variation ratios with the traditional spatial-temporal k-anonymity method. Furthermore, we also found the performance variation tendency from the parameter K value, which can help achieve the goal of hiding the maximum number of original sensitive rules while generating a minimum of new sensitive rules and affecting a minimum number of non-sensitive rules.
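    The hiding criterion described above turns on whether a privacy-sensitive sequence rule can still be re-mined from the anonymized dataset. A minimal sketch of that check is given below; the rule representation (ordered subsequences of grid-cell identifiers) and the thresholds are simplifying assumptions, not the authors' mining algorithm.

    ```python
    def contains_subsequence(seq, pattern):
        """True if `pattern` occurs in `seq` as an ordered (not necessarily contiguous) subsequence."""
        it = iter(seq)
        return all(item in it for item in pattern)

    def rule_is_hidden(sequences, antecedent, consequent, min_support, min_confidence):
        """Check whether the sequence rule antecedent -> consequent is hidden in a dataset.

        sequences : list of sequences of spatial grid-cell identifiers
        The rule counts as hidden once its support or confidence in the anonymized
        dataset falls below the mining thresholds, so it can no longer be re-mined.
        """
        n = len(sequences)
        ante = sum(contains_subsequence(s, antecedent) for s in sequences)
        both = sum(contains_subsequence(s, list(antecedent) + list(consequent)) for s in sequences)
        support = both / n
        confidence = both / ante if ante else 0.0
        return support < min_support or confidence < min_confidence
    ```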

  6. A novel on-line spatial-temporal k-anonymity method for location privacy protection from sequence rules-based inference attacks

    PubMed Central

    Wu, Chenxue; Liu, Zhao; Zhu, Yunhong

    2017-01-01

    Analyzing large-scale spatial-temporal k-anonymity datasets recorded in location-based service (LBS) application servers can benefit some LBS applications. However, such analyses can allow adversaries to make inference attacks that cannot be handled by spatial-temporal k-anonymity methods or other methods for protecting sensitive knowledge. In response to this challenge, first we defined a destination location prediction attack model based on privacy-sensitive sequence rules mined from large scale anonymity datasets. Then we proposed a novel on-line spatial-temporal k-anonymity method that can resist such inference attacks. Our anti-attack technique generates new anonymity datasets with awareness of privacy-sensitive sequence rules. The new datasets extend the original sequence database of anonymity datasets to hide the privacy-sensitive rules progressively. The process includes two phases: off-line analysis and on-line application. In the off-line phase, sequence rules are mined from an original sequence database of anonymity datasets, and privacy-sensitive sequence rules are developed by correlating privacy-sensitive spatial regions with spatial grid cells among the sequence rules. In the on-line phase, new anonymity datasets are generated upon LBS requests by adopting specific generalization and avoidance principles to hide the privacy-sensitive sequence rules progressively from the extended sequence anonymity datasets database. We conducted extensive experiments to test the performance of the proposed method, and to explore the influence of the parameter K value. The results demonstrated that our proposed approach is faster and more effective for hiding privacy-sensitive sequence rules in terms of hiding sensitive rules ratios to eliminate inference attacks. Our method also had fewer side effects in terms of generating new sensitive rules ratios than the traditional spatial-temporal k-anonymity method, and had basically the same side effects in terms of non-sensitive rules variation ratios with the traditional spatial-temporal k-anonymity method. Furthermore, we also found the performance variation tendency from the parameter K value, which can help achieve the goal of hiding the maximum number of original sensitive rules while generating a minimum of new sensitive rules and affecting a minimum number of non-sensitive rules. PMID:28767687

  7. Exploring Intercultural Sensitivity in Early Adolescence: A Mixed Methods Study

    ERIC Educational Resources Information Center

    Mellizo, Jennifer M.

    2017-01-01

    The purpose of this mixed methods study was to explore levels of intercultural sensitivity in a sample of fourth to eighth grade students in the United States (n = 162). "Intercultural sensitivity" was conceptualised through Bennett's Developmental Model of Intercultural Sensitivity, and assessed through the Adapted Intercultural Sensitivity Index.…

  8. 40 CFR 766.18 - Method sensitivity.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    Title 40, Protection of Environment; ENVIRONMENTAL PROTECTION AGENCY (CONTINUED); TOXIC SUBSTANCES CONTROL ACT; DIBENZO-PARA-DIOXINS/DIBENZOFURANS; General Provisions; § 766.18 Method sensitivity. The target level of...

  9. 40 CFR 766.18 - Method sensitivity.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    Title 40, Protection of Environment; ENVIRONMENTAL PROTECTION AGENCY (CONTINUED); TOXIC SUBSTANCES CONTROL ACT; DIBENZO-PARA-DIOXINS/DIBENZOFURANS; General Provisions; § 766.18 Method sensitivity. The target level of...

  10. 40 CFR 766.18 - Method sensitivity.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    Title 40, Protection of Environment; ENVIRONMENTAL PROTECTION AGENCY (CONTINUED); TOXIC SUBSTANCES CONTROL ACT; DIBENZO-PARA-DIOXINS/DIBENZOFURANS; General Provisions; § 766.18 Method sensitivity. The target level of...

  11. 40 CFR 766.18 - Method sensitivity.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    Title 40, Protection of Environment; ENVIRONMENTAL PROTECTION AGENCY (CONTINUED); TOXIC SUBSTANCES CONTROL ACT; DIBENZO-PARA-DIOXINS/DIBENZOFURANS; General Provisions; § 766.18 Method sensitivity. The target level of...

  12. 40 CFR 766.18 - Method sensitivity.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Title 40, Protection of Environment; ENVIRONMENTAL PROTECTION AGENCY (CONTINUED); TOXIC SUBSTANCES CONTROL ACT; DIBENZO-PARA-DIOXINS/DIBENZOFURANS; General Provisions; § 766.18 Method sensitivity. The target level of...

  13. SCALE 6.2 Continuous-Energy TSUNAMI-3D Capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M; Rearden, Bradley T

    2015-01-01

    The TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation) capabilities within the SCALE code system make use of sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different systems, quantifying computational biases, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved ease of use and fidelity and the desire to extend TSUNAMI analysis to advanced applications have motivated the development of a SCALE 6.2 module for calculating sensitivity coefficients using three-dimensional (3D) continuous-energy (CE) Monte Carlo methods: CE TSUNAMI-3D. This paper provides an overview of the theory, implementation, and capabilities of the CE TSUNAMI-3D sensitivity analysis methods. CE TSUNAMI contains two methods for calculating sensitivity coefficients in eigenvalue sensitivity applications: (1) the Iterated Fission Probability (IFP) method and (2) the Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Track length importance CHaracterization (CLUTCH) method. This work also presents the GEneralized Adjoint Response in Monte Carlo method (GEAR-MC), a first-of-its-kind approach for calculating adjoint-weighted, generalized response sensitivity coefficients—such as flux responses or reaction rate ratios—in CE Monte Carlo applications. The accuracy and efficiency of the CE TSUNAMI-3D eigenvalue sensitivity methods are assessed from a user perspective in a companion publication, and the accuracy and features of the CE TSUNAMI-3D GEAR-MC methods are detailed in this paper.

  14. Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria

    NASA Astrophysics Data System (ADS)

    Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong

    2017-08-01

    In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for reliability analysis and sensitivity analysis of complex components with arbitrary distribution parameters is investigated by using the perturbation method, the response surface method, the Edgeworth series and the sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparison with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent and computationally efficient method for reliability-analysis-based finite element modeling in engineering practice.

  15. Shape design sensitivity analysis and optimization of three dimensional elastic solids using geometric modeling and automatic regridding. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Yao, Tse-Min; Choi, Kyung K.

    1987-01-01

    An automatic regridding method and a three dimensional shape design parameterization technique were constructed and integrated into a unified theory of shape design sensitivity analysis. An algorithm was developed for general shape design sensitivity analysis of three dimensional elastic solids. Numerical implementation of this shape design sensitivity analysis method was carried out using the finite element code ANSYS. The unified theory of shape design sensitivity analysis uses the material derivative of continuum mechanics with a design velocity field that represents shape change effects over the structural design. Automatic regridding methods were developed by generating a domain velocity field with the boundary displacement method. Shape design parameterization for three dimensional surface design problems was illustrated using a Bezier surface with boundary perturbations that depend linearly on the perturbation of design parameters. A linearization method of optimization, LINRM, was used to obtain optimum shapes. Three examples from different engineering disciplines were investigated to demonstrate the accuracy and versatility of this shape design sensitivity analysis method.

  16. Projection-based estimation and nonuniformity correction of sensitivity profiles in phased-array surface coils.

    PubMed

    Yun, Sungdae; Kyriakos, Walid E; Chung, Jun-Young; Han, Yeji; Yoo, Seung-Schik; Park, Hyunwook

    2007-03-01

    To develop a novel approach for calculating the accurate sensitivity profiles of phased-array coils, resulting in correction of nonuniform intensity in parallel MRI. The proposed intensity-correction method estimates the accurate sensitivity profile of each channel of the phased-array coil. The sensitivity profile is estimated by fitting a nonlinear curve to every projection view through the imaged object. The nonlinear curve-fitting efficiently obtains the low-frequency sensitivity profile by eliminating the high-frequency image contents. Filtered back-projection (FBP) is then used to compute the estimates of the sensitivity profile of each channel. The method was applied to both phantom and brain images acquired from the phased-array coil. Intensity-corrected images from the proposed method had more uniform intensity than those obtained by the commonly used sum-of-squares (SOS) approach. With the use of the proposed correction method, the intensity variation was reduced to 6.1% from 13.1% of the SOS. When the proposed approach was applied to the computation of the sensitivity maps during sensitivity encoding (SENSE) reconstruction, it outperformed the SOS approach in terms of the reconstructed image uniformity. The proposed method is more effective at correcting the intensity nonuniformity of phased-array surface-coil images than the conventional SOS method. In addition, the method was shown to be resilient to noise and was successfully applied for image reconstruction in parallel imaging.

  17. Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Mai, J.; Tolson, B.

    2017-12-01

    The increasing complexity and runtime of environmental models mean that calibrating all model parameters, or estimating all of their uncertainties, is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While the examination of the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. If anything, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, might also become computationally expensive in case of large model outputs and a high number of bootstraps. We, therefore, present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indexes. To demonstrate that the convergence testing is independent of the SA method, we applied it to two widely used global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991) and the variance-based Sobol' method (Sobol' 1993). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indexes of the aforementioned methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. The results show that the new frugal method is able to test the convergence, and therefore the reliability, of SA results in an efficient way. The appealing feature of this new technique is that it requires no further model evaluations and therefore enables checking of already processed sensitivity results. This is one step towards reliable, transferable, published sensitivity results.

  18. Shape sensitivity analysis of flutter response of a laminated wing

    NASA Technical Reports Server (NTRS)

    Bergen, Fred D.; Kapania, Rakesh K.

    1988-01-01

    A method is presented for calculating the shape sensitivity of a wing aeroelastic response with respect to changes in geometric shape. Yates' modified strip method is used in conjunction with Giles' equivalent plate analysis to predict the flutter speed, frequency, and reduced frequency of the wing. Three methods are used to calculate the sensitivity of the eigenvalue. The first method is purely a finite difference calculation of the eigenvalue derivative directly from the solution of the flutter problem corresponding to the two different values of the shape parameters. The second method uses an analytic expression for the eigenvalue sensitivities of a general complex matrix, where the derivatives of the aerodynamic, mass, and stiffness matrices are computed using a finite difference approximation. The third method also uses an analytic expression for the eigenvalue sensitivities, but the aerodynamic matrix is computed analytically. All three methods are found to be in good agreement with each other. The sensitivities of the eigenvalues were used to predict the flutter speed, frequency, and reduced frequency. These approximations were found to be in good agreement with those obtained using a complete reanalysis.
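    The analytic expression referred to above for the sensitivity of an eigenvalue of a general complex matrix is d(lambda)/dp = y^H (dA/dp) x / (y^H x), with x and y the right and left eigenvectors. A minimal sketch is given below; the eigenvalue-selection rule and the way dA/dp is obtained (for example by finite differences of the aerodynamic, mass and stiffness matrices) are assumptions of the illustration.

    ```python
    import numpy as np
    from scipy.linalg import eig

    def eigenvalue_sensitivity(A, dA_dp, which=0):
        """Analytic derivative of one eigenvalue of a general complex matrix A(p).

        Uses d(lambda)/dp = y^H (dA/dp) x / (y^H x), where x and y are the right and
        left eigenvectors of the selected eigenvalue.
        which : index of the eigenvalue after sorting by descending real part (an
                arbitrary selection rule assumed here for illustration).
        """
        w, vl, vr = eig(A, left=True, right=True)
        idx = np.argsort(-w.real)[which]
        x = vr[:, idx]
        y = vl[:, idx]
        return (y.conj().T @ dA_dp @ x) / (y.conj().T @ x)
    ```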

  19. High sensitivity phase retrieval method in grating-based x-ray phase contrast imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Zhao; Gao, Kun; Chen, Jian

    2015-02-15

    Purpose: Grating-based x-ray phase contrast imaging is considered as one of the most promising techniques for future medical imaging. Many different methods have been developed to retrieve phase signal, among which the phase stepping (PS) method is widely used. However, further practical implementations are hindered, due to its complex scanning mode and high radiation dose. In contrast, the reverse projection (RP) method is a novel fast and low dose extraction approach. In this contribution, the authors present a quantitative analysis of the noise properties of the refraction signals retrieved by the two methods and compare their sensitivities. Methods: Using the error propagation formula, the authors analyze theoretically the signal-to-noise ratios (SNRs) of the refraction images retrieved by the two methods. Then, the sensitivities of the two extraction methods are compared under an identical exposure dose. Numerical experiments are performed to validate the theoretical results and provide some quantitative insight. Results: The SNRs of the two methods are both dependent on the system parameters, but in different ways. Comparison between their sensitivities reveals that for the refraction signal, the RP method possesses a higher sensitivity, especially in the case of high visibility and/or at the edge of the object. Conclusions: Compared with the PS method, the RP method has a superior sensitivity and provides refraction images with a higher SNR. Therefore, one can obtain highly sensitive refraction images in grating-based phase contrast imaging. This is very important for future preclinical and clinical implementations.

  20. Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks

    PubMed Central

    Arampatzis, Georgios; Katsoulakis, Markos A.; Pantazis, Yannis

    2015-01-01

    Existing sensitivity analysis approaches are not able to efficiently handle stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis model with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and to accurately estimate the sensitivities of the remaining, potentially sensitive parameters. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in “sloppy” systems. In particular, the computational acceleration is quantified by the ratio between the total number of parameters over the number of the sensitive parameters. PMID:26161544

  1. Estimation of the sensitivity of various environmental sampling methods for detection of Salmonella in duck flocks.

    PubMed

    Arnold, Mark E; Mueller-Doblies, Doris; Gosling, Rebecca J; Martelli, Francesca; Davies, Robert H

    2015-01-01

    Reports of Salmonella in ducks in the UK currently rely upon voluntary submissions from the industry, and as there is no harmonized statutory monitoring and control programme, it is difficult to compare data from different years in order to evaluate any trends in Salmonella prevalence in relation to sampling methodology. The aim of this project was therefore to assess the sensitivity of a selection of environmental sampling methods, including the sampling of faeces, dust and water troughs or bowls, for the detection of Salmonella in duck flocks; the sampling methods were applied to 67 duck flocks. Bayesian methods in the absence of a gold standard were used to provide estimates of the sensitivity of each of the sampling methods relative to the within-flock prevalence. There was a large influence of the within-flock prevalence on the sensitivity of all sample types, with sensitivity decreasing as the within-flock prevalence decreased. Boot swabs (individual and pool of four), swabs of faecally contaminated areas and whole house hand-held fabric swabs showed the overall highest sensitivity for low-prevalence flocks and are recommended for use to detect Salmonella in duck flocks. The sample type with the highest proportion positive was a pool of four hair nets used as boot swabs, but this was not the most sensitive sample for low-prevalence flocks. All the environmental sampling types (faeces swabs, litter pinches, drag swabs, water trough samples and dust) had higher sensitivity than individual faeces sampling. None of the methods consistently identified all the positive flocks, and at least 10 samples would be required for even the most sensitive method (pool of four boot swabs) to detect a 5% prevalence. The sampling of dust had a low sensitivity and is not recommended for ducks.
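
    A hedged sketch of the kind of calculation behind the statement that at least 10 samples of even the most sensitive type are needed to detect a 5% prevalence: with independent samples, the flock-level detection probability is 1 - (1 - p)^n, where p is the per-sample probability of a positive result. The per-sample probability used below is illustrative, not the paper's Bayesian estimate.

```python
# Illustrative sketch (values are made up, not the paper's Bayesian estimates):
# flock-level detection probability when n environmental samples are taken and
# each sample is positive with probability p_pos, which depends on the
# within-flock prevalence and the per-sample sensitivity.
import math

def flock_detection_probability(n_samples, per_sample_positive_prob):
    """P(at least one positive sample), assuming independent samples."""
    return 1.0 - (1.0 - per_sample_positive_prob) ** n_samples

def samples_needed(target_prob, per_sample_positive_prob):
    """Smallest n giving detection probability >= target."""
    return math.ceil(math.log(1.0 - target_prob) /
                     math.log(1.0 - per_sample_positive_prob))

if __name__ == "__main__":
    # Hypothetical: at 5% within-flock prevalence, a pooled boot swab is positive
    # with probability ~0.25 (illustrative only).
    p_pos = 0.25
    print("P(detect) with 10 swabs:", round(flock_detection_probability(10, p_pos), 3))
    print("swabs needed for 95% detection:", samples_needed(0.95, p_pos))
```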

  2. An investigation of new methods for estimating parameter sensitivities

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1988-01-01

    Parameter sensitivity is defined as the estimation of changes in the modeling functions and the design variables due to small changes in the fixed parameters of the formulation. The existing methods for estimating parameter sensitivities either require second order information that is difficult to obtain or do not return reliable estimates for the derivatives. Additionally, all the methods assume that the set of active constraints does not change in a neighborhood of the estimation point. If the active set does in fact change, then any extrapolations based on these derivatives may be in error. The objective here is to investigate more efficient new methods for estimating parameter sensitivities when the active set changes. The new method is based on the recursive quadratic programming (RQP) method used in conjunction with a differencing formula to produce estimates of the sensitivities. This is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivity. To handle changes in the active set, a deflection algorithm is proposed for those cases where the new set of active constraints remains linearly independent. For those cases where dependencies occur, a directional derivative is proposed. A few simple examples are included for the algorithm, but extensive testing has not yet been performed.

  3. Comparison of four methods to assess colostral IgG concentration in dairy cows.

    PubMed

    Chigerwe, Munashe; Tyler, Jeff W; Middleton, John R; Spain, James N; Dill, Jeffrey S; Steevens, Barry J

    2008-09-01

    To determine sensitivity and specificity of 4 methods to assess colostral IgG concentration in dairy cows and determine the optimal cutpoint for each method. Cross-sectional study. 160 Holstein dairy cows. 171 composite colostrum samples collected within 2 hours after parturition were used in the study. Test methods used to estimate colostral IgG concentration consisted of weight of the first milking, 2 hydrometers, and an electronic refractometer. Results of the test methods were compared with colostral IgG concentration determined by means of radial immunodiffusion. For each method, sensitivity and specificity for detecting colostral IgG concentration < 50 g/L were calculated across a range of potential cutpoints, and the optimal cutpoint for each test was selected to maximize sensitivity and specificity. At the optimal cutpoint for each method, sensitivity for weight of the first milking (0.42) was significantly lower than sensitivity for each of the other 3 methods (hydrometer 1, 0.75; hydrometer 2, 0.76; refractometer, 0.75), but no significant differences were identified among the other 3 methods with regard to sensitivity. Specificities at the optimal cutpoint were similar for all 4 methods. Results suggested that use of either hydrometer or the electronic refractometer was an acceptable method of screening colostrum for low IgG concentration; however, the manufacturer-defined scale for both hydrometers overestimated colostral IgG concentration. Use of weight of the first milking as a screening test to identify bovine colostrum with inadequate IgG concentration could not be justified because of the low sensitivity.
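
    A sketch, on synthetic data, of the cutpoint-selection step described above: scan candidate cutpoints of a screening reading against the radial immunodiffusion reference (< 50 g/L = inadequate) and keep the cutpoint that maximises sensitivity plus specificity. The simulated readings and noise level are assumptions, not the study's measurements.

```python
# Sketch with synthetic data (not the study's measurements): choose the cutpoint
# of a colostrum screening test that maximises sensitivity + specificity for
# detecting inadequate colostral IgG (< 50 g/L by radial immunodiffusion).
import numpy as np

rng = np.random.default_rng(0)
true_igg = rng.gamma(shape=4.0, scale=20.0, size=171)      # "RID" IgG, g/L
reading  = true_igg + rng.normal(0.0, 12.0, size=171)      # noisy screening reading
inadequate = true_igg < 50.0                                # condition of interest

def sens_spec(cutpoint):
    called_inadequate = reading < cutpoint
    tp = np.sum(called_inadequate & inadequate)
    fn = np.sum(~called_inadequate & inadequate)
    tn = np.sum(~called_inadequate & ~inadequate)
    fp = np.sum(called_inadequate & ~inadequate)
    return tp / (tp + fn), tn / (tn + fp)

cutpoints = np.linspace(20, 90, 71)
scores = [sum(sens_spec(c)) for c in cutpoints]             # maximise sens + spec
best = cutpoints[int(np.argmax(scores))]
se, sp = sens_spec(best)
print(f"optimal cutpoint ≈ {best:.0f} g/L  sensitivity={se:.2f}  specificity={sp:.2f}")
```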

  4. Eigenvalue sensitivity analysis of planar frames with variable joint and support locations

    NASA Technical Reports Server (NTRS)

    Chuang, Ching H.; Hou, Gene J. W.

    1991-01-01

    Two sensitivity equations are derived in this study based upon the continuum approach for eigenvalue sensitivity analysis of planar frame structures with variable joint and support locations. A variational form of an eigenvalue equation is first derived in which all of the quantities are expressed in the local coordinate system attached to each member. The material derivative of this variational equation is then sought to account for changes in a member's length and orientation resulting from the perturbation of joint and support locations. Finally, eigenvalue sensitivity equations are formulated in either domain quantities (by the domain method) or boundary quantities (by the boundary method). It is concluded that the sensitivity equation derived by the boundary method is more efficient in computation but less accurate than that of the domain method. Nevertheless, both of them are superior to the conventional direct differentiation method and the finite difference method in terms of computational efficiency.

  5. Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Tolson, Bryan

    2017-04-01

    The increasing complexity and runtime of environmental models lead to the current situation that the calibration of all model parameters or the estimation of all of their uncertainty is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters or model processes. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While the examination of the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. If it is checked at all, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, can itself become computationally expensive in the case of large model outputs and a high number of bootstraps. We therefore present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indexes. To demonstrate that the convergence testing method is independent of the SA method, we applied it to three widely used, global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991, Campolongo et al., 2000), the variance-based Sobol' method (Sobol' 1993, Saltelli et al. 2010) and a derivative-based method known as the Parameter Importance index (Goehler et al. 2013). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indexes of the aforementioned three methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. Subsequently, we focus on the model independence by testing the frugal method using the hydrologic model mHM (www.ufz.de/mhm) with about 50 model parameters. The results show that the new frugal method is able to test the convergence, and therefore the reliability, of SA results in an efficient way. The appealing feature of this new technique is that no further model evaluations are needed, which enables the checking of already processed (and published) sensitivity results. This is one step towards reliable and transferable published sensitivity results.
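
    For orientation, a minimal sketch of the Morris / Elementary Effects screening named in the abstract (the MVA convergence test itself is not detailed here, so it is not reproduced). The toy model is the Ishigami-type function often used as an SA benchmark; the design is a simple one-at-a-time perturbation.

```python
# Minimal sketch of Morris / Elementary Effects screening on a toy benchmark
# function; this is NOT the paper's MVA convergence test.
import numpy as np

def model(x):
    """Ishigami-type toy model; stands in for an expensive environmental model."""
    return np.sin(x[0]) + 7.0 * np.sin(x[1]) ** 2 + 0.1 * x[2] ** 4 * np.sin(x[0])

def elementary_effects(model, n_params, n_trajectories=50, delta=0.1, seed=1):
    rng = np.random.default_rng(seed)
    ee = np.zeros((n_trajectories, n_params))
    for r in range(n_trajectories):
        x = rng.uniform(-np.pi, np.pi, size=n_params)   # random base point
        f0 = model(x)
        for i in range(n_params):
            xp = x.copy()
            xp[i] += delta                              # one-at-a-time perturbation
            ee[r, i] = (model(xp) - f0) / delta
    mu_star = np.abs(ee).mean(axis=0)    # screening measure of importance
    sigma = ee.std(axis=0, ddof=1)       # indicator of nonlinearity / interactions
    return mu_star, sigma

mu_star, sigma = elementary_effects(model, n_params=3)
print("mu*  :", np.round(mu_star, 3))
print("sigma:", np.round(sigma, 3))
```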

  6. Efficient sensitivity analysis method for chaotic dynamical systems

    NASA Astrophysics Data System (ADS)

    Liao, Haitao

    2016-05-01

    The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time averaged quantities for chaotic dynamical systems. The key idea is to recast the time averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation, which forms the augmented equations of motion, is proposed to calculate the time averaged integration variable, and the sensitivity coefficients are obtained as a result of solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which is dependent on the final state of the Lagrange multipliers. The LU factorization technique used to calculate the Lagrange multipliers leads to better convergence behavior and lower computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed when using the direct differentiation sensitivity analysis method.

  7. Effective classification of the prevalence of Schistosoma mansoni.

    PubMed

    Mitchell, Shira A; Pagano, Marcello

    2012-12-01

    To present an effective classification method based on the prevalence of Schistosoma mansoni in the community. We created decision rules (defined by cut-offs for the number of positive slides), which account for imperfect sensitivity, both with a simple adjustment for fixed sensitivity and with a more complex adjustment for sensitivity that changes with prevalence. To reduce screening costs while maintaining accuracy, we propose a pooled classification method. To estimate sensitivity, we use the De Vlas model for worm and egg distributions. We compare the proposed method with the standard method to investigate differences in efficiency, measured by the number of slides read, and accuracy, measured by the probability of correct classification. Modelling varying sensitivity lowers the lower cut-off more markedly than the upper cut-off, so that regions are correctly classified as moderate rather than low prevalence and therefore receive life-saving treatment. The classification method goes directly to classification on the basis of positive pools, avoiding the need to know sensitivity in order to estimate prevalence. For model parameter values describing worm and egg distributions among children, the pooled method with 25 slides achieves an expected 89.9% probability of correct classification, whereas the standard method with 50 slides achieves 88.7%. Among children, it is more efficient and more accurate to use the pooled method for classification of S. mansoni prevalence than the current standard method. © 2012 Blackwell Publishing Ltd.
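
    An illustrative sketch of classifying a community from the count of positive pooled slides and estimating the probability of correct classification by simulation. The cut-offs and the pool-level positivity probability are hypothetical; the paper derives its rules from the De Vlas model.

```python
# Illustrative sketch (cut-offs and probabilities are made up, not the paper's):
# classify a community's S. mansoni prevalence category from the number of
# positive pooled slides, and estimate P(correct classification) by simulation.
import numpy as np

rng = np.random.default_rng(2)

def classify(n_positive_pools, low_cut=3, high_cut=10):
    """Decision rule on the count of positive pools out of the pools read."""
    if n_positive_pools < low_cut:
        return "low"
    if n_positive_pools < high_cut:
        return "moderate"
    return "high"

def prob_correct(true_category, pool_positive_prob, n_pools=25, n_sim=20000):
    """Monte Carlo estimate of the probability that the rule returns true_category."""
    positives = rng.binomial(n_pools, pool_positive_prob, size=n_sim)
    return np.mean([classify(k) == true_category for k in positives])

# Hypothetical: in a truly 'moderate' community, each pool is positive with
# probability 0.3 once imperfect per-slide sensitivity is accounted for.
print("P(correct | moderate) ≈", round(prob_correct("moderate", 0.3), 3))
```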

  8. CONTINUOUS-ENERGY MONTE CARLO METHODS FOR CALCULATING GENERALIZED RESPONSE SENSITIVITIES USING TSUNAMI-3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M; Rearden, Bradley T

    2014-01-01

    This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.

  9. Hybrid pathwise sensitivity methods for discrete stochastic models of chemical reaction systems.

    PubMed

    Wolf, Elizabeth Skubak; Anderson, David F

    2015-01-21

    Stochastic models are often used to help understand the behavior of intracellular biochemical processes. The most common such models are continuous time Markov chains (CTMCs). Parametric sensitivities, which are derivatives of expectations of model output quantities with respect to model parameters, are useful in this setting for a variety of applications. In this paper, we introduce a class of hybrid pathwise differentiation methods for the numerical estimation of parametric sensitivities. The new hybrid methods combine elements from the three main classes of procedures for sensitivity estimation and have a number of desirable qualities. First, the new methods are unbiased for a broad class of problems. Second, the methods are applicable to nearly any physically relevant biochemical CTMC model. Third, and as we demonstrate on several numerical examples, the new methods are quite efficient, particularly if one wishes to estimate the full gradient of parametric sensitivities. The methods are rather intuitive and utilize the multilevel Monte Carlo philosophy of splitting an expectation into separate parts and handling each in an efficient manner.

  10. A discourse on sensitivity analysis for discretely-modeled structures

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.; Haftka, Raphael T.

    1991-01-01

    A descriptive review is presented of the most recent methods for performing sensitivity analysis of the structural behavior of discretely-modeled systems. The methods are generally but not exclusively aimed at finite element modeled structures. Topics included are: selections of finite difference step sizes; special consideration for finite difference sensitivity of iteratively-solved response problems; first and second derivatives of static structural response; sensitivity of stresses; nonlinear static response sensitivity; eigenvalue and eigenvector sensitivities for both distinct and repeated eigenvalues; and sensitivity of transient response for both linear and nonlinear structural response.

  11. Shape design sensitivity analysis using domain information

    NASA Technical Reports Server (NTRS)

    Seong, Hwal-Gyeong; Choi, Kyung K.

    1985-01-01

    A numerical method for obtaining accurate shape design sensitivity information for built-up structures is developed and demonstrated through analysis of examples. The basic character of the finite element method, which gives more accurate domain information than boundary information, is utilized for shape design sensitivity improvement. A domain approach for shape design sensitivity analysis of built-up structures is derived using the material derivative idea of structural mechanics and the adjoint variable method of design sensitivity analysis. Velocity elements and B-spline curves are introduced to alleviate difficulties in generating domain velocity fields. The regularity requirements of the design velocity field are studied.

  12. Inverse solutions for electrical impedance tomography based on conjugate gradients methods

    NASA Astrophysics Data System (ADS)

    Wang, M.

    2002-01-01

    A multistep inverse solution for two-dimensional electric field distribution is developed to deal with the nonlinear inverse problem of electric field distribution in relation to its boundary condition and the problem of divergence due to errors introduced by the ill-conditioned sensitivity matrix and the noise produced by electrode modelling and instruments. This solution is based on a normalized linear approximation method where the change in mutual impedance is derived from the sensitivity theorem and a method of error vector decomposition. This paper presents an algebraic solution of the linear equations at each inverse step, using a generalized conjugate gradients method. Limiting the number of iterations in the generalized conjugate gradients method controls the artificial errors introduced by the assumption of linearity and the ill-conditioned sensitivity matrix. The solution of the nonlinear problem is approached using a multistep inversion. This paper also reviews the mathematical and physical definitions of the sensitivity back-projection algorithm based on the sensitivity theorem. Simulations and discussion based on the multistep algorithm, the sensitivity coefficient back-projection method and the Newton-Raphson method are given. Examples of imaging gas-liquid mixing and a human hand in brine are presented.
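
    A sketch of the core numerical idea: solve the linearised sensitivity equations in a least-squares sense with a conjugate-gradient iteration and cap the number of iterations to limit the artefacts amplified by the ill-conditioned sensitivity matrix. The matrix below is synthetic, not an EIT forward model.

```python
# Sketch (synthetic matrices, not an EIT forward model): solve the linearised
# sensitivity equations S * dsigma = dV in the least-squares sense with a
# conjugate-gradient iteration on the normal equations; early stopping acts as
# regularisation against the ill-conditioning of S.
import numpy as np

def cg_normal_equations(S, dV, n_iter=10):
    """A few CG steps on (S^T S) x = S^T dV, with the iteration count capped."""
    A = S.T @ S
    b = S.T @ dV
    x = np.zeros(A.shape[0])
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(3)
S = rng.normal(size=(104, 400)) * np.exp(-rng.uniform(0, 6, size=400))  # ill-conditioned columns
truth = np.zeros(400)
truth[150:170] = 1.0                                                     # conductivity change
dV = S @ truth + 0.01 * rng.normal(size=104)                             # noisy boundary data
print("recovered mean in the true-change region:",
      cg_normal_equations(S, dV, n_iter=8)[150:170].mean())
```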

  13. Sensitive determination of carbohydrates by fluorimetric method with Ce(IV) and sodium triphosphate.

    PubMed

    Yang, Jinghe; Cao, Xihui; Sun, Changxia; Wu, Xia; Li, Lei

    2004-05-01

    A new simple and sensitive fluorimetric method for the determination of carbohydrates is described. The method is based on the reaction between carbohydrates and Ce(IV) in the presence of sulfuric acid. All the reductive carbohydrates can be detected indirectly through the fluorescence of the Ce(III) produced. The addition of sodium triphosphate enhances the sensitivity of the method by more than 10-fold. Under optimum conditions, an excellent linear relationship was obtained between the fluorescence intensity and the concentration of carbohydrates. The limits of detection lie in the range of 9.3 x 10(-10) - 1.3 x 10(-9) mol/L. Compared with the normal fluorimetric method, the proposed method is faster and more sensitive.
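
    A sketch, on synthetic data, of the calibration step implied above: fit a straight line of fluorescence intensity against carbohydrate concentration and estimate a limit of detection. The 3·sigma/slope convention and the blank standard deviation are assumptions, not values from the paper.

```python
# Sketch with synthetic data: linear calibration of Ce(III) fluorescence versus
# carbohydrate concentration and a limit-of-detection estimate. The 3*sigma/slope
# LOD convention is an assumption; it is not stated in the abstract.
import numpy as np

conc_nM = np.array([0.0, 1.0, 2.0, 5.0, 10.0, 20.0])          # standards, nmol/L
signal = 5.0 + 4.0 * conc_nM + np.random.default_rng(4).normal(0, 0.05, conc_nM.size)

slope, intercept = np.polyfit(conc_nM, signal, 1)              # calibration line
blank_sd = 0.05                                                 # SD of repeated blanks (assumed)
lod_nM = 3.0 * blank_sd / slope                                 # 3*sigma/slope (assumed convention)

print(f"slope = {slope:.3f} intensity/nM   intercept = {intercept:.2f}")
print(f"estimated LOD ≈ {lod_nM:.3f} nmol/L  (= {lod_nM * 1e-9:.1e} mol/L)")
```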

  14. Results of an integrated structure-control law design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1988-01-01

    Next generation air and space vehicle designs are driven by increased performance requirements, demanding a high level of design integration between traditionally separate design disciplines. Interdisciplinary analysis capabilities have been developed, for aeroservoelastic aircraft and large flexible spacecraft control for instance, but the requisite integrated design methods are only beginning to be developed. One integrated design method which has received attention is based on hierarchal problem decompositions, optimization, and design sensitivity analyses. This paper highlights a design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, which predicts the change in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if a parameter were to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for computing the equivalent sensitivity information.

  15. Rapid detection of pandemic influenza in the presence of seasonal influenza

    PubMed Central

    2010-01-01

    Background Key to the control of pandemic influenza are surveillance systems that raise alarms rapidly and sensitively. In addition, they must minimise false alarms during a normal influenza season. We develop a method that uses historical syndromic influenza data from the existing surveillance system 'SERVIS' (Scottish Enhanced Respiratory Virus Infection Surveillance) for influenza-like illness (ILI) in Scotland. Methods We develop an algorithm based on the weekly case ratio (WCR) of reported ILI cases to generate an alarm for pandemic influenza. From the seasonal influenza data from 13 Scottish health boards, we estimate the joint probability distribution of the country-level WCR and the number of health boards showing synchronous increases in reported influenza cases over the previous week. Pandemic cases are sampled with various case reporting rates from simulated pandemic influenza infections and overlaid with seasonal SERVIS data from 2001 to 2007. Using this combined time series we test our method for speed of detection, sensitivity and specificity. The 2008-09 SERVIS ILI cases are also used to test the detection performance of the three methods on real pandemic data. Results We compare our method, based on our simulation study, to the moving-average Cumulative Sums (Mov-Avg Cusum) and ILI rate threshold methods and find it to be more sensitive and rapid. For 1% case reporting and detection specificity of 95%, our method is 100% sensitive and has a median detection time (MDT) of 4 weeks while the Mov-Avg Cusum and ILI rate threshold methods are, respectively, 97% and 100% sensitive with MDT of 5 weeks. At 99% specificity, our method remains 100% sensitive with MDT of 5 weeks. Although the threshold method maintains its sensitivity of 100% with MDT of 5 weeks, the sensitivity of Mov-Avg Cusum declines to 92% with an increased MDT of 6 weeks. For a two-fold decrease in the case reporting rate (0.5%) and 99% specificity, the WCR and threshold methods, respectively, have MDT of 5 and 6 weeks with both having sensitivity close to 100%, while the Mov-Avg Cusum method can only manage sensitivity of 77% with MDT of 6 weeks. However, the WCR and Mov-Avg Cusum methods outperform the ILI threshold method by 1 week in retrospective detection of the 2009 pandemic in Scotland. Conclusions While computationally and statistically simple to implement, the WCR algorithm is capable of raising alarms, rapidly and sensitively, for influenza pandemics against a background of seasonal influenza. Although the algorithm was developed using the SERVIS data, it has the capacity to be used at other geographic scales and for different disease systems where buying some early extra time is critical. PMID:21106071
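
    A minimal sketch of the weekly case ratio (WCR) alarm logic: compute the ratio of this week's national ILI count to last week's, count the health boards whose counts increased, and raise an alarm when both exceed thresholds. The thresholds below are hypothetical; the paper estimates them from the joint distribution fitted to SERVIS data.

```python
# Minimal sketch (thresholds are hypothetical, not the paper's fitted values):
# raise a pandemic alarm when the weekly case ratio and the number of health
# boards with increasing counts both exceed their thresholds.
from typing import List

def weekly_case_ratio(national_counts: List[int], week: int) -> float:
    prev = max(national_counts[week - 1], 1)        # guard against division by zero
    return national_counts[week] / prev

def boards_increasing(board_counts: List[List[int]], week: int) -> int:
    return sum(1 for series in board_counts if series[week] > series[week - 1])

def wcr_alarm(national_counts, board_counts, week,
              wcr_threshold=1.5, board_threshold=8) -> bool:
    return (weekly_case_ratio(national_counts, week) >= wcr_threshold
            and boards_increasing(board_counts, week) >= board_threshold)

# Toy example: 13 boards, with a synchronous jump in week 5.
national = [120, 110, 130, 125, 140, 260]
boards = [[9, 8, 10, 9, 10, 20]] * 10 + [[10, 10, 10, 10, 10, 10]] * 3
print("alarm in week 5:", wcr_alarm(national, boards, week=5))
```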

  16. Method for detecting the reactivity of chemicals towards peptides as an alternative test method for assessing skin sensitization potential.

    PubMed

    Cho, Sun-A; Jeong, Yun Hyeok; Kim, Ji Hoon; Kim, Seoyoung; Cho, Jun-Cheol; Heo, Yong; Heo, Young; Suh, Kyung-Do; Shin, Kyeho; An, Susun

    2014-02-10

    Cosmetics are normally composed of various ingredients. Some cosmetic ingredients can act as chemical haptens that react with proteins or peptides of human skin and can provoke an immunologic reaction known as skin sensitization. This haptenation process is a key step in inducing skin sensitization, and evaluating the sensitizing potential of cosmetic ingredients is very important for consumer safety. Therefore, animal alternative methods focusing on monitoring haptenation potential are undergoing vigorous research. The aim was to examine the further usefulness of spectrophotometric methods to monitor the reactivity of chemicals toward peptides for cosmetic ingredients. Forty chemicals (25 sensitizers and 15 non-sensitizers) were reacted with 2 synthetic peptides, e.g., the cysteine peptide (Ac-RFAACAA-COOH) with a free thiol group and the lysine peptide (Ac-RFAAKAA-COOH) with a free amine group. Unreacted peptides can be detected after incubating with 5,5'-dithiobis-2-nitrobenzoic acid or fluorescamine™ as detection reagents for the free thiol and amine groups, respectively. Chemicals were categorized as sensitizers when they induced more than 10% depletion of the cysteine peptide or more than 30% depletion of the lysine peptide. The sensitivity, specificity, and accuracy were 80.0%, 86.7% and 82.5%, respectively. These results demonstrate that spectrophotometric methods can be an easy, fast, and high-throughput screening tool for predicting the skin sensitization potential of chemicals, including cosmetic ingredients. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
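
    A sketch of the decision rule stated above (> 10% cysteine-peptide depletion or > 30% lysine-peptide depletion classifies a chemical as a sensitizer) together with the sensitivity/specificity/accuracy bookkeeping. The depletion values and labels are made up for illustration; they are not the study's forty chemicals.

```python
# Sketch with made-up depletion values: apply the decision rule from the abstract
# (>10% cysteine-peptide depletion OR >30% lysine-peptide depletion => sensitizer)
# and score sensitivity, specificity and accuracy against known labels.
def is_sensitizer(cys_depletion_pct, lys_depletion_pct):
    return cys_depletion_pct > 10.0 or lys_depletion_pct > 30.0

# (cysteine %, lysine %, true label) -- illustrative chemicals, not the study's data set
chemicals = [
    (85.0, 60.0, True), (40.0, 5.0, True), (12.0, 35.0, True), (8.0, 10.0, True),
    (2.0, 1.0, False), (5.0, 25.0, False), (15.0, 2.0, False), (1.0, 0.0, False),
]

tp = sum(is_sensitizer(c, l) and t for c, l, t in chemicals)
fn = sum(not is_sensitizer(c, l) and t for c, l, t in chemicals)
tn = sum(not is_sensitizer(c, l) and not t for c, l, t in chemicals)
fp = sum(is_sensitizer(c, l) and not t for c, l, t in chemicals)

print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("accuracy   :", (tp + tn) / len(chemicals))
```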

  17. An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1992-01-01

    Research conducted during the period from July 1991 through December 1992 is covered. A method based upon the quasi-analytical approach was developed for computing the aerodynamic sensitivity coefficients of three dimensional wings in transonic and subsonic flow. In addition, the method computes for comparison purposes the aerodynamic sensitivity coefficients using the finite difference approach. The accuracy and validity of the methods are currently under investigation.

  18. Experimental study on the sensitive depth of backwards detected light in turbid media.

    PubMed

    Zhang, Yunyao; Huang, Liqing; Zhang, Ning; Tian, Heng; Zhu, Jingping

    2018-05-28

    In the recent past, optical spectroscopy and imaging methods for biomedical diagnosis and target enhancing have been widely researched. The challenge to improve the performance of these methods is to know the sensitive depth of the backwards detected light well. Former research mainly employed a Monte Carlo method to run simulations to statistically describe the light sensitive depth. An experimental method for investigating the sensitive depth was developed and is presented here. An absorption plate was employed to remove all the light that may have travelled deeper than the plate, leaving only the light which cannot reach the plate. By measuring the received backwards light intensity and the depth between the probe and the plate, the light intensity distribution along the depth dimension can be achieved. The depth with the maximum light intensity was recorded as the sensitive depth. The experimental results showed that the maximum light intensity was nearly the same in a short depth range. It could be deduced that the sensitive depth was a range, rather than a single depth. This sensitive depth range as well as its central depth increased consistently with the increasing source-detection distance. Relationships between sensitive depth and optical properties were also investigated. It also showed that the reduced scattering coefficient affects the central sensitive depth and the range of the sensitive depth more than the absorption coefficient, so they cannot be simply added as reduced distinct coefficients to describe the sensitive depth. This study provides an efficient method for investigation of sensitive depth. It may facilitate the development of spectroscopy and imaging techniques for biomedical diagnosis and underwater imaging.

  19. Microscopy outperformed in a comparison of five methods for detecting Trichomonas vaginalis in symptomatic women.

    PubMed

    Nathan, B; Appiah, J; Saunders, P; Heron, D; Nichols, T; Brum, R; Alexander, S; Baraitser, P; Ison, C

    2015-03-01

    In the UK, despite its low sensitivity, wet mount microscopy is often the only method of detecting Trichomonas vaginalis infection. A study was conducted in symptomatic women to compare the performance of five methods for detecting T. vaginalis: an in-house polymerase chain reaction (PCR); Aptima T. vaginalis kit; OSOM ®Trichomonas Rapid Test; culture and microscopy. Symptomatic women underwent routine testing; microscopy and further swabs were taken for molecular testing, OSOM and culture. A true positive was defined as a sample that was positive for T. vaginalis by two or more different methods. Two hundred and forty-six women were recruited: 24 patients were positive for T. vaginalis by two or more different methods. Of these 24 patients, 21 patients were detected by real-time PCR (sensitivity 88%); 22 patients were detected by the Aptima T. vaginalis kit (sensitivity 92%); 22 patients were detected by OSOM (sensitivity 92%); nine were detected by wet mount microscopy (sensitivity 38%); and 21 were detected by culture (sensitivity 88%). Two patients were positive by just one method and were not considered true positives. All the other detection methods had a sensitivity to detect T. vaginalis that was significantly greater than wet mount microscopy, highlighting the number of cases that are routinely missed even in symptomatic women if microscopy is the only diagnostic method available. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  20. [Implication of inverse-probability weighting method in the evaluation of diagnostic test with verification bias].

    PubMed

    Kang, Leni; Zhang, Shaokai; Zhao, Fanghui; Qiao, Youlin

    2014-03-01

    To evaluate and adjust the verification bias that exists in screening or diagnostic tests. The inverse-probability weighting method was used to adjust the sensitivity and specificity of the diagnostic tests, with an example of cervical cancer screening used to introduce the CompareTests package in R software with which the method can be implemented. Sensitivity and specificity calculated from the traditional method and the maximum likelihood estimation method were compared to the results from the inverse-probability weighting method in the random-sampled example. The true sensitivity and specificity of the HPV self-sampling test were 83.53% (95%CI:74.23-89.93) and 85.86% (95%CI: 84.23-87.36). In the analysis of data with randomly missing verification by the gold standard, the sensitivity and specificity calculated by the traditional method were 90.48% (95%CI:80.74-95.56) and 71.96% (95%CI:68.71-75.00), respectively. The adjusted sensitivity and specificity under the inverse-probability weighting method were 82.25% (95% CI:63.11-92.62) and 85.80% (95% CI: 85.09-86.47), respectively, whereas they were 80.13% (95%CI:66.81-93.46) and 85.80% (95%CI: 84.20-87.41) under the maximum likelihood estimation method. The inverse-probability weighting method can effectively adjust the sensitivity and specificity of a diagnostic test when verification bias exists, especially when complex sampling is involved.
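
    A sketch, on simulated data, of the inverse-probability weighting idea: each subject verified by the gold standard is weighted by the inverse of its verification probability, so that the verified subsample again represents all screened subjects. Verification probabilities, prevalence and test accuracy below are assumed values chosen only to show the bias and its correction.

```python
# Sketch with simulated data: inverse-probability-weighted estimates of
# sensitivity and specificity when only part of the screened sample is verified
# by the gold standard and verification depends on the screening result.
import numpy as np

rng = np.random.default_rng(5)
n = 5000
disease = rng.random(n) < 0.10                                   # assumed prevalence
test_pos = np.where(disease, rng.random(n) < 0.85, rng.random(n) < 0.15)

# Verification is more likely for screen-positives (the source of the bias).
p_verify = np.where(test_pos, 0.90, 0.20)
verified = rng.random(n) < p_verify
w = 1.0 / p_verify                                               # inverse-probability weights

def weighted_sens_spec(disease, test_pos, verified, w):
    v = verified
    sens = np.sum(w[v & disease & test_pos]) / np.sum(w[v & disease])
    spec = np.sum(w[v & ~disease & ~test_pos]) / np.sum(w[v & ~disease])
    return sens, spec

naive_sens = np.mean(test_pos[verified & disease])               # verified subjects only
ipw_sens, ipw_spec = weighted_sens_spec(disease, test_pos, verified, w)
print(f"naive sensitivity (verified only): {naive_sens:.3f}")
print(f"IPW sensitivity: {ipw_sens:.3f}   IPW specificity: {ipw_spec:.3f}")
```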

  1. Densified waste form and method for forming

    DOEpatents

    Garino, Terry J.; Nenoff, Tina M.; Sava Gallis, Dorina Florentina

    2015-08-25

    Materials and methods of making densified waste forms for temperature sensitive waste material, such as nuclear waste, formed with low temperature processing using metallic powder that forms the matrix that encapsulates the temperature sensitive waste material. The densified waste form includes a temperature sensitive waste material in a physically densified matrix, the matrix is a compacted metallic powder. The method for forming the densified waste form includes mixing a metallic powder and a temperature sensitive waste material to form a waste form precursor. The waste form precursor is compacted with sufficient pressure to densify the waste precursor and encapsulate the temperature sensitive waste material in a physically densified matrix.

  2. THE SEDIMENTATION PROPERTIES OF THE SKIN-SENSITIZING ANTIBODIES OF RAGWEED-SENSITIVE PATIENTS

    PubMed Central

    Andersen, Burton R.; Vannier, Wilton E.

    1964-01-01

    The sedimentation coefficients of the skin-sensitizing antibodies to ragweed were evaluated by the moving partition cell method and the sucrose density gradient method. The most reliable results were obtained by sucrose density gradient ultracentrifugation which showed that the major portion of skin-sensitizing antibodies to ragweed sediment with an average value of 7.7S (7.4 to 7.9S). This is about one S unit faster than γ-globulins (6.8S). The data from the moving partition cell method are in agreement with these results. Our studies failed to demonstrate heterogeneity of the skin-sensitizing antibodies with regard to sedimentation rate. PMID:14194391

  3. A geostatistics-informed hierarchical sensitivity analysis method for complex groundwater flow and transport modeling

    NASA Astrophysics Data System (ADS)

    Dai, Heng; Chen, Xingyuan; Ye, Ming; Song, Xuehang; Zachara, John M.

    2017-05-01

    Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study, we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multilayer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed input variables.
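
    A brute-force sketch of a first-order variance-based sensitivity index for a group of inputs, Var(E[Y | X_group]) / Var(Y), estimated with a double-loop Monte Carlo on a toy two-group model. The paper's hierarchical grouping and its geostatistical reduction of the number of realisations are not shown; this only illustrates the grouped index itself.

```python
# Sketch (toy model, not the Hanford flow/transport model): first-order
# variance-based sensitivity index for a *group* of inputs via double-loop
# Monte Carlo, S_G = Var(E[Y | X_G]) / Var(Y).
import numpy as np

rng = np.random.default_rng(6)

def model(boundary, permeability):
    """Toy stand-in: a scalar head-like output from two grouped input blocks."""
    return 2.0 * boundary.sum() + 0.5 * permeability.sum() + boundary[0] * permeability[0]

def group_index(group="boundary", n_outer=300, n_inner=300, dim=4):
    cond_means = np.empty(n_outer)
    all_y = []
    for i in range(n_outer):
        fixed = rng.normal(size=dim)                   # one realisation of the chosen group
        ys = np.empty(n_inner)
        for j in range(n_inner):
            other = rng.normal(size=dim)               # re-sample the other group
            b, k = (fixed, other) if group == "boundary" else (other, fixed)
            ys[j] = model(b, k)
        cond_means[i] = ys.mean()                      # E[Y | X_group = fixed]
        all_y.append(ys)
    total_var = np.var(np.concatenate(all_y))
    return np.var(cond_means) / total_var

print("S_boundary     ≈", round(group_index("boundary"), 2))
print("S_permeability ≈", round(group_index("permeability"), 2))
```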

  4. Development of a generalized perturbation theory method for sensitivity analysis using continuous-energy Monte Carlo methods

    DOE PAGES

    Perfetti, Christopher M.; Rearden, Bradley T.

    2016-03-01

    The sensitivity and uncertainty analysis tools of the ORNL SCALE nuclear modeling and simulation code system that have been developed over the last decade have proven indispensable for numerous application and design studies for nuclear criticality safety and reactor physics. SCALE contains tools for analyzing the uncertainty in the eigenvalue of critical systems, but cannot quantify uncertainty in important neutronic parameters such as multigroup cross sections, fuel fission rates, activation rates, and neutron fluence rates with realistic three-dimensional Monte Carlo simulations. A more complete understanding of the sources of uncertainty in these design-limiting parameters could lead to improvements in process optimization, reactor safety, and help inform regulators when setting operational safety margins. A novel approach for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was recently explored as academic research and has been found to accurately and rapidly calculate sensitivity coefficients in criticality safety applications. The work presented here describes a new method, known as the GEAR-MC method, which extends the CLUTCH theory for calculating eigenvalue sensitivity coefficients to enable sensitivity coefficient calculations and uncertainty analysis for a generalized set of neutronic responses using high-fidelity continuous-energy Monte Carlo calculations. Here, several criticality safety systems were examined to demonstrate proof of principle for the GEAR-MC method, and GEAR-MC was seen to produce response sensitivity coefficients that agreed well with reference direct perturbation sensitivity coefficients.

  5. A Geostatistics-Informed Hierarchical Sensitivity Analysis Method for Complex Groundwater Flow and Transport Modeling

    NASA Astrophysics Data System (ADS)

    Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.

    2017-12-01

    Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multi-layer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially-distributed input variables.

  6. Shape design sensitivity analysis and optimal design of structural systems

    NASA Technical Reports Server (NTRS)

    Choi, Kyung K.

    1987-01-01

    The material derivative concept of continuum mechanics and an adjoint variable method of design sensitivity analysis are used to relate variations in structural shape to measures of structural performance. A domain method of shape design sensitivity analysis is used to best utilize the basic character of the finite element method, which gives accurate information not on the boundary but in the domain. Implementation of shape design sensitivity analysis using finite element computer codes is discussed. Recent numerical results are used to demonstrate the accuracy obtainable using the method. Results of the design sensitivity analysis are used to carry out design optimization of a built-up structure.

  7. [Validity of expired carbon monoxide and urine cotinine using dipstick method to assess smoking status].

    PubMed

    Park, Su San; Lee, Ju Yul; Cho, Sung-Il

    2007-07-01

    We investigated the validity of the dipstick method (Mossman Associates Inc., USA) and the expired CO method for distinguishing between smokers and nonsmokers, and we examined factors related to the two methods. This study included 244 smokers and 50 ex-smokers, recruited from smoking cessation clinics at 4 local public health centers, who had quit for over 4 weeks. We calculated the sensitivity, specificity and Kappa coefficient of each method to assess validity. We obtained the ROC curve, predictive values and agreement to determine the cutoff of the expired air CO method. Finally, we identified the related factors and compared their effects using standardized regression coefficients. The dipstick method showed a sensitivity of 92.6%, specificity of 96.0% and Kappa coefficient of 0.79. The best cutoff value to distinguish smokers was 5-6 ppm. At 5 ppm, the expired CO method showed a sensitivity of 94.3%, specificity of 82.0% and Kappa coefficient of 0.73. At 6 ppm, the sensitivity, specificity and Kappa coefficient were 88.5%, 86.0% and 0.64, respectively. Therefore, the dipstick method had higher sensitivity and specificity than the expired CO method. The dipstick and expired CO values increased significantly with increasing smoking amount. With longer time since the last cigarette, expired CO showed a rapid decrease after 4 hours, whereas the dipstick method showed relatively stable levels for more than 4 hours. The dipstick and expired CO methods were both good indicators for assessing smoking status. However, the former showed higher sensitivity and specificity and stable levels over longer periods after smoking, compared to the expired CO method.

  8. DIVERSITY: A new method for evaluating sensitivity of groundwater to contamination

    NASA Astrophysics Data System (ADS)

    Ray, J. A.; O'Dell, P. W.

    1993-12-01

    This study outlines an improved method, DIVERSITY, for delineating and rating groundwater sensitivity. It is an acronym for DIspersion/VElocity-Rated SensitivITY, which is based on an assessment of three aquifer characteristics: recharge potential, flow velocity, and flow directions. The primary objective of this method is to produce sensitivity maps at the county or state scale that illustrate intrinsic potential for contamination of the uppermost aquifer. Such maps can be used for recognition of aquifer sensitivity and for protection of groundwater quality. We suggest that overriding factors that strongly affect one or more of the three basic aquifer characteristics may systematically elevate or lower the sensitivity rating. The basic method employs a three-step procedure: (1) Hydrogeologic settings are delineated on the basis of geology and groundwater recharge/discharge position within a terrane. (2) A sensitivity envelope or model for each setting is outlined on a three-component rating graph. (3) Sensitivity ratings derived from the envelope are extrapolated to hydrogeologic setting polygons utilizing overriding and key factors, when appropriate. The three-component sensitivity rating graph employs two logarithmic scales and a relative area scale on which measured and estimated values may be plotted. The flow velocity scale ranging from 0.01 to more than 10,000 m/d is the keystone of the rating graph. Whenever possible, actual time-of-travel values are plotted on the velocity scale to bracket the position of a sensitivity envelope. The DIVERSITY method was developed and tested for statewide use in Kentucky, but we believe it is also practical and applicable for use in almost any other area.

  9. Hybrid pathwise sensitivity methods for discrete stochastic models of chemical reaction systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolf, Elizabeth Skubak, E-mail: ewolf@saintmarys.edu; Anderson, David F., E-mail: anderson@math.wisc.edu

    2015-01-21

    Stochastic models are often used to help understand the behavior of intracellular biochemical processes. The most common such models are continuous time Markov chains (CTMCs). Parametric sensitivities, which are derivatives of expectations of model output quantities with respect to model parameters, are useful in this setting for a variety of applications. In this paper, we introduce a class of hybrid pathwise differentiation methods for the numerical estimation of parametric sensitivities. The new hybrid methods combine elements from the three main classes of procedures for sensitivity estimation and have a number of desirable qualities. First, the new methods are unbiased for a broad class of problems. Second, the methods are applicable to nearly any physically relevant biochemical CTMC model. Third, and as we demonstrate on several numerical examples, the new methods are quite efficient, particularly if one wishes to estimate the full gradient of parametric sensitivities. The methods are rather intuitive and utilize the multilevel Monte Carlo philosophy of splitting an expectation into separate parts and handling each in an efficient manner.

  10. Sensitivity of control-augmented structure obtained by a system decomposition method

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Bloebaum, Christina L.; Hajela, Prabhat

    1988-01-01

    The verification of a method for computing sensitivity derivatives of a coupled system is presented. The method deals with a system whose analysis can be partitioned into subsets that correspond to disciplines and/or physical subsystems that exchange input-output data with each other. The method uses the partial sensitivity derivatives of the output with respect to input obtained for each subset separately to assemble a set of linear, simultaneous, algebraic equations that are solved for the derivatives of the coupled system response. This sensitivity analysis is verified using an example of a cantilever beam augmented with an active control system to limit the beam's dynamic displacements under an excitation force. The verification shows good agreement of the method with reference data obtained by a finite difference technique involving entire system analysis. The usefulness of a system sensitivity method in optimization applications by employing a piecewise-linear approach to the same numerical example is demonstrated. The method's principal merits are its intrinsically superior accuracy in comparison with the finite difference technique, and its compatibility with the traditional division of work in complex engineering tasks among specialty groups.
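
    A sketch of the assembly step described above, for a toy two-subsystem problem: the partial input/output derivatives of each subsystem are placed in a set of linear simultaneous equations whose solution gives the sensitivities of the coupled response, checked here against a finite difference over a complete coupled re-analysis. The subsystem functions and their partials are invented for illustration.

```python
# Sketch (toy coupled system, not the beam/controller example): assemble the
# subsystem partial derivatives into a linear system whose solution gives the
# sensitivities of the fully coupled response, then check against a finite
# difference over a complete coupled re-analysis.
import numpy as np

def f1(x, y2):   # subsystem 1 output, given parameter x and subsystem-2 output
    return 0.5 * y2 + 2.0 * x

def f2(x, y1):   # subsystem 2 output, given parameter x and subsystem-1 output
    return 0.3 * y1 + 1.0 * x

def coupled_solve(x, n_iter=200):
    y1 = y2 = 0.0
    for _ in range(n_iter):                           # fixed-point coupled analysis
        y1, y2 = f1(x, y2), f2(x, y1)
    return y1, y2

x0 = 1.0
# Partial sensitivities of each subsystem (known analytically for this toy case).
df1_dy2, df1_dx = 0.5, 2.0
df2_dy1, df2_dx = 0.3, 1.0

A = np.array([[1.0, -df1_dy2],
              [-df2_dy1, 1.0]])
b = np.array([df1_dx, df2_dx])
dy_dx = np.linalg.solve(A, b)                         # coupled-system sensitivities

h = 1e-6
fd = (np.array(coupled_solve(x0 + h)) - np.array(coupled_solve(x0))) / h
print("system sensitivity equations:", dy_dx)
print("finite difference check     :", fd)
```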

  11. Efficient computation of parameter sensitivities of discrete stochastic chemical reaction networks.

    PubMed

    Rathinam, Muruhan; Sheppard, Patrick W; Khammash, Mustafa

    2010-01-21

    Parametric sensitivity of biochemical networks is an indispensable tool for studying system robustness properties, estimating network parameters, and identifying targets for drug therapy. For discrete stochastic representations of biochemical networks where Monte Carlo methods are commonly used, sensitivity analysis can be particularly challenging, as accurate finite difference computations of sensitivity require a large number of simulations for both nominal and perturbed values of the parameters. In this paper we introduce the common random number (CRN) method in conjunction with Gillespie's stochastic simulation algorithm, which exploits positive correlations obtained by using CRNs for nominal and perturbed parameters. We also propose a new method called the common reaction path (CRP) method, which uses CRNs together with the random time change representation of discrete state Markov processes due to Kurtz to estimate the sensitivity via a finite difference approximation applied to coupled reaction paths that emerge naturally in this representation. While both methods reduce the variance of the estimator significantly compared to independent random number finite difference implementations, numerical evidence suggests that the CRP method achieves a greater variance reduction. We also provide some theoretical basis for the superior performance of CRP. The improved accuracy of these methods allows for much more efficient sensitivity estimation. In two example systems reported in this work, speedup factors greater than 300 and 10,000 are demonstrated.
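
    A sketch of the common random number (CRN) idea on a simple birth-death process (not the paper's examples): run the stochastic simulation at the nominal and perturbed parameter with the same random seed so the two paths are positively correlated, and compare the spread of the finite-difference estimates with those from independent runs.

```python
# Sketch (simple birth-death model, not the paper's examples): finite-difference
# sensitivity of a population count to the birth rate, comparing common random
# numbers (same seed for nominal and perturbed runs) with independent runs.
# The abstract's point: CRN lowers the variance of the difference estimator.
import numpy as np

def gillespie_population(birth_rate, death_rate=1.0, x0=10, t_end=5.0, seed=0):
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    while True:
        rates = np.array([birth_rate, death_rate * x])
        total = rates.sum()
        if total == 0.0:
            return x
        t += rng.exponential(1.0 / total)             # time to next reaction
        if t > t_end:
            return x
        x += 1 if rng.random() < rates[0] / total else -1

def fd_estimates(n_runs=400, k=5.0, h=0.5):
    crn, irn = [], []
    for i in range(n_runs):
        crn.append((gillespie_population(k + h, seed=i) -
                    gillespie_population(k, seed=i)) / h)
        irn.append((gillespie_population(k + h, seed=2 * n_runs + i) -
                    gillespie_population(k, seed=3 * n_runs + i)) / h)
    return np.mean(crn), np.std(crn), np.mean(irn), np.std(irn)

m1, s1, m2, s2 = fd_estimates()
print(f"CRN: mean {m1:.2f}  per-run SD {s1:.2f}")
print(f"IRN: mean {m2:.2f}  per-run SD {s2:.2f}")
```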

  12. Simulation-based sensitivity analysis for non-ignorably missing data.

    PubMed

    Yin, Peng; Shi, Jian Q

    2017-01-01

    Sensitivity analysis is popular in dealing with missing data problems, particularly for non-ignorable missingness, where the full-likelihood method cannot be adopted. It analyses how sensitively the conclusions (output) may depend on assumptions or parameters (input) about the missing data, i.e. the missing data mechanism. We call models subject to this kind of uncertainty sensitivity models. To make conventional sensitivity analysis more useful in practice, we need to define some simple and interpretable statistical quantities to assess the sensitivity models and to make evidence-based analysis. In this paper we propose a novel approach that investigates the plausibility of each missing data mechanism model assumption by comparing the simulated datasets from various MNAR models with the observed data non-parametrically, using the K-nearest-neighbour distances. Some asymptotic theory is also provided. A key step of this method is to plug in a plausibility evaluation system for each sensitivity parameter, to select plausible values and reject unlikely values, instead of considering all proposed values of sensitivity parameters as in the conventional sensitivity analysis method. The method is generic and has been applied successfully to several specific models in this paper, including a meta-analysis model with publication bias, analysis of incomplete longitudinal data and mean estimation with non-ignorable missing data.

  13. Improving multi-objective reservoir operation optimization with sensitivity-informed dimension reduction

    NASA Astrophysics Data System (ADS)

    Chu, J.; Zhang, C.; Fu, G.; Li, Y.; Zhou, H.

    2015-08-01

    This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed method dramatically reduces the computational demands required for attaining high-quality approximations of optimal trade-off relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed dimension reduction and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform dimension reduction of optimization problems when solving complex multi-objective reservoir operation problems.

  14. Optimization of Parameter Ranges for Composite Tape Winding Process Based on Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Yu, Tao; Shi, Yaoyao; He, Xiaodong; Kang, Chao; Deng, Bo; Song, Shibo

    2017-08-01

    This study focuses on the parameter sensitivity of the winding process for composite prepreg tape. Methods of multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis are proposed. A polynomial empirical model of interlaminar shear strength is established by the response surface experimental method. Using this model, the relative sensitivity of key process parameters, including temperature, tension, pressure and velocity, is calculated, and the single-parameter sensitivity curves are obtained. According to the analysis of the sensitivity curves, the stability and instability range of each parameter are identified. Finally, an optimization method for the winding process parameters is developed. The analysis results show that the optimized ranges of the process parameters for interlaminar shear strength are: temperature within [100 °C, 150 °C], tension within [275 N, 387 N], pressure within [800 N, 1500 N], and velocity within [0.2 m/s, 0.4 m/s], respectively.

  15. Radiation sensitive devices and systems for detection of radioactive materials and related methods

    DOEpatents

    Kotter, Dale K

    2014-12-02

    Radiation sensitive devices include a substrate comprising a radiation sensitive material and a plurality of resonance elements coupled to the substrate. Each resonance element is configured to resonate responsive to non-ionizing incident radiation. Systems for detecting radiation from a special nuclear material include a radiation sensitive device and a sensor located remotely from the radiation sensitive device and configured to measure an output signal from the radiation sensitive device. In such systems, the radiation sensitive device includes a radiation sensitive material and a plurality of resonance elements positioned on the radiation sensitive material. Methods for detecting a presence of a special nuclear material include positioning a radiation sensitive device in a location where special nuclear materials are to be detected and remotely interrogating the radiation sensitive device with a sensor.

  16. High-Sensitivity Spectrophotometry.

    ERIC Educational Resources Information Center

    Harris, T. D.

    1982-01-01

    Selected high-sensitivity spectrophotometric methods are examined, and comparisons are made of their relative strengths and weaknesses and the circumstances for which each can best be applied. Methods include long path cells, noise reduction, laser intracavity absorption, thermocouple calorimetry, photoacoustic methods, and thermo-optical methods.…

  17. Empirical Observations on the Sensitivity of Hot Cathode Ionization Type Vacuum Gages

    NASA Technical Reports Server (NTRS)

    Summers, R. L.

    1969-01-01

    A study of empirical methods of predicting the relative sensitivities of hot cathode ionization gages is presented. Using previously published gage sensitivities, several rules for predicting relative sensitivity are tested. The relative sensitivity to different gases is shown to be invariant with gage type, in the linear range of gage operation. The total ionization cross section, molecular and molar polarizability, and refractive index are demonstrated to be useful parameters for predicting relative gage sensitivity. Using data from the literature, the probable error of predictions of relative gage sensitivity based on these molecular properties is found to be about 10 percent. A comprehensive table of predicted relative sensitivities, based on empirical methods, is presented.

  18. The sensitivity of an hydroponic lettuce root elongation bioassay to metals, phenol and wastewaters.

    PubMed

    Park, Jihae; Yoon, Jeong-Hyun; Depuydt, Stephen; Oh, Jung-Woo; Jo, Youn-Min; Kim, Kyungtae; Brown, Murray T; Han, Taejun

    2016-04-01

    The root elongation bioassay is one of the most straightforward test methods used for environmental monitoring in terms of simplicity, rapidity and economy since it merely requires filter paper, distilled water and Petri dishes. However, filter paper as a support material is known to be problematic as it can reduce the sensitivity of the test. The newly developed hydroponic method reported here differs from the conventional root elongation method (US EPA filter paper method) in that no support material is used and the exposure time is shorter (48 h in this test versus 120 h in the US EPA test). For metals, the hydroponic test method was 3.3 (for Hg) to 57 (for Cu) times more sensitive than the US EPA method with the rank orders of sensitivity, estimated from EC50 values, being Cu≥Cd>Ni≥Zn≥Hg for the former and Hg≥Cu≥Ni≥Cd≥Zn for the latter methods. For phenol, the results did not differ significantly; EC50 values were 124 mg L(-1) and 108-180 mg L(-1) for the hydroponic and filter paper methods, respectively. Lettuce was less sensitive than daphnids to wastewaters, but the root elongation response appears to be wastewater-specific and is especially sensitive for detecting the presence of fluorine. The new hydroponic test thus provides many practical advantages, especially in terms of cost and time-effectiveness requiring only a well plate, a small volume of distilled water and short exposure period; furthermore, no specialist expertise is required. The method is simpler than the conventional EPA technique in not using filter paper which can influence the sensitivity of the test. Additionally, plant seeds have a long shelf-life and require little or no maintenance. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. A comparison of analysis methods to estimate contingency strength.

    PubMed

    Lloyd, Blair P; Staubitz, Johanna L; Tapp, Jon T

    2018-05-09

    To date, several data analysis methods have been used to estimate contingency strength, yet few studies have compared these methods directly. To compare the relative precision and sensitivity of four analysis methods (i.e., exhaustive event-based, nonexhaustive event-based, concurrent interval, concurrent+lag interval), we applied all methods to a simulated data set in which several response-dependent and response-independent schedules of reinforcement were programmed. We evaluated the degree to which contingency strength estimates produced from each method (a) corresponded with expected values for response-dependent schedules and (b) showed sensitivity to parametric manipulations of response-independent reinforcement. Results indicated both event-based methods produced contingency strength estimates that aligned with expected values for response-dependent schedules, but differed in sensitivity to response-independent reinforcement. The precision of interval-based methods varied by analysis method (concurrent vs. concurrent+lag) and schedule type (continuous vs. partial), and showed similar sensitivities to response-independent reinforcement. Recommendations and considerations for measuring contingencies are identified. © 2018 Society for the Experimental Analysis of Behavior.

  20. A fluorescence high throughput screening method for the detection of reactive electrophiles as potential skin sensitizers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avonto, Cristina; Chittiboyina, Amar G.; Rua, Diego

    2015-12-01

    Skin sensitization is an important toxicological end-point in the risk assessment of chemical allergens. Because of the complexity of the biological mechanisms associated with skin sensitization, integrated approaches combining different chemical, biological and in silico methods are recommended to replace conventional animal tests. Chemical methods are intended to characterize the potential of a sensitizer to induce earlier molecular initiating events. The presence of an electrophilic mechanistic domain is considered one of the essential chemical features to covalently bind to the biological target and induce further haptenation processes. Current in chemico assays rely on the quantification of unreacted model nucleophiles after incubation with the candidate sensitizer. In the current study, a new fluorescence-based method, ‘HTS-DCYA assay’, is proposed. The assay aims at the identification of reactive electrophiles based on their chemical reactivity toward a model fluorescent thiol. The reaction workflow enabled the development of a High Throughput Screening (HTS) method to directly quantify the reaction adducts. The reaction conditions have been optimized to minimize solubility issues, oxidative side reactions and increase the throughput of the assay while minimizing the reaction time, which are common issues with existing methods. Thirty-six chemicals previously classified with LLNA, DPRA or KeratinoSens™ were tested as a proof of concept. Preliminary results gave an estimated 82% accuracy, 78% sensitivity, 90% specificity, comparable to other in chemico methods such as Cys-DPRA. In addition to validated chemicals, six natural products were analyzed and a prediction of their sensitization potential is presented for the first time. - Highlights: • A novel fluorescence-based method to detect electrophilic sensitizers is proposed. • A model fluorescent thiol was used to directly quantify the reaction products. • A discussion of the reaction workflow and critical parameters is presented. • The method could provide a useful tool to complement existing chemical assays.

  1. [Parameter sensitivity of simulating net primary productivity of Larix olgensis forest based on BIOME-BGC model].

    PubMed

    He, Li-hong; Wang, Hai-yan; Lei, Xiang-dong

    2016-02-01

    Models based on vegetation ecophysiological processes contain many parameters, and reasonable parameter values greatly improve simulation ability. Sensitivity analysis, as an important method to screen out sensitive parameters, can comprehensively analyze how model parameters affect the simulation results. In this paper, we conducted a parameter sensitivity analysis of the BIOME-BGC model with a case study of simulating the net primary productivity (NPP) of a Larix olgensis forest in Wangqing, Jilin Province. First, by comparing field measurement data with the simulation results, we tested the BIOME-BGC model's capability of simulating the NPP of the L. olgensis forest. Then, the Morris and EFAST sensitivity methods were used to screen the parameters that had a strong influence on NPP. On this basis, we quantitatively estimated the sensitivity of the screened parameters and calculated the global, first-order and second-order sensitivity indices. The results showed that the BIOME-BGC model could simulate the NPP of the L. olgensis forest in the sample plot well. The Morris method provided a reliable parameter sensitivity analysis result under a relatively small sample size. The EFAST method could quantitatively measure the impact of a single parameter on the simulation result as well as the interactions between parameters in the BIOME-BGC model. The most influential parameters for L. olgensis forest NPP were the new stem carbon to new leaf carbon allocation ratio and the leaf carbon to nitrogen ratio; the effect of their interaction was significantly greater than the interaction effects of the other parameters.
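
    A hedged Python sketch of elementary-effects screening in the spirit of the Morris method (a simplified radial one-at-a-time design, not the BIOME-BGC analysis itself); the parameter names, bounds and the linear surrogate standing in for simulated NPP are purely illustrative.

        import numpy as np

        def elementary_effects(model, bounds, r=100, delta=0.05, seed=0):
            """Radial one-at-a-time elementary effects: returns mu* (mean |EE|)
            and sigma (std of EE) for each parameter."""
            rng = np.random.default_rng(seed)
            bounds = np.asarray(bounds, float)            # shape (d, 2): [low, high]
            d, span = len(bounds), bounds[:, 1] - bounds[:, 0]
            ee = np.empty((r, d))
            for j in range(r):
                x = bounds[:, 0] + rng.random(d) * (1.0 - delta) * span   # leave room for the step
                f0 = model(x)
                for i in range(d):
                    xp = x.copy()
                    xp[i] += delta * span[i]
                    ee[j, i] = (model(xp) - f0) / delta
            return np.abs(ee).mean(axis=0), ee.std(axis=0)

        # illustrative parameters and a linear surrogate for NPP (not BIOME-BGC)
        names = ["stem:leaf C allocation", "leaf C:N", "SLA", "leaf N in Rubisco"]
        bounds = [(0.5, 2.5), (20, 60), (8, 30), (0.03, 0.12)]
        surrogate = lambda p: 800 * p[3] + 5 * p[2] - 6 * p[1] - 120 * p[0]
        mu_star, sigma = elementary_effects(surrogate, bounds)
        for name, m in sorted(zip(names, mu_star), key=lambda t: -t[1]):
            print(f"{name:>24s}  mu* = {m:.1f}")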

  2. Spatial contrast sensitivity - Effects of age, test-retest, and psychophysical method

    NASA Technical Reports Server (NTRS)

    Higgins, Kent E.; Jaffe, Myles J.; Caruso, Rafael C.; Demonasterio, Francisco M.

    1988-01-01

    Two different psychophysical methods were used to test the spatial contrast sensitivity in normal subjects from five age groups. The method of adjustment showed a decline in sensitivity with increasing age at all spatial frequencies, while the forced-choice procedure showed an age-related decline predominantly at high spatial frequencies. It is suggested that a neural component is responsible for this decline.

  3. Densified waste form and method for forming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garino, Terry J.; Nenoff, Tina M.; Sava Gallis, Dorina Florentina

    Materials and methods of making densified waste forms for temperature sensitive waste material, such as nuclear waste, formed with low temperature processing using metallic powder that forms the matrix that encapsulates the temperature sensitive waste material. The densified waste form includes a temperature sensitive waste material in a physically densified matrix, the matrix is a compacted metallic powder. The method for forming the densified waste form includes mixing a metallic powder and a temperature sensitive waste material to form a waste form precursor. The waste form precursor is compacted with sufficient pressure to densify the waste precursor and encapsulate the temperature sensitive waste material in a physically densified matrix.

  4. Results of an integrated structure/control law design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1989-01-01

    A design sensitivity analysis method for Linear Quadratic Cost, Gaussian (LQG) optimal control laws, which predicts change in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations is discussed. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if the parameter was to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.

  5. The influence of film-screen color sensitivity and type of measurement device on kVp measurements.

    PubMed

    Lam, R W; Price, S C

    1989-01-01

    Three methods for evaluating radiographic kVp were studied: the Wisconsin Test Cassette, the Noninvasive Evaluator of Radiation Outputs (NERO), and the Dynalyzer. The Dynalyzer kVp readings were the highest and were followed by NERO and cassette readings in descending order. By film type, the cassette readings ranged from Kodak OG (green sensitive), TMG (green sensitive), XK (blue sensitive), and XRP (blue sensitive) in descending order. The results show that there is significant variation between the methods.

  6. A sensitivity equation approach to shape optimization in fluid flows

    NASA Technical Reports Server (NTRS)

    Borggaard, Jeff; Burns, John

    1994-01-01

    A sensitivity equation method is applied to shape optimization problems. An algorithm is developed and tested on a problem of designing optimal forebody simulators for a 2D, inviscid supersonic flow. The algorithm uses a BFGS/Trust Region optimization scheme with sensitivities computed by numerically approximating the linear partial differential equations that determine the flow sensitivities. Numerical examples are presented to illustrate the method.

  7. An improved method for detecting circulating microRNAs with S-Poly(T) Plus real-time PCR

    PubMed Central

    Niu, Yanqin; Zhang, Limin; Qiu, Huiling; Wu, Yike; Wang, Zhiwei; Zai, Yujia; Liu, Lin; Qu, Junle; Kang, Kang; Gou, Deming

    2015-01-01

    We herein describe a simple, sensitive and specific method for analysis of circulating microRNAs (miRNA), termed S-Poly(T) Plus real-time PCR assay. This new method is based on our previously developed S-Poly(T) method, in which a unique S-Poly(T) primer is used during reverse-transcription to increase sensitivity and specificity. Further increased sensitivity and simplicity of S-Poly(T) Plus, in comparison with the S-Poly(T) method, were achieved by a single-step, multiple-stage reaction, where RNAs were polyadenylated and reverse-transcribed at the same time. The sensitivity of circulating miRNA detection was further improved by a modified method of total RNA isolation from serum/plasma, S/P miRsol, in which glycogen was used to increase the RNA yield. We validated our methods by quantifying miRNA expression profiles in the sera of the patients with pulmonary arterial hypertension associated with congenital heart disease. In conclusion, we developed a simple, sensitive, and specific method for detecting circulating miRNAs that allows the measurement of 266 miRNAs from 100 μl of serum or plasma. This method presents a promising tool for basic miRNA research and clinical diagnosis of human diseases based on miRNA biomarkers. PMID:26459910

  8. Can currently available non-animal methods detect pre and pro-haptens relevant for skin sensitization?

    PubMed

    Patlewicz, Grace; Casati, Silvia; Basketter, David A; Asturiol, David; Roberts, David W; Lepoittevin, Jean-Pierre; Worth, Andrew P; Aschberger, Karin

    2016-12-01

    Predictive testing to characterize substances for their skin sensitization potential has historically been based on animal tests such as the Local Lymph Node Assay (LLNA). In recent years, regulations in the cosmetics and chemicals sectors have provided strong impetus to develop non-animal alternatives. Three test methods have undergone OECD validation: the direct peptide reactivity assay (DPRA), the KeratinoSens™ and the human Cell Line Activation Test (h-CLAT). Whilst these methods perform relatively well in predicting LLNA results, a concern raised is their ability to predict chemicals that need activation to be sensitizing (pre- or pro-haptens). This current study reviewed an EURL ECVAM dataset of 127 substances for which information was available in the LLNA and three non-animal test methods. Twenty eight of the sensitizers needed to be activated, with the majority being pre-haptens. These were correctly identified by 1 or more of the test methods. Six substances were categorized exclusively as pro-haptens, but were correctly identified by at least one of the cell-based assays. The analysis here showed that skin metabolism was not likely to be a major consideration for assessing sensitization potential and that sensitizers requiring activation could be identified correctly using one or more of the current non-animal methods. Published by Elsevier Inc.

  9. An ATP-gated cation channel with some P2Z-like characteristics in gastric smooth muscle cells of toad.

    PubMed Central

    Ugur, M; Drummond, R M; Zou, H; Sheng, P; Singer, J J; Walsh, J V

    1997-01-01

    1. Whole-cell and single-channel currents elicited by extracellular ATP were studied in freshly dissociated smooth muscle cells from the stomach of the toad Bufo marinus using standard patch clamp and microfluorimetric techniques. 2. This ATP-gated cation channel shares a number of pharmacological and functional properties with native rat myometrium receptors, certain native P2Z purinoceptors and the recently cloned P2X7 purinoceptor. But, unlike the last two, the ATP-gated channel does not mediate the formation of large non-specific pores. Thus, it may represent a novel member of the P2X or P2Z class. 3. Extracellular application of ATP (> or = 150 microM) elicited an inward whole-cell current at negative holding potentials that was inwardly rectifying and showed no sign of desensitization. Na+, Cs+ and, to a lesser degree, the organic cation choline served as charge carriers, but Cl- did not. Ratiometric fura-2 measurements indicated that the current is carried in part by Ca2+. The EC50 for ATP was 700 microM in solutions with a low divalent cation concentration. 4. ATP (> or = 100 microM) at the extracellular surface of cell-attached or excised patches elicited inwardly rectifying single-channel currents with a 22 pS conductance. Cl- did not serve as a charge carrier but both Na+ and Cs+ did, as did choline to a lesser extent. The mean open time of the channel was quite long, with a range in hundreds of milliseconds at a holding potential of -70 mV. 5. Mg2+ and Ca2+ decreased the magnitude of the ATP-induced whole-cell currents. Mg2+ decreased both the amplitude and the activity of ATP-activated single-channel currents. 6. ADP, UTP, P1, P5-di-adenosine pentaphosphate (AP5A), adenosine and alpha, beta-methylene ATP (alpha, beta-Me-ATP) did not induce significant whole-cell current. ATP-gamma-S and 2-methylthio ATP (2-Me-S-ATP) were significantly less effective than ATP in inducing whole-cell currents, whereas benzoylbenzoyl ATP (BzATP) was more effective. BzATP, alpha, beta-Me-ATP, ATP-gamma-S and 2-Me-S-ATP induced single-channel currents, but a higher concentration of alpha, beta-Me-ATP was required. 7. BzATP did not induce the formation of large non-specific pores, as assayed using mag-fura-2 as a high molecular mass probe. PMID:9032690

  10. Calibration test of the temperature and strain sensitivity coefficient in regional reference grating method

    NASA Astrophysics Data System (ADS)

    Wu, Jing; Huang, Junbing; Wu, Hanping; Gu, Hongcan; Tang, Bo

    2014-12-01

    In order to verify the validity of the regional reference grating method in solving the strain/temperature cross-sensitivity problem in an actual ship structural health monitoring system, and to meet engineering requirements, the sensitivity coefficients of the regional reference grating method were calibrated: national standard measurement equipment was used to calibrate the temperature sensitivity coefficient of the selected FBG temperature sensor and the strain sensitivity coefficient of the FBG strain sensor, and the thermal expansion sensitivity coefficient of the ship steel was calibrated with a water bath method. The calibration results show that the temperature sensitivity coefficient of the FBG temperature sensor is 28.16 pm/°C within -10 to 30 °C, with a linearity greater than 0.999; the strain sensitivity coefficient of the FBG strain sensor is 1.32 pm/μɛ within -2900 to 2900 μɛ, with a linearity close to 1; and the thermal expansion sensitivity coefficient of the ship steel is 23.438 pm/°C within 30 to 90 °C, with a linearity greater than 0.998. Finally, the calibration parameters are used for temperature compensation in the actual ship structural health monitoring system. The results show that the temperature compensation is effective and that the calibrated parameters meet the engineering requirements, providing an important reference for the wider engineering application of fiber Bragg grating sensors.
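
    A minimal Python sketch of the temperature-compensation step using the calibrated coefficients quoted above. It assumes, for illustration only, that the calibrated 23.438 pm/°C coefficient accounts for the whole temperature-induced wavelength shift of the strain grating bonded to the ship steel, while the reference (temperature) grating converts its own shift into a temperature change via 28.16 pm/°C.

        # FBG temperature compensation with the calibrated sensitivity coefficients
        K_T   = 28.16    # pm/°C, temperature FBG
        K_eps = 1.32     # pm/microstrain, strain FBG
        K_exp = 23.438   # pm/°C, assumed thermal response of the bonded strain grating on ship steel

        def compensated_strain(d_lambda_strain_pm, d_lambda_temp_pm):
            """Return mechanical strain (microstrain) after removing the thermal contribution."""
            dT = d_lambda_temp_pm / K_T                   # temperature change from the reference grating
            return (d_lambda_strain_pm - K_exp * dT) / K_eps

        # example: strain grating shifts by 500 pm while the temperature grating shifts by 140 pm
        print(f"{compensated_strain(500.0, 140.0):.1f} microstrain")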

  11. A fluorescence high throughput screening method for the detection of reactive electrophiles as potential skin sensitizers.

    PubMed

    Avonto, Cristina; Chittiboyina, Amar G; Rua, Diego; Khan, Ikhlas A

    2015-12-01

    Skin sensitization is an important toxicological end-point in the risk assessment of chemical allergens. Because of the complexity of the biological mechanisms associated with skin sensitization, integrated approaches combining different chemical, biological and in silico methods are recommended to replace conventional animal tests. Chemical methods are intended to characterize the potential of a sensitizer to induce earlier molecular initiating events. The presence of an electrophilic mechanistic domain is considered one of the essential chemical features to covalently bind to the biological target and induce further haptenation processes. Current in chemico assays rely on the quantification of unreacted model nucleophiles after incubation with the candidate sensitizer. In the current study, a new fluorescence-based method, 'HTS-DCYA assay', is proposed. The assay aims at the identification of reactive electrophiles based on their chemical reactivity toward a model fluorescent thiol. The reaction workflow enabled the development of a High Throughput Screening (HTS) method to directly quantify the reaction adducts. The reaction conditions have been optimized to minimize solubility issues, oxidative side reactions and increase the throughput of the assay while minimizing the reaction time, which are common issues with existing methods. Thirty-six chemicals previously classified with LLNA, DPRA or KeratinoSens™ were tested as a proof of concept. Preliminary results gave an estimated 82% accuracy, 78% sensitivity, 90% specificity, comparable to other in chemico methods such as Cys-DPRA. In addition to validated chemicals, six natural products were analyzed and a prediction of their sensitization potential is presented for the first time. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. A multiplex PCR method for detection of Aspergillus spp. and Mycobacterium tuberculosis in BAL specimens.

    PubMed

    Amini, F; Kachuei, R; Noorbakhsh, F; Imani Fooladi, A A

    2015-06-01

    The aim of this study was the simultaneous detection of Aspergillus species and Mycobacterium tuberculosis in bronchoalveolar lavage (BAL) specimens using multiplex PCR. From September 2012 until June 2013, 100 BAL specimens were collected from patients suspected of tuberculosis (TB). After direct examination and culture, multiplex PCR was used to detect Aspergillus species and M. tuberculosis. DNA was extracted from these microorganisms by a manual phenol-chloroform method. Aspergillus-specific primers, primers designed for M. tuberculosis, and beta-actin primers were used for the multiplex PCR. Aspergillus species were identified by multiplex PCR in 12 samples (12%), whereas the direct and culture tests were positive in 11% and 10% of samples, respectively. The sensitivity and specificity of this method were 100% and 98.8% in comparison to the direct test, and 100% and 97.7% in comparison to culture. M. tuberculosis was identified in 8 samples (8%); Mycobacterium-positive samples by the molecular method, direct test and culture were 6%, 5% and 7%, respectively. The sensitivity and specificity of the PCR method in comparison to the direct test were 80% and 97.8%, and in comparison to culture were 71.4% and 98.9%. In the present study, the multiplex PCR method had higher sensitivity than the direct and culture tests for identifying Aspergillus, but lower sensitivity for identifying M. tuberculosis, suggesting that the DNA extraction method was not suitable. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  13. Precision of Sensitivity in the Design Optimization of Indeterminate Structures

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Hopkins, Dale A.

    2006-01-01

    Design sensitivity is central to most optimization methods. The analytical sensitivity expression for an indeterminate structural design optimization problem can be factored into a simple determinate term and a complicated indeterminate component. Sensitivity can be approximated by retaining only the determinate term and setting the indeterminate factor to zero. The optimum solution is reached with the approximate sensitivity. The central processing unit (CPU) time to solution is substantially reduced. The benefit that accrues from using the approximate sensitivity is quantified by solving a set of problems in a controlled environment. Each problem is solved twice: first using the closed-form sensitivity expression, then using the approximation. The problem solutions use the CometBoards testbed as the optimization tool with the integrated force method as the analyzer. The modification that may be required, to use the stiffness method as the analysis tool in optimization, is discussed. The design optimization problem of an indeterminate structure contains many dependent constraints because of the implicit relationship between stresses, as well as the relationship between the stresses and displacements. The design optimization process can become problematic because the implicit relationship reduces the rank of the sensitivity matrix. The proposed approximation restores the full rank and enhances the robustness of the design optimization method.

  14. A fluorescence high throughput screening method for the detection of reactive electrophiles as potential skin sensitizers

    USDA-ARS?s Scientific Manuscript database

    Skin sensitization is an important toxicological end-point in the risk assessment of chemical allergens. Because of the complexity of the biological mechanisms associated with skin sensitization integrated approaches combining different chemical, biological and in silico methods are recommended to r...

  15. An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1991-01-01

    Continuing studies associated with the development of the quasi-analytical (QA) sensitivity method for three dimensional transonic flow about wings are presented. Furthermore, initial results using the quasi-analytical approach were obtained and compared to those computed using the finite difference (FD) approach. The basic goals achieved were: (1) carrying out various debugging operations pertaining to the quasi-analytical method; (2) addition of section design variables to the sensitivity equation in the form of multiple right hand sides; (3) reconfiguring the analysis/sensitivity package in order to facilitate the execution of analysis/FD/QA test cases; and (4) enhancing the display of output data to allow careful examination of the results and to permit various comparisons of sensitivity derivatives obtained using the FD/QA methods to be conducted easily and quickly. In addition to discussing the above goals, the results of executing subcritical and supercritical test cases are presented.

  16. Sensitivity and comparison evaluation of Saturn 5 liquid penetrants

    NASA Technical Reports Server (NTRS)

    Jones, G. H.

    1973-01-01

    Results of a sensitivity and comparison evaluation performed on six liquid penetrants that were used on the Saturn 5 vehicle and other space hardware to detect surface discontinuities are described. The relationship between penetrant materials and crack definition capabilities, the optimum penetrant materials evaluation method, and the optimum measurement methods for crack dimensions were investigated. A unique method of precise developer thickness control was evolved, utilizing clear radiographic film and a densitometer. The method of evaluation included five aluminum alloy, 2219-T87, specimens that were heated and then quenched in cold water to produce cracks. The six penetrants were then applied, one at a time, and the crack indications were counted and recorded for each penetrant for comparison purposes. Measurements were made by determining the visual crack indications per linear inch and then sectioning the specimens for a metallographic count of the cracks present. This method provided a numerical approach for assigning a sensitivity index number to the penetrants. Of the six penetrants evaluated, two were not satisfactory (one was not sufficiently sensitive and the other was too sensitive, giving false indications). The other four were satisfactory with approximately the same sensitivity in the range of 78 to 80.5 percent of total cracks detected.

  17. Selecting Sensitive Parameter Subsets in Dynamical Models With Application to Biomechanical System Identification.

    PubMed

    Ramadan, Ahmed; Boss, Connor; Choi, Jongeun; Peter Reeves, N; Cholewicki, Jacek; Popovich, John M; Radcliffe, Clark J

    2018-07-01

    Estimating many parameters of biomechanical systems with limited data may achieve good fit but may also increase 95% confidence intervals in parameter estimates. This results in poor identifiability in the estimation problem. Therefore, we propose a novel method to select sensitive biomechanical model parameters that should be estimated, while fixing the remaining parameters to values obtained from preliminary estimation. Our method relies on identifying the parameters to which the measurement output is most sensitive. The proposed method is based on the Fisher information matrix (FIM). It was compared against the nonlinear least absolute shrinkage and selection operator (LASSO) method to guide modelers on the pros and cons of our FIM method. We present an application identifying a biomechanical parametric model of a head position-tracking task for ten human subjects. Using measured data, our method (1) reduced model complexity by only requiring five out of twelve parameters to be estimated, (2) significantly reduced parameter 95% confidence intervals by up to 89% of the original confidence interval, (3) maintained goodness of fit measured by variance accounted for (VAF) at 82%, (4) reduced computation time, where our FIM method was 164 times faster than the LASSO method, and (5) selected similar sensitive parameters to the LASSO method, where three out of five selected sensitive parameters were shared by FIM and LASSO methods.
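
    A hedged Python sketch of the FIM-based ranking idea (not the authors' code): the output Jacobian with respect to the parameters is approximated by central differences, the Fisher information matrix is formed as J^T J / sigma^2, and parameters are ranked by a simple scaled-diagonal score. The second-order step response standing in for the head position-tracking model, and its parameter values, are illustrative.

        import numpy as np

        def fisher_information(model, theta, sigma=1.0, rel_step=1e-4):
            """FIM = J^T J / sigma^2 with a central-difference Jacobian of the model
            output (a time series) with respect to the parameters."""
            theta = np.asarray(theta, float)
            J = np.empty((model(theta).size, theta.size))
            for i in range(theta.size):
                h = rel_step * max(abs(theta[i]), 1e-8)
                tp, tm = theta.copy(), theta.copy()
                tp[i] += h
                tm[i] -= h
                J[:, i] = (model(tp) - model(tm)) / (2 * h)
            return J.T @ J / sigma**2

        def rank_parameters(model, theta, names):
            """Rank parameters by scaled information content: diag(FIM) * theta^2."""
            score = np.diag(fisher_information(model, theta)) * np.asarray(theta) ** 2
            order = np.argsort(score)[::-1]
            return [(names[i], float(score[i])) for i in order]

        # illustrative second-order tracking response (mass-spring-damper step response)
        t = np.linspace(0.0, 5.0, 200)
        def model(theta):
            k, b, m = theta
            wn, zeta = np.sqrt(k / m), b / (2.0 * np.sqrt(k * m))
            wd = wn * np.sqrt(max(1.0 - zeta**2, 1e-9))
            return 1.0 - np.exp(-zeta * wn * t) * np.cos(wd * t)

        print(rank_parameters(model, theta=[100.0, 5.0, 2.0], names=["k", "b", "m"]))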

  18. Phenotypic detection of broad-spectrum beta-lactamases in microbiological practice

    PubMed Central

    Sedlakova, Miroslava Htoutou; Hanulik, Vojtech; Chroma, Magdalena; Hricova, Kristyna; Kolar, Milan; Latal, Tomas; Schaumann, Reiner; Rodloff, Arne C.

    2011-01-01

    Summary Background Enterobacteriaceae producing ESBL and AmpC enzymes can be associated with failure of antibiotic therapy and related morbidity and mortality. Their routine detection in microbiology laboratories is still a problem. The aim of this study was to compare the sensitivity of selected phenotypic methods. Material/Methods A total of 106 strains of the Enterobacteriaceae family were tested, in which molecular biology methods confirmed the presence of genes encoding ESBL or AmpC. In ESBL-positive strains, the sensitivity of the ESBL Etest (AB Biodisk) and a modified double-disk synergy test (DDST) were evaluated. AmpC strains were tested by a modified AmpC disk method using 3-aminophenylboronic acid. For simultaneous detection of ESBL and AmpC, the microdilution method with a modified set of antimicrobial agents was used. Results The sensitivity of the ESBL Etest was 95%; the modified DDST yielded 100% sensitivity for ESBL producers and the AmpC test correctly detected 95% of AmpC-positive strains. The sensitivity of the modified microdilution method was 87% and 95% for ESBL and AmpC beta lactamases, respectively. Conclusions The detection of ESBL and AmpC beta lactamases should be based on specific phenotypic methods such as the modified DDST, ESBL Etest, AmpC disk test and the modified microdilution method. PMID:21525803

  19. Application of immuno-PCR assay for the detection of serum IgE specific to Bermuda allergen.

    PubMed

    Rahmatpour, Samine; Khan, Amjad Hayat; Nasiri Kalmarzi, Rasoul; Rajabibazl, Masoumeh; Tavoosidana, Gholamreza; Motevaseli, Elahe; Zarghami, Nosratollah; Sadroddiny, Esmaeil

    2017-04-01

    In vivo and in vitro tests are the two major ways of identifying the triggering allergens in sensitized individuals with allergic symptoms. Both methods are equally significant in terms of sensitivity and specificity. However, in certain circumstances, in vitro methods are highly preferred because they circumvent the use of sensitizing drugs in patients. In current study, we described a highly sensitive immuno-PCR (iPCR) assay for serum IgE specific to Bermuda allergens. Using oligonucleotide-labelled antibody, we used iPCR for the sensitive detection of serum IgE. The nucleotide sequence was amplified using conventional PCR and the bands were visualized on 2.5% agarose gel. Results demonstrated a 100-fold enhancement in sensitivity of iPCR over commercially available enzyme-linked immunosorbent assay (ELISA) kit. Our iPCR method was highly sensitive for Bermuda-specific serum IgE and could be beneficial in allergy clinics. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. METHODS FOR DETERMINING SMALL AMOUNTS OF NIOBIUM AND TANTALUM IN ORES (in Russian)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bykova, V.S.; Skrizhinskaya, V.I.

    1960-01-01

    Several current colorimetric methods for determining Nb and Ta were evaluated by comparing the results obtained from analyzing artificial mixtures and minerals, such as loparite, tantalite-columbite, perovskite, pyrochlore, cassiterite-tantalite and Ti-bearing minerals such as sphene. A modification of the thiosulfate method had a sensitivity of 0.05% Nb and was found useful when the sample contained less than 1% Ti. The dimethyl fluorene method for Ta was sensitive to 0.002% and could be used only if most of the Ti was previously removed from the sample. The pyrogallol extraction method, based on the extraction of complex Ta fluoride with cyclohexane, presented a sensitivity of 0.01% of Ta, similar to the pyrogallol-tannin method used for both elements. If their concentration is smaller, the samples must be analyzed subsequently according to the first two methods. The absorption method allows a determination of the two elements without separating them, if their concentration is higher than 0.5%, although the individual sensitivity of the method is 0.05% for Ta and 0.005% for Nb. (TTT)

  1. Diagnostic sensitivity and specificity of a participatory disease surveillance method for highly pathogenic avian influenza in household chicken flocks in Indonesia.

    PubMed

    Robyn, M; Priyono, W B; Kim, L M; Brum, E

    2012-06-01

    A study was conducted to assess the diagnostic sensitivity and specificity of a disease surveillance method for diagnosis of highly pathogenic avian influenza (HPAI) outbreaks in household chicken flocks used by participatory disease surveillance (PDS) teams in Yogyakarta Province, Indonesia. The Government of Indonesia, in partnership with the Food and Agriculture Organization of the United Nations, has implemented a PDS method for the detection of HPAI outbreaks in poultry since 2006. The PDS method in Indonesia utilizes both a clinical case definition (CD) and the result of a commercial rapid antigen test kit to diagnose HPAI outbreaks, primarily in backyard chicken flocks. The following diagnostic sensitivities and specificities were obtained relative to real-time reverse transcription-PCR as the gold standard diagnostic test: 1) 89% sensitivity (CI95: 75%-97%) and 96% specificity (CI95: 89%-99%) for the PDS CD alone; 2) 86% sensitivity (CI95: 71%-95%) and 99% specificity (CI95: 94%-100%) for the rapid antigen test alone; and 3) 84% sensitivity (CI95: 68%-94%) and 100% specificity (CI95: 96%-100%) for the PDS CD result combined with the rapid antigen test result. Based on these results, HPAI outbreaks in extensively raised household chickens can be diagnosed with sufficient sensitivity and specificity using the PDS method as implemented in Indonesia. Subject to further field evaluation, data from this study suggest that the diagnostic sensitivity of the PDS method may be improved by expanding the PDS CD to include more possible clinical presentations of HPAI and by increasing the number of rapid antigen tests to three different birds with HPAI-compatible signs of the same flock.
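
    Point estimates and 95% confidence intervals of the kind quoted above can be computed from the 2x2 counts against the gold standard; the Python sketch below uses Wilson score intervals and invented counts (not the study's data).

        from math import sqrt

        def wilson_ci(x, n, z=1.96):
            """Wilson score 95% confidence interval for a proportion x/n."""
            p = x / n
            centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
            half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
            return centre - half, centre + half

        def sens_spec(tp, fp, fn, tn):
            """Sensitivity and specificity (with Wilson 95% CIs) versus a gold standard."""
            return (tp / (tp + fn), wilson_ci(tp, tp + fn),
                    tn / (tn + fp), wilson_ci(tn, tn + fp))

        # illustrative counts only: PDS case definition versus RT-PCR
        sens, s_ci, spec, p_ci = sens_spec(tp=45, fp=10, fn=5, tn=90)
        print(f"sensitivity {sens:.0%} (95% CI {s_ci[0]:.0%}-{s_ci[1]:.0%}), "
              f"specificity {spec:.0%} (95% CI {p_ci[0]:.0%}-{p_ci[1]:.0%})")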

  2. Dynamic Modeling of Cell-Free Biochemical Networks Using Effective Kinetic Models

    DTIC Science & Technology

    2015-03-16

    … sensitivity value was the maximum uncertainty in that value estimated by the Sobol method. … Global Sensitivity Analysis of the Reduced Order Coagulation … sensitivity analysis, using the variance-based method of Sobol, to estimate which parameters controlled the performance of the reduced order model [69]. … Sobol, I. Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates …

  3. Dynamic Modeling of the Human Coagulation Cascade Using Reduced Order Effective Kinetic Models (Open Access)

    DTIC Science & Technology

    2015-03-16

    … shaded region around each total sensitivity value was the maximum uncertainty in that value estimated by the Sobol method. … We conducted a global sensitivity analysis, using the variance-based method of Sobol, to estimate which parameters controlled the … Sobol, I. Global sensitivity indices for nonlinear …

  4. Sensitivity analysis for dose deposition in radiotherapy via a Fokker–Planck model

    DOE PAGES

    Barnard, Richard C.; Frank, Martin; Krycki, Kai

    2016-02-09

    In this paper, we study the sensitivities of electron dose calculations with respect to stopping power and transport coefficients. We focus on the application to radiotherapy simulations. We use a Fokker–Planck approximation to the Boltzmann transport equation. Equations for the sensitivities are derived by the adjoint method. The Fokker–Planck equation and its adjoint are solved numerically in slab geometry using the spherical harmonics expansion (P N) and a Harten-Lax-van Leer finite volume method. Our method is verified by comparison to finite difference approximations of the sensitivities. Finally, we present numerical results of the sensitivities of the normalized average dose deposition depth with respect to the stopping power and the transport coefficients, demonstrating the increase in relative sensitivities as beam energy decreases. In conclusion, this in turn gives estimates of the uncertainty in the normalized average deposition depth, which we present.
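
    The adjoint pattern used here can be illustrated on a generic steady discretized problem (a sketch, not the Fokker–Planck solver itself): for A(p) x = b and a scalar output J = c^T x, a single adjoint solve A^T lambda = c yields every parameter sensitivity as dJ/dp_k = -lambda^T (dA/dp_k) x, which is then checked against finite differences in the same spirit as the verification described above.

        import numpy as np

        def adjoint_sensitivities(assemble_A, b, c, p, eps=1e-6):
            """Adjoint sensitivities dJ/dp for J = c^T x with A(p) x = b,
            plus a finite-difference check of each component."""
            A = assemble_A(p)
            x = np.linalg.solve(A, b)
            lam = np.linalg.solve(A.T, c)                     # single adjoint solve
            grad_adj, grad_fd = np.empty(len(p)), np.empty(len(p))
            for k in range(len(p)):
                dp = np.zeros(len(p))
                dp[k] = eps
                dA = (assemble_A(p + dp) - A) / eps           # dA/dp_k by finite difference
                grad_adj[k] = -lam @ dA @ x
                x_pert = np.linalg.solve(assemble_A(p + dp), b)
                grad_fd[k] = (c @ x_pert - c @ x) / eps
            return grad_adj, grad_fd

        # toy 1-D diffusion-like system whose matrix depends on two coefficients (illustrative)
        n = 50
        def assemble_A(p):
            main = (2.0 * p[0] + p[1]) * np.ones(n)
            off = -p[0] * np.ones(n - 1)
            return np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

        b, c = np.ones(n), np.ones(n) / n                     # J = average of the solution
        adj, fd = adjoint_sensitivities(assemble_A, b, c, p=np.array([1.0, 0.5]))
        print("adjoint:", adj, " finite difference:", fd)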

  5. Development of a highly sensitive three-dimensional gel electrophoresis method for characterization of monoclonal protein heterogeneity.

    PubMed

    Nakano, Keiichi; Tamura, Shogo; Otuka, Kohei; Niizeki, Noriyasu; Shigemura, Masahiko; Shimizu, Chikara; Matsuno, Kazuhiko; Kobayashi, Seiichi; Moriyama, Takanori

    2013-07-15

    Three-dimensional gel electrophoresis (3-DE), which combines agarose gel electrophoresis and isoelectric focusing/SDS-PAGE, was developed to characterize monoclonal proteins (M-proteins). However, the original 3-DE method has not been optimized and its specificity has not been demonstrated. The main goal of this study was to optimize the 3-DE procedure and then compare it with 2-DE. We developed a highly sensitive 3-DE method in which M-proteins are extracted from a first-dimension agarose gel, by diffusing into 150 mM NaCl, and the recovery of M-proteins was 90.6%. To validate the utility of the highly sensitive 3-DE, we compared it with the original 3-DE method. We found that highly sensitive 3-DE provided for greater M-protein recovery and was more effective in terms of detecting spots on SDS-PAGE gels than the original 3-DE. Moreover, highly sensitive 3-DE separates residual normal IgG from M-proteins, which could not be done by 2-DE. Applying the highly sensitive 3-DE to clinical samples, we found that the characteristics of M-proteins vary tremendously between individuals. We believe that our highly sensitive 3-DE method described here will prove useful in further studies of the heterogeneity of M-proteins. Copyright © 2013 Elsevier Inc. All rights reserved.

  6. Method for characterizing the upset response of CMOS circuits using alpha-particle sensitive test circuits

    NASA Technical Reports Server (NTRS)

    Buehler, Martin G. (Inventor); Nixon, Robert H. (Inventor); Soli, George A. (Inventor); Blaes, Brent R. (Inventor)

    1995-01-01

    A method for predicting the SEU susceptibility of a standard-cell D-latch using an alpha-particle sensitive SRAM, SPICE critical charge simulation results, and alpha-particle interaction physics. A technique utilizing test structures to quickly and inexpensively characterize the SEU sensitivity of standard cell latches intended for use in a space environment. This bench-level approach utilizes alpha particles to induce upsets in a low LET sensitive 4-k bit test SRAM. This SRAM consists of cells that employ an offset voltage to adjust their upset sensitivity and an enlarged sensitive drain junction to enhance the cell's upset rate.

  7. A Comparison of the Capability of Sensitivity Level 3 and Sensitivity Level 4 Fluorescent Penetrants to Detect Fatigue Cracks in Aluminum

    NASA Technical Reports Server (NTRS)

    Parker, Bradford, H.

    2009-01-01

    Historically, both sensitivity level 3 and sensitivity level 4 fluorescent penetrants have been used to perform NASA Standard Level inspections of aerospace hardware. In April 2008, NASA-STD-5009 established a requirement that only sensitivity level 4 penetrants were acceptable for inspections of NASA hardware. Having NASA contractors change existing processes or perform demonstration tests to certify sensitivity level 3 penetrants posed a potentially huge cost to the Agency. This study was conducted to directly compare the probability of detection of sensitivity level 3 and level 4 penetrants using both Method A and Method D inspection processes. The study results strongly support the conclusion that sensitivity level 3 penetrants are acceptable for NASA Standard Level inspections.

  8. The PneuCarriage Project: A Multi-Centre Comparative Study to Identify the Best Serotyping Methods for Examining Pneumococcal Carriage in Vaccine Evaluation Studies

    PubMed Central

    Satzke, Catherine; Dunne, Eileen M.; Porter, Barbara D.; Klugman, Keith P.; Mulholland, E. Kim

    2015-01-01

    Background The pneumococcus is a diverse pathogen whose primary niche is the nasopharynx. Over 90 different serotypes exist, and nasopharyngeal carriage of multiple serotypes is common. Understanding pneumococcal carriage is essential for evaluating the impact of pneumococcal vaccines. Traditional serotyping methods are cumbersome and insufficient for detecting multiple serotype carriage, and there are few data comparing the new methods that have been developed over the past decade. We established the PneuCarriage project, a large, international multi-centre study dedicated to the identification of the best pneumococcal serotyping methods for carriage studies. Methods and Findings Reference sample sets were distributed to 15 research groups for blinded testing. Twenty pneumococcal serotyping methods were used to test 81 laboratory-prepared (spiked) samples. The five top-performing methods were used to test 260 nasopharyngeal (field) samples collected from children in six high-burden countries. Sensitivity and positive predictive value (PPV) were determined for the test methods and the reference method (traditional serotyping of >100 colonies from each sample). For the alternate serotyping methods, the overall sensitivity ranged from 1% to 99% (reference method 98%), and PPV from 8% to 100% (reference method 100%), when testing the spiked samples. Fifteen methods had ≥70% sensitivity to detect the dominant (major) serotype, whilst only eight methods had ≥70% sensitivity to detect minor serotypes. For the field samples, the overall sensitivity ranged from 74.2% to 95.8% (reference method 93.8%), and PPV from 82.2% to 96.4% (reference method 99.6%). The microarray had the highest sensitivity (95.8%) and high PPV (93.7%). The major limitation of this study is that not all of the available alternative serotyping methods were included. Conclusions Most methods were able to detect the dominant serotype in a sample, but many performed poorly in detecting the minor serotype populations. Microarray with a culture amplification step was the top-performing method. Results from this comprehensive evaluation will inform future vaccine evaluation and impact studies, particularly in low-income settings, where pneumococcal disease burden remains high. PMID:26575033

  9. Quantitative Evaluation of Aged AISI 316L Stainless Steel Sensitization to Intergranular Corrosion: Comparison Between Microstructural Electrochemical and Analytical Methods

    NASA Astrophysics Data System (ADS)

    Sidhom, H.; Amadou, T.; Sahlaoui, H.; Braham, C.

    2007-06-01

    The evaluation of the degree of sensitization (DOS) to intergranular corrosion (IGC) of a commercial AISI 316L austenitic stainless steel aged at temperatures ranging from 550 °C to 800 °C during 100 to 80,000 hours was carried out using three different assessment methods. (1) The microstructural method coupled with the Strauss standard test (ASTM A262). This method establishes the kinetics of the precipitation phenomenon under different aging conditions, by transmission electronic microscope (TEM) examination of thin foils and electron diffraction. The subsequent chromium-depleted zones are characterized by X-ray microanalysis using scanning transmission electronic microscope (STEM). The superimposition of microstructural time-temperature-precipitation (TTP) and ASTM A262 time-temperature-sensitization (TTS) diagrams provides the relationship between aged microstructure and IGC. Moreover, by considering the chromium-depleted zone characteristics, sensitization and desensitization criteria could be established. (2) The electrochemical method involving the double loop-electrochemical potentiokinetic reactivation (DL-EPR) test. The operating conditions of this test were initially optimized using the experimental design method on the bases of the reliability, the selectivity, and the reproducibility of test responses for both annealed and sensitized steels. The TTS diagram of the AISI 316L stainless steel was established using this method. This diagram offers a quantitative assessment of the DOS and a possibility to appreciate the time-temperature equivalence of the IGC sensitization and desensitization. (3) The analytical method based on the chromium diffusion models. Using the IGC sensitization and desensitization criteria established by the microstructural method, numerical solving of the chromium diffusion equations leads to a calculated AISI 316L TTS diagram. Comparison of these three methods gives a clear advantage to the nondestructive DL-EPR test when it is used with its optimized operating conditions. This quantitative method is simple to perform; it is fast, reliable, economical, and presents the best ability to detect the lowest DOS to IGC. For these reasons, this method can be considered as a serious candidate for IGC checking of stainless steel components of industrial plants.

  10. Sensitivity analysis of a sound absorption model with correlated inputs

    NASA Astrophysics Data System (ADS)

    Chai, W.; Christen, J.-L.; Zine, A.-M.; Ichchou, M.

    2017-04-01

    Sound absorption in porous media is a complex phenomenon, which is usually addressed with homogenized models depending on macroscopic parameters. Since these parameters emerge from the structure at the microscopic scale, they may be correlated. This paper deals with sensitivity analysis methods for a sound absorption model with correlated inputs. Specifically, the Johnson-Champoux-Allard (JCA) model is chosen as the objective model, with correlation effects generated by a secondary micro-macro semi-empirical model. To deal with this case, a relatively new sensitivity analysis method, the Fourier Amplitude Sensitivity Test with Correlation design (FASTC), based on Iman's transform, is applied. This method requires a priori information such as the variables' marginal distribution functions and their correlation matrix. The results are compared to the Correlation Ratio Method (CRM) for reference and validation. The distribution of the macroscopic variables arising from the microstructure, as well as their correlation matrix, are studied. Finally, the test results show that correlation has a very important impact on the results of sensitivity analysis. The effect of correlation strength among the input variables on the sensitivity analysis is also assessed.

  11. Sensitivity of different Trypanosoma vivax specific primers for the diagnosis of livestock trypanosomosis using different DNA extraction methods.

    PubMed

    Gonzales, J L; Loza, A; Chacon, E

    2006-03-15

    There are several T. vivax specific primers developed for PCR diagnosis. Most of these primers were validated under different DNA extraction methods and study designs, leading to heterogeneity of results. The objective of the present study was to validate PCR as a diagnostic test for T. vivax trypanosomosis by determining the test sensitivity of different published specific primers with different sample preparations. Four different DNA extraction methods were used to test the sensitivity of PCR with four different primer sets. DNA was extracted directly from whole blood samples, blood dried on filter papers or blood dried on FTA cards. The results showed that the sensitivity of PCR with each primer set was highly dependent on the sample preparation and DNA extraction method. The highest sensitivities for all the primers tested were determined using DNA extracted from whole blood samples, while the lowest sensitivities were obtained when DNA was extracted from filter paper preparations. To conclude, the obtained results are discussed and a protocol for diagnosis and surveillance of T. vivax trypanosomosis is recommended.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Haitao, E-mail: liaoht@cae.ac.cn

    The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time averaged quantities for chaotic dynamical systems. The key idea is to recast the time averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation which forms the augmented equations of motion is proposed to calculate the time averaged integration variable, and the sensitivity coefficients are obtained as a result of solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which is dependent on the final state of the Lagrange multipliers. The LU factorization technique to calculate the Lagrange multipliers leads to a better performance for the convergence problem and the computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed by using the direct differentiation sensitivity analysis method.

  13. Adjoint-based sensitivity analysis of low-order thermoacoustic networks using a wave-based approach

    NASA Astrophysics Data System (ADS)

    Aguilar, José G.; Magri, Luca; Juniper, Matthew P.

    2017-07-01

    Strict pollutant emission regulations are pushing gas turbine manufacturers to develop devices that operate in lean conditions, with the downside that combustion instabilities are more likely to occur. Methods to predict and control unstable modes inside combustion chambers have been developed in the last decades but, in some cases, they are computationally expensive. Sensitivity analysis aided by adjoint methods provides valuable sensitivity information at a low computational cost. This paper introduces adjoint methods and their application in wave-based low order network models, which are used as industrial tools, to predict and control thermoacoustic oscillations. Two thermoacoustic models of interest are analyzed. First, in the zero Mach number limit, a nonlinear eigenvalue problem is derived, and continuous and discrete adjoint methods are used to obtain the sensitivities of the system to small modifications. Sensitivities to base-state modification and feedback devices are presented. Second, a more general case with non-zero Mach number, a moving flame front and choked outlet, is presented. The influence of the entropy waves on the computed sensitivities is shown.

  14. A Bayesian Network Based Global Sensitivity Analysis Method for Identifying Dominant Processes in a Multi-physics Model

    NASA Astrophysics Data System (ADS)

    Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.

    2016-12-01

    Sensitivity analysis has been an important tool in groundwater modeling to identify the influential parameters. Among various sensitivity analysis methods, the variance-based global sensitivity analysis has gained popularity for its model independence characteristic and capability of providing accurate sensitivity measurements. However, the conventional variance-based method only considers uncertainty contribution of single model parameters. In this research, we extended the variance-based method to consider more uncertainty sources and developed a new framework to allow flexible combinations of different uncertainty components. We decompose the uncertainty sources into a hierarchical three-layer structure: scenario, model and parametric. Furthermore, each layer of uncertainty source is capable of containing multiple components. An uncertainty and sensitivity analysis framework was then constructed following this three-layer structure using Bayesian network. Different uncertainty components are represented as uncertain nodes in this network. Through the framework, variance-based sensitivity analysis can be implemented with great flexibility of using different grouping strategies for uncertainty components. The variance-based sensitivity analysis thus is improved to be able to investigate the importance of an extended range of uncertainty sources: scenario, model, and other different combinations of uncertainty components which can represent certain key model system processes (e.g., groundwater recharge process, flow reactive transport process). For test and demonstration purposes, the developed methodology was implemented into a test case of real-world groundwater reactive transport modeling with various uncertainty sources. The results demonstrate that the new sensitivity analysis method is able to estimate accurate importance measurements for any uncertainty sources which were formed by different combinations of uncertainty components. The new methodology can provide useful information for environmental management and decision-makers to formulate policies and strategies.

  15. Variational Methods in Sensitivity Analysis and Optimization for Aerodynamic Applications

    NASA Technical Reports Server (NTRS)

    Ibrahim, A. H.; Hou, G. J.-W.; Tiwari, S. N. (Principal Investigator)

    1996-01-01

    Variational method (VM) sensitivity analysis, the continuous alternative to discrete sensitivity analysis, is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations together with the converged solution of the costate equations are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The determination of the sensitivity derivatives of the performance index or functional entails the coupled solutions of the state and costate equations. As the stable and converged numerical solution of the costate equations with their boundary conditions is a priori unknown, numerical stability analysis is performed on both the state and costate equations. Thereafter, based on the amplification factors obtained by solving the generalized eigenvalue equations, the stability behavior of the costate equations is discussed and compared with that of the state (Euler) equations. The stability analysis suggests that a converged and stable solution of the costate equations is possible only if the computational domain of the costate equations is transformed to take into account their reverse-flow nature. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational methods offer a substantial gain in computational efficiency, i.e., computer time and memory, compared with finite difference sensitivity analysis.

  16. A PDE Sensitivity Equation Method for Optimal Aerodynamic Design

    NASA Technical Reports Server (NTRS)

    Borggaard, Jeff; Burns, John

    1996-01-01

    The use of gradient based optimization algorithms in inverse design is well established as a practical approach to aerodynamic design. A typical procedure uses a simulation scheme to evaluate the objective function (from the approximate states) and its gradient, then passes this information to an optimization algorithm. Once the simulation scheme (CFD flow solver) has been selected and used to provide approximate function evaluations, there are several possible approaches to the problem of computing gradients. One popular method is to differentiate the simulation scheme and compute design sensitivities that are then used to obtain gradients. Although this black-box approach has many advantages in shape optimization problems, one must compute mesh sensitivities in order to compute the design sensitivity. In this paper, we present an alternative approach using the PDE sensitivity equation to develop algorithms for computing gradients. This approach has the advantage that mesh sensitivities need not be computed. Moreover, when it is possible to use the CFD scheme for both the forward problem and the sensitivity equation, then there are computational advantages. An apparent disadvantage of this approach is that it does not always produce consistent derivatives. However, for a proper combination of discretization schemes, one can show asymptotic consistency under mesh refinement, which is often sufficient to guarantee convergence of the optimal design algorithm. In particular, we show that when asymptotically consistent schemes are combined with a trust-region optimization algorithm, the resulting optimal design method converges. We denote this approach as the sensitivity equation method. The sensitivity equation method is presented, convergence results are given and the approach is illustrated on two optimal design problems involving shocks.

  17. [Comparison of the Conventional Centrifuged and Filtrated Preparations in Urine Cytology].

    PubMed

    Sekita, Nobuyuki; Shimosakai, Hirofumi; Nishikawa, Rika; Sato, Hiroaki; Kouno, Hiroyoshi; Fujimura, Masaaki; Mikami, Kazuo

    2016-03-01

    The urine cytology test is one of the most important tools for the diagnosis of malignant urinary tract tumors. This test is also of great value for predicting malignancy. However, the sensitivity of this test is not high enough to screen for malignant cells. In our laboratory, we were able to attain high sensitivity in urine cytology tests after changing the urine sample preparation method. The differences in cytodiagnosis between the two methods are discussed here. From January 2012 to June 2013, 2,031 urine samples were prepared using the conventional centrifuge method (C method); and from September 2013 to March 2015, 2,453 urine samples were prepared using the filtration method (F method) for the cytology test. When samples classified as category 4 or 5 were defined as cytologically positive, the sensitivity of the test with samples prepared using the F method was significantly higher than with samples prepared using the C method (72% vs 28%, p<0.001). The number of cells on the glass slides prepared by the F method was significantly higher than that of the samples prepared by the C method (p<0.001). After introduction of the F method, the number of false-negative cases decreased in the urine cytology test because a larger number of cells was seen and easily detected as atypical or malignant epithelial cells. Therefore, this method has a higher sensitivity than the conventional C method, as the sensitivity of urine cytology tests relies partially on the number of cells visualized in the prepared samples.

  18. The PneuCarriage Project: A Multi-Centre Comparative Study to Identify the Best Serotyping Methods for Examining Pneumococcal Carriage in Vaccine Evaluation Studies.

    PubMed

    Satzke, Catherine; Dunne, Eileen M; Porter, Barbara D; Klugman, Keith P; Mulholland, E Kim

    2015-11-01

    The pneumococcus is a diverse pathogen whose primary niche is the nasopharynx. Over 90 different serotypes exist, and nasopharyngeal carriage of multiple serotypes is common. Understanding pneumococcal carriage is essential for evaluating the impact of pneumococcal vaccines. Traditional serotyping methods are cumbersome and insufficient for detecting multiple serotype carriage, and there are few data comparing the new methods that have been developed over the past decade. We established the PneuCarriage project, a large, international multi-centre study dedicated to the identification of the best pneumococcal serotyping methods for carriage studies. Reference sample sets were distributed to 15 research groups for blinded testing. Twenty pneumococcal serotyping methods were used to test 81 laboratory-prepared (spiked) samples. The five top-performing methods were used to test 260 nasopharyngeal (field) samples collected from children in six high-burden countries. Sensitivity and positive predictive value (PPV) were determined for the test methods and the reference method (traditional serotyping of >100 colonies from each sample). For the alternate serotyping methods, the overall sensitivity ranged from 1% to 99% (reference method 98%), and PPV from 8% to 100% (reference method 100%), when testing the spiked samples. Fifteen methods had ≥70% sensitivity to detect the dominant (major) serotype, whilst only eight methods had ≥70% sensitivity to detect minor serotypes. For the field samples, the overall sensitivity ranged from 74.2% to 95.8% (reference method 93.8%), and PPV from 82.2% to 96.4% (reference method 99.6%). The microarray had the highest sensitivity (95.8%) and high PPV (93.7%). The major limitation of this study is that not all of the available alternative serotyping methods were included. Most methods were able to detect the dominant serotype in a sample, but many performed poorly in detecting the minor serotype populations. Microarray with a culture amplification step was the top-performing method. Results from this comprehensive evaluation will inform future vaccine evaluation and impact studies, particularly in low-income settings, where pneumococcal disease burden remains high.

  19. RAMA casein zymography: Time-saving and highly sensitive casein zymography for MMP7 and trypsin.

    PubMed

    Yasumitsu, Hidetaro; Ozeki, Yasuhiro; Kanaly, Robert A

    2016-11-01

    To detect metalloproteinase-7 (MMP7), zymography is conducted using a casein substrate and conventional CBB stain. This approach is time-consuming and has low sensitivity. Previously, a sensitive method able to detect as little as 30 pg of MMP7 was reported; however, it required special substrates and complicated handling. RAMA casein zymography described herein is rapid, sensitive, and reproducible. By applying high-sensitivity staining under low-substrate conditions, the staining process is completed within 1 h and sensitivity is increased 100-fold. The method can detect 10 pg of MMP7 using commercially available casein without complicated handling. Moreover, it also increases detection sensitivity for trypsin. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. [An oral sensitization food allergy model in Brown-Norway rats].

    PubMed

    Huang, Juan; Zhong, Yan; Cai, Wei; Zhang, Hongbo

    2009-01-01

    To develop an oral-sensitized animal model of food allergy using Brown-Norway (BN) rats and evaluate the sensitivity of ELISA and passive cutaneous anaphylaxis (PCA) in detecting ovalbumin-specific IgE antibody (OVA-IgE) levels in sensitized animals. Sixteen 3-week-old female BN rats were randomly divided into 3 groups: a negative control group orally gavaged with saline, a positive control group sensitized by intraperitoneal injection of 0.1 mg/d ovalbumin (OVA), and a study group sensitized by daily gavage of 1 mg/d OVA. OVA-IgE was analyzed by ELISA and the PCA method at weeks 4, 5, 6, 7, 8 and 9. At week 13, the OVA-IgE level was analyzed after oral challenge with 1.0 ml of 100 mg/ml OVA. The ELISA results showed that the OVA-IgE level in the study group was significantly increased at weeks 6, 7 and 8 compared with the negative control group (P < 0.05), with the highest level found at week 6. There was no significant difference in OVA-IgE level between the study group and the positive control group. The sensitization rate in the study group was 60%, 80% and 80% at weeks 6, 7 and 8, respectively, which was similar to the positive control group. All PCA results in the study group were negative, while those in the positive control group were positive. Oral sensitization could be used as a suitable method to establish an animal model of food allergy, which is more comparable with the natural sensitization process in food allergy patients. The ELISA method is more sensitive than the PCA method in detecting OVA-IgE levels in this orally sensitized animal model.

  1. Sensitivity Analysis for some Water Pollution Problem

    NASA Astrophysics Data System (ADS)

    Le Dimet, François-Xavier; Tran Thu, Ha; Hussaini, Yousuff

    2014-05-01

    Sensitivity analysis employs a response function and the variable with respect to which its sensitivity is evaluated. If the state of the system is retrieved through a variational data assimilation process, then the observations appear only in the Optimality System (OS). In many cases, observations have errors and it is important to estimate their impact. Therefore, sensitivity analysis has to be carried out on the OS, and in that sense sensitivity analysis is a second-order property. The OS can be considered as a generalized model because it contains all the available information. This presentation proposes a general method to carry out sensitivity analysis, demonstrated with an application to a water pollution problem. The model involves the shallow water equations and an equation for the pollutant concentration. These equations are discretized using a finite volume method. The response function depends on the pollutant source, and its sensitivity with respect to the source term of the pollutant is studied. Specifically, we consider identification of unknown parameters, and identification of sources of pollution and sensitivity with respect to the sources. We also use a Singular Evolutive Interpolated Kalman Filter to study this problem. The presentation includes a comparison of the results from these two methods.

  2. Design component method for sensitivity analysis of built-up structures

    NASA Technical Reports Server (NTRS)

    Choi, Kyung K.; Seong, Hwai G.

    1986-01-01

    A 'design component method' that provides a unified and systematic organization of design sensitivity analysis for built-up structures is developed and implemented. Both conventional design variables, such as thickness and cross-sectional area, and shape design variables of components of built-up structures are considered. It is shown that design of components of built-up structures can be characterized and system design sensitivity expressions obtained by simply adding contributions from each component. The method leads to a systematic organization of computations for design sensitivity analysis that is similar to the way in which computations are organized within a finite element code.

  3. Three Methods for Estimating the Middle-Ear Muscle Reflex (MEMR) Using Otoacoustic Emission (OAE) Measurement Systems

    DTIC Science & Technology

    2014-10-01

    Three related methods for making this sensitive MEMR measurement using the OAE and MOCR measurement modules in the Mimosa Acoustics HearID system... All three methods can sensitively detect... without buying additional equipment or software. The purpose of this report is to document the methodology we have used since 2007 with Mimosa Acoustics...

  4. Enhanced Sensitivity to Detection Nanomolar Level of Cu2+ Compared to Spectrophotometry Method by Functionalized Gold Nanoparticles: Design of Sensor Assisted by Exploiting First-order Data with Chemometrics

    NASA Astrophysics Data System (ADS)

    Rasouli, Zolaikha; Ghavami, Raouf

    2018-02-01

    A simple, sensitive and efficient colorimetric assay platform for the determination of Cu2+ was proposed, with the aim of developing sensitive detection based on the aggregation of AuNPs in the presence of a histamine H2-receptor antagonist (famotidine, FAM) as the recognition site. This study is the first to demonstrate that the molar extinction coefficients of the complexes formed by FAM and Cu2+ are very low (shown by applying chemometric methods to the first-order data arising from measurements at different metal-to-ligand ratios), leading to the poor sensitivity of FAM-based assays. To resolve the problem of low sensitivity, a colorimetric method based on the Cu2+-induced aggregation of AuNPs functionalized with FAM was introduced. This procedure is accompanied by a color change from bright red to blue which can be observed with the naked eye. The detection sensitivity obtained by the developed method increased about 100-fold compared with the spectrophotometric method. The sensor exhibited a good linear relation between the absorbance ratio at 670 to 520 nm (A670/520) and concentration in the range 2-110 nM, with LOD = 0.76 nM. The satisfactory analytical performance of the proposed sensor facilitates the development of simple and affordable UV-Vis chemosensors for environmental applications.

  5. Glaucoma progression detection: agreement, sensitivity, and specificity of expert visual field evaluation, event analysis, and trend analysis.

    PubMed

    Antón, Alfonso; Pazos, Marta; Martín, Belén; Navero, José Manuel; Ayala, Miriam Eleonora; Castany, Marta; Martínez, Patricia; Bardavío, Javier

    2013-01-01

    To assess sensitivity, specificity, and agreement among automated event analysis, automated trend analysis, and expert evaluation to detect glaucoma progression. This was a prospective study that included 37 eyes with a follow-up of 36 months. All had glaucomatous disks and fields and performed reliable visual fields every 6 months. Each series of fields was assessed with 3 different methods: subjective assessment by 2 independent teams of glaucoma experts, glaucoma/guided progression analysis (GPA) event analysis, and GPA (visual field index-based) trend analysis. Kappa agreement coefficient between methods and sensitivity and specificity for each method using expert opinion as gold standard were calculated. The incidence of glaucoma progression was 16% to 18% in 3 years but only 3 cases showed progression with all 3 methods. Kappa agreement coefficient was high (k=0.82) between subjective expert assessment and GPA event analysis, and only moderate between these two and GPA trend analysis (k=0.57). Sensitivity and specificity for GPA event and GPA trend analysis were 71% and 96%, and 57% and 93%, respectively. The 3 methods detected similar numbers of progressing cases. The GPA event analysis and expert subjective assessment showed high agreement between them and moderate agreement with GPA trend analysis. In a period of 3 years, both methods of GPA analysis offered high specificity, event analysis showed 83% sensitivity, and trend analysis had a 66% sensitivity.
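
    As a reminder of how the agreement and accuracy figures quoted in records like this one are obtained, the short helper below computes sensitivity, specificity and Cohen's kappa from a single 2x2 cross-tabulation of a test method against a reference (gold standard). The counts in the example call are invented for illustration and are not the study's data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and Cohen's kappa from a 2x2 table
    (rows: method under test; columns: reference / gold standard)."""
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    p_obs = (tp + tn) / n                                           # observed agreement
    p_exp = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2  # chance agreement
    kappa = (p_obs - p_exp) / (1 - p_exp)
    return sensitivity, specificity, kappa

# Hypothetical counts, not the study's data: 5 progressing eyes detected,
# 2 missed, 1 false alarm, 29 stable eyes correctly classified.
print(diagnostic_metrics(tp=5, fp=1, fn=2, tn=29))
```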

  6. Locally refined block-centred finite-difference groundwater models: Evaluation of parameter sensitivity and the consequences for inverse modelling

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2002-01-01

    Models with local grid refinement, as often required in groundwater models, pose special problems for model calibration. This work investigates the calculation of sensitivities and the performance of regression methods using two existing and one new method of grid refinement. The existing local grid refinement methods considered are: (a) a variably spaced grid in which the grid spacing becomes smaller near the area of interest and larger where such detail is not needed, and (b) telescopic mesh refinement (TMR), which uses the hydraulic heads or fluxes of a regional model to provide the boundary conditions for a locally refined model. The new method has a feedback between the regional and local grids using shared nodes, and thereby, unlike the TMR methods, balances heads and fluxes at the interfacing boundary. Results for sensitivities are compared for the three methods, and the effect of the accuracy of the sensitivity calculations is evaluated by comparing inverse modelling results. For the cases tested, results indicate that the inaccuracies of the sensitivities calculated using the TMR approach can cause the inverse model to converge to an incorrect solution.

  7. Locally refined block-centered finite-difference groundwater models: Evaluation of parameter sensitivity and the consequences for inverse modelling and predictions

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2002-01-01

    Models with local grid refinement, as often required in groundwater models, pose special problems for model calibration. This work investigates the calculation of sensitivities and the performance of regression methods using two existing and one new method of grid refinement. The existing local grid refinement methods considered are (1) a variably spaced grid in which the grid spacing becomes smaller near the area of interest and larger where such detail is not needed and (2) telescopic mesh refinement (TMR), which uses the hydraulic heads or fluxes of a regional model to provide the boundary conditions for a locally refined model. The new method has a feedback between the regional and local grids using shared nodes, and thereby, unlike the TMR methods, balances heads and fluxes at the interfacing boundary. Results for sensitivities are compared for the three methods, and the effect of the accuracy of the sensitivity calculations is evaluated by comparing inverse modelling results. For the cases tested, results indicate that the inaccuracies of the sensitivities calculated using the TMR approach can cause the inverse model to converge to an incorrect solution.

  8. A Fast, Accurate and Sensitive GC-FID Method for the Analyses of Glycols in Water and Urine

    NASA Technical Reports Server (NTRS)

    Kuo, C. Mike; Alverson, James T.; Gazda, Daniel B.

    2017-01-01

    Glycols, specifically ethylene glycol and 1,2-propanediol, are some of the major organic compounds found in the humidity condensate samples collected on the International Space Station. The current analytical method for glycols is a GC/MS method with direct sample injection. This method is simple and fast, but it is not very sensitive; reporting limits for ethylene glycol and 1,2-propanediol are only 1 ppm. A much more sensitive GC/FID method was developed, in which glycols were derivatized with benzoyl chloride for 10 minutes before being extracted with hexane. Using 1,3-propanediol as an internal standard, the detection limit for the GC/FID method was determined to be 50 ppb, and the analysis takes only 7 minutes. Data from the GC/MS and the new GC/FID methods show excellent agreement with each other. Factors affecting the sensitivity, including sample volume, NaOH concentration and volume, volume of benzoyl chloride, and reaction time and temperature, were investigated. Interferences during derivatization and possible methods to reduce them were also investigated.

  9. General methods for sensitivity analysis of equilibrium dynamics in patch occupancy models

    USGS Publications Warehouse

    Miller, David A.W.

    2012-01-01

    Sensitivity analysis is a useful tool for the study of ecological models that has many potential applications for patch occupancy modeling. Drawing from the rich foundation of existing methods for Markov chain models, I demonstrate new methods for sensitivity analysis of the equilibrium state dynamics of occupancy models. Estimates from three previous studies are used to illustrate the utility of the sensitivity calculations: a joint occupancy model for a prey species, its predators, and habitat used by both; occurrence dynamics from a well-known metapopulation study of three butterfly species; and Golden Eagle occupancy and reproductive dynamics. I show how to deal efficiently with multistate models and how to calculate sensitivities involving derived state variables and lower-level parameters. In addition, I extend methods to incorporate environmental variation by allowing for spatial and temporal variability in transition probabilities. The approach used here is concise and general and can fully account for environmental variability in transition parameters. The methods can be used to improve inferences in occupancy studies by quantifying the effects of underlying parameters, aiding prediction of future system states, and identifying priorities for sampling effort.
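
    To give a concrete flavour of the equilibrium-sensitivity calculations described here, the sketch below works with the simplest single-species patch model, whose equilibrium occupancy is psi* = gamma/(gamma + eps) for colonization rate gamma and extinction rate eps, and checks the analytical derivatives numerically. The parameter values are arbitrary assumptions; the multistate and environmental-variation extensions of the record are not covered.

```python
def equilibrium_occupancy(gamma, eps):
    """Equilibrium occupancy of the two-state patch model
    psi_{t+1} = psi_t * (1 - eps) + (1 - psi_t) * gamma."""
    return gamma / (gamma + eps)

gamma, eps, h = 0.2, 0.1, 1e-7

# Analytical sensitivities of psi* = gamma / (gamma + eps)
d_psi_d_gamma = eps / (gamma + eps) ** 2
d_psi_d_eps = -gamma / (gamma + eps) ** 2

# Finite-difference checks
fd_gamma = (equilibrium_occupancy(gamma + h, eps) - equilibrium_occupancy(gamma, eps)) / h
fd_eps = (equilibrium_occupancy(gamma, eps + h) - equilibrium_occupancy(gamma, eps)) / h
print(d_psi_d_gamma, fd_gamma)
print(d_psi_d_eps, fd_eps)
```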

  10. A geostatistics-informed hierarchical sensitivity analysis method for complex groundwater flow and transport modeling: GEOSTATISTICAL SENSITIVITY ANALYSIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Heng; Chen, Xingyuan; Ye, Ming

    Sensitivity analysis is an important tool for quantifying uncertainty in the outputs of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a hierarchical sensitivity analysis method that (1) constructs an uncertainty hierarchy by analyzing the input uncertainty sources, and (2) accounts for the spatial correlation among parameters at each level of the hierarchy using geostatistical tools. The contribution of the uncertainty source at each hierarchy level is measured by sensitivity indices calculated using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and the permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally as driven by the dynamic interaction between groundwater and river water at the site. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed parameters.

  11. Phenotypic detection of broad-spectrum beta-lactamases in microbiological practice.

    PubMed

    Htoutou Sedlakova, Miroslava; Hanulik, Vojtech; Chroma, Magdalena; Hricova, Kristyna; Kolar, Milan; Latal, Tomas; Schaumann, Reiner; Rodloff, Arne C

    2011-05-01

    Enterobacteriaceae producing ESBL and AmpC enzymes can be associated with failure of antibiotic therapy and related morbidity and mortality. Their routine detection in microbiology laboratories is still a problem. The aim of this study was to compare the sensitivity of selected phenotypic methods. A total of 106 strains of the Enterobacteriaceae family were tested, in which molecular biology methods confirmed the presence of genes encoding ESBL or AmpC. In ESBL-positive strains, the sensitivities of the ESBL Etest (AB Biodisk) and a modified double-disk synergy test (DDST) were evaluated. AmpC strains were tested by a modified AmpC disk method using 3-aminophenylboronic acid. For simultaneous detection of ESBL and AmpC, the microdilution method with a modified set of antimicrobial agents was used. The sensitivity of the ESBL Etest was 95%; the modified DDST yielded 100% sensitivity for ESBL producers, and the AmpC test correctly detected 95% of AmpC-positive strains. The sensitivity of the modified microdilution method was 87% and 95% for ESBL and AmpC beta-lactamases, respectively. The detection of ESBL and AmpC beta-lactamases should be based on specific phenotypic methods such as the modified DDST, ESBL Etest, AmpC disk test and the modified microdilution method.

  12. Applying under-sampling techniques and cost-sensitive learning methods on risk assessment of breast cancer.

    PubMed

    Hsu, Jia-Lien; Hung, Ping-Cheng; Lin, Hung-Yen; Hsieh, Chung-Ho

    2015-04-01

    Breast cancer is one of the most common causes of cancer mortality. Early detection through mammography screening could significantly reduce mortality from breast cancer. However, most screening methods consume large amounts of resources. We propose a computational model, based solely on personal health information, for breast cancer risk assessment. Our model can serve as a pre-screening program in low-cost settings. In our study, the data set, consisting of 3,976 records, was collected from Taipei City Hospital from 2008.1.1 to 2008.12.31. Based on this dataset, we first apply sampling techniques and a dimension reduction method to preprocess the data. Then, we construct various kinds of classifiers (including basic classifiers, ensemble methods, and cost-sensitive methods) to predict the risk. The cost-sensitive method with a random forest classifier is able to achieve a recall (or sensitivity) of 100%. At a recall of 100%, the precision (positive predictive value, PPV) and specificity of the cost-sensitive method with the random forest classifier were 2.9% and 14.87%, respectively. In our study, we build a breast cancer risk assessment model using data mining techniques. Our model has the potential to serve as an assisting tool in breast cancer screening.

  13. Highly Sensitive GMO Detection Using Real-Time PCR with a Large Amount of DNA Template: Single-Laboratory Validation.

    PubMed

    Mano, Junichi; Hatano, Shuko; Nagatomi, Yasuaki; Futo, Satoshi; Takabatake, Reona; Kitta, Kazumi

    2018-03-01

    Current genetically modified organism (GMO) detection methods allow for sensitive detection. However, a further increase in sensitivity will enable more efficient testing for large grain samples and reliable testing for processed foods. In this study, we investigated real-time PCR-based GMO detection methods using a large amount of DNA template. We selected target sequences that are commonly introduced into many kinds of GM crops, i.e., 35S promoter and nopaline synthase (NOS) terminator. This makes the newly developed method applicable to a wide range of GMOs, including some unauthorized ones. The estimated LOD of the new method was 0.005% of GM maize events; to the best of our knowledge, this method is the most sensitive among the GM maize detection methods for which the LOD was evaluated in terms of GMO content. A 10-fold increase in the DNA amount as compared with the amount used under common testing conditions gave an approximately 10-fold reduction in the LOD without PCR inhibition. Our method is applicable to various analytical samples, including processed foods. The use of other primers and fluorescence probes would permit highly sensitive detection of various recombinant DNA sequences besides the 35S promoter and NOS terminator.

  14. Alignment-free microbial phylogenomics under scenarios of sequence divergence, genome rearrangement and lateral genetic transfer.

    PubMed

    Bernard, Guillaume; Chan, Cheong Xin; Ragan, Mark A

    2016-07-01

    Alignment-free (AF) approaches have recently been highlighted as alternatives to methods based on multiple sequence alignment in phylogenetic inference. However, the sensitivity of AF methods to genome-scale evolutionary scenarios is little known. Here, using simulated microbial genome data we systematically assess the sensitivity of nine AF methods to three important evolutionary scenarios: sequence divergence, lateral genetic transfer (LGT) and genome rearrangement. Among these, AF methods are most sensitive to the extent of sequence divergence, less sensitive to low and moderate frequencies of LGT, and most robust against genome rearrangement. We describe the application of AF methods to three well-studied empirical genome datasets, and introduce a new application of the jackknife to assess node support. Our results demonstrate that AF phylogenomics is computationally scalable to multi-genome data and can generate biologically meaningful phylogenies and insights into microbial evolution.
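
    For readers unfamiliar with alignment-free comparison, the sketch below shows one of the simplest AF distances, the Jaccard distance between k-mer sets; it is meant only to convey the idea and is not necessarily one of the nine benchmarked methods. The toy sequences and the choice k = 4 are arbitrary assumptions.

```python
def kmers(seq, k=4):
    """Set of overlapping k-mers in a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def jaccard_distance(a, b, k=4):
    """1 - |intersection| / |union| of k-mer sets: a simple alignment-free distance."""
    ka, kb = kmers(a, k), kmers(b, k)
    return 1.0 - len(ka & kb) / len(ka | kb)

# Toy sequences (invented for illustration)
s1 = "ATGCGTACGTTAGCATGCGTACGTTAGC"
s2 = "ATGCGTACGATAGCATGCGTACGTTAGC"   # one substitution relative to s1
s3 = "TTTTAAAACCCCGGGGTTTTAAAACCCC"
print(jaccard_distance(s1, s2), jaccard_distance(s1, s3))
```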

  15. Demodulation method for tilted fiber Bragg grating refractometer with high sensitivity

    NASA Astrophysics Data System (ADS)

    Pham, Xuantung; Si, Jinhai; Chen, Tao; Wang, Ruize; Yan, Lihe; Cao, Houjun; Hou, Xun

    2018-05-01

    In this paper, we propose a demodulation method for refractive index (RI) sensing with tilted fiber Bragg gratings (TFBGs). It operates by monitoring the TFBG cladding mode resonance "cut-off wavelengths." The idea of a "cut-off wavelength" and its determination method are introduced. The RI sensitivities of TFBGs are significantly enhanced in certain RI ranges by using our demodulation method. The temperature-induced cross sensitivity is eliminated. We also demonstrate a parallel-double-angle TFBG (PDTFBG), in which two individual TFBGs are inscribed in the fiber core in parallel using a femtosecond laser and a phase mask. The RI sensing range of the PDTFBG is significantly broader than that of a conventional single-angle TFBG. In addition, its RI sensitivity can reach 1023.1 nm/refractive index unit in the 1.4401-1.4570 RI range when our proposed demodulation method is used.

  16. Rapid concentration and sensitive detection of hookworm ova from wastewater matrices using a real-time PCR method.

    PubMed

    Gyawali, P; Sidhu, J P S; Ahmed, W; Jagals, P; Toze, S

    2015-12-01

    The risk of human hookworm infections from land application of wastewater matrices could be high in regions with high hookworm prevalence. A rapid, sensitive and specific hookworm detection method from wastewater matrices is required in order to assess human health risks. Currently available methods used to identify hookworm ova to the species level are time consuming and lack accuracy. In this study, a real-time PCR method was developed for the rapid, sensitive and specific detection of canine hookworm (Ancylostoma caninum) ova from wastewater matrices. A. caninum was chosen because of its morphological similarity to the human hookworm (Ancylostoma duodenale and Necator americanus). The newly developed PCR method has high detection sensitivity with the ability to detect less than one A. caninum ova from 1 L of secondary treated wastewater at the mean threshold cycle (CT) values ranging from 30.1 to 34.3. The method is also able to detect four A. caninum ova from 1 L of raw wastewater and from ∼4 g of treated sludge with mean CT values ranging from 35.6 to 39.8 and 39.8 to 39.9, respectively. The better detection sensitivity obtained for secondary treated wastewater compared to raw wastewater and sludge samples could be attributed to sample turbidity. The proposed method appears to be rapid, sensitive and specific compared to traditional methods and has potential to aid in the public health risk assessment associated with land application of wastewater matrices. Furthermore, the method can be adapted to detect other helminth ova of interest from wastewater matrices. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.

  17. Dynamic sensitivity analysis of biological systems

    PubMed Central

    Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang

    2008-01-01

    Background A mathematical model to understand, predict, control, or even design a real biological system is a central theme in systems biology. A dynamic biological system is always modeled as a nonlinear ordinary differential equation (ODE) system. How to simulate the dynamic behavior and dynamic parameter sensitivities of systems described by ODEs efficiently and accurately is a critical job. In many practical applications, e.g., fed-batch fermentation systems, the admissible system input (corresponding to independent variables of the system) can be time-dependent. The main difficulty in investigating the dynamic log gains of these systems is the infinite dimension due to the time-dependent input. Classical dynamic sensitivity analysis does not take this case into account for the dynamic log gains. Results We present an algorithm with adaptive step size control that can be used for computing the solution and dynamic sensitivities of an autonomous ODE system simultaneously. Although our algorithm is one of the decoupled direct methods for computing dynamic sensitivities of an ODE system, the step size determined by the model equations can be used for the computations of the time profile and dynamic sensitivities with moderate accuracy even when the sensitivity equations are stiffer than the model equations. To show that this algorithm can perform dynamic sensitivity analysis on very stiff ODE systems with moderate accuracy, it is implemented and applied to two sets of chemical reactions: pyrolysis of ethane and oxidation of formaldehyde. The accuracy of this algorithm is demonstrated by comparing the dynamic parameter sensitivities obtained from this new algorithm and from the direct method with a Rosenbrock stiff integrator based on the indirect method. The same dynamic sensitivity analysis was performed on an ethanol fed-batch fermentation system with a time-varying feed rate to evaluate the applicability of the algorithm to realistic models with time-dependent admissible input. Conclusion By combining the demonstrated accuracy with the efficiency of a decoupled direct method, our algorithm is an excellent method for computing dynamic parameter sensitivities in stiff problems. We extend the scope of classical dynamic sensitivity analysis to the investigation of dynamic log gains of models with time-dependent admissible input. PMID:19091016
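
    As a minimal illustration of forward (direct-method) parameter sensitivities, the sketch below augments a one-parameter decay ODE with its sensitivity equation, integrates both together, and checks the result analytically. The model, parameter value and solver tolerances are illustrative assumptions; the decoupled stepping scheme and the stiff reaction systems of the record are not reproduced.

```python
import numpy as np
from scipy.integrate import solve_ivp

k, x0 = 0.5, 1.0   # rate constant and initial condition (illustrative)

def augmented(t, z):
    # z = [x, s] with s = dx/dk; sensitivity equation: ds/dt = (df/dx) s + df/dk = -k*s - x
    x, s = z
    return [-k * x, -k * s - x]

sol = solve_ivp(augmented, (0.0, 10.0), [x0, 0.0], rtol=1e-8, atol=1e-10,
                t_eval=np.linspace(0.0, 10.0, 6))

# Analytical check: x = x0*exp(-k*t) gives dx/dk = -t*x0*exp(-k*t)
print(sol.y[1])
print(-sol.t * x0 * np.exp(-k * sol.t))
```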

  18. Development of the High-Order Decoupled Direct Method in Three Dimensions for Particulate Matter: Enabling Advanced Sensitivity Analysis in Air Quality Models

    EPA Science Inventory

    The high-order decoupled direct method in three dimensions for particulate matter (HDDM-3D/PM) has been implemented in the Community Multiscale Air Quality (CMAQ) model to enable advanced sensitivity analysis. The major effort of this work is to develop high-order DDM sensitivity...

  19. Multi-capillary based optical sensors for highly sensitive protein detection

    NASA Astrophysics Data System (ADS)

    Okuyama, Yasuhira; Katagiri, Takashi; Matsuura, Yuji

    2017-04-01

    A fluorescence measuring method based on a glass multi-capillary for detecting trace amounts of proteins is proposed. It promises enhanced sensitivity owing to the expanded adsorption area and longitudinal excitation. The sensitivity behavior of this method was investigated using biotin-streptavidin binding. According to the experimental examinations, the sensitivity was improved by a factor of 70 compared with common glass wells. We also confirmed that our measuring system could detect 1 pg/mL of streptavidin. These results suggest that the multi-capillary has potential as a highly sensitive biosensor.

  20. Global Sensitivity Applied to Dynamic Combined Finite Discrete Element Methods for Fracture Simulation

    NASA Astrophysics Data System (ADS)

    Godinez, H. C.; Rougier, E.; Osthus, D.; Srinivasan, G.

    2017-12-01

    Fracture propagation plays a key role in a number of applications of interest to the scientific community. From dynamic fracture processes such as spall and fragmentation in metals to the detection of gas flow through static fractures in rock and the subsurface, the dynamics of fracture propagation is important to various engineering and scientific disciplines. In this work we apply a global sensitivity analysis to the Hybrid Optimization Software Suite (HOSS), a multi-physics software tool based on the combined finite-discrete element method that is used to describe material deformation and failure (i.e., fracture and fragmentation) under a number of user-prescribed boundary conditions. We explore the sensitivity of HOSS to various model parameters that influence how fractures propagate through a material of interest. These parameters control the softening curve that the model relies on to determine fractures within each element of the mesh, as well as other internal parameters that influence fracture behavior. The sensitivity method we apply is the Fourier Amplitude Sensitivity Test (FAST), a global sensitivity method used to explore how each parameter influences the modeled fractures and to determine the key parameters that have the most impact on the model. We present several sensitivity experiments for different combinations of model parameters and compare against experimental data for verification.

  1. Evolution of Geometric Sensitivity Derivatives from Computer Aided Design Models

    NASA Technical Reports Server (NTRS)

    Jones, William T.; Lazzara, David; Haimes, Robert

    2010-01-01

    The generation of design parameter sensitivity derivatives is required for gradient-based optimization. Such sensitivity derivatives are elusive at best when working with geometry defined within the solid modeling context of Computer-Aided Design (CAD) systems. Solid modeling CAD systems are often proprietary and always complex, thereby necessitating ad hoc procedures to infer parameter sensitivity. A new perspective is presented that makes direct use of the hierarchical associativity of CAD features to trace their evolution and thereby track design parameter sensitivity. In contrast to ad hoc methods, this method provides a more concise procedure following the model design intent and determining the sensitivity of CAD geometry directly to its respective defining parameters.

  2. Dual sensitivity mode system for monitoring processes and sensors

    DOEpatents

    Wilks, Alan D.; Wegerich, Stephan W.; Gross, Kenneth C.

    2000-01-01

    A method and system for analyzing a source of data. The system and method involves initially training a system using a selected data signal, calculating at least two levels of sensitivity using a pattern recognition methodology, activating a first mode of alarm sensitivity to monitor the data source, activating a second mode of alarm sensitivity to monitor the data source and generating a first alarm signal upon the first mode of sensitivity detecting an alarm condition and a second alarm signal upon the second mode of sensitivity detecting an associated alarm condition. The first alarm condition and second alarm condition can be acted upon by an operator and/or analyzed by a specialist or computer program.

  3. Development of a Sensitive Luciferase-Based Sandwich ELISA System for the Detection of Human Extracellular Matrix 1 Protein.

    PubMed

    Li, Ya; Li, Yanqing; Zhao, Junli; Zheng, Xiaojing; Mao, Qinwen; Xia, Haibin

    2016-12-01

    Enzyme-linked immunosorbent assay (ELISA) has been one of the main methods for detecting an antigen in an aqueous sample for more than four decades. One of the biggest remaining concerns for ELISA is how to improve the sensitivity of the assay, and the luciferase-luciferin reaction system has attracted attention as a highly sensitive detection method. In this study, a luciferin-luciferase reaction system was used as the detection method for a sandwich ELISA system. This new system led to at least a twofold increase in detection sensitivity compared with the traditional horseradish peroxidase (HRP) detection method. Lastly, the serum levels of the human extracellular matrix 1 protein in breast cancer patients were determined by the new system and were overall similar to those obtained with the HRP chemiluminescent system. Furthermore, this new luciferase reporter can be implemented in other ELISA systems for the purpose of increasing assay sensitivity.

  4. On the Exploitation of Sensitivity Derivatives for Improving Sampling Methods

    NASA Technical Reports Server (NTRS)

    Cao, Yanzhao; Hussaini, M. Yousuff; Zang, Thomas A.

    2003-01-01

    Many application codes, such as finite-element structural analyses and computational fluid dynamics codes, are capable of producing many sensitivity derivatives at a small fraction of the cost of the underlying analysis. This paper describes a simple variance reduction method that exploits such inexpensive sensitivity derivatives to increase the accuracy of sampling methods. Three examples, including a finite-element structural analysis of an aircraft wing, are provided that illustrate an order of magnitude improvement in accuracy for both Monte Carlo and stratified sampling schemes.
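
    One standard way to exploit inexpensive sensitivity derivatives in sampling, in the spirit of this record though not necessarily its exact scheme, is to use the first-order Taylor expansion of the response as a control variate whose mean under the input distribution is known exactly. The toy response, gradient and input distribution below are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy response and gradient, stand-ins for an expensive analysis code and its
# inexpensive sensitivity derivatives (both invented for illustration).
def f(x):
    return np.exp(0.3 * x[:, 0]) + np.sin(x[:, 1])

def grad_f(mu):
    return np.array([0.3 * np.exp(0.3 * mu[0]), np.cos(mu[1])])

mu, sigma, n = np.array([0.0, 0.0]), 0.5, 2000
X = mu + sigma * rng.standard_normal((n, 2))
y = f(X)

# First-order Taylor surrogate as a control variate; its mean equals f(mu)
# exactly for inputs distributed symmetrically about mu.
f_mu = float(f(mu[None, :])[0])
g = f_mu + (X - mu) @ grad_f(mu)
beta = np.cov(y, g)[0, 1] / np.var(g)     # near-optimal control-variate coefficient
y_cv = y - beta * (g - f_mu)

print("plain MC :", y.mean(), "+/-", y.std(ddof=1) / np.sqrt(n))
print("with CV  :", y_cv.mean(), "+/-", y_cv.std(ddof=1) / np.sqrt(n))
```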

  5. LLNA variability: An essential ingredient for a comprehensive assessment of non-animal skin sensitization test methods and strategies.

    PubMed

    Hoffmann, Sebastian

    2015-01-01

    The development of non-animal skin sensitization test methods and strategies is quickly progressing. Either individually or in combination, their predictive capacity is usually described in comparison to local lymph node assay (LLNA) results. In this process, the important lesson from other endpoints, such as skin or eye irritation, to account for the variability of reference test results - here the LLNA - has not yet been fully acknowledged. In order to provide assessors as well as method and strategy developers with appropriate estimates, we investigated the variability of EC3 values from repeated substance testing using the publicly available NICEATM (NTP Interagency Center for the Evaluation of Alternative Toxicological Methods) LLNA database. Repeat experiments for more than 60 substances were analyzed - once taking the vehicle into account and once combining data over all vehicles. In general, variability was higher when different vehicles were used. In terms of skin sensitization potential, i.e., discriminating sensitizers from non-sensitizers, the false positive rate ranged from 14-20%, while the false negative rate was 4-5%. In terms of skin sensitization potency, the rate of assigning a substance to the next higher or next lower potency class was approximately 10-15%. In addition, general estimates of EC3 variability are provided that can be used for modelling purposes. With our analysis we stress the importance of considering LLNA variability in the assessment of skin sensitization test methods and strategies and provide estimates thereof.

  6. Functional-diversity indices can be driven by methodological choices and species richness.

    PubMed

    Poos, Mark S; Walker, Steven C; Jackson, Donald A

    2009-02-01

    Functional diversity is an important concept in community ecology because it captures information on functional traits absent in measures of species diversity. One popular method of measuring functional diversity is the dendrogram-based method, FD. To calculate FD, a variety of methodological choices are required, and it has been debated about whether biological conclusions are sensitive to such choices. We studied the probability that conclusions regarding FD were sensitive, and that patterns in sensitivity were related to alpha and beta components of species richness. We developed a randomization procedure that iteratively calculated FD by assigning species into two assemblages and calculating the probability that the community with higher FD varied across methods. We found evidence of sensitivity in all five communities we examined, ranging from a probability of sensitivity of 0 (no sensitivity) to 0.976 (almost completely sensitive). Variations in these probabilities were driven by differences in alpha diversity between assemblages and not by beta diversity. Importantly, FD was most sensitive when it was most useful (i.e., when differences in alpha diversity were low). We demonstrate that trends in functional-diversity analyses can be largely driven by methodological choices or species richness, rather than functional trait information alone.

  7. Improving multi-objective reservoir operation optimization with sensitivity-informed problem decomposition

    NASA Astrophysics Data System (ADS)

    Chu, J. G.; Zhang, C.; Fu, G. T.; Li, Y.; Zhou, H. C.

    2015-04-01

    This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce the computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed problem decomposition dramatically reduces the computational demands required for attaining high quality approximations of optimal tradeoff relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed problem decomposition and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform problem decomposition when solving the complex multi-objective reservoir operation problems.

  8. Validation of laboratory-scale recycling test method of paper PSA label products

    Treesearch

    Carl Houtman; Karen Scallon; Richard Oldack

    2008-01-01

    Starting with test methods and a specification developed by the U.S. Postal Service (USPS) Environmentally Benign Pressure Sensitive Adhesive Postage Stamp Program, a laboratory-scale test method and a specification were developed and validated for pressure-sensitive adhesive labels. By comparing results from this new test method and pilot-scale tests, which have been...

  9. Development of a high sensitivity pinhole type gamma camera using semiconductors for low dose rate fields

    NASA Astrophysics Data System (ADS)

    Ueno, Yuichiro; Takahashi, Isao; Ishitsu, Takafumi; Tadokoro, Takahiro; Okada, Koichi; Nagumo, Yasushi; Fujishima, Yasutake; Yoshida, Akira; Umegaki, Kikuo

    2018-06-01

    We developed a pinhole type gamma camera, using a compact detector module with a pixelated CdTe semiconductor, which has suitable sensitivity and quantitative accuracy for low dose rate fields. In order to improve the sensitivity of the pinhole type semiconductor gamma camera, we adopted three methods: a signal processing method that allows the discrimination level to be set lower, a high-sensitivity pinhole collimator, and a smoothing image filter that improves the efficiency of source identification. We tested the basic performance of the developed gamma camera and carefully examined the effects of the three methods. From the sensitivity test, we found that the effective sensitivity was about 21 times higher than that of the gamma camera we had previously developed for high dose rate fields. We confirmed that the gamma camera had sufficient sensitivity and high quantitative accuracy; for example, a weak hot spot (0.9 μSv/h) around a tree root could be detected within 45 min in a low dose rate field test, and errors of measured dose rates with point sources were less than 7% in a dose rate accuracy test.

  10. Method selection and adaptation for distributed monitoring of infectious diseases for syndromic surveillance.

    PubMed

    Xing, Jian; Burkom, Howard; Tokars, Jerome

    2011-12-01

    Automated surveillance systems require statistical methods to recognize increases in visit counts that might indicate an outbreak. In prior work we presented methods to enhance the sensitivity of C2, a commonly used time series method. In this study, we compared the enhanced C2 method with five regression models. We used emergency department chief complaint data from US CDC BioSense surveillance system, aggregated by city (total of 206 hospitals, 16 cities) during 5/2008-4/2009. Data for six syndromes (asthma, gastrointestinal, nausea and vomiting, rash, respiratory, and influenza-like illness) was used and was stratified by mean count (1-19, 20-49, ≥50 per day) into 14 syndrome-count categories. We compared the sensitivity for detecting single-day artificially-added increases in syndrome counts. Four modifications of the C2 time series method, and five regression models (two linear and three Poisson), were tested. A constant alert rate of 1% was used for all methods. Among the regression models tested, we found that a Poisson model controlling for the logarithm of total visits (i.e., visits both meeting and not meeting a syndrome definition), day of week, and 14-day time period was best. Among 14 syndrome-count categories, time series and regression methods produced approximately the same sensitivity (<5% difference) in 6; in six categories, the regression method had higher sensitivity (range 6-14% improvement), and in two categories the time series method had higher sensitivity. When automated data are aggregated to the city level, a Poisson regression model that controls for total visits produces the best overall sensitivity for detecting artificially added visit counts. This improvement was achieved without increasing the alert rate, which was held constant at 1% for all methods. These findings will improve our ability to detect outbreaks in automated surveillance system data. Published by Elsevier Inc.
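
    The regression approach described here can be sketched with standard tools: a Poisson GLM of daily syndrome counts on the log of total visits, day of week, and a 14-day period indicator, with an alert raised when the observed count exceeds the 99th percentile of the fitted Poisson distribution. The data below are simulated and the variable names are assumptions; this is not the BioSense data nor the exact CDC implementation.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import poisson

rng = np.random.default_rng(2)

# Simulated daily counts for one city (illustrative, not BioSense data).
days = pd.date_range("2008-05-01", periods=365, freq="D")
total_visits = rng.poisson(800, size=len(days))
dow = days.dayofweek
period = (np.arange(len(days)) // 14).astype(str)          # 14-day time period
lam = np.exp(-3.0 + 1.0 * np.log(total_visits) - 0.05 * (dow >= 5))
df = pd.DataFrame({"syndrome": rng.poisson(lam),
                   "log_total": np.log(total_visits),
                   "dow": dow.astype(str),
                   "period": period})

# Poisson regression controlling for log(total visits), day of week and 14-day period.
fit = smf.glm("syndrome ~ log_total + C(dow) + C(period)",
              data=df, family=sm.families.Poisson()).fit()

# Flag an alert when the observed count exceeds the 99th percentile of the
# fitted Poisson distribution (roughly a 1% alert rate under the model).
mu_hat = fit.predict(df)
alerts = df["syndrome"] > poisson.ppf(0.99, mu_hat)
print(int(alerts.sum()), "alert days out of", len(df))
```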

  11. Temperature sensitive surfaces and methods of making same

    DOEpatents

    Liang, Liang [Richland, WA; Rieke, Peter C [Pasco, WA; Alford, Kentin L [Pasco, WA

    2002-09-10

    Poly-n-isopropylacrylamide surface coatings demonstrate the useful property of being able to switch characteristics depending upon temperature. More specifically, these coatings switch from being hydrophilic at low temperature to hydrophobic at high temperature. Research has been conducted for many years to better characterize and control the properties of temperature-sensitive coatings. The present invention provides novel temperature-sensitive coatings on articles and novel methods of making temperature-sensitive coatings that are disposed on the surfaces of various articles. These novel coatings contain the reaction products of n-isopropylacrylamide and are characterized by their properties such as advancing contact angles. Numerous other characteristics such as coating thickness, surface roughness, and hydrophilic-to-hydrophobic transition temperatures are also described. The present invention includes articles having temperature-sensitive coatings with improved properties as well as improved methods for forming temperature-sensitive coatings.

  12. Design sensitivity analysis with Applicon IFAD using the adjoint variable method

    NASA Technical Reports Server (NTRS)

    Frederick, Marjorie C.; Choi, Kyung K.

    1984-01-01

    A numerical method is presented to implement structural design sensitivity analysis using the versatility and convenience of existing finite element structural analysis programs and the theoretical foundation of structural design sensitivity analysis. Conventional design variables, such as thickness and cross-sectional areas, are considered. Structural performance functionals considered include compliance, displacement, and stress. It is shown that calculations can be carried out outside existing finite element codes, using postprocessing data only. That is, design sensitivity analysis software does not have to be embedded in an existing finite element code. The finite element structural analysis program used in the implementation presented is IFAD. Feasibility of the method is shown through analysis of several problems, including built-up structures. Accurate design sensitivity results are obtained without the uncertainty of numerical accuracy associated with selection of a finite difference perturbation.
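
    As a minimal illustration of the adjoint variable idea for a compliance functional, the sketch below uses a two-bar (series spring) model: because compliance is self-adjoint, the sensitivity reduces to dC/dA_i = -u^T (dK/dA_i) u and can be evaluated from post-processed displacements alone. The material, geometry and loads are invented for the example; this is not the IFAD implementation of the record.

```python
import numpy as np

E_mod, L = 200e9, 1.0              # illustrative material and geometry
A = np.array([1e-4, 2e-4])         # design variables: cross-sectional areas
f = np.array([0.0, 1000.0])        # nodal loads (left end fixed)

def stiffness(A):
    # Two bars in series; free degrees of freedom are the two right-hand nodes.
    k1, k2 = E_mod * A / L
    return np.array([[k1 + k2, -k2],
                     [-k2,      k2]])

u = np.linalg.solve(stiffness(A), f)
compliance = f @ u

# Compliance is self-adjoint (the adjoint vector equals u), so
# dC/dA_i = -u^T (dK/dA_i) u, evaluated from post-processing data only.
dK_dA = [np.array([[E_mod / L, 0.0], [0.0, 0.0]]),
         np.array([[E_mod / L, -E_mod / L], [-E_mod / L, E_mod / L]])]
sens_adjoint = np.array([-u @ dK @ u for dK in dK_dA])

# Finite-difference check
h, sens_fd = 1e-8, []
for i in range(2):
    Ah = A.copy()
    Ah[i] += h
    sens_fd.append((f @ np.linalg.solve(stiffness(Ah), f) - compliance) / h)
print(sens_adjoint, np.array(sens_fd))
```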

  13. In vivo sensitivity estimation and imaging acceleration with rotating RF coil arrays at 7 Tesla.

    PubMed

    Li, Mingyan; Jin, Jin; Zuo, Zhentao; Liu, Feng; Trakic, Adnan; Weber, Ewald; Zhuo, Yan; Xue, Rong; Crozier, Stuart

    2015-03-01

    Using a new rotating SENSitivity Encoding (rotating-SENSE) algorithm, we have successfully demonstrated that the rotating radiofrequency coil array (RRFCA) was capable of achieving a significant reduction in scan time and a uniform image reconstruction for a homogeneous phantom at 7 Tesla. However, at 7 Tesla the in vivo sensitivity profiles (B1(-)) become distinct at various angular positions. Therefore, sensitivity maps at other angular positions cannot be obtained by numerically rotating the acquired ones. In this work, a novel sensitivity estimation method for the RRFCA was developed and validated with human brain imaging. This method employed a library database and registration techniques to estimate coil sensitivity at an arbitrary angular position. The estimated sensitivity maps were then compared to the acquired sensitivity maps. The results indicate that the proposed method is capable of accurately estimating both magnitude and phase of sensitivity at an arbitrary angular position, which enables us to employ the rotating-SENSE algorithm to accelerate acquisition and reconstruct images. Compared to a stationary coil array with the same number of coil elements, the RRFCA was able to reconstruct images with better quality at a high reduction factor. It is hoped that the proposed rotation-dependent sensitivity estimation algorithm and the acceleration ability of the RRFCA will be particularly useful for ultra high field MRI. Copyright © 2014 Elsevier Inc. All rights reserved.
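
    For context, the core SENSE step that the rotating-SENSE algorithm builds on is a small least-squares unfolding of aliased pixels using the coil sensitivity maps. The 1-D toy below uses random synthetic sensitivities and a reduction factor of 2; the array sizes and names are assumptions, and the rotation-dependent sensitivity estimation of the record is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

n_coils, R, n_pix = 8, 2, 128      # coils, acceleration factor, 1-D image length
x_true = rng.standard_normal(n_pix) + 1j * rng.standard_normal(n_pix)
S = rng.standard_normal((n_coils, n_pix)) + 1j * rng.standard_normal((n_coils, n_pix))

# R-fold undersampling folds pixel p onto pixel p + n_pix//R in every coil image.
half = n_pix // R
folded = np.stack([S[c, :half] * x_true[:half] + S[c, half:] * x_true[half:]
                   for c in range(n_coils)])

# SENSE unfolding: for each aliased pixel, solve the small least-squares system
# y = E x, where E contains the coil sensitivities of the R superimposed pixels.
x_hat = np.zeros(n_pix, dtype=complex)
for p in range(half):
    E = np.stack([S[:, p], S[:, p + half]], axis=1)   # shape (n_coils, R)
    sol, *_ = np.linalg.lstsq(E, folded[:, p], rcond=None)
    x_hat[p], x_hat[p + half] = sol

print(np.max(np.abs(x_hat - x_true)))   # ~0 in this noiseless toy example
```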

  14. In vivo sensitivity estimation and imaging acceleration with rotating RF coil arrays at 7 Tesla

    NASA Astrophysics Data System (ADS)

    Li, Mingyan; Jin, Jin; Zuo, Zhentao; Liu, Feng; Trakic, Adnan; Weber, Ewald; Zhuo, Yan; Xue, Rong; Crozier, Stuart

    2015-03-01

    Using a new rotating SENSitivity Encoding (rotating-SENSE) algorithm, we have successfully demonstrated that the rotating radiofrequency coil array (RRFCA) was capable of achieving a significant reduction in scan time and a uniform image reconstruction for a homogeneous phantom at 7 Tesla. However, at 7 Tesla the in vivo sensitivity profiles (B1-) become distinct at various angular positions. Therefore, sensitivity maps at other angular positions cannot be obtained by numerically rotating the acquired ones. In this work, a novel sensitivity estimation method for the RRFCA was developed and validated with human brain imaging. This method employed a library database and registration techniques to estimate coil sensitivity at an arbitrary angular position. The estimated sensitivity maps were then compared to the acquired sensitivity maps. The results indicate that the proposed method is capable of accurately estimating both magnitude and phase of sensitivity at an arbitrary angular position, which enables us to employ the rotating-SENSE algorithm to accelerate acquisition and reconstruct images. Compared to a stationary coil array with the same number of coil elements, the RRFCA was able to reconstruct images with better quality at a high reduction factor. It is hoped that the proposed rotation-dependent sensitivity estimation algorithm and the acceleration ability of the RRFCA will be particularly useful for ultra high field MRI.

  15. State of the art in non-animal approaches for skin sensitization testing: from individual test methods towards testing strategies.

    PubMed

    Ezendam, Janine; Braakhuis, Hedwig M; Vandebriel, Rob J

    2016-12-01

    The hazard assessment of skin sensitizers relies mainly on animal testing, but much progress has been made in the development, validation and regulatory acceptance and implementation of non-animal predictive approaches. In this review, we provide an update on the available computational tools and animal-free test methods for the prediction of skin sensitization hazard. These individual test methods address mostly one mechanistic step of the process of skin sensitization induction. The adverse outcome pathway (AOP) for skin sensitization describes the key events (KEs) that lead to skin sensitization. In our review, we have clustered the available test methods according to the KE they inform: the molecular initiating event (MIE/KE1)-protein binding, KE2-keratinocyte activation, KE3-dendritic cell activation and KE4-T cell activation and proliferation. In recent years, most progress has been made in the development and validation of in vitro assays that address KE2 and KE3. No standardized in vitro assays for T cell activation are available; thus, KE4 cannot be measured in vitro. Three non-animal test methods, addressing either the MIE, KE2 or KE3, are accepted as OECD test guidelines, and this has accelerated the development of integrated or defined approaches for testing and assessment (e.g. testing strategies). The majority of these approaches are mechanism-based, since they combine results from multiple test methods and/or computational tools that address different KEs of the AOP to estimate skin sensitization potential and sometimes potency. Other approaches are based on statistical tools. Until now, eleven different testing strategies have been published, the majority using the same individual information sources. Our review shows that some of the defined approaches to testing and assessment are able to accurately predict skin sensitization hazard, sometimes even more accurately than the currently used animal test. A few defined approaches have been developed to provide an estimate of the potency sub-category of a skin sensitizer as well, but these approaches need further independent evaluation with a new dataset of chemicals. To conclude, this update shows that the field of non-animal approaches for skin sensitization has evolved greatly in recent years and that it is possible to predict skin sensitization hazard without animal testing.

  16. Testing alternative ground water models using cross-validation and other methods

    USGS Publications Warehouse

    Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.

    2007-01-01

    Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. © 2007 National Ground Water Association.
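
    As a hedged illustration of the efficient model-discrimination statistics mentioned above, the snippet below computes least-squares forms of AICc and BIC for a few hypothetical alternative models; the residual sums of squares and parameter counts are invented, and the exact variants used in the paper may differ.

    ```python
    import numpy as np

    def information_criteria(sse, n_obs, n_params):
        """Least-squares forms of AICc and BIC (textbook definitions; the exact
        variants used in the paper may include additional constant terms)."""
        aic = n_obs * np.log(sse / n_obs) + 2 * n_params
        aicc = aic + 2 * n_params * (n_params + 1) / (n_obs - n_params - 1)
        bic = n_obs * np.log(sse / n_obs) + n_params * np.log(n_obs)
        return aicc, bic

    # Hypothetical alternative models: (weighted sum of squared residuals,
    # number of estimated hydraulic-conductivity parameters)
    alternatives = {"1-zone K": (54.2, 2), "3-zone K": (31.7, 4), "5-zone K": (30.9, 6)}
    n_obs = 60
    for name, (sse, k) in alternatives.items():
        aicc, bic = information_criteria(sse, n_obs, k)
        print(f"{name}:  AICc = {aicc:7.2f}   BIC = {bic:7.2f}")
    # Lower values are preferred; extra parameters must reduce the residual enough
    # to offset the penalty, which is how the criteria rank alternative models.
    ```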

  17. Multivariate Models for Prediction of Human Skin Sensitization Hazard

    PubMed Central

    Strickland, Judy; Zang, Qingda; Paris, Michael; Lehmann, David M.; Allen, David; Choksi, Neepa; Matheson, Joanna; Jacobs, Abigail; Casey, Warren; Kleinstreuer, Nicole

    2016-01-01

    One of ICCVAM’s top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary to produce skin sensitization suggests that no single alternative method will replace the currently accepted animal tests. ICCVAM is evaluating an integrated approach to testing and assessment based on the adverse outcome pathway for skin sensitization that uses machine learning approaches to predict human skin sensitization hazard. We combined data from three in chemico or in vitro assays—the direct peptide reactivity assay (DPRA), human cell line activation test (h-CLAT), and KeratinoSens™ assay—six physicochemical properties, and an in silico read-across prediction of skin sensitization hazard into 12 variable groups. The variable groups were evaluated using two machine learning approaches, logistic regression (LR) and support vector machine (SVM), to predict human skin sensitization hazard. Models were trained on 72 substances and tested on an external set of 24 substances. The six models (three LR and three SVM) with the highest accuracy (92%) used: (1) DPRA, h-CLAT, and read-across; (2) DPRA, h-CLAT, read-across, and KeratinoSens; or (3) DPRA, h-CLAT, read-across, KeratinoSens, and log P. The models performed better at predicting human skin sensitization hazard than the murine local lymph node assay (accuracy = 88%), any of the alternative methods alone (accuracy = 63–79%), or test batteries combining data from the individual methods (accuracy = 75%). These results suggest that computational methods are promising tools to effectively identify potential human skin sensitizers without animal testing. PMID:27480324
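
    A minimal sketch of the machine learning step is given below, assuming synthetic stand-in data for one variable group (a DPRA-like feature, an h-CLAT-like feature and a binary read-across call); it is not the ICCVAM dataset or model, only an illustration of training logistic regression and SVM classifiers on a 72/24 train/test split with scikit-learn.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Synthetic stand-ins for one variable group: [DPRA-like, h-CLAT-like, read-across call]
    n = 96
    X = np.column_stack([rng.uniform(0, 100, n),      # DPRA-like continuous feature
                         rng.uniform(5, 5000, n),     # h-CLAT-like continuous feature
                         rng.integers(0, 2, n)])      # binary read-across call
    score = 0.02 * X[:, 0] - 0.0005 * X[:, 1] + 1.5 * X[:, 2] + rng.normal(0, 0.5, n)
    y = (score > 0.8).astype(int)                     # synthetic "human sensitizer" label

    X_train, X_test = X[:72], X[72:]                  # 72 training / 24 external test substances
    y_train, y_test = y[:72], y[72:]

    models = {"logistic regression": make_pipeline(StandardScaler(), LogisticRegression()),
              "support vector machine": make_pipeline(StandardScaler(), SVC(kernel="rbf"))}
    for name, model in models.items():
        model.fit(X_train, y_train)
        print(name, "test accuracy:", accuracy_score(y_test, model.predict(X_test)))
    ```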

  18. Comparison of the diagnostic accuracy, sensitivity and specificity of four odontological methods for age evaluation in Italian children at the age threshold of 14 years using ROC curves.

    PubMed

    Pinchi, Vilma; Pradella, Francesco; Vitale, Giulia; Rugo, Dario; Nieri, Michele; Norelli, Gian-Aristide

    2016-01-01

    The age threshold of 14 years is relevant in Italy as the minimum age for criminal responsibility. It is of utmost importance to evaluate the diagnostic accuracy of every odontological method for age evaluation considering the sensitivity, or the ability to estimate the true positive cases, and the specificity, or the ability to estimate the true negative cases. The research aims to compare the specificity and sensitivity of four commonly adopted methods of dental age estimation - Demirjian, Haavikko, Willems and Cameriere - in a sample of Italian children aged between 11 and 16 years, with an age threshold of 14 years, using receiver operating characteristic curves and the area under the curve (AUC). In addition, new decision criteria are developed to increase the accuracy of the methods. Among the four odontological methods for age estimation adopted in the research, the Cameriere method showed the highest AUC in both female and male cohorts. The Cameriere method shows a high degree of accuracy at the age threshold of 14 years. To adopt the Cameriere method to estimate the 14-year age threshold more accurately, however, it is suggested - according to the Youden index - that the decision criterion be set at the lower value of 12.928 for females and 13.258 years for males, obtaining a sensitivity of 85% and specificity of 88% in females, and a sensitivity of 77% and specificity of 92% in males. If a specificity level >90% is needed, the cut-off point should be set at 12.959 years (82% sensitivity) for females. © The Author(s) 2015.
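
    The ROC/Youden analysis can be sketched as follows on synthetic data (the ages, error model and resulting cutoff are illustrative, not the study's values): the estimated dental age is used as a score for the over-14 class, and Youden's J = sensitivity + specificity - 1 selects the decision cutoff.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    rng = np.random.default_rng(1)

    # Synthetic cohort aged 11-16: chronological age and method-estimated dental age
    chronological = rng.uniform(11.0, 16.0, 300)
    dental_age = chronological + rng.normal(0.0, 0.8, 300)   # illustrative estimation error
    over_14 = (chronological >= 14.0).astype(int)            # legally relevant class

    fpr, tpr, thresholds = roc_curve(over_14, dental_age)
    print("AUC =", round(roc_auc_score(over_14, dental_age), 3))

    youden_j = tpr - fpr                                     # J = sensitivity + specificity - 1
    best = np.argmax(youden_j)
    print(f"Youden-optimal cutoff = {thresholds[best]:.2f} years "
          f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
    ```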

  19. Pragmatic utility: using analytical questions to explore the concept of ethical sensitivity.

    PubMed

    Weaver, Kathryn; Morse, Janice M

    2006-01-01

    Ethical sensitivity is the means and capacity through which professionals strive to understand and compassionately respond to those in their care. As a transdisciplinary concept, ethical sensitivity can facilitate knowledge development across disciplines. To clarify and reduce ambiguities, the concept of ethical sensitivity was analyzed using the pragmatic utility method. With this method, the investigator uses analytical questions arising from in-depth understanding of the literature to synthesize data, to push beyond the limits of isolated findings and individual disciplines, to identify shared knowledge, and to provide new insights, lines of questioning, and direction. In this article, the role, technique, results, advantages, and limitations of the pragmatic utility method are elucidated. Characteristics of ethical sensitivity are its (a) preconditions of suffering and vulnerability cues, uncertainty, relationships characterized by receptivity and responsiveness, and courage; (b) attributes of moral perception, affectivity, and dividing loyalties; and (c) outcomes of client comfort and well-being, professional learning and self-transcendence, and integrity-preserving compromise. Use of the pragmatic utility method enhanced comprehension, meaning, relevance, and dimensions of the concept of ethical sensitivity as conveyed in the academic literature of selected professional disciplines.

  20. Modified short-term guinea pig sensitization tests for detecting contact allergens as an alternative to the conventional test.

    PubMed

    Yanagi, M; Hoya, M; Mori, M; Katsumura, Y

    2001-03-01

    The conventional adjuvant and patch test (APT) method of guinea pig sensitization testing was modified in 2 ways, s-APT and s-APT(2), in order to shorten the test period. These short-term test methods consist of 72-h closed application of test material with intradermal injection of emulsified Freund's complete adjuvant (E-FCA) for 1st induction, 48-h closed application of test material with (s-APT) or without (s-APT(2)) intradermal injection of E-FCA on the 7th day for 2nd induction, and open application on the 14th day for challenge. They were compared with conventional APT by using 8 allergenic chemicals (formaldehyde, nickel sulfate, cobalt sulfate, ethyl-p-aminobenzoate (benzocaine), isoeugenol, 2-mercaptobenzothiazole, 2,4-dinitrochlorobenzene (DNCB) and 1-phenylazo-2-naphthol (Sudan I)). The short-term methods gave similar results to those of conventional APT in terms of mean response, sensitization rate and sensitization potency (challenge concentration that induces a mean response equal to 1.0). Thus, our short-term methods, which are capable of evaluating skin sensitization within 17 days, are sufficiently sensitive to detect potentially hazardous contact allergens.

  1. Simple, Sensitive and Accurate Multiplex Detection of Clinically Important Melanoma DNA Mutations in Circulating Tumour DNA with SERS Nanotags

    PubMed Central

    Wee, Eugene J.H.; Wang, Yuling; Tsao, Simon Chang-Hao; Trau, Matt

    2016-01-01

    Sensitive and accurate identification of specific DNA mutations can influence clinical decisions. However, accurate diagnosis from limiting samples such as circulating tumour DNA (ctDNA) is challenging. Current approaches based on fluorescence such as quantitative PCR (qPCR) and, more recently, droplet digital PCR (ddPCR) have limitations in multiplex detection, sensitivity and the need for expensive specialized equipment. Herein we describe an assay capitalizing on the multiplexing and sensitivity benefits of surface-enhanced Raman spectroscopy (SERS) with the simplicity of standard PCR to address the limitations of current approaches. This proof-of-concept method could reproducibly detect as few as 0.1% (10 copies, CV < 9%) of target sequences, thus demonstrating the high sensitivity of the method. The method was then applied to specifically detect three important melanoma mutations in multiplex. Finally, the PCR/SERS assay was used to genotype cell lines and ctDNA from serum samples, and the results were subsequently validated with ddPCR. With ddPCR-like sensitivity and accuracy yet at the convenience of standard PCR, we believe this multiplex PCR/SERS method could find wide applications in both diagnostics and research. PMID:27446486

  2. Simple, Sensitive and Accurate Multiplex Detection of Clinically Important Melanoma DNA Mutations in Circulating Tumour DNA with SERS Nanotags.

    PubMed

    Wee, Eugene J H; Wang, Yuling; Tsao, Simon Chang-Hao; Trau, Matt

    2016-01-01

    Sensitive and accurate identification of specific DNA mutations can influence clinical decisions. However, accurate diagnosis from limiting samples such as circulating tumour DNA (ctDNA) is challenging. Current approaches based on fluorescence such as quantitative PCR (qPCR) and, more recently, droplet digital PCR (ddPCR) have limitations in multiplex detection, sensitivity and the need for expensive specialized equipment. Herein we describe an assay capitalizing on the multiplexing and sensitivity benefits of surface-enhanced Raman spectroscopy (SERS) with the simplicity of standard PCR to address the limitations of current approaches. This proof-of-concept method could reproducibly detect as few as 0.1% (10 copies, CV < 9%) of target sequences, thus demonstrating the high sensitivity of the method. The method was then applied to specifically detect three important melanoma mutations in multiplex. Finally, the PCR/SERS assay was used to genotype cell lines and ctDNA from serum samples, and the results were subsequently validated with ddPCR. With ddPCR-like sensitivity and accuracy yet at the convenience of standard PCR, we believe this multiplex PCR/SERS method could find wide applications in both diagnostics and research.

  3. Sensitivity-Uncertainty Based Nuclear Criticality Safety Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    2016-09-20

    These are slides from a seminar given to the University of Mexico Nuclear Engineering Department. Whisper is a statistical analysis package developed to support nuclear criticality safety validation. It uses the sensitivity profile data for an application as computed by MCNP6 along with covariance files for the nuclear data to determine a baseline upper-subcritical-limit for the application. Whisper and its associated benchmark files are developed and maintained as part of MCNP6, and will be distributed with all future releases of MCNP6. Although sensitivity-uncertainty methods for NCS validation have been under development for 20 years, continuous-energy Monte Carlo codes such as MCNP could not determine the required adjoint-weighted tallies for sensitivity profiles. The recent introduction of the iterated fission probability method into MCNP led to the rapid development of sensitivity analysis capabilities for MCNP6 and the development of Whisper. Sensitivity-uncertainty based methods represent the future for NCS validation – making full use of today’s computer power to codify past approaches based largely on expert judgment. Validation results are defensible, auditable, and repeatable as needed with different assumptions and process models. The new methods can supplement, support, and extend traditional validation approaches.

  4. Sensitivity Analysis in Engineering

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M. (Compiler); Haftka, Raphael T. (Compiler)

    1987-01-01

    The symposium proceedings presented focused primarily on sensitivity analysis of structural response. However, the first session, entitled, General and Multidisciplinary Sensitivity, focused on areas such as physics, chemistry, controls, and aerodynamics. The other four sessions were concerned with the sensitivity of structural systems modeled by finite elements. Session 2 dealt with Static Sensitivity Analysis and Applications; Session 3 with Eigenproblem Sensitivity Methods; Session 4 with Transient Sensitivity Analysis; and Session 5 with Shape Sensitivity Analysis.

  5. Probabilistic sensitivity analysis incorporating the bootstrap: an example comparing treatments for the eradication of Helicobacter pylori.

    PubMed

    Pasta, D J; Taylor, J L; Henning, J M

    1999-01-01

    Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
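
    A minimal sketch of such a probabilistic sensitivity analysis is shown below; the trial outcomes, costs and willingness-to-pay value are invented for illustration. The eradication probabilities are bootstrapped from (synthetic) trial data rather than drawn from a theoretical distribution, and the other inputs are sampled from assumed distributions.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical trial outcomes: 1 = H. pylori eradicated, 0 = not eradicated
    trial_a = rng.binomial(1, 0.85, 120)      # strategy A
    trial_b = rng.binomial(1, 0.70, 120)      # strategy B
    cost_a, cost_b = 220.0, 150.0             # assumed drug costs per patient
    wtp = 1000.0                              # assumed willingness to pay per extra eradication

    n_sim = 5000
    nmb = np.empty(n_sim)                     # incremental net monetary benefit of A vs. B
    for i in range(n_sim):
        # Bootstrap the eradication rates from the trial data (no parametric assumption)
        p_a = rng.choice(trial_a, trial_a.size, replace=True).mean()
        p_b = rng.choice(trial_b, trial_b.size, replace=True).mean()
        # Other model inputs drawn from assumed, purely illustrative distributions
        retreatment_cost = rng.gamma(shape=4.0, scale=50.0)
        total_a = cost_a + (1.0 - p_a) * retreatment_cost
        total_b = cost_b + (1.0 - p_b) * retreatment_cost
        nmb[i] = wtp * (p_a - p_b) - (total_a - total_b)

    print("P(strategy A cost-effective) =", round((nmb > 0).mean(), 2))
    print("95% interval for incremental NMB:", np.percentile(nmb, [2.5, 97.5]).round(1))
    ```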

  6. NK sensitivity of neuroblastoma cells determined by a highly sensitive coupled luminescent method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ogbomo, Henry; Hahn, Anke; Geiler, Janina

    2006-01-06

    The measurement of natural killer (NK) cell toxicity against tumor or virus-infected cells, especially in cases with small blood samples, requires highly sensitive methods. Here, a coupled luminescent method (CLM) based on glyceraldehyde-3-phosphate dehydrogenase release from injured target cells was used to evaluate the cytotoxicity of interleukin-2 activated NK cells against neuroblastoma cell lines. In contrast to most other methods, CLM does not require the pretreatment of target cells with labeling substances which could be toxic or radioactive. The effective killing of tumor cells was achieved by low effector/target ratios ranging from 0.5:1 to 4:1. CLM provides a highly sensitive, safe, and fast procedure for measurement of NK cell activity with small blood samples such as those obtained from pediatric patients.

  7. Design sensitivity analysis using EAL. Part 1: Conventional design parameters

    NASA Technical Reports Server (NTRS)

    Dopker, B.; Choi, Kyung K.; Lee, J.

    1986-01-01

    A numerical implementation of design sensitivity analysis of built-up structures is presented, using the versatility and convenience of an existing finite element structural analysis code and its database management system. The finite element code used in the implementation presented is the Engineering Analysis Language (EAL), which is based on a hybrid method of analysis. It was shown that design sensitivity computations can be carried out using the database management system of EAL, without writing a separate program and a separate database. Conventional (sizing) design parameters such as cross-sectional area of beams or thickness of plates and plane elastic solid components are considered. Compliance, displacement, and stress functionals are considered as performance criteria. The method presented is being extended to implement shape design sensitivity analysis using a domain method and a design component method.

  8. DGSA: A Matlab toolbox for distance-based generalized sensitivity analysis of geoscientific computer experiments

    NASA Astrophysics Data System (ADS)

    Park, Jihoon; Yang, Guang; Satija, Addy; Scheidt, Céline; Caers, Jef

    2016-12-01

    Sensitivity analysis plays an important role in geoscientific computer experiments, whether for forecasting, data assimilation or model calibration. In this paper we focus on an extension of a method of regionalized sensitivity analysis (RSA) to applications typical in the Earth Sciences. Such applications involve the building of large complex spatial models, the application of computationally extensive forward modeling codes and the integration of heterogeneous sources of model uncertainty. The aim of this paper is to be practical: 1) provide a Matlab code, 2) provide novel visualization methods to aid users in getting a better understanding of the sensitivity, 3) provide a method based on kernel principal component analysis (KPCA) and self-organizing maps (SOM) to account for spatial uncertainty typical in Earth Science applications, and 4) provide an illustration of a real field case where the above-mentioned complexities present themselves. We present methods that extend the original RSA method in several ways. First, we present the calculation of conditional effects, defined as the sensitivity of a parameter given a level of another parameter. Second, we show how this conditional effect can be used to choose nominal values or ranges to fix insensitive parameters aiming to minimally affect uncertainty in the response. Third, we develop a method based on KPCA and SOM to assign a rank to spatial models in order to calculate the sensitivity to spatial variability in the models. A large oil/gas reservoir case is used as an illustration of these ideas.

  9. Local lymph node assay (LLNA) for detection of sensitization capacity of chemicals.

    PubMed

    Gerberick, G Frank; Ryan, Cindy A; Dearman, Rebecca J; Kimber, Ian

    2007-01-01

    The local lymph node assay (LLNA) is a murine model developed to evaluate the skin sensitization potential of chemicals. The LLNA is an alternative approach to traditional guinea pig methods and in comparison provides important animal welfare benefits. The assay relies on measurement of events induced during the induction phase of skin sensitization, specifically lymphocyte proliferation in the draining lymph nodes, which is a hallmark of a skin sensitization response. Since its introduction the LLNA has been the subject of extensive evaluation on a national and international scale, and has been successfully validated and incorporated worldwide into regulatory guidelines. Experience gained in recent years has demonstrated that adherence to published procedures and guidelines for the LLNA (e.g., with respect to dose and vehicle selection) is critical for the successful conduct and eventual interpretation of the data. In addition to providing a robust method for skin sensitization hazard identification, the LLNA has proven very useful in assessing the skin sensitizing potency of test chemicals, and this has provided invaluable information to risk assessors. The primary method to make comparisons of the relative potency of chemical sensitizers is to use linear interpolation to estimate the concentration of chemical required to induce a stimulation index of three relative to concurrent vehicle-treated controls (EC3). In certain situations where less than optimal dose-response data are available, a log-linear extrapolation method can be used to estimate an EC3 value, which can significantly reduce the need for repeat testing of chemicals. The LLNA, when conducted according to published guidelines, provides a robust method for skin sensitization testing that not only provides reliable hazard identification information but also data necessary for effective risk assessment and risk management.
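
    The EC3 interpolation described above reduces to a one-line calculation between the two doses bracketing a stimulation index of 3; the sketch below uses an invented dose response purely for illustration (the log-linear extrapolation branch is not shown).

    ```python
    def ec3(concentrations, stimulation_indices, target=3.0):
        """Concentration giving a stimulation index of 3, by linear interpolation
        between the dose just below and the first dose at or above the target."""
        pairs = sorted(zip(concentrations, stimulation_indices))
        for (c_lo, si_lo), (c_hi, si_hi) in zip(pairs, pairs[1:]):
            if si_lo < target <= si_hi:
                return c_lo + (target - si_lo) * (c_hi - c_lo) / (si_hi - si_lo)
        return None  # SI = 3 never bracketed (log-linear extrapolation not shown here)

    # Invented LLNA dose response: % test concentration vs. stimulation index
    print(ec3([2.5, 5.0, 10.0, 25.0], [1.4, 2.1, 3.8, 6.9]))  # ~7.6% in this example
    ```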

  10. 75 FR 35712 - National Pollutant Discharge Elimination System (NPDES): Use of Sufficiently Sensitive Test...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-23

    ... Methods for Permit Applications and Reporting AGENCY: Environmental Protection Agency (EPA). ACTION... System (NPDES) program, only ``sufficiently sensitive'' analytical test methods can be used when... methods with respect to measurement of mercury and extend the approach outlined in that guidance to the...

  11. [Optimized application of nested PCR method for detection of malaria].

    PubMed

    Yao-Guang, Z; Li, J; Zhen-Yu, W; Li, C

    2017-04-28

    Objective To optimize the application of the nested PCR method for the detection of malaria in routine practice, so as to improve the efficiency of malaria detection. Methods A PCR premix, internal primers for further amplification, and newly designed primers targeting the two Plasmodium ovale subspecies were employed to optimize the reaction system, the reaction conditions and the P. ovale-specific primers on the basis of the routine nested PCR. The specificity and sensitivity of the optimized method were then analyzed. Positive malaria blood samples and routine examination samples were tested with the routine nested PCR and the optimized method in parallel, and the results were compared. Results The optimized method showed good specificity, and its sensitivity reached the pg to fg level. When the two methods were used to test the same positive malaria blood samples, the PCR products showed no significant difference, but with the optimized method non-specific amplification was clearly reduced, the detection rate of the P. ovale subspecies improved, and the overall specificity increased. Testing of 111 malaria blood samples showed that the sensitivity and specificity of the routine nested PCR were 94.57% and 86.96%, respectively, whereas both were 93.48% for the optimized method; the difference between the two methods was not statistically significant for sensitivity (P > 0.05) but was statistically significant for specificity (P < 0.05). Conclusion The optimized PCR improves specificity without reducing sensitivity relative to the routine nested PCR, and it also reduces costs and increases the efficiency of malaria detection by requiring fewer experimental steps.

  12. Global sensitivity analysis for urban water quality modelling: Terminology, convergence and comparison of different methods

    NASA Astrophysics Data System (ADS)

    Vanrolleghem, Peter A.; Mannina, Giorgio; Cosenza, Alida; Neumann, Marc B.

    2015-03-01

    Sensitivity analysis represents an important step in improving the understanding and use of environmental models. Indeed, by means of global sensitivity analysis (GSA), modellers may identify both important (factor prioritisation) and non-influential (factor fixing) model factors. No general rule has yet been defined for verifying the convergence of the GSA methods. In order to fill this gap this paper presents a convergence analysis of three widely used GSA methods (SRC, Extended FAST and Morris screening) for an urban drainage stormwater quality-quantity model. After the convergence was achieved the results of each method were compared. In particular, a discussion on peculiarities, applicability, and reliability of the three methods is presented. Moreover, a graphical Venn diagram based classification scheme and a precise terminology for better identifying important, interacting and non-influential factors for each method is proposed. In terms of convergence, it was shown that sensitivity indices related to factors of the quantity model achieve convergence faster. Results for the Morris screening method deviated considerably from the other methods. Factors related to the quality model require a much higher number of simulations than the number suggested in literature for achieving convergence with this method. In fact, the results have shown that the term "screening" is improperly used as the method may exclude important factors from further analysis. Moreover, for the presented application the convergence analysis shows more stable sensitivity coefficients for the Extended-FAST method compared to SRC and Morris screening. Substantial agreement in terms of factor fixing was found between the Morris screening and Extended FAST methods. In general, the water quality related factors exhibited more important interactions than factors related to water quantity. Furthermore, in contrast to water quantity model outputs, water quality model outputs were found to be characterised by high non-linearity.
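
    One of the three methods, the standardized regression coefficients (SRC), together with a simple convergence check by increasing sample size, can be sketched as follows; the placeholder model and factor effects are invented and stand in for the urban drainage quality-quantity model.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def model(x):
        """Placeholder for the stormwater quality-quantity model: the output depends
        strongly on factor 1, weakly (and nonlinearly) on factor 2, not at all on factor 3."""
        return 3.0 * x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.0 * x[:, 2] + rng.normal(0.0, 0.1, len(x))

    def src(x, y):
        """Standardized regression coefficients: beta_i * std(x_i) / std(y)."""
        design = np.column_stack([np.ones(len(y)), x])
        beta = np.linalg.lstsq(design, y, rcond=None)[0][1:]
        return beta * x.std(axis=0) / y.std()

    # Rudimentary convergence check: recompute the indices with growing sample sizes
    for n in (50, 200, 1000, 5000):
        x = rng.uniform(0.0, 1.0, (n, 3))
        print(n, np.round(src(x, model(x)), 3))
    ```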

  13. A sensitivity analysis method for the body segment inertial parameters based on ground reaction and joint moment regressor matrices.

    PubMed

    Futamure, Sumire; Bonnet, Vincent; Dumas, Raphael; Venture, Gentiane

    2017-11-07

    This paper presents a method allowing a simple and efficient sensitivity analysis of the dynamics parameters of a complex whole-body human model. The proposed method is based on the ground reaction and joint moment regressor matrices, developed initially in robotics system identification theory, which appear in the equations of motion of the human body. The regressor matrices are linear with respect to the segment inertial parameters, allowing simple sensitivity analysis methods to be used. The sensitivity analysis was applied to gait dynamics and kinematics data of nine subjects with a 15-segment 3D model of the locomotor apparatus. According to the proposed sensitivity indices, 76 of the 150 segment inertial parameters of the mechanical model were considered not influential for gait. The main findings were that the segment masses were influential and that, with the exception of the trunk, the moments of inertia were not influential for the computation of the ground reaction forces and moments and the joint moments. The same method also shows numerically that at least 90% of the lower-limb joint moments during the stance phase can be estimated from force-plate and kinematics data alone, without knowing any of the segment inertial parameters. Copyright © 2017 Elsevier Ltd. All rights reserved.
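
    Because the equations of motion are linear in the inertial parameters (F = Y(q, q̇, q̈) φ), a simple column-wise measure of parameter influence can be computed from the regressor matrix; the sketch below uses a random placeholder regressor and an invented 5% threshold, and is not the authors' exact sensitivity index.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Placeholder regressor Y (stacked over time frames) and inertial parameters phi:
    # the ground reactions / joint moments are F = Y @ phi, linear in phi.
    n_frames, n_params = 500, 10
    Y = rng.normal(0.0, 1.0, (n_frames, n_params)) * np.linspace(2.0, 0.1, n_params)
    phi = rng.uniform(0.5, 1.5, n_params)
    F = Y @ phi

    # Column-wise relative influence: contribution of parameter j (its column of Y
    # scaled by its value) to the norm of the assembled output.
    influence = np.array([np.linalg.norm(Y[:, j] * phi[j]) for j in range(n_params)])
    influence /= np.linalg.norm(F)

    threshold = 0.05                                  # illustrative cut-off
    print(np.round(influence, 3))
    print("parameters flagged as not influential:", np.where(influence < threshold)[0])
    ```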

  14. Quantitative real-time in vivo detection of magnetic nanoparticles by their nonlinear magnetization

    NASA Astrophysics Data System (ADS)

    Nikitin, M. P.; Torno, M.; Chen, H.; Rosengart, A.; Nikitin, P. I.

    2008-04-01

    A novel method for highly sensitive quantitative detection of magnetic nanoparticles (MP) in biological tissues and the blood system has been realized and tested in real-time in vivo experiments. The detection method is based on the nonlinear magnetic properties of MP, and the related device can record a relative variation of nonlinear magnetic susceptibility as small as 10^-8 at room temperature, providing a sensitivity of several nanograms of MP in a 0.1 ml volume. Real-time quantitative in vivo measurements of the dynamics of MP concentration in blood flow have been performed. A catheter that carried the blood flow of a rat passed through the measuring device. After an MP injection, the quantity of MP in the circulating blood was continuously recorded. The method has also been used to evaluate the MP distribution among the rat's organs. Its sensitivity was compared with detection of radioactive MP based on the Fe59 isotope. The comparison of magnetic and radioactive signals in the rat's blood and organ samples demonstrated similar sensitivity for both methods. However, the proposed magnetic method is much more convenient as it is safe, less expensive, and provides real-time measurements in vivo. Moreover, the sensitivity of the method can be further improved by optimization of the device geometry.

  15. A strategy of combining SILAR with solvothermal process for In2S3 sensitized quantum dot-sensitized solar cells

    NASA Astrophysics Data System (ADS)

    Yang, Peizhi; Tang, Qunwei; Ji, Chenming; Wang, Haobo

    2015-12-01

    Pursuit of an efficient strategy for quantum dot-sensitized photoanodes has been a persistent objective for enhancing the photovoltaic performance of quantum dot-sensitized solar cells (QDSCs). We present here the fabrication of an indium sulfide (In2S3) quantum dot-sensitized titanium dioxide (TiO2) photoanode by combining successive ionic layer adsorption and reaction (SILAR) with solvothermal processes. The resultant QDSC consists of an In2S3 sensitized TiO2 photoanode, a liquid polysulfide electrolyte, and a Co0.85Se counter electrode. The optimized QDSC, with a photoanode prepared by the SILAR method at 20 deposition cycles combined with the solvothermal process, yields a maximum power conversion efficiency of 1.39%.

  16. Pressure-Sensitive Paint: Effect of Substrate

    PubMed Central

    Quinn, Mark Kenneth; Yang, Leichao; Kontis, Konstantinos

    2011-01-01

    There are numerous ways in which pressure-sensitive paint can be applied to a surface. The choice of substrate and application method can greatly affect the results obtained. The current study examines the different methods of applying pressure-sensitive paint to a surface. One polymer-based and two porous substrates (anodized aluminum and thin-layer chromatography plates) are investigated and compared for luminescent output, pressure sensitivity, temperature sensitivity and photodegradation. Two luminophores [tris-Bathophenanthroline Ruthenium(II) Perchlorate and Platinum-tetrakis (pentafluorophenyl) Porphyrin] will also be compared in all three of the substrates. The results show the applicability of the different substrates and luminophores to different testing environments. PMID:22247685

  17. Adjoint sensitivity analysis of plasmonic structures using the FDTD method.

    PubMed

    Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H

    2014-05-15

    We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components at the vicinity of perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach.
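
    For context, the reference approach that the adjoint sensitivities are validated against, central finite differences, can be sketched in a few lines; the toy response function below merely stands in for a full FDTD-computed scattering parameter.

    ```python
    def central_difference_sensitivity(response, params, rel_step=1e-3):
        """Reference sensitivities dR/dp_i by central differences: two extra solves per
        parameter, versus at most one extra (adjoint) simulation for all parameters."""
        sens = []
        for i, p in enumerate(params):
            h = rel_step * (abs(p) if p != 0.0 else 1.0)
            up, dn = list(params), list(params)
            up[i], dn[i] = p + h, p - h
            sens.append((response(up) - response(dn)) / (2.0 * h))
        return sens

    # Toy response standing in for an FDTD-computed scattering parameter
    response = lambda p: p[0] ** 2 * p[1] + 3.0 * p[1]
    print(central_difference_sensitivity(response, [2.0, 0.5]))  # approx. [2.0, 7.0]
    ```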

  18. Measuring hearing in the harbor seal (Phoca vitulina): Comparison of behavioral and auditory brainstem response techniques

    NASA Astrophysics Data System (ADS)

    Wolski, Lawrence F.; Anderson, Rindy C.; Bowles, Ann E.; Yochem, Pamela K.

    2003-01-01

    Auditory brainstem response (ABR) and standard behavioral methods were compared by measuring in-air audiograms for an adult female harbor seal (Phoca vitulina). Behavioral audiograms were obtained using two techniques: the method of constant stimuli and the staircase method. Sensitivity was tested from 0.250 to 30 kHz. The seal showed good sensitivity from 6 to 12 kHz [best sensitivity 8.1 dB (re 20 μPa²·s) RMS at 8 kHz]. The staircase method yielded thresholds that were lower by 10 dB on average than the method of constant stimuli. ABRs were recorded at 2, 4, 8, 16, and 22 kHz and showed a similar best range (8-16 kHz). ABR thresholds averaged 5.7 dB higher than behavioral thresholds at 2, 4, and 8 kHz. ABRs were at least 7 dB lower at 16 kHz, and approximately 3 dB higher at 22 kHz. The better sensitivity of ABRs at higher frequencies could have reflected differences in the seal's behavior during ABR testing and/or bandwidth characteristics of test stimuli. These results agree with comparisons of ABR and behavioral methods performed in other recent studies and indicate that ABR methods represent a good alternative for estimating hearing range and sensitivity in pinnipeds, particularly when time is a critical factor and animals are untrained.

  19. Estimation of Spatiotemporal Sensitivity Using Band-limited Signals with No Additional Acquisitions for k-t Parallel Imaging.

    PubMed

    Takeshima, Hidenori; Saitoh, Kanako; Nitta, Shuhei; Shiodera, Taichiro; Takeguchi, Tomoyuki; Bannae, Shuhei; Kuhara, Shigehide

    2018-03-13

    Dynamic MR techniques, such as cardiac cine imaging, benefit from shorter acquisition times. The goal of the present study was to develop a method that achieves short acquisition times, while maintaining a cost-effective reconstruction, for dynamic MRI. k-t sensitivity encoding (SENSE) was identified as the base method to be enhanced to meet these two requirements. The proposed method achieves a reduction in acquisition time by estimating the spatiotemporal (x-f) sensitivity without requiring the acquisition of the alias-free signals typical of the k-t SENSE technique. The cost-effective reconstruction, in turn, is achieved by a computationally efficient estimation of the x-f sensitivity from the band-limited signals of the aliased inputs. Such band-limited signals are suitable for sensitivity estimation because the strongly aliased signals have been removed. For the same nominal reduction factor of 4, the net reduction factor of 4 for the proposed method was significantly higher than the factor of 2.29 achieved by k-t SENSE. The processing time is reduced from 4.1 s for k-t SENSE to 1.7 s for the proposed method. The image quality obtained using the proposed method proved to be superior (mean squared error [MSE] ± standard deviation [SD] = 6.85 ± 2.73) compared to the k-t SENSE case (MSE ± SD = 12.73 ± 3.60) for the vertical long-axis (VLA) view, as well as other views. In the present study, k-t SENSE was identified as a suitable base method to be improved to achieve both short acquisition times and a cost-effective reconstruction. To enhance these characteristics of the base method, a novel implementation is proposed, estimating the x-f sensitivity without the need for an explicit scan of the reference signals. Experimental results showed that the acquisition and computational times and the image quality of the proposed method were improved compared to the standard k-t SENSE method.

  20. Tracing the conformational changes in BSA using FRET with environmentally-sensitive squaraine probes

    NASA Astrophysics Data System (ADS)

    Govor, Iryna V.; Tatarets, Anatoliy L.; Obukhova, Olena M.; Terpetschnig, Ewald A.; Gellerman, Gary; Patsenker, Leonid D.

    2016-06-01

    A potential new method for detecting conformational changes in hydrophobic proteins such as bovine serum albumin (BSA) is introduced. The method is based on the change in the Förster resonance energy transfer (FRET) efficiency between protein-sensitive fluorescent probes. As compared to conventional FRET-based methods, in this new approach the donor and acceptor dyes are not covalently linked to the protein molecules. Performance of the new method is demonstrated using the protein-sensitive squaraine probes Square-634 (donor) and Square-685 (acceptor) to detect the urea-induced conformational changes of BSA. The FRET efficiency between these probes can be considered a more sensitive parameter for tracing protein unfolding than the changes in fluorescence intensity of either probe alone. Addition of urea followed by BSA unfolding causes a noticeable decrease in the emission intensities of these probes (a factor of 5.6 for Square-634 and 3.0 for Square-685), and the FRET efficiency changes by a factor of up to 17. Compared to the conventional method, the new approach is therefore a more sensitive way to detect the conformational changes in BSA.
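
    One common way to quantify the apparent FRET efficiency from donor quenching is E = 1 - F_DA / F_D (donor intensity with and without the acceptor); the intensities below are invented for illustration and are not the reported values, which may also have been derived differently.

    ```python
    def fret_efficiency(donor_with_acceptor, donor_alone):
        """Apparent FRET efficiency from donor quenching: E = 1 - F_DA / F_D."""
        return 1.0 - donor_with_acceptor / donor_alone

    # Invented donor intensities (arbitrary units) for folded vs. urea-unfolded BSA
    e_folded = fret_efficiency(donor_with_acceptor=250.0, donor_alone=1000.0)
    e_unfolded = fret_efficiency(donor_with_acceptor=900.0, donor_alone=1000.0)
    print(f"E(folded) = {e_folded:.2f}, E(unfolded) = {e_unfolded:.2f}")
    # Unfolding separates the probes' binding sites, so E collapses; the relative
    # change in E can exceed the change in either probe's intensity alone.
    ```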

  1. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Heng; Ye, Ming; Walker, Anthony P.

    Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty but ignore the model uncertainty for process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating the model averaging methods into the framework of variance-based global sensitivity analysis, given that the model averaging methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is also simulated by two models of different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
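
    A much-simplified Monte Carlo reading of such a process sensitivity index is sketched below: each process (recharge, geology) is represented by two toy candidate models with equal prior weights and their own random parameter, and the index is the variance of the conditional mean output over one process (models plus parameters) divided by the total output variance. The toy models and sample sizes are assumptions, not the authors' formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Two toy candidate models per process, each with its own random parameter
    def recharge(model, theta, precipitation=800.0):
        return 0.10 * precipitation * theta if model == 0 else 0.05 * precipitation + 20.0 * theta

    def conductivity(model, theta):
        return np.exp(theta) if model == 0 else 5.0 + 2.0 * theta

    def system_output(r, k):
        return r / k        # toy response standing in for the reactive-transport prediction

    def sample_process(name, n):
        """Sample a process by drawing a model (equal prior weights) and its parameter."""
        models = rng.integers(0, 2, n)
        thetas = rng.normal(1.0, 0.2, n)
        fn = recharge if name == "recharge" else conductivity
        return np.array([fn(m, t) for m, t in zip(models, thetas)])

    n_outer, n_inner, n_total = 300, 300, 50000
    total_var = system_output(sample_process("recharge", n_total),
                              sample_process("geology", n_total)).var()

    def process_sensitivity(name):
        """Variance of the conditional mean output over one process (its models and
        parameters together), normalized by the total output variance."""
        conditional_means = []
        for _ in range(n_outer):
            fixed = sample_process(name, 1)[0]
            other = sample_process("geology" if name == "recharge" else "recharge", n_inner)
            y = system_output(fixed, other) if name == "recharge" else system_output(other, fixed)
            conditional_means.append(y.mean())
        return np.var(conditional_means) / total_var

    print("PS(recharge) =", round(process_sensitivity("recharge"), 2))
    print("PS(geology)  =", round(process_sensitivity("geology"), 2))
    ```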

  2. Pyocin-sensitivity testing as a method of typing Pseudomonas aeruginosa: use of "phage-free" preparations of pyocin.

    PubMed

    Rampling, A; Whitby, J L; Wildy, P

    1975-11-01

    A method for pyocin-sensitivity typing by means of "phage-free" preparations of pyocin is described. The method was tested on 227 isolates of P. aeruginosa, collected from 34 different foci of infection in hospitals in the British Isles and the results were compared with those for combined serological and phage typing of all strains and pyocin production of 105 of the isolates. It is concluded that pyocin-sensitivity typing is a simple and reliable method giving a high degree of discrimination, comparable to that of combined serological and phage typing, and it is suitable for use in routine hospital laboratories.

  3. Remote air pollution measurement

    NASA Technical Reports Server (NTRS)

    Byer, R. L.

    1975-01-01

    This paper presents a discussion and comparison of the Raman method, the resonance and fluorescence backscatter method, long path absorption methods and the differential absorption method for remote air pollution measurement. A comparison of the above remote detection methods shows that the absorption methods offer the most sensitivity at the least required transmitted energy. Topographical absorption provides the advantage of a single ended measurement, and differential absorption offers the additional advantage of a fully depth resolved absorption measurement. Recent experimental results confirming the range and sensitivity of the methods are presented.

  4. Adjoint-Based Sensitivity and Uncertainty Analysis for Density and Composition: A User’s Guide

    DOE PAGES

    Favorite, Jeffrey A.; Perko, Zoltan; Kiedrowski, Brian C.; ...

    2017-03-01

    The ability to perform sensitivity analyses using adjoint-based first-order sensitivity theory has existed for decades. This paper provides guidance on how adjoint sensitivity methods can be used to predict the effect of material density and composition uncertainties in critical experiments, including when these uncertain parameters are correlated or constrained. Two widely used Monte Carlo codes, MCNP6 (Ref. 2) and SCALE 6.2 (Ref. 3), are both capable of computing isotopic density sensitivities in continuous energy and angle. Additionally, Perkó et al. have shown how individual isotope density sensitivities, easily computed using adjoint methods, can be combined to compute constrained first-order sensitivities that may be used in the uncertainty analysis. This paper provides details on how the codes are used to compute first-order sensitivities and how the sensitivities are used in an uncertainty analysis. Constrained first-order sensitivities are computed in a simple example problem.

  5. A Nested PCR Assay to Avoid False Positive Detection of the Microsporidian Enterocytozoon hepatopenaei (EHP) in Environmental Samples in Shrimp Farms

    PubMed Central

    Jaroenlak, Pattana; Sanguanrut, Piyachat; Williams, Bryony A. P.; Stentiford, Grant D.; Flegel, Timothy W.; Sritunyalucksana, Kallaya

    2016-01-01

    Hepatopancreatic microsporidiosis (HPM) caused by Enterocytozoon hepatopenaei (EHP) is an important disease of cultivated shrimp. Heavy infections may lead to retarded growth and unprofitable harvests. Existing PCR detection methods target the EHP small subunit ribosomal RNA (SSU rRNA) gene (SSU-PCR). However, we discovered that they can give false positive test results due to cross reactivity of the SSU-PCR primers with DNA from closely related microsporidia that infect other aquatic organisms. This is problematic for investigating and monitoring EHP infection pathways. To overcome this problem, a sensitive and specific nested PCR method was developed for detection of the spore wall protein (SWP) gene of EHP (SWP-PCR). The new SWP-PCR method did not produce false positive results from closely related microsporidia. The first PCR step of the SWP-PCR method was 100 times (10^4 plasmid copies per reaction vial) more sensitive than that of the existing SSU-PCR method (10^6 copies) but sensitivity was equal for both in the nested step (10 copies). Since the hepatopancreas of cultivated shrimp is not currently known to be infected with microsporidia other than EHP, the SSU-PCR methods are still valid for analyzing hepatopancreatic samples despite the lower sensitivity than the SWP-PCR method. However, due to its greater specificity and sensitivity, we recommend that the SWP-PCR method be used to screen for EHP in feces, feed and environmental samples for potential EHP carriers. PMID:27832178

  6. A Nested PCR Assay to Avoid False Positive Detection of the Microsporidian Enterocytozoon hepatopenaei (EHP) in Environmental Samples in Shrimp Farms.

    PubMed

    Jaroenlak, Pattana; Sanguanrut, Piyachat; Williams, Bryony A P; Stentiford, Grant D; Flegel, Timothy W; Sritunyalucksana, Kallaya; Itsathitphaisarn, Ornchuma

    2016-01-01

    Hepatopancreatic microsporidiosis (HPM) caused by Enterocytozoon hepatopenaei (EHP) is an important disease of cultivated shrimp. Heavy infections may lead to retarded growth and unprofitable harvests. Existing PCR detection methods target the EHP small subunit ribosomal RNA (SSU rRNA) gene (SSU-PCR). However, we discovered that they can give false positive test results due to cross reactivity of the SSU-PCR primers with DNA from closely related microsporidia that infect other aquatic organisms. This is problematic for investigating and monitoring EHP infection pathways. To overcome this problem, a sensitive and specific nested PCR method was developed for detection of the spore wall protein (SWP) gene of EHP (SWP-PCR). The new SWP-PCR method did not produce false positive results from closely related microsporidia. The first PCR step of the SWP-PCR method was 100 times (10^4 plasmid copies per reaction vial) more sensitive than that of the existing SSU-PCR method (10^6 copies) but sensitivity was equal for both in the nested step (10 copies). Since the hepatopancreas of cultivated shrimp is not currently known to be infected with microsporidia other than EHP, the SSU-PCR methods are still valid for analyzing hepatopancreatic samples despite the lower sensitivity than the SWP-PCR method. However, due to its greater specificity and sensitivity, we recommend that the SWP-PCR method be used to screen for EHP in feces, feed and environmental samples for potential EHP carriers.

  7. A model to estimate insulin sensitivity in dairy cows.

    PubMed

    Holtenius, Paul; Holtenius, Kjell

    2007-10-11

    Impairment of the insulin regulation of energy metabolism is considered a key etiologic component of metabolic disturbances. Methods for studying insulin sensitivity are therefore highly topical. There are clear indications that reduced insulin sensitivity contributes to the metabolic disturbances that occur especially among obese lactating cows. Direct measurements of insulin sensitivity are laborious and not suitable for epidemiological studies. We have therefore adopted an indirect method originally developed for humans to estimate insulin sensitivity in dairy cows. The method, the "Revised Quantitative Insulin Sensitivity Check Index" (RQUICKI), is based on plasma concentrations of glucose, insulin and free fatty acids (FFA), and it shows good linear correlations with different estimates of insulin sensitivity in human populations. We hypothesized that the RQUICKI method could be used as an index of insulin function in lactating dairy cows. We calculated RQUICKI in 237 apparently healthy dairy cows from 20 commercial herds. All cows included were in their first 15 weeks of lactation. RQUICKI was not affected by the homeorhetic adaptations in energy metabolism that occurred during the first 15 weeks of lactation. In a cohort of 24 experimental cows fed to obtain different body conditions at parturition, RQUICKI was lower in early lactation in cows with a high body condition score, suggesting disturbed insulin function in obese cows. The results indicate that RQUICKI might be used to identify lactating cows with disturbed insulin function.
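
    RQUICKI is commonly reported for dairy cattle as the reciprocal of the sum of the log10-transformed plasma glucose, insulin and free fatty acid concentrations; the sketch below assumes that form and uses illustrative units and values, which should be checked against the original publication before use.

    ```python
    import math

    def rquicki(glucose_mg_dl, insulin_uU_ml, ffa_mmol_l):
        """RQUICKI as commonly reported for dairy cows:
        1 / [log10(glucose) + log10(insulin) + log10(FFA)].
        Units and exact constants should be verified against the original paper."""
        return 1.0 / (math.log10(glucose_mg_dl)
                      + math.log10(insulin_uU_ml)
                      + math.log10(ffa_mmol_l))

    # Illustrative plasma values for an early-lactation cow (not study data)
    print(round(rquicki(glucose_mg_dl=55.0, insulin_uU_ml=6.0, ffa_mmol_l=0.6), 3))
    # Lower RQUICKI values are interpreted as reduced insulin sensitivity.
    ```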

  8. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE PAGES

    Dai, Heng; Ye, Ming; Walker, Anthony P.; ...

    2017-03-28

    A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.

  9. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Heng; Ye, Ming; Walker, Anthony P.

    A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.

  10. Identification of stochastic interactions in nonlinear models of structural mechanics

    NASA Astrophysics Data System (ADS)

    Kala, Zdeněk

    2017-07-01

    In the paper, a polynomial approximation is presented by which the Sobol sensitivity analysis can be evaluated with all sensitivity indices. The nonlinear FEM model is approximated by this polynomial. The input domain is mapped using simulation runs of the Latin Hypercube Sampling method. The domain of the approximation polynomial is chosen so that a large number of Latin Hypercube Sampling simulation runs can be applied. The method presented also makes it possible to evaluate higher-order sensitivity indices, which could not be identified directly on the nonlinear FEM model.
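
    A hedged sketch of the workflow (Latin Hypercube Sampling, polynomial surrogate, Sobol indices) is given below with a toy stand-in for the nonlinear FEM response; the basis, sample sizes and pick-freeze estimator are generic choices, not necessarily those of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    d = 3

    def fem_stand_in(x):
        """Toy response standing in for the nonlinear FEM output (e.g., a capacity)."""
        return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.3 * x[:, 0] * x[:, 2]

    def latin_hypercube(n, d):
        """One random point per stratum and dimension, inputs scaled to [0, 1]."""
        return (np.array([rng.permutation(n) for _ in range(d)]).T + rng.random((n, d))) / n

    def quadratic_features(x):
        cols = [np.ones(len(x))]
        cols += [x[:, i] for i in range(d)]
        cols += [x[:, i] * x[:, j] for i in range(d) for j in range(i, d)]
        return np.column_stack(cols)

    # 1) LHS design and a full quadratic polynomial surrogate fitted by least squares
    x_train = latin_hypercube(200, d)
    coef = np.linalg.lstsq(quadratic_features(x_train), fem_stand_in(x_train), rcond=None)[0]
    surrogate = lambda x: quadratic_features(x) @ coef

    # 2) First-order Sobol indices estimated on the cheap surrogate (pick-freeze)
    n = 20000
    A, B = rng.random((n, d)), rng.random((n, d))
    yA, yB = surrogate(A), surrogate(B)
    var_y = np.concatenate([yA, yB]).var()
    for i in range(d):
        AB_i = A.copy()
        AB_i[:, i] = B[:, i]
        s1 = np.mean(yB * (surrogate(AB_i) - yA)) / var_y
        print(f"first-order Sobol index S{i + 1} ~ {s1:.2f}")
    ```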

  11. Sensitive method to monitor trace quantities of benzanthrone in workers of dyestuff industries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joshi, A.; Khanna, S.K.; Singh, G.B.

    1986-03-01

    Dyestuff workers coming in contact with benzanthrone (an intermediate used for the synthesis of a variety of dyes) develop skin lesions, gastritis, liver malfunctions, and sexual disturbances. A highly sensitive fluorometric method to monitor trace quantities of benzanthrone in urine, serum, and biological tissues for experimental studies has been developed. Coupled with simple extraction and resolution, optimum fluorescence is obtained in an equal mixture of chloroform:methanol, detecting as little as 2 ng of benzanthrone. This method is approximately 250 times more sensitive than the currently available colorimetric assay.

  12. Multivariate models for prediction of human skin sensitization hazard.

    PubMed

    Strickland, Judy; Zang, Qingda; Paris, Michael; Lehmann, David M; Allen, David; Choksi, Neepa; Matheson, Joanna; Jacobs, Abigail; Casey, Warren; Kleinstreuer, Nicole

    2017-03-01

    One of the Interagency Coordinating Committee on the Validation of Alternative Methods' (ICCVAM) top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary to produce skin sensitization suggests that no single alternative method will replace the currently accepted animal tests. ICCVAM is evaluating an integrated approach to testing and assessment based on the adverse outcome pathway for skin sensitization that uses machine learning approaches to predict human skin sensitization hazard. We combined data from three in chemico or in vitro assays (the direct peptide reactivity assay (DPRA), the human cell line activation test (h-CLAT) and the KeratinoSens™ assay), six physicochemical properties and an in silico read-across prediction of skin sensitization hazard into 12 variable groups. The variable groups were evaluated using two machine learning approaches, logistic regression and support vector machine, to predict human skin sensitization hazard. Models were trained on 72 substances and tested on an external set of 24 substances. The six models (three logistic regression and three support vector machine) with the highest accuracy (92%) used: (1) DPRA, h-CLAT and read-across; (2) DPRA, h-CLAT, read-across and KeratinoSens; or (3) DPRA, h-CLAT, read-across, KeratinoSens and log P. The models performed better at predicting human skin sensitization hazard than the murine local lymph node assay (accuracy 88%), any of the alternative methods alone (accuracy 63-79%) or test batteries combining data from the individual methods (accuracy 75%). These results suggest that computational methods are promising tools to effectively identify potential human skin sensitizers without animal testing. Published 2016. This article has been contributed to by US Government employees and their work is in the public domain in the USA.

  13. A Novel Extraction Method Combining Plasma with a Whole-Blood Fraction Shows Excellent Sensitivity and Reproducibility for Patients at High Risk for Invasive Aspergillosis

    PubMed Central

    Springer, Jan; Schloßnagel, Hannes; Heinz, Werner; Doedt, Thomas; Soeller, Rainer; Einsele, Hermann

    2012-01-01

    Diagnosis of invasive aspergillosis (IA) is still a major problem in routine clinical practice. Early diagnosis is essential for a good patient prognosis. PCR is a highly sensitive method for the detection of nucleic acids and could play an important role in improving the diagnosis of fungal infections. Therefore, a novel DNA extraction method, ultraclean production (UCP), was developed allowing purification of both cellular and cell-free circulating fungal DNA. In this prospective study we evaluated the commercially available UCP extraction system and compared it to an in-house system. Sixty-three patients at high risk for IA were screened twice weekly, and DNA extracted by both methods was cross-analyzed, in triplicate, by two different real-time PCR assays. The negative predictive values were high for all methods (94.3 to 100%), qualifying them as screening methods, but the sensitivity and diagnostic odds ratios were higher using the UCP extraction method. Sensitivity ranged from 33.3 to 66.7% with the in-house extracts, compared with 100% for the UCP extraction method. Most of the unclassified patients showed no positive PCR results; however, single-positive PCR replicates were observed in some cases. These can bear clinical relevance but should be interpreted with additional clinical and laboratory data. The PCR assays from the UCP extracts showed greater reproducibility than the in-house method for probable IA patients. The standardized UCP extraction method yielded results superior to those of the in-house method with regard to sensitivity and reproducibility. This was independent of the PCR assay used to detect fungal DNA in the sample extracts. PMID:22593600

  14. Adjoint Sensitivity Analysis for Scale-Resolving Turbulent Flow Solvers

    NASA Astrophysics Data System (ADS)

    Blonigan, Patrick; Garai, Anirban; Diosady, Laslo; Murman, Scott

    2017-11-01

    Adjoint-based sensitivity analysis methods are powerful design tools for engineers who use computational fluid dynamics. In recent years, these engineers have started to use scale-resolving simulations like large-eddy simulations (LES) and direct numerical simulations (DNS), which resolve more scales in complex flows with unsteady separation and jets than the widely-used Reynolds-averaged Navier-Stokes (RANS) methods. However, the conventional adjoint method computes large, unusable sensitivities for scale-resolving simulations, which unlike RANS simulations exhibit the chaotic dynamics inherent in turbulent flows. Sensitivity analysis based on least-squares shadowing (LSS) avoids the issues encountered by conventional adjoint methods, but has a high computational cost even for relatively small simulations. The following talk discusses a more computationally efficient formulation of LSS, "non-intrusive" LSS, and its application to turbulent flows simulated with a discontinuous-Galerkin spectral-element-method LES/DNS solver. Results are presented for the minimal flow unit, a turbulent channel flow with a limited streamwise and spanwise domain.

  15. New parameters in adaptive testing of ferromagnetic materials utilizing magnetic Barkhausen noise

    NASA Astrophysics Data System (ADS)

    Pal'a, Jozef; Ušák, Elemír

    2016-03-01

    A new method of magnetic Barkhausen noise (MBN) measurement, together with optimization of the measured data processing, was tested with respect to non-destructive evaluation of ferromagnetic materials. Using this method, we investigated whether the sensitivity and stability of measurement results can be enhanced by replacing the traditional MBN parameter (root mean square) with a new parameter. In the tested method, a complex set of MBN data from minor hysteresis loops is measured. Afterward, the MBN data are collected into suitably designed matrices, and the MBN parameters with maximum sensitivity to the evaluated variable are sought. The method was verified on plastically deformed steel samples. It was shown that the proposed measuring method and data processing improve the sensitivity to the evaluated variable compared with measuring the traditional MBN parameter. Moreover, we found an MBN parameter that is highly resistant to changes in the applied field amplitude and at the same time noticeably more sensitive to the evaluated variable.

  16. Failure Bounding And Sensitivity Analysis Applied To Monte Carlo Entry, Descent, And Landing Simulations

    NASA Technical Reports Server (NTRS)

    Gaebler, John A.; Tolson, Robert H.

    2010-01-01

    In the study of entry, descent, and landing, Monte Carlo sampling methods are often employed to study the uncertainty in the designed trajectory. The large number of uncertain inputs and outputs, coupled with complicated non-linear models, can make interpretation of the results difficult. Three methods that provide statistical insights are applied to an entry, descent, and landing simulation. The advantages and disadvantages of each method are discussed in terms of the insights gained versus the computational cost. The first method investigated was failure domain bounding, which aims to reduce the computational cost of assessing the failure probability. Next, a variance-based sensitivity analysis was studied for its ability to identify which input variable uncertainty has the greatest impact on the uncertainty of an output. Finally, probabilistic sensitivity analysis is used to calculate certain sensitivities at a reduced computational cost. These methods produce valuable information that identifies critical mission parameters and needs for new technology, but generally at a significant computational cost.

  17. Univariate and bivariate likelihood-based meta-analysis methods performed comparably when marginal sensitivity and specificity were the targets of inference.

    PubMed

    Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H

    2017-03-01

    To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples (thoracic computerized tomography to detect aortic injury, and rapid prescreening of Papanicolaou smears to detect cytological abnormalities) to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity. Bayesian methods fully quantify uncertainty and their ability to incorporate external evidence may be useful for imprecisely estimated parameters. Copyright © 2017 Elsevier Inc. All rights reserved.
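
    As a hedged, single-study illustration of why the within-study likelihood matters (not the authors' meta-analysis code): for a high sensitivity estimated from few diseased subjects, a normal (Wald) approximation gives a symmetric interval that can overshoot 1 and understate uncertainty, while an exact binomial (Clopper-Pearson) interval does not. The counts below are hypothetical.

```python
# Wald (normal approximation) vs exact binomial interval for one study's sensitivity.
import numpy as np
from scipy import stats

tp, fn = 28, 2                       # hypothetical diseased subjects: 28 detected, 2 missed
n = tp + fn
sens = tp / n

# Normal-approximation (Wald) 95% interval on the proportion
se = np.sqrt(sens * (1 - sens) / n)
wald = (sens - 1.96 * se, sens + 1.96 * se)

# Exact binomial (Clopper-Pearson) 95% interval
lo = stats.beta.ppf(0.025, tp, fn + 1)
hi = stats.beta.ppf(0.975, tp + 1, fn)
print(f"sensitivity = {sens:.3f}")
print(f"Wald interval:            ({wald[0]:.3f}, {wald[1]:.3f})")
print(f"Clopper-Pearson interval: ({lo:.3f}, {hi:.3f})")
```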

  18. Optimal observables for multiparameter seismic tomography

    NASA Astrophysics Data System (ADS)

    Bernauer, Moritz; Fichtner, Andreas; Igel, Heiner

    2014-08-01

    We propose a method for the design of seismic observables with maximum sensitivity to a target model parameter class, and minimum sensitivity to all remaining parameter classes. The resulting optimal observables thereby minimize interparameter trade-offs in multiparameter inverse problems. Our method is based on the linear combination of fundamental observables that can be any scalar measurement extracted from seismic waveforms. Optimal weights of the fundamental observables are determined with an efficient global search algorithm. While most optimal design methods assume variable source and/or receiver positions, our method has the flexibility to operate with a fixed source-receiver geometry, making it particularly attractive in studies where the mobility of sources and receivers is limited. In a series of examples we illustrate the construction of optimal observables, and assess the potentials and limitations of the method. The combination of Rayleigh-wave traveltimes in four frequency bands yields an observable with strongly enhanced sensitivity to 3-D density structure. Simultaneously, sensitivity to S velocity is reduced, and sensitivity to P velocity is eliminated. The original three-parameter problem thereby collapses into a simpler two-parameter problem with one dominant parameter. By defining parameter classes to equal earth model properties within specific regions, our approach mimics the Backus-Gilbert method where data are combined to focus sensitivity in a target region. This concept is illustrated using rotational ground motion measurements as fundamental observables. Forcing dominant sensitivity in the near-receiver region produces an observable that is insensitive to the Earth structure at more than a few wavelengths' distance from the receiver. This observable may be used for local tomography with teleseismic data. While our test examples use a small number of well-understood fundamental observables, few parameter classes and a radially symmetric earth model, the method itself does not impose such restrictions. It can easily be applied to large numbers of fundamental observables and parameter classes, as well as to 3-D heterogeneous earth models.

  19. On understanding the relationship between structure in the potential surface and observables in classical dynamics: A functional sensitivity analysis approach

    NASA Astrophysics Data System (ADS)

    Judson, Richard S.; Rabitz, Herschel

    1987-04-01

    The relationship between structure in the potential surface and classical mechanical observables is examined by means of functional sensitivity analysis. Functional sensitivities provide maps of the potential surface, highlighting those regions that play the greatest role in determining the behavior of observables. A set of differential equations for the sensitivities of the trajectory components is derived. These are then solved using a Green's function method. It is found that the sensitivities become singular at the trajectory turning points, with the singularities going as η^(-3/2), where η is the distance from the nearest turning point. The sensitivities are zero outside of the energetically and dynamically allowed region of phase space. A second set of equations is derived from which the sensitivities of observables can be directly calculated. An adjoint Green's function technique is employed, providing an efficient method for numerically calculating these quantities. Sensitivity maps are presented for a simple collinear atom-diatom inelastic scattering problem and for two Henon-Heiles type Hamiltonians modeling intramolecular processes. It is found that the positions of the trajectory caustics in the bound state problem determine regions of the highest potential surface sensitivities. In the scattering problem (which is impulsive, so that "sticky" collisions did not occur), the positions of the turning points of the individual trajectory components determine the regions of high sensitivity. In both cases, these lines of singularities are superimposed on a rich background structure. Most interesting is the appearance of classical interference effects. The interference features in the sensitivity maps occur most noticeably where two or more lines of turning points cross. The important practical motivation for calculating the sensitivities derives from the fact that the potential is a function, implying that any direct attempt to understand how local potential regions affect the behavior of the observables by repeatedly and systematically altering the potential will be prohibitively expensive. The functional sensitivity method enables one to perform this analysis at a fraction of the computational labor required for the direct method.

  20. Experimental study on cross-sensitivity of temperature and vibration of embedded fiber Bragg grating sensors

    NASA Astrophysics Data System (ADS)

    Chen, Tao; Ye, Meng-li; Liu, Shu-liang; Deng, Yan

    2018-03-01

    In view of the principle underlying the occurrence of cross-sensitivity, a series of calibration experiments is carried out to solve the cross-sensitivity problem of embedded fiber Bragg gratings (FBGs) using the reference grating method. Moreover, an ultrasonic-vibration-assisted grinding (UVAG) model is established, and finite element analysis (FEA) is carried out under the monitoring environment of the embedded temperature measurement system. In addition, the related temperature acquisition tests are set up in accordance with the requirements of the reference grating method. Finally, comparative analyses of the simulation and experimental results are performed, and it may be concluded that the reference grating method may be utilized to effectively solve the cross-sensitivity of embedded FBGs.

  1. Resonance-induced sensitivity enhancement method for conductivity sensors

    NASA Technical Reports Server (NTRS)

    Tai, Yu-Chong (Inventor); Shih, Chi-yuan (Inventor); Li, Wei (Inventor); Zheng, Siyang (Inventor)

    2009-01-01

    Methods and systems for improving the sensitivity of a variety of conductivity sensing devices, in particular capacitively-coupled contactless conductivity detectors. A parallel inductor is added to the conductivity sensor. The sensor with the parallel inductor is operated at a resonant frequency of the equivalent circuit model. At the resonant frequency, parasitic capacitances that are either in series or in parallel with the conductance (and possibly a series resistance) are substantially removed from the equivalent circuit, leaving a purely resistive impedance. An appreciably higher sensor sensitivity results. Experimental verification shows that sensitivity improvements of the order of 10,000-fold are possible. Examples of detecting particulates with high precision by application of the apparatus and methods of operation are described.
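
    A minimal sketch (with assumed component values) of the resonance idea: a parasitic capacitance C in parallel with the cell conductance G masks small changes in G, but adding a parallel inductor L and driving at f0 = 1/(2*pi*sqrt(L*C)) cancels the reactive admittance, leaving an essentially resistive path.

```python
# Admittance of G in parallel with a parasitic C and an added inductor L.
import numpy as np

G = 1e-6            # cell conductance, siemens (hypothetical)
C = 10e-12          # parasitic capacitance, farads (hypothetical)
L = 1e-3            # added parallel inductor, henries (hypothetical)

f0 = 1.0 / (2 * np.pi * np.sqrt(L * C))
f = np.array([0.1 * f0, f0, 10 * f0])
w = 2 * np.pi * f

# Total admittance of G || C || L
Y = G + 1j * w * C + 1.0 / (1j * w * L)
for fi, yi in zip(f, Y):
    print(f"f = {fi/1e6:8.3f} MHz  |Y| = {abs(yi):.3e} S  phase = {np.degrees(np.angle(yi)):6.1f} deg")
# At f0 the susceptances cancel: |Y| collapses to G and the phase goes to ~0 deg,
# so small changes in G dominate the measured impedance.
```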

  2. First- and Second-Order Sensitivity Analysis of a P-Version Finite Element Equation Via Automatic Differentiation

    NASA Technical Reports Server (NTRS)

    Hou, Gene

    1998-01-01

    Sensitivity analysis is a technique for determining derivatives of system responses with respect to design parameters. Among many methods available for sensitivity analysis, automatic differentiation has been proven through many applications in fluid dynamics and structural mechanics to be an accurate and easy method for obtaining derivatives. Nevertheless, the method can be computationally expensive and can require a large amount of memory. This project will apply an automatic differentiation tool, ADIFOR, to a p-version finite element code to obtain first- and second-order thermal derivatives. The focus of the study is on the implementation process and the performance of the ADIFOR-enhanced codes for sensitivity analysis in terms of memory requirement, computational efficiency, and accuracy.

  3. Numerical studies of the thermal design sensitivity calculation for a reaction-diffusion system with discontinuous derivatives

    NASA Technical Reports Server (NTRS)

    Hou, Jean W.; Sheen, Jeen S.

    1987-01-01

    The aim of this study is to find a reliable numerical algorithm to calculate thermal design sensitivities of a transient problem with discontinuous derivatives. The thermal system of interest is a transient heat conduction problem related to the curing process of a composite laminate. A logical function which can smoothly approximate the discontinuity is introduced to modify the system equation. Two commonly used methods, the adjoint variable method and the direct differentiation method, are then applied to find the design derivatives of the modified system. The comparisons of numerical results obtained by these two methods demonstrate that the direct differentiation method is a better choice to be used in calculating thermal design sensitivity.

  4. A rapid method for determining salinomycin and monensin sensitivity in Eimeria tenella

    USDA-ARS?s Scientific Manuscript database

    Standard methods of determining the ionophore sensitivity of Eimeria rely on infecting chickens with an isolate or a mixture of Eimeria spp. oocysts in the presence of different anti-coccidial drugs. The purpose of this study was to develop a rapid in vitro method for assessing salinomycin and mone...

  5. A novel in chemico method to detect skin sensitizers in highly diluted reaction conditions.

    PubMed

    Yamamoto, Yusuke; Tahara, Haruna; Usami, Ryota; Kasahara, Toshihiko; Jimbo, Yoshihiro; Hioki, Takanori; Fujita, Masaharu

    2015-11-01

    The direct peptide reactivity assay (DPRA) is a simple and versatile alternative method for the evaluation of skin sensitization that involves the reaction of test chemicals with two peptides. However, this method requires concentrated solutions of test chemicals, and hydrophobic substances may not dissolve at the concentrations required. Furthermore, hydrophobic test chemicals may precipitate when added to the reaction solution. We previously established a high-sensitivity method, the amino acid derivative reactivity assay (ADRA). This method uses novel cysteine (NAC) and novel lysine derivatives (NAL), which were synthesized by introducing a naphthalene ring to the amine group of cysteine and lysine residues. In this study, we modified the ADRA method by reducing the concentration of the test chemicals 100-fold. We investigated the accuracy of skin sensitization predictions made using the modified method, which was designated the ADRA-dilutional method (ADRA-DM). The predictive accuracy of the ADRA-DM for skin sensitization was 90% for 82 test chemicals which were also evaluated via the ADRA, and the predictive accuracy in the ADRA-DM was higher than that in the ADRA and DPRA. Furthermore, no precipitation of test compounds was observed at the initiation of the ADRA-DM reaction. These results show that the ADRA-DM allowed the use of test chemicals at concentrations two orders of magnitude lower than that possible with the ADRA. In addition, ADRA-DM does not have the restrictions on test compound solubility that were a major problem with the DPRA. Therefore, the ADRA-DM is a versatile and useful method. Copyright © 2015 John Wiley & Sons, Ltd.

  6. Method of controlled reduction of nitroaromatics by enzymatic reaction with oxygen sensitive nitroreductase enzymes

    DOEpatents

    Shah, Manish M.; Campbell, James A.

    1998-01-01

    A method for the controlled reduction of nitroaromatic compounds such as nitrobenzene and 2,4,6-trinitrotoluene by enzymatic reaction with oxygen sensitive nitroreductase enzymes, such as ferredoxin NADP oxidoreductase.

  7. [Multiplex real-time PCR method for rapid detection of Marburg virus and Ebola virus].

    PubMed

    Yang, Yu; Bai, Lin; Hu, Kong-Xin; Yang, Zhi-Hong; Hu, Jian-Ping; Wang, Jing

    2012-08-01

    Marburg virus and Ebola virus cause acute infections with high case fatality rates. A rapid, sensitive detection method was established to detect Marburg virus and Ebola virus by multiplex real-time fluorescence quantitative PCR. Primers and TaqMan probes were designed from highly conserved sequences of Marburg virus and Ebola virus identified through whole-genome sequence alignment, with the TaqMan probes labeled with FAM and Texas Red, and the sensitivity of the multiplex real-time quantitative PCR assay was optimized by evaluating different concentrations of primers and probes. We have developed a real-time PCR method with a sensitivity of 30.5 copies/microl for the Marburg virus positive plasmid and 28.6 copies/microl for the Ebola virus positive plasmid. Japanese encephalitis virus, yellow fever virus and dengue virus were used to examine the specificity. The multiplex real-time PCR assay provides a sensitive, reliable and efficient method to detect Marburg virus and Ebola virus simultaneously.

  8. Variation in sensitivity, absorption and density of the central rod distribution with eccentricity.

    PubMed

    Tornow, R P; Stilling, R

    1998-01-01

    To assess the human rod photopigment distribution and sensitivity with high spatial resolution within the central +/-15 degrees and to compare the results of pigment absorption, sensitivity and rod density distribution (number of rods per square degree). Rod photopigment density distribution was measured with imaging densitometry using a modified Rodenstock scanning laser ophthalmoscope. Dark-adapted sensitivity profiles were measured with green stimuli (17' arc diameter, 1 degree spacing) using a Tübingen manual perimeter. Sensitivity profiles were plotted on a linear scale and rod photopigment optical density distribution profiles were converted to absorption profiles of the rod photopigment layer. Both the absorption profile of the rod photopigment and the linear sensitivity profile for green stimuli show a minimum at the foveal center and increase steeply with eccentricity. The variation with eccentricity corresponds to the rod density distribution. Rod photopigment absorption profiles, retinal sensitivity profiles, and the rod density distribution are linearly related within the central +/-15 degrees. This is in agreement with theoretical considerations. Both methods, imaging retinal densitometry using a scanning laser ophthalmoscope and dark-adapted perimetry with small green stimuli, are useful for assessing the central rod distribution and sensitivity. However, at present, both methods have limitations. Suggestions for improving the reliability of both methods are given.

  9. MethylMeter®: bisulfite-free quantitative and sensitive DNA methylation profiling and mutation detection in FFPE samples

    PubMed Central

    McCarthy, David; Pulverer, Walter; Weinhaeusel, Andreas; Diago, Oscar R; Hogan, Daniel J; Ostertag, Derek; Hanna, Michelle M

    2016-01-01

    Aim: Development of a sensitive method for DNA methylation profiling and associated mutation detection in clinical samples. Materials & methods: Formalin-fixed and paraffin-embedded tumors received by clinical laboratories often contain insufficient DNA for analysis with bisulfite or methylation sensitive restriction enzymes-based methods. To increase sensitivity, methyl-CpG DNA capture and Coupled Abscription PCR Signaling detection were combined in a new assay, MethylMeter®. Gliomas were analyzed for MGMT methylation, glioma CpG island methylator phenotype and IDH1 R132H. Results: MethylMeter had 100% assay success rate measuring all five biomarkers in formalin-fixed and paraffin-embedded tissue. MGMT methylation results were supported by survival and mRNA expression data. Conclusion: MethylMeter is a sensitive and quantitative method for multitarget DNA methylation profiling and associated mutation detection. The MethylMeter-based GliomaSTRAT assay measures methylation of four targets and one mutation to simultaneously grade gliomas and predict their response to temozolomide. This information is clinically valuable in management of gliomas. PMID:27337298

  10. DNP enhanced NMR with flip-back recovery

    NASA Astrophysics Data System (ADS)

    Björgvinsdóttir, Snædís; Walder, Brennan J.; Pinon, Arthur C.; Yarava, Jayasubba Reddy; Emsley, Lyndon

    2018-03-01

    DNP methods can provide significant sensitivity enhancements in magic angle spinning solid-state NMR, but in systems with long polarization build-up times, long recycling periods are required to optimize sensitivity. We show how the sensitivity of such experiments can be improved by the classic flip-back method to recover bulk proton magnetization following continuous wave proton heteronuclear decoupling. Experiments were performed on formulations with characteristic build-up times spanning two orders of magnitude: a bulk o-terphenyl glass doped with the BDPA radical and microcrystalline samples of theophylline, L-histidine monohydrochloride monohydrate, and salicylic acid impregnated by incipient wetness. For these systems, addition of flip-back is simple, improves the sensitivity beyond that provided by modern heteronuclear decoupling methods such as SPINAL-64, and provides optimal sensitivity at shorter recycle delays. We show how to acquire DNP enhanced 2D refocused CP-INADEQUATE spectra with flip-back recovery, and demonstrate that the flip-back recovery method is particularly useful in rapid recycling regimes. We also report Overhauser effect DNP enhancements of over 70 at 592.6 GHz/900 MHz.
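
    A small sketch, under standard signal-averaging assumptions rather than anything stated in the paper, of why shorter build-up times reward faster recycling: with exponential polarization build-up (time constant T_B), sensitivity per unit experiment time goes as (1 - exp(-tau/T_B)) / sqrt(tau), which peaks near tau of roughly 1.26 T_B. The build-up times used below are hypothetical.

```python
# Sensitivity per square root of experiment time versus recycle delay.
import numpy as np

def sens_per_sqrt_time(tau, T_B):
    return (1.0 - np.exp(-tau / T_B)) / np.sqrt(tau)

for T_B in (2.0, 20.0, 200.0):                       # hypothetical build-up times, seconds
    tau = np.linspace(0.05 * T_B, 5 * T_B, 10_000)
    s = sens_per_sqrt_time(tau, T_B)
    best = tau[np.argmax(s)]
    print(f"T_B = {T_B:6.1f} s  optimal recycle delay ~ {best:7.2f} s (~ {best/T_B:.2f} T_B)")
```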

  11. Accuracy of the domain method for the material derivative approach to shape design sensitivities

    NASA Technical Reports Server (NTRS)

    Yang, R. J.; Botkin, M. E.

    1987-01-01

    Numerical accuracy for the boundary and domain methods of the material derivative approach to shape design sensitivities is investigated through the use of mesh refinement. The results show that the domain method is generally more accurate than the boundary method, using the finite element technique. It is also shown that the domain method is equivalent, under certain assumptions, to the implicit differentiation approach not only theoretically but also numerically.

  12. Computational aspects of sensitivity calculations in linear transient structural analysis. Ph.D. Thesis - Virginia Polytechnic Inst. and State Univ.

    NASA Technical Reports Server (NTRS)

    Greene, William H.

    1990-01-01

    A study was performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal of the study was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semi-analytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. In several cases this fixed mode approach resulted in very poor approximations of the stress sensitivities. Almost all of the original modes were required for an accurate sensitivity and for small numbers of modes, the accuracy was extremely poor. To overcome this poor accuracy, two semi-analytical techniques were developed. The first technique accounts for the change in eigenvectors through approximate eigenvector derivatives. The second technique applies the mode acceleration method of transient analysis to the sensitivity calculations. Both result in accurate values of the stress sensitivities with a small number of modes and much lower computational costs than if the vibration modes were recalculated and then used in an overall finite difference method.

  13. Application of global sensitivity analysis methods to Takagi-Sugeno-Kang rainfall-runoff fuzzy models

    NASA Astrophysics Data System (ADS)

    Jacquin, A. P.; Shamseldin, A. Y.

    2009-04-01

    This study analyses the sensitivity of the parameters of Takagi-Sugeno-Kang rainfall-runoff fuzzy models previously developed by the authors. These models can be classified in two types, where the first type is intended to account for the effect of changes in catchment wetness and the second type incorporates seasonality as a source of non-linearity in the rainfall-runoff relationship. The sensitivity analysis is performed using two global sensitivity analysis methods, namely Regional Sensitivity Analysis (RSA) and Sobol's Variance Decomposition (SVD). In general, the RSA method has the disadvantage of not being able to detect sensitivities arising from parameter interactions. By contrast, the SVD method is suitable for analysing models where the model response surface is expected to be affected by interactions at a local scale and/or local optima, such as the case of the rainfall-runoff fuzzy models analysed in this study. The data of six catchments from different geographical locations and sizes are used in the sensitivity analysis. The sensitivity of the model parameters is analysed in terms of two measures of goodness of fit, assessing the model performance from different points of view. These measures are the Nash-Sutcliffe criterion and the index of volumetric fit. The results of the study show that the sensitivity of the model parameters depends on both the type of non-linear effects (i.e. changes in catchment wetness or seasonality) that dominates the catchment's rainfall-runoff relationship and the measure used to assess the model performance. Acknowledgements: This research was supported by FONDECYT, Research Grant 11070130. We would also like to express our gratitude to Prof. Kieran M. O'Connor from the National University of Ireland, Galway, for providing the data used in this study.
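
    A hedged sketch of the Regional Sensitivity Analysis (RSA) step described here: sample the parameters, split the runs into "behavioural" and "non-behavioural" by a performance threshold, and flag parameters whose two marginal distributions differ most (Kolmogorov-Smirnov distance). The toy model, threshold, and parameter ranges are illustrative assumptions, not the fuzzy rainfall-runoff models of the study.

```python
# Regional Sensitivity Analysis on a toy model (all names and values assumed).
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
obs = 2.0                                       # hypothetical observed response

def toy_model(a, b, c):                         # stand-in for a rainfall-runoff model
    return a * 1.5 + 0.1 * b + rng.normal(0, 0.05, size=a.shape) * c

n = 5_000
params = {"a": rng.uniform(0, 2, n), "b": rng.uniform(0, 2, n), "c": rng.uniform(0, 2, n)}
sim = toy_model(params["a"], params["b"], params["c"])
error = np.abs(sim - obs)
behavioural = error < np.quantile(error, 0.2)   # best 20% of runs

for name, values in params.items():
    stat = ks_2samp(values[behavioural], values[~behavioural]).statistic
    print(f"{name}: KS distance between behavioural/non-behavioural samples = {stat:.2f}")
# Larger KS distance -> the parameter constrains model performance more strongly;
# note that, as the abstract cautions, RSA cannot detect pure interaction effects.
```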

  14. Staying theoretically sensitive when conducting grounded theory research.

    PubMed

    Reay, Gudrun; Bouchal, Shelley Raffin; A Rankin, James

    2016-09-01

    Background: Grounded theory (GT) is founded on the premise that underlying social patterns can be discovered and conceptualised into theories. The method and need for theoretical sensitivity are best understood in the historical context in which GT was developed. Theoretical sensitivity entails entering the field with no preconceptions, so as to remain open to the data and the emerging theory. Investigators also read literature from other fields to understand various ways to construct theories. Aim: To explore the concept of theoretical sensitivity from a classical GT perspective, and discuss the ontological and epistemological foundations of GT. Discussion: Difficulties in remaining theoretically sensitive throughout research are discussed and illustrated with examples. Emergence - the idea that theory and substance will emerge from the process of comparing data - and staying open to the data are emphasised. Conclusion: Understanding theoretical sensitivity as an underlying guiding principle of GT helps the researcher make sense of important concepts, such as delaying the literature review, emergence and the constant comparative method (simultaneous collection, coding and analysis of data). Implications for practice: Theoretical sensitivity and adherence to the GT research method allow researchers to discover theories that can bridge the gap between theory and practice.

  15. Isolation of uv-sensitive variants of human FL cells by a viral suicide method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shiomi, T.; Sato, K.

    A new method (the viral suicide method) for the isolation of uv-sensitive mutants is described. Colonies of mutagenized human FL cells were infected with uv-irradiated Herpes simplex viruses, and surviving colonies that seemed to be deficient in host cell reactivation (HCR) were examined for their uv sensitivity. Nineteen of 238 clones examined were sensitive to uv irradiation at the time of isolation. After recloning, four of these clones have been studied, and two of them (UVS-1 and UVS-2) are stable in their uv sensitivity for 4 months in culture. The uv sensitivities of UVS-1, UVS-2, and the parental FL cells are as follows: the extrapolation numbers (n) are 2.2, 2.1, and 1.8 and the mean lethal doses (D0) are 2.9, 3.7, and 7.8 J/m^2 for UVS-1, UVS-2, and the parental FL cells, respectively. They are no more sensitive than FL cells to x-irradiation. The HCR ability of UVS-2 cells is apparently lower than that of FL cells, whereas UVS-1 cells are the same as FL cells in this ability.

  16. An investigation of using an RQP based method to calculate parameter sensitivity derivatives

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1989-01-01

    Estimation of the sensitivity of problem functions with respect to problem variables forms the basis for many of our modern day algorithms for engineering optimization. The most common application of problem sensitivities has been in the calculation of objective function and constraint partial derivatives for determining search directions and optimality conditions. A second form of sensitivity analysis, parameter sensitivity, has also become an important topic in recent years. By parameter sensitivity, researchers refer to the estimation of changes in the modeling functions and current design point due to small changes in the fixed parameters of the formulation. Methods for calculating these derivatives have been proposed by several authors (Armacost and Fiacco 1974, Sobieski et al 1981, Schmit and Chang 1984, and Vanderplaats and Yoshida 1985). Two drawbacks to estimating parameter sensitivities by current methods have been: (1) the need for second order information about the Lagrangian at the current point, and (2) the estimates assume no change in the active set of constraints. The first of these two problems is addressed here and a new algorithm is proposed that does not require explicit calculation of second order information.

  17. A Comparison of the Capability of Sensitivity Level 3 and Sensitivity Level 4 Fluorescent Penetrants to Detect Fatigue Cracks in Various Metals

    NASA Technical Reports Server (NTRS)

    Parker, Bradford H.

    2011-01-01

    In April 2008, NASA-STD-5009 established a requirement that only sensitivity level 4 penetrants are acceptable for NASA Standard Level liquid penetrant inspections. Having NASA contractors change existing processes or perform demonstration tests to certify sensitivity level 3 penetrants posed a potentially huge cost to the Agency. This study was conducted to directly compare the probability of detection (POD) of sensitivity level 3 and level 4 penetrants using both Method A and Method D inspection processes. POD demonstration tests were performed on 6061-Al, Haynes 188 and Ti-6Al-4V crack panel sets. The study results strongly support the conclusion that sensitivity level 3 penetrants are acceptable for NASA Standard Level inspections.

  18. Comparison of Non-Culture-Based Methods for Detection of Systemic Fungal Infections, with an Emphasis on Invasive Candida Infections

    PubMed Central

    White, P. Lewis; Archer, Alice E.; Barnes, Rosemary A.

    2005-01-01

    The accepted limitations associated with classic culture techniques for the diagnosis of invasive fungal infections have led to the emergence of many non-culture-based methods. With superior sensitivities and quicker turnaround times, non-culture-based methods may aid the diagnosis of invasive fungal infections. In this review of the diagnostic service, we assessed the performances of two antigen detection techniques (enzyme-linked immunosorbent assay [ELISA] and latex agglutination) with a molecular method for the detection of invasive Candida infection and invasive aspergillosis. The specificities for all three assays were high (≥97%), although the Candida PCR method had enhanced sensitivity over both ELISA and latex agglutination with values of 95%, 75%, and 25%, respectively. However, calculating significant sensitivity values for the Aspergillus detection methods was not feasible due to a low number of proven/probable cases. Despite enhanced sensitivity, the PCR method failed to detect nucleic acid in a probable case of invasive Candida infection that was detected by ELISA. In conclusion, both PCR and ELISA techniques should be used in unison to aid the detection of invasive fungal infections. PMID:15872239

  19. Aerodynamic design optimization using sensitivity analysis and computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Eleshaky, Mohamed E.

    1991-01-01

    A new and efficient method is presented for aerodynamic design optimization, which is based on a computational fluid dynamics (CFD)-sensitivity analysis algorithm. The method is applied to design a scramjet-afterbody configuration for an optimized axial thrust. The Euler equations are solved for the inviscid analysis of the flow, which in turn provides the objective function and the constraints. The CFD analysis is then coupled with the optimization procedure that uses a constrained minimization method. The sensitivity coefficients, i.e. gradients of the objective function and the constraints, needed for the optimization are obtained using a quasi-analytical method rather than the traditional brute force method of finite difference approximations. During the one-dimensional search of the optimization procedure, an approximate flow analysis (predicted flow) based on a first-order Taylor series expansion is used to reduce the computational cost. Finally, the sensitivity of the optimum objective function to various design parameters, which are kept constant during the optimization, is computed to predict new optimum solutions. The flow analysis of the demonstrative example are compared with the experimental data. It is shown that the method is more efficient than the traditional methods.

  20. Sensitivity of diagnostic methods for Mansonella ozzardi microfilariae detection in the Brazilian Amazon Region

    PubMed Central

    Medeiros, Jansen Fernandes; Fontes, Gilberto; do Nascimento, Vilma Lopes; Rodrigues, Moreno; Cohen, Jacob; de Andrade, Edmar Vaz; Pessoa, Felipe Arley Costa; Martins, Marilaine

    2018-01-01

    BACKGROUND The human filarial worm Mansonella ozzardi is highly endemic in the large tributaries of the Amazon River. This infection is still highly neglected, and diagnostic results can be falsely negative when microfilariae levels are low. OBJECTIVES This study investigated the frequency of individuals with M. ozzardi in riverine communities in Coari municipality, Brazilian Amazon. METHODS Different diagnostic methods including polymerase chain reaction (PCR), blood polycarbonate membrane filtration (PCMF), Knott's method (Knott), digital thick blood smears (DTBS) and venous thick blood smears (VTBS) were used to compare sensitivity and specificity among the methods. Data were analysed using PCMF and Bayesian latent class models (BLCM) as the gold standards. We used BLCM to calculate the prevalence of mansonelliasis based on the results of five diagnostic methods. FINDINGS The prevalence of mansonelliasis was 35.4% by PCMF and 30.1% by BLCM. PCR and Knott methods both possessed high sensitivity. Sensitivity relative to PCMF was 98.5% [95% confidence interval (CI): 92.0 - 99.7] for PCR and 83.5% (95% CI: 72.9 - 90.5) for Knott. Sensitivity derived by BLCM was 100% (95% CI 93.7 - 100) for PCMF, 100% (95% CI: 93.7 - 100) for PCR and 98.3% (95% CI: 90.6 - 99.9) for Knott. The odds ratio of being diagnosed as microfilaremic increased with age but did not differ between genders. Microfilariae loads were higher in subjects aged 30 - 45 and 45 - 60 years. MAIN CONCLUSIONS PCMF and PCR were the best methods to assess the prevalence of mansonelliasis in our samples. As such, using these methods could yield higher prevalence estimates for mansonelliasis in this region than the most commonly used method (i.e., thick blood smears). PMID:29412356

  1. [The diagnostic value of ultrasonic elastography and ultrasonography comprehensive score in cervical lesions].

    PubMed

    Lu, R; Xiao, Y

    2017-07-18

    Objective: To evaluate the clinical value of ultrasonic elastography and the ultrasonography comprehensive scoring method in the diagnosis of cervical lesions. Methods: A total of 116 patients were selected from the Department of Gynecology of the first hospital affiliated with Central South University from March 2014 to September 2015. All of the lesions were preoperatively examined by Doppler ultrasound and elastography. The elasticity score was determined by a 5-point scoring method. Calculation of the strain ratio was based on a comparison of the average strain measured in the lesion with the adjacent tissue of the same depth, size, and shape. All these ultrasonic parameters were quantified and added to arrive at ultrasonography comprehensive scores. Using surgical pathology as the gold standard, the sensitivity, specificity, and accuracy of Doppler ultrasound, the elasticity score and strain ratio methods, and the ultrasonography comprehensive scoring method were comparatively analyzed. Results: (1) The sensitivity, specificity, and accuracy of Doppler ultrasound in diagnosing cervical lesions were 82.89% (63/76), 85.0% (34/40), and 83.62% (97/116), respectively. (2) The sensitivity, specificity, and accuracy of the elasticity score method were 77.63% (59/76), 82.5% (33/40), and 79.31% (92/116), respectively; the sensitivity, specificity, and accuracy of the strain ratio measurement method were 84.21% (64/76), 87.5% (35/40), and 85.34% (99/116), respectively. (3) The sensitivity, specificity, and accuracy of the ultrasonography comprehensive scoring method were 90.79% (69/76), 92.5% (37/40), and 91.38% (106/116), respectively. Conclusion: (1) Ultrasonic elastography has clear diagnostic value in cervical lesions, and strain ratio measurement can be more objective than the elasticity score method. (2) The combined application of the ultrasonography comprehensive scoring method, ultrasonic elastography and conventional sonography was more accurate than any single parameter.
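
    A minimal sketch of how the quoted diagnostic figures follow from a 2 x 2 table of true and false positives and negatives, using the comprehensive-score counts reported above; the helper function itself is an assumed convenience, not from the paper.

```python
# Sensitivity, specificity and accuracy from confusion counts.
def diagnostic_metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)         # detected malignant / all malignant
    specificity = tn / (tn + fp)         # cleared benign / all benign
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Comprehensive ultrasonography score: 69/76 malignant detected, 37/40 benign cleared
sens, spec, acc = diagnostic_metrics(tp=69, fn=76 - 69, tn=37, fp=40 - 37)
print(f"sensitivity = {sens:.2%}, specificity = {spec:.2%}, accuracy = {acc:.2%}")
```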

  2. A Pilot Study of a Picture- and Audio-Assisted Self-Interviewing Method (PIASI) for the Study of Sensitive Questions on HIV in the Field

    ERIC Educational Resources Information Center

    Aarnio, Pauliina; Kulmala, Teija

    2016-01-01

    Self-interview methods such as audio computer-assisted self-interviewing (ACASI) are used to improve the accuracy of interview data on sensitive topics in large trials. Small field studies on sensitive topics would benefit from methodological alternatives. In a study on male involvement in antenatal HIV testing in a largely illiterate population…

  3. Benchmark On Sensitivity Calculation (Phase III)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ivanova, Tatiana; Laville, Cedric; Dyrda, James

    2012-01-01

    The sensitivities of the keff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods.

  4. Overview of Sensitivity Analysis and Shape Optimization for Complex Aerodynamic Configurations

    NASA Technical Reports Server (NTRS)

    Newman, Perry A.; Newman, James C., III; Barnwell, Richard W.; Taylor, Arthur C., III; Hou, Gene J.-W.

    1998-01-01

    This paper presents a brief overview of some of the more recent advances in steady aerodynamic shape-design sensitivity analysis and optimization, based on advanced computational fluid dynamics. The focus here is on those methods particularly well-suited to the study of geometrically complex configurations and their potentially complex associated flow physics. When nonlinear state equations are considered in the optimization process, difficulties are found in the application of sensitivity analysis. Some techniques for circumventing such difficulties are currently being explored and are included here. Attention is directed to methods that utilize automatic differentiation to obtain aerodynamic sensitivity derivatives for both complex configurations and complex flow physics. Various examples of shape-design sensitivity analysis for unstructured-grid computational fluid dynamics algorithms are demonstrated for different formulations of the sensitivity equations. Finally, the use of advanced, unstructured-grid computational fluid dynamics in multidisciplinary analyses and multidisciplinary sensitivity analyses within future optimization processes is recommended and encouraged.

  5. Sensitivity ranking for freshwater invertebrates towards hydrocarbon contaminants.

    PubMed

    Gerner, Nadine V; Cailleaud, Kevin; Bassères, Anne; Liess, Matthias; Beketov, Mikhail A

    2017-11-01

    Hydrocarbons are of utmost economic importance but may also cause substantial ecological impacts due to accidents or inadequate transportation and use. Currently, freshwater biomonitoring methods lack an indicator that can unequivocally reflect the impacts caused by hydrocarbons while being independent of the effects of other stressors. The aim of the present study was to develop a sensitivity ranking for freshwater invertebrates towards hydrocarbon contaminants, which can be used in hydrocarbon-specific bioindicators. We employed the Relative Sensitivity method and developed the sensitivity ranking S_hydrocarbons based on literature ecotoxicological data supplemented with rapid and mesocosm test results. A first validation of the sensitivity ranking based on an earlier field study has been conducted and revealed the S_hydrocarbons ranking to be promising for application in sensitivity-based indicators. Thus, the first results indicate that the ranking can serve as the core component of future hydrocarbon-specific and sensitivity-trait-based bioindicators.

  6. Development of LLNA:DAE: a new local lymph node assay that includes the elicitation phase, discriminates borderline-positive chemicals, and is useful for cross-sensitization testing.

    PubMed

    Yamashita, Kunihiko; Shinoda, Shinsuke; Hagiwara, Saori; Itagaki, Hiroshi

    2014-02-01

    We developed a new local lymph node assay (LLNA) that includes the elicitation phase, termed LLNA:DAE, for discrimination of borderline-positive chemicals as classified by the LLNA modified by Daicel based on ATP content (LLNA:DA) and for cross-sensitization testing. Although the LLNA:DA method could help identify skin sensitizers, some skin irritants classified as non-sensitizers by the LLNA were classified as borderline positive. In addition, evaluation of the cross-sensitization potential between chemicals was not possible. In the LLNA:DAE procedure, mice in the test group received four applications of chemicals on the dorsum of the right ear for induction and one application on the dorsum of the left ear for elicitation. Mice in the control group received one chemical application on the dorsum of the left ear. We evaluated the sensitizing potential by comparing the weights of the lymph nodes from the left ears between the two groups. The results of using the LLNA:DAE method to examine 24 chemicals, which contained borderline-positive chemicals, were consistent with those from the LLNA method, except for nickel chloride (NiCl2). Two chemical pairs, 2,4-dinitrochlorobenzene (DNCB) with 2,4-dinitrofluorobenzene (DNFB) and hydroquinone (HQ) with p-benzoquinone (p-BQ), showed clear cross-sensitization with each other, while another chemical pair, DNFB with hexylcinnamic aldehyde (HCA), did not. Taken together, our results suggest that the LLNA:DAE method is useful for discriminating borderline-positive chemicals and for determining chemical cross-sensitization.

  7. A Global Sensitivity Analysis Method on Maximum Tsunami Wave Heights to Potential Seismic Source Parameters

    NASA Astrophysics Data System (ADS)

    Ren, Luchuan

    2015-04-01

    It is obvious that the uncertainties of the maximum tsunami wave heights in offshore areas arise partly from uncertainties of the potential seismic tsunami source parameters. A global sensitivity analysis method for the maximum tsunami wave heights with respect to the potential seismic source parameters is put forward in this paper. The tsunami wave heights are calculated by COMCOT (the Cornell Multi-grid Coupled Tsunami Model), on the assumption that an earthquake with magnitude MW8.0 occurred at the northern fault segment along the Manila Trench and triggered a tsunami in the South China Sea. We select the simulated results of maximum tsunami wave heights at specific sites in the offshore area to verify the validity of the method proposed in this paper. For ranking the importance order of the uncertainties of the potential seismic source parameters (the earthquake magnitude, the focal depth, the strike angle, dip angle, slip angle, etc.) in generating uncertainties of the maximum tsunami wave heights, we chose the Morris method to analyze the sensitivity of the maximum tsunami wave heights to the aforementioned parameters, and give several qualitative descriptions of their linear or nonlinear effects on the maximum tsunami wave heights. We then quantitatively analyze the sensitivity of the maximum tsunami wave heights to these parameters and the interaction effects among these parameters by means of the extended FAST method. The results show that the maximum tsunami wave heights are very sensitive to the earthquake magnitude, followed successively by the epicenter location, the strike angle and the dip angle; the interaction effects between the sensitive parameters are very obvious at specific sites in the offshore area; and there exist differences in the importance order in generating uncertainties of the maximum tsunami wave heights for the same group of parameters at different sites in the offshore area. These results are helpful for a deeper understanding of the relationship between the tsunami wave heights and the seismic tsunami source parameters. Keywords: Global sensitivity analysis; Tsunami wave height; Potential seismic tsunami source parameter; Morris method; Extended FAST method
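
    A compact sketch of the Morris elementary-effects screening step described above, applied to a toy stand-in for the tsunami model; the wave-height function, parameter normalization, and grid are purely illustrative assumptions.

```python
# Morris elementary-effects screening on a toy three-parameter model.
import numpy as np

rng = np.random.default_rng(4)

def toy_wave_height(x):
    # stand-in for the tsunami model: normalized magnitude, focal depth, strike angle
    mag, depth, strike = x
    return 3.0 * mag**2 - 1.5 * depth + 0.2 * np.sin(2 * np.pi * strike) + mag * depth

k, r, delta = 3, 50, 0.5                      # number of factors, trajectories, grid step
effects = [[] for _ in range(k)]
for _ in range(r):
    x = rng.integers(0, 3, size=k) * 0.5      # random start on the grid {0, 0.5, 1}
    y0 = toy_wave_height(x)
    for i in rng.permutation(k):              # move one factor at a time
        x_new = x.copy()
        x_new[i] = x[i] + delta if x[i] + delta <= 1.0 else x[i] - delta
        y1 = toy_wave_height(x_new)
        effects[i].append((y1 - y0) / (x_new[i] - x[i]))
        x, y0 = x_new, y1

for name, ee in zip(["magnitude", "focal depth", "strike angle"], effects):
    ee = np.asarray(ee)
    print(f"{name:13s}  mu* = {np.mean(np.abs(ee)):5.2f}  sigma = {np.std(ee):5.2f}")
# Large mu* flags an influential factor; a large sigma relative to mu* flags
# nonlinearity or interactions, mirroring the qualitative screening step above.
```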

  8. Direct differentiation of the quasi-incompressible fluid formulation of fluid-structure interaction using the PFEM

    NASA Astrophysics Data System (ADS)

    Zhu, Minjie; Scott, Michael H.

    2017-07-01

    Accurate and efficient response sensitivities for fluid-structure interaction (FSI) simulations are important for assessing the uncertain response of coastal and off-shore structures to hydrodynamic loading. To compute gradients efficiently via the direct differentiation method (DDM) for the fully incompressible fluid formulation, approximations of the sensitivity equations are necessary, leading to inaccuracies of the computed gradients when the geometry of the fluid mesh changes rapidly between successive time steps or the fluid viscosity is nonzero. To maintain accuracy of the sensitivity computations, a quasi-incompressible fluid is assumed for the response analysis of FSI using the particle finite element method and DDM is applied to this formulation, resulting in linearized equations for the response sensitivity that are consistent with those used to compute the response. Both the response and the response sensitivity can be solved using the same unified fractional step method. FSI simulations show that although the response using the quasi-incompressible and incompressible fluid formulations is similar, only the quasi-incompressible approach gives accurate response sensitivity for viscous, turbulent flows regardless of time step size.

  9. A 2D MTF approach to evaluate and guide dynamic imaging developments.

    PubMed

    Chao, Tzu-Cheng; Chung, Hsiao-Wen; Hoge, W Scott; Madore, Bruno

    2010-02-01

    As the number and complexity of partially sampled dynamic imaging methods continue to increase, reliable strategies to evaluate performance may prove most useful. In the present work, an analytical framework to evaluate given reconstruction methods is presented. A perturbation algorithm allows the proposed evaluation scheme to perform robustly without requiring knowledge about the inner workings of the method being evaluated. A main output of the evaluation process consists of a two-dimensional modulation transfer function, an easy-to-interpret visual rendering of a method's ability to capture all combinations of spatial and temporal frequencies. Approaches to evaluate noise properties and artifact content at all spatial and temporal frequencies are also proposed. One fully sampled phantom and three fully sampled cardiac cine datasets were subsampled (R = 4 and 8) and reconstructed with the different methods tested here. A hybrid method, which combines the main advantageous features observed in our assessments, was proposed and tested in a cardiac cine application, with acceleration factors of 3.5 and 6.3 (skip factors of 4 and 8, respectively). This approach combines features from methods such as k-t sensitivity encoding, unaliasing by Fourier encoding the overlaps in the temporal dimension-sensitivity encoding, generalized autocalibrating partially parallel acquisition, sensitivity profiles from an array of coils for encoding and reconstruction in parallel, self, hybrid referencing with unaliasing by Fourier encoding the overlaps in the temporal dimension and generalized autocalibrating partially parallel acquisition, and generalized autocalibrating partially parallel acquisition-enhanced sensitivity maps for sensitivity encoding reconstructions.
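
    A toy sketch of the black-box "2D MTF" idea sketched above: probe a reconstruction with a unit perturbation at each spatial/temporal frequency and record how much of that frequency survives. The "reconstruction" here is just a sliding-window temporal average standing in for view sharing; the operator, matrix sizes and window are all assumptions, not the authors' framework.

```python
# Probe-based 2D modulation transfer function for a toy reconstruction operator.
import numpy as np

ny, nt, win = 32, 32, 4

def reconstruct(frames):
    # toy reconstruction: average each frame with its (win - 1) predecessors, cyclically
    out = np.zeros_like(frames)
    for s in range(win):
        out += np.roll(frames, s, axis=1)
    return out / win

y = np.arange(ny)[:, None]
t = np.arange(nt)[None, :]
mtf = np.zeros((ny, nt))
for ky in range(ny):
    for f in range(nt):
        probe = np.exp(2j * np.pi * (ky * y / ny + f * t / nt))   # one spatio-temporal frequency
        rec = reconstruct(probe)
        # response at the probed frequency = projection of the output onto the probe
        mtf[ky, f] = np.abs(np.vdot(probe, rec)) / probe.size
print("temporal DC fully preserved:", np.isclose(mtf[0, 0], 1.0))
print("highest temporal frequency attenuation:", mtf[0, nt // 2].round(2))
```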

  10. Developing Bayesian adaptive methods for estimating sensitivity thresholds (d′) in Yes-No and forced-choice tasks

    PubMed Central

    Lesmes, Luis A.; Lu, Zhong-Lin; Baek, Jongsoo; Tran, Nina; Dosher, Barbara A.; Albright, Thomas D.

    2015-01-01

    Motivated by Signal Detection Theory (SDT), we developed a family of novel adaptive methods that estimate the sensitivity threshold—the signal intensity corresponding to a pre-defined sensitivity level (d′ = 1)—in Yes-No (YN) and Forced-Choice (FC) detection tasks. Rather than focus stimulus sampling to estimate a single level of %Yes or %Correct, the current methods sample psychometric functions more broadly, to concurrently estimate sensitivity and decision factors, and thereby estimate thresholds that are independent of decision confounds. Developed for four tasks—(1) simple YN detection, (2) cued YN detection, which cues the observer's response state before each trial, (3) rated YN detection, which incorporates a Not Sure response, and (4) FC detection—the qYN and qFC methods yield sensitivity thresholds that are independent of the task's decision structure (YN or FC) and/or the observer's subjective response state. Results from simulation and psychophysics suggest that 25 trials (and sometimes less) are sufficient to estimate YN thresholds with reasonable precision (s.d. = 0.10–0.15 decimal log units), but more trials are needed for FC thresholds. When the same subjects were tested across tasks of simple, cued, rated, and FC detection, adaptive threshold estimates exhibited excellent agreement with the method of constant stimuli (MCS), and with each other. These YN adaptive methods deliver criterion-free thresholds that have previously been exclusive to FC methods. PMID:26300798
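
    For readers unfamiliar with the sensitivity measure being thresholded, a minimal sketch of the standard SDT computation of d' from Yes-No data follows; the trial counts are invented, and this is the basic estimator, not the adaptive qYN/qFC procedure itself.

      from scipy.stats import norm

      hits, misses = 42, 8                   # signal trials (invented counts)
      false_alarms, correct_rej = 12, 38     # noise trials
      hit_rate = hits / (hits + misses)
      fa_rate = false_alarms / (false_alarms + correct_rej)
      d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)             # sensitivity
      criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))  # decision factor
      print(f"d' = {d_prime:.2f}, criterion c = {criterion:.2f}")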

  11. Global Sensitivity Analysis for Process Identification under Model Uncertainty

    NASA Astrophysics Data System (ADS)

    Ye, M.; Dai, H.; Walker, A. P.; Shi, L.; Yang, J.

    2015-12-01

    The environmental system consists of various physical, chemical, and biological processes, and environmental models are always built to simulate these processes and their interactions. For model building, improvement, and validation, it is necessary to identify important processes so that limited resources can be used to better characterize the processes. While global sensitivity analysis has been widely used to identify important processes, the process identification is always based on deterministic process conceptualization that uses a single model for representing a process. However, environmental systems are complex, and it happens often that a single process may be simulated by multiple alternative models. Ignoring the model uncertainty in process identification may lead to biased identification in that identified important processes may not be so in the real world. This study addresses this problem by developing a new method of global sensitivity analysis for process identification. The new method is based on the concept of Sobol sensitivity analysis and model averaging. Similar to the Sobol sensitivity analysis to identify important parameters, our new method evaluates variance change when a process is fixed at its different conceptualizations. The variance considers both parametric and model uncertainty using the method of model averaging. The method is demonstrated using a synthetic study of groundwater modeling that considers recharge process and parameterization process. Each process has two alternative models. Important processes of groundwater flow and transport are evaluated using our new method. The method is mathematically general, and can be applied to a wide range of environmental problems.
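
    A toy numerical reading of the process-sensitivity idea sketched above: sample the parameters under each alternative conceptualization of one process, then compare the variance of the conditional means across conceptualizations with the total output variance. The two recharge models, the head_model() stand-in, and the equal model weights are assumptions for this example only.

      import numpy as np

      rng = np.random.default_rng(0)

      def head_model(recharge_model, params):
          # Stand-in for a groundwater simulation returning a scalar output (head).
          r = params[0] if recharge_model == "A" else 0.7 * params[0] + 5.0
          return 100.0 - 0.3 * r + 0.1 * params[1]

      # Parametric uncertainty: the same parameter samples are run under each
      # alternative conceptualization of the recharge process (equal model weights).
      samples = rng.uniform([10.0, 0.0], [50.0, 20.0], size=(5000, 2))
      outputs = {m: np.array([head_model(m, p) for p in samples]) for m in ("A", "B")}

      total_var = np.var(np.concatenate([outputs["A"], outputs["B"]]))
      # Variance of the conditional means across conceptualizations: the share of
      # output variance attributable to the choice of recharge model.
      between_model_var = np.var([outputs[m].mean() for m in ("A", "B")])
      print(f"process sensitivity index ~ {between_model_var / total_var:.3f}")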

  12. The bias, accuracy and precision of faecal egg count reduction test results in cattle using McMaster, Cornell-Wisconsin and FLOTAC egg counting methods.

    PubMed

    Levecke, B; Rinaldi, L; Charlier, J; Maurelli, M P; Bosco, A; Vercruysse, J; Cringoli, G

    2012-08-13

    The faecal egg count reduction test (FECRT) is the recommended method to monitor anthelmintic drug efficacy in cattle. There is a large variation in faecal egg count (FEC) methods applied to determine FECRT. However, it remains unclear whether FEC methods with an equal analytic sensitivity, but with different methodologies, result in equal FECRT results. We therefore compared the bias, accuracy and precision of FECRT results for Cornell-Wisconsin (analytic sensitivity = 1 egg per gram faeces (EPG)), FLOTAC (analytic sensitivity = 1 EPG) and McMaster method (analytic sensitivity = 10 EPG) across four levels of egg excretion (1-49 EPG; 50-149 EPG; 150-299 EPG; 300-600 EPG). Finally, we assessed the sensitivity of the FEC methods to detect a truly reduced efficacy. To this end, two different criteria were used to define reduced efficacy based on FECR, including those described in the WAAVP guidelines (FECRT <95% and lower limit of 95%CI <90%) (Coles et al., 1992) and those proposed by El-Abdellati et al. (2010) (upper limit of 95%CI <95%). There was no significant difference in bias and accuracy of FECRT results across the three methods. FLOTAC provided the most precise FECRT results. Cornell-Wisconsin and McMaster gave similar imprecise results. FECRT results were significantly underestimated when baseline FEC were low and drugs were more efficacious. For all FEC methods, precision and accuracy of the FECRT improved as egg excretion increased; this effect was greatest for McMaster and least for Cornell-Wisconsin. The sensitivity of the three methods to detect a truly reduced efficacy was high (>90%). Yet, the sensitivity of McMaster and Cornell-Wisconsin may drop when drugs only show sub-optimal efficacy. Overall, the study indicates that the precision of FECRT is affected by the methodology of FEC, and that the level of egg excretion should be considered in the final interpretation of the FECRT. However, more comprehensive studies are required to provide more insights into the complex interplay of factors inherent to study design (sample size and FEC method) and host-parasite interactions (level of egg excretion and aggregation across the host population). Copyright © 2012 Elsevier B.V. All rights reserved.
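
    For orientation, here is a simplified numeric illustration of the reduction estimate and the two decision rules cited above; the egg counts are invented, and a percentile bootstrap stands in for the guideline's analytical confidence interval.

      import numpy as np

      rng = np.random.default_rng(1)
      pre  = np.array([320, 150, 610, 90, 410, 260, 180, 520])   # EPG before treatment
      post = np.array([10, 0, 40, 0, 20, 10, 0, 30])             # EPG 14 days after

      fecr = 100.0 * (1.0 - post.mean() / pre.mean())

      # Percentile bootstrap over animals for an approximate 95% CI.
      idx = rng.integers(0, len(pre), size=(2000, len(pre)))
      boot = 100.0 * (1.0 - post[idx].mean(axis=1) / pre[idx].mean(axis=1))
      lo, hi = np.percentile(boot, [2.5, 97.5])

      print(f"FECR = {fecr:.1f}% (95% CI {lo:.1f}-{hi:.1f}%)")
      print("reduced efficacy (WAAVP-style rule):", fecr < 95.0 and lo < 90.0)
      print("reduced efficacy (upper-CI rule):", hi < 95.0)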

  13. Groundwater sensitivity mapping in Kentucky using GIS and digitally vectorized geologic quadrangles

    NASA Astrophysics Data System (ADS)

    Croskrey, Andrea; Groves, Chris

    2008-05-01

    Groundwater sensitivity (Ray and O’dell in Environ Geol 22:345-352, 1993a) refers to the inherent ease with which groundwater can be contaminated based on hydrogeologic characteristics. We have developed digital methods for identifying areas of varying groundwater sensitivity for a ten-county area of south central Kentucky at a scale of 1:100,000. The study area includes extensive limestone karst sinkhole plains, with groundwater extremely sensitive to contamination. Digitally vectorized geologic quadrangles (DVGQs) were combined with elevation data to identify both hydrogeologic groundwater sensitivity regions and zones of “high risk runoff” where contaminants could be transported in runoff from less sensitive to higher sensitivity (particularly karst) areas. While future work will fine-tune these maps with additional layers of data (soils for example) as digital data have become available, using DVGQs allows a relatively rapid assessment of groundwater sensitivity for Kentucky at a more useful scale than previously available assessment methods, such as DRASTIC and DIVERSITY.

  14. Methods for increasing the sensitivity of gamma-ray imagers

    DOEpatents

    Mihailescu, Lucian [Pleasanton, CA; Vetter, Kai M [Alameda, CA; Chivers, Daniel H [Fremont, CA

    2012-02-07

    Methods are presented that increase the position resolution and granularity of double sided segmented semiconductor detectors. These methods increase the imaging resolution capability of such detectors, either used as Compton cameras, or as position sensitive radiation detectors in imagers such as SPECT, PET, coded apertures, multi-pinhole imagers, or other spatial or temporal modulated imagers.

  15. An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1994-01-01

    The primary accomplishments of the project are as follows: (1) Using the transonic small perturbation equation as a flowfield model, the project demonstrated that the quasi-analytical method could be used to obtain aerodynamic sensitivity coefficients for airfoils at subsonic, transonic, and supersonic conditions for design variables such as Mach number, airfoil thickness, maximum camber, angle of attack, and location of maximum camber. It was established that the quasi-analytical approach was an accurate method for obtaining aerodynamic sensitivity derivatives for airfoils at transonic conditions and usually more efficient than the finite difference approach. (2) The usage of symbolic manipulation software to determine the appropriate expressions and computer coding associated with the quasi-analytical method for sensitivity derivatives was investigated. Using the three dimensional fully conservative full potential flowfield model, it was determined that symbolic manipulation along with a chain rule approach was extremely useful in developing a combined flowfield and quasi-analytical sensitivity derivative code capable of considering a large number of realistic design variables. (3) Using the three dimensional fully conservative full potential flowfield model, the quasi-analytical method was applied to swept wings (i.e. three dimensional) at transonic flow conditions. (4) The incremental iterative technique has been applied to the three dimensional transonic nonlinear small perturbation flowfield formulation, an equivalent plate deflection model, and the associated aerodynamic and structural discipline sensitivity equations; and coupled aeroelastic results for an aspect ratio three wing in transonic flow have been obtained.
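
    The finite-difference baseline against which the quasi-analytical derivatives were judged can be illustrated with a central difference on a toy lift-coefficient function; the cl() surrogate below is an assumption for the example, not a transonic solver.

      def cl(mach, thickness):
          # Stand-in for an aerodynamic analysis returning a lift coefficient.
          return 0.8 * mach + 2.0 * thickness - 0.5 * mach * thickness

      def central_difference(f, x, dx, **fixed):
          # Second-order accurate finite-difference sensitivity of f with respect to x.
          return (f(x + dx, **fixed) - f(x - dx, **fixed)) / (2.0 * dx)

      dcl_dmach = central_difference(cl, 0.80, 1e-4, thickness=0.12)
      dcl_dthk  = central_difference(lambda t, mach: cl(mach, t), 0.12, 1e-5, mach=0.80)
      print(f"dCl/dMach ~ {dcl_dmach:.4f}, dCl/dthickness ~ {dcl_dthk:.4f}")

    Each finite-difference derivative costs two extra analyses per design variable, which is the expense the quasi-analytical approach described above is meant to avoid.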

  16. The discrepancy between risky and riskless utilities: a matter of framing?

    PubMed

    Stalmeier, P F; Bezembinder, T G

    1999-01-01

    Utilities differ according to whether they are derived from risky (gamble) and riskless (visual analog scale, time-tradeoff) assessment methods. The discrepancies are usually explained by assuming that the utilities elicited by risky methods incorporate attitudes towards risk, whereas riskless utilities do not. In (cumulative) prospect theory, risk attitude is conceived as consisting of two components: a decision-weight function (attentiveness to changes in, or sensitivity towards, chance) and a utility function (sensitivity towards outcomes). The authors' data suggest that a framing effect is a hitherto unrecognized and important factor in causing discrepancies between risky and riskless utilities. They collected risky evaluations with the gamble method, and riskless evaluations with difference measurement. Risky utilities were derived using expected-utility theory and prospect theory. With the latter approach, sensitivity towards outcomes and sensitivity towards chance are modeled separately. When the hypothesis that risky utilities from prospect theory coincide with riskless utilities was tested, it was rejected (n = 8, F(1,7) = 132, p = 0.000), suggesting that a correction for sensitivity towards chance is not sufficient to resolve the difference between risky and riskless utilities. Next, it was assumed that different gain/loss frames are induced by risky and riskless elicitation methods. Indeed, identical utility functions were obtained when the gain/loss frames were made identical across methods (n = 7), suggesting that framing was operative. The results suggest that risky and riskless utilities are identical after corrections for sensitivity towards chance and framing.

  17. Analysis of Urinary Metabolites of Nerve and Blister Chemical Warfare Agents

    DTIC Science & Technology

    2014-08-01

    The analysis methods use UHPLC-MS/MS in Multiple Reaction Monitoring (MRM) mode to enhance the selectivity and sensitivity of the detection of urinary metabolites of nerve and blister chemical warfare agents (CWAs).

  18. California sea lion (Zalophus californianus) aerial hearing sensitivity measured using auditory steady-state response and psychophysical methods.

    PubMed

    Mulsow, Jason; Finneran, James J; Houser, Dorian S

    2011-04-01

    Although electrophysiological methods of measuring the hearing sensitivity of pinnipeds are not yet as refined as those for dolphins and porpoises, they appear to be a promising supplement to traditional psychophysical procedures. In order to further standardize electrophysiological methods with pinnipeds, a within-subject comparison of psychophysical and auditory steady-state response (ASSR) measures of aerial hearing sensitivity was conducted with a 1.5-yr-old California sea lion. The psychophysical audiogram was similar to those previously reported for otariids, with a U-shape, and thresholds near 10 dB re 20 μPa at 8 and 16 kHz. ASSR thresholds measured using both single and multiple simultaneous amplitude-modulated tones closely reproduced the psychophysical audiogram, although the mean ASSR thresholds were elevated relative to psychophysical thresholds. Differences between psychophysical and ASSR thresholds were greatest at the low- and high-frequency ends of the audiogram. Thresholds measured using the multiple ASSR method were not different from those measured using the single ASSR method. The multiple ASSR method was more rapid than the single ASSR method, and allowed for threshold measurements at seven frequencies in less than 20 min. The multiple ASSR method may be especially advantageous for hearing sensitivity measurements with otariid subjects that are untrained for psychophysical procedures.

  19. Method of controlled reduction of nitroaromatics by enzymatic reaction with oxygen sensitive nitroreductase enzymes

    DOEpatents

    Shah, M.M.; Campbell, J.A.

    1998-07-07

    A method is described for the controlled reduction of nitroaromatic compounds such as nitrobenzene and 2,4,6-trinitrotoluene by enzymatic reaction with oxygen sensitive nitroreductase enzymes, such as ferredoxin NADP oxidoreductase. 6 figs.

  20. Methods of determining complete sensor requirements for autonomous mobility

    NASA Technical Reports Server (NTRS)

    Curtis, Steven A. (Inventor)

    2012-01-01

    A method of determining complete sensor requirements for autonomous mobility of an autonomous system includes computing a time variation of each behavior of a set of behaviors of the autonomous system, determining mobility sensitivity to each behavior of the autonomous system, and computing a change in mobility based upon the mobility sensitivity to each behavior and the time variation of each behavior. The method further includes determining the complete sensor requirements of the autonomous system through analysis of the relative magnitude of the change in mobility, the mobility sensitivity to each behavior, and the time variation of each behavior, wherein the relative magnitude of the change in mobility, the mobility sensitivity to each behavior, and the time variation of each behavior are characteristic of the stability of the autonomous system.
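
    One first-order reading of the computation described in this record is a chain-rule sum: the change in mobility is approximately the mobility sensitivity to each behavior multiplied by that behavior's time variation, summed over behaviors. The behaviors and numbers below are purely illustrative and not taken from the patent.

      behaviors = {
          # mobility sensitivity to each behavior, and that behavior's time variation
          "obstacle_avoidance": {"sensitivity": 0.60, "dbehavior_dt": 0.05},
          "path_tracking":      {"sensitivity": 0.30, "dbehavior_dt": 0.20},
          "power_management":   {"sensitivity": 0.10, "dbehavior_dt": 0.01},
      }
      dt = 1.0   # interval over which the time variation is evaluated
      delta_mobility = sum(b["sensitivity"] * b["dbehavior_dt"] * dt
                           for b in behaviors.values())
      print(f"estimated change in mobility: {delta_mobility:.3f}")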

  1. Radial magnetic resonance imaging (MRI) using a rotating radiofrequency (RF) coil at 9.4 T.

    PubMed

    Li, Mingyan; Weber, Ewald; Jin, Jin; Hugger, Thimo; Tesiram, Yasvir; Ullmann, Peter; Stark, Simon; Fuentes, Miguel; Junge, Sven; Liu, Feng; Crozier, Stuart

    2018-02-01

    The rotating radiofrequency coil (RRFC) has been developed recently as an alternative approach to multi-channel phased-array coils. The single-element RRFC avoids inter-channel coupling and allows a larger coil element with better B1 field penetration when compared with an array counterpart. However, dedicated image reconstruction algorithms require accurate estimation of temporally varying coil sensitivities to remove artefacts caused by coil rotation. Various methods have been developed to estimate unknown sensitivity profiles from a few experimentally measured sensitivity maps, but these methods become problematic when the RRFC is used as a transceiver coil. In this work, a novel and practical radial encoding method is introduced for the RRFC to facilitate image reconstruction without the measurement or estimation of rotation-dependent sensitivity profiles. Theoretical analyses suggest that the rotation-dependent sensitivities of the RRFC can be used to create a uniform profile with careful choice of sampling positions and imaging parameters. To test this new imaging method, dedicated electronics were designed and built to control the RRFC speed and hence positions in synchrony with imaging parameters. High-quality phantom and animal images acquired on a 9.4 T pre-clinical scanner demonstrate the feasibility and potential of this new RRFC method. Copyright © 2017 John Wiley & Sons, Ltd.

  2. An adjoint method of sensitivity analysis for residual vibrations of structures subject to impacts

    NASA Astrophysics Data System (ADS)

    Yan, Kun; Cheng, Gengdong

    2018-03-01

    For structures subject to impact loads, reducing residual vibration becomes increasingly important as machines become faster and lighter. An efficient sensitivity analysis of residual vibration with respect to structural or operational parameters is indispensable for using a gradient-based optimization algorithm, which reduces the residual vibration in either an active or a passive way. In this paper, an integrated quadratic performance index is used as the measure of the residual vibration, since it globally measures the residual vibration response and its calculation can be simplified greatly with the Lyapunov equation. Several sensitivity analysis approaches for the performance index were developed based on the assumption that the initial excitations of residual vibration were given and independent of structural design. Since the excitations produced by the impact load often depend on structural design, this paper aims to propose a new efficient sensitivity analysis method for residual vibration of structures subject to impacts that accounts for this dependence. The new method is developed by combining two existing methods and using an adjoint variable approach. Three numerical examples are carried out and demonstrate the accuracy of the proposed method. The numerical results show that the dependence of initial excitations on structural design variables may strongly affect the accuracy of sensitivities.
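
    A minimal sketch of the Lyapunov-equation evaluation of the integrated quadratic index mentioned above: for a stable system dx/dt = A x with x(0) = x0, the index J = integral of x'Qx dt equals x0'P x0, where P solves A'P + P A + Q = 0. The two-state system, weighting matrix and initial excitation below are invented for illustration.

      import numpy as np
      from scipy.linalg import solve_continuous_lyapunov

      # Lightly damped oscillator in state-space form (position and velocity).
      A = np.array([[0.0, 1.0],
                    [-4.0, -0.2]])
      Q = np.eye(2)                      # weighting of the residual vibration
      x0 = np.array([0.0, 1.0])          # initial excitation left by the impact

      P = solve_continuous_lyapunov(A.T, -Q)   # solves A'P + P A = -Q
      J = x0 @ P @ x0
      print(f"integrated residual vibration index J = {J:.3f}")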

  3. An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1991-01-01

    The three dimensional quasi-analytical sensitivity analysis and the ancillary driver programs are developed needed to carry out the studies and perform comparisons. The code is essentially contained in one unified package which includes the following: (1) a three dimensional transonic wing analysis program (ZEBRA); (2) a quasi-analytical portion which determines the matrix elements in the quasi-analytical equations; (3) a method for computing the sensitivity coefficients from the resulting quasi-analytical equations; (4) a package to determine for comparison purposes sensitivity coefficients via the finite difference approach; and (5) a graphics package.

  4. On the sensitivity of complex, internally coupled systems

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    A method is presented for computing sensitivity derivatives with respect to independent (input) variables for complex, internally coupled systems, while avoiding the cost and inaccuracy of finite differencing performed on the entire system analysis. The method entails two alternative algorithms: the first is based on the classical implicit function theorem formulated on residuals of governing equations, and the second develops the system sensitivity equations in a new form using the partial (local) sensitivity derivatives of the output with respect to the input of each part of the system. A few application examples are presented to illustrate the discussion.
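
    The residual-based route through the implicit function theorem mentioned above reduces, for a system with governing residuals R(u, x) = 0, to solving (dR/du) du/dx = -dR/dx. The toy two-equation coupled system below is invented for illustration.

      import numpy as np

      # Governing residuals R(u, x) = 0 for a toy coupled two-equation system.
      def residual(u, x):
          u1, u2 = u
          return np.array([u1 + 0.5 * u2 - x,          # subsystem 1
                           0.3 * u1 + u2 - x ** 2])    # subsystem 2, coupled through u1

      x = 2.0
      J_u = np.array([[1.0, 0.5],       # dR/du (constant here because R is linear in u)
                      [0.3, 1.0]])
      J_x = np.array([-1.0, -2.0 * x])  # dR/dx

      u = np.linalg.solve(J_u, np.array([x, x ** 2]))   # solve R(u, x) = 0 for the state
      du_dx = np.linalg.solve(J_u, -J_x)                # sensitivity: (dR/du) du/dx = -dR/dx
      print("state u =", u, " sensitivity du/dx =", du_dx)
      assert np.allclose(residual(u, x), 0.0)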

  5. Sensitivity Analysis of Hydraulic Head to Locations of Model Boundaries

    DOE PAGES

    Lu, Zhiming

    2018-01-30

    Sensitivity analysis is an important component of many model activities in hydrology. Numerous studies have been conducted in calculating various sensitivities. Most of these sensitivity analyses focus on the sensitivity of state variables (e.g. hydraulic head) to parameters representing medium properties such as hydraulic conductivity or prescribed values such as constant head or flux at boundaries, while few studies address the sensitivity of the state variables to some shape parameters or design parameters that control the model domain. Instead, these shape parameters are typically assumed to be known in the model. In this study, based on the flow equation, we derive the equation (and its associated initial and boundary conditions) for sensitivity of hydraulic head to shape parameters using the continuous sensitivity equation (CSE) approach. These sensitivity equations can be solved numerically in general or analytically in some simplified cases. Finally, the approach has been demonstrated through two examples and the results compare favorably with those from analytical solutions or numerical finite difference methods with perturbed model domains, while numerical shortcomings of the finite difference method are avoided.

  6. Sensitivity Analysis of Hydraulic Head to Locations of Model Boundaries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Zhiming

    Sensitivity analysis is an important component of many model activities in hydrology. Numerous studies have been conducted in calculating various sensitivities. Most of these sensitivity analyses focus on the sensitivity of state variables (e.g. hydraulic head) to parameters representing medium properties such as hydraulic conductivity or prescribed values such as constant head or flux at boundaries, while few studies address the sensitivity of the state variables to some shape parameters or design parameters that control the model domain. Instead, these shape parameters are typically assumed to be known in the model. In this study, based on the flow equation, we derive the equation (and its associated initial and boundary conditions) for sensitivity of hydraulic head to shape parameters using the continuous sensitivity equation (CSE) approach. These sensitivity equations can be solved numerically in general or analytically in some simplified cases. Finally, the approach has been demonstrated through two examples and the results compare favorably with those from analytical solutions or numerical finite difference methods with perturbed model domains, while numerical shortcomings of the finite difference method are avoided.

  7. A Non-Intrusive Algorithm for Sensitivity Analysis of Chaotic Flow Simulations

    NASA Technical Reports Server (NTRS)

    Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris

    2017-01-01

    We demonstrate a novel algorithm for computing the sensitivity of statistics in chaotic flow simulations to parameter perturbations. The algorithm is non-intrusive but requires exposing an interface. Based on the principle of shadowing in dynamical systems, this algorithm is designed to reduce the effect of the sampling error in computing sensitivity of statistics in chaotic simulations. We compare the effectiveness of this method to that of the conventional finite difference method.

  8. Reverse-micelle-induced porous pressure-sensitive rubber for wearable human-machine interfaces.

    PubMed

    Jung, Sungmook; Kim, Ji Hoon; Kim, Jaemin; Choi, Suji; Lee, Jongsu; Park, Inhyuk; Hyeon, Taeghwan; Kim, Dae-Hyeong

    2014-07-23

    A novel method to produce porous pressure-sensitive rubber is developed. For the controlled size distribution of embedded micropores, solution-based procedures using reverse micelles are adopted. The piezosensitivity of the pressure sensitive rubber is significantly increased by introducing micropores. Using this method, wearable human-machine interfaces are fabricated, which can be applied to the remote control of a robot. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Towards high-throughput molecular detection of Plasmodium: new approaches and molecular markers

    PubMed Central

    Steenkeste, Nicolas; Incardona, Sandra; Chy, Sophy; Duval, Linda; Ekala, Marie-Thérèse; Lim, Pharath; Hewitt, Sean; Sochantha, Tho; Socheat, Doung; Rogier, Christophe; Mercereau-Puijalon, Odile; Fandeur, Thierry; Ariey, Frédéric

    2009-01-01

    Background Several strategies are currently deployed in many countries in the tropics to strengthen malaria control toward malaria elimination. To measure the impact of any intervention, there is a need to detect malaria properly. Mostly, decisions still rely on microscopy diagnosis. But sensitive diagnostic tools that can handle large numbers of samples are needed. The molecular detection approach offers a much higher sensitivity, and the flexibility to be automated and upgraded. Methods Two new molecular methods were developed: dot18S, a Plasmodium-specific nested PCR based on the 18S rRNA gene followed by dot-blot detection of species by using species-specific probes and CYTB, a Plasmodium-specific nested PCR based on cytochrome b gene followed by species detection using SNP analysis. The results were compared to those obtained with microscopic examination and the "standard" 18S rRNA gene based nested PCR using species specific primers. 337 samples were diagnosed. Results Compared to microscopy, the three molecular methods were more sensitive, greatly increasing the estimated prevalence of Plasmodium infection, including P. malariae and P. ovale. A high rate of mixed infections was uncovered with about one third of the villagers infected with more than one malaria parasite species. The sensitivity of dot18S and CYTB exceeded that of the "standard" nested PCR method, with CYTB being the most sensitive. As a consequence, compared to the "standard" nested PCR method for the detection of Plasmodium spp., the sensitivity of dot18S and CYTB was 95.3% and 97.3%, respectively. Consistent detection of Plasmodium spp. by the three molecular methods was obtained for 83% of tested isolates. Contradictory results were mostly related to detection of Plasmodium malariae and Plasmodium ovale in mixed infections, due to an "all-or-none" detection effect at low-level parasitaemia. Conclusion A large reservoir of asymptomatic infections was uncovered using the molecular methods. Dot18S and CYTB, the new methods reported herein, are highly sensitive, allow parasite DNA extraction as well as genus- and species-specific diagnosis of several hundreds of samples, and are amenable to high-throughput scaling up for larger sample sizes. Such methods provide novel information on malaria prevalence and epidemiology and are suited for active malaria detection. The usefulness of such sensitive malaria diagnosis tools, especially in low endemic areas where eradication plans are now on-going, is discussed in this paper. PMID:19402894

  10. Sensitivity of diagnostic methods for Mansonella ozzardi microfilariae detection in the Brazilian Amazon Region.

    PubMed

    Medeiros, Jansen Fernandes; Fontes, Gilberto; Nascimento, Vilma Lopes do; Rodrigues, Moreno; Cohen, Jacob; Andrade, Edmar Vaz de; Pessoa, Felipe Arley Costa; Martins, Marilaine

    2018-03-01

    The human filarial worm Mansonella ozzardi is highly endemic in the large tributaries of the Amazon River. This infection is still highly neglected and can be falsely negative when microfilariae levels are low. This study investigated the frequency of individuals with M. ozzardi in riverine communities in Coari municipality, Brazilian Amazon. Different diagnostic methods including polymerase chain reaction (PCR), blood polycarbonate membrane filtration (PCMF), Knott's method (Knott), digital thick blood smears (DTBS) and venous thick blood smears (VTBS) were used to compare sensitivity and specificity among the methods. Data were analysed using PCMF and Bayesian latent class models (BLCM) as the gold standard. We used BLCM to calculate the prevalence of mansonelliasis based on the results of five diagnostic methods. The prevalence of mansonelliasis was 35.4% by PCMF and 30.1% by BLCM. PCR and Knott methods both possessed high sensitivity. Sensitivity relative to PCMF was 98.5% [95% confidence interval (CI): 92.0 - 99.7] for PCR and 83.5% (95% CI: 72.9 - 90.5) for Knott. Sensitivity derived by BLCM was 100% (95% CI 93.7 - 100) for PCMF, 100% (95% CI: 93.7 - 100) for PCR and 98.3% (95% CI: 90.6 - 99.9) for Knott. The odds ratio of being diagnosed as microfilaremic increased with age but did not differ between genders. Microfilariae loads were higher in subjects aged 30 - 45 and 45 - 60 years. PCMF and PCR were the best methods to assess the prevalence of mansonelliasis in our samples. As such, using these methods could lead to higher prevalence of mansonelliasis in this region than the most commonly used method (i.e., thick blood smears).

  11. The effect of pit and fissure sealants on the detection of occlusal caries in vitro.

    PubMed

    Manton, D J; Messer, L B

    2007-03-01

    To compare, in vitro, the effect of placing opaque (OPS) and clear fluorescing (CFS) pit and fissure sealants (PFS) on the detection of occlusal caries (OCD). Occlusal surfaces of 67 extracted molars were examined under standardised conditions by 6 final year undergraduate dental students, using visual, bitewing radiography, transillumination (FOTI), laser fluorescence (LF) and tactile methods of caries detection. The teeth were then assigned randomly to two groups for PFS placement: OPS and CFS; then the OCD methods were repeated. Caries presence/absence was determined histologically on serial sections examined under stereo-microscopy (10x). Before PFS placement the sensitivity and specificity for the OCD methods were: visual: 68%, 71%; radiographic: 15%, 95%; FOTI: 36%, 93%; LF: 49%, 83% and tactile: 39%, 67%, respectively. After placement of OPS, the sensitivity of LF (20%) and visual (13%) methods decreased and specificity increased (93%, 98% respectively). Placement of CFS resulted in minor changes in sensitivity and specificity. Correlation (Spearman's Rho coefficients) between OCD methods and histological intra-dentinal caries for pre- PFS, OPS, and CFS were: visual: 0.38, 0.34, 0.33; FOTI: 0.42, 0.35, 0.43; and LF: 0.41, 0.30, and 0.45 respectively. The sensitivity of all OCD methods was low, as well as their correlation to the histological gold standard. Placing OPS further decreased the sensitivity of LF and visual methods, whereas placing CFS had little effect on all OCD methods. It is recommended that tactile detection of occlusal caries should be discontinued, and the probe used only to clean the pits and fissures gently for more accurate visual detection, or prior to pit and fissure sealant placement. Further research into the development of an affordable, robust, accurate and easy to use method for OCD is required.
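
    As a reminder of how the sensitivity and specificity figures quoted above are derived from a two-by-two table against the histological gold standard, a small sketch follows; the counts are invented, not the study's data.

      def sensitivity_specificity(tp, fn, fp, tn):
          sensitivity = tp / (tp + fn)   # carious surfaces correctly called carious
          specificity = tn / (tn + fp)   # sound surfaces correctly called sound
          return sensitivity, specificity

      sens, spec = sensitivity_specificity(tp=30, fn=20, fp=5, tn=45)
      print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")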

  12. Effects of Environmental Toxicants on Metabolic Activity of Natural Microbial Communities

    PubMed Central

    Barnhart, Carole L. H.; Vestal, J. Robie

    1983-01-01

    Two methods of measuring microbial activity were used to study the effects of toxicants on natural microbial communities. The methods were compared for suitability for toxicity testing, sensitivity, and adaptability to field applications. This study included measurements of the incorporation of 14C-labeled acetate into microbial lipids and microbial glucosidase activity. Activities were measured per unit biomass, determined as lipid phosphate. The effects of various organic and inorganic toxicants on various natural microbial communities were studied. Both methods were useful in detecting toxicity, and their comparative sensitivities varied with the system studied. In one system, the methods showed approximately the same sensitivities in testing the effects of metals, but the acetate incorporation method was more sensitive in detecting the toxicity of organic compounds. The incorporation method was used to study the effects of a point source of pollution on the microbiota of a receiving stream. Toxic doses were found to be two orders of magnitude higher in sediments than in water taken from the same site, indicating chelation or adsorption of the toxicant by the sediment. The microbiota taken from below a point source outfall was 2 to 100 times more resistant to the toxicants tested than was that taken from above the outfall. Downstream filtrates in most cases had an inhibitory effect on the natural microbiota taken from above the pollution source. The microbial methods were compared with commonly used bioassay methods, using higher organisms, and were found to be similar in ability to detect comparative toxicities of compounds, but were less sensitive than methods which use standard media because of the influences of environmental factors. PMID:16346432

  13. Rapid and Sensitive Salmonella Typhi Detection in Blood and Fecal Samples Using Reverse Transcription Loop-Mediated Isothermal Amplification.

    PubMed

    Fan, Fenxia; Yan, Meiying; Du, Pengcheng; Chen, Chen; Kan, Biao

    2015-09-01

    Typhoid fever caused by Salmonella enterica serovar Typhi remains a significant public health problem in developing countries. Although the main method for diagnosing typhoid fever is blood culture, the test is time consuming and not always able to detect infections. Thus, it is very difficult to distinguish typhoid from other infections in patients with nonspecific symptoms. A simple and sensitive laboratory detection method remains necessary. The purpose of this study is to establish and evaluate a rapid and sensitive reverse transcription-based loop-mediated isothermal amplification (RT-LAMP) method to detect Salmonella Typhi infection. In this study, a new specific gene marker, STY1607, was selected to develop a STY1607-RT-LAMP assay; this is the first report of specific RT-LAMP detection assay for typhoid. Human-simulated and clinical blood/stool samples were used to evaluate the performance of STY1607-RT-LAMP for RNA detection; this method was compared with STY1607-LAMP, reverse transcription real-time polymerase chain reaction (rRT-PCR), and bacterial culture methods for Salmonella Typhi detection. Using mRNA as the template, STY1607-RT-LAMP exhibited 50-fold greater sensitivity than STY1607-LAMP for DNA detection. The STY1607-RT-LAMP detection limit is 3 colony-forming units (CFU)/mL for both the pure Salmonella Typhi samples and Salmonella Typhi-simulated blood samples and was 30 CFU/g for the simulated stool samples, all of which were 10-fold more sensitive than the rRT-PCR method. RT-LAMP exhibited improved Salmonella Typhi detection sensitivity compared to culture methods and to rRT-PCR of clinical blood and stool specimens from suspected typhoid fever patients. Because it can be performed without sophisticated equipment or skilled personnel, RT-LAMP is a valuable tool for clinical laboratories in developing countries. This method can be applied in the clinical diagnosis and care of typhoid fever patients as well as for a quick public health response.

  14. Sensitivity Analysis for Steady State Groundwater Flow Using Adjoint Operators

    NASA Astrophysics Data System (ADS)

    Sykes, J. F.; Wilson, J. L.; Andrews, R. W.

    1985-03-01

    Adjoint sensitivity theory is currently being considered as a potential method for calculating the sensitivity of nuclear waste repository performance measures to the parameters of the system. For groundwater flow systems, performance measures of interest include piezometric heads in the vicinity of a waste site, velocities or travel time in aquifers, and mass discharge to biosphere points. The parameters include recharge-discharge rates, prescribed boundary heads or fluxes, formation thicknesses, and hydraulic conductivities. The derivative of a performance measure with respect to the system parameters is usually taken as a measure of sensitivity. To calculate sensitivities, adjoint sensitivity equations are formulated from the equations describing the primary problem. The solution of the primary problem and the adjoint sensitivity problem enables the determination of all of the required derivatives and hence related sensitivity coefficients. In this study, adjoint sensitivity theory is developed for equations of two-dimensional steady state flow in a confined aquifer. Both the primary flow equation and the adjoint sensitivity equation are solved using the Galerkin finite element method. The developed computer code is used to investigate the regional flow parameters of the Leadville Formation of the Paradox Basin in Utah. The results illustrate the sensitivity of calculated local heads to the boundary conditions. Alternatively, local velocity related performance measures are more sensitive to hydraulic conductivities.

  15. Prostate cancer localization with multispectral MRI using cost-sensitive support vector machines and conditional random fields.

    PubMed

    Artan, Yusuf; Haider, Masoom A; Langer, Deanna L; van der Kwast, Theodorus H; Evans, Andrew J; Yang, Yongyi; Wernick, Miles N; Trachtenberg, John; Yetik, Imam Samil

    2010-09-01

    Prostate cancer is a leading cause of cancer death for men in the United States. Fortunately, the survival rate for early diagnosed patients is relatively high. Therefore, in vivo imaging plays an important role in the detection and treatment of the disease. Accurate prostate cancer localization with noninvasive imaging can be used to guide biopsy, radiotherapy, and surgery as well as to monitor disease progression. Magnetic resonance imaging (MRI) performed with an endorectal coil provides higher prostate cancer localization accuracy, when compared to transrectal ultrasound (TRUS). However, in general, a single type of MRI is not sufficient for reliable tumor localization. As an alternative, multispectral MRI, i.e., the use of multiple MRI-derived datasets, has emerged as a promising noninvasive imaging technique for the localization of prostate cancer; however, almost all studies rely on human readers. There is significant inter- and intraobserver variability among human readers, and it is difficult for humans to analyze the large datasets of multispectral MRI. To solve these problems, this study presents an automated localization method using cost-sensitive support vector machines (SVMs) and shows that this method results in better localization accuracy than classical SVM. Additionally, we develop a new segmentation method by combining conditional random fields (CRF) with a cost-sensitive framework and show that our method further improves cost-sensitive SVM results by incorporating spatial information. We test SVM, cost-sensitive SVM, and the proposed cost-sensitive CRF on multispectral MRI datasets acquired from 21 biopsy-confirmed cancer patients. Our results show that multispectral MRI helps to increase the accuracy of prostate cancer localization when compared to single MR images; and that using advanced methods such as cost-sensitive SVM as well as the proposed cost-sensitive CRF can boost the performance significantly when compared to SVM.
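
    A minimal sketch of the cost-sensitive SVM ingredient discussed above, using scikit-learn's class_weight option as a generic stand-in; the feature matrix, labels and weights are invented, and this is not the authors' implementation.

      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 4))      # e.g. T2, ADC, Ktrans, T1 features per voxel
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 1.2).astype(int)

      # Penalise missed cancer voxels (label 1) more heavily than false positives.
      clf = SVC(kernel="rbf", class_weight={0: 1.0, 1: 5.0})
      clf.fit(X, y)
      print("training sensitivity:", (clf.predict(X[y == 1]) == 1).mean())

    Raising the weight on the cancer class trades specificity for sensitivity, which is the cost-sensitive behavior the record describes exploiting for voxel-wise localization.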

  16. Comparison of AASHTO moisture sensitivity test (T-283) with Connecticut Department of Transportation modified test method

    DOT National Transportation Integrated Search

    1999-08-01

    Several different interpretations of the American Association of State Highway and Transportation Officials' (AASHTO's) Moisture Sensitivity Test exist. The official AASHTO interpretation of this test method does not account for water which has been ...

  17. A comprehensive evaluation of various sensitivity analysis methods: A case study with a hydrological model

    DOE PAGES

    Gan, Yanjun; Duan, Qingyun; Gong, Wei; ...

    2014-01-01

    Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be ineffective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400–600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them; (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify parameter main effects. The McKay method needs about 360 samples to evaluate the main effect and more than 1000 samples to assess the two-way interaction effect. OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, a minimum of 1050 samples is needed to compute the first-order and total sensitivity indices correctly. These comparisons show that qualitative SA methods are more efficient but less accurate and robust than quantitative ones.

  18. Fabrication of dye-sensitized solar cell using chlorophylls pigment from sargassum

    NASA Astrophysics Data System (ADS)

    Ridwan, M. A.; Noor, E.; Rusli, M. S.; Akhiruddin

    2018-04-01

    The dye-sensitized solar cell (DSSC) is a new generation of solar cell, and a wide range of sensitizer systems has been explored for it; natural dyes have become a common choice in developing DSSCs. This study used chlorophyll pigment extracted from Sargassum sp. as the sensitizer of a DSSC. It aims to obtain a chlorophyll pigment extract suitable for use as a dye in a DSSC and to obtain the best energy conversion efficiency from the DSSC. The chlorophyll pigments were extracted using the APHA method (2012), and the TiO2 film was coated using the doctor blade method. The two fabricated cells, each with an area of 1 cm2, were immersed in the chlorophyll dye for 30 hours and then tested under direct solar radiation. The concentration of chlorophyll in the acetone solution was 61.176 mg/L. The efficiency obtained was 1.50%, with a VOC of 241 mV, an ISC of 2.9 × 10⁻⁴ mA and a fill factor of 0.432.

  19. Prediction of skin sensitizers using alternative methods to animal experimentation.

    PubMed

    Johansson, Henrik; Lindstedt, Malin

    2014-07-01

    Regulatory frameworks within the European Union demand that chemical substances are investigated for their ability to induce sensitization, an adverse health effect caused by the human immune system in response to chemical exposure. A recent ban on the use of animal tests within the cosmetics industry has led to an urgent need for alternative animal-free test methods that can be used for assessment of chemical sensitizers. To date, no such alternative assay has yet completed formal validation. However, a number of assays are in development and the understanding of the biological mechanisms of chemical sensitization has greatly increased during the last decade. In this MiniReview, we aim to summarize and give our view on the recent progress of method development for alternative assessment of chemical sensitizers. We propose that integrated testing strategies should comprise complementary assays, providing measurements of a wide range of mechanistic events, to perform well-educated risk assessments based on weight of evidence. © 2014 Nordic Association for the Publication of BCPT (former Nordic Pharmacological Society).

  20. CdS/CdSe co-sensitized SnO2 photoelectrodes for quantum dots sensitized solar cells

    NASA Astrophysics Data System (ADS)

    Lin, Yibing; Lin, Yu; Meng, Yongming; Tu, Yongguang; Zhang, Xiaolong

    2015-07-01

    SnO2 nanoparticles were synthesized by a hydrothermal method and applied to photo-electrodes of quantum dot-sensitized solar cells (QDSSCs). After sensitizing the SnO2 films with CdS quantum dots, CdSe quantum dots were decorated on the surface of the CdS/SnO2 photo-electrodes to further improve the power conversion efficiency. CdS and CdSe quantum dots were deposited by the successive ionic layer adsorption and reaction (SILAR) method and the chemical bath deposition (CBD) method, respectively. Scanning electron microscopy (SEM), transmission electron microscopy (TEM) and X-ray diffraction (XRD) were used to identify the surface profile and crystal structure of the SnO2 photo-electrodes before and after quantum dot deposition. After the CdSe co-sensitization process, an overall power conversion efficiency of 1.78% was obtained in the CdSe/CdS/SnO2 QDSSC, a 66.4% improvement over that of the CdS/SnO2 QDSSC.

  1. [Analysis and experimental verification of sensitivity and SNR of laser warning receiver].

    PubMed

    Zhang, Ji-Long; Wang, Ming; Tian, Er-Ming; Li, Xiao; Wang, Zhi-Bin; Zhang, Yue

    2009-01-01

    To counter the increasingly serious threat posed by hostile lasers in modern warfare, research on laser warning technology and systems is urgently needed, and the sensitivity and signal-to-noise ratio (SNR) are two important performance parameters of a laser warning system. In the present paper, based on statistical signal detection theory, a method for calculating the sensitivity and SNR of a coherent-detection laser warning receiver (LWR) is proposed. Firstly, the statistics of the laser signal and receiver noise were analyzed. Secondly, based on threshold detection theory and the Neyman-Pearson criterion, the signal current equation was established by introducing a detection probability factor and a false alarm rate factor; the mathematical expressions for sensitivity and SNR were then deduced. Finally, using this method, the sensitivity and SNR of the sinusoidal grating laser warning receiver developed by our group were analyzed, and the theoretical calculations and experimental results indicate that the SNR analysis method is feasible and can be used in the performance analysis of LWRs.
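
    A back-of-envelope version of the Neyman-Pearson threshold relations referred to above, assuming Gaussian receiver noise; the noise level, false-alarm rate and signal current are illustrative values, not those of the receiver in the record.

      from scipy.stats import norm

      sigma = 1.0          # noise standard deviation (arbitrary units)
      p_fa = 1e-6          # required false-alarm probability
      signal = 6.0         # mean signal current at the detector, same units as sigma

      threshold = sigma * norm.isf(p_fa)             # threshold set by the false-alarm rate
      p_d = norm.sf((threshold - signal) / sigma)    # resulting detection probability
      snr_required = norm.isf(p_fa) - norm.isf(0.9)  # SNR needed for 90% detection
      print(f"threshold = {threshold:.2f}, Pd = {p_d:.3f}, SNR for Pd=0.9: {snr_required:.2f}")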

  2. Proof-of-Concept Study for Uncertainty Quantification and Sensitivity Analysis using the BRL Shaped-Charge Example

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, Justin Matthew

    These are the slides for a graduate presentation at Mississippi State University. It covers the following: the BRL Shaped-Charge Geometry in PAGOSA, mesh refinement study, surrogate modeling using a radial basis function network (RBFN), ruling out parameters using sensitivity analysis (equation of state study), uncertainty quantification (UQ) methodology, and sensitivity analysis (SA) methodology. In summary, a mesh convergence study was used to ensure that solutions were numerically stable by comparing PDV data between simulations. A Design of Experiments (DOE) method was used to reduce the simulation space to study the effects of the Jones-Wilkins-Lee (JWL) Parameters for the Composition B main charge. Uncertainty was quantified by computing the 95% data range about the median of simulation output using a brute force Monte Carlo (MC) random sampling method. Parameter sensitivities were quantified using the Fourier Amplitude Sensitivity Test (FAST) spectral analysis method where it was determined that detonation velocity, initial density, C1, and B1 controlled jet tip velocity.

  3. MethylMeter(®): bisulfite-free quantitative and sensitive DNA methylation profiling and mutation detection in FFPE samples.

    PubMed

    McCarthy, David; Pulverer, Walter; Weinhaeusel, Andreas; Diago, Oscar R; Hogan, Daniel J; Ostertag, Derek; Hanna, Michelle M

    2016-06-01

    Development of a sensitive method for DNA methylation profiling and associated mutation detection in clinical samples. Formalin-fixed and paraffin-embedded tumors received by clinical laboratories often contain insufficient DNA for analysis with bisulfite or methylation sensitive restriction enzymes-based methods. To increase sensitivity, methyl-CpG DNA capture and Coupled Abscription PCR Signaling detection were combined in a new assay, MethylMeter(®). Gliomas were analyzed for MGMT methylation, glioma CpG island methylator phenotype and IDH1 R132H. MethylMeter had 100% assay success rate measuring all five biomarkers in formalin-fixed and paraffin-embedded tissue. MGMT methylation results were supported by survival and mRNA expression data. MethylMeter is a sensitive and quantitative method for multitarget DNA methylation profiling and associated mutation detection. The MethylMeter-based GliomaSTRAT assay measures methylation of four targets and one mutation to simultaneously grade gliomas and predict their response to temozolomide. This information is clinically valuable in management of gliomas.

  4. Sonicated Diagnostic Immunoblot for Bartonellosis

    PubMed Central

    Mallqui, Vania; Speelmon, Emily C.; Verástegui, Manuela; Maguiña-Vargas, Ciro; Pinell-Salles, Paula; Lavarello, Rosa; Delgado, Jose; Kosek, Margaret; Romero, Sofia; Arana, Yanina; Gilman, Robert H.

    2000-01-01

    Two simple Bartonella bacilliformis immunoblot preparation methods were developed. Antigen was prepared by two different methods: sonication of whole organisms or glycine extraction. Both methods were then tested for sensitivity and specificity. Well-defined control sera were utilized in the development of these diagnostic immunoblots, and possible cross-reactions were thoroughly examined. Sera investigated for cross-reaction with these diagnostic antigens were drawn from patients with brucellosis, chlamydiosis, Q fever, and cat scratch disease, all of whom were from regions where bartonellosis is not endemic. While both immunoblots yielded reasonable sensitivity and high specificity, we recommend the use of the sonicated immunoblot, which has a higher sensitivity when used to detect acute disease and produces fewer cross-reactions. The sonicated immunoblot reported here is 94% sensitive to chronic bartonellosis and 70% sensitive to acute bartonellosis. In a healthy group, it is 100% specific. This immunoblot preparation requires a simple sonication protocol for the harvesting of B. bacilliformis antigens and is well suited for use in regions of endemicity. PMID:10618267

  5. Cross-Sectional and Panel Data Analyses of an Incompletely Observed Variable Derived from the Nonrandomized Method for Surveying Sensitive Questions

    ERIC Educational Resources Information Center

    Yamaguchi, Kazuo

    2016-01-01

    This article describes (1) the survey methodological and statistical characteristics of the nonrandomized method for surveying sensitive questions for both cross-sectional and panel survey data and (2) the way to use the incompletely observed variable obtained from this survey method in logistic regression and in loglinear and log-multiplicative…

  6. Sensitivity and cost considerations for the detection and eradication of marine pests in ports.

    PubMed

    Hayes, Keith R; Cannon, Rob; Neil, Kerry; Inglis, Graeme

    2005-08-01

    Port surveys are being conducted in Australia, New Zealand and around the world to confirm the presence or absence of particular marine pests. The most critical aspect of these surveys is their sensitivity-the probability that they will correctly identify a species as present if indeed it is present. This is not, however, adequately addressed in the relevant national and international standards. Simple calculations show that the sensitivity of port survey methods is closely related to their encounter rate-the average number of target individuals expected to be detected by the method. The encounter rate (which reflects any difference in relative pest density), divided by the cost of the method, provides one way to compare the cost-effectiveness of different survey methods. The most cost-effective survey method is site- and species-specific but, in general, will involve sampling from the habitat with the highest expected population of target individuals. A case study of Perna viridis in Trinity Inlet, Cairns, demonstrates that plankton trawls processed with gene probes provide the same level of sensitivity for a fraction of the cost associated with the next best available method-snorkel transects in bad visibility (secchi depth=0.72 m). Visibility and the adult/larvae ratio, however, are critical to these arguments. If visibility were good (secchi depth=10 m), the two approaches would be comparable. Diver deployed quadrats were at least three orders of magnitude less cost-effective in this case study. It is very important that environmental managers and scientists perform sensitivity calculations before embarking on port surveys to ensure the highest level of sensitivity is achieved for any given budget.
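
    A rough illustration of the sensitivity/encounter-rate link noted above: if target individuals are detected independently at random (a Poisson assumption, not stated in the record), the probability of detecting at least one individual is 1 - exp(-encounter rate), and methods can then be compared per unit cost. The encounter rates and costs below are invented, not the Trinity Inlet figures.

      import math

      methods = {
          # encounter rate = expected number of target individuals detected per survey
          "plankton trawl + gene probe": {"encounter_rate": 3.0, "cost": 5_000.0},
          "snorkel transects":           {"encounter_rate": 0.8, "cost": 12_000.0},
          "diver quadrats":              {"encounter_rate": 0.05, "cost": 20_000.0},
      }
      for name, m in methods.items():
          p_detect = 1.0 - math.exp(-m["encounter_rate"])
          print(f"{name}: P(detect) = {p_detect:.2f}, "
                f"encounter rate per $1000 = {1000 * m['encounter_rate'] / m['cost']:.3f}")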

  7. Comparison of the performance of rapid prescreening, 10% random review, and clinical risk criteria as methods of internal quality control in cervical cytopathology.

    PubMed

    Tavares, Suelene B N; Alves de Sousa, Nadja L; Manrique, Edna J C; Pinheiro de Albuquerque, Zair B; Zeferino, Luiz C; Amaral, Rita G

    2008-06-25

    Rapid prescreening (RPS) is an internal quality-control (IQC) method that is used both to reduce errors in the laboratory and to measure the sensitivity of routine screening (RS). Little direct comparison data are available comparing RPS with other more widely used IQC methods. The authors compared the performance of RPS, 10% random review of negative smears (R-10%), and directed rescreening of negative smears based on clinical risk criteria (RCRC) over 1 year in a community clinic setting. In total, 6,135 smears were evaluated. The sensitivity of RS alone was 71.3%. RPS detected significantly more (132 cases) false-negative (FN) cases than either R-10% (7 cases) or RCRC (32 cases). RPS significantly improved the overall sensitivity of the laboratory (71.3-92.2%; P = .001); neither R-10% nor RCRC significantly changed the sensitivity of RS. RPS was not as specific as the other methods, although nearly 68% of all abnormalities detected by RPS were verified as real. RPS of 100% of smears required the same amount of time as RCRC but required twice as much time as R-10%. The current results demonstrated that RPS is a much more effective IQC method than either R-10% or RCRC. RPS detects significantly more errors and can improve the overall sensitivity of a laboratory with either a modest increase or no increase in overall time spent on IQC. R-10% is an insensitive IQC method, and neither R-10% nor RCRC can significantly improve the overall sensitivity of a laboratory. (c) 2008 American Cancer Society.

  8. Monte Carlo Perturbation Theory Estimates of Sensitivities to System Dimensions

    DOE PAGES

    Burke, Timothy P.; Kiedrowski, Brian C.

    2017-12-11

    Here, Monte Carlo methods are developed using adjoint-based perturbation theory and the differential operator method to compute the sensitivities of the k-eigenvalue, linear functions of the flux (reaction rates), and bilinear functions of the forward and adjoint flux (kinetics parameters) to system dimensions for uniform expansions or contractions. The calculation of sensitivities to system dimensions requires computing scattering and fission sources at material interfaces using collisions occurring at the interface—which is a set of events with infinitesimal probability. Kernel density estimators are used to estimate the source at interfaces using collisions occurring near the interface. The methods for computing sensitivities of linear and bilinear ratios are derived using the differential operator method and adjoint-based perturbation theory and are shown to be equivalent to methods previously developed using a collision history–based approach. The methods for determining sensitivities to system dimensions are tested on a series of fast, intermediate, and thermal critical benchmarks as well as a pressurized water reactor benchmark problem with iterated fission probability used for adjoint-weighting. The estimators are shown to agree within 5% and 3σ of reference solutions obtained using direct perturbations with central differences for the majority of test problems.

  9. [A modified method of recording kinesthetic evoked potentials and its application to the study of proprioceptive sensitivity disorders in spondylogenic cervical myelopathy].

    PubMed

    Gordeev, S A; Voronin, S G

    2016-01-01

    To analyze the efficacy of a modified (passive flexion/extension of the radiocarpal joint) and a "standard" (passive flexion of the radiocarpal joint) method of kinesthetic evoked potentials for proprioceptive sensitivity assessment in healthy subjects and patients with spondylotic cervical myelopathy. The study included 14 healthy subjects (4 women and 10 men, mean age 54.1±10.5 years) and 8 patients (2 women and 6 men, mean age 55.8±10.9 years) with spondylotic cervical myelopathy. Muscle-joint sensation was examined during the clinical study. A modified method of kinesthetic evoked potentials was developed; it differs from the "standard" one in the organization of a cycle of several passive movements, where each new movement differs from the preceding one in direction. The modified method of kinesthetic evoked potentials ensures a more reliable assessment of kinesthetic sensitivity owing to this movement variability. A significant increase in the latent periods of the early components of the response was found in patients compared with healthy subjects. The modified method of kinesthetic evoked potentials can be used for the objective diagnosis of proprioceptive sensitivity disorders in patients with spondylotic cervical myelopathy.

  10. A sensitive LC-MS/MS method for the simultaneous determination of amoxicillin and ambroxol in human plasma with segmental monitoring.

    PubMed

    Dong, Xin; Ding, Li; Cao, Xiaomei; Jiang, Liyuan; Zhong, Shuisheng

    2013-04-01

    Amoxicillin (AMO) degrades in plasma at room temperature and readily undergoes hydrolysis by the plasma amidase. In this paper, a novel, rapid and sensitive LC-MS/MS method operated in segmental and multiple reaction monitoring has been developed for the simultaneous determination of amoxicillin and ambroxol in human plasma. The degradation of amoxicillin in plasma was well prevented by immediate addition of 20 μL glacial acetic acid to 200 μL aliquot of freshly collected plasma samples before storage at -80°C. The sensitivity of the method was improved with segmental monitoring of the analytes, and lower limits of quantitation of 0.5 ng/mL for ambroxol and 5 ng/mL for amoxicillin were obtained. The sensitivity of our method was five times better than those of the existing methods. Furthermore, the mass response saturation problem with amoxicillin was avoided by diluting the deproteinized plasma samples with water before injection into the LC-MS/MS system. The method was successfully employed in a pharmacokinetic study of the compound amoxicillin and ambroxol hydrochloride tablets. Copyright © 2012 John Wiley & Sons, Ltd.

  11. Detection of shigella in lettuce by the use of a rapid molecular assay with increased sensitivity

    PubMed Central

    Jiménez, Kenia Barrantes; McCoy, Clyde B.; Achí, Rosario

    2010-01-01

    A Multiplex Polymerase Chain Reaction (PCR) assay to be used as an alternative to the conventional culture method in detecting Shigella and enteroinvasive Escherichia coli (EIEC) virulence genes ipaH and ial in lettuce was developed. The efficacy and rapidity of the molecular method were determined in comparison with the conventional culture. Lettuce samples were inoculated with different Shigella flexneri concentrations (from 10 CFU/ml to 10⁷ CFU/ml). DNA was extracted directly from lettuce after inoculation (direct PCR) and after an enrichment step (enrichment PCR). The multiplex PCR detection limit was 10⁴ CFU/ml, and its diagnostic sensitivity and specificity were both 100%. An internal amplification control (IAC) of 100 bp was used in order to avoid false negative results. This method produced results in 1 to 2 days, while the conventional culture method required 5 to 6 days. Also, the culture method detection limit was 10⁶ CFU/ml, diagnostic sensitivity was 53% and diagnostic specificity was 100%. In this study a multiplex PCR method for detection of virulence genes in Shigella and EIEC was shown to be effective in terms of diagnostic sensitivity, detection limit and amount of time as compared to Shigella conventional culture.

  12. Multicenter Evaluation of the Solana Group A Streptococcus Assay: Comparison with Culture.

    PubMed

    Uphoff, Timothy S; Buchan, Blake W; Ledeboer, Nathan A; Granato, Paul A; Daly, Judy A; Marti, Tara N

    2016-09-01

    We compared group A Streptococcus (GAS) culture with a rapid helicase-dependent amplification (HDA) method using 1,082 throat swab specimens. The HDA method demonstrated 98.2% sensitivity and 97.2% specificity. GAS prevalence by culture was 20.7%, and it was 22.6% using the HDA method. In 35 min, the HDA method provided rapid, sensitive GAS detection, making culture confirmation unnecessary. Copyright © 2016 Uphoff et al.

  13. Sensitive enumeration of Listeria monocytogenes and other Listeria species in various naturally contaminated matrices using a membrane filtration method.

    PubMed

    Barre, Léna; Brasseur, Emilie; Doux, Camille; Lombard, Bertrand; Besse, Nathalie Gnanou

    2015-06-01

    For the enumeration of Listeria monocytogenes (L. monocytogenes) in food, a sensitive enumeration method has recently been developed. This method is based on membrane filtration of the food suspension followed by transfer of the filter onto a selective medium to enumerate L. monocytogenes. An evaluation of this method was performed with several categories of foods naturally contaminated with L. monocytogenes. The results obtained with this technique were compared with those obtained with the modified reference EN ISO 11290-2 method for the enumeration of L. monocytogenes in food, and were found to be more precise. In most cases, the filtration method made it possible to examine a greater quantity of food, thus greatly improving the sensitivity of the enumeration. However, it was hardly applicable to some food categories because of filtration problems and background microbiota interference. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Data mining methods in the prediction of Dementia: A real-data comparison of the accuracy, sensitivity and specificity of linear discriminant analysis, logistic regression, neural networks, support vector machines, classification trees and random forests

    PubMed Central

    2011-01-01

    Background Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures of Mild Cognitive Impairment (MCI), but has presently a limited value in the prediction of progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning methods like Neural Networks, Support Vector Machines and Random Forests can improve accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven non-parametric classifiers derived from data mining methods (Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, area under the ROC curve and Press' Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using Friedman's nonparametric test. Results Press' Q test showed that all classifiers performed better than chance alone (p < 0.05). Support Vector Machines showed the largest overall classification accuracy (Median (Me) = 0.76) and area under the ROC curve (Me = 0.90). However, this method showed high specificity (Me = 1.0) but low sensitivity (Me = 0.3). Random Forests ranked second in overall accuracy (Me = 0.73), with high area under the ROC curve (Me = 0.73), specificity (Me = 0.73) and sensitivity (Me = 0.64). Linear Discriminant Analysis also showed acceptable overall accuracy (Me = 0.66), with acceptable area under the ROC curve (Me = 0.72), specificity (Me = 0.66) and sensitivity (Me = 0.64). The remaining classifiers showed overall classification accuracy above a median value of 0.63, but for most, sensitivity was around or even lower than a median value of 0.5. Conclusions When taking into account sensitivity, specificity and overall classification accuracy, Random Forests and Linear Discriminant Analysis rank first among all the classifiers tested in prediction of dementia using several neuropsychological tests. These methods may be used to improve accuracy, sensitivity and specificity of dementia predictions from neuropsychological testing. PMID:21849043
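
    A minimal sketch of this kind of comparison, assuming scikit-learn and using synthetic features in place of the neuropsychological test scores; it is not the authors' pipeline, but it shows cross-validated accuracy, sensitivity (recall) and area under the ROC curve for a few of the classifier families named above.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_validate
      from sklearn.svm import SVC

      # Synthetic stand-in for 10 neuropsychological test scores and a binary outcome.
      X, y = make_classification(n_samples=400, n_features=10, n_informative=6,
                                 random_state=0)

      models = {
          "LDA": LinearDiscriminantAnalysis(),
          "Logistic regression": LogisticRegression(max_iter=1000),
          "SVM": SVC(),
          "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
      }

      for name, model in models.items():
          scores = cross_validate(model, X, y, cv=5,
                                  scoring=["accuracy", "recall", "roc_auc"])
          print(f"{name}: acc={scores['test_accuracy'].mean():.2f} "
                f"sens={scores['test_recall'].mean():.2f} "
                f"auc={scores['test_roc_auc'].mean():.2f}")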

  15. Parameter screening: the use of a dummy parameter to identify non-influential parameters in a global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Khorashadi Zadeh, Farkhondeh; Nossent, Jiri; van Griensven, Ann; Bauwens, Willy

    2017-04-01

    Parameter estimation is a major concern in hydrological modeling, which may limit the use of complex simulators with a large number of parameters. To support the selection of parameters to include in or exclude from the calibration process, Global Sensitivity Analysis (GSA) is widely applied in modeling practices. Based on the results of GSA, the influential and the non-influential parameters are identified (i.e. parameter screening). Nevertheless, the choice of the screening threshold below which parameters are considered non-influential is a critical issue, which has recently received more attention in the GSA literature. In theory, the sensitivity index of a non-influential parameter has a value of zero. However, since numerical approximations, rather than analytical solutions, are utilized in GSA methods to calculate the sensitivity indices, small but non-zero values may be obtained for the indices of non-influential parameters. In order to assess the threshold that identifies non-influential parameters in GSA methods, we propose to calculate the sensitivity index of a "dummy parameter". This dummy parameter has no influence on the model output, but will have a non-zero sensitivity index, representing the error due to the numerical approximation. Hence, the parameters whose indices are above the sensitivity index of the dummy parameter can be classified as influential, whereas the parameters whose indices are below this index are within the range of the numerical error and should be considered as non-influential. To demonstrate the effectiveness of the proposed "dummy parameter approach", 26 parameters of a Soil and Water Assessment Tool (SWAT) model are selected to be analyzed and screened, using the variance-based Sobol' and moment-independent PAWN methods. The sensitivity index of the dummy parameter is calculated from sampled data, without changing the model equations. Moreover, the calculation does not even require additional model evaluations for the Sobol' method. A formal statistical test validates these parameter screening results. Based on the dummy parameter screening, 11 model parameters are identified as influential. Therefore, it can be concluded that the "dummy parameter approach" can facilitate the parameter screening process and provide guidance for GSA users to define a screening threshold, with only limited additional resources. Key words: Parameter screening, Global sensitivity analysis, Dummy parameter, Variance-based method, Moment-independent method
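
    A minimal sketch of the dummy-parameter idea, assuming the SALib package and an invented toy model rather than the SWAT model analysed above: a parameter with no effect on the output is added, its Sobol total-order index estimates the numerical error, and parameters whose indices fall below that level are screened out as non-influential.

      import numpy as np
      from SALib.sample import saltelli
      from SALib.analyze import sobol

      problem = {
          "num_vars": 4,
          "names": ["a", "b", "c", "dummy"],
          "bounds": [[0, 1]] * 4,
      }

      X = saltelli.sample(problem, 1024)
      # Toy model: the output depends on a, b and c but not on the dummy parameter.
      Y = X[:, 0] + 2.0 * X[:, 1] * X[:, 2]

      Si = sobol.analyze(problem, Y)
      threshold = Si["ST"][problem["names"].index("dummy")]
      for name, st in zip(problem["names"], Si["ST"]):
          status = "influential" if st > threshold else "non-influential"
          print(f"{name}: ST={st:.4f} ({status})")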

  16. Design sensitivity analysis of rotorcraft airframe structures for vibration reduction

    NASA Technical Reports Server (NTRS)

    Murthy, T. Sreekanta

    1987-01-01

    Optimization of rotorcraft structures for vibration reduction was studied. The objective of this study is to develop practical computational procedures for structural optimization of airframes subject to steady-state vibration response constraints. One of the key elements of any such computational procedure is design sensitivity analysis. A method for design sensitivity analysis of airframes under vibration response constraints is presented. The mathematical formulation of the method and its implementation as a new solution sequence in MSC/NASTRAN are described. The results of the application of the method to a simple finite element stick model of the AH-1G helicopter airframe are presented and discussed. Selection of design variables that are most likely to bring about changes in the response at specified locations in the airframe is based on consideration of forced response strain energy. Sensitivity coefficients are determined for the selected design variable set. Constraints on the natural frequencies are also included in addition to the constraints on the steady-state response. Sensitivity coefficients for these constraints are determined. Results of the analysis and insights gained in applying the method to the airframe model are discussed. The general nature of future work to be conducted is described.

  17. Global sensitivity analysis in stochastic simulators of uncertain reaction networks.

    PubMed

    Navarro Jimenez, M; Le Maître, O P; Knio, O M

    2016-12-28

    Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability with the uncertain kinetic parameters of the first statistical moments of model predictions. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subset of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.

  18. Evaluation of the performance of the reduced local lymph node assay for skin sensitization testing.

    PubMed

    Ezendam, Janine; Muller, Andre; Hakkert, Betty C; van Loveren, Henk

    2013-06-01

    The local lymph node assay (LLNA) is the preferred method for classification of sensitizers within REACH. To reduce the number of mice needed for the identification of sensitizers, the reduced LLNA was proposed, which uses only the high dose group of the LLNA. To evaluate the performance of this method for classification, LLNA data from REACH registrations were used and classification based on all dose groups was compared to classification based on the high dose group. We confirmed previous examinations of the reduced LLNA showing that this method is less sensitive compared to the LLNA. The reduced LLNA misclassified 3.3% of the sensitizers identified in the LLNA; misclassification occurred in all potency classes, and there was no clear association with irritant properties. It is therefore not possible to predict beforehand which substances might be misclassified. Another limitation of the reduced LLNA is that skin sensitizing potency cannot be assessed. For these reasons, it is not recommended to use the reduced LLNA as a stand-alone assay for skin sensitization testing within REACH. In the future, the reduced LLNA might be of added value in a weight of evidence approach to confirm negative results obtained with non-animal approaches. Copyright © 2013 Elsevier Inc. All rights reserved.

  19. Global sensitivity analysis in stochastic simulators of uncertain reaction networks

    DOE PAGES

    Navarro Jimenez, M.; Le Maître, O. P.; Knio, O. M.

    2016-12-23

    Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability with the uncertain kinetic parameters of the first statistical moments of model predictions. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol’s decomposition of the variance into contributions from arbitrary subset of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. Here, a sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.

  20. Combining Monte Carlo methods with coherent wave optics for the simulation of phase-sensitive X-ray imaging

    PubMed Central

    Peter, Silvia; Modregger, Peter; Fix, Michael K.; Volken, Werner; Frei, Daniel; Manser, Peter; Stampanoni, Marco

    2014-01-01

    Phase-sensitive X-ray imaging shows a high sensitivity towards electron density variations, making it well suited for imaging of soft tissue matter. However, there are still open questions about the details of the image formation process. Here, a framework for numerical simulations of phase-sensitive X-ray imaging is presented, which takes both particle- and wave-like properties of X-rays into consideration. A split approach is presented where we combine a Monte Carlo method (MC) based sample part with a wave optics simulation based propagation part, leading to a framework that takes both particle- and wave-like properties into account. The framework can be adapted to different phase-sensitive imaging methods and has been validated through comparisons with experiments for grating interferometry and propagation-based imaging. The validation of the framework shows that the combination of wave optics and MC has been successfully implemented and yields good agreement between measurements and simulations. This demonstrates that the physical processes relevant for developing a deeper understanding of scattering in the context of phase-sensitive imaging are modelled in a sufficiently accurate manner. The framework can be used for the simulation of phase-sensitive X-ray imaging, for instance for the simulation of grating interferometry or propagation-based imaging. PMID:24763652

  1. Global sensitivity analysis in stochastic simulators of uncertain reaction networks

    NASA Astrophysics Data System (ADS)

    Navarro Jimenez, M.; Le Maître, O. P.; Knio, O. M.

    2016-12-01

    Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability with the uncertain kinetic parameters of the first statistical moments of model predictions. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subset of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.

  2. Prediction of redox-sensitive cysteines using sequential distance and other sequence-based features.

    PubMed

    Sun, Ming-An; Zhang, Qing; Wang, Yejun; Ge, Wei; Guo, Dianjing

    2016-08-24

    Reactive oxygen species can modify the structure and function of proteins and may also act as important signaling molecules in various cellular processes. Cysteine thiol groups of proteins are particularly susceptible to oxidation. Meanwhile, their reversible oxidation plays a critical role in redox regulation and signaling. Recently, several computational tools have been developed for predicting redox-sensitive cysteines; however, those methods either focus only on catalytic redox-sensitive cysteines in thiol oxidoreductases, or depend heavily on protein structural data, and thus cannot be widely used. In this study, we analyzed various sequence-based features potentially related to cysteine redox-sensitivity, and identified three types of features for efficient computational prediction of redox-sensitive cysteines. These features are: sequential distance to the nearby cysteines, PSSM profile and predicted secondary structure of flanking residues. After further feature selection using SVM-RFE, we developed Redox-Sensitive Cysteine Predictor (RSCP), an SVM-based classifier for redox-sensitive cysteine prediction using primary sequence only. Using 10-fold cross-validation on the RSC758 dataset, the accuracy, sensitivity, specificity, MCC and AUC were estimated as 0.679, 0.602, 0.756, 0.362 and 0.727, respectively. When evaluated using 10-fold cross-validation with the BALOSCTdb dataset, which has structure information, the model achieved performance comparable to the current structure-based method. Further validation using an independent dataset indicates that it is robust and relatively more accurate for predicting redox-sensitive cysteines from non-enzyme proteins. In this study, we developed a sequence-based classifier for predicting redox-sensitive cysteines. The major advantage of this method is that it does not rely on protein structure data, which ensures more extensive application compared to other current implementations. Accurate prediction of redox-sensitive cysteines not only enhances our understanding of the redox sensitivity of cysteines, it may also complement the proteomics approach and facilitate further experimental investigation of important redox-sensitive cysteines.
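
    A minimal sketch of SVM-based recursive feature elimination followed by a linear SVM classifier, assuming scikit-learn; the random matrix below merely stands in for the sequence-derived features (distance to nearby cysteines, PSSM profile, predicted secondary structure) and is not the RSC758 data.

      import numpy as np
      from sklearn.feature_selection import RFE
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.normal(size=(300, 60))          # stand-in feature matrix (invented)
      y = rng.integers(0, 2, size=300)        # stand-in redox-sensitivity labels

      clf = make_pipeline(
          StandardScaler(),
          RFE(SVC(kernel="linear"), n_features_to_select=20),  # SVM-RFE step
          SVC(kernel="linear"),
      )
      print(cross_val_score(clf, X, y, cv=10, scoring="roc_auc").mean())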

  3. Discrimination of skin sensitizers from non-sensitizers by interleukin-1α and interleukin-6 production on cultured human keratinocytes.

    PubMed

    Jung, Daun; Che, Jeong-Hwan; Lim, Kyung-Min; Chun, Young-Jin; Heo, Yong; Seok, Seung Hyeok

    2016-09-01

    In vitro testing methods for classifying sensitizers could be valuable alternatives to in vivo sensitization testing using animal models, such as the murine local lymph node assay (LLNA) and the guinea pig maximization test (GMT), but there remains a need for in vitro methods that are more accurate and simpler for distinguishing skin sensitizers from non-sensitizers. Thus, the aim of our study was to establish an in vitro assay as a screening tool for detecting skin sensitizers using the human keratinocyte cell line HaCaT. HaCaT cells were exposed to 16 relevant skin sensitizers and 6 skin non-sensitizers. The highest dose used was the dose causing 75% cell viability (CV75), which we determined by an MTT [3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide] assay. The levels of extracellular production of interleukin-1α (IL-1α) and IL-6 were measured. The sensitivity of IL-1α was 63%, specificity was 83% and accuracy was 68%; for IL-6, sensitivity was 69%, specificity 83% and accuracy 73%. Thus, this study suggests that measuring extracellular production of the pro-inflammatory cytokines IL-1α and IL-6 by human HaCaT cells may potentially distinguish skin sensitizers from non-sensitizers. Copyright © 2015 John Wiley & Sons, Ltd.

  4. Tangent Adjoint Methods In a Higher-Order Space-Time Discontinuous-Galerkin Solver For Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Diosady, Laslo; Murman, Scott; Blonigan, Patrick; Garai, Anirban

    2017-01-01

    A space-time adjoint solver for turbulent compressible flows is presented. The failure of traditional sensitivity methods for chaotic flows is confirmed, the rate of exponential growth of the adjoint is assessed for a practical 3D turbulent simulation, and the failure of short-window sensitivity approximations is demonstrated.

  5. Sensitivity analysis and approximation methods for general eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Murthy, D. V.; Haftka, R. T.

    1986-01-01

    Optimization of dynamic systems involving complex non-hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of appropriate approximation technique as a function of the matrix size, number of design variables, number of eigenvalues of interest and the number of design points at which approximation is sought.
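
    One of the quantities discussed above, the derivative of an eigenvalue of a non-hermitian matrix, can be checked numerically with the standard first-order perturbation formula dλ/dp = yᵀ(dA/dp)x / (yᵀx), where x is a right eigenvector and y satisfies Aᵀy = λy; the matrices below are random and purely illustrative, not from the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      A = rng.normal(size=(5, 5))             # non-hermitian system matrix (random)
      dA = rng.normal(size=(5, 5))            # dA/dp, direction of the perturbation

      w, V = np.linalg.eig(A)                 # right eigenvectors (columns of V)
      wl, U = np.linalg.eig(A.T)              # left eigenvectors via the transpose
      k = 0                                   # track the first eigenvalue of A
      j = np.argmin(np.abs(wl - w[k]))        # matching left eigenvalue
      x, y = V[:, k], U[:, j]

      analytic = (y @ dA @ x) / (y @ x)       # perturbation-theory sensitivity

      eps = 1e-6                              # finite-difference check
      w_pert = np.linalg.eig(A + eps * dA)[0]
      fd = (w_pert[np.argmin(np.abs(w_pert - w[k]))] - w[k]) / eps

      print(analytic, fd)                     # should agree to several digits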

  6. A Novel Quantum Dots-Based Point of Care Test for Syphilis

    NASA Astrophysics Data System (ADS)

    Yang, Hao; Li, Ding; He, Rong; Guo, Qin; Wang, Kan; Zhang, Xueqing; Huang, Peng; Cui, Daxiang

    2010-05-01

    The one-step lateral flow test is recommended as the first-line screening test for syphilis in primary healthcare settings in developing countries. However, it generally shows low sensitivity. We describe here the development of a novel fluorescent POC (point-of-care) test method for syphilis screening, designed to combine the rapidity of the lateral flow test with the sensitivity of a fluorescent method. Fifty syphilis-positive specimens and 50 healthy specimens confirmed by Treponema pallidum particle agglutination (TPPA) were tested with quantum dot-labeled and colloidal gold-labeled lateral flow test strips, respectively. The results showed that both the sensitivity and the specificity of the quantum dot-based method reached 100% (95% confidence interval [CI], 91-100%), while those of the colloidal gold-based method were 82% (95% CI, 68-91%) and 100% (95% CI, 91-100%), respectively. In addition, the naked-eye detection limit of the quantum dot-based method reached 2 ng/ml of anti-TP47 polyclonal antibodies purified by affinity chromatography with TP47 antigen, a tenfold improvement over the colloidal gold-based method. In conclusion, quantum dots were found to be suitable labels for lateral flow test strips. The method's ease of use, sensitivity and low cost make it well-suited for population-based on-site syphilis screening.

  7. A wideband FMBEM for 2D acoustic design sensitivity analysis based on direct differentiation method

    NASA Astrophysics Data System (ADS)

    Chen, Leilei; Zheng, Changjun; Chen, Haibo

    2013-09-01

    This paper presents a wideband fast multipole boundary element method (FMBEM) for two dimensional acoustic design sensitivity analysis based on the direct differentiation method. The wideband fast multipole method (FMM) formed by combining the original FMM and the diagonal form FMM is used to accelerate the matrix-vector products in the boundary element analysis. The Burton-Miller formulation is used to overcome the fictitious frequency problem when using a single Helmholtz boundary integral equation for exterior boundary-value problems. The strongly singular and hypersingular integrals in the sensitivity equations can be evaluated explicitly and directly by using the piecewise constant discretization. The iterative solver GMRES is applied to accelerate the solution of the linear system of equations. A set of optimal parameters for the wideband FMBEM design sensitivity analysis are obtained by observing the performances of the wideband FMM algorithm in terms of computing time and memory usage. Numerical examples are presented to demonstrate the efficiency and validity of the proposed algorithm.

  8. Aptamer-mediated colorimetric method for rapid and sensitive detection of chloramphenicol in food.

    PubMed

    Yan, Chao; Zhang, Jing; Yao, Li; Xue, Feng; Lu, Jianfeng; Li, Baoguang; Chen, Wei

    2018-09-15

    We report an aptamer-mediated colorimetric method for sensitive detection of chloramphenicol (CAP). The aptamer of CAP is immobilized by hybridization with a pre-immobilized capture probe in the microtiter plate. Horseradish peroxidase (HRP) is covalently attached to the aptamer by the biotin-streptavidin system for signal production. CAP preferentially binds to the aptamer owing to its high binding affinity, which leads to the release of the aptamer and HRP and thus affects the optical signal intensity. Quantitative determination of CAP is successfully achieved in the wide range from 0.001 to 1000 ng/mL with a detection limit of 0.0031 ng/mL, which is more sensitive than traditional immunoassays. The method is further validated by measuring the recovery of CAP spiked into two different food matrices (honey and fish). The aptamer-mediated colorimetric method can be a useful protocol for rapid and sensitive screening of CAP, and may be used as an alternative to traditional immunoassays. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. Manufacturing error sensitivity analysis and optimal design method of cable-network antenna structures

    NASA Astrophysics Data System (ADS)

    Zong, Yali; Hu, Naigang; Duan, Baoyan; Yang, Guigeng; Cao, Hongjun; Xu, Wanye

    2016-03-01

    Inevitable manufacturing errors and inconsistency between assumed and actual boundary conditions can affect the shape precision and cable tensions of a cable-network antenna, and even result in failure of the structure in service. In this paper, an analytical method for the sensitivity of the shape precision and cable tensions with respect to the parameters carrying uncertainty was studied. Based on the sensitivity analysis, an optimal design procedure was proposed to alleviate the effects of the parameters that carry uncertainty. The validity of the calculated sensitivities is examined by comparison with those computed by a finite difference method. Comparison with a traditional design method shows that the presented design procedure can remarkably reduce the influence of the uncertainties on the antenna performance. Moreover, the results suggest that especially slender front net cables, thick tension ties, relatively slender boundary cables and a high tension level can improve the ability of cable-network antenna structures to resist the effects of the uncertainties on the antenna performance.

  10. Rapid and sensitive detection of human astrovirus in water samples by loop-mediated isothermal amplification with hydroxynaphthol blue dye.

    PubMed

    Yang, Bo-Yun; Liu, Xiao-Lu; Wei, Yu-Mei; Wang, Jing-Qi; He, Xiao-Qing; Jin, Yi; Wang, Zi-Jian

    2014-02-14

    The aim of this paper was to develop a reverse transcription loop-mediated isothermal amplification (RT-LAMP) method for rapid, sensitive and inexpensive detection of astrovirus. The detection limit of LAMP using in vitro RNA transcripts was 3.6 × 10 copies·μL⁻¹, which is as sensitive as the presently used PCR assays. However, the LAMP products could be identified as different colors with the naked eye following staining with hydroxynaphthol blue dye (HNB). No cross-reactivity with other gastroenteric viruses (rotavirus and norovirus) was observed, indicating the relatively high specificity of LAMP. The RT-LAMP method with HNB was used to effectively detect astrovirus in reclaimed water samples. The LAMP technique described in this study is a cheap, sensitive, specific and rapid method for the detection of astrovirus. The RT-LAMP method can be simply applied for the specific detection of astrovirus and has the potential to be utilized in the field as a screening test.

  11. Highly sensitive determination of iron (III) ion based on phenanthroline probe: Surface-enhanced Raman spectroscopy methods

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Ma, Ning; Park, Yeonju; Jin, Sila; Hwang, Hoon; Jiang, Dayu; Jung, Young Mee

    2018-05-01

    In this paper, we introduced Raman spectroscopy techniques based on the traditional Fe3+ determination method with phenanthroline as a probe. Interestingly, the surface-enhanced Raman spectroscopy (SERS)-based approach exhibited excellent sensitivity to phenanthroline. Different detection mechanisms were observed for the resonance Raman (RR) and SERS techniques: the RR intensity increased with increasing Fe3+ concentration owing to the RR effect of the phenanthroline-Fe2+ complex, whereas the SERS intensity increased with decreasing Fe3+ concentration owing to the SERS effect of the uncomplexed phenanthroline. More importantly, the determination sensitivity was substantially improved in the presence of a SERS-active substrate, giving a detection limit as low as 0.001 μg/mL, which is 20 times lower than the limit of the UV-vis and RR methods. Furthermore, the proposed SERS method was free from interference by other ions and can be applied with good sensitivity to the determination of Fe3+ in city tap water.

  12. Sensitivity analysis of static resistance of slender beam under bending

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valeš, Jan

    2016-06-08

    The paper deals with statistical and sensitivity analyses of the resistance of simply supported I-beams under bending. The resistance was solved by a geometrically nonlinear finite element method in the programme Ansys. The beams are modelled with initial geometrical imperfections following the first eigenmode of buckling. The imperfections, together with the geometrical characteristics of the cross section and the material characteristics of the steel, were considered as random quantities. The Latin Hypercube Sampling method was applied to carry out the statistical and sensitivity analyses of the resistance.
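
    A minimal sketch of Latin Hypercube Sampling of uncertain inputs followed by a crude rank-correlation sensitivity measure, assuming SciPy; the resistance function and the parameter bounds are invented stand-ins for the Ansys finite element model.

      import numpy as np
      from scipy.stats import qmc, spearmanr

      sampler = qmc.LatinHypercube(d=3, seed=0)
      unit = sampler.random(n=200)
      # Columns (illustrative): imperfection amplitude [mm], yield strength [MPa], flange thickness [mm]
      X = qmc.scale(unit, l_bounds=[0.1, 235.0, 8.0], u_bounds=[5.0, 355.0, 12.0])

      def resistance(imp, fy, t):             # toy stand-in for the FE model
          return fy * t / (1.0 + 0.3 * imp)

      R = resistance(X[:, 0], X[:, 1], X[:, 2])
      for j, name in enumerate(["imperfection", "yield strength", "thickness"]):
          rho, _ = spearmanr(X[:, j], R)
          print(f"{name}: Spearman rho = {rho:+.2f}")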

  13. Rapid diagnosis of sensitivity to ultraviolet light in fibroblasts from dermatologic disorders, with particular reference to xeroderma pigmentosum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cleaver, J.E.; Thomas, G.H.

    1988-04-01

    A rapid and simple method for determining the sensitivity of human fibroblasts to ultraviolet light is described. As an alternative to the colony formation assay, this method can be used for the rapid diagnosis of ultraviolet light sensitivity in fibroblasts from photosensitive disorders. The method is based on growth of small numbers of cells in 1-cm wells of culture trays for 4 or more days after irradiation and determination of cell survival by the incorporation of (³H)hypoxanthine. D37 values (the dose at which 37% of the control level of incorporation remains) obtained from this procedure showed the same relative sensitivity of normal and xeroderma pigmentosum fibroblasts as was obtained by colony formation. Untransformed and SV40-transformed fibroblasts, which have different growth rates and different responses to high cell densities, gave different D37 values by this assay in culture trays as compared with colony formation. Comparison of relative sensitivities to irradiation should therefore be made only between cell types with similar growth characteristics. The similar sensitivity of normal and xeroderma pigmentosum cells to mitomycin C was also determined by this culture tray method. By increasing cell density at the beginning of the experiments, a greater capacity of group C compared with group D fibroblasts for recovery from potentially lethal damage was also detected.
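
    The D37 arithmetic can be illustrated by fitting the single-hit survival model S(D) = exp(-D/D37) to incorporation data expressed as a fraction of the unirradiated control; the dose-survival pairs below are invented, not taken from the study.

      import numpy as np
      from scipy.optimize import curve_fit

      dose = np.array([0.0, 2.0, 4.0, 8.0, 12.0])          # J/m^2, made-up values
      survival = np.array([1.00, 0.62, 0.39, 0.15, 0.06])  # fraction of control

      def single_hit(D, D37):
          return np.exp(-D / D37)

      (D37_fit,), _ = curve_fit(single_hit, dose, survival, p0=[5.0])
      print(f"D37 = {D37_fit:.2f} J/m^2")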

  14. Sensitivity for Diagnosing Group A Streptococcal Pharyngitis from Manufacturers is 10% Higher than Reported in Peer-Reviewed Publications.

    PubMed

    Vachhani, Raj; Patel, Toral; Centor, Robert M; Estrada, Carlos A

    2017-01-01

    Meta-analyses based on peer-reviewed publications report a sensitivity of approximately 85% for rapid antigen streptococcus tests to diagnose group A streptococcal (GAS) pharyngitis. Because these meta-analyses excluded package inserts, we examined the test characteristics of rapid antigen streptococcal tests and molecular methods that manufacturers report in their package inserts. We included tests available in the US market (Food and Drug Administration, period searched 1993-2015) and used package insert data to calculate pooled sensitivity and specificity. To examine quality, we used the Quality Assessment of Diagnostic Accuracy Studies-2. We excluded 26 tests having different trade names but identical methods and data. The study design was prospective in 41.7% (10 of 24). The pooled sensitivity of the most commonly used method, lateral flow/immunochromatographic, was 95% (95% confidence interval [CI] 94-96) and the pooled specificity was 98% (96-98); 7108 patients. The pooled sensitivity of the polymerase chain reaction or molecular methods was 98% (95% CI 96-98) and the pooled specificity was 96% (95% CI 95-97); 5685 patients. Package inserts include sponsored studies that overestimate the sensitivity of rapid tests to diagnose GAS pharyngitis by approximately 10%. Physicians should understand that package inserts overestimate diagnostic test utility; a negative test cannot be used to exclude GAS pharyngitis.

  15. Impact sensitivity test of liquid energetic materials

    NASA Astrophysics Data System (ADS)

    Tiutiaev, A.; Dolzhikov, A.; Zvereva, I.

    2017-10-01

    This paper presents a new experimental method for evaluating impact sensitivity. A large number of studies have shown that the probability of initiating an explosion in a liquid explosive by impact depends on its chemical nature and on various external factors. However, the sensitivity of a liquid explosive containing gas bubbles is many times greater than that of the same liquid without bubbles, because local reaction foci form as the gas inside the bubbles is compressed and heated. Since gas bubbles are easily generated in a liquid by convection, wave motion, shock and so on, methods are needed for determining the impact sensitivity of liquid explosives and for studying their ignition in the presence of bubbles. For the experimental investigation, the well-known impact machine and the so-called appliance 1 were used. Instead of the metal cup of the standard method, a polyurethane foam cylindrical container holding the liquid explosive was used; this container deforms easily on impact. A large number of tests with different liquid explosives were performed. The tests of liquid explosives in appliance 1 with the polyurethane foam container were found to reflect the real mechanical sensitivity more closely, owing to the small loss of impact energy to deformation of the metal cup, and to give better differentiation of liquid explosive sensitivities owing to the higher resolution of the method.

  16. Study on ABO and RhD blood grouping: Comparison between conventional tile method and a new solid phase method (InTec Blood Grouping Test Kit).

    PubMed

    Yousuf, R; Abdul Ghani, S A; Abdul Khalid, N; Leong, C F

    2018-04-01

    The 'InTec Blood Grouping Test Kit', which uses solid-phase technology, is a new method that may be used at outdoor blood donation sites or at the bedside as an alternative to the conventional tile method, in view of its stability at room temperature and its fulfilment of the criteria for a point-of-care test. This study aimed to compare the efficiency of this solid-phase method (InTec Blood Grouping Test Kit) with the conventional tile method in determining the ABO and RhD blood groups of healthy donors. A total of 760 voluntary donors who attended the Blood Bank, Penang Hospital or offsite blood donation campaigns from April to May 2014 were recruited. The ABO and RhD blood groups were determined by the conventional tile method and the solid-phase method, with the tube method used as the gold standard. For ABO blood grouping, the tile method showed 100% concordance with the gold standard tube method, whereas the solid-phase method showed concordant results for only 754/760 samples (99.2%). Therefore, for ABO grouping, the tile method had 100% sensitivity and specificity, while the solid-phase method had a slightly lower sensitivity of 97.7% with the same specificity of 100%. For RhD grouping, the tile and solid-phase methods each grouped one RhD-positive specimen as negative, giving a sensitivity of 99.9% and a specificity of 100% for both methods. The 'InTec Blood Grouping Test Kit' is suitable for offsite use because of its simplicity and user friendliness. However, further improvement, such as the addition of an internal quality control, may increase the sensitivity and validity of the test results.

  17. The Role of Attention in Somatosensory Processing: A Multi-trait, Multi-method Analysis

    PubMed Central

    Puts, Nicolaas A. J.; Mahone, E. Mark; Edden, Richard A. E.; Tommerdahl, Mark; Mostofsky, Stewart H.

    2016-01-01

    Sensory processing abnormalities in autism have largely been described by parent report. This study used a multi-method (parent-report and measurement), multi-trait (tactile sensitivity and attention) design to evaluate somatosensory processing in ASD. Results showed multiple significant within-method (e.g., parent report of different traits)/cross-trait (e.g., attention and tactile sensitivity) correlations, suggesting that parent-reported tactile sensory dysfunction and performance-based tactile sensitivity describe different behavioral phenomena. Additionally, both parent-reported tactile functioning and performance-based tactile sensitivity measures were significantly associated with measures of attention. Findings suggest that sensory (tactile) processing abnormalities in ASD are multifaceted, and may partially reflect a more global deficit in behavioral regulation (including attention). Challenges of relying solely on parent-report to describe sensory difficulties faced by children/families with ASD are also highlighted. PMID:27448580

  18. Computational methods for efficient structural reliability and reliability sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1993-01-01

    This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
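
    A minimal importance-sampling sketch for a small failure probability, using a fixed (non-adaptive) sampling density rather than the AIS scheme of the paper; the limit state and the densities below are illustrative assumptions.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(0)

      def limit_state(x):                     # toy limit state: failure when g < 0
          return 4.0 - x

      # Nominal density p: standard normal. Sampling density q: normal centred at 4,
      # i.e. near the failure region, to reduce the variance of the estimate.
      n = 20000
      x = rng.normal(loc=4.0, scale=1.0, size=n)
      weights = norm.pdf(x, 0.0, 1.0) / norm.pdf(x, 4.0, 1.0)
      indicator = (limit_state(x) < 0.0).astype(float)

      pf = np.mean(indicator * weights)
      print(f"estimated P_f = {pf:.2e}  (exact = {norm.sf(4.0):.2e})")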

  19. Development of loop-mediated isothermal amplification methods for detecting Taylorella equigenitalis and Taylorella asinigenitalis

    PubMed Central

    KINOSHITA, Yuta; NIWA, Hidekazu; KATAYAMA, Yoshinari; HARIU, Kazuhisa

    2015-01-01

    Taylorella equigenitalis is a causative bacterium of contagious equine metritis (CEM), and Taylorella asinigenitalis is a species belonging to the genus Taylorella. The authors developed two loop-mediated isothermal amplification (LAMP) methods, Te-LAMP and Ta-LAMP, for detecting T. equigenitalis and T. asinigenitalis, respectively. Using experimentally spiked samples, Te-LAMP was as sensitive as a published semi-nested PCR method, and Ta-LAMP was more sensitive than conventional PCR. Multiplex LAMP worked well without nonspecific reactions, and the analytical sensitivities of multiplex LAMP in the spiked samples were almost equivalent to those of Te-LAMP and Ta-LAMP. Therefore, the LAMP methods are considered useful tools to detect T. equigenitalis and/or T. asinigenitalis, and preventive measures can be rapidly implemented if the occurrence of CEM is confirmed by the LAMP methods. PMID:25829868

  20. Determination of IgE antibodies to the benzylpenicilloyl determinant: a comparison of the sensitivity and specificity of three radioallergosorbent test methods.

    PubMed

    Garcia, J J; Blanca, M; Moreno, F; Vega, J M; Mayorga, C; Fernandez, J; Juarez, C; Romano, A; de Ramon, E

    1997-01-01

    The quantitation of in vitro IgE antibodies to the benzylpenicilloyl determinant (BPO) is a useful tool for evaluating suspected penicillin allergic subjects. Although many different methods have been employed, few studies have compared their diagnostic specificity and sensitivity. In this study, the sensitivity and specificity of three different radioallergosorbent test (RAST) methods for quantitating specific IgE antibodies to the BPO determinant were compared. Thirty positive control sera (serum samples from penicillin allergic subjects with a positive clinical history and a positive penicillin skin test) and 30 negative control sera (sera from subjects with no history of penicillin allergy and negative skin tests) were tested for BPO-specific IgE antibodies by RAST using three different conjugates coupled to the solid phase: benzylpenicillin conjugated to polylysine (BPO-PLL), benzylpenicillin conjugated to human serum albumin (BPO-HSA), and benzylpenicillin conjugated to an aminospacer (BPO-SP). Receiver operating characteristic (ROC) analysis was carried out by determining different cut-off points between positive and negative values. Contingency tables were constructed and sensitivity, specificity, negative predictive values (PV-), and positive predictive values (PV+) were calculated. Pearson correlation coefficients (r) and intraclass correlation coefficients (ICC) were determined and the differences between methods were compared by chi-squared analysis. Analysis of the areas defined by the ROC curves showed statistical differences among the three methods. When cut-off points for optimal sensitivity and specificity were chosen, the BPO-HSA assay was less sensitive and less specific and had a lower PV- and PV+ than the BPO-PLL and BPO-SP assays. Assessment of r and ICC indicated that the correlation was very high, but the concordance between the PLL and SP methods was higher than between the PLL and HSA or SP and HSA methods. We conclude that for quantitating IgE antibodies by RAST to the BPO determinant, BPO-SP or BPO-PLL conjugates offer advantages in sensitivity and specificity compared with BPO-HSA. These results support and extend previous in vitro studies by our group and highlight the importance of the carrier for RAST assays.
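
    A minimal sketch of ROC-based cut-off selection, assuming scikit-learn and synthetic RAST readings rather than the study sera; the Youden index (sensitivity + specificity - 1) is used to pick the operating point.

      import numpy as np
      from sklearn.metrics import roc_curve, roc_auc_score

      rng = np.random.default_rng(0)
      neg = rng.normal(1.0, 0.5, 30)          # simulated signal, skin-test-negative sera
      pos = rng.normal(2.5, 0.8, 30)          # simulated signal, skin-test-positive sera
      y_true = np.r_[np.zeros(30), np.ones(30)]
      y_score = np.r_[neg, pos]

      fpr, tpr, thresholds = roc_curve(y_true, y_score)
      best = np.argmax(tpr - fpr)             # maximise the Youden index
      print(f"AUC = {roc_auc_score(y_true, y_score):.2f}, "
            f"cut-off = {thresholds[best]:.2f}, "
            f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")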

  1. Quenching of cascade reaction between triplet and photochrome probes with nitroxide radicals. A novel labeling method in study of membranes and surface systems.

    PubMed

    Papper, V; Medvedeva, N; Fishov, I; Likhtenshtein, G I

    2000-01-01

    We proposed a new method for the study of molecular dynamics and fluidity of the living and model biomembranes and surface systems. The method is based on the measurements of the sensitized photoisomerization kinetics of a photochrome probe. The cascade triplet cis-trans photoisomerization of the excited stilbene derivative sensitized with the excited triplet Erythrosin B has been studied in a model liposome membrane. The photoisomerization reaction is depressed with nitroxide radicals quenching the excited triplet state of the sensitizer. The enhanced fluorescence polarization of the stilbene probe incorporated into liposome membranes indicates that the stilbene molecules are squeezed in a relatively viscous media of the phospholipids. Calibration of the "triple" cascade system is based on a previously proposed method that allows the measurement of the product of the quenching rate constant and the sensitizer's triplet lifetime, as well as the quantitative detection of the nitroxide radicals in the vicinity of the membrane surface. The experiment was conducted using the constant-illumination fluorescence technique. Sensitivity of the method using a standard commercial spectrofluorimeter is about 10⁻¹² mol of fluorescence molecules per sample and can be improved using an advanced fluorescence technique. The minimal local concentration of nitroxide radicals or any other quenchers being detected is about 10⁻⁵ M. This method enables the investigation of any chemical and biological surface processes of microscopic scale when the minimal volume is about 10⁻³ μL or less.

  2. Meta-analysis of diagnostic accuracy studies in mental health

    PubMed Central

    Takwoingi, Yemisi; Riley, Richard D; Deeks, Jonathan J

    2015-01-01

    Objectives To explain methods for data synthesis of evidence from diagnostic test accuracy (DTA) studies, and to illustrate different types of analyses that may be performed in a DTA systematic review. Methods We described properties of meta-analytic methods for quantitative synthesis of evidence. We used a DTA review comparing the accuracy of three screening questionnaires for bipolar disorder to illustrate application of the methods for each type of analysis. Results The discriminatory ability of a test is commonly expressed in terms of sensitivity (proportion of those with the condition who test positive) and specificity (proportion of those without the condition who test negative). There is a trade-off between sensitivity and specificity, as an increasing threshold for defining test positivity will decrease sensitivity and increase specificity. Methods recommended for meta-analysis of DTA studies, such as the bivariate or hierarchical summary receiver operating characteristic (HSROC) model, jointly summarise sensitivity and specificity while taking into account this threshold effect, as well as allowing for between-study differences in test performance beyond what would be expected by chance. The bivariate model focuses on estimation of a summary sensitivity and specificity at a common threshold while the HSROC model focuses on the estimation of a summary curve from studies that have used different thresholds. Conclusions Meta-analyses of diagnostic accuracy studies can provide answers to important clinical questions. We hope this article will provide clinicians with sufficient understanding of the terminology and methods to aid interpretation of systematic reviews and facilitate better patient care. PMID:26446042

  3. Sensitive spectrophotometric determination of aceclofenac following azo dye formation with 4-carboxyl-2,6-dinitrobenzene diazonium ion.

    PubMed

    Aderibigbe, Segun A; Adegoke, Olajire A; Idowu, Olakunle S; Olaleye, Sefiu O

    2012-01-01

    The study is a description of a sensitive spectrophotometric determination of aceclofenac following azo dye formation with 4-carboxyl-2,6-dinitrobenzenediazonium ion (CDNBD). Spot test and thin layer chromatography revealed the formation of a new compound distinct from CDNBD and aceclofenac. Optimization studies established a reaction time of 5 min at 30 °C after vortex mixing the drug/CDNBD for 10 s. An absorption maximum of 430 nm was selected as the analytical wavelength. A linear response was observed over 1.2-4.8 μg/mL of aceclofenac with a correlation coefficient of 0.9983, and the drug combined with CDNBD at a stoichiometric ratio of 2:1. The method has a limit of detection of 0.403 μg/mL and a limit of quantitation of 1.22 μg/mL, and is reproducible over a three-day assessment. The method gave a Sandell's sensitivity of 3.279 ng/cm². Intra- and inter-day accuracies (in terms of errors) were less than 6% while precisions were of the order of 0.03-1.89% (RSD). The developed spectrophotometric method is of equivalent accuracy (p > 0.05) to the British Pharmacopoeia 2010 potentiometric method. It has the advantages of speed, simplicity, sensitivity and more affordable instrumentation, and could find application as a rapid and sensitive analytical method for aceclofenac. It is the first azo dye derivatization method described for the analysis of aceclofenac in bulk samples and dosage forms.
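
    The calibration arithmetic behind figures such as these can be sketched as a least-squares line through concentration-absorbance pairs, with LOD = 3.3 s/slope and LOQ = 10 s/slope, where s is the residual standard deviation; the readings below are invented, not the study data.

      import numpy as np

      conc = np.array([1.2, 2.4, 3.6, 4.8])               # ug/mL, illustrative
      absorbance = np.array([0.118, 0.239, 0.355, 0.481])  # invented readings

      slope, intercept = np.polyfit(conc, absorbance, 1)
      residuals = absorbance - (slope * conc + intercept)
      s = residuals.std(ddof=2)                            # residual standard deviation

      print(f"slope = {slope:.4f}, r = {np.corrcoef(conc, absorbance)[0, 1]:.4f}")
      print(f"LOD = {3.3 * s / slope:.3f} ug/mL, LOQ = {10 * s / slope:.3f} ug/mL")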

  4. Overview of the AVT-191 Project to Assess Sensitivity Analysis and Uncertainty Quantification Methods for Military Vehicle Design

    NASA Technical Reports Server (NTRS)

    Benek, John A.; Luckring, James M.

    2017-01-01

    A NATO symposium held in 2008 identified many promising sensitivity analysis and uncertainty quantification technologies, but the maturity and suitability of these methods for realistic applications were not known. The STO Task Group AVT-191 was established to evaluate the maturity and suitability of various sensitivity analysis and uncertainty quantification methods for application to realistic problems of interest to NATO. The program ran from 2011 to 2015, and the work was organized into four discipline-centric teams: external aerodynamics, internal aerodynamics, aeroelasticity, and hydrodynamics. This paper presents an overview of the AVT-191 program content.

  5. Summary Findings from the AVT-191 Project to Assess Sensitivity Analysis and Uncertainty Quantification Methods for Military Vehicle Design

    NASA Technical Reports Server (NTRS)

    Benek, John A.; Luckring, James M.

    2017-01-01

    A NATO symposium held in Greece in 2008 identified many promising sensitivity analysis and uncertainty quantification technologies, but the maturity and suitability of these methods for realistic applications was not clear. The NATO Science and Technology Organization, Task Group AVT-191 was established to evaluate the maturity and suitability of various sensitivity analysis and uncertainty quantification methods for application to realistic vehicle development problems. The program ran from 2011 to 2015, and the work was organized into four discipline-centric teams: external aerodynamics, internal aerodynamics, aeroelasticity, and hydrodynamics. This paper summarizes findings and lessons learned from the task group.

  6. Improving the sensitivity and accuracy of gamma activation analysis for the rapid determination of gold in mineral ores.

    PubMed

    Tickner, James; Ganly, Brianna; Lovric, Bojan; O'Dwyer, Joel

    2017-04-01

    Mining companies rely on chemical analysis methods to determine concentrations of gold in mineral ore samples. As gold is often mined commercially at concentrations around 1 part-per-million, it is necessary for any analysis method to provide good sensitivity as well as high absolute accuracy. We describe work to improve both the sensitivity and accuracy of the gamma activation analysis (GAA) method for gold. We present analysis results for several suites of ore samples and discuss the design of a GAA facility intended to replace conventional chemical assay in industrial applications. Copyright © 2017. Published by Elsevier Ltd.

  7. Sensitivity and specificity of the method used for ascertainment of healthcare-associated infections in the second Slovenian national prevalence survey.

    PubMed

    Serdt, Mojca; Lejko Zupanc, Tatjana; Korošec, Aleš; Klavs, Irena

    2016-12-01

    The second Slovenian national healthcare-associated infections (HAIs) prevalence survey (SNHPS) was conducted in acute-care hospitals in 2011. The objective was to assess the sensitivity and specificity of the method used for the ascertainment of six types of HAIs (bloodstream infections, catheter-associated infections, lower respiratory tract infections, pneumonia, surgical site infections, and urinary tract infections) in the University Medical Centre Ljubljana (UMCL). A cross-sectional study was conducted in patients surveyed in the SNHPS in the UMCL using a retrospective medical chart review (RMCR) and European HAIs surveillance definitions. The sensitivity and specificity of the method used in the SNHPS, with RMCR as the reference, were computed for the ascertainment of patients with any of the six selected types of HAIs and for individual types of HAIs. Agreement between the SNHPS and RMCR results was analyzed using Cohen's kappa coefficient. 1474 of 1742 (84.6%) patients surveyed in the SNHPS were included in the RMCR. The sensitivity of the SNHPS method for detecting any of the six HAIs was 90% (95% confidence interval (CI): 81%-95%) and the specificity was 99% (95% CI: 98%-99%). The sensitivity by type of HAI ranged from 63% (lower respiratory tract infections) to 92% (bloodstream infections). Specificity was at least 99% for all types of HAIs. Agreement between the two data collection approaches for HAIs overall was very good (κ=0.83). The overall sensitivity of the SNHPS collection method for ascertaining HAIs was high and the specificity was very high. This suggests that the estimated prevalence of HAIs in the SNHPS was credible.
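    The survey-versus-chart-review comparison rests on standard 2x2 agreement statistics; the short sketch below computes sensitivity, specificity, and Cohen's kappa from such a table. The counts used are invented for illustration and do not reproduce the SNHPS data.

    def agreement_stats(tp, fp, fn, tn):
        """Sensitivity, specificity, and Cohen's kappa from 2x2 counts
        (test vs. reference). tp = positive by both, tn = negative by both."""
        n = tp + fp + fn + tn
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        p_observed = (tp + tn) / n
        # chance agreement expected from the marginal totals
        p_expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
        kappa = (p_observed - p_expected) / (1 - p_expected)
        return sensitivity, specificity, kappa

    sens, spec, kappa = agreement_stats(tp=90, fp=14, fn=10, tn=1360)
    print(f"sensitivity={sens:.2f}, specificity={spec:.3f}, kappa={kappa:.2f}")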

  8. Highly Sensitive and Automated Surface Enhanced Raman Scattering-based Immunoassay for H5N1 Detection with Digital Microfluidics.

    PubMed

    Wang, Yang; Ruan, Qingyu; Lei, Zhi-Chao; Lin, Shui-Chao; Zhu, Zhi; Zhou, Leiji; Yang, Chaoyong

    2018-04-17

    Digital microfluidics (DMF) is a powerful platform for a broad range of applications, especially immunoassays with multiple steps, owing to its low reagent consumption and high degree of automation. Surface enhanced Raman scattering (SERS) has proven to be an attractive method for highly sensitive and multiplex detection because of its remarkable signal amplification and excellent spatial resolution. Here we propose a SERS-based immunoassay with DMF for rapid, automated, and sensitive detection of disease biomarkers. SERS tags labeled with the Raman reporter 4-mercaptobenzoic acid (4-MBA) were synthesized with a core@shell nanostructure and showed strong signals, good uniformity, and high stability. A sandwich immunoassay was designed in which magnetic beads coated with antibodies were used as a solid support to capture antigens from samples, forming a bead-antibody-antigen immunocomplex. By labeling the immunocomplex with a detection antibody-functionalized SERS tag, the antigen can be sensitively detected through the strong SERS signal. The automation capability of DMF greatly simplifies the assay procedure while reducing the risk of exposure to hazardous samples. Quantitative detection of avian influenza virus H5N1 in buffer and human serum was implemented to demonstrate the utility of the DMF-SERS method. The DMF-SERS method shows excellent sensitivity (LOD of 74 pg/mL) and selectivity for H5N1 detection with less assay time (<1 h) and lower reagent consumption (∼30 μL) compared to the standard ELISA method. Therefore, this DMF-SERS method holds great potential for automated and sensitive detection of a variety of infectious diseases.

  9. OECD/NEA expert group on uncertainty analysis for criticality safety assessment: Results of benchmark on sensitivity calculation (phase III)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ivanova, T.; Laville, C.; Dyrda, J.

    2012-07-01

    The sensitivities of the k_eff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods. (authors)

  10. Alignment of Carbon Nanotubes Comprising Magnetically Sensitive Metal Oxides in Nanofluids

    NASA Technical Reports Server (NTRS)

    Hong, Haiping (Inventor); Peterson, G. P. "Bud" (Inventor)

    2016-01-01

    The present invention is a nanoparticle mixture or suspension or nanofluid comprising nonmagnetically sensitive nanoparticles, magnetically sensitive nanoparticles, and surfactant(s). The present invention also relates to methods of preparing and using the same.

  11. Alignment of Carbon Nanotubes Comprising Magnetically Sensitive Metal Oxides in Nanofluids

    NASA Technical Reports Server (NTRS)

    Peterson, G. P. 'Bud' (Inventor); Hong, Haiping (Inventor)

    2014-01-01

    The present invention is a nanoparticle mixture or suspension or nanofluid comprising nonmagnetically sensitive nanoparticles, magnetically sensitive nanoparticles, and surfactant(s). The present invention also relates to methods of preparing and using the same.

  12. Noninvasive determination of optical lever sensitivity in atomic force microscopy

    NASA Astrophysics Data System (ADS)

    Higgins, M. J.; Proksch, R.; Sader, J. E.; Polcik, M.; Mc Endoo, S.; Cleveland, J. P.; Jarvis, S. P.

    2006-01-01

    Atomic force microscopes typically require knowledge of the cantilever spring constant and optical lever sensitivity in order to accurately determine the force from the cantilever deflection. In this study, we investigate a technique to calibrate the optical lever sensitivity of rectangular cantilevers that does not require contact to be made with a surface. This noncontact approach utilizes the method of Sader et al. [Rev. Sci. Instrum. 70, 3967 (1999)] to calibrate the spring constant of the cantilever in combination with the equipartition theorem [J. L. Hutter and J. Bechhoefer, Rev. Sci. Instrum. 64, 1868 (1993)] to determine the optical lever sensitivity. A comparison is presented between sensitivity values obtained from conventional static mode force curves and those derived using this noncontact approach for a range of different cantilevers in air and liquid. These measurements indicate that the method offers a quick, alternative approach for the calibration of the optical lever sensitivity.
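    A minimal sketch of the equipartition step described here, assuming the spring constant has already been obtained (e.g., by the Sader method); corrections for higher modes and mode shape are omitted, and all numbers are illustrative rather than taken from the paper.

    import numpy as np

    kB = 1.380649e-23   # Boltzmann constant, J/K

    def inv_ols_from_thermal_noise(volts, k_spring, temperature=295.0):
        """Inverse optical lever sensitivity (nm/V) from a thermally driven
        deflection signal (V, mean removed) and a known spring constant (N/m),
        using k<x^2> = kB*T with x = invOLS * V."""
        v2 = np.var(volts)                    # <V^2>, V^2
        x2 = kB * temperature / k_spring      # <x^2>, m^2 (equipartition)
        return np.sqrt(x2 / v2) * 1e9         # m/V -> nm/V

    # Usage with a synthetic ~2 mV rms thermal trace and k = 0.1 N/m (assumed values)
    rng = np.random.default_rng(0)
    signal = rng.normal(0.0, 2.0e-3, 100_000)
    print(f"invOLS ~ {inv_ols_from_thermal_noise(signal, k_spring=0.1):.1f} nm/V")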

  13. Enhancement of the sensitivity of a temperature sensor based on fiber Bragg gratings via weak value amplification.

    PubMed

    Salazar-Serrano, L J; Barrera, D; Amaya, W; Sales, S; Pruneri, V; Capmany, J; Torres, J P

    2015-09-01

    We present a proof-of-concept experiment aimed at increasing the sensitivity of fiber-Bragg-grating temperature sensors by making use of a weak-value-amplification scheme. The technique requires only linear optics elements for its implementation and appears to be a promising method for increasing sensitivity beyond what state-of-the-art sensors can currently provide. The device implemented here is able to generate a shift of the centroid of the spectrum of a pulse of ∼0.035 nm/°C, a nearly fourfold increase in sensitivity over the same fiber-Bragg-grating system interrogated using standard methods.

  14. METHOD AND MEANS FOR RADIATION DOSIMETRY

    DOEpatents

    Shulte, J.W.; Suttle, J.F.

    1958-02-18

    This patent relates to a method and device for determining quantities of gamma radiation and x radiation by exposing to such radiation a mixture of a purified halogenated hydrocarbon chosen from the class consisting of chloroform, bromoform, tetrachloroethane and 1,1,2-trichloroethane, and a minor quantity of a sensitizer chosen from the class consisting of oxygen, benzoyl peroxide, sodium peroxide, and nitrobenzene, the proportion of the sensitizer being at least about 10^-5 moles per cubic centimeter of halogenated hydrocarbon, the total amount of sensitizer depending upon the range of radiation to be measured, and chemically measuring the amount of decomposition generated by the irradiation of the sensitized halogenated hydrocarbon.

  15. Measurement Sensitivity Improvement of All-Optical Atomic Spin Magnetometer by Suppressing Noises

    PubMed Central

    Chen, Xiyuan; Zhang, Hong; Zou, Sheng

    2016-01-01

    Quantum manipulation technology and photoelectric detection technology have jointly facilitated the rapid development of ultra-sensitive atomic spin magnetometers. To improve the output signal and sensitivity of the spin-exchange-relaxation-free (SERF) atomic spin magnetometer, the noises influencing the output signal and the sensitivity were analyzed, and the corresponding noise suppression methods were presented. The magnetic field noises, including the residual magnetic field noise and the light shift noise, were reduced to approximately zero by employing the magnetic field compensation method and by adjusting the frequency of the pump beam, respectively. With respect to the operation temperature, the simulation results showed that the temperature at which the potassium atomic spin magnetometer reaches the spin-exchange relaxation-free regime was 180 °C. Moreover, the fluctuation noises of the frequency and the power were suppressed by using frequency- and power-stabilization systems. The experimental results showed that the light intensity stability was enhanced by 10%. Comparative experiments on the sensitivity were carried out to demonstrate the validity of the suppression methods. Finally, a sensitivity of 13 fT/Hz1/2 was successfully achieved by suppressing noises and optimizing parameters. PMID:27322272
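    For orientation, a sensitivity quoted in fT/Hz^(1/2) is simply the amplitude spectral density of the calibrated output noise in the band of interest; the sketch below estimates it with Welch's method on a synthetic white-noise trace (sampling rate, noise level, and band are assumed values, not the authors').

    import numpy as np
    from scipy.signal import welch

    fs = 1000.0                                    # sampling rate, Hz (assumed)
    rng = np.random.default_rng(1)
    # synthetic field-equivalent noise with a ~20 fT/Hz^(1/2) white level
    b_noise = rng.normal(0.0, 20e-15 * np.sqrt(fs / 2), 60 * int(fs))

    f, psd = welch(b_noise, fs=fs, nperseg=4096)   # one-sided PSD, T^2/Hz
    asd = np.sqrt(psd)                             # amplitude spectral density, T/Hz^(1/2)
    band = (f > 5) & (f < 100)                     # flat band used for the quoted figure
    print(f"sensitivity ~ {asd[band].mean() * 1e15:.1f} fT/Hz^(1/2)")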

  16. Unlocking Sensitivity for Visibility-based Estimators of the 21 cm Reionization Power Spectrum

    NASA Astrophysics Data System (ADS)

    Zhang, Yunfan Gerry; Liu, Adrian; Parsons, Aaron R.

    2018-01-01

    Radio interferometers designed to measure the cosmological 21 cm power spectrum require high sensitivity. Several modern low-frequency interferometers feature drift-scan antennas placed on a regular grid to maximize the number of instantaneously coherent (redundant) measurements. However, even for such maximum-redundancy arrays, significant sensitivity comes through partial coherence between baselines. Current visibility-based power-spectrum pipelines, though shown to ease control of systematics, lack the ability to make use of this partial redundancy. We introduce a method to leverage partial redundancy in such power-spectrum pipelines for drift-scan arrays. Our method cross-multiplies baseline pairs at a time lag and quantifies the sensitivity contributions of each pair of baselines. Using the configurations and beams of the 128-element Donald C. Backer Precision Array for Probing the Epoch of Reionization (PAPER-128) and staged deployments of the Hydrogen Epoch of Reionization Array, we illustrate how our method applies to different arrays and predict the sensitivity improvements associated with pairing partially coherent baselines. As the number of antennas increases, we find partial redundancy to be of increasing importance in unlocking the full sensitivity of upcoming arrays.
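    The core idea, cross-multiplying two baselines that see (nearly) the same sky but carry independent noise, with a time lag, can be illustrated schematically as below; the toy visibilities, lag, and noise level are arbitrary and do not reproduce the paper's pipeline or sensitivity bookkeeping.

    import numpy as np

    rng = np.random.default_rng(2)
    n_times = 500
    # common sky signal of unit power plus independent noise on each baseline
    sky = np.exp(1j * rng.uniform(0, 2 * np.pi)) * np.ones(n_times)
    vis_a = sky + 0.5 * (rng.normal(size=n_times) + 1j * rng.normal(size=n_times))
    vis_b = sky + 0.5 * (rng.normal(size=n_times) + 1j * rng.normal(size=n_times))

    lag = 1   # cross-multiply at a one-integration time offset
    cross_power = np.mean(vis_a[:-lag] * np.conj(vis_b[lag:]))  # independent noise averages away
    auto_power = np.mean(np.abs(vis_a) ** 2)                    # carries a noise bias by comparison
    print(f"cross |P| = {abs(cross_power):.2f}, auto P = {auto_power:.2f} (true signal power = 1)")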

  17. Effect of cantilever geometry on the optical lever sensitivities and thermal noise method of the atomic force microscope.

    PubMed

    Sader, John E; Lu, Jianing; Mulvaney, Paul

    2014-11-01

    Calibration of the optical lever sensitivities of atomic force microscope (AFM) cantilevers is especially important for determining the force in AFM measurements. These sensitivities depend critically on the cantilever mode used and are known to differ for static and dynamic measurements. Here, we calculate the ratio of the dynamic and static sensitivities for several common AFM cantilevers, whose shapes vary considerably, and experimentally verify these results. The dynamic-to-static optical lever sensitivity ratio is found to range from 1.09 to 1.41 for the cantilevers studied - in stark contrast to the constant value of 1.09 used widely in current calibration studies. This analysis shows that accuracy of the thermal noise method for the static spring constant is strongly dependent on cantilever geometry - neglect of these dynamic-to-static factors can induce errors exceeding 100%. We also discuss a simple experimental approach to non-invasively and simultaneously determine the dynamic and static spring constants and optical lever sensitivities of cantilevers of arbitrary shape, which is applicable to all AFM platforms that have the thermal noise method for spring constant calibration.

  18. [Comparative evaluation of the sensitivity of Acinetobacter to colistin, using the prediffusion and minimum inhibitory concentration methods: detection of heteroresistant isolates].

    PubMed

    Herrera, Melina E; Mobilia, Liliana N; Posse, Graciela R

    2011-01-01

    The objective of this study is to perform a comparative evaluation of the prediffusion and minimum inhibitory concentration (MIC) methods for the detection of sensitivity to colistin, and to detect Acinetobacter baumannii-calcoaceticus complex (ABC) isolates heteroresistant to colistin. We studied 75 isolates of ABC recovered from clinically significant samples obtained from various centers. Sensitivity to colistin was determined by prediffusion as well as by MIC. All the isolates were sensitive to colistin, with MIC ≤ 2 µg/ml. The results were analyzed by dispersion graph and linear regression analysis, revealing that the prediffusion method did not correlate with the MIC values for isolates sensitive to colistin (r² = 0.2017). Heteroresistance to colistin was assessed by plating efficiency for all the isolates, with initial MICs of 2, 1, and 0.5 µg/ml; 14 of them showed an increase in MIC, greater than 8-fold in some cases. When the sensitivity of these resistant colonies was determined by prediffusion, the resulting dispersion graph and linear regression analysis yielded an r² = 0.604, which revealed a correlation between the methodologies used.

  19. Addressing Curse of Dimensionality in Sensitivity Analysis: How Can We Handle High-Dimensional Problems?

    NASA Astrophysics Data System (ADS)

    Safaei, S.; Haghnegahdar, A.; Razavi, S.

    2016-12-01

    Complex environmental models are now the primary tool to inform decision makers for the current or future management of environmental resources under climate and environmental change. These complex models often contain a large number of parameters that need to be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding the model behavior, but also helps in reducing the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are themselves highly computationally demanding when generating robust and stable sensitivity metrics over the entire model response surface. Recently, a novel global sensitivity analysis (GSA) method, Variogram Analysis of Response Surfaces (VARS), was introduced that can efficiently provide a comprehensive assessment of global sensitivity using the variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in saving computational burden when applied to systems with an extra-large number of input factors (around 100). We use a test function and a hydrological modelling case study to demonstrate the capability of the VARS method in reducing problem dimensionality by identifying important vs. unimportant input factors.

  20. Fluorescence polarization immunoassays for rapid, accurate, and sensitive determination of mycotoxins

    USDA-ARS?s Scientific Manuscript database

    Analytical methods for the determination of mycotoxins in foods are commonly based on chromatographic techniques (GC, HPLC or LC-MS). Although these methods permit a sensitive and accurate determination of the analyte, they require skilled personnel and are time-consuming, expensive, and unsuitable ...

  1. Terbium-sensitized luminescence screening method for fluoroquinolones in beef serum

    USDA-ARS?s Scientific Manuscript database

    Enrofloxacin is one of only two fluoroquinolone antibiotics approved for use in cattle in the U.S. Microbial screening methods commonly used for monitoring veterinary drug residues are not sensitive or selective for fluoroquinolones. In this work, a luminescence-based screening assay was developed...

  2. Community sensitization and decision-making for trial participation: a mixed-methods study from The Gambia.

    PubMed

    Dierickx, Susan; O'Neill, Sarah; Gryseels, Charlotte; Immaculate Anyango, Edna; Bannister-Tyrrell, Melanie; Okebe, Joseph; Mwesigwa, Julia; Jaiteh, Fatou; Gerrets, René; Ravinetto, Raffaella; D'Alessandro, Umberto; Peeters Grietens, Koen

    2017-08-16

    Ensuring individual free and informed decision-making for research participation is challenging. It is thought that preliminarily informing communities through 'community sensitization' procedures may improve individual decision-making. This study set out to assess the relevance of community sensitization for individual decision-making in research participation in rural Gambia. This anthropological mixed-methods study triangulated qualitative methods and quantitative survey methods in the context of an observational study and a clinical trial on malaria carried out by the Medical Research Council Unit Gambia. Although 38.7% of the respondents were present during sensitization sessions, 91.1% of the respondents were inclined to participate in the trial when surveyed after the sensitization and prior to the informed consent process. This difference can be explained by the informal transmission of information within the community after the community sensitization, expectations such as the benefits of participation based on previous research experiences, and the positive reputation of the research institute. Commonly mentioned barriers to participation were blood sampling and the potential disapproval of the household head. Community sensitization is effective in providing first-hand, reliable information to communities as the information is cascaded to those who could not attend the sessions. However, further research is needed to assess how the informal spread of information further shapes people's expectations, how the process engages with existing social relations and hierarchies (e.g. local political power structures; permissions of heads of households) and how this influences or changes individual consent. © 2017 The Authors Developing World Bioethics Published by John Wiley & Sons Ltd.

  3. GIS coupled Multiple Criteria based Decision Support for Classification of Urban Coastal Areas in India

    NASA Astrophysics Data System (ADS)

    Dhiman, R.; Kalbar, P.; Inamdar, A. B.

    2017-12-01

    Coastal area classification in India is a challenge for federal and state government agencies due to a fragile institutional framework, unclear directions in the implementation of coastal regulations, and violations happening at the private and government level. This work is an attempt to improve the objectivity of existing classification methods so as to synergize ecological systems and socioeconomic development in coastal cities. We developed a Geographic Information System coupled Multi-criteria Decision Making (GIS-MCDM) approach to classify urban coastal areas, in which utility functions are used to transform coastal features into quantitative membership values after assessing the sensitivity of the urban coastal ecosystem. Furthermore, these membership values for coastal features are applied in different weighting schemes to derive a Coastal Area Index (CAI), which classifies the coastal areas into four distinct categories, viz. 1) No Development Zone, 2) Highly Sensitive Zone, 3) Moderately Sensitive Zone and 4) Low Sensitive Zone, based on the sensitivity of the urban coastal ecosystem. Mumbai, a coastal megacity in India, is used as a case study to demonstrate the proposed method. Finally, an uncertainty analysis using a Monte Carlo approach is carried out to validate the sensitivity of the CAI under specific multiple scenarios. Results of the CAI method show a clear demarcation of coastal areas in the GIS environment based on ecological sensitivity. The CAI provides better decision support for federal and state level agencies to classify urban coastal areas according to the regional requirement of coastal resources, considering resilience and sustainable development. The CAI method will strengthen the existing institutional framework for decision making in the classification of urban coastal areas where the most effective coastal management options can be proposed.
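    To make the index construction concrete, here is a schematic weighted-sum version of a CAI-style calculation with made-up feature memberships, weights, and zone thresholds; the actual utility functions, weighting schemes, and cut-offs of the study are not reproduced.

    def coastal_area_index(memberships, weights):
        """Weighted average of feature membership values in [0, 1]."""
        total_w = sum(weights.values())
        return sum(weights[k] * memberships[k] for k in memberships) / total_w

    def classify(cai):
        # hypothetical thresholds for the four zones
        if cai >= 0.75:
            return "No Development Zone"
        if cai >= 0.50:
            return "Highly Sensitive Zone"
        if cai >= 0.25:
            return "Moderately Sensitive Zone"
        return "Low Sensitive Zone"

    cell = {"mangrove_cover": 0.9, "turtle_nesting": 0.2, "erosion_risk": 0.6}
    w = {"mangrove_cover": 0.5, "turtle_nesting": 0.3, "erosion_risk": 0.2}
    cai = coastal_area_index(cell, w)
    print(f"CAI = {cai:.2f} -> {classify(cai)}")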

  4. Are quantitative sensitivity analysis methods always reliable?

    NASA Astrophysics Data System (ADS)

    Huang, X.

    2016-12-01

    Physical parameterizations developed to represent subgrid-scale physical processes include various uncertain parameters, leading to large uncertainties in today's Earth System Models (ESMs). Sensitivity Analysis (SA) is an efficient approach to quantitatively determine how the uncertainty of the evaluation metric can be apportioned to each parameter. SA can also identify the most influential parameters and thereby reduce the high-dimensional parametric space. In previous studies, SA-based approaches such as Sobol' and Fourier amplitude sensitivity testing (FAST) divide the parameters into sensitive and insensitive groups; the first group is retained while the other is eliminated for a given scientific study. However, these approaches ignore the loss of the interaction effects between the retained parameters and the eliminated ones, which are also part of the total sensitivity indices. Therefore, the wrong sensitive parameters might be identified by these traditional SA approaches and tools. In this study, we propose a dynamic global sensitivity analysis method (DGSAM), which iteratively removes the least important parameter until only two parameters are left. We use CLM-CASA, a global terrestrial model, as an example to verify our findings with different sample sizes ranging from 7000 to 280000. The results show that DGSAM is able to identify more influential parameters, which is confirmed by parameter calibration experiments using four popular optimization methods. For example, optimization using the top three parameters selected by DGSAM achieved a substantial improvement of 10% over Sobol'. Furthermore, the computational cost for calibration was reduced to 1/6 of the original one. In the future, it will be necessary to explore alternative SA methods emphasizing parameter interactions.

  5. Usefulness of Leifson Staining Method in Diagnosis of Helicobacter pylori Infection

    PubMed Central

    Piccolomini, Raffaele; Di Bonaventura, Giovanni; Neri, Matteo; Di Girolamo, Arturo; Catamo, Giovanni; Pizzigallo, Eligio

    1999-01-01

    The Leifson staining method was used to diagnose Helicobacter pylori infection and was compared to histology, culture, and the rapid urease test (RUT). Histology gave the best sensitivity (98%), compared to Leifson staining (97%), culture (92%), and RUT (85%) (P < 0.005). Leifson staining is a sensitive, rapid, economical method for diagnosis of H. pylori infection in dyspeptic patients. PMID:9854090

  6. A highly sensitive method for analysis of 7-dehydrocholesterol for the study of Smith-Lemli-Opitz syndrome

    PubMed Central

    Liu, Wei; Xu, Libin; Lamberson, Connor; Haas, Dorothea; Korade, Zeljka; Porter, Ned A.

    2014-01-01

    We describe a highly sensitive method for the detection of 7-dehydrocholesterol (7-DHC), the biosynthetic precursor of cholesterol, based on its reactivity with 4-phenyl-1,2,4-triazoline-3,5-dione (PTAD) in a Diels-Alder cycloaddition reaction. Samples of biological tissues and fluids with added deuterium-labeled internal standards were derivatized with PTAD and analyzed by LC-MS. This protocol permits fast processing of samples, short chromatography times, and high sensitivity. We applied this method to the analysis of cells, blood, and tissues from several sources, including human plasma. Another innovative aspect of this study is that it provides a reliable and highly reproducible measurement of 7-DHC in 7-dehydrocholesterol reductase (Dhcr7)-HET mouse (a model for Smith-Lemli-Opitz syndrome) samples, showing regional differences in the brain tissue. We found that the levels of 7-DHC are consistently higher in Dhcr7-HET mice than in controls, with the spinal cord and peripheral nerve showing the biggest differences. In addition to 7-DHC, sensitive analysis of desmosterol in tissues and blood was also accomplished with this PTAD method by assaying adducts formed from the PTAD “ene” reaction. The method reported here may provide a highly sensitive and high throughput way to identify at-risk populations having errors in cholesterol biosynthesis. PMID:24259532

  7. Sensitivity improvement of one-shot Fourier spectroscopic imager for realization of noninvasive blood glucose sensors in smartphones

    NASA Astrophysics Data System (ADS)

    Kawashima, Natsumi; Nogo, Kosuke; Hosono, Satsuki; Nishiyama, Akira; Wada, Kenji; Ishimaru, Ichiro

    2016-11-01

    The use of the wide-field-stop and beam-expansion method for sensitivity enhancement of one-shot Fourier spectroscopy is proposed to realize health care sensors installed in smartphones for daily monitoring. When measuring the spectral components of human bodies noninvasively, diffuse reflected light from biological membranes is too weak for detection using conventional hyperspectral cameras. One-shot Fourier spectroscopy is a spatial phase-shift-type interferometer that can determine the one-dimensional spectral characteristics from a single frame. However, this method has low sensitivity, so that only the spectral characteristics of light sources with direct illumination can be obtained, because a single slit is used as a field stop. The sensitivity of the proposed spectroscopic method is improved by using the wide-field-stop and beam-expansion method. The use of a wider field stop slit width increases the detected light intensity; however, this simultaneously narrows the diffraction angle. The narrower collimated objective beam diameter degrades the visibility of interferograms. Therefore, a plane-concave cylindrical lens between the objective plane and the single slit is introduced to expand the beam diameter. The resulting sensitivity improvement achieved when using the wide-field-stop and beam-expansion method allows the spectral characteristics of hemoglobin to be obtained noninvasively from a human palm using a midget lamp.

  8. Headspace-SPME-GC/MS as a simple cleanup tool for sensitive 2,6-diisopropylphenol analysis from lipid emulsions and adaptable to other matrices.

    PubMed

    Pickl, Karin E; Adamek, Viktor; Gorges, Roland; Sinner, Frank M

    2011-07-15

    Due to increased regulatory requirements, the interaction of active pharmaceutical ingredients with various surfaces and solutions during production and storage is gaining interest in the pharmaceutical research field, in particular with respect to the development of new formulations, new packaging material and the evaluation of cleaning processes. Experimental adsorption/absorption studies as well as the study of cleaning processes require sophisticated analytical methods with high sensitivity for the drug of interest. In the case of 2,6-diisopropylphenol - a small lipophilic drug which is typically formulated as a lipid emulsion for intravenous injection - a highly sensitive method in the μg/l concentration range suitable to be applied to a variety of different sample matrices, including lipid emulsions, is needed. We hereby present a headspace-solid phase microextraction (HS-SPME) approach as a simple cleanup procedure for sensitive 2,6-diisopropylphenol quantification from diverse matrices, choosing a lipid emulsion as the most challenging matrix with regard to complexity. By combining the simple and straightforward HS-SPME sample pretreatment with an optimized GC-MS quantification method, a robust and sensitive method for 2,6-diisopropylphenol was developed. This method shows excellent sensitivity in the low μg/l concentration range (5-200 μg/l), good accuracy (94.8-98.8%) and precision (intra-day precision 0.1-9.2%, inter-day precision 2.0-7.7%). The method can be easily adapted to other, less complex, matrices such as water or swab extracts. Hence, the presented method holds the potential to serve as a single and simple analytical procedure for 2,6-diisopropylphenol analysis in various types of samples such as required in, e.g. adsorption/absorption studies, which typically deal with a variety of different surfaces (steel, plastic, glass, etc.) and solutions/matrices including lipid emulsions. Copyright © 2011 Elsevier B.V. All rights reserved.

  9. Finite-frequency sensitivity kernels for global seismic wave propagation based upon adjoint methods

    NASA Astrophysics Data System (ADS)

    Liu, Qinya; Tromp, Jeroen

    2008-07-01

    We determine adjoint equations and Fréchet kernels for global seismic wave propagation based upon a Lagrange multiplier method. We start from the equations of motion for a rotating, self-gravitating earth model initially in hydrostatic equilibrium, and derive the corresponding adjoint equations that involve motions on an earth model that rotates in the opposite direction. Variations in the misfit function χ may then be expressed as δχ = ∫_V K_m δln m d³x + ∫_Σ K_d δln d d²x + ∫_{Σ_FS} K_∇d · ∇_Σ(δln d) d²x, where δln m = δm/m denotes relative model perturbations in the volume V, δln d denotes relative topographic variations on solid-solid or fluid-solid boundaries Σ, and ∇_Σ δln d denotes surface gradients in relative topographic variations on fluid-solid boundaries Σ_FS. The 3-D Fréchet kernel K_m determines the sensitivity to model perturbations δln m, and the 2-D kernels K_d and K_∇d determine the sensitivity to topographic variations δln d. We demonstrate also how anelasticity may be incorporated within the framework of adjoint methods. Finite-frequency sensitivity kernels are calculated by simultaneously computing the adjoint wavefield forward in time and reconstructing the regular wavefield backward in time. Both the forward and adjoint simulations are based upon a spectral-element method. We apply the adjoint technique to generate finite-frequency traveltime kernels for global seismic phases (P, Pdiff, PKP, S, SKS, depth phases, surface-reflected phases, surface waves, etc.) in both 1-D and 3-D earth models. For 1-D models these adjoint-generated kernels generally agree well with results obtained from ray-based methods. However, adjoint methods do not have the same theoretical limitations as ray-based methods, and can produce sensitivity kernels for any given phase in any 3-D earth model. The Fréchet kernels presented in this paper illustrate the sensitivity of seismic observations to structural parameters and topography on internal discontinuities. These kernels form the basis of future 3-D tomographic inversions.

  10. Clinical evaluation of a loop-mediated isothermal amplification (LAMP) assay for rapid detection of Neisseria meningitidis in cerebrospinal fluid.

    PubMed

    Lee, DoKyung; Kim, Eun Jin; Kilgore, Paul E; Kim, Soon Ae; Takahashi, Hideyuki; Ohnishi, Makoto; Anh, Dang Duc; Dong, Bai Qing; Kim, Jung Soo; Tomono, Jun; Miyamoto, Shigehiko; Notomi, Tsugunori; Kim, Dong Wook; Seki, Mitsuko

    2015-01-01

    Neisseria meningitidis (Nm) is a leading causative agent of bacterial meningitis in humans. Traditionally, meningococcal meningitis has been diagnosed by bacterial culture. However, isolation of bacteria from patients' cerebrospinal fluid (CSF) is time consuming and sometimes yields negative results. Recently, polymerase chain reaction (PCR)-based diagnostic methods of detecting Nm have been considered the gold standard because of their superior sensitivity and specificity compared with culture. In this study, we developed a loop-mediated isothermal amplification (LAMP) method and evaluated its ability to detect Nm in CSF. We developed a meningococcal LAMP assay (Nm LAMP) that targets the ctrA gene. The primer specificity was validated using 16 strains of N. meningitidis (serogroups A, B, C, D, 29-E, W-135, X, Y, and Z) and 19 non-N. meningitidis species. Within 60 min, the Nm LAMP detected down to ten copies per reaction, a sensitivity 1000-fold greater than that of conventional PCR. The LAMP assays were evaluated using a set of 1574 randomly selected CSF specimens from children with suspected meningitis collected between 1998 and 2002 in Vietnam, China, and Korea. The LAMP method was shown to be more sensitive than PCR methods for CSF samples (31 CSF samples were positive by LAMP vs. 25 by PCR). The detection rate of the LAMP method was substantially higher than that of the PCR method. In a comparative analysis of the PCR and LAMP assays, the clinical sensitivity, specificity, positive predictive value, and negative predictive value of the LAMP assay were 100%, 99.6%, 80.6%, and 100%, respectively. Compared to PCR, LAMP detected Nm with higher analytical and clinical sensitivity. This sensitive and specific LAMP method offers significant advantages for screening patients on a population basis and for diagnosis in clinical settings.

  11. Rapid, Fully Automated Digital Immunoassay for p24 Protein with the Sensitivity of Nucleic Acid Amplification for Detecting Acute HIV Infection.

    PubMed

    Cabrera, Carlos; Chang, Lei; Stone, Mars; Busch, Michael; Wilson, David H

    2015-11-01

    Nucleic acid testing (NAT) has become the standard for high sensitivity in detecting low levels of virus. However, adoption of NAT can be cost prohibitive in low-resource settings where access to extreme sensitivity could be clinically advantageous for early detection of infection. We report development and preliminary validation of a simple, low-cost, fully automated digital p24 antigen immunoassay with the sensitivity of quantitative NAT viral load (NAT-VL) methods for detection of acute HIV infection. We developed an investigational 69-min immunoassay for p24 capsid protein for use on a novel digital analyzer on the basis of single-molecule-array technology. We evaluated the assay for sensitivity by dilution of standardized preparations of p24, cultured HIV, and preseroconversion samples. We characterized analytical performance and concordance with 2 NAT-VL methods and 2 contemporary p24 Ag/Ab combination immunoassays with dilutions of viral isolates and samples from the earliest stages of HIV infection. Analytical sensitivity was 0.0025 ng/L p24, equivalent to 60 HIV RNA copies/mL. The limit of quantification was 0.0076 ng/L, and imprecision across 10 runs was <10% for samples as low as 0.09 ng/L. Clinical specificity was 95.1%. Sensitivity concordance vs NAT-VL on dilutions of preseroconversion samples and Group M viral isolates was 100%. The digital immunoassay exhibited >4000-fold greater sensitivity than contemporary immunoassays for p24 and sensitivity equivalent to that of NAT methods for early detection of HIV. The data indicate that NAT-level sensitivity for acute HIV infection is possible with a simple, low-cost digital immunoassay. © 2015 American Association for Clinical Chemistry.

  12. Obtaining changes in calibration-coil to seismometer output constants using sine waves

    USGS Publications Warehouse

    Ringler, Adam T.; Hutt, Charles R.; Gee, Lind S.; Sandoval, Leo D.; Wilson, David C.

    2013-01-01

    The midband sensitivity of a broadband seismometer is one of the most commonly used parameters from station metadata. Thus, it is critical for station operators to robustly estimate this quantity with a high degree of accuracy. We develop an in situ method for estimating changes in sensitivity using sine‐wave calibrations, assuming the calibration coil and its drive are stable over time and temperature. This approach has been used in the past for passive instruments (e.g., geophones) but has not been applied, to our knowledge, to derive sensitivities of modern force‐feedback broadband seismometers. We are able to detect changes in sensitivity to well within 1%, and our method is capable of detecting these sensitivity changes using any frequency of sine calibration within the passband of the instrument.
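    Under the paper's stated assumption that the calibration coil and its drive are stable over time and temperature, the relative sensitivity change reduces to a ratio of fitted sine amplitudes between two calibrations at the same drive level; the least-squares fit and numbers below are an illustrative sketch, not the USGS processing code.

    import numpy as np

    def sine_amplitude(t, y, freq):
        """Least-squares amplitude of a sine of known frequency in y(t)."""
        design = np.column_stack([np.sin(2 * np.pi * freq * t),
                                  np.cos(2 * np.pi * freq * t),
                                  np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(design, y, rcond=None)
        return np.hypot(coef[0], coef[1])

    fs, freq = 200.0, 1.0                     # sample rate and calibration frequency, Hz (assumed)
    t = np.arange(0, 60, 1 / fs)
    rng = np.random.default_rng(3)
    cal_then = 3.000 * np.sin(2 * np.pi * freq * t) + 0.01 * rng.standard_normal(t.size)
    cal_now = 2.985 * np.sin(2 * np.pi * freq * t) + 0.01 * rng.standard_normal(t.size)

    ratio = sine_amplitude(t, cal_now, freq) / sine_amplitude(t, cal_then, freq)
    print(f"relative sensitivity change: {(ratio - 1) * 100:+.2f}%")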

  13. Synthesis of SnS nanoparticles by SILAR method for quantum dot-sensitized solar cells.

    PubMed

    Tsukigase, Hiroki; Suzuki, Yoshikazu; Berger, Marie-Hélène; Sagawa, Takashi; Yoshikawa, Susumu

    2011-03-01

    SnS-sensitized TiO2 electrodes were applied in quantum dot-sensitized solar cells (QDSSCs), which are environmentally more favorable than conventional Cd- or Pb-chalcogenide-sensitized electrodes. SnS nanoparticles were well distributed over the surface of the TiO2 nanoparticles by the successive ionic layer adsorption and reaction (SILAR) method. The deposited SnS nanoparticles had a diameter of about 3 nm. Under AM1.5 irradiation with 100 mW/cm2 light intensity (at 1 sun), the energy conversion efficiency of the obtained cells reached a value of 0.21% (0.25 cm2) after 5 SILAR coating cycles. In addition, the photovoltaic performance was improved by an additional ZnS coating on the surface of the SnS-sensitized TiO2 electrodes.

  14. Sensitivity of solid explosives: Minimum energy of a dangerous impact

    NASA Technical Reports Server (NTRS)

    Afanasyev, G. T.

    1986-01-01

    A method which uses initiating explosives for determining the sensitivity of solid explosives is described. The energy index of sensitivity is determined by the mechanical properties of the explosives. The results of the calculations are discussed.

  15. Integrated Decision Strategies for Skin Sensitization Hazard

    EPA Science Inventory

    One of the top priorities of the Interagency Coordinating Committee for the Validation of Alternative Methods (ICCVAM) is the identification and evaluation of non-animal alternatives for skin sensitization testing. Although skin sensitization is a complex process, the key biologi...

  16. Estimating Sobol Sensitivity Indices Using Correlations

    EPA Science Inventory

    Sensitivity analysis is a crucial tool in the development and evaluation of complex mathematical models. Sobol's method is a variance-based global sensitivity analysis technique that has been applied to computational models to assess the relative importance of input parameters on...
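    For readers unfamiliar with variance-based indices, the sketch below shows a standard Saltelli-type Monte Carlo estimator of first-order Sobol indices on a toy three-input model; it is a generic illustration of the quantity being estimated, not the correlation-based estimator this record refers to.

    import numpy as np

    def model(x):                    # toy Ishigami-like function, inputs scaled from [0, 1]
        a, b, c = (x * 2 * np.pi).T
        return np.sin(a) + 7 * np.sin(b) ** 2 + 0.1 * c**4 * np.sin(a)

    def first_order_sobol(f, dim, n=100_000, seed=0):
        rng = np.random.default_rng(seed)
        A, B = rng.random((n, dim)), rng.random((n, dim))
        yA, yB = f(A), f(B)
        var = np.var(np.concatenate([yA, yB]))
        s1 = np.empty(dim)
        for i in range(dim):
            ABi = A.copy()
            ABi[:, i] = B[:, i]      # A with column i taken from B
            s1[i] = np.mean(yB * (f(ABi) - yA)) / var   # Saltelli (2010) estimator
        return s1

    print(np.round(first_order_sobol(model, dim=3), 3))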

  17. Non-animal methods to predict skin sensitization (II): an assessment of defined approaches *.

    PubMed

    Kleinstreuer, Nicole C; Hoffmann, Sebastian; Alépée, Nathalie; Allen, David; Ashikaga, Takao; Casey, Warren; Clouet, Elodie; Cluzel, Magalie; Desprez, Bertrand; Gellatly, Nichola; Göbel, Carsten; Kern, Petra S; Klaric, Martina; Kühnl, Jochen; Martinozzi-Teissier, Silvia; Mewes, Karsten; Miyazawa, Masaaki; Strickland, Judy; van Vliet, Erwin; Zang, Qingda; Petersohn, Dirk

    2018-05-01

    Skin sensitization is a toxicity endpoint of widespread concern, for which the mechanistic understanding and concurrent necessity for non-animal testing approaches have evolved to a critical juncture, with many available options for predicting sensitization without using animals. Cosmetics Europe and the National Toxicology Program Interagency Center for the Evaluation of Alternative Toxicological Methods collaborated to analyze the performance of multiple non-animal data integration approaches for the skin sensitization safety assessment of cosmetics ingredients. The Cosmetics Europe Skin Tolerance Task Force (STTF) collected and generated data on 128 substances in multiple in vitro and in chemico skin sensitization assays selected based on a systematic assessment by the STTF. These assays, together with certain in silico predictions, are key components of various non-animal testing strategies that have been submitted to the Organization for Economic Cooperation and Development as case studies for skin sensitization. Curated murine local lymph node assay (LLNA) and human skin sensitization data were used to evaluate the performance of six defined approaches, comprising eight non-animal testing strategies, for both hazard and potency characterization. Defined approaches examined included consensus methods, artificial neural networks, support vector machine models, Bayesian networks, and decision trees, most of which were reproduced using open source software tools. Multiple non-animal testing strategies incorporating in vitro, in chemico, and in silico inputs demonstrated equivalent or superior performance to the LLNA when compared to both animal and human data for skin sensitization.

  18. Multisite-multivariable sensitivity analysis of distributed watershed models: enhancing the perceptions from computationally frugal methods

    USDA-ARS?s Scientific Manuscript database

    This paper assesses the impact of different likelihood functions in identifying sensitive parameters of the highly parameterized, spatially distributed Soil and Water Assessment Tool (SWAT) watershed model for multiple variables at multiple sites. The global one-factor-at-a-time (OAT) method of Morr...

  19. A Sensitive and Robust Enzyme Kinetic Experiment Using Microplates and Fluorogenic Ester Substrates

    ERIC Educational Resources Information Center

    Johnson, R. Jeremy; Hoops, Geoffrey C.; Savas, Christopher J.; Kartje, Zachary; Lavis, Luke D.

    2015-01-01

    Enzyme kinetics measurements are a standard component of undergraduate biochemistry laboratories. The combination of serine hydrolases and fluorogenic enzyme substrates provides a rapid, sensitive, and general method for measuring enzyme kinetics in an undergraduate biochemistry laboratory. In this method, the kinetic activity of multiple protein…

  20. Porous polymer film calcium ion chemical sensor and method of using the same

    DOEpatents

    Porter, M.D.; Chau, L.K.

    1991-02-12

    A method of measuring calcium ions is disclosed wherein a calcium sensitive reagent, calcichrome, is immobilized on a porous polymer film. The reaction of the calcium sensitive reagent to the Ca(II) is then measured and concentration determined as a function of the reaction. 1 figure.

  1. Porous polymer film calcium ion chemical sensor and method of using the same

    DOEpatents

    Porter, Marc D.; Chau, Lai-Kwan

    1991-02-12

    A method of measuring calcium ions is disclosed wherein a calcium sensitive reagent, calcichrome, is immobilized on a porous polymer film. The reaction of the calcium sensitive reagent to the Ca(II) is then measured and concentration determined as a function of the reaction.

  2. Simplified methods of evaluating colonies for levels of Varroa Sensitive Hygiene (VSH)

    USDA-ARS?s Scientific Manuscript database

    Varroa sensitive hygiene (VSH) is a trait of honey bees, Apis mellifera, that supports resistance to varroa mites, Varroa destructor. Components of VSH were evaluated to identify simple methods for selection of the trait. Varroa mite population growth was measured in colonies with variable levels of...

  3. Developing a Method for Resolving NOx Emission Inventory Biases Using Discrete Kalman Filter Inversion, Direct Sensitivities, and Satellite-Based Columns

    EPA Science Inventory

    An inverse method was developed to integrate satellite observations of atmospheric pollutant column concentrations and direct sensitivities predicted by a regional air quality model in order to discern biases in the emissions of the pollutant precursors.

  4. Lipid-anthropometric index optimization for insulin sensitivity estimation

    NASA Astrophysics Data System (ADS)

    Velásquez, J.; Wong, S.; Encalada, L.; Herrera, H.; Severeyn, E.

    2015-12-01

    Insulin sensitivity (IS) is the ability of cells to respond to insulin's presence; when this ability is diminished, low insulin sensitivity or insulin resistance (IR) is present. IR has been related to other metabolic disorders such as metabolic syndrome (MS), obesity, dyslipidemia and diabetes. IS can be determined using direct or indirect methods. The indirect methods are less accurate and less invasive than the direct ones, and they use glucose and insulin values from the oral glucose tolerance test (OGTT). Accuracy is established by comparison between direct and indirect methods using the Spearman rank correlation coefficient. This paper proposes a lipid-anthropometric index that offers acceptable correlation with an insulin sensitivity index for different populations (DB1 = MS subjects, DB2 = sedentary subjects without MS, and DB3 = marathon runners) without using OGTT glucose and insulin values. The proposed method is parametrically optimized through random cross-validation, using the Spearman rank correlation with the CAUMO method as the comparator. CAUMO is an indirect method derived from a simplification of the minimal model intravenous glucose tolerance test (MINMOD-IGTT) direct method, with acceptable correlation (0.89). The results show that the optimized method correlates better with CAUMO in all populations than the non-optimized one. It was also observed that the optimized method correlates better with CAUMO in the DB2 and DB3 groups than the HOMA-IR method, which is the most widely used for diagnosing insulin resistance. The optimized method could detect incipient insulin resistance, as it classifies as insulin-resistant those subjects who present impaired postprandial insulin and glucose values.
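    Since HOMA-IR is singled out as the usual comparator, the sketch below applies its standard formula (fasting glucose in mg/dL times fasting insulin in µU/mL, divided by 405) and ranks it against a reference sensitivity index with a Spearman correlation; all values are invented, and the CAUMO and proposed lipid-anthropometric indices themselves are not reproduced.

    import numpy as np
    from scipy.stats import spearmanr

    def homa_ir(glucose_mg_dl, insulin_uU_ml):
        """HOMA-IR = fasting glucose (mg/dL) x fasting insulin (uU/mL) / 405."""
        return glucose_mg_dl * insulin_uU_ml / 405.0

    glucose = np.array([88, 95, 110, 102, 121])         # hypothetical fasting glucose
    insulin = np.array([5.2, 8.1, 15.3, 11.0, 19.8])    # hypothetical fasting insulin
    reference_is = np.array([7.5, 5.9, 2.1, 3.4, 1.6])  # hypothetical reference IS index

    rho, p = spearmanr(homa_ir(glucose, insulin), reference_is)
    print(f"HOMA-IR: {np.round(homa_ir(glucose, insulin), 2)}")
    print(f"Spearman rho vs reference index: {rho:.2f} (p = {p:.3f})")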

  5. Novel diagnostic procedure for determining metastasis to sentinel lymph nodes in breast cancer using a semi-dry dot-blot method.

    PubMed

    Otsubo, Ryota; Oikawa, Masahiro; Hirakawa, Hiroshi; Shibata, Kenichiro; Abe, Kuniko; Hayashi, Tomayoshi; Kinoshita, Naoe; Shigematsu, Kazuto; Hatachi, Toshiko; Yano, Hiroshi; Matsumoto, Megumi; Takagi, Katsunori; Tsuchiya, Tomoshi; Tomoshige, Koichi; Nakashima, Masahiro; Taniguchi, Hideki; Omagari, Takeyuki; Itoyanagi, Noriaki; Nagayasu, Takeshi

    2014-02-15

    We developed an easy, quick, and cost-effective detection method for lymph node metastasis called the semi-dry dot-blot (SDB) method, which modifies dot-blot technology to visualize the presence of cancer cells in the washings of sectioned lymph nodes using an anti-pancytokeratin antibody. We evaluated the validity and efficacy of the SDB method for the diagnosis of lymph node metastasis in a clinical setting. In Trial 1, to evaluate the validity of the SDB method in clinical specimens, 180 dissected lymph nodes from 29 cases, including breast, gastric and colorectal cancer, were examined. Each lymph node was sliced at the maximum diameter, and the sensitivity, specificity and accuracy of the SDB method were determined and compared with the final pathology report. Metastasis was detected in 32 lymph nodes (17.8%), and the sensitivity, specificity and accuracy of the SDB method were 100, 98.0 and 98.3%, respectively. In Trial 2, to evaluate the efficacy of the SDB method in sentinel lymph node (SLN) biopsy, 174 SLNs from 100 cases of clinically node-negative breast cancer were analyzed. Each SLN was longitudinally sliced at 2-mm intervals, and the sensitivity, specificity, accuracy and time required for the SDB method were determined and compared with the intraoperative pathology report. Metastasis was detected in 15 SLNs (8.6%), and the sensitivity, specificity, accuracy and mean required time of the SDB method were 93.3%, 96.9%, 96.6% and 43.3 min, respectively. The SDB method is a novel and reliable modality for the intraoperative diagnosis of SLN metastasis. © 2013 UICC.

  6. Evaluation of the Copan Myco-TB kit for the decontamination of respiratory samples for the detection of Mycobacteria.

    PubMed

    De Geyter, Deborah; Cnudde, Danny; Van der Beken, Mieke; Autaers, Dorien; Piérard, Denis

    2018-04-01

    The purpose of this study was to test a newly developed decontamination and fluidization kit for processing respiratory specimens for the detection of mycobacteria: the Myco-TB procedure (developed by Copan (Brescia, Italy)). This technique was compared with the Zephiran decontamination method in use in our hospital. Respiratory specimens (n = 387: 130 endotracheal/bronchial aspirates, 172 bronchoalveolar lavages and 55 sputa) submitted to the University Hospital of Brussels between January 2016 and March 2017 were included. All samples were divided into two aliquots: one was subjected to the Myco-TB method and one to the Zephiran technique prior to culture. The sensitivities for culture for the Zephiran technique on solid media, the Myco-TB method on solid media and Myco-TB combined with the MGIT™ system were respectively 67%, 87% and 89%. The contamination rates were 22% with both the Zephiran and Myco-TB method on solid media and only 4% with the Myco-TB kit combined with the MGIT™ system. For direct microscopy, the sensitivities of the Zephiran method and the Myco-TB method were equal (40%) when the centrifugation time was 20 min. The Myco-TB decontamination method is easy and rapid to perform. It is more sensitive for culture as compared to the Zephiran method and gives lower contamination levels when combined with the MGIT™ technique. When increasing the centrifugation step to 20 min, the sensitivity of direct microscopy is equal to the Zephiran method.

  7. Structural optimization: Status and promise

    NASA Astrophysics Data System (ADS)

    Kamat, Manohar P.

    Chapters contained in this book include fundamental concepts of optimum design, mathematical programming methods for constrained optimization, function approximations, approximate reanalysis methods, dual mathematical programming methods for constrained optimization, a generalized optimality criteria method, and a tutorial and survey of multicriteria optimization in engineering. Also included are chapters on the compromise decision support problem and the adaptive linear programming algorithm, sensitivity analyses of discrete and distributed systems, the design sensitivity analysis of nonlinear structures, optimization by decomposition, mixed elements in shape sensitivity analysis of structures based on local criteria, and optimization of stiffened cylindrical shells subjected to destabilizing loads. Other chapters are on applications to fixed-wing aircraft and spacecraft, integrated optimum structural and control design, modeling concurrency in the design of composite structures, and tools for structural optimization. (No individual items are abstracted in this volume)

  8. Error analysis applied to several inversion techniques used for the retrieval of middle atmospheric constituents from limb-scanning MM-wave spectroscopic measurements

    NASA Technical Reports Server (NTRS)

    Puliafito, E.; Bevilacqua, R.; Olivero, J.; Degenhardt, W.

    1992-01-01

    The formal retrieval error analysis of Rodgers (1990) allows the quantitative determination of such retrieval properties as measurement error sensitivity, resolution, and inversion bias. This technique was applied to five numerical inversion techniques and two nonlinear iterative techniques used for the retrieval of middle atmospheric constituent concentrations from limb-scanning millimeter-wave spectroscopic measurements. It is found that the iterative methods have better vertical resolution, but are slightly more sensitive to measurement error than constrained matrix methods. The iterative methods converge to the exact solution, whereas two of the matrix methods under consideration have an explicit constraint, the sensitivity of the solution to the a priori profile. Tradeoffs of these retrieval characteristics are presented.

  9. Comparison of two preparatory techniques for urine cytology.

    PubMed Central

    Dhundee, J; Rigby, H S

    1990-01-01

    Two methods of preparation of urine for cytology were compared retrospectively. In method 1 cells in the urine were fixed after the preparation of the smear; in method 2 the cells were fixed before smear preparation. Urine cytology reports were correlated with subsequent histological analysis. The specificities of urine cytology using both methods were high (99%). The sensitivity using method 1 was 87%; using method 2 it was 65%. This difference was significant. The cell preparation technique therefore significantly changes the sensitivity of urine cytology. Cellular fixation after smear preparation is preferable to smear preparation after fixation. PMID:2266176

  10. Identifying sensitive areas of adaptive observations for prediction of the Kuroshio large meander using a shallow-water model

    NASA Astrophysics Data System (ADS)

    Zou, Guang'an; Wang, Qiang; Mu, Mu

    2016-09-01

    Sensitive areas for prediction of the Kuroshio large meander using a 1.5-layer, shallow-water ocean model were investigated using the conditional nonlinear optimal perturbation (CNOP) and first singular vector (FSV) methods. A series of sensitivity experiments were designed to test the sensitivity of sensitive areas within the numerical model. The following results were obtained: (1) the effect of initial CNOP and FSV patterns in their sensitive areas is greater than that of the same patterns in randomly selected areas, with the effect of the initial CNOP patterns in CNOP sensitive areas being the greatest; (2) both CNOP- and FSV-type initial errors grow more quickly than random errors; (3) the effect of random errors superimposed on the sensitive areas is greater than that of random errors introduced into randomly selected areas, and initial errors in the CNOP sensitive areas have greater effects on final forecasts. These results reveal that the sensitive areas determined using the CNOP are more sensitive than those of FSV and other randomly selected areas. In addition, ideal hindcasting experiments were conducted to examine the validity of the sensitive areas. The results indicate that reduction (or elimination) of CNOP-type errors in CNOP sensitive areas at the initial time has a greater forecast benefit than the reduction (or elimination) of FSV-type errors in FSV sensitive areas. These results suggest that the CNOP method is suitable for determining sensitive areas in the prediction of the Kuroshio large-meander path.

  11. An in vitro human skin test for assessing sensitization potential.

    PubMed

    Ahmed, S S; Wang, X N; Fielding, M; Kerry, A; Dickinson, I; Munuswamy, R; Kimber, I; Dickinson, A M

    2016-05-01

    Sensitization to chemicals resulting in an allergy is an important health issue. The long-standing gold-standard method for identification and characterization of skin-sensitizing chemicals has been the mouse local lymph node assay (LLNA). However, for a number of reasons there has been an increasing imperative to develop alternative approaches to hazard identification that do not require the use of animals. Here we describe a human in-vitro skin explant test for identification of sensitization hazards and the assessment of relative skin sensitizing potency. This method measures histological damage in human skin as a readout of the immune response induced by the test material. Using this approach we have measured responses to 44 chemicals including skin sensitizers, pre/pro-haptens, respiratory sensitizers, non-sensitizing chemicals (including skin irritants) and previously misclassified compounds. Based on comparisons with the LLNA, the skin explant test gave 95% specificity, 95% sensitivity, and 95% concordance, with a correlation coefficient of 0.9. The same specificity and sensitivity were achieved for comparison of results with published human sensitization data, with a correlation coefficient of 0.91. The test also successfully identified nickel sulphate as a human skin sensitizer, which was misclassified as negative in the LLNA. In addition, sensitizers and non-sensitizers identified as positive or negative by the skin explant test induced high and low T cell proliferation and IFNγ production, respectively. Collectively, the data suggest the human in-vitro skin explant test could provide the basis for a novel approach for characterization of sensitizing activity as a first step in the risk assessment process. Copyright © 2015 John Wiley & Sons, Ltd.

  12. SiPM electro-optical detection system noise suppression method

    NASA Astrophysics Data System (ADS)

    Bi, Xiangli; Yang, Suhui; Hu, Tao; Song, Yiheng

    2014-11-01

    In this paper, the single-photon detection principle of the silicon photomultiplier (SiPM) device is introduced. The main noise factors that affect the sensitivity of the electro-optical detection system are analyzed, including background light noise, detector dark noise, preamplifier noise and signal light noise. Optical, electrical and thermodynamic methods are used to suppress the noise of the SiPM electro-optical detection system, improving the response sensitivity of the detector. Combining a highly sensitive SiPM optoelectronic detector with a small-field, large-aperture optical system, narrow-bandwidth filters with a high cutoff, a low-noise operational amplifier circuit, a modular functional-circuit design and semiconductor refrigeration greatly improved the sensitivity of the optical detection system, reduced system noise and achieved long-range detection of weak laser radiation signals. Theoretical analysis and experimental results show that the proposed methods are reasonable and efficient.

  13. Affinity Biosensors for Detection of Mycotoxins in Food.

    PubMed

    Evtugyn, Gennady; Subjakova, Veronika; Melikishvili, Sopio; Hianik, Tibor

    2018-01-01

    This chapter reviews recent achievements in methods for the detection of mycotoxins in food. Special focus is on biosensor technology that utilizes antibodies and nucleic acid aptamers as receptors. Development of biosensors is based on the immobilization of antibodies or aptamers onto conventional supports such as gold layers, but also onto nanomaterials such as graphene oxide, carbon nanotubes, and quantum dots, which provide an effective platform for achieving high detection sensitivity using various physical methods, including electrochemical, mass-sensitive, and optical techniques. The biosensors developed so far demonstrate high sensitivity, typically with subnanomolar limits of detection. Several biosensors have been validated in real samples. The sensitivity of biosensors is similar to, and in some cases even better than, that of traditional analytical methods such as ELISA or chromatography. We believe that future trends will focus on improving biosensor properties toward practical application in the food industry. © 2018 Elsevier Inc. All rights reserved.

  14. Method and apparatus for optoacoustic spectroscopy

    DOEpatents

    Amer, Nabil M.

    1979-01-01

    A method and apparatus that significantly increase the sensitivity and flexibility of laser optoacoustic spectroscopy, with reduced size. With this method, it is no longer necessary to limit the use of laser optoacoustic spectroscopy to species whose absorption matches available laser radiation. Instead, "doping" with a relatively small amount of an optically absorbing gas yields optoacoustic signatures of nonabsorbing materials (gases, liquids, solids, and aerosols), thus significantly increasing the sensitivity and flexibility of optoacoustic spectroscopy. Several applications of this method are demonstrated and/or suggested.

  15. Direct comparison of phase-sensitive vibrational sum frequency generation with maximum entropy method: case study of water.

    PubMed

    de Beer, Alex G F; Samson, Jean-Sebastièn; Hua, Wei; Huang, Zishuai; Chen, Xiangke; Allen, Heather C; Roke, Sylvie

    2011-12-14

    We present a direct comparison of phase sensitive sum-frequency generation experiments with phase reconstruction obtained by the maximum entropy method. We show that both methods lead to the same complex spectrum. Furthermore, we discuss the strengths and weaknesses of each of these methods, analyzing possible sources of experimental and analytical errors. A simulation program for maximum entropy phase reconstruction is available at: http://lbp.epfl.ch/. © 2011 American Institute of Physics

  16. Modeling the Sensitivity of Field Surveys for Detection of Environmental DNA (eDNA)

    PubMed Central

    Schultz, Martin T.; Lance, Richard F.

    2015-01-01

    The environmental DNA (eDNA) method is the practice of collecting environmental samples and analyzing them for the presence of a genetic marker specific to a target species. Little is known about the sensitivity of the eDNA method. Sensitivity is the probability that the target marker will be detected if it is present in the water body. Methods and tools are needed to assess the sensitivity of sampling protocols, design eDNA surveys, and interpret survey results. In this study, the sensitivity of the eDNA method is modeled as a function of ambient target marker concentration. The model accounts for five steps of sample collection and analysis, including: 1) collection of a filtered water sample from the source; 2) extraction of DNA from the filter and isolation in a purified elution; 3) removal of aliquots from the elution for use in the polymerase chain reaction (PCR) assay; 4) PCR; and 5) genetic sequencing. The model is applicable to any target species. For demonstration purposes, the model is parameterized for bighead carp (Hypophthalmichthys nobilis) and silver carp (H. molitrix) assuming sampling protocols used in the Chicago Area Waterway System (CAWS). Simulation results show that eDNA surveys have a high false negative rate at low concentrations of the genetic marker. This is attributed to processing of water samples and division of the extraction elution in preparation for the PCR assay. Increases in field survey sensitivity can be achieved by increasing sample volume, sample number, and PCR replicates. Increasing sample volume yields the greatest increase in sensitivity. It is recommended that investigators estimate and communicate the sensitivity of eDNA surveys to help facilitate interpretation of eDNA survey results. In the absence of such information, it is difficult to evaluate the results of surveys in which no water samples test positive for the target marker. It is also recommended that invasive species managers articulate concentration-based sensitivity objectives for eDNA surveys. In the absence of such information, it is difficult to design appropriate sampling protocols. The model provides insights into how sampling protocols can be designed or modified to achieve these sensitivity objectives. PMID:26509674
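
    The sketch below illustrates, in the spirit of the model described above, how survey sensitivity can be composed from per-stage probabilities (water sample collection, splitting of the extraction elution, PCR replicates, and number of samples per survey); the Poisson/binomial forms and all parameter values are assumptions for demonstration, not the published parameterization.

```python
# Minimal sketch of a staged eDNA detection-sensitivity model. Functional forms
# and every parameter value are illustrative assumptions, not the published model.
import numpy as np

def sample_detection_prob(conc_copies_per_L, volume_L=2.0,
                          extraction_efficiency=0.5, aliquot_fraction=0.05,
                          pcr_replicates=8, p_detect_per_copy=0.8):
    mean_copies_captured = conc_copies_per_L * volume_L * extraction_efficiency
    # Mean copies ending up in one PCR aliquot drawn from the elution.
    mean_in_aliquot = mean_copies_captured * aliquot_fraction
    # P(at least one copy present and detected) in a single PCR replicate,
    # assuming Poisson-distributed copies thinned by per-copy detection.
    p_one_pcr = 1.0 - np.exp(-mean_in_aliquot * p_detect_per_copy)
    # The sample tests positive if any PCR replicate is positive.
    return 1.0 - (1.0 - p_one_pcr) ** pcr_replicates

def survey_sensitivity(conc_copies_per_L, n_samples=10, **kwargs):
    p = sample_detection_prob(conc_copies_per_L, **kwargs)
    return 1.0 - (1.0 - p) ** n_samples   # survey positive if any sample is positive

for conc in (0.01, 0.1, 1.0):              # ambient marker copies per litre
    print(conc, round(survey_sensitivity(conc), 3))
```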

  17. Modeling the Sensitivity of Field Surveys for Detection of Environmental DNA (eDNA).

    PubMed

    Schultz, Martin T; Lance, Richard F

    2015-01-01

    The environmental DNA (eDNA) method is the practice of collecting environmental samples and analyzing them for the presence of a genetic marker specific to a target species. Little is known about the sensitivity of the eDNA method. Sensitivity is the probability that the target marker will be detected if it is present in the water body. Methods and tools are needed to assess the sensitivity of sampling protocols, design eDNA surveys, and interpret survey results. In this study, the sensitivity of the eDNA method is modeled as a function of ambient target marker concentration. The model accounts for five steps of sample collection and analysis, including: 1) collection of a filtered water sample from the source; 2) extraction of DNA from the filter and isolation in a purified elution; 3) removal of aliquots from the elution for use in the polymerase chain reaction (PCR) assay; 4) PCR; and 5) genetic sequencing. The model is applicable to any target species. For demonstration purposes, the model is parameterized for bighead carp (Hypophthalmichthys nobilis) and silver carp (H. molitrix) assuming sampling protocols used in the Chicago Area Waterway System (CAWS). Simulation results show that eDNA surveys have a high false negative rate at low concentrations of the genetic marker. This is attributed to processing of water samples and division of the extraction elution in preparation for the PCR assay. Increases in field survey sensitivity can be achieved by increasing sample volume, sample number, and PCR replicates. Increasing sample volume yields the greatest increase in sensitivity. It is recommended that investigators estimate and communicate the sensitivity of eDNA surveys to help facilitate interpretation of eDNA survey results. In the absence of such information, it is difficult to evaluate the results of surveys in which no water samples test positive for the target marker. It is also recommended that invasive species managers articulate concentration-based sensitivity objectives for eDNA surveys. In the absence of such information, it is difficult to design appropriate sampling protocols. The model provides insights into how sampling protocols can be designed or modified to achieve these sensitivity objectives.

  18. Hierarchical Nanogold Labels to Improve the Sensitivity of Lateral Flow Immunoassay

    NASA Astrophysics Data System (ADS)

    Serebrennikova, Kseniya; Samsonova, Jeanne; Osipov, Alexander

    2018-06-01

    Lateral flow immunoassay (LFIA) is a widely used express method and offers advantages such as a short analysis time, simplicity of testing and result evaluation. However, an LFIA based on gold nanospheres lacks the desired sensitivity, thereby limiting its wider application. In this study, spherical nanogold labels along with new types of nanogold labels such as gold nanopopcorns and nanostars were prepared, characterized, and applied for LFIA of the model protein antigen procalcitonin. It was found that the label with a structure close to spherical provided a more uniform distribution of specific antibodies on its surface, indicative of its suitability for this type of analysis. LFIA using gold nanopopcorns as a label allowed procalcitonin detection over a linear range of 0.5-10 ng mL-1 with a limit of detection of 0.1 ng mL-1, which was fivefold higher than the sensitivity of the assay with gold nanospheres. Another approach to improve the sensitivity of the assay included the silver enhancement method, which was used to compare the amplification of LFIA for procalcitonin detection. The sensitivity of procalcitonin determination by this method was 10 times better than that of the conventional LFIA with gold nanospheres as a label. The proposed approach of LFIA based on gold nanopopcorns improved the detection sensitivity without additional steps and prevented the increased consumption of specific reagents (antibodies).

  19. Efficient monitoring of the blood-stage infection in a malaria rodent model by the rotating-crystal magneto-optical method

    NASA Astrophysics Data System (ADS)

    Orbán, Ágnes; Rebelo, Maria; Molnár, Petra; Albuquerque, Inês S.; Butykai, Adam; Kézsmárki, István

    2016-03-01

    Intense research efforts have been focused on the improvement of the efficiency and sensitivity of malaria diagnostics, especially in resource-limited settings for the detection of asymptomatic infections. Our recently developed magneto-optical (MO) method allows the accurate quantification of malaria pigment crystals (hemozoin) in blood by their magnetically induced rotation. First evaluations of the method using β-hematin crystals and in vitro P. falciparum cultures implied its potential for high-sensitivity malaria diagnosis. To further investigate this potential, here we study the performance of the method in monitoring the in vivo onset and progression of the blood-stage infection in a rodent malaria model. Our results show that the MO method can detect the first generation of intraerythrocytic P. berghei parasites 66-76 hours after sporozoite injection, demonstrating similar sensitivity to Giemsa-stained light microscopy and exceeding that of flow cytometric techniques. Magneto-optical measurements performed during and after the treatment of P. berghei infections revealed that both the follow-up under treatment and the detection of later reinfections are feasible with this new technique. The present study demonstrates that the MO method - besides being label and reagent-free, automated and rapid - has a high in vivo sensitivity and is ready for in-field evaluation.

  20. HCPCF-based in-line fiber Fabry-Perot refractometer and high sensitivity signal processing method

    NASA Astrophysics Data System (ADS)

    Liu, Xiaohui; Jiang, Mingshun; Sui, Qingmei; Geng, Xiangyi; Song, Furong

    2017-12-01

    An in-line fiber Fabry-Perot interferometer (FPI) based on the hollow-core photonic crystal fiber (HCPCF) for refractive index (RI) measurement is proposed in this paper. The FPI is formed by splicing both ends of a short section of the HCPCF to single-mode fibers (SMFs) and cleaving the SMF pigtail to a proper length. The RI response of the sensor is analyzed theoretically and demonstrated experimentally. The results show that the FPI sensor has a linear response to external RI and good repeatability. The sensitivity calculated from the maximum fringe contrast is -136 dB/RIU. A new spectrum differential integration (SDI) method for signal processing is also presented in this study. In this method, the RI is obtained from the integrated intensity of the absolute difference between the interference spectrum and its smoothed spectrum. The results show that the sensitivity obtained from the integrated intensity is about -1.34×10^5 dB/RIU. Compared with the maximum fringe contrast method, the new SDI method provides higher sensitivity, better linearity, and improved reliability and accuracy, and it is also convenient for automatic, fast signal processing in real-time monitoring of RI.
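
    A rough sketch of the SDI idea follows: the metric integrates the absolute difference between the interference spectrum and a smoothed copy of itself, so its value tracks fringe contrast; the synthetic two-beam spectrum and the smoothing window length below are illustrative assumptions, not the authors' processing parameters.

```python
# Sketch of the spectrum differential integration (SDI) idea: integrate the
# absolute difference between an interference spectrum and its smoothed version.
# The synthetic fringe spectrum and smoothing window are illustrative only.
import numpy as np

def sdi_metric(intensity_db, window=51):
    kernel = np.ones(window) / window
    smoothed = np.convolve(intensity_db, kernel, mode="same")  # moving average
    return np.sum(np.abs(intensity_db - smoothed))             # integrated |difference|

# Synthetic FPI fringe pattern whose contrast stands in for the RI-dependent response.
wavelength_nm = np.linspace(1520, 1570, 2000)

def fringe_spectrum(contrast):
    return 10 * np.log10(1.0 + contrast * np.cos(2 * np.pi * wavelength_nm / 0.5))

for contrast in (0.8, 0.6, 0.4):
    print(contrast, round(sdi_metric(fringe_spectrum(contrast)), 1))
```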

  1. Cost-Sensitive Local Binary Feature Learning for Facial Age Estimation.

    PubMed

    Lu, Jiwen; Liong, Venice Erin; Zhou, Jie

    2015-12-01

    In this paper, we propose a cost-sensitive local binary feature learning (CS-LBFL) method for facial age estimation. Unlike the conventional facial age estimation methods that employ hand-crafted descriptors or holistically learned descriptors for feature representation, our CS-LBFL method learns discriminative local features directly from raw pixels for face representation. Motivated by the fact that facial age estimation is a cost-sensitive computer vision problem and local binary features are more robust to illumination and expression variations than holistic features, we learn a series of hashing functions to project raw pixel values extracted from face patches into low-dimensional binary codes, where binary codes with similar chronological ages are projected as close as possible, and those with dissimilar chronological ages are projected as far as possible. Then, we pool and encode these local binary codes within each face image as a real-valued histogram feature for face representation. Moreover, we propose a cost-sensitive local binary multi-feature learning method to jointly learn multiple sets of hashing functions using face patches extracted from different scales to exploit complementary information. Our methods achieve competitive performance on four widely used face aging data sets.
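
    The sketch below illustrates only the representation pipeline described above (patch-wise binary codes pooled into a per-image histogram); random projections stand in for the learned, cost-sensitive hashing functions, and the patch size and code length are arbitrary choices.

```python
# Minimal sketch of the feature pipeline: project raw pixel patches to short
# binary codes and pool the codes into a histogram per image. Random projections
# stand in for the learned, cost-sensitive hashing functions of the paper.
import numpy as np

rng = np.random.default_rng(0)
patch_size, code_bits = 8, 6
projection = rng.normal(size=(code_bits, patch_size * patch_size))

def binary_code(patch):
    bits = (projection @ patch.ravel()) > 0               # sign of each projection
    return int(np.dot(bits, 1 << np.arange(code_bits)))   # pack bits into an integer

def image_histogram(image):
    hist = np.zeros(2 ** code_bits)
    h, w = image.shape
    for i in range(0, h - patch_size + 1, patch_size):
        for j in range(0, w - patch_size + 1, patch_size):
            hist[binary_code(image[i:i + patch_size, j:j + patch_size])] += 1
    return hist / hist.sum()                               # normalized histogram feature

face = rng.random((64, 64))            # stand-in for an aligned face image
print(image_histogram(face).shape)     # (64,) feature vector
```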

  2. Pseudotargeted MS Method for the Sensitive Analysis of Protein Phosphorylation in Protein Complexes.

    PubMed

    Lyu, Jiawen; Wang, Yan; Mao, Jiawei; Yao, Yating; Wang, Shujuan; Zheng, Yong; Ye, Mingliang

    2018-05-15

    In this study, we presented an enrichment-free approach for the sensitive analysis of protein phosphorylation in minute amounts of sample, such as purified protein complexes. This method takes advantage of the high sensitivity of parallel reaction monitoring (PRM). Specifically, low-confidence phosphopeptides identified from the data-dependent acquisition (DDA) data set were used to build a pseudotargeted list for PRM analysis, allowing the identification of additional phosphopeptides with high confidence. The development of this targeted approach is straightforward, as the same sample and the same LC system were used for the discovery and targeted analysis phases. No sample fractionation or enrichment was required for the discovery phase, which allows this method to analyze minute amounts of sample. We applied this pseudotargeted MS method to quantitatively examine phosphopeptides in affinity-purified endogenous Shc1 protein complexes at four temporal stages of EGF signaling and identified 82 phospho-sites. To our knowledge, this is the highest number of phospho-sites identified from these protein complexes. This pseudotargeted MS method is highly sensitive in the identification of low-abundance phosphopeptides and could be a powerful tool to study the phosphorylation-regulated assembly of protein complexes.

  3. A simple and sensitive enzymatic method for cholesterol quantification in macrophages and foam cells

    PubMed Central

    Robinet, Peggy; Wang, Zeneng; Hazen, Stanley L.; Smith, Jonathan D.

    2010-01-01

    A precise and sensitive method for measuring cellular free and esterified cholesterol is required in order to perform studies of macrophage cholesterol loading, metabolism, storage, and efflux. Until now, the use of an enzymatic cholesterol assay, commonly used for aqueous phase plasma cholesterol assays, has not been optimized for use with solid phase samples such as cells, due to inefficient solubilization of total cholesterol in enzyme compatible solvents. We present an efficient solubilization protocol compatible with an enzymatic cholesterol assay that does not require chemical saponification or chromatographic separation. Another issue with enzyme compatible solvents is the presence of endogenous peroxides that interfere with the enzymatic cholesterol assay. We overcame this obstacle by pretreatment of the reaction solution with the enzyme catalase, which consumed endogenous peroxides resulting in reduced background and increased sensitivity in our method. Finally, we demonstrated that this method for cholesterol quantification in macrophages yields results that are comparable to those measured by stable isotope dilution gas chromatography with mass spectrometry detection. In conclusion, we describe a sensitive, simple, and high-throughput enzymatic method to quantify cholesterol in complex matrices such as cells. PMID:20688754

  4. Investigation of Human Cancers for Retrovirus by Low-Stringency Target Enrichment and High-Throughput Sequencing.

    PubMed

    Vinner, Lasse; Mourier, Tobias; Friis-Nielsen, Jens; Gniadecki, Robert; Dybkaer, Karen; Rosenberg, Jacob; Langhoff, Jill Levin; Cruz, David Flores Santa; Fonager, Jannik; Izarzugaza, Jose M G; Gupta, Ramneek; Sicheritz-Ponten, Thomas; Brunak, Søren; Willerslev, Eske; Nielsen, Lars Peter; Hansen, Anders Johannes

    2015-08-19

    Although nearly one fifth of all human cancers have an infectious aetiology, the causes for the majority of cancers remain unexplained. Despite the enormous data output from high-throughput shotgun sequencing, viral DNA in a clinical sample typically constitutes a proportion of host DNA that is too small to be detected. Sequence variation among virus genomes complicates application of sequence-specific, and highly sensitive, PCR methods. Therefore, we aimed to develop and characterize a method that permits sensitive detection of sequences despite considerable variation. We demonstrate that our low-stringency in-solution hybridization method enables detection of <100 viral copies. Furthermore, distantly related proviral sequences may be enriched by orders of magnitude, enabling discovery of hitherto unknown viral sequences by high-throughput sequencing. The sensitivity was sufficient to detect retroviral sequences in clinical samples. We used this method to conduct an investigation for novel retrovirus in samples from three cancer types. In accordance with recent studies our investigation revealed no retroviral infections in human B-cell lymphoma cells, cutaneous T-cell lymphoma or colorectal cancer biopsies. Nonetheless, our generally applicable method makes sensitive detection possible and permits sequencing of distantly related sequences from complex material.

  5. Detection of high-risk mucosal human papillomavirus DNA in human specimens by a novel and sensitive multiplex PCR method combined with DNA microarray.

    PubMed

    Gheit, Tarik; Tommasino, Massimo

    2011-01-01

    Epidemiological and functional studies have clearly demonstrated that certain types of human papillomavirus (HPV) from the genus alpha of the HPV phylogenetic tree, referred to as high-risk (HR) types, are the etiological cause of cervical cancer. Several methods for HPV detection and typing have been developed, and their importance in clinical and epidemiological studies has been well demonstrated. However, comparative studies have shown that several assays have different sensitivities for the detection of specific HPV types, particularly in the case of multiple infections. In this chapter, we describe a novel one-shot method for the detection and typing of 19 mucosal HR HPV types (types 16, 18, 26, 31, 33, 35, 39, 45, 51, 52, 53, 56, 58, 59, 66, 68, 70, 73, and 82). The assay combines the advantages of the multiplex PCR methods, i.e., high sensitivity and the possibility to perform multiple amplifications in a single reaction, with an array primer extension (APEX) assay. The latter method offers the benefits of Sanger dideoxy sequencing with the high-throughput potential of the microarray. Initial studies have revealed that the assay is very sensitive in detecting multiple HPV infections.

  6. Evaluation of ICT filariasis card test using whole capillary blood: comparison with Knott's concentration and counting chamber methods.

    PubMed

    Njenga, S M; Wamae, C N

    2001-10-01

    An immunochromatographic card test (ICT) that uses fingerprick whole blood instead of serum for diagnosis of bancroftian filariasis has recently been developed. The card test was validated in the field in Kenya by comparing its sensitivity to the combined sensitivity of Knott's concentration and counting chamber methods. A total of 102 (14.6%) and 117 (16.7%) persons was found to be microfilaremic by Knott's concentration and counting chamber methods, respectively. The geometric mean intensities (GMI) were 74.6 microfilariae (mf)/ml and 256.5 mf/ml by Knott's concentration and counting chamber methods, respectively. All infected individuals detected by both Knott's concentration and counting chamber methods were also antigen positive by the ICT filariasis card test (100% sensitivity). Further, of 97 parasitologically amicrofilaremic persons, 24 (24.7%) were antigen positive by the ICT. The overall prevalence of antigenemia was 37.3%. Of 100 nonendemic area control persons, none was found to be filarial antigen positive (100% specificity). The results show that the new version of the ICT filariasis card test is a simple, sensitive, specific, and rapid test that is convenient in field settings.

  7. Efficiencies of Dye-Sensitized Solar Cells using Ferritin-Encapsulated Quantum Dots with Various Staining Methods

    NASA Astrophysics Data System (ADS)

    Perez, Luis

    Dye-sensitized solar cells (DSSC) have the potential to replace traditional and cost-inefficient crystalline silicon or ruthenium solar cells. This can only be accomplished by optimizing the DSSC's energy efficiency. One of the major components in a dye-sensitized solar cell is the porous layer of titanium dioxide. This layer is coated with a molecular dye that absorbs sunlight. The research conducted for this paper focuses on the different methods used to dye the porous TiO2 layer with ferritin-encapsulated quantum dots. Multiple anodes were dyed using a method known as SILAR, which involves deposition through alternate immersion in two different solutions. The efficiencies of DSSCs with ferritin-encapsulated lead sulfide dye deposited using SILAR were subsequently compared against the efficiencies produced by cells using the traditional immersion method. It was concluded that both methods resulted in similar efficiencies (~0.074%); however, the SILAR method dyed the TiO2 coating significantly faster than the immersion method. On a related note, our experiments concluded that conducting 2 SILAR cycles yields the highest possible efficiency for this particular binding method. Supported by the National Science Foundation.

  8. A novel method for pair-matching using three-dimensional digital models of bone: mesh-to-mesh value comparison.

    PubMed

    Karell, Mara A; Langstaff, Helen K; Halazonetis, Demetrios J; Minghetti, Caterina; Frelat, Mélanie; Kranioti, Elena F

    2016-09-01

    The commingling of human remains often hinders forensic/physical anthropologists during the identification process, as there are limited methods to accurately sort these remains. This study investigates a new method for pair-matching, a common individualization technique, which uses digital three-dimensional models of bone: mesh-to-mesh value comparison (MVC). The MVC method digitally compares the entire three-dimensional geometry of two bones at once to produce a single value to indicate their similarity. Two different versions of this method, one manual and the other automated, were created and then tested for how well they accurately pair-matched humeri. Each version was assessed using sensitivity and specificity. The manual mesh-to-mesh value comparison method was 100 % sensitive and 100 % specific. The automated mesh-to-mesh value comparison method was 95 % sensitive and 60 % specific. Our results indicate that the mesh-to-mesh value comparison method overall is a powerful new tool for accurately pair-matching commingled skeletal elements, although the automated version still needs improvement.

  9. Comparison of the sensitivity of mass spectrometry atmospheric pressure ionization techniques in the analysis of porphyrinoids.

    PubMed

    Swider, Paweł; Lewtak, Jan P; Gryko, Daniel T; Danikiewicz, Witold

    2013-10-01

    Porphyrinoid chemistry is greatly dependent on data obtained by mass spectrometry. For this reason, it is essential to determine the range of applicability of mass spectrometry ionization methods. In this study, the sensitivity of three different atmospheric pressure ionization techniques (electrospray ionization, atmospheric pressure chemical ionization and atmospheric pressure photoionization) was tested for several porphyrinoids and their metallocomplexes. The electrospray ionization method was shown to be the best ionization technique because of its high sensitivity for derivatives of cyanocobalamin, free-base corroles and porphyrins. In the case of metallocorroles and metalloporphyrins, atmospheric pressure photoionization with a dopant proved to be the most sensitive ionization method. It was also shown that for relatively acidic compounds, particularly corroles, the negative ion mode provides better sensitivity than the positive ion mode. The results supply much relevant information on the methodology of porphyrinoid analysis by mass spectrometry. This information can be useful in designing future MS or liquid chromatography-MS experiments. Copyright © 2013 John Wiley & Sons, Ltd.

  10. Examining the accuracy of the infinite order sudden approximation using sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Eno, Larry; Rabitz, Herschel

    1981-08-01

    A method is developed for assessing the accuracy of scattering observables calculated within the framework of the infinite order sudden (IOS) approximation. In particular, we focus on the energy sudden assumption of the IOS method, and our approach involves the determination of the sensitivity of the IOS scattering matrix S^IOS with respect to a parameter which reintroduces the internal energy operator into the IOS Hamiltonian. This procedure is an example of sensitivity analysis of missing model components (here, the internal energy operator) in the reference Hamiltonian. In contrast to simple first-order perturbation theory, a finite result is obtained for the effect of the internal energy operator on S^IOS. As an illustration, our method of analysis is applied to integral state-to-state cross sections for the scattering of an atom and a rigid rotor. Results are generated within the He+H2 system and a comparison is made between IOS and coupled states cross sections and the corresponding IOS sensitivities. It is found that the sensitivity coefficients are very useful indicators of the accuracy of the IOS results. Finally, further developments and applications are discussed.

  11. Improving LC-MS sensitivity through increases in chromatographic performance: comparisons of UPLC-ES/MS/MS to HPLC-ES/MS/MS.

    PubMed

    Churchwell, Mona I; Twaddle, Nathan C; Meeker, Larry R; Doerge, Daniel R

    2005-10-25

    Recent technological advances have made available reverse phase chromatographic media with a 1.7 microm particle size along with a liquid handling system that can operate such columns at much higher pressures. This technology, termed ultra performance liquid chromatography (UPLC), offers significant theoretical advantages in resolution, speed, and sensitivity for analytical determinations, particularly when coupled with mass spectrometers capable of high-speed acquisitions. This paper explores the differences in LC-MS performance by conducting a side-by-side comparison of UPLC for several methods previously optimized for HPLC-based separation and quantification of multiple analytes with maximum throughput. In general, UPLC produced significant improvements in method sensitivity, speed, and resolution. Sensitivity increases with UPLC, which were found to be analyte-dependent, were as large as 10-fold and improvements in method speed were as large as 5-fold under conditions of comparable peak separations. Improvements in chromatographic resolution with UPLC were apparent from generally narrower peak widths and from a separation of diastereomers not possible using HPLC. Overall, the improvements in LC-MS method sensitivity, speed, and resolution provided by UPLC show that further advances can be made in analytical methodology to add significant value to hypothesis-driven research.

  12. A simplified implementation of edge detection in MATLAB is faster and more sensitive than fast fourier transform for actin fiber alignment quantification.

    PubMed

    Kemeny, Steven Frank; Clyne, Alisa Morss

    2011-04-01

    Fiber alignment plays a critical role in the structure and function of cells and tissues. While fiber alignment quantification is important to experimental analysis and several different methods for quantifying fiber alignment exist, many studies focus on qualitative rather than quantitative analysis perhaps due to the complexity of current fiber alignment methods. Speed and sensitivity were compared in edge detection and fast Fourier transform (FFT) for measuring actin fiber alignment in cells exposed to shear stress. While edge detection using matrix multiplication was consistently more sensitive than FFT, image processing time was significantly longer. However, when MATLAB functions were used to implement edge detection, MATLAB's efficient element-by-element calculations and fast filtering techniques reduced computation cost 100 times compared to the matrix multiplication edge detection method. The new computation time was comparable to the FFT method, and MATLAB edge detection produced well-distributed fiber angle distributions that statistically distinguished aligned and unaligned fibers in half as many sample images. When the FFT sensitivity was improved by dividing images into smaller subsections, processing time grew larger than the time required for MATLAB edge detection. Implementation of edge detection in MATLAB is simpler, faster, and more sensitive than FFT for fiber alignment quantification.
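
    A simplified, gradient-based Python sketch of fiber-alignment quantification follows; the study used MATLAB's built-in edge-detection and filtering functions, so this is only an illustration of the general approach, with the edge threshold and synthetic test images chosen arbitrarily.

```python
# Illustrative gradient-based alignment quantification (not the MATLAB code used
# in the study). Gradient orientations at strong edges are collected and
# summarized by a circular alignment index: 1 = perfectly aligned, 0 = isotropic.
import numpy as np

def alignment_index(image, edge_quantile=0.9):
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    strong = magnitude > np.quantile(magnitude, edge_quantile)  # keep strong edges
    theta = np.arctan2(gy[strong], gx[strong])                  # edge orientations
    # Resultant length of the doubled angles (orientations are axial, period pi).
    return np.abs(np.mean(np.exp(2j * theta)))

rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:128, 0:128]
aligned = np.sin(2 * np.pi * xx / 8.0) + 0.1 * rng.normal(size=(128, 128))
isotropic = rng.normal(size=(128, 128))
print("aligned  :", round(alignment_index(aligned), 2))
print("isotropic:", round(alignment_index(isotropic), 2))
```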

  13. [Improvement of sensitivity in the second generation HCV core antigen assay by a novel concentration method using polyethylene glycol (PEG)].

    PubMed

    Higashimoto, Makiko; Takahashi, Masahiko; Jokyu, Ritsuko; Syundou, Hiromi; Saito, Hidetsugu

    2007-11-01

    An HCV core antigen (Ag) detection assay system, Lumipulse Ortho HCV Ag, has been developed and is commercially available in Japan with a lower detection limit of 50 fmol/l, which is equivalent to 20 KIU/ml in the quantitative PCR assay. The HCV core Ag assay has the advantage of a broader dynamic range compared with the PCR assay; however, its sensitivity is lower than that of PCR. We developed a novel HCV core Ag concentration method using polyethylene glycol (PEG), which improves the sensitivity about fivefold compared with the original assay. Reproducibility was examined by five consecutive measurements of HCV patient serum, in which the results of the original and concentration methods were 56.8 +/- 8.1 fmol/l (mean +/- SD; CV 14.2%) and 322.9 +/- 45.5 fmol/l (CV 14.0%), respectively. The assay results of HCV-negative samples in the original HCV core Ag assay were all 0.1 fmol/l, and the results were the same with the concentration method. The results of the concentration method were 5.7 times higher than those of the original assay, which was almost equal to the theoretical rate expected. The assay results of serially diluted samples also matched the expected values in both the original and concentration assays. We confirmed that the HCV core Ag concentration method had almost the same sensitivity as the PCR high-range assay in a comparative study using serially monitored samples from five HCV patients during interferon therapy. The novel concentration method using PEG in the HCV core Ag assay system seems useful for assessing and monitoring interferon treatment for HCV.

  14. Further development of LLNA:DAE method as stand-alone skin-sensitization testing method and applied for evaluation of relative skin-sensitizing potency between chemicals.

    PubMed

    Yamashita, Kunihiko; Shinoda, Shinsuke; Hagiwara, Saori; Itagaki, Hiroshi

    2015-04-01

    To date, there has been no well-established local lymph node assay (LLNA) that includes an elicitation phase. Therefore, we developed a modified local lymph node assay with an elicitation phase (LLNA:DAE) to discriminate true skin sensitizers from chemicals that gave borderline positive results, and we previously reported this assay. To develop the LLNA:DAE method as a useful stand-alone testing method, we investigated the complete procedure using hexyl cinnamic aldehyde (HCA), isoeugenol, and 2,4-dinitrochlorobenzene (DNCB) as test compounds. We defined the LLNA:DAE procedure as follows: in the dose-finding test, four concentrations of the chemical were applied to the dorsum of the right ear on days 1, 2, and 3, and to the dorsum of both ears on day 10. Ear thickness and skin irritation score were measured on days 1, 3, 5, 10, and 12. Local lymph nodes were excised and weighed on day 12. The test dose for the primary LLNA:DAE study was selected as the dose that gave the highest left ear lymph node weight in the dose-finding study, or the lowest dose that produced a left ear lymph node of over 4 mg. This procedure was validated using nine different chemicals. Furthermore, a qualitative relationship was observed between the degree of the elicitation response in the left ear lymph node and the skin-sensitizing potency of the 32 chemicals tested in this study and the previous study. These results indicate that the LLNA:DAE method is the first LLNA method able to evaluate skin-sensitizing potential and potency from the elicitation response.

  15. A simple, rapid, cost-effective and sensitive method for detection of Salmonella in environmental and pecan samples.

    PubMed

    Dobhal, S; Zhang, G; Rohla, C; Smith, M W; Ma, L M

    2014-10-01

    PCR is widely used in the routine detection of foodborne human pathogens; however, challenges remain in overcoming PCR inhibitors present in some sample matrices. The objective of this study was to develop a simple, sensitive, cost-effective and rapid method for processing large numbers of environmental and pecan samples for Salmonella detection. This study was also aimed at validation of a new protocol for the detection of Salmonella from in-shell pecans. Different DNA template preparation methods, including direct boiling, prespin, multiple washing and commercial DNA extraction kits, were evaluated with pure cultures of Salmonella Typhimurium and with enriched soil, cattle feces and in-shell pecans, each spiked individually with Salmonella Typhimurium. PCR detection of Salmonella was conducted using invA and 16S rRNA gene (internal amplification control) specific primers. The effect of amplification facilitators, including bovine serum albumin (BSA), polyvinylpyrrolidone (PVP), polyethylene glycol (PEG) and gelatin, on PCR sensitivity was also evaluated. Conducting a prespin of sample matrices in combination with the addition of 0.4% (w/v) BSA and 1% (w/v) PVP to the PCR mix was the simplest, most rapid, cost-effective and sensitive method for PCR detection of Salmonella, with as few as 40 CFU of Salmonella per reaction detectable in the presence of over 10^9 CFU ml^-1 of background micro-organisms from enriched feces, soil or pecan samples. The developed method is rapid, cost-effective and sensitive for detection of Salmonella from different matrices. This study provides a method with broad applicability for PCR detection of Salmonella in complex sample matrices. This method has potential applications in different research arenas and diagnostic laboratories. © 2014 The Society for Applied Microbiology.

  16. A rapid and sensitive assay method for measuring amine oxidase based on hydrogen peroxide-titanium complex formation.

    PubMed

    Nag; Saha; Choudhuri

    2000-08-22

    Hydrogen peroxide (H2O2) is an end product of diamine and polyamine oxidation by their respective oxidase enzymes. A new sensitive assay method is based on H2O2-titanium (Ti) complex formation as an indicator of H2O2 production due to polyamine oxidation. The orange-yellow coloured H2O2-Ti complex was measured at 410 nm in a Shimadzu spectrophotometer. The assay conditions for maximum diamine oxidase (DAO) and polyamine oxidase (PAO) activity, as standardized here using the hypocotyl tissues of Vigna catjang Endl. cv Pusa Barsati, consisted of pH 7.4 (40 mM potassium phosphate buffer), 3 mM substrate (putrescine or spermine), a 37 degrees C incubation temperature and a 30 min incubation time in the presence of catechol (10^-2 M) used as an inhibitor of both peroxidase and catalase activity. The method described here was significantly more sensitive than the starch-iodide method [T.A. Smith, Biochem. Biophys. Res. Commun. 41 (1970) 1452-1456], which could be improved further if measured under the same assay conditions as described for the H2O2-Ti method. The sensitivity of the present method was tested by assaying DAO/PAO activity in auxin-treated hypocotyls of Vigna and comparing it with the starch-iodide method in two other plant samples.

  17. MMASS: an optimized array-based method for assessing CpG island methylation.

    PubMed

    Ibrahim, Ashraf E K; Thorne, Natalie P; Baird, Katie; Barbosa-Morais, Nuno L; Tavaré, Simon; Collins, V Peter; Wyllie, Andrew H; Arends, Mark J; Brenton, James D

    2006-01-01

    We describe an optimized microarray method for identifying genome-wide CpG island methylation called microarray-based methylation assessment of single samples (MMASS) which directly compares methylated to unmethylated sequences within a single sample. To improve previous methods we used bioinformatic analysis to predict an optimized combination of methylation-sensitive enzymes that had the highest utility for CpG-island probes and different methods to produce unmethylated representations of test DNA for more sensitive detection of differential methylation by hybridization. Subtraction or methylation-dependent digestion with McrBC was used with optimized (MMASS-v2) or previously described (MMASS-v1, MMASS-sub) methylation-sensitive enzyme combinations and compared with a published McrBC method. Comparison was performed using DNA from the cell line HCT116. We show that the distribution of methylation microarray data is inherently skewed and requires exogenous spiked controls for normalization and that analysis of digestion of methylated and unmethylated control sequences together with linear fit models of replicate data showed superior statistical power for the MMASS-v2 method. Comparison with previous methylation data for HCT116 and validation of CpG islands from PXMP4, SFRP2, DCC, RARB and TSEN2 confirmed the accuracy of MMASS-v2 results. The MMASS-v2 method offers improved sensitivity and statistical power for high-throughput microarray identification of differential methylation.

  18. Assessment of statistic analysis in non-radioisotopic local lymph node assay (non-RI-LLNA) with alpha-hexylcinnamic aldehyde as an example.

    PubMed

    Takeyoshi, Masahiro; Sawaki, Masakuni; Yamasaki, Kanji; Kimber, Ian

    2003-09-30

    The murine local lymph node assay (LLNA) is used for the identification of chemicals that have the potential to cause skin sensitization. However, it requires specific facility and handling procedures to accommodate a radioisotopic (RI) endpoint. We have developed a non-radioisotopic (non-RI) endpoint for the LLNA based on BrdU incorporation to avoid the use of RI. Although this alternative method appears viable in principle, it is somewhat less sensitive than the standard assay. In this study, we report investigations into the use of statistical analysis to improve the sensitivity of a non-RI LLNA procedure with alpha-hexylcinnamic aldehyde (HCA) in two separate experiments. The alternative non-RI method required HCA concentrations of greater than 25% to elicit a positive response based on the criterion for classification as a skin sensitizer in the standard LLNA. Nevertheless, dose responses to HCA in the alternative method were consistent in both experiments, and we examined whether the use of an endpoint based upon the statistical significance of induced changes in LNC turnover, rather than an SI of 3 or greater, might provide additional sensitivity. The results reported here demonstrate that, with HCA at least, significant responses were recorded in each of two experiments following exposure of mice to 25% HCA. These data suggest that this approach may be more satisfactory, at least when BrdU incorporation is measured. However, this modification of the LLNA is rather less sensitive than the standard method even when employing a statistical endpoint. Taken together, the data reported here suggest that a modified LLNA in which BrdU is used in place of radioisotope incorporation shows some promise, but that in its present form, even with the use of a statistical endpoint, it lacks some of the sensitivity of the standard method. The challenge is to develop strategies for further refinement of this approach.

  19. Quantitative Estimation of Seismic Velocity Changes Using Time-Lapse Seismic Data and Elastic-Wave Sensitivity Approach

    NASA Astrophysics Data System (ADS)

    Denli, H.; Huang, L.

    2008-12-01

    Quantitative monitoring of reservoir property changes is essential for safe geologic carbon sequestration. Time-lapse seismic surveys have the potential to effectively monitor fluid migration in the reservoir that causes changes in geophysical properties such as density and P- and S-wave velocities. We introduce a novel method for quantitative estimation of seismic velocity changes using time-lapse seismic data. The method employs elastic sensitivity wavefields, which are the derivatives of the elastic wavefield with respect to the density and the P- and S-wave velocities of a target region. We derive the elastic sensitivity equations from analytical differentiation of the elastic-wave equations with respect to seismic-wave velocities. The sensitivity equations are coupled with the wave equations such that elastic waves arriving in the target reservoir act as a secondary source for the sensitivity fields. We use a staggered-grid finite-difference scheme with perfectly matched layer absorbing boundary conditions to simultaneously solve the elastic-wave equations and the elastic sensitivity equations. Using the elastic-wave sensitivities, a linear relationship between relative seismic velocity changes in the reservoir and time-lapse seismic data at receiver locations can be derived, which leads to an over-determined system of equations. We solve this system of equations using a least-squares method for each receiver to obtain P- and S-wave velocity changes. We validate the method using both surface and VSP synthetic time-lapse seismic data for a multi-layered model and the elastic Marmousi model. Then we apply it to the time-lapse field VSP data acquired at the Aneth oil field in Utah. A total of 10.5K tons of CO2 was injected into the oil reservoir between the two VSP surveys for enhanced oil recovery. The synthetic and field data studies show that our new method can quantitatively estimate changes in seismic velocities within a reservoir due to CO2 injection/migration.
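
    The final inversion step described above reduces to an over-determined linear system; the sketch below solves such a system by least squares for two unknowns (relative P- and S-wave velocity changes), with a random matrix standing in for the finite-difference-computed sensitivity traces.

```python
# Minimal sketch of the least-squares inversion step: relate the time-lapse data
# difference at one receiver linearly to relative velocity changes through
# sensitivity traces, then solve the over-determined system. The sensitivity
# matrix here is random and only stands in for wavefields computed with the
# coupled finite-difference scheme described in the abstract.
import numpy as np

rng = np.random.default_rng(0)
n_time_samples = 500
true_changes = np.array([0.03, -0.01])        # relative dVp/Vp and dVs/Vs

# Columns: sensitivity of one receiver trace to dVp/Vp and dVs/Vs.
S = rng.normal(size=(n_time_samples, 2))
# Time-lapse difference trace = linearized prediction + noise.
d_diff = S @ true_changes + 0.05 * rng.normal(size=n_time_samples)

estimated, *_ = np.linalg.lstsq(S, d_diff, rcond=None)
print("true     :", true_changes)
print("estimated:", np.round(estimated, 4))
```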

  20. Comparison of the Sensitivity of Three Methods for the Early Diagnosis of Sporotrichosis in Cats.

    PubMed

    Silva, J N; Miranda, L H M; Menezes, R C; Gremião, I D F; Oliveira, R V C; Vieira, S M M; Conceição-Silva, F; Ferreiro, L; Pereira, S A

    2018-04-01

    Sporotrichosis is caused by species of fungi within the Sporothrix schenckii complex that infect man and animals. In Rio de Janeiro, Brazil, an epidemic has been observed since 1998, with most of the cases being related to transmission from infected cats. Although the definitive diagnosis of feline sporotrichosis is made by fungal culture, cytopathological and histopathological examinations are used routinely, because the long culture period may delay treatment onset. However, alternative methods are desirable in cases of low fungal burden. Immunohistochemistry (IHC) has been described as a sensitive method for diagnosing human and canine sporotrichosis, but there are no reports of its application to cats. The aim of this study was to analyse the sensitivity of cytopathological examination (Quick Panoptic method), histopathology (Grocott silver stain) and anti-Sporothrix IHC by blinded comparisons, using fungal culture as the reference standard. Samples were collected from 184 cats with sporotrichosis that exhibited skin ulcers. The sensitivities of Grocott silver stain, cytopathological examination and IHC were 91.3%, 87.0% and 88.6%, respectively. Grocott silver stain showed the best performance. IHC showed high sensitivity, as did cytopathological examination and these may be considered as alternative methodologies. When the three methods were combined, the diagnosis was established in 180 (97.8%) out of 184 cases. Taken together, these findings indicate the need to implement these methods as routine tools for the early diagnosis of sporotrichosis in cats, notably when fungal culture is not available. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. Characterization of Adrenal Adenoma by Gaussian Model-Based Algorithm.

    PubMed

    Hsu, Larson D; Wang, Carolyn L; Clark, Toshimasa J

    2016-01-01

    We confirmed that computed tomography (CT) attenuation values of pixels in an adrenal nodule approximate a Gaussian distribution. Building on this and the previously described histogram analysis method, we created an algorithm that uses mean and standard deviation to estimate the percentage of negative-attenuation pixels in an adrenal nodule, thereby allowing differentiation of adenomas and nonadenomas. The institutional review board approved both components of this study in which we developed and then validated our criteria. In the first, we retrospectively assessed CT attenuation values of adrenal nodules for normality using a 2-sample Kolmogorov-Smirnov test. In the second, we evaluated a separate cohort of patients with adrenal nodules using both the conventional 10 HU mean attenuation method and our Gaussian model-based algorithm. We compared the sensitivities of the 2 methods using McNemar's test. A total of 183 of 185 observations (98.9%) demonstrated a Gaussian distribution in adrenal nodule pixel attenuation values. The sensitivity and specificity of our Gaussian model-based algorithm for identifying adrenal adenoma were 86.1% and 83.3%, respectively. The sensitivity and specificity of the mean attenuation method were 53.2% and 94.4%, respectively. The sensitivities of the 2 methods were significantly different (P value < 0.001). In conclusion, the CT attenuation values within an adrenal nodule follow a Gaussian distribution. Our Gaussian model-based algorithm can characterize adrenal adenomas with higher sensitivity than the conventional mean attenuation method. The use of our algorithm, which does not require additional postprocessing, may increase workflow efficiency and reduce unnecessary workup of benign nodules. Copyright © 2016 Elsevier Inc. All rights reserved.
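
    The core of the algorithm, as described in the abstract, is the estimate of the fraction of negative-attenuation pixels from the nodule's mean and standard deviation under the Gaussian assumption; the sketch below shows that calculation, with a placeholder decision cutoff since the abstract does not state the threshold used.

```python
# Sketch of the Gaussian model described above: estimate the fraction of
# negative-attenuation (< 0 HU) pixels in a nodule from its mean and standard
# deviation, assuming normally distributed pixel values. The decision cutoff on
# that fraction is a placeholder; the abstract does not give the value used.
from statistics import NormalDist

def negative_pixel_fraction(mean_hu, sd_hu):
    # P(X < 0) for X ~ Normal(mean_hu, sd_hu)
    return NormalDist(mu=mean_hu, sigma=sd_hu).cdf(0.0)

def classify_nodule(mean_hu, sd_hu, fraction_cutoff=0.10):
    frac = negative_pixel_fraction(mean_hu, sd_hu)
    return ("likely adenoma" if frac >= fraction_cutoff else "indeterminate"), frac

label, frac = classify_nodule(mean_hu=15.0, sd_hu=20.0)
print(label, round(frac, 3))
```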

  2. Behavioral Training as New Treatment for Adult Amblyopia: A Meta-Analysis and Systematic Review.

    PubMed

    Tsirlin, Inna; Colpa, Linda; Goltz, Herbert C; Wong, Agnes M F

    2015-06-01

    New behavioral treatment methods, including dichoptic training, perceptual learning, and video gaming, have been proposed to improve visual function in adult amblyopia. Here, we conducted a meta-analysis of these methods to investigate the factors involved in amblyopia recovery and their clinical significance. Mean and individual participant data meta-analyses were performed on 24 studies using the new behavioral methods in adults. Studies were identified using PubMed, Google Scholar, and published reviews. The new methods yielded a mean improvement in visual acuity of 0.17 logMAR, with 32% of participants achieving gains ≥ 0.2 logMAR, and a mean improvement in stereo sensitivity of 0.01 arcsec^-1, with 42% of participants improving ≥2 octaves. The most significant predictor of treatment outcome was visual acuity at the onset of treatment. Participants with more severe amblyopia improved more on visual acuity and less on stereo sensitivity than those with milder amblyopia. Better initial stereo sensitivity was a predictor of greater gains in stereo sensitivity following treatment. Treatment type, amblyopia type, age, and training duration did not have any significant influence on visual and stereo acuity outcomes. Our analyses showed that some participants may benefit from the new treatments; however, clinical trials are required to confirm these findings. Despite the diverse nature of the new behavioral methods, the lack of significant differences in visual and stereo sensitivity outcomes among them suggests that visual attention, a common element among the varied treatment methods, may play an important role in amblyopia recovery.

  3. Metal oxide-encapsulated dye-sensitized photoanodes for dye-sensitized solar cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hupp, Joseph T.; Son, Ho-Jin

    2016-01-12

    Dye-sensitized semiconducting metal oxide films for photoanodes, photoanodes incorporating the films and DSCs incorporating the photoanodes are provided. Also provided are methods for making the dye sensitized semiconducting metal oxide films. The methods of making the films are based on the deposition of an encapsulating layer of a semiconducting metal oxide around the molecular anchoring groups of photosensitizing dye molecules adsorbed to a porous film of the semiconducting metal oxide. The encapsulating layer of semiconducting metal oxide is formed in such a way that it is not coated over the chromophores of the adsorbed dye molecules and, therefore, allows the dye molecules to remain electrochemically addressable.

  4. Gas sensitive materials for gas detection and method of making

    DOEpatents

    Trakhtenberg, Leonid Israilevich; Gerasimov, Genrikh Nikolaevich; Gromov, Vladimir Fedorovich; Rozenberg, Valeriya Isaakovna

    2012-12-25

    A gas sensitive material comprising SnO2 nanocrystals doped with In2O3 and an oxide of a platinum group metal, and a method of making the same. The platinum group metal is preferably Pd, but also may include Pt, Ru, Ir, and combinations thereof. The SnO2 nanocrystals have a specific surface area of 7 m2/g or greater, preferably about 20 m2/g, and a mean particle size of between about 10 nm and about 100 nm, preferably about 40 nm. A gas detection device made from the gas sensitive material deposited on a substrate, the gas sensitive material configured as a part of a current measuring circuit in communication with a heat source.

  5. The Volatility of Data Space: Topology Oriented Sensitivity Analysis

    PubMed Central

    Du, Jing; Ligmann-Zielinska, Arika

    2015-01-01

    Despite the difference among specific methods, existing Sensitivity Analysis (SA) technologies are all value-based, that is, the uncertainties in the model input and output are quantified as changes of values. This paradigm provides only limited insight into the nature of models and the modeled systems. In addition to the value of data, a potentially richer information about the model lies in the topological difference between pre-model data space and post-model data space. This paper introduces an innovative SA method called Topology Oriented Sensitivity Analysis, which defines sensitivity as the volatility of data space. It extends SA into a deeper level that lies in the topology of data. PMID:26368929

  6. Influence of sediment presence on freshwater mussel thermal tolerance

    USGS Publications Warehouse

    Archambault, Jennifer M.; Cope, W. Gregory; Kwak, Thomas J.

    2014-01-01

    Median lethal temperature (LT50) data from water-only exposures with the early life stages of freshwater mussels suggest that some species may be living near their upper thermal tolerances. However, evaluation of thermal sensitivity has never been conducted in sediment. Mussels live most of their lives burrowed in sediment, so understanding the effect of sediment on thermal sensitivity is a necessary step in evaluating the effectiveness of the water-only standard method, on which the regulatory framework for potential thermal criteria currently is based, as a test of thermal sensitivity. We developed a method for testing thermal sensitivity of juvenile mussels in sediment and used the method to assess thermal tolerance of 4 species across a range of temperatures common during summer. Stream beds may provide a thermal refuge in the wild, but we hypothesized that the presence of sediment alone does not alter thermal sensitivity. We also evaluated the effects of 2 temperature acclimation levels (22 and 27°C) and 2 water levels (watered and dewatered treatments). We then compared results from the sediment tests to those conducted using the water-only standard methods. We also conducted water-only LT tests with mussel larvae (glochidia) for comparison with the juvenile life stage. We found few consistent differences in thermal tolerance between sediment and water-only treatments, between acclimation temperatures, between water-level treatments, among species, or between juvenile and glochidial life stages (LT50 range = 33.3-37.2°C; mean = 35.6°C), supporting our hypothesis that the presence of sediment alone does not alter thermal sensitivity. The method we developed has potential for evaluating the role of other stressors (e.g., contaminants) in a more natural and complex environment.

  7. Sterilization of endoscopic instruments.

    PubMed

    Sabnis, Ravindra B; Bhattu, Amit; Vijaykumar, Mohankumar

    2014-03-01

    Sterilization of endoscopic instruments is an important but often ignored topic. The purpose of this article is to review the current literature on the sterilization of endoscopic instruments and elaborate on appropriate sterilization practices. Autoclaving is an economical and excellent method of sterilizing instruments that are not heat sensitive. Heat-sensitive instruments may be damaged by hot sterilization methods. Several new endoscopic instruments, such as flexible ureteroscopes and chip-on-tip endoscopes, have been added to the urologist's armamentarium. Many of these instruments are heat sensitive, and hence alternative efficacious methods of sterilization are necessary. Although ethylene oxide and hydrogen peroxide are excellent methods of sterilization, they have some drawbacks. Gamma irradiation is mainly for disposable items. Various chemical agents are widely used even though they achieve high-level disinfection rather than sterilization. This article reviews various methods of endoscopic instrument sterilization with their advantages and drawbacks. If appropriate sterilization methods are adopted, they will not only protect patients from procedure-related infections but also prevent hypersensitive allergic reactions. They will also protect instruments from damage and increase their longevity.

  8. Evaluation of serological and molecular tests used to identify Toxoplasma gondii infection in pregnant women attended in a public health service in São Paulo state, Brazil.

    PubMed

    Murata, Fernando Henrique Antunes; Ferreira, Marina Neves; Pereira-Chioccola, Vera Lucia; Spegiorin, Lígia Cosentino Junqueira Franco; Meira-Strejevitch, Cristina da Silva; Gava, Ricardo; Silveira-Carvalho, Aparecida Perpétuo; de Mattos, Luiz Carlos; Brandão de Mattos, Cinara Cássia

    2017-09-01

    Toxoplasmosis during pregnancy can have severe consequences. The use of sensitive and specific serological and molecular methods is extremely important for the correct diagnosis of the disease. We compared the ELISA and ELFA serological methods, conventional PCR (cPCR), Nested PCR and quantitative PCR (qPCR) in the diagnosis of Toxoplasma gondii infection in pregnant women without clinical suspicion of toxoplasmosis (G1=94) and with clinical suspicion of toxoplasmosis (G2=53). The results were compared using the Kappa index, and the sensitivity, specificity, positive predictive value and negative predictive value were calculated. The results of the serological methods showed concordance between the ELISA and ELFA methods even though ELFA identified more positive cases than ELISA. Molecular methods were discrepant, with cPCR using B22/23 primers having greater sensitivity and lower specificity than the other molecular methods. Copyright © 2017 Elsevier Inc. All rights reserved.
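
    Illustrative calculation (hypothetical counts, not the study's data): agreement between two assays is commonly summarized with the Kappa index alongside sensitivity, specificity and predictive values derived from a 2x2 table. The sketch below computes these statistics in Python for an ELFA-versus-ELISA comparison with invented numbers.

      def kappa_and_diagnostics(a, b, c, d):
          """2x2 table against a reference method: a = both positive, b = test+/ref-,
          c = test-/ref+, d = both negative."""
          n = a + b + c + d
          po = (a + d) / n                                      # observed agreement
          pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # agreement expected by chance
          kappa = (po - pe) / (1 - pe)
          sensitivity = a / (a + c)
          specificity = d / (b + d)
          ppv = a / (a + b)
          npv = d / (c + d)
          return kappa, sensitivity, specificity, ppv, npv

      # Hypothetical comparison of ELFA (test) against ELISA (reference).
      k, se, sp, ppv, npv = kappa_and_diagnostics(a=60, b=5, c=2, d=80)
      print(f"kappa={k:.2f} sensitivity={se:.2f} specificity={sp:.2f} PPV={ppv:.2f} NPV={npv:.2f}")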

  9. An evaluation of computer-aided disproportionality analysis for post-marketing signal detection.

    PubMed

    Lehman, H P; Chen, J; Gould, A L; Kassekert, R; Beninger, P R; Carney, R; Goldberg, M; Goss, M A; Kidos, K; Sharrar, R G; Shields, K; Sweet, A; Wiholm, B E; Honig, P K

    2007-08-01

    To understand the value of computer-aided disproportionality analysis (DA) in relation to current pharmacovigilance signal detection methods, four products were retrospectively evaluated by applying an empirical Bayes method to Merck's post-marketing safety database. Findings were compared with the prior detection of labeled post-marketing adverse events. Disproportionality ratios (empirical Bayes geometric mean lower 95% bounds for the posterior distribution (EBGM05)) were generated for product-event pairs. Overall (1993-2004 data, EBGM05 ≥ 2, individual terms) results of signal detection using DA compared to standard methods were sensitivity, 31.1%; specificity, 95.3%; and positive predictive value, 19.9%. Using groupings of synonymous labeled terms, sensitivity improved (40.9%). More of the adverse events detected by both methods were detected earlier using DA and grouped (versus individual) terms. With 1939-2004 data, diagnostic properties were similar to those from 1993 to 2004. DA methods using Merck's safety database demonstrate sufficient sensitivity and specificity to be considered for use as an adjunct to conventional signal detection methods.
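
    The EBGM05 statistic above comes from an empirical Bayes model fitted to the whole safety database; reproducing it requires the full gamma-mixture fit. As a hedged illustration of the disproportionality idea only, the sketch below computes a simpler screening statistic, the proportional reporting ratio (PRR) with its approximate lower 95% bound, from a hypothetical 2x2 product-event table. It is a stand-in, not the EBGM05 method used in the study.

      import math

      # Hypothetical 2x2 contingency table for one product-event pair (made-up counts).
      a = 40     # reports of the event with the product of interest
      b = 960    # other reports for the product of interest
      c = 200    # reports of the event with all other products
      d = 48800  # all other reports for all other products

      # Proportional reporting ratio: event reporting rate for the product vs. all other products.
      prr = (a / (a + b)) / (c / (c + d))

      # Approximate 95% confidence bound on the log scale.
      se_log = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
      lower = math.exp(math.log(prr) - 1.96 * se_log)
      print(f"PRR = {prr:.2f}, lower 95% bound = {lower:.2f}")  # a common screen flags lower bound >= 2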

  10. Consumers' sensitivity to androstenone and the evaluation of different cooking methods to mask boar taint.

    PubMed

    Borrisser-Pairó, F; Panella-Riera, N; Gil, M; Kallas, Z; Linares, M B; Egea, M; Garrido, M D; Oliver, M A

    2017-01-01

    Boar taint is an unpleasant odour and flavour present in some entire male pigs that is due to the presence of androstenone and skatole. The aim of the study was to assess the sensitivity of 150 consumers to androstenone and to compare the acceptability and liking of meat from castrated and entire pigs, cooked with different cooking methods. Meat samples consisted of loins from castrated (CM) and entire male pigs (EM) with high levels of androstenone cooked by two cooking methods: sous-vide and fried/breaded with garlic and parsley. Consumers evaluated smell and flavour acceptability, and overall liking of CM and EM for each cooking method. The results of the study showed that dislike of androstenone odour increased significantly with sensitivity. The results of acceptability and overall liking were similar in CM and EM for both cooking methods. Therefore, the two cooking methods used in the study may be useful to mask boar taint. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Quantitative determination of some pharmaceutical piperazine derivatives through complexation with iron(III) chloride.

    PubMed

    Abou-Attia, F M; Issa, Y M; Abdel-Gawad, F M; Abdel-Hamid, S M

    2003-08-01

    A simple, accurate and sensitive spectrophotometric method has been developed for the determination of three pharmaceutical piperazine derivatives, namely ketoconazole (KC), trimetazidine hydrochloride (TMH) and piribedil (PD). This method is based on the formation of yellow-orange complexes between iron(III) chloride and the investigated drugs. The optimum reaction conditions, spectral characteristics, conditional stability constants and composition of the water-soluble complexes have been established. The method permits the determination of KC, TMH and PD over concentration ranges of 1-15, 1-12 and 1-12 µg ml⁻¹, respectively. Sandell sensitivity is found to be 0.016, 0.013 and 0.013 µg cm⁻² for KC, TMH and PD, respectively. The method was sensitive, simple, reproducible and accurate to within ±1.5%. The method is applicable to the assay of the three drugs under investigation in different dosage forms and the results are in good agreement with those obtained by the official methods (USP and JP).

  12. New Spectrofluorimetric Method with Enhanced Sensitivity for Determination of Paroxetine in Dosage Forms and Plasma

    PubMed Central

    Darwish, Ibrahim A.; Amer, Sawsan M.; Abdine, Heba H.; Al-Rayes, Lama I.

    2008-01-01

    A new, simple spectrofluorimetric method with enhanced sensitivity has been developed and validated for the determination of the antidepressant paroxetine (PXT) in its dosage forms and plasma. The method was based on the nucleophilic substitution reaction of PXT with 4-chloro-7-nitrobenzo-2-oxa-1,3-diazole in an alkaline medium (pH 8) to form a highly fluorescent derivative that was measured at 545 nm after excitation at 490 nm. The factors affecting the reaction were carefully studied and optimized. The kinetics of the reaction were investigated, and the reaction mechanism was presented. Under the optimized conditions, a linear relationship with a good correlation coefficient (0.9993) was found between the fluorescence intensity and PXT concentration in the range of 80–800 ng ml−1. The limits of detection and quantitation for the method were 25 and 77 ng ml−1, respectively. The precision of the method was satisfactory; the values of relative standard deviations did not exceed 3%. The proposed method was successfully applied to the determination of PXT in its pharmaceutical tablets with good accuracy; the recovery values were 100.2 ± 1.61%. The results obtained by the proposed method were comparable with those obtained by the official method. The proposed method is superior to the previously reported spectrofluorimetric method for determination of PXT in terms of its higher sensitivity and wider linear range. The high sensitivity of the method allowed its successful application to the analysis of PXT in spiked human plasma. The proposed method is practical and valuable for its routine application in quality control and clinical laboratories for analysis of PXT. PMID:19609398

  13. Dark-Adapted Chromatic Perimetry for Measuring Rod Visual Fields in Patients with Retinitis Pigmentosa

    PubMed Central

    Bennett, Lea D.; Klein, Martin; Locke, Kirsten G.; Kiser, Kelly; Birch, David G.

    2017-01-01

    Purpose Although rod photoreceptors are initially affected in retinitis pigmentosa (RP), the full field of rod vision is not routinely characterized due to the unavailability of commercial devices detecting rod sensitivity. The purpose of this study was to quantify rod-mediated vision in the peripheral field from patients with RP using a new commercially available perimeter. Methods Participants had one eye dilated and dark-adapted for 45 minutes. A dark-adapted chromatic (DAC) perimeter tested 80 loci spanning 144° horizontally and 72° vertically with cyan stimuli. The number of rod-mediated loci (RML) was analyzed based on normal cone sensitivity (method 1) and associated with full-field electroretinography (ERG) responses by Pearson's r correlation and linear regression. In a second cohort of patients with RP, RML were identified by two-color perimetry (cyan and red; method 2). The two methods for ascribing rod function were compared by Bland-Altman analysis. Results Method 1 RML were correlated with responses to the 0.01 cd.s/m2 flash (P < 0.001), while total sensitivity to the cyan stimulus showed correlation with responses to the 3.0 cd.s/m2 flash (P < 0.0001). Method 2 detected a mean of 10 additional RML compared to method 1. Conclusions Scotopic fields measured with the DAC detected rod sensitivity across the full visual field, even in some patients who had nondetectable rod ERGs. Two-color perimetry is warranted when sensitivity to the cyan stimulus is reduced to ≤20 dB to get a true estimation of rod function. Translational Relevance Many genetic forms of retinitis pigmentosa (RP) are caused by mutations in rod-specific genes. However, treatment trials for patients with RP have relied primarily on photopic (cone-mediated) tests as outcome measures because there are a limited number of available testing methods designed to evaluate rod function. Thus, efficient methods for quantifying rod-mediated vision are needed for the rapidly increasing numbers of clinical trials. PMID:28798898

  14. The spectral sensitivity of the human short-wavelength sensitive cones derived from thresholds and color matches.

    PubMed

    Stockman, A; Sharpe, L T; Fach, C

    1999-08-01

    We used two methods to estimate short-wave (S) cone spectral sensitivity. Firstly, we measured S-cone thresholds centrally and peripherally in five trichromats, and in three blue-cone monochromats, who lack functioning middle-wave (M) and long-wave (L) cones. Secondly, we analyzed standard color-matching data. Both methods yielded equivalent results, on the basis of which we propose new S-cone spectral sensitivity functions. At short and middle-wavelengths, our measurements are consistent with the color matching data of Stiles and Burch (1955, Optica Acta, 2, 168-181; 1959, Optica Acta, 6, 1-26), and other psychophysically measured functions, such as pi 3 (Stiles, 1953, Coloquio sobre problemas opticos de la vision, 1, 65-103). At longer wavelengths, S-cone sensitivity has previously been over-estimated.

  15. Specific and Sensitive Isothermal Electrochemical Biosensor for Plant Pathogen DNA Detection with Colloidal Gold Nanoparticles as Probes

    NASA Astrophysics Data System (ADS)

    Lau, Han Yih; Wu, Haoqi; Wee, Eugene J. H.; Trau, Matt; Wang, Yuling; Botella, Jose R.

    2017-01-01

    Developing quick and sensitive molecular diagnostics for plant pathogen detection is challenging. Herein, a nanoparticle based electrochemical biosensor was developed for rapid and sensitive detection of plant pathogen DNA on disposable screen-printed carbon electrodes. This 60 min assay relied on the rapid isothermal amplification of target pathogen DNA sequences by recombinase polymerase amplification (RPA) followed by gold nanoparticle-based electrochemical assessment with differential pulse voltammetry (DPV). Our method was 10,000 times more sensitive than conventional polymerase chain reaction (PCR)/gel electrophoresis and could readily identify P. syringae infected plant samples even before the disease symptoms were visible. On the basis of the speed, sensitivity, simplicity and portability of the approach, we believe the method has potential as a rapid disease management solution for applications in agriculture diagnostics.

  16. Specific and Sensitive Isothermal Electrochemical Biosensor for Plant Pathogen DNA Detection with Colloidal Gold Nanoparticles as Probes.

    PubMed

    Lau, Han Yih; Wu, Haoqi; Wee, Eugene J H; Trau, Matt; Wang, Yuling; Botella, Jose R

    2017-01-17

    Developing quick and sensitive molecular diagnostics for plant pathogen detection is challenging. Herein, a nanoparticle based electrochemical biosensor was developed for rapid and sensitive detection of plant pathogen DNA on disposable screen-printed carbon electrodes. This 60 min assay relied on the rapid isothermal amplification of target pathogen DNA sequences by recombinase polymerase amplification (RPA) followed by gold nanoparticle-based electrochemical assessment with differential pulse voltammetry (DPV). Our method was 10,000 times more sensitive than conventional polymerase chain reaction (PCR)/gel electrophoresis and could readily identify P. syringae infected plant samples even before the disease symptoms were visible. On the basis of the speed, sensitivity, simplicity and portability of the approach, we believe the method has potential as a rapid disease management solution for applications in agriculture diagnostics.

  17. Multivariate Models for Prediction of Human Skin Sensitization Hazard

    EPA Science Inventory

    One of ICCVAM’s top priorities is the development and evaluation of non-animal approaches to identify potential skin sensitizers. The complexity of biological events necessary for a substance to elicit a skin sensitization reaction suggests that no single alternative method...

  18. Energetic materials and methods of tailoring electrostatic discharge sensitivity of energetic materials

    DOEpatents

    Daniels, Michael A.; Heaps, Ronald J.; Wallace, Ronald S.; Pantoya, Michelle L.; Collins, Eric S.

    2016-11-01

    An energetic material comprising an elemental fuel, an oxidizer or other element, and a carbon nanofiller or carbon fiber rods, where the carbon nanofiller or carbon fiber rods are substantially homogeneously dispersed in the energetic material. Methods of tailoring the electrostatic discharge sensitivity of an energetic material are also disclosed.

  19. Study Abroad Participation and University Students' Intercultural Sensitivity

    ERIC Educational Resources Information Center

    Edmunds, Julia A.

    2017-01-01

    The purpose of this study was to examine the intercultural sensitivity of College of Education students who participated in short-term, faculty led, study abroad programs at a large, urban, branch, university in the Southeast. The research questions in this study were addressed using a mixed methods approach. This method provided for the…

  20. Noninvasive and cost-effective trapping method for monitoring sensitive mammal populations

    Treesearch

    Stephanie E. Trapp; Elizabeth A. Flaherty

    2017-01-01

    Noninvasive sampling methods provide a means to monitor endangered, threatened, or sensitive species or populations while increasing the efficacy of personnel effort and time. We developed a monitoring protocol that utilizes single-capture hair snares and analysis of morphological features of hair for evaluating populations. During 2015, we used the West Virginia...

  1. Comparison of Nerve Excitability Testing, Nerve Conduction Velocity, and Behavioral Observations for Acrylamide Induced Peripheral Neuropathy

    EPA Science Inventory

    Nerve excitability (NE) testing is a sensitive method to test for peripheral neurotoxicity in humans, and may be more sensitive than compound nerve action potential (CNAP) or nerve conduction velocity (NCV). We used acrylamide to compare the NE and CNAP/NCV methods. Behavioral test...

  2. Quantifying the sensitivity of post-glacial sea level change to laterally varying viscosity

    NASA Astrophysics Data System (ADS)

    Crawford, Ophelia; Al-Attar, David; Tromp, Jeroen; Mitrovica, Jerry X.; Austermann, Jacqueline; Lau, Harriet C. P.

    2018-05-01

    We present a method for calculating the derivatives of measurements of glacial isostatic adjustment (GIA) with respect to the viscosity structure of the Earth and the ice sheet history. These derivatives, or kernels, quantify the linearised sensitivity of measurements to the underlying model parameters. The adjoint method is used to enable efficient calculation of theoretically exact sensitivity kernels within laterally heterogeneous earth models that can have a range of linear or non-linear viscoelastic rheologies. We first present a new approach to calculate GIA in the time domain, which, in contrast to the more usual formulation in the Laplace domain, is well suited to continuously varying earth models and to the use of the adjoint method. Benchmarking results show excellent agreement between our formulation and previous methods. We illustrate the potential applications of the kernels calculated in this way through a range of numerical calculations relative to a spherically symmetric background model. The complex spatial patterns of the sensitivities are not intuitive, and this is the first time that such effects are quantified in an efficient and accurate manner.

  3. Phase-sensitive terahertz spectroscopy with backward-wave oscillators in reflection mode.

    PubMed

    Pronin, A V; Goncharov, Yu G; Fischer, T; Wosnitza, J

    2009-12-01

    In this article we describe a method which allows accurate measurements of the complex reflection coefficient r̂ = |r̂|·exp(iφR) of a solid at frequencies of 1-50 cm⁻¹ (30 GHz-1.5 THz). Backward-wave oscillators are used as sources for monochromatic coherent radiation tunable in frequency. The amplitude of the complex reflection (the reflectivity) is measured in a standard way, while the phase shift, introduced by the reflection from the sample surface, is measured using a Michelson interferometer. This method is particularly useful for nontransparent samples, where phase-sensitive transmission measurements are not possible. The method requires no Kramers-Kronig transformation in order to extract the sample's electrodynamic properties (such as the complex dielectric function or complex conductivity). Another area of application of this method is the study of magnetic materials with complex dynamic permeabilities different from unity at the measurement frequencies (for example, colossal-magnetoresistance materials and metamaterials). Measuring both the phase-sensitive transmission and the phase-sensitive reflection allows for a straightforward model-independent determination of the dielectric permittivity and magnetic permeability of such materials.

  4. Phase-sensitive terahertz spectroscopy with backward-wave oscillators in reflection mode

    NASA Astrophysics Data System (ADS)

    Pronin, A. V.; Goncharov, Yu. G.; Fischer, T.; Wosnitza, J.

    2009-12-01

    In this article we describe a method which allows accurate measurements of the complex reflection coefficient r̂ = |r̂|·exp(iφR) of a solid at frequencies of 1-50 cm⁻¹ (30 GHz-1.5 THz). Backward-wave oscillators are used as sources for monochromatic coherent radiation tunable in frequency. The amplitude of the complex reflection (the reflectivity) is measured in a standard way, while the phase shift, introduced by the reflection from the sample surface, is measured using a Michelson interferometer. This method is particularly useful for nontransparent samples, where phase-sensitive transmission measurements are not possible. The method requires no Kramers-Kronig transformation in order to extract the sample's electrodynamic properties (such as the complex dielectric function or complex conductivity). Another area of application of this method is the study of magnetic materials with complex dynamic permeabilities different from unity at the measurement frequencies (for example, colossal-magnetoresistance materials and metamaterials). Measuring both the phase-sensitive transmission and the phase-sensitive reflection allows for a straightforward model-independent determination of the dielectric permittivity and magnetic permeability of such materials.
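
    To make the "no Kramers-Kronig transformation needed" point concrete: for a thick, non-magnetic sample at normal incidence, the complex dielectric function follows algebraically from the measured amplitude and phase of r̂ through the Fresnel relation r̂ = (1 - √ε)/(1 + √ε). The Python sketch below inverts that relation for one made-up measurement; it assumes normal incidence and μ = 1, whereas the magnetic materials discussed in the records above require combining reflection with transmission data.

      import numpy as np

      # Hypothetical single-frequency measurement: power reflectivity and reflection phase.
      reflectivity = 0.64          # R = |r|^2 (made-up value)
      phase = np.deg2rad(170.0)    # phase shift introduced on reflection (made-up value)

      r = np.sqrt(reflectivity) * np.exp(1j * phase)

      # Invert the normal-incidence Fresnel formula r = (1 - n)/(1 + n) for a thick,
      # non-magnetic (mu = 1) sample; n is the complex refractive index n + ik.
      n_complex = (1.0 - r) / (1.0 + r)
      epsilon = n_complex**2       # complex dielectric function

      print(f"n + ik = {n_complex:.3f}, epsilon = {epsilon:.3f}")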

  5. Effect of parameters in moving average method for event detection enhancement using phase sensitive OTDR

    NASA Astrophysics Data System (ADS)

    Kwon, Yong-Seok; Naeem, Khurram; Jeon, Min Yong; Kwon, Il-bum

    2017-04-01

    We analyze the relations of the parameters in the moving average method used to enhance the event detectability of a phase-sensitive optical time domain reflectometer (OTDR). If external events have a characteristic vibration frequency, then the control parameters of the moving average method should be optimized in order to detect these events efficiently. A phase-sensitive OTDR was implemented with a pulsed light source, which is composed of a laser diode, a semiconductor optical amplifier, an erbium-doped fiber amplifier and a fiber Bragg grating filter, and a light-receiving part, which has a photo-detector and a high-speed data acquisition system. The moving average method is operated with the control parameters: total number of raw traces, M; number of averaged traces, N; and step size of moving, n. The raw traces are obtained by the phase-sensitive OTDR with sound signals generated by a speaker. Using these trace data, the relation of the control parameters is analyzed. As a result, if the event signal has a single frequency, then optimal values of N and n exist for detecting the event efficiently.
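
    A minimal sketch of the averaging scheme described above, using the abstract's notation (M raw traces, N traces averaged per window, step size n). The synthetic data, the event-highlighting step based on variability across windows, and all numbers are assumptions for illustration, not the authors' processing chain.

      import numpy as np

      def moving_average_traces(raw_traces, N, n):
          """Average N consecutive traces and slide the window by n traces per step.
          raw_traces has shape (M, num_samples); returns (num_windows, num_samples)."""
          M = raw_traces.shape[0]
          starts = range(0, M - N + 1, n)
          return np.array([raw_traces[s:s + N].mean(axis=0) for s in starts])

      # Synthetic example: M = 200 noisy traces with a slow periodic disturbance at one location.
      rng = np.random.default_rng(0)
      M, num_samples = 200, 1000
      traces = rng.normal(0.0, 1.0, (M, num_samples))
      traces[:, 600] += 3.0 * np.sin(2 * np.pi * 0.01 * np.arange(M))   # vibration at sample 600

      averaged = moving_average_traces(traces, N=20, n=5)
      activity = averaged.std(axis=0)     # samples that vary across windows indicate events
      print("Most active fibre sample index:", int(activity.argmax()))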

  6. Combining isothermal rolling circle amplification and electrochemiluminescence for highly sensitive point mutation detection

    NASA Astrophysics Data System (ADS)

    Su, Qiang; Zhou, Xiaoming

    2008-12-01

    Many pathogenic and genetic diseases are associated with changes in the sequence of particular genes. We describe here a rapid and highly efficient assay for the detection of point mutations. This method is a combination of isothermal rolling circle amplification (RCA) and highly sensitive electrochemiluminescence (ECL) detection. In the design, a circular template generated by ligation upon the recognition of a point mutation on DNA targets was amplified isothermally by the Phi29 polymerase using a biotinylated primer. The elongation products were hybridized with tris (bipyridine) ruthenium (TBR)-tagged probes and detected in a magnetic bead based ECL platform, indicating the mutation occurrence. P53 was chosen as a model to demonstrate this method. The method allowed sensitive determination of the P53 mutation from wild-type and mutant samples. The main advantage of RCA-ECL is that it can be performed under isothermal conditions and avoids the generation of false-positive results. Furthermore, ECL provides a faster, more sensitive, and more economical alternative to currently available electrophoresis-based methods.

  7. An approach to the diagnosis of metabolic syndrome by the multi-electrode impedance method

    NASA Astrophysics Data System (ADS)

    Furuya, N.; Sakamoto, K.; Kanai, H.

    2010-04-01

    It is well known that metabolic syndrome can induce myocardial infarction and cerebral infarction. So, it is very important to measure the visceral fat volume. In the electric impedance method, information in the vicinity of the electrodes is strongly reflected. Therefore, we propose a new multi-electrode arrangement method based on the impedance sensitivity theorem to measure the visceral fat volume. This electrode arrangement is designed to enable high impedance sensitivity in the visceral and subcutaneous fat regions. Currents are simultaneously applied to several current electrodes on the body surface, and one voltage electrode pair is arranged on the body surface near the organ of interest to obtain the visceral fat information and another voltage electrode pair is arranged on the body surface near the current electrodes to obtain the subcutaneous fat information. A simulation study indicates that by weighting the impedance sensitivity distribution, as in our method, a high-sensitivity region in the visceral and the subcutaneous fat regions can be formed. In addition, it was confirmed that the visceral fat volume can be estimated by the measured impedance data.

  8. Rapid and sensitive detection of human astrovirus in water samples by loop-mediated isothermal amplification with hydroxynaphthol blue dye

    PubMed Central

    2014-01-01

    Background The aim of this paper was to develop a reverse transcription loop-mediated isothermal amplification (RT-LAMP) method for rapid, sensitive and inexpensive detection of astrovirus. Results The detection limit of LAMP using in vitro RNA transcripts was 3.6×10 copies·μL-1, which is as sensitive as the presently used PCR assays. However, the LAMP products could be identified as different colors with the naked eye following staining with hydroxynaphthol blue dye (HNB). No cross-reactivity with other gastroenteric viruses (rotavirus and norovirus) was observed, indicating the relatively high specificity of LAMP. The RT-LAMP method with HNB was used to effectively detect astrovirus in reclaimed water samples. Conclusions The LAMP technique described in this study is a cheap, sensitive, specific and rapid method for the detection of astrovirus. The RT-LAMP method can be simply applied for the specific detection of astrovirus and has the potential to be utilized in the field as a screening test. PMID:24524254

  9. Multi-test cervical cancer diagnosis with missing data estimation

    NASA Astrophysics Data System (ADS)

    Xu, Tao; Huang, Xiaolei; Kim, Edward; Long, L. Rodney; Antani, Sameer

    2015-03-01

    Cervical cancer is one of the most common types of cancer among women worldwide. Existing screening programs for cervical cancer suffer from low sensitivity. Using images of the cervix (cervigrams) as an aid in detecting pre-cancerous changes to the cervix has good potential to improve sensitivity and help reduce the number of cervical cancer cases. In this paper, we present a method that utilizes multi-modality information extracted from multiple tests of a patient's visit to classify the patient visit as either low-risk or high-risk. Our algorithm integrates image features and text features to make a diagnosis. We also present two strategies to estimate the missing values in text features: Image Classifier Supervised Mean Imputation (ICSMI) and Image Classifier Supervised Linear Interpolation (ICSLI). We evaluate our method on a large medical dataset and compare it with several alternative approaches. The results show that the proposed method with the ICSLI strategy achieves the best result of 83.03% specificity and 76.36% sensitivity. When higher specificity is desired, our method can achieve 90% specificity with 62.12% sensitivity.
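
    ICSMI and ICSLI are the paper's own strategies; as a hedged sketch of the general idea (filling missing text-feature values conditioned on an image-based grouping), the Python snippet below performs a simple class-conditional mean imputation. The feature names, the grouping labels and all values are hypothetical, and the code is not the authors' algorithm.

      import numpy as np

      def class_conditional_mean_impute(features, image_class):
          """Fill NaNs in each feature column with the column mean of samples that share
          the same image-classifier label (a simplified stand-in for ICSMI)."""
          filled = features.copy()
          for cls in np.unique(image_class):
              rows = image_class == cls
              col_means = np.nanmean(filled[rows], axis=0)
              idx = np.where(np.isnan(filled) & rows[:, None])
              filled[idx] = col_means[idx[1]]
          return filled

      # Hypothetical text features (age, pH, cytology score) with missing entries.
      X = np.array([[32.0, 4.5, np.nan],
                    [41.0, np.nan, 2.0],
                    [28.0, 5.0, 1.0],
                    [55.0, 6.5, 3.0]])
      labels = np.array([0, 1, 0, 1])   # image-classifier output: 0 = low risk, 1 = high risk
      print(class_conditional_mean_impute(X, labels))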

  10. Three-dimensional Fréchet sensitivity kernels for electromagnetic wave propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strickland, C. E.; Johnson, T. C.; Odom, R. I.

    2015-08-28

    Electromagnetic imaging methods are useful tools for monitoring subsurface changes in pore-fluid content and the associated changes in electrical permittivity and conductivity. The most common method for georadar tomography uses a high frequency ray-theoretic approximation that is valid when material variations are sufficiently small relative to the wavelength of the propagating wave. Georadar methods, however, often utilize electromagnetic waves that propagate within heterogeneous media at frequencies where ray theory may not be applicable. In this paper we describe the 3-D Fréchet sensitivity kernels for EM wave propagation. Various data functional types are formulated that consider all three components of the electric wavefield and incorporate near-, intermediate-, and far-field contributions. We show that EM waves exhibit substantial variations for different relative source-receiver component orientations. The 3-D sensitivities also illustrate out-of-plane effects that are not captured in 2-D sensitivity kernels and can influence results obtained using 2-D inversion methods to image structures that are in reality 3-D.

  11. Increased sensitivity of OSHA method analysis of diacetyl and 2,3-pentanedione in air

    PubMed Central

    LeBouf, Ryan; Simmons, Michael

    2018-01-01

    Gas chromatography/mass spectrometry (GC/MS) operated in selected ion monitoring mode was used to enhance the sensitivity of OSHA Methods 1013/1016 for measuring diacetyl and 2,3-pentanedione in air samples. The original methods use flame ionization detection which cannot achieve the required sensitivity to quantify samples at or below the NIOSH recommended exposure limits (REL: 5 ppb for diacetyl and 9.3 ppb for 2,3-pentanedione) when sampling for both diacetyl and 2,3-pentanedione. OSHA Method 1012 was developed to measure diacetyl at lower levels but requires an electron capture detector, and a sample preparation time of 36 hours. Using GC/MS allows detection of these two alpha-diketones at lower levels than OSHA Method 1012 for diacetyl and OSHA Method 1016 for 2,3-pentanedione. Acetoin and 2,3-hexanedione may also be measured using this technique. Method quantification limits were 1.1 ppb for diacetyl (22% of the REL), 1.1 ppb for 2,3-pentanedione (12% of the REL), 1.1 ppb for 2,3-hexanedione, and 2.1 ppb for acetoin. Average extraction efficiencies above the limit of quantitation were 100% for diacetyl, 92% for 2,3-pentanedione, 89% for 2,3-hexanedione, and 87% for acetoin. Mass spectrometry with OSHA Methods 1013/1016 could be used by analytical laboratories to provide more sensitive and accurate measures of exposure to diacetyl and 2,3-pentanedione. PMID:27792470

  12. Increased sensitivity of OSHA method analysis of diacetyl and 2,3-pentanedione in air.

    PubMed

    LeBouf, Ryan; Simmons, Michael

    2017-05-01

    Gas chromatography/mass spectrometry (GC/MS) operated in selected ion monitoring mode was used to enhance the sensitivity of OSHA Methods 1013/1016 for measuring diacetyl and 2,3-pentanedione in air samples. The original methods use flame ionization detection which cannot achieve the required sensitivity to quantify samples at or below the NIOSH recommended exposure limits (REL: 5 ppb for diacetyl and 9.3 ppb for 2,3-pentanedione) when sampling for both diacetyl and 2,3-pentanedione. OSHA Method 1012 was developed to measure diacetyl at lower levels but requires an electron capture detector, and a sample preparation time of 36 hours. Using GC/MS allows detection of these two alpha-diketones at lower levels than OSHA Method 1012 for diacetyl and OSHA Method 1016 for 2,3-pentanedione. Acetoin and 2,3-hexanedione may also be measured using this technique. Method quantification limits were 1.1 ppb for diacetyl (22% of the REL), 1.1 ppb for 2,3-pentanedione (12% of the REL), 1.1 ppb for 2,3-hexanedione, and 2.1 ppb for acetoin. Average extraction efficiencies above the limit of quantitation were 100% for diacetyl, 92% for 2,3-pentanedione, 89% for 2,3-hexanedione, and 87% for acetoin. Mass spectrometry with OSHA Methods 1013/1016 could be used by analytical laboratories to provide more sensitive and accurate measures of exposure to diacetyl and 2,3-pentanedione.

  13. Online and offline tools for head movement compensation in MEG.

    PubMed

    Stolk, Arjen; Todorovic, Ana; Schoffelen, Jan-Mathijs; Oostenveld, Robert

    2013-03-01

    Magnetoencephalography (MEG) is measured above the head, which makes it sensitive to variations of the head position with respect to the sensors. Head movements blur the topography of the neuronal sources of the MEG signal, increase localization errors, and reduce statistical sensitivity. Here we describe two novel and readily applicable methods that compensate for the detrimental effects of head motion on the statistical sensitivity of MEG experiments. First, we introduce an online procedure that continuously monitors head position. Second, we describe an offline analysis method that takes into account the head position time-series. We quantify the performance of these methods in the context of three different experimental settings, involving somatosensory, visual and auditory stimuli, assessing both individual and group-level statistics. The online head localization procedure allowed for optimal repositioning of the subjects over multiple sessions, resulting in a 28% reduction of the variance in dipole position and an improvement of up to 15% in statistical sensitivity. Offline incorporation of the head position time-series into the general linear model resulted in improvements of group-level statistical sensitivity between 15% and 29%. These tools can substantially reduce the influence of head movement within and between sessions, increasing the sensitivity of many cognitive neuroscience experiments. Copyright © 2012 Elsevier Inc. All rights reserved.
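
    A minimal sketch of the offline idea (regressing head-position time-series out of the sensor data with an ordinary-least-squares general linear model). The data sizes, the six motion parameters and the simulation are assumptions for illustration; the authors' actual implementation in their MEG analysis pipeline is more elaborate.

      import numpy as np

      def regress_out_head_motion(meg_data, head_position):
          """Remove the part of each channel linearly explained by the head-position
          time-series. meg_data: (samples, channels); head_position: (samples, params)."""
          X = np.column_stack([np.ones(len(head_position)), head_position])   # add intercept
          beta, *_ = np.linalg.lstsq(X, meg_data, rcond=None)
          return meg_data - X @ beta

      # Synthetic example: 1000 samples, 10 channels, 6 motion parameters (made up).
      rng = np.random.default_rng(1)
      motion = rng.normal(size=(1000, 6))
      data = rng.normal(size=(1000, 10)) + 0.5 * motion[:, :1]   # channels leak motion parameter 0
      cleaned = regress_out_head_motion(data, motion)
      print("Variance before:", round(float(data.var()), 3), "after:", round(float(cleaned.var()), 3))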

  14. Low-Cost and Rapid Fabrication of Metallic Nanostructures for Sensitive Biosensors Using Hot-Embossing and Dielectric-Heating Nanoimprint Methods.

    PubMed

    Lee, Kuang-Li; Wu, Tsung-Yeh; Hsu, Hsuan-Yeh; Yang, Sen-Yeu; Wei, Pei-Kuen

    2017-07-02

    We propose two approaches, hot-embossing and dielectric-heating nanoimprinting, for low-cost and rapid fabrication of periodic nanostructures. Each nanofabrication process for the imprinted plastic nanostructures is completed within several seconds without the use of release agents and epoxy. Low-cost, large-area, and highly sensitive aluminum nanostructures on A4-size plastic films are fabricated by evaporating aluminum film on hot-embossed nanostructures. The narrowest bandwidth of the Fano resonance is only 2.7 nm in the visible light region. The periodic aluminum nanostructure achieves a figure of merit of 150, and an intensity sensitivity of 29,345%/RIU (refractive index unit). The rapid fabrication is also achieved by using radio-frequency (RF) sensitive plastic films and a commercial RF welding machine. The dielectric heating, using RF power, takes advantage of the rapid heating/cooling process and lower electric power consumption. The fabricated capped aluminum nanoslit array has a 5 nm Fano linewidth and 490.46 nm/RIU wavelength sensitivity. The biosensing capabilities of the metallic nanostructures are further verified by measuring antigen-antibody interactions using bovine serum albumin (BSA) and anti-BSA. These rapid and high-throughput fabrication methods can benefit low-cost, highly sensitive biosensors and other sensing applications.

  15. Searching for the full symphony of black hole binary mergers

    NASA Astrophysics Data System (ADS)

    Harry, Ian; Bustillo, Juan Calderón; Nitz, Alex

    2018-01-01

    Current searches for the gravitational-wave signature of compact binary mergers rely on matched-filtering data from interferometric observatories with sets of modeled gravitational waveforms. These searches currently use model waveforms that do not include the higher-order mode content of the gravitational-wave signal. Higher-order modes are important for many compact binary mergers and their omission reduces the sensitivity to such sources. In this work we explore the sensitivity loss incurred by omitting higher-order modes. We present a new method for searching for compact binary mergers using waveforms that include higher-order mode effects, and evaluate the sensitivity increase that using our new method would allow. We find that, when evaluating sensitivity at a constant rate of false alarms, and when including the fact that signal-consistency tests can reject some signals that include higher-order mode content, we observe a sensitivity increase of up to a factor of 2 in volume for high-mass-ratio, high-total-mass systems. For systems with equal masses, or with total mass ~50 M⊙, we see more modest sensitivity increases, <10%, which indicates that the existing search is already performing well. Our new search method is also directly applicable in searches for generic compact binaries.
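
    To illustrate the matched-filtering step the searches are built on, here is a toy time-domain sketch with a made-up "chirp-like" template and white noise of unit variance. Real pipelines filter in the frequency domain, weight by the detector noise spectrum and maximize over a template bank including (in the proposed method) higher-order modes; none of that is shown here.

      import numpy as np

      def matched_filter_snr(data, template):
          """Matched-filter statistic versus time, assuming white noise of unit variance."""
          template = template / np.linalg.norm(template)
          return np.correlate(data, template, mode="valid")

      # Toy template and data stream with a weak injected signal (all values made up).
      rng = np.random.default_rng(2)
      t = np.linspace(0.0, 1.0, 256)
      template = np.sin(2 * np.pi * (20.0 + 30.0 * t) * t) * np.exp(-2.0 * t)
      data = rng.normal(size=4096)
      data[2000:2000 + len(template)] += 0.8 * template

      snr = matched_filter_snr(data, template)
      peak = int(np.abs(snr).argmax())
      print(f"Peak |SNR| = {np.abs(snr).max():.1f} at sample {peak}")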

  16. Citation searches are more sensitive than keyword searches to identify studies using specific measurement instruments.

    PubMed

    Linder, Suzanne K; Kamath, Geetanjali R; Pratt, Gregory F; Saraykar, Smita S; Volk, Robert J

    2015-04-01

    To compare the effectiveness of two search methods in identifying studies that used the Control Preferences Scale (CPS), a health care decision-making instrument commonly used in clinical settings. We searched the literature using two methods: (1) keyword searching using variations of "Control Preferences Scale" and (2) cited reference searching using two seminal CPS publications. We searched three bibliographic databases [PubMed, Scopus, and Web of Science (WOS)] and one full-text database (Google Scholar). We report precision and sensitivity as measures of effectiveness. Keyword searches in bibliographic databases yielded high average precision (90%) but low average sensitivity (16%). PubMed was the most precise, followed closely by Scopus and WOS. The Google Scholar keyword search had low precision (54%) but provided the highest sensitivity (70%). Cited reference searches in all databases yielded moderate sensitivity (45-54%), but precision ranged from 35% to 75% with Scopus being the most precise. Cited reference searches were more sensitive than keyword searches, making it a more comprehensive strategy to identify all studies that use a particular instrument. Keyword searches provide a quick way of finding some but not all relevant articles. Goals, time, and resources should dictate the combination of which methods and databases are used. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Citation searches are more sensitive than keyword searches to identify studies using specific measurement instruments

    PubMed Central

    Linder, Suzanne K.; Kamath, Geetanjali R.; Pratt, Gregory F.; Saraykar, Smita S.; Volk, Robert J.

    2015-01-01

    Objective To compare the effectiveness of two search methods in identifying studies that used the Control Preferences Scale (CPS), a healthcare decision-making instrument commonly used in clinical settings. Study Design & Setting We searched the literature using two methods: 1) keyword searching using variations of “control preferences scale” and 2) cited reference searching using two seminal CPS publications. We searched three bibliographic databases [PubMed, Scopus, Web of Science (WOS)] and one full-text database (Google Scholar). We report precision and sensitivity as measures of effectiveness. Results Keyword searches in bibliographic databases yielded high average precision (90%), but low average sensitivity (16%). PubMed was the most precise, followed closely by Scopus and WOS. The Google Scholar keyword search had low precision (54%) but provided the highest sensitivity (70%). Cited reference searches in all databases yielded moderate sensitivity (45–54%), but precision ranged from 35–75% with Scopus being the most precise. Conclusion Cited reference searches were more sensitive than keyword searches, making it a more comprehensive strategy to identify all studies that use a particular instrument. Keyword searches provide a quick way of finding some but not all relevant articles. Goals, time and resources should dictate the combination of which methods and databases are used. PMID:25554521

  18. Rapid and sensitive detection of mink circovirus by recombinase polymerase amplification.

    PubMed

    Ge, Junwei; Shi, Yunjia; Cui, Xingyang; Gu, Shanshan; Zhao, Lili; Chen, Hongyan

    2018-06-01

    To date, the pathogenic role of mink circovirus (MiCV) remains unclear, and its prevalence and economic importance are unknown. Therefore, a rapid and sensitive molecular diagnosis is necessary for disease management and epidemiological surveillance. However, only PCR methods can identify MiCV infection at present. In this study, we developed a nested PCR and established a novel recombinase polymerase amplification (RPA) assay for MiCV detection. Sensitivity analysis showed that the detection limit of nested PCR and RPA assay was 10¹ copies/reaction, and these methods were more sensitive than conventional PCR, which has a detection limit of 10⁵ copies/reaction. The RPA assay had no cross-reactivity with other related viral pathogens, and amplification was completed in less than 20 min with a simple device. Further assessment of clinical samples showed that the two assays were accurate in identifying positive and negative conventional PCR samples. The detection rate of MiCV by the RPA assay in clinical samples was 38.09%, which was 97% consistent with that by the nested PCR. The developed nested PCR is a highly sensitive tool for practical use, and the RPA assay is a simple, sensitive, and potential alternative method for rapid and accurate MiCV diagnosis. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. A fast and sensitive TLD method for measurement of energy and homogeneity of electron beams using transmitted radiation through lead.

    PubMed

    Pradhan, A S; Quast, U; Sharma, P K

    1994-09-01

    A simple and fast, but sensitive TLD method for the measurement of energy and homogeneity of therapeutically used electron beams has been developed and tested. This method is based on the fact that when small thicknesses of high-Z absorbers such as lead are interposed in the high-energy electron beams, the transmitted radiation increases with the energy of the electron beams. Consequently, the ratio of readouts of TLDs held on the two sides of a lead plate varied sharply (by a factor of 70) with a change in energy of the electron beam from 5 MeV to 18 MeV, offering a very sensitive method for the measurement of the energy of electron beams. By using the ratio of TL readouts of two types of TLD ribbon with widely different sensitivities, LiF TLD-700 ribbons on the upstream side and highly sensitive CaF2:Dy TLD-200 ribbons on the downstream side, an electron energy discrimination of better than +/- 0.1 MeV could be achieved. The homogeneity of the electron beam energy and the absorbed dose was measured by using a jig in which the TLDs were held in the desired array on both sides of a 4 mm thick lead plate. The method takes minimal beam time and makes it possible to carry out measurements for the audit of the quality of electron beams as well as for intercomparison of beams by mail.
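
    Illustrative only: once a calibration curve of TL readout ratio versus beam energy has been measured, an unknown beam energy can be read off by interpolation. The calibration numbers below are invented (only the roughly 70-fold increase from 5 to 18 MeV quoted above guided their shape) and do not come from the paper.

      import numpy as np

      # Hypothetical calibration: downstream/upstream TL readout ratio versus beam energy.
      cal_energy = np.array([5.0, 8.0, 10.0, 12.0, 15.0, 18.0])     # MeV
      cal_ratio = np.array([0.01, 0.05, 0.12, 0.25, 0.45, 0.70])    # made-up ratios

      def energy_from_ratio(measured_ratio):
          """Interpolate the monotone calibration curve on a log-ratio scale."""
          return float(np.interp(np.log(measured_ratio), np.log(cal_ratio), cal_energy))

      print(f"Measured ratio 0.18 -> estimated beam energy {energy_from_ratio(0.18):.1f} MeV")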

  20. Development of a Tandem Repeat-Based Polymerase Chain Displacement Reaction Method for Highly Sensitive Detection of 'Candidatus Liberibacter asiaticus'.

    PubMed

    Lou, Binghai; Song, Yaqin; RoyChowdhury, Moytri; Deng, Chongling; Niu, Ying; Fan, Qijun; Tang, Yan; Zhou, Changyong

    2018-02-01

    Huanglongbing (HLB) is one of the most destructive diseases in citrus production worldwide. Early detection of HLB pathogens can facilitate timely removal of infected citrus trees in the field. However, low titer and uneven distribution of HLB pathogens in host plants make reliable detection challenging. Therefore, the development of effective detection methods with high sensitivity is imperative. This study reports the development of a novel method, tandem repeat-based polymerase chain displacement reaction (TR-PCDR), for the detection of 'Candidatus Liberibacter asiaticus', a widely distributed HLB-associated bacterium. A uniquely designed primer set (TR2-PCDR-F/TR2-PCDR-1R) and a thermostable Taq DNA polymerase mutant with strand displacement activity were used for TR-PCDR amplification. Performed in a regular thermal cycler, TR-PCDR could produce more than two amplicons after each amplification cycle. The sensitivity of the developed TR-PCDR was 10 copies of the target DNA fragment. This sensitivity was 100× higher than that of conventional PCR and similar to that of real-time PCR. Data from the detection of 'Ca. L. asiaticus' in field samples using the above three methods also showed similar results. No false-positive TR-PCDR amplification was observed from healthy citrus samples and water controls. These results thereby illustrated that the developed TR-PCDR method can be applied to the reliable, highly sensitive, and cost-effective detection of 'Ca. L. asiaticus'.

  1. Automatic high-sensitivity control of suspended pollutants in drinking and natural water

    NASA Astrophysics Data System (ADS)

    Akopov, Edmund I.; Karabegov, M.; Ovanesyan, A.

    1993-11-01

    This article describes a new instrumental method and device for the automatic measurement of water turbidity (WT) by means of a photoelectron flow ultramicroscope (PFU). The method determines WT by measuring the number concentration (the number of particles suspended in 1 cm³ of the water under study) with the PFU, and demonstrates much higher sensitivity and accuracy than the usual methods, turbidimetry and nephelometry.

  2. Pyridoxylamine reactivity kinetics as an amine based nucleophile for screening electrophilic dermal sensitizers

    PubMed Central

    Chipinda, Itai; Mbiya, Wilbes; Adigun, Risikat Ajibola; Morakinyo, Moshood K.; Law, Brandon F.; Simoyi, Reuben H.; Siegel, Paul D.

    2015-01-01

    Chemical allergens bind directly, or after metabolic or abiotic activation, to endogenous proteins to become allergenic. Assessment of this initial binding has been suggested as a target for development of assays to screen chemicals for their allergenic potential. Recently we reported a nitrobenzenethiol (NBT) based method for screening thiol-reactive skin sensitizers; however, amine-selective sensitizers are not detected by this assay. In the present study we describe an amine (pyridoxylamine (PDA)) based kinetic assay to complement the NBT assay for identification of amine-selective and non-selective skin sensitizers. UV-Vis spectrophotometry and fluorescence were used to measure PDA reactivity for 57 chemicals including anhydrides, aldehydes, and quinones where reaction rates ranged from 116 to 6.2 × 10−6 M−1 s−1 for extreme to weak sensitizers, respectively. No reactivity towards PDA was observed with the thiol-selective sensitizers, non-sensitizers and prohaptens. The PDA rate constants correlated significantly with their respective murine local lymph node assay (LLNA) threshold EC3 values (R2 = 0.76). The use of PDA serves as a simple, inexpensive amine based method that shows promise as a preliminary screening tool for electrophilic, amine-selective skin sensitizers. PMID:24333919
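
    The reported correlation between rate constants and LLNA EC3 values is usually examined on logarithmic scales. The sketch below fits such a log-log regression to invented rate-constant/EC3 pairs spanning roughly the ranges quoted above; the numbers are not the study's data.

      import numpy as np

      # Hypothetical PDA rate constants (M^-1 s^-1) and LLNA EC3 values (%), made up.
      k_pda = np.array([1.2e2, 8.0, 0.5, 3.0e-2, 1.0e-3, 2.0e-5])
      ec3 = np.array([0.05, 0.2, 1.0, 4.0, 12.0, 40.0])

      x, y = np.log10(k_pda), np.log10(ec3)
      slope, intercept = np.polyfit(x, y, 1)
      r2 = np.corrcoef(x, y)[0, 1] ** 2
      print(f"log10(EC3) = {slope:.2f} * log10(k) + {intercept:.2f}   (R^2 = {r2:.2f})")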

  3. Usefulness and limitations of various guinea-pig test methods in detecting human skin sensitizers-validation of guinea-pig tests for skin hypersensitivity.

    PubMed

    Marzulli, F; Maguire, H C

    1982-02-01

    Several guinea-pig predictive test methods were evaluated by comparison of results with those obtained with human predictive tests, using ten compounds that have been used in cosmetics. The method involves the statistical analysis of the frequency with which guinea-pig tests agree with the findings of tests in humans. In addition, the frequencies of false positive and false negative predictive findings are considered and statistically analysed. The results clearly demonstrate the superiority of adjuvant tests (complete Freund's adjuvant) in determining skin sensitizers and the overall superiority of the guinea-pig maximization test in providing results similar to those obtained by human testing. A procedure is suggested for utilizing adjuvant and non-adjuvant test methods for characterizing compounds as of weak, moderate or strong sensitizing potential.

  4. A Purkinje shift in the spectral sensitivity of grey squirrels

    PubMed Central

    Silver, Priscilla H.

    1966-01-01

    1. The light-adapted spectral sensitivity of the grey squirrel has been determined by an automated training method at a level about 6 log units above the squirrel's absolute threshold. 2. The maximum sensitivity is near 555 nm, under light-adapted conditions, compared with the dark-adapted maximum near 500 nm found by a similar method. 3. Neither the light-adapted nor the dark-adapted behavioural threshold agrees with electrophysiological findings using single flash techniques, but there is agreement with e.r.g. results obtained with sinusoidal stimuli. PMID:5972118

  5. Electrooptic modulation methods for high sensitivity tunable diode laser spectroscopy

    NASA Technical Reports Server (NTRS)

    Glenar, David A.; Jennings, Donald E.; Nadler, Shacher

    1990-01-01

    A CdTe phase modulator and low power RF sources have been used with Pb-salt tunable diode lasers operating near 8 microns to generate optical sidebands for high sensitivity absorption spectroscopy. Sweep averaged, first-derivative sample spectra of CH4 were acquired by wideband phase sensitive detection of the electrooptically (EO) generated carrier-sideband beat signal. EO generated beat signals were also used to frequency lock the TDL to spectral lines. This eliminates low frequency diode jitter, and avoids the excess laser linewidth broadening that accompanies TDL current modulation frequency locking methods.

  6. Eigenvalue and eigenvector sensitivity and approximate analysis for repeated eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Hou, Gene J. W.; Kenny, Sean P.

    1991-01-01

    A set of computationally efficient equations for eigenvalue and eigenvector sensitivity analysis is derived, and a method for eigenvalue and eigenvector approximate analysis in the presence of repeated eigenvalues is presented. The method developed for approximate analysis involves a reparameterization of the multivariable structural eigenvalue problem in terms of a single positive-valued parameter. The resulting equations yield first-order approximations of changes in both the eigenvalues and eigenvectors associated with the repeated eigenvalue problem. Examples are given to demonstrate the application of such equations for sensitivity and approximate analysis.

  7. Chinese nurses' perceived barriers and facilitators of ethical sensitivity.

    PubMed

    Huang, Fei Fei; Yang, Qing; Zhang, Jie; Khoshnood, Kaveh; Zhang, Jing Ping

    2016-08-01

    An overview of ethical sensitivity among Chinese registered nurses is needed to develop and optimize the education programs and interventions to cultivate and improve ethical sensitivity. The study was conducted to explore the barriers to and facilitators of ethical sensitivity among Chinese registered nurses working in hospital settings. A convergent parallel mixed-methods research design was adopted. In the cross-sectional quantitative study, the Chinese Moral Sensitivity Questionnaire-revised version was used to assess the levels of ethical sensitivity among registered nurses, and the scores were correlated with key demographics, training experiences in ethics, and workplace cultural environments (n = 306). In the qualitative study, semi-structured interviews were used to elicit the nurses' perceptions of the barriers and facilitators in nurturing ethical sensitivity (n = 15). The data were collected from February to June 2014. This study was approved by the Institutional Review Boards of Yale University and Central South University. Despite moderately high overall Chinese Moral Sensitivity Questionnaire-revised version scores, the ethical sensitivity among Chinese nurses lags in practice. Barriers to ethical sensitivity include the lack of knowledge related to ethics, lack of working experience as a nurse, the hierarchical organizational climate, and the conformist working attitude. The positive workplace cultural environments and application of ethical knowledge in practice were considered potential facilitators of ethical sensitivity. The findings of this study were compared with studies from other countries to examine the barriers and facilitators of ethical sensitivity in Chinese nurses. This mixed-methods study showed that even though the Chinese nurses have moderately high sensitivity to the ethical issues encountered in hospitals, there is still room for improvement. The barriers to and facilitators of ethical sensitivity identified here offer new and important strategies to support and enhance the nurses' sensitivity to ethical issues. © The Author(s) 2015.

  8. Blood culture gram stain, acridine orange stain and direct sensitivity-based antimicrobial therapy of bloodstream infection in patients with trauma.

    PubMed

    Behera, B; Mathur, P; Gupta, B

    2010-01-01

    The purpose of this study was to ascertain if the simple practice of Gram stain, acridine orange stain and direct sensitivity determination of positive blood culture bottles could be used to guide early and appropriate treatment in trauma patients with clinical suspicion of sepsis. The study also aimed to evaluate the error in interpreting antimicrobial sensitivity by the direct method when compared with the standard method, and to find out whether specific antibiotic-organism combinations had more discrepancies. Findings from consecutive episodes of bloodstream infection at an Apex Trauma centre over a 12-month period are summarized. A total of 509 consecutive positive blood cultures were subjected to Gram staining. AO staining was done in BacT/ALERT-positive Gram-stain negative blood cultures. Direct sensitivity was performed from 369 blood culture broths showing a single type of growth in Gram and acridine orange staining. Results of direct sensitivity were compared to conventional sensitivity for errors. No 'very major' discrepancy was found in this study. About 5.2 and 1.8% minor error rates were noted in gram-positive and gram-negative bacteria, respectively, while comparing the two methods. Most of the discrepancies in gram-negative bacteria were noted in beta-lactam/beta-lactamase inhibitor combinations. Direct sensitivity testing was not reliable for reporting of methicillin and vancomycin resistance in Staphylococci. Gram stain result together with direct sensitivity testing is required for optimizing initial antimicrobial therapy in trauma patients with clinical suspicion of sepsis. Gram staining and AO staining proved particularly helpful in the early detection of candidaemia.

  9. Stimulus sensitive gel with radioisotope and methods of making

    DOEpatents

    Weller, Richard E.; Lind, Michael A.; Fisher, Darrell R.; Gutowska, Anna; Campbell, Allison A.

    2005-03-22

    The present invention is a thermally reversible stimulus-sensitive gel or gelling copolymer radioisotope carrier that is a linear random copolymer of an [meth-]acrylamide derivative and a hydrophilic comonomer, wherein the linear random copolymer is in the form of a plurality of linear chains having a plurality of molecular weights greater than or equal to a minimum gelling molecular weight cutoff. Addition of a biodegradable backbone and/or a therapeutic agent imparts further utility. The method of the present invention for making a thermally reversible stimulus-sensitive gelling copolymer radionuclide carrier has the steps of: (a) mixing a stimulus-sensitive reversible gelling copolymer with an aqueous solvent as a stimulus-sensitive reversible gelling solution; and (b) mixing a radioisotope with said stimulus-sensitive reversible gelling solution as said radioisotope carrier. The gel is enhanced by either combining it with a biodegradable backbone and/or a therapeutic agent in a gelling solution made by mixing the copolymer with an aqueous solvent.

  10. Stimulus sensitive gel with radioisotope and methods of making

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weller, Richard E; Lind, Michael A; Fisher, Darrell R

    2001-10-02

    The present invention is a thermally reversible stimulus-sensitive gel or gelling copolymer radioisotope carrier that is a linear random copolymer of an [meth]acrylamide derivative and a hydrophilic comonomer, wherein the linear random copolymer is in the form of a plurality of linear chains having a plurality of molecular weights greater than or equal to a minimum gelling molecular weight cutoff. Addition of a biodegradable backbone and/or a therapeutic agent imparts further utility. The method of the present invention for making a thermally reversible stimulus-sensitive gelling copolymer radionuclide carrier has the steps of: (a) mixing a stimulus-sensitive reversible gelling copolymer with an aqueous solvent as a stimulus-sensitive reversible gelling solution; and (b) mixing a radioisotope with said stimulus-sensitive reversible gelling solution as said radioisotope carrier. The gel is enhanced by either combining it with a biodegradable backbone and/or a therapeutic agent in a gelling solution made by mixing the copolymer with an aqueous solvent.

  11. Calculation of impulse laser rangefinders' utmost operating range with sensitivity in different weather

    NASA Astrophysics Data System (ADS)

    Chen, Yu-dan; Zhou, Bing; Ying, Jia-ju; Mao, Shao-juan; Qian, Xian-mei

    2015-10-01

    As a key piece of military equipment, the impulse laser rangefinder has become a main target of electro-optical countermeasures, so its real maximum range (defined in this paper as the utmost operating range) is the index of greatest concern when evaluating the performance of electro-optical countermeasure weapons. A method is presented for calculating a laser rangefinder's utmost operating range from its sensitivity under different weather conditions, together with an experimental method for obtaining that sensitivity. Analysis of experimental data for which the detectivity is 40%-60% places the laser rangefinders' sensitivity in the range of 1.7×10-5 W to 9.8×10-5 W. Because an accurate utmost operating range requires an accurate experimental determination of the sensitivity, the last part of the paper analyzes the factors that influence this accuracy, such as the automatic gain control circuit, the fluctuation of the laser power, and the incident angle of the laser.

  12. Evaluation of analytical performance of a new high-sensitivity immunoassay for cardiac troponin I.

    PubMed

    Masotti, Silvia; Prontera, Concetta; Musetti, Veronica; Storti, Simona; Ndreu, Rudina; Zucchelli, Gian Carlo; Passino, Claudio; Clerico, Aldo

    2018-02-23

    The study aim was to evaluate and compare the analytical performance of the new chemiluminescent immunoassay for cardiac troponin I (cTnI), called Access hs-TnI, using the DxI platform with those of the Access AccuTnI+3 method and the high-sensitivity (hs) cTnI method for the ARCHITECT platform. The limits of blank (LoB), detection (LoD) and quantitation (LoQ) at 10% and 20% CV were evaluated according to international standardized protocols. For the evaluation of analytical performance and comparison of cTnI results, both heparinized plasma samples, collected from healthy subjects and patients with cardiac diseases, and quality control samples distributed in external quality assessment programs were used. The LoB, LoD and LoQ at 20% and 10% CV of the Access hs-cTnI method were 0.6, 1.3, 2.1 and 5.3 ng/L, respectively. The Access hs-cTnI method showed analytical performance significantly better than that of the Access AccuTnI+3 method and results similar to those of the hs ARCHITECT cTnI method. Moreover, the cTnI concentrations measured with the Access hs-cTnI method showed close linear regressions with both the Access AccuTnI+3 and ARCHITECT hs-cTnI methods, although there were systematic differences between these methods. There was no difference between cTnI values measured by Access hs-cTnI in heparinized plasma and serum samples, whereas there was a significant difference between cTnI values measured in EDTA plasma and those measured in heparin plasma. Access hs-cTnI has analytical sensitivity parameters significantly improved compared to the Access AccuTnI+3 method and similar to those of the high-sensitivity method on the ARCHITECT platform.

  13. Survey of methods for calculating sensitivity of general eigenproblems

    NASA Technical Reports Server (NTRS)

    Murthy, Durbha V.; Haftka, Raphael T.

    1987-01-01

    A survey of methods for sensitivity analysis of the algebraic eigenvalue problem for non-Hermitian matrices is presented. In addition, a modification of one method based on a better normalizing condition is proposed. Methods are classified as Direct or Adjoint and are evaluated for efficiency. Operation counts are presented in terms of matrix size, number of design variables and number of eigenvalues and eigenvectors of interest. The effect of the sparsity of the matrix and its derivatives is also considered, and typical solution times are given. General guidelines are established for the selection of the most efficient method.
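
    A minimal numerical sketch of the kind of quantity such methods compute (this is the standard first-order eigenvalue-sensitivity formula for non-Hermitian matrices, d(lambda)/dp = y^H (dA/dp) x / (y^H x) with x and y the right and left eigenvectors, not code from the survey; the eigenvector-normalization issue the survey addresses is not shown here):

      import numpy as np

      def eigen_sensitivity(A, dA_dp, k=0):
          """First-order sensitivity d(lambda_k)/dp for a (generally non-Hermitian) A."""
          w, V = np.linalg.eig(A)                   # eigenvalues and right eigenvectors x
          wH, U = np.linalg.eig(A.conj().T)         # eigenvectors of A^H are the left eigenvectors y
          j = np.argmin(np.abs(wH.conj() - w[k]))   # pair y with the same eigenvalue
          x, y = V[:, k], U[:, j]
          return w[k], (y.conj() @ dA_dp @ x) / (y.conj() @ x)

      rng = np.random.default_rng(0)
      A0, B = rng.normal(size=(5, 5)), rng.normal(size=(5, 5))
      p, dp = 0.3, 1e-6
      lam0, dlam = eigen_sensitivity(A0 + p * B, B, k=0)    # A(p) = A0 + p*B, so dA/dp = B
      w_pert = np.linalg.eigvals(A0 + (p + dp) * B)
      lam1 = w_pert[np.argmin(np.abs(w_pert - lam0))]       # track the same eigenvalue
      print("direct formula :", dlam)
      print("finite diff    :", (lam1 - lam0) / dp)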

  14. Three Dimensional Distribution of Sensitive Field and Stress Field Inversion of Force Sensitive Materials under Constant Current Excitation.

    PubMed

    Zhao, Shuanfeng; Liu, Min; Guo, Wei; Zhang, Chuanwei

    2018-02-28

    Force sensitive conductive composite materials are functional materials which can be used as the sensitive material of force sensors. However, the existing sensors only use one-dimensional electrical properties of force sensitive conductive materials. Even in tactile sensors, the measurement of contact pressure is achieved by large-scale arrays and the units of a large-scale array are also based on the one-dimensional electrical properties of force sensitive materials. The main contribution of this work is to study the three-dimensional electrical properties and the inversion method of three-dimensional stress field of a force sensitive material (conductive rubber), which pushes the application of force sensitive material from one dimensional to three-dimensional. First, the mathematical model of the conductive rubber current field distribution under a constant force is established by the effective medium theory, and the current field distribution model of conductive rubber with different geometry, conductive rubber content and conductive rubber relaxation parameters is deduced. Secondly, the inversion method of the three-dimensional stress field of conductive rubber is established, which provides a theoretical basis for the design of a new tactile sensor, three-dimensional stress field and space force based on force sensitive materials.

  15. [Comparison of red edge parameters of winter wheat canopy under late frost stress].

    PubMed

    Wu, Yong-feng; Hu, Xin; Lü, Guo-hua; Ren, De-chao; Jiang, Wei-guo; Song, Ji-qing

    2014-08-01

    In the present study, late frost experiments were implemented under a range of subfreezing temperatures (-1 - -9 degrees C) by using a field movable climate chamber (FMCC) and a cold climate chamber, respectively. Based on the spectra of winter wheat canopy measured at noon on the first day after the frost experiments, red edge parameters REP, Dr, SDr, Dr(min), Dr/Dr(min) and Dr/SDr were extracted using maximum first derivative spectrum method (FD), linear four-point interpolation method (FPI), polynomial fitting method (POLY), inverted Gaussian fitting method (IG) and linear extrapolation technique (LE), respectively. The capacity of the red edge parameters to detect late frost stress was explicated from the aspects of the early, sensitivity and stability through correlation analysis, linear regression modeling and fluctuation analysis. The result indicates that except for REP calculated from FPI and IG method in Experiment 1, REP from the other methods was correlated with frost temperatures (P < 0.05). Thereinto, significant levels (P) of POLY and LE methods all reached 0.01. Except for POLY method in Experiment 2, Dr/SDr from the other methods were all significantly correlated with frost temperatures (P < 0.01). REP showed a trend to shift to short-wave band with decreasing temperatures. The lower the temperature, the more obvious the trend is. Of all the REP, REP calculated by LE method had the highest correlation with frost temperatures which indicated that LE method is the best for REP extraction. In Experiment 1 and 2, only Dr(min) and Dr/Dr(min), calculated by FD method simultaneously achieved the requirements for the early (their correlations with frost temperatures showed a significant level P < 0.01), sensitivity (abso- lute value of the slope of fluctuation coefficient is greater than 2.0) and stability (their correlations with frost temperatures al- ways keep a consistent direction). Dr/SDr calculated from FD and IG methods always had a low sensitivity in Experiment 2. In Experiment 1, the sensitivity of Dr/SDr from FD was moderate and IG was high. REP calculated from LE method had a lowest sensitivity in the two experiments. Totally, Dr(min) and Dr/Dr(min) calculated by FD method have the strongest detection capacity for frost temperature, which will be helpful to conducting the research on early diagnosis of late frost injury to winter wheat.

  16. Proportional Topology Optimization: A New Non-Sensitivity Method for Solving Stress Constrained and Minimum Compliance Problems and Its Implementation in MATLAB

    PubMed Central

    Biyikli, Emre; To, Albert C.

    2015-01-01

    A new topology optimization method called the Proportional Topology Optimization (PTO) is presented. As a non-sensitivity method, PTO is simple to understand, easy to implement, and is also efficient and accurate at the same time. It is implemented into two MATLAB programs to solve the stress constrained and minimum compliance problems. Descriptions of the algorithm and computer programs are provided in detail. The method is applied to solve three numerical examples for both types of problems. The method shows comparable efficiency and accuracy with an existing optimality criteria method which computes sensitivities. Also, the PTO stress constrained algorithm and minimum compliance algorithm are compared by feeding output from one algorithm to the other in an alternative manner, where the former yields lower maximum stress and volume fraction but higher compliance compared to the latter. Advantages and disadvantages of the proposed method and future works are discussed. The computer programs are self-contained and publicly shared in the website www.ptomethod.org. PMID:26678849
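
    A minimal sketch of the proportional idea only (the published MATLAB programs additionally include the finite-element analysis, filtering and an outer optimization loop; the function and variable names here are hypothetical): material is distributed across elements in proportion to an element quantity such as stress or compliance, with densities clamped to bounds and the clipped surplus redistributed.

      import numpy as np

      def proportional_update(quantity, vol_frac, x_min=1e-3, p=1.0, n_inner=50):
          """Distribute the volume budget across elements in proportion to
          quantity**p (e.g. element stress or compliance), clamped to [x_min, 1].
          An inner loop redistributes whatever the clamping cuts off."""
          n = quantity.size
          target = vol_frac * n
          x = np.full(n, x_min)
          weights = quantity ** p
          remaining = target - x.sum()
          for _ in range(n_inner):
              if remaining <= 1e-12:
                  break
              free = x < 1.0
              share = remaining * weights * free / (weights * free).sum()
              x = np.minimum(x + share, 1.0)
              remaining = target - x.sum()
          return x

      # toy usage: 100 elements, 40% volume budget, "demand" stands in for element stress/compliance
      demand = np.random.default_rng(1).random(100)
      densities = proportional_update(demand, vol_frac=0.4)
      print(densities.sum() / densities.size)        # ~0.4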

  17. Simple and sensitive method for the quantification of total bilirubin in human serum using 3-methyl-2-benzothiazolinone hydrazone hydrochloride as a chromogenic probe

    NASA Astrophysics Data System (ADS)

    Nagaraja, Padmarajaiah; Avinash, Krishnegowda; Shivakumar, Anantharaman; Dinesh, Rangappa; Shrestha, Ashwinee Kumar

    2010-11-01

    Here we describe a new spectrophotometric method for measuring total bilirubin in serum. The method is based on the cleavage of bilirubin giving formaldehyde, which further reacts with diazotized 3-methyl-2-benzothiazolinone hydrazone hydrochloride giving a blue-colored solution with maximum absorbance at 630 nm. The sensitivity of the developed method was compared with the Jendrassik-Grof assay procedure and its applicability was tested with human serum samples. Good correlation was attained between the two methods, giving a slope of 0.994, intercept of 0.015, and R2 = 0.997. Beer's law was obeyed in the range of 0.068-17.2 μM with good linearity (absorbance y = 0.044 Cbil + 0.003). The relative standard deviation was 0.006872; within-day precision ranged from 0.3 to 1.2% and day-to-day precision from 1 to 6%. Recovery of the method varied from 97 to 102%. The proposed method has higher sensitivity with less interference. The obtained product was extracted and spectrally characterized for structural confirmation with FT-IR and 1H NMR.
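
    For illustration, the reported calibration line can be inverted to convert a measured absorbance at 630 nm into a total-bilirubin concentration; the constants below are the slope and intercept quoted in the abstract, and the helper function itself is hypothetical.

      # Illustrative use of the reported calibration line A = 0.044*C + 0.003
      # (C in micromolar, absorbance at 630 nm); valid only inside the linear
      # range quoted in the abstract (0.068-17.2 uM).
      SLOPE, INTERCEPT = 0.044, 0.003
      LINEAR_RANGE_UM = (0.068, 17.2)

      def bilirubin_uM(absorbance_630nm: float) -> float:
          c = (absorbance_630nm - INTERCEPT) / SLOPE
          lo, hi = LINEAR_RANGE_UM
          if not (lo <= c <= hi):
              raise ValueError(f"{c:.3f} uM falls outside the validated linear range")
          return c

      print(bilirubin_uM(0.30))   # ~6.75 uM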

  18. Exploration of Analysis Methods for Diagnostic Imaging Tests: Problems with ROC AUC and Confidence Scores in CT Colonography

    PubMed Central

    Mallett, Susan; Halligan, Steve; Collins, Gary S.; Altman, Doug G.

    2014-01-01

    Background Different methods of evaluating diagnostic performance when comparing diagnostic tests may lead to different results. We compared two such approaches, sensitivity and specificity with area under the Receiver Operating Characteristic Curve (ROC AUC), for the evaluation of CT colonography for the detection of polyps, either with or without computer assisted detection. Methods In a multireader multicase study of 10 readers and 107 cases we compared sensitivity and specificity, using radiological reporting of the presence or absence of polyps, to ROC AUC calculated from confidence scores concerning the presence of polyps. Both methods were assessed against a reference standard. Here we focus on five readers, selected to illustrate issues in design and analysis. We compared diagnostic measures within readers, showing that differences in results are due to statistical methods. Results Reader performance varied widely depending on whether sensitivity and specificity or ROC AUC was used. There were problems using confidence scores: in assigning scores to all cases; in the use of zero scores when no polyps were identified; in the bimodal non-normal distribution of scores; in fitting ROC curves due to extrapolation beyond the study data; and in the undue influence of a few false positive results. Variation due to use of different ROC methods exceeded differences between test results for ROC AUC. Conclusions The confidence scores recorded in our study violated many assumptions of ROC AUC methods, rendering these methods inappropriate. The problems we identified will apply to other detection studies using confidence scores. We found sensitivity and specificity were a more reliable and clinically appropriate method to compare diagnostic tests. PMID:25353643
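
    The two summaries being compared reduce to simple computations on per-case reader data: binary present/absent calls give sensitivity and specificity, while confidence scores give an empirical ROC AUC (Mann-Whitney form). A numpy-only sketch with made-up reader data:

      import numpy as np

      def sens_spec(calls, truth):
          """Sensitivity and specificity of binary reader calls vs a reference standard."""
          calls, truth = np.asarray(calls, bool), np.asarray(truth, bool)
          sens = (calls & truth).sum() / truth.sum()
          spec = (~calls & ~truth).sum() / (~truth).sum()
          return sens, spec

      def auc_mann_whitney(scores, truth):
          """Empirical ROC AUC = P(score_pos > score_neg) + 0.5 * P(tie)."""
          scores, truth = np.asarray(scores, float), np.asarray(truth, bool)
          pos, neg = scores[truth], scores[~truth]
          greater = (pos[:, None] > neg[None, :]).sum()
          ties = (pos[:, None] == neg[None, :]).sum()
          return (greater + 0.5 * ties) / (len(pos) * len(neg))

      # hypothetical reader data; the many zero scores given when no polyp is seen
      # illustrate exactly the bimodal pattern the paper flags as a problem for ROC fitting
      truth  = np.array([1, 1, 1, 0, 0, 0, 0, 1, 0, 0], bool)
      calls  = np.array([1, 1, 0, 0, 1, 0, 0, 1, 0, 0], bool)
      scores = np.array([80, 65, 0, 0, 40, 0, 0, 90, 10, 0], float)
      print(sens_spec(calls, truth), auc_mann_whitney(scores, truth))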

  19. Ranking of physiotherapeutic evaluation methods as outcome measures of stifle functionality in dogs.

    PubMed

    Hyytiäinen, Heli K; Mölsä, Sari H; Junnila, Jouni T; Laitinen-Vapaavuori, Outi M; Hielm-Björkman, Anna K

    2013-04-08

    Various physiotherapeutic evaluation methods are used to assess the functionality of dogs with stifle problems. Neither validity nor sensitivity of these methods has been investigated. This study aimed to determine the most valid and sensitive physiotherapeutic evaluation methods for assessing functional capacity in hind limbs of dogs with stifle problems and to serve as a basis for developing an indexed test for these dogs. A group of 43 dogs with unilateral surgically treated cranial cruciate ligament deficiency and osteoarthritic findings was used to test different physiotherapeutic evaluation methods. Twenty-one healthy dogs served as the control group and were used to determine normal variation in static weight bearing and range of motion. The protocol consisted of 14 different evaluation methods: visual evaluation of lameness, visual evaluation of diagonal movement, visual evaluation of functional active range of motion and difference in thrust of hind limbs via functional tests (sit-to-move and lie-to-move), movement in stairs, evaluation of hind limb muscle atrophy, manual evaluation of hind limb static weight bearing, quantitative measurement of static weight bearing of hind limbs with bathroom scales, and passive range of motion of hind limb stifle (flexion and extension) and tarsal (flexion and extension) joints using a universal goniometer. The results were compared with those from an orthopaedic examination, force plate analysis, radiographic evaluation, and a conclusive assessment. Congruity of the methods was assessed with a combination of three statistical approaches (Fisher's exact test and two differently calculated proportions of agreeing observations), and the components were ranked from best to worst. Sensitivities of all of the physiotherapeutic evaluation methods against each standard were calculated. Evaluation of asymmetry in a sitting and lying position, assessment of muscle atrophy, manual and measured static weight bearing, and measurement of stifle passive range of motion were the most valid and sensitive physiotherapeutic evaluation methods. Ranking of the various physiotherapeutic evaluation methods was accomplished. Several of these methods can be considered valid and sensitive when examining the functionality of dogs with stifle problems.

  20. Ranking of physiotherapeutic evaluation methods as outcome measures of stifle functionality in dogs

    PubMed Central

    2013-01-01

    Background Various physiotherapeutic evaluation methods are used to assess the functionality of dogs with stifle problems. Neither validity nor sensitivity of these methods has been investigated. This study aimed to determine the most valid and sensitive physiotherapeutic evaluation methods for assessing functional capacity in hind limbs of dogs with stifle problems and to serve as a basis for developing an indexed test for these dogs. A group of 43 dogs with unilateral surgically treated cranial cruciate ligament deficiency and osteoarthritic findings was used to test different physiotherapeutic evaluation methods. Twenty-one healthy dogs served as the control group and were used to determine normal variation in static weight bearing and range of motion. The protocol consisted of 14 different evaluation methods: visual evaluation of lameness, visual evaluation of diagonal movement, visual evaluation of functional active range of motion and difference in thrust of hind limbs via functional tests (sit-to-move and lie-to-move), movement in stairs, evaluation of hind limb muscle atrophy, manual evaluation of hind limb static weight bearing, quantitative measurement of static weight bearing of hind limbs with bathroom scales, and passive range of motion of hind limb stifle (flexion and extension) and tarsal (flexion and extension) joints using a universal goniometer. The results were compared with those from an orthopaedic examination, force plate analysis, radiographic evaluation, and a conclusive assessment. Congruity of the methods was assessed with a combination of three statistical approaches (Fisher’s exact test and two differently calculated proportions of agreeing observations), and the components were ranked from best to worst. Sensitivities of all of the physiotherapeutic evaluation methods against each standard were calculated. Results Evaluation of asymmetry in a sitting and lying position, assessment of muscle atrophy, manual and measured static weight bearing, and measurement of stifle passive range of motion were the most valid and sensitive physiotherapeutic evaluation methods. Conclusions Ranking of the various physiotherapeutic evaluation methods was accomplished. Several of these methods can be considered valid and sensitive when examining the functionality of dogs with stifle problems. PMID:23566355

  1. Identification of material constants for piezoelectric transformers by three-dimensional, finite-element method and a design-sensitivity method.

    PubMed

    Joo, Hyun-Woo; Lee, Chang-Hwan; Rho, Jong-Seok; Jung, Hyun-Kyo

    2003-08-01

    In this paper, an inversion scheme for the piezoelectric constants of piezoelectric transformers is proposed. The impedance of piezoelectric transducers is calculated using a three-dimensional finite element method, and the validity of this calculation is confirmed experimentally. The effects of the material coefficients on piezoelectric transformers are investigated numerically. Six material coefficient variables for piezoelectric transformers were selected, and a design sensitivity method was adopted as the inversion scheme. The validity of the proposed method was confirmed by step-up ratio calculations. The proposed method is applied to the analysis of a sample piezoelectric transformer, and its resonance characteristics are obtained by a numerically combined equivalent-circuit method.

  2. FACTORS AFFECTING SENSITIVITY OF CHEMICAL AND ECOLOGICAL RESPONSES OF MARINE EMBAYMEMTS TO NITROGEN LOADING

    EPA Science Inventory

    This paper summarizes an ongoing examination of the primary factors that affect sensitivity of marine embayment responses to nitrogen loading. Included is a discussion of two methods for using these factors: classification of embayments into discrete sensitivity classes and norma...

  3. Method for immunodiagnostic detection of dioxins at low concentrations

    DOEpatents

    Vanderlaan, Martin; Stanker, Larry H.; Watkins, Bruce E.; Petrovic, Peter; Gorbach, Siegbert

    1995-01-01

    A method is described for the use of monoclonal antibodies in a sensitive immunoassay for halogenated dioxins and dibenzofurans in industrial samples which contain impurities. Appropriate sample preparation and selective enzyme amplification of the immunoassay sensitivity permits detection of dioxin contaminants in industrial or environmental samples at concentrations in the range of a few parts per trillion.

  4. OPTOELECTRONICS, FIBER OPTICS, AND OTHER ASPECTS OF QUANTUM ELECTRONICS: Interference-threshold storage of optical data

    NASA Astrophysics Data System (ADS)

    Efimkov, V. F.; Zubarev, I. G.; Kolobrodov, V. V.; Sobolev, V. B.

    1989-08-01

    A method for the determination of the spatial characteristics of a laser beam is proposed and implemented. This method is based on the interaction of an interference field of two laser beams, which are spatially similar to the one being investigated, with a light-sensitive material characterized by a sensitivity threshold.

  5. Pupil Researchers Generation X: Educating Pupils as Active Participants--An Investigation into Gathering Sensitive Information from Early Adolescents

    ERIC Educational Resources Information Center

    Symonds, Jenny E.

    2008-01-01

    Developmentally appropriate research techniques were uncovered by involving ten Year 7 pupils as researchers in a four-hour workshop that investigated the effectiveness of multiple methods in gathering sensitive information from early adolescents. The pupils learned about, tried and evaluated the methods of generating interview questions, peer and…

  6. Methods for studying sensitive family topics.

    PubMed

    Gelles, Richard J

    1978-07-01

    Researchers on sensitive topics in family relations face a number of obstacles, due to the private nature of the family and to ethical constraints on the study of humans. Difficulties in locating subjects, engaging their cooperation, and obtaining valid and reliable data are discussed, and methods are proposed for pursuing research on these important but frequently taboo topics.

  7. Preparation and anti-bacterial properties of a temperature sensitive gel containing silver nanoparticles

    USDA-ARS?s Scientific Manuscript database

    The purpose of this study was to prepare a novel temperature-sensitive spray gel containing silver nanoparticles and investigate its anti-bacterial properties in vitro. Methods: The aqueous complex gel was prepared by Pluronic F127 (18-22%) and Pluronic F68 (3-9%) through a cold method to obtain a p...

  8. Diagnostic Accuracy and Cost-Effectiveness of Alternative Methods for Detection of Soil-Transmitted Helminths in a Post-Treatment Setting in Western Kenya

    PubMed Central

    Kepha, Stella; Kihara, Jimmy H.; Njenga, Sammy M.; Pullan, Rachel L.; Brooker, Simon J.

    2014-01-01

    Objectives This study evaluates the diagnostic accuracy and cost-effectiveness of the Kato-Katz and Mini-FLOTAC methods for detection of soil-transmitted helminths (STH) in a post-treatment setting in western Kenya. A cost analysis also explores the cost implications of collecting samples during school surveys when compared to household surveys. Methods Stool samples were collected from children (n = 652) attending 18 schools in Bungoma County and diagnosed by the Kato-Katz and Mini-FLOTAC coprological methods. Sensitivity and additional diagnostic performance measures were analyzed using Bayesian latent class modeling. Financial and economic costs were calculated for all survey and diagnostic activities, and cost per child tested, cost per case detected and cost per STH infection correctly classified were estimated. A sensitivity analysis was conducted to assess the impact of various survey parameters on cost estimates. Results Both diagnostic methods exhibited comparable sensitivity for detection of any STH species over single and consecutive day sampling: 52.0% for single day Kato-Katz; 49.1% for single-day Mini-FLOTAC; 76.9% for consecutive day Kato-Katz; and 74.1% for consecutive day Mini-FLOTAC. Diagnostic performance did not differ significantly between methods for the different STH species. Use of Kato-Katz with school-based sampling was the lowest cost scenario for cost per child tested ($10.14) and cost per case correctly classified ($12.84). Cost per case detected was lowest for Kato-Katz used in community-based sampling ($128.24). Sensitivity analysis revealed the cost of case detection for any STH decreased non-linearly as prevalence rates increased and was influenced by the number of samples collected. Conclusions The Kato-Katz method was comparable in diagnostic sensitivity to the Mini-FLOTAC method, but afforded greater cost-effectiveness. Future work is required to evaluate the cost-effectiveness of STH surveillance in different settings. PMID:24810593
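
    The reported cost metrics are simple ratios once the survey cost, the number of children, the prevalence and the test's accuracy are fixed; a hedged sketch with placeholder inputs, not the study's underlying cost model:

      def survey_cost_metrics(total_cost, n_children, prevalence, sensitivity, specificity):
          """Cost per child tested, per true case detected, and per child correctly
          classified (true positives + true negatives). All inputs are placeholders."""
          cases_detected = n_children * prevalence * sensitivity
          correctly_classified = n_children * (prevalence * sensitivity +
                                               (1 - prevalence) * specificity)
          return {"per_child": total_cost / n_children,
                  "per_case_detected": total_cost / cases_detected,
                  "per_correct_classification": total_cost / correctly_classified}

      # illustrative numbers only (e.g. a single-day Kato-Katz-style sensitivity of ~52%);
      # raising prevalence shows the non-linear drop in cost per case the study describes
      print(survey_cost_metrics(total_cost=6600, n_children=652,
                                prevalence=0.30, sensitivity=0.52, specificity=0.95))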

  9. A Systematic Comparison of Linear Regression-Based Statistical Methods to Assess Exposome-Health Associations.

    PubMed

    Agier, Lydiane; Portengen, Lützen; Chadeau-Hyam, Marc; Basagaña, Xavier; Giorgis-Allemand, Lise; Siroux, Valérie; Robinson, Oliver; Vlaanderen, Jelle; González, Juan R; Nieuwenhuijsen, Mark J; Vineis, Paolo; Vrijheid, Martine; Slama, Rémy; Vermeulen, Roel

    2016-12-01

    The exposome constitutes a promising framework to improve understanding of the effects of environmental exposures on health by explicitly considering multiple testing and avoiding selective reporting. However, exposome studies are challenged by the simultaneous consideration of many correlated exposures. We compared the performances of linear regression-based statistical methods in assessing exposome-health associations. In a simulation study, we generated 237 exposure covariates with a realistic correlation structure and with a health outcome linearly related to 0 to 25 of these covariates. Statistical methods were compared primarily in terms of false discovery proportion (FDP) and sensitivity. On average over all simulation settings, the elastic net and sparse partial least-squares regression showed a sensitivity of 76% and an FDP of 44%; Graphical Unit Evolutionary Stochastic Search (GUESS) and the deletion/substitution/addition (DSA) algorithm revealed a sensitivity of 81% and an FDP of 34%. The environment-wide association study (EWAS) underperformed these methods in terms of FDP (average FDP, 86%) despite a higher sensitivity. Performances decreased considerably when assuming an exposome exposure matrix with high levels of correlation between covariates. Correlation between exposures is a challenge for exposome research, and the statistical methods investigated in this study were limited in their ability to efficiently differentiate true predictors from correlated covariates in a realistic exposome context. Although GUESS and DSA provided a marginally better balance between sensitivity and FDP, they did not outperform the other multivariate methods across all scenarios and properties examined, and computational complexity and flexibility should also be considered when choosing between these methods. Citation: Agier L, Portengen L, Chadeau-Hyam M, Basagaña X, Giorgis-Allemand L, Siroux V, Robinson O, Vlaanderen J, González JR, Nieuwenhuijsen MJ, Vineis P, Vrijheid M, Slama R, Vermeulen R. 2016. A systematic comparison of linear regression-based statistical methods to assess exposome-health associations. Environ Health Perspect 124:1848-1856; http://dx.doi.org/10.1289/EHP172.
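
    A small sketch of this kind of benchmark (assuming scikit-learn is available; the study's own simulation design and estimators differ in detail): simulate correlated exposures, a sparse linear outcome, select exposures with an elastic net, and score the selection by sensitivity and false discovery proportion.

      import numpy as np
      from sklearn.linear_model import ElasticNetCV

      rng = np.random.default_rng(0)
      n, p, k = 500, 100, 10                        # samples, exposures, true predictors
      latent = rng.normal(size=(n, 1))              # shared factor induces correlation
      X = 0.6 * latent + rng.normal(size=(n, p))
      beta = np.zeros(p)
      true_idx = rng.choice(p, k, replace=False)
      beta[true_idx] = rng.uniform(0.3, 0.6, k) * rng.choice([-1, 1], k)
      y = X @ beta + rng.normal(size=n)

      model = ElasticNetCV(l1_ratio=0.5, cv=5, random_state=0).fit(X, y)
      selected = np.flatnonzero(np.abs(model.coef_) > 1e-8)

      tp = np.intersect1d(selected, true_idx).size
      sensitivity = tp / k                                   # share of true predictors recovered
      fdp = (selected.size - tp) / max(selected.size, 1)     # false discovery proportion
      print(f"sensitivity={sensitivity:.2f}  FDP={fdp:.2f}  selected={selected.size}")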

  10. Detecting long-term growth trends using tree rings: a critical evaluation of methods.

    PubMed

    Peters, Richard L; Groenendijk, Peter; Vlam, Mart; Zuidema, Pieter A

    2015-05-01

    Tree-ring analysis is often used to assess long-term trends in tree growth. A variety of growth-trend detection methods (GDMs) exist to disentangle age/size trends in growth from long-term growth changes. However, these detrending methods strongly differ in approach, with possible implications for their output. Here, we critically evaluate the consistency, sensitivity, reliability and accuracy of the four most widely used GDMs: conservative detrending (CD) applies mathematical functions to correct for decreasing ring widths with age; basal area correction (BAC) transforms diameter into basal area growth; regional curve standardization (RCS) detrends individual tree-ring series using average age/size trends; and size class isolation (SCI) calculates growth trends within separate size classes. First, we evaluated whether these GDMs produce consistent results applied to an empirical tree-ring data set of Melia azedarach, a tropical tree species from Thailand. Three GDMs yielded similar results - a growth decline over time - but the widely used CD method did not detect any change. Second, we assessed the sensitivity (probability of correct growth-trend detection), reliability (100% minus probability of detecting false trends) and accuracy (whether the strength of imposed trends is correctly detected) of these GDMs, by applying them to simulated growth trajectories with different imposed trends: no trend, strong trends (-6% and +6% change per decade) and weak trends (-2%, +2%). All methods except CD showed high sensitivity, reliability and accuracy to detect strong imposed trends. However, these were considerably lower in the weak or no-trend scenarios. BAC showed good sensitivity and accuracy, but low reliability, indicating uncertainty of trend detection using this method. Our study reveals that the choice of GDM influences the results of growth-trend studies. We recommend applying multiple methods when analysing trends and encourage performing sensitivity and reliability analyses. Finally, we recommend SCI and RCS, as these methods showed the highest reliability to detect long-term growth trends. © 2014 John Wiley & Sons Ltd.
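
    Of the four GDMs, basal area correction is the simplest to state: ring widths are converted to annual basal-area increments, which removes the purely geometric decline of ring width with increasing stem radius. A minimal numpy sketch (assuming widths are measured from pith to bark, in cm):

      import numpy as np

      def basal_area_increment(ring_widths_cm):
          """Annual basal-area increments BAI_t = pi*(r_t^2 - r_{t-1}^2) from a
          pith-to-bark series of ring widths (cm), returned in cm^2 per year."""
          radii = np.cumsum(ring_widths_cm)             # cumulative stem radius each year
          prev = np.concatenate(([0.0], radii[:-1]))
          return np.pi * (radii**2 - prev**2)

      widths = np.array([0.40, 0.38, 0.35, 0.33, 0.30])   # declining ring widths...
      print(basal_area_increment(widths))                 # ...can still give a rising BAI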

  11. Global Sensitivity Analysis for Identifying Important Parameters of Nitrogen Nitrification and Denitrification under Model and Scenario Uncertainties

    NASA Astrophysics Data System (ADS)

    Ye, M.; Chen, Z.; Shi, L.; Zhu, Y.; Yang, J.

    2017-12-01

    Nitrogen reactive transport modeling is subject to uncertainty in model parameters, structures, and scenarios. While global sensitivity analysis is a vital tool for identifying the parameters important to nitrogen reactive transport, conventional global sensitivity analysis only considers parametric uncertainty. This may result in inaccurate selection of important parameters, because parameter importance may vary under different models and modeling scenarios. By using a recently developed variance-based global sensitivity analysis method, this paper identifies important parameters with simultaneous consideration of parametric uncertainty, model uncertainty, and scenario uncertainty. In a numerical example of nitrogen reactive transport modeling, a combination of three scenarios of soil temperature and two scenarios of soil moisture leads to a total of six scenarios. Four alternative models are used to evaluate reduction functions used for calculating actual rates of nitrification and denitrification. The model uncertainty is tangled with scenario uncertainty, as the reduction functions depend on soil temperature and moisture content. The results of sensitivity analysis show that parameter importance varies substantially between different models and modeling scenarios, which may lead to inaccurate selection of important parameters if model and scenario uncertainties are not considered. This problem is avoided by using the new method of sensitivity analysis in the context of model averaging and scenario averaging. The new method of sensitivity analysis can be applied to other problems of contaminant transport modeling when model uncertainty and/or scenario uncertainty are present.
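
    A toy sketch of the averaging idea (not the study's estimator or reaction model): compute a first-order, variance-based index Var(E[Y|X_i])/Var(Y) within each model/scenario combination using a crude binning estimator, then weight the per-scenario indices by scenario (or model) probabilities.

      import numpy as np

      def first_order_index(x, y, bins=20):
          """Crude binning estimator of S = Var(E[Y|X]) / Var(Y)."""
          edges = np.quantile(x, np.linspace(0, 1, bins + 1))
          idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
          cond_means = np.array([y[idx == b].mean() for b in range(bins)])
          counts = np.bincount(idx, minlength=bins)
          var_cond_mean = np.average((cond_means - y.mean())**2, weights=counts)
          return var_cond_mean / y.var()

      def averaged_index(samples, weights):
          """Scenario/model-averaged importance: sum_k w_k * S_i^(k)."""
          return sum(w * first_order_index(x, y) for (x, y), w in zip(samples, weights))

      rng = np.random.default_rng(0)
      scenarios = []
      for temp_gain in (0.5, 1.5):                  # toy stand-in for two temperature scenarios
          x1, x2 = rng.uniform(size=(2, 5000))
          y = temp_gain * x1 + 0.3 * x2 + 0.05 * rng.normal(size=5000)
          scenarios.append((x1, y))
      # importance of x1 averaged over equally weighted scenarios
      print(averaged_index(scenarios, weights=[0.5, 0.5]))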

  12. Sensitivity analysis and optimization method for the fabrication of one-dimensional beam-splitting phase gratings

    PubMed Central

    Pacheco, Shaun; Brand, Jonathan F.; Zaverton, Melissa; Milster, Tom; Liang, Rongguang

    2015-01-01

    A method to design one-dimensional beam-splitting phase gratings with low sensitivity to fabrication errors is described. The method optimizes the phase function of a grating by minimizing the integrated variance of the energy of each output beam over a range of fabrication errors. Numerical results for three 1x9 beam-splitting phase gratings are given. Two optimized gratings with low sensitivity to fabrication errors were compared with a grating designed for optimal efficiency. These three gratings were fabricated using gray-scale photolithography. The standard deviations of the 9 outgoing beam energies in the optimized gratings were 2.3 and 3.4 times lower than that of the optimal-efficiency grating. PMID:25969268
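
    A hedged sketch of the objective being minimized (not the authors' optimizer): for a periodic phase profile, the diffraction-order efficiencies are the squared Fourier coefficients of exp(i*phase), and robustness can be scored as the variance of the target-order energies averaged over a range of etch-depth (scaling) errors.

      import numpy as np

      def order_efficiencies(phase, orders):
          """Efficiencies of selected diffraction orders for one grating period,
          eta_m = |c_m|^2 with c_m the Fourier coefficients of exp(i*phase(x))."""
          c = np.fft.fft(np.exp(1j * phase)) / phase.size
          return np.abs(c[np.asarray(orders)])**2       # negative indices wrap, as intended

      def integrated_variance(phase, orders, depth_errors):
          """Mean, over fabrication errors, of the variance of the output-beam energies;
          an etch-depth error is modelled here as a simple scaling of the phase."""
          return np.mean([order_efficiencies((1 + e) * phase, orders).var()
                          for e in depth_errors])

      x = np.linspace(0, 2 * np.pi, 512, endpoint=False)
      phase = 2.4 * np.sin(x)                           # toy 1-D phase profile, not an optimized 1x9 design
      orders = list(range(-4, 5))                       # the nine beams of a 1x9 splitter
      print(integrated_variance(phase, orders, depth_errors=np.linspace(-0.05, 0.05, 11)))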

  13. Determining Performance Acceptability of Electrochemical Oxygen Sensors

    NASA Technical Reports Server (NTRS)

    Gonzales, Daniel

    2012-01-01

    A method has been developed to screen commercial electrochemical oxygen sensors to reduce the failure rate. There are three aspects to the method: First, the sensitivity over time (several days) can be measured and the rate of change of the sensitivity can be used to predict sensor failure. Second, an improvement to this method would be to store the sensors in an oxygen-free (e.g., nitrogen) environment and intermittently measure the sensitivity over time (several days) to accomplish the same result while preserving the sensor lifetime by limiting consumption of the electrode. Third, the second time derivative of the sensor response over time can be used to determine the point in time at which the sensors are sufficiently stable for use.
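
    A minimal numpy sketch of the third aspect (the tolerance, the data and the settling model are placeholders): the sensor is declared stable at the first time from which the second time derivative of its sensitivity stays below a tolerance.

      import numpy as np

      def stabilization_time(t_days, sensitivity, tol=1e-3):
          """Return the first time from which |d2S/dt2| stays below tol, else None."""
          d2 = np.gradient(np.gradient(sensitivity, t_days), t_days)
          ok = np.abs(d2) < tol
          for i in range(len(ok)):
              if ok[i:].all():              # stable from here onward
                  return t_days[i]
          return None

      t = np.linspace(0, 10, 41)                       # days
      s = 0.80 + 0.15 * np.exp(-t / 2.0)               # toy settling response of the sensitivity
      print(stabilization_time(t, s))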

  14. Least Squares Shadowing sensitivity analysis of chaotic limit cycle oscillations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Qiqi, E-mail: qiqi@mit.edu; Hu, Rui, E-mail: hurui@mit.edu; Blonigan, Patrick, E-mail: blonigan@mit.edu

    2014-06-15

    The adjoint method, among other sensitivity analysis methods, can fail in chaotic dynamical systems. The result from these methods can be too large, often by orders of magnitude, when the result is the derivative of a long time averaged quantity. This failure is known to be caused by ill-conditioned initial value problems. This paper overcomes this failure by replacing the initial value problem with the well-conditioned “least squares shadowing (LSS) problem”. The LSS problem is then linearized in our sensitivity analysis algorithm, which computes a derivative that converges to the derivative of the infinitely long time average. We demonstrate our algorithm in several dynamical systems exhibiting both periodic and chaotic oscillations.

  15. Experiences on p-Version Time-Discontinuous Galerkin's Method for Nonlinear Heat Transfer Analysis and Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Hou, Gene

    2004-01-01

    The focus of this research is on the development of analysis and sensitivity analysis equations for nonlinear, transient heat transfer problems modeled by a p-version, time-discontinuous finite element approximation. The resulting matrix equation of the state equation is simply in the form of A(x)x = c, representing a single-step, time-marching scheme. The Newton-Raphson method is used to solve the nonlinear equation. Examples are first provided to demonstrate the accuracy characteristics of the resultant finite element approximation. A direct differentiation approach is then used to compute the thermal sensitivities of a nonlinear heat transfer problem. The report shows that only minimal coding effort is required to enhance the analysis code with the sensitivity analysis capability.
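
    As a sketch of the direct differentiation step described above (notation mine, not necessarily the report's): writing the discrete state equation as a residual and differentiating with respect to a design variable b gives

      R(x; b) \equiv A(x)\,x - c(b) = 0,
      \qquad
      \frac{\partial R}{\partial x}\,\frac{dx}{db} = -\frac{\partial R}{\partial b},

    so the sensitivity solve reuses the same tangent (Jacobian) matrix already assembled and factorized for the Newton-Raphson iteration, which is consistent with the report's observation that only minimal extra coding is needed on top of the analysis code.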

  16. [Validation of the nutritional index in Mexican pre-teens with the sensitivity and specificity method].

    PubMed

    Saucedo-Molina, T J; Gómez-Peresmitré, G

    1998-01-01

    To determine the diagnostic validity of the nutritional index (NI) in a sample of Mexican preadolescents. A total of 256 preadolescents, between 10 and 12 years old, male and female, students from Mexico City, were used to establish the diagnostic validity of NI using the sensitivity and specificity method. The findings show that the conventional NI cut-off points showed good sensitivity and specificity for the diagnosis of low weight, normality and obesity but not for overweight. When the cut-off points of NI were normalized, the sensitivity, specificity and prediction potency values were more suitable in all categories. When working with preadolescents, it is better to use the new cut-off points of NI, to obtain more reliable diagnosis.

  17. DELAYED HYPERSENSITIVITY

    PubMed Central

    Uhr, Jonathan W.; Salvin, S. B.; Pappenheimer, A. M.

    1957-01-01

    A general method for induction of the delayed hypersensitive state directed against single protein antigens is described. The method consists of intradermal injection of minute amounts of washed immune precipitates containing the antigen in question. Provided the specific precipitates are formed in the region of antibody excess, maximal sensitivity develops at least 2 to 3 weeks before detectable circulating antibody is formed in guinea pigs against the sensitizing antigen. Neither adjuvant nor killed acid-fast bacteria are required for induction of the delayed hypersensitive state although the degree of sensitization is considerably increased when the sensitizing material is incorporated in Freund's complete adjuvant. Characteristics of the "delayed" as opposed to the "immediate" hypersensitive states in the guinea pig are described and implications of the findings are discussed. PMID:13385403

  18. Gas sensitive materials for gas detection and methods of making

    DOEpatents

    Trakhtenberg, Leonid Israilevich; Gerasimov, Genrikh Nikolaevich; Gromov, Vladimir Fedorovich; Rozenberg, Valeriya Isaakovna

    2014-07-15

    A gas sensitive material comprising SnO2 nanocrystals doped with In2O3 and an oxide of a platinum group metal, and a method of making the same. The platinum group metal is preferably Pd, but may also include Pt, Ru, Ir, and combinations thereof. The SnO2 nanocrystals have a specific surface area of 7 m2/g or greater, preferably about 20 m2/g, and a mean particle size of between about 10 nm and about 100 nm, preferably about 40 nm. A gas detection device is made from the gas sensitive material deposited on a substrate, with the gas sensitive material configured as part of a current measuring circuit in communication with a heat source.

  19. Sensitivity curves for searches for gravitational-wave backgrounds

    NASA Astrophysics Data System (ADS)

    Thrane, Eric; Romano, Joseph D.

    2013-12-01

    We propose a graphical representation of detector sensitivity curves for stochastic gravitational-wave backgrounds that takes into account the increase in sensitivity that comes from integrating over frequency in addition to integrating over time. This method is valid for backgrounds that have a power-law spectrum in the analysis band. We call these graphs “power-law integrated curves.” For simplicity, we consider cross-correlation searches for unpolarized and isotropic stochastic backgrounds using two or more detectors. We apply our method to construct power-law integrated sensitivity curves for second-generation ground-based detectors such as Advanced LIGO, space-based detectors such as LISA and the Big Bang Observer, and timing residuals from a pulsar timing array. The code used to produce these plots is available at https://dcc.ligo.org/LIGO-P1300115/public for researchers interested in constructing similar sensitivity curves.
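
    A simplified sketch of the construction (following the general recipe described above, but assuming a single effective noise spectrum Omega_eff(f), an assumed observation time and threshold SNR, rather than a full cross-correlation calculation for a specific detector pair): for each power-law index, find the amplitude that reaches the threshold SNR after integrating over frequency and time, then take the envelope over all indices.

      import numpy as np

      def power_law_integrated_curve(f, omega_eff, T_obs, snr_thr=1.0,
                                     betas=np.arange(-8, 8.5, 0.5), f_ref=25.0):
          """Envelope over power-law backgrounds Omega_gw = amp*(f/f_ref)**beta that each
          give SNR = snr_thr, with SNR = sqrt(2*T*int df [Omega_gw(f)/Omega_eff(f)]^2)."""
          curves = []
          for beta in betas:
              integrand = (f / f_ref) ** (2 * beta) / omega_eff**2
              amp = snr_thr / np.sqrt(2 * T_obs * np.trapz(integrand, f))
              curves.append(amp * (f / f_ref) ** beta)
          return np.max(curves, axis=0)           # the "power-law integrated" envelope

      f = np.logspace(1, 3, 400)                                      # 10 Hz - 1 kHz
      omega_eff = 1e-9 * (1 + (f / 60.0)**4) * (1 + (20.0 / f)**4)    # toy noise shape, not a real detector
      pi_curve = power_law_integrated_curve(f, omega_eff, T_obs=3.15e7)   # ~1 yr of data, in seconds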

  20. Accurate evaluation of sensitivity for calibration between a LiDAR and a panoramic camera used for remote sensing

    NASA Astrophysics Data System (ADS)

    García-Moreno, Angel-Iván; González-Barbosa, José-Joel; Ramírez-Pedraza, Alfonso; Hurtado-Ramos, Juan B.; Ornelas-Rodriguez, Francisco-Javier

    2016-04-01

    Computer-based reconstruction models can be used to approximate urban environments. These models are usually based on several mathematical approximations and the usage of different sensors, which implies dependency on many variables. The sensitivity analysis presented in this paper is used to weigh the relative importance of each uncertainty contributor into the calibration of a panoramic camera-LiDAR system. Both sensors are used for three-dimensional urban reconstruction. Simulated and experimental tests were conducted. For the simulated tests we analyze and compare the calibration parameters using the Monte Carlo and Latin hypercube sampling techniques. Sensitivity analysis for each variable involved into the calibration was computed by the Sobol method, which is based on the analysis of the variance breakdown, and the Fourier amplitude sensitivity test method, which is based on Fourier's analysis. Sensitivity analysis is an essential tool in simulation modeling and for performing error propagation assessments.
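
    The two sampling schemes compared in the simulated tests differ only in how the calibration-parameter space is filled; a numpy-only sketch with placeholder parameter ranges:

      import numpy as np

      def latin_hypercube(n_samples, bounds, rng):
          """One stratified sample per slice: each parameter's range is cut into
          n_samples equal slices, sampled once each, then the slices are shuffled."""
          d = len(bounds)
          u = (rng.random((n_samples, d)) + np.arange(n_samples)[:, None]) / n_samples
          for j in range(d):
              rng.shuffle(u[:, j])
          lo, hi = np.array(bounds).T
          return lo + u * (hi - lo)

      rng = np.random.default_rng(0)
      bounds = [(-0.1, 0.1), (-0.1, 0.1), (0.9, 1.1)]   # e.g. two angles and a scale, placeholders
      lhs = latin_hypercube(200, bounds, rng)
      mc = np.array(bounds)[:, 0] + rng.random((200, 3)) * np.ptp(bounds, axis=1)
      # LHS covers each 1-D marginal evenly; plain Monte Carlo can leave gaps
      print(np.histogram(lhs[:, 0], bins=10)[0])        # ~20 samples per bin
      print(np.histogram(mc[:, 0], bins=10)[0])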

  1. Does the Irlen[R] Method Bring about an Increase in Reading Scores on a Specific Test of Reading for Students Found to Have Scotopic Sensitivity Syndrome?

    ERIC Educational Resources Information Center

    Faraci, Marie Elaine

    2009-01-01

    The problem. The purpose of this study was to examine the effect of the Irlen[R] method's use of colored overlays on the reading achievement of 3rd-grade students who were identified as having Scotopic Sensitivity Syndrome/Irlen[R] Syndrome. Method. This was a true experimental, pre-test, post-test design. The Irlen[R] overlay, either present…

  2. [Sensitivity analysis of AnnAGNPS model's hydrology and water quality parameters based on the perturbation analysis method].

    PubMed

    Xi, Qing; Li, Zhao-Fu; Luo, Chuan

    2014-05-01

    Sensitivity analysis of hydrology and water quality parameters is of great significance for integrated model construction and application. Based on the AnnAGNPS model's mechanisms, 31 parameters in four major categories (terrain, hydrology and meteorology, field management, and soil) were selected for sensitivity analysis in the Zhongtian River watershed, a typical small watershed of the hilly region around Taihu Lake, and the perturbation method was then used to evaluate the sensitivity of the parameters to the model's simulation results. The results showed that, among the 11 terrain parameters, LS was sensitive to all the model results, while RMN, RS and RVC were generally or less sensitive to the sediment output but insensitive to the remaining results. For the hydrometeorological parameters, CN was more sensitive to runoff and sediment and relatively sensitive for the remaining results. Among the field management, fertilizer and vegetation parameters, CCC, CRM and RR were less sensitive to sediment and particulate pollutants, and the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive for nitrogen and phosphorus nutrients. For the soil parameters, K was quite sensitive to all the results except runoff, while the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were less sensitive to the corresponding results. The simulation and verification results for runoff in the Zhongtian watershed showed good accuracy, with deviations of less than 10% during 2005-2010. These results provide a direct reference for AnnAGNPS parameter selection and calibration; the runoff simulation results for the study area also show that the sensitivity analysis is practicable for parameter adjustment, demonstrate the model's adaptability to hydrological simulation in the hilly region of the Taihu Lake basin, and provide a reference for the model's wider application in China.
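
    A minimal sketch of the perturbation approach itself (the watershed model below is a toy stand-in, not AnnAGNPS): each parameter is perturbed by a small relative amount, one at a time, and the sensitivity index is the relative change of each output divided by the relative change of the parameter.

      import numpy as np

      def perturbation_sensitivity(model, params, delta=0.10):
          """Local, one-at-a-time sensitivity index for each parameter and output:
              S_ij = ((Y_j(p_i*(1+delta)) - Y_j(p)) / Y_j(p)) / delta"""
          base = np.asarray(model(params), float)
          S = {}
          for name, value in params.items():
              perturbed = dict(params, **{name: value * (1 + delta)})
              S[name] = (np.asarray(model(perturbed), float) - base) / base / delta
          return S

      def toy_model(p):
          """Placeholder watershed response returning (runoff, sediment)."""
          runoff = p["CN"]**2 / 120.0
          sediment = 0.02 * runoff * p["LS"] * p["K"]
          return runoff, sediment

      print(perturbation_sensitivity(toy_model, {"CN": 75.0, "LS": 1.2, "K": 0.3}))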

  3. Species-specific diagnostic assays for Bonamia ostreae and B. exitiosa in European flat oyster Ostrea edulis: conventional, real-time and multiplex PCR.

    PubMed

    Ramilo, Andrea; Navas, J Ignacio; Villalba, Antonio; Abollo, Elvira

    2013-05-27

    Bonamia ostreae and B. exitiosa have caused mass mortalities of various oyster species around the world and co-occur in some European areas. The World Organisation for Animal Health (OIE) has included infections with both species in the list of notifiable diseases. However, official methods for species-specific diagnosis of either parasite have certain limitations. In this study, new species-specific conventional PCR (cPCR) and real-time PCR techniques were developed to diagnose each parasite species. Moreover, a multiplex PCR method was designed to detect both parasites in a single assay. The analytical sensitivity and specificity of each new method were evaluated. These new procedures were compared with 2 OIE-recommended methods, viz. standard histology and PCR-RFLP. The new procedures showed higher sensitivity than the OIE recommended ones for the diagnosis of both species. The sensitivity of tests with the new primers was higher using oyster gills and gonad tissue, rather than gills alone. The lack of a 'gold standard' prevented accurate estimation of sensitivity and specificity of the new methods. The implementation of statistical tools (maximum likelihood method) for the comparison of the diagnostic tests showed the possibility of false positives with the new procedures, although the absence of a gold standard precluded certainty. Nevertheless, all procedures showed negative results when used for the analysis of oysters from a Bonamia-free area.

  4. Evaluation performance of diagnostic methods of intestinal parasitosis in school age children in Ethiopia.

    PubMed

    Yimer, Mulat; Hailu, Tadesse; Mulu, Wondemagegn; Abera, Bayeh

    2015-12-26

    Although the sensitivity of the wet mount technique is questionable, it is the major diagnostic technique for routine diagnosis of intestinal parasitosis in Ethiopia. Therefore, the aim of this study was to evaluate the performance of diagnostic methods for intestinal parasitosis in school-age children in Ethiopia. A cross-sectional study was conducted from May to June 2013. A single stool sample per child was processed by the direct (wet mount), formol-ether concentration (FEC) and Kato-Katz methods. The sensitivity and negative predictive value (NPV) of the diagnostic tests were calculated against a composite "gold" standard (the combined result of the three methods). A total of 422 school-age children participated in this study. The prevalence of intestinal parasites was high (74.6%) with the Kato-Katz technique. The sensitivity of the wet mount, FEC and Kato-Katz tests against the gold standard was 48.9, 63.1 and 93.7%, respectively. The Kato-Katz technique also gave a better NPV, 80.4% (80.1-80.6), than the wet mount (33.7%) and FEC (41.3%) techniques. In this study, the Kato-Katz technique outperformed the other two methods, but the true values for sensitivity, specificity and other diagnostic measures are not known. Moreover, it is labor intensive and not easily accessible. Hence, it is preferable to use the FEC technique to complement the wet mount test.
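
    The reported figures come from 2x2 counts against the composite ("gold") standard; a short sketch with placeholder counts chosen only to roughly mirror the sensitivities quoted above (with a composite standard built from the tests themselves, false positives are zero by construction):

      def sensitivity_npv(tp, fn, tn, fp):
          """Sensitivity and negative predictive value against a reference standard."""
          sensitivity = tp / (tp + fn)      # infected children the test actually finds
          npv = tn / (tn + fn)              # negative results that are truly uninfected
          return sensitivity, npv

      # illustrative counts only, not the study's raw table: ~315 infected, 107 uninfected
      print("Kato-Katz :", sensitivity_npv(tp=295, fn=20, tn=107, fp=0))
      print("Wet mount :", sensitivity_npv(tp=154, fn=161, tn=107, fp=0))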

  5. Optimization of Trichomonas vaginalis Diagnosis during Pregnancy at a University Hospital, Argentina.

    PubMed

    Testardini, Pamela; Vaulet, María Lucía Gallo; Entrocassi, Andrea Carolina; Menghi, Claudia; Eliseht, Martha Cora; Gatta, Claudia; Losada, Mirta; Touzón, María Sol; Corominas, Ana; Vay, Carlos; Tatti, Silvio; Famiglietti, Angela; Fermepin, Marcelo Rodriguez; Perazzi, Beatriz

    2016-04-01

    The aim of this study was to evaluate different methods for Trichomonas vaginalis diagnosis during pregnancy in order to prevent maternal and perinatal complications. A total of 386 vaginal exudates from pregnant women were analyzed. T. vaginalis was investigated by 3 types of microscopic examination: direct wet mount with physiologic saline solution, prolonged May-Grunwald Giemsa (MGG) staining, and wet mount with the sodium-acetate-formalin (SAF)/methylene blue method. PCR for the 18S rRNA gene as well as culture in liquid medium were performed. The sensitivity and specificity of the microscopic examinations were evaluated considering culture positivity or the PCR technique as the gold standard. The frequency of T. vaginalis infection was 6.2% by culture and/or PCR, 5.2% by PCR, 4.7% by culture, 3.1% by the SAF/methylene blue method and 2.8% by direct wet smear and prolonged MGG staining. The sensitivities were 83.3%, 75.0%, 50.0%, and 45.8% for PCR, culture, the SAF/methylene blue method, and direct wet smear-prolonged MGG staining, respectively. The specificity was 100% for all the assessed methods. Microscopic examinations showed low sensitivity, mainly in asymptomatic pregnant patients. It is necessary to improve the detection of T. vaginalis using combined methods providing higher sensitivity, such as culture and PCR, mainly in asymptomatic pregnant patients, in order to prevent maternal and perinatal complications.

  6. Development and validation of a simple and sensitive method for quantification of levodopa and carbidopa in rat and monkey plasma using derivatization and UPLC-MS/MS.

    PubMed

    Junnotula, Venkatraman; Licea-Perez, Hermes

    2013-05-01

    A simple, selective, and sensitive quantitative method has been developed for the simultaneous determination of levodopa and carbidopa in rat and monkey plasma by protein precipitation using acetonitrile containing the derivatizing reagent fluorescamine. Derivatized products of levodopa and carbidopa were separated on a BEH C18 column (2.1 mm × 50 mm; 1.7 μm particle size) using ultra-high-performance liquid chromatography (UHPLC) without any further purification. Tandem mass spectrometry (MS/MS) was used for detection. The method was validated over the concentration ranges of 5-5000 ng/mL and 3-3000 ng/mL for levodopa and carbidopa, respectively, in rat and monkey plasma. Due to the poor stability of the investigated analytes in biological matrices, a mixture of sodium metabisulfite and hydrazine dihydrochloride was used as a stabilizer. This method was successfully utilized to support pharmacokinetic studies in both species. The results from assay validations and incurred-sample re-analysis show that the method is selective, sensitive and robust. To our knowledge, this is the first UHPLC-MS/MS based method that utilizes derivatization with fluorescamine and provides adequate sensitivity for both levodopa and carbidopa with 50 μL of sample and a run time of 3.5 min. Copyright © 2013 Elsevier B.V. All rights reserved.

  7. Highly sensitive catalytic spectrophotometric determination of ruthenium

    NASA Astrophysics Data System (ADS)

    Naik, Radhey M.; Srivastava, Abhishek; Prasad, Surendra

    2008-01-01

    A new and highly sensitive catalytic kinetic method (CKM) for the determination of ruthenium(III) has been established, based on its catalytic effect on the oxidation of L-phenylalanine (L-Pheala) by KMnO4 in highly alkaline medium. The reaction has been followed spectrophotometrically by measuring the decrease in absorbance at 526 nm. The proposed CKM is based on a fixed-time procedure under optimum reaction conditions and relies on the linear relationship between the change in absorbance (ΔAt) and the added amount of Ru(III) in the range of 0.101-2.526 ng ml-1. Under the optimum conditions, the sensitivity of the proposed method, i.e. the limit of detection corresponding to 5 min, is 0.08 ng ml-1 and decreases with increasing time of analysis. The method features good accuracy and reproducibility for ruthenium(III) determination. Ruthenium(III) has also been determined in the presence of several interfering and non-interfering cations, anions and polyaminocarboxylates; no foreign ion interfered in the determination of ruthenium(III) at up to 20-fold higher concentrations. In addition to the analysis of standard solutions, this method was successfully applied to the quantitative determination of ruthenium(III) in drinking water samples. The method is highly sensitive, selective and very stable. A review of recently published catalytic spectrophotometric methods for the determination of ruthenium(III) is also presented for comparison.
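
    A sketch of the fixed-time calibration arithmetic with made-up data spanning the quoted linear range (the detection-limit convention used here, 3*sigma_blank/slope, is a common one and not necessarily the authors' definition):

      import numpy as np

      # synthetic calibration points spanning the quoted range (0.101-2.526 ng/mL);
      # delta_A is the decrease in absorbance at 526 nm measured at the fixed time
      conc_ng_ml = np.array([0.101, 0.505, 1.010, 1.515, 2.020, 2.526])
      delta_A    = np.array([0.021, 0.098, 0.201, 0.298, 0.405, 0.502])   # made-up readings
      blank_sd   = 0.004                                                  # made-up blank noise

      slope, intercept = np.polyfit(conc_ng_ml, delta_A, 1)
      r = np.corrcoef(conc_ng_ml, delta_A)[0, 1]
      lod = 3 * blank_sd / slope       # common 3*sigma/slope convention, not necessarily the paper's

      print(f"slope={slope:.4f} per (ng/mL), intercept={intercept:.4f}, r={r:.4f}, LOD~{lod:.3f} ng/mL")
      unknown_dA = 0.250
      print("unknown sample:", (unknown_dA - intercept) / slope, "ng/mL")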

  8. Evaluation of ALK gene rearrangement in central nervous system metastases of non-small-cell lung cancer using two-step RT-PCR technique.

    PubMed

    Nicoś, M; Krawczyk, P; Wojas-Krawczyk, K; Bożyk, A; Jarosz, B; Sawicki, M; Trojanowski, T; Milanowski, J

    2017-12-01

    The RT-PCR technique has shown promising value as a pre-screening method for the detection of mRNA containing abnormal ALK sequences, but its sensitivity and specificity are still debated. Previously, we determined the incidence of ALK rearrangement in CNS metastases of NSCLC using IHC and FISH methods. We evaluated ALK gene rearrangement using a two-step RT-PCR method with the EML4-ALK Fusion Gene Detection Kit (Entrogen, USA). The studied group included 145 patients (45 females, 100 males) with CNS metastases of NSCLC and was heterogeneous in terms of histology and smoking status. Of the CNS metastases of NSCLC, 21% (30/145) showed the presence of mRNA containing abnormal ALK sequences. FISH and IHC tests confirmed the presence of ALK gene rearrangement and expression of abnormal ALK protein in seven patients with a positive RT-PCR result (4.8% of all patients, 20% of RT-PCR-positive patients). Compared to FISH analysis, the RT-PCR method achieved 100% sensitivity but only 82.7% specificity; the IHC method showed 100% sensitivity and 97.8% specificity compared to FISH. In comparison to IHC, RT-PCR showed identical sensitivity with a high number of false positive results. The utility of the RT-PCR technique for screening ALK abnormalities and for qualifying patients for molecularly targeted therapies needs further validation.

  9. [Sensitivity to disinfectants of Candida albicans strains isolated from the hospital environment].

    PubMed

    Tadeusiak, B

    1998-01-01

    In recent years an increase in the incidence of Candida infections, caused mainly by C. albicans strains, has been observed, especially in high-risk inpatients with neoplasms, decreased immunity, burns, and after treatment with multiple antibiotics. Candida organisms are particularly dangerous for newborns, being responsible for about 30% of septicaemia cases in newborns in intensive care units. Fungal infections can be endogenous in origin, but exogenous infection sources also occur in hospitals; the cause of the latter are errors in aseptic management and insufficiently disinfected medical instruments and equipment. The purpose of the study was a comparison of the sensitivity to disinfectants of two laboratory strains, C. albicans PZH and C. albicans ATCC 10231, used for the determination of the concentrations of the disinfectants used. In addition, this sensitivity was determined in 14 strains isolated from patients and one from the circuit supplying dialysis solution to an artificial kidney. The study was carried out by the qualitative suspension method, in which the cells in the fluid were subjected to the action of disinfectants, and by the carrier method, in which the cells of the microorganisms were present on the surface of metal cylinders. By the suspension method the sensitivity was determined to chloramine T in concentrations from 5.0% to 0.001%, formalin from 10.0% to 0.25%, glutaraldehyde from 2.0% to 0.1%, and Septyl from 3.5% to 0.25%. The exposure times were 5, 10, 15, 30 and 60 minutes. The tested strains differed in their sensitivity to the disinfectants used, with the greatest interstrain differences observed in the sensitivity to chloramine T. The highest concentrations were tolerated by the strains isolated from the patients and from the artificial kidney circuit as well as by the standard strain ATCC 10231. In the 10-minute exposure time accepted by us as the comparison standard, these strains were 200 times less susceptible to chloramine than the standard C. albicans PZH strain, while two strains isolated from the patients were tenfold as sensitive. The sensitivity to the remaining tested disinfectants showed less evident differences. The sensitivity of the strains from the patients to formalin was similar to that of the standard PZH strain. A similar sensitivity was found to Septyl, with the exception of the strain from the artificial kidney circuit, which was sevenfold less sensitive than the PZH strain. In the case of glutaraldehyde, 9 strains from the patients and the ATCC 10231 strain were two to four times less sensitive than the PZH strain. No cross-sensitivity or tolerance to the disinfectants was noted in the study. Both standard strains were similarly sensitive to formalin, but the ATCC 10231 strain was less sensitive to Septyl, glutaraldehyde and chloramine T. In the experiment by the carrier method, the effect of the surface on the action of the disinfectants was evident, particularly in the case of chloramine T. Even for sensitive strains, the disinfection parameters (concentration and exposure time) were significantly higher than in the suspension method. The least sensitive strains survived the effect of 5% chloramine during 2 hours of exposure. Septyl in the working concentration of 2.5% at a 10-minute exposure time disinfected all carriers with the exception of the one carrying the strain isolated from the artificial kidney circuit, which survived exposure to 15% Septyl for 10 minutes. The disinfectant Aldesan (2% glutaraldehyde) and 8% formalin killed all fungi within 10 minutes. The study shows that the sensitivity of C. albicans strains to disinfectants varies. For the assessment of the fungicidal action of disinfectants the standard strain ATCC 10231 should be used, since its sensitivity was similar to that of most strains from patients and medical equipment. (ABSTRACT TRUNCATED)

  10. Margin and sensitivity methods for security analysis of electric power systems

    NASA Astrophysics Data System (ADS)

    Greene, Scott L.

    Reliable operation of large-scale electric power networks requires that system voltages and currents stay within design limits. Operation beyond those limits can lead to equipment failures and blackouts. Security margins measure the amount by which system loads or power transfers can change before a security violation, such as an overloaded transmission line, is encountered. This thesis shows how to efficiently compute security margins defined by limiting events and instabilities, and the sensitivity of those margins with respect to assumptions, system parameters, operating policy, and transactions. Security margins to voltage collapse blackouts, oscillatory instability, generator limits, voltage constraints and line overloads are considered. The usefulness of computing the sensitivities of these margins with respect to interarea transfers, loading parameters, generator dispatch, transmission line parameters, and VAR support is established for networks as large as 1500 buses. The sensitivity formulas presented apply to a range of power system models. Conventional sensitivity formulas such as line distribution factors, outage distribution factors, participation factors and penalty factors are shown to be special cases of the general sensitivity formulas derived in this thesis. The sensitivity formulas readily accommodate sparse matrix techniques. Margin sensitivity methods are shown to work effectively for avoiding voltage collapse blackouts caused by either saddle node bifurcation of equilibria or immediate instability due to generator reactive power limits. Extremely fast contingency analysis for voltage collapse can be implemented with margin-sensitivity-based rankings. Interarea transfer can be limited by voltage limits, line limits, or voltage stability. The sensitivity formulas presented in this thesis apply to security margins defined by any limit criteria. A method to compute transfer margins by directly locating intermediate events reduces the total number of loadflow iterations required by each margin computation and provides sensitivity information at minimal additional cost. Estimates of the effect of simultaneous transfers on the transfer margins agree well with the exact computations for a network model derived from a portion of the U.S. grid. The accuracy of the estimates over a useful range of conditions and the ease of obtaining the estimates suggest that the sensitivity computations will be of practical value.
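
    The margin and margin-sensitivity ideas above can be illustrated numerically: compute a margin by increasing a loading parameter until a toy constraint binds, then differentiate that margin with respect to a system parameter by finite differences. The sketch below is a hypothetical stand-in, not the thesis' power-system formulation or its analytic sensitivity formulas:

        # Minimal sketch: estimate a security margin as the load increase at which
        # a toy line limit is reached, and the margin's sensitivity to a parameter,
        # by repeated solves plus finite differences.  All numbers are hypothetical.
        def line_flow(load, line_reactance):
            """Toy 'power flow': flow on a line as a function of total load."""
            return load / line_reactance

        def margin(load0, limit, line_reactance, step=1e-4):
            """Largest load increase before the line limit is violated."""
            extra = 0.0
            while line_flow(load0 + extra, line_reactance) < limit:
                extra += step
            return extra

        x = 0.5                      # hypothetical line reactance (p.u.)
        m0 = margin(load0=1.0, limit=4.0, line_reactance=x)
        # Sensitivity of the margin to the line reactance, by central difference.
        h = 1e-3
        dm_dx = (margin(1.0, 4.0, x + h) - margin(1.0, 4.0, x - h)) / (2 * h)
        print(f"margin = {m0:.3f} p.u., d(margin)/d(reactance) = {dm_dx:.2f}")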

  11. Investigating the Group-Level Impact of Advanced Dual-Echo fMRI Combinations

    PubMed Central

    Kettinger, Ádám; Hill, Christopher; Vidnyánszky, Zoltán; Windischberger, Christian; Nagy, Zoltán

    2016-01-01

    Multi-echo fMRI data acquisition has been widely investigated and suggested to optimize sensitivity for detecting the BOLD signal. Several methods have also been proposed for the combination of data with different echo times. The aim of the present study was to investigate whether these advanced echo combination methods provide advantages over the simple averaging of echoes when state-of-the-art group-level random-effect analyses are performed. Both resting-state and task-based dual-echo fMRI data were collected from 27 healthy adult individuals (14 male, mean age = 25.75 years) using standard echo-planar acquisition methods at 3T. Both resting-state and task-based data were subjected to a standard image pre-processing pipeline. Subsequently the two echoes were combined as a weighted average, using four different strategies for calculating the weights: (1) simple arithmetic averaging, (2) BOLD sensitivity weighting, (3) temporal-signal-to-noise ratio weighting and (4) temporal BOLD sensitivity weighting. Our results clearly show that the simple averaging of data with the different echoes is sufficient. Advanced echo combination methods may provide advantages on a single-subject level but when considering random-effects group level statistics they provide no benefit regarding sensitivity (i.e., group-level t-values) compared to the simple echo-averaging approach. One possible reason for the lack of clear advantages may be that apart from increasing the average BOLD sensitivity at the single-subject level, the advanced weighted averaging methods also inflate the inter-subject variance. As the echo combination methods provide very similar results, the recommendation is to choose between them depending on the availability of time for collecting additional resting-state data or whether subject-level or group-level analyses are planned. PMID:28018165
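
    As an illustration of the echo-combination step, the sketch below forms a voxelwise weighted average of two synthetic echo time series, contrasting simple arithmetic averaging with one plausible temporal-SNR weighting; the exact weighting formulas used in the study are not reproduced here:

        # Minimal sketch of combining dual-echo fMRI data as a weighted average
        # per voxel.  Data are synthetic; the tSNR-based weights are one plausible
        # choice among the strategies listed in the record above.
        import numpy as np

        rng = np.random.default_rng(0)
        n_vox, n_vol = 1000, 200
        echo1 = 100 + rng.normal(0, 2.0, size=(n_vox, n_vol))   # synthetic TE1 series
        echo2 = 60 + rng.normal(0, 3.0, size=(n_vox, n_vol))    # synthetic TE2 series

        # (1) simple arithmetic averaging
        combined_avg = (echo1 + echo2) / 2.0

        # (3) temporal-SNR weighting: weight each echo by its voxelwise tSNR
        tsnr1 = echo1.mean(axis=1) / echo1.std(axis=1)
        tsnr2 = echo2.mean(axis=1) / echo2.std(axis=1)
        w1 = tsnr1 / (tsnr1 + tsnr2)
        w2 = tsnr2 / (tsnr1 + tsnr2)
        combined_tsnr = w1[:, None] * echo1 + w2[:, None] * echo2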

  12. Evaluating the sensitization potential of surfactants: integrating data from the local lymph node assay, guinea pig maximization test, and in vitro methods in a weight-of-evidence approach.

    PubMed

    Ball, Nicholas; Cagen, Stuart; Carrillo, Juan-Carlos; Certa, Hans; Eigler, Dorothea; Emter, Roger; Faulhammer, Frank; Garcia, Christine; Graham, Cynthia; Haux, Carl; Kolle, Susanne N; Kreiling, Reinhard; Natsch, Andreas; Mehling, Annette

    2011-08-01

    An integral part of hazard and safety assessments is the estimation of a chemical's potential to cause skin sensitization. Currently, only animal tests (OECD 406 and 429) are accepted in a regulatory context. Nonanimal test methods are being developed and formally validated. In order to gain more insight into the responses induced by eight exemplary surfactants, a battery of in vivo and in vitro tests was conducted using the same batch of chemicals. In general, the surfactants were negative in the GPMT, KeratinoSens and hCLAT assays and none formed covalent adducts with test peptides. In contrast, all but one were positive in the LLNA. Most were rated as irritants by the EpiSkin assay with the additional endpoint, IL1-alpha. The weight of evidence based on this comprehensive testing indicates that, with one exception, they are non-sensitizing skin irritants, confirming that the LLNA tends to overestimate the sensitization potential of surfactants. As results obtained from LLNAs are considered the gold standard for the development of new nonanimal alternative test methods, results such as these highlight the necessity to carefully evaluate the applicability domains of test methods in order to develop reliable nonanimal alternative testing strategies for sensitization testing. Copyright © 2011 Elsevier Inc. All rights reserved.

  13. Parasite detection in patients with post kala-azar dermal leishmaniasis in India: a comparison between molecular and immunological methods

    PubMed Central

    Salotra, P; Sreenivas, G; Beena, K R; Mukherjee, A; Ramesh, V

    2003-01-01

    Aims: To evaluate the sensitivity and specificity of serological, immunohistochemical, and molecular methods in the diagnosis of post kala-azar dermal leishmaniasis (PKDL). Methods: Twenty five patients with confirmed PKDL and 25 controls were included in the study. G2D10, a monoclonal antibody against Leishmania, was used for the immunohistochemical (IHC) staining of lesion sections to visualise anti-Leishmania donovani antibodies. The diagnostic usefulness of IHC was compared with enzyme linked immunosorbent assay (ELISA) with a recombinant (rk39) antigen, and a species specific polymerase chain reaction (PCR) assay, amplifying a kinetoplast minicircle DNA sequence. Results: IHC detected 22 of 25 PKDL cases, giving a sensitivity of 88%. The diagnostic sensitivity of both the ELISA and PCR tests was higher (96%). All of the 25 controls examined were negative in PCR, indicating 100% specificity of the test, whereas ELISA showed 96% specificity. Conclusions: IHC with G2D10 significantly enhances the sensitivity of detection of PKDL over routine haematoxylin and eosin staining. ELISA with a recombinant antigen is an economical and practical assay. PCR is the most sensitive and specific diagnostic method for PKDL. The tests described would facilitate the recognition of patients with PKDL, enabling timely treatment, which would contribute greatly to the control of kala-azar. PMID:14600129

  14. A Bayesian model for time-to-event data with informative censoring

    PubMed Central

    Kaciroti, Niko A.; Raghunathan, Trivellore E.; Taylor, Jeremy M. G.; Julius, Stevo

    2012-01-01

    Randomized trials with dropouts or censored data and discrete time-to-event type outcomes are frequently analyzed using the Kaplan–Meier or product limit (PL) estimation method. However, the PL method assumes that the censoring mechanism is noninformative and when this assumption is violated, the inferences may not be valid. We propose an expanded PL method using a Bayesian framework to incorporate informative censoring mechanism and perform sensitivity analysis on estimates of the cumulative incidence curves. The expanded method uses a model, which can be viewed as a pattern mixture model, where odds for having an event during the follow-up interval (tk−1,tk], conditional on being at risk at tk−1, differ across the patterns of missing data. The sensitivity parameters relate the odds of an event, between subjects from a missing-data pattern with the observed subjects for each interval. The large number of the sensitivity parameters is reduced by considering them as random and assumed to follow a log-normal distribution with prespecified mean and variance. Then we vary the mean and variance to explore sensitivity of inferences. The missing at random (MAR) mechanism is a special case of the expanded model, thus allowing exploration of the sensitivity to inferences as departures from the inferences under the MAR assumption. The proposed approach is applied to data from the TRial Of Preventing HYpertension. PMID:22223746
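
    A minimal numerical sketch of the idea, not the authors' full Bayesian model: a discrete-time product-limit-style estimate in which subjects censored earlier are assumed to have event odds equal to exp(delta) times the odds observed among those still followed, so that delta = 0 corresponds to an MAR-like analysis and varying delta traces out the sensitivity analysis. All counts are hypothetical:

        # Minimal sketch of an "expanded" product-limit estimate under a
        # pattern-mixture-style sensitivity parameter for informative censoring.
        import numpy as np

        def adjusted_cuminc(events, at_risk, censored, delta):
            """events[k], at_risk[k]: observed counts per interval;
            censored[k]: number censored just before interval k."""
            surv, carried = 1.0, 0.0          # overall survival; censored still event-free
            cuminc = []
            for e, n, c in zip(events, at_risk, censored):
                carried += c
                p_obs = e / n                               # observed hazard this interval
                odds = np.exp(delta) * p_obs / (1 - p_obs)  # shifted odds for dropouts
                p_mis = odds / (1 + odds)
                # pooled hazard over observed and (hypothetical) censored subjects
                p = (n * p_obs + carried * p_mis) / (n + carried)
                carried *= (1 - p_mis)
                surv *= (1 - p)
                cuminc.append(1 - surv)
            return cuminc

        events   = [3, 5, 4]          # hypothetical interval counts
        at_risk  = [100, 90, 80]
        censored = [0, 7, 6]
        for d in (0.0, 0.5, 1.0):     # explore sensitivity to informative censoring
            print(d, [round(x, 3) for x in adjusted_cuminc(events, at_risk, censored, d)])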

  15. A diameter-sensitive flow entropy method for reliability consideration in water distribution system design

    NASA Astrophysics Data System (ADS)

    Liu, Haixing; Savić, Dragan; Kapelan, Zoran; Zhao, Ming; Yuan, Yixing; Zhao, Hongbin

    2014-07-01

    Flow entropy is a measure of the uniformity of pipe flows in water distribution systems. By maximizing flow entropy one can identify reliable layouts or connectivity in networks. In order to overcome the disadvantage of the common definition of flow entropy, which does not consider the impact of pipe diameter on reliability, an extended definition of flow entropy, termed diameter-sensitive flow entropy, is proposed. This new methodology is then assessed against other reliability methods, including Monte Carlo simulation, a pipe failure probability model, and a surrogate measure (resilience index) integrated with water demand and pipe failure uncertainty. The reliability assessment is based on a sample of WDS designs derived from an optimization process for each of two benchmark networks. Correlation analysis is used to evaluate quantitatively the relationship between entropy and reliability. For comparison, the analysis is carried out for both the simple flow entropy and the new method. The results demonstrate that the diameter-sensitive flow entropy shows consistently much stronger correlation with the three reliability measures than simple flow entropy. Therefore, the new flow entropy method can be taken as a better surrogate measure for reliability and could potentially be integrated into the optimal design problem of WDSs. Sensitivity analysis results show that the velocity parameters used in the new flow entropy have no significant impact on the relationship between diameter-sensitive flow entropy and reliability.
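
    For reference, the classical flow entropy of a set of pipe flows is the Shannon entropy of the flow fractions. The sketch below computes that quantity only; the diameter-sensitive variant proposed above would additionally weight each term by a diameter-dependent factor, whose exact form is not reproduced here:

        # Minimal sketch of the classical flow entropy of a set of pipe flows
        # (Shannon entropy of the flow fractions q_i / Q).
        import math

        def flow_entropy(flows):
            """Shannon entropy of pipe flow fractions q_i / Q."""
            total = sum(flows)
            return -sum((q / total) * math.log(q / total) for q in flows if q > 0)

        print(flow_entropy([10.0, 10.0, 10.0]))  # uniform flows -> maximal entropy
        print(flow_entropy([27.0, 2.0, 1.0]))    # skewed flows  -> lower entropy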

  16. PCR diagnostics underestimate the prevalence of avian malaria (Plasmodium relictum) in experimentally-infected passerines

    USGS Publications Warehouse

    Jarvi, Susan I.; Schultz, Jeffrey J.; Atkinson, Carter T.

    2002-01-01

    Several polymerase chain reaction (PCR)-based methods have recently been developed for diagnosing malarial infections in both birds and reptiles, but a critical evaluation of their sensitivity in experimentally-infected hosts has not been done. This study compares the sensitivity of several PCR-based methods for diagnosing avian malaria (Plasmodium relictum) in captive Hawaiian honeycreepers with microscopy and a recently developed immunoblotting technique. Sequential blood samples were collected over periods of up to 4.4 yr after experimental infection and rechallenge to determine both the duration and detectability of chronic infections. Two new nested PCR approaches for detecting circulating parasites based on P. relictum 18S rRNA genes and the thrombospondin-related anonymous protein (TRAP) gene are described. The blood smear and the PCR tests were less sensitive than serological methods for detecting chronic malarial infections. Individually, none of the diagnostic methods was 100% accurate in detecting subpatent infections, although serological methods were significantly more sensitive (97%) than either nested PCR (61–84%) or microscopy (27%). Circulating parasites in chronically infected birds either disappear completely from circulation or drop to intensities below detectability by nested PCR. Thus, the use of PCR as a sole means of detection of circulating parasites may significantly underestimate true prevalence.

  17. Method Of Signal Amplification In Multi-Chromophore Luminescence Sensors

    DOEpatents

    Levitsky, Igor A.; Krivoshlykov, Sergei G.

    2004-02-03

    A fluorescence-based method for highly sensitive and selective detection of analyte molecules is proposed. The method employs the energy transfer between two or more fluorescent chromophores in a carefully selected polymer matrix. In one preferred embodiment, signal amplification has been achieved in the fluorescent sensing of dimethyl methylphosphonate (DMMP) using two dyes, 3-aminofluoranthene (AM) and Nile Red (NR), in a hydrogen-bond acidic polymer matrix. The selected polymer matrix quenches the fluorescence of both dyes and shifts dye emission and absorption spectra relative to more inert matrices. Upon DMMP sorption, the AM fluorescence shifts to the red while the NR absorption shifts to the blue, resulting in better band overlap and increased energy transfer between chromophores. In another preferred embodiment, the sensitive material is incorporated into an optical fiber system, enabling efficient excitation of the dyes and collection of the fluorescent signal from the sensitive material at the remote end of the system. The proposed method can be applied to multichromophore luminescence sensor systems incorporating N chromophores, leading to N-fold signal amplification and improved selectivity. The method can be used in all applications where highly sensitive detection of basic gases, such as dimethyl methylphosphonate (DMMP), Sarin, Soman and other chemical warfare agents having basic properties, is required, including environmental monitoring, the chemical industry and medicine.

  18. A method of improving sensitivity of carbon/oxygen well logging for low porosity formation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Juntao; Zhang, Feng; Zhang, Quanying

    Carbon/Oxygen (C/O) spectral logging has been widely used to determine residual oil saturation and to evaluate water-flooded layers. In order to improve the sensitivity of the technique for low-porosity formations, Gaussian and linear models are applied to fit the peaks of measured spectra to obtain the characteristic coefficients. Standard spectra of carbon and oxygen are combined to establish a new carbon/oxygen value calculation method, and the robustness of the new method is cross-validated with a known mixed gamma-ray spectrum. Formation models for different porosities and saturations are built using the Monte Carlo method. The carbon/oxygen responses calculated by the conventional energy-window method and by the new method are compared for oil saturation under low-porosity conditions. The results show the new method can reduce the effects of gamma rays contaminated by the interaction between neutrons and other elements on the carbon/oxygen ratio, and therefore can significantly improve the response sensitivity of carbon/oxygen well logging to oil saturation. The new method greatly improves carbon/oxygen well logging under low-porosity conditions.

  19. A method of improving sensitivity of carbon/oxygen well logging for low porosity formation

    DOE PAGES

    Liu, Juntao; Zhang, Feng; Zhang, Quanying; ...

    2016-12-01

    Carbon/Oxygen (C/O) spectral logging has been widely used to determine residual oil saturation and to evaluate water-flooded layers. In order to improve the sensitivity of the technique for low-porosity formations, Gaussian and linear models are applied to fit the peaks of measured spectra to obtain the characteristic coefficients. Standard spectra of carbon and oxygen are combined to establish a new carbon/oxygen value calculation method, and the robustness of the new method is cross-validated with a known mixed gamma-ray spectrum. Formation models for different porosities and saturations are built using the Monte Carlo method. The carbon/oxygen responses calculated by the conventional energy-window method and by the new method are compared for oil saturation under low-porosity conditions. The results show the new method can reduce the effects of gamma rays contaminated by the interaction between neutrons and other elements on the carbon/oxygen ratio, and therefore can significantly improve the response sensitivity of carbon/oxygen well logging to oil saturation. The new method greatly improves carbon/oxygen well logging under low-porosity conditions.
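
    The peak-fitting step described above can be illustrated generically: fit a Gaussian plus a linear background to a synthetic spectrum window with a least-squares routine. The spectrum, window and parameter values below are hypothetical and are not taken from the record:

        # Minimal sketch of fitting a Gaussian peak plus a linear background to a
        # spectrum window, as one way to extract peak characteristics.
        import numpy as np
        from scipy.optimize import curve_fit

        def gauss_linear(E, amplitude, centre, sigma, slope, intercept):
            return amplitude * np.exp(-0.5 * ((E - centre) / sigma) ** 2) + slope * E + intercept

        # Synthetic carbon-window spectrum: a peak near 4.44 MeV on a linear background.
        E = np.linspace(3.8, 5.0, 120)
        true = gauss_linear(E, 1000.0, 4.44, 0.12, -50.0, 400.0)
        counts = np.random.default_rng(1).poisson(true)

        p0 = [800.0, 4.4, 0.1, 0.0, 300.0]            # initial guesses
        popt, _ = curve_fit(gauss_linear, E, counts, p0=p0)
        print("fitted peak centre %.3f MeV, amplitude %.0f" % (popt[1], popt[0]))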

  20. Decision Tree based Prediction and Rule Induction for Groundwater Trichloroethene (TCE) Pollution Vulnerability

    NASA Astrophysics Data System (ADS)

    Park, J.; Yoo, K.

    2013-12-01

    For groundwater resource conservation, it is important to accurately assess groundwater pollution sensitivity or vulnerability. In this work, we attempted to use a data mining approach to assess groundwater pollution vulnerability in a TCE (trichloroethylene)-contaminated Korean industrial site. The conventional DRASTIC method failed to describe the TCE sensitivity data, showing a poor correlation with hydrogeological properties. Among the different data mining methods, such as Artificial Neural Network (ANN), Multiple Logistic Regression (MLR), Case-Based Reasoning (CBR), and Decision Tree (DT), the accuracy and consistency of the Decision Tree (DT) were the best. According to subsequent tree analyses with the optimal DT model, the failure of the conventional DRASTIC method to fit the TCE sensitivity data may be due to the use of inaccurate weight values of hydrogeological parameters for the study site. These findings provide a proof of concept that a DT-based data mining approach can be used for prediction and rule induction of groundwater TCE sensitivity without pre-existing information on the weights of hydrogeological properties.
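
    A minimal sketch of decision-tree prediction and rule induction on hydrogeological attributes, using scikit-learn and entirely synthetic data; the feature names and the labelling rule are hypothetical placeholders, not the site data from the record:

        # Minimal sketch: decision-tree classification of site vulnerability from
        # hydrogeological attributes, with the induced rules printed afterwards.
        import numpy as np
        from sklearn.tree import DecisionTreeClassifier, export_text

        feature_names = ["depth_to_water", "recharge", "hydraulic_conductivity"]
        rng = np.random.default_rng(42)
        X = rng.uniform(0.0, 1.0, size=(200, 3))
        # Hypothetical rule generating the labels: shallow water table plus high
        # conductivity -> vulnerable (1), otherwise not vulnerable (0).
        y = ((X[:, 0] < 0.4) & (X[:, 2] > 0.5)).astype(int)

        tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
        print(export_text(tree, feature_names=feature_names))   # induced rules
        print("training accuracy:", tree.score(X, y))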

  1. Quantitative mass spectrometry methods for pharmaceutical analysis

    PubMed Central

    Loos, Glenn; Van Schepdael, Ann

    2016-01-01

    Quantitative pharmaceutical analysis is nowadays frequently executed using mass spectrometry. Electrospray ionization coupled to a (hybrid) triple quadrupole mass spectrometer is generally used in combination with solid-phase extraction and liquid chromatography. Furthermore, isotopically labelled standards are often used to correct for ion suppression. The challenges in producing sensitive but reliable quantitative data depend on the instrumentation, sample preparation and hyphenated techniques. In this contribution, different approaches to enhance the ionization efficiencies using modified source geometries and improved ion guidance are provided. Furthermore, possibilities to minimize, assess and correct for matrix interferences caused by co-eluting substances are described. With the focus on pharmaceuticals in the environment and bioanalysis, different separation techniques, trends in liquid chromatography and sample preparation methods to minimize matrix effects and increase sensitivity are discussed. Although highly sensitive methods are generally sought to provide automated multi-residue analysis, (less sensitive) miniaturized set-ups have great potential owing to their suitability for in-field use. This article is part of the themed issue ‘Quantitative mass spectrometry’. PMID:27644982

  2. 49 CFR 173.57 - Acceptance criteria for new explosives.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... must be subjected to the Drop Weight Impact Sensitivity Test (Test Method 3(a)(i)), the Friction... substance has a friction sensitiveness equal to or greater than that of dry pentaerythrite tetranitrate (PETN) when tested in the Friction Sensitivity Test; (4) The substance fails to pass the test criteria...

  3. Deep-Focusing Time-Distance Helioseismology

    NASA Technical Reports Server (NTRS)

    Duvall, T. L., Jr.; Jensen, J. M.; Kosovichev, A. G.; Birch, A. C.; Fisher, Richard R. (Technical Monitor)

    2001-01-01

    Much progress has been made by measuring the travel times of solar acoustic waves from a central surface location to points at equal arc distance away. Depth information is obtained from the range of arc distances examined, with the larger distances revealing the deeper layers. This method we will call surface-focusing, as the common point, or focus, is at the surface. To obtain a clearer picture of the subsurface region, it would, no doubt, be better to focus on points below the surface. Our first attempt to do this used the ray theory to pick surface location pairs that would focus on a particular subsurface point. This is not the ideal procedure, as Born approximation kernels suggest that this focus should have zero sensitivity to sound speed inhomogeneities. However, the sensitivity is concentrated below the surface in a much better way than the old surface-focusing method, and so we expect the deep-focusing method to be more sensitive. A large sunspot group was studied by both methods. Inversions based on both methods will be compared.

  4. Improvement of ion chromatography with ultraviolet photometric detection and comparison with conductivity detection for the determination of serum cations.

    PubMed

    Shintani, H

    1985-05-31

    Studies were made of the analytical conditions required for indirect photometric ion chromatography using ultraviolet photometric detection (UV method) for the determination of serum cations following a previously developed serum pre-treatment. The sensitivities of the conductivity detection (CD) and UV methods and the amounts of serum cations determined by both methods were compared. Attempts to improve the sensitivity of the conventional UV method are reported. It was found that the mobile phase previously reported by Small and Miller showed no quantitative response when more than 4 mM copper(II) sulphate pentahydrate was used. As a result, there was no significant difference in the amounts of serum cations shown by the CD and UV methods. However, by adding 0.5-5 mM cobalt(II) sulphate heptahydrate, nickel(II) sulphate hexahydrate, zinc(II) sulphate heptahydrate or cobalt(II) diammonium sulphate hexahydrate to 0.5-1.5 mM copper(II) sulphate pentahydrate, higher sensitivity and a quantitative response were attained.

  5. Examining the accuracy of the infinite order sudden approximation using sensitivity analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eno, L.; Rabitz, H.

    1981-08-15

    A method is developed for assessing the accuracy of scattering observables calculated within the framework of the infinite order sudden (IOS) approximation. In particular, we focus on the energy sudden assumption of the IOS method and our approach involves the determination of the sensitivity of the IOS scattering matrix S^IOS with respect to a parameter which reintroduces the internal energy operator h_0 into the IOS Hamiltonian. This procedure is an example of sensitivity analysis of missing model components (h_0 in this case) in the reference Hamiltonian. In contrast to simple first-order perturbation theory a finite result is obtained for the effect of h_0 on S^IOS. As an illustration, our method of analysis is applied to integral state-to-state cross sections for the scattering of an atom and rigid rotor. Results are generated within the He+H_2 system and a comparison is made between IOS and coupled states cross sections and the corresponding IOS sensitivities. It is found that the sensitivity coefficients are very useful indicators of the accuracy of the IOS results. Finally, further developments and applications are discussed.

  6. Label-Free Electrical Immunosensor for Highly Sensitive and Specific Detection of Microcystin-LR in Water Samples.

    PubMed

    Tan, Feng; Saucedo, Nuvia Maria; Ramnani, Pankaj; Mulchandani, Ashok

    2015-08-04

    Microcystin-LR (MCLR) is one of the most commonly detected and toxic cyclic heptapeptide cyanotoxins released by cyanobacterial blooms in surface waters, for which sensitive and specific detection methods are necessary to carry out its recognition and quantification. Here, we present a single-walled carbon nanotube (SWCNT)-based label-free chemiresistive immunosensor for highly sensitive and specific detection of MCLR in different source waters. MCLR was initially immobilized on a SWCNT-modified interdigitated electrode, followed by incubation with a monoclonal anti-MCLR antibody. The competitive binding of MCLR in sample solutions induced departure of the antibody from the antibody-antigen complexes formed on the SWCNTs, resulting in a change in the conductivity between the source and drain of the sensor. The displacement assay greatly improved the sensitivity of the sensor compared with a direct immunoassay on the same device. The immunosensor exhibited a wide linear response to the logarithm of the MCLR concentration ranging from 1 to 1000 ng/L, with a detection limit of 0.6 ng/L. The method showed good reproducibility, stability and recovery. The proposed method provides a powerful tool for rapid and sensitive monitoring of MCLR in environmental samples.

  7. Overcoming the errors of in-house PCR used in the clinical laboratory for the diagnosis of extrapulmonary tuberculosis.

    PubMed

    Kunakorn, M; Raksakai, K; Pracharktam, R; Sattaudom, C

    1999-03-01

    Our experiences from 1993 to 1997 in the development and use of IS6110-based PCR for the diagnosis of extrapulmonary tuberculosis in a routine clinical setting revealed that error-correcting processes can improve existing diagnostic methodology. The reamplification method initially used had a sensitivity of 90.91% and a specificity of 93.75%. Concern focused on the false-positive results of this method caused by product-carryover contamination. The method was changed to single-round PCR with carryover prevention by uracil DNA glycosylase (UDG), resulting in 100% specificity but only 63% sensitivity. Dot blot hybridization was added after the single-round PCR, increasing the sensitivity to 87.50%. However, false positivity resulted from the nonspecific dot blot hybridization signal, reducing the specificity to 89.47%. The PCR hybridization was then changed to a Southern blot with a new oligonucleotide probe, giving a sensitivity of 85.71% and raising the specificity to 99.52%. We conclude that the PCR protocol for routine clinical use should include UDG for carryover prevention and hybridization with specific probes to optimize diagnostic sensitivity and specificity in extrapulmonary tuberculosis testing.

  8. New sensitive high-performance liquid chromatography-tandem mass spectrometry method for the detection of horse and pork in halal beef.

    PubMed

    von Bargen, Christoph; Dojahn, Jörg; Waidelich, Dietmar; Humpf, Hans-Ulrich; Brockmeyer, Jens

    2013-12-11

    The accidental or fraudulent blending of meat from different species is a highly relevant aspect of food product quality control, especially for consumers with ethical objections to particular species, such as horse or pig. In this study, we present a sensitive mass spectrometric approach for the detection of trace contaminations of horse meat and pork and demonstrate the specificity of the identified biomarker peptides against chicken, lamb, and beef. Biomarker peptides were identified by a shotgun proteomic approach using tryptic digests of protein extracts and were verified by the analysis of 21 different meat samples from the 5 species included in this study. For the most sensitive peptides, a multiple reaction monitoring (MRM) method was developed that allows for the detection of 0.55% horse or pork in a beef matrix. To enhance sensitivity, we applied MRM(3) experiments and were able to detect down to 0.13% pork contamination in beef. To the best of our knowledge, we present here the first rapid and sensitive mass spectrometric method for the detection of horse and pork by use of MRM and MRM(3).

  9. A sensitive LC-MS/MS method for simultaneous determination of amygdalin and paeoniflorin in human plasma and its application.

    PubMed

    Li, Xiaobing; Shi, Fuguo; Gu, Pan; Liu, Lingye; He, Hua; Ding, Li

    2014-04-01

    A simple and sensitive HPLC-MS/MS method was developed and fully validated for the simultaneous determination of amygdalin (AD) and paeoniflorin (PF) in human plasma. For both analytes, the method exhibited high sensitivity (LLOQs of 0.6 ng/mL) by selecting the ammonium adduct ions ([M+NH4](+)) as the precursor ions and good linearity over the concentration range of 0.6-2000 ng/mL with correlation coefficients > 0.9972. The intra- and inter-day precision was lower than 10% in relation to relative standard deviation, while accuracy was within ±2.3% in terms of relative error for both analytes. The developed method was successfully applied to a pilot pharmacokinetic study of AD and PF in healthy volunteers after intravenous infusion administration of Huoxue-Tongluo lyophilized powder for injection. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. [Evaluation of echocardiography for determining left ventricular function].

    PubMed

    Wu, H; Zhu, W; Xu, J

    1994-02-01

    Left ventricular ejection fraction (LVEF) was calculated by echocardiography and gated blood pool (GBP) imaging in 33 patients, including those with coronary heart disease, acute and old myocardial infarction, cardiomyopathy or mitral prolapse. Fourteen of the 33 had segmental wall motion abnormalities and 19 had non-segmental wall motion abnormalities. The comparison of echocardiography and GBP showed that the former could substitute for other invasive and expensive examinations to determine LVEF (r = 0.804-0.964 for the 5 echocardiographic methods used). The modified Simpson's method of cross-sectional echocardiography was the most accurate echocardiographic method (r = 0.964, sensitivity 90.9%) in all patients. The Teich method of M-mode echocardiography was useful only in patients who had non-segmental wall motion abnormalities (r = 0.957, sensitivity 94.7%) but not in patients who had segmental wall motion abnormalities (r = 0.703, sensitivity 42.9%).

  11. HPV Genotyping of Modified General Primer-Amplicons Is More Analytically Sensitive and Specific by Sequencing than by Hybridization

    PubMed Central

    Meisal, Roger; Rounge, Trine Ballestad; Christiansen, Irene Kraus; Eieland, Alexander Kirkeby; Worren, Merete Molton; Molden, Tor Faksvaag; Kommedal, Øyvind; Hovig, Eivind; Leegaard, Truls Michael

    2017-01-01

    Sensitive and specific genotyping of human papillomaviruses (HPVs) is important for population-based surveillance of carcinogenic HPV types and for monitoring vaccine effectiveness. Here we compare HPV genotyping by Next Generation Sequencing (NGS) to an established DNA hybridization method. In DNA isolated from urine, the overall analytical sensitivity of NGS was found to be 22% higher than that of hybridization. NGS was also found to be the most specific method and expanded the detection repertoire beyond the 37 types of the DNA hybridization assay. Furthermore, NGS provided an increased resolution by identifying genetic variants of individual HPV types. The same Modified General Primers (MGP)-amplicon was used in both methods. The NGS method is described in detail to facilitate implementation in the clinical microbiology laboratory and includes suggestions for new standards for detection and calling of types and variants with improved resolution. PMID:28045981

  12. HPV Genotyping of Modified General Primer-Amplicons Is More Analytically Sensitive and Specific by Sequencing than by Hybridization.

    PubMed

    Meisal, Roger; Rounge, Trine Ballestad; Christiansen, Irene Kraus; Eieland, Alexander Kirkeby; Worren, Merete Molton; Molden, Tor Faksvaag; Kommedal, Øyvind; Hovig, Eivind; Leegaard, Truls Michael; Ambur, Ole Herman

    2017-01-01

    Sensitive and specific genotyping of human papillomaviruses (HPVs) is important for population-based surveillance of carcinogenic HPV types and for monitoring vaccine effectiveness. Here we compare HPV genotyping by Next Generation Sequencing (NGS) to an established DNA hybridization method. In DNA isolated from urine, the overall analytical sensitivity of NGS was found to be 22% higher than that of hybridization. NGS was also found to be the most specific method and expanded the detection repertoire beyond the 37 types of the DNA hybridization assay. Furthermore, NGS provided an increased resolution by identifying genetic variants of individual HPV types. The same Modified General Primers (MGP)-amplicon was used in both methods. The NGS method is described in detail to facilitate implementation in the clinical microbiology laboratory and includes suggestions for new standards for detection and calling of types and variants with improved resolution.

  13. Polarization Sensitive Coherent Anti-Stokes Raman Spectroscopy of DCVJ in Doped Polymer

    NASA Astrophysics Data System (ADS)

    Ujj, Laszlo

    2014-05-01

    Coherent Raman microscopy is an emerging technique for imaging biological samples such as living cells by recording vibrational fingerprints of molecules with high spatial resolution. The race is on to record the entire image in the shortest time possible in order to increase the time resolution of the recorded cellular events. The electronically enhanced, polarization-sensitive version of coherent anti-Stokes Raman scattering is one of the methods that can shorten the recording time and increase the sharpness of an image by enhancing the signal level of specific molecular vibrational modes. In order to show the effectiveness of the method, a model system, a highly fluorescent sample of DCVJ in a polymer matrix, is investigated. Polarization-sensitive resonance CARS spectra are recorded and analyzed. Vibrational signatures are extracted with model-independent methods. Details of the measurements and data analysis will be presented. The author gratefully acknowledges UWF for financial support.

  14. Sensitivity analysis and nonlinearity assessment of steam cracking furnace process

    NASA Astrophysics Data System (ADS)

    Rosli, M. N.; Sudibyo, Aziz, N.

    2017-11-01

    In this paper, sensitivity analysis and nonlinearity assessment of the cracking furnace process are presented. For the sensitivity analysis, the fractional factorial design method is employed to analyze the effect of the input parameters, which consist of four manipulated variables and two disturbance variables, on the output variables and to identify the interactions between parameters. The result of the factorial design is used as a screening step to reduce the number of parameters and, subsequently, the complexity of the model. It shows that, out of six input parameters, four are significant. After the screening is completed, a step test is performed on the significant input parameters to assess the degree of nonlinearity of the system. The result shows that the system is highly nonlinear with respect to changes in the air-to-fuel ratio (AFR) and feed composition.
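
    A minimal sketch of the screening step: build a two-level 2^(4-1) fractional factorial design (factor D aliased with ABC), evaluate a response, and rank factors by the magnitude of their main effects. The response function below is a hypothetical stand-in, not the furnace model from the record:

        # Minimal sketch of two-level fractional factorial screening of main effects.
        import itertools
        import numpy as np

        runs = []
        for a, b, c in itertools.product((-1, 1), repeat=3):
            runs.append((a, b, c, a * b * c))          # D aliased with ABC
        design = np.array(runs, dtype=float)

        rng = np.random.default_rng(0)

        def response(a, b, c, d):
            # hypothetical response: strong effects of A and D, weak B, none for C
            return 10 + 4 * a + 0.5 * b + 3 * d + rng.normal(0, 0.1)

        y = np.array([response(*run) for run in design])
        # main effect of each factor = 2 * mean(y * x_i) for a balanced +/-1 design
        effects = {name: (y * design[:, i]).mean() * 2
                   for i, name in enumerate("ABCD")}
        print(effects)   # large |effect| -> factor kept for further study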

  15. Analytic uncertainty and sensitivity analysis of models with input correlations

    NASA Astrophysics Data System (ADS)

    Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu

    2018-03-01

    Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that input variables are independent of each other. However, correlated parameters often occur in practical applications. In the present paper, an analytic method is built for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples suggest the effectiveness and validity of our analytic method in the analysis of general models. A practical application of the method to the uncertainty and sensitivity analysis of a deterministic HIV model is also presented.
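
    For a linear model Y = a^T X the effect of input correlation on output uncertainty is analytic: Var(Y) = a^T Sigma a, where Sigma is the input covariance matrix. A minimal sketch with illustrative numbers (not taken from the paper):

        # Minimal sketch of how input correlation changes output uncertainty for a
        # linear model Y = a^T X.  All numbers are illustrative only.
        import numpy as np

        a = np.array([2.0, -1.0, 0.5])                 # model coefficients
        sd = np.array([1.0, 0.8, 0.5])                 # input standard deviations

        sigma_indep = np.diag(sd ** 2)                 # independent inputs
        corr = np.array([[1.0, 0.6, 0.0],
                         [0.6, 1.0, 0.0],
                         [0.0, 0.0, 1.0]])
        sigma_corr = corr * np.outer(sd, sd)           # correlated inputs

        var_indep = a @ sigma_indep @ a
        var_corr = a @ sigma_corr @ a
        print(f"Var(Y) independent = {var_indep:.3f}, with correlation = {var_corr:.3f}")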

  16. Support Vector Machines to improve physiologic hot flash measures: application to the ambulatory setting.

    PubMed

    Thurston, Rebecca C; Hernandez, Javier; Del Rio, Jose M; De La Torre, Fernando

    2011-07-01

    Most midlife women have hot flashes. The conventional criterion (≥2 μmho rise/30 s) for classifying hot flashes physiologically has shown poor performance. We improved this performance in the laboratory with Support Vector Machines (SVMs), a pattern classification method. We aimed to compare conventional to SVM methods to classify hot flashes in the ambulatory setting. Thirty-one women with hot flashes underwent 24 h of ambulatory sternal skin conductance monitoring. Hot flashes were quantified with conventional (≥2 μmho/30 s) and SVM methods. Conventional methods had low sensitivity (sensitivity=.57, specificity=.98, positive predictive value (PPV)=.91, negative predictive value (NPV)=.90, F1=.60), with performance lower with higher body mass index (BMI). SVMs improved this performance (sensitivity=.87, specificity=.97, PPV=.90, NPV=.96, F1=.88) and reduced BMI variation. SVMs can improve ambulatory physiologic hot flash measures. Copyright © 2010 Society for Psychophysiological Research.
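
    A minimal sketch of the classification idea: train a support-vector classifier to label fixed-length skin-conductance windows as hot flash versus non-flash. The two features and the synthetic data below are hypothetical placeholders; the published feature set and kernel choices are not reproduced here:

        # Minimal sketch: SVM classification of skin-conductance windows.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(7)
        n = 400
        rise = np.concatenate([rng.normal(0.5, 0.5, n // 2),    # non-flash windows
                               rng.normal(2.5, 0.8, n // 2)])   # flash windows
        slope = np.concatenate([rng.normal(0.1, 0.1, n // 2),
                                rng.normal(0.6, 0.2, n // 2)])
        X = np.column_stack([rise, slope])
        y = np.array([0] * (n // 2) + [1] * (n // 2))

        clf = SVC(kernel="rbf", C=1.0, gamma="scale")
        print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())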

  17. Exploration of analysis methods for diagnostic imaging tests: problems with ROC AUC and confidence scores in CT colonography.

    PubMed

    Mallett, Susan; Halligan, Steve; Collins, Gary S; Altman, Doug G

    2014-01-01

    Different methods of evaluating diagnostic performance when comparing diagnostic tests may lead to different results. We compared two such approaches, sensitivity and specificity with area under the Receiver Operating Characteristic Curve (ROC AUC) for the evaluation of CT colonography for the detection of polyps, either with or without computer assisted detection. In a multireader multicase study of 10 readers and 107 cases we compared sensitivity and specificity, using radiological reporting of the presence or absence of polyps, to ROC AUC calculated from confidence scores concerning the presence of polyps. Both methods were assessed against a reference standard. Here we focus on five readers, selected to illustrate issues in design and analysis. We compared diagnostic measures within readers, showing that differences in results are due to statistical methods. Reader performance varied widely depending on whether sensitivity and specificity or ROC AUC was used. There were problems using confidence scores; in assigning scores to all cases; in use of zero scores when no polyps were identified; the bimodal non-normal distribution of scores; fitting ROC curves due to extrapolation beyond the study data; and the undue influence of a few false positive results. Variation due to use of different ROC methods exceeded differences between test results for ROC AUC. The confidence scores recorded in our study violated many assumptions of ROC AUC methods, rendering these methods inappropriate. The problems we identified will apply to other detection studies using confidence scores. We found sensitivity and specificity were a more reliable and clinically appropriate method to compare diagnostic tests.
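
    The contrast between the two analysis approaches can be made concrete on toy data: sensitivity and specificity computed from binary polyp calls, versus ROC AUC computed from confidence scores (including the zero scores assigned when no polyp is identified). Labels and scores below are hypothetical:

        # Minimal sketch contrasting sensitivity/specificity with ROC AUC.
        import numpy as np
        from sklearn.metrics import roc_auc_score

        truth  = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])   # polyp present?
        called = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 0])   # reader's binary report
        scores = np.array([90, 80, 75, 10, 5, 0, 0, 0, 60, 15])  # confidence 0-100

        sens = (called[truth == 1] == 1).mean()
        spec = (called[truth == 0] == 0).mean()
        auc = roc_auc_score(truth, scores)
        print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}, ROC AUC = {auc:.2f}")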

  18. A combination of immunohistochemistry and molecular approaches improves highly sensitive detection of BRAF mutations in papillary thyroid cancer.

    PubMed

    Martinuzzi, Claudia; Pastorino, Lorenza; Andreotti, Virginia; Garuti, Anna; Minuto, Michele; Fiocca, Roberto; Bianchi-Scarrà, Giovanna; Ghiorzo, Paola; Grillo, Federica; Mastracci, Luca

    2016-09-01

    The optimal method for BRAF mutation detection remains to be determined despite advances in molecular detection techniques. The aim of this study was to compare, against classical Sanger sequencing, the diagnostic performance of two of the most recently developed, highly sensitive methods: BRAF V600E immunohistochemistry (IHC) and peptide nucleic-acid (PNA)-clamp qPCR. BRAF exon 15 mutations were searched for in formalin-fixed paraffin-embedded tissues from 86 papillary thyroid carcinomas using the three methods. The limits of detection of Sanger sequencing in borderline or discordant cases were quantified by next generation sequencing. BRAF mutations were found in 74.4 % of cases by PNA, in 71 % of cases by IHC, and in 64 % of cases by Sanger sequencing. Complete concordance for the three methods was observed in 80 % of samples. Better concordance was observed with the combination of two methods, particularly PNA and IHC (59/64) (92 %), while the combination of PNA and Sanger was concordant in 55 cases (86 %). Sensitivity of the three methods was 99 % for PNA, 94.2 % for IHC, and 89.5 % for Sanger. Our data show that IHC could be used as a cost-effective, first-line method for BRAF V600E detection in daily practice, followed by PNA analysis in negative or uninterpretable cases, as the most efficient method. PNA-clamp quantitative PCR is highly sensitive and complementary to IHC as it also recognizes other mutations besides V600E and it is suitable for diagnostic purposes.

  19. Development of a fast and efficient method for hepatitis A virus concentration from green onion.

    PubMed

    Zheng, Yan; Hu, Yuan

    2017-11-01

    Hepatitis A virus (HAV) can cause serious liver disease and even death. HAV outbreaks are associated with the consumption of raw or minimally processed produce, making it a major public health concern. Infections have occurred despite the fact that an effective HAV vaccine has been available. Development of a rapid and sensitive HAV detection method is necessary for the investigation of HAV outbreaks. Detection of HAV is complicated by the lack of a reliable culture method. In addition, due to the low infectious dose of HAV, these methods must be very sensitive. Current methods rely on efficient sample preparation and concentration steps followed by sensitive molecular detection techniques. Using green onions, which were involved in the most recent HAV outbreaks, as a representative produce item, a method of capturing virus particles with carboxyl-derivatized magnetic beads was developed in this study. Carboxyl beads, like antibody-coated beads or cationic beads, detect HAV at a level as low as 100 pfu/25 g of green onions. RNA from virus concentrated in this manner can be released by heat shock (98°C, 5 min) for molecular detection without sacrificing sensitivity. Bypassing the RNA extraction procedure saves time and removes multiple manipulation steps, which makes large-scale HAV screening possible. In addition, the inclusion of beef extract and pectinase rather than NP40 in the elution buffer improved HAV liberation from the food matrix over current methods by nearly 10-fold. The method proposed in this study provides a promising tool to improve food risk assessment and protect public health. Published by Elsevier B.V.

  20. Sampling and analysis of airborne resin acids and solvent-soluble material derived from heated colophony (rosin) flux: a method to quantify exposure to sensitizing compounds liberated during electronics soldering.

    PubMed

    Smith, P A; Son, P S; Callaghan, P M; Jederberg, W W; Kuhlmann, K; Still, K R

    1996-07-17

    Resin acid components of colophony (rosin) are sensitizers through dermal and pulmonary exposure to both heated and unheated material. Significant work in the literature identifies specific resin acids and their oxidation products as sensitizers. Pulmonary exposure to colophony sensitizers has been estimated indirectly through formaldehyde exposure. To assess pulmonary sensitization from airborne resin acids, direct measurement is desired, as the degree to which aldehyde exposure correlates with that of resin acids during colophony heating is undefined. Any analytical method proposed should be applicable to a range of compounds and should also identify specific compounds present in a breathing zone sample. This work adapts OSHA Sampling and Analytical Method 58, which is designed to provide airborne concentration data for coal tar pitch volatile solids by air filtration through a glass fiber filter, solvent extraction of the filter, and gravimetric analysis of the non-volatile extract residue. In addition to data regarding total soluble material captured, a portion of the extract may be subjected to compound-specific analysis. Levels of soluble solids found in personal breathing-zone samples collected during electronics soldering at a Naval Aviation Depot ranged from below the "reliable quantitation limit" reported in the method to 7.98 mg/m3. Colophony-spiked filters analyzed in accordance with the method (modified) produced a limit of detection for total solvent-soluble colophony solids of 10 micrograms/filter. High performance liquid chromatography was used to identify abietic acid present in a breathing zone sample.
