Sample records for simplified quantitative method

  1. Simplified dichromated gelatin hologram recording process

    NASA Technical Reports Server (NTRS)

    Georgekutty, Tharayil G.; Liu, Hua-Kuang

    1987-01-01

    A simplified method for making dichromated gelatin (DCG) holographic optical elements (HOE) has been discovered. The method is much less tedious and requires a processing time comparable to that for a silver halide hologram. HOE characteristics including diffraction efficiency (DE), linearity, and spectral sensitivity have been quantitatively investigated. The quality of the holographic grating is very high. Ninety percent or higher diffraction efficiency has been achieved in simple plane gratings made by this process.

  2. Viewpoint on ISA TR84.0.02--simplified methods and fault tree analysis.

    PubMed

    Summers, A E

    2000-01-01

    ANSI/ISA-S84.01-1996 and IEC 61508 require the establishment of a safety integrity level for any safety instrumented system or safety-related system used to mitigate risk. Each stage of design, operation, maintenance, and testing is judged against this safety integrity level. Quantitative techniques can be used to verify whether the safety integrity level is met. ISA-dTR84.0.02, a technical report under development by ISA, discusses how to apply quantitative analysis techniques to safety instrumented systems. This paper discusses two of those techniques: (1) simplified equations and (2) fault tree analysis.
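    The "simplified equations" technique reduces SIL verification to closed-form approximations. As a rough illustration (not the TR's actual derivation), the sketch below applies the common single-channel approximation PFDavg ≈ λDU·TI/2; the failure rate and proof-test interval are hypothetical example values.

```python
# Illustrative sketch of a "simplified equation" SIL check for a
# single-channel (1oo1) safety instrumented function.
# lambda_du and the test interval below are invented example values.

def pfd_avg_1oo1(lambda_du_per_hr, test_interval_hr):
    """Average probability of failure on demand for a 1oo1 channel,
    using the common approximation PFDavg ~= lambda_DU * TI / 2."""
    return lambda_du_per_hr * test_interval_hr / 2.0

def sil_band(pfd):
    """Map a PFDavg value to a SIL band for low-demand operation."""
    if pfd < 1e-4:
        return 4
    if pfd < 1e-3:
        return 3
    if pfd < 1e-2:
        return 2
    if pfd < 1e-1:
        return 1
    return 0

# Hypothetical dangerous undetected failure rate, annual proof test
pfd = pfd_avg_1oo1(lambda_du_per_hr=2e-6, test_interval_hr=8760)
print(pfd, sil_band(pfd))
```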

  3. Quantitative method of medication system interface evaluation.

    PubMed

    Pingenot, Alleene Anne; Shanteau, James; Pingenot, James D F

    2007-01-01

    The objective of this study was to develop a quantitative method of evaluating the user interface for medication system software. A detailed task analysis provided a description of user goals and essential activity. A structural fault analysis was used to develop a detailed description of the system interface. Nurses experienced with use of the system under evaluation provided estimates of failure rates for each point in this simplified fault tree. Means of the estimated failure rates provided quantitative data for fault analysis. The authors note that, although failures of steps in the program were frequent, participants reported numerous methods of working around these failures, so that overall system failure was rare. However, frequent process failure can affect the time required for processing medications, making a system inefficient. This method of interface analysis, called the Software Efficiency Evaluation and Fault Identification Method, provides quantitative information with which prototypes can be compared and problems within an interface identified.
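    The workaround effect described above can be sketched numerically: even when individual steps fail often, a high recovery probability at each step makes overall (OR-gate) system failure rare. All probabilities below are invented for illustration.

```python
# Hypothetical sketch of the fault-tree arithmetic: per-step failure
# estimates combine through an OR gate, and a "workaround" recovery
# probability explains why frequent step failures can still yield
# rare overall system failure.
from math import prod

step_fail = [0.10, 0.05, 0.20]   # invented per-step failure rates
workaround = [0.95, 0.90, 0.98]  # chance a user recovers from each failure

# Unrecovered failure probability for each step
effective = [p * (1 - w) for p, w in zip(step_fail, workaround)]

# OR gate over independent steps: system fails if any step fails unrecovered
p_system = 1 - prod(1 - p for p in effective)
print(round(p_system, 4))
```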

  4. Accurate virus quantitation using a Scanning Transmission Electron Microscopy (STEM) detector in a scanning electron microscope.

    PubMed

    Blancett, Candace D; Fetterer, David P; Koistinen, Keith A; Morazzani, Elaine M; Monninger, Mitchell K; Piper, Ashley E; Kuehl, Kathleen A; Kearney, Brian J; Norris, Sarah L; Rossi, Cynthia A; Glass, Pamela J; Sun, Mei G

    2017-10-01

    A method for accurate quantitation of virus particles has long been sought, but a perfect method still eludes the scientific community. Electron Microscopy (EM) quantitation is a valuable technique because it provides direct morphology information and counts of all viral particles, whether or not they are infectious. In the past, EM negative stain quantitation methods have been cited as inaccurate and non-reproducible, with detection limits too high to be useful. To improve accuracy and reproducibility, we have developed a method termed Scanning Transmission Electron Microscopy - Virus Quantitation (STEM-VQ), which simplifies sample preparation and uses a high-throughput STEM detector in a Scanning Electron Microscope (SEM) coupled with commercially available software. In this paper, we demonstrate STEM-VQ with an alphavirus stock preparation to present the method's accuracy and reproducibility, including a comparison of STEM-VQ to viral plaque assay and the ViroCyt Virus Counter.

  5. A simplified and efficient method for the analysis of fatty acid methyl esters suitable for large clinical studies.

    PubMed

    Masood, Athar; Stark, Ken D; Salem, Norman

    2005-10-01

    Conventional sample preparation for fatty acid analysis is a complicated, multiple-step process, and gas chromatography (GC) analysis alone can require >1 h per sample to resolve fatty acid methyl esters (FAMEs). Fast GC analysis was adapted to human plasma FAME analysis using a modified polyethylene glycol column with a smaller internal diameter, thinner stationary phase film, increased carrier gas linear velocity, and faster temperature ramping. Our results indicated that fast GC analyses were comparable to conventional GC in peak resolution. A conventional transesterification method based on Lepage and Roy was simplified to a one-step method by eliminating the neutralization and centrifugation steps. A robotics-amenable method was also developed, with lower methylation temperatures and an open-tube format using multiple reagent additions. The simplified methods produced results that were quantitatively similar, with coefficients of variation comparable to those of the original Lepage and Roy method. The present streamlined methodology is suitable for the direct fatty acid analysis of human plasma, is appropriate for research studies, and will facilitate large clinical trials and population studies.

  6. Quantitative analysis of pyroglutamic acid in peptides.

    PubMed

    Suzuki, Y; Motoi, H; Sato, K

    1999-08-01

    A simplified and rapid procedure for the determination of pyroglutamic acid in peptides was developed. The method involves the enzymatic cleavage of the N-terminal pyroglutamate residue using a thermostable pyroglutamate aminopeptidase and isocratic HPLC separation of the resulting enzymatic hydrolysate using a column-switching technique. Pyroglutamate aminopeptidase from the thermophilic archaeon Pyrococcus furiosus cleaves the N-terminal pyroglutamic acid residue independently of the molecular weight of the substrate. It cleaves more than 85% of pyroglutamate from peptides whose molecular weights range from 362.4 to 4599.4 Da. Thus, a new method is presented that quantitatively estimates the N-terminal pyroglutamic acid residue in peptides.

  7. Regional and longitudinal estimation of product lifespan distribution: a case study for automobiles and a simplified estimation method.

    PubMed

    Oguchi, Masahiro; Fuse, Masaaki

    2015-02-03

    Product lifespan estimates are important information for understanding progress toward sustainable consumption and for estimating the stocks and end-of-life flows of products. Published studies have reported actual product lifespans; however, quantitative data are still limited for many countries and years. This study presents a regional and longitudinal estimation of the lifespan distribution of consumer durables, taking passenger cars as an example, and proposes a simplified method for estimating product lifespan distribution. We estimated lifespan distribution parameters for 17 countries based on the age profile of in-use cars. Sensitivity analysis demonstrated that the shape parameter of the lifespan distribution can be replaced by a constant value for all the countries and years. This enabled a simplified estimation that does not require detailed data on the age profile. Applying the simplified method, we estimated the trend in average lifespans of passenger cars from 2000 to 2009 for 20 countries. Average lifespan differed greatly between countries (9-23 years) and was increasing in many countries. This suggests that consumer behavior differs greatly among countries and has changed over time, even in developed countries. The results suggest that inappropriate assumptions about average lifespan may cause significant inaccuracy in estimating the stocks and end-of-life flows of products.
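    The simplification hinges on a standard identity for lifespan distributions such as the Weibull: mean = scale · Γ(1 + 1/shape). Once the shape parameter is held constant, the average lifespan and the scale parameter determine each other, so no detailed age profile is needed. A minimal sketch, with a hypothetical fixed shape value:

```python
# Sketch of the "fixed shape parameter" idea for a Weibull lifespan
# distribution. The shape and scale values below are hypothetical.
from math import gamma

def weibull_mean_lifespan(scale_years, shape):
    """Mean of a Weibull(shape, scale) lifespan distribution."""
    return scale_years * gamma(1 + 1 / shape)

def scale_from_mean(mean_years, shape):
    """Invert the relation: recover the scale from an observed mean,
    which is all the simplified estimation needs once shape is fixed."""
    return mean_years / gamma(1 + 1 / shape)

k = 2.5  # hypothetical fixed shape for all countries/years
print(weibull_mean_lifespan(scale_years=15.0, shape=k))
print(scale_from_mean(mean_years=13.0, shape=k))
```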

  8. Correction for the Hematocrit Bias in Dried Blood Spot Analysis Using a Nondestructive, Single-Wavelength Reflectance-Based Hematocrit Prediction Method.

    PubMed

    Capiau, Sara; Wilk, Leah S; De Kesel, Pieter M M; Aalders, Maurice C G; Stove, Christophe P

    2018-02-06

    The hematocrit (Hct) effect is one of the most important hurdles currently preventing more widespread implementation of quantitative dried blood spot (DBS) analysis in a routine context. Indeed, the Hct may affect both the accuracy of DBS methods and the interpretation of DBS-based results. We previously developed a method to determine the Hct of a DBS based on its hemoglobin content using noncontact diffuse reflectance spectroscopy. Despite the ease with which the analysis can be performed (i.e., mere scanning of the DBS) and the good results that were obtained, the method did require a complicated algorithm to derive the total hemoglobin content from the DBS's reflectance spectrum. Because total hemoglobin was calculated as the sum of oxyhemoglobin, methemoglobin, and hemichrome, the three main hemoglobin derivatives formed in DBS upon aging, the reflectance spectrum needed to be unmixed to determine the quantity of each of these derivatives. We have now simplified the method by using the reflectance at only a single wavelength, located at a quasi-isosbestic point in the reflectance curve. At this wavelength, assuming 1-to-1 stoichiometry of the aging reaction, the reflectance is insensitive to hemoglobin degradation and scales only with the total amount of hemoglobin and, hence, the Hct. This simplified method was successfully validated. At each quality control level, as well as at the limits of quantitation (i.e., 0.20 and 0.67), bias and intra- and interday imprecision were within 10%. Method reproducibility was excellent based on incurred sample reanalysis and surpassed the reproducibility of the original method. Furthermore, the influence of the volume spotted, the measurement location within the spot, and storage time and temperature were evaluated, showing no relevant impact of these parameters. Application to 233 patient samples revealed a good correlation between the Hct determined on whole blood and the predicted Hct determined on venous DBS.
The bias obtained with Bland and Altman analysis was -0.015 and the limits of agreement were -0.061 and 0.031, indicating that the simplified, noncontact Hct prediction method even outperforms the original method. In addition, using caffeine as a model compound, it was demonstrated that this simplified Hct prediction method can effectively be used to implement a Hct-dependent correction factor to DBS-based results to alleviate the Hct bias.
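    The Bland-Altman statistics quoted above (bias and 95% limits of agreement) are the mean and mean ± 1.96·SD of the paired differences. A small sketch; the paired Hct values below are invented, not the study's data:

```python
# Bland-Altman bias and limits of agreement for paired measurements.
# The whole-blood and predicted Hct values are invented illustrations.
from statistics import mean, stdev

whole_blood_hct = [0.38, 0.42, 0.31, 0.47, 0.35, 0.40]
predicted_hct   = [0.37, 0.43, 0.30, 0.45, 0.36, 0.39]

diffs = [p - w for p, w in zip(predicted_hct, whole_blood_hct)]
bias = mean(diffs)                     # systematic difference
sd = stdev(diffs)                      # spread of the differences
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd
print(round(bias, 3), round(loa_low, 3), round(loa_high, 3))
```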

  9. DNA DAMAGE QUANTITATION BY ALKALINE GEL ELECTROPHORESIS.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    SUTHERLAND,B.M.; BENNETT,P.V.; SUTHERLAND, J.C.

    2004-03-24

    Physical and chemical agents in the environment, those used in clinical applications, or encountered during recreational exposures to sunlight, induce damages in DNA. Understanding the biological impact of these agents requires quantitation of the levels of such damages in laboratory test systems as well as in field or clinical samples. Alkaline gel electrophoresis provides a sensitive (down to approximately a few lesions/5 Mb), rapid method of direct quantitation of a wide variety of DNA damages in nanogram quantities of non-radioactive DNAs from laboratory, field, or clinical specimens, including higher plants and animals. This method stems from velocity sedimentation studies of DNA populations, and from the simple methods of agarose gel electrophoresis. Our laboratories have developed quantitative agarose gel methods, analytical descriptions of DNA migration during electrophoresis on agarose gels (1-6), and electronic imaging for accurate determinations of DNA mass (7-9). Although all these components improve sensitivity and throughput of large numbers of samples (7,8,10), a simple version using only standard molecular biology equipment allows routine analysis of DNA damages at moderate frequencies. We present here a description of the methods, as well as a brief description of the underlying principles, required for a simplified approach to quantitation of DNA damages by alkaline gel electrophoresis.
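    The quantitation step can be sketched as follows: when lesions are converted to strand breaks at random positions, the lesion frequency per base follows from the number-average fragment lengths of treated versus control DNA. The lengths below are hypothetical, and this is a simplified illustration of the principle rather than the authors' full analysis:

```python
# Hedged sketch: lesion frequency from number-average fragment
# lengths (random cleavage assumption). Lengths are hypothetical,
# in bases.
Ln_control = 5.0e6   # number-average length, undamaged DNA
Ln_treated = 1.0e6   # number-average length after lesion cleavage

# Each extra cut adds one fragment, so lesions/base is the
# difference of reciprocal number-average lengths.
lesions_per_base = 1 / Ln_treated - 1 / Ln_control
lesions_per_5Mb = lesions_per_base * 5.0e6
print(lesions_per_5Mb)
```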

  10. A Simplified Technique to Measure Plaque on the Intaglio Surfaces of Complete Dentures.

    PubMed

    Almas, Khalid; Salameh, Ziad; Kutkut, Ahmad; Al Doubali, Ahmad

    2015-04-01

    The main aim of this study was to develop a simplified quantitative denture plaque index that could help dentists to motivate denture patients to maintain optimal oral hygiene. The secondary aim was to assess specific areas of dentures more prone to accumulate plaque and subjects' oral hygiene habits related to their dentures. One hundred subjects who wore maxillary and/or mandibular complete dentures for at least one year were included in the study as a powered sample. Fifteen females and 85 males, age range 45-75 years, were recruited. The study was carried out at King Saud University (KSU), College of Dentistry. A plaque disclosing solution was used to assess the plaque covered areas of denture. A quantitative percentage (10 x 10%) score index was developed by assessing plaque scores from digital images of intaglio surfaces of the dentures. The weighted kappa method was used to assess inter-examiner agreement in the main study. The new denture plaque index was identified as ASKD-DPI (Almas, Salameh, Kutkut, and Doubali-Denture Plaque Index). It ranged from 0 - 100%, and reflected the percentage of the intaglio surfaces of maxillary and mandibular complete dentures that contained plaque. It also classified quantitative percentages: 30 subjects ranged from 0 - 30% (low DPI), 50 subjects ranged from 31 - 70% (moderate DPI), and 20 subjects ranged from 71 - 100% (high DPI) denture plaque score. A simplified denture plaque index (ASKD-DPI) technique was developed and tested in this study. ASKD-DPI may be used for evaluating denture plaque scores, monitoring denture hygiene, and measuring compliance of patients regarding plaque control for complete dentures.
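    The percent-area scoring behind such an index can be sketched from a thresholded digital image: count the pixels stained by the disclosing solution and report the stained fraction of the intaglio surface, then band the score. The mask below and the scoring helper are hypothetical; only the 0-30/31-70/71-100% bands come from the study.

```python
# Hypothetical sketch of scoring a denture-plaque percentage from a
# digital image represented as a 2D boolean mask (True = stained).

def plaque_percent(mask):
    """Percent of surface pixels stained by disclosing solution."""
    total = sum(len(row) for row in mask)
    stained = sum(sum(row) for row in mask)
    return 100.0 * stained / total

def dpi_band(percent):
    """Band a score using the index's low/moderate/high cutoffs."""
    if percent <= 30:
        return "low"
    if percent <= 70:
        return "moderate"
    return "high"

mask = [[True, False, False, False],
        [True, True, False, False]]
print(plaque_percent(mask), dpi_band(plaque_percent(mask)))
```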

  11. Mathematics of quantitative kinetic PCR and the application of standard curves.

    PubMed

    Rutledge, R G; Côté, C

    2003-08-15

    Fluorescent monitoring of DNA amplification is the basis of real-time PCR, from which target DNA concentration can be determined from the fractional cycle at which a threshold amount of amplicon DNA is produced. Absolute quantification can be achieved using a standard curve constructed by amplifying known amounts of target DNA. In this study, the mathematics of quantitative PCR are examined in detail, from which several fundamental aspects of the threshold method and the application of standard curves are illustrated. The construction of five replicate standard curves for two pairs of nested primers was used to examine the reproducibility and degree of quantitative variation using SYBR Green I fluorescence. Based upon this analysis, the application of a single, well-constructed standard curve could provide an estimated precision of ±6-21%, depending on the number of cycles required to reach threshold. A simplified method for absolute quantification is also proposed, in which quantitative scale is determined by DNA mass at threshold.
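    The standard-curve arithmetic underlying the threshold method can be sketched directly: Ct is linear in log10(starting quantity), so standards of known mass give the slope, intercept, and amplification efficiency, and unknowns are read off the fitted line. The Ct values below are invented for illustration.

```python
# Sketch of the qPCR standard-curve math. Standards and Ct values
# are hypothetical; efficiency = 10^(-1/slope) - 1, where 1.0 would
# mean perfect doubling each cycle.
from math import log10
from statistics import mean

standards_ng = [10.0, 1.0, 0.1, 0.01]
ct = [18.0, 21.3, 24.6, 27.9]   # hypothetical threshold cycles

xs = [log10(q) for q in standards_ng]
xbar, ybar = mean(xs), mean(ct)
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ct))
         / sum((x - xbar) ** 2 for x in xs))
intercept = ybar - slope * xbar

efficiency = 10 ** (-1 / slope) - 1

def quantity_ng(ct_value):
    """Starting mass (ng) for an unknown's threshold cycle."""
    return 10 ** ((ct_value - intercept) / slope)

print(round(slope, 2), round(efficiency, 3), round(quantity_ng(23.0), 3))
```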

  12. Can matrix solid phase dispersion (MSPD) be more simplified? Application of solventless MSPD sample preparation method for GC-MS and GC-FID analysis of plant essential oil components.

    PubMed

    Wianowska, Dorota; Dawidowicz, Andrzej L

    2016-05-01

    This paper proposes and shows the analytical capabilities of a new variant of matrix solid phase dispersion (MSPD) with the solventless blending step in the chromatographic analysis of plant volatiles. The obtained results prove that the use of a solvent is redundant as the sorption ability of the octadecyl brush is sufficient for quantitative retention of volatiles from 9 plants differing in their essential oil composition. The extraction efficiency of the proposed simplified MSPD method is equivalent to the efficiency of the commonly applied variant of MSPD with the organic dispersing liquid and pressurized liquid extraction, which is a much more complex, technically advanced and highly efficient technique of plant extraction. The equivalency of these methods is confirmed by the variance analysis. The proposed solventless MSPD method is precise, accurate, and reproducible. The recovery of essential oil components estimated by the MSPD method exceeds 98%, which is satisfactory for analytical purposes.

  13. Simplified spectrophotometric method for the detection of red blood cell agglutination.

    PubMed

    Ramasubramanian, Melur; Anthony, Steven; Lambert, Jeremy

    2008-08-01

    Human error is the most significant factor attributed to incompatible blood transfusions. A spectrophotometric approach to blood typing has been developed by examining the spectral slopes of dilute red blood cell (RBC) suspensions in saline, in the presence and absence of various antibodies, offering a technique for the quantitative determination of agglutination intensity [Transfusion 39, 1051 (1999); doi:10.1046/j.1537-2995.1999.39101051.x]. We offer direct theoretical prediction of the observed change in slope in the 660-1000 nm range through the use of the T-matrix approach and Lorenz-Mie theory for light scattering by dilute RBC suspensions. Following a numerical simulation using the T-matrix code, we present a simplified sensing method for detecting agglutination. The sensor design has been prototyped, fully characterized, and evaluated through a complete set of tests with over 60 RBC samples and compared with the full spectrophotometric method. The LED and photodiode pairs are found to successfully reproduce the spectroscopic determination of red blood cell agglutination.

  14. Milrinone therapeutic drug monitoring in a pediatric population: Development and validation of a quantitative liquid chromatography-tandem mass spectrometry method.

    PubMed

    Raizman, Joshua E; Taylor, Katherine; Parshuram, Christopher; Colantonio, David A

    2017-05-01

    Milrinone is a potent selective phosphodiesterase type III inhibitor which stimulates myocardial function and improves myocardial relaxation. Although therapeutic monitoring is crucial to maintaining therapeutic outcome, few data are available. A proof-of-principle study has been initiated in our institution to evaluate the clinical impact of optimizing milrinone dosing through therapeutic drug monitoring (TDM) in children following cardiac surgery. We developed a robust LC-MS/MS method to quantify milrinone in serum from pediatric patients in real time. A liquid-liquid extraction procedure was used to prepare samples for analysis prior to measurement by LC-MS/MS. Performance characteristics, such as linearity, limit of quantitation (LOQ), and precision, were assessed. Patient samples were acquired post-surgery and analyzed to determine the concentration-time profile of the drug as well as to track turnaround times. Within-day precision was <8.3% across 3 levels of QC. Between-day precision was <12%. The method was linear from 50 to 800 μg/l; the lower limit of quantification was 22 μg/l. Comparison with another LC-MS/MS method showed good agreement. Using this simplified method, turnaround times within 3-6 h were achievable, and patient drug profiles demonstrated that some milrinone levels were either sub-therapeutic or in the toxic range, highlighting the importance of milrinone TDM. This simplified and quick method proved to be analytically robust and able to provide therapeutic monitoring of milrinone in real time in patients post-cardiac surgery.

  15. Quantitative Real-Time PCR using the Thermo Scientific Solaris qPCR Assay

    PubMed Central

    Ogrean, Christy; Jackson, Ben; Covino, James

    2010-01-01

    The Solaris qPCR Gene Expression Assay is a novel type of primer/probe set, designed to simplify the qPCR process while maintaining the sensitivity and accuracy of the assay. These primer/probe sets are pre-designed against >98% of the human and mouse genomes and feature significant improvements over previously available technologies. These improvements were made possible by virtue of a novel design algorithm developed by Thermo Scientific bioinformatics experts. Several convenient features have been incorporated into the Solaris qPCR Assay to streamline the process of performing quantitative real-time PCR. First, the protocol is similar to commonly employed alternatives, so the methods used during qPCR are likely to be familiar. Second, the master mix is blue, which makes setting up the qPCR reactions easier to track. Third, the thermal cycling conditions are the same for all assays (genes), making it possible to run many samples at a time and reducing the potential for error. Finally, the probe and primer sequence information are provided, simplifying the publication process. Here, we demonstrate how to obtain the appropriate Solaris reagents using the GENEius product search feature found on the ordering web site (www.thermo.com/solaris) and how to use the Solaris reagents for performing qPCR using the standard curve method. PMID:20567213

  16. Adduct simplification in the analysis of cyanobacterial toxins by matrix-assisted laser desorption/ionization mass spectrometry.

    PubMed

    Howard, Karen L; Boyer, Gregory L

    2007-01-01

    A novel method for simplifying adduct patterns to improve the detection and identification of peptide toxins using matrix-assisted laser desorption/ionization (MALDI) time-of-flight (TOF) mass spectrometry is presented. Addition of 200 μM zinc sulfate heptahydrate (ZnSO4·7H2O) to samples prior to spotting on the target enhances detection of the protonated molecule while suppressing competing adducts. This produces a highly simplified spectrum with the potential to enhance quantitative analysis, particularly for complex samples. The resulting improvement in total signal strength and reduction in the coefficient of variation (from 31.1% to 5.2% for microcystin-LR) further enhance the potential for sensitive and accurate quantitation. Other potential additives tested, including 18-crown-6 ether, alkali metal salts (lithium chloride, sodium chloride, potassium chloride), and other transition metal salts (silver chloride, silver nitrate, copper(II) nitrate, copper(II) sulfate, zinc acetate), were unable to achieve comparable results. Application of this technique to the analysis of several microcystins, potent peptide hepatotoxins from cyanobacteria, is illustrated.

  17. Quantitative accuracy of the simplified strong ion equation to predict serum pH in dogs.

    PubMed

    Cave, N J; Koo, S T

    2015-01-01

    The electrochemical approach to the assessment of acid-base status should provide a better mechanistic explanation of the metabolic component than methods that consider only pH and carbon dioxide. The hypothesis was that the simplified strong ion equation (SSIE), using published dog-specific values, would predict the measured serum pH of diseased dogs. Ten dogs, hospitalized for various reasons, were studied in a prospective convenience sample of a consecutive series of dogs admitted to the Massey University Veterinary Teaching Hospital (MUVTH), from which serum biochemistry and blood gas analyses were performed at the same time. Serum pH was calculated ([H+]calc) using the SSIE and published values for the concentration and dissociation constant of the nonvolatile weak acids (Atot and Ka), and [H+]calc was then compared with each dog's measured pH ([H+]measured). To determine the source of discordance between [H+]calc and [H+]measured, the calculations were repeated using a series of substituted values for Atot and Ka. [H+]calc did not approximate [H+]measured for any dog (P = 0.499, r² = 0.068) and was consistently more basic. Substituted values of Atot and Ka did not significantly improve the accuracy (r² = 0.169 to <0.001). Substituting the effective SID (Atot - [HCO3-]) produced a strong association between [H+]calc and [H+]measured (r² = 0.977). Using the simplified strong ion equation and the published values for Atot and Ka does not appear to provide a quantitative explanation of the acid-base status of dogs. The efficacy of substituting the effective SID in the simplified strong ion equation suggests that the error lies in calculating the SID.
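    The pH calculation being tested can be sketched as a charge balance solved for [H+]: SID = [HCO3-] + [A-], with [HCO3-] = K1'·S·pCO2/[H+] and [A-] = Ka·Atot/(Ka + [H+]). The sketch below solves this numerically by bisection; the lumped constant and all inputs are generic illustrations, not the published dog-specific values.

```python
# Hedged sketch of a strong-ion pH calculation: solve the charge
# balance sid = k1s*pco2/h + ka*atot/(ka + h) for h = [H+] (Eq/L).
# k1s lumps K1' and CO2 solubility; all values are illustrative.
from math import log10

def hydrogen_ion(sid, atot, ka, pco2, k1s=2.44e-11):
    def charge_gap(h):
        return k1s * pco2 / h + ka * atot / (ka + h) - sid
    lo, hi = 1e-9, 1e-6          # bracket: pH 9 down to pH 6
    for _ in range(200):
        mid = (lo + hi) / 2
        if charge_gap(mid) > 0:  # anions exceed SID: [H+] must rise
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Illustrative inputs: SID and Atot in Eq/L, Ka in Eq/L, pCO2 in mmHg
h = hydrogen_ion(sid=0.038, atot=0.0175, ka=8e-8, pco2=37.0)
print(round(-log10(h), 2))
```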

  18. The FY 1980 Department of Defense Program for Research, Development, and Acquisition

    DTIC Science & Technology

    1979-02-01

    materiel. Up to a point, superior performance is an offset to this quantitative disadvantage. Lanchester's theory of warfare derived simplified relations...intermediate ranges. Underground Test. The next scheduled underground test (UGT), MINERS IRON, in FY 1980, will provide engineering and design data on...methods of discriminating between UGTs and earthquakes, and address U.S. capabilities to monitor both the existing Threshold Test Ban Treaty and the

  19. Spectroscopy by joint spectral and time domain optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Szkulmowski, Maciej; Tamborski, Szymon; Wojtkowski, Maciej

    2015-03-01

    We present a methodology for spectroscopic examination of absorbing media that combines Spectral Optical Coherence Tomography and Fourier Transform Spectroscopy. The method is based on the joint Spectral and Time OCT computational scheme and simplifies the data analysis procedure compared with the commonly used windowing-based Spectroscopic OCT methods. The proposed experimental setup is self-calibrating in terms of wavelength-pixel assignment. The performance of the method in measuring absorption spectra was checked using a reflecting phantom filled with an absorbing agent (indocyanine green). The results show quantitative agreement with the controlled exact results provided by the reference method.

  20. A 96-well-plate-based optical method for the quantitative and qualitative evaluation of Pseudomonas aeruginosa biofilm formation and its application to susceptibility testing.

    PubMed

    Müsken, Mathias; Di Fiore, Stefano; Römling, Ute; Häussler, Susanne

    2010-08-01

    A major reason for bacterial persistence during chronic infections is the survival of bacteria within biofilm structures, which protect cells from environmental stresses, host immune responses and antimicrobial therapy. Thus, there is concern that laboratory methods developed to measure the antibiotic susceptibility of planktonic bacteria may not be relevant to chronic biofilm infections, and it has been suggested that alternative methods should test antibiotic susceptibility within a biofilm. In this paper, we describe a fast and reliable protocol for using 96-well microtiter plates for the formation of Pseudomonas aeruginosa biofilms; the method is easily adaptable for antimicrobial susceptibility testing. This method is based on bacterial viability staining in combination with automated confocal laser scanning microscopy. The procedure simplifies qualitative and quantitative evaluation of biofilms and has proven to be effective for standardized determination of antibiotic efficiency on P. aeruginosa biofilms. The protocol can be performed within approximately 60 h.

  1. Improvements to direct quantitative analysis of multiple microRNAs facilitating faster analysis.

    PubMed

    Ghasemi, Farhad; Wegman, David W; Kanoatov, Mirzo; Yang, Burton B; Liu, Stanley K; Yousef, George M; Krylov, Sergey N

    2013-11-05

    Studies suggest that patterns of deregulation in sets of microRNA (miRNA) can be used as cancer diagnostic and prognostic biomarkers. Establishing a "miRNA fingerprint"-based diagnostic technique requires a suitable miRNA quantitation method. The appropriate method must be direct, sensitive, capable of simultaneous analysis of multiple miRNAs, rapid, and robust. Direct quantitative analysis of multiple microRNAs (DQAMmiR) is a recently introduced capillary electrophoresis-based hybridization assay that satisfies most of these criteria. Previous implementations of the method suffered, however, from slow analysis time and required lengthy and stringent purification of hybridization probes. Here, we introduce a set of critical improvements to DQAMmiR that address these technical limitations. First, we have devised an efficient purification procedure that achieves the required purity of the hybridization probe in a fast and simple fashion. Second, we have optimized the concentrations of the DNA probe to decrease the hybridization time to 10 min. Lastly, we have demonstrated that the increased probe concentrations and decreased incubation time removed the need for masking DNA, further simplifying the method and increasing its robustness. The presented improvements bring DQAMmiR closer to use in a clinical setting.

  2. A quantitative visual dashboard to explore exposures to consumer product ingredients

    EPA Science Inventory

    The Exposure Prioritization (Ex Priori) model features a simplified, quantitative visual dashboard to explore exposures across chemical space. Diverse data streams are integrated within the interface such that different exposure scenarios for “individual,” “pop...

  3. Potential application of the consistency approach for vaccine potency testing.

    PubMed

    Arciniega, J; Sirota, L A

    2012-01-01

    The Consistency Approach offers the possibility of reducing the number of animals used for a potency test. However, it is critical to assess the effect that such a reduction may have on assay performance. Consistency of production, sometimes referred to as consistency of manufacture or manufacturing, is an old concept implicit in regulation, which aims to ensure the uninterrupted release of safe and effective products. Consistency of manufacture can be described in terms of process capability, or the ability of a process to produce output within specification limits. For example, the standard method for potency testing of inactivated rabies vaccines is a multiple-dilution vaccination challenge test in mice that gives a quantitative, although highly variable, estimate. On the other hand, a single-dilution test has been proposed that does not give a quantitative estimate but rather shows whether the vaccine meets the specification. This simplified test can lead to a considerable reduction in the number of animals used. However, traditional indices of process capability assume that the output population (potency values) is normally distributed, which clearly is not the case for the simplified approach. Appropriate computation of capability indices for the latter case will require special statistical considerations.
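    The normal-theory capability indices the abstract says break down for pass/fail data can be sketched briefly: Cp = (USL - LSL)/6σ and Cpk = min(USL - μ, μ - LSL)/3σ, both assuming normally distributed output. The potency values and specification limits below are hypothetical.

```python
# Sketch of classical process capability indices under the
# normality assumption the abstract questions. Batch potencies
# and spec limits are invented example values.
from statistics import mean, stdev

potencies = [2.6, 2.9, 3.1, 2.8, 3.0, 2.7, 3.2, 2.9]  # hypothetical IU/dose
lsl, usl = 2.5, 3.5                                    # hypothetical spec limits

mu, sigma = mean(potencies), stdev(potencies)
cp = (usl - lsl) / (6 * sigma)            # potential capability
cpk = min(usl - mu, mu - lsl) / (3 * sigma)  # capability with centering
print(round(cp, 2), round(cpk, 2))
```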

  4. Comparison of two trajectory based models for locating particle sources for two rural New York sites

    NASA Astrophysics Data System (ADS)

    Zhou, Liming; Hopke, Philip K.; Liu, Wei

    Two back-trajectory-based statistical models, simplified quantitative transport bias analysis (QTBA) and residence-time weighted concentrations (RTWC), have been compared for their capabilities of identifying likely locations of source emissions contributing to observed particle concentrations at Potsdam and Stockton, New York. QTBA attempts to take into account the distribution of concentrations around the directions of the back trajectories. The full QTBA approach also considers deposition processes (wet and dry); simplified QTBA omits the consideration of deposition and is best used with multiple-site data. Similarly, the RTWC approach uses concentrations measured at different sites along with the back trajectories to distribute the concentration contributions across the spatial domain of the trajectories. In this study, these models are used in combination with the source contribution values obtained by a previous positive matrix factorization analysis of particle composition data from Potsdam and Stockton. The six sources common to the two sites (sulfate, soil, zinc smelter, nitrate, wood smoke, and copper smelter) were analyzed. The results of the two methods are consistent and locate large, clearly defined sources well. The RTWC approach can find more minor sources but may also give unrealistic estimates of source locations.
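    The residence-time weighting idea can be sketched in a few lines: each grid cell's estimated concentration is the residence-time-weighted mean of the concentrations measured whenever a back trajectory crossed it. The trajectories and measurements below are invented, and this is a schematic of the weighting only, not the full RTWC model:

```python
# Hedged sketch of residence-time weighted concentrations: cell
# estimates are residence-time-weighted means over all trajectories
# that crossed the cell. All data are invented.
from collections import defaultdict

# (measured concentration at the receptor, [(grid cell, hours resided)])
trajectories = [
    (5.0, [("A", 2.0), ("B", 1.0)]),
    (1.0, [("B", 3.0)]),
    (4.0, [("A", 1.0), ("C", 2.0)]),
]

num = defaultdict(float)
den = defaultdict(float)
for conc, path in trajectories:
    for cell, tau in path:
        num[cell] += conc * tau
        den[cell] += tau

rtwc = {cell: num[cell] / den[cell] for cell in num}
print(rtwc)
```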

  5. Statistical genetics and evolution of quantitative traits

    NASA Astrophysics Data System (ADS)

    Neher, Richard A.; Shraiman, Boris I.

    2011-10-01

    The distribution and heritability of many traits depend on numerous loci in the genome. In general, the astronomical number of possible genotypes makes a system with large numbers of loci difficult to describe. Multilocus evolution, however, greatly simplifies in the limit of weak selection and frequent recombination. In this limit, populations rapidly reach quasi-linkage equilibrium (QLE), in which the dynamics of the full genotype distribution, including correlations between alleles at different loci, can be parametrized by the allele frequencies. This review provides a simplified exposition of the concept and mathematics of QLE, which is central to the statistical description of genotypes in sexual populations. Key results of quantitative genetics, such as the generalized Fisher’s “fundamental theorem” and Wright’s adaptive landscape, are shown to emerge within QLE from the dynamics of the genotype distribution. This is followed by a discussion of the circumstances under which QLE is applicable, and of what the breakdown of QLE implies for population structure and the dynamics of selection. Understanding the fundamental aspects of multilocus evolution obtained through simplified models may be helpful in providing conceptual and computational tools to address the challenges arising in studies of complex quantitative phenotypes of practical interest.
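
    The “fundamental theorem” mentioned above has an exact discrete analogue for a single haploid locus, which makes a compact numerical check (a toy illustration, not the review's multilocus QLE derivation): after one generation of selection, the change in mean fitness equals Var(w)/w̄.

```python
def select(p, w1, w2):
    """One generation of haploid selection at a biallelic locus:
    allele 1 at frequency p has fitness w1, allele 2 has w2."""
    wbar = p * w1 + (1 - p) * w2
    return p * w1 / wbar

p, w1, w2 = 0.3, 1.2, 1.0
wbar = p * w1 + (1 - p) * w2
var_w = p * w1**2 + (1 - p) * w2**2 - wbar**2

p_next = select(p, w1, w2)
wbar_next = p_next * w1 + (1 - p_next) * w2

# Fisher's relation holds exactly in this one-locus haploid case
assert abs((wbar_next - wbar) - var_w / wbar) < 1e-12
```

    In the multilocus QLE regime the analogous statement involves the additive genetic variance, with corrections from linkage disequilibria that the review develops.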

  6. Probabilistic framework for product design optimization and risk management

    NASA Astrophysics Data System (ADS)

    Keski-Rahkonen, J. K.

    2018-05-01

    Probabilistic methods have gradually gained ground within engineering practice, but it is still the industry standard to use deterministic safety-margin approaches to dimension components and qualitative methods to manage product risks. These methods are suitable for baseline design work, but quantitative risk management and product reliability optimization require more advanced predictive approaches. Ample research has been published on how to predict failure probabilities for mechanical components and, furthermore, on how to optimize reliability through life-cycle cost analysis. This paper reviews the literature for existing methods and aims to harness their best features and simplify the process to be applicable in practical engineering work. The recommended process applies the Monte Carlo method on top of load-resistance models to estimate failure probabilities. Furthermore, it adds to the existing literature by introducing a practical framework for using probabilistic models in quantitative risk management and product life-cycle cost optimization. The main focus is on mechanical failure modes because of the well-developed methods used to predict these types of failures. However, the same framework can be applied to any type of failure mode as long as predictive models can be developed.
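
    The recommended load-resistance Monte Carlo step can be sketched directly; the distribution parameters below are illustrative, not from the paper:

```python
import numpy as np

def failure_probability(n=1_000_000, seed=1):
    """Monte Carlo estimate of P(load > resistance) with normally
    distributed load and resistance (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    load = rng.normal(100.0, 15.0, n)        # applied stress, e.g. MPa
    resistance = rng.normal(160.0, 20.0, n)  # material strength, MPa
    return np.mean(load > resistance)
```

    For these normal inputs the safety margin R − L is itself normal, so the estimate can be checked against Φ(−60/25) ≈ 0.0082; for non-normal or correlated inputs the Monte Carlo estimate is the practical route, and the resulting failure probability feeds directly into expected life-cycle cost.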

  7. Simple and fast spectral domain algorithm for quantitative phase imaging of living cells with digital holographic microscopy

    NASA Astrophysics Data System (ADS)

    Min, Junwei; Yao, Baoli; Ketelhut, Steffi; Kemper, Björn

    2017-02-01

    The modular combination of optical microscopes with digital holographic microscopy (DHM) has been proven to be a powerful tool for quantitative live cell imaging. The introduction of a condenser and different microscope objectives (MOs) simplifies the usage of the technique and makes it easier to measure different kinds of specimens at different magnifications. However, the high flexibility of illumination and imaging also causes variable phase aberrations that need to be eliminated for high-resolution quantitative phase imaging. Existing phase aberration compensation methods either require additional elements in the reference arm or need specimen-free reference areas or separate reference holograms to build up suitable digital phase masks. These inherent requirements make them impractical for use with highly variable illumination and imaging systems and prevent on-line monitoring of living cells. In this paper, we present a simple numerical method for phase aberration compensation based on the analysis of holograms in the spatial frequency domain, with capabilities for on-line quantitative phase imaging. From a single-shot off-axis hologram, the whole phase aberration can be eliminated automatically without numerical fitting or pre-knowledge of the setup. The capabilities and robustness for quantitative phase imaging of living cancer cells are demonstrated.
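
    The core spectral-domain step — locating the off-axis carrier as the strongest non-DC peak of the hologram spectrum, so that the linear (tilt) part of the phase aberration can be removed by recentring it — can be sketched as follows (a simplified illustration, not the authors' full algorithm):

```python
import numpy as np

def carrier_peak(hologram):
    """Find the off-axis carrier (+1 order) as the strongest spectral
    peak outside a small DC exclusion zone. Shifting this peak to the
    spectrum centre demodulates the carrier, i.e. removes the linear
    part of the phase aberration without any fitting."""
    spectrum = np.fft.fftshift(np.fft.fft2(hologram))
    mag = np.abs(spectrum)
    cy, cx = mag.shape[0] // 2, mag.shape[1] // 2
    mag[cy - 5:cy + 6, cx - 5:cx + 6] = 0.0  # suppress the DC term
    ky, kx = np.unravel_index(np.argmax(mag), mag.shape)
    return ky - cy, kx - cx  # carrier frequency offset in pixels

# Synthetic off-axis fringes with a known carrier of (10, 4) cycles
y, x = np.mgrid[0:128, 0:128]
holo = 1.0 + np.cos(2 * np.pi * (10 * y + 4 * x) / 128)
```

    A real cosine fringe has two conjugate orders of equal magnitude, so the peak may be found at either sign of the carrier; higher-order aberration terms require the further spectral analysis the paper develops.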

  8. Quality-by-Design II: Application of Quantitative Risk Analysis to the Formulation of Ciprofloxacin Tablets.

    PubMed

    Claycamp, H Gregg; Kona, Ravikanth; Fahmy, Raafat; Hoag, Stephen W

    2016-04-01

    Qualitative risk assessment methods are often used as the first step to determining design space boundaries; however, quantitative assessments of risk with respect to the design space, i.e., calculating the probability of failure for a given severity, are needed to fully characterize design space boundaries. Quantitative risk assessment methods in design and operational spaces are a significant aid to evaluating proposed design space boundaries. The goal of this paper is to demonstrate a relatively simple strategy for design space definition using a simplified Bayesian Monte Carlo simulation. This paper builds on a previous paper that used failure mode and effects analysis (FMEA) qualitative risk assessment and Plackett-Burman design of experiments to identify the critical quality attributes. The results show that the sequential use of qualitative and quantitative risk assessments can focus the design of experiments on a reduced set of critical material and process parameters that determine a robust design space under conditions of limited laboratory experimentation. This approach provides a strategy by which the degree of risk associated with each known parameter can be calculated and allocates resources in a manner that manages risk to an acceptable level.
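
    The simplified Bayesian Monte Carlo idea — propagate parameter uncertainty through a response model and report the probability of meeting the specification at each candidate operating point — can be sketched with an invented linear model (the model, specification, and uncertainties below are illustrative, not the ciprofloxacin data):

```python
import numpy as np

def prob_meets_spec(setting, n=200_000, seed=2):
    """Probability that a critical quality attribute meets its
    specification at a given process setting, with model parameters
    drawn from their (here, hypothetical) posterior distributions."""
    rng = np.random.default_rng(seed)
    intercept = rng.normal(40.0, 2.0, n)   # uncertain model intercept
    slope = rng.normal(8.0, 0.5, n)        # uncertain model slope
    noise = rng.normal(0.0, 1.5, n)        # batch-to-batch variability
    attribute = intercept + slope * setting + noise
    return np.mean(attribute >= 80.0)      # spec: attribute >= 80
```

    Scanning `setting` and keeping only the points whose failure probability stays below the tolerated risk traces a quantitative design space boundary, rather than the qualitative boundary FMEA alone provides.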

  9. Towards quantitative magnetic particle imaging: A comparison with magnetic particle spectroscopy

    NASA Astrophysics Data System (ADS)

    Paysen, Hendrik; Wells, James; Kosch, Olaf; Steinhoff, Uwe; Trahms, Lutz; Schaeffter, Tobias; Wiekhorst, Frank

    2018-05-01

    Magnetic Particle Imaging (MPI) is a quantitative imaging modality with promising features for several biomedical applications. Here, we study quantitatively the raw data obtained during MPI measurements. We present a method for the calibration of the MPI scanner output using measurements from a magnetic particle spectrometer (MPS) to yield data in units of magnetic moments. The calibration technique is validated in a simplified MPI mode with a 1D excitation field. Using the calibrated results from MPS and MPI, we determine and compare the detection limits for each system. The detection limits were found to be 5×10⁻¹² Am² for MPS and 3.6×10⁻¹⁰ Am² for MPI. Finally, the quantitative information contained in a standard MPI measurement with a 3D excitation is analyzed and compared to the previous results, showing a decrease in the signal amplitudes of the odd harmonics relative to the 1D excitation case. We propose physical explanations for all acquired results and discuss the possible benefits for the improvement of MPI technology.
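
    The calibration logic — regress scanner output against MPS-measured moments to obtain a sensitivity, then convert the noise floor into a moment detection limit — can be sketched with invented numbers (the values below are illustrative, not the paper's measurements):

```python
import numpy as np

# MPS reference moments and corresponding scanner output amplitudes
moments = np.array([1e-9, 2e-9, 4e-9, 8e-9])  # magnetic moment, A·m²
signals = np.array([0.51, 1.02, 2.04, 4.08])  # scanner output, a.u.

# Least-squares sensitivity (slope through the origin), a.u. per A·m²
sensitivity = np.sum(signals * moments) / np.sum(moments ** 2)

# 3-sigma detection limit from an empty-bore noise measurement
noise_std = 0.06                               # a.u.
detection_limit = 3 * noise_std / sensitivity  # A·m²
```

    Any subsequent scanner amplitude divided by `sensitivity` is then an absolute magnetic moment, which is what makes the MPS/MPI detection limits directly comparable.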

  10. 77 FR 54482 - Allocation of Costs Under the Simplified Methods

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-05

    ... Allocation of Costs Under the Simplified Methods AGENCY: Internal Revenue Service (IRS), Treasury. ACTION... certain costs to the property and that allocate costs under the simplified production method or the simplified resale method. The proposed regulations provide rules for the treatment of negative additional...

  11. Detection system for a gas chromatograph

    DOEpatents

    Hayes, John M.; Small, Gerald J.

    1984-01-01

    A method and apparatus are described for the quantitative analysis of vaporizable compounds, and in particular of polycyclic aromatic hydrocarbons which may be induced to fluoresce. The sample to be analyzed is injected into a gas chromatography column and is eluted through a narrow orifice into a vacuum chamber. The free expansion of the eluted sample into the vacuum chamber creates a supersonic molecular beam in which the sample molecules are cooled to the extent that the excited vibrational and rotational levels are substantially depopulated. The cooled molecules, when induced to fluoresce by laser excitation, give greatly simplified spectra suitable for analytical purposes. The laser induced fluorimetry provides great selectivity, and the gas chromatograph provides quantitative transfer of the sample to the molecular beam.

  12. Quantitation of acrylamide in foods by high-resolution mass spectrometry.

    PubMed

    Troise, Antonio Dario; Fiore, Alberto; Fogliano, Vincenzo

    2014-01-08

    Acrylamide detection still represents one of the hottest topics in food chemistry. Solid phase cleanup coupled to liquid chromatography separation and tandem mass spectrometry detection along with GC-MS detection are nowadays the gold standard procedure for acrylamide quantitation thanks to high reproducibility, good recovery, and low relative standard deviation. High-resolution mass spectrometry (HRMS) is particularly suitable for the detection of low molecular weight amides, and it can provide some analytical advantages over other MS techniques. In this paper a liquid chromatography (LC) method for acrylamide determination using HRMS detection was developed and compared to LC coupled to tandem mass spectrometry. The procedure applied a simplified extraction, no cleanup steps, and a 4 min chromatography. It proved to be solid and robust with an acrylamide mass accuracy of 0.7 ppm, a limit of detection of 2.65 ppb, and a limit of quantitation of 5 ppb. The method was tested on four acrylamide-containing foods: cookies, French fries, ground coffee, and brewed coffee. Results were perfectly in line with those obtained by LC-MS/MS.
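
    The reported limits follow the standard calibration-based definitions: 3.3 and 10 times the response standard deviation divided by the calibration slope. A minimal sketch with invented calibration numbers (not the paper's values):

```python
def lod_loq(response_sd, slope):
    """ICH-style detection and quantitation limits from a calibration
    curve: sd of blank/low-level responses over the curve's slope."""
    lod = 3.3 * response_sd / slope
    loq = 10.0 * response_sd / slope
    return lod, loq

# Invented values: response sd in signal units, slope in signal/ppb
lod, loq = lod_loq(0.8, 1.0)
```

    The small mass-accuracy window of HRMS (0.7 ppm here) lowers the effective noise in the extracted-ion trace, which is why the method reaches ppb-level limits without cleanup steps.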

  13. Leveraging Mechanism Simplicity and Strategic Averaging to Identify Signals from Highly Heterogeneous Spatial and Temporal Ozone Data

    NASA Astrophysics Data System (ADS)

    Brown-Steiner, B.; Selin, N. E.; Prinn, R. G.; Monier, E.; Garcia-Menendez, F.; Tilmes, S.; Emmons, L. K.; Lamarque, J. F.; Cameron-Smith, P. J.

    2017-12-01

    We summarize two methods to aid in the identification of ozone signals from underlying spatially and temporally heterogeneous data in order to help research communities avoid the sometimes burdensome computational costs of high-resolution high-complexity models. The first method utilizes simplified chemical mechanisms (a Reduced Hydrocarbon Mechanism and a Superfast Mechanism) alongside a more complex mechanism (MOZART-4) within CESM CAM-Chem to extend the number of simulated meteorological years (or add additional members to an ensemble) for a given modeling problem. The Reduced Hydrocarbon mechanism is twice as fast, and the Superfast mechanism is three times faster than the MOZART-4 mechanism. We show that simplified chemical mechanisms are largely capable of simulating surface ozone across the globe as well as the more complex chemical mechanisms, and where they are not capable, a simple standardized anomaly emulation approach can correct for their inadequacies. The second method uses strategic averaging over both temporal and spatial scales to filter out the highly heterogeneous noise that underlies ozone observations and simulations. This method allows for a selection of temporal and spatial averaging scales that match a particular signal strength (between 0.5 and 5 ppbv), and enables the identification of regions where an ozone signal can rise above the ozone noise over a given region and a given period of time. In conjunction, these two methods can be used to "scale down" chemical mechanism complexity and quantitatively determine spatial and temporal scales that could enable research communities to utilize simplified representations of atmospheric chemistry and thereby maximize their productivity and efficiency given computational constraints. While this framework is here applied to ozone data, it could also be applied to a broad range of geospatial data sets (observed or modeled) that have spatial and temporal coverage.
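
    The standardized anomaly correction mentioned above maps the simplified mechanism's output onto the complex mechanism's climatology: convert a value to a z-score with the simplified mechanism's mean and standard deviation, then rescale with the complex mechanism's. A minimal sketch (all numbers hypothetical):

```python
def anomaly_correct(value, simple_mean, simple_sd, cplx_mean, cplx_sd):
    """Standardized anomaly emulation: express the simplified-chemistry
    ozone value as an anomaly of its own climatology, then re-express
    it in the complex mechanism's climatology."""
    z = (value - simple_mean) / simple_sd
    return cplx_mean + z * cplx_sd

# A simplified-mechanism ozone value of 30 ppbv, one sd above its
# own mean, maps to one sd above the complex mechanism's mean:
corrected = anomaly_correct(30.0, 25.0, 5.0, 40.0, 8.0)
```

    The climatologies need only be estimated once from a short paired run of both mechanisms, after which the cheaper mechanism can be used for the long simulations or extra ensemble members.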

  14. An improved pulse coupled neural network with spectral residual for infrared pedestrian segmentation

    NASA Astrophysics Data System (ADS)

    He, Fuliang; Guo, Yongcai; Gao, Chao

    2017-12-01

    The pulse coupled neural network (PCNN) has become a significant tool for infrared pedestrian segmentation, and a variety of relevant methods have been developed. However, existing models commonly suffer from poor adaptability to infrared noise, inaccurate segmentation results, and fairly complex parameter determination. This paper presents an improved PCNN model that integrates a simplified framework and spectral residual to alleviate these problems. In this model, firstly, the weight matrix of the feeding input field is designed with anisotropic Gaussian kernels (ANGKs) in order to suppress infrared noise effectively. Secondly, the normalized spectral residual saliency is introduced as the linking coefficient to markedly enhance the edges and structural characteristics of segmented pedestrians. Finally, an improved dynamic threshold based on the average gray value of the iterative segmentation is employed to simplify the original PCNN model. Experiments on the IEEE OTCBVS benchmark and an infrared pedestrian image database built by our laboratory demonstrate the superiority of our model in both subjective visual effects and objective quantitative evaluations of information differences and segmentation errors, compared with other classic segmentation methods.
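
    The simplified PCNN loop with a decaying dynamic threshold can be sketched in a few lines (a generic minimal model for illustration; the paper's ANGK weights and spectral-residual linking are not reproduced here):

```python
import numpy as np

def simplified_pcnn(S, beta=0.2, vt=20.0, decay=0.7, iters=8):
    """Minimal simplified PCNN. The feeding input is the stimulus S,
    the linking is the 8-neighbour sum of the previous firing map, and
    the dynamic threshold decays geometrically, jumping by vt when a
    neuron fires. Returns the iteration at which each pixel first
    fired (0 = never), which serves as a segmentation map."""
    Y = np.zeros(S.shape)
    theta = np.full(S.shape, float(S.max()))
    first_fire = np.zeros(S.shape, dtype=int)
    for t in range(1, iters + 1):
        P = np.pad(Y, 1)
        L = (P[:-2, :-2] + P[:-2, 1:-1] + P[:-2, 2:]
             + P[1:-1, :-2] + P[1:-1, 2:]
             + P[2:, :-2] + P[2:, 1:-1] + P[2:, 2:])
        U = S * (1.0 + beta * L)          # internal activity
        Y = (U > theta).astype(float)     # pulse output
        theta = decay * theta + vt * Y    # dynamic threshold
        first_fire[(Y > 0) & (first_fire == 0)] = t
    return first_fire

# Bright 3x3 "pedestrian" on a dark background fires as one group
S = np.zeros((5, 5)); S[1:4, 1:4] = 1.0
```

    Pixels that fire in the same iteration form one region; the paper's contributions replace the uniform linking here with saliency-driven linking and noise-suppressing feeding weights.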

  15. Applications of the hybrid coordinate method to the TOPS autopilot

    NASA Technical Reports Server (NTRS)

    Fleischer, G. E.

    1978-01-01

    Preliminary results are presented from the application of the hybrid coordinate method to modeling TOPS (thermoelectric outer planet spacecraft) structural dynamics. Computer-simulated responses of the vehicle are included which illustrate the interaction of relatively flexible appendages with an autopilot control system. Comparisons were made between simplified single-axis models of the control loop, with spacecraft flexibility represented by hinged rigid bodies, and a very detailed three-axis spacecraft model whose flexible portions are described by modal coordinates. While single-axis system root loci provided reasonable qualitative indications of stability margins in this case, they were quantitatively optimistic when matched against responses of the detailed model.

  16. Patient-dependent count-rate adaptive normalization for PET detector efficiency with delayed-window coincidence events

    NASA Astrophysics Data System (ADS)

    Niu, Xiaofeng; Ye, Hongwei; Xia, Ting; Asma, Evren; Winkler, Mark; Gagnon, Daniel; Wang, Wenli

    2015-07-01

    Quantitative PET imaging is widely used in clinical diagnosis in oncology and neuroimaging. Accurate normalization correction for the efficiency of each line-of-response is essential for accurate quantitative PET image reconstruction. In this paper, we propose a normalization calibration method that uses the delayed-window coincidence events from the scanned phantom or patient. The proposed method can dramatically reduce the ‘ring’ artifacts caused by mismatched system count-rates between the calibration and phantom/patient datasets. Moreover, a modified algorithm for mean detector efficiency estimation is proposed, which generates crystal efficiency maps with more uniform variance. Both phantom and real patient datasets are used for evaluation. The results show that the proposed method leads to better uniformity in reconstructed images by removing ring artifacts, and to more uniform axial variance profiles, especially around the axial edge slices of the scanner. The proposed method also has the potential benefit of simplifying the normalization calibration procedure, since the calibration can be performed using the on-the-fly acquired delayed-window dataset.
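
    The fan-sum logic behind efficiency normalization can be sketched simply: estimate each crystal's relative efficiency from its total delayed-coincidence counts, then correct each line-of-response by the reciprocal product of its two crystal efficiencies. This is a one-pass illustration; the paper's estimator is more refined:

```python
import numpy as np

def crystal_efficiencies(delayed):
    """Per-crystal relative efficiencies from a symmetric matrix of
    delayed-window coincidence counts: each crystal's fan-sum,
    normalized to the mean so the average efficiency is 1."""
    fansum = delayed.sum(axis=1)
    return fansum / fansum.mean()

def lor_normalization(eff):
    """Normalization factor for each line-of-response (i, j):
    reciprocal of the product of the two crystal efficiencies."""
    return 1.0 / np.outer(eff, eff)
```

    Because the delayed window is acquired during the scan itself, the efficiency map is matched to the scan's count rate, which is the abstract's key point about avoiding ring artifacts.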

  17. A simplified guide for charged aerosol detection of non-chromophoric compounds-Analytical method development and validation for the HPLC assay of aerosol particle size distribution for amikacin.

    PubMed

    Soliven, Arianne; Haidar Ahmad, Imad A; Tam, James; Kadrichu, Nani; Challoner, Pete; Markovich, Robert; Blasko, Andrei

    2017-09-05

    Amikacin, an aminoglycoside antibiotic lacking a UV chromophore, was developed into a drug product for delivery by inhalation. A robust method for amikacin assay analysis and aerosol particle size distribution (aPSD) determination, with performance comparable to the conventional UV detector, was developed using a charged aerosol detector (CAD). The CAD approach involved more parameters for optimization than UV detection because of its sensitivity to trace impurities, non-linear response, and narrow dynamic range of signal versus concentration. Through careful selection of the power transformation function value and evaporation temperature, a wider linear dynamic range, improved signal-to-noise ratio, and high repeatability were obtained. The influences of mobile phase grade and glassware binding of amikacin during sample preparation were addressed. A weighted (1/X²) least-squares regression was used for the calibration curve. The limit of quantitation (LOQ) and limit of detection (LOD) for this method were determined to be 5 μg/mL and 2 μg/mL, respectively. The method was validated over a concentration range of 0.05-2 mg/mL. The correlation coefficient for peak area versus concentration was 1.00 and the y-intercept was 0.2%. The recovery accuracies of triplicate preparations at 0.05, 1.0, and 2.0 mg/mL were in the range of 100-101%. The relative standard deviation (Srel) of six replicates at 1.0 mg/mL was 1%, and the Srel of five injections at the limit of quantitation was 4%. A robust HPLC-CAD method was developed and validated for the determination of the aPSD for amikacin. The CAD method development produced a simplified procedure with minimal variability in results during routine operation, transfer from one instrument to another, and between different analysts. Copyright © 2017 Elsevier B.V. All rights reserved.
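
    The weighted (1/X²) least-squares fit keeps relative, rather than absolute, residuals roughly constant across a wide concentration range such as 0.05-2 mg/mL. A self-contained sketch of the normal-equation solution (calibration points invented):

```python
import numpy as np

def weighted_line_fit(x, y):
    """Straight-line fit minimizing sum((y - a*x - b)^2 / x^2),
    i.e. 1/x^2-weighted least squares as used for wide-range
    calibration curves. Returns (slope, intercept)."""
    w = 1.0 / x**2
    sw, swx, swy = w.sum(), (w * x).sum(), (w * y).sum()
    swxx, swxy = (w * x * x).sum(), (w * x * y).sum()
    slope = (sw * swxy - swx * swy) / (sw * swxx - swx**2)
    intercept = (swy - slope * swx) / sw
    return slope, intercept

# Hypothetical calibration points spanning the validated range
x = np.array([0.05, 0.5, 1.0, 2.0])   # concentration, mg/mL
y = 2.0 * x + 1.0                      # noiseless responses
```

    The 1/X² weights give the lowest standards the most influence, which is what keeps accuracy acceptable near the limit of quantitation.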

  18. Simplified method for creating a density-absorbed dose calibration curve for the low dose range from Gafchromic EBT3 film.

    PubMed

    Gotanda, Tatsuhiro; Katsuda, Toshizo; Gotanda, Rumi; Kuwano, Tadao; Akagawa, Takuya; Tanki, Nobuyoshi; Tabuchi, Akihiko; Shimono, Tetsunori; Kawaji, Yasuyuki

    2016-01-01

    Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process is time-consuming for creating a density-absorbed dose calibration curve. The purpose of this study was the development of a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film with a low energy dependence and step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were -32.336 and -33.746, respectively. The simplified method can obtain calibration curves within a much shorter time compared to the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range.
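
    The calibration curve itself is a straight-line fit of absorbed dose against net density, inverted to read dose from a film measurement. A minimal sketch with invented points whose gradient falls near the reported −32 to −34 range:

```python
import numpy as np

# Invented calibration points: density falls as absorbed dose rises
density = np.array([1.80, 1.74, 1.68, 1.62])  # net optical density
dose = np.array([0.0, 2.0, 4.0, 6.0])         # absorbed dose

gradient, offset = np.polyfit(density, dose, 1)  # dose = g*density + c
dose_unknown = gradient * 1.71 + offset          # read an unknown film
```

    The step-shaped Al filter in the simplified method produces all such calibration points in a single exposure, which is where the time saving over the standard one-dose-per-exposure procedure comes from.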

  19. Simplified method for creating a density-absorbed dose calibration curve for the low dose range from Gafchromic EBT3 film

    PubMed Central

    Gotanda, Tatsuhiro; Katsuda, Toshizo; Gotanda, Rumi; Kuwano, Tadao; Akagawa, Takuya; Tanki, Nobuyoshi; Tabuchi, Akihiko; Shimono, Tetsunori; Kawaji, Yasuyuki

    2016-01-01

    Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process is time-consuming for creating a density-absorbed dose calibration curve. The purpose of this study was the development of a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film with a low energy dependence and step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were −32.336 and −33.746, respectively. The simplified method can obtain calibration curves within a much shorter time compared to the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range. PMID:28144120

  20. Detection system for a gas chromatograph [α-methylnaphthalene, β-methylnaphthalene]

    DOEpatents

    Hayes, J.M.; Small, G.J.

    1982-04-26

    A method and apparatus are described for the quantitative analysis of vaporizable compounds, and in particular of polycyclic aromatic hydrocarbons which may be induced to fluoresce. The sample to be analyzed is injected into a gas chromatography column and is eluted through a narrow orifice into a vacuum chamber. The free expansion of the eluted sample into the vacuum chamber creates a supersonic molecular beam in which the sample molecules are cooled to the extent that the excited vibrational and rotational levels are substantially depopulated. The cooled molecules, when induced to fluoresce by laser excitation, give greatly simplified spectra suitable for analytical purposes. The laser induced fluorimetry provides great selectivity, and the gas chromatograph provides quantitative transfer of the sample to the molecular beam. 3 figures, 2 tables.

  1. Quantitative characterization of galectin-3-C affinity mass spectrometry measurements: Comprehensive data analysis, obstacles, shortcuts and robustness.

    PubMed

    Haramija, Marko; Peter-Katalinić, Jasna

    2017-10-30

    Affinity mass spectrometry (AMS) is an emerging tool in the field of the study of protein•carbohydrate complexes. However, experimental obstacles and data analysis are preventing faster integration of AMS methods into the glycoscience field. Here we show how analysis of direct electrospray ionization mass spectrometry (ESI-MS) AMS data can be simplified for screening purposes, even for complex AMS spectra. A direct ESI-MS assay was tested in this study, and binding data for the galectin-3C•lactose complex were analyzed using a comprehensive and a simplified data analysis approach. In the comprehensive data analysis approach, noise, all protein charge states, alkali ion adducts, and signal overlap were taken into account. In the simplified approach, only the intensities of the fully protonated free protein and the protein•carbohydrate complex for the main protein charge state were taken into account. In our study, for high intensity signals, noise was negligible, sodiated protein and sodiated complex signals cancelled each other out when calculating the Kd value, and signal overlap influenced the Kd value only to a minor extent. The influence of these parameters on low intensity signals was much greater. However, low intensity protein charge states should be avoided in quantitative AMS analyses due to poor ion statistics. The results indicate that noise, alkali ion adducts, signal overlap, and low intensity protein charge states can be neglected for preliminary experiments, as well as in screening assays. One comprehensive data analysis performed as a control should be sufficient to validate this hypothesis for other binding systems as well. Copyright © 2017 John Wiley & Sons, Ltd.
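
    In the simplified direct ESI-MS analysis, the dissociation constant follows from the intensity ratio of complex to free protein at the main charge state. A sketch of the standard relation (the concentrations below are hypothetical):

```python
def kd_from_ratio(R, P0, L0):
    """Dissociation constant from the direct ESI-MS intensity ratio
    R = I(PL)/I(P), assuming that ratio preserves the solution-phase
    [PL]/[P]. With total protein P0 and total ligand L0, the free
    ligand is L0 - R*P0/(1+R), giving Kd = L0/R - P0/(1 + R)."""
    return L0 / R - P0 / (1.0 + R)

# Hypothetical run: ratio 1.5 at 10 uM protein, 100 uM lactose
kd = kd_from_ratio(1.5, 10.0, 100.0)
```

    The comprehensive approach instead sums intensities over all charge states and adduct forms before forming R, which is the control calculation the abstract recommends running once per binding system.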

  2. Feature-Motivated Simplified Adaptive PCNN-Based Medical Image Fusion Algorithm in NSST Domain.

    PubMed

    Ganasala, Padma; Kumar, Vinod

    2016-02-01

    Multimodality medical image fusion plays a vital role in diagnosis, treatment planning, and follow-up studies of various diseases. It provides a composite image containing critical information from the source images required for better localization and definition of different organs and lesions. In state-of-the-art image fusion methods based on nonsubsampled shearlet transform (NSST) and pulse-coupled neural network (PCNN), authors have used the normalized coefficient value to motivate the PCNN processing of both low-frequency (LF) and high-frequency (HF) sub-bands. This makes the fused image blurred and decreases its contrast. The main objective of this work is to design an image fusion method that gives a fused image with better contrast, more detail information, and suitability for clinical use. We propose a novel image fusion method utilizing feature-motivated adaptive PCNN in the NSST domain for fusion of anatomical images. The basic PCNN model is simplified, and an adaptive linking strength is used. Different features are used to motivate the PCNN processing of the LF and HF sub-bands. The proposed method is extended for fusion of a functional image with an anatomical image in the improved nonlinear intensity hue and saturation (INIHS) color model. Extensive fusion experiments have been performed on CT-MRI and SPECT-MRI datasets. Visual and quantitative analysis of the experimental results proved that the proposed method provides satisfactory fusion outcomes compared to other image fusion methods.

  3. The benefits of a simplified method for CPR training of medical professionals: a randomized controlled study.

    PubMed

    Allan, Katherine S; Wong, Natalie; Aves, Theresa; Dorian, Paul

    2013-08-01

    We developed and tested a training method for basic life support incorporating defibrillator feedback during simulated cardiac arrest (CA) to determine the impact on the quality and retention of CPR skills. 298 subjects were randomized into 3 groups. All groups received a 2-h training session followed by a simulated CA test scenario, immediately after training and at 3 months. Controls used a non-feedback defibrillator during training and testing. Group 1 was trained and tested with an audiovisual feedback defibrillator. During training, Group 1 reviewed quantitative CPR data from the defibrillator. Group 2 was trained as per Group 1, but was tested using the non-feedback defibrillator. The primary outcome was the difference in compression depth between groups at initial testing. Secondary outcomes included differences in rate, depth at retesting, compression fraction, and self-assessment. Groups 1 and 2 had significantly deeper compressions than the controls (35.3 ± 7.6 mm, 43.7 ± 5.8 mm, 42.2 ± 6.6 mm for controls, Groups 1 and 2; P=0.001 for Group 1 vs. controls; P=0.001 for Group 2 vs. controls). At three months, CPR depth was maintained in all groups but remained significantly higher in Group 1 (39.1 ± 9.9 mm, 47.0 ± 7.4 mm, 42.2 ± 8.4 mm for controls, Groups 1 and 2; P=0.001 for Group 1 vs. control). No significant differences were noted between groups in compression rate or fraction. A simplified 2-h training method using audiovisual feedback combined with quantitative review of CPR performance improved CPR quality and retention of these skills. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  4. Study on Collision of Ship Side Structure by Simplified Plastic Analysis Method

    NASA Astrophysics Data System (ADS)

    Sun, C. J.; Zhou, J. H.; Wu, W.

    2017-10-01

    During its lifetime, a ship may encounter collision or grounding and sustain permanent damage from these types of accidents. Crashworthiness assessment has been based on two main methods: simplified plastic analysis and numerical simulation. A simplified plastic analysis method is presented in this paper. Numerical simulations using the non-linear finite-element software LS-DYNA are conducted to validate the method. The results show that the simplified plastic analysis is in good agreement with the finite-element simulation, which reveals that the simplified plastic analysis method can quickly and accurately estimate the crashworthiness of the side structure during the collision process and can be used as a reliable risk assessment method.

  5. Assessment of acute myocarditis by cardiac magnetic resonance imaging: Comparison of qualitative and quantitative analysis methods.

    PubMed

    Imbriaco, Massimo; Nappi, Carmela; Puglia, Marta; De Giorgi, Marco; Dell'Aversana, Serena; Cuocolo, Renato; Ponsiglione, Andrea; De Giorgi, Igino; Polito, Maria Vincenza; Klain, Michele; Piscione, Federico; Pace, Leonardo; Cuocolo, Alberto

    2017-10-26

    To compare cardiac magnetic resonance (CMR) qualitative and quantitative analysis methods for the noninvasive assessment of myocardial inflammation in patients with suspected acute myocarditis (AM). A total of 61 patients with suspected AM underwent coronary angiography and CMR. Qualitative analysis was performed applying Lake-Louise Criteria (LLC), followed by quantitative analysis based on the evaluation of edema ratio (ER) and global relative enhancement (RE). Diagnostic performance was assessed for each method by measuring the area under the curves (AUC) of the receiver operating characteristic analyses. The final diagnosis of AM was based on symptoms and signs suggestive of cardiac disease, evidence of myocardial injury as defined by electrocardiogram changes, elevated troponin I, exclusion of coronary artery disease by coronary angiography, and clinical and echocardiographic follow-up at 3 months after admission to the chest pain unit. In all patients, coronary angiography did not show significant coronary artery stenosis. Troponin I levels and creatine kinase were higher in patients with AM compared to those without (both P < .001). There were no significant differences among LLC, T2-weighted short inversion time inversion recovery (STIR) sequences, early (EGE), and late (LGE) gadolinium-enhancement sequences for diagnosis of AM. The AUC for qualitative (T2-weighted STIR 0.92, EGE 0.87 and LGE 0.88) and quantitative (ER 0.89 and global RE 0.80) analyses were also similar. Qualitative and quantitative CMR analysis methods show similar diagnostic accuracy for the diagnosis of AM. These findings suggest that a simplified approach using a shortened CMR protocol including only T2-weighted STIR sequences might be useful to rule out AM in patients with acute coronary syndrome and normal coronary angiography.
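
    The reported AUCs measure how well each CMR index separates AM cases from non-cases; the AUC is equivalent to the Mann-Whitney statistic, which makes a compact sketch (scores below are invented, not study data):

```python
def auc(case_scores, control_scores):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the probability that a randomly chosen case scores higher than
    a randomly chosen control, counting ties as one half."""
    wins = sum(
        (c > k) + 0.5 * (c == k)
        for c in case_scores
        for k in control_scores
    )
    return wins / (len(case_scores) * len(control_scores))
```

    Computing this for each index (edema ratio, global relative enhancement, and the qualitative criteria scored on the same patients) is how the abstract's 0.80-0.92 AUC comparison is made.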

  6. Development and validation of a liquid chromatography tandem mass spectrometry assay for the quantitation of a protein therapeutic in cynomolgus monkey serum.

    PubMed

    Zhao, Yue; Liu, Guowen; Angeles, Aida; Hamuro, Lora L; Trouba, Kevin J; Wang, Bonnie; Pillutla, Renuka C; DeSilva, Binodh S; Arnold, Mark E; Shen, Jim X

    2015-04-15

    We have developed and fully validated a fast and simple LC-MS/MS assay to quantitate a therapeutic protein BMS-A in cynomolgus monkey serum. Prior to trypsin digestion, a recently reported sample pretreatment method was applied to remove more than 95% of the total serum albumin and denature the proteins in the serum sample. The pretreatment procedure simplified the biological sample prior to digestion, improved digestion efficiency and reproducibility, and did not require reduction and alkylation. The denatured proteins were then digested with trypsin at 60 °C for 30 min and the tryptic peptides were chromatographically separated on an Acquity CSH column (2.1 mm × 50 mm, 1.7 μm) using gradient elution. One surrogate peptide was used for quantitation and another surrogate peptide was selected for confirmation. Two corresponding stable isotope labeled peptides were used to compensate variations during LC-MS detection. The linear analytical range of the assay was 0.50-500 μg/mL. The accuracy (%Dev) was within ± 5.4% and the total assay variation (%CV) was less than 12.0% for sample analysis. The validated method demonstrated good accuracy and precision and the application of the innovative albumin removal sample pretreatment method improved both assay sensitivity and robustness. The assay has been applied to a cynomolgus monkey toxicology study and the serum sample concentration data were in good agreement with data generated using a quantitative ligand-binding assay (LBA). The use of a confirmatory peptide, in addition to the quantitation peptide, ensured the integrity of the drug concentrations measured by the method. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Ambient Ionization Mass Spectrometry Measurement of Aminotransferase Activity

    NASA Astrophysics Data System (ADS)

    Yan, Xin; Li, Xin; Zhang, Chengsen; Xu, Yang; Cooks, R. Graham

    2017-06-01

    A change in enzyme activity has been used as a clinical biomarker for diagnosis and is useful in evaluating patient prognosis. Current laboratory measurements of enzyme activity involve multi-step derivatization of the reaction products followed by quantitative analysis of these derivatives. This study simplified the reaction system by using only the target enzymatic reaction and directly detecting its product. A protocol using paper spray mass spectrometry for identifying and quantifying the reaction product has been developed. Evaluation of the activity of aspartate aminotransferase (AST) was chosen as a proof of principle. The volume of sample needed is greatly reduced compared with the traditional method. Paper spray has a desalting effect that avoids the sprayer clogging problems seen when examining serum samples by nanoESI. This very simple method requires neither sample pretreatment nor additional derivatization reactions, yet it gives high-quality kinetic data, excellent limits of detection (60 ppb from serum), and coefficients of variation <10% in quantitation.

  8. Surrogate matrix and surrogate analyte approaches for definitive quantitation of endogenous biomolecules.

    PubMed

    Jones, Barry R; Schultz, Gary A; Eckstein, James A; Ackermann, Bradley L

    2012-10-01

    Quantitation of biomarkers by LC-MS/MS is complicated by the presence of endogenous analytes. This challenge is most commonly overcome by calibration using an authentic standard spiked into a surrogate matrix devoid of the target analyte. A second approach involves use of a stable-isotope-labeled standard as a surrogate analyte to allow calibration in the actual biological matrix. For both methods, parallelism between calibration standards and the target analyte in biological matrix must be demonstrated in order to ensure accurate quantitation. In this communication, the surrogate matrix and surrogate analyte approaches are compared for the analysis of five amino acids in human plasma: alanine, valine, methionine, leucine and isoleucine. In addition, methodology based on standard addition is introduced, which enables a robust examination of parallelism in both surrogate analyte and surrogate matrix methods prior to formal validation. Results from additional assays are presented to introduce the standard-addition methodology and to highlight the strengths and weaknesses of each approach. For the analysis of amino acids in human plasma, comparable precision and accuracy were obtained by the surrogate matrix and surrogate analyte methods. Both assays were well within tolerances prescribed by regulatory guidance for validation of xenobiotic assays. When stable-isotope-labeled standards are readily available, the surrogate analyte approach allows for facile method development. By comparison, the surrogate matrix method requires greater up-front method development; however, this deficit is offset by the long-term advantage of simplified sample analysis.
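
    The standard-addition methodology introduced here rests on a simple calculation: fit the instrument response against the spiked amount and read the endogenous level off the x-axis intercept. A minimal sketch, assuming a linear detector response (all values hypothetical):

```python
import numpy as np

def standard_addition(spiked, responses):
    """Estimate an endogenous concentration by the method of standard addition.

    Fits response = slope * spike + intercept; for a linear detector the
    endogenous level equals intercept / slope, i.e. the magnitude of the
    x-axis intercept of the fitted line.
    """
    slope, intercept = np.polyfit(spiked, responses, 1)
    return intercept / slope
```

    With responses of 10, 15, 20, and 30 at spikes of 0, 10, 20, and 40, the fit returns an endogenous level of 20 (in the same units as the spikes). Departures from linearity across the spike range are exactly the parallelism failures the assay design is meant to expose.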

  9. Rationalising the 'irrational': a think aloud study of discrete choice experiment responses.

    PubMed

    Ryan, Mandy; Watson, Verity; Entwistle, Vikki

    2009-03-01

    Stated preference methods assume respondents' preferences are consistent with utility theory, but many empirical studies report evidence of preferences that violate utility theory. This evidence is often derived from quantitative tests that occur naturally within, or are added to, stated preference tasks. In this study, we use qualitative methods to explore three axioms of utility theory: completeness, monotonicity, and continuity. We take a novel approach, adopting a 'think aloud' technique to identify violations of the axioms of utility theory and to consider how well the quantitative tests incorporated within a discrete choice experiment are able to detect these. Results indicate that quantitative tests classify respondents as being 'irrational' when qualitative statements would indicate they are 'rational'. In particular, 'non-monotonic' responses can often be explained by respondents inferring additional information beyond what is presented in the task, and individuals who appear to adopt non-compensatory decision-making strategies do so because they rate particular attributes very highly (they are not attempting to simplify the task). The results also provide evidence of 'cost-based responses': respondents assumed tests with higher costs would be of higher quality. The value of including in-depth qualitative validation techniques in the development of stated preference tasks is shown.

  10. Fast and robust method for the determination of microstructure and composition in butadiene, styrene-butadiene, and isoprene rubber by near-infrared spectroscopy.

    PubMed

    Vilmin, Franck; Dussap, Claude; Coste, Nathalie

    2006-06-01

    In the tire industry, synthetic styrene-butadiene rubber (SBR), butadiene rubber (BR), and isoprene rubber (IR) elastomers are essential for conferring on the product its properties of grip and rolling resistance. Their physical properties depend on their chemical composition, i.e., their microstructure and styrene content, which must be accurately controlled. This paper describes a fast, robust, and highly reproducible near-infrared analytical method for the quantitative determination of the microstructure and styrene content. The quantitative models are calculated with the help of pure spectral profiles estimated from a partial least squares (PLS) regression, using 13C nuclear magnetic resonance (NMR) as the reference method. This versatile approach allows the models to be applied over a large range of compositions, from a single BR to an SBR-IR blend. The resulting quantitative predictions are independent of the sample path length. As a consequence, the sample preparation is solvent free and simplified, with a very fast (five-minute) hot-filming step of a bulk polymer piece. No precise thickness control is required. Thus, the operator effect becomes negligible and the method is easily transferable. The root mean square error of prediction, depending on the rubber composition, is between 0.7% and 1.3%.
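
    The idea of estimating pure spectral profiles from reference compositions and inverting them for prediction can be illustrated with a classical-least-squares sketch. This is a simplified stand-in on synthetic, noiseless data, not the authors' actual PLS procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration set: 20 spectra over 50 wavelength channels,
# each a mixture of 3 "pure" component profiles weighted by composition.
pure = rng.random((3, 50))              # true pure profiles (unknown in practice)
comps = rng.dirichlet(np.ones(3), 20)   # reference compositions (e.g. from 13C NMR)
spectra = comps @ pure                  # Beer-Lambert-like linear mixing, no noise

# Step 1: estimate the pure profiles from spectra and reference compositions.
pure_hat, *_ = np.linalg.lstsq(comps, spectra, rcond=None)

# Step 2: predict the composition of a new spectrum from the estimated profiles.
new_comp = rng.dirichlet(np.ones(3))
new_spec = new_comp @ pure
pred, *_ = np.linalg.lstsq(pure_hat.T, new_spec, rcond=None)
```

    On noiseless data the recovered profiles and the predicted composition match the truth; with real spectra, the PLS step used in the paper plays the role of making this estimation robust to noise and interferences.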

  11. Disposable MoS2-Arrayed MALDI MS Chip for High-Throughput and Rapid Quantification of Sulfonamides in Multiple Real Samples.

    PubMed

    Zhao, Yaju; Tang, Minmin; Liao, Qiaobo; Li, Zhoumin; Li, Hui; Xi, Kai; Tan, Li; Zhang, Mei; Xu, Danke; Chen, Hong-Yuan

    2018-04-27

    In this work, we demonstrate, for the first time, the development of a disposable MoS2-arrayed matrix-assisted laser desorption/ionization mass spectrometry (MALDI MS) chip combined with an immunoaffinity enrichment method for high-throughput, rapid, and simultaneous quantitation of multiple sulfonamides (SAs). The disposable MALDI MS chip was designed and fabricated by MoS2 array formation on a commercial indium tin oxide (ITO) glass slide. A series of SAs were analyzed, and clear deprotonated signals were obtained in negative-ion mode. Compared with a MoS2-arrayed commercial steel plate, the prepared MALDI MS chip exhibited comparable LDI efficiency, providing a good alternative and disposable substrate for MALDI MS analysis. Furthermore, internal standard (IS) was previously deposited onto the MoS2 array to simplify the experimental process for MALDI MS quantitation. 96 sample spots could be analyzed within 10 min in one single chip to perform quantitative analysis, recovery studies, and real foodstuff detection. Upon targeted extraction and enrichment by antibody conjugated magnetic beads, five SAs were quantitatively determined by the IS-first method with the linear range of 0.5-10 ng/mL (R² > 0.990). Good recoveries and repeatability were obtained for spiked pork, egg, and milk samples. SAs in several real foodstuffs were successfully identified and quantified. The developed method may provide a promising tool for the routine analysis of antibiotic residues in real samples.
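
    The IS-first quantitation described above amounts to ordinary internal-standard calibration: ratio the analyte signal to the co-deposited internal standard, then fit the ratio against concentration. A minimal sketch (hypothetical peak areas, not data from this study):

```python
import numpy as np

def is_calibration(conc, analyte_area, is_area):
    """Fit an internal-standard calibration line: area ratio vs. concentration.

    Ratioing to a co-deposited internal standard corrects for spot-to-spot
    ionization variability. Returns (slope, intercept, r_squared).
    """
    ratio = np.asarray(analyte_area, dtype=float) / np.asarray(is_area, dtype=float)
    slope, intercept = np.polyfit(conc, ratio, 1)
    fitted = slope * np.asarray(conc, dtype=float) + intercept
    ss_res = np.sum((ratio - fitted) ** 2)
    ss_tot = np.sum((ratio - ratio.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot
```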

  12. Research Participants’ Understanding of and Reactions to Certificates of Confidentiality

    PubMed Central

    Check, Devon K.; Ammarell, Natalie

    2013-01-01

    Background Certificates of Confidentiality are intended to facilitate participation in critical public health research by protecting against forced disclosure of identifying data in legal proceedings, but little is known about the effect of Certificate descriptions in consent forms. Methods To gain preliminary insights, we conducted qualitative interviews with 50 HIV-positive individuals in Durham, North Carolina to explore their subjective understanding of Certificate descriptions and whether their reactions differed based on receiving a standard versus simplified description. Results Most interviewees were neither reassured nor alarmed by Certificate information, and most said it would not influence their willingness to participate or provide truthful information. However, compared with those receiving the simplified description, more who read the standard description said it raised new concerns, that their likelihood of participating would be lower, and that they might be less forthcoming. Most interviewees said they found the Certificate description clear, but standard-group participants often found particular words and phrases confusing, while simplified-group participants more often questioned the information’s substance. Conclusions Valid informed consent requires comprehension and voluntariness. Our findings highlight the importance of developing consent descriptions of Certificates and other confidentiality protections that are simple and accurate. These qualitative results provide rich detail to inform a larger, quantitative study that would permit further rigorous comparisons. PMID:24563806

  13. Determination of alkylbenzenesulfonate surfactants in groundwater using macroreticular resins and carbon-13 nuclear magnetic resonance spectrometry

    USGS Publications Warehouse

    Thurman, E.M.; Willoughby, T.; Barber, L.B.; Thorn, K.A.

    1987-01-01

    Alkylbenzenesulfonate surfactants were determined in groundwater at concentrations as low as 0.3 mg/L. The method uses XAD-8 resin for concentration, followed by elution with methanol, separation of anionic and nonionic surfactants by anion exchange, quantitation by titration, and identification by 13C nuclear magnetic resonance spectrometry. Laboratory standards and field samples containing straight-chain and branched-chain alkylbenzenesulfonates, sodium dodecyl sulfate, and alkylbenzene ethoxylates were studied. The XAD-8 extraction of surfactants from groundwater was completed in the field, which simplified sample preservation and reduced the cost of transporting samples.

  14. A novel implementation of homodyne time interval analysis method for primary vibration calibration

    NASA Astrophysics Data System (ADS)

    Sun, Qiao; Zhou, Ling; Cai, Chenguang; Hu, Hongbo

    2011-12-01

    In this paper, the shortcomings of the conventional homodyne time interval analysis (TIA) method, and their causes, are described with respect to its software algorithm and hardware implementation, and a simplified TIA method is proposed with the help of virtual instrument technology. Equipped with an ordinary Michelson interferometer and a dual-channel synchronous data acquisition card, a primary vibration calibration system using the simplified method can accurately measure the complex sensitivity of accelerometers, meeting the uncertainty requirements laid down in the pertinent ISO standard. The validity and accuracy of the simplified TIA method are verified by simulation and by comparison experiments, and its performance is analyzed. Owing to its simplified algorithm and low hardware requirements, the method is recommended for national metrology institutes of developing countries and for industrial primary vibration calibration laboratories.

  15. Advanced quantitative magnetic nondestructive evaluation methods - Theory and experiment

    NASA Technical Reports Server (NTRS)

    Barton, J. R.; Kusenberger, F. N.; Beissner, R. E.; Matzkanin, G. A.

    1979-01-01

    The paper reviews the scale of fatigue crack phenomena in relation to the size detection capabilities of nondestructive evaluation methods. An assessment of several features of fatigue in relation to the inspection of ball and roller bearings suggested the use of magnetic methods; magnetic domain phenomena, including the interaction of domains and inclusions and the influence of stress and magnetic field on domains, are discussed. Experimental results indicate that simplified calculations can be used to predict many features of these results, although predictions from analytic models based on finite-element computer analysis do not agree with the data in certain respects. Experimental analyses of rod-type fatigue specimens, which relate magnetic measurements to crack opening displacement, crack volume, and crack depth, should provide methods for improved crack characterization for fracture mechanics and life prediction.

  16. The experimental determination of the moments of inertia of airplanes by a simplified compound-pendulum method

    NASA Technical Reports Server (NTRS)

    Gracey, William

    1948-01-01

    A simplified compound-pendulum method for the experimental determination of the moments of inertia of airplanes about the x and y axes is described. The method is developed as a modification of the standard pendulum method reported previously in NACA Report No. 467. A brief review of the older method is included to form a basis for discussion of the simplified method.
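
    The compound-pendulum method reduces to one relation: the small-oscillation period about the pivot gives the inertia about the pivot, and a parallel-axis correction yields the inertia about the center of gravity. A minimal sketch in SI units (symbols and helper name are illustrative, not taken from the NACA report):

```python
import math

def inertia_about_cg(mass, pivot_to_cg, period, g=9.80665):
    """Moment of inertia about the center of gravity from compound-pendulum swings.

    The period of small oscillations about the pivot is
        T = 2*pi*sqrt(I_pivot / (m*g*l)),
    where l is the pivot-to-c.g. distance, so
        I_cg = m*g*l*T**2 / (4*pi**2) - m*l**2   (parallel-axis correction).
    """
    i_pivot = mass * g * pivot_to_cg * period**2 / (4 * math.pi**2)
    return i_pivot - mass * pivot_to_cg**2
```

    As a sanity check, a uniform rod of length L pivoted at one end (l = L/2, I_pivot = mL²/3) yields I_cg = mL²/12, the known result.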

  17. Evaluation of selected static methods used to estimate element mobility, acid-generating and acid-neutralizing potentials associated with geologically diverse mining wastes

    USGS Publications Warehouse

    Hageman, Philip L.; Seal, Robert R.; Diehl, Sharon F.; Piatak, Nadine M.; Lowers, Heather

    2015-01-01

    A comparison study of selected static leaching and acid–base accounting (ABA) methods using a mineralogically diverse set of 12 modern-style, metal mine waste samples was undertaken to understand the relative performance of the various tests. To complement this study, in-depth mineralogical studies were conducted in order to elucidate the relationships between sample mineralogy, weathering features, and leachate and ABA characteristics. In part one of the study, splits of the samples were leached using six commonly used leaching tests including paste pH, the U.S. Geological Survey (USGS) Field Leach Test (FLT) (both 5-min and 18-h agitation), the U.S. Environmental Protection Agency (USEPA) Method 1312 SPLP (both leachate pH 4.2 and leachate pH 5.0), and the USEPA Method 1311 TCLP (leachate pH 4.9). Leachate geochemical trends were compared in order to assess differences, if any, produced by the various leaching procedures. Results showed that the FLT (5-min agitation) was just as effective as the 18-h leaching tests in revealing the leachate geochemical characteristics of the samples. Leaching results also showed that the TCLP leaching test produces inconsistent results when compared to results produced from the other leaching tests. In part two of the study, the ABA was determined on splits of the samples using both well-established traditional static testing methods and a relatively quick, simplified net acid–base accounting (NABA) procedure. Results showed that the traditional methods, while time consuming, provide the most in-depth data on both the acid generating, and acid neutralizing tendencies of the samples. However, the simplified NABA method provided a relatively fast, effective estimation of the net acid–base account of the samples. 
Overall, this study showed that while most of the well-established methods are useful and effective, the use of a simplified leaching test and the NABA acid–base accounting method provides investigators with fast, quantitative tools that yield rapid, reliable information about the leachability of metals and other constituents of concern and about the acid-generating potential of metal mining waste.
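
    The net acid–base account estimated by the NABA procedure is conventionally NP − AP, with the acid-generating potential taken as 31.25 kg CaCO3-equivalent per tonne per weight percent of total sulfur. A minimal sketch of that bookkeeping (the conversion factor is the general convention, not a value quoted in this study):

```python
def net_acid_base_account(total_sulfur_pct, neutralization_potential):
    """Net acid-base account in kg CaCO3-equivalent per tonne of waste.

    AP = 31.25 * (total sulfur, wt%) by convention. A positive NABA
    (NP - AP) suggests net neutralizing capacity; a negative value
    suggests net acid generation.
    """
    acid_potential = 31.25 * total_sulfur_pct
    return neutralization_potential - acid_potential
```

    For example, a waste with 2.0 wt% sulfur and NP of 50 kg CaCO3/t has a NABA of −12.5 kg CaCO3/t, i.e. it is net acid generating.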

  18. Confirmatory and quantitative analysis of beta-lactam antibiotics in bovine kidney tissue by dispersive solid-phase extraction and liquid chromatography-tandem mass spectrometry.

    PubMed

    Fagerquist, Clifton K; Lightfield, Alan R; Lehotay, Steven J

    2005-03-01

    A simple, rapid, rugged, sensitive, and specific method for the confirmation and quantitation of 10 beta-lactam antibiotics in fortified and incurred bovine kidney tissue has been developed. The method uses a simple solvent extraction, dispersive solid-phase extraction (dispersive-SPE) cleanup, and liquid chromatography-tandem mass spectrometry (LC/MS/MS) for confirmation and quantitation. Dispersive-SPE greatly simplifies and accelerates sample cleanup and improves overall recoveries compared with conventional SPE cleanup. The beta-lactam antibiotics tested were as follows: deacetylcephapirin (an antimicrobial metabolite of cephapirin), amoxicillin, desfuroylceftiofur cysteine disulfide (DCCD, an antimicrobial metabolite of ceftiofur), ampicillin, cefazolin, penicillin G, oxacillin, cloxacillin, nafcillin, and dicloxacillin. Average recoveries of fortified samples were 70% or better for all beta-lactams except DCCD, which had an average recovery of 58%. The LC/MS/MS method was able to demonstrate quantitative recoveries at established tolerance levels and provide confirmatory data for unambiguous analyte identification. The method was also tested on 30 incurred bovine kidney samples obtained from the USDA Food Safety and Inspection Service, which had previously tested the samples using the approved semiquantitative microbial assay. The results from the quantitative LC/MS/MS analysis were in general agreement with the microbial assay for 23 samples, although the LC/MS/MS method was superior in that it could specifically identify which beta-lactam was present and quantitate its concentration, whereas the microbial assay could only identify the type of beta-lactam present and report a concentration with respect to the microbial inhibition of a penicillin G standard. In addition, for 6 of the 23 samples, LC/MS/MS analysis detected a penicillin and a cephalosporin beta-lactam, whereas the microbial assay detected only a penicillin beta-lactam. 
For samples that do not fall into the "general agreement" category, the most serious discrepancy involves two samples where the LC/MS/MS method detected a violative level of a cephalosporin beta-lactam (deacetylcephapirin) in the first sample and a possibly violative level of desfuroylceftiofur in the second, whereas the microbial assay identified the two samples as having only violative levels of a penicillin beta-lactam.

  19. A simplified method in comparison with comprehensive interaction incremental dynamic analysis to assess seismic performance of jacket-type offshore platforms

    NASA Astrophysics Data System (ADS)

    Zolfaghari, M. R.; Ajamy, A.; Asgarian, B.

    2015-12-01

    The primary goal of seismic reassessment procedures in oil platform codes is to determine the reliability of a platform under extreme earthquake loading. Therefore, in this paper, a simplified method is proposed to assess the seismic performance of existing jacket-type offshore platforms (JTOP) over the response range from near-elastic behavior to global collapse. The simplified method exploits the close agreement between the static pushover (SPO) curve and the summarized comprehensive interaction incremental dynamic analysis (CI-IDA) curve of the platform. Although the CI-IDA method offers better understanding and better modelling of the phenomenon, it is a time-consuming and challenging task. To overcome these challenges, the simplified procedure, a fast and accurate approach, is introduced based on SPO analysis. An existing JTOP in the Persian Gulf is then presented to illustrate the procedure, and finally a comparison is made between the simplified method and CI-IDA results. The simplified method is informative and practical for current engineering purposes: it is able to predict seismic performance from elasticity to global dynamic instability with reasonable accuracy and little computational effort.

  20. EFS: an ensemble feature selection tool implemented as R-package and web-application.

    PubMed

    Neumann, Ursula; Genze, Nikita; Heider, Dominik

    2017-01-01

    Feature selection methods aim at identifying a subset of features that improve the prediction performance of subsequent classification models and thereby also simplify their interpretability. Preceding studies demonstrated that single feature selection methods can have specific biases, whereas an ensemble feature selection has the advantage of alleviating and compensating for these biases. The software EFS (Ensemble Feature Selection) makes use of multiple feature selection methods and combines their normalized outputs into a quantitative ensemble importance. Currently, eight different feature selection methods have been integrated in EFS, which can be used separately or combined in an ensemble. EFS identifies relevant features while compensating for the specific biases of single methods due to its ensemble approach. Thereby, EFS can improve the prediction accuracy and interpretability in subsequent binary classification models. EFS can be downloaded as an R-package from CRAN or used via a web application at http://EFS.heiderlab.de.
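
    The ensemble combination step can be sketched as normalizing each method's scores to a common scale and averaging them per feature. The following is a simplified illustration of that idea, not the R package's actual implementation:

```python
def ensemble_importance(scores_by_method):
    """Combine feature scores from several selection methods.

    Each method's scores are min-max normalized to [0, 1] so that no
    single method dominates, then averaged into one ensemble importance
    value per feature.
    """
    normalized = []
    for scores in scores_by_method:
        lo, hi = min(scores), max(scores)
        span = hi - lo if hi > lo else 1.0
        normalized.append([(s - lo) / span for s in scores])
    n_methods = len(normalized)
    # transpose: one column per feature, averaged across methods
    return [sum(col) / n_methods for col in zip(*normalized)]
```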

  1. The supersonic molecular beam injector as a reliable tool for plasma fueling and physics experiment on HL-2A.

    PubMed

    Chen, C Y; Yu, D L; Feng, B B; Yao, L H; Song, X M; Zang, L G; Gao, X Y; Yang, Q W; Duan, X R

    2016-09-01

    On the HL-2A tokamak, supersonic molecular beam injection (SMBI) has been developed as a routine refueling method. The key components of the system are an electromagnetic valve and a conic nozzle, which are assembled to form a simplified Laval nozzle for generating the pulsed beam. Auxiliary components include a cooling system for cooled SMBI generation and an in situ calibration component for quantitative injection. Compared with conventional gas puffing, SMBI features a prompt response and a larger fueling flux. These merits make SMBI a good fueling method, an excellent tool for plasma density feedback control, and a resource for edge-localized-mode mitigation.

  2. A Quantitative ADME-base Tool for Exploring Human ...

    EPA Pesticide Factsheets

    Exposure to a wide range of chemicals through our daily habits and routines is ubiquitous and largely unavoidable within modern society. The potential for human exposure, however, has not been quantified for the vast majority of chemicals in wide commercial use. Creative advances in exposure science are needed to support efficient and effective evaluation and management of chemical risks, particularly for chemicals in consumer products. The U.S. Environmental Protection Agency Office of Research and Development is developing, or collaborating in the development of, scientifically defensible methods for making quantitative or semi-quantitative exposure predictions. The Exposure Prioritization (Ex Priori) model is a simplified, quantitative visual dashboard that provides a rank-ordered internalized dose metric to simultaneously explore exposures across chemical space (not chemical by chemical). Diverse data streams are integrated within the interface such that different exposure scenarios for “individual,” “population,” or “professional” time-use profiles can be interchanged to tailor exposure and quantitatively explore multi-chemical signatures of exposure, internalized dose (uptake), body burden, and elimination. Ex Priori has been designed as an adaptable systems framework that synthesizes knowledge from various domains and is amenable to new knowledge and information. As such, it algorithmically captures the totality of exposure across pathways.

  3. A Backscatter-Lidar Forward-Operator

    NASA Astrophysics Data System (ADS)

    Geisinger, Armin; Behrendt, Andreas; Wulfmeyer, Volker; Vogel, Bernhard; Mattis, Ina; Flentje, Harald; Förstner, Jochen; Potthast, Roland

    2015-04-01

    We have developed a forward-operator that calculates virtual lidar profiles from atmospheric state simulations. The operator allows us to compare lidar measurements and model simulations in terms of the same measured quantity: the lidar backscatter profile. This simplifies qualitative comparisons and also makes quantitative comparisons possible, including statistical error quantification. Implemented into an aerosol-capable model system, the operator will act as a component for assimilating backscatter-lidar measurements. Since many weather services already maintain networks of backscatter lidars, such data are already acquired operationally. To estimate and quantify errors due to missing or uncertain aerosol information, we started sensitivity studies of several scattering parameters, such as the aerosol size and both the real and imaginary parts of the complex index of refraction. Furthermore, quantitative and statistical comparisons between measurements and virtual measurements are shown in this study, i.e. applying the backscatter-lidar forward-operator to model output.
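
    A minimal version of such a forward operator maps model backscatter and extinction profiles to the attenuated backscatter a lidar would record. The sketch below is a simplified single-wavelength elastic case, ignoring overlap functions and instrument constants, and is not the operator described in the abstract:

```python
import numpy as np

def attenuated_backscatter(z, beta, alpha):
    """Virtual (noise-free) backscatter-lidar profile from model fields.

    Given backscatter beta(z) and extinction alpha(z) on model levels z,
    returns the attenuated backscatter
        beta_att(z) = beta(z) * exp(-2 * integral_0^z alpha dz'),
    the two-way-transmission-weighted quantity a backscatter lidar measures.
    """
    # cumulative trapezoidal integral of alpha from the surface to each level
    dz = np.diff(z)
    tau = np.concatenate(([0.0], np.cumsum(0.5 * (alpha[1:] + alpha[:-1]) * dz)))
    return beta * np.exp(-2.0 * tau)
```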

  4. An improved 96-well turbidity assay for T4 lysozyme activity.

    PubMed

    Toro, Tasha B; Nguyen, Thao P; Watt, Terry J

    2015-01-01

    T4 lysozyme (T4L) is an important model system for investigating the relationship between protein structure and function. Despite being extensively studied, a reliable, quantitative activity assay for T4L has not been developed. Here, we present an improved T4L turbidity assay as well as an affinity-based T4L expression and purification protocol. This assay is designed for 96-well format and utilizes conditions amenable for both T4L and other lysozymes. This protocol enables easy, efficient, and quantitative characterization of T4L variants and allows comparison between different lysozymes. Our method:
    • Is applicable for all lysozymes, with enhanced sensitivity for T4 lysozyme compared to other 96-well plate turbidity assays;
    • Utilizes standardized conditions for comparing T4 lysozyme variants and other lysozymes; and
    • Incorporates a simplified expression and purification protocol for T4 lysozyme.

  6. Simultaneous determination of effective carrier lifetime and resistivity of Si wafers using the nonlinear nature of photocarrier radiometric signals

    NASA Astrophysics Data System (ADS)

    Sun, Qiming; Melnikov, Alexander; Wang, Jing; Mandelis, Andreas

    2018-04-01

    A rigorous treatment of the nonlinear behavior of photocarrier radiometric (PCR) signals is presented theoretically and experimentally for the quantitative characterization of semiconductor photocarrier recombination and transport properties. A frequency-domain model based on the carrier rate equation and the classical carrier radiative recombination theory was developed. The derived concise expression reveals different functionalities of the PCR amplitude and phase channels: the phase bears direct quantitative correlation with the carrier effective lifetime, while the amplitude versus the estimated photocarrier density dependence can be used to extract the equilibrium majority carrier density and thus, resistivity. An experimental ‘ripple’ optical excitation mode (small modulation depth compared to the dc level) was introduced to bypass the complicated ‘modulated lifetime’ problem so as to simplify theoretical interpretation and guarantee measurement self-consistency and reliability. Two Si wafers with known resistivity values were tested to validate the method.

  7. Dual-wavelength excitation to reduce background fluorescence for fluorescence spectroscopic quantitation of erythrocyte zinc protoporphyrin-IX and protoporphyrin-IX from whole blood and oral mucosa

    NASA Astrophysics Data System (ADS)

    Hennig, Georg; Vogeser, Michael; Holdt, Lesca M.; Homann, Christian; Großmann, Michael; Stepp, Herbert; Gruber, Christian; Erdogan, Ilknur; Hasmüller, Stephan; Hasbargen, Uwe; Brittenham, Gary M.

    2014-02-01

    Erythrocyte zinc protoporphyrin-IX (ZnPP) and protoporphyrin-IX (PPIX) accumulate in a variety of disorders that restrict or disrupt the biosynthesis of heme, including iron deficiency and various porphyrias. We describe a reagent-free spectroscopic method based on dual-wavelength excitation that can measure simultaneously both ZnPP and PPIX fluorescence from unwashed whole blood while virtually eliminating background fluorescence. We further aim to quantify ZnPP and PPIX non-invasively from the intact oral mucosa using dual-wavelength excitation to reduce the strong tissue background fluorescence while retaining the faint porphyrin fluorescence signal originating from erythrocytes. Fluorescence spectroscopic measurements were made on 35 diluted EDTA blood samples using a custom front-face fluorometer. The difference spectrum between fluorescence at 425 nm and 407 nm excitation effectively eliminated background autofluorescence while retaining the characteristic porphyrin peaks. These peaks were evaluated quantitatively and the results compared to a reference HPLC-kit method. A modified instrument using a single 1000 μm fiber for light delivery and detection was used to record fluorescence spectra from oral mucosa. For blood measurements, the ZnPP and PPIX fluorescence intensities from the difference spectra correlated well with the reference method (ZnPP: Spearman's rho rs = 0.943, p < 0.0001; PPIX: rs = 0.959, p < 0.0001). In difference spectra from oral mucosa, background fluorescence was reduced significantly, while porphyrin signals remained observable. The dual-wavelength excitation method evaluates quantitatively the ZnPP/heme and PPIX/heme ratios from unwashed whole blood, simplifying clinical laboratory measurements. The difference technique reduces the background fluorescence from measurements on oral mucosa, allowing for future non-invasive quantitation of erythrocyte ZnPP and PPIX.
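
    The core of the dual-wavelength technique is a subtraction of the two emission spectra. A minimal sketch, assuming the background autofluorescence is identical (up to an optional scale factor) under both excitation wavelengths while the porphyrin bands respond far more strongly to 425 nm:

```python
import numpy as np

def difference_spectrum(f_425, f_407, scale=1.0):
    """Background-corrected emission spectrum by dual-wavelength excitation.

    If the background fluorescence is the same (after scaling) under
    425 nm and 407 nm excitation, subtracting the two emission spectra
    cancels the background and leaves the porphyrin (ZnPP/PPIX) peaks.
    """
    return np.asarray(f_425, dtype=float) - scale * np.asarray(f_407, dtype=float)
```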

  8. Calibration-free assays on standard real-time PCR devices

    PubMed Central

    Debski, Pawel R.; Gewartowski, Kamil; Bajer, Seweryn; Garstecki, Piotr

    2017-01-01

    Quantitative Polymerase Chain Reaction (qPCR) is one of the central techniques in molecular biology and an important tool in medical diagnostics. While a gold standard, qPCR techniques depend on reference measurements and are susceptible to large errors caused by even small changes of reaction efficiency or conditions, errors that are typically not marked by decreased precision. Digital PCR (dPCR) technologies should alleviate the need for calibration by providing absolute quantitation using binary (yes/no) signals from partitions, provided that the basic assumption of amplification of a single target molecule into a positive signal is met. Still, access to digital techniques is limited because they require new instruments. We show an analog-digital method that can be executed on standard (real-time) qPCR devices. It benefits from real-time readout, providing calibration-free assessment. The method combines the advantages of qPCR and dPCR and bypasses their drawbacks. The protocols provide for small, simplified partitioning that can be fitted within a standard well-plate format. We demonstrate that, with the use of synergistic assay design, standard qPCR devices are capable of absolute quantitation when normal qPCR protocols fail to provide accurate estimates. We list practical recipes for how to design assays for required parameters and how to analyze signals to estimate concentration. PMID:28327545
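
    The calibration-free quantitation that partitioning enables rests on Poisson statistics over partitions. A minimal sketch of the standard digital-PCR calculation (the general principle, not the authors' analog-digital protocol itself):

```python
import math

def dpcr_concentration(positive, total, partition_volume_ul):
    """Calibration-free target concentration (copies/uL) from digital-PCR counts.

    With molecules distributed randomly over partitions, the mean
    occupancy follows Poisson statistics: lambda = -ln(1 - p), where p
    is the fraction of positive partitions.
    """
    p = positive / total
    lam = -math.log(1.0 - p)          # mean copies per partition
    return lam / partition_volume_ul  # copies per microliter
```

    Note that the estimate uses only the positive/negative counts and the partition volume; no standard curve is required, which is exactly the property the method above seeks to bring to standard qPCR hardware.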

  9. Calibration-free assays on standard real-time PCR devices

    NASA Astrophysics Data System (ADS)

    Debski, Pawel R.; Gewartowski, Kamil; Bajer, Seweryn; Garstecki, Piotr

    2017-03-01

    Quantitative Polymerase Chain Reaction (qPCR) is one of the central techniques in molecular biology and an important tool in medical diagnostics. Although it is the gold standard, qPCR depends on reference measurements and is susceptible to large errors caused by even small changes in reaction efficiency or conditions, and these errors are typically not flagged by decreased precision. Digital PCR (dPCR) technologies should alleviate the need for calibration by providing absolute quantitation from binary (yes/no) partition signals, provided that the basic assumption of amplifying a single target molecule into a positive signal is met. Still, access to digital techniques is limited because they require new instruments. We show an analog-digital method that can be executed on standard real-time qPCR devices. It benefits from real-time readout, providing calibration-free assessment. The method combines the advantages of qPCR and dPCR and bypasses their drawbacks. The protocols use small, simplified partitioning that fits within a standard well-plate format. We demonstrate that, with a synergistic assay design, standard qPCR devices are capable of absolute quantitation in cases where normal qPCR protocols fail to provide accurate estimates. We list practical recipes for designing assays to meet required parameters and for analyzing signals to estimate concentration.

  10. Recent Achievements in Characterizing the Histone Code and Approaches to Integrating Epigenomics and Systems Biology.

    PubMed

    Janssen, K A; Sidoli, S; Garcia, B A

    2017-01-01

    Functional epigenetic regulation occurs by dynamic modification of chromatin, including the genetic material itself (i.e., DNA methylation), histone proteins, and other nuclear proteins. Due to the highly complex nature of the histone code, mass spectrometry (MS) has become the leading technique for identifying single and combinatorial histone modifications. MS has now overtaken antibody-based strategies due to its automation, high resolution, and accurate quantitation. Moreover, multiple analytical approaches have been developed for global quantitation of posttranslational modifications (PTMs), including large-scale characterization of modification coexistence (middle-down and top-down proteomics), which is not currently possible with any other biochemical strategy. Recently, our group and others have simplified and increased the effectiveness of analyzing histone PTMs by improving multiple MS methods and data analysis tools. This review provides an overview of the major achievements in the analysis of histone PTMs using MS, with a focus on the most recent improvements. We speculate that the state-of-the-art workflow for histone analysis is highly reliable in terms of identification and quantitation accuracy, and that it has the potential to become a routine method for systems biology thanks to the possibility of integrating histone MS results with genomics and proteomics datasets. © 2017 Elsevier Inc. All rights reserved.

  11. Failure mode and effects analysis: a comparison of two common risk prioritisation methods.

    PubMed

    McElroy, Lisa M; Khorzad, Rebeca; Nannicelli, Anna P; Brown, Alexandra R; Ladner, Daniela P; Holl, Jane L

    2016-05-01

    Failure mode and effects analysis (FMEA) is a method of risk assessment that has been used increasingly in healthcare over the past decade. The traditional method, however, can require substantial time and training resources. The goal of this study is to compare a simplified scoring method with the traditional scoring method to determine the degree of congruence in identifying high-risk failures. An FMEA of the operating room (OR) to intensive care unit (ICU) handoff was conducted. Failures were scored and ranked using both the traditional risk priority number (RPN) and criticality-based method and a simplified method, which designates failures as 'high', 'medium' or 'low' risk. The degree of congruence was determined by first identifying the failures deemed critical by the traditional method (RPN ≥ 300) and then calculating the per cent congruence with the failures designated critical (high risk) by the simplified method. In total, 79 process failures among 37 individual steps in the OR to ICU handoff process were identified. The traditional method yielded criticality indices (CIs) ranging from 18 to 72 and RPNs ranging from 80 to 504. The simplified method ranked 11 failures as 'low' risk, 30 as 'medium' risk and 22 as 'high' risk. The traditional method yielded 24 failures with an RPN ≥ 300, of which 22 were identified as high risk by the simplified method (92% agreement). The top 20% of CIs (≥ 60) included 12 failures, of which six were designated as high risk by the simplified method (50% agreement). These results suggest that the simplified method of scoring and ranking failures identified by an FMEA can be a useful tool for healthcare organisations with limited access to FMEA expertise. However, the simplified method does not provide the same degree of discrimination in the ranking of failures offered by the traditional method. Published by the BMJ Publishing Group Limited.
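
    The two scoring schemes compared above can be sketched numerically. The RPN formula (severity × occurrence × detection, each on a 1-10 scale) is standard FMEA; the congruence calculation mirrors the abstract's 22-of-24 result, though the failure IDs here are hypothetical:

```python
def rpn(severity, occurrence, detection):
    # Traditional FMEA risk priority number: S x O x D, each scored 1-10.
    return severity * occurrence * detection

def percent_congruence(critical_ids, high_risk_ids):
    # Share of traditionally critical failures (e.g. RPN >= 300) that the
    # simplified method also flags as 'high' risk.
    critical = set(critical_ids)
    if not critical:
        return 0.0
    return 100.0 * len(critical & set(high_risk_ids)) / len(critical)

# Mirrors the abstract's numbers: 24 failures with RPN >= 300, of which the
# simplified method flagged 22 as 'high' risk.
agreement = percent_congruence(range(24), range(22))
```

    The maximum possible RPN is 10 × 10 × 10 = 1000; the study's highest observed value, 504, corresponds for example to scores of 8, 7 and 9.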

  12. A simplified and accurate detection of the genetically modified wheat MON71800 with one calibrator plasmid.

    PubMed

    Kim, Jae-Hwan; Park, Saet-Byul; Roh, Hyo-Jeong; Park, Sunghoon; Shin, Min-Ki; Moon, Gui Im; Hong, Jin-Hwan; Kim, Hae-Yeong

    2015-06-01

    With the increasing number of genetically modified (GM) events, unauthorized GMO releases into the food market have increased dramatically, and many countries have developed detection tools for them. This study describes qualitative and quantitative detection methods for the unauthorized GM wheat MON71800 using a single reference plasmid (pGEM-M71800). The wheat acetyl-CoA carboxylase (acc) gene was used as the endogenous gene. The plasmid pGEM-M71800, which contains both the acc gene and the event-specific target MON71800, was constructed as a positive control for the qualitative and quantitative analyses. The limit of detection in the qualitative PCR assay was approximately 10 copies. In the quantitative PCR assay, the standard deviation and relative standard deviation repeatability values ranged from 0.06 to 0.25 and from 0.23% to 1.12%, respectively. This study thus supplies a powerful, simple, and accurate detection strategy for unauthorized GM wheat MON71800 that relies on a single calibrator plasmid. Copyright © 2014 Elsevier Ltd. All rights reserved.
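
    Quantitation with a calibrator plasmid implies a qPCR standard curve built from a plasmid dilution series. A hedged sketch of that general calculation (the slope, intercept, and Cq values below are invented for illustration; the paper's actual assay parameters differ):

```python
def fit_standard_curve(log10_copies, cq_values):
    """Least-squares line Cq = slope * log10(copies) + intercept, fitted to a
    calibrator-plasmid dilution series."""
    n = len(log10_copies)
    mx = sum(log10_copies) / n
    my = sum(cq_values) / n
    sxx = sum((x - mx) ** 2 for x in log10_copies)
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_copies, cq_values))
    slope = sxy / sxx
    return slope, my - slope * mx

def copies_from_cq(cq, slope, intercept):
    # Invert the standard curve to estimate copy number in an unknown sample.
    return 10.0 ** ((cq - intercept) / slope)

def pcr_efficiency(slope):
    # 1.0 means perfect doubling each cycle (slope of about -3.32).
    return 10.0 ** (-1.0 / slope) - 1.0

# Invented dilution series: 10..100000 copies, ideal slope -3.32, intercept 38.
logs = [1.0, 2.0, 3.0, 4.0, 5.0]
cqs = [38.0 - 3.32 * x for x in logs]
slope, intercept = fit_standard_curve(logs, cqs)
copies = copies_from_cq(28.04, slope, intercept)   # expect ~1000 copies
```

    A GM content estimate then compares the event-specific copy number against the endogenous acc-gene copy number measured from the same calibrator.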

  13. Inertial Sensor-Based Motion Analysis of Lower Limbs for Rehabilitation Treatments

    PubMed Central

    Sun, Tongyang; Duan, Lihong; Wang, Yulong

    2017-01-01

    Diagnosis of the hemiplegic rehabilitation state by therapists can be biased by their subjective experience, which may degrade the rehabilitation outcome. To improve this situation, a quantitative evaluation is proposed. Although many motion analysis systems are available, they are too complicated for practical use by therapists. In this paper, a method for detecting the motion of the human lower limbs, including all degrees of freedom (DOFs), via inertial sensors is proposed, permitting analysis of the patient's motion ability. The method is applicable to arbitrary walking directions and tracks of the persons under study, and its results are unbiased compared with therapists' qualitative estimates. Using a simplified mathematical model of the human body, the rotation angles of each lower-limb joint are calculated from the signals acquired by the inertial sensors. Finally, rotation angle versus joint displacement curves are constructed, and estimates of joint motion angle and motion ability are obtained. Experimental verification of the proposed motion detection and analysis method showed that it can efficiently detect differences between the motion behaviors of disabled and healthy persons and provide a reliable quantitative evaluation of the rehabilitation state. PMID:29065575
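
    Joint rotation angles in such a system are ultimately derived from inertial signals; the most basic ingredient is integrating angular rate into angle. A deliberately minimal sketch (a real system fuses gyroscope and accelerometer data and uses the paper's body model, neither of which is reproduced here; the sampling rate is an example value):

```python
def integrate_gyro(rates_deg_s, dt):
    """Trapezoidal integration of angular rate (deg/s) into rotation angle
    (deg). Only the integration step is shown; drift correction via sensor
    fusion is omitted."""
    angle = 0.0
    angles = [angle]
    for w0, w1 in zip(rates_deg_s, rates_deg_s[1:]):
        angle += 0.5 * (w0 + w1) * dt
        angles.append(angle)
    return angles

# A constant 30 deg/s for 1 s sampled at 100 Hz should give a 30 degree turn.
angles = integrate_gyro([30.0] * 101, 0.01)
```

    Plotting such per-joint angles against gait-cycle displacement yields the rotation-angle-versus-displacement curves the abstract describes.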

  14. A Versatile Method for Functionalizing Surfaces with Bioactive Glycans

    PubMed Central

    Cheng, Fang; Shang, Jing; Ratner, Daniel M.

    2011-01-01

    Microarrays and biosensors owe their functionality to our ability to display surface-bound biomolecules with retained biological function. Versatile, stable, and facile methods for the immobilization of bioactive compounds on surfaces have expanded the application of high-throughput ‘omics’-scale screening of molecular interactions by non-expert laboratories. Herein, we demonstrate the potential of simplified chemistries to fabricate a glycan microarray, utilizing divinyl sulfone (DVS)-modified surfaces for the covalent immobilization of natural and chemically derived carbohydrates, as well as glycoproteins. The bioactivity of the captured glycans was quantitatively examined by surface plasmon resonance imaging (SPRi). Composition and spectroscopic evidence of carbohydrate species on the DVS-modified surface were obtained by X-ray photoelectron spectroscopy (XPS) and time-of-flight secondary ion mass spectrometry (ToF-SIMS), respectively. The site-selective immobilization of glycans based on relative nucleophilicity (reducing sugar vs. amine- and sulfhydryl-derived saccharides) and anomeric configuration was also examined. Our results demonstrate straightforward and reproducible conjugation of a variety of functional biomolecules onto a vinyl sulfone-modified biosensor surface. The simplicity of this method will have a significant impact on glycomics research, as it expands the ability of non-synthetic laboratories to rapidly construct functional glycan microarrays and quantitative biosensors. PMID:21142056

  15. Application of a SERS-based lateral flow immunoassay strip for the rapid and sensitive detection of staphylococcal enterotoxin B

    NASA Astrophysics Data System (ADS)

    Hwang, Joonki; Lee, Sangyeop; Choo, Jaebum

    2016-06-01

    A novel surface-enhanced Raman scattering (SERS)-based lateral flow immunoassay (LFA) biosensor was developed to resolve problems associated with conventional LFA strips (e.g., limits in quantitative analysis and low sensitivity). In our SERS-based biosensor, Raman reporter-labeled hollow gold nanospheres (HGNs) were used as SERS detection probes instead of gold nanoparticles. With the proposed SERS-based LFA strip, the presence of a target antigen can be identified through a colour change in the test zone. Furthermore, highly sensitive quantitative evaluation is possible by measuring SERS signals from the test zone. To verify the feasibility of the SERS-based LFA strip platform, an immunoassay of staphylococcal enterotoxin B (SEB) was performed as a model reaction. The limit of detection (LOD) for SEB, as determined with the SERS-based LFA strip, was estimated to be 0.001 ng mL⁻¹. This value is approximately three orders of magnitude more sensitive than that achieved with the corresponding ELISA-based method. The proposed SERS-based LFA strip sensor shows significant potential for the rapid and sensitive detection of target markers in a simplified manner. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr07243c

  16. Quantitative bioanalysis of strontium in human serum by inductively coupled plasma-mass spectrometry

    PubMed Central

    Somarouthu, Srikanth; Ohh, Jayoung; Shaked, Jonathan; Cunico, Robert L; Yakatan, Gerald; Corritori, Suzana; Tami, Joe; Foehr, Erik D

    2015-01-01

    Aim: A bioanalytical method using inductively-coupled plasma-mass spectrometry to measure endogenous levels of strontium in human serum was developed and validated. Results & methodology: This article details the experimental procedures used for the method development and validation thus demonstrating the application of the inductively-coupled plasma-mass spectrometry method for quantification of strontium in human serum samples. The assay was validated for specificity, linearity, accuracy, precision, recovery and stability. Significant endogenous levels of strontium are present in human serum samples ranging from 19 to 96 ng/ml with a mean of 34.6 ± 15.2 ng/ml (SD). Discussion & conclusion: Calibration procedures and sample pretreatment were simplified for high throughput analysis. The validation demonstrates that the method was sensitive, selective for quantification of strontium (88Sr) and is suitable for routine clinical testing of strontium in human serum samples. PMID:28031925

  17. Development, verification, and application of a simplified method to estimate total-streambed scour at bridge sites in Illinois

    USGS Publications Warehouse

    Holmes, Robert R.; Dunn, Chad J.

    1996-01-01

    A simplified method to estimate total-streambed scour was developed for application to bridges in the State of Illinois. Scour envelope curves, developed as empirical relations between calculated total scour and bridge-site characteristics for 213 State highway bridges in Illinois, are used in the method to estimate the 500-year flood scour. These 213 bridges, geographically distributed throughout Illinois, had been previously evaluated for streambed scour with the application of conventional hydraulic and scour-analysis methods recommended by the Federal Highway Administration. The bridge characteristics necessary for application of the simplified bridge scour-analysis method can be obtained from an office review of bridge plans, examination of topographic maps, and a reconnaissance-level site inspection. The estimates computed with the simplified method generally resulted in a larger value of 500-year flood total-streambed scour than the more detailed conventional method. The simplified method was successfully verified with a separate data set of 106 State highway bridges, geographically distributed throughout Illinois, and 15 county highway bridges.

  18. Modeling inelastic phonon scattering in atomic- and molecular-wire junctions

    NASA Astrophysics Data System (ADS)

    Paulsson, Magnus; Frederiksen, Thomas; Brandbyge, Mads

    2005-11-01

    Computationally inexpensive approximations describing electron-phonon scattering in molecular-scale conductors are derived from the nonequilibrium Green’s function method. The accuracy is demonstrated with a first-principles calculation on an atomic gold wire. Quantitative agreement between the full nonequilibrium Green’s function calculation and the newly derived expressions is obtained while simplifying the computational burden by several orders of magnitude. In addition, analytical models provide intuitive understanding of the conductance including nonequilibrium heating and provide a convenient way of parameterizing the physics. This is exemplified by fitting the expressions to the experimentally observed conductances through both an atomic gold wire and a hydrogen molecule.

  19. Simplified Discontinuous Galerkin Methods for Systems of Conservation Laws with Convex Extension

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    1999-01-01

    Simplified forms of the space-time discontinuous Galerkin (DG) and discontinuous Galerkin least-squares (DGLS) finite element method are developed and analyzed. The new formulations exploit simplifying properties of entropy endowed conservation law systems while retaining the favorable energy properties associated with symmetric variable formulations.

  20. Inverse methods for 3D quantitative optical coherence elasticity imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Dong, Li; Wijesinghe, Philip; Hugenberg, Nicholas; Sampson, David D.; Munro, Peter R. T.; Kennedy, Brendan F.; Oberai, Assad A.

    2017-02-01

    In elastography, quantitative elastograms are desirable as they are system and operator independent. Such quantification also facilitates more accurate diagnosis, longitudinal studies and studies performed across multiple sites. In optical elastography (compression, surface-wave or shear-wave), quantitative elastograms are typically obtained by assuming some form of homogeneity. This simplifies data processing at the expense of smearing sharp transitions in elastic properties, and/or introducing artifacts in these regions. Recently, we proposed an inverse problem-based approach to compression OCE that does not assume homogeneity, and overcomes the drawbacks described above. In this approach, the difference between the measured and predicted displacement field is minimized by seeking the optimal distribution of elastic parameters. The predicted displacements and recovered elastic parameters together satisfy the constraint of the equations of equilibrium. This approach, which has been applied in two spatial dimensions assuming plane strain, has yielded accurate material property distributions. Here, we describe the extension of the inverse problem approach to three dimensions. In addition to the advantage of visualizing elastic properties in three dimensions, this extension eliminates the plane strain assumption and is therefore closer to the true physical state. It does, however, incur greater computational costs. We address this challenge through a modified adjoint problem, spatially adaptive grid resolution, and three-dimensional decomposition techniques. Through these techniques the inverse problem is solved on a typical desktop machine within a wall clock time of 20 hours. We present the details of the method and quantitative elasticity images of phantoms and tissue samples.

  1. How Does the Low-Rank Matrix Decomposition Help Internal and External Learnings for Super-Resolution.

    PubMed

    Wang, Shuang; Yue, Bo; Liang, Xuefeng; Jiao, Licheng

    2018-03-01

    Wisely utilizing internal and external learning methods is a new challenge in the super-resolution problem. To address this issue, we analyze the attributes of the two methodologies and make two observations about the details they recover: 1) they are complementary in both the feature space and the image plane and 2) they are sparsely distributed in the spatial domain. These observations inspire us to propose a low-rank solution that effectively integrates the two learning methods and thereby achieves a superior result. To fit this solution, the internal and external learning methods are tailored to produce multiple preliminary results. Our theoretical analysis and experiments prove that the proposed low-rank solution does not require massive inputs to guarantee performance, thereby simplifying the design of the two learning methods. Intensive experiments show that the proposed solution improves on either single learning method in both qualitative and quantitative assessments. Notably, it shows even greater capability on noisy images and outperforms state-of-the-art methods.
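
    The general principle of combining several preliminary reconstructions through a low-rank decomposition can be illustrated with a toy SVD-based fusion. This is emphatically not the authors' algorithm, only a sketch of the low-rank idea on synthetic data:

```python
import numpy as np

def low_rank_fuse(results, rank=1):
    """Stack vectorized preliminary reconstructions as columns, keep only the
    leading singular components, and average the rank-reduced columns. The
    low-rank step suppresses sparse detail that is inconsistent between the
    inputs (e.g., noise unique to one method)."""
    m = np.column_stack([r.ravel() for r in results])
    u, s, vt = np.linalg.svd(m, full_matrices=False)
    s[rank:] = 0.0                 # truncate to the requested rank
    fused = (u * s) @ vt           # rank-reduced reconstruction of the stack
    return fused.mean(axis=1).reshape(results[0].shape)

# Toy check: two identical rank-1 "reconstructions" are returned unchanged.
base = np.outer(np.arange(4.0), np.ones(4))
fused = low_rank_fuse([base, base.copy()])
```

    In the paper's setting the columns would come from the tailored internal and external learners, whose shared content dominates the leading singular components.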

  2. Two-Photon Flow Cytometry

    NASA Technical Reports Server (NTRS)

    Zhog, Cheng Frank; Ye, Jing Yong; Norris, Theodore B.; Myc, Andrzej; Cao, Zhengyl; Bielinska, Anna; Thomas, Thommey; Baker, James R., Jr.

    2004-01-01

    Flow cytometry is a powerful technique for obtaining quantitative information from fluorescence in cells. Quantitation is achieved by assuring a high degree of uniformity in the optical excitation and detection, generally by using a highly controlled flow such as is obtained via hydrodynamic focusing. In this work, we demonstrate a two-beam, two-channel detection, two-photon excitation flow cytometry (T³FC) system that enables multi-dye analysis to be performed very simply, with greatly relaxed requirements on the fluid flow. Two-photon excitation using a femtosecond near-infrared (NIR) laser has the advantages that it enables simultaneous excitation of multiple dyes and achieves a very high signal-to-noise ratio through simplified filtering and fluorescence background reduction. By matching the excitation volume to the size of a cell, single-cell detection is ensured. Labeling of cells with targeted nanoparticles carrying multiple fluorophores enables normalization of the fluorescence signal and thus ratiometric measurements under nonuniform excitation. Quantitative size measurements can also be made, even under conditions of nonuniform flow, via the two-beam layout. This innovative detection scheme not only considerably simplifies the fluid flow system and the excitation and collection optics, it opens the way to quantitative cytometry in simple and compact microfluidic systems, or in vivo. Real-time detection of fluorescent microbeads in the vasculature of a mouse ear demonstrates the ability to do flow cytometry in vivo. The conditions required to perform quantitative in vivo cytometry on labeled cells will be presented.

  3. In Vivo Quantitative Ultrasound Imaging and Scatter Assessments.

    NASA Astrophysics Data System (ADS)

    Lu, Zheng Feng

    There is evidence that "instrument independent" measurements of ultrasonic scattering properties would provide useful diagnostic information that is not available with conventional ultrasound imaging. This dissertation is a continuing effort to test the above hypothesis and to incorporate quantitative ultrasound methods into clinical examinations for early detection of diffuse liver disease. A well-established reference phantom method was employed to construct quantitative ultrasound images of tissue in vivo. The method was verified by extensive phantom tests. A new method was developed to measure the effective attenuation coefficient of the body wall. The method relates the slope of the difference between the echo signal power spectrum from a uniform region distal to the body wall and the echo signal power spectrum from a reference phantom to the body wall attenuation. The accuracy obtained from phantom tests suggests further studies with animal experiments. Clinically, thirty-five healthy subjects and sixteen patients with diffuse liver disease were studied by these quantitative ultrasound methods. The average attenuation coefficient in normals agreed with previous investigators' results; in vivo backscatter coefficients agreed with the results from normals measured by O'Donnell. Strong discriminating power (p < 0.001) was found for both attenuation and backscatter coefficients between fatty livers and normals; a significant difference (p < 0.01) was observed in the backscatter coefficient but not in the attenuation coefficient between cirrhotic livers and normals. An in vivo animal model of steroid hepatopathy was used to investigate the system sensitivity in detecting early changes in canine liver resulting from corticosteroid administration. 
    The average attenuation coefficient slope increased from 0.7 dB/cm/MHz in controls to 0.82 dB/cm/MHz (at 6 MHz) in treated animals on day 14 into the treatment, and the backscatter coefficient was 26 × 10⁻⁴ cm⁻¹ sr⁻¹ in controls compared with 74 × 10⁻⁴ cm⁻¹ sr⁻¹ (at 6 MHz) in treated animals. A simplified quantitative approach using video image signals was developed. Results derived both from the r.f. signal analysis and from the video signal analysis are sensitive to the changes in the liver in this animal model.
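
    The attenuation estimate in a reference phantom method comes from the slope, versus frequency, of the difference between two echo power spectra. A hedged numerical sketch with synthetic spectra (the round-trip factor of 2 and the sign convention are assumptions of this illustration, not details taken from the dissertation):

```python
import numpy as np

def attenuation_slope(freqs_mhz, spec_tissue_db, spec_ref_db, path_cm):
    """Fit a line to the spectral difference (tissue minus reference, in dB)
    versus frequency; dividing the negated slope by twice the propagation
    path gives an effective attenuation coefficient slope in dB/cm/MHz."""
    diff = np.asarray(spec_tissue_db) - np.asarray(spec_ref_db)
    slope, _ = np.polyfit(freqs_mhz, diff, 1)
    return -slope / (2.0 * path_cm)

# Synthetic check: 0.7 dB/cm/MHz of attenuation over a 4 cm one-way path
# (8 cm round trip) produces a spectral difference of -5.6 dB per MHz.
freqs = np.linspace(2.0, 8.0, 13)
ref = np.zeros_like(freqs)
tissue = -2.0 * 4.0 * 0.7 * freqs
alpha = float(attenuation_slope(freqs, tissue, ref, 4.0))
```

    The recovered 0.7 dB/cm/MHz matches the control-liver value quoted above, which is why a well-calibrated reference phantom makes the measurement "instrument independent".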

  4. Ex Priori: Exposure-based Prioritization across Chemical Space

    EPA Science Inventory

    EPA's Exposure Prioritization (Ex Priori) is a simplified, quantitative visual dashboard that makes use of data from various inputs to provide a rank-ordered internalized dose metric. This complements other high-throughput screening by viewing exposures within all chemical space si...

  5. Impact of parasitic thermal effects on thermoelectric property measurements by Harman method.

    PubMed

    Kwon, Beomjin; Baek, Seung-Hyub; Kim, Seong Keun; Kim, Jin-Sang

    2014-04-01

    The Harman method is a rapid and simple technique for measuring thermoelectric properties. However, its validity has often been questioned because of the over-simplified assumptions on which it relies. Here, we quantitatively investigate the influence of previously ignored parasitic thermal effects on the Harman method and develop a procedure to determine the intrinsic ZT. We expand the original Harman relation with three extra terms: heat loss via the lead wires, heat loss via radiation, and Joule heating within the sample. Based on the expanded Harman relation, we use a differential measurement over sample geometry to obtain the intrinsic ZT. To evaluate the parasitic terms separately, the ZTs measured with systematically varied sample geometries and lead wire types are fitted to the expanded relation. A large discrepancy (∼28%) among the measured ZTs, depending on the measurement configuration, is observed, and we are able to evaluate the parasitic terms separately. This work will help evaluate intrinsic thermoelectric properties with the Harman method by eliminating ambiguities arising from extrinsic effects.
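
    In its ideal form the Harman method extracts ZT from the ratio of the total steady-state voltage to its ohmic part; the paper's contribution is expanding this relation with parasitic terms. The ideal relation alone can be sketched as follows (the voltages are illustrative values, and the parasitic corrections are deliberately not modeled):

```python
def harman_zt(v_total, v_ohmic):
    """Ideal Harman relation: ZT = V_total / V_ohmic - 1, where V_total is the
    steady-state DC voltage (ohmic plus Seebeck contributions) and V_ohmic is
    the purely resistive part. The paper corrects this relation for lead-wire
    and radiative heat losses and sample Joule heating, which this sketch
    ignores."""
    return v_total / v_ohmic - 1.0

zt = harman_zt(1.85e-3, 1.00e-3)   # illustrative voltages, in volts
```

    With parasitic heat paths present, the measured ratio underestimates or overestimates the intrinsic ZT depending on geometry, which is exactly the ∼28% configuration dependence the abstract reports.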

  6. Simplified method for calculating shear deflections of beams.

    Treesearch

    I. Orosz

    1970-01-01

    When one designs with wood, shear deflections can become substantial compared to deflections due to moments, because the modulus of elasticity in bending differs from that in shear by a large amount. This report presents a simplified energy method to calculate shear deflections in bending members. This simplified approach should help designers decide whether or not...

  7. Multifunctional sample preparation kit and on-chip quantitative nucleic acid sequence-based amplification tests for microbial detection.

    PubMed

    Zhao, Xinyan; Dong, Tao

    2012-10-16

    This study reports a quantitative nucleic acid sequence-based amplification (Q-NASBA) microfluidic platform composed of a membrane-based sampling module, a sample preparation cassette, and a 24-channel Q-NASBA chip for environmental investigations on aquatic microorganisms. This low-cost and highly efficient sampling module, having seamless connection with the subsequent steps of sample preparation and quantitative detection, is designed for the collection of microbial communities from aquatic environments. Eight kinds of commercial membrane filters are relevantly analyzed using Saccharomyces cerevisiae, Escherichia coli, and Staphylococcus aureus as model microorganisms. After the microorganisms are concentrated on the membrane filters, the retentate can be easily conserved in a transport medium (TM) buffer and sent to a remote laboratory. A Q-NASBA-oriented sample preparation cassette is originally designed to extract DNA/RNA molecules directly from the captured cells on the membranes. Sequentially, the extract is analyzed within Q-NASBA chips that are compatible with common microplate readers in laboratories. Particularly, a novel analytical algorithmic method is developed for simple but robust on-chip Q-NASBA assays. The reported multifunctional microfluidic system could detect a few microorganisms quantitatively and simultaneously. Further research should be conducted to simplify and standardize ecological investigations on aquatic environments.

  8. 48 CFR 13.305-4 - Procedures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Section 13.305-4 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES Simplified Acquisition Methods 13.305-4... purchase requisition, contracting officer verification statement, or other agency approved method of...

  9. A simplified flight-test method for determining aircraft takeoff performance that includes effects of pilot technique

    NASA Technical Reports Server (NTRS)

    Larson, T. J.; Schweikhard, W. G.

    1974-01-01

    A method for evaluating aircraft takeoff performance from brake release to air-phase height that requires fewer tests than conventionally required is evaluated with data for the XB-70 airplane. The method defines the effects of pilot technique on takeoff performance quantitatively, including the decrease in acceleration from drag due to lift. For a given takeoff weight and throttle setting, a single takeoff provides enough data to establish a standardizing relationship for the distance from brake release to any point where velocity is appropriate to rotation. The lower rotation rates penalized takeoff performance in terms of ground roll distance; the lowest observed rotation rate required a ground roll distance that was 19 percent longer than the highest. Rotations at the minimum rate also resulted in lift-off velocities that were approximately 5 knots lower than the highest rotation rate at any given lift-off distance.

  10. Modeling of Continuum Manipulators Using Pythagorean Hodograph Curves.

    PubMed

    Singh, Inderjeet; Amara, Yacine; Melingui, Achille; Mani Pathak, Pushparaj; Merzouki, Rochdi

    2018-05-10

    Research on continuum manipulators is increasingly developing in the context of bionic robotics because of their many advantages over conventional rigid manipulators. Due to their soft structure, they have inherent flexibility, which makes controlling them with high performance a major challenge. Before elaborating a control strategy for such robots, it is essential first to reconstruct the robot's behavior through an approximate behavioral model, which can be kinematic or dynamic depending on the operating conditions of the robot. Kinematically, two types of modeling methods exist to describe robot behavior: quantitative methods, which are model based, and qualitative methods, which are learning based. In kinematic modeling of continuum manipulators, the assumption of constant curvature is often made to simplify the model formulation. In this work, a quantitative modeling method based on Pythagorean hodograph (PH) curves is proposed. The aim is to obtain a three-dimensional reconstruction of the shape of a continuum manipulator with variable curvature, allowing calculation of its inverse kinematic model (IKM). The PH-based kinematic modeling of continuum manipulators performs considerably better than other kinematic modeling methods with respect to position accuracy, shape reconstruction, and the time and cost of model calculation, in two cases: free-load manipulation and variable-load manipulation. The modeling method is applied to the compact bionic handling assistant (CBHA) manipulator for validation, and the results are compared with other IKMs developed for the CBHA manipulator.

  11. Differential Mobility Spectrometry-Mass Spectrometry (DMS-MS) in Radiation Biodosimetry: Rapid and High-Throughput Quantitation of Multiple Radiation Biomarkers in Nonhuman Primate Urine.

    PubMed

    Chen, Zhidan; Coy, Stephen L; Pannkuk, Evan L; Laiakis, Evagelia C; Fornace, Albert J; Vouros, Paul

    2018-05-07

    High-throughput methods to assess radiation exposure are a priority due to concerns that include nuclear power accidents, the spread of nuclear weapon capability, and the risk of terrorist attacks. Metabolomics, the assessment of small molecules in an easily accessible sample, is the most recent method to be applied for the identification of biomarkers of the biological radiation response with a useful dose-response profile. Profiling for biomarker identification is frequently done using an LC-MS platform which has limited throughput due to the time-consuming nature of chromatography. We present here a chromatography-free simplified method for quantitative analysis of seven metabolites in urine with radiation dose-response using urine samples provided from the Pannkuk et al. (2015) study of long-term (7-day) radiation response in nonhuman primates (NHP). The stable isotope dilution (SID) analytical method consists of sample preparation by strong cation exchange-solid phase extraction (SCX-SPE) to remove interferences and concentrate the metabolites of interest, followed by differential mobility spectrometry (DMS) ion filtration to select the ion of interest and reduce chemical background, followed by mass spectrometry (overall SID-SPE-DMS-MS). Since no chromatography is used, calibration curves were prepared rapidly, in under 2 h (including SPE) for six simultaneously analyzed radiation biomarkers. The seventh, creatinine, was measured separately after 2500× dilution. Creatinine plays a dual role, measuring kidney glomerular filtration rate (GFR), and indicating kidney damage at high doses. The current quantitative method using SID-SPE-DMS-MS provides throughput which is 7.5 to 30 times higher than that of LC-MS and provides a path to pre-clinical radiation dose estimation.
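    The core of stable isotope dilution quantitation is a calibration line of analyte-to-internal-standard response ratio versus concentration, which is then inverted for unknown samples. A minimal sketch with made-up peak ratios (the real ratios would come from the DMS-filtered MS intensities; no chromatography is implied):

    ```python
    import numpy as np

    # Each calibrant is spiked with a fixed amount of isotope-labeled
    # internal standard (IS); the area ratio analyte/IS is fit linearly
    # against concentration.  All numbers below are illustrative.
    conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0])        # calibrant conc (uM), assumed
    ratio = np.array([0.11, 0.21, 0.40, 1.01, 2.00])   # analyte/IS area ratio, assumed

    slope, intercept = np.polyfit(conc, ratio, 1)

    def quantify(sample_ratio):
        """Invert the calibration line to estimate sample concentration."""
        return (sample_ratio - intercept) / slope

    est = quantify(0.60)
    print(round(slope, 3), round(est, 2))
    ```

    Because the labeled standard co-ionizes with the analyte, the ratio largely cancels matrix effects, which is what allows such curves to be prepared quickly without a separation step.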

  12. Differential Mobility Spectrometry-Mass Spectrometry (DMS-MS) in Radiation Biodosimetry: Rapid and High-Throughput Quantitation of Multiple Radiation Biomarkers in Nonhuman Primate Urine

    NASA Astrophysics Data System (ADS)

    Chen, Zhidan; Coy, Stephen L.; Pannkuk, Evan L.; Laiakis, Evagelia C.; Fornace, Albert J.; Vouros, Paul

    2018-05-01

    High-throughput methods to assess radiation exposure are a priority due to concerns that include nuclear power accidents, the spread of nuclear weapon capability, and the risk of terrorist attacks. Metabolomics, the assessment of small molecules in an easily accessible sample, is the most recent method to be applied for the identification of biomarkers of the biological radiation response with a useful dose-response profile. Profiling for biomarker identification is frequently done using an LC-MS platform which has limited throughput due to the time-consuming nature of chromatography. We present here a chromatography-free simplified method for quantitative analysis of seven metabolites in urine with radiation dose-response using urine samples provided from the Pannkuk et al. (2015) study of long-term (7-day) radiation response in nonhuman primates (NHP). The stable isotope dilution (SID) analytical method consists of sample preparation by strong cation exchange-solid phase extraction (SCX-SPE) to remove interferences and concentrate the metabolites of interest, followed by differential mobility spectrometry (DMS) ion filtration to select the ion of interest and reduce chemical background, followed by mass spectrometry (overall SID-SPE-DMS-MS). Since no chromatography is used, calibration curves were prepared rapidly, in under 2 h (including SPE) for six simultaneously analyzed radiation biomarkers. The seventh, creatinine, was measured separately after 2500× dilution. Creatinine plays a dual role, measuring kidney glomerular filtration rate (GFR), and indicating kidney damage at high doses. The current quantitative method using SID-SPE-DMS-MS provides throughput which is 7.5 to 30 times higher than that of LC-MS and provides a path to pre-clinical radiation dose estimation.

  13. Uniform semiclassical sudden approximation for rotationally inelastic scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korsch, H.J.; Schinke, R.

    1980-08-01

    The infinite-order-sudden (IOS) approximation is investigated in the semiclassical limit. A simplified IOS formula for rotationally inelastic differential cross sections is derived involving a uniform stationary phase approximation for two-dimensional oscillatory integrals with two stationary points. The semiclassical analysis provides a quantitative description of the rotational rainbow structure in the differential cross section. The numerical calculation of semiclassical IOS cross sections is extremely fast compared to numerically exact IOS methods, especially if high Δj transitions are involved. Rigid rotor results for He-Na₂ collisions with Δj ≲ 26 and for K-CO collisions with Δj ≲ 70 show satisfactory agreement with quantal IOS calculations.

  14. Kinetic characterisation of primer mismatches in allele-specific PCR: a quantitative assessment.

    PubMed

    Waterfall, Christy M; Eisenthal, Robert; Cobb, Benjamin D

    2002-12-20

    A novel method of estimating the kinetic parameters of Taq DNA polymerase during rapid cycle PCR is presented. A model was constructed using a simplified sigmoid function to represent substrate accumulation during PCR in combination with the general equation describing high substrate inhibition for Michaelis-Menten enzymes. The PCR progress curve was viewed as a series of independent reactions where initial rates were accurately measured for each cycle. Kinetic parameters were obtained for allele-specific PCR (AS-PCR) amplification to examine the effect of mismatches on amplification. A high degree of correlation was obtained providing evidence of substrate inhibition as a major cause of the plateau phase that occurs in the later cycles of PCR.
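    The model described above combines a sigmoid for product accumulation with the Michaelis-Menten substrate-inhibition rate law. A minimal sketch with assumed parameters (not the paper's fitted values) shows the qualitative behavior: per-cycle rates rise, peak, then fall as accumulating product inhibits the enzyme, producing the plateau.

    ```python
    import numpy as np

    vmax, km, ki = 1.0, 0.5, 0.1     # assumed kinetic constants

    def rate(s):
        """Michaelis-Menten rate with high-substrate inhibition."""
        return vmax * s / (km + s + s**2 / ki)

    def sigmoid_product(cycle, midpoint=20.0, steepness=0.4):
        """Assumed sigmoid for accumulated product vs. cycle number."""
        return 1.0 / (1.0 + np.exp(-steepness * (cycle - midpoint)))

    cycles = np.arange(1, 41)
    product = sigmoid_product(cycles)
    per_cycle_rate = rate(product)

    # The per-cycle "initial rate" peaks and then declines: the plateau.
    peak_cycle = int(cycles[np.argmax(per_cycle_rate)])
    print(peak_cycle, round(float(per_cycle_rate[-1]), 3))
    ```

    Treating each cycle as an independent reaction with its own initial rate, as the paper does, lets standard enzyme kinetics be fit to the whole progress curve.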

  15. Hepatic iron overload in the portal tract predicts poor survival in hepatocellular carcinoma after curative resection.

    PubMed

    Chung, Jung Wha; Shin, Eun; Kim, Haeryoung; Han, Ho-Seong; Cho, Jai Young; Choi, Young Rok; Hong, Sukho; Jang, Eun Sun; Kim, Jin-Wook; Jeong, Sook-Hyang

    2018-05-01

    Hepatic iron overload is associated with liver injury and hepatocarcinogenesis; however, it has not been evaluated in patients with hepatocellular carcinoma (HCC) in Asia. The aim of this study was to clarify the degree and distribution of intrahepatic iron deposition, and their effects on the survival of HCC patients. Intrahepatic iron deposition was examined using non-tumorous liver tissues from 204 HCC patients after curative resection, and they were scored by 2 semi-quantitative methods: simplified Scheuer's and modified Deugnier's methods. For the Scheuer's method, iron deposition in hepatocytes and Kupffer cells was separately evaluated, while for the modified Deugnier's method, hepatocyte iron score (HIS), sinusoidal iron score (SIS) and portal iron score (PIS) were systematically evaluated, and the corrected total iron score (cTIS) was calculated by multiplying the sum (TIS) of the HIS, SIS, and PIS by the coefficient. The overall prevalence of hepatic iron was 40.7% with the simplified Scheuer's method and 45.1% with the modified Deugnier's method with a mean cTIS score of 2.46. During a median follow-up of 67 months, the cTIS was not associated with overall survival. However, a positive PIS was significantly associated with a lower 5-year overall survival rate (50.0%) compared with a negative PIS (73.7%, P = .006). In the multivariate analysis, a positive PIS was an independent factor for overall mortality (hazard ratio, 2.310; 95% confidence interval, 1.181-4.517). Intrahepatic iron deposition was common, and iron overload in the portal tract indicated poor survival in curatively resected HCC patients. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
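    The scoring arithmetic described above is simple enough to sketch directly; the score inputs and the coefficient below are illustrative placeholders, not the study's actual values or ranges.

    ```python
    # Modified Deugnier scoring as described in the abstract:
    # cTIS = (HIS + SIS + PIS) * coefficient, with a positive portal
    # iron score (PIS) treated as the prognostic flag.
    def corrected_total_iron_score(his, sis, pis, coefficient=0.5):
        """cTIS: sum of hepatocyte, sinusoidal, and portal iron scores,
        multiplied by a correction coefficient (value assumed here)."""
        tis = his + sis + pis
        return tis * coefficient

    def portal_iron_positive(pis):
        """A positive portal iron score (PIS > 0) flagged poor survival."""
        return pis > 0

    print(corrected_total_iron_score(3, 1, 1), portal_iron_positive(1))
    ```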

  16. A Modeling Approach for Burn Scar Assessment Using Natural Features and Elastic Property

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsap, L V; Zhang, Y; Goldgof, D B

    2004-04-02

    A modeling approach is presented for quantitative burn scar assessment. Emphases are given to: (1) constructing a finite element model from natural image features with an adaptive mesh, and (2) quantifying the Young's modulus of scars using the finite element model and the regularization method. A set of natural point features is extracted from the images of burn patients. A Delaunay triangle mesh is then generated that adapts to the point features. A 3D finite element model is built on top of the mesh with the aid of range images providing the depth information. The Young's modulus of scars is quantified with a simplified regularization functional, assuming that knowledge of the scar's geometry is available. The consistency between the Relative Elasticity Index and the physician's rating based on the Vancouver Scale (a relative scale used to rate burn scars) indicates that the proposed modeling approach has high potential for image-based quantitative burn scar assessment.

  17. PCR detection and quantitation of predominant anaerobic bacteria in human and animal fecal samples.

    PubMed Central

    Wang, R F; Cao, W W; Cerniglia, C E

    1996-01-01

    PCR procedures based on 16S rRNA gene sequences specific for 12 anaerobic bacteria that predominate in the human intestinal tract were developed and used for quantitative detection of these species in human (adult and baby) feces and animal (rat, mouse, cat, dog, monkey, and rabbit) feces. Fusobacterium prausnitzii, Peptostreptococcus productus, and Clostridium clostridiiforme had high PCR titers (the maximum dilutions for positive PCR results ranged from 10⁻³ to 10⁻⁸) in all of the human and animal fecal samples tested. Bacteroides thetaiotaomicron, Bacteroides vulgatus, and Eubacterium limosum also showed higher PCR titers (10⁻² to 10⁻⁶) in adult human feces. The other bacteria tested, including Escherichia coli, Bifidobacterium adolescentis, Bifidobacterium longum, Lactobacillus acidophilus, Eubacterium biforme, and Bacteroides distasonis, were either at low PCR titers (less than 10⁻²) or not detected by PCR. The reported PCR procedure, including the fecal sample preparation method, is simplified and rapid and eliminates the DNA isolation steps. PMID:8919784

  18. Rigorous numerical modeling of scattering-type scanning near-field optical microscopy and spectroscopy

    NASA Astrophysics Data System (ADS)

    Chen, Xinzhong; Lo, Chiu Fan Bowen; Zheng, William; Hu, Hai; Dai, Qing; Liu, Mengkun

    2017-11-01

    Over the last decade, scattering-type scanning near-field optical microscopy and spectroscopy have been widely used in nano-photonics and material research due to their fine spatial resolution and broad spectral range. A number of simplified analytical models have been proposed to quantitatively understand the tip-scattered near-field signal. However, a rigorous interpretation of the experimental results is still lacking at this stage. Numerical modeling, on the other hand, is mostly done by simulating the local electric field slightly above the sample surface, which only qualitatively represents the near-field signal rendered by the tip-sample interaction. In this work, we performed a more comprehensive numerical simulation based on realistic experimental parameters and signal extraction procedures. By directly comparing to experiments as well as other simulation efforts, our method offers a more accurate quantitative description of the near-field signal, paving the way for future studies of complex systems at the nanoscale.

  19. Development of quantitative security optimization approach for the picture archives and carrying system between a clinic and a rehabilitation center

    NASA Astrophysics Data System (ADS)

    Haneda, Kiyofumi; Kajima, Toshio; Koyama, Tadashi; Muranaka, Hiroyuki; Dojo, Hirofumi; Aratani, Yasuhiko

    2002-05-01

    The target of our study is to analyze the level of necessary security requirements, to search for suitable security measures, and to optimize the distribution of security across every portion of the medical practice. Where possible, quantitative expressions are introduced to simplify follow-up security procedures and the evaluation of security outcomes. Using fault tree analysis (FTA), system analysis showed that subdividing system elements into detailed groups yields a much more accurate analysis. Such subdivided composition factors depend greatly on the behavior of staff, interactive terminal devices, the kinds of services provided, and network routes. Security measures were then implemented based on the analysis results. In conclusion, we identified the methods needed to determine the required level of security and proposed security measures for each medical information system, along with the basic events, and combinations of events, that comprise the threat composition factors. Methods for identifying suitable security measures were found and implemented. Risk factors for each basic event, the number of elements for each composition factor, and potential security measures were determined. Methods to optimize the security measures for each medical information system were proposed, yielding the most efficient distribution of risk factors over basic events.
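    Fault tree analysis of this kind reduces to propagating basic-event probabilities through AND/OR gates. A minimal sketch with hypothetical events and probabilities (not the paper's actual threat model), assuming independent basic events:

    ```python
    from functools import reduce

    # OR-gates combine independent events as 1 - prod(1 - p);
    # AND-gates as prod(p).
    def or_gate(probs):
        return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), probs, 1.0)

    def and_gate(probs):
        return reduce(lambda acc, p: acc * p, probs, 1.0)

    # Hypothetical basic events for a records-exchange system.
    p_weak_password = 0.05
    p_no_lockout = 0.20
    p_unencrypted_link = 0.01
    p_sniffer_present = 0.02

    # Unauthorized access needs BOTH a weak password and no lockout;
    # eavesdropping needs BOTH an unencrypted link and a sniffer.
    p_unauth = and_gate([p_weak_password, p_no_lockout])
    p_eavesdrop = and_gate([p_unencrypted_link, p_sniffer_present])

    # Top event: either path compromises the data.
    p_top = or_gate([p_unauth, p_eavesdrop])
    print(round(p_top, 5))
    ```

    With quantified gates like these, the effect of a candidate security measure can be expressed as a reduction in one basic-event probability and propagated to the top event, which is the kind of quantitative follow-up the paper advocates.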

  20. Ultrasound Imaging Using Diffraction Tomography in a Cylindrical Geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chambers, D H; Littrup, P

    2002-01-24

    Tomographic images of tissue phantoms and a sample of breast tissue have been produced from an acoustic synthetic array system for frequencies near 500 kHz. The images for sound speed and attenuation show millimeter resolution and demonstrate the feasibility of obtaining high-resolution tomographic images with frequencies that can deeply penetrate tissue. The image reconstruction method is based on the Born approximation to acoustic scattering and is a simplified version of a method previously used by Andre (Andre et al., Int. J. Imaging Systems and Technology, Vol. 8, No. 1, 1997) for a circular acoustic array system. The images have comparable resolution to conventional ultrasound images at much higher frequencies (3-5 MHz) but with lower speckle noise. This shows the potential of low frequency, deeply penetrating ultrasound for high-resolution quantitative imaging.

  1. Identifying sources of fugitive emissions in industrial facilities using trajectory statistical methods

    NASA Astrophysics Data System (ADS)

    Brereton, Carol A.; Johnson, Matthew R.

    2012-05-01

    Fugitive pollutant sources from the oil and gas industry are typically quite difficult to find within industrial plants and refineries, yet they are a significant contributor to global greenhouse gas emissions. A novel approach for locating fugitive emission sources using computationally efficient trajectory statistical methods (TSM) has been investigated in detailed proof-of-concept simulations. Four TSMs were examined in a variety of source emissions scenarios developed using transient CFD simulations on the simplified geometry of an actual gas plant: potential source contribution function (PSCF), concentration weighted trajectory (CWT), residence time weighted concentration (RTWC), and quantitative transport bias analysis (QTBA). Quantitative comparisons were made using a correlation measure based on search area from the source(s). PSCF, CWT and RTWC could all distinguish areas near major sources from the surroundings. QTBA successfully located sources in only some cases, even when provided with a large data set. RTWC, given sufficient domain trajectory coverage, distinguished source areas best, but otherwise could produce false source predictions. Using RTWC in conjunction with CWT could overcome this issue as well as reduce sensitivity to noise in the data. The results demonstrate that TSMs are a promising approach for identifying fugitive emissions sources within complex facility geometries.
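    Of the four TSMs, PSCF is the simplest to state: grid the domain, and for each cell take the ratio of trajectory endpoints belonging to "polluted" trajectories to all endpoints in that cell. A minimal sketch on synthetic trajectories (not the gas-plant CFD data):

    ```python
    import numpy as np

    # PSCF(i,j) = m(i,j) / n(i,j): n counts all trajectory endpoints in
    # grid cell (i,j); m counts only endpoints of trajectories whose
    # receptor concentration exceeded a threshold.
    rng = np.random.default_rng(0)

    n_traj, pts_per_traj, grid = 200, 50, 10
    x = rng.uniform(0, grid, size=(n_traj, pts_per_traj))
    y = rng.uniform(0, grid, size=(n_traj, pts_per_traj))
    conc = rng.uniform(0, 1, size=n_traj)         # receptor concentration per trajectory
    high = conc > np.percentile(conc, 75)         # "polluted" trajectories

    n_count = np.zeros((grid, grid))
    m_count = np.zeros((grid, grid))
    for trj in range(n_traj):
        for p in range(pts_per_traj):
            i, j = int(x[trj, p]), int(y[trj, p])
            n_count[i, j] += 1
            if high[trj]:
                m_count[i, j] += 1

    pscf = m_count / np.maximum(n_count, 1.0)     # cells near sources score near 1
    print(pscf.shape)
    ```

    CWT and RTWC follow the same gridding pattern but weight each cell by trajectory concentrations rather than a binary threshold, which is why they can be combined as the paper suggests.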

  2. Methods for quantifying adipose tissue insulin resistance in overweight/obese humans.

    PubMed

    Ter Horst, K W; van Galen, K A; Gilijamse, P W; Hartstra, A V; de Groot, P F; van der Valk, F M; Ackermans, M T; Nieuwdorp, M; Romijn, J A; Serlie, M J

    2017-08-01

    Insulin resistance of adipose tissue is an important feature of obesity-related metabolic disease. However, assessment of lipolysis in humans requires labor-intensive and expensive methods, and there is limited validation of simplified measurement methods. We aimed to validate simplified methods for the quantification of adipose tissue insulin resistance against the assessment of insulin sensitivity of lipolysis suppression during hyperinsulinemic-euglycemic clamp studies. We assessed the insulin-mediated suppression of lipolysis by tracer-dilution of [1,1,2,3,3-²H₅]glycerol during hyperinsulinemic-euglycemic clamp studies in 125 overweight or obese adults (85 men, 40 women; age 50±11 years; body mass index 38±7 kg m⁻²). Seven indices of adipose tissue insulin resistance were validated against the reference measurement method. Low-dose insulin infusion resulted in suppression of the glycerol rate of appearance ranging from 4% (most resistant) to 85% (most sensitive), indicating a good range of adipose tissue insulin sensitivity in the study population. The reference method correlated with (1) insulin-mediated suppression of plasma glycerol concentrations (r=0.960, P<0.001), (2) suppression of plasma non-esterified fatty acid (NEFA) concentrations (r=0.899, P<0.001), (3) the Adipose tissue Insulin Resistance (Adipo-IR) index (fasting plasma insulin-NEFA product; r=-0.526, P<0.001), (4) the fasting plasma insulin-glycerol product (r=-0.467, P<0.001), (5) the Adipose Tissue Insulin Resistance Index (fasting plasma insulin-basal lipolysis product; r=0.460, P<0.001), (6) the Quantitative Insulin Sensitivity Check Index (QUICKI)-NEFA index (r=0.621, P<0.001), and (7) the QUICKI-glycerol index (r=0.671, P<0.001). Bland-Altman plots showed no systematic errors for the suppression indices but proportional errors for all fasting indices.
Receiver-operating characteristic curves confirmed that all indices were able to detect adipose tissue insulin resistance (area under the curve ⩾0.801, P<0.001). Adipose tissue insulin sensitivity (that is, the antilipolytic action of insulin) can be reliably quantified in overweight and obese humans by simplified index methods. The sensitivity and specificity of the Adipo-IR index and the fasting plasma insulin-glycerol product, combined with their simplicity and acceptable agreement, suggest that these may be most useful in clinical practice.
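    The two fasting indices singled out above are simple products of fasting plasma measurements. A minimal sketch, with illustrative input values; unit conventions for insulin, NEFA, and glycerol vary between studies, so the absolute numbers carry no clinical meaning here:

    ```python
    def adipo_ir(fasting_insulin, fasting_nefa):
        """Adipo-IR: product of fasting plasma insulin and NEFA."""
        return fasting_insulin * fasting_nefa

    def insulin_glycerol_index(fasting_insulin, fasting_glycerol):
        """Fasting plasma insulin x glycerol product."""
        return fasting_insulin * fasting_glycerol

    # Example inputs (assumed): insulin in pmol/L, NEFA in mmol/L,
    # glycerol in umol/L.
    print(adipo_ir(90.0, 0.6), insulin_glycerol_index(90.0, 80.0))
    ```

    Higher products indicate more residual lipolysis despite higher insulin, i.e. greater adipose tissue insulin resistance, which is why both indices correlate negatively with clamp-measured suppression.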

  3. Quantification of integrated HIV DNA by repetitive-sampling Alu-HIV PCR on the basis of Poisson statistics.

    PubMed

    De Spiegelaere, Ward; Malatinkova, Eva; Lynch, Lindsay; Van Nieuwerburgh, Filip; Messiaen, Peter; O'Doherty, Una; Vandekerckhove, Linos

    2014-06-01

    Quantification of integrated proviral HIV DNA by repetitive-sampling Alu-HIV PCR is a candidate virological tool to monitor the HIV reservoir in patients. However, the experimental procedures and data analysis of the assay are complex and hinder its widespread use. Here, we provide an improved and simplified data analysis method by adopting binomial and Poisson statistics. A modified analysis method on the basis of Poisson statistics was used to analyze the binomial data of positive and negative reactions from a 42-replicate Alu-HIV PCR by use of dilutions of an integration standard and on samples of 57 HIV-infected patients. Results were compared with the quantitative output of the previously described Alu-HIV PCR method. Poisson-based quantification of the Alu-HIV PCR was linearly correlated with the standard dilution series, indicating that absolute quantification with the Poisson method is a valid alternative for data analysis of repetitive-sampling Alu-HIV PCR data. Quantitative outputs of patient samples assessed by the Poisson method correlated with the previously described Alu-HIV PCR analysis, indicating that this method is a valid alternative for quantifying integrated HIV DNA. Poisson-based analysis of the Alu-HIV PCR data enables absolute quantification without the need of a standard dilution curve. Implementation of the confidence interval (CI) estimation permits improved qualitative analysis of the data and provides a statistical basis for the required minimal number of technical replicates. © 2014 The American Association for Clinical Chemistry.
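    The Poisson readout underlying this kind of replicate assay is compact: if template copies are distributed randomly across N replicate reactions, the fraction of negative wells k/N estimates the mean copies per reaction as λ = −ln(k/N). A minimal sketch with an illustrative replicate count:

    ```python
    import math

    def copies_per_reaction(n_replicates, n_negative):
        """Poisson estimate of mean template copies per reaction from the
        number of negative replicate PCRs (all-positive runs are
        unquantifiable without further dilution)."""
        if n_negative == 0:
            raise ValueError("all wells positive: sample too concentrated")
        return -math.log(n_negative / n_replicates)

    # 42-replicate run with 17 negative wells (hypothetical counts).
    lam = copies_per_reaction(42, 17)
    print(round(lam, 3))
    ```

    This is why no standard dilution curve is needed for absolute quantification: the estimate comes directly from the binomial positive/negative pattern, and the binomial variance of k/N gives the confidence interval and the minimum replicate number.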

  4. A methodology to estimate uncertainty for emission projections through sensitivity analysis.

    PubMed

    Lumbreras, Julio; de Andrés, Juan Manuel; Pérez, Javier; Borge, Rafael; de la Paz, David; Rodríguez, María Encarnación

    2015-04-01

    Air pollution abatement policies must be based on quantitative information on current and future emissions of pollutants. As uncertainties in emission projections are inevitable and traditional statistical treatments of uncertainty are highly time- and resource-consuming, a simplified methodology for nonstatistical uncertainty estimation based on sensitivity analysis is presented in this work. The methodology was applied to the "with measures" scenario for Spain, specifically over the 12 highest-emitting sectors for greenhouse gas and air pollutant emissions. Examples of the methodology's application for two important sectors (power plants, and agriculture and livestock) are shown and explained in depth. Uncertainty bands were obtained up to 2020 by modifying the driving factors of the 12 selected sectors, and the methodology was tested against a recomputed emission trend in a low economic-growth perspective and official figures for 2010, showing very good performance. A solid understanding and quantification of uncertainties related to atmospheric emission inventories and projections provide useful information for policy negotiations. However, as many of those uncertainties are irreducible, there is interest in how they could be managed in order to derive robust policy conclusions. Taking this into account, a method developed to use sensitivity analysis as a source of information to derive nonstatistical uncertainty bands for emission projections is presented and applied to Spain. This method simplifies uncertainty assessment and allows other countries to take advantage of their sensitivity analyses.

  5. Quantification of Finger-Tapping Angle Based on Wearable Sensors

    PubMed Central

    Djurić-Jovičić, Milica; Jovičić, Nenad S.; Roby-Brami, Agnes; Popović, Mirjana B.; Kostić, Vladimir S.; Djordjević, Antonije R.

    2017-01-01

    We propose a novel simple method for quantitative and qualitative finger-tapping assessment based on miniature inertial sensors (3D gyroscopes) placed on the thumb and index-finger. We propose a simplified description of the finger tapping by using a single angle, describing rotation around a dominant axis. The method was verified on twelve subjects, who performed various tapping tasks, mimicking impaired patterns. The obtained tapping angles were compared with results of a motion capture camera system, demonstrating excellent accuracy. The root-mean-square (RMS) error between the two sets of data is, on average, below 4°, and the intraclass correlation coefficient is, on average, greater than 0.972. Data obtained by the proposed method may be used together with scores from clinical tests to enable a better diagnosis. Along with hardware simplicity, this makes the proposed method a promising candidate for use in clinical practice. Furthermore, our definition of the tapping angle can be applied to all tapping assessment systems. PMID:28125051

  6. Quantification of Finger-Tapping Angle Based on Wearable Sensors.

    PubMed

    Djurić-Jovičić, Milica; Jovičić, Nenad S; Roby-Brami, Agnes; Popović, Mirjana B; Kostić, Vladimir S; Djordjević, Antonije R

    2017-01-25

    We propose a novel simple method for quantitative and qualitative finger-tapping assessment based on miniature inertial sensors (3D gyroscopes) placed on the thumb and index-finger. We propose a simplified description of the finger tapping by using a single angle, describing rotation around a dominant axis. The method was verified on twelve subjects, who performed various tapping tasks, mimicking impaired patterns. The obtained tapping angles were compared with results of a motion capture camera system, demonstrating excellent accuracy. The root-mean-square (RMS) error between the two sets of data is, on average, below 4°, and the intraclass correlation coefficient is, on average, greater than 0.972. Data obtained by the proposed method may be used together with scores from clinical tests to enable a better diagnosis. Along with hardware simplicity, this makes the proposed method a promising candidate for use in clinical practice. Furthermore, our definition of the tapping angle can be applied to all tapping assessment systems.
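    The single-angle reduction can be sketched on synthetic data: project the 3-axis gyroscope rates onto a dominant rotation axis and integrate to an angle. The axis estimate via the principal eigenvector of the rate covariance is an assumed choice for illustration, not necessarily the authors' procedure.

    ```python
    import numpy as np

    fs = 100.0                                        # sample rate (Hz), assumed
    t = np.arange(0.0, 2.0, 1.0 / fs)
    true_angle = 30.0 * np.sin(2 * np.pi * 2.0 * t)   # deg, synthetic 2 Hz tapping
    omega_dom = np.gradient(true_angle, 1.0 / fs)     # deg/s about the dominant axis

    axis = np.array([0.8, 0.6, 0.0])                  # fixed rotation axis, assumed
    gyro = np.outer(omega_dom, axis)                  # synthetic (n, 3) gyro record

    # Dominant axis = principal eigenvector of the rate covariance.
    _, eigvecs = np.linalg.eigh(gyro.T @ gyro)
    dom = eigvecs[:, -1]
    if dom @ axis < 0:            # eigenvectors carry a sign ambiguity;
        dom = -dom                # in practice the sign is fixed by convention

    # Project onto the axis and integrate (trapezoid) to the tapping angle.
    proj = gyro @ dom
    angle = np.concatenate([[0.0], np.cumsum((proj[1:] + proj[:-1]) / 2.0) / fs])

    rms_err = float(np.sqrt(np.mean((angle - true_angle) ** 2)))
    print(rms_err < 2.0)
    ```

    On real recordings the projection discards off-axis motion, which is exactly the simplification the single-angle description makes.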

  7. Evaluation of a simplified gross thrust calculation method for a J85-21 afterburning turbojet engine in an altitude facility

    NASA Technical Reports Server (NTRS)

    Baer-Riedhart, J. L.

    1982-01-01

    A simplified gross thrust calculation method was evaluated on its ability to predict the gross thrust of a modified J85-21 engine. The method used tailpipe pressure data and ambient pressure data to predict the gross thrust. The method's algorithm is based on a one-dimensional analysis of the flow in the afterburner and nozzle. The test results showed that the method was notably accurate over the engine operating envelope using the altitude facility measured thrust for comparison. A summary of these results, the simplified gross thrust method and requirements, and the test techniques used are discussed in this paper.
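    A one-dimensional gross-thrust estimate of the general kind described can be sketched from tailpipe and ambient pressures; the ideal-gas relations and all numbers below are assumptions for illustration, not the J85-21 algorithm or its data.

    ```python
    import math

    # Choked convergent nozzle: gross thrust F = m_dot * V_e + (p_e - p_a) * A_e.
    gamma, R = 1.33, 287.0          # hot-gas properties, assumed
    p_t = 2.2e5                     # tailpipe total pressure (Pa), "measured"
    T_t = 900.0                     # total temperature (K), assumed
    p_amb = 1.0e5                   # ambient pressure (Pa)
    A_e = 0.15                      # nozzle exit area (m^2), assumed

    # Sonic (M = 1) exit conditions for a choked convergent nozzle.
    p_e = p_t * (2.0 / (gamma + 1.0)) ** (gamma / (gamma - 1.0))
    T_e = T_t * 2.0 / (gamma + 1.0)
    V_e = math.sqrt(gamma * R * T_e)        # exit velocity = speed of sound
    rho_e = p_e / (R * T_e)
    m_dot = rho_e * A_e * V_e

    F_g = m_dot * V_e + (p_e - p_amb) * A_e
    print(round(F_g / 1000.0, 1), "kN")
    ```

    The appeal of such a method, as the abstract notes, is that only tailpipe and ambient pressure measurements drive the calculation once the one-dimensional flow assumptions are fixed.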

  8. [The subject matters concerned with use of simplified analytical systems from the perspective of the Japanese Association of Medical Technologists].

    PubMed

    Morishita, Y

    2001-05-01

    Issues concerning the effective use of so-called simplified analytical systems are discussed from the perspective of a laboratory technician. 1. Data from simplified analytical systems should agree with those of designated reference methods, so that discrepancies do not arise between laboratories. 2. The accuracy of results measured with simplified analytical systems is difficult to scrutinize thoroughly and correctly using quality control surveillance procedures based on stored pooled serum or partly processed blood. 3. Guidelines need to be presented on the content of evaluations that guarantee the quality of simplified analytical systems. 4. Maintenance and manual operation of simplified analytical systems should be standardized between laboratory technicians and vendor technicians. 5. Attention is further drawn to the fact that simplified analytical systems are considerably more expensive than routine methods using liquid reagents. 6. It is also hoped that various substances in human serum, such as cytokines, hormones, tumor markers, and vitamins, can be measured by simplified analytical systems.

  9. Rapid quantitative chemical mapping of surfaces with sub-2 nm resolution

    NASA Astrophysics Data System (ADS)

    Lai, Chia-Yun; Perri, Saverio; Santos, Sergio; Garcia, Ricardo; Chiesa, Matteo

    2016-05-01

    We present a theory that exploits four observables in bimodal atomic force microscopy to produce maps of the Hamaker constant H. The quantitative H maps may be employed by the broader community to directly interpret the high resolution of standard bimodal AFM images as chemical maps while simultaneously quantifying chemistry in the non-contact regime. We further provide a simple methodology to optimize a range of operational parameters for which H is in the closest agreement with the Lifshitz theory in order to (1) simplify data acquisition and (2) generalize the methodology to any set of cantilever-sample systems. Electronic supplementary information (ESI) available. See DOI: 10.1039/c6nr00496b

  10. Notification: Methods for Procuring Supplies and Services Under Simplified Acquisition Procedures

    EPA Pesticide Factsheets

    Project #OA-FY15-0193, June 18, 2015. The EPA OIG plans to begin the preliminary research phase of auditing the methods used in procuring supplies and services under simplified acquisition procedures.

  11. Simplified molecular input line entry system-based: QSAR modelling for MAP kinase-interacting protein kinase (MNK1).

    PubMed

    Begum, S; Achary, P Ganga Raju

    2015-01-01

    Quantitative structure-activity relationship (QSAR) models were built for the prediction of inhibition (pIC50, i.e. the negative logarithm of the 50% inhibitory concentration) of MAP kinase-interacting protein kinase (MNK1) by 43 potent inhibitors. The pIC50 values were modelled with five random splits, with the molecular structures represented by the simplified molecular input line entry system (SMILES). QSAR model building was performed by Monte Carlo optimisation using three methods: the classic scheme; balance of correlations; and balance of correlations with ideal slopes. The robustness of these models was checked by parameters such as rm(2), r(*)m(2), [Formula: see text] and the randomisation technique. The best QSAR model based on single optimal descriptors was applied to study in vitro structure-activity relationships of 6-(4-(2-(piperidin-1-yl) ethoxy) phenyl)-3-(pyridin-4-yl) pyrazolo [1,5-a] pyrimidine derivatives as a screening tool for the development of novel potent MNK1 inhibitors. The effects of alkyl groups, -OH, -NO2, F, Cl, Br, I, etc. on the IC50 values towards the inhibition of MNK1 were also reported.

  12. Linear information retrieval method in X-ray grating-based phase contrast imaging and its interchangeability with tomographic reconstruction

    NASA Astrophysics Data System (ADS)

    Wu, Z.; Gao, K.; Wang, Z. L.; Shao, Q. G.; Hu, R. F.; Wei, C. X.; Zan, G. B.; Wali, F.; Luo, R. H.; Zhu, P. P.; Tian, Y. C.

    2017-06-01

    In X-ray grating-based phase contrast imaging, information retrieval is necessary for quantitative research, especially for phase tomography. However, numerous and repetitive processes have to be performed for tomographic reconstruction. In this paper, we report a novel information retrieval method, which enables retrieving phase and absorption information by means of a linear combination of two mutually conjugate images. Thanks to the distributive law of the multiplication as well as the commutative law and associative law of the addition, the information retrieval can be performed after tomographic reconstruction, thus simplifying the information retrieval procedure dramatically. The theoretical model of this method is established in both parallel beam geometry for Talbot interferometer and fan beam geometry for Talbot-Lau interferometer. Numerical experiments are also performed to confirm the feasibility and validity of the proposed method. In addition, we discuss its possibility in cone beam geometry and its advantages compared with other methods. Moreover, this method can also be employed in other differential phase contrast imaging methods, such as diffraction enhanced imaging, non-interferometric imaging, and edge illumination.
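    The interchangeability claim rests on linearity: if both the retrieval step (a linear combination of two mutually conjugate images) and the reconstruction are linear operators, they commute. A minimal numerical sketch, with a random matrix standing in for the reconstruction operator and synthetic vectors standing in for the images:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    n = 64
    img_a = rng.normal(size=n)          # projection image 1 (flattened), synthetic
    img_b = rng.normal(size=n)          # mutually conjugate image 2, synthetic
    recon = rng.normal(size=(n, n))     # arbitrary linear reconstruction operator

    alpha, beta = 0.7, -0.3             # assumed combination weights

    # Retrieval before reconstruction vs. after: identical, by the
    # distributive/commutative/associative laws the paper invokes.
    retrieve_then_reconstruct = recon @ (alpha * img_a + beta * img_b)
    reconstruct_then_retrieve = alpha * (recon @ img_a) + beta * (recon @ img_b)

    print(np.allclose(retrieve_then_reconstruct, reconstruct_then_retrieve))
    ```

    In tomography this means each of the two conjugate sinograms can be reconstructed once, and phase/absorption retrieval applied afterwards as a per-voxel linear combination, which is where the claimed reduction in repetitive processing comes from.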

  13. Between Stressors and Outcomes: Can We Simplify Caregiving Process Variables?

    ERIC Educational Resources Information Center

    Braithwaite, Valerie

    1996-01-01

    Examines Lawton, Kleban, Moss, Rovine, and Glickman's (1989) caregiving appraisal through a principal components analysis and varimax rotation of a data set based on in-depth quantitative interviews with 144 caregivers. Five caregiving appraisal dimensions are identified: task load caregiving, dysfunctional caregiving, intimacy and love, social…

  14. Quantitative Investigation of the Technologies That Support Cloud Computing

    ERIC Educational Resources Information Center

    Hu, Wenjin

    2014-01-01

    Cloud computing is dramatically shaping modern IT infrastructure. It virtualizes computing resources, provides elastic scalability, serves as a pay-as-you-use utility, simplifies the IT administrators' daily tasks, enhances the mobility and collaboration of data, and increases user productivity. We focus on providing generalized black-box…

  15. Using Alien Coins to Test Whether Simple Inference Is Bayesian

    ERIC Educational Resources Information Center

    Cassey, Peter; Hawkins, Guy E.; Donkin, Chris; Brown, Scott D.

    2016-01-01

    Reasoning and inference are well-studied aspects of basic cognition that have been explained as statistically optimal Bayesian inference. Using a simplified experimental design, we conducted quantitative comparisons between Bayesian inference and human inference at the level of individuals. In 3 experiments, with more than 13,000 participants, we…

  16. The Simulation of an Oxidation-Reduction Titration Curve with Computer Algebra

    ERIC Educational Resources Information Center

    Whiteley, Richard V., Jr.

    2015-01-01

    Although the simulation of an oxidation/reduction titration curve is an important exercise in an undergraduate course in quantitative analysis, that exercise is frequently simplified to accommodate computational limitations. With the use of readily available computer algebra systems, however, such curves for complicated systems can be generated…

  17. 48 CFR 13.304 - [Reserved]

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 1 2011-10-01 2011-10-01 false [Reserved] 13.304 Section 13.304 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES Simplified Acquisition Methods 13.304 [Reserved] ...

  18. Confidence estimation for quantitative photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Gröhl, Janek; Kirchner, Thomas; Maier-Hein, Lena

    2018-02-01

    Quantification of photoacoustic (PA) images is one of the major challenges currently being addressed in PA research. Tissue properties can be quantified by correcting the recorded PA signal with an estimation of the corresponding fluence. Fluence estimation itself, however, is an ill-posed inverse problem which usually needs simplifying assumptions to be solved with state-of-the-art methods. These simplifications, as well as noise and artifacts in PA images reduce the accuracy of quantitative PA imaging (PAI). This reduction in accuracy is often localized to image regions where the assumptions do not hold true. This impedes the reconstruction of functional parameters when averaging over entire regions of interest (ROI). Averaging over a subset of voxels with a high accuracy would lead to an improved estimation of such parameters. To achieve this, we propose a novel approach to the local estimation of confidence in quantitative reconstructions of PA images. It makes use of conditional probability densities to estimate confidence intervals alongside the actual quantification. It encapsulates an estimation of the errors introduced by fluence estimation as well as signal noise. We validate the approach using Monte Carlo generated data in combination with a recently introduced machine learning-based approach to quantitative PAI. Our experiments show at least a two-fold improvement in quantification accuracy when evaluating on voxels with high confidence instead of thresholding signal intensity.
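    A toy illustration of confidence estimation from conditional probability densities, not the paper's machine-learning estimator: given synthetic (signal, absorption) training pairs from an assumed forward model, the conditional sample p(μa | signal ≈ s) yields both a point estimate (median) and a confidence interval (percentiles).

```python
import numpy as np

# Synthetic training data from an assumed toy forward model: the recorded PA
# signal is the absorption coefficient attenuated by a fluence-like factor,
# plus detector noise. None of this is the paper's actual simulation setup.
rng = np.random.default_rng(0)
mu_a = rng.uniform(0.1, 1.0, 5000)              # "true" absorption values
signal = mu_a * np.exp(-2.0 * mu_a)             # toy fluence-attenuated signal
signal = signal + rng.normal(0.0, 0.01, mu_a.size)

def estimate_with_confidence(s, width=0.01):
    # Conditional sample approximating p(mu_a | signal ~ s): the median is the
    # point estimate, a percentile interval the confidence estimate.
    vals = mu_a[np.abs(signal - s) < width]
    lo, med, hi = np.percentile(vals, [16, 50, 84])
    return med, (lo, hi)

est, (lo, hi) = estimate_with_confidence(0.12)
print(round(est, 3), round(lo, 3), round(hi, 3))
```

    Voxels whose interval is wide (e.g. where noise or fluence ambiguity makes the conditional density broad) would be excluded before averaging over a region of interest.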

  19. 26 CFR 1.199-4 - Costs allocable to domestic production gross receipts.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... using the simplified deduction method. Paragraph (f) of this section provides a small business... taxpayer for internal management or other business purposes; whether the method is used for other Federal... than a taxpayer that uses the small business simplified overall method of paragraph (f) of this section...

  20. Accuracy of a simplified method for shielded gamma-ray skyshine sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bassett, M.S.; Shultis, J.K.

    1989-11-01

    Rigorous transport or Monte Carlo methods for estimating far-field gamma-ray skyshine doses generally are computationally intensive. Consequently, several simplified techniques such as point-kernel methods and methods based on beam response functions have been proposed. For unshielded skyshine sources, these simplified methods have been shown to be quite accurate by comparison with benchmark problems and benchmark experimental results. For shielded sources, the simplified methods typically use exponential attenuation and photon buildup factors to describe the effect of the shield. However, the energy and directional redistribution of photons scattered in the shield is usually ignored, i.e., scattered photons are assumed to emerge from the shield with the same energy and direction as the uncollided photons. The accuracy of this shield treatment is largely unknown due to the paucity of benchmark results for shielded sources. In this paper, the validity of such a shield treatment is assessed by comparison to a composite method, which accurately calculates the energy and angular distribution of photons penetrating the shield.
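    The shield treatment being assessed reduces to a single point-kernel factor B(μt)·exp(−μt) applied to the source. The sketch below uses an illustrative linear buildup approximation and hypothetical attenuation data, not values from the paper.

```python
import math

# Point-kernel shield treatment: the shield scales the source by
# B(mu * t) * exp(-mu * t), with scattered photons assumed to keep the
# uncollided energy and direction. All numbers are illustrative.
mu = 0.06    # linear attenuation coefficient at the source energy, 1/cm (hypothetical)
t = 10.0     # shield thickness, cm (hypothetical)

def buildup_linear(mu_t, a=1.0):
    # Crude linear buildup approximation B = 1 + a * mu_t, for illustration only;
    # real calculations use tabulated or Taylor/Berger buildup factors.
    return 1.0 + a * mu_t

transmission = buildup_linear(mu * t) * math.exp(-mu * t)
print(round(transmission, 4))
```

    The composite method replaces this single multiplicative factor with the full energy-angle distribution of photons leaving the shield, which is what exposes the approximation's error.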

  1. Boundary element analysis of corrosion problems for pumps and pipes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miyasaka, M.; Amaya, K.; Kishimoto, K.

    1995-12-31

    Three-dimensional (3D) and axi-symmetric boundary element methods (BEM) were developed to quantitatively estimate cathodic protection and macro-cell corrosion. For 3D analysis, a multiple-region method (MRM) was developed in addition to a single-region method (SRM). The validity and usefulness of the BEMs were demonstrated by comparing numerical results with experimental data from galvanic corrosion systems of a cylindrical model and a seawater pipe, and from a cathodic protection system of an actual seawater pump. It was shown that a highly accurate analysis could be performed for fluid machines handling seawater with complex 3D fields (e.g. seawater pump) by taking account of flow rate and time dependencies of the polarization curve. Compared to the 3D BEM, the axi-symmetric BEM permitted large reductions in the numbers of elements and nodes, which greatly simplified analysis of axi-symmetric fields such as pipes. Computational accuracy and CPU time were compared between analyses using two approximation methods for polarization curves: a logarithmic-approximation method and a linear-approximation method.

  2. Impact of parasitic thermal effects on thermoelectric property measurements by Harman method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kwon, Beomjin, E-mail: bkwon@kist.re.kr; Baek, Seung-Hyub; Keun Kim, Seong

    2014-04-15

    The Harman method is a rapid and simple technique to measure thermoelectric properties. However, its validity has often been questioned due to the over-simplified assumptions the method relies on. Here, we quantitatively investigate the influence of the previously ignored parasitic thermal effects on the Harman method and develop a method to determine an intrinsic ZT. We expand the original Harman relation with three extra terms: heat losses via both the lead wires and radiation, and Joule heating within the sample. Based on the expanded Harman relation, we use differential measurement of the sample geometry to measure the intrinsic ZT. To separately evaluate the parasitic terms, the measured ZTs with systematically varied sample geometries and lead wire types are fitted to the expanded relation. A huge discrepancy (∼28%) of the measured ZTs depending on the measurement configuration is observed. We are able to separately evaluate those parasitic terms. This work will help to evaluate the intrinsic thermoelectric property with the Harman method by eliminating ambiguities coming from extrinsic effects.
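    The ideal (unexpanded) Harman relation that the paper's extra terms correct can be illustrated with hypothetical voltages:

```python
# Ideal Harman relation (the relation the paper expands with parasitic terms):
# under DC the measured voltage is the ohmic drop plus the Peltier-induced
# Seebeck voltage, and ZT = V_total / V_R - 1. Voltages are hypothetical.
V_total = 1.30e-3   # total DC voltage across the sample, V
V_R = 1.00e-3       # purely resistive (AC) voltage, V

ZT_apparent = V_total / V_R - 1.0
print(round(ZT_apparent, 3))
```

    The paper's point is that lead-wire and radiative heat losses and sample Joule heating bias this apparent ZT (by ∼28% between configurations in their measurements), so the expanded relation is instead fitted over systematically varied sample geometries.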

  3. 48 CFR 13.302 - Purchase orders.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Purchase orders. 13.302 Section 13.302 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES Simplified Acquisition Methods 13.302 Purchase...

  4. A quantitative visual dashboard to explore exposures to ...

    EPA Pesticide Factsheets

    The Exposure Prioritization (Ex Priori) model features a simplified, quantitative visual dashboard to explore exposures across chemical space. Diverse data streams are integrated within the interface such that different exposure scenarios for “individual,” “population,” or “professional” time-use profiles can be interchanged to tailor exposure and quantitatively explore multi-chemical signatures of exposure, internalized dose (uptake), body burden, and elimination. Ex Priori will quantitatively extrapolate single-point estimates of both exposure and internal dose for multiple exposure scenarios, factors, products, and pathways. Currently, EPA is investigating its usefulness in life cycle analysis, in particular its ability to enhance exposure factors used in calculating characterization factors for human health. Presented at the 2016 Annual ISES Meeting held in Utrecht, The Netherlands, 9-13 October 2016.

  5. Qualification of computerized monitoring systems in a cell therapy facility compliant with the good manufacturing practices.

    PubMed

    Del Mazo-Barbara, Anna; Mirabel, Clémentine; Nieto, Valentín; Reyes, Blanca; García-López, Joan; Oliver-Vila, Irene; Vives, Joaquim

    2016-09-01

    Computerized systems (CS) are essential in the development and manufacture of cell-based medicines and must comply with good manufacturing practice, thus pushing academic developers to implement methods that are typically found within pharmaceutical industry environments. Qualitative and quantitative risk analyses were performed by Ishikawa and Failure Mode and Effects Analysis, respectively. A process for qualification of a CS that keeps track of environmental conditions was designed and executed. The simplicity of the Ishikawa analysis made it possible to identify critical parameters that were subsequently quantified by Failure Mode and Effects Analysis, resulting in a list of tests included in the qualification protocols. The approach presented here contributes to simplifying and streamlining the qualification of CS in compliance with pharmaceutical quality standards.
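    A minimal sketch of the Failure Mode and Effects Analysis step, with hypothetical failure modes, scores, and acceptance threshold: the Risk Priority Number RPN = severity × occurrence × detectability ranks which items enter the qualification protocol.

```python
# Each failure mode found on the Ishikawa diagram is scored for Severity,
# Occurrence and Detectability; the Risk Priority Number RPN = S * O * D ranks
# which tests enter the qualification protocol. Modes, scores, and the
# threshold are hypothetical.
failure_modes = [
    ("temperature probe drift",  8, 3, 4),
    ("network outage, data gap", 7, 4, 2),
    ("alarm not forwarded",      9, 2, 5),
]
RPN_THRESHOLD = 60   # assumed acceptance criterion

to_test = [(name, s * o * d)
           for name, s, o, d in failure_modes
           if s * o * d >= RPN_THRESHOLD]
for name, rpn in to_test:
    print(name, rpn)
```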

  6. Active correction of thermal lensing through external radiative thermal actuation.

    PubMed

    Lawrence, Ryan; Ottaway, David; Zucker, Michael; Fritschel, Peter

    2004-11-15

    Absorption of laser beam power in optical elements induces thermal gradients that may cause unwanted phase aberrations. In precision measurement applications, such as laser interferometric gravitational-wave detection, corrective measures that require mechanical contact with or attachments to the optics are precluded by noise considerations. We describe a radiative thermal corrector that can counteract thermal lensing and (or) thermoelastic deformation induced by coating and substrate absorption of collimated Gaussian beams. This radiative system can correct anticipated distortions to a high accuracy, at the cost of an increase in the average temperature of the optic. A quantitative analysis and parameter optimization is supported by results from a simplified proof-of-principle experiment, demonstrating the method's feasibility for our intended application.

  7. Charge separation at nanoscale interfaces: energy-level alignment including two-quasiparticle interactions.

    PubMed

    Li, Huashan; Lin, Zhibin; Lusk, Mark T; Wu, Zhigang

    2014-10-21

    The universal and fundamental criteria for charge separation at interfaces involving nanoscale materials are investigated. In addition to the single-quasiparticle excitation, all the two-quasiparticle effects including exciton binding, Coulomb stabilization, and exciton transfer are considered, which play critical roles on nanoscale interfaces for optoelectronic applications. We propose a scheme allowing adding these two-quasiparticle interactions on top of the single-quasiparticle energy level alignment for determining and illuminating charge separation at nanoscale interfaces. Employing the many-body perturbation theory based on Green's functions, we quantitatively demonstrate that neglecting or simplifying these crucial two-quasiparticle interactions using less accurate methods is likely to predict qualitatively incorrect charge separation behaviors at nanoscale interfaces where quantum confinement dominates.

  8. Calculation of turbulence-driven secondary motion in ducts with arbitrary cross section

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.

    1989-01-01

    Calculation methods for turbulent duct flows are generalized for ducts with arbitrary cross-sections. The irregular physical geometry is transformed into a regular one in computational space, and the flow equations are solved with a finite-volume numerical procedure. The turbulent stresses are calculated with an algebraic stress model derived by simplifying model transport equations for the individual Reynolds stresses. Two variants of such a model are considered. These procedures enable the prediction of both the turbulence-driven secondary flow and the anisotropy of the Reynolds stresses, in contrast to some of the earlier calculation methods. Model predictions are compared to experimental data for developed flow in triangular duct, trapezoidal duct and a rod-bundle geometry. The correct trends are predicted, and the quantitative agreement is mostly fair. The simpler variant of the algebraic stress model procured better agreement with the measured data.

  9. Hybrid multiphase CFD simulation for liquid-liquid interfacial area prediction in annular centrifugal contactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wardle, K.E.

    2013-07-01

    Liquid-liquid contacting equipment used in solvent extraction processes has the dual purpose of mixing and separating two immiscible fluids. Consequently, such devices inherently encompass a wide variety of multiphase flow regimes. A hybrid multiphase computational fluid dynamics (CFD) solver which combines the Eulerian multi-fluid method with VOF (volume of fluid) sharp interface capturing has been developed for application to annular centrifugal contactors. This solver has been extended to enable prediction of mean droplet size and liquid-liquid interfacial area through a single-moment population balance method. Simulations of liquid-liquid mixing in a simplified geometry and a model annular centrifugal contactor are reported, with droplet breakup/coalescence models being calibrated against available experimental data. Quantitative comparison is made for two different housing vane geometries, and it is found that the predicted droplet size is significantly smaller for vane geometries which result in higher annular liquid holdup.

  10. Shack-Hartmann reflective micro profilometer

    NASA Astrophysics Data System (ADS)

    Gong, Hai; Soloviev, Oleg; Verhaegen, Michel; Vdovin, Gleb

    2018-01-01

    We present a quantitative phase imaging microscope based on a Shack-Hartmann sensor that directly reconstructs the optical path difference (OPD) in reflective mode. Compared with holographic or interferometric methods, the SH technique needs no reference beam in the setup, which simplifies the system. With a preregistered reference, the OPD image can be reconstructed from a single shot. The method also has a rather relaxed requirement on illumination coherence, so a cheap light source such as an LED is feasible in the setup. In our previous research, we successfully verified that a conventional transmissive microscope can be transformed into an optical path difference microscope by using a Shack-Hartmann wavefront sensor under incoherent illumination. The key condition is that the numerical aperture of the illumination should be smaller than the numerical aperture of the imaging lens. This approach is also applicable to the characterization of reflective and slightly scattering surfaces.

  11. A comparative physical evaluation of four X-ray films.

    PubMed

    Egyed, M; Shearer, D R

    1981-09-01

    In this study, four general purpose radiographic films (Agfa Gevaert Curix RP-1, duPont Cronex 4, Fuji RX, and Kodak XRP-1) were compared using three independent techniques. By examining the characteristic curves for the four films, film speed and contrast were compared over the diagnostically useful density range. These curves were generated using three methods: (1) irradiation of a standard film cassette lined with high-speed screens, covered by a twelve-step aluminum wedge; (2) direct exposure of film strips to an electro-luminescent sensitometer; and (3) direct irradiation of a standard film cassette lined with high-speed screens. The latter technique provided quantitative values for film speed and relative contrast. All three techniques provided virtually identical results and indicate that, under properly controlled conditions, simplified methods of film testing can give results equivalent to those obtained by more sophisticated techniques.
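    Film contrast from a characteristic (H&D) curve reduces to the average gradient of optical density versus log exposure; a sketch with hypothetical step-wedge data:

```python
# Film contrast (gamma) as the average gradient of the characteristic curve:
# optical density versus log10 relative exposure over the useful density range.
# The step-wedge readings are hypothetical.
log_exposure = [0.0, 0.3, 0.6, 0.9]   # log10 relative exposure steps
density = [0.4, 1.0, 1.7, 2.3]        # measured optical densities

gamma = (density[-1] - density[0]) / (log_exposure[-1] - log_exposure[0])
print(round(gamma, 2))
```

    Relative film speed is read off the same curve as the exposure needed to reach a reference net density, so a single step-wedge exposure yields both figures.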

  12. Consolidation of molecular testing in clinical virology.

    PubMed

    Scagnolari, Carolina; Turriziani, Ombretta; Monteleone, Katia; Pierangeli, Alessandra; Antonelli, Guido

    2017-04-01

    The development of quantitative methods for the detection of viral nucleic acids has significantly improved our ability to manage disease progression and to assess the efficacy of antiviral treatment. Moreover, major advances in molecular technologies during the last decade have allowed the identification of new host genetic markers associated with antiviral drug response, and will strongly revolutionize the way we see and perform virus diagnostics in the coming years. Areas covered: In this review, we describe the history and development of virology diagnostic methods, with particular emphasis on the gradual evolution and recent advances toward the introduction of multiparametric platforms for syndromic diagnosis. In parallel, we outline the consolidation of viral genome quantification practice in different clinical settings. Expert commentary: More rapid, accurate and affordable molecular technology can be expected, with particular emphasis on emerging techniques (next-generation sequencing, digital PCR, point-of-care testing and syndromic diagnosis) to simplify viral diagnosis in the near future.

  13. Getting Innovative Therapies Faster to Patients at the Right Dose: Impact of Quantitative Pharmacology Towards First Registration and Expanding Therapeutic Use

    PubMed Central

    Nayak, Satyaprakash; Sander, Oliver; Al‐Huniti, Nidal; de Alwis, Dinesh; Chain, Anne; Chenel, Marylore; Sunkaraneni, Soujanya; Agrawal, Shruti; Gupta, Neeraj

    2018-01-01

    Quantitative pharmacology (QP) applications in translational medicine, drug‐development, and therapeutic use were crowd‐sourced by the ASCPT Impact and Influence initiative. Highlighted QP case studies demonstrated faster access to innovative therapies for patients through 1) rational dose selection for pivotal trials; 2) reduced trial‐burden for vulnerable populations; or 3) simplified posology. Critical success factors were proactive stakeholder engagement, alignment on the value of model‐informed approaches, and utilizing foundational clinical pharmacology understanding of the therapy. PMID:29330855

  14. FDTD-based quantitative analysis of terahertz wave detection for multilayered structures.

    PubMed

    Tu, Wanli; Zhong, Shuncong; Shen, Yaochun; Zhou, Qing; Yao, Ligang

    2014-10-01

    Experimental investigations have shown that terahertz pulsed imaging (TPI) is able to quantitatively characterize a range of multilayered media (e.g., biological tissues, pharmaceutical tablet coatings, layered polymer composites, etc.). Advanced modeling of the interaction of terahertz radiation with a multilayered medium is required to enable the wide application of terahertz technology in a number of emerging fields, including nondestructive testing. Indeed, there have already been many theoretical analyses performed on the propagation of terahertz radiation in various multilayered media. However, to date, most of these studies used 1D or 2D models, and the dispersive nature of the dielectric layers was either not considered or was simplified. In the present work, the theoretical framework of using terahertz waves for the quantitative characterization of multilayered media was established. A 3D model based on the finite difference time domain (FDTD) method is proposed. A batch of pharmaceutical tablets with a single coating layer of different coating thicknesses and different refractive indices was modeled. The reflected terahertz wave from such a sample was computed using the FDTD method, assuming that the incident terahertz wave is broadband, covering a frequency range up to 3.5 THz. The simulated results for all of the pharmaceutical-coated tablets considered were found to be in good agreement with the experimental results obtained using a commercial TPI system. In addition, we studied a three-layered medium to mimic the occurrence of defects in the sample.
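    A deliberately minimal 1D FDTD sketch (the paper's model is 3D and handles dispersive layers) showing the leapfrog field updates with a non-dispersive dielectric slab standing in for a coating layer; all grid and material parameters are illustrative.

```python
import numpy as np

# Minimal 1D FDTD: leapfrog update of Ez and Hy on a staggered grid, in
# normalized units with Courant number 0.5. A dielectric slab (n = 1.5)
# stands in for one tablet coating layer. Parameters are illustrative.
nz, nt = 200, 400
eps = np.ones(nz)
eps[100:140] = 2.25            # coating layer, refractive index sqrt(2.25) = 1.5
ez = np.zeros(nz)
hy = np.zeros(nz)

for t in range(nt):
    hy[:-1] += 0.5 * (ez[1:] - ez[:-1])               # H update
    ez[1:] += 0.5 / eps[1:] * (hy[1:] - hy[:-1])      # E update
    ez[50] += np.exp(-((t - 30) / 10.0) ** 2)         # soft broadband Gaussian source

assert np.isfinite(ez).all()   # stable run under the CFL condition
```

    The reflected time trace recorded at a detector cell to the left of the source would play the role of the simulated TPI waveform, from which layer thickness follows via the echo delay.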

  15. Flipping interferometry and its application for quantitative phase microscopy in a micro-channel.

    PubMed

    Roitshtain, Darina; Turko, Nir A; Javidi, Bahram; Shaked, Natan T

    2016-05-15

    We present a portable, off-axis interferometric module for quantitative phase microscopy of live cells, positioned at the exit port of a coherently illuminated inverted microscope. The module creates on the digital camera an interference pattern between the image of the sample and its flipped version. The proposed simplified module is based on a retro-reflector modification in an external Michelson interferometer. The module does not contain any lenses, pinholes, or gratings and its alignment is straightforward. Still, it allows full control of the off-axis angle and does not suffer from ghost images. As experimentally demonstrated, the module is useful for quantitative phase microscopy of live cells rapidly flowing in a micro-channel.

  16. Methods for a longitudinal quantitative outcome with a multivariate Gaussian distribution multi-dimensionally censored by therapeutic intervention.

    PubMed

    Sun, Wanjie; Larsen, Michael D; Lachin, John M

    2014-04-15

    In longitudinal studies, a quantitative outcome (such as blood pressure) may be altered during follow-up by the administration of a non-randomized, non-trial intervention (such as anti-hypertensive medication) that may seriously bias the study results. Current methods mainly address this issue for cross-sectional studies. For longitudinal data, the current methods are either restricted to a specific longitudinal data structure or are valid only under special circumstances. We propose two new methods for estimation of covariate effects on the underlying (untreated) general longitudinal outcomes: a single imputation method employing a modified expectation-maximization (EM)-type algorithm and a multiple imputation (MI) method utilizing a modified Monte Carlo EM-MI algorithm. Each method can be implemented as one-step, two-step, and full-iteration algorithms. They combine the advantages of the current statistical methods while reducing their restrictive assumptions and generalizing them to realistic scenarios. The proposed methods replace intractable numerical integration of a multi-dimensionally censored MVN posterior distribution with a simplified, sufficiently accurate approximation. It is particularly attractive when outcomes reach a plateau after intervention due to various reasons. Methods are studied via simulation and applied to data from the Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications study of treatment for type 1 diabetes. Methods proved to be robust to high dimensions, large amounts of censored data, low within-subject correlation, and when subjects receive non-trial intervention to treat the underlying condition only (with high Y), or for treatment in the majority of subjects (with high Y) in combination with prevention for a small fraction of subjects (with normal Y). Copyright © 2013 John Wiley & Sons, Ltd.
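    One ingredient of such EM-type single imputation can be sketched for the scalar case: if the untreated outcome is only known to exceed the observed treated value c under a working normal model, its conditional mean is that of a truncated normal. The model parameters below are hypothetical.

```python
from statistics import NormalDist

# Conditional mean of a normal outcome censored below at c:
#   E[Y | Y > c] = mu + sigma * phi(a) / (1 - Phi(a)),  a = (c - mu) / sigma,
# where phi and Phi are the standard normal pdf and cdf. In the EM-style
# single-imputation scheme, mu and sigma would come from the current model
# fit; the values here are illustrative.
nd = NormalDist()

def truncated_normal_mean(mu, sigma, c):
    a = (c - mu) / sigma
    lam = nd.pdf(a) / (1.0 - nd.cdf(a))   # inverse Mills ratio
    return mu + sigma * lam

# e.g. a working model N(120, 10) for untreated systolic BP and a medicated
# reading of 130 (hypothetical numbers):
imputed = truncated_normal_mean(120.0, 10.0, 130.0)
print(round(imputed, 2))
```

    Multiple imputation repeats this with draws from the truncated distribution rather than its mean, propagating the censoring uncertainty into the final estimates.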

  17. 48 CFR 1313.302 - Purchase orders.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Purchase orders. 1313.302 Section 1313.302 Federal Acquisition Regulations System DEPARTMENT OF COMMERCE CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES Simplified Acquisitions Methods 1313.302 Purchase orders. ...

  18. 48 CFR 813.302 - Purchase orders.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Purchase orders. 813.302 Section 813.302 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES Simplified Acquisition Methods 813.302 Purchase...

  19. 48 CFR 1413.305 - Imprest fund.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Imprest fund. 1413.305 Section 1413.305 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES Simplified Acquisition Methods 1413.305 Imprest fund. ...

  20. 48 CFR 1413.305 - Imprest fund.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Imprest fund. 1413.305 Section 1413.305 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES Simplified Acquisition Methods 1413.305 Imprest fund. ...

  1. Simple design of slanted grating with simplified modal method.

    PubMed

    Li, Shubin; Zhou, Changhe; Cao, Hongchao; Wu, Jun

    2014-02-15

    A simplified modal method (SMM) is presented that offers a clear physical picture of subwavelength slanted gratings. The diffraction characteristics of the slanted grating under the Littrow configuration are revealed by the SMM as those of an equivalent rectangular grating, in good agreement with rigorous coupled-wave analysis. Based on this equivalence, we obtained an effective analytic solution that simplifies the design and optimization of slanted gratings. It offers a new approach to the design of slanted gratings; e.g., a 1×2 beam splitter can be easily designed. This method should be helpful for designing various new slanted grating devices.

  2. Intentionality, degree of damage, and moral judgments.

    PubMed

    Berg-Cross, L G

    1975-12-01

    153 first graders were given Piagetian moral judgment problems with a new simplified methodology as well as the usual story-pair paradigm. The new methodology involved making quantitative judgments about single stories and examined the influence of level of intentionality and degree of damage upon absolute punishment ratings. Contrary to results obtained with a story-pair methodology, it was found that with single stories even 6-year-old children responded to the level of intention in the stories as well as the quantity and quality of damage involved. This suggested that Piaget's methodology may be forcing children to employ a simplifying strategy while under other conditions they are able to perform the mental operations necessary to make complex moral judgments.

  3. A simplified immunoprecipitation method for quantitatively measuring antibody responses in clinical sera samples by using mammalian-produced Renilla luciferase-antigen fusion proteins.

    PubMed

    Burbelo, Peter D; Goldman, Radoslav; Mattson, Thomas L

    2005-08-18

    Assays detecting human antigen-specific antibodies are medically useful. However, the usefulness of existing simple immunoassay formats is limited by technical considerations such as sera antibodies to contaminants in insufficiently pure antigen, a problem likely exacerbated when antigen panels are screened to obtain clinically useful data. We developed a novel and simple immunoprecipitation technology for identifying clinical sera containing antigen-specific antibodies and for generating quantitative antibody response profiles. This method is based on fusing protein antigens to an enzyme reporter, Renilla luciferase (Ruc), and expressing these fusions in mammalian cells, where mammalian-specific post-translational modifications can be added. After mixing crude extracts, sera and protein A/G beads together and incubating, during which the Ruc-antigen fusion becomes immobilized on the A/G beads, antigen-specific antibody is quantitated by washing the beads, adding coelenterazine substrate, and measuring light production. We have characterized this technology with sera from patients having three different types of cancers. We show that 20-85% of these sera contain significant titers of antibodies against at least one of five frequently mutated and/or overexpressed tumor-associated proteins. Five of six colon cancer sera tested gave responses that were statistically significantly greater than the average plus three standard deviations of 10 control sera. The results of competition experiments, preincubating positive sera with unmodified E. coli-produced antigens, varied dramatically. This technology has several advantages over current quantitative immunoassays including its relative simplicity, its avoidance of problems associated with E. coli-produced antigens and its use of antigens that can carry mammalian or disease-specific post-translational modifications.
This assay should be generally useful for analyzing sera for antibodies recognizing any protein or its post-translational modifications.

  4. A simplified immunoprecipitation method for quantitatively measuring antibody responses in clinical sera samples by using mammalian-produced Renilla luciferase-antigen fusion proteins

    PubMed Central

    Burbelo, Peter D; Goldman, Radoslav; Mattson, Thomas L

    2005-01-01

    Background Assays detecting human antigen-specific antibodies are medically useful. However, the usefulness of existing simple immunoassay formats is limited by technical considerations such as sera antibodies to contaminants in insufficiently pure antigen, a problem likely exacerbated when antigen panels are screened to obtain clinically useful data. Results We developed a novel and simple immunoprecipitation technology for identifying clinical sera containing antigen-specific antibodies and for generating quantitative antibody response profiles. This method is based on fusing protein antigens to an enzyme reporter, Renilla luciferase (Ruc), and expressing these fusions in mammalian cells, where mammalian-specific post-translational modifications can be added. After mixing crude extracts, sera and protein A/G beads together and incubating, during which the Ruc-antigen fusion becomes immobilized on the A/G beads, antigen-specific antibody is quantitated by washing the beads, adding coelenterazine substrate, and measuring light production. We have characterized this technology with sera from patients having three different types of cancers. We show that 20–85% of these sera contain significant titers of antibodies against at least one of five frequently mutated and/or overexpressed tumor-associated proteins. Five of six colon cancer sera tested gave responses that were statistically significantly greater than the average plus three standard deviations of 10 control sera. The results of competition experiments, preincubating positive sera with unmodified E. coli-produced antigens, varied dramatically. Conclusion This technology has several advantages over current quantitative immunoassays including its relative simplicity, its avoidance of problems associated with E. coli-produced antigens and its use of antigens that can carry mammalian or disease-specific post-translational modifications.
This assay should be generally useful for analyzing sera for antibodies recognizing any protein or its post-translational modifications. PMID:16109166
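    The positivity criterion described above (a response exceeding the control mean plus three standard deviations) can be sketched in a few lines; the luminometer readings below are hypothetical placeholders, not data from the paper:

```python
import statistics

def positivity_cutoff(control_lus):
    """Control mean + 3 sample standard deviations, as in the record above."""
    return statistics.mean(control_lus) + 3 * statistics.stdev(control_lus)

# Hypothetical light-unit (LU) readings from 10 control sera
controls = [1200, 980, 1100, 1050, 1300, 900, 1150, 1020, 1250, 1080]
cutoff = positivity_cutoff(controls)

# Hypothetical patient readings: a serum is called positive if above the cutoff
patients = {"P1": 5400, "P2": 1100, "P3": 2600}
calls = {pid: lu > cutoff for pid, lu in patients.items()}
```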

  5. Simplified power control method for cellular mobile communication

    NASA Astrophysics Data System (ADS)

    Leung, Y. W.

    1994-04-01

    The centralized power control (CPC) method measures the gain of the communication links between every mobile and every base station in the cochannel cells and determines the optimal transmitter powers that maximize the minimum carrier-to-interference ratio. The authors propose a simplified power control method which has nearly the same performance as the CPC method but involves much lower measurement overhead.
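    The max-min carrier-to-interference (C/I) problem that centralized power control solves has a classical eigenvalue formulation (a textbook result, not taken from this record): in the noiseless case the best achievable minimum C/I is the reciprocal of the spectral radius of the normalized link-gain matrix, attained when the power vector is the corresponding Perron eigenvector. A sketch with an invented gain matrix:

```python
import numpy as np

# Hypothetical link gains: G[i][j] = path gain from transmitter j to receiver i
G = np.array([[1.00, 0.10, 0.05],
              [0.08, 1.00, 0.12],
              [0.06, 0.09, 1.00]])

# Normalized interference matrix: zero diagonal, off-diagonals G[i][j] / G[i][i]
B = G / np.diag(G)[:, None]
np.fill_diagonal(B, 0.0)

# Perron-Frobenius eigenpair: max-min C/I = 1 / spectral_radius(B),
# achieved with the positive eigenvector as the transmit-power vector
eigvals, eigvecs = np.linalg.eig(B)
k = np.argmax(eigvals.real)
p = np.abs(eigvecs[:, k].real)       # optimal powers (up to a common scale)

signal = np.diag(G) * p
interference = G @ p - signal
cir = signal / interference          # balanced: identical C/I on every link
```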

  6. "Don't Be a Whore, that's Not Ladylike": Discursive Discipline and Sorority Women's Gendered Subjectivity

    ERIC Educational Resources Information Center

    Berbary, Lisbeth A.

    2012-01-01

    While multiple and competing understandings of sororities exist in popular culture, academic research on sororities tends to homogenize the experience of sorority women, simplifying their existence to a quantitative understanding of specific behaviors such as those associated with binge drinking, eating disorders, and heterosexuality.…

  7. Deposition and Characterization of Improved Hydrogen Getter Materials - Report on FY 14-15 Activities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hubbard, Kevin Mark; Sandoval, Cynthia Wathen

    2015-10-15

    The goals of this work have been two-fold: first, to perform an initial quantitative optimization of getter performance, with the primary variables being the DEB/Pd ratio and UV power; second, to simplify the deposition process to make it more compatible with the DOE production environment.

  8. Three-Dimensional Surface Parameters and Multi-Fractal Spectrum of Corroded Steel

    PubMed Central

    Shanhua, Xu; Songbo, Ren; Youde, Wang

    2015-01-01

    To study the multi-fractal behavior of corroded steel surfaces, a range of fractal surfaces of corroded Q235 steel was constructed using the Weierstrass-Mandelbrot (W-M) method at high overall accuracy. The multi-fractal spectrum of the fractal surface of corroded steel was calculated to study the multi-fractal characteristics of the W-M corroded surface. Based on the shape of the multi-fractal spectrum of the corroded steel surface, the least-squares method was applied to fit the spectrum with a quadratic curve. The fitting function was analyzed quantitatively to simplify the calculation of the multi-fractal characteristics of the corroded surface. The results showed that the multi-fractal spectrum of the corroded surface was fitted well by the quadratic curve, and the evolution rules and trends were forecast accurately. The findings can be applied to research on the mechanisms of corroded-surface formation in steel and provide a new approach for establishing corrosion damage constitutive models of steel. PMID:26121468
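    A one-dimensional Weierstrass-Mandelbrot profile (the building block of the W-M surfaces discussed above) can be generated directly from its truncated series; the fractal dimension D, scaling parameter gamma, and term count below are illustrative choices, not the paper's values:

```python
import math

def wm_profile(xs, D=1.5, gamma=1.5, n_terms=40, phases=None):
    """Truncated Weierstrass-Mandelbrot series
    z(x) = sum_n gamma**((D-2)*n) * cos(2*pi*gamma**n*x + phi_n),
    with fractal dimension 1 < D < 2 for a profile."""
    if phases is None:
        phases = [0.0] * n_terms          # deterministic variant; random
    return [                              # phases give a stochastic surface
        sum(gamma ** ((D - 2) * n)
            * math.cos(2 * math.pi * gamma ** n * x + phases[n])
            for n in range(n_terms))
        for x in xs
    ]

xs = [i / 512 for i in range(512)]
z = wm_profile(xs)
```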

  9. Combined multi-spectrum and orthogonal Laplacianfaces for fast CB-XLCT imaging with single-view data

    NASA Astrophysics Data System (ADS)

    Zhang, Haibo; Geng, Guohua; Chen, Yanrong; Qu, Xuan; Zhao, Fengjun; Hou, Yuqing; Yi, Huangjian; He, Xiaowei

    2017-12-01

    Cone-beam X-ray luminescence computed tomography (CB-XLCT) is an attractive hybrid imaging modality, which has the potential of monitoring the metabolic processes of nanophosphors-based drugs in vivo. Single-view data reconstruction as a key issue of CB-XLCT imaging promotes the effective study of dynamic XLCT imaging. However, it suffers from serious ill-posedness in the inverse problem. In this paper, a multi-spectrum strategy is adopted to relieve the ill-posedness of reconstruction. The strategy is based on the third-order simplified spherical harmonic approximation model. Then, an orthogonal Laplacianfaces-based method is proposed to reduce the large computational burden without degrading the imaging quality. Both simulated data and in vivo experimental data were used to evaluate the efficiency and robustness of the proposed method. The results are satisfactory in terms of both location and quantitative recovering with computational efficiency, indicating that the proposed method is practical and promising for single-view CB-XLCT imaging.

  10. Petri net-based method for the analysis of the dynamics of signal propagation in signaling pathways.

    PubMed

    Hardy, Simon; Robillard, Pierre N

    2008-01-15

    Cellular signaling networks are dynamic systems that propagate and process information and, ultimately, cause phenotypical responses. Understanding the circuitry of the information flow in cells is one of the keys to understanding complex cellular processes. The development of computational quantitative models is a promising avenue for attaining this goal. Not only does the analysis of simulation data based on the concentration variations of biological compounds yield information about systemic state changes, it is also very helpful for obtaining information about the dynamics of signal propagation. This article introduces a new method for analyzing the dynamics of signal propagation in signaling pathways using Petri net theory. The method is demonstrated with the Ca(2+)/calmodulin-dependent protein kinase II (CaMKII) regulation network. The results constitute temporal information about signal propagation in the network, a simplified graphical representation of the network and of the signal-propagation dynamics, and a characterization of some signaling routes as regulation motifs.
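    The token-game semantics underlying any Petri-net analysis can be illustrated with a minimal simulator; the two-transition "kinase" net below is a generic toy example, not the CaMKII network from the paper:

```python
def enabled(marking, pre):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire a transition: consume tokens from inputs, produce on outputs."""
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Toy net: signal S activates kinase K, which phosphorylates substrate P
transitions = {
    "activate":      ({"S": 1, "K_inactive": 1}, {"K_active": 1}),
    "phosphorylate": ({"K_active": 1, "P": 1},   {"K_active": 1, "P_phos": 1}),
}
m = {"S": 1, "K_inactive": 1, "P": 2}
for name in ["activate", "phosphorylate", "phosphorylate"]:
    pre, post = transitions[name]
    if enabled(m, pre):
        m = fire(m, pre, post)
```

    Recording the step at which each place first receives a token is one simple way to extract the kind of temporal signal-propagation information the article describes.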

  11. Three-Dimensional Surface Parameters and Multi-Fractal Spectrum of Corroded Steel.

    PubMed

    Shanhua, Xu; Songbo, Ren; Youde, Wang

    2015-01-01

    To study the multi-fractal behavior of corroded steel surfaces, a range of fractal surfaces of corroded Q235 steel was constructed using the Weierstrass-Mandelbrot (W-M) method at high overall accuracy. The multi-fractal spectrum of the fractal surface of corroded steel was calculated to study the multi-fractal characteristics of the W-M corroded surface. Based on the shape of the multi-fractal spectrum of the corroded steel surface, the least-squares method was applied to fit the spectrum with a quadratic curve. The fitting function was analyzed quantitatively to simplify the calculation of the multi-fractal characteristics of the corroded surface. The results showed that the multi-fractal spectrum of the corroded surface was fitted well by the quadratic curve, and the evolution rules and trends were forecast accurately. The findings can be applied to research on the mechanisms of corroded-surface formation in steel and provide a new approach for establishing corrosion damage constitutive models of steel.

  12. Simplified procedure for computing the absorption of sound by the atmosphere

    DOT National Transportation Integrated Search

    2007-10-31

    This paper describes a study that resulted in the development of a simplified method for calculating attenuation by atmospheric absorption for wide-band sounds analyzed by one-third octave-band filters. The new method [referred to herein as the...

  13. Hybrid model based unified scheme for endoscopic Cerenkov and radio-luminescence tomography: Simulation demonstration

    NASA Astrophysics Data System (ADS)

    Wang, Lin; Cao, Xin; Ren, Qingyun; Chen, Xueli; He, Xiaowei

    2018-05-01

    Cerenkov luminescence imaging (CLI) is an imaging method that uses an optical imaging scheme to probe a radioactive tracer. Application of CLI with clinically approved radioactive tracers has opened an opportunity for translating optical imaging from preclinical to clinical applications. Such translation was further improved by developing an endoscopic CLI system. However, two-dimensional endoscopic imaging cannot identify accurate depth or obtain quantitative information. Here, we present an imaging scheme to retrieve depth and quantitative information from endoscopic Cerenkov luminescence tomography, which can also be applied to endoscopic radio-luminescence tomography. In the scheme, we first constructed a physical model for image collection, and then a mathematical model for characterizing luminescent light propagation from the tracer to the endoscopic detector. The mathematical model is a hybrid light-transport model combining the third-order simplified spherical harmonics approximation, diffusion, and radiosity equations to guarantee both accuracy and speed. The mathematical model integrates finite element discretization, regularization, and primal-dual interior-point optimization to retrieve the depth and quantitative information of the tracer. A heterogeneous-geometry-based numerical simulation was used to explore the feasibility of the unified scheme, demonstrating that it can provide a satisfactory balance between imaging accuracy and computational burden.

  14. A Simplified Diagnostic Method for Elastomer Bond Durability

    NASA Technical Reports Server (NTRS)

    White, Paul

    2009-01-01

    A simplified method has been developed for determining bond durability under exposure to water or high-humidity conditions. It uses a small number of test specimens with relatively short times of water exposure at elevated temperature. The method is also gravimetric; the only equipment required is an oven, specimen jars, and a conventional laboratory balance.

  15. A Manual of Simplified Laboratory Methods for Operators of Wastewater Treatment Facilities.

    ERIC Educational Resources Information Center

    Westerhold, Arnold F., Ed.; Bennett, Ernest C., Ed.

    This manual is designed to provide the small wastewater treatment plant operator, as well as the new or inexperienced operator, with simplified methods for laboratory analysis of water and wastewater. It is emphasized that this manual is not a replacement for standard methods but a guide for plants with insufficient equipment to perform analyses…

  16. A Fast Method for Embattling Optimization of Ground-Based Radar Surveillance Network

    NASA Astrophysics Data System (ADS)

    Jiang, H.; Cheng, H.; Zhang, Y.; Liu, J.

    A growing number of space activities have created an orbital debris environment that poses increasing collision risks to existing space systems and human space flight. For the safety of in-orbit spacecraft, many observation facilities are needed to catalog space objects, especially in low Earth orbit. Surveillance of low-Earth-orbit objects relies mainly on ground-based radar; because of the limited capability of existing radar facilities, a large number of ground-based radars will need to be built in the next few years to meet current space surveillance demands. How to optimize the layout ("embattling") of a ground-based radar surveillance network is therefore a problem that needs to be solved. The traditional method for layout optimization simulates detection for all possible stations against cataloged data, compares the various simulation results combinatorially, and selects the best result as the station layout scheme. This method is time-consuming for a single simulation and computationally complex for the combinatorial analysis; as the number of stations increases, the complexity of the optimization problem grows exponentially and it cannot be solved with the traditional method. In this paper, the target detection procedure was simplified. First, the space coverage of a ground-based radar was simplified by building a projection model of the radar coverage at different orbit altitudes; then a simplified model of objects crossing the radar coverage was established according to the characteristics of orbital motion. After these two simplifications, the computational complexity of target detection was greatly reduced, and simulation results confirmed the correctness of the simplified models. In addition, the detection areas of the radar network can be computed easily with the simplified model, and the network layout can then be optimized with an artificial-intelligence algorithm, which greatly reduces the computational complexity. Compared with the traditional method, the proposed method greatly improves computational efficiency.
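    Once per-station detection areas have been precomputed with a simplified coverage model, the layout search becomes a coverage-maximization problem. A greedy selection is one simple stand-in for the artificial-intelligence algorithm mentioned above (the station names and coverage sets below are invented):

```python
def greedy_layout(candidates, n_stations):
    """Pick stations one at a time, each maximizing newly covered cells."""
    chosen, covered = [], set()
    for _ in range(n_stations):
        best = max(candidates, key=lambda s: len(candidates[s] - covered))
        if not candidates[best] - covered:
            break  # no remaining station adds coverage
        chosen.append(best)
        covered |= candidates[best]
    return chosen, covered

# Hypothetical coverage: station -> set of orbit-track cells it can observe
candidates = {
    "A": {1, 2, 3, 4},
    "B": {3, 4, 5},
    "C": {5, 6, 7, 8},
    "D": {1, 8},
}
chosen, covered = greedy_layout(candidates, 2)
```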

  17. Analytic Guided-Search Model of Human Performance Accuracy in Target- Localization Search Tasks

    NASA Technical Reports Server (NTRS)

    Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.

    2000-01-01

    Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte-Carlo simulations, a method that makes fitting the model's performance quantitatively to human data more computationally time consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.
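    The signal-detection-theory side of this comparison has a well-known accuracy expression for M-alternative localization: the target is found when its Gaussian response exceeds all M-1 distractor responses, giving Pc as the integral of phi(x - d') * Phi(x)**(M-1). The sketch below is generic equal-variance SDT, not the authors' guided-search equations:

```python
import math

def prob_correct(d_prime, m, n=8001, lo=-10.0, hi=10.0):
    """P(correct) for M-alternative localization with equal-variance
    Gaussian responses: trapezoid rule for integral of
    phi(x - d') * Phi(x)**(m - 1) over the real line."""
    dx = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        x = lo + i * dx
        phi = math.exp(-0.5 * (x - d_prime) ** 2) / math.sqrt(2 * math.pi)
        Phi = 0.5 * (1.0 + math.erf(x / math.sqrt(2)))
        w = 0.5 if i in (0, n - 1) else 1.0   # trapezoid end weights
        total += w * phi * Phi ** (m - 1)
    return total * dx

# d' = 0 gives chance performance 1/M; accuracy rises monotonically with d'
```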

  18. On the analysis of time-of-flight spin-echo modulated dark-field imaging data

    NASA Astrophysics Data System (ADS)

    Sales, Morten; Plomp, Jeroen; Bouwman, Wim G.; Tremsin, Anton S.; Habicht, Klaus; Strobl, Markus

    2017-06-01

    Spin-Echo Modulated Small Angle Neutron Scattering with spatial resolution, i.e. quantitative Spin-Echo Dark-Field Imaging, is an emerging technique coupling neutron imaging with spatially resolved quantitative small-angle scattering information. However, the modulation periods currently achieved, of the order of millimeters, are superimposed on the images of the samples. Until now this has required independent reduction and analysis of the image and scattering information encoded in the measured data, involving extensive curve-fitting routines. Apart from requiring a priori decisions that can limit the extractable information content, this also hinders a straightforward judgment of data quality and information content. In contrast, we propose a significantly simplified routine applied directly to the measured data, which not only allows an immediate first assessment of data quality and defers decisions on potentially information-limiting reduction steps to a later and better-informed stage, but also, as the results suggest, generally yields better analyses. In addition, the method makes it possible to drop the spatial-resolution detector requirement for non-spatially-resolved Spin-Echo Modulated Small Angle Neutron Scattering.

  19. Quantitative brain tissue oximetry, phase spectroscopy and imaging the range of homeostasis in piglet brain.

    PubMed

    Chance, Britton; Ma, Hong Yan; Nioka, Shoko

    2003-01-01

    The quantification of tissue oxygen by frequency or time domain methods has been discussed in a number of prior publications where the meaning of the tissue hemoglobin oxygen saturation was unclear and where the CW instruments were unsuitable for proper quantitative measurements [1, 2]. The development of the IQ Phase Meter has greatly simplified and made reliable the difficult determination of precise phase and amplitude signals from brain. This contribution reports on the calibration of the instrument in model systems and the use of the instrument to measure tissue saturation (StO2) in a small animal model. In addition, a global interpretation of the meaning of tissue oxygen has been formulated based on the idea that autoregulation will maintain tissue oxygen at a fixed value over a range of arterial and venous oxygen values over the range of autoregulation. Beyond that range, the tissue oxygen is still correctly measured but, as expected, approaches the arterial saturation at low metabolic rates and the venous saturation at high metabolic rates of mitochondria.

  20. Fast Realistic MRI Simulations Based on Generalized Multi-Pool Exchange Tissue Model.

    PubMed

    Liu, Fang; Velikina, Julia V; Block, Walter F; Kijowski, Richard; Samsonov, Alexey A

    2017-02-01

    We present MRiLab, a new comprehensive simulator for large-scale realistic MRI simulations on a regular PC equipped with a modern graphical processing unit (GPU). MRiLab combines realistic tissue modeling with numerical virtualization of an MRI system and scanning experiment to enable assessment of a broad range of MRI approaches including advanced quantitative MRI methods inferring microstructure on a sub-voxel level. A flexible representation of tissue microstructure is achieved in MRiLab by employing the generalized tissue model with multiple exchanging water and macromolecular proton pools rather than a system of independent proton isochromats typically used in previous simulators. The computational power needed for simulation of the biologically relevant tissue models in large 3D objects is gained using parallelized execution on GPU. Three simulated and one actual MRI experiments were performed to demonstrate the ability of the new simulator to accommodate a wide variety of voxel composition scenarios and demonstrate detrimental effects of simplified treatment of tissue micro-organization adapted in previous simulators. GPU execution allowed  ∼ 200× improvement in computational speed over standard CPU. As a cross-platform, open-source, extensible environment for customizing virtual MRI experiments, MRiLab streamlines the development of new MRI methods, especially those aiming to infer quantitatively tissue composition and microstructure.

  1. Fast Realistic MRI Simulations Based on Generalized Multi-Pool Exchange Tissue Model

    PubMed Central

    Velikina, Julia V.; Block, Walter F.; Kijowski, Richard; Samsonov, Alexey A.

    2017-01-01

    We present MRiLab, a new comprehensive simulator for large-scale realistic MRI simulations on a regular PC equipped with a modern graphical processing unit (GPU). MRiLab combines realistic tissue modeling with numerical virtualization of an MRI system and scanning experiment to enable assessment of a broad range of MRI approaches including advanced quantitative MRI methods inferring microstructure on a sub-voxel level. A flexible representation of tissue microstructure is achieved in MRiLab by employing the generalized tissue model with multiple exchanging water and macromolecular proton pools rather than a system of independent proton isochromats typically used in previous simulators. The computational power needed for simulation of the biologically relevant tissue models in large 3D objects is gained using parallelized execution on GPU. Three simulated and one actual MRI experiments were performed to demonstrate the ability of the new simulator to accommodate a wide variety of voxel composition scenarios and demonstrate detrimental effects of simplified treatment of tissue micro-organization adapted in previous simulators. GPU execution allowed ∼200× improvement in computational speed over standard CPU. As a cross-platform, open-source, extensible environment for customizing virtual MRI experiments, MRiLab streamlines the development of new MRI methods, especially those aiming to infer quantitatively tissue composition and microstructure. PMID:28113746

  2. A simplified method for determining reactive rate parameters for reaction ignition and growth in explosives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, P.J.

    1996-07-01

    A simplified method for determining the reactive rate parameters for the ignition and growth model is presented. This simplified ignition and growth (SIG) method consists of only two adjustable parameters, the ignition (I) and growth (G) rate constants. The parameters are determined by iterating these variables in DYNA2D hydrocode simulations of the failure diameter and the gap test sensitivity until the experimental values are reproduced. Examples of four widely different explosives were evaluated using the SIG model. The observed embedded gauge stress-time profiles for these explosives are compared to those calculated by the SIG equation and the results are described.
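    For context, the full ignition-and-growth (Lee-Tarver) reaction rate has the general two-term form below, written here from the standard model rather than from this report; the SIG approach described above retains only the rate constants I and G as adjustable, with the remaining exponents held fixed:

```latex
\frac{dF}{dt} =
  \underbrace{I\,(1-F)^{b}\left(\frac{\rho}{\rho_0}-1-a\right)^{x}}_{\text{ignition}}
  \;+\;
  \underbrace{G\,(1-F)^{c}\,F^{d}\,P^{y}}_{\text{growth}}
```

    where F is the reacted mass fraction, rho/rho_0 the compression, and P the local pressure.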

  3. Improved Simplified Methods for Effective Seismic Analysis and Design of Isolated and Damped Bridges in Western and Eastern North America

    NASA Astrophysics Data System (ADS)

    Koval, Viacheslav

    The seismic design provisions of the CSA-S6 Canadian Highway Bridge Design Code and the AASHTO LRFD Seismic Bridge Design Specifications have been developed primarily based on historical earthquake events that have occurred along the west coast of North America. For the design of seismic isolation systems, these codes include simplified analysis and design methods. The appropriateness and range of application of these methods are investigated through extensive parametric nonlinear time history analyses in this thesis. It was found that there is a need to adjust existing design guidelines to better capture the expected nonlinear response of isolated bridges. For isolated bridges located in eastern North America, new damping coefficients are proposed. The applicability limits of the code-based simplified methods have been redefined to ensure that the modified method will lead to conservative results and that a wider range of seismically isolated bridges can be covered by this method. The possibility of further improving current simplified code methods was also examined. By transforming the quantity of allocated energy into a displacement contribution, an idealized analytical solution is proposed as a new simplified design method. This method realistically reflects the effects of ground-motion and system design parameters, including the effects of a drifted oscillation center. The proposed method is therefore more appropriate than current existing simplified methods and can be applicable to isolation systems exhibiting a wider range of properties. A multi-level-hazard performance matrix has been adopted by different seismic provisions worldwide and will be incorporated into the new edition of the Canadian CSA-S6-14 Bridge Design code. However, the combined effect and optimal use of isolation and supplemental damping devices in bridges have not been fully exploited yet to achieve enhanced performance under different levels of seismic hazard. 
    A novel Dual-Level Seismic Protection (DLSP) concept is proposed and developed in this thesis, which makes it possible to achieve optimal seismic performance with combined isolation and supplemental damping devices in bridges. This concept is shown to represent an attractive design approach both for the upgrade of existing seismically deficient bridges and for the design of new isolated bridges.

  4. Practical Aspects of Stabilized FEM Discretizations of Nonlinear Conservation Law Systems with Convex Extension

    NASA Technical Reports Server (NTRS)

    Barth, Timothy; Saini, Subhash (Technical Monitor)

    1999-01-01

    This talk considers simplified finite element discretization techniques for first-order systems of conservation laws equipped with a convex (entropy) extension. Using newly developed techniques in entropy symmetrization theory, simplified forms of the Galerkin least-squares (GLS) and discontinuous Galerkin (DG) finite element methods have been developed and analyzed. The use of symmetrization variables yields numerical schemes which inherit the global entropy stability properties of the PDE system. Central to the development of the simplified GLS and DG methods is the Eigenvalue Scaling Theorem, which characterizes right symmetrizers of an arbitrary first-order hyperbolic system in terms of scaled eigenvectors of the corresponding flux Jacobian matrices. A constructive proof is provided for the Eigenvalue Scaling Theorem with detailed consideration given to the Euler, Navier-Stokes, and magnetohydrodynamic (MHD) equations. Linear and nonlinear energy stability is proven for the simplified GLS and DG methods. Spatial convergence properties of the simplified GLS and DG methods are numerically evaluated via the computation of Ringleb flow on a sequence of successively refined triangulations. Finally, we consider a posteriori error estimates for the GLS and DG discretizations, assuming error functionals related to the integrated lift and drag of a body. Sample calculations in 2D are shown to validate the theory and implementation.

  5. An integrated workflow for robust alignment and simplified quantitative analysis of NMR spectrometry data.

    PubMed

    Vu, Trung N; Valkenborg, Dirk; Smets, Koen; Verwaest, Kim A; Dommisse, Roger; Lemière, Filip; Verschoren, Alain; Goethals, Bart; Laukens, Kris

    2011-10-20

    Nuclear magnetic resonance spectroscopy (NMR) is a powerful technique to reveal and compare quantitative metabolic profiles of biological tissues. However, chemical and physical sample variations make the analysis of the data challenging, and typically require the application of a number of preprocessing steps prior to data interpretation. For example, noise reduction, normalization, baseline correction, peak picking, spectrum alignment and statistical analysis are indispensable components in any NMR analysis pipeline. We introduce a novel suite of informatics tools for the quantitative analysis of NMR metabolomic profile data. The core of the processing cascade is a novel peak alignment algorithm, called hierarchical Cluster-based Peak Alignment (CluPA). The algorithm aligns a target spectrum to the reference spectrum in a top-down fashion by building a hierarchical cluster tree from peak lists of reference and target spectra and then dividing the spectra into smaller segments based on the most distant clusters of the tree. To reduce the computational time to estimate the spectral misalignment, the method makes use of Fast Fourier Transformation (FFT) cross-correlation. Since the method returns a high-quality alignment, we can propose a simple methodology to study the variability of the NMR spectra. For each aligned NMR data point the ratio of the between-group and within-group sum of squares (BW-ratio) is calculated to quantify the difference in variability between and within predefined groups of NMR spectra. This differential analysis is related to the calculation of the F-statistic or a one-way ANOVA, but without distributional assumptions. Statistical inference based on the BW-ratio is achieved by bootstrapping the null distribution from the experimental data. The workflow performance was evaluated using a previously published dataset. 
Correlation maps, spectral and grey scale plots show clear improvements in comparison to other methods, and the down-to-earth quantitative analysis works well for the CluPA-aligned spectra. The whole workflow is embedded into a modular and statistically sound framework that is implemented as an R package called "speaq" ("spectrum alignment and quantitation"), which is freely available from http://code.google.com/p/speaq/.
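    The BW-ratio described above can be computed per aligned spectral point in a few lines, with the null distribution bootstrapped by permuting group labels; the toy peak intensities below are invented:

```python
import random

def bw_ratio(groups):
    """Between-group / within-group sum of squares at one data point."""
    allv = [v for g in groups for v in g]
    grand = sum(allv) / len(allv)
    between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    return between / within

# Hypothetical peak intensities at one aligned NMR data point
case    = [5.1, 5.4, 4.9, 5.2]
control = [3.0, 3.2, 2.8, 3.1]
observed = bw_ratio([case, control])

# Permutation bootstrap of the null: shuffle group labels, recompute
random.seed(0)
pooled = case + control
null = []
for _ in range(999):
    random.shuffle(pooled)
    null.append(bw_ratio([pooled[:4], pooled[4:]]))
p_value = (1 + sum(r >= observed for r in null)) / (1 + len(null))
```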

  6. Existence and stability, and discrete BB and rank conditions, for general mixed-hybrid finite elements in elasticity

    NASA Technical Reports Server (NTRS)

    Xue, W.-M.; Atluri, S. N.

    1985-01-01

    In this paper, all possible forms of mixed-hybrid finite element methods that are based on multi-field variational principles are examined as to the conditions for existence, stability, and uniqueness of their solutions. The reasons as to why certain 'simplified hybrid-mixed methods' in general, and the so-called 'simplified hybrid-displacement method' in particular (based on the so-called simplified variational principles), become unstable, are discussed. A comprehensive discussion of the 'discrete' BB-conditions, and the rank conditions, of the matrices arising in mixed-hybrid methods, is given. Some recent studies aimed at the assurance of such rank conditions, and the related problem of the avoidance of spurious kinematic modes, are presented.

  7. 77 FR 73965 - Allocation of Costs Under the Simplified Methods; Hearing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-12

    ... DEPARTMENT OF THE TREASURY Internal Revenue Service 26 CFR Part 1 [REG-126770-06] RIN 1545-BG07 Allocation of Costs Under the Simplified Methods; Hearing AGENCY: Internal Revenue Service (IRS), Treasury. ACTION: Notice of public hearing on notice of proposed rulemaking. SUMMARY: This document provides notice of...

  8. The development and evaluation of a nursing information system for caring clinical in-patient.

    PubMed

    Fang, Yu-Wen; Li, Chih-Ping; Wang, Mei-Hua

    2015-01-01

    The research aimed to develop a nursing information system to simplify the admission procedure for clinical in-patient care and to enhance the efficiency of medical information documentation. By correctly delivering patients' health records and providing continuous care, patient safety and care quality would be effectively improved. The study method applied the Spiral Model development process to compose a nursing information team, using strategies of data collection, working-environment observation, use-case modeling, and Joint Application Design (JAD) conferences to complete the system requirement analysis and design. The Admission Care Management Information System (ACMIS) mainly included: (1) an admission nursing management information system; (2) an inter-shift meeting information management system; (3) the linkage of the drug management system and the physical examination record system. The framework contained qualitative and quantitative components that provided both formative and summative elements of the evaluation. The system evaluation applied an information success model together with a questionnaire assessing nurses' acceptance and satisfaction. The questionnaire results showed that users' satisfaction, perceived self-involvement, age, and information quality were positively related to personal and organizational effectiveness. According to the results of this study, the Admission Care Management Information System was practical for simplifying clinical working procedures and effective for communicating and documenting admission medical information.

  9. Rapid Determination of Two Triterpenoid Acids in Chaenomelis Fructus Using Supercritical Fluid Extraction On-line Coupled with Supercritical Fluid Chromatography.

    PubMed

    Zhang, Xiaotian; Ji, Feng; Li, Yueqi; He, Tian; Han, Ya; Wang, Daidong; Lin, Zongtao; Chen, Shizhong

    2018-01-01

    In this study, an on-line supercritical fluid extraction (SFE) and supercritical fluid chromatography (SFC) method was developed for the rapid determination of oleanolic acid and ursolic acid in Chaenomelis Fructus. After optimization of the conditions, the two triterpenoid acids were extracted by SFE using 20% methanol as a modifier at 35°C in 8 min. They were resolved on a Shim-pack UC-X Diol column (4.6 × 150 mm, 3 μm) in 14 min (0 - 10 min, 5 - 10%; 10 - 14 min, 10% methanol in CO2) with a backpressure of 15 MPa at 40°C. The on-line SFE-SFC method could be completed within 40 min (10.79 mg/g dry plant, Rs = 2.36), whereas ultrasound-assisted extraction followed by HPLC required at least 90 min (3.55 mg/g dry plant, Rs = 1.92). This on-line SFE-SFC method provides a powerful way to simplify the pre-processing and quantitative analysis of natural products.

  10. Precise determination of N-acetylcysteine in pharmaceuticals by microchip electrophoresis.

    PubMed

    Rudašová, Marína; Masár, Marián

    2016-01-01

    A novel microchip electrophoresis method for the rapid and high-precision determination of N-acetylcysteine, a pharmaceutically active ingredient, in mucolytics has been developed. Isotachophoresis separations were carried out at pH 6.0 on a microchip with conductivity detection. The results were evaluated by both external calibration and the internal standard method. The internal standard method effectively eliminated variations in the working parameters, mainly run-to-run fluctuations of the injected volume. The repeatability and accuracy of N-acetylcysteine determination in all mucolytic preparations tested (Solmucol 90 and 200, and ACC Long 600) were more than satisfactory, with relative standard deviation and relative error values <0.7 and <1.9%, respectively. A recovery range of 99-101% of N-acetylcysteine in the analyzed pharmaceuticals also qualifies the proposed method for accurate analysis. This work, in general, demonstrates the analytical possibilities of microchip isotachophoresis for the quantitative analysis of simplified samples, such as pharmaceuticals, that contain the analyte(s) at relatively high concentrations. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
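    The internal-standard evaluation described in this record can be sketched as follows; the calibration numbers and the assumption of a perfectly linear detector response are illustrative, not taken from the paper:

```python
# Internal-standard quantitation sketch (hypothetical numbers): the analyte
# signal is normalized to a co-injected internal standard, so a fluctuation
# in injected volume scales both signals equally and cancels in the ratio.

def is_ratio(analyte_signal, standard_signal):
    """Response ratio used both for calibration and for samples."""
    return analyte_signal / standard_signal

# Calibration: known analyte concentrations (mg/mL) vs. measured ratios.
# With a linear detector, ratio = slope * concentration (zero intercept).
cal_conc = [0.5, 1.0, 2.0, 4.0]
cal_ratio = [0.25, 0.50, 1.00, 2.00]          # hypothetical, perfectly linear
slope = sum(r * c for r, c in zip(cal_ratio, cal_conc)) / sum(c * c for c in cal_conc)

# A sample run where the injected volume drifted by +10%: both signals carry
# the drift, so the ratio (and the reported concentration) does not.
analyte, standard = 1.65, 1.10
conc = is_ratio(analyte, standard) / slope
print(round(conc, 3))   # 3.0 mg/mL despite the injection drift
```

    An external calibration on the raw analyte signal alone would have reported the 10% drift as a 10% concentration error.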

  11. Analysis of glycosaminoglycan-derived disaccharides by capillary electrophoresis using laser-induced fluorescence detection

    PubMed Central

    Chang, Yuqing; Yang, Bo; Zhao, Xue; Linhardt, Robert J.

    2012-01-01

    A quantitative and highly sensitive method for the analysis of glycosaminoglycan (GAG)-derived disaccharides is presented that relies on capillary electrophoresis (CE) with laser-induced fluorescence (LIF) detection. This method enables complete separation of seventeen GAG-derived disaccharides in a single run. Unsaturated disaccharides were derivatized with 2-aminoacridone (AMAC) to improve sensitivity. The limit of detection was at the attomole level, about 100-fold more sensitive than traditional CE-ultraviolet detection. A CE separation timetable was developed to achieve complete resolution and shorten analysis time. The RSDs of migration times and peak areas at both low and high concentrations of unsaturated disaccharides are less than 2.7% and 3.2%, respectively, demonstrating that the method is reproducible. The analysis was successfully applied to cultured Chinese hamster ovary cell samples for determination of GAG disaccharides. The current method simplifies GAG extraction steps and reduces the inaccuracy in calculating ratios of heparin/heparan sulfate to chondroitin sulfate/dermatan sulfate that results from separate analyses of a single sample. PMID:22609076

  12. Improved method for the extraction and chromatographic analysis on a fused-core column of ellagitannins found in oak-aged wine.

    PubMed

    Navarro, María; Kontoudakis, Nikolaos; Canals, Joan Miquel; García-Romero, Esteban; Gómez-Alonso, Sergio; Zamora, Fernando; Hermosín-Gutiérrez, Isidro

    2017-07-01

    A new method for the analysis of ellagitannins observed in oak-aged wine is proposed, exhibiting interesting advantages with regard to previously reported analytical methods. The necessary extraction of ellagitannins from wine was simplified to a single step of solid phase extraction (SPE) using size exclusion chromatography with Sephadex LH-20 without the need for any previous SPE of phenolic compounds using reversed-phase materials. The quantitative recovery of wine ellagitannins requires a combined elution with methanol and ethyl acetate, especially for increasing the recovery of the less polar acutissimins. The chromatographic method was performed using a fused-core C18 column, thereby avoiding the coelution of main ellagitannins, such as vescalagin and roburin E. However, the very polar ellagitannins, namely, the roburins A, B and C, still partially coeluted, and their quantification was assisted by the MS detector. This methodology also enabled the analysis of free gallic and ellagic acids in the same chromatographic run. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. 77 FR 15969 - Waybill Data Released in Three-Benchmark Rail Rate Proceedings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-19

    ... confidentiality of the contract rates, as required by 49 U.S.C. 11904. Background In Simplified Standards for Rail Rate Cases (Simplified Standards), EP 646 (Sub-No. 1) (STB served Sept. 5, 2007), aff'd sub nom. CSX...\\ Under the Three-Benchmark method as revised in Simplified Standards, each party creates and proffers to...

  14. 48 CFR 13.005 - List of laws inapplicable to contracts and subcontracts at or below the simplified acquisition...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 1 2014-10-01 2014-10-01 false List of laws inapplicable to contracts and subcontracts at or below the simplified acquisition threshold. 13.005 Section 13.005 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES...

  15. Mid-infrared spectroscopy combined with chemometrics to detect Sclerotinia stem rot on oilseed rape (Brassica napus L.) leaves.

    PubMed

    Zhang, Chu; Feng, Xuping; Wang, Jian; Liu, Fei; He, Yong; Zhou, Weijun

    2017-01-01

    Detection of plant diseases in a fast and simple way is crucial for timely disease control. Conventionally, plant diseases are accurately identified by DNA-, RNA- or serology-based methods, which are time consuming, complex and expensive. Mid-infrared spectroscopy is a promising technique that simplifies the detection procedure. Mid-infrared spectroscopy was used to identify the spectral differences between healthy and infected oilseed rape leaves. Two different sample sets from two experiments were used to explore and validate the feasibility of using mid-infrared spectroscopy in detecting Sclerotinia stem rot (SSR) on oilseed rape leaves. The average mid-infrared spectra showed differences between healthy and infected leaves, and the differences varied among sample sets. Optimal wavenumbers for the 2 sample sets selected by the second-derivative spectra were similar, indicating the efficacy of selecting optimal wavenumbers. Chemometric methods were further used to quantitatively detect the oilseed rape leaves infected by SSR, including partial least squares-discriminant analysis, support vector machines and extreme learning machines. The discriminant models using the full spectra and the optimal wavenumbers of the 2 sample sets were effective, with classification accuracies over 80%. The discriminant results for the 2 sample sets varied due to variations in the samples. The use of two sample sets proved and validated the feasibility of using mid-infrared spectroscopy and chemometric methods for detecting SSR on oilseed rape leaves. The similarities among the selected optimal wavenumbers in different sample sets make it feasible to simplify the models and build practical ones. Mid-infrared spectroscopy is a reliable and promising technique for SSR control. This study helps in developing practical applications of mid-infrared spectroscopy combined with chemometrics to detect plant disease.

  16. Image segmentation algorithm based on improved PCNN

    NASA Astrophysics Data System (ADS)

    Chen, Hong; Wu, Chengdong; Yu, Xiaosheng; Wu, Jiahui

    2017-11-01

    A modified simplified Pulse Coupled Neural Network (PCNN) model based on the simplified PCNN is proposed in this article. Some work has been done to enrich this model, such as imposing restriction terms on the inputs and improving the linking inputs and internal activity of the PCNN. A self-adaptive setting method for the linking coefficient and the threshold decay time constant is also proposed. Finally, an image segmentation algorithm based on this simplified PCNN model and PSO was implemented on five pictures. Experimental results demonstrate that this image segmentation algorithm performs much better than the SPCNN and OTSU methods.
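    A minimal sketch of a simplified-PCNN iteration of the kind described above (feeding input, linking input, exponentially decaying threshold); the parameter values and the 4-neighbour linking field are assumptions for illustration, not the authors' settings:

```python
import math

def spcnn_segment(img, beta=0.5, alpha=0.7, V=20.0, steps=5):
    """Minimal simplified-PCNN sketch: the feeding input is the pixel
    intensity, the linking input is the sum of 4-neighbour firings from the
    previous iteration, and each neuron has an exponentially decaying
    threshold that jumps by V when the neuron fires.  Returns the iteration
    at which each pixel first fired (similar intensities fire together)."""
    h, w = len(img), len(img[0])
    theta = [[1.0] * w for _ in range(h)]   # dynamic thresholds
    Y = [[0] * w for _ in range(h)]         # current firing map
    fired_at = [[0] * w for _ in range(h)]
    for t in range(1, steps + 1):
        prev = [row[:] for row in Y]        # synchronous update
        for i in range(h):
            for j in range(w):
                L = sum(prev[x][y]
                        for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= x < h and 0 <= y < w)
                U = img[i][j] * (1.0 + beta * L)     # internal activity
                Y[i][j] = 1 if U > theta[i][j] else 0
                theta[i][j] = theta[i][j] * math.exp(-alpha) + V * Y[i][j]
                if Y[i][j] and not fired_at[i][j]:
                    fired_at[i][j] = t
    return fired_at

# A tiny "image": a bright block on a dark background.  Bright pixels fire
# earlier than dark ones, grouping the image into regions.
img = [[0.9, 0.9, 0.1],
       [0.9, 0.9, 0.1],
       [0.1, 0.1, 0.1]]
firing = spcnn_segment(img)
print(firing)   # [[2, 2, 5], [2, 2, 5], [5, 5, 5]]
```

    Grouping pixels by first-firing iteration yields the segmentation; a self-adaptive scheme as in the paper would derive beta and alpha from image statistics rather than fix them.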

  17. Immersed boundary-simplified lattice Boltzmann method for incompressible viscous flows

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Shu, C.; Tan, D.

    2018-05-01

    An immersed boundary-simplified lattice Boltzmann method is developed in this paper for simulations of two-dimensional incompressible viscous flows with immersed objects. Assisted by the fractional step technique, the problem is resolved in a predictor-corrector scheme. The predictor step solves the flow field without considering immersed objects, and the corrector step imposes the effect of immersed boundaries on the velocity field. Different from the previous immersed boundary-lattice Boltzmann method which adopts the standard lattice Boltzmann method (LBM) as the flow solver in the predictor step, a recently developed simplified lattice Boltzmann method (SLBM) is applied in the present method to evaluate intermediate flow variables. Compared to the standard LBM, SLBM requires lower virtual memories, facilitates the implementation of physical boundary conditions, and shows better numerical stability. The boundary condition-enforced immersed boundary method, which accurately ensures no-slip boundary conditions, is implemented as the boundary solver in the corrector step. Four typical numerical examples are presented to demonstrate the stability, the flexibility, and the accuracy of the present method.

  18. An improved loopless mounting method for cryocrystallography

    NASA Astrophysics Data System (ADS)

    Qi, Jian-Xun; Jiang, Fan

    2010-01-01

    Based on a recent loopless mounting method, a simplified loopless and bufferless crystal mounting method is developed for macromolecular crystallography. This simplified crystal mounting system is composed of the following components: a home-made glass capillary, a brass seat for holding the glass capillary, a flow regulator, and a vacuum pump for evacuation. Compared with the currently prevalent loop mounting method, this simplified method has almost the same mounting procedure and thus is compatible with current automated crystal mounting systems. The advantages of this method include a higher signal-to-noise ratio, more accurate measurement, more rapid flash cooling, and less X-ray absorption and thus less radiation damage to the crystal. The method can be extended to the flash-freezing of a crystal with or without soaking it in a lower concentration of cryoprotectant, so it may be the best option for data collection in the absence of a suitable cryoprotectant. It is therefore suggested that this mounting method be further improved and extensively applied in cryocrystallographic experiments.

  19. Research on simplified parametric finite element model of automobile frontal crash

    NASA Astrophysics Data System (ADS)

    Wu, Linan; Zhang, Xin; Yang, Changhai

    2018-05-01

    The modeling method and key technologies of a simplified parametric finite element (FE) model for automobile frontal crash are studied in this paper. By establishing the auto-body topological structure, extracting and parameterizing the stiffness properties of the substructures, and choosing appropriate material models for the substructures, a simplified parametric FE model of the M6 car is built. Comparison of the results indicates that the simplified parametric FE model accurately calculates the automobile crash responses and the deformation of the key substructures, while the simulation time is reduced from 6 hours to 2 minutes.

  20. A low-cost landslide displacement activity assessment from time-lapse photogrammetry and rainfall data: Application to the Tessina landslide site

    NASA Astrophysics Data System (ADS)

    Gabrieli, F.; Corain, L.; Vettore, L.

    2016-09-01

    Acquiring useful and reliable displacement data from a complex landslide site is often a problem because of large, localized and scattered erosive processes and deformations; the inaccessibility of the site; the high cost of instrumentation and maintenance. However, these data are of fundamental importance not only to hazard assessments but also to understanding the processes at the basis of slope evolution. In this framework, time-lapse photogrammetry can represent a good compromise; the low accuracy is compensated for by the wide-ranging and dense spatial displacement information that can be obtained with inexpensive equipment. Nevertheless, when large displacement monitoring data sets become available, the problem becomes the choice of the most suitable statistical model to describe the probability of movement and adequately simplify the complexity of a scattered, intermittent, and spatially inhomogeneous displacement field. In this paper, an automated displacement detection method, which is based on the absolute image differences and digital correlations from a sequence of photos, was developed and applied to a photographic survey activity at the head of the Tessina landslide (northeastern Italy). The method allowed us to simplify and binarize the displacement field and to recognize the intermittent activity and the peculiar behaviours of different parts of the landslide, which were identified and classified by combining geomorphological and geological information. Moreover, for the first time, sliding correlations between these areas were quantitatively estimated using time-series-based binary logistic regression and the definition of a probability-based directed graph of displacement occurrence that connected the source zones to the lower depletion basin and the main collector channel. 
Using rainfall data, event-based logistic and Poisson regression models were applied to the upper zones of the landslide to estimate the probability of movement of each scarp and the persistence of the displacement as a result of certain rainfall events. The results of these statistical analyses highlighted the capability of this approach to quantitatively evaluate the pattern of displacement occurrences and to assess the evolution of a landslide site to gain insight into geomorphological processes.
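    The absolute-image-difference step that binarizes the displacement field can be sketched as below; the grey-level threshold and the two frames are hypothetical, not values from the Tessina survey:

```python
def activity_mask(frame_a, frame_b, threshold=30):
    """Binarize motion between two time-lapse frames: a pixel is 'active'
    when its absolute grey-level change exceeds the threshold."""
    return [[1 if abs(a - b) > threshold else 0
             for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]

def activity_fraction(mask):
    """Fraction of active pixels: a simple per-epoch movement indicator
    that can feed a binary (moved / did not move) regression."""
    total = sum(len(row) for row in mask)
    return sum(map(sum, mask)) / total

# Two hypothetical 8-bit frames: only the top-left block moved.
f1 = [[200, 200, 50], [200, 200, 50], [50, 50, 50]]
f2 = [[120, 130, 52], [140, 150, 48], [49, 51, 50]]
mask = activity_mask(f1, f2)
print(mask)                     # [[1, 1, 0], [1, 1, 0], [0, 0, 0]]
print(activity_fraction(mask))  # 4/9 of the scene was active
```

    Per-zone sequences of such binary activity values, paired with rainfall records, are the kind of input an event-based logistic regression of movement probability would take.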

  1. A simplified parsimonious higher order multivariate Markov chain model

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Yang, Chuan-sheng

    2017-09-01

    In this paper, a simplified parsimonious higher-order multivariate Markov chain model (SPHOMMCM) is presented. Moreover, a parameter estimation method for SPHOMMCM is given. Numerical experiments show the effectiveness of SPHOMMCM.
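    The abstract gives no equations, but the first-order multivariate Markov chain prediction step on which such models build can be sketched as follows: each sequence's next state distribution is a weighted mix of transition matrices applied to all sequences. All matrices, weights, and state counts here are illustrative:

```python
def matvec(M, v):
    """Row-oriented matrix-vector product."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def mmc_step(P, lam, xs):
    """One multivariate Markov chain step:
    x_new[j] = sum_k lam[j][k] * (P[j][k] @ xs[k]).
    The weights lam[j] sum to one and each column of P[j][k] is a
    probability vector, so the result stays a probability vector."""
    out = []
    for j in range(len(xs)):
        acc = [0.0] * len(xs[j])
        for k in range(len(xs)):
            for i, val in enumerate(matvec(P[j][k], xs[k])):
                acc[i] += lam[j][k] * val
        out.append(acc)
    return out

# Two interacting sequences with two states each (numbers illustrative).
P = [[[[0.9, 0.2], [0.1, 0.8]],    # P[0][0]: sequence 0 from its own past
      [[0.5, 0.5], [0.5, 0.5]]],   # P[0][1]: influence of sequence 1
     [[[0.3, 0.7], [0.7, 0.3]],    # P[1][0]
      [[0.6, 0.4], [0.4, 0.6]]]]   # P[1][1]
lam = [[0.8, 0.2], [0.4, 0.6]]     # a parsimonious model restricts these
xs = [[1.0, 0.0], [0.0, 1.0]]      # current state distributions
pred = mmc_step(P, lam, xs)
print(pred)   # approximately [[0.82, 0.18], [0.36, 0.64]]
```

    A simplified parsimonious variant reduces the number of free entries in lam and P; the prediction step itself has this shape.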

  2. Simplified half-life methods for the analysis of kinetic data

    NASA Technical Reports Server (NTRS)

    Eberhart, J. G.; Levin, E.

    1988-01-01

    The analysis of reaction rate data has as its goal the determination of the order and rate constant that characterize the data. Chemical reactions with one reactant are considered, and simplified methods for accomplishing this goal are presented. The approaches involve the use of half-lives or other fractional lives. These methods are particularly useful for the more elementary discussions of kinetics found in general and physical chemistry courses.
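    One such fractional-life method can be made concrete: for an nth-order reaction in a single reactant, the half-life scales as the (1-n)th power of the initial concentration, so two runs at different initial concentrations yield the order directly. A sketch, using made-up second-order data:

```python
import math

def reaction_order_from_half_lives(a1, t_half1, a2, t_half2):
    """For an nth-order reaction in a single reactant, t_1/2 is proportional
    to a**(1 - n), where a is the initial concentration.  Two runs at
    different initial concentrations therefore give
        n = 1 + ln(t_half2 / t_half1) / ln(a1 / a2)."""
    return 1.0 + math.log(t_half2 / t_half1) / math.log(a1 / a2)

# Second-order example, where t_1/2 = 1 / (k * a).  With k = 0.5 L/(mol*s):
a1, a2 = 1.0, 0.25                      # initial concentrations, mol/L
t1, t2 = 1 / (0.5 * a1), 1 / (0.5 * a2)  # half-lives: 2 s and 8 s
n = reaction_order_from_half_lives(a1, t1, a2, t2)
print(round(n, 6))   # 2.0
```

    Once n is known, the rate constant follows from either half-life; for first-order kinetics (n = 1) the half-life is concentration-independent and k = ln 2 / t_1/2.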

  3. From Inverse Problems in Mathematical Physiology to Quantitative Differential Diagnoses

    PubMed Central

    Zenker, Sven; Rubin, Jonathan; Clermont, Gilles

    2007-01-01

    The improved capacity to acquire quantitative data in a clinical setting has generally failed to improve outcomes in acutely ill patients, suggesting a need for advances in computer-supported data interpretation and decision making. In particular, the application of mathematical models of experimentally elucidated physiological mechanisms could augment the interpretation of quantitative, patient-specific information and help to better target therapy. Yet, such models are typically complex and nonlinear, a reality that often precludes the identification of unique parameters and states of the model that best represent available data. Hypothesizing that this non-uniqueness can convey useful information, we implemented a simplified simulation of a common differential diagnostic process (hypotension in an acute care setting), using a combination of a mathematical model of the cardiovascular system, a stochastic measurement model, and Bayesian inference techniques to quantify parameter and state uncertainty. The output of this procedure is a probability density function on the space of model parameters and initial conditions for a particular patient, based on prior population information together with patient-specific clinical observations. We show that multimodal posterior probability density functions arise naturally, even when unimodal and uninformative priors are used. The peaks of these densities correspond to clinically relevant differential diagnoses and can, in the simplified simulation setting, be constrained to a single diagnosis by assimilating additional observations from dynamical interventions (e.g., fluid challenge). 
We conclude that the ill-posedness of the inverse problem in quantitative physiology is not merely a technical obstacle, but rather reflects clinical reality and, when addressed adequately in the solution process, provides a novel link between mathematically described physiological knowledge and the clinical concept of differential diagnoses. We outline possible steps toward translating this computational approach to the bedside, to supplement today's evidence-based medicine with a quantitatively founded model-based medicine that integrates mechanistic knowledge with patient-specific information. PMID:17997590
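    The multimodal posteriors the authors describe can be illustrated with a toy grid-based inference in which the forward model loses information (here, the sign of the parameter); this sketch is not their cardiovascular model:

```python
import math

# Toy inverse problem: the model maps parameter theta to observable
# y = theta**2, so y alone cannot distinguish the sign of theta.
# With a flat prior and Gaussian measurement noise, the posterior is
# bimodal -- two "diagnoses" consistent with the same observation.
def grid_posterior(y_obs, sigma=0.5, lo=-3.0, hi=3.0, n=601):
    thetas = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    like = [math.exp(-0.5 * ((y_obs - th * th) / sigma) ** 2) for th in thetas]
    z = sum(like)                       # normalize on the grid
    return thetas, [l / z for l in like]

thetas, post = grid_posterior(y_obs=4.0)
# The two posterior modes sit near theta = -2 and theta = +2.
peak_left = max((p, th) for p, th in zip(post, thetas) if th < 0)[1]
peak_right = max((p, th) for p, th in zip(post, thetas) if th > 0)[1]
print(peak_left, peak_right)   # -2.0 2.0
```

    An additional observation that depends on the sign of theta (the analogue of a dynamical intervention such as a fluid challenge) would multiply the posterior by a second likelihood and collapse it to a single mode.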

  4. Simplified neutrosophic sets and their applications in multi-criteria group decision-making problems

    NASA Astrophysics Data System (ADS)

    Peng, Juan-juan; Wang, Jian-qiang; Wang, Jing; Zhang, Hong-yu; Chen, Xiao-hong

    2016-07-01

    As a variation of fuzzy sets and intuitionistic fuzzy sets, neutrosophic sets have been developed to represent uncertain, imprecise, incomplete and inconsistent information that exists in the real world. Simplified neutrosophic sets (SNSs) have been proposed for the main purpose of addressing issues with a set of specific numbers. However, there are certain problems regarding the existing operations of SNSs, as well as their aggregation operators and the comparison methods. Therefore, this paper defines the novel operations of simplified neutrosophic numbers (SNNs) and develops a comparison method based on the related research of intuitionistic fuzzy numbers. On the basis of these operations and the comparison method, some SNN aggregation operators are proposed. Additionally, an approach for multi-criteria group decision-making (MCGDM) problems is explored by applying these aggregation operators. Finally, an example to illustrate the applicability of the proposed method is provided and a comparison with some other methods is made.

  5. Formative Research on the Simplifying Conditions Method (SCM) for Task Analysis and Sequencing.

    ERIC Educational Resources Information Center

    Kim, YoungHwan; Reigeluth, Charles M.

    The Simplifying Conditions Method (SCM) is a set of guidelines for task analysis and sequencing of instructional content under the Elaboration Theory (ET). This article introduces the fundamentals of SCM and presents the findings from a formative research study on SCM. It was conducted in two distinct phases: design and instruction. In the first…

  6. A Simplified Method for Tissue Engineering Skeletal Muscle Organoids in Vitro

    NASA Technical Reports Server (NTRS)

    Shansky, Janet; DelTatto, Michael; Chromiak, Joseph; Vandenburgh, Herman

    1996-01-01

    Tissue-engineered three dimensional skeletal muscle organ-like structures have been formed in vitro from primary myoblasts by several different techniques. This report describes a simplified method for generating large numbers of muscle organoids from either primary embryonic avian or neonatal rodent myoblasts, which avoids the requirements for stretching and other mechanical stimulation.

  7. Multidisciplinary Optimization Methods for Aircraft Preliminary Design

    NASA Technical Reports Server (NTRS)

    Kroo, Ilan; Altus, Steve; Braun, Robert; Gage, Peter; Sobieski, Ian

    1994-01-01

    This paper describes a research program aimed at improved methods for multidisciplinary design and optimization of large-scale aeronautical systems. The research involves new approaches to system decomposition, interdisciplinary communication, and methods of exploiting coarse-grained parallelism for analysis and optimization. A new architecture, that involves a tight coupling between optimization and analysis, is intended to improve efficiency while simplifying the structure of multidisciplinary, computation-intensive design problems involving many analysis disciplines and perhaps hundreds of design variables. Work in two areas is described here: system decomposition using compatibility constraints to simplify the analysis structure and take advantage of coarse-grained parallelism; and collaborative optimization, a decomposition of the optimization process to permit parallel design and to simplify interdisciplinary communication requirements.

  8. 29 CFR 2520.104-48 - Alternative method of compliance for model simplified employee pensions-IRS Form 5305-SEP.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... employee pensions-IRS Form 5305-SEP. 2520.104-48 Section 2520.104-48 Labor Regulations Relating to Labor... compliance for model simplified employee pensions—IRS Form 5305-SEP. Under the authority of section 110 of... Security Act of 1974 in the case of a simplified employee pension (SEP) described in section 408(k) of the...

  9. Comprehensive HPTLC Fingerprinting for Quality Control of an Herbal Drug - The Case of Angelica gigas Root.

    PubMed

    Frommenwiler, Débora Arruda; Kim, Jonghwan; Yook, Chang-Soo; Tran, Thi Thu Trang; Cañigueral, Salvador; Reich, Eike

    2018-04-01

    The quality of herbal drugs is usually controlled using several tests recommended in a monograph. HPTLC is the method of choice for identification in many pharmacopoeias. If combined with a suitable reference material for comparison, HPTLC can provide information beyond identification and thus may simplify quality control. This paper describes, as a proof of concept, how HPTLC can be applied to define specifications for an herbal reference material and to control the quality of an herbal drug according to these specifications. Based on multiple batches of cultivated Angelica gigas root, a specific HPTLC method for identification was optimized. This method can distinguish 27 related species. It also can detect the presence of mixtures of A. gigas with two other Angelica species traded as "Dang gui" and is suitable as well for quantitative assessment of samples in a test for minimum content of the sum of decursin and decursinol angelate. The new concept of "comprehensive HPTLC fingerprinting" is proposed: HPTLC fingerprints (images), which are used for identification, are converted into peak profiles and the intensities of selected zones are quantitatively compared to those of the corresponding zones of the reference material. Following a collaborative trial involving three laboratories in three countries, the method was applied to check the quality of further candidates for establishing an appropriate reference material. In conclusion, this case demonstrates that a single HPTLC analysis can provide information about identity, purity, and minimum content of markers of an herbal drug. Georg Thieme Verlag KG Stuttgart · New York.

  10. Comparison of the Calculations Results of Heat Exchange Between a Single-Family Building and the Ground Obtained with the Quasi-Stationary and 3-D Transient Models. Part 2: Intermittent and Reduced Heating Mode

    NASA Astrophysics Data System (ADS)

    Staszczuk, Anna

    2017-03-01

    The paper provides comparative results of calculations of heat exchange between the ground and typical residential buildings using simplified (quasi-stationary) and more accurate (transient, three-dimensional) methods. Characteristics such as the building's geometry, the basement hollow, and the construction of the ground-touching assemblies were considered, including intermittent and reduced heating modes. The calculations with the simplified methods were conducted in accordance with the currently valid standard PN-EN ISO 13370:2008 (Thermal performance of buildings. Heat transfer via the ground. Calculation methods). Comparative estimates of transient, 3-D heat flow were performed with the computer software WUFI®plus. The analysis quantifies the differences in heat exchange obtained with the more exact and the simplified methods.

  11. Correlative SEM SERS for quantitative analysis of dimer nanoparticles.

    PubMed

    Timmermans, F J; Lenferink, A T M; van Wolferen, H A G M; Otto, C

    2016-11-14

    A Raman microscope integrated with a scanning electron microscope was used to investigate plasmonic structures by correlative SEM-SERS analysis. The integrated Raman-SEM microscope combines high-resolution electron microscopy information with SERS signal enhancement from selected nanostructures with adsorbed Raman reporter molecules. Correlative analysis is performed for dimers of two gold nanospheres. Dimers were selected on the basis of SEM images from multi aggregate samples. The effect of the orientation of the dimer with respect to the polarization state of the laser light and the effect of the particle gap size on the Raman signal intensity is observed. Additionally, calculations are performed to simulate the electric near field enhancement. These simulations are based on the morphologies observed by electron microscopy. In this way the experiments are compared with the enhancement factor calculated with near field simulations and are subsequently used to quantify the SERS enhancement factor. Large differences between experimentally observed and calculated enhancement factors are regularly detected, a phenomenon caused by nanoscale differences between the real and 'simplified' simulated structures. Quantitative SERS experiments reveal the structure induced enhancement factor, ranging from ∼200 to ∼20 000, averaged over the full nanostructure surface. The results demonstrate correlative Raman-SEM microscopy for the quantitative analysis of plasmonic particles and structures, thus enabling a new analytical method in the field of SERS and plasmonics.

  12. Mobility Spectrometer Studies on Hydrazine and Ammonia Detection

    NASA Technical Reports Server (NTRS)

    Niu, William; Eiceman, Gary; Szumlas, Andrew; Lewis, John

    2011-01-01

    An airborne vapor analyzer for detecting sub- to low-parts-per-million (ppm) hydrazine in the presence of higher concentrations of ammonia has been under development for the Orion program. The detector is based on ambient-pressure ionization and ion mobility characterization. The detector encompasses: 1) a membrane inlet to exclude particulates and aerosols from the analyzer inlet; 2) a method to separate hydrazine from ammonia, which would otherwise lead to loss of calibration and quantitative accuracy for the hydrazine determination; and 3) response and quantitative determinations for both hydrazine and ammonia. Laboratory studies were made to explore some of these features, including mobility measurements mindful of power, size, and weight issues. The study recommended the use of a mobility spectrometer of traditional design with a reagent gas, equipped with an inlet transfer line of bonded-phase fused-silica tubing. The inlet transfer line provided gas-phase separation of neutral ammonia from hydrazine at 50°C, significantly simplifying the ionization chemistry that underlies response in a mobility spectrometer. Performance of the analyzer was acceptable between 30 and 80°C for both the pre-fractionation column and the drift tube. An inlet combining a membrane with a valve-less injector allowed high-speed quantitative determination of ammonia and hydrazine without cross-reactivity from common metabolites such as alcohols, esters, and aldehydes. Preliminary test results and some of the design features are discussed.

  13. Response of deep and shallow tropical maritime cumuli to large-scale processes

    NASA Technical Reports Server (NTRS)

    Yanai, M.; Chu, J.-H.; Stark, T. E.; Nitta, T.

    1976-01-01

    The bulk diagnostic method of Yanai et al. (1973) and a simplified version of the spectral diagnostic method of Nitta (1975) are used for a more quantitative evaluation of the response of various types of cumuliform clouds to large-scale processes, using the same data set in the Marshall Islands area for a 100-day period in 1956. The dependence of the cloud mass flux distribution on radiative cooling, large-scale vertical motion, and evaporation from the sea is examined. It is shown that typical radiative cooling rates in the tropics tend to produce a bimodal distribution of mass spectrum exhibiting deep and shallow clouds. The bimodal distribution is further enhanced when the large-scale vertical motion is upward, and a nearly unimodal distribution of shallow clouds prevails when the relative cooling is compensated by the heating due to the large-scale subsidence. Both deep and shallow clouds are modulated by large-scale disturbances. The primary role of surface evaporation is to maintain the moisture flux at the cloud base.

  14. Ab initio folding of proteins using all-atom discrete molecular dynamics

    PubMed Central

    Ding, Feng; Tsao, Douglas; Nie, Huifen; Dokholyan, Nikolay V.

    2008-01-01

    Summary: Discrete molecular dynamics (DMD) is a rapid sampling method used in protein folding and aggregation studies. Until now, DMD was used to perform simulations of simplified protein models in conjunction with structure-based force fields. Here, we develop an all-atom protein model and a transferable force field featuring packing, solvation, and environment-dependent hydrogen bond interactions. Using the replica exchange method, we perform folding simulations of six small proteins (20–60 residues) with distinct native structures. In all cases, native or near-native states are reached in simulations. For three small proteins, multiple folding transitions are observed and the computationally-characterized thermodynamics are in quantitative agreement with experiments. The predictive power of all-atom DMD highlights the importance of environment-dependent hydrogen bond interactions in modeling protein folding. The developed approach can be used for accurate and rapid sampling of conformational spaces of proteins and protein-protein complexes, and applied to protein engineering and design of protein-protein interactions. PMID:18611374

  15. Physically-based in silico light sheet microscopy for visualizing fluorescent brain models

    PubMed Central

    2015-01-01

    Background: We present a physically-based computational model of the light sheet fluorescence microscope (LSFM). Based on Monte Carlo ray tracing and geometric optics, our method simulates the operational aspects and image formation process of the LSFM. This simulated, in silico LSFM creates synthetic images of digital fluorescent specimens that can resemble those generated by a real LSFM, as opposed to established visualization methods producing visually-plausible images. We also propose an accurate fluorescence rendering model which takes into account the intrinsic characteristics of fluorescent dyes to simulate the light interaction with fluorescent biological specimens. Results: We demonstrate first results of our visualization pipeline on a simplified brain tissue model reconstructed from the somatosensory cortex of a young rat. The modeling aspects of the LSFM units are qualitatively analysed, and the results of the fluorescence model were quantitatively validated against the fluorescence brightness equation and characteristic emission spectra of different fluorescent dyes. AMS subject classification: Modelling and simulation. PMID:26329404

  16. 2D and 3D X-ray phase retrieval of multi-material objects using a single defocus distance.

    PubMed

    Beltran, M A; Paganin, D M; Uesugi, K; Kitchen, M J

    2010-03-29

    A method of tomographic phase retrieval is developed for multi-material objects whose components each have a distinct complex refractive index. The phase-retrieval algorithm, based on the Transport-of-Intensity equation, utilizes propagation-based X-ray phase contrast images acquired at a single defocus distance for each tomographic projection. The method requires a priori knowledge of the complex refractive index of each material present in the sample, together with the total projected thickness of the object at each orientation. The requirement of only a single defocus distance per projection simplifies the experimental setup and imposes no additional dose compared to conventional tomography. The algorithm was implemented using phase contrast data acquired at the SPring-8 synchrotron facility in Japan. The three-dimensional (3D) complex refractive index distribution of a multi-material test object was quantitatively reconstructed using a single X-ray phase-contrast image per projection. The technique is robust in the presence of noise compared to conventional absorption-based tomography.
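The multi-material extension is the paper's contribution and is not reproduced here, but the underlying single-distance, single-material case is the well-known Paganin-type Transport-of-Intensity filter. A minimal sketch of that simpler filter, with illustrative parameter names and values not taken from the paper:

```python
import numpy as np

def paganin_thickness(intensity, pixel_size, dist, delta, beta, wavelength):
    """Single-material TIE phase retrieval (Paganin-type low-pass filter).

    intensity : flat-field-normalized image I/I0 at one defocus distance
    dist      : propagation (defocus) distance in meters
    delta,beta: refractive index decrement and absorption index of the material
    Returns the projected thickness map in meters.
    """
    mu = 4.0 * np.pi * beta / wavelength          # linear attenuation coefficient
    ny, nx = intensity.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=pixel_size)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=pixel_size)
    k2 = kx[None, :] ** 2 + ky[:, None] ** 2
    # Low-pass filter that undoes free-space propagation for one material
    filt = 1.0 / (1.0 + dist * delta / mu * k2)
    filtered = np.fft.ifft2(np.fft.fft2(intensity) * filt).real
    return -np.log(np.clip(filtered, 1e-12, None)) / mu
```

With `dist = 0` the filter is the identity and the expression reduces to plain Beer-Lambert absorption, which is a convenient sanity check.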

  17. Validated HPLC-Diode Array Detector Method for Simultaneous Evaluation of Six Quality Markers in Coffee.

    PubMed

    Gant, Anastasia; Leyva, Vanessa E; Gonzalez, Ana E; Maruenda, Helena

    2015-01-01

    Nicotinic acid, N-methylpyridinium ion, and trigonelline are well-studied nutritional biomarkers present in coffee, and they are indicators of thermal decomposition during roasting. However, no method is yet available for their simultaneous determination. This paper describes a rapid and validated HPLC-diode array detector method for the simultaneous quantitation of caffeine, trigonelline, nicotinic acid, N-methylpyridinium ion, 5-caffeoylquinic acid, and 5-hydroxymethylfurfural that is applicable to three coffee matrixes: green, roasted, and instant. Baseline separation among all compounds was achieved in 30 min using a phenyl-hexyl RP column (250×4.6 mm, 5 μm particle size), a 0.3% aqueous formic acid buffer (pH 2.4)-methanol mobile phase at a flow rate of 1 mL/min, and a column temperature of 30°C. The method showed good linearity (r2>0.9985), precision (<3.9%), sensitivity (LOD=0.023-0.237 μg/mL; LOQ=0.069-0.711 μg/mL), and recovery (84-102%) for all compounds. This simplified method is suitable for more complete routine evaluation of coffee in industry.

  18. Artificial cell mimics as simplified models for the study of cell biology.

    PubMed

    Salehi-Reyhani, Ali; Ces, Oscar; Elani, Yuval

    2017-07-01

    Living cells are hugely complex chemical systems composed of a milieu of distinct chemical species (including DNA, proteins, lipids, and metabolites) interconnected with one another through a vast web of interactions: this complexity renders the study of cell biology in a quantitative and systematic manner a difficult task. There has been an increasing drive towards the utilization of artificial cells as cell mimics to alleviate this, a development that has been aided by recent advances in artificial cell construction. Cell mimics are simplified cell-like structures, composed from the bottom-up with precisely defined and tunable compositions. They allow specific facets of cell biology to be studied in isolation, in a simplified environment where control of variables can be achieved without interference from a living and responsive cell. This mini-review outlines the core principles of this approach and surveys recent key investigations that use cell mimics to address a wide range of biological questions. It will also place the field in the context of emerging trends, discuss the associated limitations, and outline future directions of the field. Impact statement Recent years have seen an increasing drive to construct cell mimics and use them as simplified experimental models to replicate and understand biological phenomena in a well-defined and controlled system. By summarizing the advances in this burgeoning field, and using case studies as a basis for discussion on the limitations and future directions of this approach, it is hoped that this minireview will spur others in the experimental biology community to use artificial cells as simplified models with which to probe biological systems.

  19. Combining Ultrasound Pulse-Echo and Transmission Computed Tomography for Quantitative Imaging the Cortical Shell of Long Bone Replicas

    NASA Astrophysics Data System (ADS)

    Shortell, Matthew P.; Althomali, Marwan A. M.; Wille, Marie-Luise; Langton, Christian M.

    2017-11-01

    We demonstrate a simple technique for quantitative ultrasound imaging of the cortical shell of long bone replicas. Traditional ultrasound computed tomography instruments use the transmitted or reflected waves for separate reconstructions but suffer from strong refraction artefacts in highly heterogeneous samples such as bones in soft tissue. The technique described here simplifies the long bone to a two-component composite and uses both the transmitted and reflected waves for reconstruction, allowing the speed of sound and thickness of the cortical shell to be calculated accurately. The technique is simple to implement and computationally inexpensive, and sample positioning errors are minimal.
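The two-component simplification admits a closed-form solution: the pulse-echo round trip inside the cortex gives dt_echo = 2d/c, and the transmission time advance relative to the water path gives dt_trans = d/c_water - d/c, two equations in the two unknowns c (cortical sound speed) and d (thickness). A minimal sketch under that assumption; variable names and the water sound speed are illustrative, not the paper's:

```python
C_WATER = 1480.0  # assumed sound speed of the coupling medium, m/s

def cortical_speed_and_thickness(dt_echo, dt_trans):
    """Solve the two-component time-of-flight model.

    dt_echo  : round-trip time of the echo within the cortical layer, 2d/c
    dt_trans : transmission time advance vs. water, d/C_WATER - d/c
    Returns (c, d): cortical sound speed in m/s and thickness in m.
    """
    # Substitute d = c*dt_echo/2 into the transmission equation and solve:
    d = C_WATER * (dt_trans + dt_echo / 2.0)
    c = 2.0 * d / dt_echo
    return c, d
```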

  20. Methods and Applications of the Audibility Index in Hearing Aid Selection and Fitting

    PubMed Central

    Amlani, Amyn M.; Punch, Jerry L.; Ching, Teresa Y. C.

    2002-01-01

    During the first half of the 20th century, communications engineers at Bell Telephone Laboratories developed the articulation model for predicting speech intelligibility transmitted through different telecommunication devices under varying electroacoustic conditions. The profession of audiology adopted this model and its quantitative aspects, known as the Articulation Index and Speech Intelligibility Index, and applied these indices to the prediction of unaided and aided speech intelligibility in hearing-impaired listeners. Over time, the calculation methods of these indices—referred to collectively in this paper as the Audibility Index—have been continually refined and simplified for clinical use. This article provides (1) an overview of the basic principles and the calculation methods of the Audibility Index, the Speech Transmission Index and related indices, as well as the Speech Recognition Sensitivity Model, (2) a review of the literature on using the Audibility Index to predict speech intelligibility of hearing-impaired listeners, (3) a review of the literature on the applicability of the Audibility Index to the selection and fitting of hearing aids, and (4) a discussion of future scientific needs and clinical applications of the Audibility Index. PMID:25425917
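The core Audibility Index calculation is a band-importance-weighted sum of audibility, where each band's audibility is the fraction of an assumed 30-dB speech dynamic range that lies above the listener's threshold. A simplified sketch (the full ANSI S3.5 Speech Intelligibility Index adds masking spread and level-distortion terms not shown here):

```python
def audibility_index(speech_levels, thresholds, importances):
    """Band-importance-weighted audibility, AI/SII style.

    speech_levels : per-band speech peak levels (dB)
    thresholds    : per-band hearing thresholds (dB)
    importances   : per-band importance weights summing to 1
    """
    assert abs(sum(importances) - 1.0) < 1e-9
    ai = 0.0
    for level, thresh, w in zip(speech_levels, thresholds, importances):
        aud = (level - thresh) / 30.0       # fraction of 30-dB dynamic range
        ai += w * min(max(aud, 0.0), 1.0)   # clamp each band to [0, 1]
    return ai
```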

  1. Determination of supplemental feeding needs for astaxanthin and canthaxanthin in salmonids by supramolecular solvent-based microextraction and liquid chromatography-UV/VIS spectroscopy.

    PubMed

    Caballo, Carmen; Costi, Esther María; Sicilia, María Dolores; Rubio, Soledad

    2012-09-15

    Development of simple and rapid analytical methods for predicting supplemental feeding requirements in aquaculture is needed to reduce production costs. In this article, a supramolecular solvent (SUPRAS) made up of decanoic acid (DeA) assemblies was proposed to simplify sample treatment in the total and individual determination of carotenoids (red-pink pigments) in farmed salmonids. The analytes were quantitatively extracted in a single step that takes only a few minutes, using a small volume of SUPRAS (i.e. 800 μL), and determined directly in the extracts without interference from fats or other matrix components. Methods based on the combination of SUPRAS microextraction with photometry or HPLC-UV/VIS spectroscopy were developed for the determination of total and individual carotenoids, respectively. The applicability of the methods was demonstrated by analysing non-fortified and fortified samples of farmed Atlantic salmon and rainbow trout. Recoveries obtained by photometry and HPLC-UV/VIS spectroscopy were within the intervals 98-104% and 94-106%, respectively. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. Isoniazid Preventive Therapy among Children Living with Tuberculosis Patients: Is It Working? A Mixed-Method Study from Bhopal, India

    PubMed Central

    Singh, Akash Ranjan; Kharate, Atul; Bhat, Prashant; Kokane, Arun M; Bali, Surya; Sahu, Swaroop; Verma, Manoj; Nagar, Mukesh; Kumar, Ajay MV

    2017-01-01

    Abstract Objective We assessed uptake of isoniazid preventive therapy (IPT) among child contacts of smear-positive tuberculosis (TB) patients and its implementation challenges from healthcare providers’ and parents’ perspectives in Bhopal, India. Methods A mixed-method study design: a quantitative phase (review of programme records and a house-to-house survey of smear-positive TB patients) followed by a qualitative phase (interviews of healthcare providers and parents). Results Of 59 child contacts (<6 years) of 129 index patients, 51 were contacted. Among them, 19 of 51 (37%) were screened for TB and one had TB. Only 11 of 50 (22%) children were started on IPT and 10 of 50 (20%) completed it. Content analysis of the interviews revealed lack of awareness and risk perception among parents, a cumbersome screening process, isoniazid stock-outs, inadequate knowledge among healthcare providers and poor programmatic monitoring as the main barriers to IPT implementation. Conclusion The national TB programme should counsel parents, train healthcare providers, simplify screening procedures, ensure regular drug supply and introduce an indicator to strengthen monitoring and uptake of IPT. PMID:28082666

  3. Possibility-induced simplified neutrosophic aggregation operators and their application to multi-criteria group decision-making

    NASA Astrophysics Data System (ADS)

    Şahin, Rıdvan; Liu, Peide

    2017-07-01

    The simplified neutrosophic set (SNS) is an appropriate tool for expressing the incompleteness, indeterminacy and uncertainty of the evaluation objects in a decision-making process. In this study, we define the concept of a possibility SNS, which carries two types of information: the neutrosophic performance provided by the evaluation objects and its possibility degree, expressed as a value ranging from zero to one. Since the existing neutrosophic information aggregation models for SNSs cannot effectively fuse these two different types of information, we propose two novel neutrosophic aggregation operators that account for possibility, named the possibility-induced simplified neutrosophic weighted arithmetic averaging operator and the possibility-induced simplified neutrosophic weighted geometric averaging operator, and discuss their properties. Moreover, we develop a method based on the proposed aggregation operators for solving multi-criteria group decision-making problems with possibility simplified neutrosophic information, in which the weights of decision-makers and decision criteria are calculated using an entropy measure. Finally, a practical example is used to show the practicality and effectiveness of the proposed method.
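The abstract does not give the possibility-induced operators themselves; what can be sketched is the plain simplified neutrosophic weighted arithmetic averaging operator that they extend, as commonly defined in the SNS literature for triples (T, I, F) of truth, indeterminacy and falsity membership. A hedged illustration, not the paper's operator:

```python
from math import prod

def snwaa(values, weights):
    """Simplified neutrosophic weighted arithmetic averaging (common SNS form).

    values  : list of (T, I, F) triples, each component in [0, 1]
    weights : nonnegative weights summing to 1
    """
    t = 1.0 - prod((1.0 - T) ** w for (T, _, _), w in zip(values, weights))
    i = prod(I ** w for (_, I, _), w in zip(values, weights))
    f = prod(F ** w for (_, _, F), w in zip(values, weights))
    return (t, i, f)
```

Idempotency is a quick check: aggregating identical triples with any weights returns the same triple.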

  4. Rapid LC-MS/MS quantification of the major benzodiazepines and their metabolites on dried blood spots using a simple and cost-effective sample pretreatment.

    PubMed

    Déglon, Julien; Versace, François; Lauer, Estelle; Widmer, Christèle; Mangin, Patrice; Thomas, Aurélien; Staub, Christian

    2012-06-01

    Dried blood spots (DBS) sampling has gained popularity in the bioanalytical community as an alternative to conventional plasma sampling, as it provides numerous benefits in terms of sample collection and logistics. The aim of this work was to show that these advantages can be coupled with a simple and cost-effective sample pretreatment, with subsequent rapid LC-MS/MS analysis for quantitation of 15 benzodiazepines, six metabolites and three Z-drugs. For this purpose, a simplified offline procedure was developed that consisted of letting a 5-µl DBS infuse directly into 100 µl of MeOH, in a conventional LC vial. The parameters related to the DBS pretreatment, such as extraction time or internal standard addition, were investigated and optimized, demonstrating that passive infusion in a regular LC vial was sufficient to quantitatively extract the analytes of interest. The method was validated according to international criteria in the therapeutic concentration ranges of the selected compounds. The presented strategy proved to be efficient for the rapid analysis of the selected drugs. Indeed, the offline sample preparation was reduced to a minimum, using a small amount of organic solvent and consumables, without affecting the accuracy of the method. Thus, this approach enables simple and rapid DBS analysis, even when using a non-DBS-dedicated autosampler, while lowering the costs and environmental impact.

  5. Design and implementation of a fault-tolerant and dynamic metadata database for clinical trials

    NASA Astrophysics Data System (ADS)

    Lee, J.; Zhou, Z.; Talini, E.; Documet, J.; Liu, B.

    2007-03-01

    In recent imaging-based clinical trials, quantitative image analysis (QIA) and computer-aided diagnosis (CAD) methods have become more productive thanks to higher-resolution imaging capabilities. A radiology core conducting clinical trials has been analyzing more treatment methods, and there is a growing quantity of metadata that needs to be stored and managed. These radiology centers also collaborate with many off-site imaging field sites and need a way to communicate metadata with one another over a secure infrastructure. Our solution is to implement a data storage grid with a fault-tolerant and dynamic metadata database design to unify metadata from different clinical trial experiments and field sites. Although metadata from images follow the DICOM standard, clinical trials also produce metadata specific to regions-of-interest and quantitative image analysis. We have implemented a data access and integration (DAI) server layer through which multiple field sites can access multiple metadata databases in the data grid via a single web-based grid service. Centralizing metadata database management simplifies the task of adding new databases to the grid and also decreases the risk of the configuration errors seen in peer-to-peer grids. In this paper, we address the design and implementation of a data grid metadata store that provides fault tolerance and dynamic integration for imaging-based clinical trials.

  6. TROSY-based z-exchange spectroscopy: application to the determination of the activation energy for intermolecular protein translocation between specific sites on different DNA molecules.

    PubMed

    Sahu, Debashish; Clore, G Marius; Iwahara, Junji

    2007-10-31

    A two-dimensional TROSY-based z-exchange 1H-15N correlation experiment for the quantitative analysis of kinetic processes in the slow exchange regime is presented. The pulse scheme converts the product operator terms Nz into 2NzHz and 2NzHz into -Nz in the middle of the z-mixing period, thereby suppressing the buildup of spurious semi-TROSY peaks arising from the different relaxation rates for the Nz and 2NzHz terms and simplifying the behavior of longitudinal magnetization for an exchanging system during the mixing period. Theoretical considerations and experimental data demonstrate that the TROSY-based z-exchange experiment permits quantitative determination of rate constants using the same procedure as that for the conventional non-TROSY 15Nz-exchange experiment. Line narrowing as a consequence of the use of the TROSY principle makes the method particularly suitable for kinetic studies at low temperature, thereby permitting activation energies to be extracted from data acquired over a wider temperature range. We applied this method to the investigation of the process whereby the HoxD9 homeodomain translocates between specific target sites on different DNA molecules via a direct transfer mechanism without going through the intermediary of free protein. The activation enthalpy for intermolecular translocation was determined to be 17 kcal/mol.

  7. Fiber-optic laser-induced fluorescence probe for the detection of environmental pollutants

    NASA Astrophysics Data System (ADS)

    Bublitz, J.; Dickenhausen, M.; Grätz, M.; Todt, S.; Schade, W.

    1995-06-01

    Laser-induced fluorescence (LIF) spectroscopy in combination with fiber optics is shown to be a powerful tool for qualitative and quantitative diagnostics of environmental pollutants in water and soil. Time-integrated data accumulation of the LIF signals in early and late time windows with respect to the excitation pulse simplifies the method so that it becomes attractive for practical applications. Results from field measurements are reported, in which oil contamination under a gas station and in an industrial sewer system was investigated. A KrF-excimer laser and a hydrogen Raman shifter can be applied for multiwavelength excitation. This allows discrimination between benzene, toluene, xylene, and ethylbenzene aromatics and polycyclic aromatic hydrocarbon molecules in the samples under investigation. As a first theoretical approach, a computer simulation was developed to describe the experimental results.

  8. Quantifying exploratory low dose compounds in humans with AMS

    PubMed Central

    Dueker, Stephen R.; Vuong, Le T.; Lohstroh, Peter N.; Giacomo, Jason A.; Vogel, John S.

    2010-01-01

    Accelerator Mass Spectrometry (AMS) is an established technology whose value extends beyond simply being a better detector for radiolabeled molecules. Attomole sensitivity reduces radioisotope exposures in clinical subjects to the point that no population need be excluded from clinical study. Insights into human physiochemistry are enabled by the quantitative recovery of simplified AMS processes, which provide biological concentrations of all labeled metabolites and of total compound-related material at non-saturating levels. In this paper, we review some of the exploratory applications of AMS 14C in toxicological, nutritional, and pharmacological research. This body of research addresses the human physiochemistry of important compounds in their own right, but also serves as an example of the analytical methods and clinical practices available for studying the low-dose physiochemistry of candidate therapeutic compounds, helping to broaden the knowledge base of AMS application in pharmaceutical research. PMID:21047543

  9. Acoustic field modulation in regenerators

    NASA Astrophysics Data System (ADS)

    Hu, J. Y.; Wang, W.; Luo, E. C.; Chen, Y. Y.

    2016-12-01

    The regenerator is a key component that transfers energy between heat and work. The conversion efficiency is significantly influenced by the acoustic field in the regenerator. Much effort has been spent to quantitatively determine this influence, but few comprehensive experimental verifications have been performed because of difficulties in modulating and measuring the acoustic field. In this paper, a method that uses two compressors to achieve acoustic field modulation in the regenerator is introduced and theoretically investigated. One compressor outputs the acoustic power for the regenerator; the other acts as a phase shifter. An RC load dissipates the acoustic power leaving both the regenerator and the latter compressor. The acoustic field can be modulated by adjusting the current in the two compressors and the opening of the RC load. The acoustic field is measured with pressure sensors instead of flow-field imaging equipment, thereby greatly simplifying the experiment.

  10. Fault Diagnostics for Turbo-Shaft Engine Sensors Based on a Simplified On-Board Model

    PubMed Central

    Lu, Feng; Huang, Jinquan; Xing, Yaodong

    2012-01-01

    Combining a simplified on-board turbo-shaft model with sensor fault diagnostic logic, a model-based sensor fault diagnosis method is proposed. The existing fault diagnosis approach for key turbo-shaft engine sensors relies mainly on dual-redundancy techniques, which cannot always resolve a fault because two disagreeing channels leave no basis for judgment, while adding hardware redundancy increases structural complexity and weight. The simplified on-board model instead provides an analytical third channel against which the dual-channel measurements are compared. The simplified turbo-shaft model contains the gas generator model and the power turbine model with loads, and is built up via the dynamic parameters method. Sensor fault detection and diagnosis (FDD) logic is designed, and two types of sensor failures, step faults and drift faults, are simulated. When the discrepancy among the triplex channels exceeds a tolerance level, the fault diagnosis logic determines the cause of the difference. Through this approach, the sensor fault diagnosis system achieves the objectives of anomaly detection, sensor fault diagnosis and redundancy recovery. Finally, experiments on this method were carried out on a turbo-shaft engine, and two types of faults under different channel combinations are presented. The experimental results show that the proposed method for sensor fault diagnostics is efficient. PMID:23112645

  11. Fault diagnostics for turbo-shaft engine sensors based on a simplified on-board model.

    PubMed

    Lu, Feng; Huang, Jinquan; Xing, Yaodong

    2012-01-01

    Combining a simplified on-board turbo-shaft model with sensor fault diagnostic logic, a model-based sensor fault diagnosis method is proposed. The existing fault diagnosis approach for key turbo-shaft engine sensors relies mainly on dual-redundancy techniques, which cannot always resolve a fault because two disagreeing channels leave no basis for judgment, while adding hardware redundancy increases structural complexity and weight. The simplified on-board model instead provides an analytical third channel against which the dual-channel measurements are compared. The simplified turbo-shaft model contains the gas generator model and the power turbine model with loads, and is built up via the dynamic parameters method. Sensor fault detection and diagnosis (FDD) logic is designed, and two types of sensor failures, step faults and drift faults, are simulated. When the discrepancy among the triplex channels exceeds a tolerance level, the fault diagnosis logic determines the cause of the difference. Through this approach, the sensor fault diagnosis system achieves the objectives of anomaly detection, sensor fault diagnosis and redundancy recovery. Finally, experiments on this method were carried out on a turbo-shaft engine, and two types of faults under different channel combinations are presented. The experimental results show that the proposed method for sensor fault diagnostics is efficient.

  12. A simplified method for elastic-plastic-creep structural analysis

    NASA Technical Reports Server (NTRS)

    Kaufman, A.

    1984-01-01

    A simplified inelastic analysis computer program (ANSYPM) was developed for predicting the stress-strain history at the critical location of a thermomechanically cycled structure from an elastic solution. The program uses an iterative and incremental procedure to estimate the plastic strains from the material stress-strain properties and a plasticity hardening model. Creep effects are calculated on the basis of stress relaxation at constant strain, creep at constant stress or a combination of stress relaxation and creep accumulation. The simplified method was exercised on a number of problems involving uniaxial and multiaxial loading, isothermal and nonisothermal conditions, dwell times at various points in the cycles, different materials and kinematic hardening. Good agreement was found between these analytical results and nonlinear finite element solutions for these problems. The simplified analysis program used less than 1 percent of the CPU time required for a nonlinear finite element analysis.

  13. A simplified method for elastic-plastic-creep structural analysis

    NASA Technical Reports Server (NTRS)

    Kaufman, A.

    1985-01-01

    A simplified inelastic analysis computer program (ANSYPM) was developed for predicting the stress-strain history at the critical location of a thermomechanically cycled structure from an elastic solution. The program uses an iterative and incremental procedure to estimate the plastic strains from the material stress-strain properties and a plasticity hardening model. Creep effects are calculated on the basis of stress relaxation at constant strain, creep at constant stress or a combination of stress relaxation and creep accumulation. The simplified method was exercised on a number of problems involving uniaxial and multiaxial loading, isothermal and nonisothermal conditions, dwell times at various points in the cycles, different materials and kinematic hardening. Good agreement was found between these analytical results and nonlinear finite element solutions for these problems. The simplified analysis program used less than 1 percent of the CPU time required for a nonlinear finite element analysis.

  14. 3D TOCSY-HSQC NMR for metabolic flux analysis using non-uniform sampling

    DOE PAGES

    Reardon, Patrick N.; Marean-Reardon, Carrie L.; Bukovec, Melanie A.; ...

    2016-02-05

    13C-Metabolic Flux Analysis (13C-MFA) is rapidly being recognized as the authoritative method for determining fluxes through metabolic networks. Site-specific 13C enrichment information obtained using NMR spectroscopy is a valuable input for 13C-MFA experiments. Chemical shift overlaps in the 1D or 2D NMR experiments typically used for 13C-MFA frequently hinder assignment and quantitation of site-specific 13C enrichment. Here we propose the use of a 3D TOCSY-HSQC experiment for 13C-MFA. We employ Non-Uniform Sampling (NUS) to reduce the acquisition time of the experiment to a few hours, making it practical for use in 13C-MFA experiments. Our data show that the NUS experiment is linear and quantitative. Identification of metabolites in complex mixtures, such as a biomass hydrolysate, is simplified by virtue of the 13C chemical shift obtained in the experiment. In addition, the experiment reports 13C-labeling information that reveals the position-specific labeling of subsets of isotopomers. As a result, the information provided by this technique will enable more accurate estimation of metabolic fluxes in larger metabolic networks.

  15. Towards a non-invasive quantitative analysis of the organic components in museum objects varnishes by vibrational spectroscopies: methodological approach.

    PubMed

    Daher, Céline; Pimenta, Vanessa; Bellot-Gurlet, Ludovic

    2014-11-01

    The compositions of ancient varnishes are mainly determined destructively by separation methods coupled to mass spectrometry. In this study, a methodology for non-invasive quantitative analysis of varnishes by vibrational spectroscopies is proposed. To this end, simplified experimental varnishes of colophony and linseed oil were prepared according to 18th-century traditional recipes with increasing mass concentration ratios of colophony to linseed oil. FT-Raman and IR analyses, using ATR and non-invasive reflectance modes, were performed on the "pure" materials and on the different mixtures. Then, a new approach involving spectral decomposition was developed, treating each mixture spectrum as a linear combination of the pure-material spectra and yielding the relative amount of each component. Specific spectral regions were treated, and the obtained results show good agreement between the prepared and calculated amounts of the two compounds. We were thus able to detect and quantify from 10% to 50% of colophony in linseed oil using non-invasive techniques that can also be conducted in situ with portable instruments on varnished museum objects and artifacts. Copyright © 2014 Elsevier B.V. All rights reserved.
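The spectral-decomposition step described, fitting a mixture spectrum as a linear combination of the two pure-component spectra, amounts to a constrained least-squares problem. A minimal sketch on synthetic spectra; function and variable names are illustrative, and the paper's region selection and calibration details are not reproduced:

```python
import numpy as np

def unmix(mixture, pure_a, pure_b):
    """Least-squares decomposition of a mixture spectrum into two
    pure-component spectra; returns the fractional contribution of A."""
    design = np.column_stack([pure_a, pure_b])      # one column per component
    coeffs, *_ = np.linalg.lstsq(design, mixture, rcond=None)
    coeffs = np.clip(coeffs, 0.0, None)             # enforce non-negativity
    return coeffs[0] / coeffs.sum()
```

On real data one would restrict the fit to the informative spectral regions, as the authors do, rather than use the full range.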

  16. 3D-PTV around Operational Wind Turbines

    NASA Astrophysics Data System (ADS)

    Brownstein, Ian; Dabiri, John

    2016-11-01

    Laboratory studies and numerical simulations of wind turbines are typically constrained in how they can inform operational turbine behavior. Laboratory experiments are usually unable to match both pertinent parameters of full-scale wind turbines, the Reynolds number (Re) and the tip speed ratio, using scaled-down models. Additionally, numerical simulations of the flow around wind turbines are constrained by the large domain size and high Re that need to be simulated. When these simulations are performed, turbine geometry is typically simplified, with the result that flow structures near the rotor are not well resolved. In order to bypass these limitations, a quantitative flow visualization method was developed to take in situ measurements of the flow around wind turbines at the Field Laboratory for Optimized Wind Energy (FLOWE) in Lancaster, CA. The apparatus constructed was able to seed an approximately 9m x 9m x 5m volume in the wake of the turbine using artificial snow. Quantitative measurements were obtained by tracking the evolution of the artificial snow using a four-camera setup. The methodology for calibrating and collecting data, as well as preliminary results detailing the flow around a 2kW vertical-axis wind turbine (VAWT), will be presented.

  17. Random-walk mobility analysis of Lisbon's plans for the post-1755 reconstruction

    NASA Astrophysics Data System (ADS)

    de Sampayo, Mafalda Teixeira; Sousa-Rodrigues, David

    2016-11-01

    The different options for the reconstruction of the city of Lisbon in the aftermath of the 1755 earthquake are studied with an agent-based model based on random walks. This method gives a comparative quantitative measure of the mobility of the circulation spaces within the city. The plans proposed for the city of Lisbon signified a departure from the medieval mobility model of the city. The intricacy of the old city's circulation spaces is greatly reduced in the new plans, and mobility between different areas is substantially improved. The simulation results of the random-walk model show that the plans keeping the main force lines of the old city presented less improvement in terms of mobility. The plans that had greater design freedom were, by contrast, easier to navigate. Lisbon's reconstruction followed a plan that included a shift in the traditional notions of mobility. This affected the daily lives of its citizens by providing easy access to the waterfront and simplifying orientation and navigability. Using the random-walk model, it is shown how to quantitatively measure the potential that synthetic plans have in terms of the permeability and navigability of different city public spaces.
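A random-walk mobility measure of the kind described can be sketched on a simple grid abstraction of the street network, scoring a plan by how much of its walkable space random walkers cover within a fixed step budget. All details below are illustrative, not the authors' model:

```python
import random

def mobility_score(passable, steps, trials, seed=0):
    """Fraction of passable cells visited by random walkers within a fixed
    step budget; higher scores suggest easier circulation.

    passable : set of (x, y) grid cells representing walkable street space
    steps    : step budget per walker
    trials   : number of independent walkers
    """
    rng = random.Random(seed)       # fixed seed for reproducible comparisons
    cells = sorted(passable)
    visited = set()
    for _ in range(trials):
        pos = rng.choice(cells)
        visited.add(pos)
        for _ in range(steps):
            x, y = pos
            moves = [m for m in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
                     if m in passable]
            if moves:
                pos = rng.choice(moves)
                visited.add(pos)
    return len(visited) / len(passable)
```

Comparing scores for two plan layouts under the same seed and budget gives the kind of relative mobility measure the study uses.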

  18. Equivalent model optimization with cyclic correction approximation method considering parasitic effect for thermoelectric coolers.

    PubMed

    Wang, Ning; Chen, Jiajun; Zhang, Kun; Chen, Mingming; Jia, Hongzhi

    2017-11-21

    As thermoelectric coolers (TECs) have become highly integrated in high-heat-flux chips and high-power devices, the parasitic effect between component layers has become increasingly significant. In this paper, a cyclic correction method for the TEC model is proposed using the equivalent parameters of a proposed simplified model, which are refined from the intrinsic parameters and the parasitic thermal conductance. The results show that the simplified model agrees well with the data of a commercial TEC under different heat loads. Furthermore, the temperature difference predicted by the simplified model is closer to the experimental data than that of the conventional model and the model containing parasitic thermal conductance at large heat loads. The average errors in the temperature difference between the proposed simplified model and the experimental data are no more than 1.6 K, and the error is only 0.13 K when the absorbed heat power Qc is equal to 80% of the maximum achievable absorbed heat power Qmax. The proposed method and model provide a more accurate solution for integrated TECs that are small in size.
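The conventional model that the paper corrects is the textbook steady-state TEC balance, in which the cooling power is the Peltier term minus half the Joule heating minus the thermal back-conduction. A minimal sketch of that baseline (the paper's parasitic-conductance and cyclic-correction terms are not reproduced; all parameter values are illustrative):

```python
def tec_cooling_power(seebeck, resistance, conductance, current, t_cold, t_hot):
    """Textbook steady-state TEC model: Qc = a*I*Tc - 0.5*I^2*R - K*(Th - Tc).

    seebeck     : module Seebeck coefficient a (V/K)
    resistance  : electrical resistance R (ohm)
    conductance : thermal conductance K (W/K)
    current     : drive current I (A)
    t_cold/t_hot: cold- and hot-side temperatures (K)
    Returns the absorbed heat power Qc (W).
    """
    peltier = seebeck * current * t_cold
    joule_backflow = 0.5 * current ** 2 * resistance
    conduction = conductance * (t_hot - t_cold)
    return peltier - joule_backflow - conduction
```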

  19. Simplified Least Squares Shadowing sensitivity analysis for chaotic ODEs and PDEs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chater, Mario, E-mail: chaterm@mit.edu; Ni, Angxiu, E-mail: niangxiu@mit.edu; Wang, Qiqi, E-mail: qiqi@mit.edu

    This paper develops a variant of the Least Squares Shadowing (LSS) method, which has successfully computed the derivative for several chaotic ODEs and PDEs. The development in this paper aims to simplify the Least Squares Shadowing method by improving how time dilation is treated. Instead of adding an explicit time dilation term as in the original method, the new variant uses windowing, which can be more efficient and simpler to implement, especially for PDEs.

  20. Highly simplified lateral flow-based nucleic acid sample preparation and passive fluid flow control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cary, Robert E.

    2015-12-08

    Highly simplified lateral flow chromatographic nucleic acid sample preparation methods, devices, and integrated systems are provided for the efficient concentration of trace samples and the removal of nucleic acid amplification inhibitors. Methods for capturing and reducing inhibitors of nucleic acid amplification reactions, such as humic acid, using polyvinylpyrrolidone treated elements of the lateral flow device are also provided. Further provided are passive fluid control methods and systems for use in lateral flow assays.

  2. Preformulation considerations for controlled release dosage forms. Part II. Selected candidate support.

    PubMed

    Chrzanowski, Frank

    2008-01-01

    Practical examples of preformulation support of the form selected for formulation development are provided using several drug substances (DSs). The examples include determination of solubilities vs. pH, particularly for the range pH 1 to 8 because of its relationship to gastrointestinal (GI) conditions and dissolution method development. The advantages of equilibrium solubility and trial solubility methods are described: the equilibrium method is related to detecting polymorphism, and the trial solubility method to simplifying difficult solubility problems. An example of two polymorphs existing in mixtures of DS is presented in which one of the forms is very unstable. Accelerated stability studies are used in conjunction with HPLC and quantitative X-ray powder diffraction (QXRD) to demonstrate the differences in chemical and polymorphic stabilities. The results from two model excipient compatibility methods are compared to determine which has better predictive accuracy for room temperature stability. A DSC (calorimetric) method and an isothermal stress with quantitative analysis (ISQA) method that simulates wet granulation conditions were compared using a 2-year room temperature sample set as reference. An example of a pH-stability profile for understanding stability and extrapolating stability to other environments is provided. The pH stability of omeprazole and lansoprazole, which are extremely unstable in acidic and even mildly acidic conditions, is related to the formulation of delayed release dosage forms and the resolution of the problem of free carboxyl groups from the enteric coating polymers reacting with the DSs. Dissolution method requirements for CR dosage forms are discussed. The applicability of a modified disintegration time (DT) apparatus for supporting CR dosage form development of a pH-sensitive DS at a specific pH, such as duodenal pH 5.6, is described. This method is applicable for DSs such as peptides, proteins, enzymes, and natural products, where physical observation can be used in place of a difficult-to-perform analytical method, saving resources and providing rapid preformulation support.

  3. Photographic and drafting techniques simplify method of producing engineering drawings

    NASA Technical Reports Server (NTRS)

    Provisor, H.

    1968-01-01

    A combination of photographic and drafting techniques has been developed to simplify the preparation of three-dimensional and dimetric engineering drawings. Conventional photographs can be converted to line drawings by making copy negatives on high-contrast film.

  4. A Monte-Carlo game theoretic approach for Multi-Criteria Decision Making under uncertainty

    NASA Astrophysics Data System (ADS)

    Madani, Kaveh; Lund, Jay R.

    2011-05-01

    Game theory provides a useful framework for studying Multi-Criteria Decision Making problems. This paper suggests modeling Multi-Criteria Decision Making problems as strategic games and solving them using non-cooperative game theory concepts. The suggested method can be used to prescribe non-dominated solutions and to predict the outcome of a decision making problem. Non-cooperative stability definitions for solving the games allow consideration of non-cooperative behaviors, often neglected by other methods which assume perfect cooperation among decision makers. To deal with uncertainty in the input variables, a Monte-Carlo Game Theory (MCGT) approach is suggested which maps the stochastic problem into many deterministic strategic games. The games are solved using non-cooperative stability definitions, and the results include the possible effects of uncertainty in the input variables on the outcomes. The method can handle multi-criteria, multi-decision-maker problems with uncertainty. The suggested method does not require criteria weighting, development of a compound decision objective, or accurate quantitative (cardinal) information, as it simplifies the decision analysis by solving problems based on qualitative (ordinal) information, reducing the computational burden substantially. The MCGT method is applied to analyze California's Sacramento-San Joaquin Delta problem. The suggested method provides insights, identifies non-dominated alternatives, and predicts likely decision outcomes.
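The MCGT idea summarized above, which is sampling many deterministic games and solving each with a non-cooperative stability notion, can be sketched as follows. Pure-strategy Nash equilibrium stands in for the stability definitions, and the noisy prisoner's-dilemma payoffs are an invented example, not the Delta case study:

```python
import itertools
import random

def pure_nash(p1, p2):
    """Pure-strategy Nash equilibria of a 2-player game with payoff
    matrices p1[i][j], p2[i][j] (higher is better for each player)."""
    rows, cols = len(p1), len(p1[0])
    eq = []
    for i, j in itertools.product(range(rows), range(cols)):
        best_row = all(p1[i][j] >= p1[k][j] for k in range(rows))
        best_col = all(p2[i][j] >= p2[i][l] for l in range(cols))
        if best_row and best_col:
            eq.append((i, j))
    return eq

def monte_carlo_games(sample_payoffs, n=500, seed=1):
    """Map a stochastic game into n deterministic games and count how
    often each cell is a non-cooperative (Nash) equilibrium."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(n):
        p1, p2 = sample_payoffs(rng)
        for cell in pure_nash(p1, p2):
            counts[cell] = counts.get(cell, 0) + 1
    return counts

def noisy_dilemma(rng):
    """Invented prisoner's-dilemma payoffs with uncertain entries."""
    eps = lambda: rng.uniform(-0.2, 0.2)
    p1 = [[3 + eps(), 0 + eps()], [5 + eps(), 1 + eps()]]
    p2 = [[3 + eps(), 5 + eps()], [0 + eps(), 1 + eps()]]
    return p1, p2

counts = monte_carlo_games(noisy_dilemma)
```

The frequency with which each outcome appears across the sampled games indicates how robust that predicted decision outcome is to input uncertainty.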

  5. Interference coupling analysis based on a hybrid method: application to a radio telescope system

    NASA Astrophysics Data System (ADS)

    Xu, Qing-Lin; Qiu, Yang; Tian, Jin; Liu, Qi

    2018-02-01

    Because it works by passively receiving electromagnetic radiation from celestial bodies, a radio telescope can be easily disturbed by external radio frequency interference as well as by electromagnetic interference generated by electric and electronic components operating at the telescope site. These interferences must be analyzed quantitatively and carefully for further electromagnetic protection of the radio telescope. In this paper, based on electromagnetic topology theory, a hybrid method that combines the Baum-Liu-Tesche (BLT) equation and the transfer function is proposed. In this method, the coupling paths of the radio telescope are divided into strong-coupling and weak-coupling sub-paths, and a coupling intensity criterion is proposed by analyzing the conditions under which the BLT equation simplifies to a transfer function. According to the coupling intensity criterion, the topological model of a typical radio telescope system is established. The proposed method is used to solve the interference response of the radio telescope system by analyzing subsystems with different coupling modes separately and then integrating the responses of the subsystems as the response of the entire system. The validity of the proposed method is verified numerically. The results indicate that the proposed method, compared with the direct solving method, reduces the difficulty and improves the efficiency of interference prediction.

  6. Simplified method for numerical modeling of fiber lasers.

    PubMed

    Shtyrina, O V; Yarutkina, I A; Fedoruk, M P

    2014-12-29

    A simplified numerical approach to the modeling of dissipative dispersion-managed fiber lasers is examined. We present a new numerical iteration algorithm for finding the periodic solutions of the system of nonlinear ordinary differential equations describing the intra-cavity dynamics of the dissipative soliton characteristics in dispersion-managed fiber lasers. We demonstrate that the results obtained using the simplified model are in good agreement with full numerical modeling based on the corresponding partial differential equations.

  7. A Simplified Approach to Cloud Masking with VIIRS in the S-NPP/JPSS Era

    NASA Technical Reports Server (NTRS)

    Jedlovec, Gary J.; Lafontaine, Frank J.

    2014-01-01

    The quantitative detection of clouds in satellite imagery has a number of important applications in weather analysis. The proper interpretation of satellite imagery for improved situational awareness depends on knowing where the clouds are at all times of the day. Additionally, many products derived from infrared measurements need accurate cloud information to mask out regions where retrieval of geophysical parameters in the atmosphere or on the surface is not possible. Thus, the accurate detection of the presence of clouds in satellite imagery on a global basis is important to product developers and the operational weather community to support their decision-making processes. This abstract describes an application of a two-channel bispectral composite threshold (BCT) approach applied to VIIRS imagery. The simplified BCT approach uses only the 10.76 and 3.75 micrometer spectral channels in two spectral tests: a straightforward infrared threshold test with the longwave channel, and a shortwave-minus-longwave channel difference test. The key to the success of this approach, as demonstrated in past applications to GOES and MODIS data, is the generation of temporally and spatially dependent test thresholds from several previous days of observations at times similar to the current data. The presentation will provide an overview of the approach and intercomparison results with other satellites and methods, as well as against verification data.
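A minimal sketch of the two-test BCT logic described above. Scalar thresholds and the sample brightness temperatures are invented for illustration; the operational method derives temporally and spatially dependent thresholds from several previous days of observations:

```python
import numpy as np

def bct_cloud_mask(t_lw, t_sw, lw_thresh, diff_thresh):
    """Two-test bispectral composite threshold sketch: a pixel is
    flagged cloudy if the longwave (10.76 um) brightness temperature
    is colder than its threshold, OR if the shortwave-minus-longwave
    (3.75 um - 10.76 um) difference exceeds its threshold."""
    ir_test = t_lw < lw_thresh            # cold cloud tops
    diff_test = (t_sw - t_lw) > diff_thresh  # low cloud / reflective signal
    return ir_test | diff_test

t_lw = np.array([290.0, 250.0, 288.0])   # longwave brightness temps (K), invented
t_sw = np.array([291.0, 252.0, 300.0])   # shortwave brightness temps (K), invented
mask = bct_cloud_mask(t_lw, t_sw, lw_thresh=270.0, diff_thresh=8.0)
```

Here the second pixel fails the infrared test (cold cloud top) and the third fails the channel-difference test, so both are masked as cloudy.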

  8. Orbital-selective Mott phases of a one-dimensional three-orbital Hubbard model studied using computational techniques

    DOE PAGES

    Liu, Guangkun; Kaushal, Nitin; Liu, Shaozhi; ...

    2016-06-24

    A recently introduced one-dimensional three-orbital Hubbard model displays orbital-selective Mott phases with exotic spin arrangements such as spin block states [J. Rincón et al., Phys. Rev. Lett. 112, 106405 (2014)]. In this paper we show that the constrained-path quantum Monte Carlo (CPQMC) technique can accurately reproduce the phase diagram of this multiorbital one-dimensional model, paving the way to future CPQMC studies in systems with more challenging geometries, such as ladders and planes. The success of this approach relies on using the Hartree-Fock technique to prepare the trial states needed in CPQMC. In addition, we study a simplified version of the model where the pair-hopping term is neglected and the Hund coupling is restricted to its Ising component. The corresponding phase diagrams are shown to be only mildly affected by the absence of these technically difficult-to-implement terms. This is confirmed by additional density matrix renormalization group and determinant quantum Monte Carlo calculations carried out for the same simplified model, with the latter displaying only mild fermion sign problems. Lastly, we conclude that these methods are able to capture quantitatively the rich physics of the several orbital-selective Mott phases (OSMP) displayed by this model, thus enabling computational studies of the OSMP regime in higher dimensions, beyond static or dynamic mean-field approximations.

  9. Methods for the field evaluation of quantitative G6PD diagnostics: a review.

    PubMed

    Ley, Benedikt; Bancone, Germana; von Seidlein, Lorenz; Thriemer, Kamala; Richards, Jack S; Domingo, Gonzalo J; Price, Ric N

    2017-09-11

    Individuals with glucose-6-phosphate dehydrogenase (G6PD) deficiency are at risk of severe haemolysis following the administration of 8-aminoquinoline compounds. Primaquine is the only widely available 8-aminoquinoline for the radical cure of Plasmodium vivax. Tafenoquine is under development with the potential to simplify treatment regimens, but point-of-care (PoC) tests will be needed to provide quantitative measurement of G6PD activity prior to its administration. There is currently a lack of appropriate G6PD PoC tests, but a number of new tests are in development and are likely to enter the market in the coming years. As these are implemented, they will need to be validated in field studies. This article outlines the technical details for the field evaluation of novel quantitative G6PD diagnostics, such as sample handling, reference testing and statistical analysis. Field evaluation is based on the comparison of paired samples, including one sample tested by the new assay at the point of care and one sample tested by the gold-standard reference method, UV spectrophotometry, in an established laboratory. Samples can be collected as capillary or venous blood; the existing literature suggests that potential differences between capillary and venous blood are unlikely to affect results substantially. The collection and storage of samples is critical to ensure preservation of enzyme activity; it is recommended that samples be stored at 4 °C and tested within 4 days of collection. Test results can be visually presented as a scatter plot, a Bland-Altman plot, and a histogram of the G6PD activity distribution of the study population. Calculating the adjusted male median allows results to be categorized according to G6PD activity, standard performance indicators to be calculated, and receiver operating characteristic (ROC) analysis to be performed.
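The adjusted-male-median calculation mentioned above can be sketched as follows. The 10% exclusion cut-off and the 30%/70% activity categories are the commonly used conventions in the G6PD literature; treat them, and the sample activities, as assumptions for illustration rather than values prescribed by this article:

```python
import statistics

def adjusted_male_median(male_activities):
    """Adjusted male median (AMM): median male G6PD activity after
    excluding severely deficient samples (<10% of the overall median).
    The AMM then defines 100% activity for the study population."""
    overall = statistics.median(male_activities)
    kept = [a for a in male_activities if a >= 0.1 * overall]
    return statistics.median(kept)

def categorize(activity, amm):
    """Classify a sample relative to the AMM-defined 100% activity."""
    if activity < 0.3 * amm:
        return "deficient"
    if activity < 0.7 * amm:
        return "intermediate"
    return "normal"

males = [8.0, 7.5, 0.5, 8.5, 7.0]  # U/g Hb, illustrative values
amm = adjusted_male_median(males)   # the 0.5 sample is excluded first
```

Categorized results can then feed the standard performance indicators and ROC analysis the review describes.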

  10. Human fetal lung morphometry at autopsy with new modeling to quantitate structural maturity.

    PubMed

    Lipsett, Jill

    2017-06-01

    The aim was to demonstrate a simplified morphometric procedure, including a new model for acinar structural maturity, applicable to autopsy fetal lung, and to present reference values for these parameters. Cases with autopsy consent for research were studied. To simplify analysis, only critical morphometric parameters were measured to allow calculation of gas-exchange surface area. A total of 58 fetuses, 16-40 weeks, were included. Subjects were excluded if they had any condition predisposing to pulmonary hypo/hyperplasia or significant maceration, or if lung weight/bodyweight or microscopy identified pulmonary hypoplasia or lung growth disorders. Lungs were inflation fixed, weights and volumes determined, sampled, then returned to the body. Volume densities (V_V) of parenchyma/non-parenchyma and air-space/gas-exchange tissue, gas-exchange surface density (S_V), and total surface area (SA) were determined. The number, mean radius, and septal thickness of modeled airspace-spheres were calculated. Equations were generated for each parameter as a function of gestation and bodyweight. From 16 to 40 weeks, weights and volumes increased as power functions from ∼4 g/mL to ∼90 g/mL. Parenchyma/non-parenchyma changed little: 75:25 (16 weeks) to 71:29 (term). Parenchyma was 10% airspace:90% tissue early and 50:50 by term. Gas-exchange S_V increased from 175 to 450 cm^2/cm^3 and total SA increased from 0.059 to 4.793 m^2. There were 3.31 × 10^6 airspace-spheres of 12 µm radius and 30 µm septal thickness at 16 weeks, increasing to 56.92 × 10^6 spheres of 26 µm radius and 13 µm septal thickness by term. Morphometry can feasibly be performed at autopsy, providing more informative quantitative data on lung structural development than the methods currently utilized. This reference data set compares well with published data. © 2017 Wiley Periodicals, Inc.
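The basic stereological arithmetic behind such estimates, where the total gas-exchange surface area is the surface density multiplied by the parenchymal reference volume, can be sketched as follows. The input values are illustrative round numbers, not the paper's data:

```python
def total_surface_area_m2(s_v_cm2_per_cm3, lung_volume_ml, parenchyma_fraction):
    """Stereological estimate: total surface area = surface density (S_V)
    x parenchymal reference volume. Converts cm^2 to m^2 (1 m^2 = 1e4 cm^2)."""
    sa_cm2 = s_v_cm2_per_cm3 * lung_volume_ml * parenchyma_fraction
    return sa_cm2 / 1e4

# Illustrative near-term values: S_V = 450 cm^2/cm^3, lung volume 90 mL,
# parenchyma fraction 0.71 (the V_V of parenchyma)
sa = total_surface_area_m2(450.0, 90.0, 0.71)
```

The same volume-density and surface-density inputs also drive the airspace-sphere model (number, radius, septal thickness) described in the abstract.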

  11. Impacts of heterogeneous organic matter on phenanthrene sorption--Different soil and sediment samples

    USGS Publications Warehouse

    Karapanagioti, Hrissi K.; Childs, Jeffrey; Sabatini, David A.

    2001-01-01

    Organic petrography has been proposed as a tool for characterizing the heterogeneous organic matter present in soil and sediment samples. A new simplified method is proposed as a quantitative means of interpreting the observed sorption behavior of phenanthrene with different soils and sediments based on their organic petrographical characterization. This method is tested under single-solute conditions at a phenanthrene concentration of 1 μg/L. Since the opaque organic matter fraction dominates the sorption process, we propose that by quantifying this fraction one can interpret the organic-content-normalized sorption distribution coefficient (Koc) values of a sample. While this method was developed and tested for various samples within the same aquifer, in the current study the method is validated for soil and sediment samples from different sites that cover a wide range of organic matter origin, age, and organic content. All 10 soil and sediment samples studied had log Koc values for the opaque particles between 5.6 and 6.8. This range of Koc values illustrates the heterogeneity of opaque particles between sites and geological formations, and thus the need to characterize the opaque fraction of materials on a site-by-site basis.
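The Koc normalization underlying this interpretation, plus a hypothetical two-domain weighting by organic-matter fraction, can be sketched as below. The two-domain function and all numeric values are assumptions for illustration, not the paper's exact formulation or data:

```python
import math

def koc(kd, f_oc):
    """Organic-carbon-normalized distribution coefficient: Koc = Kd / f_oc,
    where Kd is the sorption distribution coefficient (L/kg) and f_oc the
    organic-carbon mass fraction of the sample."""
    return kd / f_oc

def log_koc(kd, f_oc):
    return math.log10(koc(kd, f_oc))

def kd_two_domain(f_oc_opaque, koc_opaque, f_oc_other, koc_other):
    """Sketch of a two-domain interpretation: each organic-matter fraction
    contributes its own Koc, weighted by its organic-carbon fraction
    (an illustrative assumption)."""
    return f_oc_opaque * koc_opaque + f_oc_other * koc_other

sample_koc = koc(12.0, 0.003)          # Kd = 12 L/kg, f_oc = 0.3%
kd_pred = kd_two_domain(0.001, 1e6, 0.009, 1e3)
```

In the two-domain sketch, a tiny opaque fraction with log Koc ≈ 6 dominates the predicted Kd, which is the qualitative point the abstract makes.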

  12. Identification of Pseudallescheria and Scedosporium species by three molecular methods.

    PubMed

    Lu, Qiaoyun; Gerrits van den Ende, A H G; Bakkers, J M J E; Sun, Jiufeng; Lackner, M; Najafzadeh, M J; Melchers, W J G; Li, Ruoyu; de Hoog, G S

    2011-03-01

    The major clinically relevant species in Scedosporium (teleomorph Pseudallescheria) are Pseudallescheria boydii, Scedosporium aurantiacum, Scedosporium apiospermum, and Scedosporium prolificans, while Pseudallescheria minutispora, Petriellopsis desertorum, and Scedosporium dehoogii are exceptional agents of disease. Three molecular methods targeting the partial β-tubulin gene were developed and evaluated to identify six closely related species of the S. apiospermum complex using quantitative real-time PCR (qPCR), PCR-based reverse line blot (PCR-RLB), and loop-mediated isothermal amplification (LAMP). qPCR was not specific enough for the identification of all species but had the highest sensitivity. The PCR-RLB assay was efficient for the identification of five species. LAMP distinguished all six species unambiguously. The analytical sensitivities of qPCR, PCR-RLB, and LAMP combined with MagNAPure, CTAB (cetyltrimethylammonium bromide), and FTA filter (Whatman) extraction were 50, 5 × 10^3, and 5 × 10^2 cells/μl, respectively. When LAMP was combined with a simplified DNA extraction method using an FTA filter, identification to the species level was achieved within 2 h, including DNA extraction. The FTA-LAMP assay is therefore recommended as a cost-effective, simple, and rapid method for the identification of Scedosporium species.

  13. Determination of halonitromethanes and haloacetamides: an evaluation of sample preservation and analyte stability in drinking water.

    PubMed

    Liew, Deborah; Linge, Kathryn L; Joll, Cynthia A; Heitz, Anna; Charrois, Jeffrey W A

    2012-06-08

    Simultaneous quantitation of 6 halonitromethanes (HNMs) and 5 haloacetamides (HAAms) was achieved with a simplified liquid-liquid extraction (LLE) method, followed by gas chromatography-mass spectrometry. Stability tests showed that brominated tri-HNMs degraded immediately in the presence of ascorbic acid, sodium sulphite and sodium borohydride, and also decreased in samples treated with ammonium chloride or left unpreserved. Both ammonium chloride and ascorbic acid were suitable for the preservation of HAAms. Ammonium chloride was most suitable for preserving both HNMs and HAAms, although it is recommended that samples be analysed as soon as possible after collection. While groundwater samples exhibited a greater analytical bias compared to other waters, the good recoveries (>90%) of most analytes in tap water suggest that the method is very appropriate for determining these analytes in treated drinking waters. Application of the method to water from three drinking water treatment plants in Western Australia indicated that N-DBP formation did occur, with increased detections after chlorination. The method is recommended for low-cost, rapid screening of both HNMs and HAAms in drinking water. Copyright © 2012 Elsevier B.V. All rights reserved.

  14. A randomized controlled trial of the different impression methods for the complete denture fabrication: Patient reported outcomes.

    PubMed

    Jo, Ayami; Kanazawa, Manabu; Sato, Yusuke; Iwaki, Maiko; Akiba, Norihisa; Minakuchi, Shunsuke

    2015-08-01

    To compare the effect of conventional complete dentures (CDs) fabricated using two different impression methods on patient-reported outcomes in a randomized controlled trial (RCT). A cross-over RCT was performed with edentulous patients requiring maxillomandibular CDs. Mandibular CDs were fabricated using two different methods. The conventional method used a custom tray border-moulded with impression compound and a silicone impression material; the simplified method used a stock tray and an alginate impression material. Participants were randomly divided into two groups. The C-S group had the conventional method used first, followed by the simplified method; the S-C group was in the reverse order. Adjustment was performed four times. A wash-out period of 1 month was set. The primary outcome was general patient satisfaction, measured using visual analogue scales, and the secondary outcome was oral health-related quality of life, measured using the Japanese version of the Oral Health Impact Profile for edentulous patients (OHIP-EDENT-J) questionnaire scores. Twenty-four participants completed the trial. With regard to general patient satisfaction, the conventional method was significantly more acceptable than the simplified method. No significant differences were observed between the two methods in the OHIP-EDENT-J scores. This study showed that CDs fabricated with the conventional method were rated significantly more highly for general patient satisfaction than those fabricated with the simplified method. CDs fabricated with the conventional method, which included a preliminary impression made using alginate in a stock tray and subsequently a final impression made using silicone in a border-moulded custom tray, resulted in higher general patient satisfaction. UMIN000009875. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Dynamic characteristics and simplified numerical methods of an all-vertical-piled wharf in offshore deep water

    NASA Astrophysics Data System (ADS)

    Zhang, Hua-qing; Sun, Xi-ping; Wang, Yuan-zhan; Yin, Ji-long; Wang, Chao-yang

    2015-10-01

    There has been a growing trend in the development of offshore deep-water ports in China. For such deep sea projects, all-vertical-piled wharves are suitable structures and generally located in open waters, greatly affected by wave action. Currently, no systematic studies or simplified numerical methods are available for deriving the dynamic characteristics and dynamic responses of all-vertical-piled wharves under wave cyclic loads. In this article, we compare the dynamic characteristics of an all-vertical-piled wharf with those of a traditional inshore high-piled wharf through numerical analysis; our research reveals that the vibration period of an all-vertical-piled wharf under cyclic loading is longer than that of an inshore high-piled wharf and is much closer to the period of the loading wave. Therefore, dynamic calculation and analysis should be conducted when designing and calculating the characteristics of an all-vertical-piled wharf. We establish a dynamic finite element model to examine the dynamic response of an all-vertical-piled wharf under wave cyclic loads and compare the results with those under wave equivalent static load; the comparison indicates that dynamic amplification of the structure is evident when the wave dynamic load effect is taken into account. Furthermore, a simplified dynamic numerical method for calculating the dynamic response of an all-vertical-piled wharf is established based on the P-Y curve. Compared with finite element analysis, the simplified method is more convenient to use and applicable to large structural deformation while considering the soil non-linearity. We confirmed that the simplified method has acceptable accuracy and can be used in engineering applications.

  16. Simplified multiple scattering model for radiative transfer in turbid water

    NASA Technical Reports Server (NTRS)

    Ghovanlou, A. H.; Gupta, G. N.

    1978-01-01

    Quantitative analytical procedures for relating selected water quality parameters to the characteristics of backscattered signals measured by remote sensors require the solution of the radiative transport equation in turbid media. An approximate closed-form solution of this equation is presented, and based on this solution the remote sensing of sediments is discussed. The results are compared with other standard closed-form solutions, such as quasi-single scattering approximations.

  17. Iterative Addition of Kinetic Effects to Cold Plasma RF Wave Solvers

    NASA Astrophysics Data System (ADS)

    Green, David; Berry, Lee; RF-SciDAC Collaboration

    2017-10-01

    The hot nature of fusion plasmas requires a wave-vector-dependent conductivity tensor for accurate calculation of wave heating and current drive. Traditional methods for calculating the linear, kinetic full-wave plasma response rely on a spectral method such that the wave-vector-dependent conductivity fits naturally within the numerical method. These methods have seen much success for application to the well-confined core plasma of tokamaks. However, quantitative prediction of high-power RF antenna designs for fusion applications has meant a requirement of resolving the geometric details of the antenna and other plasma-facing surfaces, for which the Fourier spectral method is ill-suited. An approach to enabling the addition of kinetic effects to the more versatile finite-difference and finite-element cold-plasma full-wave solvers has previously been presented, in which an operator-split iterative method was outlined. Here we expand on this approach, examine convergence, and present a simplified kinetic current estimator for rapidly updating the right-hand side of the wave equation with kinetic corrections. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.
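The operator-split iteration outlined above can be sketched abstractly: solve the cold-plasma system, estimate the kinetic current from the resulting field, move it to the right-hand side, and repeat until the field stops changing. The toy linear "solver" and "current estimator" below are stand-ins to show the fixed-point structure, not a wave solver:

```python
import numpy as np

def solve_with_kinetic_correction(solve_cold, kinetic_current, b,
                                  tol=1e-10, max_iter=100):
    """Operator-split sketch: iterate x_{n+1} = solve_cold(b + J_kin(x_n))
    until the relative change in the field is below tol."""
    x = solve_cold(b)
    for _ in range(max_iter):
        x_new = solve_cold(b + kinetic_current(x))
        if np.linalg.norm(x_new - x) <= tol * max(np.linalg.norm(x_new), 1.0):
            return x_new
        x = x_new
    return x

solve_cold = lambda rhs: rhs / 2.0    # stand-in for the cold-plasma solve (A = 2I)
kinetic_current = lambda x: 0.5 * x   # stand-in kinetic current estimator
x = solve_with_kinetic_correction(solve_cold, kinetic_current, np.array([3.0]))
```

For this toy contraction the iteration converges to the fixed point x = b/1.5; convergence of the real scheme depends on the kinetic correction being a sufficiently weak perturbation of the cold-plasma operator.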

  18. Automated high-throughput purification of genomic DNA from plant leaf or seed using MagneSil paramagnetic particles

    NASA Astrophysics Data System (ADS)

    Bitner, Rex M.; Koller, Susan C.

    2004-06-01

    Three different methods of automated high-throughput purification of genomic DNA from plant materials processed in 96-well plates are described. One method uses MagneSil paramagnetic particles to purify DNA present in single leaf punch samples or small seed samples, using 320 μl capacity 96-well plates, which minimizes reagent and plate costs. A second method uses 2.2 ml and 1.2 ml capacity plates and allows the purification of larger amounts of DNA from 5-6 punches of material or larger amounts of seeds. The third method uses the MagneSil ONE purification system to purify a fixed amount of DNA, thus simplifying the processing of downstream applications by normalizing the amounts of DNA so that they do not require quantitation. Protocols for the purification of a fixed yield of DNA, e.g. 1 μg, from plant leaf or seed samples using MagneSil paramagnetic particles and a Beckman-Coulter BioMek FX robot are described. DNA from all three methods is suitable for applications such as PCR, RAPD, STR, READIT SNP analysis, and multiplexed PCR systems. The MagneSil ONE system is also suitable for use with SNP detection systems such as Third Wave Technology's Invader methods.

  19. Getting Innovative Therapies Faster to Patients at the Right Dose: Impact of Quantitative Pharmacology Towards First Registration and Expanding Therapeutic Use.

    PubMed

    Nayak, Satyaprakash; Sander, Oliver; Al-Huniti, Nidal; de Alwis, Dinesh; Chain, Anne; Chenel, Marylore; Sunkaraneni, Soujanya; Agrawal, Shruti; Gupta, Neeraj; Visser, Sandra A G

    2018-03-01

    Quantitative pharmacology (QP) applications in translational medicine, drug-development, and therapeutic use were crowd-sourced by the ASCPT Impact and Influence initiative. Highlighted QP case studies demonstrated faster access to innovative therapies for patients through 1) rational dose selection for pivotal trials; 2) reduced trial-burden for vulnerable populations; or 3) simplified posology. Critical success factors were proactive stakeholder engagement, alignment on the value of model-informed approaches, and utilizing foundational clinical pharmacology understanding of the therapy. © 2018 The Authors Clinical Pharmacology & Therapeutics published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.

  20. 48 CFR 713.000 - Scope of part.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Scope of part. 713.000 Section 713.000 Federal Acquisition Regulations System AGENCY FOR INTERNATIONAL DEVELOPMENT CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES 713.000 Scope of part. The simplified...

  1. 48 CFR 713.000 - Scope of part.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Scope of part. 713.000 Section 713.000 Federal Acquisition Regulations System AGENCY FOR INTERNATIONAL DEVELOPMENT CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES 713.000 Scope of part. The simplified...

  2. Leveraging unsupervised training sets for multi-scale compartmentalization in renal pathology

    NASA Astrophysics Data System (ADS)

    Lutnick, Brendon; Tomaszewski, John E.; Sarder, Pinaki

    2017-03-01

    Clinical pathology relies on manual compartmentalization and quantification of biological structures, which is time consuming and often error-prone. Application of computer vision segmentation algorithms to histopathological image analysis, in contrast, can offer fast, reproducible, and accurate quantitative analysis to aid pathologists. Algorithms tunable to different biologically relevant structures can allow accurate, precise, and reproducible estimates of disease states. In this direction, we have developed a fast, unsupervised computational method for simultaneously separating all biologically relevant structures from histopathological images in multi-scale. Segmentation is achieved by solving an energy optimization problem. Representing the image as a graph, nodes (pixels) are grouped by minimizing a Potts model Hamiltonian, adopted from theoretical physics, modeling interacting electron spins. Pixel relationships (modeled as edges) are used to update the energy of the partitioned graph. By iteratively improving the clustering, the optimal number of segments is revealed. To reduce computational time, the graph is simplified using a Cantor pairing function to intelligently reduce the number of included nodes. The classified nodes are then used to train a multiclass support vector machine to apply the segmentation over the full image. Accurate segmentations of images with as many as 10^6 pixels can be completed in only 5 s, allowing for attainable multi-scale visualization. To establish clinical potential, we employed our method on renal biopsies to quantitatively visualize, for the first time, scale-variant compartments of heterogeneous intra- and extraglomerular structures simultaneously. Implications of the utility of our method extend to fields such as oncology, genomics, and non-biological problems.
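Two of the ingredients named above are easy to show concretely: the Cantor pairing function, which folds a pair of non-negative integers (e.g. node indices) into a single unique id, and a Potts-model energy over a labeled graph. Both are written generically; the paper's actual graph construction and minimization are more involved:

```python
import math

def cantor_pair(k1, k2):
    """Cantor pairing function: a bijection N x N -> N."""
    return (k1 + k2) * (k1 + k2 + 1) // 2 + k2

def cantor_unpair(z):
    """Inverse of the Cantor pairing function."""
    w = (math.isqrt(8 * z + 1) - 1) // 2
    t = w * (w + 1) // 2
    k2 = z - t
    return w - k2, k2

def potts_energy(labels, edges, J=1.0):
    """Potts-model Hamiltonian: each edge whose endpoints share a label
    contributes -J, so minimizing the energy groups similar nodes."""
    return -J * sum(labels[i] == labels[j] for i, j in edges)
```

Lower Potts energy corresponds to a more coherent clustering; iteratively relabeling nodes to reduce this energy is the optimization the abstract describes.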

  3. Single-Scale Fusion: An Effective Approach to Merging Images.

    PubMed

    Ancuti, Codruta O; Ancuti, Cosmin; De Vleeschouwer, Christophe; Bovik, Alan C

    2017-01-01

    Due to its robustness and effectiveness, multi-scale fusion (MSF) based on the Laplacian pyramid decomposition has emerged as a popular technique that has shown utility in many applications. Guided by several intuitive measures (weight maps), the MSF process is versatile and straightforward to implement. However, the number of pyramid levels increases with the image size, which implies sophisticated data management and memory accesses, as well as additional computations. Here, we introduce a simplified formulation that reduces MSF to a single-level process. Starting from the MSF decomposition, we explain both mathematically and intuitively (visually) a way to simplify the classical MSF approach with minimal loss of information. The resulting single-scale fusion (SSF) solution is a close approximation of the MSF process that eliminates important redundant computations. It also provides insights regarding why MSF is so effective. While our simplified expression is derived in the context of high dynamic range imaging, we show its generality on several well-known fusion-based applications, such as image compositing, extended depth of field, medical imaging, and blending thermal (infrared) images with visible light. Besides visual validation, quantitative evaluations demonstrate that our SSF strategy is able to yield results that are highly competitive with traditional MSF approaches.
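    As a point of reference for what weight-map-guided fusion computes at a single level, the following NumPy sketch performs a naive per-pixel weighted blend. This is only an illustrative baseline, not the SSF approximation derived in the paper (which is constructed from the Laplacian pyramid decomposition):

    ```python
    import numpy as np

    def naive_weighted_fusion(images, weights, eps=1e-12):
        """Fuse images by per-pixel normalized weight maps (single level).

        images, weights: lists of equal-shape 2-D float arrays.
        """
        w = np.stack(weights).astype(float)
        w /= w.sum(axis=0) + eps          # normalize weights per pixel
        imgs = np.stack(images).astype(float)
        return (w * imgs).sum(axis=0)
    ```

    Applied directly at full resolution, this kind of blend produces halo artifacts at weight-map transitions, which is precisely the problem the multi-scale (and, per the paper, single-scale) formulations address.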

  4. Determination of the antileukemic drug mitoguazone and seven other closely related bis(amidinohydrazones) in human blood serum by high-performance liquid chromatography.

    PubMed

    Koskinen, M; Elo, H; Lukkari, P; Riekkola, M L

    1996-10-11

    A reversed-phase (C18) HPLC method with diode-array detection was developed for the separation and determination of methylglyoxal bis(amidinohydrazone) (mitoguazone) and seven closely related aliphatic analogs thereof, namely the bis(amidinohydrazones) of glyoxal, dimethylglyoxal, ethylmethylglyoxal, methylpropylglyoxal, butylmethylglyoxal, diethylglyoxal and dipropylglyoxal. The mobile phase consisted of a non-linear binary gradient of methanol and 0.03 M aqueous sodium acetate buffer (pH 4.3). Good separation of the eight congeners was achieved. On increasing the methanol content of the eluent, the bis(amidinohydrazones) eluted in order of increasing number of carbon atoms in the side-chains. The method was also applied to the quantitative analysis of the compounds in aqueous solution and, combined with ultrafiltration, to the separation of the eight congeners in spiked human blood serum. A separate simplified method for the quantitative determination of each of the compounds in spiked human blood serum samples was also developed. The methods developed made possible, for the first time, the simultaneous HPLC analysis of more than one bis(amidinohydrazone). The results obtained indicate that the bis(amidinohydrazones) studied evidently have a distinct tendency to form ion associates with acetate ions, and probably also with other carboxylate ions, in aqueous solution. This aspect may be of biochemical significance, especially concerning the intracellular binding of the compounds. Each of the compounds studied invariably gave rise to one peak only, supporting the theory that the conventional synthesis of each compound yields a single geometrical isomer. This result is fully in agreement with previous proton and carbon NMR spectroscopic as well as X-ray diffraction studies.

  5. Simplified enzymatic high-performance anion exchange chromatographic determination of total fructans in food and pet food-limitations and measurement uncertainty.

    PubMed

    Stöber, Paul; Bénet, Sylvie; Hischenhuber, Claudia

    2004-04-21

    A simplified method to determine total fructans in food and pet food has been developed and validated. It follows the principle of AOAC method 997.08, i.e., high-performance anion exchange chromatographic (HPAEC) determination of total fructose released from fructans (F(f)) and total glucose released from fructans (G(f)) after enzymatic fructan hydrolysis. Unlike AOAC method 997.08, calculation of total fructans is based on the determination of F(f) alone. This is motivated by the inherent difficulty of accurately determining low amounts of G(f), since many food and pet food products contain other sources of total glucose (e.g., starch and sucrose). In this case, a correction factor g can be used (1.05 by default) to take into account the theoretical contribution of G(f). At levels >5% of total fructans and in commercial fructan ingredients, both F(f) and G(f) can and should be accurately determined; hence, no correction factor g is required. The method is suitable to quantify total fructans in various food and pet food products at concentrations ≥0.2%, provided that the product does not contain other significant sources of total fructose such as free fructose or sucrose. Recovery rates in commercial fructan ingredients and in selected food and pet food ranged from 97 to 102%. As part of a measurement uncertainty estimation study, individual contributions to the total uncertainty (u) of the total fructan content were identified and quantified by using the validation data available. As a result, a correlation between the sucrose content and the total uncertainty of the total fructan content was established, allowing us to define a limit of quantitation as a function of the sucrose content. One can conclude that this method is limited to food products where the sucrose content does not exceed about three times the total fructan content.
Despite this limitation, which is inherent to any total fructan method based on the same approach, this procedure represents an excellent compromise with regard to accuracy, applicability, and convenience.

  6. A simplified parsimonious higher order multivariate Markov chain model with new convergence condition

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Yang, Chuan-sheng

    2017-09-01

    In this paper, we present a simplified parsimonious higher-order multivariate Markov chain model with a new convergence condition (TPHOMMCM-NCC). Moreover, an estimation method for the parameters of TPHOMMCM-NCC is given. Numerical experiments illustrate the effectiveness of TPHOMMCM-NCC.

  7. Compiler-aided systematic construction of large-scale DNA strand displacement circuits using unpurified components

    NASA Astrophysics Data System (ADS)

    Thubagere, Anupama J.; Thachuk, Chris; Berleant, Joseph; Johnson, Robert F.; Ardelean, Diana A.; Cherry, Kevin M.; Qian, Lulu

    2017-02-01

    Biochemical circuits made of rationally designed DNA molecules are proofs of concept for embedding control within complex molecular environments. They hold promise for transforming the current technologies in chemistry, biology, medicine and material science by introducing programmable and responsive behaviour to diverse molecular systems. As the transformative power of a technology depends on its accessibility, two main challenges are an automated design process and simple experimental procedures. Here we demonstrate the use of circuit design software, combined with the use of unpurified strands and simplified experimental procedures, for creating a complex DNA strand displacement circuit that consists of 78 distinct species. We develop a systematic procedure for overcoming the challenges involved in using unpurified DNA strands. We also develop a model that takes synthesis errors into consideration and semi-quantitatively reproduces the experimental data. Our methods now enable even novice researchers to successfully design and construct complex DNA strand displacement circuits.

  8. Influence of novel oral anticoagulants on anticoagulation care management.

    PubMed

    Janzic, Andrej; Kos, Mitja

    2017-09-01

    Anticoagulation treatment was recently improved by the introduction of novel oral anticoagulants (NOACs). Using a combination of qualitative and quantitative methods, this study explores the effects of the introduction of NOACs on anticoagulation care in Slovenia. Face-to-face interviews with key stakeholders revealed the evolution and challenges of anticoagulation care from different perspectives. The information obtained was further explored through analysis of nationwide data on drug prescriptions and the realization of health care services. Simplified management of anticoagulation treatment with NOACs and their high penetration expanded the capacity of anticoagulation clinics, and consequently the treated population increased by more than 50% in the last 5 years. The main challenge concerned the expenditures for medicines, which increased approximately 10 times in just a few years. At the same time, the anticoagulation clinics and their core organisation were not affected, which is not expected to change, since they are vital in delivering high-quality care.

  9. Energy saving strategies of honeybees in dipping nectar

    PubMed Central

    Wu, Jianing; Yang, Heng; Yan, Shaoze

    2015-01-01

    The honeybee’s drinking process has generally been simplified because of its high speed and small scale. In this study, we clearly observed the drinking cycle of the Italian honeybee using a specially designed high-speed camera system. We analysed the pattern of glossal hair erection and the movement kinematics of the protracting tongue (glossa). Results showed that the honeybee used two special protraction strategies to save energy. First, the glossal hairs remain adpressed until the end of the protraction, which indicates that the hydraulic resistance is reduced to less than 1/3 of what it would be if the hairs remained erect. Second, the glossa protracts with a specific velocity profile, and we quantitatively demonstrated that this moving strategy helps reduce the total energy needed for protraction compared with the typical form of protraction with constant acceleration and deceleration. These findings suggest effective methods to optimise the control policies employed by next-generation microfluidic pumps. PMID:26446300

  10. Improved building up a model of toxicity towards Pimephales promelas by the Monte Carlo method.

    PubMed

    Toropova, Alla P; Toropov, Andrey A; Raskova, Maria; Raska, Ivan

    2016-12-01

    By optimizing the so-called correlation weights of attributes of the simplified molecular input-line entry system (SMILES), quantitative structure-activity relationship (QSAR) models for toxicity towards Pimephales promelas are established. A new SMILES attribute has been utilized in this work. This attribute is a molecular descriptor, which reflects (i) the presence of different kinds of bonds (double, triple, and stereochemical bonds); (ii) the presence of nitrogen, oxygen, sulphur, and phosphorus atoms; and (iii) the presence of fluorine, chlorine, bromine, and iodine atoms. The statistical characteristics of the best model are the following: n = 226, r² = 0.7630, RMSE = 0.654 (training set); n = 114, r² = 0.7024, RMSE = 0.766 (calibration set); n = 226, r² = 0.6292, RMSE = 0.870 (validation set). A new criterion to select a preferable split into the training and validation sets is suggested and discussed. Copyright © 2016 Elsevier B.V. All rights reserved.
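    The attribute described (bond kinds, heteroatoms, halogens) can be illustrated with a toy counter over a SMILES string. The exact encoding and correlation-weight scheme used by the authors are not reproduced here; the function below is a hypothetical sketch:

    ```python
    def smiles_attribute(smiles: str) -> tuple:
        """Toy descriptor counting the features the abstract mentions:
        special bond symbols, N/O/S/P atoms, and halogens in a SMILES string."""
        bonds = sum(smiles.count(c) for c in "=#@/\\")       # double/triple/stereo bonds
        hetero = sum(smiles.count(c) for c in "NOSP")        # aliphatic heteroatoms
        hetero += sum(smiles.count(c) for c in "nosp")       # aromatic heteroatoms
        halo = sum(smiles.count(s) for s in ("F", "Cl", "Br", "I"))
        return bonds, hetero, halo
    ```

    For example, benzoyl chloride, `O=C(Cl)c1ccccc1`, has one double bond symbol, one oxygen, and one chlorine.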

  11. Computational Analysis of Behavior.

    PubMed

    Egnor, S E Roian; Branson, Kristin

    2016-07-08

    In this review, we discuss the emerging field of computational behavioral analysis: the use of modern methods from computer science and engineering to quantitatively measure animal behavior. We discuss aspects of experiment design important both to obtaining biologically relevant behavioral data and to enabling the use of machine vision and learning techniques for automation. These two goals are often in conflict. Restraining or restricting the environment of the animal can simplify automatic behavior quantification, but it can also degrade the quality or alter important aspects of behavior. To enable biologists to design experiments to obtain better behavioral measurements, and computer scientists to pinpoint fruitful directions for algorithm improvement, we review known effects of artificial manipulation of the animal on behavior. We also review machine vision and learning techniques for tracking, feature extraction, automated behavior classification, and automated behavior discovery, the assumptions they make, and the types of data they work best with.

  12. Energy saving strategies of honeybees in dipping nectar.

    PubMed

    Wu, Jianing; Yang, Heng; Yan, Shaoze

    2015-10-08

    The honeybee's drinking process has generally been simplified because of its high speed and small scale. In this study, we clearly observed the drinking cycle of the Italian honeybee using a specially designed high-speed camera system. We analysed the pattern of glossal hair erection and the movement kinematics of the protracting tongue (glossa). Results showed that the honeybee used two special protraction strategies to save energy. First, the glossal hairs remain adpressed until the end of the protraction, which indicates that the hydraulic resistance is reduced to less than 1/3 of what it would be if the hairs remained erect. Second, the glossa protracts with a specific velocity profile, and we quantitatively demonstrated that this moving strategy helps reduce the total energy needed for protraction compared with the typical form of protraction with constant acceleration and deceleration. These findings suggest effective methods to optimise the control policies employed by next-generation microfluidic pumps.

  13. Photochemistry and Transmission Pump-Probe Spectroscopy of 2-Azidobiphenyls in Aqueous Nanocrystalline Suspensions: Simplified Kinetics in Crystalline Solids.

    PubMed

    Chung, Tim S; Ayitou, Anoklase J-L; Park, Jin H; Breslin, Vanessa M; Garcia-Garibay, Miguel A

    2017-04-20

    Aqueous nanocrystalline suspensions provide a simple and efficient medium for performing transmission spectroscopy measurements in the solid state. In this Letter we describe the use of laser flash photolysis methods to analyze the photochemistry of 2-azidobiphenyl and several aryl-substituted derivatives. We show that all the crystalline compounds analyzed in this study transform quantitatively into carbazole products via a crystal-to-crystal reconstructive phase transition. While the initial steps of the reaction cannot be followed within the time resolution of our instrument (ca. 8 ns), we detected the primary isocarbazole photoproducts and analyzed the kinetics of their formal 1,5-H shift reactions, which take place in time scales that range from a few nanoseconds to several microseconds. It is worth noting that the high reaction selectivity observed in the crystalline state translates into a clean and simple kinetic process compared to that in solution.

  14. Coarse-Grained Structural Modeling of Molecular Motors Using Multibody Dynamics

    PubMed Central

    Parker, David; Bryant, Zev; Delp, Scott L.

    2010-01-01

    Experimental and computational approaches are needed to uncover the mechanisms by which molecular motors convert chemical energy into mechanical work. In this article, we describe methods and software to generate structurally realistic models of molecular motor conformations compatible with experimental data from different sources. Coarse-grained models of molecular structures are constructed by combining groups of atoms into a system of rigid bodies connected by joints. Contacts between rigid bodies enforce excluded volume constraints, and spring potentials model system elasticity. This simplified representation allows the conformations of complex molecular motors to be simulated interactively, providing a tool for hypothesis building and quantitative comparisons between models and experiments. In an example calculation, we have used the software to construct atomically detailed models of the myosin V molecular motor bound to its actin track. The software is available at www.simtk.org. PMID:20428469

  15. Spacelab experiment computer study. Volume 1: Executive summary (presentation)

    NASA Technical Reports Server (NTRS)

    Lewis, J. L.; Hodges, B. C.; Christy, J. O.

    1976-01-01

    A quantitative cost for various Spacelab flight hardware configurations is provided, along with varied software development options. A cost analysis of Spacelab computer hardware and software is presented. The cost study is based on utilization of a central experiment computer with optional auxiliary equipment. The groundrules and assumptions used in deriving the costing methods for all options in the Spacelab experiment study are presented; these groundrules and assumptions are analysed, and the options, along with their cost considerations, are discussed. It is concluded that Spacelab program cost for software development and maintenance is independent of experimental hardware and software options, that the distributed standard computer concept simplifies software integration without a significant increase in cost, and that decisions on flight computer hardware configurations should not be made until payload selection for a given mission and a detailed analysis of the mission requirements are completed.

  16. Unimolecular decomposition reactions at low-pressure: A comparison of competitive methods

    NASA Technical Reports Server (NTRS)

    Adams, G. F.

    1980-01-01

    The lack of a simple rate coefficient expression to describe the pressure and temperature dependence hampers chemical modeling of flame systems. Recently developed simplified models to describe unimolecular processes include the calculation of rate constants for thermal unimolecular reactions and recombinations at the low pressure limit, at the high pressure limit and in the intermediate fall-off region. Comparison between two different applications of Troe's simplified model and a comparison between the simplified model and the classic RRKM theory are described.

  17. Application of a simplified theory of ELF propagation to a simplified worldwide model of the ionosphere

    NASA Astrophysics Data System (ADS)

    Behroozi-Toosi, A. B.; Booker, H. G.

    1980-12-01

    The simplified theory of ELF wave propagation in the earth-ionosphere transmission lines developed by Booker (1980) is applied to a simplified worldwide model of the ionosphere. The theory, which involves the comparison of the local vertical refractive index gradient with the local wavelength in order to classify the altitude into regions of low and high gradient, is used for a model of electron and negative ion profiles in the D and E regions below 150 km. Attention is given to the frequency dependence of ELF propagation at a middle latitude under daytime conditions, the daytime latitude dependence of ELF propagation at the equinox, the effects of sunspot, seasonal and diurnal variations on propagation, nighttime propagation neglecting and including propagation above 100 km, and the effect on daytime ELF propagation of a sudden ionospheric disturbance. The numerical values obtained by the method for the propagation velocity and attenuation rate are shown to be in general agreement with the analytic Naval Ocean Systems Center computer program. It is concluded that the method employed gives more physical insights into propagation processes than any other method, while requiring less effort and providing maximal accuracy.

  18. Principles, performance, and applications of spectral reconstitution (SR) in quantitative analysis of oils by Fourier transform infrared spectroscopy (FT-IR).

    PubMed

    García-González, Diego L; Sedman, Jacqueline; van de Voort, Frederik R

    2013-04-01

    Spectral reconstitution (SR) is a dilution technique developed to facilitate the rapid, automated, and quantitative analysis of viscous oil samples by Fourier transform infrared spectroscopy (FT-IR). This technique involves determining the dilution factor through measurement of an absorption band of a suitable spectral marker added to the diluent, and then spectrally removing the diluent from the sample and multiplying the resulting spectrum to compensate for the effect of dilution on the band intensities. The facsimile spectrum of the neat oil thus obtained can then be qualitatively or quantitatively analyzed for the parameter(s) of interest. The quantitative performance of the SR technique was examined with two transition-metal carbonyl complexes as spectral markers, chromium hexacarbonyl and methylcyclopentadienyl manganese tricarbonyl. The estimation of the volume fraction (VF) of the diluent in a model system, consisting of canola oil diluted to various extents with odorless mineral spirits, served as the basis for assessment of these markers. The relationship between the VF estimates and the true volume fraction (VF(t)) was found to be strongly dependent on the dilution ratio and also depended, to a lesser extent, on the spectral resolution. These dependences are attributable to the effect of changes in matrix polarity on the bandwidth of the ν(CO) marker bands. Excellent VF(t) estimates were obtained by making a polarity correction devised with a variance-spectrum-delineated correction equation. In the absence of such a correction, SR was shown to introduce only a minor and constant bias, provided that polarity differences among all the diluted samples analyzed were minimal. This bias can be built into the calibration of a quantitative FT-IR analytical method by subjecting appropriate calibration standards to the same SR procedure as the samples to be analyzed. 
The primary purpose of the SR technique is to simplify preparation of diluted samples such that only approximate proportions need to be adhered to, rather than using exact weights or volumes, the marker accounting for minor variations. Additional applications discussed include the use of the SR technique in extraction-based, quantitative, automated FT-IR methods for the determination of moisture, acid number, and base number in lubricating oils, as well as of moisture content in edible oils.
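    The core arithmetic of spectral reconstitution (estimate the diluent volume fraction from the marker band via Beer's law, spectrally subtract the diluent, then rescale to compensate for dilution) can be sketched as follows. Function and variable names are illustrative, not taken from the authors' software, and the polarity correction discussed above is omitted:

    ```python
    import numpy as np

    def reconstitute(spectrum, diluent_spectrum, marker_abs, marker_abs_neat_diluent):
        """Spectral reconstitution sketch.

        1) Estimate the diluent volume fraction (VF) from the marker band,
           assuming absorbance is proportional to concentration (Beer's law).
        2) Subtract the diluent contribution from the diluted-sample spectrum.
        3) Rescale to approximate the spectrum of the neat oil.
        """
        vf = marker_abs / marker_abs_neat_diluent
        neat = (spectrum - vf * diluent_spectrum) / (1.0 - vf)
        return neat, vf
    ```

    In a synthetic check, mixing a "neat oil" spectrum with 25% diluent and then reconstituting recovers the original spectrum exactly, since the model here is perfectly linear; the paper's polarity effects are what break this idealization in practice.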

  19. Quantitative, equal carbon response HSQC experiment, QEC-HSQC

    NASA Astrophysics Data System (ADS)

    Mäkelä, Valtteri; Helminen, Jussi; Kilpeläinen, Ilkka; Heikkinen, Sami

    2016-10-01

    Quantitative NMR has become increasingly useful and popular in recent years, with many new and emerging applications in metabolomics, quality control, reaction monitoring, and other types of mixture analysis. While sensitive and simple to acquire, the low resolving power of 1D ¹H NMR spectra can be a limiting factor when analyzing complex mixtures. This drawback can be overcome by observing a different type of nucleus offering improved resolution, or with multidimensional experiments such as HSQC. In this paper, we present a novel Quantitative, Equal Carbon HSQC (QEC-HSQC) experiment providing an equal response across different types of carbon regardless of the number of attached protons, in addition to a uniform response over a wide range of ¹JCH couplings. This enables rapid quantification and integration over multiple signals without the need for complete resonance assignments and simplifies the integration of overlapping signals.

  20. Lessons from mouse chimaera experiments with a reiterated transgene marker: revised marker criteria and a review of chimaera markers.

    PubMed

    Keighren, Margaret A; Flockhart, Jean; Hodson, Benjamin A; Shen, Guan-Yi; Birtley, James R; Notarnicola-Harwood, Antonio; West, John D

    2015-08-01

    Recent reports of a new generation of ubiquitous transgenic chimaera markers prompted us to consider the criteria used to evaluate new chimaera markers and develop more objective assessment methods. To investigate this experimentally we used several series of fetal and adult chimaeras, carrying an older, multi-copy transgenic marker. We used two additional independent markers and objective, quantitative criteria for cell selection and cell mixing to investigate quantitative and spatial aspects of developmental neutrality. We also suggest how the quantitative analysis we used could be simplified for future use with other markers. As a result, we recommend a five-step procedure for investigators to evaluate new chimaera markers based partly on criteria proposed previously but with a greater emphasis on examining the developmental neutrality of prospective new markers. These five steps comprise (1) review of published information, (2) evaluation of marker detection, (3) genetic crosses to check for effects on viability and growth, (4) comparisons of chimaeras with and without the marker and (5) analysis of chimaeras with both cell populations labelled. Finally, we review a number of different chimaera markers and evaluate them using the extended set of criteria. These comparisons indicate that, although the new generation of ubiquitous fluorescent markers are the best of those currently available and fulfil most of the criteria required of a chimaera marker, further work is required to determine whether they are developmentally neutral.

  1. Lq-Lp optimization for multigrid fluorescence tomography of small animals using simplified spherical harmonics

    NASA Astrophysics Data System (ADS)

    Edjlali, Ehsan; Bérubé-Lauzière, Yves

    2018-01-01

    We present the first Lq-Lp optimization scheme for fluorescence tomographic imaging. This is then applied to small animal imaging. Fluorescence tomography is an ill-posed and, in full generality, nonlinear problem that seeks to image the 3D concentration distribution of a fluorescent agent inside a biological tissue. Standard candidates for regularization to deal with the ill-posedness of the image reconstruction problem include L1 and L2 regularization. In this work, a general Lq-Lp regularization framework (Lq discrepancy function, Lp regularization term) is introduced for fluorescence tomographic imaging. A method to calculate the gradient for this general framework is developed, which allows evaluating the performance of different cost functions/regularization schemes in solving the fluorescence tomographic problem. The simplified spherical harmonics approximation is used to accurately model light propagation inside the tissue. Furthermore, a multigrid mesh is utilized to decrease the dimension of the inverse problem and reduce the computational cost of the solution. The inverse problem is solved iteratively using an lm-BFGS quasi-Newton optimization method. The simulations are performed under different scenarios of noisy measurements. These are carried out on the Digimouse numerical mouse model with the kidney being the target organ. The evaluation of the reconstructed images is performed both qualitatively and quantitatively using several metrics, including QR, RMSE, CNR, and TVE, under rigorous conditions. The best reconstruction results under different scenarios are obtained with an L1.5-L1 scheme with premature termination of the optimization process. This is in contrast to approaches commonly found in the literature relying on L2-L2 schemes.
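    A cost of the general Lq-Lp form described here combines an Lq discrepancy on the residual with an Lp penalty on the unknowns, e.g. sum|r_i|^q + λ sum|x_j|^p. A minimal evaluation sketch (illustrative only; the paper's gradient computation, forward model, and lm-BFGS solver are not reproduced):

    ```python
    import numpy as np

    def lq_lp_cost(residual, x, q=1.5, p=1.0, lam=1e-2):
        """Evaluate an Lq discrepancy plus Lp regularization cost.

        residual: forward-model residual (e.g. A @ x - b), precomputed.
        x:        current estimate of the unknowns.
        """
        data = np.sum(np.abs(residual) ** q)   # Lq data-fit term
        reg = lam * np.sum(np.abs(x) ** p)     # Lp regularization term
        return data + reg
    ```

    Setting q = 2, p = 2 recovers the conventional L2-L2 scheme; the abstract reports that q = 1.5, p = 1 performed best in their experiments.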

  2. A fluid model simulation of a simplified plasma limiter based on spectral-element time-domain method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qian, Cheng; Ding, Dazhi, E-mail: dzding@njust.edu.cn; Fan, Zhenhong

    2015-03-15

    A simplified plasma limiter prototype is proposed, and a fluid model coupled with Maxwell's equations is established to describe the operating mechanism of the plasma limiter. A three-dimensional (3-D) simplified sandwich-structure plasma limiter model is analyzed with the spectral-element time-domain (SETD) method. The field breakdown threshold of air and argon at different frequencies is predicted and compared with experimental data, with good agreement between them for gas microwave breakdown discharge problems. Numerical results demonstrate that the two-layer plasma limiter (plasma-slab-plasma) has better protective characteristics than a one-layer plasma limiter (slab-plasma-slab) with the same length of gas chamber.

  3. A simplified method for extracting androgens from avian egg yolks

    USGS Publications Warehouse

    Kozlowski, C.P.; Bauman, J.E.; Hahn, D.C.

    2009-01-01

    Female birds deposit significant amounts of steroid hormones into the yolks of their eggs. Studies have demonstrated that these hormones, particularly androgens, affect nestling growth and development. In order to measure androgen concentrations in avian egg yolks, most authors follow the extraction methods outlined by Schwabl (1993. Proc. Nat. Acad. Sci. USA 90:11446-11450). We describe a simplified method for extracting androgens from avian egg yolks. Our method, which has been validated through recovery and linearity experiments, consists of a single ethanol precipitation that produces substantially higher recoveries than those reported by Schwabl.

  4. Electric Power Distribution System Model Simplification Using Segment Substitution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat

    Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). In contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.

  5. Study on a pattern classification method of soil quality based on simplified learning sample dataset

    USGS Publications Warehouse

    Zhang, Jiahua; Liu, S.; Hu, Y.; Tian, Y.

    2011-01-01

    Based on the massive soil information involved in current soil quality grade evaluation, this paper constructs an intelligent classification approach for soil quality grade using classical sampling techniques and a disordered multiclass Logistic regression model. As a case study, the learning sample capacity was determined under a given confidence level and estimation accuracy, and a c-means algorithm was used to automatically extract a simplified learning sample dataset from the cultivated soil quality grade evaluation database for the study area, Longchuan county in Guangdong province. A disordered Logistic classifier model was then built, and the calculation and analysis steps of soil quality grade intelligent classification were given. The results indicate that soil quality grade can be effectively learned and predicted from the extracted simplified dataset with this method, which changes the traditional approach to soil quality grade evaluation. © 2011 IEEE.

  6. Research on carrying capacity of hydrostatic slideway on heavy-duty gantry CNC machine

    NASA Astrophysics Data System (ADS)

    Cui, Chao; Guo, Tieneng; Wang, Yijie; Dai, Qin

    2017-05-01

    Hydrostatic slideway is a key part of the heavy-duty gantry CNC machine, which supports the total weight of the gantry and moves smoothly along the table. Therefore, the oil film between the sliding rails plays an important role in the carrying capacity and precision of the machine. In this paper, the oil film is simulated without friction using three-dimensional CFD. The carrying capacity of the heavy hydrostatic slideway and the pressure and velocity characteristics of the flow field are analyzed. The simulation result is verified by comparison with experimental data obtained from the heavy-duty gantry machine. To meet engineering requirements, the oil film carrying capacity is also analyzed with a simplified theoretical method; the precision of the simplified method is evaluated and its effectiveness is verified against the experimental data. The simplified calculation method is provided for designing oil pads on heavy-duty gantry CNC machine hydrostatic slideways.

  7. A Simplified Method for Implementing Run-Time Polymorphism in Fortran95

    DOE PAGES

    Decyk, Viktor K.; Norton, Charles D.

    2004-01-01

    This paper discusses a simplified technique for software emulation of inheritance and run-time polymorphism in Fortran95. The technique retains the same type throughout an inheritance hierarchy, so that only the functions that are modified in a derived class need to be implemented.

  8. Simbol-X Background Minimization: Mirror Spacecraft Passive Shielding Trade-off Study

    NASA Astrophysics Data System (ADS)

    Fioretti, V.; Malaguti, G.; Bulgarelli, A.; Palumbo, G. G. C.; Ferri, A.; Attinà, P.

    2009-05-01

    The present work shows a quantitative trade-off analysis of the Simbol-X Mirror Spacecraft (MSC) passive shielding, in the phase space of the various parameters: mass budget, dimension, geometry and composition. A simplified physical (and geometrical) model of the sky screen, implemented by means of a GEANT4 simulation, has been developed to perform a performance-driven mass optimization and evaluate the residual background level on Simbol-X focal plane.

  9. Towards the automatization of the Foucault knife-edge quantitative test

    NASA Astrophysics Data System (ADS)

    Rodríguez, G.; Villa, J.; Martínez, G.; de la Rosa, I.; Ivanov, R.

    2017-08-01

    Given the increasing need for simple, economical, and reliable methods and instruments for quality testing of optical surfaces such as mirrors and lenses, in recent years we resumed the study of the long-forgotten Foucault knife-edge test from the point of view of physical optics. This ultimately yielded a closed mathematical expression that directly relates the knife-edge position along the paraxial displacement axis to the observable irradiance pattern, which in turn allowed us to propose a quantitative methodology for estimating the wavefront error of an aspherical mirror with precision akin to interferometry. In this work, we present a further improved digital image processing algorithm in which the sigmoidal cost function for calculating the transient slope point of each associated intensity-illumination profile is replaced with a simplified version, making the estimation of the wavefront gradient considerably more stable and efficient. At the same time, the Fourier-based algorithm used for gradient integration has been replaced with a regularized quadratic cost function that allows a much easier introduction of the region of interest (ROI); solved by means of a linear conjugate gradient method, it largely increases the overall accuracy and efficiency of the algorithm. This revised methodology can easily be implemented and handled by most single-board microcontrollers on the market, enabling a fully integrated, automatized test apparatus and opening a realistic path toward a stand-alone optical mirror analyzer prototype.

  10. Pore network extraction from pore space images of various porous media systems

    NASA Astrophysics Data System (ADS)

    Yi, Zhixing; Lin, Mian; Jiang, Wenbin; Zhang, Zhaobin; Li, Haishan; Gao, Jian

    2017-04-01

    Pore network extraction, which is defined as the transformation from irregular pore space to a simplified network in the form of pores connected by throats, is significant to microstructure analysis and network modeling. A physically realistic pore network is not only a representation of the pore space in the sense of topology and morphology, but also a good tool for predicting transport properties accurately. We present a method to extract pore network by employing the centrally located medial axis to guide the construction of maximal-balls-like skeleton where the pores and throats are defined and parameterized. To validate our method, various rock samples including sand pack, sandstones, and carbonates were used to extract pore networks. The pore structures were compared quantitatively with the structures extracted by medial axis method or maximal ball method. The predicted absolute permeability and formation factor were verified against the theoretical solutions obtained by lattice Boltzmann method and finite volume method, respectively. The two-phase flow was simulated through the networks extracted from homogeneous sandstones, and the generated relative permeability curves were compared with the data obtained from experimental method and other numerical models. The results show that the accuracy of our network is higher than that of other networks for predicting transport properties, so the presented method is more reliable for extracting physically realistic pore network.

  11. Psychometric Evaluation of the Simplified Chinese Version of Flourishing Scale

    ERIC Educational Resources Information Center

    Tang, Xiaoqing; Duan, Wenjie; Wang, Zhizhang; Liu, Tianyuan

    2016-01-01

    Objectives: The Flourishing Scale (FS) was developed to measure psychological well-being from the eudaimonic perspective, highlighting the flourishing of human functioning. This article evaluated the psychometric characteristics of the simplified Chinese version of FS among a Chinese community population. Method: A total of 433 participants from…

  12. Use of the Monte Carlo Method for OECD Principles-Guided QSAR Modeling of SIRT1 Inhibitors.

    PubMed

    Kumar, Ashwani; Chauhan, Shilpi

    2017-01-01

    SIRT1 inhibitors offer therapeutic potential for the treatment of a number of diseases including cancer and human immunodeficiency virus infection. A diverse series of 45 compounds with reported SIRT1 inhibitory activity was employed to develop quantitative structure-activity relationship (QSAR) models using the Monte Carlo optimization method, which works from the simplified molecular input line entry system (SMILES) notation of the molecular structure. The QSAR models were built according to OECD principles. Three subsets of three splits were examined and validated by respective external sets. All three described models have good statistical quality. The best model has the following statistical characteristics: R² = 0.8350 and Q²(test) = 0.7491 for the test set, and R² = 0.9655 and Q²(ext) = 0.9261 for the validation set. In the mechanistic interpretation, structural attributes responsible for endpoint increase and decrease are defined. Further, the design of some prospective SIRT1 inhibitors is presented on the basis of these structural attributes. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
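The SMILES-based Monte Carlo scheme (in CORAL-style models, optimization of "correlation weights" of SMILES attributes) can be caricatured as follows. Everything here, the toy dataset, the per-character weights, and the hill-climbing loop, is an invented sketch of the general idea, not the software used in the study.

```python
import numpy as np

def dcw(smiles, weights):
    """Descriptor of correlation weights: sum of per-character weights."""
    return sum(weights.get(ch, 0.0) for ch in smiles)

def fit_weights(data, activity, iters=3000, seed=0):
    """Monte Carlo hill climbing of character weights to maximize |Pearson r|
    between the DCW descriptor and the activity values."""
    rng = np.random.default_rng(seed)
    activity = np.asarray(activity, dtype=float)
    chars = sorted({ch for s in data for ch in s})
    w = {ch: 0.0 for ch in chars}

    def score(w):
        x = np.array([dcw(s, w) for s in data])
        if x.std() == 0:
            return 0.0
        return abs(np.corrcoef(x, activity)[0, 1])

    best = score(w)
    for _ in range(iters):
        ch = chars[rng.integers(len(chars))]   # perturb one random weight
        old = w[ch]
        w[ch] = old + rng.normal(0, 0.5)
        s = score(w)
        if s > best:
            best = s
        else:
            w[ch] = old                        # reject the move
    return w, best
```

A real CORAL-style model uses richer SMILES attributes and a validation protocol across splits; this sketch only shows why Monte Carlo optimization of weights can turn a string notation into a quantitative descriptor.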

  13. Identification of Pseudallescheria and Scedosporium Species by Three Molecular Methods

    PubMed Central

    Lu, Qiaoyun; Gerrits van den Ende, A. H. G.; Bakkers, J. M. J. E.; Sun, Jiufeng; Lackner, M.; Najafzadeh, M. J.; Melchers, W. J. G.; Li, Ruoyu; de Hoog, G. S.

    2011-01-01

    The major clinically relevant species in Scedosporium (teleomorph Pseudallescheria) are Pseudallescheria boydii, Scedosporium aurantiacum, Scedosporium apiospermum, and Scedosporium prolificans, while Pseudallescheria minutispora, Petriellopsis desertorum, and Scedosporium dehoogii are exceptional agents of disease. Three molecular methods targeting the partial β-tubulin gene were developed and evaluated to identify six closely related species of the S. apiospermum complex using quantitative real-time PCR (qPCR), PCR-based reverse line blot (PCR-RLB), and loop-mediated isothermal amplification (LAMP). qPCR was not specific enough for the identification of all species but had the highest sensitivity. The PCR-RLB assay was efficient for the identification of five species. LAMP distinguished all six species unambiguously. The analytical sensitivities of qPCR, PCR-RLB, and LAMP combined with MagNAPure, CTAB (cetyltrimethylammonium bromide), and FTA filter (Whatman) extraction were 50, 5 × 103, and 5 × 102 cells/μl, respectively. When LAMP was combined with a simplified DNA extraction method using an FTA filter, identification to the species level was achieved within 2 h, including DNA extraction. The FTA-LAMP assay is therefore recommended as a cost-effective, simple, and rapid method for the identification of Scedosporium species. PMID:21177887

  14. Sustainable solar energy capability studies by using S2H model in treating groundwater supply

    NASA Astrophysics Data System (ADS)

    Musa, S.; Anuar, M. F.; Shahabuddin, M. M.; Ridzuan, M. B.; Radin Mohamed, R. M. S.; Madun, M. A.

    2018-04-01

    Groundwater extracted at the Research Centre for Soft Soil Malaysia (RECESS) contains a number of pollutants exceeding safe levels for consumption. A Solar-Hydro (S2H) model, a practical prototype, has been introduced to treat the groundwater sustainably using solar energy (the evaporation method). Selected parameters were tested: sulphate, nitrate, chloride, fluoride, pH, and dissolved oxygen. The water quality results show that all parameters achieved 100% of the drinking water quality standard issued by the Ministry of Health Malaysia, demonstrating that solar-driven evaporation can sustainably treat groundwater with up to 90% effectiveness. On the other hand, the quantitative analysis showed that clean water production is below 2% owing to time constraints and design factors. Thus, clean, fresh water can be generated from groundwater using this simplified model, which has huge potential to be implemented by local communities at a larger scale with an affordable design.

  15. Complete de-Dopplerization and acoustic holography for external noise of a high-speed train.

    PubMed

    Yang, Diange; Wen, Junjie; Miao, Feng; Wang, Ziteng; Gu, Xiaoan; Lian, Xiaomin

    2016-09-01

    Identification and measurement of moving sound sources are the basis for vehicle noise control. Acoustic holography has been applied successfully to identify moving sound sources since the 1990s. However, owing to the high accuracy demanded of holographic data, the maximum velocity previously achieved by acoustic holography is just above 100 km/h. The objective of this study was to establish a method based on the complete Morse acoustic model to restore the measured signal in high-speed situations, and to propose a far-field acoustic holography method applicable to high-speed moving sound sources. Simulated comparisons were conducted between the proposed far-field acoustic holography with the complete Morse model, acoustic holography with the simplified Morse model, and traditional delay-and-sum beamforming. Experiments with a high-speed train running at 278 km/h validated the proposed far-field acoustic holography. This study extends the application of acoustic holography to high-speed situations and establishes the basis for quantitative far-field acoustic holography measurements.

  16. Quantitative analysis of in vivo mucosal bacterial biofilms.

    PubMed

    Singhal, Deepti; Boase, Sam; Field, John; Jardeleza, Camille; Foreman, Andrew; Wormald, Peter-John

    2012-01-01

    Quantitative assays of mucosal biofilms on ex vivo samples are challenging with the specialized microscopic techniques currently used to identify them. The COMSTAT2 computer program has been applied to in vitro biofilm models for quantifying biofilm structures seen on confocal scanning laser microscopy (CSLM). The aim of this study was to quantify Staphylococcus aureus (S. aureus) biofilms seen via CSLM on ex vivo samples of sinonasal mucosa using the COMSTAT2 program. S. aureus biofilms were grown in the frontal sinuses of 4 merino sheep as per a previously standardized sheep sinusitis model for biofilms. Two sinonasal mucosal samples, 10 mm × 10 mm in size, from each of the 2 sinuses of the 4 sheep were analyzed for biofilm presence with Baclight stain and CSLM. Two random image stacks of mucosa with S. aureus biofilm were recorded from each sample and analyzed using COMSTAT2 software, which translates image stacks into a simplified 3-dimensional matrix of biofilm mass by eliminating surrounding host tissue. Three independent observers analyzed the images using COMSTAT2, and 3 repeated rounds of analyses were done to calculate biofilm biomass. The COMSTAT2 application uses an observer-dependent threshold setting to translate CSLM biofilm images into a simplified 3-dimensional output for quantitative analysis. The intraclass correlation coefficient (ICC) between thresholds set by the 3 observers for each image stack was 0.59 (p = 0.0003). Threshold values set at different points in time by a single observer also showed significant correlation, with an ICC of 0.80 (p < 0.001). COMSTAT2 can thus be applied to quantify and study the complex 3-dimensional biofilm structures recorded via CSLM on mucosal tissue such as the sinonasal mucosa. Copyright © 2011 American Rhinologic Society-American Academy of Otolaryngic Allergy, LLC.
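The observer-agreement statistic used above can be illustrated with a generic one-way intraclass correlation. This is a textbook ANOVA formulation (ICC(1,1)), not COMSTAT2 code, and the arrangement of targets and raters is an assumption for the sketch.

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects intraclass correlation ICC(1,1).
    ratings: array of shape (n_targets, k_raters).
    ICC = (MSB - MSW) / (MSB + (k - 1) * MSW)."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)
    # between-target and within-target mean squares from one-way ANOVA
    msb = k * ((row_means - grand) ** 2).sum() / (n - 1)
    msw = ((x - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Identical ratings across raters give an ICC of exactly 1; the 0.59 reported in the abstract corresponds to moderate between-observer agreement on threshold choice.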

  17. Optical chirp z-transform processor with a simplified architecture.

    PubMed

    Ngo, Nam Quoc

    2014-12-29

    Using a simplified chirp z-transform (CZT) algorithm based on the discrete-time convolution method, this paper presents the synthesis of a simplified architecture of a reconfigurable optical chirp z-transform (OCZT) processor based on the silica-based planar lightwave circuit (PLC) technology. In the simplified architecture of the reconfigurable OCZT, the required number of optical components is small and there are no waveguide crossings which make fabrication easy. The design of a novel type of optical discrete Fourier transform (ODFT) processor as a special case of the synthesized OCZT is then presented to demonstrate its effectiveness. The designed ODFT can be potentially used as an optical demultiplexer at the receiver of an optical fiber orthogonal frequency division multiplexing (OFDM) transmission system.
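The "chirp z-transform via the discrete-time convolution method" underlying the OCZT is, in digital form, Bluestein's algorithm. A numerical sketch (plain NumPy, no optics) shows the structure the PLC implements; with A = 1 and W on the unit circle it reduces to the DFT, which is the demultiplexing special case the paper designs.

```python
import numpy as np

def czt(x, M, W, A=1.0 + 0j):
    """Chirp z-transform X[k] = sum_n x[n] * A**(-n) * W**(n*k), k = 0..M-1,
    evaluated via the convolution identity n*k = (n**2 + k**2 - (k-n)**2) / 2
    (Bluestein's algorithm)."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    n, k = np.arange(N), np.arange(M)
    y = x * A ** (-n) * W ** (n**2 / 2.0)       # chirp pre-multiply
    L = 1
    while L < N + M - 1:                        # FFT length for linear convolution
        L *= 2
    v = np.zeros(L, dtype=complex)              # convolution kernel W**(-m**2/2)
    v[:M] = W ** (-(k**2) / 2.0)
    m = np.arange(1, N)
    v[L - m] = W ** (-(m**2) / 2.0)             # negative-index part, stored circularly
    g = np.fft.ifft(np.fft.fft(y, L) * np.fft.fft(v))
    return g[:M] * W ** (k**2 / 2.0)            # chirp post-multiply
```

The three stages (pre-chirp, convolution, post-chirp) are exactly the operations the simplified optical architecture realizes with phase elements and couplers.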

  18. Endpoint titration and immunotherapy.

    PubMed

    King, H C

    1985-11-01

    Inhalant allergy, or "atopy" as it is now termed, is the best understood form of allergy today. In some circles, it is the only recognized form of allergy. While an overall picture of its effects on the body and a reasonable approach to its treatment now exist, many problems remain to be solved and much improvement in its treatment will probably occur within the next several years. Many new approaches to treatment of aeroallergens are now available; however, all are compared with the skin test, which is and has been the baseline for testing and treatment. Endpoint titration provides a quantitative means for undertaking treatment of aeroallergen sensitivity. In no other way does it differ from the forms of skin testing that have been widely used for generations. The practitioners of endpoint titration feel that this difference is highly significant in simplifying, validating, and shortening the necessary period of therapy. While the concept of endpoint titration is not difficult, it is by definition a quantitative form of testing and requires a degree of expertise in performing it correctly. While a good understanding of the method may be gained from the literature, adequate hands-on experience should be obtained by any physician prior to instituting the technique as a treatment modality. Once mastered, it becomes a reliable baseline for all forms of inhalant allergy care.

  19. Study on a novel core module based on optical fiber bundles for urine dry-chemistry analysis

    NASA Astrophysics Data System (ADS)

    Liu, Gaiqin; Ma, Zengwei; Li, Rui; Hu, Nan; Chen, Ping; Wang, Fei; Zhang, Ruiying; Chen, Longcong

    2017-09-01

    This paper presents a core module with a novel optical structure for analyzing urine by the dry-chemistry method. It consists of a 32-bit microprocessor, optical fiber bundles, a high-precision color sensor, and a temperature sensor. The optical fiber bundles control the propagation path of the light and effectively reduce the influence of ambient light and of the distance between the strip and the sensor, while the temperature sensor measures the environmental temperature to calibrate the measurement results. Together these improve the module's test accuracy, reduce its volume and cost, and simplify its assembly. Additionally, some parameters, including the calculation coefficient for the reflectivity of each item, the semi-quantitative intervals, and the number of test items, may be modified by corresponding instructions to enhance its applicability. Meanwhile, its output can be chosen among the original data, normalized color values, reflectivity, and the semi-quantitative level of each test item via available instructions. Our results show that the module has a measurement accuracy above 95%, good stability, reliability, and consistency, and can easily be used in various types of urine analyzers.

  20. Validation of a quantitative and confirmatory method for residue analysis of aminoglycoside antibiotics in poultry, bovine, equine and swine kidney through liquid chromatography-tandem mass spectrometry.

    PubMed

    Almeida, M P; Rezende, C P; Souza, L F; Brito, R B

    2012-01-01

    The use of aminoglycoside antibiotics in food animals is approved in Brazil. Accordingly, Brazilian food safety legislation sets maximum levels for these drugs in tissues from these animals in an effort to guarantee that food safety is not compromised. To monitor the levels of these drugs in tissues from food animals, a quantitative, confirmatory method for the detection of residues of 10 aminoglycoside antibiotics in poultry, swine, equine and bovine kidney, with solid-phase extraction and detection and quantification by LC-MS/MS, was validated. The procedure is an adaptation of the US Department of Agriculture, Food Safety and Inspection Service (USDA-FSIS) qualitative method, with the inclusion of additional clean-up and quantification at lower levels, which proved more efficient. Extraction was performed using a phosphate buffer containing trifluoroacetic acid followed by neutralization, purification on a cationic exchange SPE cartridge with elution with methanol/acetic acid, evaporation, and dilution in ion-pair solvent. The method was validated according to the criteria and requirements of European Commission Decision 2002/657/EC, showing selectivity with no matrix interference. Linearity was established for all analytes using the method of weighted minimum squares. CCα and CCβ varied between 1036 and 12,293 µg kg(-1), and between 1073 and 14,588 µg kg(-1), respectively. The limits of quantification varied between 27 and 688 µg kg(-1). Recoveries for all analytes in poultry kidney, fortified in the range of 500-1500 µg kg(-1), were higher than 90%, and the relative standard deviations were lower than 15%, except for spectinomycin (21.8%). Uncertainty was estimated using a simplified methodology combining 'bottom-up' and 'top-down' strategies. The results showed that this method is effective for the quantification and confirmation of aminoglycoside residues and could be used by the Brazilian programme of residue control.

  1. Is time to search the Wells Score 4.0?

    PubMed

    Rosa-Jiménez, F; Rosa-Jiménez, A; Lozano-Rodríguez, A; Martín-Moreno, P; Hinojosa-Martínez, M D; Montijano-Cabrera, Á M

    2015-01-01

    The Wells score for deep vein thrombosis presents implementation problems in hospital emergency departments, mainly due to the complexity of its enforcement. Our aim was to assess whether including D-dimer as a predictor might simplify this clinical decision rule. A database of deep vein thrombosis patients was studied with a logistic regression model that included the 10 predictors of the Wells score plus D-dimer. The diagnosis was made with compression ultrasonography with Doppler signal. D-dimer was determined by a quantitative latex method, an immunofiltration technique, or a turbidimetric technique. In total, 577 patients (54.1% women) were studied, with a mean age of 66.7 (14.2) years; 25.1% were diagnosed with deep vein thrombosis. Only four variables were independent, yielding a weighted model with greater predictive ability (area under the curve) than the original model (0.844 vs. 0.751, p<0.001). Both models showed acceptable safety, with a similar failure rate (0.8% vs. 1%). The simplified model selected a higher percentage of patients who could have benefited from not undergoing the imaging test (20.6% vs. 15.8%, p=0.039). Introducing D-dimer into a regression model simplifies the Wells score while maintaining the same efficacy and safety, which could improve its implementation in hospital emergency departments. Copyright © 2014 Elsevier España, S.L.U. y Sociedad Española de Medicina Interna (SEMI). All rights reserved.
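The regression-plus-discrimination workflow in this study can be mimicked on synthetic data. The sketch below, with invented data rather than the patient database, fits a logistic model by gradient ascent and scores it with the rank-statistic (Mann-Whitney) form of the area under the ROC curve.

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, iters=3000):
    """Logistic regression by batch gradient ascent on the log-likelihood."""
    Xb = np.c_[np.ones(len(X)), X]          # prepend intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)
    return w

def predict_proba(X, w):
    Xb = np.c_[np.ones(len(X)), X]
    return 1.0 / (1.0 + np.exp(-Xb @ w))

def auc(y, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic
    (assumes continuous scores without ties)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n1 = y.sum()
    n0 = len(y) - n1
    return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)
```

Comparing the AUC of a model with and without an extra predictor, as the study does with D-dimer, is then a matter of fitting twice and comparing the two `auc` values.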

  2. Development and validation of a simplified titration method for monitoring volatile fatty acids in anaerobic digestion.

    PubMed

    Sun, Hao; Guo, Jianbin; Wu, Shubiao; Liu, Fang; Dong, Renjie

    2017-09-01

    The concentration of volatile fatty acids (VFAs) has been considered one of the most sensitive process performance indicators in the anaerobic digestion (AD) process. However, accurate determination of VFA concentrations in AD processes normally requires advanced equipment and complex pretreatment procedures. A simplified method with fewer sample pretreatment procedures and improved accuracy is greatly needed, particularly for on-site application. This report outlines improvements to the Nordmann method, one of the most popular titrations used for VFA monitoring. The influence of ion and solid interfering subsystems in titrated samples on the accuracy of results is discussed; the total solids content of the titrated samples was the main factor affecting accuracy in VFA monitoring. Moreover, a strong linear correlation was established between total solids content and the difference in VFA measurements between the traditional Nordmann equation and gas chromatography (GC). Accordingly, a simplified titration method was developed and validated using a semi-continuous experiment on chicken manure anaerobic digestion at various organic loading rates. The good fit of the results obtained by this method to the GC results strongly supports the potential application of the method to VFA monitoring. Copyright © 2017. Published by Elsevier Ltd.
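The reported correction, a linear relation between total solids and the titration-vs-GC difference, amounts to a one-variable calibration. A hedged sketch with invented numbers (not the paper's data or coefficients):

```python
import numpy as np

def fit_ts_correction(ts, vfa_titration, vfa_gc):
    """Fit the difference (GC - titration) as a linear function of total solids."""
    diff = np.asarray(vfa_gc) - np.asarray(vfa_titration)
    slope, intercept = np.polyfit(np.asarray(ts), diff, 1)
    return slope, intercept

def corrected_vfa(ts, vfa_titration, slope, intercept):
    """Apply the calibration to new titration readings."""
    return np.asarray(vfa_titration) + slope * np.asarray(ts) + intercept
```

Once the slope and intercept are fitted against GC reference values, routine on-site monitoring only needs the titration result and a total solids measurement.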

  3. Development of Generation System of Simplified Digital Maps

    NASA Astrophysics Data System (ADS)

    Uchimura, Keiichi; Kawano, Masato; Tokitsu, Hiroki; Hu, Zhencheng

    In recent years, digital maps have been used in a variety of scenarios, including car navigation systems and map information services over the Internet. These digital maps are formed from multiple layers of maps at different scales; the map data most suitable for the specific situation are used. Currently, map data at different scales are produced by hand because of constraints on processing time and accuracy. We conducted research on technologies for automatically generating simplified map data from detailed map data. In the present paper, the authors propose: (1) a method to transform data with widths, such as streets and rivers, into line data; (2) a method to eliminate the component points of the data; and (3) a method to eliminate data that lie below a certain threshold. In addition, to evaluate the proposed method, a user survey was conducted in which we compared maps generated by the proposed method with commercially available maps. From the viewpoint of data reduction and processing time, and on the basis of the survey results, we confirmed the effectiveness of the automatic generation of simplified maps using the proposed methods.
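The abstract does not name its point-elimination algorithm; a standard choice for step (2) of this kind of map simplification is Ramer-Douglas-Peucker, sketched here purely as a generic illustration.

```python
import math

def _perp_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (ax, ay), (bx, by), (px, py) = a, b, p
    dx, dy = bx - ax, by - ay
    L = math.hypot(dx, dy)
    if L == 0.0:
        return math.hypot(px - ax, py - ay)
    return abs(dx * (ay - py) - dy * (ax - px)) / L

def douglas_peucker(points, eps):
    """Recursively drop points whose offset from the chord is below eps."""
    if len(points) < 3:
        return list(points)
    dists = [_perp_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] > eps:
        # keep the farthest point and recurse on both halves
        return douglas_peucker(points[:i + 1], eps)[:-1] + douglas_peucker(points[i:], eps)
    return [points[0], points[-1]]
```

Raising `eps` produces coarser layers, which is exactly the multi-scale layering the paper describes for digital maps.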

  4. Simplified Model to Predict Deflection and Natural Frequency of Steel Pole Structures

    NASA Astrophysics Data System (ADS)

    Balagopal, R.; Prasad Rao, N.; Rokade, R. P.

    2018-04-01

    Steel pole structures are a suitable alternative to transmission line towers, given the difficulty of finding land for new rights of way for installing new lattice towers. Steel poles have a tapered cross section and are generally used for communication, power transmission, and lighting purposes. Determining the deflection of a steel pole is important for assessing its functional requirements: excessive deflection may cause signal attenuation and short-circuit problems in communication and transmission poles. In this paper, a simplified method is proposed to determine both primary and secondary deflection based on the dummy unit load/moment method. The deflection predicted by the proposed method is validated against full-scale experimental investigations conducted on 8 m and 30 m high lighting masts and 132 and 400 kV transmission poles, and is found to be in close agreement. Determining the natural frequency is an important criterion for examining dynamic sensitivity. A simplified semi-empirical method using the static deflection from the proposed method is formulated to determine the natural frequency. The natural frequency predicted by the proposed method is validated with FE analysis results, and further validated against experimental results available in the literature.
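The dummy unit load method can be sketched numerically for the simplest case, a cantilever pole with a tip load: δ = ∫₀ᴸ M(x)·m(x) / (E·I(x)) dx, with M(x) = P(L − x) from the real load and m(x) = (L − x) from a unit load at the tip. The constants below are invented, and a prismatic section is used only to check against the closed form P·L³/(3EI); the paper's method additionally handles tapered sections and secondary deflection.

```python
import numpy as np

def tip_deflection(P, L, E, I_of_x, n=2000):
    """Tip deflection of a cantilever under tip load P by the unit-load method.
    I_of_x: callable returning the second moment of area along the pole,
    so tapered poles are handled by passing a varying I(x)."""
    x = np.linspace(0.0, L, n)
    M = P * (L - x)            # bending moment from the real load
    m = (L - x)                # bending moment from the dummy unit load
    f = M * m / (E * I_of_x(x))
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))  # trapezoid rule
```

For a tapered pole one simply supplies an `I_of_x` that grows toward the base, and the same integral gives the tip deflection without a closed-form solution.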

  5. A simplified fractional order impedance model and parameter identification method for lithium-ion batteries

    PubMed Central

    Yang, Qingxia; Xu, Jun; Cao, Binggang; Li, Xiuqing

    2017-01-01

    Identification of internal parameters of lithium-ion batteries is a useful tool to evaluate battery performance, and requires an effective model and algorithm. Based on the least square genetic algorithm, a simplified fractional order impedance model for lithium-ion batteries and the corresponding parameter identification method were developed. The simplified model was derived from the analysis of the electrochemical impedance spectroscopy data and the transient response of lithium-ion batteries with different states of charge. In order to identify the parameters of the model, an equivalent tracking system was established, and the method of least square genetic algorithm was applied using the time-domain test data. Experiments and computer simulations were carried out to verify the effectiveness and accuracy of the proposed model and parameter identification method. Compared with a second-order resistance-capacitance (2-RC) model and recursive least squares method, small tracing voltage fluctuations were observed. The maximum battery voltage tracing error for the proposed model and parameter identification method is within 0.5%; this demonstrates the good performance of the model and the efficiency of the least square genetic algorithm to estimate the internal parameters of lithium-ion batteries. PMID:28212405
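A least-squares genetic algorithm of the kind described can be sketched on a stand-in model. Here a simple exponential voltage relaxation replaces the fractional-order impedance model, and the GA operators (elitist selection, blend crossover, decaying Gaussian mutation) are generic choices, not the authors'; the bounds and constants are invented.

```python
import numpy as np

def model(theta, t):
    """Toy voltage relaxation v(t) = a*exp(-t/tau) + c (stand-in for the
    fractional-order battery model)."""
    a, tau, c = theta
    return a * np.exp(-t / tau) + c

def ga_fit(t, v, bounds, pop=60, gens=200, seed=0):
    """Least-squares genetic algorithm: keep an elite quarter, breed children
    by blend crossover, apply Gaussian mutation that decays over generations,
    minimizing the sum of squared voltage errors."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    P = lo + rng.random((pop, len(lo))) * (hi - lo)

    def sse(th):
        return np.sum((model(th, t) - v) ** 2)

    for g in range(gens):
        f = np.array([sse(th) for th in P])
        elite = P[np.argsort(f)[: pop // 4]]
        children = []
        while len(children) < pop - len(elite):
            i, j = rng.integers(len(elite), size=2)
            alpha = rng.random()
            child = alpha * elite[i] + (1 - alpha) * elite[j]          # blend crossover
            child += rng.normal(0, 0.1 * (hi - lo) * (1 - g / gens))   # decaying mutation
            children.append(np.clip(child, lo, hi))
        P = np.vstack([elite, children])
    f = np.array([sse(th) for th in P])
    return P[np.argmin(f)], f.min()
```

Because the elite is carried over unchanged, the best fitness is monotonically non-increasing, mirroring the convergence behaviour one wants when identifying battery parameters from time-domain data.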

  6. Effect of picric acid and enzymatic creatinine on the efficiency of the glomerular filtration rate predicator formula.

    PubMed

    Qiu, Ling; Guo, Xiuzhi; Zhu, Yan; Shou, Weilin; Gong, Mengchun; Zhang, Lin; Han, Huijuan; Quan, Guoqiang; Xu, Tao; Li, Hang; Li, Xuewang

    2013-01-01

    To investigate the impact of serum creatinine measurement on the applicability of glomerular filtration rate (GFR) evaluation equations, the 99mTc-DTPA plasma clearance rate was used as the GFR reference (rGFR) in patients with chronic kidney disease (CKD). Serum creatinine was measured using an enzymatic or a picric acid creatinine reagent. The GFR of the patients was estimated using the Cockcroft-Gault equation corrected for body surface area, the simplified Modification of Diet in Renal Disease (MDRD) equation, the simplified MDRD equation corrected to isotope-dilution mass spectrometry, the CKD Epidemiology Collaboration equation, and two Chinese simplified MDRD equations. Significant differences in the eGFR results estimated via the enzymatic and picric acid methods were observed for the same evaluation equation. The intraclass correlation coefficient (ICC) of eGFR when creatinine was measured by the picric acid method was significantly lower than that of the enzymatic method. The assessment accuracy of every equation using enzymatically measured creatinine was significantly higher than with the picric acid method when rGFR was ≥60 mL/min/1.73 m². A significant difference was demonstrated for the same GFR evaluation equation between the picric acid and enzymatic methods; the enzymatic creatinine method was better than the picric acid method.
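For reference, the widely published 4-variable simplified MDRD formula evaluated in studies like this one can be written directly, with k = 186 for conventional creatinine assays and k = 175 for the IDMS-traceable recalibration. This is a formula sketch for illustration, not clinical software.

```python
def egfr_mdrd(scr_mg_dl, age, female=False, black=False, idms=False):
    """Simplified (4-variable) MDRD estimate in mL/min/1.73 m^2:
    eGFR = k * Scr**-1.154 * age**-0.203 * (0.742 if female) * (1.212 if Black),
    with k = 186 for conventional assays and k = 175 for IDMS-traceable creatinine."""
    k = 175.0 if idms else 186.0
    egfr = k * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr
```

The strong dependence on the creatinine term (exponent −1.154) is why the assay used, enzymatic versus picric acid, shifts the eGFR noticeably, which is the effect the study quantifies.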

  7. Simplified method for the transverse bending analysis of twin celled concrete box girder bridges

    NASA Astrophysics Data System (ADS)

    Chithra, J.; Nagarajan, Praveen; S, Sajith A.

    2018-03-01

    Box girder bridges are one of the best options for bridges with spans of more than 25 m. For the study of these bridges, three-dimensional finite element analysis is the best-suited method. However, performing three-dimensional analysis for routine design is difficult and time consuming, and the software used is very expensive. Hence designers resort to simplified analyses for predicting longitudinal and transverse bending moments. Among the analytical methods used to find transverse bending moments, simplified frame analysis (SFA) is the simplest and most widely used in design offices, and its results can be used for the preliminary analysis of concrete box girder bridges. A review of the literature shows that most work using SFA is restricted to the analysis of single-cell box girder bridges; not much has been done on multi-cell concrete box girder bridges. In the present study, a double-cell concrete box girder bridge is chosen and modelled using three-dimensional finite element software, and the results are compared with those of simplified frame analysis. The study mainly focuses on establishing correction factors for the transverse bending moment values obtained from SFA.

  8. Detection and Quantitation of T-2 Mycotoxin Using a Simplified Protein Synthesis Inhibition Assay.

    DTIC Science & Technology

    1983-07-18

    …immunosuppressive nature of the mycotoxins. The mouse bioassay (Ueno et al., 1971) and the skin sensitivity test (Ueno et al., 1970; Chung, 1974) are effective… natural occurrence. In Mycotoxins in Human and Animal Health (J. V. Rodricks, C. W. Hesseltine, and M. A. Mehlman, eds.), pp. 229-253. Pathotox Publishers… 363, 1437-1441. Terao, K., and Ito, E. (1981). The effects of naturally occurring bisdihydrofuran-ring-containing mycotoxins on cultured chick…

  9. Quantitative chemical exchange saturation transfer (qCEST) MRI - omega plot analysis of RF-spillover-corrected inverse CEST ratio asymmetry for simultaneous determination of labile proton ratio and exchange rate.

    PubMed

    Wu, Renhua; Xiao, Gang; Zhou, Iris Yuwen; Ran, Chongzhao; Sun, Phillip Zhe

    2015-03-01

    Chemical exchange saturation transfer (CEST) MRI is sensitive to labile proton concentration and exchange rate, thus allowing measurement of dilute CEST agent and microenvironmental properties. However, CEST measurement depends not only on the CEST agent properties but also on the experimental conditions. Quantitative CEST (qCEST) analysis has been proposed to address the limitation of the commonly used simplistic CEST-weighted calculation. Recent research has shown that the concomitant direct RF saturation (spillover) effect can be corrected using an inverse CEST ratio calculation. We postulated that a simplified qCEST analysis is feasible with omega plot analysis of the inverse CEST asymmetry calculation. Specifically, simulations showed that the numerically derived labile proton ratio and exchange rate were in good agreement with input values. In addition, the qCEST analysis was confirmed experimentally in a phantom with concurrent variation in CEST agent concentration and pH. Also, we demonstrated that the derived labile proton ratio increased linearly with creatine concentration (P < 0.01) while the pH-dependent exchange rate followed a dominantly base-catalyzed exchange relationship (P < 0.01). In summary, our study verified that a simplified qCEST analysis can simultaneously determine labile proton ratio and exchange rate in a relatively complex in vitro CEST system. Copyright © 2015 John Wiley & Sons, Ltd.
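The omega-plot idea is that the inverse CEST ratio is linear in 1/ω1², with exchange rate and labile proton ratio recovered from the slope and intercept. The sketch below assumes the commonly quoted relations ksw = √(slope/intercept) and fr = R1w/(ksw·intercept) (R1w being the water longitudinal relaxation rate), and synthesizes data from the same linear form, so it illustrates the fitting step rather than reproducing the paper's spillover-corrected analysis.

```python
import numpy as np

def omega_plot_fit(omega1, inv_cestr):
    """Omega-plot analysis: fit 1/CESTR = intercept + slope / omega1**2."""
    slope, intercept = np.polyfit(1.0 / np.asarray(omega1) ** 2, inv_cestr, 1)
    return slope, intercept

def ksw_fr_from_fit(slope, intercept, r1w):
    """Recover exchange rate ksw and labile proton ratio fr from the line,
    assuming intercept = r1w/(fr*ksw) and slope = r1w*ksw/fr."""
    ksw = np.sqrt(slope / intercept)
    fr = r1w / (ksw * intercept)
    return ksw, fr
```

Acquiring CEST ratios at several saturation powers ω1 and fitting this single line is what lets one measurement series determine both exchange parameters simultaneously.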

  10. Preparing and Presenting Effective Research Posters

    PubMed Central

    Miller, Jane E

    2007-01-01

    Objectives Posters are a common way to present results of a statistical analysis, program evaluation, or other project at professional conferences. Often, researchers fail to recognize the unique nature of the format, which is a hybrid of a published paper and an oral presentation. This methods note demonstrates how to design research posters to convey study objectives, methods, findings, and implications effectively to varied professional audiences. Methods A review of existing literature on research communication and poster design is used to identify and demonstrate important considerations for poster content and layout. Guidelines on how to write about statistical methods, results, and statistical significance are illustrated with samples of ineffective writing annotated to point out weaknesses, accompanied by concrete examples and explanations of improved presentation. A comparison of the content and format of papers, speeches, and posters is also provided. Findings Each component of a research poster about a quantitative analysis should be adapted to the audience and format, with complex statistical results translated into simplified charts, tables, and bulleted text to convey findings as part of a clear, focused story line. Conclusions Effective research posters should be designed around two or three key findings with accompanying handouts and narrative description to supply additional technical detail and encourage dialog with poster viewers. PMID:17355594

  11. Methodology for determining major constituents of ayahuasca and their metabolites in blood.

    PubMed

    McIlhenny, Ethan H; Riba, Jordi; Barbanoj, Manel J; Strassman, Rick; Barker, Steven A

    2012-03-01

    There is an increasing interest in potential medical applications of ayahuasca, a South American psychotropic plant tea with a long cultural history of indigenous medical and religious use. Clinical research into ayahuasca will require specific, sensitive and comprehensive methods for the characterization and quantitation of these compounds and their metabolites in blood. A combination of two analytical techniques (high-performance liquid chromatography with ultraviolet and/or fluorescence detection and gas chromatography with nitrogen-phosphorus detection) has been used for the analysis of some of the constituents of ayahuasca in blood following its oral consumption. We report here a single methodology for the direct analysis of 14 of the major alkaloid components of ayahuasca, including several known and potential metabolites of N,N-dimethyltryptamine and the harmala alkaloids in blood. The method uses 96-well plate/protein precipitation/filtration for plasma samples, and analysis by HPLC-ion trap mass spectrometry using heated electrospray ionization to reduce matrix effects. The method expands the list of compounds capable of being monitored in blood following ayahuasca administration while providing a simplified approach to their analysis. The method has adequate sensitivity, specificity and reproducibility to make it useful for clinical research with ayahuasca. Copyright © 2011 John Wiley & Sons, Ltd.

  12. Partial differential equation techniques for analysing animal movement: A comparison of different methods.

    PubMed

    Wang, Yi-Shan; Potts, Jonathan R

    2017-03-07

    Recent advances in animal tracking have allowed us to uncover the drivers of movement in unprecedented detail. This has enabled modellers to construct ever more realistic models of animal movement, which aid in uncovering detailed patterns of space use in animal populations. Partial differential equations (PDEs) provide a popular tool for mathematically analysing such models. However, their construction often relies on simplifying assumptions which may greatly affect the model outcomes. Here, we analyse the effect of various PDE approximations on the analysis of some simple movement models, including a biased random walk, central-place foraging processes and movement in heterogeneous landscapes. Perhaps the most commonly-used PDE method dates back to a seminal paper of Patlak from 1953. However, our results show that this can be a very poor approximation in even quite simple models. On the other hand, more recent methods, based on transport equation formalisms, can provide more accurate results, as long as the kernel describing the animal's movement is sufficiently smooth. When the movement kernel is not smooth, we show that both the older and newer methods can lead to quantitatively misleading results. Our detailed analysis will aid future researchers in the appropriate choice of PDE approximation for analysing models of animal movement. Copyright © 2017 Elsevier Ltd. All rights reserved.
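
    As a concrete instance of the kind of movement model these PDEs approximate, the sketch below simulates a one-dimensional biased random walk and checks its mean displacement against the drift n(2p - 1) that an advection-diffusion (Patlak-type) approximation predicts for a smooth, simple kernel like this one; the step rule and parameters are illustrative, not taken from the paper.

```python
import random

def biased_walk_mean(n_steps, p_right, n_walkers=20000, seed=1):
    """Mean displacement of a 1-D biased random walk with unit steps,
    averaged over many independent walkers."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_walkers):
        x = 0
        for _ in range(n_steps):
            x += 1 if rng.random() < p_right else -1
        total += x
    return total / n_walkers

# an advection-diffusion approximation predicts a mean drift of
# n_steps * (2*p_right - 1) for this simple, smooth movement kernel
mean_obs = biased_walk_mean(n_steps=100, p_right=0.6)
drift_pred = 100 * (2 * 0.6 - 1)
```

    For a kernel this smooth the simulated mean tracks the PDE drift closely; the abstract's point is that for non-smooth kernels such agreement can break down.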

  13. A simplified implementation of edge detection in MATLAB is faster and more sensitive than fast Fourier transform for actin fiber alignment quantification.

    PubMed

    Kemeny, Steven Frank; Clyne, Alisa Morss

    2011-04-01

    Fiber alignment plays a critical role in the structure and function of cells and tissues. While fiber alignment quantification is important to experimental analysis and several different methods for quantifying fiber alignment exist, many studies focus on qualitative rather than quantitative analysis perhaps due to the complexity of current fiber alignment methods. Speed and sensitivity were compared in edge detection and fast Fourier transform (FFT) for measuring actin fiber alignment in cells exposed to shear stress. While edge detection using matrix multiplication was consistently more sensitive than FFT, image processing time was significantly longer. However, when MATLAB functions were used to implement edge detection, MATLAB's efficient element-by-element calculations and fast filtering techniques reduced computation cost 100 times compared to the matrix multiplication edge detection method. The new computation time was comparable to the FFT method, and MATLAB edge detection produced well-distributed fiber angle distributions that statistically distinguished aligned and unaligned fibers in half as many sample images. When the FFT sensitivity was improved by dividing images into smaller subsections, processing time grew larger than the time required for MATLAB edge detection. Implementation of edge detection in MATLAB is simpler, faster, and more sensitive than FFT for fiber alignment quantification.
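
    The gradient-based idea behind the edge-detection approach can be sketched outside MATLAB as well. The NumPy version below is an illustration, not the authors' implementation: it estimates a fiber-orientation histogram from image gradients, taking the fiber direction as perpendicular to the local intensity gradient and weighting by gradient magnitude.

```python
import numpy as np

def fiber_angle_histogram(image, bins=18):
    """Weighted histogram of fiber orientations (degrees, 0-180),
    taking fiber direction perpendicular to the intensity gradient."""
    sy, sx = np.gradient(image.astype(float))   # gradients along y, x
    mag = np.hypot(sx, sy)
    angle = (np.degrees(np.arctan2(sy, sx)) + 90.0) % 180.0
    mask = mag > mag.mean()                     # keep strong edges only
    hist, _ = np.histogram(angle[mask], bins=bins, range=(0.0, 180.0),
                           weights=mag[mask])
    return hist / hist.sum()

# horizontal stripes: the distribution should peak in the 0-10 degree bin
img = np.tile(np.sin(np.linspace(0.0, 8.0 * np.pi, 64))[:, None], (1, 64))
hist = fiber_angle_histogram(img)
```

    A strongly aligned synthetic pattern concentrates the histogram in one bin, which is the property the abstract's statistical comparison of aligned versus unaligned fibers relies on.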

  14. Simplifier: a web tool to eliminate redundant NGS contigs.

    PubMed

    Ramos, Rommel Thiago Jucá; Carneiro, Adriana Ribeiro; Azevedo, Vasco; Schneider, Maria Paula; Barh, Debmalya; Silva, Artur

    2012-01-01

    Modern genomic sequencing technologies produce a large amount of data with reduced cost per base; however, this data consists of short reads. This reduction in the size of the reads, compared to those obtained with previous methodologies, presents new challenges, including a need for efficient algorithms for the assembly of genomes from short reads and for resolving repetitions. Additionally, after ab initio assembly, curation of the hundreds or thousands of contigs generated by assemblers demands considerable time and computational resources. We developed Simplifier, a stand-alone software that selectively eliminates redundant sequences from the collection of contigs generated by ab initio assembly of genomes. Application of Simplifier to data generated by assembly of the genome of Corynebacterium pseudotuberculosis strain 258 reduced the number of contigs generated by ab initio methods from 8,004 to 5,272, a reduction of 34.14%; in addition, N50 increased from 1 kb to 1.5 kb. Processing the contigs of Escherichia coli DH10B with Simplifier reduced the mate-paired library by 17.47% and the fragment library by 23.91%. Simplifier removed redundant sequences from datasets produced by assemblers, thereby reducing the effort required for finalization of genome assembly in tests with data from prokaryotic organisms. Simplifier is available at http://www.genoma.ufpa.br/rramos/softwares/simplifier.xhtml. It requires Sun JDK 6 or higher.
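
    Simplifier's source is the authoritative reference for its filtering rules; as a minimal illustration of the underlying idea only, the sketch below drops contigs that are wholly contained in a longer contig on either strand. The actual tool's redundancy criteria may be more involved, and the toy sequences are invented for the example.

```python
def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def drop_redundant(contigs):
    """Keep only contigs not fully contained in a longer kept contig,
    checking both strands (a toy stand-in for Simplifier's filter)."""
    kept = []
    for c in sorted(contigs, key=len, reverse=True):
        if not any(c in k or revcomp(c) in k for k in kept):
            kept.append(c)
    return kept

contigs = ["ATCGGATC", "GATC", "CGGA", "TTTT", "GATCCGAT"]
kept = drop_redundant(contigs)
# GATC and CGGA are substrings of ATCGGATC; GATCCGAT is its reverse
# complement, so only ATCGGATC and TTTT survive
```

    Sorting longest-first means each contig is only tested against sequences that could actually contain it, which keeps the containment check simple.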

  15. Quantitative analysis of changes in salivary mutans streptococci after orthodontic treatment.

    PubMed

    Jung, Woo-Sun; Kim, Ho; Park, So-Yoon; Cho, Eun-Jung; Ahn, Sug-Joon

    2014-05-01

    The purpose of this study was to analyze the initial changes in salivary mutans streptococci levels after orthodontic treatment with fixed appliances. Our subjects consisted of 58 adults. Whole saliva and simplified oral hygiene index values were obtained at 4 time points: at debonding (T1), 1 week after debonding (T2), 5 weeks after debonding (T3), and 13 weeks after debonding (T4). Repeated measures analysis of variance was used to determine the time-related differences in salivary bacterial levels and the simplified oral hygiene index values among the 4 time points after quantifying the salivary levels of Streptococcus mutans, Streptococcus sobrinus, and total bacteria with real-time polymerase chain reaction. Simplified oral hygiene index values and total bacteria significantly decreased, but salivary mutans streptococci levels significantly increased after orthodontic treatment. The amounts of total bacteria in saliva significantly decreased at T3 (T1, T2 > T3, T4), and the simplified oral hygiene index values decreased at T2 (T1 > T2, T3, T4). However, salivary S mutans and S sobrinus significantly increased at T3 and T4, respectively (T1, T2 < T3 < T4). Furthermore, the proportion of mutans streptococci to total bacteria significantly increased at T4 (T1, T2, T3 < T4). This study suggests that careful hygienic procedures are needed to reduce the risk for dental caries after orthodontic treatment, despite overall improved oral hygiene status. Copyright © 2014 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.

  16. Research Participants' Understanding of and Reactions to Certificates of Confidentiality.

    PubMed

    Beskow, Laura M; Check, Devon K; Ammarell, Natalie

    2014-01-01

    Certificates of Confidentiality are intended to facilitate participation in critical public health research by protecting against forced disclosure of identifying data in legal proceedings, but little is known about the effect of Certificate descriptions in consent forms. To gain preliminary insights, we conducted qualitative interviews with 50 HIV-positive individuals in Durham, North Carolina to explore their subjective understanding of Certificate descriptions and whether their reactions differed based on receiving a standard versus simplified description. Most interviewees were neither reassured nor alarmed by Certificate information, and most said it would not influence their willingness to participate or provide truthful information. However, compared with those receiving the simplified description, more who read the standard description said it raised new concerns, that their likelihood of participating would be lower, and that they might be less forthcoming. Most interviewees said they found the Certificate description clear, but standard-group participants often found particular words and phrases confusing, while simplified-group participants more often questioned the information's substance. Valid informed consent requires comprehension and voluntariness. Our findings highlight the importance of developing consent descriptions of Certificates and other confidentiality protections that are simple and accurate. These qualitative results provide rich detail to inform a larger, quantitative study that would permit further rigorous comparisons.

  17. Weather data for simplified energy calculation methods. Volume II. Middle United States: TRY data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olsen, A.R.; Moreno, S.; Deringer, J.

    1984-08-01

    The objective of this report is to provide a source of weather data for direct use with a number of simplified energy calculation methods available today. Complete weather data for a number of cities in the United States are provided for use in the following methods: degree hour, modified degree hour, bin, modified bin, and variable degree day. This report contains sets of weather data for 22 cities in the continental United States using Test Reference Year (TRY) source weather data. The weather data at each city have been summarized in a number of ways to provide the differing levels of detail necessary for alternative simplified energy calculation methods. Weather variables summarized include dry bulb and wet bulb temperature, percent relative humidity, humidity ratio, wind speed, percent possible sunshine, percent diffuse solar radiation, total solar radiation on horizontal and vertical surfaces, and solar heat gain through standard DSA glass. Monthly and annual summaries, in some cases by time of day, are available. These summaries are produced in a series of nine computer-generated tables.
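
    As a minimal illustration of the simplest of the listed methods, the sketch below computes heating degree-hours from an hourly dry-bulb temperature series against a 65 °F base; the base temperature and the 24-hour profile are illustrative, not taken from the TRY data.

```python
def heating_degree_hours(hourly_temps_f, base_f=65.0):
    """Heating degree-hours: sum of (base - T) over hours below base."""
    return sum(base_f - t for t in hourly_temps_f if t < base_f)

# stylized 24-hour dry-bulb profile in deg F (illustrative values)
day = [50.0] * 8 + [60.0] * 8 + [70.0] * 8
hdh = heating_degree_hours(day)  # 8*15 + 8*5 = 160 degree-hours
```

    The bin and modified-bin methods aggregate the same hourly record differently (by temperature bin rather than chronologically), which is why the report supplies several summary forms of one TRY dataset.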

  18. Leisure-time activities--its program and importance in the institutionalized protection of old people.

    PubMed

    Ljubić, Marijana

    2003-12-01

    This paper is a preliminary report on a larger research project. Leisure activities programs and their importance have not yet been systematically investigated in Croatian nursing homes, so this study contributes to a better understanding of the area. A ten-year study of 60 old people showed that organized, suitable leisure activities can continually improve the quality of life of old people living in nursing homes, regardless of their medical condition or place of residence. The topic is prominent in gerontological science. The research applied modern qualitative and quantitative methods of gerontological research and thus marks a departure from the methodologically obsolete approaches used in this country so far, which relied on polls and simplified quantitative processing of collected data. The results are of practical use because programs have been elaborated to improve the quality of leisure time and active life-planning in nursing homes. Foundations for further scientific research have been laid, with specific goals focused on particular aspects of the problem. The paper thereby invites further hypotheses (e.g., on the proportion of intellectual activities, or of active versus passive activities) and opens the door to this kind of methodology in such studies, which should increase their number, since qualitative research methods have so far been neglected in our country.

  19. Electric Power Distribution System Model Simplification Using Segment Substitution

    DOE PAGES

    Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat; ...

    2017-09-20

    Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). Finally, in contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.

  20. Discontinuous Galerkin Methods for NonLinear Differential Systems

    NASA Technical Reports Server (NTRS)

    Barth, Timothy; Mansour, Nagi (Technical Monitor)

    2001-01-01

    This talk considers simplified finite element discretization techniques for first-order systems of conservation laws equipped with a convex (entropy) extension. Using newly developed techniques in entropy symmetrization theory, simplified forms of the discontinuous Galerkin (DG) finite element method have been developed and analyzed. The use of symmetrization variables yields numerical schemes which inherit global entropy stability properties of the PDE (partial differential equation) system. Central to the development of the simplified DG methods is the Eigenvalue Scaling Theorem which characterizes right symmetrizers of an arbitrary first-order hyperbolic system in terms of scaled eigenvectors of the corresponding flux Jacobian matrices. A constructive proof is provided for the Eigenvalue Scaling Theorem with detailed consideration given to the Euler equations of gas dynamics and extended conservation law systems derivable as moments of the Boltzmann equation. Using results from kinetic Boltzmann moment closure theory, we then derive and prove energy stability for several approximate DG fluxes which have practical and theoretical merit.

  1. A simplified method for assessing particle deposition rate in aircraft cabins

    NASA Astrophysics Data System (ADS)

    You, Ruoyu; Zhao, Bin

    2013-03-01

    Particle deposition in aircraft cabins is important to the exposure of passengers to particulate matter, as well as to airborne infectious diseases. In this study, a simplified method is proposed for initial and quick assessment of the particle deposition rate in aircraft cabins. The method includes: collecting the inclined angle, area, characteristic length, and freestream air velocity for each surface in a cabin; estimating the friction velocity from the characteristic length and freestream air velocity; modeling the particle deposition velocity using the empirical equation we developed previously; and then calculating the particle deposition rate. The particle deposition rates for the fully occupied, half-occupied, quarter-occupied and empty first-class cabin of the MD-82 commercial airliner were estimated. The results show that occupancy did not significantly influence the particle deposition rate of the cabin. Furthermore, a simplified human model can be used in the assessment with acceptable accuracy. Finally, the comparison shows that the particle deposition rates of aircraft cabins and other indoor environments are quite similar.
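
    The final step of the method, aggregating per-surface deposition into a whole-cabin rate, can be sketched as an area-weighted sum of deposition velocities divided by cabin volume. The surface areas and deposition velocities below are illustrative placeholders; the paper's empirical deposition-velocity equation (driven by the estimated friction velocity) is not reproduced here.

```python
def deposition_rate(surfaces, cabin_volume_m3):
    """Whole-cabin particle deposition rate (1/s): area-weighted sum of
    per-surface deposition velocities divided by cabin volume."""
    return sum(s["v_dep"] * s["area"] for s in surfaces) / cabin_volume_m3

# illustrative surfaces; v_dep would come from the authors' empirical
# equation for each surface's friction velocity and inclination
cabin = [
    {"name": "floor",   "area": 10.0, "v_dep": 1.0e-4},  # m^2, m/s
    {"name": "ceiling", "area": 10.0, "v_dep": 1.0e-5},
    {"name": "walls",   "area": 30.0, "v_dep": 5.0e-5},
]
rate = deposition_rate(cabin, cabin_volume_m3=30.0)
```

    Because the rate is a sum over surfaces, adding or removing occupants only changes it through their (relatively small) contribution to total area-weighted deposition, consistent with the abstract's finding that occupancy had little effect.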

  2. Electric Power Distribution System Model Simplification Using Segment Substitution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat

    Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). Finally, in contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.

  3. Chemical profiling approach to evaluate the influence of traditional and simplified decoction methods on the holistic quality of Da-Huang-Xiao-Shi decoction using high-performance liquid chromatography coupled with diode-array detection and time-of-flight mass spectrometry.

    PubMed

    Yan, Xuemei; Zhang, Qianying; Feng, Fang

    2016-04-01

    Da-Huang-Xiao-Shi decoction, consisting of Rheum officinale Baill, Mirabilitum, Phellodendron amurense Rupr. and Gardenia jasminoides Ellis, is a traditional Chinese medicine used for the treatment of jaundice. As described in "Jin Kui Yao Lue", a traditional multistep procedure was required to prepare Da-Huang-Xiao-Shi decoction, whereas a simplified one-step decoction has been used in recent reports. To investigate the chemical difference between the decoctions obtained by the traditional and simplified preparations, a sensitive and reliable approach of high-performance liquid chromatography coupled with diode-array detection and electrospray ionization time-of-flight mass spectrometry was established. As a result, a total of 105 compounds were detected and identified. Analysis of the chromatogram profiles of the two decoctions showed that many compounds in the simplified preparation differed markedly from those in the traditional preparation. These differences in constituents are bound to cause differences in the therapeutic effects of the two decoctions. The present study demonstrated that the preparation method significantly affects the holistic quality of traditional Chinese medicines and that the use of a suitable preparation method is crucial for these medicines to produce their specific clinical curative effects. These results elucidate the scientific basis of traditional preparation methods in Chinese medicines. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Development and Validation of a Simplified Renal Replacement Therapy Suitable for Prolonged Field Care in a Porcine (Sus scrofa) Model of Acute Kidney Injury

    DTIC Science & Technology

    2018-03-01

    Objectives/Background: Acute kidney injury (AKI) is a serious…

  5. A Simplified Technique for Evaluating Human "CCR5" Genetic Polymorphism

    ERIC Educational Resources Information Center

    Falteisek, Lukáš; Cerný, Jan; Janštová, Vanda

    2013-01-01

    To involve students in thinking about the problem of AIDS (which is important in the view of nondecreasing infection rates), we established a practical lab using a simplified adaptation of Thomas's (2004) method to determine the polymorphism of HIV co-receptor CCR5 from students' own epithelial cells. CCR5 is a receptor involved in inflammatory…

  6. A simplified dynamic model of the T700 turboshaft engine

    NASA Technical Reports Server (NTRS)

    Duyar, Ahmet; Gu, Zhen; Litt, Jonathan S.

    1992-01-01

    A simplified open-loop dynamic model of the T700 turboshaft engine, valid within the normal operating range of the engine, is developed. This model is obtained by linking linear state space models obtained at different engine operating points. Each linear model is developed from a detailed nonlinear engine simulation using a multivariable system identification and realization method. The simplified model may be used with a model-based real time diagnostic scheme for fault detection and diagnostics, as well as for open loop engine dynamics studies and closed loop control analysis utilizing a user generated control law.

  7. Analysis of temperature distribution in liquid-cooled turbine blades

    NASA Technical Reports Server (NTRS)

    Livingood, John N B; Brown, W Byron

    1952-01-01

    The temperature distribution in liquid-cooled turbine blades determines the amount of cooling required to reduce the blade temperature to permissible values at specified locations. This report presents analytical methods for computing temperature distributions in liquid-cooled turbine blades, or in simplified shapes used to approximate sections of the blade. The individual analyses are first presented in terms of their mathematical development. By means of numerical examples, comparisons are made between simplified and more complete solutions and the effects of several variables are examined. Nondimensional charts to simplify some temperature-distribution calculations are also given.

  8. In vivo two-dimensional NMR correlation spectroscopy

    NASA Astrophysics Data System (ADS)

    Kraft, Robert A.

    1999-10-01

    The poor resolution of in-vivo one-dimensional nuclear magnetic resonance spectroscopy (NMR) has limited its clinical potential. Currently, only the large singlet methyl resonances arising from N-acetyl aspartate (NAA), choline, and creatine are quantitated in a clinical setting. Other metabolites such as myo-inositol, glutamine, glutamate, lactate, and γ-aminobutyric acid (GABA) are of clinical interest but quantitation is difficult due to the overlapping resonances and limited spectral resolution. To improve the spectral resolution and distinguish between overlapping resonances, a series of two-dimensional chemical shift correlation spectroscopy experiments were developed for a 1.5 Tesla clinical imaging magnet. Two-dimensional methods are attractive for in vivo spectroscopy due to their ability to unravel overlapping resonances with the second dimension, simplifying the interpretation and quantitation of low field NMR spectra. Two-dimensional experiments acquired with mixed-mode line shape negate the advantages of the second dimension. For this reason, a new experiment, REVOLT, was developed to achieve absorptive mode line shape in both dimensions. Absorptive mode experiments were compared to mixed mode experiments with respect to sensitivity, resolution, and water suppression. Detailed theoretical and experimental calculations of the optimum spin lock and radio frequency power deposition were performed. Two-dimensional spectra were acquired from human bone marrow and human brain tissue. The human brain tissue spectra clearly reveal correlations among the coupled spins of NAA, glutamine, glutamate, lactate, GABA, aspartate and myo-inositol obtained from a single experiment of 23 minutes from a volume of 59 mL. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)

  9. Quantitative Profiling of Major Neutral Lipid Classes in Human Meibum by Direct Infusion Electrospray Ionization Mass Spectrometry

    PubMed Central

    Chen, Jianzhong; Green, Kari B.; Nichols, Kelly K.

    2013-01-01

    Purpose. The purpose of this investigation was to better understand lipid composition in human meibum. Methods. Intact lipids in meibum samples were detected by direct infusion electrospray ionization mass spectrometry (ESI-MS) analysis in positive detection mode using sodium iodide (NaI) as an additive. The peak intensities of all major types of lipid species, that is, wax esters (WEs), cholesteryl esters (CEs), and diesters (DEs) were corrected for peak overlapping and isotopic distribution; an additional ionization efficiency correction was performed for WEs and CEs, which was simplified by the observation that the corresponding ionization efficiency was primarily dependent on the specific lipid class and saturation degree of the lipids while independent of the carbon chain length. A set of WE and CE standards was spiked in meibum samples for ionization efficiency determination and absolute quantitation. Results. The absolute amount (μmol/mg) for each of 51 WEs and 31 CEs in meibum samples was determined. The summed masses for 51 WEs and 31 CEs accounted for 48 ± 4% and 40 ± 2%, respectively, of the total meibum lipids. The mass percentages of saturated and unsaturated species were determined to be 75 ± 2% and 25 ± 1% for CEs and 14 ± 1% and 86 ± 1% for WEs. The profiles for two types of DEs were also obtained, which include 42 α,ω Type II DEs, and 21 ω Type I-St DEs. Conclusions. Major neutral lipid classes in meibum samples were quantitatively profiled by ESI-MS analysis with NaI additive. PMID:23847307

  10. A simplified analytic form for generation of axisymmetric plasma boundaries

    DOE PAGES

    Luce, Timothy C.

    2017-02-23

    An improved method has been formulated for generating analytic boundary shapes as input for axisymmetric MHD equilibria. This method uses the family of superellipses as the basis function, as previously introduced. The improvements are a simplified notation, reduction of the number of simultaneous nonlinear equations to be solved, and the realization that not all combinations of input parameters admit a solution to the nonlinear constraint equations. The method tests for the existence of a self-consistent solution and, when no solution exists, it uses a deterministic method to find a nearby solution. As a result, examples of generation of boundaries, including tests with an equilibrium solver, are given.
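
    The superellipse basis itself is easy to sketch. The code below generates points on |x/a|^n + |y/b|^n = 1 using a single exponent n; the paper's method additionally varies the shape per quadrant and solves nonlinear constraint equations to match input shape parameters, which is not reproduced here.

```python
import numpy as np

def superellipse_boundary(a, b, n, npts=200):
    """Points on the superellipse |x/a|^n + |y/b|^n = 1, parametrized so
    the implicit equation is satisfied identically."""
    t = np.linspace(0.0, 2.0 * np.pi, npts, endpoint=False)
    x = a * np.sign(np.cos(t)) * np.abs(np.cos(t)) ** (2.0 / n)
    y = b * np.sign(np.sin(t)) * np.abs(np.sin(t)) ** (2.0 / n)
    return x, y

x, y = superellipse_boundary(a=2.0, b=1.0, n=3.0)  # n=2 recovers an ellipse
residual = np.max(np.abs(np.abs(x / 2.0) ** 3 + np.abs(y / 1.0) ** 3 - 1.0))
```

    The parametrization maps |x/a|^n to cos²t and |y/b|^n to sin²t, so every generated point lies on the curve to machine precision, a convenient property when feeding the boundary to an equilibrium solver.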

  11. A simplified application of the method of operators to the calculation of disturbed motions of an airplane

    NASA Technical Reports Server (NTRS)

    Jones, Robert T

    1937-01-01

    A simplified treatment of the application of Heaviside's operational methods to problems of airplane dynamics is given. Certain graphical methods and logarithmic formulas that lessen the amount of computation involved are explained. The problem representing a gust disturbance or control manipulation is taken up and it is pointed out that in certain cases arbitrary control manipulations may be dealt with as though they imposed specific constraints on the airplane, thus avoiding the necessity of any integration. The application of the calculations described in the text is illustrated by several examples chosen to show the use of the methods and the practicability of the graphical and logarithmic computations described.

  12. A simplified analytic form for generation of axisymmetric plasma boundaries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luce, Timothy C.

    An improved method has been formulated for generating analytic boundary shapes as input for axisymmetric MHD equilibria. This method uses the family of superellipses as the basis function, as previously introduced. The improvements are a simplified notation, reduction of the number of simultaneous nonlinear equations to be solved, and the realization that not all combinations of input parameters admit a solution to the nonlinear constraint equations. The method tests for the existence of a self-consistent solution and, when no solution exists, it uses a deterministic method to find a nearby solution. As a result, examples of generation of boundaries, including tests with an equilibrium solver, are given.

  13. Cloud field classification based upon high spatial resolution textural features. II - Simplified vector approaches

    NASA Technical Reports Server (NTRS)

    Chen, D. W.; Sengupta, S. K.; Welch, R. M.

    1989-01-01

    This paper compares the results of cloud-field classification derived from two simplified vector approaches, the Sum and Difference Histogram (SADH) and the Gray Level Difference Vector (GLDV), with the results produced by the Gray Level Cooccurrence Matrix (GLCM) approach described by Welch et al. (1988). It is shown that the SADH method produces accuracies equivalent to those obtained using the GLCM method, while the GLDV method fails to resolve error clusters. Compared to the GLCM method, the SADH method leads to a 31 percent saving in run time and a 50 percent saving in storage requirements, while the GLDV approach leads to a 40 percent saving in run time and an 87 percent saving in storage requirements.
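
    A minimal sketch of the SADH idea, in the style of Unser's sum-and-difference histograms on which the approach is based: for a fixed pixel displacement (dx, dy), histograms of pixel-pair sums and differences replace the full co-occurrence matrix, and texture features are read off from them. The two features shown are standard ones; those used in the paper may differ.

```python
import numpy as np

def sadh_features(img, dx=1, dy=0):
    """Mean and contrast from sum-and-difference histograms for the
    pixel displacement (dx, dy)."""
    h, w = img.shape
    a = img[0:h - dy, 0:w - dx].astype(int)   # reference pixels
    b = img[dy:h, dx:w].astype(int)           # displaced pixels
    s = (a + b).ravel()
    d = (a - b).ravel()
    ps = np.bincount(s) / s.size              # sum histogram P_s
    pd = np.bincount(d - d.min()) / d.size    # difference histogram P_d
    mean = 0.5 * np.sum(np.arange(ps.size) * ps)
    contrast = np.sum((np.arange(pd.size) + d.min()) ** 2 * pd)
    return mean, contrast

flat = np.full((8, 8), 3)                     # no texture: contrast 0
mean_f, contrast_f = sadh_features(flat)
stripes = np.tile(np.array([0, 1]), (8, 4))   # 1-px vertical stripes
mean_s, contrast_s = sadh_features(stripes)
```

    Storing two 1-D histograms instead of a 2-D co-occurrence matrix is the source of the storage savings the abstract reports for SADH relative to GLCM.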

  14. Simplification and improvement of protein detection in two-dimensional electrophoresis gels with SERVA HPE™ lightning red.

    PubMed

    Griebel, Anja; Obermaier, Christian; Westermeier, Reiner; Moche, Martin; Büttner, Knut

    2013-07-01

    A new fluorescent amino-reactive dye has been tested for both labelling proteins prior to electrophoretic separations and between the two steps of two-dimensional electrophoresis. A series of experiments showed that the labelling of lysines with this dye is compatible with all standard additives used for sample preparation, including reducing substances and carrier ampholytes. Using this dye for pre-labelling considerably simplifies the electrophoresis and detection workflow and provides highly sensitive and quantitative visualisation of proteins.

  15. Early dynamic 18F-FDG PET to detect hyperperfusion in hepatocellular carcinoma liver lesions.

    PubMed

    Schierz, Jan-Henning; Opfermann, Thomas; Steenbeck, Jörg; Lopatta, Eric; Settmacher, Utz; Stallmach, Andreas; Marlowe, Robert J; Freesmeyer, Martin

    2013-06-01

    In addition to angiographic data on vascularity and vascular access, demonstration of hepatocellular carcinoma (HCC) liver nodule hypervascularization is a prerequisite for certain intrahepatic antitumor therapies. Early dynamic (ED) (18)F-FDG PET/CT could serve this purpose when the current standard method, contrast-enhanced (CE) CT, or other CE morphologic imaging modalities are unsuitable. A recent study showed ED (18)F-FDG PET/CT efficacy in this setting but applied a larger-than-standard (18)F-FDG activity and an elaborate protocol likely to hinder routine use. We developed a simplified protocol using standard activities and easily generated visual and descriptive or quantitative endpoints. This pilot study assessed the ability of these endpoints to detect HCC hyperperfusion and, thereby, evaluated the suitability of the protocol in everyday practice. Twenty-seven patients with 34 HCCs (diameter ≥ 1.5 cm) with hypervascularization on 3-phase CE CT underwent liver ED (18)F-FDG PET for 240 s, starting with bolus injection of 250 MBq of (18)F-FDG. Four frames at 15-s intervals, followed by 3 frames at 60-s intervals, were reconstructed. Endpoints included focal tracer accumulation in the first 4 frames (60 s), subsequent focal washout, and visual and quantitative differences between tumor and liver regions of interest in maximum and mean ED standardized uptake value (ED SUVmax and ED SUVmean, respectively) 240-s time-activity curves. All 34 lesions were identified by early focal (18)F-FDG accumulation and faster time-to-peak ED SUVmax or ED SUVmean than in nontumor tissue. Tumor peak ED SUVmax and ED SUVmean exceeded liver levels in 85% and 53%, respectively, of lesions. Nadir tumor signal showed no consistent pattern relative to nontumor signal. HCC had a significantly shorter time to peak and significantly faster rate to peak for both ED SUVmax and ED SUVmean curves and a significantly higher peak ED SUVmax but not peak ED SUVmean than the liver.
This pilot study provided proof of principle that our simplified ED (18)F-FDG PET/CT protocol includes endpoints that effectively detect HCC hypervascularization; this finding suggests that the protocol can be used routinely.
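The time-to-peak and rate-to-peak endpoints can be illustrated with a toy time-activity curve. Frame timing follows the protocol above (4 frames of 15 s, then 3 frames of 60 s), but the SUV values and the rate-to-peak definition used here are hypothetical:

```python
def time_to_peak(times_s, suv):
    """Return (time of peak, rate to peak) for a time-activity curve.
    Rate-to-peak is taken here as peak value / time-to-peak; the
    paper's exact definition may differ."""
    peak = max(suv)
    t_peak = times_s[suv.index(peak)]
    return t_peak, peak / t_peak

# Frame end times for 4 x 15 s frames followed by 3 x 60 s frames.
times = [15, 30, 45, 60, 120, 180, 240]
tumor = [1.2, 3.5, 4.8, 4.1, 3.0, 2.6, 2.4]   # hypothetical ED SUVmax values
liver = [0.8, 1.5, 2.2, 2.9, 3.1, 3.0, 2.9]

t_tumor, r_tumor = time_to_peak(times, tumor)
t_liver, r_liver = time_to_peak(times, liver)
# An earlier, steeper tumor peak is the hyperperfusion signature described above.
```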

  16. Crustal Gravitational Potential Energy Change and Subduction Earthquakes

    NASA Astrophysics Data System (ADS)

    Zhu, P. P.

    2017-05-01

    Crustal gravitational potential energy (GPE) change induced by earthquakes is an important subject in geophysics and seismology. For the past forty years, research on this subject remained at the stage of qualitative estimates. In recent years, 3D dynamic faulting theory has provided a quantitative solution. The theory deduces a quantitative formula for the crustal GPE change using tensor analysis in the principal stress system. This formula contains only the vertical principal stress, rupture area, slip, dip, and rake; it does not include the horizontal principal stresses. It involves only simple mathematical operations and no complicated surface or volume integrals. Moreover, the hanging-wall vertical moving (up or down) height has a very simple expression containing only slip, dip, and rake. These results are significant for investigating crustal GPE change. Commonly, the vertical principal stress is related to the gravitational field; substituting the relationship between the vertical principal stress and gravitational force into the above formula yields an alternative formula for crustal GPE change. The alternative formula indicates that even without in situ borehole stress measurements, scientists can still quantitatively calculate crustal GPE change. The 3D dynamic faulting theory can be used for research on continental fault earthquakes; it can also be applied to subduction earthquakes between oceanic and continental plates. Subduction earthquakes fall into three types: (a) crust only on the vertical up side of the rupture area; (b) crust and seawater both on the vertical up side of the rupture area; (c) crust only on the vertical up side of part of the rupture area, and crust and seawater both on the vertical up side of the remaining rupture area. For each type we provide a quantitative formula for the crustal GPE change. We also establish a simplified model (called the CRW Model): for Type B and Type C subduction earthquakes, if the average seawater depth on the vertical up side of the rupture area is less than a tenth of the hypocenter depth, then the seawater above the continental plate is approximated by the upper crustal material of the continental plate. A quantitative formula for the crustal GPE change is also provided for this model. Finally, for the 16 September 2015 Mw 8.3 Illapel, Chile earthquake, we apply the CRW Model and obtain the following results: the crustal GPE change is 1.8 × 10^19 J, and the hanging wall moves up 1.9 m with respect to the footwall. We believe this paper might be the first report of a quantitative solution of the crustal GPE change for this subduction earthquake; our results and related method will be helpful for research into earthquakes in the Peru-Chile subduction zone and the Andean orogeny. In short, this study expounds a new method for quantitatively determining the crustal GPE change caused by subduction earthquakes, different from other existing methods.
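The abstract does not reproduce the paper's formulas, but the claim that the hanging-wall vertical height depends only on slip, dip, and rake matches standard fault geometry. The sketch below assumes the simple forms Δh = s·sin(rake)·sin(dip) and ΔE = σ_v·A·Δh; these are an illustrative reading of the ingredient list above, not the paper's exact expressions, and all numbers are hypothetical:

```python
import math

def hanging_wall_uplift(slip_m, dip_deg, rake_deg):
    """Vertical motion of the hanging wall relative to the footwall.
    The slip vector's vertical component on a plane of dip delta with
    rake lambda is s*sin(lambda)*sin(delta) (standard fault geometry,
    consistent with the abstract's slip/dip/rake-only claim)."""
    return slip_m * math.sin(math.radians(rake_deg)) * math.sin(math.radians(dip_deg))

def gpe_change(sigma_v_pa, area_m2, slip_m, dip_deg, rake_deg):
    """Assumed form dE = sigma_v * A * dh -- a plausible reading of the
    abstract, NOT the paper's exact formula."""
    return sigma_v_pa * area_m2 * hanging_wall_uplift(slip_m, dip_deg, rake_deg)

# Hypothetical thrust event: pure reverse slip (rake 90 deg) on a
# 20-degree-dipping plane.
dh = hanging_wall_uplift(slip_m=5.0, dip_deg=20.0, rake_deg=90.0)
```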

  17. Simplification of a scoring system maintained overall accuracy but decreased the proportion classified as low risk.

    PubMed

    Sanders, Sharon; Flaws, Dylan; Than, Martin; Pickering, John W; Doust, Jenny; Glasziou, Paul

    2016-01-01

    Scoring systems are developed to assist clinicians in making a diagnosis. However, their uptake is often limited because they are cumbersome to use, requiring information on many predictors, or complicated calculations. We examined whether, and how, simplifications affected the performance of a validated score for identifying adults with chest pain in an emergency department who have low risk of major adverse cardiac events. We simplified the Emergency Department Assessment of Chest pain Score (EDACS) by three methods: (1) giving equal weight to each predictor included in the score, (2) reducing the number of predictors, and (3) using both methods--giving equal weight to a reduced number of predictors. The diagnostic accuracy of the simplified scores was compared with the original score in the derivation (n = 1,974) and validation (n = 909) data sets. There was no difference in the overall accuracy of the simplified versions of the score compared with the original EDACS as measured by the area under the receiver operating characteristic curve (0.74 to 0.75 for simplified versions vs. 0.75 for the original score in the validation cohort). With score cut-offs set to maintain the sensitivity of the combination of score and tests (electrocardiogram and cardiac troponin) at a level acceptable to clinicians (99%), simplification reduced the proportion of patients classified as low risk from 50% with the original score to between 22% and 42%. Simplification of a clinical score resulted in similar overall accuracy but reduced the proportion classified as low risk and therefore eligible for early discharge compared with the original score. Whether the trade-off is acceptable, will depend on the context in which the score is to be used. Developers of clinical scores should consider simplification as a method to increase uptake, but further studies are needed to determine the best methods of deriving and evaluating simplified scores. Copyright © 2016 Elsevier Inc. All rights reserved.
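Simplification method (1), equal weighting, and the AUC comparison can be sketched with toy data; the real EDACS predictors and weights are not reproduced here, so the names and numbers below are hypothetical:

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    the probability a random positive outranks a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical binary predictors with made-up original weights.
weights = [6, 4, 2, 3]
patients = [([1, 0, 1, 0], 1), ([1, 1, 1, 1], 1), ([0, 0, 1, 0], 0),
            ([0, 1, 0, 0], 0), ([0, 0, 0, 1], 1), ([0, 0, 0, 0], 0)]

orig  = [sum(w * x for w, x in zip(weights, xs)) for xs, _ in patients]
equal = [sum(xs) for xs, _ in patients]   # method (1): one point per predictor
ys    = [y for _, y in patients]
```

On this toy cohort the weighted and equal-weight scores discriminate equally well, mirroring the abstract's finding of unchanged overall accuracy; the low-risk cut-off behavior is a separate question.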

  18. Two new oro-cervical radiographic indexes for chronological age estimation: a pilot study on an Italian population.

    PubMed

    Lajolo, Carlo; Giuliani, Michele; Cordaro, Massimo; Marigo, Luca; Marcelli, Antonio; Fiorillo, Fabio; Pascali, Vincenzo L; Oliva, Antonio

    2013-10-01

    Chronological age (CA) plays a fundamental role in forensic dentistry (i.e. personal identification and evaluation of imputability). Even though several studies outlined the association between biological and chronological age, there is still great variability in the estimates. The aim of this study was to determine the possible correlation between biological age and CA through the use of two new radiographic indexes (Oro-Cervical Radiographic Simplified Score - OCRSS and Oro-Cervical Radiographic Simplified Score Without Wisdom Teeth - OCRSSWWT) that are based on the oro-cervical area. Sixty Italian Caucasian individuals were divided into 3 groups according to their CA: Group 1: CAG 1 = 8-14 yr; Group 2: CAG 2 = 14-18 yr; Group 3: CAG 3 = 18-25 yr; panorexes and standardised cephalograms were evaluated according to Demirjian's method for dental age calculation (DM), the Cervical Vertebral Maturation method for skeletal age calculation (CVMS), and Third Molar Development for age estimation (TMD). The stages of each method were simplified in order to generate OCRSS, which summarized the simplified scores of the three methods, and OCRSSWWT, which summarized the simplified DM and CVMS scores. There was a significant correlation between OCRSS and CAGs (Slope = 0.954, p < 0.001, R-squared = 0.79) and between OCRSSWWT and CAGs (Slope = 0.863, p < 0.001, R-squared = 0.776). Even though the indexes, especially OCRSS, appear to be highly reliable, growth variability among individuals can deeply influence the anatomical changes from childhood to adulthood. A multi-disciplinary approach that considers many different biomarkers could help make radiological age determination more reliable when it is used to predict CA. Copyright © 2013 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  19. Application of the principal fractional meta-trigonometric functions for the solution of linear commensurate-order time-invariant fractional differential equations.

    PubMed

    Lorenzo, C F; Hartley, T T; Malti, R

    2013-05-13

    A new and simplified method for the solution of linear constant coefficient fractional differential equations of any commensurate order is presented. The solutions are based on the R-function and on specialized Laplace transform pairs derived from the principal fractional meta-trigonometric functions. The new method simplifies the solution of such fractional differential equations and presents the solutions in the form of real functions as opposed to fractional complex exponential functions, and thus is directly applicable to real-world physics.
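A hedged sketch of the R-function at the heart of the method, evaluated by naive series truncation (v = 0 case of Lorenzo and Hartley's definition; for q = 1 it reduces to the ordinary exponential, which serves as a sanity check):

```python
import math

def r_function(q, a, t, terms=80):
    """Truncated series for the R-function (v = 0):
        R_q(a, t) = sum_{n>=0} a^n * t^((n+1)*q - 1) / Gamma((n+1)*q)
    For q = 1 this reduces to exp(a*t).  A naive series is fine for
    small |a*t^q|; production code would need better convergence control."""
    return sum(a ** n * t ** ((n + 1) * q - 1) / math.gamma((n + 1) * q)
               for n in range(terms))

# The solution of the commensurate-order equation D^q x = a*x is built
# from R_q; here we only check the classical q = 1 limit.
val = r_function(1.0, -0.5, 2.0)   # should approach exp(-1)
```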

  20. Transosseous fixation of pediatric displaced mandibular fractures with polyglactin resorbable suture--a simplified technique.

    PubMed

    Chandan, Sanjay; Halli, Rajshekhar; Joshi, Samir; Chhabaria, Gaurav; Setiya, Sneha

    2013-11-01

    Management of pediatric mandibular fractures presents a unique challenge to surgeons in terms of its numerous variations compared to adults. Both conservative and open methods have been advocated with their obvious limitations and complications. However, conservative modalities may not be possible in grossly displaced fractures, which necessitate the open method of fixation. We present a novel and simplified technique of transosseous fixation of displaced pediatric mandibular fractures with polyglactin resorbable suture, which provides adequate stability without any interference with tooth buds and which is easy to master.

  1. Approximate method for calculating free vibrations of a large-wind-turbine tower structure

    NASA Technical Reports Server (NTRS)

    Das, S. C.; Linscott, B. S.

    1977-01-01

    A set of ordinary differential equations was derived for a simplified structural dynamic lumped-mass model of a typical large-wind-turbine tower structure. Dunkerley's equation was used to arrive at a solution for the fundamental natural frequencies of the tower in bending and torsion. The ERDA-NASA 100-kW wind turbine tower structure was modeled, and the fundamental frequencies were determined by the simplified method described. The approximate fundamental natural frequencies for the tower agree within 18 percent with test data and analytical predictions.
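Dunkerley's equation combines the frequencies each lumped mass would have acting alone into a lower-bound estimate of the system fundamental frequency. A minimal sketch with hypothetical per-mass tower frequencies:

```python
import math

def dunkerley(frequencies_hz):
    """Dunkerley's estimate of the fundamental natural frequency from
    the frequencies f_i each lumped mass would have acting alone:
        1/f^2 ~= sum_i 1/f_i^2
    The result is a lower bound on the true fundamental frequency."""
    return 1.0 / math.sqrt(sum(1.0 / f ** 2 for f in frequencies_hz))

# Hypothetical single-mass bending frequencies for a lumped-mass tower model.
f_est = dunkerley([2.4, 5.1, 9.8])
```

Note that the estimate is dominated by, and always below, the lowest individual frequency.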

  2. Simplified solution for point contact deformation between two elastic solids

    NASA Technical Reports Server (NTRS)

    Brewe, D. E.; Hamrock, B. J.

    1976-01-01

    A linear regression by the method of least squares is made on the geometric variables that occur in the equation for point contact deformation. The ellipticity and the complete elliptic integrals of the first and second kind are expressed as a function of the x, y-plane principal radii. The ellipticity was varied from 1 (circular contact) to 10 (a configuration approaching line contact). These simplified equations enable one to calculate easily the point-contact deformation to within 3 percent without resorting to charts or numerical methods.
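The flavor of such simplified equations can be sketched with the commonly quoted curve fits for the ellipticity and the elliptic integrals as functions of the radius ratio; the constants below are the widely used approximations, and the paper's own regression constants may differ slightly:

```python
import math

def approx_contact_parameters(rx, ry):
    """Curve-fit approximations in the spirit of Brewe & Hamrock for
    point contact: ellipticity k and complete elliptic integrals E, F
    as functions of the radius ratio alpha = ry/rx (with ry >= rx).
    At alpha = 1 (circular contact) both integrals reduce to pi/2."""
    alpha = ry / rx
    q = math.pi / 2.0 - 1.0
    k = alpha ** (2.0 / math.pi)             # ellipticity
    e = 1.0 + q / alpha                      # ~ elliptic integral E
    f = math.pi / 2.0 + q * math.log(alpha)  # ~ elliptic integral F
    return k, e, f

# Hypothetical ball-on-groove geometry with radius ratio alpha = 5.
k, e, f = approx_contact_parameters(rx=0.01, ry=0.05)
```

These closed forms are what let the deformation be computed "without resorting to charts or numerical methods."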

  3. Simplified form of tinnitus retraining therapy in adults: a retrospective study

    PubMed Central

    Aazh, Hashir; Moore, Brian CJ; Glasberg, Brian R

    2008-01-01

    Background Since the first description of tinnitus retraining therapy (TRT), clinicians have modified and customised the method of TRT in order to suit their practice and their patients. A simplified form of TRT is used at Ealing Primary Care Trust Audiology Department. Simplified TRT is different from TRT in the type and (shorter) duration of the counseling but is similar to TRT in the application of sound therapy except for patients exhibiting tinnitus with no hearing loss and no decreased sound tolerance (wearable sound generators were not mandatory or recommended here, whereas they are for TRT). The main goal of this retrospective study was to assess the efficacy of simplified TRT. Methods Data were collected from a series of 42 consecutive patients who underwent simplified TRT for a period of 3 to 23 months. Perceived tinnitus handicap was measured by the Tinnitus Handicap Inventory (THI) and perceived tinnitus loudness, annoyance and the effect of tinnitus on life were assessed through the Visual Analog Scale (VAS). Results The mean THI and VAS scores were significantly decreased after 3 to 23 months of treatment. The mean decline of the THI score was 45 (SD = 22) and the difference between pre- and post-treatment scores was statistically significant. The mean decline of the VAS scores was 1.6 (SD = 2.1) for tinnitus loudness, 3.6 (SD = 2.6) for annoyance, and 3.9 (SD = 2.3) for effect on life. The differences between pre- and post-treatment VAS scores were statistically significant for tinnitus loudness, annoyance, and effect on life. The decline of THI scores was not significantly correlated with age and duration of tinnitus. Conclusion The results suggest that benefit may be obtained from a substantially simplified form of TRT. PMID:18980672

  4. Investigating Geosparql Requirements for Participatory Urban Planning

    NASA Astrophysics Data System (ADS)

    Mohammadi, E.; Hunter, A. J. S.

    2015-06-01

    We propose that participatory GIS (PGIS) activities including participatory urban planning can be made more efficient and effective if spatial reasoning rules are integrated with PGIS tools to simplify engagement for public contributors. Spatial reasoning is used to describe relationships between spatial entities. These relationships can be evaluated quantitatively or qualitatively using geometrical algorithms, ontological relations, and topological methods. Semantic web services utilize tools and methods that can facilitate spatial reasoning. GeoSPARQL, introduced by OGC, is a spatial reasoning standard used to make declarations about entities (graphical contributions) that take the form of a subject-predicate-object triple or statement. GeoSPARQL uses three basic methods to infer topological relationships between spatial entities, including: OGC's simple feature topology, RCC8, and the DE-9IM model. While these methods are comprehensive in their ability to define topological relationships between spatial entities, they are often inadequate for defining complex relationships that exist in the spatial realm. Particularly relationships between urban entities, such as those between a bus route, the collection of associated bus stops and their overall surroundings as an urban planning pattern. In this paper we investigate common qualitative spatial reasoning methods as a preliminary step to enhancing the capabilities of GeoSPARQL in an online participatory GIS framework in which reasoning is used to validate plans based on standard patterns that can be found in an efficient/effective urban environment.
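The topological predicates GeoSPARQL evaluates (sfContains, sfWithin, and the DE-9IM relations) ultimately reduce to geometric tests between entities. A toy stand-in, assuming simple (non-self-intersecting) polygons and using even-odd ray casting to decide containment; a real PGIS triple store would delegate this to its GeoSPARQL engine:

```python
def contains(polygon, point):
    """Even-odd ray-casting test: does `polygon` (list of (x, y)
    vertices) contain `point`?  A toy stand-in for the sfContains /
    DE-9IM predicates GeoSPARQL evaluates between spatial entities."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                      # edge crosses the ray's y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# A contributed bus stop must lie within its planning zone; when the test
# passes, a triple like (:stop1 geo:sfWithin :zoneA) could be asserted.
zone_a = [(0, 0), (4, 0), (4, 3), (0, 3)]
stop_ok = contains(zone_a, (1.0, 1.0))
stop_bad = contains(zone_a, (5.0, 1.0))
```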

  5. PCV2 on the spot-A new method for the detection of single porcine circovirus type 2 secreting cells.

    PubMed

    Fossum, Caroline; Hjertner, Bernt; Lövgren, Tanja; Fuxler, Lisbeth; Charerntantanakul, Wasin; Wallgren, Per

    2014-02-01

    A porcine circovirus type 2 SPOT (PCV2-SPOT) assay was established to enumerate virus-secreting lymphocytes obtained from naturally infected pigs. The assay is based on the same principle as general ELISPOT assays but instead of detecting cytokine or immunoglobulin secretion, PCV2 particles are immobilized and detected as filter spots. The method was used to evaluate the influence of various cell activators on the PCV2 secretion in vitro and was also applied to study the PCV2 secretion by lymphocytes obtained from pigs in healthy herds and in a herd afflicted by postweaning multisystemic wasting disease (PMWS). Peripheral blood mononuclear cells (PBMCs) obtained from a pig with severe PMWS produced PCV2-SPOTs spontaneously whereas PBMCs obtained from pigs infected subclinically only generated PCV2-SPOTs upon in vitro stimulation. The PCV2 secretion potential was related to the PCV2 DNA content in the PBMCs as determined by two PCV2 real-time PCR assays, developed to differentiate between Swedish PCV2 genogroups 1 (PCV2a) and 3 (PCV2b). Besides the current application these qPCRs could simplify future epidemiological studies and allow genogroup detection/quantitation in dual infection experiments and similar studies. The developed PCV2-SPOT assay offers a semi-quantitative approach to evaluate the potential of PCV2-infected porcine cells to release PCV2 viral particles as well as a system to evaluate the ability of different cell types or compounds to affect PCV2 replication and secretion. Copyright © 2013 Elsevier B.V. All rights reserved.

  6. The quantification of spermatozoa by real-time quantitative PCR, spectrophotometry, and spermatophore cap size.

    PubMed

    Doyle, Jacqueline M; McCormick, Cory R; DeWoody, J Andrew

    2011-01-01

    Many animals, such as crustaceans, insects, and salamanders, package their sperm into spermatophores, and the number of spermatozoa contained in a spermatophore is relevant to studies of sexual selection and sperm competition. We used two molecular methods, real-time quantitative polymerase chain reaction (RT-qPCR) and spectrophotometry, to estimate sperm numbers from spermatophores. First, we designed gene-specific primers that produced a single amplicon in four species of ambystomatid salamanders. A standard curve generated from cloned amplicons revealed a strong positive relationship between template DNA quantity and cycle threshold, suggesting that RT-qPCR could be used to quantify sperm in a given sample. We then extracted DNA from multiple Ambystoma maculatum spermatophores, performed RT-qPCR on each sample, and estimated template copy numbers (i.e. sperm number) using the standard curve. Second, we used spectrophotometry to determine the number of sperm per spermatophore by measuring DNA concentration relative to the genome size. We documented a significant positive relationship between the estimates of sperm number based on RT-qPCR and those based on spectrophotometry. When these molecular estimates were compared to spermatophore cap size, which in principle could predict the number of sperm contained in the spermatophore, we also found a significant positive relationship between sperm number and spermatophore cap size. This linear model allows estimates of sperm number strictly from cap size, an approach which could greatly simplify the estimation of sperm number in future studies. These methods may help explain variation in fertilization success where sperm competition is mediated by sperm quantity. © 2010 Blackwell Publishing Ltd.
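The standard-curve step can be sketched directly: fit Ct against log10 copy number for a dilution series, then invert the line to estimate template (sperm) number from an observed Ct. All numbers below are hypothetical, with the slope chosen near the -3.32 cycles per decade expected at 100% PCR efficiency:

```python
def fit_standard_curve(log10_copies, ct):
    """Least-squares line Ct = slope * log10(copies) + intercept,
    fitted to a dilution series of cloned amplicons."""
    n = len(ct)
    mx = sum(log10_copies) / n
    my = sum(ct) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_copies, ct))
    sxx = sum((x - mx) ** 2 for x in log10_copies)
    slope = sxy / sxx
    return slope, my - slope * mx

def copies_from_ct(ct, slope, intercept):
    """Invert the standard curve to estimate template copy number."""
    return 10.0 ** ((ct - intercept) / slope)

# Hypothetical 10-fold dilution series (10^3 .. 10^7 copies).
logs = [3, 4, 5, 6, 7]
cts = [33.4, 30.1, 26.8, 23.5, 20.2]
slope, intercept = fit_standard_curve(logs, cts)
estimate = copies_from_ct(25.0, slope, intercept)   # sperm per spermatophore sample
```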

  7. Efficient calculation of the polarizability: a simplified effective-energy technique

    NASA Astrophysics Data System (ADS)

    Berger, J. A.; Reining, L.; Sottile, F.

    2012-09-01

    In a recent publication [J.A. Berger, L. Reining, F. Sottile, Phys. Rev. B 82, 041103(R) (2010)] we introduced the effective-energy technique to calculate in an accurate and numerically efficient manner the GW self-energy as well as the polarizability, which is required to evaluate the screened Coulomb interaction W. In this work we show that the effective-energy technique can be used to further simplify the expression for the polarizability without a significant loss of accuracy. In contrast to standard sum-over-state methods where huge summations over empty states are required, our approach only requires summations over occupied states. The three simplest approximations we obtain for the polarizability are explicit functionals of an independent- or quasi-particle one-body reduced density matrix. We provide evidence of the numerical accuracy of this simplified effective-energy technique as well as an analysis of our method.

  8. Simplified model of mean double step (MDS) in human body movement

    NASA Astrophysics Data System (ADS)

    Dusza, Jacek J.; Wawrzyniak, Zbigniew M.; Mugarra González, C. Fernando

    In this paper we present a simplified and useful model of human body movement based on the full gait cycle description, called the Mean Double Step (MDS). It enables the parameterization and simplification of human movement. Furthermore, it allows a description of the gait cycle by providing standardized estimators to transform the gait cycle into a periodic movement process. The method of simplifying the MDS model and its compression are also demonstrated. The simplification is achieved by reducing the number of bars of the spectrum and/or by reducing the number of samples describing the MDS, reducing both the computational burden and the data storage requirements. Our MDS model, which is applicable to the gait cycle method for examining patients, is non-invasive and provides the additional advantage of featuring a functional characterization of the relative or absolute movement of any part of the body.
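The compression by "reducing the number of bars of the spectrum" can be illustrated with a toy periodic signal: compute its discrete Fourier coefficients and rebuild it from only the first few harmonics. The gait signal below is synthetic, not MDS data:

```python
import cmath
import math

def dft(signal):
    """Discrete Fourier transform, normalized so coeffs[k] is the
    complex amplitude of harmonic k."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) / n
            for k in range(n)]

def reconstruct(coeffs, keep, n):
    """Rebuild a real signal from its mean level and the first `keep`
    harmonics only -- the spectral-bar reduction described above."""
    out = []
    for t in range(n):
        v = coeffs[0].real
        for k in range(1, keep + 1):
            v += 2.0 * (coeffs[k] * cmath.exp(2j * cmath.pi * k * t / n)).real
        out.append(v)
    return out

# Toy MDS-like gait signal: mean level plus two harmonics of the step cycle.
n = 64
gait = [1.0 + 0.5 * math.cos(2 * math.pi * t / n)
        + 0.2 * math.sin(4 * math.pi * t / n) for t in range(n)]
coeffs = dft(gait)
approx = reconstruct(coeffs, keep=2, n=n)   # 2 harmonics recover it exactly
```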

  9. Toward a Definition of the Engineering Method.

    ERIC Educational Resources Information Center

    Koen, Billy V.

    1988-01-01

    Describes a preliminary definition of engineering method as well as a definition and examples of engineering heuristics. After discussing some alternative definitions of the engineering method, a simplified definition of the engineering method is suggested. (YP)

  10. An Investigation to Determine if Higher Speeds are Obtained with the Diamond Jubilee Gregg Shorthand Method.

    ERIC Educational Resources Information Center

    Starbuck, Ethel

    The purpose of the study was to determine whether higher shorthand speeds were achieved by high school students in a 1-year shorthand course through the use of Simplified Gregg Shorthand or through the use of Diamond Jubilee (DJ) Gregg Shorthand. The control group consisted of 75 students enrolled in Simplified Shorthand during the years…

  11. Nonstandard and Higher-Order Finite-Difference Methods for Electromagnetics

    DTIC Science & Technology

    2009-10-26

    Excerpt fragments (figure list and body): a simplified fuselage filled with 90 passengers; a top-view photograph of the expanded polystyrene passenger supports; measured S11 of the exterior antenna of the simplified fuselage. To keep the passengers in their designated locations and upright, an expanded polystyrene support system was made from 1-inch-thick sheet.

  12. Simplified analytical model and balanced design approach for light-weight wood-based structural panel in bending

    Treesearch

    Jinghao Li; John F. Hunt; Shaoqin Gong; Zhiyong Cai

    2016-01-01

    This paper presents a simplified analytical model and balanced design approach for modeling lightweight wood-based structural panels in bending. Because many design parameters must be input to the finite element analysis (FEA) model during the preliminary design process and optimization, an equivalent method was developed to analyze the mechanical...
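One common "equivalent" shortcut for a sandwich-type panel is to collapse the layup into a single flexural rigidity via the parallel-axis theorem, avoiding a full FEA model at the preliminary stage. A hedged sketch with hypothetical material values; the paper's own model is not reproduced here:

```python
def equivalent_bending_stiffness(e_face, t_face, e_core, t_core, width):
    """Flexural rigidity EI of a symmetric three-layer sandwich section
    via the parallel-axis theorem.  Units: moduli in Pa, dimensions in
    m; returns N*m^2.  With equal face/core moduli it reduces exactly
    to the solid-section E*b*h^3/12."""
    # Core about its own centroid.
    ei = e_core * width * t_core ** 3 / 12.0
    # Each face: own-centroid term plus transport (Steiner) term.
    d = (t_core + t_face) / 2.0          # face centroid offset from mid-plane
    ei += 2.0 * e_face * (width * t_face ** 3 / 12.0 + width * t_face * d ** 2)
    return ei

# Hypothetical wood-composite panel: 3 mm faces on a 40 mm light core, 1 m wide.
ei_panel = equivalent_bending_stiffness(e_face=7e9, t_face=0.003,
                                        e_core=3e7, t_core=0.040, width=1.0)
```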

  13. Neutron residual stress measurement and numerical modeling in a curved thin-walled structure by laser powder bed fusion additive manufacturing

    DOE PAGES

    An, Ke; Yuan, Lang; Dial, Laura; ...

    2017-09-11

    Severe residual stresses in metal parts made by laser powder bed fusion additive manufacturing processes (LPBFAM) can cause both distortion and cracking during the fabrication processes. Limited data is currently available for iterating through process conditions and design, and in particular, for validating numerical models to accelerate process certification. In this work, residual stresses of a curved thin-walled structure, made of Ni-based superalloy Inconel 625™ and fabricated by LPBFAM, were resolved by neutron diffraction without measuring the stress-free lattices along both the build and the transverse directions. The stresses of the entire part during fabrication and after cooling down were predicted by a simplified layer-by-layer finite-element-based numerical model. The simulated and measured stresses were found to be in good quantitative agreement. The validated simplified simulation methodology will allow assessment of residual stresses in more complex structures and significantly reduce manufacturing cycle time.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Tianzhen; Yan, Da; D'Oca, Simona

    Occupant behavior has significant impacts on building energy performance and occupant comfort. However, occupant behavior is not well understood and is often oversimplified in the building life cycle, due to its stochastic, diverse, complex, and interdisciplinary nature. The use of simplified methods or tools to quantify the impacts of occupant behavior in building performance simulations significantly contributes to performance gaps between simulated models and actual building energy consumption. Therefore, it is crucial to understand occupant behavior in a comprehensive way, integrating qualitative approaches and data- and model-driven quantitative approaches, and employing appropriate tools to guide the design and operation of low-energy residential and commercial buildings that integrate technological and human dimensions. This paper presents ten questions, highlighting some of the most important issues regarding concepts, applications, and methodologies in occupant behavior research. The proposed questions and answers aim to provide insights into occupant behavior for current and future researchers, designers, and policy makers, and most importantly, to inspire innovative research and applications to increase energy efficiency and reduce energy use in buildings.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naselsky, Pavel; Jackson, Andrew D.; Liu, Hao, E-mail: naselsky@nbi.ku.dk, E-mail: liuhao@nbi.dk

    We present a simplified method for the extraction of meaningful signals from Hanford and Livingston 32 second data for the GW150914 event made publicly available by the LIGO collaboration, and demonstrate its ability to reproduce the LIGO collaboration's own results quantitatively given the assumption that all narrow peaks in the power spectrum are a consequence of physically uninteresting signals and can be removed. After the clipping of these peaks and return to the time domain, the GW150914 event is readily distinguished from broadband background noise. This simple technique allows us to identify the GW150914 event without any assumption regarding its physical origin and with minimal assumptions regarding its shape. We also confirm that the LIGO GW150914 event is uniquely correlated in the Hanford and Livingston detectors for the full 4096 second data at the level of 6–7 σ with a temporal displacement of τ = 6.9 ± 0.4 ms. We have also identified a few events that are morphologically close to GW150914 but less strongly cross correlated with it.
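The cross-correlation with a temporal displacement can be illustrated on toy data: a windowed oscillation "seen" by two detectors, with the second copy delayed by a known number of samples (standing in for the 6.9 ms Hanford-Livingston offset; all numbers here are synthetic):

```python
import math

def best_lag(a, b, max_lag):
    """Lag of b relative to a that maximizes the raw cross-correlation --
    the kind of inter-detector time-displacement estimate quoted above."""
    def corr(lag):
        return sum(a[i] * b[i + lag] for i in range(len(a))
                   if 0 <= i + lag < len(b))
    return max(range(-max_lag, max_lag + 1), key=corr)

# A Gaussian-windowed oscillation in detector 1, delayed by 7 samples
# in detector 2.
n = 256
template = [math.sin(0.3 * t) * math.exp(-((t - 128) / 30.0) ** 2)
            for t in range(n)]
det1 = template
det2 = [0.0] * 7 + template[:-7]
lag = best_lag(det1, det2, max_lag=20)
```

In practice the detector streams would first be cleaned (the narrow-peak clipping described above) before correlating.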

  17. Atmospheric Precorrected Differential Absorption technique to retrieve columnar water vapor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schlaepfer, D.; Itten, K.I.; Borel, C.C.

    1998-09-01

    Differential absorption techniques are suitable to retrieve the total column water vapor contents from imaging spectroscopy data. A technique called Atmospheric Precorrected Differential Absorption (APDA) is derived directly from simplified radiative transfer equations. It combines a partial atmospheric correction with a differential absorption technique. The atmospheric path radiance term is iteratively corrected during the retrieval of water vapor. This improves the results especially over low background albedos. The error of the method for various ground reflectance spectra is below 7% for most of the spectra. The channel combinations for two test cases are then defined, using a quantitative procedure, which is based on MODTRAN simulations and the image itself. An error analysis indicates that the influence of aerosols and channel calibration is minimal. The APDA technique is then applied to two AVIRIS images acquired in 1991 and 1995. The accuracy of the measured water vapor columns is within a range of ±5% compared to ground truth radiosonde data.
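The precorrection-plus-ratio structure of APDA can be sketched as follows, with hypothetical radiances and channel weights; the actual APDA equation, its iterative path-radiance correction, and the channel selection follow the MODTRAN-based procedure described above:

```python
def apda_ratio(l_measure, l_atm_measure,
               l_ref1, l_atm_ref1, l_ref2, l_atm_ref2, w1=0.5, w2=0.5):
    """Sketch of a precorrected differential absorption ratio: path
    radiance (l_atm_*) is subtracted from the absorption channel and
    from the two window reference channels -- the 'partial atmospheric
    correction' -- before the band ratio is formed.  Weights w1/w2
    interpolate the reference continuum to the absorption wavelength."""
    continuum = w1 * (l_ref1 - l_atm_ref1) + w2 * (l_ref2 - l_atm_ref2)
    return (l_measure - l_atm_measure) / continuum

# Hypothetical radiances (arbitrary units): deeper water-vapor absorption
# in the measurement channel pushes the ratio down.
r_dry = apda_ratio(9.0, 1.0, 11.0, 1.0, 9.0, 1.0)
r_wet = apda_ratio(5.0, 1.0, 11.0, 1.0, 9.0, 1.0)
```

The column water vapor is then obtained by inverting a calibration of this ratio against simulated atmospheres.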

  18. Electron paramagnetic resonance studies of slowly tumbling vanadyl spin probes in nematic liquid crystals

    NASA Technical Reports Server (NTRS)

    Bruno, G. V.; Harrington, J. K.; Eastman, M. P.

    1978-01-01

    The purposes of this vanadyl spin probe study are threefold: (1) to establish when the breakdown of motionally narrowed formulas occurs; (2) to analyze the experimental vanadyl EPR line shapes by the stochastic Liouville method as developed by Polnaszek et al. (1973) for slow tumbling in an anisotropic liquid; and (3) to compare the vanadyl probe study results with those of Polnaszek and Freed (1975). Spectral EPR line shapes are simulated for experimental spectra of vanadyl acetylacetonate (VOAA) in the nematic liquid crystal butyl p-(p-ethoxyphenoxycarbonyl) phenyl carbonate (BEPC) and Phase V of EM laboratories. It is shown that the use of typical vanadyl complexes as spin probes for nematic liquid crystals simplifies the theoretical analysis and the subsequent interpretation. Guidelines for the breakdown of motionally narrowed formulas are established. Both the slow tumbling aspects and the effects of non-Brownian rotation should be resolved in order to extract quantitative information about molecular ordering and rotational mobility.

  19. Electron paramagnetic resonance studies of slowly tumbling vanadyl spin probes in nematic liquid crystals

    NASA Technical Reports Server (NTRS)

    Bruno, G. V.; Harrington, J. K.; Eastman, M. P.

    1978-01-01

    An analysis of EPR line shapes by the method of Polnaszek, Bruno, and Freed is made for slowly tumbling vanadyl spin probes in viscous nematic liquid crystals. The use of typical vanadyl complexes as spin probes for nematic liquid crystals is shown to simplify the theoretical analysis and the subsequent interpretation. Rotational correlation times tau and orientational ordering parameters S sub Z where slow tumbling effects are expected to be observed in vanadyl EPR spectra are indicated in a plot. Analysis of the inertial effects on the probe reorientation, which are induced by slowly fluctuating torque components of the local solvent structure, yields quantitative values for tau and S sub Z. The weakly ordered probe VOAA is in the slow tumbling region and displays these inertial effects throughout the nematic range of BEPC and Phase V. VOAA exhibits different reorientation behavior near the isotropic-nematic transition temperature than that displayed far below this transition temperature.

  20. Use of mathematics to guide target selection in systems pharmacology; application to receptor tyrosine kinase (RTK) pathways.

    PubMed

    Benson, Neil; van der Graaf, Piet H; Peletier, Lambertus A

    2017-11-15

    A key element of the drug discovery process is target selection. Although the topic is subject to much discussion and experimental effort, there are no defined quantitative rules around optimal selection. Often 'rules of thumb' that have not been subjected to rigorous exploration are used. In this paper we explore the 'rule of thumb' notion that the molecule that initiates a pathway signal is the optimal target. Given the multi-factorial and complex nature of this question, we have simplified an example pathway to its logical minimum of two steps and used a mathematical model of this to explore the different options in the context of typical small and large molecule drugs. In this paper, we report the conclusions of our analysis and describe the analysis tool and methods used. These provide a platform to enable a more extensive enquiry into this important topic. Copyright © 2017 Elsevier B.V. All rights reserved.
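    The kind of reduced model the abstract describes can be illustrated with a deliberately minimal sketch: a linear two-step cascade whose steady-state output is compared under inhibition of the first versus the second step. The rate constants and inhibition levels below are hypothetical and the model is ours, not the authors':

```python
# Minimal two-step cascade, source -> x1 -> x2, at steady state:
#   x1* = k1 * f1 / d1,   x2* = k2 * f2 * x1* / d2
# where f1, f2 in [0, 1] are the residual activities after inhibiting
# the initiating or the downstream step (all constants hypothetical).

def steady_output(f1=1.0, f2=1.0, k1=2.0, d1=0.5, k2=1.5, d2=0.3):
    x1 = k1 * f1 / d1              # steady state of the initiating species
    return k2 * f2 * x1 / d2       # steady state of the pathway output

base = steady_output()
hit_first = steady_output(f1=0.1)    # 90% inhibition of the initiating step
hit_second = steady_output(f2=0.1)   # 90% inhibition of the downstream step
print(base, hit_first, hit_second)
```

    With purely linear kinetics the two targets are indistinguishable at steady state, which is precisely why questions like this call for the fuller nonlinear, drug-specific analysis the paper develops.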

  1. Understanding the LIGO GW150914 event

    NASA Astrophysics Data System (ADS)

    Naselsky, Pavel; Jackson, Andrew D.; Liu, Hao

    2016-08-01

    We present a simplified method for the extraction of meaningful signals from Hanford and Livingston 32 second data for the GW150914 event made publicly available by the LIGO collaboration, and demonstrate its ability to reproduce the LIGO collaboration's own results quantitatively given the assumption that all narrow peaks in the power spectrum are a consequence of physically uninteresting signals and can be removed. After the clipping of these peaks and return to the time domain, the GW150914 event is readily distinguished from broadband background noise. This simple technique allows us to identify the GW150914 event without any assumption regarding its physical origin and with minimal assumptions regarding its shape. We also confirm that the LIGO GW150914 event is uniquely correlated in the Hanford and Livingston detectors for the full 4096 second data at the level of 6-7 σ with a temporal displacement of τ = 6.9 ± 0.4 ms. We have also identified a few events that are morphologically close to GW150914 but less strongly cross correlated with it.
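    The pipeline described above (clip narrow spectral peaks, return to the time domain, cross-correlate the two detectors) can be sketched on synthetic data. All parameters below, including the sample rate, noise level, injected "instrumental" lines, and chirp, are illustrative, not LIGO data:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, dur = 4096, 2                  # sample rate (Hz), duration (s); illustrative
n = fs * dur
t = np.arange(n) / fs

# Synthetic two-detector data: broadband noise, strong narrow-band
# "instrumental" lines common to both channels, and a chirp-like
# transient appearing in the second channel 29 samples (~7 ms) later.
lines = 0.5 * np.sin(2 * np.pi * 60 * t) + 0.3 * np.sin(2 * np.pi * 120 * t)
chirp = np.zeros(n)
ts = np.arange(200) / fs
chirp[n // 2:n // 2 + 200] = np.sin(2 * np.pi * (50 + 4000 * ts) * ts) * np.hanning(200)
lag_true = 29
h = rng.normal(0, 0.1, n) + lines + chirp
l = rng.normal(0, 0.1, n) + lines + np.roll(chirp, lag_true)

def clip_peaks(x, factor=3.0, win=64):
    """Flatten narrow spectral peaks down toward the local noise floor."""
    X = np.fft.rfft(x)
    mag = np.abs(X)
    floor = np.array([np.median(mag[max(0, i - win):i + win])
                      for i in range(len(mag))])       # running-median floor
    hot = mag > factor * floor
    X[hot] *= factor * floor[hot] / mag[hot]
    return np.fft.irfft(X, n=len(x))

hc, lc = clip_peaks(h), clip_peaks(l)

# Cross-correlate the cleaned channels; the peak gives the time shift.
corr = np.correlate(lc - lc.mean(), hc - hc.mean(), mode="full")
lag_est = np.argmax(corr) - (n - 1)
print(lag_est * 1000.0 / fs, "ms")   # close to the injected ~7 ms delay
```

    Without the clipping step, the shared 60/120 Hz lines dominate the cross-correlation at every lag; removing them is what lets the transient stand out.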

  2. Video stereolization: combining motion analysis with user interaction.

    PubMed

    Liao, Miao; Gao, Jizhou; Yang, Ruigang; Gong, Minglun

    2012-07-01

    We present a semiautomatic system that converts conventional videos into stereoscopic videos by combining motion analysis with user interaction, aiming to transfer as much labeling work as possible from the user to the computer. In addition to the widely used structure from motion (SFM) techniques, we develop two new methods that analyze the optical flow to provide additional qualitative depth constraints. They remove the camera movement restriction imposed by SFM so that general motions can be used in scene depth estimation, the central problem in mono-to-stereo conversion. With these algorithms, the user's labeling task is significantly simplified. We further developed a quadratic programming approach to incorporate both quantitative depth and qualitative depth (such as those from user scribbles) to recover dense depth maps for all frames, from which stereoscopic views can be synthesized. In addition to visual results, we present user study results showing that our approach is more intuitive and less labor intensive, while producing 3D effects comparable to those from current state-of-the-art interactive algorithms.

  3. Shannon Entropy of the Canonical Genetic Code

    NASA Astrophysics Data System (ADS)

    Nemzer, Louis

    The probability that a non-synonymous point mutation in DNA will adversely affect the functionality of the resultant protein is greatly reduced if the substitution is conservative. In that case, the amino acid coded by the mutated codon has similar physico-chemical properties to the original. Many simplified alphabets, which group the 20 common amino acids into families, have been proposed. To evaluate these schema objectively, we introduce a novel, quantitative method based on the inherent redundancy in the canonical genetic code. By calculating the Shannon information entropy carried by 1- or 2-bit messages, groupings that best leverage the robustness of the code are identified. The relative importance of properties related to protein folding - like hydropathy and size - and function, including side-chain acidity, can also be estimated. In addition, this approach allows us to quantify the average information value of nucleotide codon positions, and explore the physiological basis for distinguishing between transition and transversion mutations. Supported by NSU PFRDG Grant #335347.
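    A minimal version of the entropy calculation can be sketched as follows. The binary hydropathy split is a hypothetical example grouping, codons are weighted uniformly rather than by usage, and this is our sketch of the general idea, not the paper's exact procedure:

```python
import math
from itertools import product

# Standard genetic code with bases ordered T, C, A, G: 64 codons map to
# one-letter amino acids; '*' marks the three stop codons.
BASES = "TCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODE = {"".join(c): a for c, a in zip(product(BASES, repeat=3), AA)}

def grouping_entropy(groups):
    """Shannon entropy (bits) of a partition of the 20 amino acids,
    with each of the 61 sense codons counted once (usage-free)."""
    label = {aa: i for i, g in enumerate(groups) for aa in g}
    counts = {}
    for a in CODE.values():
        if a != "*":                         # skip stop codons
            counts[label[a]] = counts.get(label[a], 0) + 1
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

# Hypothetical binary hydropathy split (illustrative grouping only):
h2 = grouping_entropy(["AVLIMFWCP", "GSTYNQDEKRH"])
# Trivial grouping: every amino acid is its own family.
h20 = grouping_entropy(list("ACDEFGHIKLMNPQRSTVWY"))
print(round(h2, 3), round(h20, 3))
```

    A 2-group message carries at most 1 bit, and the full 20-letter alphabet carries at most log2(20) ≈ 4.32 bits; the code's unequal codon multiplicities pull the realized values below these ceilings.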

  4. Easy-to-learn cardiopulmonary resuscitation training programme: a randomised controlled trial on laypeople’s resuscitation performance

    PubMed Central

    Ko, Rachel Jia Min; Lim, Swee Han; Wu, Vivien Xi; Leong, Tak Yam; Liaw, Sok Ying

    2018-01-01

    INTRODUCTION Simplifying the learning of cardiopulmonary resuscitation (CPR) is advocated to improve skill acquisition and retention. A simplified CPR training programme focusing on continuous chest compression, with a simple landmark tracing technique, was introduced to laypeople. The study aimed to examine the effectiveness of the simplified CPR training in improving lay rescuers’ CPR performance as compared to standard CPR. METHODS A total of 85 laypeople (aged 21–60 years) were recruited and randomly assigned to undertake either a two-hour simplified or standard CPR training session. They were tested two months after the training on a simulated cardiac arrest scenario. Participants’ performance on the sequence of CPR steps was observed and evaluated using a validated CPR algorithm checklist. The quality of chest compression and ventilation was assessed from the recording manikins. RESULTS The simplified CPR group performed significantly better on the CPR algorithm when compared to the standard CPR group (p < 0.01). No significant difference was found between the groups in time taken to initiate CPR. However, a significantly higher number of compressions and proportion of adequate compressions was demonstrated by the simplified group than the standard group (p < 0.01). Hands-off time was significantly shorter in the simplified CPR group than in the standard CPR group (p < 0.001). CONCLUSION Simplifying the learning of CPR by focusing on continuous chest compressions, with simple hand placement for chest compression, could lead to better acquisition and retention of CPR algorithms, and better quality of chest compressions than standard CPR. PMID:29167910

  5. Simplified paraboloid phase model-based phase tracker for demodulation of a single complex fringe.

    PubMed

    He, A; Deepan, B; Quan, C

    2017-09-01

    A regularized phase tracker (RPT) is an effective method for demodulation of single closed-fringe patterns. However, lengthy calculation time, the need for a specially designed scanning strategy, and sign-ambiguity problems caused by noise and saddle points reduce its effectiveness, especially for demodulating large and complex fringe patterns. In this paper, a simplified paraboloid phase model-based regularized phase tracker (SPRPT) is proposed. In SPRPT, first and second phase derivatives are pre-determined by the density-direction-combined method and discrete higher-order demodulation algorithm, respectively. Hence, the cost function is effectively simplified to reduce the computation time significantly. Moreover, the pre-determined phase derivatives improve the robustness of the demodulation of closed, complex fringe patterns. Thus, no specially designed scanning strategy is needed, and the method is robust against the sign-ambiguity problem. The paraboloid phase model also assures better accuracy and robustness against noise. Both simulated and experimental fringe patterns (obtained using electronic speckle pattern interferometry) are used to validate the proposed method, and a comparison with existing RPT methods is carried out. The simulation results show that the proposed method achieves the highest accuracy with less computation time. The experimental results prove the robustness and accuracy of the proposed method for demodulation of noisy fringe patterns and its feasibility for static and dynamic applications.

  6. Simplified MPN method for enumeration of soil naphthalene degraders using gaseous substrate.

    PubMed

    Wallenius, Kaisa; Lappi, Kaisa; Mikkonen, Anu; Wickström, Annika; Vaalama, Anu; Lehtinen, Taru; Suominen, Leena

    2012-02-01

    We describe a simplified microplate most-probable-number (MPN) procedure to quantify the bacterial naphthalene degrader population in soil samples. In this method, the sole substrate naphthalene is dosed passively via gaseous phase to liquid medium and the detection of growth is based on the automated measurement of turbidity using an absorbance reader. The performance of the new method was evaluated by comparison with a recently introduced method in which the substrate is dissolved in inert silicone oil and added individually to each well, and the results are scored visually using a respiration indicator dye. Oil-contaminated industrial soil showed slightly but significantly higher MPN estimate with our method than with the reference method. This suggests that gaseous naphthalene was dissolved in an adequate concentration to support the growth of naphthalene degraders without being too toxic. The dosing of substrate via gaseous phase notably reduced the work load and risk of contamination. The result scoring by absorbance measurement was objective and more reliable than measurement with indicator dye, and it also enabled further analysis of cultures. Several bacterial genera were identified by cloning and sequencing of 16S rRNA genes from the MPN wells incubated in the presence of gaseous naphthalene. In addition, the applicability of the simplified MPN method was demonstrated by a significant positive correlation between the level of oil contamination and the number of naphthalene degraders detected in soil.
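    The MPN estimation underlying such a microplate assay can be sketched as a maximum-likelihood calculation from positive-well counts. This is a generic MPN solver, not the authors' scoring software, and the well counts below are invented:

```python
import math

def mpn(dilutions, wells, positives):
    """Maximum-likelihood most-probable-number estimate (per unit volume).

    dilutions: inoculum volume per well at each dilution level
    wells:     number of wells at each level
    positives: number of turbid (positive) wells at each level
    Solves d/dlambda log-likelihood = 0 by bisection; sketch assumes at
    least one positive and one negative well overall.
    """
    def score(lam):
        s = 0.0
        for v, n, p in zip(dilutions, wells, positives):
            e = math.exp(-lam * v)
            s += p * v * e / (1.0 - e) - (n - p) * v
        return s
    lo, hi = 1e-9, 1e9
    for _ in range(200):             # bisection on the monotone score
        mid = math.sqrt(lo * hi)     # geometric midpoint spans many decades
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

# Single-dilution sanity check: with n wells of volume v and p positives,
# the ML solution has the closed form -ln((n - p)/n) / v.
est = mpn([0.1], [12], [7])
closed = -math.log((12 - 7) / 12) / 0.1
print(round(est, 4), round(closed, 4))
```

    With several dilution levels the score function simply accumulates one term per level, which is why automated turbidity scoring across a whole plate feeds directly into a single estimate.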

  7. A simplified focusing and astigmatism correction method for a scanning electron microscope

    NASA Astrophysics Data System (ADS)

    Lu, Yihua; Zhang, Xianmin; Li, Hai

    2018-01-01

    Defocus and astigmatism can lead to blurred images and poor resolution. This paper presents a simplified method for focusing and astigmatism correction of a scanning electron microscope (SEM). The method consists of two steps. In the first step, the fast Fourier transform (FFT) of the SEM image is performed and the FFT is subsequently processed with a threshold to achieve a suitable result. In the second step, the threshold FFT is used for ellipse fitting to determine the presence of defocus and astigmatism. The proposed method clearly provides the relationships between the defocus, the astigmatism and the direction of stretching of the FFT, and it can determine the astigmatism in a single image. Experimental studies are conducted to demonstrate the validity of the proposed method.
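    The two steps (threshold the FFT, then fit an ellipse) can be sketched with second-moment ellipse fitting on a synthetic, anisotropically blurred image. The moment-based fit, the threshold fraction, and the test image are our assumptions, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

def spectrum_anisotropy(img, frac=0.05):
    """Step 1: threshold the centered FFT magnitude.  Step 2: fit an
    ellipse to the surviving points via second moments.  Returns the
    long/short axis ratio (1.0 = no astigmatism) and its orientation."""
    F = np.fft.fftshift(np.abs(np.fft.fft2(img - img.mean())))
    ys, xs = np.nonzero(F > frac * F.max())
    xs = xs - xs.mean()
    ys = ys - ys.mean()
    cov = np.cov(np.vstack([xs, ys]))
    evals, evecs = np.linalg.eigh(cov)          # ascending eigenvalues
    ratio = np.sqrt(evals[1] / evals[0])        # fitted-ellipse axis ratio
    angle = np.degrees(np.arctan2(evecs[1, 1], evecs[0, 1]))
    return ratio, angle

# Synthetic "SEM image": white noise low-pass filtered twice as strongly
# along y as along x, mimicking astigmatic blur (illustrative only).
n = 256
fy = np.fft.fftfreq(n)[:, None]
fx = np.fft.fftfreq(n)[None, :]
lp = np.exp(-((fx / 0.20) ** 2 + (fy / 0.10) ** 2))
img = np.real(np.fft.ifft2(np.fft.fft2(rng.normal(size=(n, n))) * lp))

ratio, angle = spectrum_anisotropy(img)
print(round(ratio, 2))   # noticeably > 1 for this anisotropic blur
```

    For an in-focus, stigmatic image the thresholded spectrum is roughly circular (ratio near 1); stretching of the spectrum in one direction signals astigmatism, and its orientation tells the correction direction.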

  8. A simplified four-wall interference assessment procedure for airfoil data obtained in the Langley 0.3-meter transonic cryogenic tunnel

    NASA Technical Reports Server (NTRS)

    Murthy, A. V.

    1987-01-01

    A simplified four-wall interference assessment method has been described, and a computer program developed to facilitate correction of the airfoil data obtained in the Langley 0.3-m Transonic Cryogenic Tunnel (TCT). The procedure adopted is to first apply a blockage correction due to sidewall boundary-layer effects by various methods. The sidewall boundary-layer corrected data are then used to calculate the top and bottom wall interference effects by the method of Capallier, Chevallier and Bouinol, using the measured wall pressure distribution and the model force coefficients. The interference corrections obtained by the present method have been compared with other methods and found to give good agreement for the experimental data obtained in the TCT with slotted top and bottom walls.

  9. Simplified aerosol representations in global modeling

    NASA Astrophysics Data System (ADS)

    Kinne, Stefan; Peters, Karsten; Stevens, Bjorn; Rast, Sebastian; Schutgens, Nick; Stier, Philip

    2015-04-01

    The detailed treatment of aerosol in global modeling is complex and time-consuming. Thus simplified approaches are investigated, which prescribe 4D (space and time) distributions of aerosol optical properties and of aerosol microphysical properties. Aerosol optical properties are required to assess aerosol direct radiative effects and aerosol microphysical properties (in terms of their ability as aerosol nuclei to modify cloud droplet concentrations) are needed to address the indirect aerosol impact on cloud properties. Following the simplifying concept of the monthly gridded (1x1 lat/lon) aerosol climatology (MAC), new approaches are presented and evaluated against more detailed methods, including comparisons to detailed simulations with complex aerosol component modules.

  10. Weather data for simplified energy calculation methods. Volume IV. United States: WYEC data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olsen, A.R.; Moreno, S.; Deringer, J.

    The objective of this report is to provide a source of weather data for direct use with a number of simplified energy calculation methods available today. Complete weather data for a number of cities in the United States are provided for use in the following methods: degree hour, modified degree hour, bin, modified bin, and variable degree day. This report contains sets of weather data for 23 cities using Weather Year for Energy Calculations (WYEC) source weather data. Considerable overlap is present in cities (21) covered by both the TRY and WYEC data. The weather data for each city have been summarized in a number of ways to provide the differing levels of detail necessary for alternative simplified energy calculation methods. Weather variables summarized include dry bulb and wet bulb temperature, percent relative humidity, humidity ratio, wind speed, percent possible sunshine, percent diffuse solar radiation, total solar radiation on horizontal and vertical surfaces, and solar heat gain through standard DSA glass. Monthly and annual summaries, in some cases by time of day, are available. These summaries are produced in a series of nine computer generated tables.
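    The bin method that such weather data supports reduces to two steps: count hours of occurrence per dry-bulb temperature bin, then sum hours times the load evaluated at each bin midpoint. The sketch below uses a hypothetical balance-point heating load and invented hourly temperatures:

```python
from collections import Counter

def bin_summary(hourly_temps, width=5):
    """Hours of occurrence per dry-bulb temperature bin (width in
    degrees F, bins centered on multiples of the width)."""
    bins = Counter()
    for temp in hourly_temps:
        bins[round(temp / width) * width] += 1
    return dict(sorted(bins.items()))

def bin_method_energy(bins, load_at):
    """Energy = sum over bins of (hours in bin) x (load at bin midpoint),
    the core of the simplified bin calculation."""
    return sum(hours * load_at(mid) for mid, hours in bins.items())

# Toy example: heating load proportional to degrees below a 65 F balance
# point (hypothetical building; UA in kBtu/h per degree F).
temps = [30, 32, 41, 47, 55, 63, 68, 71, 58, 44, 36, 31]  # sample hours
bins = bin_summary(temps)
ua = 0.8
energy = bin_method_energy(bins, lambda t: max(0.0, ua * (65 - t)))
print(bins)
print(round(energy, 1))
```

    The modified bin method differs mainly in splitting each bin by time of day and adding solar and internal gains, which is why the report provides summaries by time of day as well.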

  11. Evaluation of Several Approximate Methods for Calculating the Symmetrical Bending-Moment Response of Flexible Airplanes to Isotropic Atmospheric Turbulence

    NASA Technical Reports Server (NTRS)

    Bennett, Floyd V.; Yntema, Robert T.

    1959-01-01

    Several approximate procedures for calculating the bending-moment response of flexible airplanes to continuous isotropic turbulence are presented and evaluated. The modal methods (the mode-displacement and force-summation methods) and a matrix method (segmented-wing method) are considered. These approximate procedures are applied to a simplified airplane for which an exact solution to the equation of motion can be obtained. The simplified airplane consists of a uniform beam with a concentrated fuselage mass at the center. Airplane motions are limited to vertical rigid-body translation and symmetrical wing bending deflections. Output power spectra of wing bending moments based on the exact transfer-function solutions are used as a basis for the evaluation of the approximate methods. It is shown that the force-summation and the matrix methods give satisfactory accuracy and that the mode-displacement method gives unsatisfactory accuracy.

  12. A simplified bioprocess for human alpha-fetoprotein production from inclusion bodies.

    PubMed

    Leong, Susanna S J; Middelberg, Anton P J

    2007-05-01

    A simple and effective Escherichia coli (E. coli) bioprocess is demonstrated for the preparation of recombinant human alpha-fetoprotein (rhAFP), a pharmaceutically promising protein that has important immunomodulatory functions. The new rhAFP process employs only unit operations that are easy to scale and validate, and reduces the complexity embedded in existing inclusion body processing methods. A key requirement in the establishment of this process was the attainment of high purity rhAFP prior to protein refolding because (i) rhAFP binds easily to hydrophobic contaminants once refolded, and (ii) rhAFP aggregates during renaturation, in a contaminant-dependent way. In this work, direct protein extraction from cell suspension was coupled with a DNA precipitation-centrifugation step prior to purification using two simple chromatographic steps. Refolding was conducted using a single-step, redox-optimized dilution refolding protocol, with refolding success determined by reversed phase HPLC analysis, ELISA, and circular dichroism spectroscopy. Quantitation of DNA and protein contaminant loads after each unit operation showed that contaminant levels were reduced to levels comparable to traditional flowsheets. Protein microchemical modification due to carbamylation in this urea-based process was identified and minimized, yielding a final refolded and purified product that was significantly purified from carbamylated variants. Importantly, this work conclusively demonstrates, for the first time, that a chemical extraction process can substitute the more complex traditional inclusion body processing flowsheet, without compromising product purity and yield. This highly intensified and simplified process is expected to be of general utility for the preparation of other therapeutic candidates expressed as inclusion bodies. (c) 2006 Wiley Periodicals, Inc.

  13. A novel method of fuzzy fault tree analysis combined with VB program to identify and assess the risk of coal dust explosions

    PubMed Central

    Li, Jia; Wang, Deming; Huang, Zonghou

    2017-01-01

    Coal dust explosions (CDE) are one of the main threats to the occupational safety of coal miners. Aiming to identify and assess the risk of CDE, this paper proposes a novel method of fuzzy fault tree analysis combined with the Visual Basic (VB) program. In this methodology, various potential causes of the CDE are identified and a CDE fault tree is constructed. To overcome drawbacks from the lack of exact probability data for the basic events, fuzzy set theory is employed and the probability data of each basic event is treated as intuitionistic trapezoidal fuzzy numbers. In addition, a new approach for calculating the weighting of each expert is also introduced in this paper to reduce the error during the expert elicitation process. Specifically, an in-depth quantitative analysis of the fuzzy fault tree, such as the importance measure of the basic events and the cut sets, and the CDE occurrence probability is given to assess the explosion risk and acquire more details of the CDE. The VB program is applied to simplify the analysis process. A case study and analysis is provided to illustrate the effectiveness of this proposed method, and some suggestions are given to take preventive measures in advance and avoid CDE accidents. PMID:28793348
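    The gate arithmetic in such an analysis can be sketched with ordinary trapezoidal fuzzy numbers; the paper uses the intuitionistic variant plus expert weighting, which are omitted here, and the mini-tree with all its probabilities is invented:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Trap:
    """Trapezoidal fuzzy probability (a <= b <= c <= d); [b, c] is the
    core, [a, d] the support."""
    a: float; b: float; c: float; d: float

def f_and(p, q):
    # AND gate: both events occur; endpoint-wise product (a common
    # trapezoidal approximation, not exact fuzzy arithmetic).
    return Trap(p.a*q.a, p.b*q.b, p.c*q.c, p.d*q.d)

def f_or(p, q):
    # OR gate: 1 - (1 - p)(1 - q), endpoint-wise.
    return Trap(1-(1-p.a)*(1-q.a), 1-(1-p.b)*(1-q.b),
                1-(1-p.c)*(1-q.c), 1-(1-p.d)*(1-q.d))

def defuzzify(p):
    """Centroid-style crisp value of a trapezoidal number (sketch)."""
    return (p.a + 2*p.b + 2*p.c + p.d) / 6.0

# Illustrative mini-tree: explosion requires an ignition source AND
# (dust cloud OR ventilation failure).  All numbers are hypothetical.
ignition = Trap(0.01, 0.02, 0.03, 0.05)
dust     = Trap(0.10, 0.15, 0.20, 0.30)
vent     = Trap(0.05, 0.08, 0.10, 0.15)

top = f_and(ignition, f_or(dust, vent))
print(round(defuzzify(top), 5))
```

    Because all gate operations are monotone in each endpoint, the output stays a valid trapezoid, and the defuzzified top-event value plays the role of the crisp occurrence probability in the risk ranking.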

  14. A novel method of fuzzy fault tree analysis combined with VB program to identify and assess the risk of coal dust explosions.

    PubMed

    Wang, Hetang; Li, Jia; Wang, Deming; Huang, Zonghou

    2017-01-01

    Coal dust explosions (CDE) are one of the main threats to the occupational safety of coal miners. Aiming to identify and assess the risk of CDE, this paper proposes a novel method of fuzzy fault tree analysis combined with the Visual Basic (VB) program. In this methodology, various potential causes of the CDE are identified and a CDE fault tree is constructed. To overcome drawbacks from the lack of exact probability data for the basic events, fuzzy set theory is employed and the probability data of each basic event is treated as intuitionistic trapezoidal fuzzy numbers. In addition, a new approach for calculating the weighting of each expert is also introduced in this paper to reduce the error during the expert elicitation process. Specifically, an in-depth quantitative analysis of the fuzzy fault tree, such as the importance measure of the basic events and the cut sets, and the CDE occurrence probability is given to assess the explosion risk and acquire more details of the CDE. The VB program is applied to simplify the analysis process. A case study and analysis is provided to illustrate the effectiveness of this proposed method, and some suggestions are given to take preventive measures in advance and avoid CDE accidents.

  15. Determination of the Shear Stress Distribution in a Laminate from the Applied Shear Resultant--A Simplified Shear Solution

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Aboudi, Jacob; Yarrington, Phillip W.

    2007-01-01

    The simplified shear solution method is presented for approximating the through-thickness shear stress distribution within a composite laminate based on laminated beam theory. The method does not consider the solution of a particular boundary value problem, rather it requires only knowledge of the global shear loading, geometry, and material properties of the laminate or panel. It is thus analogous to lamination theory in that ply level stresses can be efficiently determined from global load resultants (as determined, for instance, by finite element analysis) at a given location in a structure and used to evaluate the margin of safety on a ply by ply basis. The simplified shear solution stress distribution is zero at free surfaces, continuous at ply boundaries, and integrates to the applied shear load. Comparisons to existing theories are made for a variety of laminates, and design examples are provided illustrating the use of the method for determining through-thickness shear stress margins in several types of composite panels and in the context of a finite element structural analysis.

  16. Analysis of CERN computing infrastructure and monitoring data

    NASA Astrophysics Data System (ADS)

    Nieke, C.; Lassnig, M.; Menichetti, L.; Motesnitsalis, E.; Duellmann, D.

    2015-12-01

    Optimizing a computing infrastructure on the scale of the LHC requires a quantitative understanding of a complex network of many different resources and services. For this purpose the CERN IT department and the LHC experiments are collecting a large volume of logs and performance probes, which are already successfully used for short-term analysis (e.g. operational dashboards) within each group. The IT analytics working group has been created with the goal of bringing data sources from different services and on different abstraction levels together and implementing a suitable infrastructure for mid- to long-term statistical analysis. It further provides a forum for joint optimization across single service boundaries and the exchange of analysis methods and tools. To simplify access to the collected data, we implemented an automated repository for cleaned and aggregated data sources based on the Hadoop ecosystem. This contribution describes some of the challenges encountered, such as dealing with heterogeneous data formats and selecting an efficient storage format for MapReduce and external access, and describes the repository user interface. Using this infrastructure we were able to quantitatively analyze the relationship between the CPU/wall fraction, latency/throughput constraints of network and disk, and the effective job throughput. In this contribution we first describe the design of the shared analysis infrastructure and then present a summary of first analysis results from the combined data sources.

  17. Checklist to operationalize measurement characteristics of patient-reported outcome measures.

    PubMed

    Francis, David O; McPheeters, Melissa L; Noud, Meaghan; Penson, David F; Feurer, Irene D

    2016-08-02

    The purpose of this study was to advance a checklist of evaluative criteria designed to assess patient-reported outcome (PRO) measures' developmental measurement properties and applicability, which can be used by systematic reviewers, researchers, and clinicians with a varied range of expertise in psychometric measure development methodology. A directed literature search was performed to identify original studies, textbooks, consensus guidelines, and published reports that propose criteria for assessing the quality of PRO measures. Recommendations from these sources were iteratively distilled into a checklist of key attributes. Preliminary items underwent evaluation through 24 cognitive interviews with clinicians and quantitative researchers. Six measurement theory methodological novices independently applied the final checklist to assess six PRO measures encompassing a variety of methods, applications, and clinical constructs. Agreement between novice and expert scores was assessed. The distillation process yielded an 18-item checklist with six domains: (1) conceptual model, (2) content validity, (3) reliability, (4) construct validity, (5) scoring and interpretation, and (6) respondent burden and presentation. With minimal instruction, good agreement in checklist item ratings was achieved between quantitative researchers with expertise in measurement theory and less experienced clinicians (mean kappa 0.70; range 0.66-0.87). We present a simplified checklist that can help guide systematic reviewers, researchers, and clinicians with varied measurement theory expertise to evaluate the strengths and weaknesses of candidate PRO measures' developmental properties and their appropriateness for specific applications.
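    The agreement statistic quoted in the results (Cohen's kappa) can be computed in a few lines; the "met"/"not" ratings below are invented for illustration, not study data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters beyond chance,
    kappa = (p_o - p_e) / (1 - p_e)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_o = sum(x == y for x, y in zip(rater_a, rater_b)) / n   # observed
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_e = sum(ca[k] * cb[k] for k in ca) / (n * n)            # by chance
    return (p_o - p_e) / (1 - p_e)

# Hypothetical checklist ratings from an expert and a novice reviewer
# across ten checklist items:
expert = ["met", "met", "not", "met", "not", "met", "met", "not", "met", "met"]
novice = ["met", "met", "not", "met", "met", "met", "met", "not", "met", "not"]
print(round(cohens_kappa(expert, novice), 3))
```

    Kappa discounts the agreement two raters would reach by guessing with their own marginal rates, which is why it is preferred over raw percent agreement for checklist validation.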

  18. An improved method for quantitatively measuring the sequences of total organic carbon and black carbon in marine sediment cores

    NASA Astrophysics Data System (ADS)

    Xu, Xiaoming; Zhu, Qing; Zhou, Qianzhi; Liu, Jinzhong; Yuan, Jianping; Wang, Jianghai

    2018-01-01

    Understanding the global carbon cycle is critical to uncovering the mechanisms of global warming and remediating its adverse effects on human activities. Organic carbon in marine sediments is an indispensable part of the global carbon reservoir in global carbon cycling. Evaluating such a reservoir calls for quantitative studies of marine carbon burial, which closely depend on quantifying total organic carbon and black carbon in marine sediment cores and subsequently on obtaining their high-resolution temporal sequences. However, the conventional methods for detecting the contents of total organic carbon or black carbon cannot resolve the following specific difficulties: (1) a very limited amount of each subsample versus the diverse analytical items, (2) a low and fluctuating recovery rate of total organic carbon or black carbon versus the reproducibility of carbon data, and (3) a large number of subsamples versus the need for rapid batch measurements. In this work, by (i) adopting customized disposable ceramic crucibles with micropore-controlled ability, (ii) developing self-made or customized facilities for the procedures of acidification and chemothermal oxidization, and (iii) optimizing procedures and the carbon-sulfur analyzer, we have built a novel Wang-Xu-Yuan (WXY) method for measuring the contents of total organic carbon or black carbon in marine sediment cores, which includes the procedures of pretreatment, weighing, acidification, chemothermal oxidation and quantification. It can fully meet the requirements for establishing high-resolution temporal sequences, whether in recovery, experimental efficiency, accuracy and reliability of the measurements, or homogeneity of samples. In particular, the use of disposable ceramic crucibles evidently simplifies the experimental scenario, which further results in very high recovery rates for total organic carbon and black carbon. This new technique may provide significant support for revealing the mechanism of carbon burial and evaluating the capacity of marine carbon accumulation and sequestration.

  19. Equilibrium ex vivo calibration of homogenized tissue for in vivo SPME quantitation of doxorubicin in lung tissue.

    PubMed

    Roszkowska, Anna; Tascon, Marcos; Bojko, Barbara; Goryński, Krzysztof; Dos Santos, Pedro Reck; Cypel, Marcelo; Pawliszyn, Janusz

    2018-06-01

    The fast and sensitive determination of concentrations of anticancer drugs in specific organs can improve the efficacy of chemotherapy and minimize its adverse effects. In this paper, ex vivo solid-phase microextraction (SPME) coupled to LC-MS/MS as a method for rapidly quantitating doxorubicin (DOX) in lung tissue was optimized. Furthermore, the theoretical and practical challenges related to the real-time monitoring of DOX levels in the lung tissue of a living organism (in vivo SPME) are presented. In addition, several parameters for ex vivo/in vivo SPME studies, such as the extraction efficiency of autoclaved fibers, intact/homogenized tissue differences, critical tissue amount, and the absence of an internal standard, are thoroughly examined. To both accurately quantify DOX in solid tissue and minimize the error related to the lack of an internal standard, a calibration method at equilibrium conditions was chosen. Under optimized ex vivo SPME conditions, the targeted compound was extracted by directly introducing a 15 mm (45 µm thickness) mixed-mode fiber into 15 g of homogenized tissue for 20 min, followed by a desorption step in an optimal solvent mixture. The detection limit for DOX was 2.5 µg g-1 of tissue. The optimized ex vivo SPME method was successfully applied for the analysis of DOX in real pig lung biopsies, providing an average accuracy and precision of 103.2% and 12.3%, respectively. Additionally, a comparison between SPME and solid-liquid extraction revealed good agreement. The results presented herein demonstrate that the developed SPME method radically simplifies the sample preparation step and eliminates the need for tissue biopsies. These results suggest that SPME can accurately quantify DOX in different tissue compartments and can be potentially useful for monitoring and adjusting drug dosages during chemotherapy in order to achieve effective and safe concentrations of doxorubicin. Copyright © 2018 Elsevier B.V. All rights reserved.

  20. Robust real-time extraction of respiratory signals from PET list-mode data.

    PubMed

    Salomon, Andre; Zhang, Bin; Olivier, Patrick; Goedicke, Andreas

    2018-05-01

    Respiratory motion, which typically cannot simply be suspended during PET image acquisition, affects lesions' detection and quantitative accuracy inside or in close vicinity to the lungs. Some motion compensation techniques address this issue via pre-sorting ("binning") of the acquired PET data into a set of temporal gates, where each gate is assumed to be minimally affected by respiratory motion. Tracking respiratory motion is typically realized using dedicated hardware (e.g. using respiratory belts and digital cameras). Extracting respiratory signals directly from the acquired PET data simplifies the clinical workflow as it avoids handling additional signal measurement equipment. We introduce a new data-driven method "Combined Local Motion Detection" (CLMD). It uses the Time-of-Flight (TOF) information provided by state-of-the-art PET scanners in order to enable real-time respiratory signal extraction without additional hardware resources. CLMD applies center-of-mass detection in overlapping regions based on simple back-positioned TOF event sets acquired in short time frames. Following a signal filtering and quality-based pre-selection step, the remaining extracted individual position information over time is then combined to generate a global respiratory signal. The method is evaluated using seven measured FDG studies from single and multiple scan positions of the thorax region, and it is compared to other software-based methods regarding quantitative accuracy and statistical noise stability. Correlation coefficients around 90% between the reference and the extracted signal have been found for those PET scans where motion-affected features such as tumors or hot regions were present in the PET field-of-view. For PET scans with a quarter of typically applied radiotracer doses, the CLMD method still provides similarly high correlation coefficients, which indicates its robustness to noise.
Each CLMD processing needed less than 0.4 s in total on a standard multi-core CPU and thus provides a robust and accurate approach enabling real-time processing capabilities using standard PC hardware. © 2018 Institute of Physics and Engineering in Medicine.
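
    The core of such data-driven gating can be illustrated with a toy center-of-mass extraction (a sketch of the general idea only, not the CLMD algorithm; all events below are synthetic):

```python
# Toy sketch of data-driven respiratory signal extraction: in each short time
# frame, take the axial center of mass of (synthetic) back-positioned event
# coordinates; its oscillation tracks the breathing motion. Not the CLMD code.
import math, random

random.seed(0)
frames = 200                      # e.g. 0.1 s frames
signal = []
for t in range(frames):
    true_shift = 5.0 * math.sin(2 * math.pi * t / 40.0)   # ~4 s breathing cycle
    # synthetic axial (z) positions of TOF-positioned events in this frame
    z = [random.gauss(true_shift, 30.0) for _ in range(2000)]
    signal.append(sum(z) / len(z))   # center of mass along z

def smooth(s, w=5):
    """Simple moving average to suppress statistical noise."""
    return [sum(s[max(0, i - w):i + w + 1]) / len(s[max(0, i - w):i + w + 1])
            for i in range(len(s))]

resp = smooth(signal)   # extracted respiratory surrogate signal
```

    In the real method, many overlapping regions are processed in parallel and combined after quality-based pre-selection; this sketch uses a single region.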

  1. Robust real-time extraction of respiratory signals from PET list-mode data

    NASA Astrophysics Data System (ADS)

    Salomon, André; Zhang, Bin; Olivier, Patrick; Goedicke, Andreas

    2018-06-01

    Respiratory motion, which typically cannot simply be suspended during PET image acquisition, affects lesions’ detection and quantitative accuracy inside or in close vicinity to the lungs. Some motion compensation techniques address this issue via pre-sorting (‘binning’) of the acquired PET data into a set of temporal gates, where each gate is assumed to be minimally affected by respiratory motion. Tracking respiratory motion is typically realized using dedicated hardware (e.g. using respiratory belts and digital cameras). Extracting respiratory signals directly from the acquired PET data simplifies the clinical workflow as it avoids handling additional signal measurement equipment. We introduce a new data-driven method ‘combined local motion detection’ (CLMD). It uses the time-of-flight (TOF) information provided by state-of-the-art PET scanners in order to enable real-time respiratory signal extraction without additional hardware resources. CLMD applies center-of-mass detection in overlapping regions based on simple back-positioned TOF event sets acquired in short time frames. Following a signal filtering and quality-based pre-selection step, the remaining extracted individual position information over time is then combined to generate a global respiratory signal. The method is evaluated using seven measured FDG studies from single and multiple scan positions of the thorax region, and it is compared to other software-based methods regarding quantitative accuracy and statistical noise stability. Correlation coefficients around 90% between the reference and the extracted signal have been found for those PET scans where motion-affected features such as tumors or hot regions were present in the PET field-of-view. For PET scans with a quarter of typically applied radiotracer doses, the CLMD method still provides similarly high correlation coefficients, which indicates its robustness to noise.
Each CLMD processing needed less than 0.4 s in total on a standard multi-core CPU and thus provides a robust and accurate approach enabling real-time processing capabilities using standard PC hardware.

  2. Site specific measurements of bone formation using [18F] sodium fluoride PET/CT

    PubMed Central

    Puri, Tanuj; Siddique, Musib; Frost, Michelle L.; Moore, Amelia E. B.; Fogelman, Ignac

    2018-01-01

    Dynamic positron emission tomography (PET) imaging with fluorine-18 labelled sodium fluoride ([18F]NaF) allows the quantitative assessment of regional bone formation by measuring the plasma clearance of fluoride to bone at any site in the skeleton. Today, hybrid PET and computed tomography (CT) dual-modality systems (PET/CT) are widely available, and [18F]NaF PET/CT offers a convenient non-invasive method of studying bone formation at the important osteoporotic fracture sites at the hip and spine, as well as sites of pure cortical or trabecular bone. The technique complements conventional measurements of bone turnover using biochemical markers or bone biopsy as a tool to investigate new therapies for osteoporosis, and has a potential role as an early biomarker of treatment efficacy in clinical trials. This article reviews methods of acquiring and analyzing dynamic [18F]NaF PET/CT scan data, and outlines a simplified approach combining venous blood sampling with a series of short (3- to 5-minute) static PET/CT scans acquired at different bed positions to estimate [18F]NaF plasma clearance at multiple sites in the skeleton with just a single injection of tracer. PMID:29541623
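
    The plasma-clearance estimate underlying such dynamic scans is commonly obtained by Patlak graphical analysis; the following sketch (synthetic numbers, not the authors' protocol) shows the calculation:

```python
# Hedged illustration of the kind of kinetic calculation involved (not the
# authors' exact protocol): a Patlak-style estimate of plasma clearance K_i
# from a plasma input function Cp(t) and bone activity Cb(t). Numbers are
# synthetic, generated from a known K_i so the fit can be checked.
def cumtrapz(t, y):
    """Cumulative trapezoidal integral of y over t."""
    out, acc = [0.0], 0.0
    for i in range(1, len(t)):
        acc += 0.5 * (y[i] + y[i - 1]) * (t[i] - t[i - 1])
        out.append(acc)
    return out

t  = [1, 2, 5, 10, 20, 30, 40, 50, 60]                   # minutes
cp = [20.0, 12.0, 6.0, 3.0, 1.5, 1.0, 0.8, 0.7, 0.6]     # plasma activity
ki_true, vb = 0.03, 0.2                                   # synthetic ground truth
icp = cumtrapz(t, cp)
cb = [ki_true * icp[i] + vb * cp[i] for i in range(len(t))]

# Patlak plot: Cb/Cp vs (integral of Cp)/Cp is linear at late times; slope = K_i
x = [icp[i] / cp[i] for i in range(len(t))]
y = [cb[i] / cp[i] for i in range(len(t))]
n = len(x); mx = sum(x) / n; my = sum(y) / n
ki_est = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
         sum((a - mx) ** 2 for a in x)
```

    The simplified static-scan approach in the article replaces the full dynamic acquisition with venous samples plus short scans, but the quantity being estimated is the same plasma clearance.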

  3. Computer-aided classification of lung nodules on computed tomography images via deep learning technique

    PubMed Central

    Hua, Kai-Lung; Hsu, Che-Hao; Hidayati, Shintami Chusnul; Cheng, Wen-Huang; Chen, Yu-Jen

    2015-01-01

    Lung cancer has a poor prognosis when not diagnosed early and unresectable lesions are present. The management of small lung nodules noted on computed tomography scan is controversial due to uncertain tumor characteristics. A conventional computer-aided diagnosis (CAD) scheme requires several image processing and pattern recognition steps to accomplish a quantitative tumor differentiation result. In such an ad hoc image analysis pipeline, every step depends heavily on the performance of the previous step. Accordingly, tuning of classification performance in a conventional CAD scheme is very complicated and arduous. Deep learning techniques, on the other hand, have the intrinsic advantage of an automatic exploitation feature and tuning of performance in a seamless fashion. In this study, we attempted to simplify the image analysis pipeline of conventional CAD with deep learning techniques. Specifically, we introduced models of a deep belief network and a convolutional neural network in the context of nodule classification in computed tomography images. Two baseline methods with feature computing steps were implemented for comparison. The experimental results suggest that deep learning methods could achieve better discriminative results and hold promise in the CAD application domain. PMID:26346558
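
    The contrast with a hand-tuned pipeline can be illustrated by a toy convolution-based scorer (a sketch only; the paper's deep belief network and CNN are far larger, and their filters are learned rather than fixed):

```python
# Toy forward pass of a convolutional scorer: convolution -> ReLU -> global
# average pooling. In a real CNN the kernel weights are learned end to end,
# which is what removes the hand-crafted feature-engineering steps.
def conv2d(img, k):
    h, w, kh, kw = len(img), len(img[0]), len(k), len(k[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            row.append(sum(img[i + a][j + b] * k[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

def relu(m):
    return [[max(0.0, v) for v in row] for row in m]

def global_avg(m):
    vals = [v for row in m for v in row]
    return sum(vals) / len(vals)

# 5x5 "image" with a bright 2x2 blob (a cartoon nodule)
img = [[1.0 if 1 <= i <= 2 and 1 <= j <= 2 else 0.0 for j in range(5)]
       for i in range(5)]
kernel = [[0.25, 0.25], [0.25, 0.25]]   # would be learned in a real CNN
score = global_avg(relu(conv2d(img, kernel)))
```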

  4. Protocols for the analytical characterization of therapeutic monoclonal antibodies. II - Enzymatic and chemical sample preparation.

    PubMed

    Bobaly, Balazs; D'Atri, Valentina; Goyon, Alexandre; Colas, Olivier; Beck, Alain; Fekete, Szabolcs; Guillarme, Davy

    2017-08-15

    The analytical characterization of therapeutic monoclonal antibodies and related proteins usually incorporates various sample preparation methodologies. Indeed, quantitative and qualitative information can be enhanced by simplifying the sample, thanks to the removal of sources of heterogeneity (e.g. N-glycans) and/or by decreasing the molecular size of the tested protein by enzymatic or chemical fragmentation. These approaches make the sample more suitable for chromatographic and mass spectrometric analysis. Structural elucidation and quality control (QC) analysis of biopharmaceuticals are usually performed at the intact, subunit and peptide levels. In this paper, general sample preparation approaches used to attain peptide-, subunit- and glycan-level analysis are overviewed. Protocols are described to perform tryptic proteolysis, IdeS and papain digestion, and reduction, as well as deglycosylation by PNGase F and EndoS2 enzymes. Both historical and modern sample preparation methods were compared and evaluated using rituximab and trastuzumab, two reference therapeutic mAb products approved by the Food and Drug Administration (FDA) and European Medicines Agency (EMA). The described protocols may help analysts to develop sample preparation methods in the field of therapeutic protein analysis. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Site specific measurements of bone formation using [18F] sodium fluoride PET/CT.

    PubMed

    Blake, Glen M; Puri, Tanuj; Siddique, Musib; Frost, Michelle L; Moore, Amelia E B; Fogelman, Ignac

    2018-02-01

    Dynamic positron emission tomography (PET) imaging with fluorine-18 labelled sodium fluoride ([18F]NaF) allows the quantitative assessment of regional bone formation by measuring the plasma clearance of fluoride to bone at any site in the skeleton. Today, hybrid PET and computed tomography (CT) dual-modality systems (PET/CT) are widely available, and [18F]NaF PET/CT offers a convenient non-invasive method of studying bone formation at the important osteoporotic fracture sites at the hip and spine, as well as sites of pure cortical or trabecular bone. The technique complements conventional measurements of bone turnover using biochemical markers or bone biopsy as a tool to investigate new therapies for osteoporosis, and has a potential role as an early biomarker of treatment efficacy in clinical trials. This article reviews methods of acquiring and analyzing dynamic [18F]NaF PET/CT scan data, and outlines a simplified approach combining venous blood sampling with a series of short (3- to 5-minute) static PET/CT scans acquired at different bed positions to estimate [18F]NaF plasma clearance at multiple sites in the skeleton with just a single injection of tracer.

  6. Complexity Reduction in Large Quantum Systems: Fragment Identification and Population Analysis via a Local Optimized Minimal Basis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohr, Stephan; Masella, Michel; Ratcliff, Laura E.

    We present, within Kohn-Sham Density Functional Theory calculations, a quantitative method to identify and assess the partitioning of a large quantum mechanical system into fragments. We then introduce a simple and efficient formalism (which can be written as a generalization of other well-known population analyses) to extract, from first principles, electrostatic multipoles for these fragments. The corresponding fragment multipoles can in this way be seen as reliable (pseudo-) observables. By applying our formalism within the code BigDFT, we show that the usage of a minimal set of in-situ optimized basis functions is of utmost importance for having at the same time a proper fragment definition and an accurate description of the electronic structure. With this approach it becomes possible to simplify the modeling of environmental fragments by a set of multipoles, without notable loss of precision in the description of the active quantum mechanical region. Furthermore, this leads to a considerable reduction of the degrees of freedom by an effective coarse-graining approach, eventually also paving the way towards efficient QM/QM and QM/MM methods coupling together different levels of accuracy.
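
    The multipole-extraction step can be caricatured classically: once a fragment is defined, its lowest electrostatic multipoles follow from sums over charges. A hedged sketch with invented point charges rather than a Kohn-Sham density:

```python
# Illustrative sketch only: monopole and dipole of a "fragment" computed from
# classical point charges, the kind of coarse-grained electrostatic descriptor
# the paper extracts from first principles (here no DFT is involved).
def fragment_multipoles(charges, positions, origin=(0.0, 0.0, 0.0)):
    q_tot = sum(charges)                                  # monopole
    dipole = [sum(q * (p[k] - origin[k]) for q, p in zip(charges, positions))
              for k in range(3)]                          # dipole vector
    return q_tot, dipole

# A cartoon water-like fragment: charges sum to zero, the dipole does not
charges = [-0.8, 0.4, 0.4]
positions = [(0.0, 0.0, 0.0), (0.76, 0.59, 0.0), (-0.76, 0.59, 0.0)]
q, mu = fragment_multipoles(charges, positions)
```

    Replacing an environmental fragment by such multipoles is what allows the quantum region to stay small without losing the fragment's electrostatic influence.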

  7. Measurement error of Young’s modulus considering the gravity and thermal expansion of thin specimens for in situ tensile testing

    NASA Astrophysics Data System (ADS)

    Ma, Zhichao; Zhao, Hongwei; Ren, Luquan

    2016-06-01

    Most miniature in situ tensile devices compatible with scanning/transmission electron microscopes or optical microscopes adopt a horizontal layout. In order to analyze and calculate the measurement error of the tensile Young’s modulus, the effects of gravity and temperature changes, which would respectively lead to and intensify the bending deformation of thin specimens, are considered as influencing factors. On the basis of a decomposition method of static indeterminacy, equations of simplified deflection curves are obtained and, accordingly, the actual gage length is confirmed. By comparing the effects of uniaxial tensile load on the change of the deflection curve with gravity, the relation between the actual and directly measured tensile Young’s modulus is obtained. Furthermore, the quantitative effects of the ideal gage length l₀, temperature change ΔT and the density ρ of the specimen on the modulus difference and modulus ratio are calculated. Specimens with larger l₀ and ρ present more obvious measurement errors for Young’s modulus, but the effect of ΔT is not significant. The calculation method of Young’s modulus is particularly suitable for thin specimens.

  8. Computer-aided classification of lung nodules on computed tomography images via deep learning technique.

    PubMed

    Hua, Kai-Lung; Hsu, Che-Hao; Hidayati, Shintami Chusnul; Cheng, Wen-Huang; Chen, Yu-Jen

    2015-01-01

    Lung cancer has a poor prognosis when not diagnosed early and unresectable lesions are present. The management of small lung nodules noted on computed tomography scan is controversial due to uncertain tumor characteristics. A conventional computer-aided diagnosis (CAD) scheme requires several image processing and pattern recognition steps to accomplish a quantitative tumor differentiation result. In such an ad hoc image analysis pipeline, every step depends heavily on the performance of the previous step. Accordingly, tuning of classification performance in a conventional CAD scheme is very complicated and arduous. Deep learning techniques, on the other hand, have the intrinsic advantage of an automatic exploitation feature and tuning of performance in a seamless fashion. In this study, we attempted to simplify the image analysis pipeline of conventional CAD with deep learning techniques. Specifically, we introduced models of a deep belief network and a convolutional neural network in the context of nodule classification in computed tomography images. Two baseline methods with feature computing steps were implemented for comparison. The experimental results suggest that deep learning methods could achieve better discriminative results and hold promise in the CAD application domain.

  9. Complexity Reduction in Large Quantum Systems: Fragment Identification and Population Analysis via a Local Optimized Minimal Basis

    DOE PAGES

    Mohr, Stephan; Masella, Michel; Ratcliff, Laura E.; ...

    2017-07-21

    We present, within Kohn-Sham Density Functional Theory calculations, a quantitative method to identify and assess the partitioning of a large quantum mechanical system into fragments. We then introduce a simple and efficient formalism (which can be written as a generalization of other well-known population analyses) to extract, from first principles, electrostatic multipoles for these fragments. The corresponding fragment multipoles can in this way be seen as reliable (pseudo-) observables. By applying our formalism within the code BigDFT, we show that the usage of a minimal set of in-situ optimized basis functions is of utmost importance for having at the same time a proper fragment definition and an accurate description of the electronic structure. With this approach it becomes possible to simplify the modeling of environmental fragments by a set of multipoles, without notable loss of precision in the description of the active quantum mechanical region. Furthermore, this leads to a considerable reduction of the degrees of freedom by an effective coarse-graining approach, eventually also paving the way towards efficient QM/QM and QM/MM methods coupling together different levels of accuracy.

  10. Uncontrolled Manifold Reference Feedback Control of Multi-Joint Robot Arms

    PubMed Central

    Togo, Shunta; Kagawa, Takahiro; Uno, Yoji

    2016-01-01

    The brain must coordinate with redundant bodies to perform motion tasks. The aim of the present study is to propose a novel control model that predicts the characteristics of human joint coordination at a behavioral level. To evaluate the joint coordination, an uncontrolled manifold (UCM) analysis that focuses on the trial-to-trial variance of joints has been proposed. The UCM is a nonlinear manifold associated with redundant kinematics. In this study, we directly applied the notion of the UCM to our proposed control model called the “UCM reference feedback control.” To simplify the problem, the present study considered how the redundant joints were controlled to regulate a given target hand position. We considered a conventional method that pre-determined a unique target joint trajectory by inverse kinematics or any other optimization method. In contrast, our proposed control method generates a UCM as a control target at each time step. The target UCM is a subspace of joint angles whose variability does not affect the hand position. The joint combination in the target UCM is then selected so as to minimize the cost function, which consists of the joint torque and torque change. To examine whether the proposed method could reproduce human-like joint coordination, we conducted simulation and measurement experiments. In the simulation experiments, a three-link arm with a shoulder, elbow, and wrist regulates a one-dimensional hand-position target through the proposed method. In the measurement experiments, subjects performed a one-dimensional target-tracking task. The kinematics, dynamics, and joint coordination were quantitatively compared with the simulation data of the proposed method. 
As a result, the UCM reference feedback control could quantitatively reproduce the difference of the mean value for the end hand position between the initial postures, the peaks of the bell-shape tangential hand velocity, the sum of the squared torque, the mean value for the torque change, the variance components, and the index of synergy as well as the human subjects. We concluded that UCM reference feedback control can reproduce human-like joint coordination. The inference for motor control of the human central nervous system based on the proposed method was discussed. PMID:27462215
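
    The geometric heart of the UCM can be sketched for a planar three-link arm with a one-dimensional hand-position task: joint-velocity directions in the null space of the task Jacobian change the joints without moving the hand. (Illustrative only; link lengths and posture below are invented.)

```python
# Sketch of the UCM idea for a planar 3-link arm with a 1-D task (hand
# x-position). Directions dq with J . dq = 0 lie in the uncontrolled manifold:
# they redistribute the joints while leaving the task variable unchanged.
import math

L = [0.3, 0.25, 0.2]                 # link lengths in meters (hypothetical)

def hand_x(q):
    """Hand x-position for joint angles q (radians)."""
    x, s = 0.0, 0.0
    for li, qi in zip(L, q):
        s += qi
        x += li * math.cos(s)
    return x

def jacobian_x(q):
    """dx/dq_j = -sum_{i>=j} L[i] * sin(q_1 + ... + q_i)."""
    cums, s = [], 0.0
    for qi in q:
        s += qi
        cums.append(s)
    return [-sum(L[i] * math.sin(cums[i]) for i in range(j, 3))
            for j in range(3)]

q = [0.4, 0.6, -0.3]                 # some posture
J = jacobian_x(q)
# One direction inside the UCM: orthogonal to J by construction (J.dq = 0)
dq = [J[1], -J[0], 0.0]
```

    Small joint motions along dq leave the hand essentially in place, which is exactly the variability the UCM analysis classifies as task-irrelevant.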

  11. Simplified welding distortion analysis for fillet welding using composite shell elements

    NASA Astrophysics Data System (ADS)

    Kim, Mingyu; Kang, Minseok; Chung, Hyun

    2015-09-01

    This paper presents a simplified welding distortion analysis method to predict the welding deformation of both the plate and the stiffener in fillet welds. Currently, methods based on equivalent thermal strain, such as Strain as Direct Boundary (SDB), are widely used because they effectively predict welding deformation. For fillet welding, however, those methods cannot represent the deformation of both members at once, since the temperature degree of freedom is shared at the intersection nodes of both members. In this paper, we propose a new approach to simulate the deformation of both members. The method can simulate fillet weld deformations by employing composite shell elements and using different thermal expansion coefficients along the thickness direction with a fixed temperature at the intersection nodes. For verification purposes, we compare results from experiments, 3D thermo-elastic-plastic analysis, the SDB method and the proposed method. Compared with the experimental results, the proposed method can effectively predict welding deformation for fillet welds.

  12. Curves showing column strength of steel and duralumin tubing

    NASA Technical Reports Server (NTRS)

    Ross, Orrin E

    1929-01-01

    Given here are a set of column strength curves that are intended to simplify the method of determining the size of struts in an airplane structure when the load in the member is known. The curves will also simplify the checking of the strength of a strut if the size and length are known. With these curves, no computations are necessary, as in the case of the old-fashioned method of strut design. The process is so simple that draftsmen or others who are not entirely familiar with mechanics can check the strength of a strut without much danger of error.
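
    The hand computation the curves replace is essentially the Euler column formula (valid in the elastic range; the report's curves also cover short columns, where Euler does not apply). A sketch with hypothetical tube dimensions:

```python
# Euler elastic-buckling load for a pin-ended tubular strut. Dimensions are
# hypothetical; the NACA curves fold this computation (plus the short-column
# range) into a direct graphical lookup.
import math

def euler_critical_load(E, I, L, K=1.0):
    """Elastic buckling load of a column (K=1 for pin-ended), consistent units."""
    return math.pi ** 2 * E * I / (K * L) ** 2

def tube_moment_of_inertia(d_out, t):
    """Second moment of area of a round tube: outer diameter d_out, wall t."""
    d_in = d_out - 2 * t
    return math.pi * (d_out ** 4 - d_in ** 4) / 64.0

# Example: steel tube strut in SI units (25 mm OD, 1.5 mm wall, 1 m long)
E = 200e9                                    # Pa
I = tube_moment_of_inertia(0.025, 0.0015)    # m^4
P = euler_critical_load(E, I, L=1.0)         # N
```

    Doubling the strut length divides the elastic buckling load by four, which is why the curves are parameterized by slenderness.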

  13. Streamlined approach to mapping the magnetic induction of skyrmionic materials.

    PubMed

    Chess, Jordan J; Montoya, Sergio A; Harvey, Tyler R; Ophus, Colin; Couture, Simon; Lomakin, Vitaliy; Fullerton, Eric E; McMorran, Benjamin J

    2017-06-01

    Recently, Lorentz transmission electron microscopy (LTEM) has helped researchers advance the emerging field of magnetic skyrmions. These magnetic quasi-particles, composed of topologically non-trivial magnetization textures, have a large potential for application as information carriers in low-power memory and logic devices. LTEM is one of a very few techniques for direct, real-space imaging of magnetic features at the nanoscale. For Fresnel-contrast LTEM, the transport of intensity equation (TIE) is the tool of choice for quantitative reconstruction of the local magnetic induction through the sample thickness. Typically, this analysis requires collection of at least three images. Here, we show that for uniform, thin, magnetic films, which includes many skyrmionic samples, the magnetic induction can be quantitatively determined from a single defocused image using a simplified TIE approach. Copyright © 2017 Elsevier B.V. All rights reserved.
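
    For a uniform thin film the single-image reconstruction admits a compact numerical sketch (assumptions mine, and reduced to 1-D; the paper works with 2-D Fresnel-contrast images):

```python
# Sketch of the single-image simplification (assumptions mine, not the paper's
# exact algorithm): for a uniform film the in-focus image is flat, so
# dI/dz ~ (I_defocused - I0)/dz, and the transport of intensity equation
# reduces to a Poisson equation for the phase. Solved here in 1-D with
# zero-phase boundaries via the Thomas tridiagonal algorithm.
def solve_phase_1d(i_def, i0, dz, k, dx):
    """Solve phi'' = -(k/i0) * (i_def - i0)/dz with phi = 0 at both ends."""
    n = len(i_def)
    # discrete system: 2*phi[i] - phi[i-1] - phi[i+1] = rhs[i]
    rhs = [(k / i0) * (v - i0) / dz * dx * dx for v in i_def]
    a, b, c = -1.0, 2.0, -1.0
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c / b
    dp[0] = rhs[0] / b
    for i in range(1, n):
        m = b - a * cp[i - 1]
        cp[i] = c / m
        dp[i] = (rhs[i] - a * dp[i - 1]) / m
    phi = [0.0] * n
    phi[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        phi[i] = dp[i] - cp[i] * phi[i + 1]
    return phi
```

    The recovered phase gradient is proportional to the in-plane magnetic induction integrated through the film thickness, which is what makes the single defocused image quantitatively useful.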

  14. PaCER - A fully automated method for electrode trajectory and contact reconstruction in deep brain stimulation.

    PubMed

    Husch, Andreas; V Petersen, Mikkel; Gemmar, Peter; Goncalves, Jorge; Hertel, Frank

    2018-01-01

    Deep brain stimulation (DBS) is a neurosurgical intervention where electrodes are permanently implanted into the brain in order to modulate pathologic neural activity. The post-operative reconstruction of the DBS electrodes is important for efficient stimulation parameter tuning. A major limitation of existing approaches for electrode reconstruction from post-operative imaging that prevents routine clinical use is that they are manual or semi-automatic, and thus both time-consuming and subjective. Moreover, the existing methods rely on a simplified model of a straight-line electrode trajectory, rather than the more realistic curved trajectory. The main contribution of this paper is that for the first time we present a highly accurate and fully automated method for electrode reconstruction that considers curved trajectories. The robustness of our proposed method is demonstrated using a multi-center clinical dataset consisting of N = 44 electrodes. In all cases the electrode trajectories were successfully identified and reconstructed. In addition, the accuracy is demonstrated quantitatively using a high-accuracy phantom with known ground truth. In the phantom experiment, the method could detect individual electrode contacts with high accuracy and the trajectory reconstruction reached an error level below 100 μm (0.046 ± 0.025 mm). An implementation of the method is made publicly available such that it can directly be used by researchers or clinicians. This constitutes an important step towards future integration of lead reconstruction into standard clinical care.

  15. Expanding the geography of evapotranspiration: An improved method to quantify land-to-air water fluxes in tropical and subtropical regions

    PubMed Central

    Jerszurki, Daniela; Souza, Jorge L. M.; Silva, Lucas C. R.

    2017-01-01

    The development of new reference evapotranspiration (ETo) methods holds significant promise for improving our quantitative understanding of climatic impacts on water loss from the land to the atmosphere. To address the challenge of estimating ETo in tropical and subtropical regions where direct measurements are scarce, we tested a new method based on geographical patterns of extraterrestrial radiation (Ra) and atmospheric water potential (Ψair). Our approach consisted of generating daily estimates of ETo across several climate zones in Brazil–as a model system–which we compared with standard EToPM (Penman-Monteith) estimates. In contrast with EToPM, the simplified method (EToMJS) relies solely on Ψair calculated from widely available air temperature (°C) and relative humidity (%) data, which combined with Ra data resulted in reliable estimates of equivalent evaporation (Ee) and ETo. We used regression analyses of Ψair vs EToPM and Ee vs EToPM to calibrate the EToMJS(Ψair) and EToMJS estimates from 2004 to 2014 and between seasons and climatic zones. Finally, we evaluated the performance of the new method based on the coefficient of determination (R2) and correlation (R), index of agreement “d”, mean absolute error (MAE) and mean ratio (MR). This evaluation confirmed the suitability of the EToMJS method for application in tropical and subtropical regions, where the climatic information needed for the standard EToPM calculation is absent. PMID:28658324
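
    The atmospheric water potential can be computed from temperature and relative humidity alone; a standard thermodynamic form is sketched below (the paper's exact formulation and calibration constants may differ):

```python
# Hedged sketch of a standard formula for the water potential of air,
# Psi = (R*T/Vw) * ln(RH/100); the paper's exact formulation may differ.
import math

R = 8.314          # J mol-1 K-1, universal gas constant
VW = 1.8e-5        # m3 mol-1, partial molar volume of liquid water

def psi_air_mpa(temp_c, rh_percent):
    """Water potential of air (MPa) from temperature (deg C) and RH (%)."""
    t_k = temp_c + 273.15
    return (R * t_k / VW) * math.log(rh_percent / 100.0) / 1e6

# Example: humid vs dry air at 25 deg C
psi_humid = psi_air_mpa(25.0, 90.0)   # small negative (about -15 MPa)
psi_dry = psi_air_mpa(25.0, 40.0)     # strongly negative
```

    The strong sensitivity of Ψair to relative humidity is what lets a humidity-plus-temperature variable stand in for the fuller meteorological inputs of Penman-Monteith.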

  16. Expanding the geography of evapotranspiration: An improved method to quantify land-to-air water fluxes in tropical and subtropical regions.

    PubMed

    Jerszurki, Daniela; Souza, Jorge L M; Silva, Lucas C R

    2017-01-01

    The development of new reference evapotranspiration (ETo) methods holds significant promise for improving our quantitative understanding of climatic impacts on water loss from the land to the atmosphere. To address the challenge of estimating ETo in tropical and subtropical regions where direct measurements are scarce, we tested a new method based on geographical patterns of extraterrestrial radiation (Ra) and atmospheric water potential (Ψair). Our approach consisted of generating daily estimates of ETo across several climate zones in Brazil-as a model system-which we compared with standard EToPM (Penman-Monteith) estimates. In contrast with EToPM, the simplified method (EToMJS) relies solely on Ψair calculated from widely available air temperature (°C) and relative humidity (%) data, which combined with Ra data resulted in reliable estimates of equivalent evaporation (Ee) and ETo. We used regression analyses of Ψair vs EToPM and Ee vs EToPM to calibrate the EToMJS(Ψair) and EToMJS estimates from 2004 to 2014 and between seasons and climatic zones. Finally, we evaluated the performance of the new method based on the coefficient of determination (R2) and correlation (R), index of agreement "d", mean absolute error (MAE) and mean ratio (MR). This evaluation confirmed the suitability of the EToMJS method for application in tropical and subtropical regions, where the climatic information needed for the standard EToPM calculation is absent.

  17. Simplified adsorption method for detection of antibodies to Candida albicans germ tubes.

    PubMed Central

    Ponton, J; Quindos, G; Arilla, M C; Mackenzie, D W

    1994-01-01

    Two modifications that simplify and shorten a method for adsorption of the antibodies against the antigens expressed on both blastospore and germ tube cell wall surfaces (methods 2 and 3) were compared with the original method of adsorption (method 1) to detect anti-Candida albicans germ tube antibodies in 154 serum specimens. Adsorption of the sera by both modified methods resulted in titers very similar to those obtained by the original method. Only 5.2% of serum specimens tested by method 2 and 5.8% of serum specimens tested by method 3 presented greater than one dilution discrepancies in the titers with respect to the titer observed by method 1. When a test based on method 2 was evaluated with sera from patients with invasive candidiasis, the best discriminatory results (sensitivity, 84.6%; specificity, 87.9%; positive predictive value, 75.9%; negative predictive value, 92.7%; efficiency, 86.9%) were obtained when a titer of > or = 1:160 was considered positive. PMID:8126184
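
    The figures of merit reported above are standard confusion-matrix quantities; the sketch below uses invented counts chosen only to land near the reported percentages (the paper's actual patient numbers are not given here):

```python
# Definitions of the reported diagnostic figures of merit. The counts are
# invented for illustration (chosen to fall near the reported percentages),
# not taken from the paper.
def diagnostic_metrics(tp, fn, tn, fp):
    return {
        "sensitivity": tp / (tp + fn),            # true-positive rate
        "specificity": tn / (tn + fp),            # true-negative rate
        "ppv": tp / (tp + fp),                    # positive predictive value
        "npv": tn / (tn + fn),                    # negative predictive value
        "efficiency": (tp + tn) / (tp + tn + fp + fn),  # overall accuracy
    }

m = diagnostic_metrics(tp=22, fn=4, tn=51, fp=7)
```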

  18. Improved cosine similarity measures of simplified neutrosophic sets for medical diagnoses.

    PubMed

    Ye, Jun

    2015-03-01

    In pattern recognition and medical diagnosis, similarity measure is an important mathematical tool. To overcome some disadvantages of existing cosine similarity measures of simplified neutrosophic sets (SNSs) in vector space, this paper proposed improved cosine similarity measures of SNSs based on the cosine function, including single valued neutrosophic cosine similarity measures and interval neutrosophic cosine similarity measures. Then, weighted cosine similarity measures of SNSs were introduced by taking into account the importance of each element. Further, a medical diagnosis method using the improved cosine similarity measures was proposed to solve medical diagnosis problems with simplified neutrosophic information. The improved cosine similarity measures between SNSs were introduced based on the cosine function. Then, we compared the improved cosine similarity measures of SNSs with existing cosine similarity measures of SNSs by numerical examples to demonstrate their effectiveness and rationality for overcoming some shortcomings of existing cosine similarity measures of SNSs in some cases. In the medical diagnosis method, we can find a proper diagnosis by the cosine similarity measures between the symptoms and considered diseases, which are represented by SNSs. Then, the medical diagnosis method based on the improved cosine similarity measures was applied to two medical diagnosis problems to show the applications and effectiveness of the proposed method. Both numerical examples demonstrated that the improved cosine similarity measures of SNSs based on the cosine function can overcome the shortcomings of the existing cosine similarity measures between two vectors in some cases. In the two medical diagnosis problems, the diagnoses obtained using the various similarity measures of SNSs were identical, demonstrating the effectiveness and rationality of the diagnosis method proposed in this paper. 
The improved cosine measures of SNSs based on the cosine function can overcome some drawbacks of the existing cosine similarity measures of SNSs in vector space, and the resulting diagnosis method is therefore well suited to handling medical diagnosis problems with simplified neutrosophic information. Copyright © 2014 Elsevier B.V. All rights reserved.
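
    A hedged sketch of a cosine-function-based similarity for single-valued neutrosophic sets: each element carries truth/indeterminacy/falsity degrees in [0, 1], and identical sets score 1. The max-difference form below is my reading of such measures; the paper's exact definitions may differ:

```python
# Sketch (assumptions mine, not necessarily the paper's exact measure): a
# cosine-function-based similarity for single-valued neutrosophic sets, using
# the maximum componentwise difference of the (T, I, F) triples.
import math

def sns_cosine_similarity(a, b):
    """a, b: lists of (T, I, F) triples of equal length; returns value in [0, 1]."""
    total = 0.0
    for (t1, i1, f1), (t2, i2, f2) in zip(a, b):
        d = max(abs(t1 - t2), abs(i1 - i2), abs(f1 - f2))
        total += math.cos(math.pi * d / 2.0)   # d=0 -> 1, d=1 -> 0
    return total / len(a)

symptoms = [(0.8, 0.2, 0.1), (0.6, 0.3, 0.3)]   # patient profile (hypothetical)
disease = [(0.7, 0.2, 0.2), (0.6, 0.2, 0.4)]    # disease profile (hypothetical)
s = sns_cosine_similarity(symptoms, disease)
```

    In a diagnosis setting, the disease profile with the highest similarity to the symptom profile would be selected.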

  19. Scarless assembly of unphosphorylated DNA fragments with a simplified DATEL method.

    PubMed

    Ding, Wenwen; Weng, Huanjiao; Jin, Peng; Du, Guocheng; Chen, Jian; Kang, Zhen

    2017-05-04

    Efficient assembly of multiple DNA fragments is a pivotal technology for synthetic biology. A scarless and sequence-independent DNA assembly method (DATEL) using thermal exonucleases has been developed recently. Here, we present a simplified DATEL (sDATEL) for efficient assembly of unphosphorylated DNA fragments at low cost. The sDATEL method depends only on Taq DNA polymerase and Taq DNA ligase. After optimizing the critical parameters of the reaction system, such as pH and the concentrations of Mg2+ and NAD+, the assembly efficiency was increased 32-fold. To further improve the assembly capacity, the number of thermal cycles was optimized, resulting in the successful assembly of four unphosphorylated DNA fragments with an accuracy of 75%. sDATEL could be a desirable method for routine manual and automated assembly.

  20. Stepwise sensitivity analysis from qualitative to quantitative: Application to the terrestrial hydrological modeling of a Conjunctive Surface-Subsurface Process (CSSP) land surface model

    NASA Astrophysics Data System (ADS)

    Gan, Yanjun; Liang, Xin-Zhong; Duan, Qingyun; Choi, Hyun Il; Dai, Yongjiu; Wu, Huan

    2015-06-01

    An uncertainty quantification framework was employed to examine the sensitivities of 24 model parameters from a newly developed Conjunctive Surface-Subsurface Process (CSSP) land surface model (LSM). The sensitivity analysis (SA) was performed over 18 representative watersheds in the contiguous United States to examine the influence of model parameters in the simulation of terrestrial hydrological processes. Two normalized metrics, relative bias (RB) and Nash-Sutcliffe efficiency (NSE), were adopted to assess the fit between simulated and observed streamflow discharge (SD) and evapotranspiration (ET) for a 14-year period. SA was conducted using a multiobjective two-stage approach, in which the first stage was a qualitative SA using the Latin Hypercube-based One-At-a-Time (LH-OAT) screening, and the second stage was a quantitative SA using the Multivariate Adaptive Regression Splines (MARS)-based Sobol' sensitivity indices. This approach combines the merits of qualitative and quantitative global SA methods, and is effective and efficient for understanding and simplifying large, complex system models. Ten of the 24 parameters were identified as important across different watersheds. The contribution of each parameter to the total response variance was then quantified by Sobol' sensitivity indices. Generally, parameter interactions contribute the most to the response variance of the CSSP, and only 5 out of 24 parameters dominate model behavior. Four photosynthetic and respiratory parameters are shown to be influential to ET, whereas reference depth for saturated hydraulic conductivity is the most influential parameter for SD in most watersheds. Parameter sensitivity patterns mainly depend on hydroclimatic regime, as well as vegetation type and soil texture. This article was corrected on 26 JUN 2015. See the end of the full text for details.
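
    The first, qualitative stage can be sketched compactly: start from Latin hypercube base points and perturb each parameter one at a time, accumulating a mean relative effect per parameter. This is a minimal toy version of the LH-OAT idea, not the CSSP implementation; the model function, bounds, and step size below are illustrative assumptions:

```python
import numpy as np

def lh_oat_screen(model, bounds, n_base=50, delta=0.05, seed=0):
    """Qualitative One-At-a-Time screening from Latin hypercube base
    points. Returns one mean relative sensitivity per parameter;
    larger values flag parameters worth carrying into a quantitative
    (e.g. Sobol') second stage."""
    rng = np.random.default_rng(seed)
    k = len(bounds)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    # Latin hypercube: each column is a random permutation of strata,
    # jittered uniformly within each stratum.
    strata = rng.permuted(np.tile(np.arange(n_base), (k, 1)), axis=1).T
    base = lo + (strata + rng.random((n_base, k))) / n_base * (hi - lo)
    sens = np.zeros(k)
    for x in base:
        y0 = model(x)
        for j in range(k):
            xp = x.copy()
            xp[j] += delta * (hi[j] - lo[j])   # perturb one parameter
            sens[j] += abs(model(xp) - y0) / (abs(y0) + 1e-12)
    return sens / n_base
```

    On a toy model such as `lambda x: 10*x[0] + 0.1*x[1]`, the first parameter's score dominates, mimicking how the screening stage cut 24 CSSP parameters down to the important ten.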

  1. Quantitative Raman characterization of cross-linked collagen thin films as a model system for diagnosing early osteoarthritis

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Durney, Krista M.; Fomovsky, Gregory; Ateshian, Gerard A.; Vukelic, Sinisa

    2016-03-01

    The onset of osteoarthritis (OA) in articular cartilage is characterized by degradation of the extracellular matrix (ECM). Specifically, breakage of cross-links between collagen fibrils in the articular cartilage leads to loss of structural integrity of the bulk tissue. Because there are no broadly accepted, non-invasive, label-free tools for diagnosing OA at its early stage, Raman spectroscopy is proposed in this work as a novel, non-destructive diagnostic tool. In this study, collagen thin films were employed as a simplified model system of the cartilage collagen extracellular matrix. Cross-link formation was controlled via exposure to glutaraldehyde (GA), by varying exposure time and concentration, and Raman spectral information was collected to quantitatively characterize the cross-link assignments imparted to the collagen thin films during treatment. A novel quantitative method was developed to analyze the Raman signal obtained from the collagen thin films. Segments of the Raman signal were decomposed and modeled as sums of individual bands, providing an optimization function for subsequent curve fitting against the experimental data. Relative changes in the concentration of the GA-induced pyridinium cross-links were extracted from the model as a function of exposure to GA. Spatially resolved characterization enabled the construction of spectral maps of the collagen thin films, which provided detailed information about the variation of cross-link formation at various locations on the specimen. Results showed that the Raman spectral data correlate with glutaraldehyde treatment and may therefore be used as a proxy by which to measure loss of collagen cross-links in vivo. This study proposes a promising means of identifying the onset of OA and may enable early interventions that slow or prevent osteoarthritis progression.
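
    The band-decomposition step described above amounts to fitting a spectral segment as a sum of individual bands. A minimal sketch with synthetic data follows; the band shapes (Gaussians), positions, and noise level are illustrative assumptions, not the paper's values:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(x, a1, c1, w1, a2, c2, w2):
    """Model a spectral segment as the sum of two Gaussian bands
    with amplitudes a*, centers c*, and widths w*."""
    return (a1 * np.exp(-((x - c1) / w1) ** 2)
            + a2 * np.exp(-((x - c2) / w2) ** 2))

# Synthetic segment: two overlapping bands plus mild noise
x = np.linspace(1500, 1700, 400)          # wavenumber axis (cm^-1)
rng = np.random.default_rng(1)
y = two_gaussians(x, 1.0, 1555, 12, 0.6, 1620, 15)
y += rng.normal(0.0, 0.01, x.size)

p0 = [0.8, 1550, 10, 0.5, 1615, 10]       # rough initial guesses
popt, _ = curve_fit(two_gaussians, x, y, p0=p0)
```

    The fitted band amplitudes (here `popt[0]` and `popt[3]`) play the role of the relative cross-link concentrations tracked across GA exposure levels.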

  2. The effect of the behavior of an average consumer on the public debt dynamics

    NASA Astrophysics Data System (ADS)

    De Luca, Roberto; Di Mauro, Marco; Falzarano, Angelo; Naddeo, Adele

    2017-09-01

    An important issue within the present economic crisis is understanding the dynamics of the public debt of a given country, and how the behavior of average consumers and tax payers in that country affects it. Starting from a model of the average consumer behavior introduced earlier by the authors, we propose a simple model to quantitatively address this issue. The model is then studied and analytically solved under some reasonable simplifying assumptions. In this way we obtain a condition under which the public debt steadily decreases.

  3. Weighted minimum-norm source estimation of magnetoencephalography utilizing the temporal information of the measured data

    NASA Astrophysics Data System (ADS)

    Iwaki, Sunao; Ueno, Shoogo

    1998-06-01

    The weighted minimum-norm estimation (wMNE) is a popular method for obtaining the source distribution in the human brain from magneto- and electroencephalographic measurements when detailed information about the generator profile is not available. We propose a method to reconstruct current distributions in the human brain based on the wMNE technique, with weighting factors defined by a simplified multiple signal classification (MUSIC) prescan. In this method, in addition to the conventional depth normalization technique, the weighting factors of the wMNE are determined by cost values previously calculated by a simplified MUSIC scan that incorporates the temporal information of the measured data. We performed computer simulations of this method and compared it with the conventional wMNE method. The results show that the proposed method is effective for the reconstruction of current distributions from noisy data.
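
    The wMNE estimate itself has a compact closed form under the standard linear model y = Lx + noise. A minimal numpy sketch with a diagonal weight matrix W (which, per the abstract, could carry depth weights scaled by MUSIC-prescan costs) is shown below; the toy dimensions and regularization value are assumptions:

```python
import numpy as np

def weighted_minimum_norm(L, y, w, lam=1e-2):
    """Weighted minimum-norm estimate
        x_hat = W L^T (L W L^T + lam*I)^{-1} y,
    where W = diag(w) holds per-source weights and lam regularizes
    against measurement noise."""
    W = np.diag(w)
    G = L @ W @ L.T + lam * np.eye(L.shape[0])
    return W @ L.T @ np.linalg.solve(G, y)

# Toy example: 5 sensors, 12 candidate sources
rng = np.random.default_rng(0)
L = rng.normal(size=(5, 12))               # lead-field matrix
x_true = np.zeros(12)
x_true[3] = 1.0                            # single active source
y = L @ x_true
x_hat = weighted_minimum_norm(L, y, w=np.ones(12))
```

    With uniform weights this reduces to the plain minimum-norm solution; the proposed method's contribution is in how `w` is chosen.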

  4. Simplified method for preparation of concentrated exoproteins produced by Staphylococcus aureus grown on surface of cellophane bag containing liquid medium.

    PubMed

    Ikigai, H; Seki, K; Nishihara, S; Masuda, S

    1988-01-01

    A simplified method for the preparation of concentrated exoproteins, including protein A and alpha-toxin, produced by Staphylococcus aureus was devised. The concentrated proteins were obtained by cultivating S. aureus organisms on the surface of a cellophane bag containing liquid medium, enclosed in a sterilized glass flask. With the same amount of medium, the total amount of protein obtained by the method presented here was identical to that obtained by conventional liquid culture. The concentration of proteins obtained by the method, however, was high enough to observe distinct stained bands on polyacrylamide gel electrophoresis. This method was considered quite useful not only for large-scale cultivation for the purification of staphylococcal proteins but also for small-scale studies using the proteins. A precise description of the method is presented and its possible usefulness discussed.

  5. Quantitative Method for Simultaneous Analysis of Acetaminophen and 6 Metabolites.

    PubMed

    Lammers, Laureen A; Achterbergh, Roos; Pistorius, Marcel C M; Romijn, Johannes A; Mathôt, Ron A A

    2017-04-01

    Hepatotoxicity after ingestion of high-dose acetaminophen [N-acetyl-para-aminophenol (APAP)] is caused by the metabolites of the drug. To gain more insight into the factors influencing susceptibility to APAP hepatotoxicity, quantification of APAP and its metabolites is important. A few methods have been developed to simultaneously quantify APAP and its most important metabolites; however, these methods require comprehensive sample preparation and long run times. The aim of this study was to develop and validate a simplified but sensitive method for the simultaneous quantification of acetaminophen, its main metabolites acetaminophen glucuronide and acetaminophen sulfate, and 4 cytochrome P450-mediated metabolites using liquid chromatography with mass spectrometric (LC-MS) detection. The method was developed and validated in human plasma, and it entailed a single sample-preparation procedure, enabling quick processing of the samples, followed by an LC-MS method with a chromatographic run time of 9 minutes. The method was validated for selectivity, linearity, accuracy, imprecision, dilution integrity, recovery, process efficiency, ionization efficiency, and carryover effect. The method showed good selectivity without matrix interferences. For all analytes, the mean process efficiency was >86%, and the mean ionization efficiency was >94%. Furthermore, the accuracy was between 90.3% and 112% for all analytes, and the within- and between-run imprecision were <20% at the lower limit of quantification and <14.3% at the middle level and upper limit of quantification. The method presented here enables the simultaneous quantification of APAP and 6 of its metabolites. It is less time consuming than previously reported methods because it requires only a single, simple sample-preparation step followed by an LC-MS method with a short run time. This analytical method is therefore useful for both clinical and research purposes.
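
    The accuracy and imprecision figures quoted above are standard bioanalytical validation statistics. A minimal sketch of how they are typically computed from replicate quality-control measurements at one nominal concentration (the numbers below are illustrative, not the study's data):

```python
import numpy as np

def accuracy_and_cv(measured, nominal):
    """Accuracy (%) and within-run imprecision (CV%) for replicate
    QC measurements at one nominal concentration: accuracy is the
    mean recovery relative to nominal; CV is the sample standard
    deviation relative to the mean."""
    m = np.asarray(measured, dtype=float)
    accuracy = 100.0 * m.mean() / nominal
    cv = 100.0 * m.std(ddof=1) / m.mean()
    return accuracy, cv

# Five illustrative replicates at a nominal 10.0 mg/L QC level
acc, cv = accuracy_and_cv([9.8, 10.4, 10.1, 9.9, 10.3], nominal=10.0)
```

    Between-run imprecision is computed the same way, but pooling QC results across separate analytical runs.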

  6. Identification and characterization of unrecognized viruses in stool samples of non-polio acute flaccid paralysis children by simplified VIDISCA.

    PubMed

    Shaukat, Shahzad; Angez, Mehar; Alam, Muhammad Masroor; Jebbink, Maarten F; Deijs, Martin; Canuti, Marta; Sharif, Salmaan; de Vries, Michel; Khurshid, Adnan; Mahmood, Tariq; van der Hoek, Lia; Zaidi, Syed Sohail Zahoor

    2014-08-12

    The use of sequence-independent methods combined with next generation sequencing for identification purposes in clinical samples appears promising, and exciting results have been achieved in understanding unexplained infections. One sequence-independent method, Virus Discovery based on cDNA Amplified Fragment Length Polymorphism (VIDISCA), is capable of identifying viruses that would remain unidentified in standard diagnostics or cell cultures. VIDISCA is normally combined with next generation sequencing; however, we set up a simplified VIDISCA that can be used when next generation sequencing is not possible. Stool samples of 10 patients with unexplained acute flaccid paralysis showing cytopathic effect in rhabdomyosarcoma cells and/or mouse cells were used to test the efficiency of this method. To further characterize the viruses, VIDISCA-positive samples were amplified and sequenced with gene-specific primers. Simplified VIDISCA detected seven viruses (70%), and the proportion of eukaryotic viral sequences from each sample ranged from 8.3 to 45.8%. Human enterovirus EV-B97, EV-B100, echovirus-9 and echovirus-21, human parechovirus type-3, human astrovirus (probably a type-3/5 recombinant), and tetnovirus-1 were identified. Phylogenetic analysis based on the VP1 region demonstrated that the human enteroviruses are divergent isolates circulating in the community. Our data support that a simplified VIDISCA protocol can efficiently identify unrecognized viruses grown in cell culture at low cost and in limited time, without the need for advanced technical expertise. Complex data interpretation is also avoided, so the method can be used as a powerful diagnostic tool in resource-limited settings. Redesigning the routine diagnostics might lead to additional detection of previously undiagnosed viruses in clinical samples of patients.

  7. A simplified method of performance indicators development for epidemiological surveillance networks--application to the RESAPATH surveillance network.

    PubMed

    Sorbe, A; Chazel, M; Gay, E; Haenni, M; Madec, J-Y; Hendrikx, P

    2011-06-01

    Developing and calculating performance indicators makes it possible to continuously monitor the operation of an epidemiological surveillance network. This is an internal evaluation method, implemented by the coordinators in collaboration with all the actors of the network. Its purpose is to detect weak points in order to optimize management. A method for the development of performance indicators for epidemiological surveillance networks was developed in 2004 and applied to several networks. Its implementation requires a thorough description of the network environment and all its activities in order to define priority indicators. Since this method is considered complex, our objective was to develop a simplified approach and apply it to an epidemiological surveillance network. We applied the initial method to a theoretical network model to obtain a list of generic indicators that can be adapted to any surveillance network. We obtained a list of 25 generic performance indicators, intended to be reformulated and described according to the specificities of each network. It was used to develop performance indicators for RESAPATH, an epidemiological surveillance network for antimicrobial resistance in pathogenic bacteria of animal origin in France. This application allowed us to validate the simplified method, its value in terms of practical implementation, and its level of user acceptance. Its ease of use and speed of application compared with the initial method argue in favor of its use on a broader scale. Copyright © 2011 Elsevier Masson SAS. All rights reserved.

  8. SU-C-201-06: Utility of Quantitative 3D SPECT/CT Imaging in Patient Specific Internal Dosimetry of 153-Samarium with GATE Monte Carlo Package

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fallahpoor, M; Abbasi, M; Sen, A

    Purpose: Patient-specific 3-dimensional (3D) internal dosimetry in targeted radionuclide therapy is essential for efficient treatment. Two major steps are required to achieve reliable results: 1) generating quantitative 3D images of the radionuclide distribution and attenuation coefficients and 2) using a reliable method for dose calculation based on the activity and attenuation maps. In this research, internal dosimetry for 153-Samarium (153-Sm) was performed using SPECT/CT images coupled with the GATE Monte Carlo package. Methods: A 50-year-old woman with bone metastases from breast cancer was prescribed 153-Sm treatment (gamma: 103 keV; beta: 0.81 MeV). A SPECT/CT scan was performed with a Siemens Simbia-T scanner. SPECT and CT images were registered using the default registration software. SPECT quantification was achieved by compensating for all image-degrading factors, including body attenuation, Compton scattering, and collimator-detector response (CDR). The triple energy window method was used to estimate and eliminate the scattered photons. Iterative ordered-subsets expectation maximization (OSEM) with correction for attenuation and distance-dependent CDR was used for image reconstruction. Bilinear energy mapping was used to convert Hounsfield units in the CT image to an attenuation map. Organ borders were defined by itk-SNAP segmentation of the CT image. GATE was then used for internal dose calculation. The specific absorbed fractions (SAFs) and S-values were reported following the MIRD schema. Results: The results showed that the largest SAFs and S-values are in osseous organs, as expected. The S-value for the lung is the highest after the spine, which can be important in 153-Sm therapy. Conclusion: We presented the utility of SPECT/CT images and Monte Carlo simulation for patient-specific dosimetry as a reliable and accurate method. It has several advantages over template-based or simplified dose estimation methods.
With the advent of high-speed computers, Monte Carlo can be used for treatment planning on a day-to-day basis.
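
    The triple energy window (TEW) correction mentioned in the Methods has a simple closed form: the scatter in the main window is estimated by trapezoidal interpolation between two narrow side windows and subtracted from the main-window counts. A minimal sketch (the window widths and counts below are illustrative, not this study's acquisition settings):

```python
import numpy as np

def tew_primary(c_main, c_lower, c_upper, w_main, w_lower, w_upper):
    """Triple-energy-window scatter correction: approximate the
    scatter inside the main window as the trapezoid spanned by the
    count densities (counts per keV) of the two side windows, then
    subtract it from the main-window counts (clipped at zero)."""
    scatter = (c_lower / w_lower + c_upper / w_upper) * w_main / 2.0
    return np.maximum(c_main - scatter, 0.0)

# e.g. a 20 keV main window flanked by two 4 keV side windows
primary = tew_primary(c_main=1000.0, c_lower=80.0, c_upper=20.0,
                      w_main=20.0, w_lower=4.0, w_upper=4.0)
```

    In practice the same formula is applied pixel-by-pixel to the projection images before reconstruction, which is why `np.maximum` is used to keep the subtraction from going negative in low-count pixels.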

  9. Shielding analyses of an AB-BNCT facility using Monte Carlo simulations and simplified methods

    NASA Astrophysics Data System (ADS)

    Lai, Bo-Lun; Sheu, Rong-Jiun

    2017-09-01

    Accurate Monte Carlo simulations and simplified methods were used to investigate the shielding requirements of a hypothetical accelerator-based boron neutron capture therapy (AB-BNCT) facility that included an accelerator room and a patient treatment room. The epithermal neutron beam for BNCT was generated by coupling a neutron production target with a specially designed beam shaping assembly (BSA), which was embedded in the partition wall between the two rooms. Neutrons were produced from a beryllium target bombarded by 1-mA, 30-MeV protons. MCNP6-generated surface sources around all the exterior surfaces of the BSA were established to facilitate repeated Monte Carlo shielding calculations. In addition, three simplified models based on a point-source line-of-sight approximation were developed and their predictions were compared with the reference Monte Carlo results. The comparison determined which model gives better dose estimates, forming the basis of future design activities for the first AB-BNCT facility in Taiwan.
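
    A point-source line-of-sight model of the kind compared here reduces to inverse-square spreading times exponential attenuation through the shields along the sight line. A minimal sketch follows; the source strength, distance, and attenuation values are illustrative, and a realistic model would also fold in buildup factors and flux-to-dose conversion coefficients:

```python
import math

def los_flux(S, r, mu_t_pairs, buildup=1.0):
    """Point-source line-of-sight estimate of the flux at a dose
    point: isotropic 1/(4*pi*r^2) spreading, attenuated through a
    series of slab shields, times an optional buildup factor.
    S           -- source strength (particles/s)
    r           -- source-to-dose-point distance (cm)
    mu_t_pairs  -- list of (attenuation coefficient in 1/cm,
                   slab thickness in cm) along the sight line
    """
    atten = math.exp(-sum(mu * t for mu, t in mu_t_pairs))
    return buildup * S * atten / (4.0 * math.pi * r ** 2)

# e.g. a 1e12 /s source, 300 cm away, behind 100 cm of shielding
# with an assumed effective removal coefficient of 0.08 1/cm
phi = los_flux(S=1e12, r=300.0, mu_t_pairs=[(0.08, 100.0)])
```

    Comparing such closed-form estimates against MCNP6 surface-source calculations is exactly the benchmarking exercise the abstract describes.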

  10. A simplified digital lock-in amplifier for the scanning grating spectrometer.

    PubMed

    Wang, Jingru; Wang, Zhihong; Ji, Xufei; Liu, Jie; Liu, Guangda

    2017-02-01

    For the common measurement and control system of a scanning grating spectrometer, the use of an analog lock-in amplifier requires complex circuitry and sophisticated debugging, whereas the use of a digital lock-in amplifier places high demands on calculation capability and storage space. In this paper, a simplified digital lock-in amplifier based on averaging the absolute values of the samples within a complete period is presented and applied to a scanning grating spectrometer. The simplified digital lock-in amplifier was implemented on a low-cost microcontroller without multipliers, and it eliminates the need for a reference signal and for a specific configuration of the sampling frequency. Two positive zero-crossing detections were used to lock the phase of the measured signal. Measurement errors were, however, introduced by the following factors: frequency fluctuation, the sampling interval, and the integer restriction on the number of samples. The theoretical and experimental signal-to-noise ratios of the proposed measurement method were 2055 and 2403, respectively.
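
    The core of the simplified scheme is the rectified-average identity: for a sinusoid sampled over an integer number of periods, mean(|x|) = 2A/π, so the amplitude follows from a running sum of absolute values with no multiplication by a reference signal. A minimal sketch (the signal parameters are illustrative):

```python
import numpy as np

def rectified_average_amplitude(samples):
    """Estimate the amplitude of a sinusoid from samples spanning an
    integer number of periods: for x(t) = A*sin(w*t), the mean of |x|
    over whole periods is 2A/pi, hence A = (pi/2) * mean(|x|)."""
    return 0.5 * np.pi * np.mean(np.abs(samples))

# 10 complete periods, as delimited by positive zero-crossing detection
t = np.linspace(0.0, 10.0, 5000, endpoint=False)
x = 2.0 * np.sin(2.0 * np.pi * t)
amp = rectified_average_amplitude(x)
```

    This is why the paper's zero-crossing detections matter: truncating the sum at anything other than a whole number of periods biases the mean, which is the "integer restriction" error source listed above.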

  11. Simplified methods for computing total sediment discharge with the modified Einstein procedure

    USGS Publications Warehouse

    Colby, Bruce R.; Hubbell, David Wellington

    1961-01-01

    A procedure was presented in 1950 by H. A. Einstein for computing the total discharge of sediment particles of sizes that are present in appreciable quantities in the stream bed. This procedure was modified by the U.S. Geological Survey and adapted to computing the total sediment discharge of a stream on the basis of samples of bed sediment, depth-integrated samples of suspended sediment, streamflow measurements, and water temperature. This paper gives simplified methods for computing total sediment discharge by the modified Einstein procedure. Each of four nomographs appreciably simplifies a major step in the computations. Within the stated limitations, use of the nomographs introduces much less error than is present in either the basic data or the theories on which the computations of total sediment discharge are based. The results are nearly as accurate mathematically as those that could be obtained from the longer and more complex arithmetic and algebraic computations of the Einstein procedure.

  12. Analysis of simplified heat transfer models for thermal property determination of nano-film by TDTR method

    NASA Astrophysics Data System (ADS)

    Wang, Xinwei; Chen, Zhe; Sun, Fangyuan; Zhang, Hang; Jiang, Yuyan; Tang, Dawei

    2018-03-01

    Heat transfer in nanostructures is of critical importance for a wide range of applications, such as functional materials and the thermal management of electronics. Time-domain thermoreflectance (TDTR) has proven to be a reliable technique for measuring the thermal properties of nanoscale structures. However, it is difficult to determine more than three thermal properties at the same time. Simplifying the heat transfer model reduces the number of fitting variables and provides an alternative route to thermal property determination. In this paper, two simplified models are investigated and analyzed by the transfer matrix method and simulations. TDTR measurements are performed on Al-SiO2-Si samples with different SiO2 thicknesses. Both theoretical and experimental results show that the simplified tri-layer model (STM) is reliable and suitable for thin-film samples over a wide range of thicknesses. Furthermore, the STM can also extract the intrinsic thermal conductivity and interfacial thermal resistance from serial samples of different thicknesses.
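
    In the lumped, steady-state picture behind a simplified tri-layer description, the stack's area-normalized thermal resistance is just a series sum of layer resistances d/k and interfacial resistances. A minimal sketch (the film and interface values below are illustrative, not the paper's measurements, and a real TDTR analysis fits the full frequency-domain response rather than this static sum):

```python
def series_thermal_resistance(layers, interfaces):
    """Area-normalized steady-state thermal resistance (m^2*K/W) of a
    layered stack: the sum of each layer's thickness/conductivity
    plus the interfacial thermal resistances between layers."""
    return sum(d / k for d, k in layers) + sum(interfaces)

# Illustrative numbers: a 100 nm SiO2 film (k ~ 1.4 W/m/K) bounded
# by two interfaces of ~1e-8 m^2*K/W each
r = series_thermal_resistance(layers=[(100e-9, 1.4)],
                              interfaces=[1e-8, 1e-8])
```

    Measuring `r` for serial samples of different film thickness and fitting a straight line against thickness separates the slope (1/k of the film) from the intercept (the summed interface resistances), which is the extraction the STM enables.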

  13. Effective Alternating Direction Optimization Methods for Sparsity-Constrained Blind Image Deblurring.

    PubMed

    Xiong, Naixue; Liu, Ryan Wen; Liang, Maohan; Wu, Di; Liu, Zhao; Wu, Huisi

    2017-01-18

    Single-image blind deblurring for imaging sensors in the Internet of Things (IoT) is a challenging ill-conditioned inverse problem that requires regularization techniques to stabilize the image restoration process. The purpose is to recover the underlying blur kernel and the latent sharp image from only one blurred image. Under many degraded imaging conditions, the blur kernel can be considered not only spatially sparse but also piecewise smooth, with the support of a continuous curve. By taking advantage of these hybrid sparse properties of the blur kernel, a hybrid regularization method is proposed in this paper to robustly and accurately estimate the blur kernel. The effectiveness of the proposed blur kernel estimation method is enhanced by incorporating both the L1-norm of the kernel intensity and the squared L2-norm of the intensity derivative. Once an accurate estimate of the blur kernel is obtained, the original blind deblurring simplifies to direct deconvolution of the blurred image. To guarantee robust non-blind deconvolution, a variational image restoration model is presented based on an L1-norm data-fidelity term and a second-order total generalized variation (TGV) regularizer. All non-smooth optimization problems related to blur kernel estimation and non-blind deconvolution are effectively handled using numerical methods based on the alternating direction method of multipliers (ADMM). Comprehensive experiments on both synthetic and realistic datasets compare the proposed method with several state-of-the-art methods. The experimental comparisons illustrate the satisfactory imaging performance of the proposed method in terms of quantitative and qualitative evaluations.
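
    The L1 term in the kernel-update subproblem is what ADMM handles through its proximal operator, elementwise soft-thresholding. A minimal sketch of that building block (a generic ADMM ingredient, not the paper's full solver):

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau*||.||_1: shrink each entry of v
    toward zero by tau, setting small entries exactly to zero.
    This is the closed-form update ADMM applies to the variable
    split off from the L1 term."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

v = np.array([3.0, -0.5, 1.2])
shrunk = soft_threshold(v, 1.0)   # entries: 2.0, 0.0, 0.2
```

    Within the splitting scheme, each ADMM iteration alternates this shrinkage step with a quadratic (often FFT-solvable) subproblem and a dual update; the zeroing of small entries is what promotes the sparse kernel support described above.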

  14. A new simplified method for measuring the permeability characteristics of highly porous media

    NASA Astrophysics Data System (ADS)

    Qin, Yinghong; Zhang, Mingyi; Mei, Guoxiong

    2018-07-01

    Fluid flow through highly porous media is important in a variety of science and technology fields, including hydrology, chemical engineering, and convection in porous media. While many methods are available to measure the permeability of tight solid materials, such as concrete and rock, techniques for measuring the permeability of highly porous media (such as gravel, aggregated soils, and crushed rock) are limited. This study proposes a new simplified method for measuring the permeability of highly porous media with a permeability of 10⁻⁸-10⁻⁴ m², using a Venturi tube to gauge the gas flow rate through the sample. Using crushed rocks and glass beads as the test media, we measure the permeability and inertial resistance factor of six types of single-size aggregate columns. We compare the test results with published values of the permeability and inertial resistance factor of crushed rock and glass beads. We found that, in a log-log graph, the permeability and inertial resistance factor of a single-size aggregate heap increase linearly with the mean diameter of the aggregate. We conclude that the proposed simplified method is suitable for efficiently testing the permeability and inertial resistance factor of a variety of porous media with an intrinsic permeability of 10⁻⁸-10⁻⁴ m².
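
    The Venturi gauging step can be sketched with the standard Venturi equation: the volumetric flow rate follows from the measured pressure drop between the inlet and the throat. The discharge coefficient and example dimensions below are illustrative assumptions; the permeability would then follow separately by applying Darcy's law (or the Darcy-Forchheimer equation, for the inertial term) to the sample:

```python
import math

def venturi_flow_rate(dp, d1, d2, rho, cd=0.98):
    """Volumetric flow rate (m^3/s) through a Venturi tube from the
    measured pressure drop dp (Pa) between the inlet (diameter d1, m)
    and the throat (diameter d2, m), for a fluid of density rho
    (kg/m^3), with discharge coefficient cd."""
    a2 = math.pi * d2 ** 2 / 4.0          # throat area
    beta = d2 / d1                        # diameter ratio
    return cd * a2 * math.sqrt(2.0 * dp / (rho * (1.0 - beta ** 4)))

# Air (rho ~ 1.2 kg/m^3) through a 20 mm / 10 mm Venturi, 200 Pa drop
q = venturi_flow_rate(dp=200.0, d1=0.020, d2=0.010, rho=1.2)
```

    Dividing `q` by the sample cross-section gives the superficial velocity, from which the permeability and inertial resistance factor are fitted against the pressure gradient across the aggregate column.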

  15. Economic impact of simplified de Gramont regimen in first-line therapy in metastatic colorectal cancer.

    PubMed

    Limat, Samuel; Bracco-Nolin, Claire-Hélène; Legat-Fagnoni, Christine; Chaigneau, Loic; Stein, Ulrich; Huchet, Bernard; Pivot, Xavier; Woronoff-Lemsi, Marie-Christine

    2006-06-01

    The cost of chemotherapy has dramatically increased in advanced colorectal cancer patients, and the schedule of fluorouracil administration appears to be a determining factor. This retrospective study compared the direct medical costs of two different de Gramont schedules (standard vs. simplified) given in first-line chemotherapy with oxaliplatin or irinotecan. The cost-minimization analysis was performed from the French Health System perspective. Consecutive unselected patients treated in first-line therapy with the LV5FU2 de Gramont regimen plus oxaliplatin (Folfox) or irinotecan (Folfiri) were enrolled. Hospital and outpatient resources related to chemotherapy and adverse events were collected from 1999 to 2004 in 87 patients. Overall cost was reduced with the simplified regimen. The major factor explaining the cost saving was the lower need for admissions for chemotherapy. The amount of cost saving depended on the method used to assess hospital stay. In patients treated with the Folfox regimen, the per diem and DRG methods found cost savings of Euro 1,997 and Euro 5,982, respectively, for the studied schedules; in patients treated with the Folfiri regimen, cost savings of Euro 4,773 and Euro 7,274 were observed, respectively. In addition, travel costs were also reduced by the simplified regimens. The robustness of our results was shown by one-way sensitivity analyses. These findings demonstrate that the simplified de Gramont schedule reduces the costs of current first-line chemotherapy in advanced colorectal cancer. Interestingly, our study showed several differences in cost between the two costing approaches to hospital stay: average per diem and DRG costs. These results suggest that the standard regimen may be considered a profitable strategy from the hospital perspective. The opposition between the health system perspective and the hospital perspective is worth examining and may affect daily practice.
In conclusion, our study shows that the simplified de Gramont schedule in combination with oxaliplatin or irinotecan is an attractive option from the French Health System perspective. This safe and less costly regimen must be compared with alternative options such as oral fluoropyrimidines.

  16. Improving biobank consent comprehension: a national randomized survey to assess the effect of a simplified form and review/retest intervention

    PubMed Central

    Beskow, Laura M.; Lin, Li; Dombeck, Carrie B.; Gao, Emily; Weinfurt, Kevin P.

    2017-01-01

    Purpose: To determine the individual and combined effects of a simplified form and a review/retest intervention on biobanking consent comprehension. Methods: We conducted a national online survey in which participants were randomized within four educational strata to review a simplified or traditional consent form. Participants then completed a comprehension quiz; for each item answered incorrectly, they reviewed the corresponding consent form section and answered another quiz item on that topic. Results: Consistent with our first hypothesis, comprehension among those who received the simplified form was not inferior to that among those who received the traditional form. Contrary to expectations, receipt of the simplified form did not result in significantly better comprehension compared with the traditional form among those in the lowest educational group. The review/retest procedure significantly improved quiz scores in every combination of consent form and education level. Although improved, comprehension remained a challenge in the lowest-education group. Higher quiz scores were significantly associated with willingness to participate. Conclusion: Ensuring consent comprehension remains a challenge, but simplified forms have virtues independent of their impact on understanding. A review/retest intervention may have a significant effect, but assessing comprehension raises complex questions about setting thresholds for understanding and consequences of not meeting them. Genet Med advance online publication 13 October 2016 PMID:27735922

  17. Self-powered integrated microfluidic point-of-care low-cost enabling (SIMPLE) chip

    PubMed Central

    Yeh, Erh-Chia; Fu, Chi-Cheng; Hu, Lucy; Thakur, Rohan; Feng, Jeffrey; Lee, Luke P.

    2017-01-01

    Portable, low-cost, and quantitative nucleic acid detection is desirable for point-of-care diagnostics; however, current polymerase chain reaction testing often requires time-consuming multiple steps and costly equipment. We report an integrated microfluidic diagnostic device capable of on-site quantitative nucleic acid detection directly from the blood without separate sample preparation steps. First, we prepatterned the amplification initiator [magnesium acetate (MgOAc)] on the chip to enable digital nucleic acid amplification. Second, a simplified sample preparation step is demonstrated, where the plasma is separated autonomously into 224 microwells (100 nl per well) without any hemolysis. Furthermore, self-powered microfluidic pumping without any external pumps, controllers, or power sources is accomplished by an integrated vacuum battery on the chip. This simple chip allows rapid quantitative digital nucleic acid detection directly from human blood samples (10 to 10⁵ copies of methicillin-resistant Staphylococcus aureus DNA per microliter, ~30 min, via isothermal recombinase polymerase amplification). These autonomous, portable, lab-on-chip technologies provide promising foundations for future low-cost molecular diagnostic assays. PMID:28345028
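
    Digital amplification readouts like the 224-well result above are typically converted to a concentration with a Poisson correction: if a fraction p of wells is positive, the most likely mean occupancy per well is λ = -ln(1 - p). A minimal sketch (the example counts are illustrative, not the paper's data):

```python
import math

def digital_concentration(positive_wells, total_wells, well_volume_ul):
    """Most-likely template concentration (copies/uL) from a digital
    amplification readout, using the Poisson correction
    lambda = -ln(1 - p) for the mean number of copies per well."""
    p = positive_wells / total_wells
    lam = -math.log(1.0 - p)           # mean copies per well
    return lam / well_volume_ul

# e.g. 112 of 224 wells positive, 100 nl (0.1 uL) per well
conc = digital_concentration(112, 224, well_volume_ul=0.1)
```

    The correction matters because at higher concentrations many positive wells contain more than one template copy, so the raw positive fraction alone would undercount.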

  18. A quantitative model to assess Social Responsibility in Environmental Science and Technology.

    PubMed

    Valcárcel, M; Lucena, R

    2014-01-01

    The awareness of the impact of human activities on society and the environment is known as "Social Responsibility" (SR). It has been a topic of growing interest in many enterprises since the 1950s, and its implementation/assessment is nowadays supported by international standards. There is a tendency to extend its scope of application to other areas of human activity, such as Research, Development and Innovation (R + D + I). In this paper, a model for the quantitative assessment of Social Responsibility in Environmental Science and Technology (SR EST) is described in detail. This model is based on well-established written standards such as the EFQM Excellence model and the ISO 26000:2010 Guidance on SR. The definition of five hierarchies of indicators, the transformation of qualitative information into quantitative data, and the dual procedure of self-evaluation and external evaluation are the milestones of the proposed model, which can be applied to environmental research centres and institutions. In addition, a simplified model that facilitates its implementation is presented in the article. © 2013 Elsevier B.V. All rights reserved.

  19. Characterizing relationship between optical microangiography signals and capillary flow using microfluidic channels.

    PubMed

    Choi, Woo June; Qin, Wan; Chen, Chieh-Li; Wang, Jingang; Zhang, Qinqin; Yang, Xiaoqi; Gao, Bruce Z; Wang, Ruikang K

    2016-07-01

    Optical microangiography (OMAG) is a powerful optical angiographic tool for visualizing microvascular flow in vivo. Despite numerous demonstrations over the past several years of the qualitative relationship between OMAG and flow, no convincing quantitative relationship has been proven. In this paper, we attempt to quantitatively correlate the OMAG signal with flow. Specifically, we develop a simplified analytical model of the complex OMAG, suggesting that the OMAG signal is a product of the number of particles in an imaging voxel and the decorrelation of the OCT (optical coherence tomography) signal, determined by flow velocity, inter-frame time interval, and wavelength of the light source. Numerical simulation with the proposed model reveals that if the OCT amplitudes are correlated, the OMAG signal is related to the total number of particles crossing the imaging voxel cross-section per unit time (flux); otherwise it saturates, with a strength proportional to the number of particles in the imaging voxel (concentration). The relationship is validated using microfluidic flow phantoms with various preset flow metrics. This work suggests that OMAG is a promising quantitative tool for the assessment of vascular flow.
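
    The product form described above (particle count times decorrelation) can be illustrated with a toy decorrelation curve. The Gaussian form, frame interval, and beam waist below are assumptions for illustration only, not the paper's analytical model.

```python
import math

def omag_signal(n_particles, velocity, dt=2e-3, beam_waist=7e-6):
    """Toy OMAG signal: particle count times an assumed OCT decorrelation.

    A Gaussian decorrelation curve is assumed for illustration; dt is the
    inter-frame interval (s), beam_waist a nominal lateral resolution (m).
    """
    d = 1.0 - math.exp(-((velocity * dt / beam_waist) ** 2))
    return n_particles * d

# Slow flow: decorrelation grows with velocity, so the signal tracks flow.
# Fast flow: decorrelation saturates at 1, so the signal tracks the particle
# count (concentration), mirroring the regimes described in the abstract.
```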

  20. A Combined Gravity Compensation Method for INS Using the Simplified Gravity Model and Gravity Database.

    PubMed

    Zhou, Xiao; Yang, Gongliu; Wang, Jing; Wen, Zeyang

    2018-05-14

    In recent decades, gravity compensation has become an important way to reduce the position error of an inertial navigation system (INS), especially a high-precision INS, because of the extensive application of high-precision inertial sensors (accelerometers and gyros). This paper first derives the INS's solution error considering gravity disturbance and simulates the results. It then proposes a combined gravity compensation method using a simplified gravity model and a gravity database. The new combined method consists of two steps. Step 1 subtracts the normal gravity using a simplified gravity model. Step 2 first obtains the gravity disturbance on the trajectory of the carrier with the help of ELM training based on the measured gravity data (provided by the Institute of Geodesy and Geophysics, Chinese Academy of Sciences), and then compensates it into the error equations of the INS, considering the gravity disturbance, to further improve the navigation accuracy. The effectiveness and feasibility of this new gravity compensation method for the INS are verified through vehicle tests in two different regions: one in flat terrain with mild gravity variation and the other in complex terrain with fierce gravity variation. During the 2 h vehicle tests, positioning accuracy improved by 20% and 38%, respectively, after the gravity was compensated by the proposed method.
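
    Step 1's "simplified gravity model" is conventionally a closed-form normal-gravity formula; a common choice is the WGS84 Somigliana expression, sketched below. Whether the authors use exactly this model is an assumption.

```python
import math

def normal_gravity(lat_deg):
    """WGS84 Somigliana closed-form normal gravity (m/s^2) on the ellipsoid.

    A standard "simplified gravity model" for step 1; the paper's exact model
    may differ. Step 2 (not shown) would add the database-trained
    gravity-disturbance correction along the trajectory.
    """
    ge = 9.7803253359        # equatorial normal gravity (m/s^2)
    k = 0.00193185265241     # Somigliana's constant
    e2 = 0.00669437999013    # first eccentricity squared
    s2 = math.sin(math.radians(lat_deg)) ** 2
    return ge * (1.0 + k * s2) / math.sqrt(1.0 - e2 * s2)
```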

  2. Interlaboratory Validation of a Stable Isotope Dilution and Liquid Chromatography Tandem Mass Spectrometry Method for the Determination of Aflatoxins in Milk, Milk-Based Infant Formula, and Feed.

    PubMed

    Zhang, Kai; Liao, Chia-Ding; Prakash, Shristi; Conway, Michael; Cheng, Hwei-Fang

    2018-05-01

    An interlaboratory study was conducted to evaluate stable isotope dilution and LC tandem MS (MS/MS) for the determination of aflatoxins B1, B2, G1, G2, and M1 (AFB1, AFB2, AFG1, AFG2, and AFM1) in milk, milk-based infant formula (formula), and feed. Samples were first fortified with five 13C uniformly labeled aflatoxins {[13C]-internal standard (IS)} corresponding to the five native aflatoxins, which were subsequently extracted with acetonitrile-water (50 + 50, v/v), followed by centrifugation, filtration, and LC-MS/MS analysis. In addition to certified milk powder and animal feed, the three participating laboratories also analyzed milk, formula, and feed fortified with the five aflatoxins at concentrations ranging from 0.5 to 50 ng/g. The majority of recoveries ranged from 80 to 120%, with RSDs < 20%. Method LOQs were determined by the three laboratories using the three sample matrixes in replicates (n = 8), and the determined LOQs of AFB1, AFB2, AFG1, AFG2, and AFM1 ranged from 0.1 to 0.91, 0.24 to 0.64, 0.28 to 1.52, 0.19 to 3.80, and 0.12 to 0.45 ng/g, respectively. For detected aflatoxins in the certified materials, all measured concentrations were within ±25% of the certified values. Using [13C]-IS eliminated the need for matrix-matched calibration standards for quantitation, simplified sample preparation, and achieved simultaneous identification and quantitation of the aflatoxins in a simple LC-MS/MS procedure.
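
    The quantitation step with a 13C-labeled internal standard reduces to an area ratio times the spiked IS concentration, since the analyte and its co-eluting labeled analog experience the same matrix effects. The peak areas and unit response factor below are hypothetical.

```python
def aflatoxin_conc(area_native, area_labeled, spike_ng_per_g, response_factor=1.0):
    """Single-point stable-isotope-dilution quantitation (illustrative).

    With a co-eluting 13C-labeled internal standard, matrix effects cancel in
    the area ratio, so concentration ~ ratio * spiked IS level. A response
    factor of 1 is an assumption; in practice it is calibrated.
    """
    return response_factor * (area_native / area_labeled) * spike_ng_per_g

# hypothetical areas: native peak at half the labeled peak of a 5 ng/g spike
c = aflatoxin_conc(area_native=15400, area_labeled=30800, spike_ng_per_g=5.0)
```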

  3. A simplified quantitative method for assessing keratoconjunctivitis sicca from the Sjögren's Syndrome International Registry.

    PubMed

    Whitcher, John P; Shiboski, Caroline H; Shiboski, Stephen C; Heidenreich, Ana Maria; Kitagawa, Kazuko; Zhang, Shunhua; Hamann, Steffen; Larkin, Genevieve; McNamara, Nancy A; Greenspan, John S; Daniels, Troy E

    2010-03-01

    To describe, apply, and test a new ocular grading system for assessing keratoconjunctivitis sicca (KCS) using lissamine green and fluorescein. Prospective, observational, multicenter cohort study. The National Institutes of Health-funded Sjögren's Syndrome International Registry (called Sjögren's International Collaborative Clinical Alliance [SICCA]) is developing standardized classification criteria for Sjögren syndrome (SS) and is creating a biospecimen bank for future research. Eight SICCA ophthalmologists developed a new quantitative ocular grading system (SICCA ocular staining score [OSS]), and we analyzed OSS distribution among the SICCA cohort and its association with other phenotypic characteristics of SS. The SICCA cohort includes participants ranging from possibly early SS to advanced disease. Procedures include a sequenced unanesthetized Schirmer test, tear break-up time, ocular surface staining, and external eye examination at the slit lamp. Using statistical analyses and proportional Venn diagrams, we examined interrelationships between abnormal OSS (≥3) and other characteristics of SS (labial salivary gland [LSG] biopsy with focal lymphocytic sialadenitis and focus score >1; positive anti-SS-A antibodies, anti-SS-B antibodies, or both). Among 1208 participants, we found strong associations between abnormal OSS, positive serologic results, and positive LSG focus scores (P < .0001). Analysis of the overlapping relationships of these 3 measures defined a large group of participants who had KCS without other components of SS, representing a clinical entity distinct from the KCS associated with SS. This new method for assessing KCS will become the means for diagnosing the ocular component of SS in future classification criteria. We find 2 forms of KCS whose causes may differ. © 2010 Elsevier Inc. All rights reserved.

  4. Characterizing structural transitions using localized free energy landscape analysis.

    PubMed

    Banavali, Nilesh K; Mackerell, Alexander D

    2009-01-01

    Structural changes in molecules are frequently observed during biological processes like replication, transcription, and translation. These structural changes can usually be traced to specific distortions in the backbones of the macromolecules involved. Quantitative energetic characterization of such distortions can greatly advance the atomic-level understanding of the dynamic character of these biological processes. Molecular dynamics simulations combined with a variation of the Weighted Histogram Analysis Method for potential of mean force determination are applied to characterize localized structural changes for the test case of cytosine (underlined) base flipping in a GTCAGCGCATGG DNA duplex. Free energy landscapes for backbone torsion and sugar pucker degrees of freedom in the DNA are used to understand their behavior in response to the base flipping perturbation. By simplifying the base flipping structural change into a two-state model, a free energy difference of up to 14 kcal/mol can be attributed to the flipped state relative to the stacked Watson-Crick base paired state. This two-state classification allows precise evaluation of the effect of base flipping on local backbone degrees of freedom. The calculated free energy landscapes of individual backbone and sugar degrees of freedom expectedly show the greatest change in the vicinity of the flipping base itself, but specific delocalized effects can be discerned up to four nucleotide positions away in both 5' and 3' directions. Free energy landscape analysis thus provides a quantitative method to pinpoint the determinants of structural change on the atomic scale and also delineate the extent of propagation of the perturbation along the molecule. In addition to nucleic acids, this methodology is anticipated to be useful for studying conformational changes in all macromolecules, including carbohydrates, lipids, and proteins.
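
    The two-state free energy difference quoted above follows from relative state populations via dG = -kT ln(p2/p1). The populations in this sketch are illustrative, chosen only to show the order of magnitude of a ~14 kcal/mol penalty.

```python
import math

KB_KCAL = 0.0019872041  # Boltzmann constant, kcal/(mol K)

def delta_g(p_flipped, p_stacked, temp_k=300.0):
    """Two-state free energy difference (kcal/mol) from state populations.

    dG = -kT ln(p_flipped / p_stacked). The populations here are
    illustrative, not taken from the paper's potential of mean force.
    """
    return -KB_KCAL * temp_k * math.log(p_flipped / p_stacked)

# a flipped population of 1e-10 at 300 K corresponds to ~13.7 kcal/mol
dg = delta_g(p_flipped=1e-10, p_stacked=1.0)
```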

  5. Predicting A Drug'S Membrane Permeability: Evolution of a Computational Model Validated with in Vitro Permeability Assay Data

    DOE PAGES

    Carpenter, Timothy S.; McNerney, M. Windy; Be, Nicholas A.; ...

    2016-02-16

    Membrane permeability is a key property to consider in drug design, especially when the drugs in question need to cross the blood-brain barrier (BBB). A comprehensive in vivo assessment of the BBB permeability of a drug takes considerable time and financial resources. A current, simplified in vitro model to investigate drug permeability is the Parallel Artificial Membrane Permeability Assay (PAMPA), which generally provides higher throughput and initial quantification of a drug's passive permeability. Computational methods can also be used to predict drug permeability. Such methods are highly advantageous as they do not require the synthesis of the desired drug, and can be implemented rapidly using high-performance computing. In this study, we have used umbrella sampling Molecular Dynamics (MD) methods to assess the passive permeability of a range of compounds through a lipid bilayer. Furthermore, the permeability of these compounds was comprehensively quantified using the PAMPA assay to calibrate and validate the MD methodology. After demonstrating a firm correlation between the two approaches, we implemented our MD method to quantitatively predict the most permeable potential drug from a series of potential scaffolds. This permeability was then confirmed by the in vitro PAMPA methodology. In this work we have therefore illustrated the potential that these computational methods hold as useful tools to help predict a drug's permeability in a faster and more cost-effective manner. Release number: LLNL-ABS-677757.
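
    Umbrella-sampling PMFs are commonly converted to a permeability with the inhomogeneous solubility-diffusion relation, 1/P = ∫ exp(F(z)/kT)/D(z) dz; whether the authors used exactly this estimator is an assumption. A minimal trapezoid-rule sketch with hypothetical profiles:

```python
import math

def permeability(z_nm, pmf_kcal, diff_cm2_s, temp_k=300.0):
    """Inhomogeneous solubility-diffusion permeability (cm/s) from a PMF.

    1/P = integral of exp(F(z)/kT) / D(z) dz across the bilayer, evaluated
    with the trapezoid rule. The profiles are hypothetical inputs.
    """
    kt = 0.0019872041 * temp_k  # kcal/mol
    resist = [math.exp(f / kt) / d for f, d in zip(pmf_kcal, diff_cm2_s)]
    r_total = sum((resist[i] + resist[i + 1]) / 2.0
                  * (z_nm[i + 1] - z_nm[i]) * 1e-7   # nm -> cm
                  for i in range(len(z_nm) - 1))
    return 1.0 / r_total
```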

  7. Simplified methods for calculating photodissociation rates

    NASA Technical Reports Server (NTRS)

    Shimazaki, T.; Ogawa, T.; Farrell, B. C.

    1977-01-01

    Simplified methods for calculating the transmission of solar UV radiation and the dissociation coefficients of various molecules are compared. A significant difference sometimes appears in calculations for an individual band, but the total transmission and the total dissociation coefficients integrated over the entire SR (Schumann-Runge) band region agree well between the methods. The ambiguities in the solar flux data affect the calculated dissociation coefficients more strongly than does the choice of method. A simpler method is developed for the purpose of reducing the computation time and the computer memory needed to store the coefficients of the equations. The new method can reduce the computation time by a factor of more than 3 and the memory size by a factor of more than 50 compared with the Hudson-Mahle method, and yet the result agrees within 10 percent (in most cases much less) with the original Hudson-Mahle results, except for H2O and CO2. A revised method is necessary for these two molecules, whose absorption cross sections change very rapidly over the SR band spectral range.
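
    The dissociation coefficient being integrated is, in discrete band form, J = Σ_i σ_i φ_i F_i Δλ_i; the band-by-band differences noted above enter through the per-band terms while the integrated sum tends to agree. A minimal sketch with hypothetical band values:

```python
def photo_rate(bin_width_nm, flux, sigma, phi):
    """Photodissociation coefficient J (1/s) as a discrete sum over bands.

    J = sum_i sigma_i * phi_i * F_i * dlambda_i, where sigma is the
    absorption cross section, phi the quantum yield, and F the actinic flux
    per unit wavelength. Band values here are hypothetical, not the
    Hudson-Mahle coefficients.
    """
    return sum(w * f * s * p
               for w, f, s, p in zip(bin_width_nm, flux, sigma, phi))

# two hypothetical 1 nm bands
j = photo_rate([1.0, 1.0], [1e13, 2e13], [1e-19, 1e-19], [1.0, 1.0])
```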

  8. Natural-Annotation-based Unsupervised Construction of Korean-Chinese Domain Dictionary

    NASA Astrophysics Data System (ADS)

    Liu, Wuying; Wang, Lin

    2018-03-01

    Large-scale bilingual parallel resources are significant for statistical learning and deep learning in natural language processing. This paper addresses the automatic construction of a Korean-Chinese domain dictionary and presents a novel unsupervised construction method based on natural annotation in the raw corpus. We first extract all Korean-Chinese word pairs from Korean texts according to natural annotations, then transform the traditional Chinese characters into simplified ones, and finally distill out a bilingual domain dictionary after retrieving the simplified Chinese words in an extra Chinese domain dictionary. The experimental results show that our method can automatically build multiple Korean-Chinese domain dictionaries efficiently.
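
    The three steps can be sketched end to end; the character mapping and the two dictionaries below are tiny hypothetical stand-ins for the real resources, and the natural-annotation extraction itself is elided.

```python
# Hypothetical stand-ins for the real conversion table and domain dictionary.
TRAD_TO_SIMP = {"語": "语", "詞": "词"}   # traditional -> simplified characters
CHINESE_DOMAIN_DICT = {"术语", "词典"}    # the "extra Chinese domain dictionary"

def to_simplified(word):
    """Step 2: map traditional characters to simplified ones."""
    return "".join(TRAD_TO_SIMP.get(ch, ch) for ch in word)

def build_domain_dict(korean_chinese_pairs):
    """Step 3: keep pairs whose simplified Chinese side is in the domain dict.

    korean_chinese_pairs plays the role of step 1's naturally annotated
    extraction output.
    """
    out = {}
    for ko, zh in korean_chinese_pairs:
        simp = to_simplified(zh)
        if simp in CHINESE_DOMAIN_DICT:
            out[ko] = simp
    return out

pairs = [("용어", "术语"), ("사전", "詞典")]
d = build_domain_dict(pairs)
```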

  9. On the joint inversion of geophysical data for models of the coupled core-mantle system

    NASA Technical Reports Server (NTRS)

    Voorhies, Coerte V.

    1991-01-01

    Joint inversion of magnetic, earth rotation, geoid, and seismic data for a unified model of the coupled core-mantle system is proposed and shown to be possible. A sample objective function is offered and simplified by targeting results from independent inversions and summary travel time residuals instead of original observations. These data are parameterized in terms of a very simple, closed model of the topographically coupled core-mantle system. Minimization of the simplified objective function leads to a nonlinear inverse problem; an iterative method for solution is presented. Parameterization and method are emphasized; numerical results are not presented.

  10. Numerical simulation of fluid flow through simplified blade cascade with prescribed harmonic motion using discontinuous Galerkin method

    NASA Astrophysics Data System (ADS)

    Vimmr, Jan; Bublík, Ondřej; Prausová, Helena; Hála, Jindřich; Pešek, Luděk

    2018-06-01

    This paper deals with a numerical simulation of compressible viscous fluid flow around three flat plates with prescribed harmonic motion. This arrangement represents a simplified blade cascade with forward wave motion. The aim of the simulation is to determine the aerodynamic forces acting on the flat plates. The mathematical model describing this problem is formed by the Favre-averaged system of Navier-Stokes equations in arbitrary Lagrangian-Eulerian (ALE) formulation, completed by the one-equation Spalart-Allmaras turbulence model. The simulation was performed using in-house CFD software based on the discontinuous Galerkin method, which offers a high order of accuracy.

  11. Simplified DFT methods for consistent structures and energies of large systems

    NASA Astrophysics Data System (ADS)

    Caldeweyher, Eike; Gerit Brandenburg, Jan

    2018-05-01

    Kohn–Sham density functional theory (DFT) is routinely used for the fast electronic structure computation of large systems and will most likely continue to be the method of choice for the generation of reliable geometries in the foreseeable future. Here, we present a hierarchy of simplified DFT methods designed for consistent structures and non-covalent interactions of large systems with particular focus on molecular crystals. The covered methods are a minimal basis set Hartree–Fock (HF-3c), a small basis set screened exchange hybrid functional (HSE-3c), and a generalized gradient approximated functional evaluated in a medium-sized basis set (B97-3c), all augmented with semi-classical correction potentials. We give an overview of the methods' design and a comprehensive evaluation on established benchmark sets for geometries and lattice energies of molecular crystals, and highlight some realistic applications on large organic crystals with several hundreds of atoms in the primitive unit cell.

  12. Experimental determination of the viscous flow permeability of porous materials by measuring reflected low frequency acoustic waves

    NASA Astrophysics Data System (ADS)

    Berbiche, A.; Sadouki, M.; Fellah, Z. E. A.; Ogam, E.; Fellah, M.; Mitri, F. G.; Depollier, C.

    2016-01-01

    An acoustic reflectivity method is proposed for measuring the permeability or flow resistivity of air-saturated porous materials. In this method, a simplified expression for the reflection coefficient is derived in the Darcy regime (low frequency range), which does not depend on frequency or porosity. Numerical simulations show that the reflection coefficient of a porous material can be approximated by this simplified expression, obtained from its first-order Taylor development. The approximation is especially good for resistive materials (of low permeability) and at the lower frequencies. The permeability is reconstructed by solving the inverse problem using waves reflected by plastic foam samples at different frequency bandwidths in the Darcy regime. The proposed method has the advantage of being simple compared with conventional methods that use experimental reflected data, and is complementary to the transmissivity method, which is better adapted to less resistive materials (high permeability).

  13. Methods of predicting aggregate voids.

    DOT National Transportation Integrated Search

    2013-03-01

    Percent voids in combined aggregates vary significantly. Simplified methods of predicting aggregate : voids were studied to determine the feasibility of a range of gradations using aggregates available in Kansas. : The 0.45 Power Curve Void Predictio...

  14. Simplified Life-Cycle Cost Estimation

    NASA Technical Reports Server (NTRS)

    Remer, D. S.; Lorden, G.; Eisenberger, I.

    1983-01-01

    Simple method for life-cycle cost (LCC) estimation avoids pitfalls inherent in formulations requiring separate estimates of inflation and interest rates. Method depends for validity on the observation that interest and inflation rates closely track each other.
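
    The pitfall avoided is forecasting interest and inflation separately; because the two rates track each other, only the small real (inflation-free) discount rate d = (i - f)/(1 + f) matters. A minimal sketch with illustrative rates:

```python
def lcc_present_value(costs, interest=0.08, inflation=0.05):
    """Life-cycle cost present value using a real (inflation-free) rate.

    If interest and inflation track each other, d = (i - f)/(1 + f) is small
    and stable, avoiding separate forecasts of each. Rates and costs here
    are illustrative, not the report's figures.
    """
    d = (interest - inflation) / (1.0 + inflation)
    return sum(c / (1.0 + d) ** t for t, c in enumerate(costs))

pv = lcc_present_value([100.0, 100.0, 100.0])  # costs in years 0, 1, 2
```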

  15. Evaluation of two-test serodiagnostic method for early Lyme disease in clinical practice.

    PubMed

    Trevejo, R T; Krause, P J; Sikand, V K; Schriefer, M E; Ryan, R; Lepore, T; Porter, W; Dennis, D T

    1999-04-01

    The Centers for Disease Control and Prevention (CDC) recommend a two-test approach for the serodiagnosis of Lyme disease (LD), with EIA testing followed by Western immunoblotting (WB) of EIA-equivocal and -positive specimens. This approach was compared with a simplified two-test approach (WB of EIA equivocals only) and WB alone for early LD. Case-patients with erythema migrans (EM) rash ≥5 cm were recruited from three primary-care practices in LD-endemic areas to provide acute- (S1) and convalescent-phase serum specimens (S2). The simplified approach had the highest sensitivity when either S1 or S2 samples were tested, nearly doubling when S2 were tested, while decreasing slightly for the other two approaches. Accordingly, the simplified approach had the lowest negative likelihood ratio for either S1 or S2. For early LD with EM, the simplified approach performed well and was less costly than the other testing approaches since less WB is required.
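
    The negative likelihood ratio used to compare the approaches is LR- = (1 - sensitivity)/specificity (lower is better for ruling out disease). The example values below are illustrative, not the study's estimates:

```python
def likelihood_ratios(sensitivity, specificity):
    """Diagnostic likelihood ratios from test operating characteristics.

    LR+ = sens / (1 - spec); LR- = (1 - sens) / spec. A lower LR- means a
    negative result argues more strongly against disease.
    """
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

# illustrative operating point, not the study's estimates
lr_pos, lr_neg = likelihood_ratios(sensitivity=0.70, specificity=0.95)
```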

  16. Simplified Models for the Study of Postbuckled Hat-Stiffened Composite Panels

    NASA Technical Reports Server (NTRS)

    Vescovini, Riccardo; Davila, Carlos G.; Bisagni, Chiara

    2012-01-01

    The postbuckling response and failure of multistringer stiffened panels is analyzed using models with three levels of approximation. The first model uses a relatively coarse mesh to capture the global postbuckling response of a five-stringer panel. The second model can predict the nonlinear response as well as the debonding and crippling failure mechanisms in a single stringer compression specimen (SSCS). The third model consists of a simplified version of the SSCS that is designed to minimize the computational effort. The simplified model is well-suited to perform sensitivity analyses for studying the phenomena that lead to structural collapse. In particular, the simplified model is used to obtain a deeper understanding of the role played by geometric and material modeling parameters such as mesh size, inter-laminar strength, fracture toughness, and fracture mode mixity. Finally, a global/local damage analysis method is proposed in which a detailed local model is used to scan the global model to identify the locations that are most critical for damage tolerance.

  17. How the continents deform: The evidence from tectonic geodesy

    USGS Publications Warehouse

    Thatcher, Wayne R.

    2009-01-01

    Space geodesy now provides quantitative maps of the surface velocity field within tectonically active regions, supplying constraints on the spatial distribution of deformation, the forces that drive it, and the brittle and ductile properties of continental lithosphere. Deformation is usefully described as relative motions among elastic blocks and is block-like because major faults are weaker than adjacent intact crust. Despite similarities, continental block kinematics differs from global plate tectonics: blocks are much smaller, typically ∼100–1000 km in size; departures from block rigidity are sometimes measurable; and blocks evolve over ∼1–10 Ma timescales, particularly near their often geometrically irregular boundaries. Quantitatively relating deformation to the forces that drive it requires simplifying assumptions about the strength distribution in the lithosphere. If brittle/elastic crust is strongest, interactions among blocks control the deformation. If ductile lithosphere is the stronger, its flow properties determine the surface deformation, and a continuum approach is preferable.

  18. Revised planetary protection policy for solar system exploration.

    PubMed

    DeVincenzi, D L; Stabekis, P D

    1984-01-01

    In order to control contamination of planets by terrestrial microorganisms and organic constituents, U.S. planetary missions have been governed by a planetary protection (or planetary quarantine) policy which has changed little since 1972. This policy has recently been reviewed in light of new information obtained from planetary exploration during the past decade and because of changes to, or uncertainties in, some parameters used in the existing quantitative approach. On the basis of this analysis, a revised planetary protection policy with the following key features is proposed: deemphasizing the use of mathematical models and quantitative analyses; establishing requirements for target planet/mission type (i.e., orbiter, lander, etc.) combinations; considering sample return missions a separate category; simplifying documentation; and imposing implementing procedures (i.e., trajectory biasing, cleanroom assembly, spacecraft sterilization, etc.) by exception, i.e., only if the planet/mission combination warrants such controls.

  19. Evolution, Energy Landscapes and the Paradoxes of Protein Folding

    PubMed Central

    Wolynes, Peter G.

    2014-01-01

    Protein folding has been viewed as a difficult problem of molecular self-organization. The search problem involved in folding however has been simplified through the evolution of folding energy landscapes that are funneled. The funnel hypothesis can be quantified using energy landscape theory based on the minimal frustration principle. Strong quantitative predictions that follow from energy landscape theory have been widely confirmed both through laboratory folding experiments and from detailed simulations. Energy landscape ideas also have allowed successful protein structure prediction algorithms to be developed. The selection constraint of having funneled folding landscapes has left its imprint on the sequences of existing protein structural families. Quantitative analysis of co-evolution patterns allows us to infer the statistical characteristics of the folding landscape. These turn out to be consistent with what has been obtained from laboratory physicochemical folding experiments signalling a beautiful confluence of genomics and chemical physics. PMID:25530262

  20. Evidence for ice-ocean albedo feedback in the Arctic Ocean shifting to a seasonal ice zone.

    PubMed

    Kashiwase, Haruhiko; Ohshima, Kay I; Nihashi, Sohey; Eicken, Hajo

    2017-08-15

    Ice-albedo feedback due to the albedo contrast between water and ice is a major factor in seasonal sea ice retreat, and has received increasing attention with the Arctic Ocean shifting to a seasonal ice cover. However, quantitative evaluation of such feedbacks is still insufficient. Here we provide quantitative evidence that heat input through the open water fraction is the primary driver of seasonal and interannual variations in Arctic sea ice retreat. Analyses of satellite data (1979-2014) and a simplified ice-upper ocean coupled model reveal that divergent ice motion in the early melt season triggers large-scale feedback which subsequently amplifies summer sea ice anomalies. The magnitude of divergence controlling the feedback has doubled since 2000 due to a more mobile ice cover, which can partly explain the recent drastic ice reduction in the Arctic Ocean.

  1. Laser under ultrastrong light-matter interaction: Qualitative aspects and quantitative influences by level and mode truncations

    NASA Astrophysics Data System (ADS)

    Bamba, Motoaki; Ogawa, Tetsuo

    2016-03-01

    We investigate theoretically the light amplification by stimulated emission of radiation (laser) in the ultrastrong light-matter interaction regime under the two-level and single-mode approximations. The conventional picture of the laser breaks down under ultrastrong interaction. Instead, we must explicitly discuss the dynamics of the electric field and of the magnetic field separately, which makes the "laser" qualitatively different from the conventional laser. We find that the laser is generally accompanied by odd-order harmonics of the electromagnetic fields both inside and outside the cavity and by a synchronization with an oscillation of the atomic population. A bistability is also demonstrated. However, since our model is quite simplified, we obtained quantitatively different results from the Hamiltonians in the velocity and length forms of the light-matter interaction, while the appearance of the multiple harmonics and the bistability is qualitatively reliable.

  2. Interpretation of searches for supersymmetry with simplified models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatrchyan, S.; Khachatryan, V.; Sirunyan, A. M.

    The results of searches for supersymmetry by the CMS experiment are interpreted in the framework of simplified models. The results are based on data corresponding to an integrated luminosity of 4.73 to 4.98 inverse femtobarns. The data were collected at the LHC in proton-proton collisions at a center-of-mass energy of 7 TeV. This paper describes the method of interpretation and provides upper limits on the product of the production cross section and branching fraction as a function of new particle masses for a number of simplified models. These limits and the corresponding experimental acceptance calculations can be used to constrain other theoretical models and to compare different supersymmetry-inspired analyses.

  3. Simplified Computation for Nonparametric Windows Method of Probability Density Function Estimation.

    PubMed

    Joshi, Niranjan; Kadir, Timor; Brady, Michael

    2011-08-01

    Recently, Kadir and Brady proposed a method for estimating probability density functions (PDFs) for digital signals which they call the Nonparametric (NP) Windows method. The method involves constructing a continuous space representation of the discrete space and sampled signal by using a suitable interpolation method. NP Windows requires only a small number of observed signal samples to estimate the PDF and is completely data driven. In this short paper, we first develop analytical formulae to obtain the NP Windows PDF estimates for 1D, 2D, and 3D signals, for different interpolation methods. We then show that the original procedure to calculate the PDF estimate can be significantly simplified and made computationally more efficient by a judicious choice of the frame of reference. We have also outlined specific algorithmic details of the procedures enabling quick implementation. Our reformulation of the original concept has directly demonstrated a close link between the NP Windows method and the Kernel Density Estimator.
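
    For the 1D linear-interpolation case, each adjacent sample pair defines a segment whose value distribution is uniform between the two sample values, so the NP Windows estimate is an average of per-segment uniform densities. A sketch under that reading (degenerate flat segments are skipped rather than treated as point masses):

```python
def np_windows_pdf(samples, x):
    """1D Nonparametric Windows PDF estimate at x, linear interpolation.

    Each adjacent sample pair (a, b) defines a linear segment whose value
    distribution is uniform on [min(a, b), max(a, b)] with density
    1/|b - a|; the estimate averages these segment densities. Sketch only:
    flat segments (a == b) are skipped.
    """
    n_seg = len(samples) - 1
    density = 0.0
    for a, b in zip(samples, samples[1:]):
        lo, hi = min(a, b), max(a, b)
        if hi > lo and lo <= x <= hi:
            density += 1.0 / (hi - lo)
    return density / n_seg

p = np_windows_pdf([0.0, 1.0, 0.5], 0.75)
```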

  4. Extensions and evaluations of a general quantitative theory of forest structure and dynamics

    PubMed Central

    Enquist, Brian J.; West, Geoffrey B.; Brown, James H.

    2009-01-01

    Here, we present the second part of a quantitative theory for the structure and dynamics of forests under demographic and resource steady state. The theory is based on individual-level allometric scaling relations for how trees use resources, fill space, and grow. These scale up to determine emergent properties of diverse forests, including size–frequency distributions, spacing relations, canopy configurations, mortality rates, population dynamics, successional dynamics, and resource flux rates. The theory uniquely makes quantitative predictions for both stand-level scaling exponents and normalizations. We evaluate these predictions by compiling and analyzing macroecological datasets from several tropical forests. The close match between theoretical predictions and data suggests that forests are organized by a set of very general scaling rules. Our mechanistic theory is based on allometric scaling relations, is complementary to “demographic theory,” but is fundamentally different in approach. It provides a quantitative baseline for understanding deviations from predictions due to other factors, including disturbance, variation in branching architecture, asymmetric competition, resource limitation, and other sources of mortality, which are not included in the deliberately simplified theory. The theory should apply to a wide range of forests despite large differences in abiotic environment, species diversity, and taxonomic and functional composition. PMID:19363161

  5. Shot-by-shot Spectrum Model for Rod-pinch, Pulsed Radiography Machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, William Monford

    A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine, on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. The resulting model enables variation of parameters in real time, thus allowing for rapid optimization of the model across many shots. “Goodness of fit” is compared with output from the LSP Particle-In-Cell code, as well as the Monte Carlo Neutron Propagation with Xrays (“MCNPX”) model codes, and is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. In conclusion, improvements to the model, specifically for application to other geometries, are discussed.
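
    The paper's actual spectrum model is not reproduced in this record. As a hedged illustration, one common first-order simplification is Kramers' thick-target law, dN/dE proportional to I*Z*(V - E)/E, accumulated over the digitized pulse. The waveforms and the tungsten target (Z = 74) below are assumptions for illustration only:

```python
import numpy as np

def shot_spectrum(t, V, I, Z=74, n_bins=100):
    """Accumulate a Kramers thick-target bremsstrahlung spectrum,
    dN/dE ~ I * Z * (V - E)/E, over a measured voltage/current pulse.
    Units are arbitrary; Z = 74 assumes a tungsten rod."""
    E = np.linspace(1e-3, V.max(), n_bins)  # photon energies up to the peak voltage
    spectrum = np.zeros_like(E)
    for Vk, Ik in zip(V, I):
        mask = E < Vk                       # only photons below the endpoint energy
        spectrum[mask] += Ik * Z * (Vk - E[mask]) / E[mask]
    return E, spectrum * (t[1] - t[0])      # scale by sample interval (uniform t)

# Hypothetical 50 ns pulse: voltage peaking at 2 MV, current at 60 kA.
t = np.linspace(0.0, 50e-9, 200)
V = 2.0 * np.sin(np.pi * t / t[-1]) ** 2
I = 60e3 * np.sin(np.pi * t / t[-1]) ** 2
E, spec = shot_spectrum(t, V, I)
# The spectrum endpoint equals the peak voltage: no photons above 2 MeV.
print(E[spec > 0].max() <= 2.0)
```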

  6. Shot-by-shot Spectrum Model for Rod-pinch, Pulsed Radiography Machines

    DOE PAGES

    Wood, William Monford

    2018-02-07

    A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine, on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. The resulting model enables variation of parameters in real time, thus allowing for rapid optimization of the model across many shots. “Goodness of fit” is compared with output from the LSP Particle-In-Cell code, as well as the Monte Carlo Neutron Propagation with Xrays (“MCNPX”) model codes, and is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. In conclusion, improvements to the model, specifically for application to other geometries, are discussed.

  7. Global preamplification simplifies targeted mRNA quantification

    PubMed Central

    Kroneis, Thomas; Jonasson, Emma; Andersson, Daniel; Dolatabadi, Soheila; Ståhlberg, Anders

    2017-01-01

    The need to perform gene expression profiling using next generation sequencing and quantitative real-time PCR (qPCR) on small sample sizes and single cells is rapidly expanding. However, to analyse few molecules, preamplification is required. Here, we studied global and target-specific preamplification using 96 optimised qPCR assays. To evaluate the preamplification strategies, we monitored the reactions in real-time using SYBR Green I detection chemistry followed by melting curve analysis. Next, we compared yield and reproducibility of global preamplification to that of target-specific preamplification by qPCR using the same amount of total RNA. Global preamplification generated 9.3-fold lower yield and 1.6-fold lower reproducibility than target-specific preamplification. However, the performance of global preamplification is sufficient for most downstream applications and offers several advantages over target-specific preamplification. To demonstrate the potential of global preamplification we analysed the expression of 15 genes in 60 single cells. In conclusion, we show that global preamplification simplifies targeted gene expression profiling of small sample sizes by a flexible workflow. We outline the pros and cons for global preamplification compared to target-specific preamplification. PMID:28332609

  8. Generation of longitudinal vibrations in piano strings: From physics to sound synthesis

    NASA Astrophysics Data System (ADS)

    Bank, Balázs; Sujbert, László

    2005-04-01

    Longitudinal vibration of piano strings greatly contributes to the distinctive character of low piano notes. In this paper a simplified modal model is developed, which describes the generation of phantom partials and longitudinal free modes jointly. The model is based on the simplification that the coupling from the transverse vibration to the longitudinal polarization is unidirectional. The modal formulation makes it possible to predict the prominent components of longitudinal vibration as a function of transverse modal frequencies. This provides a qualitative insight into the generation of longitudinal vibration, while the model is still capable of explaining the empirical results of earlier works. The semi-quantitative agreement with measurement results implies that the main source of phantom partials is the transverse-to-longitudinal coupling, while the string termination and the longitudinal-to-transverse coupling have only a small influence. The results suggest that the longitudinal component of the tone can be treated as a quasi-harmonic spectrum with formantlike peaks at the longitudinal modal frequencies. The model is further simplified and applied to the real-time synthesis of piano sound with convincing sonic results.
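
    As a rough numerical illustration of the abstract's claim that longitudinal components can be predicted from the transverse modal frequencies: phantom partials arise near sum frequencies f_i + f_j of the transverse partials. The fundamental f0 and inharmonicity coefficient B below are hypothetical values, not taken from the paper:

```python
import numpy as np

# Transverse partials of a stiff string: f_n = n * f0 * sqrt(1 + B*n^2).
# f0 and B are hypothetical values for a low piano note.
f0, B = 55.0, 3e-4
n = np.arange(1, 9)
f_trans = n * f0 * np.sqrt(1.0 + B * n**2)

# Phantom partials are generated near sum frequencies f_i + f_j of the
# transverse modes (via the unidirectional coupling assumed above).
phantoms = sorted({round(f_trans[i] + f_trans[j], 2)
                   for i in range(len(n)) for j in range(i, len(n))})
print(phantoms[:5])
```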

  9. Technical note: Fu-Liou-Gu and Corti-Peter model performance evaluation for radiative retrievals from cirrus clouds

    NASA Astrophysics Data System (ADS)

    Lolli, Simone; Campbell, James R.; Lewis, Jasper R.; Gu, Yu; Welton, Ellsworth J.

    2017-06-01

    We compare, for the first time, the performance of a simplified atmospheric radiative transfer algorithm package, the Corti-Peter (CP) model, versus the more complex Fu-Liou-Gu (FLG) model, for resolving top-of-the-atmosphere radiative forcing characteristics from single-layer cirrus clouds obtained from the NASA Micro-Pulse Lidar Network database in 2010 and 2011 at Singapore and in Greenbelt, Maryland, USA, in 2012. Specifically, CP simplifies the calculation of both clear-sky longwave and shortwave radiation through regression analysis applied to radiative calculations, which contributes significantly to differences between the two models. The results of the intercomparison show that differences in annual net top-of-the-atmosphere (TOA) cloud radiative forcing can reach 65 %. This is particularly true when land surface temperatures are warmer than 288 K, where the CP regression analysis becomes less accurate. CP proves useful for first-order estimates of TOA cirrus cloud forcing, but may not be suitable where quantitative accuracy is required, including resolving the absolute sign of daytime cirrus cloud TOA forcing, which can readily oscillate around zero globally.

  10. Shot-by-shot spectrum model for rod-pinch, pulsed radiography machines

    NASA Astrophysics Data System (ADS)

    Wood, Wm M.

    2018-02-01

    A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine, on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. The resulting model enables variation of parameters in real time, thus allowing for rapid optimization of the model across many shots. "Goodness of fit" is compared with output from LSP Particle-In-Cell code, as well as the Monte Carlo Neutron Propagation with Xrays ("MCNPX") model codes, and is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. Improvements to the model, specifically for application to other geometries, are discussed.

  11. International Conference on the Methods of Aerophysical Research 98 "ICMAR 98". Proceedings, Part 1

    DTIC Science & Technology

    1998-01-01

    pumping air through device and air drying due to vapour condensation on cooled surfaces. Fig. 1 In this report, approximate estimates are presented...picture is used for flow field between disks and for water vapor condensation on cooled moving surfaces. Shown in Fig. 1 is a simplified flow...frequency of disks rotation), thus, breaking away from channel walls. Regarding the condensation process, a number of usual simplifying assumptions are made

  12. Simplified conversions between specific conductance and salinity units for use with data from monitoring stations

    USGS Publications Warehouse

    Schemel, Laurence E.

    2001-01-01

    This article presents a simplified conversion to salinity units for use with specific conductance data from monitoring stations that have been normalized to a standard temperature of 25 °C and an equation for the reverse calculation. Although these previously undocumented methods have been shared with many IEP agencies over the last two decades, the sources of the equations and data are identified here so that the original literature can be accessed.
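
    The article's published equations are not reproduced here, but a conversion of this kind can be sketched with the standard PSS-78 polynomial, taking 53,087 µS/cm as the assumed conductivity of S = 35 reference seawater at 25 °C; consult Schemel (2001) for the exact published form. The reverse calculation amounts to numerically inverting this monotone function.

```python
def salinity_from_sc(sc_uScm, temp_c=25.0):
    """Practical salinity from specific conductance (uS/cm) normalized
    to temp_c, via the PSS-78 polynomial. 53087 uS/cm is taken as the
    conductivity of S = 35 reference seawater at 25 degC."""
    a = (0.0080, -0.1692, 25.3851, 14.0941, -7.0261, 2.7081)
    b = (0.0005, -0.0056, -0.0066, -0.0375, 0.0636, -0.0144)
    k = 0.0162
    R = sc_uScm / 53087.0
    dT = temp_c - 15.0
    # S = sum a_i R^(i/2) plus the PSS-78 temperature correction term.
    S = sum(ai * R ** (i / 2.0) for i, ai in enumerate(a))
    S += dT / (1.0 + k * dT) * sum(bi * R ** (i / 2.0) for i, bi in enumerate(b))
    return S

# Reference seawater (S = 35) should round-trip exactly:
print(round(salinity_from_sc(53087.0), 4))  # -> 35.0
```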

  13. A transfer function type of simplified electrochemical model with modified boundary conditions and Padé approximation for Li-ion battery: Part 2. Modeling and parameter estimation

    NASA Astrophysics Data System (ADS)

    Yuan, Shifei; Jiang, Lei; Yin, Chengliang; Wu, Hongjie; Zhang, Xi

    2017-06-01

    The electrochemistry-based battery model can provide physics-meaningful knowledge about the lithium-ion battery system, but at the cost of an extensive computational burden. To motivate the development of a reduced-order battery model, this paper makes three major contributions: (1) A transfer-function type of simplified electrochemical model is proposed to address the current-voltage relationship, using the Padé approximation method and modified boundary conditions for the electrolyte diffusion equations. The model performance has been verified under pulse charge/discharge and dynamic stress test (DST) profiles, with a standard deviation of less than 0.021 V and a runtime 50 times faster. (2) The parametric relationship between the equivalent circuit model and the simplified electrochemical model is established, which gives both models more in-depth physical significance and provides new methods for electrochemical model parameter estimation. (3) Four simplified electrochemical model parameters, namely the equivalent resistance Req, the effective diffusion coefficient in the electrolyte phase Deeff, the electrolyte phase volume fraction ε, and the open circuit voltage (OCV), are identified by the recursive least squares (RLS) algorithm with the modified DST profiles at 45, 25, and 0 °C. The simulation results indicate that the proposed model, coupled with the RLS algorithm, can achieve high accuracy in electrochemical parameter identification in dynamic scenarios.
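
    The paper's specific regressor construction is not given in this record, but the RLS algorithm it relies on has a standard generic form for any linear-in-parameters model y_k = phi_k . theta; the toy parameters below are illustrative stand-ins, not battery data:

```python
import numpy as np

def rls_identify(Phi, y, lam=0.99):
    """Recursive least squares: estimate theta in y_k = phi_k . theta
    with forgetting factor lam (lam < 1 discounts old data)."""
    n = Phi.shape[1]
    theta = np.zeros(n)
    P = np.eye(n) * 1e6                        # large P0: uninformative prior
    for phi, yk in zip(Phi, y):
        K = P @ phi / (lam + phi @ P @ phi)    # gain vector
        theta = theta + K * (yk - phi @ theta)
        P = (P - np.outer(K, phi) @ P) / lam
    return theta

# Toy linear model with known parameters (stand-ins for Req, OCV, ...):
rng = np.random.default_rng(0)
Phi = rng.normal(size=(500, 2))
y = Phi @ np.array([0.05, 3.7]) + rng.normal(scale=1e-3, size=500)
print(np.round(rls_identify(Phi, y), 3))  # close to [0.05, 3.7]
```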

  14. Methods of predicting aggregate voids : [technical summary].

    DOT National Transportation Integrated Search

    2013-03-01

    Percent voids in combined aggregates vary significantly. Simplified methods of predicting aggregate voids were studied to determine the feasibility of a range of gradations using aggregates available in Kansas. : The 0.45 Power Curve Void Prediction ...

  15. Liquid chromatography coupled to quadrupole-time of flight tandem mass spectrometry based quantitative structure-retention relationships of amino acid analogues derivatized via n-propyl chloroformate mediated reaction.

    PubMed

    Kritikos, Nikolaos; Tsantili-Kakoulidou, Anna; Loukas, Yannis L; Dotsikas, Yannis

    2015-07-17

    In the current study, quantitative structure-retention relationships (QSRR) were constructed based on data obtained by an LC-(ESI)-QTOF-MS/MS method for the determination of amino acid analogues following their derivatization via chloroformate esters. Molecules were derivatized via an n-propyl chloroformate/n-propanol mediated reaction, and the derivatives were recovered through a liquid-liquid extraction procedure. Chromatographic separation was based on gradient elution using methanol/water mixtures from a 70/30% composition to a final 85/15%, maintaining a constant rate of change. The group of examined molecules was diverse, including mainly α-amino acids, but also β- and γ-amino acids, γ-amino acid analogues, decarboxylated and phosphorylated analogues, and dipeptides. The projection to latent structures (PLS) method was selected for the construction of the QSRRs, resulting in a total of three PLS models with high cross-validated coefficients of determination Q²Y. To this end, the molecular structures were first described by molecular descriptors. Through stratified random sampling procedures, 57 compounds were split into a training set and a test set. Model creation was based on multiple criteria, including principal component significance and eigenvalue, variable importance, and the form of residuals. Validation was based on the statistical metrics R²pred, Q²extF2, and Q²extF3 for the test set, and on Roy's metrics r²m(Av) and r²m(δ), assessing both predictive stability and internal validity. Based on the aforementioned models, simplified equivalents were then created using a multi-linear regression (MLR) method; the MLR models were validated with the same metrics. The suggested models are considered useful for estimating the retention times of amino acid analogues in a series of applications. Copyright © 2015 Elsevier B.V. All rights reserved.
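
    The published PLS/MLR models are not reproduced here. As a generic sketch, the MLR step amounts to an ordinary least-squares fit of retention time against a descriptor matrix; the descriptors and values below are hypothetical:

```python
import numpy as np

# Hypothetical descriptor matrix (e.g., logP, molecular weight, polar
# surface area) and retention times for a small training set.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))
t_ret = 5.0 + X @ np.array([2.0, -0.5, 0.8]) + rng.normal(scale=0.1, size=40)

# MLR: solve for intercept + coefficients in one least-squares call.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, t_ret, rcond=None)

pred = A @ coef
r2 = 1 - np.sum((t_ret - pred) ** 2) / np.sum((t_ret - t_ret.mean()) ** 2)
print(round(r2, 3))  # a well-specified linear model fits tightly
```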

  16. Note: Calibration of atomic force microscope cantilevers using only their resonant frequency and quality factor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sader, John E., E-mail: jsader@unimelb.edu.au; Friend, James R.; Department of Mechanical and Aerospace Engineering, University of California-San Diego, La Jolla, California 92122

    2014-11-15

    A simplified method for calibrating atomic force microscope cantilevers was recently proposed by Sader et al. [Rev. Sci. Instrum. 83, 103705 (2012); Sec. III D] that relies solely on the resonant frequency and quality factor of the cantilever in fluid (typically air). This method eliminates the need to measure the hydrodynamic function of the cantilever, which can be time consuming given the wide range of cantilevers now available. Using laser Doppler vibrometry, we rigorously assess the accuracy of this method for a series of commercially available cantilevers and explore its performance under non-ideal conditions. This shows that the simplified method is highly accurate and can be easily implemented to perform fast, robust, and non-invasive spring constant calibration.

  17. 78 FR 34427 - 2012 Tax Information for Use In The Revenue Shortfall Allocation Method

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-07

    ... Information for Use In The Revenue Shortfall Allocation Method AGENCY: Surface Transportation Board, DOT... of American Railroads (AAR), for use in the Revenue Shortfall Allocation Method (RSAM). DATES... revised in Simplified Standards for Rail Rate Cases--Taxes in Revenue Shortfall Allocation Method, EP 646...

  18. Assessment of optional sediment transport functions via the complex watershed simulation model SWAT

    USDA-ARS?s Scientific Manuscript database

    The Soil and Water Assessment Tool 2012 (SWAT2012) offers four sediment routing methods as optional alternatives to the default simplified Bagnold method. Previous studies compared only one of these alternative sediment routing methods with the default method. The proposed study evaluated the impac...

  19. Separating intrinsic from extrinsic fluctuations in dynamic biological systems

    PubMed Central

    Hilfinger, Andreas; Paulsson, Johan

    2011-01-01

    From molecules in cells to organisms in ecosystems, biological populations fluctuate due to the intrinsic randomness of individual events and the extrinsic influence of changing environments. The combined effect is often too complex for effective analysis, and many studies therefore make simplifying assumptions, for example ignoring either intrinsic or extrinsic effects to reduce the number of model assumptions. Here we mathematically demonstrate how two identical and independent reporters embedded in a shared fluctuating environment can be used to identify intrinsic and extrinsic noise terms, but also how these contributions are qualitatively and quantitatively different from what has been previously reported. Furthermore, we show for which classes of biological systems the noise contributions identified by dual-reporter methods correspond to the noise contributions predicted by correct stochastic models of either intrinsic or extrinsic mechanisms. We find that for broad classes of systems, the extrinsic noise from the dual-reporter method can be rigorously analyzed using models that ignore intrinsic stochasticity. In contrast, the intrinsic noise can be rigorously analyzed using models that ignore extrinsic stochasticity only under very special conditions that rarely hold in biology. Testing whether the conditions are met is rarely possible and the dual-reporter method may thus produce flawed conclusions about the properties of the system, particularly about the intrinsic noise. Our results contribute toward establishing a rigorous framework to analyze dynamically fluctuating biological systems. PMID:21730172

  20. Separating intrinsic from extrinsic fluctuations in dynamic biological systems.

    PubMed

    Hilfinger, Andreas; Paulsson, Johan

    2011-07-19

    From molecules in cells to organisms in ecosystems, biological populations fluctuate due to the intrinsic randomness of individual events and the extrinsic influence of changing environments. The combined effect is often too complex for effective analysis, and many studies therefore make simplifying assumptions, for example ignoring either intrinsic or extrinsic effects to reduce the number of model assumptions. Here we mathematically demonstrate how two identical and independent reporters embedded in a shared fluctuating environment can be used to identify intrinsic and extrinsic noise terms, but also how these contributions are qualitatively and quantitatively different from what has been previously reported. Furthermore, we show for which classes of biological systems the noise contributions identified by dual-reporter methods correspond to the noise contributions predicted by correct stochastic models of either intrinsic or extrinsic mechanisms. We find that for broad classes of systems, the extrinsic noise from the dual-reporter method can be rigorously analyzed using models that ignore intrinsic stochasticity. In contrast, the intrinsic noise can be rigorously analyzed using models that ignore extrinsic stochasticity only under very special conditions that rarely hold in biology. Testing whether the conditions are met is rarely possible and the dual-reporter method may thus produce flawed conclusions about the properties of the system, particularly about the intrinsic noise. Our results contribute toward establishing a rigorous framework to analyze dynamically fluctuating biological systems.
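
    The dual-reporter estimators examined in this paper have a standard form (as introduced by Elowitz et al. and scrutinized here); below is a sketch with simulated reporters, assuming lognormal fluctuations purely for illustration:

```python
import numpy as np

def noise_decomposition(x1, x2):
    """Dual-reporter decomposition: squared intrinsic, extrinsic and
    total coefficients of variation from two identical, independent
    reporters x1, x2 sharing the same environment."""
    m1, m2 = x1.mean(), x2.mean()
    eta_int2 = np.mean((x1 - x2) ** 2) / (2 * m1 * m2)
    eta_ext2 = (np.mean(x1 * x2) - m1 * m2) / (m1 * m2)
    eta_tot2 = (np.mean(x1 ** 2 + x2 ** 2) / 2 - m1 * m2) / (m1 * m2)
    return eta_int2, eta_ext2, eta_tot2

# Simulated cells: a shared (extrinsic) factor E scales both reporters,
# each with independent (intrinsic) fluctuations on top.
rng = np.random.default_rng(2)
E = rng.lognormal(mean=0.0, sigma=0.2, size=100_000)
x1 = E * rng.lognormal(sigma=0.1, size=E.size)
x2 = E * rng.lognormal(sigma=0.1, size=E.size)
i2, e2, t2 = noise_decomposition(x1, x2)
print(abs(i2 + e2 - t2) < 1e-9)  # the decomposition is exact by construction
```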

  1. Interpretation of long- and short-wavelength magnetic anomalies

    USGS Publications Warehouse

    DeNoyer, John M.; Barringer, Anthony R.

    1980-01-01

    Magsat was launched on October 30, 1979. More than a decade of examining existing data, devising appropriate models of the global magnetic field, and extending methods for interpreting long-wavelength magnetic anomalies preceded this launch. Magnetic data collected by satellite can be interpreted by using a method of analysis that quantitatively describes the magnetic field resulting from three-dimensional geologic structures bounded by an arbitrary number of polygonal faces. Each face may have any orientation and three or more sides. At each point of the external field, the component normal to each face is obtained by using an expression for the solid angle subtended by a generalized polygon. The "cross" of tangential components is relatively easy to obtain for the same polygons. No approximations related to orbit height have been made that restrict the dimensions of the polygons relative to the distance from the external field points; this permits the method to be used to model shorter-wavelength anomalies obtained from aircraft or ground surveys. The magnetic fields for all the structures considered are determined in the same rectangular coordinate system. The coordinate system is independent of the orientation of geologic trends and permits multiple structures or bodies to be included in the same magnetic field calculation. This single reference system also simplifies adjustments in position and direction to account for Earth curvature in regional interpretation.
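
    The report's exact field expressions are not reproduced here, but the solid-angle building block it describes can be sketched with the van Oosterom-Strackee triangle formula plus fan triangulation (orientation-sign handling omitted for brevity):

```python
import numpy as np

def triangle_solid_angle(r1, r2, r3):
    """Solid angle of a triangle seen from the origin
    (van Oosterom & Strackee, 1983)."""
    n1, n2, n3 = (np.linalg.norm(v) for v in (r1, r2, r3))
    numer = np.dot(r1, np.cross(r2, r3))
    denom = (n1 * n2 * n3 + np.dot(r1, r2) * n3
             + np.dot(r1, r3) * n2 + np.dot(r2, r3) * n1)
    return 2.0 * np.arctan2(abs(numer), denom)

def polygon_solid_angle(vertices, field_point):
    """Fan-triangulate a planar polygonal face and sum the pieces."""
    v = np.asarray(vertices, dtype=float) - np.asarray(field_point, dtype=float)
    return sum(triangle_solid_angle(v[0], v[i], v[i + 1])
               for i in range(1, len(v) - 1))

# Check: one face of a cube subtends 4*pi/6 steradians from its center.
face = [(1, 1, 1), (-1, 1, 1), (-1, -1, 1), (1, -1, 1)]
omega = polygon_solid_angle(face, (0, 0, 0))
print(abs(omega - 4 * np.pi / 6) < 1e-12)  # -> True
```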

  2. A simplified strategy for sensitive detection of Rose rosette virus compatible with three RT-PCR chemistries.

    PubMed

    Dobhal, Shefali; Olson, Jennifer D; Arif, Mohammad; Garcia Suarez, Johnny A; Ochoa-Corona, Francisco M

    2016-06-01

    Rose rosette disease is a disorder associated with infection by Rose rosette virus (RRV), a pathogen of roses that has devastating effects on most cultivated garden varieties and on the wild invasive rose Rosa multiflora. Reliable and sensitive detection of this disease in its early phases is needed to implement proper control measures. This study assesses a single-primer-set detection method for RRV and demonstrates its application in three different chemistries: endpoint RT-PCR, TaqMan quantitative RT-PCR (RT-qPCR), and SYBR Green RT-qPCR with high-resolution melting analysis. A primer set (RRV2F/2R) was designed from consensus sequences of the nucleocapsid protein gene p3, located in the RNA 3 region of RRV. The specificity of primer set RRV2F/2R was validated in silico against published GenBank sequences and in vitro against infected plant samples and an exclusivity panel of near-neighbor viruses and other viruses that commonly infect Rosa spp. The developed assay is sensitive, with a detection limit of 1 fg from infected plant tissue. Thirty rose samples from 8 different states of the United States were tested using the developed methods. The developed methods are sensitive and reliable, and can be used by diagnostic laboratories for routine testing and disease management decisions. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Simultaneous determination of ethyl carbamate and urea in Korean rice wine by ultra-performance liquid chromatography coupled with mass spectrometric detection.

    PubMed

    Lee, Gyeong-Hweon; Bang, Dae-Young; Lim, Jung-Hoon; Yoon, Seok-Min; Yea, Myeong-Jai; Chi, Young-Min

    2017-10-15

    In this study, a rapid method for the simultaneous detection of ethyl carbamate (EC) and urea in Korean rice wine was developed. To achieve quantitative analysis of EC and urea, the conditions for ultra-performance liquid chromatography (UPLC) separation and atmospheric-pressure chemical ionization tandem mass spectrometry (APCI-MS/MS) detection were first optimized. Under the established conditions, the detection limit, relative standard deviation, and linear range were 2.83 μg/L, 3.75-5.96%, and 0.01-10.0 mg/L, respectively, for urea; the corresponding values were 0.17 μg/L, 1.06-4.01%, and 1.0-50.0 μg/L, respectively, for EC. The correlation between the contents of EC and its precursor urea was determined under specific pH (3.5 and 4.5) and temperature (4, 25, and 50 °C) conditions using the developed method. EC content increased with higher temperature and lower pH. In Korean rice wine, urea was detected at 0.19-1.37 mg/L and EC at 2.0-7.7 μg/L. The method developed in this study, which has the advantages of simplified sample preparation, low detection limits, and good selectivity, was successfully applied to the rapid analysis of EC and urea. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Designing ROW Methods

    NASA Technical Reports Server (NTRS)

    Freed, Alan D.

    1996-01-01

    There are many aspects to consider when designing a Rosenbrock-Wanner-Wolfbrandt (ROW) method for the numerical integration of ordinary differential equations (ODE's) solving initial value problems (IVP's). The process can be simplified by constructing ROW methods around good Runge-Kutta (RK) methods. The formulation of a new, simple, embedded, third-order, ROW method demonstrates this design approach.
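
    A minimal concrete example of the ROW family described above is the one-stage method (linearly implicit Euler), which requires only one linear solve per step; the stiff test problem below is illustrative, not from the paper:

```python
import numpy as np

def rosenbrock1(f, jac, y0, t0, t1, h, gamma=1.0):
    """One-stage ROW step: solve (I - h*gamma*J) k = h*f(y), then y += k.
    With gamma = 1 and exact J this is the linearly implicit Euler
    method, the simplest member of the ROW family."""
    y, t = np.atleast_1d(np.asarray(y0, dtype=float)), t0
    I = np.eye(y.size)
    while t < t1 - 1e-12:
        J = jac(t, y)
        k = np.linalg.solve(I - h * gamma * J, h * f(t, y))
        y, t = y + k, t + h
    return y

# Stiff linear test problem y' = -1000*y: explicit Euler would need
# h < 0.002 for stability, but the ROW step is stable at h = 0.05.
f = lambda t, y: -1000.0 * y
jac = lambda t, y: np.array([[-1000.0]])
y = rosenbrock1(f, jac, [1.0], 0.0, 1.0, 0.05)
print(0.0 <= y[0] < 1e-10)  # decays monotonically toward the exact 0
```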

  5. A “loop” shape descriptor and its application to automated segmentation of airways from CT scans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pu, Jiantao; Jin, Chenwang, E-mail: jcw76@163.com; Yu, Nan

    2015-06-15

    Purpose: A novel shape descriptor is presented to aid an automated identification of the airways depicted on computed tomography (CT) images. Methods: Instead of simplifying the tubular characteristic of the airways as an ideal mathematical cylindrical or circular shape, the proposed “loop” shape descriptor exploits the fact that the cross sections of any tubular structure (regardless of its regularity) always appear as a loop. In implementation, the authors first reconstruct the anatomical structures in volumetric CT as a three-dimensional surface model using the classical marching cubes algorithm. Then, the loop descriptor is applied to locate the airways with a concavemore » loop cross section. To deal with the variation of the airway walls in density as depicted on CT images, a multiple threshold strategy is proposed. A publicly available chest CT database consisting of 20 CT scans, which was designed specifically for evaluating an airway segmentation algorithm, was used for quantitative performance assessment. Measures, including length, branch count, and generations, were computed under the aid of a skeletonization operation. Results: For the test dataset, the airway length ranged from 64.6 to 429.8 cm, the generation ranged from 7 to 11, and the branch number ranged from 48 to 312. These results were comparable to the performance of the state-of-the-art algorithms validated on the same dataset. Conclusions: The authors’ quantitative experiment demonstrated the feasibility and reliability of the developed shape descriptor in identifying lung airways.« less

  6. Contrast imaging in mouse embryos using high-frequency ultrasound.

    PubMed

    Denbeigh, Janet M; Nixon, Brian A; Puri, Mira C; Foster, F Stuart

    2015-03-04

    Ultrasound contrast-enhanced imaging can convey essential quantitative information regarding tissue vascularity and perfusion and, in targeted applications, facilitate the detection and measure of vascular biomarkers at the molecular level. Within the mouse embryo, this noninvasive technique may be used to uncover basic mechanisms underlying vascular development in the early mouse circulatory system and in genetic models of cardiovascular disease. The mouse embryo also presents as an excellent model for studying the adhesion of microbubbles to angiogenic targets (including vascular endothelial growth factor receptor 2 (VEGFR2) or αvβ3) and for assessing the quantitative nature of molecular ultrasound. We therefore developed a method to introduce ultrasound contrast agents into the vasculature of living, isolated embryos. This allows freedom in terms of injection control and positioning, reproducibility of the imaging plane without obstruction and motion, and simplified image analysis and quantification. Late gestational stage (embryonic day (E)16.6 and E17.5) murine embryos were isolated from the uterus, gently exteriorized from the yolk sac and microbubble contrast agents were injected into veins accessible on the chorionic surface of the placental disc. Nonlinear contrast ultrasound imaging was then employed to collect a number of basic perfusion parameters (peak enhancement, wash-in rate and time to peak) and quantify targeted microbubble binding in an endoglin mouse model. We show the successful circulation of microbubbles within living embryos and the utility of this approach in characterizing embryonic vasculature and microbubble behavior.

  7. Effect of genetic architecture on the prediction accuracy of quantitative traits in samples of unrelated individuals.

    PubMed

    Morgante, Fabio; Huang, Wen; Maltecca, Christian; Mackay, Trudy F C

    2018-06-01

    Predicting complex phenotypes from genomic data is a fundamental aim of animal and plant breeding, where we wish to predict genetic merits of selection candidates; and of human genetics, where we wish to predict disease risk. While genomic prediction models work well with populations of related individuals and high linkage disequilibrium (LD) (e.g., livestock), comparable models perform poorly for populations of unrelated individuals and low LD (e.g., humans). We hypothesized that low prediction accuracies in the latter situation may occur when the genetic architecture of the trait departs from the infinitesimal and additive architecture assumed by most prediction models. We used simulated data for 10,000 lines based on sequence data from a population of unrelated, inbred Drosophila melanogaster lines to evaluate this hypothesis. We show that, even in very simplified scenarios meant as a stress test of the commonly used Genomic Best Linear Unbiased Predictor (G-BLUP) method, using all common variants yields low prediction accuracy regardless of the trait genetic architecture. However, prediction accuracy increases when predictions are informed by the genetic architecture inferred from mapping the top variants affecting main effects and interactions in the training data, provided there is sufficient power for mapping. When the true genetic architecture is largely or partially due to epistatic interactions, the additive model may not perform well, while models that account explicitly for interactions generally increase prediction accuracy. Our results indicate that accounting for genetic architecture can improve prediction accuracy for quantitative traits.
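
    The simulation pipeline itself is not reproduced in this record. As a sketch, G-BLUP with all common variants is equivalent to ridge regression on the centered marker matrix (RR-BLUP); the marker counts, QTL number, and shrinkage choice below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, n_qtl = 400, 1000, 20

# Simulated 0/1/2 genotypes and an additive trait controlled by a few
# QTL plus environmental noise (heritability ~ 0.5 by construction).
Z = rng.integers(0, 3, size=(n, p)).astype(float)
Z -= Z.mean(axis=0)                               # center markers
beta = np.zeros(p)
beta[rng.choice(p, n_qtl, replace=False)] = rng.normal(size=n_qtl)
y = Z @ beta + rng.normal(scale=np.std(Z @ beta), size=n)

# RR-BLUP / ridge solution, equivalent to G-BLUP for the genomic
# relationship matrix built from these markers.
lam = p                                           # simple shrinkage choice
train, test = np.arange(300), np.arange(300, n)
Zt = Z[train]
u = np.linalg.solve(Zt.T @ Zt + lam * np.eye(p), Zt.T @ y[train])
r = np.corrcoef(Z[test] @ u, y[test])[0, 1]
print(round(r, 2))  # prediction accuracy; modest for this architecture
```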

  8. Aiding alternatives assessment with an uncertainty-tolerant hazard scoring method.

    PubMed

    Faludi, Jeremy; Hoang, Tina; Gorman, Patrick; Mulvihill, Martin

    2016-11-01

    This research developed a single-score system to simplify and clarify decision-making in chemical alternatives assessment, accounting for uncertainty. Today, assessing alternatives to hazardous constituent chemicals is a difficult task: rather than comparing alternatives by a single definitive score, many independent toxicological variables must be considered at once, and data gaps are rampant. Thus, most hazard assessments are only comprehensible to toxicologists, but business leaders and politicians need simple scores to make decisions. In addition, they must balance hazard against other considerations, such as product functionality, and they must be aware of the high degrees of uncertainty in chemical hazard data. This research proposes a transparent, reproducible method to translate eighteen hazard endpoints into a simple numeric score with quantified uncertainty, alongside a similar product functionality score, to aid decisions between alternative products. The scoring method uses Clean Production Action's GreenScreen as a guide, but with a different method of score aggregation. It provides finer differentiation between scores than GreenScreen's four-point scale, and it displays uncertainty quantitatively in the final score. Displaying uncertainty also illustrates which alternatives are early in product development versus well-defined commercial products. This paper tested the proposed assessment method through a case study in the building industry, assessing alternatives to spray polyurethane foam insulation containing methylene diphenyl diisocyanate (MDI). The new hazard scoring method successfully identified trade-offs between different alternatives, showing finer resolution than GreenScreen Benchmarking. Sensitivity analysis showed that different weighting schemes in hazard scores had almost no effect on alternatives ranking, compared to uncertainty from data gaps. Copyright © 2016 Elsevier Ltd. All rights reserved.
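
    The idea of carrying data gaps through as explicit uncertainty can be sketched with interval arithmetic. The endpoint names, weights, and 1-4 scale below are hypothetical illustrations, not the paper's (or GreenScreen's) actual scoring rules:

```python
def aggregate(scores, weights=None):
    """Aggregate per-endpoint hazard intervals into (mean score, +/- uncertainty).

    scores: dict endpoint -> (low, high) score interval on a 1-4 scale,
    or None for a data gap, treated as the full range (1, 4)."""
    full_range = (1.0, 4.0)
    intervals = [full_range if s is None else s for s in scores.values()]
    n = len(intervals)
    w = weights or [1.0 / n] * n
    low = sum(wi * lo for wi, (lo, hi) in zip(w, intervals))
    high = sum(wi * hi for wi, (lo, hi) in zip(w, intervals))
    return (low + high) / 2, (high - low) / 2

# hypothetical product: three measured endpoints and one data gap
product = {
    "carcinogenicity": (1, 1),     # measured, low concern
    "acute_toxicity": (2, 2),
    "aquatic_toxicity": None,      # data gap -> full uncertainty
    "skin_irritation": (3, 3),
}
score, unc = aggregate(product)
print(score, unc)   # → 2.125 0.375
```

    The data gap alone contributes most of the final uncertainty, mirroring the paper's sensitivity-analysis finding that gaps dominate over weighting choices.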

  9. Feasibility of the Simultaneous Determination of Monomer Concentrations and Particle Size in Emulsion Polymerization Using in Situ Raman Spectroscopy

    PubMed Central

    2015-01-01

    An immersion Raman probe was used in emulsion copolymerization reactions to measure monomer concentrations and particle sizes. Quantitative determination of monomer concentrations is feasible in two-monomer copolymerizations, but only the overall conversion could be measured by Raman spectroscopy in a four-monomer copolymerization. The feasibility of measuring monomer conversion and particle size was established using partial least-squares (PLS) calibration models. A simplified theoretical framework for the measurement of particle sizes from photon scattering is presented, built on the elastic-sphere-vibration and surface-tension models. PMID:26900256
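
    For readers unfamiliar with PLS calibration, a minimal univariate-response NIPALS-style sketch in plain NumPy (a generic illustration on random data, not the authors' spectral calibration models):

```python
import numpy as np

def pls_fit(X, y, n_comp):
    """Fit a univariate-y PLS regression on centered data via NIPALS.

    Returns the regression coefficient vector B = W (P'W)^-1 q."""
    Xr, yr = X.copy(), y.copy()
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xr.T @ yr                 # weight: covariance direction
        w /= np.linalg.norm(w)
        t = Xr @ w                    # scores
        tt = t @ t
        p = Xr.T @ t / tt             # X loadings
        q = (yr @ t) / tt             # y loading
        Xr = Xr - np.outer(t, p)      # deflate
        yr = yr - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    return W @ np.linalg.solve(P.T @ W, Q)

# synthetic "spectra" with a 3-variable signal plus small noise
rng = np.random.default_rng(3)
X = rng.normal(size=(60, 20))
true_b = np.zeros(20); true_b[:3] = [1.0, -0.5, 0.25]
y = X @ true_b + rng.normal(0, 0.01, 60)

xm, ym = X.mean(0), y.mean()
B = pls_fit(X - xm, y - ym, n_comp=3)
pred = (X - xm) @ B + ym
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - ym) ** 2)
print(round(r2, 3))
```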

  10. Performance of advanced missions using fusion propulsion

    NASA Technical Reports Server (NTRS)

    Friedlander, Alan; Mcadams, Jim; Schulze, Norm

    1989-01-01

    A quantitative evaluation of the premise that nuclear fusion propulsion offers benefits as compared to other propulsion technologies for carrying out a program of advanced exploration of the solar system and beyond is presented. Using a simplified analytical model of trajectory performance, numerical results of mass requirements versus trip time are given for robotic missions beyond the solar system that include flyby and rendezvous with the Oort cloud of comets and with the star system Alpha Centauri. Round trip missions within the solar system, including robotic sample returns from the outer planet moons and multiple asteroid targets, and manned Mars exploration are also described.

  11. Measuring Total and Germinable Spore Populations

    NASA Technical Reports Server (NTRS)

    Noell, A.C.; Yung, P.T.; Yang, W.; Lee, C.; Ponce, A.

    2011-01-01

    It has been shown that bacterial endospores can be enumerated using a microscopy based assay that images the luminescent halos from terbium ions bound to dipicolinic acid, a spore specific chemical marker released upon spore germination. Further development of the instrument has simplified it towards automation while at the same time improving image quality. Enumeration of total spore populations has also been developed allowing measurement of the percentage of viable spores in any population by comparing the germinable/culturable spores to the total. Percentage viability will allow a more quantitative comparison of the ability of spores to survive across a wide range of extreme environments.

  12. Spatial accuracy of a simplified disaggregation method for traffic emissions applied in seven mid-sized Chilean cities

    NASA Astrophysics Data System (ADS)

    Ossés de Eicker, Margarita; Zah, Rainer; Triviño, Rubén; Hurni, Hans

    The spatial accuracy of top-down traffic emission inventory maps obtained with a simplified disaggregation method based on street density was assessed in seven mid-sized Chilean cities. Each top-down emission inventory map was compared against a reference, namely a more accurate bottom-up emission inventory map from the same study area. The comparison was carried out using a combination of numerical indicators and visual interpretation. Statistically significant differences were found between the seven cities with regard to the spatial accuracy of their top-down emission inventory maps. In compact cities with a simple street network and a single center, a good accuracy of the spatial distribution of emissions was achieved, with correlation values > 0.8 with respect to the bottom-up emission inventory of reference. In contrast, the simplified disaggregation method is not suitable for complex cities consisting of interconnected nuclei, resulting in correlation values < 0.5. Although top-down disaggregation of traffic emissions generally exhibits low accuracy, the accuracy is significantly higher in compact cities and might be further improved by applying a correction factor for the city center. Therefore, the method can be used by local environmental authorities in cities with limited resources and with little knowledge of the pollution situation to get an overview of the spatial distribution of the emissions generated by traffic activities.
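
    The top-down disaggregation itself is simple to state: each cell receives the city total weighted by its share of street length, and accuracy is checked by correlation against a bottom-up reference map. A toy sketch with invented numbers:

```python
import numpy as np

# street length per grid cell (km) and a city-wide emission total (t/yr);
# all numbers are illustrative, not from the Chilean case study
street_km = np.array([12.0, 8.0, 5.0, 3.0, 1.0, 1.0])
total_emissions = 300.0

# top-down: distribute the total proportionally to street density
top_down = total_emissions * street_km / street_km.sum()

# synthetic bottom-up reference map for the same cells
bottom_up = np.array([110.0, 70.0, 55.0, 35.0, 20.0, 10.0])

# Pearson correlation as the spatial-accuracy indicator
r = float(np.corrcoef(top_down, bottom_up)[0, 1])
print(top_down.round(1), round(r, 3))
```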

  13. Birth Control - Multiple Languages

    MedlinePlus

    Links to patient-education PDFs on birth control in multiple languages, including Birth Control Methods in English and Burmese (myanma bhasa), and Before Your Vasectomy in English and Simplified Chinese (Mandarin dialect) (简体中文).

  14. A simplified real time method to forecast semi-enclosed basins storm surge

    NASA Astrophysics Data System (ADS)

    Pasquali, D.; Di Risio, M.; De Girolamo, P.

    2015-11-01

    Semi-enclosed basins are often prone to storm surge events. Indeed, their meteorological exposition, the presence of a large continental shelf and their shape can lead to strong sea level set-up. A real time system aimed at forecasting storm surge may be of great help to protect human activities (i.e. to forecast flooding due to storm surge events), to manage ports, and to safeguard coastal safety. This paper aims at illustrating a simple method able to forecast storm surge events in semi-enclosed basins in real time. The method is based on a mixed approach in which the results obtained by means of a simplified physics-based model with low computational costs are corrected by means of statistical techniques. The proposed method is applied to a point of interest located in the Northern part of the Adriatic Sea. The comparison of forecasted levels against observed values shows the satisfactory reliability of the forecasts.
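
    The mixed approach can be caricatured in a few lines: a cheap physics-based estimate plus a statistical correction fitted to its historical errors against observations. The data below are synthetic stand-ins, not the Adriatic application:

```python
import numpy as np

rng = np.random.default_rng(1)
# simplified-model surge output (m) and "observed" levels with a systematic
# bias and scale error that the statistical correction should absorb
physics = rng.uniform(0.0, 1.0, 200)
observed = 0.15 + 1.3 * physics + rng.normal(0, 0.05, 200)

# fit the correction on past forecasts: observed ~ a * physics + b
a, b = np.polyfit(physics, observed, 1)

def forecast(raw):
    """Corrected real-time forecast from the raw simplified-model output."""
    return a * raw + b

rmse_raw = float(np.sqrt(np.mean((observed - physics) ** 2)))
rmse_cor = float(np.sqrt(np.mean((observed - forecast(physics)) ** 2)))
print(round(rmse_raw, 3), round(rmse_cor, 3))
```

    In operation the correction would be refitted as new observations arrive; the point is only that the statistical layer removes the simplified model's systematic error.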

  15. Automated Simplification of Full Chemical Mechanisms

    NASA Technical Reports Server (NTRS)

    Norris, A. T.

    1997-01-01

    A code has been developed to automatically simplify full chemical mechanisms. The method employed is based on the Intrinsic Low Dimensional Manifold (ILDM) method of Maas and Pope. The ILDM method is a dynamical systems approach to the simplification of large chemical kinetic mechanisms. By identifying low-dimensional attracting manifolds, the method allows complex full mechanisms to be parameterized by just a few variables; in effect, generating reduced chemical mechanisms by an automatic procedure. These resulting mechanisms, however, still retain all the species used in the full mechanism. Full and skeletal mechanisms for various fuels are simplified to a two-dimensional manifold, and the resulting mechanisms are found to compare well with the full mechanisms, and show significant improvement over global one-step mechanisms, such as those by Westbrook and Dryer. In addition, by using an ILDM reaction mechanism in a CFD code, a considerable improvement in turn-around time can be achieved.

  16. Numerical Approximation of Elasticity Tensor Associated With Green-Naghdi Rate.

    PubMed

    Liu, Haofei; Sun, Wei

    2017-08-01

    Objective stress rates are often used in commercial finite element (FE) programs. However, deriving a consistent tangent modulus tensor (also known as elasticity tensor or material Jacobian) associated with the objective stress rates is challenging when complex material models are utilized. In this paper, an approximation method for the tangent modulus tensor associated with the Green-Naghdi rate of the Kirchhoff stress is employed to simplify the evaluation process. The effectiveness of the approach is demonstrated through the implementation of two user-defined fiber-reinforced hyperelastic material models. Comparisons between the approximation method and the closed-form analytical method demonstrate that the former can simplify the material Jacobian evaluation with satisfactory accuracy while retaining its computational efficiency. Moreover, since the approximation method is independent of material models, it can facilitate the implementation of complex material models in FE analysis using shell/membrane elements in Abaqus.

  17. Nonlinear optimization simplified by hypersurface deformation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stillinger, F.H.; Weber, T.A.

    1988-09-01

    A general strategy is advanced for simplifying nonlinear optimization problems, the ant-lion method. This approach exploits shape modifications of the cost-function hypersurface which distend basins surrounding low-lying minima (including global minima). By intertwining hypersurface deformations with steepest-descent displacements, the search is concentrated on a small relevant subset of all minima. Specific calculations demonstrating the value of this method are reported for the partitioning of two classes of irregular but nonrandom graphs, the prime-factor graphs and the pi graphs. We also indicate how this approach can be applied to the traveling salesman problem and to design layout optimization, and that it may be useful in combination with simulated annealing strategies.

  18. Simplification of the Kalman filter for meteorological data assimilation

    NASA Technical Reports Server (NTRS)

    Dee, Dick P.

    1991-01-01

    The paper proposes a new statistical method of data assimilation that is based on a simplification of the Kalman filter equations. The forecast error covariance evolution is approximated simply by advecting the mass-error covariance field, deriving the remaining covariances geostrophically, and accounting for external model-error forcing only at the end of each forecast cycle. This greatly reduces the cost of computation of the forecast error covariance. In simulations with a linear, one-dimensional shallow-water model and data generated artificially, the performance of the simplified filter is compared with that of the Kalman filter and the optimal interpolation (OI) method. The simplified filter produces analyses that are nearly optimal, and represents a significant improvement over OI.
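
    As context for the covariance simplification, a scalar toy comparison of a full Kalman filter against OI's static background-error assumption (a generic one-dimensional illustration, not Dee's advection-based scheme):

```python
import numpy as np

rng = np.random.default_rng(2)
a, q, r = 0.95, 0.1, 0.5            # dynamics, model-error var, obs-error var

# simulate a scalar AR(1) "truth" and noisy observations of it
x_true, n = 0.0, 300
xs, ys = [], []
for _ in range(n):
    x_true = a * x_true + rng.normal(0, np.sqrt(q))
    xs.append(x_true)
    ys.append(x_true + rng.normal(0, np.sqrt(r)))

def assimilate(p_update):
    """Run forecast/analysis cycles; p_update sets the analysis variance."""
    x, p, err2 = 0.0, 1.0, 0.0
    for xt, y in zip(xs, ys):
        x, p = a * x, a * a * p + q      # forecast step
        k = p / (p + r)                  # gain from current forecast variance
        x = x + k * (y - x)              # analysis step
        p = p_update(p, k)
        err2 += (x - xt) ** 2
    return float(np.sqrt(err2 / n))

kf = assimilate(lambda p, k: (1 - k) * p)  # Kalman: evolve the error variance
oi = assimilate(lambda p, k: 1.0)          # OI: static background-error variance
print(round(kf, 3), round(oi, 3))
```

    The filter with the evolving error variance gives smaller analysis errors, which is the gap the paper's simplified covariance evolution tries to close at a fraction of the Kalman filter's cost.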

  19. Two tradeoffs between economy and reliability in loss of load probability constrained unit commitment

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; Wang, Mingqiang; Ning, Xingyao

    2018-02-01

    Spinning reserve (SR) should be scheduled considering the balance between economy and reliability. To address the computational intractability caused by the computation of loss of load probability (LOLP), many probabilistic methods use simplified formulations of LOLP to improve computational efficiency. Two tradeoffs embedded in the SR optimization model are not explicitly analyzed in these methods. In this paper, two tradeoffs, a primary tradeoff and a secondary tradeoff between economy and reliability in the maximum-LOLP-constrained unit commitment (UC) model, are explored and analyzed in a small system and in the IEEE-RTS system. The analysis of the two tradeoffs can help in establishing new, efficient simplified LOLP formulations and new SR optimization models.
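
    LOLP itself is commonly computed by convolving independent two-state (up/down) unit outage models into a capacity-outage probability table; a minimal sketch with invented unit data:

```python
# (capacity in MW, forced-outage probability) per unit; numbers are illustrative
units = [(100, 0.02), (100, 0.02), (50, 0.05)]

# build the capacity-outage probability table: dist[out_mw] = probability
dist = {0: 1.0}
for cap, q in units:
    new = {}
    for out, p in dist.items():
        new[out] = new.get(out, 0.0) + p * (1 - q)        # unit available
        new[out + cap] = new.get(out + cap, 0.0) + p * q  # unit forced out
    dist = new

total = sum(c for c, _ in units)   # 250 MW installed
load = 160.0
# LOLP = probability that available capacity falls short of the load
lolp = sum(p for out, p in dist.items() if total - out < load)
print(round(lolp, 6))
```

    The table grows with the number of distinct outage states, which is exactly why simplified LOLP formulations are attractive inside a UC optimization loop.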

  20. An Analysis of Once-per-revolution Oscillating Aerodynamic Thrust Loads on Single-Rotation Propellers on Tractor Airplanes at Zero Yaw

    NASA Technical Reports Server (NTRS)

    Rogallo, Vernon L; Yaggy, Paul F; Mccloud, John L , III

    1956-01-01

    A simplified procedure is shown for calculating the once-per-revolution oscillating aerodynamic thrust loads on propellers of tractor airplanes at zero yaw. The only flow field information required for the application of the procedure is a knowledge of the upflow angles at the horizontal center line of the propeller disk. Methods are presented whereby these angles may be computed without recourse to experimental survey of the flow field. The loads computed by the simplified procedure are compared with those computed by a more rigorous method and the procedure is applied to several airplane configurations which are believed typical of current designs. The results are generally satisfactory.

  1. Improvement of matrix-assisted laser desorption/ionization time-of-flight mass spectrometry identification of difficult-to-identify bacteria and its impact in the workflow of a clinical microbiology laboratory.

    PubMed

    Rodríguez-Sánchez, Belén; Marín, Mercedes; Sánchez-Carrillo, Carlos; Cercenado, Emilia; Ruiz, Adrián; Rodríguez-Créixems, Marta; Bouza, Emilio

    2014-05-01

    This study evaluates matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) capability for the identification of difficult-to-identify microorganisms. A total of 150 bacterial isolates inconclusively identified with conventional phenotypic tests were further assessed by 16S rRNA sequencing and by MALDI-TOF MS following two methods: (a) a simplified formic acid-based on-plate extraction and (b) a tube-based extraction step. Using the simplified method, 29 isolates could not be identified. For the remaining 121 isolates (80.7%), we obtained a reliable identification by MALDI-TOF: in 103 isolates, the identification by 16S rRNA sequencing and MALDI-TOF coincided at the species level (68.7% of the total 150 analyzed isolates and 85.1% of the samples with a MALDI-TOF result), and in 18 isolates, the identification by both methods coincided at the genus level (12% of the total and 14.9% of the samples with MALDI-TOF results). No discordant results were observed. The tube-based extraction step allowed the identification at the species level of 6 of the 29 isolates unidentified by the simplified method. In summary, MALDI-TOF can be used for the rapid identification of many bacterial isolates inconclusively identified by conventional methods. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. Modal ring method for the scattering of sound

    NASA Technical Reports Server (NTRS)

    Baumeister, Kenneth J.; Kreider, Kevin L.

    1993-01-01

    The modal element method for acoustic scattering can be simplified when the scattering body is rigid. In this simplified method, called the modal ring method, the scattering body is represented by a ring of triangular finite elements forming the outer surface. The acoustic pressure is calculated at the element nodes. The pressure in the infinite computational region surrounding the body is represented analytically by an eigenfunction expansion. The two solution forms are coupled by the continuity of pressure and velocity on the body surface. The modal ring method effectively reduces the two-dimensional scattering problem to a one-dimensional problem capable of handling very high frequency scattering. In contrast to the boundary element method or the method of moments, which perform a similar reduction in problem dimension, the modal ring method has the added advantage of having a highly banded solution matrix requiring considerably less computer storage. The method shows excellent agreement with analytic results for scattering from rigid circular cylinders over a wide frequency range (1 ≤ ka ≤ 100) in the near and far fields.

  3. Stability analysis via the concept of Lyapunov exponents: a case study in optimal controlled biped standing

    NASA Astrophysics Data System (ADS)

    Sun, Yuming; Wu, Christine Qiong

    2012-12-01

    Balancing control is important for biped standing. In spite of large efforts, it is very difficult to design balancing control strategies satisfying three requirements simultaneously: maintaining postural stability, improving energy efficiency and satisfying the constraints between the biped feet and the ground. In this article, a proportional-derivative (PD) controller is proposed for a standing biped, which is simplified as a two-link inverted pendulum with one additional rigid foot-link. The genetic algorithm (GA) is used to search for the control gain meeting all three requirements. The stability analysis of such a deterministic biped control system is carried out using the concept of Lyapunov exponents (LEs), based on which the system stability, where the disturbance comes from the initial states, and the structural stability, where the disturbance comes from the PD gains, are examined quantitatively in terms of stability region. This article contributes to biped balancing control; more significantly, the method demonstrated in the biped case study provides a general framework of systematic stability analysis for certain deterministic nonlinear dynamical systems.
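
    The notion of reading stability off Lyapunov exponents can be illustrated on a much simpler deterministic system; for the logistic map at r = 4 the largest exponent is known analytically to be ln 2, which makes a convenient sanity check (a generic illustration, not the biped model):

```python
import math

def lyapunov_logistic(r, x0=0.2, n=100_000, burn=1_000):
    """Estimate the largest Lyapunov exponent of x -> r x (1 - x)
    by averaging log|f'(x)| = log|r (1 - 2x)| along an orbit."""
    x = x0
    for _ in range(burn):          # discard the transient
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        acc += math.log(abs(r * (1 - 2 * x)))
    return acc / n

lam = lyapunov_logistic(4.0)
print(round(lam, 3))   # ≈ ln 2 ≈ 0.693 (positive exponent -> chaos)
```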

  4. Ten questions concerning occupant behavior in buildings: The big picture

    DOE PAGES

    Hong, Tianzhen; Yan, Da; D'Oca, Simona; ...

    2016-12-27

    Occupant behavior has significant impacts on building energy performance and occupant comfort. However, occupant behavior is not well understood and is often oversimplified in the building life cycle, due to its stochastic, diverse, complex, and interdisciplinary nature. The use of simplified methods or tools to quantify the impacts of occupant behavior in building performance simulations significantly contributes to performance gaps between simulated models and actual building energy consumption. Therefore, it is crucial to understand occupant behavior in a comprehensive way, integrating qualitative approaches and data- and model-driven quantitative approaches, and employing appropriate tools to guide the design and operation of low-energy residential and commercial buildings that integrate technological and human dimensions. This paper presents ten questions, highlighting some of the most important issues regarding concepts, applications, and methodologies in occupant behavior research. The proposed questions and answers aim to provide insights into occupant behavior for current and future researchers, designers, and policy makers, and most importantly, to inspire innovative research and applications to increase energy efficiency and reduce energy use in buildings.

  5. Dynamical analysis of bounded and unbounded orbits in a generalized Hénon-Heiles system

    NASA Astrophysics Data System (ADS)

    Dubeibe, F. L.; Riaño-Doncel, A.; Zotos, Euaggelos E.

    2018-04-01

    The Hénon-Heiles potential was first proposed as a simplified version of the gravitational potential experienced by a star in the presence of a galactic center. Currently, this system is considered a paradigm in dynamical systems because, despite its simplicity, it exhibits very complex dynamical behavior. In the present paper, we perform a series expansion up to the fifth order of a potential with axial and reflection symmetries, which after some transformations leads to a generalized Hénon-Heiles potential. This new system is analyzed qualitatively in both regimes of bounded and unbounded motion via the Poincaré section method and by plotting the exit basins, while the quantitative analysis is performed through the Lyapunov exponents and the basin entropy, respectively. We find that in both regimes the chaoticity of the system decreases as the test-particle energy moves away from the critical energy. Additionally, we may conclude that despite the inclusion of higher-order terms in the series expansion, the new system shows wider zones of regularity (islands) than those present in the Hénon-Heiles system.
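
    A starting point for reproducing this kind of analysis is an energy-conserving integrator for the classical Hénon-Heiles system; energy conservation is the usual sanity check before building Poincaré sections. The sketch below uses the standard potential only (the paper's generalized version adds higher-order terms not included here):

```python
import numpy as np

def deriv(s):
    """Hamilton's equations for H = (px^2+py^2)/2 + (x^2+y^2)/2 + x^2 y - y^3/3."""
    x, y, px, py = s
    return np.array([px, py, -x - 2 * x * y, -y - x * x + y * y])

def rk4_step(s, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = deriv(s)
    k2 = deriv(s + 0.5 * h * k1)
    k3 = deriv(s + 0.5 * h * k2)
    k4 = deriv(s + h * k3)
    return s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def energy(s):
    x, y, px, py = s
    return 0.5 * (px**2 + py**2) + 0.5 * (x**2 + y**2) + x**2 * y - y**3 / 3

s = np.array([0.0, 0.1, 0.4, 0.0])   # bounded orbit: E ≈ 0.085 < 1/6 (escape energy)
e0 = energy(s)
for _ in range(20_000):               # integrate to t = 200 with h = 0.01
    s = rk4_step(s, 0.01)
drift = abs(energy(s) - e0)
print(drift)
```

    Poincaré sections follow by recording (y, py) whenever the orbit crosses x = 0 with px > 0.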

  6. A Multiplex Snapback Primer System for the Enrichment and Detection of JAK2 V617F and MPL W515L/K Mutations in Philadelphia-Negative Myeloproliferative Neoplasms

    PubMed Central

    Zhang, Yunqing; Zhang, Xinju; Xu, Xiao; Kang, Zhihua; Li, Shibao; Zhang, Chen; Su, Bing

    2014-01-01

    A multiplex snapback primer system was developed for the simultaneous detection of JAK2 V617F and MPL W515L/K mutations in Philadelphia chromosome- (Ph-) negative myeloproliferative neoplasms (MPNs). The multiplex system comprises two snapback versus limiting primer sets for JAK2 and MPL mutation enrichment and detection, respectively. A Linear-After-The-Exponential (LATE) PCR strategy was employed for the primer design to maximize the amplification efficiency of the system. A low-ionic-strength buffer and a rapid PCR protocol allowed for selective amplification of the mutant alleles. Amplification products were analyzed by melting curve analysis for mutation identification. The multiplex system achieved 0.1% mutation-load sensitivity and <5% coefficient-of-variation inter-/intra-assay reproducibility. 120 clinical samples were tested by the multiplex snapback primer assay and verified with the amplification refractory mutation system (ARMS), quantitative PCR (qPCR), and Sanger sequencing. The multiplex system, with favorable versatility, provided a practical tool for the molecular diagnosis of Ph-negative MPNs and simplified the genetic testing process. PMID:24729973

  7. Carbon Nanotube Fiber Ionization Mass Spectrometry: A Fundamental Study of a Multi-Walled Carbon Nanotube Functionalized Corona Discharge Pin for Polycyclic Aromatic Hydrocarbons Analysis.

    PubMed

    Nahan, Keaton S; Alvarez, Noe; Shanov, Vesselin; Vonderheide, Anne

    2017-11-01

    Mass spectrometry continues to tackle many complicated tasks, and ongoing research seeks to simplify its instrumentation as well as sampling. The desorption electrospray ionization (DESI) source was the first ambient ionization source to function without extensive gas requirements and chromatography. Electrospray techniques generally have low efficiency for ionization of nonpolar analytes, and some researchers have resorted to methods such as direct analysis in real time (DART) or desorption atmospheric pressure chemical ionization (DAPCI) for their analysis. In this work, a carbon nanotube fiber ionization (nanoCFI) source was developed and was found to be capable of solid phase microextraction (SPME) of nonpolar analytes as well as ionization and sampling similar to that of direct probe atmospheric pressure chemical ionization (DP-APCI). Conductivity and adsorption were maintained by utilizing a corona pin functionalized with a multi-walled carbon nanotube (MWCNT) thread. Quantitative work with the nanoCFI source with a designed corona discharge pin insert demonstrated linearity (R² up to 0.97) for three target PAHs with phenanthrene as the internal standard.

  8. Physical Vapor Deposition of Thin Films

    NASA Astrophysics Data System (ADS)

    Mahan, John E.

    2000-01-01

    A unified treatment of the theories, data, and technologies underlying physical vapor deposition methods. With electronic, optical, and magnetic coating technologies increasingly dominating manufacturing in the high-tech industries, there is a growing need for expertise in physical vapor deposition of thin films. This important new work provides researchers and engineers in this field with the information they need to tackle thin film processes in the real world. Presenting a cohesive, thoroughly developed treatment of both fundamental and applied topics, Physical Vapor Deposition of Thin Films incorporates many critical results from across the literature as it imparts a working knowledge of a variety of present-day techniques. Numerous worked examples, extensive references, and more than 100 illustrations and photographs accompany coverage of:
    * Thermal evaporation, sputtering, and pulsed laser deposition techniques
    * Key theories and phenomena, including the kinetic theory of gases, adsorption and condensation, high-vacuum pumping dynamics, and sputtering discharges
    * Trends in sputter yield data and a new simplified collisional model of sputter yield for pure element targets
    * Quantitative models for film deposition rate, thickness profiles, and thermalization of the sputtered beam

  9. Carbon Nanotube Fiber Ionization Mass Spectrometry: A Fundamental Study of a Multi-Walled Carbon Nanotube Functionalized Corona Discharge Pin for Polycyclic Aromatic Hydrocarbons Analysis

    NASA Astrophysics Data System (ADS)

    Nahan, Keaton S.; Alvarez, Noe; Shanov, Vesselin; Vonderheide, Anne

    2017-09-01

    Mass spectrometry continues to tackle many complicated tasks, and ongoing research seeks to simplify its instrumentation as well as sampling. The desorption electrospray ionization (DESI) source was the first ambient ionization source to function without extensive gas requirements and chromatography. Electrospray techniques generally have low efficiency for ionization of nonpolar analytes, and some researchers have resorted to methods such as direct analysis in real time (DART) or desorption atmospheric pressure chemical ionization (DAPCI) for their analysis. In this work, a carbon nanotube fiber ionization (nanoCFI) source was developed and was found to be capable of solid phase microextraction (SPME) of nonpolar analytes as well as ionization and sampling similar to that of direct probe atmospheric pressure chemical ionization (DP-APCI). Conductivity and adsorption were maintained by utilizing a corona pin functionalized with a multi-walled carbon nanotube (MWCNT) thread. Quantitative work with the nanoCFI source with a designed corona discharge pin insert demonstrated linearity (R² up to 0.97) for three target PAHs with phenanthrene as the internal standard.

  10. Eutectic Experiment Development for Space Processing

    NASA Technical Reports Server (NTRS)

    Hopkins, R. H.

    1972-01-01

    A ground-based test plan and a specimen-evaluation scheme have been developed for the aluminum-copper eutectic solidification experiment to be run in the M518 multipurpose electric furnace during the Skylab mission. Besides thermal and solidification studies, a detailed description is given of the quantitative metallographic technique which is appropriate for characterizing eutectic structures. This method should prove a key tool for evaluating specimen microstructure, which is the most sensitive indicator of changes produced during solidification. It has been recommended that single-grain pre-frozen eutectic specimens be used to simplify microstructural evaluation and to eliminate any porosity in the as-cast eutectic specimens. High-purity (99.999%) materials from one supplier should be employed for all experiments. Laboratory studies indicate that porosity occurs in the MRC as-cast eutectic ingots but that this porosity can be eliminated by directional freezing. Chemical analysis shows that the MRC ingots are slightly Al rich and contain about 0.03% impurity. Because of the impurity content, the lower cooldown rate (1.2 C/min) should be used for eutectic freezing if MRC material is used in the M518 furnace.

  11. Evaluating the Reliability of Emergency Response Systems for Large-Scale Incident Operations

    PubMed Central

    Jackson, Brian A.; Faith, Kay Sullivan; Willis, Henry H.

    2012-01-01

    The ability to measure emergency preparedness—to predict the likely performance of emergency response systems in future events—is critical for policy analysis in homeland security. Yet it remains difficult to know how prepared a response system is to deal with large-scale incidents, whether it be a natural disaster, terrorist attack, or industrial or transportation accident. This research draws on the fields of systems analysis and engineering to apply the concept of system reliability to the evaluation of emergency response systems. The authors describe a method for modeling an emergency response system; identifying how individual parts of the system might fail; and assessing the likelihood of each failure and the severity of its effects on the overall response effort. The authors walk the reader through two applications of this method: a simplified example in which responders must deliver medical treatment to a certain number of people in a specified time window, and a more complex scenario involving the release of chlorine gas. The authors also describe an exploratory analysis in which they parsed a set of after-action reports describing real-world incidents, to demonstrate how this method can be used to quantitatively analyze data on past response performance. The authors conclude with a discussion of how this method of measuring emergency response system reliability could inform policy discussion of emergency preparedness, how system reliability might be improved, and the costs of doing so. PMID:28083267
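
    The gate logic behind such a reliability model is compact: independent basic-event failure probabilities combine through OR gates (any input failing fails the output) and AND gates (all redundant inputs must fail). A hypothetical mini fault tree with invented events and probabilities:

```python
def p_or(*ps):
    """Failure probability when ANY independent input failing causes failure."""
    prod = 1.0
    for p in ps:
        prod *= (1.0 - p)
    return 1.0 - prod

def p_and(*ps):
    """Failure probability when ALL redundant independent inputs must fail."""
    prod = 1.0
    for p in ps:
        prod *= p
    return prod

# invented basic events for a toy medical-response tree
p_notify   = 0.01                # alerting responders fails
p_dispatch = p_and(0.05, 0.05)   # two redundant dispatch channels both fail
p_treat    = p_or(0.02, 0.01)    # staff shortage OR supply shortage
p_system   = p_or(p_notify, p_dispatch, p_treat)
print(round(p_system, 5))
```

    In a full model these leaf probabilities would come from data or expert elicitation, and the tree would be deep enough to show which failure modes dominate.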

  12. The Renormalization Group and Its Applications to Generating Coarse-Grained Models of Large Biological Molecular Systems.

    PubMed

    Koehl, Patrice; Poitevin, Frédéric; Navaza, Rafael; Delarue, Marc

    2017-03-14

    Understanding the dynamics of biomolecules is the key to understanding their biological activities. Computational methods ranging from all-atom molecular dynamics simulations to coarse-grained normal-mode analyses based on simplified elastic networks provide a general framework to studying these dynamics. Despite recent successes in studying very large systems with up to 100,000,000 atoms, those methods are currently limited to studying small- to medium-sized molecular systems due to computational limitations. One solution to circumvent these limitations is to reduce the size of the system under study. In this paper, we argue that coarse-graining, the standard approach to such size reduction, must define a hierarchy of models of decreasing sizes that are consistent with each other, i.e., that each model contains the information of the dynamics of its predecessor. We propose a new method, Decimate, for generating such a hierarchy within the context of elastic networks for normal-mode analysis. This method is based on the concept of the renormalization group developed in statistical physics. We highlight the details of its implementation, with a special focus on its scalability to large systems of up to millions of atoms. We illustrate its application on two large systems, the capsid of a virus and the ribosome translation complex. We show that highly decimated representations of those systems, containing down to 1% of their original number of atoms, still capture qualitatively and quantitatively their dynamics. Decimate is available as an open-source resource.

  13. A simplified boron diffusion for preparing the silicon single crystal p-n junction as an educational device

    NASA Astrophysics Data System (ADS)

    Shiota, Koki; Kai, Kazuho; Nagaoka, Shiro; Tsuji, Takuto; Wakahara, Akihiro; Rusop, Mohamad

    2016-07-01

An educational method that includes designing, fabricating, and evaluating actual semiconductor devices alongside learning the theory is one of the best ways to build a fundamental understanding of device physics and to cultivate the ability to generate original device ideas from that knowledge. In this paper, a simplified boron thermal diffusion process using a sol-gel source under a normal air environment is proposed, based on a simple hypothesis, and its reproducibility and reliability are investigated with the aim of simplifying the diffusion step in fabricating educational devices such as p-n junctions, bipolar transistors, and pMOS devices. With this method, a p+ region was successfully formed on the surface of n-type silicon substrates with good reproducibility, and the resulting p-n junctions showed good rectification properties. These results indicate that the process could be applied to fabricating pMOS or bipolar transistors, and suggest a variety of possible applications in education to foster the imagination needed to conceive new devices.

  14. Super-resolution with an SLM and two intensity images

    NASA Astrophysics Data System (ADS)

    Alcalá Ochoa, Noé; de León, Y. Ponce

    2018-06-01

A method is reported that may simplify the optical setups used to achieve super-resolution through the amplitude multiplication of two waves. To this end, we decompose a super-resolving pupil into two complex masks and, with the aid of a spatial light modulator (LCoS), obtain two intensity images that are subtracted. With this proposal, the traditional experimental optical setups are considerably simplified, with the additional benefit that different masks can be used without realigning the setup each time.

  15. Systematic comparison of the behaviors produced by computational models of epileptic neocortex.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warlaumont, A. S.; Lee, H. C.; Benayoun, M.

    2010-12-01

Two existing models of brain dynamics in epilepsy, one detailed (i.e., realistic) and one abstract (i.e., simplified) are compared in terms of behavioral range and match to in vitro mouse recordings. A new method is introduced for comparing across computational models that may have very different forms. First, high-level metrics were extracted from model and in vitro output time series. A principal components analysis was then performed over these metrics to obtain a reduced set of derived features. These features define a low-dimensional behavior space in which quantitative measures of behavioral range and degree of match to real data can be obtained. The detailed and abstract models and the mouse recordings overlapped considerably in behavior space. Both the range of behaviors and similarity to mouse data were similar between the detailed and abstract models. When no high-level metrics were used and principal components analysis was computed over raw time series, the models overlapped minimally with the mouse recordings. The method introduced here is suitable for comparing across different kinds of model data and across real brain recordings. It appears that, despite differences in form and computational expense, detailed and abstract models do not necessarily differ in their behaviors.
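The metrics-then-PCA pipeline described here can be sketched with synthetic data; the array sizes and random "metrics" below are illustrative stand-ins, not the study's recordings.

```python
import numpy as np

# Synthetic stand-in for high-level metrics: 20 recordings x 5 metrics.
rng = np.random.default_rng(0)
metrics = rng.normal(size=(20, 5))

# PCA via SVD of the column-centered metric matrix.
centered = metrics - metrics.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
features = centered @ Vt[:2].T             # coordinates in a 2D "behavior space"
explained = (s[:2] ** 2) / (s ** 2).sum()  # variance captured by the two axes
# Overlap between models and real recordings can then be measured in this space.
```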

  16. Adaptive online self-gating (ADIOS) for free-breathing noncontrast renal MR angiography.

    PubMed

    Xie, Yibin; Fan, Zhaoyang; Saouaf, Rola; Natsuaki, Yutaka; Laub, Gerhard; Li, Debiao

    2015-01-01

    To develop a respiratory self-gating method, adaptive online self-gating (ADIOS), for noncontrast MR angiography (NC MRA) of renal arteries to overcome some limitations of current free-breathing methods. A NC MRA pulse sequence for online respiratory self-gating was developed based on three-dimensional balanced steady-state free precession (bSSFP) and slab-selective inversion-recovery. Motion information was derived directly from the slab being imaged for online gating. Scan efficiency was maintained by an automatic adaptive online algorithm. Qualitative and quantitative assessments of image quality were performed and results were compared with conventional diaphragm navigator (NAV). NC MRA imaging was successfully completed in all subjects (n = 15). Similarly good image quality was observed in the proximal-middle renal arteries with ADIOS compared with NAV. Superior image quality was observed in the middle-distal renal arteries in the right kidneys with no NAV-induced artifacts. Maximal visible artery length was significantly longer with ADIOS versus NAV in the right kidneys. NAV setup was completely eliminated and scan time was significantly shorter with ADIOS on average compared with NAV. The proposed ADIOS technique for noncontrast MRA provides high-quality visualization of renal arteries with no diaphragm navigator-induced artifacts, simplified setup, and shorter scan time. © 2014 Wiley Periodicals, Inc.

  17. Reductive amination-assisted quantitation of tamoxifen and its metabolites by liquid phase chromatography tandem mass spectrometry.

    PubMed

    Liang, Shih-Shin; Wang, Tsu-Nai; Chiu, Chien-Chih; Kuo, Po-Lin; Huang, Mei-Fang; Liu, Meng-Chieh; Tsai, Eing-Mei

    2016-02-19

Tamoxifen, a hormonal therapy drug against estrogen receptor-positive breast cancer, can be metabolized by cytochrome P450 enzymes such as CYP3A4 and CYP3A5, and converted to N-desmethyltamoxifen, which is subsequently metabolized by CYP2D6 and converted to form 4-hydroxy-N-desmethyltamoxifen (endoxifen). Conventional mass spectrometry (MS) analyses of tamoxifen and its metabolites require isotopic internal standards (ISs). In this study, endoxifen and N-desmethyltamoxifen amine groups were modified by reductive amination with formaldehyde-D2 to produce new metabolite molecules. Both endoxifen and N-desmethyltamoxifen generated their corresponding D2-methyl modified analogs. This method is expected to simplify MS detection and overcome the difficulty in selecting adequate ISs when tamoxifen metabolites are analyzed by absolute quantification. It identified tamoxifen, D2-methyl modified endoxifen, and D2-methyl modified N-desmethyltamoxifen with a linearity ranging from 2 to 5000 ng/mL with correlation coefficient (R(2)) values of 0.9868, 0.9849, and 0.9880, respectively. Furthermore, this reductive amination-based method may enhance the signal intensities of D2-methyl modified N-desmethyltamoxifen and endoxifen, thus facilitating MS detection. Copyright © 2016 Elsevier B.V. All rights reserved.
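The linearity check behind reported R(2) values of this kind reduces to an ordinary least-squares fit; the concentrations and detector responses below are made up for illustration, not the paper's tamoxifen data.

```python
# Calibration-curve linearity sketch (synthetic data, slope ~2 response/conc).
def linear_r2(x, y):
    """Least-squares slope, intercept, and coefficient of determination."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1.0 - ss_res / ss_tot

conc = [2, 10, 100, 1000, 5000]               # ng/mL standards
area = [4.1, 19.8, 201.0, 1995.0, 10010.0]    # detector response (illustrative)
slope, intercept, r2 = linear_r2(conc, area)
```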

  18. Molecular dynamics approach to water structure of HII mesophase of monoolein

    NASA Astrophysics Data System (ADS)

    Kolev, Vesselin; Ivanova, Anela; Madjarova, Galia; Aserin, Abraham; Garti, Nissim

    2012-02-01

The goal of the present work is to study theoretically the structure of water inside the water cylinder of the inverse hexagonal mesophase (HII) of glyceryl monooleate (monoolein, GMO), using the method of molecular dynamics. To simplify the computational model, a fixed structure of the GMO tube is maintained. The non-standard cylindrical geometry of the system required the development and application of a novel method for obtaining the starting distribution of water molecules. A predictor-corrector scheme is employed for generation of the initial density of water. Molecular dynamics calculations are performed at constant volume and temperature (NVT ensemble) with 1D periodic boundary conditions applied. During the simulations the lipid structure is kept fixed, while the dynamics of water is unrestrained. Distribution of hydrogen bonds and density as well as radial distribution of water molecules across the water cylinder show the presence of water structure deep in the cylinder (about 6 Å below the GMO heads). The obtained results may help in understanding the role of water structure in the processes of insertion of external molecules inside the GMO/water system. The present work is semi-quantitative in character and should be considered as the initial stage of more comprehensive future theoretical studies.
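A radial density profile of the kind used here to characterize water across the cylinder can be sketched as follows; the points are uniform random samples in a disc standing in for an MD trajectory, so a flat profile is the expected result.

```python
import math
import random

# Radial density histogram over a cylinder cross-section (illustrative data).
random.seed(0)
R, n_pts, n_bins = 10.0, 20000, 10            # radius (Å), samples, bins
counts = [0] * n_bins
for _ in range(n_pts):
    r = R * math.sqrt(random.random())        # uniform sampling over the disc
    counts[min(int(r / R * n_bins), n_bins - 1)] += 1

# Normalise each bin by its annulus area so a uniform fluid gives a flat profile.
density = []
for i in range(n_bins):
    r_in, r_out = i * R / n_bins, (i + 1) * R / n_bins
    area = math.pi * (r_out ** 2 - r_in ** 2)
    density.append(counts[i] / area)
```

With real trajectory coordinates, deviations from flatness (e.g. near the lipid heads) reveal the water structuring the abstract reports.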

  19. Detection and quantification of genetically modified organisms using very short, locked nucleic acid TaqMan probes.

    PubMed

    Salvi, Sergio; D'Orso, Fabio; Morelli, Giorgio

    2008-06-25

    Many countries have introduced mandatory labeling requirements on foods derived from genetically modified organisms (GMOs). Real-time quantitative polymerase chain reaction (PCR) based upon the TaqMan probe chemistry has become the method mostly used to support these regulations; moreover, event-specific PCR is the preferred method in GMO detection because of its high specificity based on the flanking sequence of the exogenous integrant. The aim of this study was to evaluate the use of very short (eight-nucleotide long), locked nucleic acid (LNA) TaqMan probes in 5'-nuclease PCR assays for the detection and quantification of GMOs. Classic TaqMan and LNA TaqMan probes were compared for the analysis of the maize MON810 transgene. The performance of the two types of probes was tested on the maize endogenous reference gene hmga, the CaMV 35S promoter, and the hsp70/cryIA(b) construct as well as for the event-specific 5'-integration junction of MON810, using plasmids as standard reference molecules. The results of our study demonstrate that the LNA 5'-nuclease PCR assays represent a valid and reliable analytical system for the detection and quantification of transgenes. Application of very short LNA TaqMan probes to GMO quantification can simplify the design of 5'-nuclease assays.
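Quantification against plasmid standards in real-time PCR typically reduces to a Ct-versus-log10(copies) standard curve; the Ct values below are idealized (exact doubling each cycle) and are not measurements from the paper.

```python
import math

# qPCR standard-curve sketch: (copies, Ct) pairs from a plasmid dilution series.
standards = [(1e6, 15.0), (1e5, 18.32), (1e4, 21.64), (1e3, 24.96)]
logs = [math.log10(c) for c, _ in standards]
cts = [ct for _, ct in standards]

n = len(logs)
mx, my = sum(logs) / n, sum(cts) / n
slope = sum((x - mx) * (y - my) for x, y in zip(logs, cts)) / \
        sum((x - mx) ** 2 for x in logs)
intercept = my - slope * mx
efficiency = 10 ** (-1.0 / slope) - 1.0   # 1.0 means 100% (doubling per cycle)

def copies_from_ct(ct):
    """Invert the standard curve to estimate copies in an unknown sample."""
    return 10 ** ((ct - intercept) / slope)
```

A slope near -3.32 corresponds to ~100% amplification efficiency; GMO content is then typically reported as the ratio of transgene copies to endogenous reference-gene copies.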

  20. A Simplified Approach for Simultaneous Measurements of Wavefront Velocity and Curvature in the Heart Using Activation Times.

    PubMed

    Mazeh, Nachaat; Haines, David E; Kay, Matthew W; Roth, Bradley J

    2013-12-01

The velocity and curvature of a wave front are important factors governing the propagation of electrical activity through cardiac tissue, particularly during heart arrhythmias of clinical importance such as fibrillation. Presently, no simple computational model exists to determine these values simultaneously. The proposed model uses the arrival times at four or five sites to determine the wave front speed (v), direction (θ), and radius of curvature (ROC) (r0). If the arrival times are measured, then v, θ, and r0 can be found from differences in arrival times and the distance between these sites. During isotropic conduction, we found good correlation between measured values of the ROC r0 and the distance from the unipolar stimulus (r = 0.9043, p < 0.0001). The conduction velocity (m/s) was correlated (r = 0.998, p < 0.0001) using our method (mean = 0.2403, SD = 0.0533) and an empirical method (mean = 0.2352, SD = 0.0560). The model was applied to a condition of anisotropy and a complex case of reentry with a high-voltage extra stimulus. Again, results show good correlation between our simplified approach and established methods for multiple wavefront morphologies. In conclusion, insignificant measurement errors were observed between this simplified approach and an approach that was more computationally demanding. Accuracy was maintained when ε (ε = b/r0, the ratio of recording-site spacing to the wave front's ROC) was between 0.001 and 0.5. The present simplified model can be applied to a variety of clinical conditions to predict behavior of planar, elliptical, and reentrant wave fronts. It may be used to study the genesis and propagation of rotors in human arrhythmias and could lead to rotor mapping using low-density endocardial recording electrodes.
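For a locally planar wave front, speed and direction already follow from arrival-time differences alone; the sketch below solves that plane-wave case at three sites with synthetic geometry and times (the paper's model additionally recovers the radius of curvature from a fourth or fifth site).

```python
import math

# Plane-wave fit: t_i - t_0 = sx*(x_i - x_0) + sy*(y_i - y_0),
# where (sx, sy) is the slowness vector (illustrative sites/times).
sites = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]   # mm
times = [0.0, 2.0, 0.0]                        # ms: wave along +x at 0.5 mm/ms

(x0, y0), (x1, y1), (x2, y2) = sites
t0, t1, t2 = times
det = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)
sx = ((t1 - t0) * (y2 - y0) - (t2 - t0) * (y1 - y0)) / det
sy = ((x1 - x0) * (t2 - t0) - (x2 - x0) * (t1 - t0)) / det

speed = 1.0 / math.hypot(sx, sy)               # mm/ms
theta = math.atan2(sy, sx)                     # propagation direction (rad)
```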

  1. Java Programs for Using Newmark's Method and Simplified Decoupled Analysis to Model Slope Performance During Earthquakes

    USGS Publications Warehouse

    Jibson, Randall W.; Jibson, Matthew W.

    2003-01-01

    Landslides typically cause a large proportion of earthquake damage, and the ability to predict slope performance during earthquakes is important for many types of seismic-hazard analysis and for the design of engineered slopes. Newmark's method for modeling a landslide as a rigid-plastic block sliding on an inclined plane provides a useful method for predicting approximate landslide displacements. Newmark's method estimates the displacement of a potential landslide block as it is subjected to earthquake shaking from a specific strong-motion record (earthquake acceleration-time history). A modification of Newmark's method, decoupled analysis, allows modeling landslides that are not assumed to be rigid blocks. This open-file report is available on CD-ROM and contains Java programs intended to facilitate performing both rigorous and simplified Newmark sliding-block analysis and a simplified model of decoupled analysis. For rigorous analysis, 2160 strong-motion records from 29 earthquakes are included along with a search interface for selecting records based on a wide variety of record properties. Utilities are available that allow users to add their own records to the program and use them for conducting Newmark analyses. Also included is a document containing detailed information about how to use Newmark's method to model dynamic slope performance. This program will run on any platform that supports the Java Runtime Environment (JRE) version 1.3, including Windows, Mac OSX, Linux, Solaris, etc. A minimum of 64 MB of available RAM is needed, and the fully installed program requires 400 MB of disk space.
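The rigid sliding-block integration at the heart of Newmark's method can be sketched in a few lines; the rectangular acceleration pulse and critical acceleration below are illustrative, not one of the report's 2160 strong-motion records.

```python
# Newmark rigid-block sketch: the block accumulates velocity only while
# ground acceleration exceeds the critical (yield) acceleration, then
# decelerates until sliding stops; displacement is the double integral.
def newmark_displacement(acc, dt, a_crit):
    v = d = 0.0
    for a in acc:
        # while sliding (v > 0), the net acceleration is a - a_crit,
        # even after the ground acceleration drops below the threshold
        rel = a - a_crit if (a > a_crit or v > 0.0) else 0.0
        v = max(0.0, v + rel * dt)   # one-directional sliding only
        d += v * dt
    return d

dt = 0.01                            # s
acc = [3.0] * 50 + [0.0] * 200       # m/s^2: 0.5 s pulse, then quiescence
disp = newmark_displacement(acc, dt, a_crit=1.0)
```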

  2. The Uncertainty of Mass Discharge Measurements Using Pumping Methods Under Simplified Conditions

    EPA Science Inventory

    Mass discharge measurements at contaminated sites have been used to assist with site management decisions, and can be divided into two broad categories: point-scale measurement techniques and pumping methods. Pumping methods can be sub-divided based on the pumping procedures use...

  3. 77 FR 5253 - Agency Information Collection Request. 60-Day Public Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-02

    ..., despite their different methods of assessing income or otherwise determining eligibility. CHIPRA.... The evaluation also provides an opportunity to understand other methods of simplified enrollment that states have been pursuing and to assess the benefits and potential costs of these methods compared to...

  4. Simplified methods of predicting aircraft rolling moments due to vortex encounters

    DOT National Transportation Integrated Search

    1977-05-01

Computational methods suitable for fast and accurate prediction of rolling moments on aircraft encountering wake vortices are presented. Appropriate modifications to strip theory are developed which account for the effects of finite wingspan. It is...

  5. A simplified counter diffusion method combined with a 1D simulation program for optimizing crystallization conditions.

    PubMed

    Tanaka, Hiroaki; Inaka, Koji; Sugiyama, Shigeru; Takahashi, Sachiko; Sano, Satoshi; Sato, Masaru; Yoshitomi, Susumu

    2004-01-01

A new protein crystallization method has been developed using a simplified counter-diffusion technique for optimizing crystallization conditions. It is composed of only a single capillary, gel in a silicone tube, and a screw-top test tube, all of which are readily available in the laboratory. The single capillary can continuously scan a wide range of crystallization conditions (combinations of precipitant and protein concentrations) unless crystallization occurs, which means that it corresponds to many drops in the vapor-diffusion method. The amounts of precipitant and protein solution required can be much smaller than in conventional methods. In this study, lysozyme and alpha-amylase were used as model proteins to demonstrate the efficiency of the method. In addition, one-dimensional (1-D) simulations of the crystal growth were performed based on the 1-D diffusion model. The optimized conditions can be applied as initial crystallization conditions both for other counter-diffusion methods with the Granada Crystallization Box (GCB) and, after some modification, for the vapor-diffusion method.
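The transport side of a 1-D diffusion model of this kind can be sketched with an explicit finite-difference step; the grid spacing, diffusivity, and boundary treatment below are illustrative and are not the paper's simulation program.

```python
# Explicit 1D diffusion sketch: precipitant entering a capillary from one end
# (fixed-concentration source at c[0], far end held at zero).
def diffuse_1d(c, D, dx, dt, steps):
    r = D * dt / dx ** 2
    assert r <= 0.5, "explicit scheme stability limit"
    c = list(c)
    for _ in range(steps):
        new = c[:]                     # endpoints stay fixed (Dirichlet)
        for i in range(1, len(c) - 1):
            new[i] = c[i] + r * (c[i - 1] - 2 * c[i] + c[i + 1])
        c = new
    return c

# D in m^2/s, dx in m, dt in s (illustrative magnitudes for slow diffusion).
profile = diffuse_1d([1.0] + [0.0] * 20, D=1e-9, dx=1e-3, dt=100.0, steps=200)
```

Scanning the resulting concentration-versus-position profile over time is what lets a single capillary stand in for many vapor-diffusion drops.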

  6. A gravimetric simplified method for nucleated marrow cell counting using an injection needle.

    PubMed

    Saitoh, Toshiki; Fang, Liu; Matsumoto, Kiyoshi

    2005-08-01

A simplified gravimetric marrow cell counting method for rats is proposed as a regular screening method. After fresh bone marrow was aspirated with an injection needle, the marrow cells were suspended in carbonate-buffered saline. The nucleated marrow cell count (NMC) was measured by an automated multi-blood cell analyzer. When this gravimetric method was applied to rats, the NMC of the left and right femurs had essentially identical values when samples were handled carefully. The NMC at 4 to 10 weeks of age in male and female Crj:CD(SD)IGS rats was 2.72 to 1.96 and 2.75 to 1.98 (x10(6) counts/mg), respectively. More useful information for evaluation could be obtained by using this gravimetric method in addition to myelogram examination. However, difficulties with this method include low NMC due to blood contamination and variation of NMC due to handling. Therefore, the utility of this gravimetric method for screening will be clarified by the accumulation of data from myelotoxicity studies using this method.

  7. 76 FR 45673 - Methods of Accounting Used by Corporations That Acquire the Assets of Other Corporations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-01

    ... DEPARTMENT OF THE TREASURY Internal Revenue Service 26 CFR Part 1 [TD 9534] RIN 1545-BD81 Methods... regulations relating to the methods of accounting, including the inventory methods, to be used by corporations... liquidations. These regulations clarify and simplify the rules regarding the accounting methods to be used...

  8. An immersed boundary-simplified sphere function-based gas kinetic scheme for simulation of 3D incompressible flows

    NASA Astrophysics Data System (ADS)

    Yang, L. M.; Shu, C.; Yang, W. M.; Wang, Y.; Wu, J.

    2017-08-01

In this work, an immersed boundary-simplified sphere function-based gas kinetic scheme (SGKS) is presented for the simulation of 3D incompressible flows with curved and moving boundaries. First, the SGKS [Yang et al., "A three-dimensional explicit sphere function-based gas-kinetic flux solver for simulation of inviscid compressible flows," J. Comput. Phys. 295, 322 (2015) and Yang et al., "Development of discrete gas kinetic scheme for simulation of 3D viscous incompressible and compressible flows," J. Comput. Phys. 319, 129 (2016)], which is usually applied to compressible flows, is simplified to improve computational efficiency for incompressible flows. In the original SGKS, the integral domain along the spherical surface for computing conservative variables and numerical fluxes is usually not symmetric at the cell interface, which makes the expression of the numerical fluxes relatively complicated. For incompressible flows, the sphere at the cell interface can be approximately considered symmetric, as shown in this work. In addition, the energy equation is usually not needed for the simulation of incompressible isothermal flows. With these simplifications, simple and explicit formulations for the conservative variables and numerical fluxes at the cell interface can be obtained. Second, to effectively implement the no-slip boundary condition for flow problems with complex geometry and moving boundaries, the implicit boundary condition-enforced immersed boundary method [Wu and Shu, "Implicit velocity correction-based immersed boundary-lattice Boltzmann method and its applications," J. Comput. Phys. 228, 1963 (2009)] is introduced into the simplified SGKS: the flow field is solved by the simplified SGKS without considering the presence of the immersed body, and the no-slip boundary condition is then enforced by the immersed boundary method. The accuracy and efficiency of the present scheme are validated by simulating decaying vortex flow, flow past a stationary and a rotating sphere, flow past a stationary torus, and flows over dragonfly flight.

  9. Simplified estimation of age-specific reference intervals for skewed data.

    PubMed

    Wright, E M; Royston, P

    1997-12-30

Age-specific reference intervals are commonly used in medical screening and clinical practice, where interest lies in the detection of extreme values. Many different statistical approaches have been published on this topic. The advantages of a parametric method are that it necessarily produces smooth centile curves, estimates the entire density, and provides an explicit formula for the centiles. The method proposed here is a simplified version of a recent approach proposed by Royston and Wright. Basic transformations of the data and multiple regression techniques are combined to model the mean, standard deviation and skewness. Using these simple tools, which are implemented in almost all statistical computer packages, age-specific reference intervals may be obtained. The scope of the method is illustrated by fitting models to several real data sets and assessing each model using goodness-of-fit techniques.
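Once the mean and SD are modelled as functions of age, centiles are read directly off normal quantiles; the regression coefficients below are invented for illustration and are not fitted to any of the paper's data sets.

```python
# Age-specific reference-interval sketch: linear regression models for the
# mean and SD (hypothetical coefficients), normal model for the centiles.
def centile(age, z):
    mean = 2.0 + 0.5 * age            # fitted mean as a function of age
    sd = 0.3 + 0.02 * age             # fitted SD as a function of age
    return mean + z * sd

z95 = 1.645                           # one-sided 95th centile of N(0, 1)
lower, upper = centile(30, -z95), centile(30, z95)
```

Skewness would be handled by applying the same idea on a transformed scale (e.g. after a log or power transformation), as in the Royston and Wright approach.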

  10. A simplified model for dynamics of cell rolling and cell-surface adhesion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cimrák, Ivan, E-mail: ivan.cimrak@fri.uniza.sk

    2015-03-10

We propose a three dimensional model for the adhesion and rolling of biological cells on surfaces. We study cells moving in shear flow above a wall to which they can adhere via specific receptor-ligand bonds based on receptors from selectin as well as integrin family. The computational fluid dynamics are governed by the lattice-Boltzmann method. The movement and the deformation of the cells is described by the immersed boundary method. Both methods are fully coupled by implementing a two-way fluid-structure interaction. The adhesion mechanism is modelled by adhesive bonds including stochastic rules for their creation and rupture. We explore a simplified model with dissociation rate independent of the length of the bonds. We demonstrate that this model is able to resemble the mesoscopic properties, such as velocity of rolling cells.
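The stochastic bond rules can be sketched as per-time-step binding and rupture probabilities; the rates, receptor count, and time step below are invented, and the constant k_off mirrors only the length-independent dissociation of the simplified model mentioned in the abstract.

```python
import math
import random

# Stochastic receptor-ligand bonds with constant rates (illustrative values).
random.seed(1)
k_on, k_off, dt = 5.0, 1.0, 0.002
p_on = 1.0 - math.exp(-k_on * dt)    # P(bond created in one step | free)
p_off = 1.0 - math.exp(-k_off * dt)  # P(bond ruptured in one step | bound)

n = 50                               # receptors on the cell surface
bound = [False] * n
history = []
for step in range(5000):
    for i in range(n):
        if bound[i]:
            bound[i] = random.random() >= p_off   # rupture check
        else:
            bound[i] = random.random() < p_on     # creation check
    history.append(sum(bound))

# Time-averaged bound fraction approaches k_on / (k_on + k_off) ~ 0.83.
frac = sum(history[2500:]) / (2500 * n)
```

In the full model these probabilities would additionally couple to the flow-driven motion of the membrane points carrying the receptors.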

  11. GoPros™ as an underwater photogrammetry tool for citizen science

    PubMed Central

    David, Peter A.; Dupont, Sally F.; Mathewson, Ciaran P.; O’Neill, Samuel J.; Powell, Nicholas N.; Williamson, Jane E.

    2016-01-01

    Citizen science can increase the scope of research in the marine environment; however, it suffers from necessitating specialized training and simplified methodologies that reduce research output. This paper presents a simplified, novel survey methodology for citizen scientists, which combines GoPro imagery and structure from motion to construct an ortho-corrected 3D model of habitats for analysis. Results using a coral reef habitat were compared to surveys conducted with traditional snorkelling methods for benthic cover, holothurian counts, and coral health. Results were comparable between the two methods, and structure from motion allows the results to be analysed off-site for any chosen visual analysis. The GoPro method outlined in this study is thus an effective tool for citizen science in the marine environment, especially for comparing changes in coral cover or volume over time. PMID:27168973

  12. GoPros™ as an underwater photogrammetry tool for citizen science.

    PubMed

    Raoult, Vincent; David, Peter A; Dupont, Sally F; Mathewson, Ciaran P; O'Neill, Samuel J; Powell, Nicholas N; Williamson, Jane E

    2016-01-01

    Citizen science can increase the scope of research in the marine environment; however, it suffers from necessitating specialized training and simplified methodologies that reduce research output. This paper presents a simplified, novel survey methodology for citizen scientists, which combines GoPro imagery and structure from motion to construct an ortho-corrected 3D model of habitats for analysis. Results using a coral reef habitat were compared to surveys conducted with traditional snorkelling methods for benthic cover, holothurian counts, and coral health. Results were comparable between the two methods, and structure from motion allows the results to be analysed off-site for any chosen visual analysis. The GoPro method outlined in this study is thus an effective tool for citizen science in the marine environment, especially for comparing changes in coral cover or volume over time.

  13. Simplification of the DPPH assay for estimating the antioxidant activity of wine and wine by-products.

    PubMed

    Carmona-Jiménez, Yolanda; García-Moreno, M Valme; Igartuburu, Jose M; Garcia Barroso, Carmelo

    2014-12-15

    The DPPH assay is one of the most commonly employed methods for measuring antioxidant activity. Even though this method is considered very simple and efficient, it does present various limitations which make it complicated to perform. The range of linearity between the DPPH inhibition percentage and sample concentration has been studied with a view to simplifying the method for characterising samples of wine origin. It has been concluded that all the samples are linear in a range of inhibition below 40%, which allows the analysis to be simplified. A new parameter more appropriate for the simplification, the EC20, has been proposed to express the assay results. Additionally, the reaction time was analysed with the object of avoiding the need for kinetic studies in the method. The simplifications considered offer a more functional method, without significant errors, which could be used for routine analysis. Copyright © 2014 Elsevier Ltd. All rights reserved.
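Restricting the fit to the linear region below 40% inhibition, the EC20 follows from a straight-line fit; the concentration-inhibition points below are synthetic, not the paper's wine measurements.

```python
# EC20 sketch: fit inhibition% vs concentration in the linear (<40%) region,
# then solve the fitted line for 20% inhibition (illustrative data).
def ec20(conc, inhib):
    pts = [(c, i) for c, i in zip(conc, inhib) if i < 40.0]  # linear range
    n = len(pts)
    mx = sum(c for c, _ in pts) / n
    my = sum(i for _, i in pts) / n
    slope = sum((c - mx) * (i - my) for c, i in pts) / \
            sum((c - mx) ** 2 for c, _ in pts)
    intercept = my - slope * mx
    return (20.0 - intercept) / slope

conc = [5, 10, 20, 40, 80]               # mg/L
inhib = [5.2, 10.1, 19.8, 39.9, 62.0]    # % DPPH inhibition
value = ec20(conc, inhib)
```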

  14. Environmental and human monitoring of Americium-241 utilizing extraction chromatography and alpha-spectrometry.

    PubMed

    Goldstein, S J; Hensley, C A; Armenta, C E; Peters, R J

    1997-03-01

    Recent developments in extraction chromatography have simplified the separation of americium from complex matrices in preparation for alpha-spectroscopy relative to traditional methods. Here we present results of procedures developed/adapted for water, air, and bioassay samples with less than 1 g of inorganic residue. Prior analytical methods required the use of a complex, multistage procedure for separation of americium from these matrices. The newer, simplified procedure requires only a single 2 mL extraction chromatographic separation for isolation of Am and lanthanides from other components of the sample. This method has been implemented on an extensive variety of "real" environmental and bioassay samples from the Los Alamos area, and consistently reliable and accurate results with appropriate detection limits have been obtained. The new method increases analytical throughput by a factor of approximately 2 and decreases environmental hazards from acid and mixed-waste generation relative to the prior technique. Analytical accuracy, reproducibility, and reliability are also significantly improved over the more complex and laborious method used previously.

  15. An IMU-to-Body Alignment Method Applied to Human Gait Analysis.

    PubMed

    Vargas-Valencia, Laura Susana; Elias, Arlindo; Rocon, Eduardo; Bastos-Filho, Teodiano; Frizera, Anselmo

    2016-12-10

    This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis.

  16. 3DHZETRN: Inhomogeneous Geometry Issues

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.

    2017-01-01

    Historical methods for assessing radiation exposure inside complicated geometries for space applications were limited by computational constraints and lack of knowledge associated with nuclear processes occurring over a broad range of particles and energies. Various methods were developed and utilized to simplify geometric representations and enable coupling with simplified but efficient particle transport codes. Recent transport code development efforts, leading to 3DHZETRN, now enable such approximate methods to be carefully assessed to determine if past exposure analyses and validation efforts based on those approximate methods need to be revisited. In this work, historical methods of representing inhomogeneous spacecraft geometry for radiation protection analysis are first reviewed. Two inhomogeneous geometry cases, previously studied with 3DHZETRN and Monte Carlo codes, are considered with various levels of geometric approximation. Fluence, dose, and dose equivalent values are computed in all cases and compared. It is found that although these historical geometry approximations can induce large errors in neutron fluences up to 100 MeV, errors on dose and dose equivalent are modest (<10%) for the cases studied here.

  17. Extended Finite Element Method with Simplified Spherical Harmonics Approximation for the Forward Model of Optical Molecular Imaging

    PubMed Central

    Li, Wei; Yi, Huangjian; Zhang, Qitan; Chen, Duofang; Liang, Jimin

    2012-01-01

An extended finite element method (XFEM) for the forward model of 3D optical molecular imaging is developed with simplified spherical harmonics approximation (SPN). In the XFEM scheme of the SPN equations, the signed distance function is employed to accurately represent the internal tissue boundary, and it is then used to construct the enriched basis functions of the finite element scheme. The finite element calculation can therefore be carried out without time-consuming internal boundary mesh generation. Moreover, the overly fine mesh that would otherwise be required to conform to the complex tissue boundary, and the excess time cost it entails, can be avoided. XFEM thus eases application to tissues with complex internal structure and improves computational efficiency. Phantom and digital mouse experiments were carried out to validate the efficiency of the proposed method. Compared with the standard finite element method and the classical Monte Carlo (MC) method, the validation results show the merits and potential of XFEM for optical imaging. PMID:23227108

  18. Extended finite element method with simplified spherical harmonics approximation for the forward model of optical molecular imaging.

    PubMed

    Li, Wei; Yi, Huangjian; Zhang, Qitan; Chen, Duofang; Liang, Jimin

    2012-01-01

    An extended finite element method (XFEM) for the forward model of 3D optical molecular imaging is developed with the simplified spherical harmonics approximation (SP(N)). In the XFEM scheme for the SP(N) equations, a signed distance function is employed to represent the internal tissue boundary accurately and is then used to construct the enriched basis functions of the finite element scheme. The finite element calculation can therefore be carried out without time-consuming internal boundary mesh generation, and the overly fine mesh that would otherwise be needed to conform to a complex tissue boundary, at considerable computational cost, is avoided. XFEM is thus well suited to tissues with complex internal structure and improves computational efficiency. Phantom and digital mouse experiments were carried out to validate the proposed method. Compared with the standard finite element method and the classical Monte Carlo (MC) method, the validation results show the merits and potential of XFEM for optical imaging.
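    The signed-distance idea at the core of this record can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: the circular inclusion, the single bilinear quad element, and the Moes-style "ridge" enrichment form are all illustrative assumptions used to show how a level-set function can drive local enrichment without a boundary-conforming mesh.

```python
import numpy as np

def signed_distance_circle(points, center, radius):
    """Signed distance to a circular interface: negative inside, positive outside."""
    return np.linalg.norm(points - center, axis=1) - radius

def ridge_enrichment(phi_nodes, shape_values):
    """Ridge enrichment psi = sum_i N_i|phi_i| - |sum_i N_i phi_i|.

    psi vanishes on elements the interface does not cut, so the
    enrichment stays local to interface elements.
    """
    return shape_values @ np.abs(phi_nodes) - np.abs(shape_values @ phi_nodes)

# Four nodes of one bilinear quad element; the circle (center (0,0),
# radius 0.6) cuts the element, so nodal signed distances change sign.
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
phi = signed_distance_circle(nodes, center=np.array([0.0, 0.0]), radius=0.6)

# Bilinear shape functions evaluated at the element center (0.5, 0.5).
N_center = np.array([0.25, 0.25, 0.25, 0.25])
psi = ridge_enrichment(phi, N_center)

print(f"nodal signed distances: {phi}")
print(f"ridge enrichment at element center: {psi:.4f}")  # 0.3000
```

    Because the enrichment is built from the signed distance values at the existing nodes, the interface position enters the basis functions directly and no internal boundary mesh has to be generated.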

  19. 76 FR 22728 - Proposed Extension of Information Collection Request Submitted for Public Comment and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-22

    ... Collection Request Submitted for Public Comment and Recommendations; Alternative Method of Compliance for... alternative method of compliance for certain simplified employee pensions regulation (29 CFR 2520.104-49). A... Secretary to prescribe alternative methods of compliance with the reporting and disclosure requirements of...

  20. A Simplified and Inexpensive Method for Measuring Dissolved Oxygen in Water.

    ERIC Educational Resources Information Center

    Austin, John

    1983-01-01

    A modified Winkler method for determining dissolved oxygen in water is described. The method does not require use of a burette or starch indicator, is simple and inexpensive and can be used in the field or laboratory. Reagents/apparatus needed and specific procedures are included. (JN)
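    The arithmetic behind any Winkler-type determination is a short titration calculation. The sketch below uses the standard Winkler relationship, not the specific simplified protocol of this article (whose reagent quantities are not given in the abstract); the 0.025 N titrant normality and 200 mL sample volume are the conventional choices that make 1 mL of titrant correspond to 1 mg/L of dissolved oxygen.

```python
def dissolved_oxygen_mg_per_l(titrant_ml, titrant_normality, sample_ml):
    """DO (mg/L) = V_titrant * N * 8 * 1000 / V_sample.

    8 g/equivalent is the equivalent weight of oxygen; the factor
    1000 converts grams to milligrams per litre.
    """
    return titrant_ml * titrant_normality * 8.0 * 1000.0 / sample_ml

# With 0.025 N sodium thiosulfate and a 200 mL sample, each mL of
# titrant corresponds to exactly 1 mg/L of dissolved oxygen.
do = dissolved_oxygen_mg_per_l(titrant_ml=8.2, titrant_normality=0.025,
                               sample_ml=200.0)
print(f"dissolved oxygen: {do:.2f} mg/L")  # 8.20 mg/L
```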
