Zhou, Shuntai; Jones, Corbin; Mieczkowski, Piotr
2015-01-01
ABSTRACT Validating the sampling depth and reducing sequencing errors are critical for studies of viral populations using next-generation sequencing (NGS). We previously described the use of Primer ID to tag each viral RNA template with a block of degenerate nucleotides in the cDNA primer. We now show that low-abundance Primer IDs (offspring Primer IDs) are generated due to PCR/sequencing errors. These artifactual Primer IDs can be removed using a cutoff model for the number of reads required to make a template consensus sequence. We have modeled the fraction of sequences lost due to Primer ID resampling. For a typical sequencing run, less than 10% of the raw reads are lost to offspring Primer ID filtering and resampling. The remaining raw reads are used to correct for PCR resampling and sequencing errors. We also demonstrate that Primer ID reveals bias intrinsic to PCR, especially at low template input or utilization. cDNA synthesis and PCR convert ca. 20% of RNA templates into recoverable sequences, and 30-fold sequence coverage recovers most of these template sequences. We have directly measured the residual error rate to be around 1 in 10,000 nucleotides. We use this error rate and the Poisson distribution to define the cutoff to identify preexisting drug resistance mutations at low abundance in an HIV-infected subject. Collectively, these studies show that >90% of the raw sequence reads can be used to validate template sampling depth and to dramatically reduce the error rate in assessing a genetically diverse viral population using NGS. IMPORTANCE Although next-generation sequencing (NGS) has revolutionized sequencing strategies, it suffers from serious limitations in defining sequence heterogeneity in a genetically diverse population, such as HIV-1, due to PCR resampling and PCR/sequencing errors. The Primer ID approach reveals the true sampling depth and greatly reduces errors. Knowing the sampling depth allows the construction of a model of how to maximize the recovery of sequences from input templates and to reduce resampling of the Primer ID so that appropriate multiplexing can be included in the experimental design. With the defined sampling depth and measured error rate, we are able to assign cutoffs for the accurate detection of minority variants in viral populations. This approach allows the power of NGS to be realized without having to guess about sampling depth or to ignore the problem of PCR resampling, while also being able to correct most of the errors in the data set. PMID:26041299
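The cutoff logic described above can be illustrated with a small calculation. The sketch below assumes a Poisson model for the number of times a given erroneous base appears among N template consensus sequences at one position, using the roughly 1-in-10,000 residual error rate reported in the abstract; the function names and the significance level are illustrative, not the authors' published pipeline.

```python
import math

def poisson_sf(k, lam):
    """P(X >= k) for X ~ Poisson(lam), via the lower-tail sum."""
    cdf = sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k))
    return 1.0 - cdf

def detection_cutoff(n_templates, error_rate=1e-4, alpha=0.001):
    """Smallest mutation count unlikely (< alpha) to arise from errors alone."""
    lam = n_templates * error_rate          # expected erroneous copies at a site
    k = 1
    while poisson_sf(k, lam) >= alpha:
        k += 1
    return k

# e.g. with 5,000 template consensus sequences and a 1-in-10,000 error rate,
# observing at least this many copies of the same change at one position is
# attributed to a real minority variant rather than residual error.
print(detection_cutoff(5000))
```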
Ultra-deep mutant spectrum profiling: improving sequencing accuracy using overlapping read pairs.
Chen-Harris, Haiyin; Borucki, Monica K; Torres, Clinton; Slezak, Tom R; Allen, Jonathan E
2013-02-12
High throughput sequencing is beginning to make a transformative impact in the area of viral evolution. Deep sequencing has the potential to reveal the mutant spectrum within a viral sample at high resolution, thus enabling the close examination of viral mutational dynamics both within and between hosts. The challenge, however, is to accurately model the errors in the sequencing data and differentiate real viral mutations, particularly those that exist at low frequencies, from sequencing errors. We demonstrate that overlapping read pairs (ORP) -- generated by combining short fragment sequencing libraries and longer sequencing reads -- significantly reduce sequencing error rates and improve rare variant detection accuracy. Using this sequencing protocol and an error model optimized for variant detection, we are able to capture a large number of genetic mutations present within a viral population at ultra-low frequency levels (<0.05%). Our rare variant detection strategies have important implications beyond viral evolution and can be applied to any basic and clinical research area that requires the identification of rare mutations.
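The core of the ORP idea, requiring the two reads of a short fragment to agree before trusting a base call, can be sketched in a few lines. This is an illustrative toy that assumes the reverse read has already been reverse-complemented and that the pair overlaps end to end; it is not the authors' published error model.

```python
def merge_overlap(fwd, fwd_qual, rev, rev_qual):
    """Consensus over a fully overlapping read pair.

    fwd/rev are bases over the same fragment; quals are Phred scores.
    Disagreements are resolved toward the higher-quality base, or 'N'
    on a quality tie (the position is then treated as uninformative)."""
    assert len(fwd) == len(rev)
    consensus = []
    for b1, q1, b2, q2 in zip(fwd, fwd_qual, rev, rev_qual):
        if b1 == b2:
            consensus.append(b1)
        elif q1 > q2:
            consensus.append(b1)
        elif q2 > q1:
            consensus.append(b2)
        else:
            consensus.append("N")   # unresolved: likely a sequencing error
    return "".join(consensus)

print(merge_overlap("ACGT", [30, 30, 12, 30], "ACAT", [30, 30, 35, 30]))  # ACAT
```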
DOE Office of Scientific and Technical Information (OSTI.GOV)
Canini, Laetitia; DebRoy, Swati; Mariño, Zoe; ...
2014-06-10
HCV kinetic analysis and modeling during antiviral therapy have not been performed in decompensated cirrhotic patients awaiting liver transplantation. Here, viral and host parameters were compared in patients treated with daily intravenous silibinin (SIL) monotherapy for 7 days according to the severity of their liver disease. Data were obtained from 25 patients, 12 non-cirrhotic, 8 with compensated cirrhosis and 5 with decompensated cirrhosis. The standard biphasic model with time-varying SIL effectiveness (from 0 to εmax) was fit to viral kinetic data. Our results show that baseline viral load and age were significantly associated with the severity of liver disease (p<0.0001). A biphasic viral decline was observed in most patients, with a higher first-phase decline in patients with less severe liver disease. The maximal effectiveness, εmax, was significantly (p≤0.032) associated with increasing severity of liver disease (εmax[SE]=0.86[0.05], εmax=0.69[0.06] and εmax=0.59[0.1]). The second-phase decline slope was not significantly different among groups (mean 1.88±0.15 log10 IU/ml/wk, p=0.75), as was the rate of change of SIL effectiveness (k=2.12/day [standard error, SE=0.18/day]). The HCV-infected cell loss rate (δ[SE]=0.62/day [0.05/day]) was high and similar among groups. We conclude that the high loss rate of HCV-infected cells suggests that sufficient dose and duration of SIL might achieve viral suppression in advanced liver disease.
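For readers unfamiliar with the model class, the sketch below integrates a standard biphasic HCV kinetic model in which drug effectiveness ramps up as ε(t) = εmax(1 - e^(-kt)). It is a minimal illustration assuming a Neumann-type infected-cell/virus formulation with target cells held constant; the virion production rate and other constants are placeholders, not the fitted values from this study.

```python
import math

def simulate_biphasic(eps_max=0.86, k=2.12, delta=0.62, c=6.0,
                      v0=1e6, days=7.0, dt=0.01):
    """Euler integration of a standard biphasic HCV model:
         dI/dt = beta*T*V - delta*I
         dV/dt = (1 - eps(t))*p*I - c*V
       with eps(t) = eps_max*(1 - exp(-k*t)) and target cells held at
       their pre-treatment steady state. Time is in days."""
    p = 1.0                       # virions/cell/day (illustrative)
    i = c * v0 / p                # pre-treatment steady state: p*I0 = c*V0
    beta_T = delta * i / v0       # lumped beta*T so that beta*T*V0 = delta*I0
    v, t, trajectory = v0, 0.0, []
    while t <= days:
        eps = eps_max * (1.0 - math.exp(-k * t))
        di = beta_T * v - delta * i
        dv = (1.0 - eps) * p * i - c * v
        i, v, t = i + di * dt, v + dv * dt, t + dt
        trajectory.append((round(t, 2), math.log10(v)))
    return trajectory

print(simulate_biphasic()[-1])    # log10 viral load after 7 days of therapy
```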
Pathways to extinction: beyond the error threshold.
Manrubia, Susanna C; Domingo, Esteban; Lázaro, Ester
2010-06-27
Since the introduction of the quasispecies and the error catastrophe concepts for molecular evolution by Eigen and their subsequent application to viral populations, increased mutagenesis has become a common strategy to cause the extinction of viral infectivity. Nevertheless, the high complexity of virus populations has shown that viral extinction can occur through several other pathways apart from crossing an error threshold. Increases in the mutation rate enhance the appearance of defective forms and promote the selection of mechanisms that are able to counteract the accelerated appearance of mutations. Current models of viral evolution take into account more realistic scenarios that consider compensatory and lethal mutations, a highly redundant genotype-to-phenotype map, rough fitness landscapes relating phenotype and fitness, and where phenotype is described as a set of interdependent traits. Further, viral populations cannot be understood without specifying the characteristics of the environment where they evolve and adapt. Altogether, it turns out that the pathways through which viral quasispecies go extinct are multiple and diverse.
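For reference, the error threshold mentioned above is usually stated in the following classical form (a standard textbook relation from Eigen's quasispecies theory, reproduced here for orientation rather than taken from this particular review):

```latex
% Classical quasispecies error threshold (Eigen):
\[
  q^{L}\,\sigma > 1
  \quad\Longleftrightarrow\quad
  \mu < \mu_{c} \approx \frac{\ln \sigma}{L},
\]
% where q = 1 - \mu is the per-nucleotide copying fidelity, \mu the
% per-nucleotide mutation rate, L the genome length, and \sigma the
% selective superiority of the master sequence; mutagenesis that pushes
% \mu beyond \mu_{c} drives the population across the error threshold.
```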
Information Overload and Viral Marketing: Countermeasures and Strategies
NASA Astrophysics Data System (ADS)
Cheng, Jiesi; Sun, Aaron; Zeng, Daniel
Studying information diffusion through social networks has become an active research topic with important implications in viral marketing applications. One of the fundamental algorithmic problems related to viral marketing is the Influence Maximization (IM) problem: given a social network, which set of nodes should be considered by the viral marketer as the initial targets, in order to maximize the influence of the advertising message. In this work, we study the IM problem in an information-overloaded online social network. Information overload occurs when individuals receive more information than they can process, which can cause negative impacts on the overall marketing effectiveness. Many practical countermeasures have been proposed for alleviating the load of information on recipients. However, how these approaches can benefit viral marketers is not well understood. In our work, we have adapted the classic Information Cascade Model to incorporate information overload and study its countermeasures. Our results suggest that effective control of information overload has the potential to improve marketing effectiveness, but the targeting strategy should be re-designed in response to these countermeasures.
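As background for the IM problem itself, the sketch below runs the standard greedy heuristic with Monte Carlo spread estimation under an independent-cascade process. The toy graph, uniform activation probability, and function names are invented for illustration; the overload-aware model studied in the paper is more involved.

```python
import random

def simulate_cascade(graph, seeds, p=0.1, rng=random):
    """One independent-cascade run; returns the set of activated nodes."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def expected_spread(graph, seeds, p=0.1, runs=500):
    return sum(len(simulate_cascade(graph, seeds, p)) for _ in range(runs)) / runs

def greedy_im(graph, k, p=0.1, runs=500):
    """Pick k seeds, each time adding the node with the largest marginal gain."""
    seeds = []
    for _ in range(k):
        best, best_gain = None, -1.0
        base = expected_spread(graph, seeds, p, runs)
        for v in graph:
            if v in seeds:
                continue
            gain = expected_spread(graph, seeds + [v], p, runs) - base
            if gain > best_gain:
                best, best_gain = v, gain
        seeds.append(best)
    return seeds

toy = {0: [1, 2], 1: [2, 3], 2: [3, 4], 3: [4, 5], 4: [5], 5: []}
print(greedy_im(toy, k=2))
```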
Error catastrophe and phase transition in the empirical fitness landscape of HIV
NASA Astrophysics Data System (ADS)
Hart, Gregory R.; Ferguson, Andrew L.
2015-03-01
We have translated clinical sequence databases of the p6 HIV protein into an empirical fitness landscape quantifying viral replicative capacity as a function of the amino acid sequence. We show that the viral population resides close to a phase transition in sequence space corresponding to an "error catastrophe" beyond which there is lethal accumulation of mutations. Our model predicts that the phase transition may be induced by drug therapies that elevate the mutation rate, or by forcing mutations at particular amino acids. Applying immune pressure to any combination of killer T-cell targets cannot induce the transition, providing a rationale for why the viral protein can exist close to the error catastrophe without sustaining fatal fitness penalties due to adaptive immunity.
Lapadat-Tapolsky, M; Gabus, C; Rau, M; Darlix, J L
1997-05-02
Retroviral nucleocapsid (NC) protein is an integral part of the virion nucleocapsid where it coats the dimeric RNA genome. Due to its nucleic acid binding and annealing activities, NC protein directs the annealing of the tRNA primer to the primer binding site and greatly facilitates minus strand DNA elongation and transfer while protecting the nucleic acids against nuclease degradation. To understand the role of NCp7 in viral DNA synthesis, we examined the influence of NCp7 on self-primed versus primer-specific reverse transcription. The results show that HIV-1 NCp7 can extensively inhibit self-primed reverse transcription of viral and cellular RNAs while promoting primer-specific synthesis of proviral DNA. The role of NCp7 vis-a-vis the presence of mutations in the viral DNA during minus strand elongation was examined. NCp7 maximized the annealing between a cDNA(-) primer containing one to five consecutive errors and an RNA representing the 3' end of the genome. The ability of reverse transcriptase (RT) in the presence of NCp7 to subsequently extend the mutated primers depended upon the position of the mismatch within the primer:template complex. When the mutations were at the polymerisation site, primer extension by RT in the presence of NCp7 was very high, about 40% for one mismatch and 3% for five consecutive mismatches. Mutations within the DNA primer or at its 5' end had little effect on the extension of viral DNA by RT. Taken together, these results indicate that NCp7 plays major roles in proviral DNA synthesis within the virion core due to its ability to promote primer-specific proviral DNA synthesis while concurrently inhibiting non-specific reverse transcription of viral and cellular RNAs. Moreover, the observation that NCp7 enhances the incorporation of mutations during minus strand DNA elongation favours the notion that NCp7 is a factor contributing to the high mutation rate of HIV-1.
ERIC Educational Resources Information Center
Raykov, Tenko; Penev, Spiridon
2006-01-01
Unlike a substantial part of reliability literature in the past, this article is concerned with weighted combinations of a given set of congeneric measures with uncorrelated errors. The relationship between maximal coefficient alpha and maximal reliability for such composites is initially dealt with, and it is shown that the former is a lower…
Error correction and diversity analysis of population mixtures determined by NGS
Burroughs, Nigel J.; Evans, David J.; Ryabov, Eugene V.
2014-01-01
The impetus for this work was the need to analyse nucleotide diversity in a viral mix taken from honeybees. The paper has two findings. First, a method for correction of next generation sequencing error in the distribution of nucleotides at a site is developed. Second, a package of methods for assessment of nucleotide diversity is assembled. The error correction method is statistically based and works at the level of the nucleotide distribution rather than the level of individual nucleotides. The method relies on an error model and a sample of known viral genotypes that is used for model calibration. A compendium of existing and new diversity analysis tools is also presented, allowing hypotheses about diversity and mean diversity to be tested and associated confidence intervals to be calculated. The methods are illustrated using honeybee viral samples. Software in both Excel and Matlab and a guide are available at http://www2.warwick.ac.uk/fac/sci/systemsbiology/research/software/, the Warwick University Systems Biology Centre software download site. PMID:25405074
Efficient error correction for next-generation sequencing of viral amplicons
2012-01-01
Background Next-generation sequencing allows the analysis of an unprecedented number of viral sequence variants from infected patients, presenting a novel opportunity for understanding virus evolution, drug resistance and immune escape. However, sequencing in bulk is error prone. Thus, the generated data require error identification and correction. Most error-correction methods to date are not optimized for amplicon analysis and assume that the error rate is randomly distributed. Recent quality assessment of amplicon sequences obtained using 454-sequencing showed that the error rate is strongly linked to the presence and size of homopolymers, position in the sequence and length of the amplicon. All these parameters are strongly sequence specific and should be incorporated into the calibration of error-correction algorithms designed for amplicon sequencing. Results In this paper, we present two new efficient error correction algorithms optimized for viral amplicons: (i) k-mer-based error correction (KEC) and (ii) empirical frequency threshold (ET). Both were compared to a previously published clustering algorithm (SHORAH), in order to evaluate their relative performance on 24 experimental datasets obtained by 454-sequencing of amplicons with known sequences. All three algorithms show similar accuracy in finding true haplotypes. However, KEC and ET were significantly more efficient than SHORAH in removing false haplotypes and estimating the frequency of true ones. Conclusions Both algorithms, KEC and ET, are highly suitable for rapid recovery of error-free haplotypes obtained by 454-sequencing of amplicons from heterogeneous viruses. The implementations of the algorithms and data sets used for their testing are available at: http://alan.cs.gsu.edu/NGS/?q=content/pyrosequencing-error-correction-algorithm PMID:22759430
Efficient error correction for next-generation sequencing of viral amplicons.
Skums, Pavel; Dimitrova, Zoya; Campo, David S; Vaughan, Gilberto; Rossi, Livia; Forbi, Joseph C; Yokosawa, Jonny; Zelikovsky, Alex; Khudyakov, Yury
2012-06-25
Next-generation sequencing allows the analysis of an unprecedented number of viral sequence variants from infected patients, presenting a novel opportunity for understanding virus evolution, drug resistance and immune escape. However, sequencing in bulk is error prone. Thus, the generated data require error identification and correction. Most error-correction methods to date are not optimized for amplicon analysis and assume that the error rate is randomly distributed. Recent quality assessment of amplicon sequences obtained using 454-sequencing showed that the error rate is strongly linked to the presence and size of homopolymers, position in the sequence and length of the amplicon. All these parameters are strongly sequence specific and should be incorporated into the calibration of error-correction algorithms designed for amplicon sequencing. In this paper, we present two new efficient error correction algorithms optimized for viral amplicons: (i) k-mer-based error correction (KEC) and (ii) empirical frequency threshold (ET). Both were compared to a previously published clustering algorithm (SHORAH), in order to evaluate their relative performance on 24 experimental datasets obtained by 454-sequencing of amplicons with known sequences. All three algorithms show similar accuracy in finding true haplotypes. However, KEC and ET were significantly more efficient than SHORAH in removing false haplotypes and estimating the frequency of true ones. Both algorithms, KEC and ET, are highly suitable for rapid recovery of error-free haplotypes obtained by 454-sequencing of amplicons from heterogeneous viruses. The implementations of the algorithms and data sets used for their testing are available at: http://alan.cs.gsu.edu/NGS/?q=content/pyrosequencing-error-correction-algorithm.
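The k-mer intuition behind KEC can be illustrated in a few lines: in deep amplicon data, k-mers belonging to true haplotypes are observed many times, while k-mers created by sequencing errors are mostly rare. The sketch below only flags suspect reads and uses an arbitrary threshold; it is not the published KEC algorithm, which also corrects the reads and calibrates its thresholds to homopolymer context.

```python
from collections import Counter

def kmer_counts(reads, k=25):
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def flag_error_reads(reads, k=25, min_count=3):
    """Reads containing any k-mer rarer than min_count are flagged as
    likely carriers of sequencing errors; k-mers from true haplotypes
    recur many times in deep amplicon data."""
    counts = kmer_counts(reads, k)
    flagged = []
    for read in reads:
        kmers = (read[i:i + k] for i in range(len(read) - k + 1))
        if any(counts[km] < min_count for km in kmers):
            flagged.append(read)
    return flagged
```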
Viral Evasion and Manipulation of Host RNA Quality Control Pathways
2016-01-01
Viruses have evolved diverse strategies to maximize the functional and coding capacities of their genetic material. Individual viral RNAs are often used as substrates for both replication and translation and can contain multiple, sometimes overlapping open reading frames. Further, viral RNAs engage in a wide variety of interactions with both host and viral proteins to modify the activities of important cellular factors and direct their own trafficking, packaging, localization, stability, and translation. However, adaptations increasing the information density of small viral genomes can have unintended consequences. In particular, viral RNAs have developed features that mark them as potential targets of host RNA quality control pathways. This minireview focuses on ways in which viral RNAs run afoul of the cellular mRNA quality control and decay machinery, as well as on strategies developed by viruses to circumvent or exploit cellular mRNA surveillance. PMID:27226372
Viral Evasion and Manipulation of Host RNA Quality Control Pathways.
Hogg, J Robert
2016-08-15
Viruses have evolved diverse strategies to maximize the functional and coding capacities of their genetic material. Individual viral RNAs are often used as substrates for both replication and translation and can contain multiple, sometimes overlapping open reading frames. Further, viral RNAs engage in a wide variety of interactions with both host and viral proteins to modify the activities of important cellular factors and direct their own trafficking, packaging, localization, stability, and translation. However, adaptations increasing the information density of small viral genomes can have unintended consequences. In particular, viral RNAs have developed features that mark them as potential targets of host RNA quality control pathways. This minireview focuses on ways in which viral RNAs run afoul of the cellular mRNA quality control and decay machinery, as well as on strategies developed by viruses to circumvent or exploit cellular mRNA surveillance. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
Methods for increasing cooperation rates for surveys of family forest owners
Brett J. Butler; Jaketon H. Hewes; Mary L. Tyrrell; Sarah M. Butler
2016-01-01
To maximize the representativeness of results from surveys, coverage, sampling, nonresponse, measurement, and analysis errors must be minimized. Although not a cure-all, one approach for mitigating nonresponse errors is to maximize cooperation rates. In this study, personalizing mailings, token financial incentives, and the use of real stamps were tested for their...
Reach distance but not judgment error is associated with falls in older people.
Butler, Annie A; Lord, Stephen R; Fitzpatrick, Richard C
2011-08-01
Reaching is a vital action requiring precise motor coordination, and attempting to reach for objects that are too far away can destabilize balance and result in falls and injury. This could be particularly important for many elderly people with age-related loss of sensorimotor function and a reduced ability to recover balance. Here, we investigate the interaction between reaching ability, errors in judging reach, and the incidence of falling (retrospectively and prospectively) in a large cohort of older people. Participants (n = 415, 70-90 years) had to estimate the furthest distance they could reach to retrieve a broomstick hanging in front of them. In an iterative dialog with the experimenter, the stick was moved until it was at the furthest distance they estimated to be reached successfully. At this point, participants were asked to attempt to retrieve the stick. Actual maximal reach was then measured. The difference between attempted reach and actual maximal reach provided a measure of judgment error. One-year retrospective fall rates were obtained at initial assessment and prospective falls were monitored by monthly calendar. Participants with poor maximal reach attempted shorter reaches than those who had good reaching ability. Those with the best reaching ability most accurately judged their maximal reach, whereas poor performers were dichotomous and either underestimated or overestimated their reach with few judging exactly. Fall rates were significantly associated with reach distance but not with reach judgment error. Maximal reach but not error in perceived reach is associated with falls in older people.
Optimal quantum error correcting codes from absolutely maximally entangled states
NASA Astrophysics Data System (ADS)
Raissi, Zahra; Gogolin, Christian; Riera, Arnau; Acín, Antonio
2018-02-01
Absolutely maximally entangled (AME) states are pure multi-partite generalizations of the bipartite maximally entangled states with the property that all reduced states of at most half the system size are in the maximally mixed state. AME states are of interest for multipartite teleportation and quantum secret sharing and have recently found new applications in the context of high-energy physics in toy models realizing the AdS/CFT-correspondence. We work out in detail the connection between AME states of minimal support and classical maximum distance separable (MDS) error correcting codes and, in particular, provide explicit closed form expressions for AME states of n parties with local dimension \
Todt, Daniel; Walter, Stephanie; Brown, Richard J P; Steinmann, Eike
2016-10-13
Hepatitis E virus (HEV), an important agent of viral hepatitis worldwide, can cause severe courses of infection in pregnant women and immunosuppressed patients. To date, HEV infections can only be treated with ribavirin (RBV). Major drawbacks of this therapy are that RBV is not approved for administration to pregnant women and that the virus can acquire mutations, which render the intra-host population less sensitive or even resistant to RBV. One of the proposed modes of action of RBV is a direct mutagenic effect on viral genomes, inducing mismatches and subsequent nucleotide substitutions. These transition events can drive the already error-prone viral replication beyond an error threshold, causing viral population extinction. In contrast, the expanded heterogeneous viral population can facilitate selection of mutant viruses with enhanced replication fitness. Emergence of these mutant viruses can lead to therapeutic failure. Consequently, the onset of RBV treatment in chronically HEV-infected individuals can result in two divergent outcomes: viral extinction versus selection of fitness-enhanced viruses. Following an overview of RNA viruses treated with RBV in clinics and a summary of the different antiviral modes of action of this drug, we focus on the mutagenic effect of RBV on HEV intrahost populations, and how HEV is able to overcome lethal mutagenesis.
ERIC Educational Resources Information Center
Byars, Alvin Gregg
The objectives of this investigation are to develop, describe, assess, and demonstrate procedures for constructing mastery tests to minimize errors of classification and to maximize decision reliability. The guidelines are based on conditions where item exchangeability is a reasonable assumption and the test constructor can control the number of…
A method of bias correction for maximal reliability with dichotomous measures.
Penev, Spiridon; Raykov, Tenko
2010-02-01
This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.
Comparative Viral Metagenomics of Environmental Samples from Korea
Kim, Min-Soo; Whon, Tae Woong
2013-01-01
The introduction of metagenomics into the field of virology has facilitated the exploration of viral communities in various natural habitats. Understanding the viral ecology of a variety of sample types throughout the biosphere is important per se, but it also has potential applications in clinical and diagnostic virology. However, the procedures used by viral metagenomics may produce technical errors, such as amplification bias, while public viral databases are very limited, which may hamper the determination of the viral diversity in samples. This review considers the current state of viral metagenomics, based on examples from Korean viral metagenomic studies, i.e., rice paddy soil, fermented foods, human gut, seawater, and the near-surface atmosphere. Viral metagenomics has become widespread due to various methodological developments, and much attention has been focused on studies that consider the intrinsic role of viruses that interact with their hosts. PMID:24124407
Viral quasispecies inference from 454 pyrosequencing
2013-01-01
Background Many potentially life-threatening infectious viruses are highly mutable in nature. Characterizing the fittest variants within a quasispecies from infected patients is expected to allow unprecedented opportunities to investigate the relationship between quasispecies diversity and disease epidemiology. The advent of next-generation sequencing technologies has allowed the study of virus diversity with high-throughput sequencing, although these methods come with higher rates of errors which can artificially increase diversity. Results Here we introduce a novel computational approach that incorporates base quality scores from next-generation sequencers for reconstructing viral genome sequences that simultaneously infers the number of variants within a quasispecies that are present. Comparisons on simulated and clinical data on dengue virus suggest that the novel approach provides a more accurate inference of the underlying number of variants within the quasispecies, which is vital for clinical efforts in mapping the within-host viral diversity. Sequence alignments generated by our approach are also found to exhibit lower rates of error. Conclusions The ability to infer the viral quasispecies colony that is present within a human host provides the potential for a more accurate classification of the viral phenotype. Understanding the genomics of viruses will be relevant not just to studying how to control or even eradicate these viral infectious diseases, but also in learning about the innate protection in the human host against the viruses. PMID:24308284
He, Xin; Frey, Eric C
2006-08-01
Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate the three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both MEU and N-P criteria, all decision criteria lead to the previously proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.
Lethal mutagenesis: targeting the mutator phenotype in cancer.
Fox, Edward J; Loeb, Lawrence A
2010-10-01
The evolution of cancer and RNA viruses share many similarities. Both exploit high levels of genotypic diversity to enable extensive phenotypic plasticity and thereby facilitate rapid adaptation. In order to accumulate large numbers of mutations, we have proposed that cancers express a mutator phenotype. Similar to cancer cells, many viral populations, by replicating their genomes with low fidelity, carry a substantial mutational load. As high levels of mutation are potentially deleterious, the viral mutation frequency is thresholded at a level below which viral populations equilibrate in a traditional mutation-selection balance, and above which the population is no longer viable, i.e., the population undergoes an error catastrophe. Because their mutation frequencies are fine-tuned just below this error threshold, viral populations are susceptible to further increases in mutational load and, recently this phenomenon has been exploited therapeutically by a concept that has been termed lethal mutagenesis. Here we review the application of lethal mutagenesis to the treatment of HIV and discuss how lethal mutagenesis may represent a novel therapeutic approach for the treatment of solid cancers. Copyright © 2010 Elsevier Ltd. All rights reserved.
Nikolaitchik, Olga A.; Burdick, Ryan C.; Gorelick, Robert J.; Keele, Brandon F.; Hu, Wei-Shau; Pathak, Vinay K.
2016-01-01
Although the predominant effect of host restriction APOBEC3 proteins on HIV-1 infection is to block viral replication, they might inadvertently increase retroviral genetic variation by inducing G-to-A hypermutation. Numerous studies have disagreed on the contribution of hypermutation to viral genetic diversity and evolution. Confounding factors contributing to the debate include the extent of lethal (stop codon) and sublethal hypermutation induced by different APOBEC3 proteins, the inability to distinguish between G-to-A mutations induced by APOBEC3 proteins and error-prone viral replication, the potential impact of hypermutation on the frequency of retroviral recombination, and the extent to which viral recombination occurs in vivo, which can reassort mutations in hypermutated genomes. Here, we determined the effects of hypermutation on the HIV-1 recombination rate and its contribution to genetic variation through recombination to generate progeny genomes containing portions of hypermutated genomes without lethal mutations. We found that hypermutation did not significantly affect the rate of recombination, and recombination between hypermutated and wild-type genomes only increased the viral mutation rate by 3.9 × 10−5 mutations/bp/replication cycle in heterozygous virions, which is similar to the HIV-1 mutation rate. Since copackaging of hypermutated and wild-type genomes occurs very rarely in vivo, recombination between hypermutated and wild-type genomes does not significantly contribute to the genetic variation of replicating HIV-1. We also analyzed previously reported hypermutated sequences from infected patients and determined that the frequency of sublethal mutagenesis for A3G and A3F is negligible (4 × 10−21 and 1 × 10−11, respectively) and its contribution to viral mutations is far below that of mutations generated during error-prone reverse transcription. Taken together, we conclude that the contribution of APOBEC3-induced hypermutation to HIV-1 genetic variation is substantially lower than that from mutations during error-prone replication. PMID:27186986
Delviks-Frankenberry, Krista A; Nikolaitchik, Olga A; Burdick, Ryan C; Gorelick, Robert J; Keele, Brandon F; Hu, Wei-Shau; Pathak, Vinay K
2016-05-01
Although the predominant effect of host restriction APOBEC3 proteins on HIV-1 infection is to block viral replication, they might inadvertently increase retroviral genetic variation by inducing G-to-A hypermutation. Numerous studies have disagreed on the contribution of hypermutation to viral genetic diversity and evolution. Confounding factors contributing to the debate include the extent of lethal (stop codon) and sublethal hypermutation induced by different APOBEC3 proteins, the inability to distinguish between G-to-A mutations induced by APOBEC3 proteins and error-prone viral replication, the potential impact of hypermutation on the frequency of retroviral recombination, and the extent to which viral recombination occurs in vivo, which can reassort mutations in hypermutated genomes. Here, we determined the effects of hypermutation on the HIV-1 recombination rate and its contribution to genetic variation through recombination to generate progeny genomes containing portions of hypermutated genomes without lethal mutations. We found that hypermutation did not significantly affect the rate of recombination, and recombination between hypermutated and wild-type genomes only increased the viral mutation rate by 3.9 × 10-5 mutations/bp/replication cycle in heterozygous virions, which is similar to the HIV-1 mutation rate. Since copackaging of hypermutated and wild-type genomes occurs very rarely in vivo, recombination between hypermutated and wild-type genomes does not significantly contribute to the genetic variation of replicating HIV-1. We also analyzed previously reported hypermutated sequences from infected patients and determined that the frequency of sublethal mutagenesis for A3G and A3F is negligible (4 × 10-21 and 1 × 10-11, respectively) and its contribution to viral mutations is far below that of mutations generated during error-prone reverse transcription. Taken together, we conclude that the contribution of APOBEC3-induced hypermutation to HIV-1 genetic variation is substantially lower than that from mutations during error-prone replication.
Center of mass perception and inertial frames of reference.
Bingham, G P; Muchisky, M M
1993-11-01
Center of mass perception was investigated by varying the shape, size, and orientation of planar objects. Shape was manipulated to investigate symmetries as information. The number of reflective symmetry axes, the amount of rotational symmetry, and the presence of radial symmetry were varied. Orientation affected systematic errors. Judgments tended to undershoot the center of mass. Random errors increased with size and decreased with symmetry. Size had no effect on random errors for maximally symmetric objects, although orientation did. The spatial distributions of judgments were elliptical. Distribution axes were found to align with the principal moments of inertia. Major axes tended to align with gravity in maximally symmetric objects. A functional and physical account was given in terms of the repercussions of error. Overall, judgments were very accurate.
Stand-alone error characterisation of microwave satellite soil moisture using a Fourier method
USDA-ARS's Scientific Manuscript database
Error characterisation of satellite-retrieved soil moisture (SM) is crucial for maximizing their utility in research and applications in hydro-meteorology and climatology. Error characteristics can provide insights for retrieval development and validation, and inform suitable strategies for data fus...
Using hidden Markov models and observed evolution to annotate viral genomes.
McCauley, Stephen; Hein, Jotun
2006-06-01
ssRNA (single stranded) viral genomes are generally constrained in length and utilize overlapping reading frames to maximally exploit the coding potential within the genome length restrictions. This overlapping coding phenomenon leads to complex evolutionary constraints operating on the genome. In regions which code for more than one protein, silent mutations in one reading frame generally have a protein coding effect in another. To maximize coding flexibility in all reading frames, overlapping regions are often compositionally biased towards amino acids which are 6-fold degenerate with respect to the 64 codon alphabet. Previous methodologies have used this fact in an ad hoc manner to look for overlapping genes by motif matching. In this paper, differentiated nucleotide compositional patterns in overlapping regions are incorporated into a probabilistic hidden Markov model (HMM) framework which is used to annotate ssRNA viral genomes. This work focuses on single sequence annotation and applies an HMM framework to ssRNA viral annotation. A description is given of how the HMM is parameterized while annotating within a missing-data framework. A Phylogenetic HMM (Phylo-HMM) extension, as applied to 14 aligned HIV2 sequences, is also presented. This evolutionary extension serves as an illustration of the potential of the Phylo-HMM framework for ssRNA viral genomic annotation. The single sequence annotation procedure (SSA) is applied to 14 different strains of the HIV2 virus. Further results on alternative ssRNA viral genomes are presented to illustrate more generally the performance of the method. The results of the SSA method are encouraging; however, there is still room for improvement, and since there is overwhelming evidence to indicate that comparative methods can improve coding sequence (CDS) annotation, the SSA method is extended to a Phylo-HMM to incorporate evolutionary information. The Phylo-HMM extension is applied to the same set of 14 HIV2 sequences, which are pre-aligned. The performance improvement that results from including the evolutionary information in the analysis is illustrated.
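To make the HMM framing concrete, the sketch below runs log-space Viterbi decoding over a toy two-state (coding vs. non-coding) nucleotide HMM that exploits compositional bias. The states, emission and transition probabilities are invented for illustration; the paper's model is richer, explicitly handling overlapping reading frames and, in the Phylo-HMM, column-wise evolutionary likelihoods.

```python
import math

def viterbi(seq, states, start_p, trans_p, emit_p):
    """Most probable state path for an observed nucleotide sequence."""
    log = lambda p: math.log(p) if p > 0 else float("-inf")
    V = [{s: (log(start_p[s]) + log(emit_p[s][seq[0]]), [s]) for s in states}]
    for x in seq[1:]:
        row = {}
        for s in states:
            score, path = max(
                (V[-1][r][0] + log(trans_p[r][s]) + log(emit_p[s][x]), V[-1][r][1])
                for r in states)
            row[s] = (score, path + [s])
        V.append(row)
    return max(V[-1].values())[1]

states = ("coding", "noncoding")
start_p = {"coding": 0.5, "noncoding": 0.5}
trans_p = {"coding": {"coding": 0.95, "noncoding": 0.05},
           "noncoding": {"coding": 0.05, "noncoding": 0.95}}
emit_p = {"coding": {"A": 0.20, "C": 0.30, "G": 0.30, "T": 0.20},
          "noncoding": {"A": 0.30, "C": 0.20, "G": 0.20, "T": 0.30}}
print(viterbi("ATGCGCGCGATATATAT", states, start_p, trans_p, emit_p))
```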
The Binding of Learning to Action in Motor Adaptation
Gonzalez Castro, Luis Nicolas; Monsen, Craig Bryant; Smith, Maurice A.
2011-01-01
In motor tasks, errors between planned and actual movements generally result in adaptive changes which reduce the occurrence of similar errors in the future. It has commonly been assumed that the motor adaptation arising from an error occurring on a particular movement is specifically associated with the motion that was planned. Here we show that this is not the case. Instead, we demonstrate the binding of the adaptation arising from an error on a particular trial to the motion experienced on that same trial. The formation of this association means that future movements planned to resemble the motion experienced on a given trial benefit maximally from the adaptation arising from it. This reflects the idea that actual rather than planned motions are assigned ‘credit’ for motor errors because, in a computational sense, the maximal adaptive response would be associated with the condition credited with the error. We studied this process by examining the patterns of generalization associated with motor adaptation to novel dynamic environments during reaching arm movements in humans. We found that these patterns consistently matched those predicted by adaptation associated with the actual rather than the planned motion, with maximal generalization observed where actual motions were clustered. We followed up these findings by showing that a novel training procedure designed to leverage this newfound understanding of the binding of learning to action, can improve adaptation rates by greater than 50%. Our results provide a mechanistic framework for understanding the effects of partial assistance and error augmentation during neurologic rehabilitation, and they suggest ways to optimize their use. PMID:21731476
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Jing; Read, Paul W.; Baisden, Joseph M.
Purpose: To evaluate the error in four-dimensional computed tomography (4D-CT) maximal intensity projection (MIP)-based lung tumor internal target volume determination using a simulation method based on dynamic magnetic resonance imaging (dMRI). Methods and Materials: Eight healthy volunteers and six lung tumor patients underwent a 5-min MRI scan in the sagittal plane to acquire dynamic images of lung motion. A MATLAB program was written to generate re-sorted dMRI using 4D-CT acquisition methods (RedCAM) by segmenting and rebinning the MRI scans. The maximal intensity projection images were generated from RedCAM and dMRI, and the errors in the MIP-based internal target area (ITA) from RedCAM (ε), compared with those from dMRI, were determined and correlated with the subjects' respiratory variability (ν). Results: Maximal intensity projection-based ITAs from RedCAM were comparatively smaller than those from dMRI in both phantom studies (ε = -21.64% ± 8.23%) and lung tumor patient studies (ε = -20.31% ± 11.36%). The errors in MIP-based ITA from RedCAM correlated linearly (ε = -5.13ν - 6.71, r² = 0.76) with the subjects' respiratory variability. Conclusions: Because of the low temporal resolution and retrospective re-sorting, 4D-CT might not accurately depict the excursion of a moving tumor. Using a 4D-CT MIP image to define the internal target volume might therefore cause underdosing and an increased risk of subsequent treatment failure. Patient-specific respiratory variability might also be a useful predictor of the 4D-CT-induced error in MIP-based internal target volume determination.
Cai, Jing; Read, Paul W; Baisden, Joseph M; Larner, James M; Benedict, Stanley H; Sheng, Ke
2007-11-01
To evaluate the error in four-dimensional computed tomography (4D-CT) maximal intensity projection (MIP)-based lung tumor internal target volume determination using a simulation method based on dynamic magnetic resonance imaging (dMRI). Eight healthy volunteers and six lung tumor patients underwent a 5-min MRI scan in the sagittal plane to acquire dynamic images of lung motion. A MATLAB program was written to generate re-sorted dMRI using 4D-CT acquisition methods (RedCAM) by segmenting and rebinning the MRI scans. The maximal intensity projection images were generated from RedCAM and dMRI, and the errors in the MIP-based internal target area (ITA) from RedCAM (ε), compared with those from dMRI, were determined and correlated with the subjects' respiratory variability (ν). Maximal intensity projection-based ITAs from RedCAM were comparatively smaller than those from dMRI in both phantom studies (ε = -21.64% ± 8.23%) and lung tumor patient studies (ε = -20.31% ± 11.36%). The errors in MIP-based ITA from RedCAM correlated linearly (ε = -5.13ν - 6.71, r² = 0.76) with the subjects' respiratory variability. Because of the low temporal resolution and retrospective re-sorting, 4D-CT might not accurately depict the excursion of a moving tumor. Using a 4D-CT MIP image to define the internal target volume might therefore cause underdosing and an increased risk of subsequent treatment failure. Patient-specific respiratory variability might also be a useful predictor of the 4D-CT-induced error in MIP-based internal target volume determination.
Retrovirus purification: method that conserves envelope glycoprotein and maximizes infectivity.
McGrath, M; Witte, O; Pincus, T; Weissman, I L
1978-01-01
A Sepharose 4B chromatographic method for purification of retroviruses is described which was less time consuming, increased purified virus yields, conserved viral glycoprotein, and increased recovery of biological infectivity in comparison with conventional sucrose gradient ultracentrifugation techniques. PMID:205680
Quasispecies Analyses of the HIV-1 Near-full-length Genome With Illumina MiSeq
Ode, Hirotaka; Matsuda, Masakazu; Matsuoka, Kazuhiro; Hachiya, Atsuko; Hattori, Junko; Kito, Yumiko; Yokomaku, Yoshiyuki; Iwatani, Yasumasa; Sugiura, Wataru
2015-01-01
Human immunodeficiency virus type-1 (HIV-1) exhibits high between-host genetic diversity and within-host heterogeneity, recognized as quasispecies. Because HIV-1 quasispecies fluctuate in terms of multiple factors, such as antiretroviral exposure and host immunity, analyzing the HIV-1 genome is critical for selecting effective antiretroviral therapy and understanding within-host viral coevolution mechanisms. Here, to obtain HIV-1 genome sequence information that includes minority variants, we sought to develop a method for evaluating quasispecies throughout the HIV-1 near-full-length genome using the Illumina MiSeq benchtop deep sequencer. To ensure the reliability of minority mutation detection, we applied an analysis method of sequence read mapping onto a consensus sequence derived from de novo assembly followed by iterative mapping and subsequent unique error correction. Deep sequencing analyses of an HIV-1 clone showed that the analysis method reduced erroneous base prevalence below 1% in each sequence position and discarded only < 1% of all collected nucleotides, maximizing the usage of the collected genome sequences. Further, we designed primer sets to amplify the HIV-1 near-full-length genome from clinical plasma samples. Deep sequencing of 92 samples in combination with the primer sets and our analysis method provided sufficient coverage to identify >1%-frequency sequences throughout the genome. When we evaluated sequences of pol genes from 18 treatment-naïve patients' samples, the deep sequencing results were in agreement with Sanger sequencing and identified numerous additional minority mutations. The results suggest that our deep sequencing method would be suitable for identifying within-host viral population dynamics throughout the genome. PMID:26617593
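The final variant-calling step, reporting bases that exceed a 1% frequency at sufficiently covered positions of the consensus, can be sketched as follows. The function below assumes reads have already been aligned into consensus coordinates (with '-' padding); it is a simplified illustration, not the authors' iterative mapping and error-correction pipeline.

```python
from collections import Counter

def call_minority_variants(consensus, aligned_reads, min_freq=0.01, min_depth=100):
    """aligned_reads: sequences already aligned to consensus coordinates,
    using '-' for positions a read does not cover. Reports non-consensus
    bases whose frequency exceeds min_freq at positions with enough depth."""
    variants = []
    for pos, ref in enumerate(consensus):
        column = Counter(r[pos] for r in aligned_reads if r[pos] in "ACGT")
        depth = sum(column.values())
        if depth < min_depth:
            continue
        for base, n in column.items():
            if base != ref and n / depth >= min_freq:
                variants.append((pos + 1, ref, base, n / depth))
    return variants
```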
Optimal Cytoplasmic Transport in Viral Infections
D'Orsogna, Maria R.; Chou, Tom
2009-01-01
For many viruses, the ability to infect eukaryotic cells depends on their transport through the cytoplasm and across the nuclear membrane of the host cell. During this journey, viral contents are biochemically processed into complexes capable of both nuclear penetration and genomic integration. We develop a stochastic model of viral entry that incorporates all relevant aspects of transport, including convection along microtubules, biochemical conversion, degradation, and nuclear entry. Analysis of the nuclear infection probabilities in terms of the transport velocity, degradation, and biochemical conversion rates shows how certain values of key parameters can maximize the nuclear entry probability of the viral material. The existence of such “optimal” infection scenarios depends on the details of the biochemical conversion process and implies potentially counterintuitive effects in viral infection, suggesting new avenues for antiviral treatment. Such optimal parameter values provide a plausible transport-based explanation of the action of restriction factors and of experimentally observed optimal capsid stability. Finally, we propose a new interpretation of how genetic mutations unrelated to the mechanism of drug action may nonetheless confer novel types of overall drug resistance. PMID:20046829
Discovering the influential users oriented to viral marketing based on online social networks
NASA Astrophysics Data System (ADS)
Zhu, Zhiguo
2013-08-01
The target of viral marketing on the platform of popular online social networks is to rapidly propagate marketing information at lower cost and increase sales, in which a key problem is how to precisely discover the most influential users in the process of information diffusion. A novel method is proposed in this paper for helping companies to identify such users as seeds to maximize information diffusion in the viral marketing. Firstly, the user trust network oriented to viral marketing and users' combined interest degree in the network, including isolated users, are extensively defined. Next, we construct a model considering the time factor to simulate the process of information diffusion in viral marketing and propose a dynamic algorithm description. Finally, experiments are conducted with a real dataset extracted from the famous SNS website Epinions. The experimental results indicate that the proposed algorithm has better scalability and is less time-consuming. Compared with the classical method, the proposed algorithm achieved better performance in terms of both network coverage rate and time consumption in our four sub-datasets.
Jin, Long; Zhang, Yunong
2015-07-01
In this brief, a discrete-time Zhang neural network (DTZNN) model is first proposed, developed, and investigated for online time-varying nonlinear optimization (OTVNO). Then, Newton iteration is shown to be derived from the proposed DTZNN model. In addition, to eliminate the explicit matrix-inversion operation, the quasi-Newton Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is introduced, which can effectively approximate the inverse of Hessian matrix. A DTZNN-BFGS model is thus proposed and investigated for OTVNO, which is the combination of the DTZNN model and the quasi-Newton BFGS method. In addition, theoretical analyses show that, with step-size h=1 and/or with zero initial error, the maximal residual error of the DTZNN model has an O(τ²) pattern, whereas the maximal residual error of the Newton iteration has an O(τ) pattern, with τ denoting the sampling gap. Besides, when h ≠ 1 and h ∈ (0,2), the maximal steady-state residual error of the DTZNN model has an O(τ²) pattern. Finally, an illustrative numerical experiment and an application example to manipulator motion generation are provided and analyzed to substantiate the efficacy of the proposed DTZNN and DTZNN-BFGS models for OTVNO.
Determining relative error bounds for the CVBEM
Hromadka, T.V.
1985-01-01
The Complex Variable Boundary Element Method provides a measure of relative error which can be utilized to subsequently reduce the error or provide information for further modeling analysis. By maximizing the relative error norm on each boundary element, a bound on the total relative error for each boundary element can be evaluated. This bound can be utilized to test CVBEM convergence, to analyze the effects of additional boundary nodal points in reducing the modeling error, and to evaluate the sensitivity of resulting modeling error within a boundary element from the error produced in another boundary element as a function of geometric distance.
Hughes, Paul; Deng, Wenjie; Olson, Scott C; Coombs, Robert W; Chung, Michael H; Frenkel, Lisa M
2016-03-01
Accurate analysis of minor populations of drug-resistant HIV requires analysis of a sufficient number of viral templates. We assessed the effect of experimental conditions on the analysis of HIV pol 454 pyrosequences generated from plasma using (1) the "Insertion-deletion (indel) and Carry Forward Correction" (ICC) pipeline, which clusters sequence reads using a nonsubstitution approach and can correct for indels and carry forward errors, and (2) the "Primer Identification (ID)" method, which facilitates construction of a consensus sequence to correct for sequencing errors and allelic skewing. The Primer ID and ICC methods produced similar estimates of viral diversity, but differed in the number of sequence variants generated. Sequence preparation for ICC was comparably simple, but was limited by an inability to assess the number of templates analyzed and allelic skewing. The more costly Primer ID method corrected for allelic skewing and provided the number of viral templates analyzed, which revealed that amplifiable HIV templates varied across specimens and did not correlate with clinical viral load. This latter observation highlights the value of the Primer ID method, which by determining the number of templates amplified, enables more accurate assessment of minority species in the virus population, which may be relevant to prescribing effective antiretroviral therapy.
Mirolli, Marco; Santucci, Vieri G; Baldassarre, Gianluca
2013-03-01
An important issue of recent neuroscientific research is to understand the functional role of the phasic release of dopamine in the striatum, and in particular its relation to reinforcement learning. The literature is split between two alternative hypotheses: one considers phasic dopamine as a reward prediction error similar to the computational TD-error, whose function is to guide an animal to maximize future rewards; the other holds that phasic dopamine is a sensory prediction error signal that lets the animal discover and acquire novel actions. In this paper we propose an original hypothesis that integrates these two contrasting positions: according to our view phasic dopamine represents a TD-like reinforcement prediction error learning signal determined by both unexpected changes in the environment (temporary, intrinsic reinforcements) and biological rewards (permanent, extrinsic reinforcements). Accordingly, dopamine plays the functional role of driving both the discovery and acquisition of novel actions and the maximization of future rewards. To validate our hypothesis we perform a series of experiments with a simulated robotic system that has to learn different skills in order to get rewards. We compare different versions of the system in which we vary the composition of the learning signal. The results show that only the system reinforced by both extrinsic and intrinsic reinforcements is able to reach high performance in sufficiently complex conditions. Copyright © 2013 Elsevier Ltd. All rights reserved.
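The hypothesis that the dopamine-like learning signal is a TD error computed on the sum of extrinsic and intrinsic (novelty-driven) reinforcements can be written down in a few lines. The sketch below is a generic tabular TD(0) illustration with invented states and a hand-coded novelty decay; it is not the robotic architecture used in the paper.

```python
def td_error(V, s, s_next, r_extrinsic, r_intrinsic, gamma=0.9):
    """Phasic-dopamine-like signal: TD error on the combined reinforcement."""
    return (r_extrinsic + r_intrinsic) + gamma * V[s_next] - V[s]

def td_update(V, s, s_next, r_ext, r_int, alpha=0.1, gamma=0.9):
    delta = td_error(V, s, s_next, r_ext, r_int, gamma)
    V[s] += alpha * delta
    return delta

V = {"start": 0.0, "lever": 0.0, "light": 0.0}
# An unexpected light flash yields a transient intrinsic reinforcement that
# decays with repetition, so the action producing it is first acquired and,
# once the event is predictable, only extrinsically rewarded actions keep
# driving learning.
for trial in range(20):
    r_int = 1.0 * (0.7 ** trial)          # novelty wears off
    td_update(V, "lever", "light", r_ext=0.0, r_int=r_int)
print(V)
```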
The roles of picornavirus untranslated regions in infection and innate immunity
USDA-ARS's Scientific Manuscript database
Viral genomes have evolved to maximize their potential of overcoming host defense mechanisms and to induce a variety of disease syndromes. Structurally, a genome of a virus consists of coding and noncoding regions, and both have been shown to contribute to initiation and progression of disease. Ac...
Genetic Engineering: The Modification of Man
ERIC Educational Resources Information Center
Sinsheimer, Robert L.
1970-01-01
Describes somatic and genetic manipulations of individual genotypes, using diabetes control as an example of the first mode that is potentially realizable by derepression or viral transduction of genes. Advocates the use of genetic engineering of the second mode to remove man from his biological limitations, but offers maxims to ensure the…
Canard, Bruno
2018-01-01
Viral RNA-dependent RNA polymerases (RdRps) play a central role not only in viral replication, but also in the genetic evolution of viral RNAs. After binding to an RNA template and selecting 5′-triphosphate ribonucleosides, viral RdRps synthesize an RNA copy according to Watson-Crick base-pairing rules. The copy process sometimes deviates from both the base-pairing rules specified by the template and the natural ribose selectivity and, thus, the process is error-prone due to the intrinsic (in)fidelity of viral RdRps. These enzymes share a number of conserved amino-acid sequence strings, called motifs A–G, which can be defined from a structural and functional point of view. A correlation is gradually emerging between mutations in these motifs and viral genome evolution or observed mutation rates. Here, we review our current knowledge on these motifs and their role on the structural and mechanistic basis of the fidelity of nucleotide selection and RNA synthesis by Flavivirus RdRps. PMID:29385764
NASA Astrophysics Data System (ADS)
Tang, Jinjun; Zhang, Shen; Chen, Xinqiang; Liu, Fang; Zou, Yajie
2018-03-01
Understanding the Origin-Destination (OD) distribution of taxi trips is very important for improving the effects of transportation planning and enhancing the quality of taxi services. This study proposes a new method based on Entropy-Maximizing theory to model OD distribution in Harbin city using large-scale taxi GPS trajectories. Firstly, a K-means clustering method is utilized to partition raw pick-up and drop-off locations into different zones, and trips are assumed to start from and end at zone centers. A generalized cost function is further defined by considering travel distance, time and fee between each OD pair. GPS data collected from more than 1000 taxis at an interval of 30 s during one month are divided into two parts: data from the first twenty days are treated as the training dataset and data from the last ten days as the testing dataset. The training dataset is used to calibrate the model, while the testing dataset is used to validate it. Furthermore, three indicators, mean absolute error (MAE), root mean square error (RMSE) and mean percentage absolute error (MPAE), are applied to evaluate the training and testing performance of the Entropy-Maximizing model versus the Gravity model. The results demonstrate that the Entropy-Maximizing model is superior to the Gravity model. The findings validate the feasibility of estimating OD distribution from taxi GPS data in urban systems.
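The three evaluation indicators named above are standard error metrics; a minimal sketch of how they could be computed for predicted versus observed OD flows follows. The variable names and toy flow counts are ours, and the observed flows are assumed to be nonzero so the percentage error is well defined.

```python
import math

def od_metrics(observed, predicted):
    """Mean absolute error, root mean square error and mean percentage
    absolute error for paired OD flow estimates (illustrative sketch)."""
    n = len(observed)
    mae = sum(abs(o - p) for o, p in zip(observed, predicted)) / n
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)
    mpae = 100.0 * sum(abs(o - p) / o for o, p in zip(observed, predicted)) / n
    return mae, rmse, mpae

obs = [120, 85, 40, 230]    # observed trips between OD pairs (toy numbers)
pred = [110, 90, 35, 250]   # model predictions for the same pairs
print(od_metrics(obs, pred))
```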
Untranslated regions of diverse plant viral RNAs vary greatly in translation enhancement efficiency
2012-01-01
Background Whole plants or plant cell cultures can serve as low cost bioreactors to produce massive amounts of a specific protein for pharmacological or industrial use. To maximize protein expression, translation of mRNA must be optimized. Many plant viral RNAs harbor extremely efficient translation enhancers. However, few of these different translation elements have been compared side-by-side. Thus, it is unclear which are the most efficient translation enhancers. Here, we compare the effects of untranslated regions (UTRs) containing translation elements from six plant viruses on translation in wheat germ extract and in monocotyledonous and dicotyledonous plant cells. Results The highest expressing uncapped mRNAs contained viral UTRs harboring Barley yellow dwarf virus (BYDV)-like cap-independent translation elements (BTEs). The BYDV BTE conferred the most efficient translation of a luciferase reporter in wheat germ extract and oat protoplasts, while uncapped mRNA containing the BTE from Tobacco necrosis virus-D translated most efficiently in tobacco cells. Capped mRNA containing the Tobacco mosaic virus omega sequence was the most efficient mRNA in tobacco cells. UTRs from Satellite tobacco necrosis virus, Tomato bushy stunt virus, and Crucifer-infecting tobamovirus (crTMV) did not stimulate translation efficiently. mRNA with the crTMV 5′ UTR was unstable in tobacco protoplasts. Conclusions BTEs confer the highest levels of translation of uncapped mRNAs in vitro and in vivo, while the capped omega sequence is most efficient in tobacco cells. These results provide a basis for understanding mechanisms of translation enhancement, and for maximizing protein synthesis in cell-free systems, transgenic plants, or in viral expression vectors. PMID:22559081
Barrett-Muir, W; Breuer, J; Millar, C; Thomas, J; Jeffries, D; Yaqoob, M; Aitken, C
2000-07-15
Quantitative commercial assays for cytomegalovirus (CMV) detection have recently been developed. Their role in the management of patients after transplantation needs to be evaluated. Widespread use of these assays will allow comparison of results between centers and meaningful interpretation of the significance of viral load measurements. Sequential samples from 52 patients after renal transplantation were tested in the Murex hybrid capture assay (HCA) and the Roche Amplicor CMV DNA assay (QPCR) and correlated with the development of CMV disease. A comparison of viral loads in plasma and whole blood was also made. Both assays were sensitive and detected all cases of CMV disease. Following receiver operating characteristic (ROC) curve analysis, the specificity and positive predictive value increased from 0.34 and 0.36 to 0.85 and 0.96 for the HCA, and from 0.37 and 0.37 to 0.72 and 0.63 for the QPCR. Higher viral loads were measured using the HCA than using the QPCR. Response to ganciclovir was associated with a greater than 80% reduction in viral load by HCA or greater than 70% by QPCR. Both assays were highly sensitive. By using ROC curve analysis, a cutoff viral load can be determined which maximizes the clinical utility of these assays.
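One common way to turn an ROC analysis into a single cutoff, as described above, is to pick the viral-load threshold that maximizes sensitivity plus specificity minus one (Youden's index). The sketch below does exactly that on invented sample data; the threshold grid, values, and function names are illustrative assumptions, not the study's data or its exact criterion.

```python
def best_cutoff(viral_loads, disease):
    """Pick the viral-load cutoff maximizing Youden's J = sensitivity + specificity - 1.

    `viral_loads` and `disease` (booleans) are paired per-sample values; sketch only.
    """
    best_j, best_t = -1.0, None
    for t in sorted(set(viral_loads)):
        tp = sum(v >= t and d for v, d in zip(viral_loads, disease))
        fn = sum(v < t and d for v, d in zip(viral_loads, disease))
        tn = sum(v < t and not d for v, d in zip(viral_loads, disease))
        fp = sum(v >= t and not d for v, d in zip(viral_loads, disease))
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        if sens + spec - 1 > best_j:
            best_j, best_t = sens + spec - 1, t
    return best_t, best_j

loads = [50, 200, 800, 1500, 90, 3000, 40, 600]          # toy viral loads (copies/mL)
cmv_disease = [False, False, True, True, False, True, False, False]
print(best_cutoff(loads, cmv_disease))                   # cutoff with the best trade-off
```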
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fredriksson, Albin, E-mail: albin.fredriksson@raysearchlabs.com; Hårdemark, Björn; Forsgren, Anders
2015-07-15
Purpose: This paper introduces a method that maximizes the probability of satisfying the clinical goals in intensity-modulated radiation therapy treatments subject to setup uncertainty. Methods: The authors perform robust optimization in which the clinical goals are constrained to be satisfied whenever the setup error falls within an uncertainty set. The shape of the uncertainty set is included as a variable in the optimization. The goal of the optimization is to modify the shape of the uncertainty set in order to maximize the probability that the setup error will fall within the modified set. Because the constraints enforce the clinical goals to be satisfied under all setup errors within the uncertainty set, this is equivalent to maximizing the probability of satisfying the clinical goals. This type of robust optimization is studied with respect to photon and proton therapy applied to a prostate case and compared to robust optimization using an a priori defined uncertainty set. Results: Slight reductions of the uncertainty sets resulted in plans that satisfied a larger number of clinical goals than optimization with respect to a priori defined uncertainty sets, both within the reduced uncertainty sets and within the a priori, nonreduced, uncertainty sets. For the prostate case, the plans taking reduced uncertainty sets into account satisfied 1.4 (photons) and 1.5 (protons) times as many clinical goals over the scenarios as the method taking a priori uncertainty sets into account. Conclusions: Reducing the uncertainty sets enabled the optimization to find better solutions with respect to the errors within the reduced as well as the nonreduced uncertainty sets and thereby achieve higher probability of satisfying the clinical goals. This shows that asking for a little less in the optimization sometimes leads to better overall plan quality.
Robust radio interferometric calibration using the t-distribution
NASA Astrophysics Data System (ADS)
Kazemi, S.; Yatawatta, S.
2013-10-01
A major stage of radio interferometric data processing is calibration or the estimation of systematic errors in the data and the correction for such errors. A stochastic error (noise) model is assumed, and in most cases, this underlying model is assumed to be Gaussian. However, outliers in the data due to interference or due to errors in the sky model would have adverse effects on processing based on a Gaussian noise model. Most of the shortcomings of calibration such as the loss in flux or coherence, and the appearance of spurious sources, could be attributed to the deviations of the underlying noise model. In this paper, we propose to improve the robustness of calibration by using a noise model based on Student's t-distribution. Student's t-noise is a special case of Gaussian noise when the variance is unknown. Unlike Gaussian-noise-model-based calibration, traditional least-squares minimization would not directly extend to a case when we have a Student's t-noise model. Therefore, we use a variant of the expectation-maximization algorithm, called the expectation-conditional maximization either algorithm, when we have a Student's t-noise model and use the Levenberg-Marquardt algorithm in the maximization step. We give simulation results to show the robustness of the proposed calibration method as opposed to traditional Gaussian-noise-model-based calibration, especially in preserving the flux of weaker sources that are not included in the calibration model.
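One way to picture the robustness gain from a Student's t noise model is the EM-style reweighting it induces: residuals that are large relative to the current scale receive weight w = (ν + 1)/(ν + r²/σ²) and are therefore down-weighted, whereas a Gaussian model weights all samples equally. The tiny linear calibration example below is our own illustration of that reweighting idea, not the ECME/Levenberg-Marquardt solver of the paper.

```python
def robust_line_fit(x, y, nu=3.0, iters=30):
    """Fit y = a*x + b under Student's-t noise via EM-style reweighting
    (illustrative stand-in for robust calibration, not the paper's ECME solver)."""
    n = len(x)
    # plain least-squares starting point
    xm, ym = sum(x) / n, sum(y) / n
    a = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) / sum((xi - xm) ** 2 for xi in x)
    b = ym - a * xm
    sigma2 = sum((yi - a * xi - b) ** 2 for xi, yi in zip(x, y)) / n
    for _ in range(iters):
        r = [yi - (a * xi + b) for xi, yi in zip(x, y)]
        w = [(nu + 1.0) / (nu + ri * ri / sigma2) for ri in r]   # large residual -> small weight
        sigma2 = sum(wi * ri * ri for wi, ri in zip(w, r)) / n
        sw = sum(w)
        xw = sum(wi * xi for wi, xi in zip(w, x)) / sw
        yw = sum(wi * yi for wi, yi in zip(w, y)) / sw
        a = sum(wi * (xi - xw) * (yi - yw) for wi, xi, yi in zip(w, x, y)) / \
            sum(wi * (xi - xw) ** 2 for wi, xi in zip(w, x))
        b = yw - a * xw
    return a, b

x = [0, 1, 2, 3, 4, 5]
y = [0.1, 1.9, 4.1, 6.0, 8.2, 30.0]   # last point is an outlier
print(robust_line_fit(x, y))           # slope should stay close to 2 despite the outlier
```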
Accuracy Improvement of Multi-Axis Systems Based on Laser Correction of Volumetric Geometric Errors
NASA Astrophysics Data System (ADS)
Teleshevsky, V. I.; Sokolov, V. A.; Pimushkin, Ya I.
2018-04-01
The article describes a volumetric geometric error correction method for CNC-controlled multi-axis systems (machine tools, CMMs, etc.). Kalman's concept of "Control and Observation" is used. A versatile multi-function laser interferometer is used as the Observer in order to measure the machine's error functions. A systematic error map of the machine's workspace is produced from the error function measurements, and this error map forms the basis of the error correction strategy. The article proposes a new method of forming this correction strategy, based on the error distribution within the machine's workspace and a CNC program postprocessor. The postprocessor provides minimal error values within the maximal workspace zone. The results are confirmed by error correction of precision CNC machine tools.
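In spirit, such a postprocessor looks up the interpolated error at each commanded point and subtracts it from the command. A one-axis sketch of that idea, with made-up grid positions and error values (the real method is volumetric and machine-specific):

```python
from bisect import bisect_right

def build_corrector(grid, measured_error):
    """Return a function mapping a commanded coordinate to a corrected one,
    using linear interpolation of a measured error map (illustrative only)."""
    def corrected(x):
        i = min(max(bisect_right(grid, x) - 1, 0), len(grid) - 2)
        t = (x - grid[i]) / (grid[i + 1] - grid[i])
        err = (1 - t) * measured_error[i] + t * measured_error[i + 1]
        return x - err   # postprocessor subtracts the predicted geometric error
    return corrected

grid = [0.0, 100.0, 200.0, 300.0]        # mm positions measured by the interferometer (toy)
errors = [0.000, 0.004, 0.009, 0.011]    # mm positioning errors at those positions (toy)
correct = build_corrector(grid, errors)
print(correct(150.0))                    # command adjusted by the interpolated error
```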
Initial therapy with protease inhibitor-sparing regimens: evaluation of nevirapine and delavirdine.
Conway, B
2000-06-01
We have compared the results (on-treatment analyses) of 2 randomized clinical trials of protease inhibitor-sparing regimens in drug-naive patients. In the INCAS (Italy, Netherlands, Canada, Australia) study, the mean decrease in plasma viral load over 52 weeks was 2.2 log(10) copies/mL in 40 patients who were receiving zidovudine/didanosine/nevirapine (18 [45%] had maximal suppression), with a mean increase in CD4 T cell counts of 139 cells/microL. In protocol 0021 Part II, the mean decrease in plasma viral load over 52 weeks was 2.1 log(10) copies/mL in 34 patients who were receiving zidovudine/lamivudine/delavirdine (20 [59%] had maximal suppression), with a mean increase in CD4 T cell counts of 88 cells/microL. The virologic and immunologic efficacy of the 2 triple-drug regimens are similar. Until results of long-term studies are available to establish whether a preferred approach to initial therapy exists, nonnucleoside reverse transcriptase inhibitors may be a valuable alternative to protease inhibitors in the initial therapy of antiretroviral-naive, moderately immunosuppressed patients.
Design and scheduling for periodic concurrent error detection and recovery in processor arrays
NASA Technical Reports Server (NTRS)
Wang, Yi-Min; Chung, Pi-Yu; Fuchs, W. Kent
1992-01-01
Periodic application of time-redundant error checking provides the trade-off between error detection latency and performance degradation. The goal is to achieve high error coverage while satisfying performance requirements. We derive the optimal scheduling of checking patterns in order to uniformly distribute the available checking capability and maximize the error coverage. Synchronous buffering designs using data forwarding and dynamic reconfiguration are described. Efficient single-cycle diagnosis is implemented by error pattern analysis and direct-mapped recovery cache. A rollback recovery scheme using start-up control for local recovery is also presented.
Multivariate-$t$ nonlinear mixed models with application to censored multi-outcome AIDS studies.
Lin, Tsung-I; Wang, Wan-Lun
2017-10-01
In multivariate longitudinal HIV/AIDS studies, multi-outcome repeated measures on each patient over time may contain outliers, and the viral loads are often subject to an upper or lower limit of detection depending on the quantification assays. In this article, we consider an extension of the multivariate nonlinear mixed-effects model by adopting a joint multivariate-$t$ distribution for random effects and within-subject errors and taking the censoring information of multiple responses into account. The proposed model is called the multivariate-$t$ nonlinear mixed-effects model with censored responses (MtNLMMC), allowing for analyzing multi-outcome longitudinal data exhibiting nonlinear growth patterns with censorship and fat-tailed behavior. Utilizing the Taylor-series linearization method, a pseudo-data version of the expectation conditional maximization either (ECME) algorithm is developed for iteratively carrying out maximum likelihood estimation. We illustrate our techniques with two data examples from HIV/AIDS studies. Experimental results signify that the MtNLMMC performs favorably compared to its Gaussian analogue and some existing approaches. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Automated error correction in IBM quantum computer and explicit generalization
NASA Astrophysics Data System (ADS)
Ghosh, Debjit; Agarwal, Pratik; Pandey, Pratyush; Behera, Bikash K.; Panigrahi, Prasanta K.
2018-06-01
Construction of a fault-tolerant quantum computer remains a challenging problem due to unavoidable noise and fragile quantum states. However, this goal can be achieved by introducing quantum error-correcting codes. Here, we experimentally realize an automated error correction code and demonstrate the nondestructive discrimination of GHZ states in the IBM 5-qubit quantum computer. After performing quantum state tomography, we obtain the experimental results with high fidelity. Finally, we generalize the investigated code to the maximally entangled n-qudit case, which can both detect and automatically correct any arbitrary phase-change error, any phase-flip error, any bit-flip error, or a combined error of all these types.
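For readers unfamiliar with the error types mentioned, the classical simulation below walks through the textbook three-qubit bit-flip repetition code: encode a logical bit, inject one bit-flip, measure the two parity syndromes, and correct. It is not the authors' IBM circuit, only a plain-Python caricature of the standard scheme that such codes generalize.

```python
def encode(bit):
    """Repetition encoding: the logical bit is copied onto three physical bits."""
    return [bit, bit, bit]

def syndrome(q):
    """Parities of pairs (0,1) and (1,2); together they locate a single bit-flip."""
    return (q[0] ^ q[1], q[1] ^ q[2])

def correct(q):
    """Flip the qubit indicated by the syndrome, if any."""
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(q))
    if flip is not None:
        q[flip] ^= 1
    return q

block = encode(1)        # logical |1> -> [1, 1, 1]
block[2] ^= 1            # inject a single bit-flip error on the third bit
print(correct(block))    # -> [1, 1, 1]: the error is detected and corrected automatically
```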
Influenza A virus hemagglutinin glycosylation compensates for antibody escape fitness costs.
Kosik, Ivan; Ince, William L; Gentles, Lauren E; Oler, Andrew J; Kosikova, Martina; Angel, Matthew; Magadán, Javier G; Xie, Hang; Brooke, Christopher B; Yewdell, Jonathan W
2018-01-01
Rapid antigenic evolution enables the persistence of seasonal influenza A and B viruses in human populations despite widespread herd immunity. Understanding viral mechanisms that enable antigenic evolution is critical for designing durable vaccines and therapeutics. Here, we utilize the primerID method of error-correcting viral population sequencing to reveal an unexpected role for hemagglutinin (HA) glycosylation in compensating for fitness defects resulting from escape from anti-HA neutralizing antibodies. Antibody-free propagation following antigenic escape rapidly selected viruses with mutations that modulated receptor binding avidity through the addition of N-linked glycans to the HA globular domain. These findings expand our understanding of the viral mechanisms that maintain fitness during antigenic evolution to include glycan addition, and highlight the immense power of high-definition virus population sequencing to reveal novel viral adaptive mechanisms.
Rejman, Marek
2013-01-01
The aim of this study was to analyze the error structure in propulsive movements with regard to its influence on monofin swimming speed. The random cycles performed by six swimmers were filmed during a progressive test (900 m). An objective method to estimate errors committed in the area of angular displacement of the feet and monofin segments was employed. The parameters were compared with a previously described model. Mutual dependences between the level of errors, stroke frequency, stroke length and amplitude in relation to swimming velocity were analyzed. The results showed that proper foot movements and the avoidance of errors arising at the distal part of the fin ensure the progression of swimming speed. A stroke-parameter distribution in which stroke frequency is increased optimally, to the maximal level that still stabilizes stroke length, minimizes errors. Identification of key elements in the stroke structure based on the analysis of errors committed should aid in improving monofin swimming technique. Key points The monofin swimming technique was evaluated through the prism of objectively defined errors committed by the swimmers. The dependences between the level of errors, stroke rate, stroke length and amplitude in relation to swimming velocity were analyzed. Optimally increasing stroke rate to the maximal possible level that enables the stabilization of stroke length leads to the minimization of errors. Proper foot movement and the avoidance of errors arising at the distal part of the fin provide for the progression of swimming speed. The key elements for improving monofin swimming technique, based on the analysis of errors committed, were designated. PMID:24149742
Action errors, error management, and learning in organizations.
Frese, Michael; Keith, Nina
2015-01-03
Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.
Diverse fates of uracilated HIV-1 DNA during infection of myeloid lineage cells.
Hansen, Erik C; Ransom, Monica; Hesselberth, Jay R; Hosmane, Nina N; Capoferri, Adam A; Bruner, Katherine M; Pollack, Ross A; Zhang, Hao; Drummond, Michael Bradley; Siliciano, Janet M; Siliciano, Robert; Stivers, James T
2016-09-20
We report that a major subpopulation of monocyte-derived macrophages (MDMs) contains high levels of dUTP, which is incorporated into HIV-1 DNA during reverse transcription (U/A pairs), resulting in pre-integration restriction and post-integration mutagenesis. After entering the nucleus, uracilated viral DNA products are degraded by the uracil base excision repair (UBER) machinery with less than 1% of the uracilated DNA successfully integrating. Although uracilated proviral DNA showed few mutations, the viral genomic RNA was highly mutated, suggesting that errors occur during transcription. Viral DNA isolated from blood monocytes and alveolar macrophages (but not T cells) of drug-suppressed HIV-infected individuals also contained abundant uracils. The presence of viral uracils in short-lived monocytes suggests their recent infection through contact with virus producing cells in a tissue reservoir. These findings reveal new elements of a viral defense mechanism involving host UBER that may be relevant to the establishment and persistence of HIV-1 infection.
Domingo, Esteban; Perales, Celia
2018-05-01
Quasispecies theory has been instrumental in the understanding of RNA virus population dynamics because it considered for the first time mutation as an integral part of the replication process. The key influences of quasispecies theory on experimental virology have been: (1) to disclose the mutant spectrum nature of viral populations and to evaluate its consequences; (2) to unveil collective properties of genome ensembles that can render a mutant spectrum a unit of selection; and (3) to identify new vulnerability points of pathogenic RNA viruses on three fronts: the need to apply multiple selective constraints (in the form of drug combinations) to minimize selection of treatment-escape variants, to translate the error threshold concept into antiviral designs, and to construct attenuated vaccine viruses through alterations of viral polymerase copying fidelity or through displacements of viral genomes towards unfavorable regions of sequence space. These three major influences on the understanding of viral pathogens preceded extensions of quasispecies to non-viral systems such as bacterial and tumor cell collectivities and prions. These developments are summarized here.
Winner Takes All: Competing Viruses or Ideas on Fair-Play Networks
2012-01-01
Wood-Charlson, Elisha M; Weynberg, Karen D; Suttle, Curtis A; Roux, Simon; van Oppen, Madeleine J H
2015-10-01
Reef-building corals form close associations with organisms from all three domains of life and therefore have many potential viral hosts. Yet viral communities associated with corals remain barely explored. This complexity presents a number of challenges in terms of the metagenomic assessments of coral viral communities and requires specialized methods for purification and amplification of viral nucleic acids, as well as virome annotation. In this minireview, we conduct a meta-analysis of the limited number of existing coral virome studies, as well as available coral transcriptome and metagenome data, to identify trends and potential complications inherent in different methods. The analysis shows that the method used for viral nucleic acid isolation drastically affects the observed viral assemblage and interpretation of the results. Further, the small number of viral reference genomes available, coupled with short sequence read lengths, might cause errors in virus identification. Despite these limitations and potential biases, the data show that viral communities associated with corals are diverse, with double- and single-stranded DNA and RNA viruses. The identified viruses are dominated by double-stranded DNA-tailed bacteriophages, but there are also viruses that infect eukaryote hosts, likely the endosymbiotic dinoflagellates, Symbiodinium spp., host coral and other eukaryotes in close association. © 2015 The Authors. Environmental Microbiology published by Society for Applied Microbiology and John Wiley & Sons Ltd.
Optimal number of spacers in CRISPR arrays
Severinov, Konstantin; Ispolatov, Iaroslav
2017-01-01
Prokaryotic organisms survive under constant pressure of viruses. The CRISPR-Cas system provides its prokaryotic host with an adaptive immune defense against viruses that have been previously encountered. It consists of two components: Cas proteins, which cleave the foreign DNA, and the CRISPR array, which serves as a virus recognition key. The CRISPR array consists of a series of spacers, short pieces of DNA that originate from and match the corresponding parts of viral DNA called protospacers. Here we estimate the number of spacers in a CRISPR array of a prokaryotic cell which maximizes its protection against a viral attack. The optimality follows from a competition between two trends: too few distinct spacers make the host vulnerable to an attack by a virus with mutated corresponding protospacers, while an excessive variety of spacers dilutes the number of the CRISPR complexes armed with the most recent and thus most useful spacers. We first evaluate the optimal number of spacers in a simple scenario of an infection by a single viral species and later consider a more general case of multiple viral species. We find that depending on such parameters as the concentration of CRISPR-Cas interference complexes and its preference to arm with more recently acquired spacers, the rate of viral mutation, and the number of viral species, the predicted optimal number of spacers lies within a range that agrees with experimentally observed values. PMID:29253874
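The trade-off described above can be caricatured with a toy protection function: more spacers reduce the chance that a mutated virus escapes every spacer, but dilute the interference complexes carrying any given spacer. The functional form and all parameter values below are our invention purely to show where an interior optimum comes from; they are not the paper's model.

```python
import math

def protection(n_spacers, escape_per_spacer=0.3, complexes=100.0):
    """Toy protection score: chance at least one spacer still matches the virus,
    discounted by dilution of interference complexes across spacers (illustrative)."""
    p_match = 1.0 - escape_per_spacer ** n_spacers        # some spacer is not yet escaped
    complexes_per_spacer = complexes / n_spacers           # dilution of armed complexes
    p_interfere = 1.0 - math.exp(-0.1 * complexes_per_spacer)
    return p_match * p_interfere

best = max(range(1, 51), key=protection)
print(best, round(protection(best), 3))   # interior optimum: neither 1 spacer nor 50 is best
```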
Vaccination of cattle against bovine viral diarrhea virus.
Newcomer, Benjamin W; Chamorro, Manuel F; Walz, Paul H
2017-07-01
Bovine viral diarrhea virus (BVDV) is responsible for significant losses to the cattle industry. Currently, modified-live viral (MLV) and inactivated viral vaccines are available against BVDV, often in combination with other viral and bacterial antigens. Inactivated and MLV vaccines provide cattle producers and veterinarians safe and efficacious options for herd immunization to limit disease associated with BVDV infection. Vaccination of young cattle against BVDV is motivated by prevention of clinical disease and limiting viral spread to susceptible animals. For reproductive-age cattle, vaccination to prevent viremia and birth of persistently infected offspring is considered more important, while also more difficult to achieve than prevention of clinical disease. Recent advances have been made in the understanding of BVDV vaccine efficacy. In terms of preventing clinical disease, current BVDV vaccines have been demonstrated to have a rapid onset of immunity and MLV vaccines can be effectively utilized in calves possessing maternal immunity. For reproductive protection, more recent studies using multivalent MLV vaccines have demonstrated consistent fetal protection rates in the range of 85-100% in experimental studies. Proper timing and administration of BVDV vaccines can be utilized to maximize vaccine efficacy to provide an important contribution to reducing risks associated with BVDV infection. With improvements in vaccine formulations and increased understanding of the protective immune response following vaccination, control of BVDV through vaccination can be enhanced. Copyright © 2017. Published by Elsevier B.V.
Harter, M L; Shanmugam, G; Wold, W S; Green, M
1976-01-01
(35S)methionine-labeled polypeptides synthesized by adenovirus type 2-infected cells have been analyzed by polyacrylamide gradient gel electrophoresis and autoradiography. Cycloheximide (CH) was added to infected cultures to accumulate early viral mRNA relative to host cell mRNA. This allowed viral proteins to be synthesized in increased amounts relative to host proteins after removal of CH and pulse-labeling with (35S)methionine. During the labeling period arabinosyl cytosine was added to prevent the synthesis of late viral proteins. This procedure facilitated the detection of six early viral-induced polypeptides, designated EP1 through EP6 (early protein), with apparent molecular weights of 75,000 (75K), 42K, 21K, 18K, 15K, and 11K. Supportive data were obtained by coelectrophoresis of (35S)- and (3H)methionine-labeled polypeptides from infected and uninfected cells, respectively. Three of these early polypeptides have not been previously reported. CH pretreatment enhanced the rates of synthesis of EP4 and EP6 20- to 30-fold and enhanced that of the others approximately twofold. The maximal rates of synthesis of the virus-induced proteins varied, in a different manner, with time postinfection and CH pretreatment. Since CH pretreatment appears to increase the levels of early viral proteins, it may be a useful procedure to assist their isolation and functional characterization. PMID:950686
Improved slow-light performance of 10 Gb/s NRZ, PSBT and DPSK signals in fiber broadband SBS.
Yi, Lilin; Jaouen, Yves; Hu, Weisheng; Su, Yikai; Bigo, Sébastien
2007-12-10
We have demonstrated error-free operation of slow light via stimulated Brillouin scattering (SBS) in optical fiber for 10-Gb/s signals with different modulation formats, including non-return-to-zero (NRZ), phase-shaped binary transmission (PSBT) and differential phase-shift keying (DPSK). The SBS gain bandwidth is broadened by using current noise modulation of the pump laser diode. The gain shape is simply controlled by the noise density function. Super-Gaussian noise modulation of the Brillouin pump allows a flat-top and sharp-edge SBS gain spectrum, which can reduce slow-light-induced distortion in the case of a 10-Gb/s NRZ signal. The corresponding maximal delay time with error-free operation is 35 ps. We then propose the PSBT format to minimize distortions resulting from the SBS filtering effect and the dispersion accompanying slow light, because of its high spectral efficiency and strong dispersion tolerance. The sensitivity of the 10-Gb/s PSBT signal is 5.2 dB better than the NRZ case with the same 35-ps delay. A maximal delay of 51 ps with error-free operation has been achieved. Furthermore, the DPSK format is directly demodulated through a Gaussian-shaped SBS gain, which is achieved using Gaussian-noise modulation of the Brillouin pump. The maximal error-free time delay after demodulation of a 10-Gb/s DPSK signal is as high as 81.5 ps, which is the best demonstrated result for 10-Gb/s slow light.
Error field optimization in DIII-D using extremum seeking control
NASA Astrophysics Data System (ADS)
Lanctot, M. J.; Olofsson, K. E. J.; Capella, M.; Humphreys, D. A.; Eidietis, N.; Hanson, J. M.; Paz-Soldan, C.; Strait, E. J.; Walker, M. L.
2016-07-01
DIII-D experiments have demonstrated a new real-time approach to tokamak error field control based on maximizing the toroidal angular momentum. This approach uses extremum seeking control theory to optimize the error field in real time without inducing instabilities. Slowly-rotating n = 1 fields (the dither), generated by external coils, are used to perturb the angular momentum, monitored in real-time using a charge-exchange spectroscopy diagnostic. Simple signal processing of the rotation measurements extracts information about the rotation gradient with respect to the control coil currents. This information is used to converge the control coil currents to a point that maximizes the toroidal angular momentum. The technique is well-suited for multi-coil, multi-harmonic error field optimizations in disruption sensitive devices as it does not require triggering locked tearing modes or plasma current disruptions. Control simulations highlight the importance of the initial search direction on the rate of the convergence, and identify future algorithm upgrades that may allow more rapid convergence that projects to convergence times in ITER on the order of tens of seconds.
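Extremum seeking of the kind described above can be sketched in a few lines: add a slow sinusoidal dither to the coil current, high-pass and demodulate the measured rotation with the same sinusoid to estimate the local gradient, and integrate that estimate so the current climbs toward the rotation maximum. The quadratic "plasma response", gains, and time scales below are made up for illustration; this is not a DIII-D model.

```python
import math

def rotation_response(current):
    """Stand-in for toroidal angular momentum vs. coil current (toy, peak at 2.0)."""
    return 50.0 - 4.0 * (current - 2.0) ** 2

current, gain, amp, omega, dt = 0.0, 0.2, 0.1, 2.0, 0.02
lowpass = rotation_response(current)
for k in range(20000):
    t = k * dt
    dither = amp * math.sin(omega * t)                 # slowly rotating probe field
    measured = rotation_response(current + dither)     # perturbed rotation measurement
    lowpass += 0.02 * (measured - lowpass)             # slow average (acts as a high-pass)
    gradient_est = (measured - lowpass) * math.sin(omega * t)   # demodulation
    current += gain * gradient_est * dt                # integrate: climb toward the maximum
print(round(current, 2))                               # should settle near the optimum, ~2.0
```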
Lin, Lixin; Wang, Yunjia; Teng, Jiyao; Xi, Xiuxiu
2015-07-23
The measurement of soil total nitrogen (TN) by hyperspectral remote sensing provides an important tool for soil restoration programs in areas with subsided land caused by the extraction of natural resources. This study used the local correlation maximization-complementary superiority method (LCMCS) to establish TN prediction models by considering the relationship between spectral reflectance (measured by an ASD FieldSpec 3 spectroradiometer) and TN, based on spectral reflectance curves of soil samples collected from subsided land delineated by synthetic aperture radar interferometry (InSAR) technology. Based on the 1655 selected effective bands of the optimal spectrum (OSP) of the first derivative of the reciprocal logarithm ([log{1/R}]'), (correlation coefficients, p < 0.01), the optimal LCMCS model was obtained as the final model, which produced lower prediction errors (root mean square error of validation [RMSEV] = 0.89, mean relative error of validation [MREV] = 5.93%) than models built by the local correlation maximization (LCM), complementary superiority (CS) and partial least squares regression (PLS) methods. The predictive performance of the LCMCS model was optimal in Cangzhou, Renqiu and Fengfeng District. Results indicate that the LCMCS method has great potential to monitor TN in subsided lands caused by the extraction of natural resources including groundwater, oil and coal.
Li, Deyu; Fedeles, Bogdan I; Singh, Vipender; Peng, Chunte Sam; Silvestre, Katherine J; Simi, Allison K; Simpson, Jeffrey H; Tokmakoff, Andrei; Essigmann, John M
2014-08-12
Viral lethal mutagenesis is a strategy whereby the innate immune system or mutagenic pool nucleotides increase the error rate of viral replication above the error catastrophe limit. Lethal mutagenesis has been proposed as a mechanism for several antiviral compounds, including the drug candidate 5-aza-5,6-dihydro-2'-deoxycytidine (KP1212), which causes A-to-G and G-to-A mutations in the HIV genome, both in tissue culture and in HIV positive patients undergoing KP1212 monotherapy. This work explored the molecular mechanism(s) underlying the mutagenicity of KP1212, and specifically whether tautomerism, a previously proposed hypothesis, could explain the biological consequences of this nucleoside analog. Establishing tautomerism of nucleic acid bases under physiological conditions has been challenging because of the lack of sensitive methods. This study investigated tautomerism using an array of spectroscopic, theoretical, and chemical biology approaches. Variable temperature NMR and 2D infrared spectroscopic methods demonstrated that KP1212 existed as a broad ensemble of interconverting tautomers, among which enolic forms dominated. The mutagenic properties of KP1212 were determined empirically by in vitro and in vivo replication of a single-stranded vector containing a single KP1212. It was found that KP1212 paired with both A (10%) and G (90%), which is in accord with clinical observations. Moreover, this mutation frequency is sufficient for pushing a viral population over its error catastrophe limit, as observed before in cell culture studies. Finally, a model is proposed that correlates the mutagenicity of KP1212 with its tautomeric distribution in solution.
Towards optimized methods to study viral impacts on soil microbial carbon cycling
NASA Astrophysics Data System (ADS)
Trubl, G. G.; Roux, S.; Jang, H. B.; Solonenko, N.; Sullivan, M. B.; Rich, V. I.
2016-12-01
Permafrost contains 50% of global soil carbon and is rapidly thawing. While the fate of this carbon is currently unknown, it will undoubtedly be shaped by microbes and their associated viruses, which modulate host activities via mortality and metabolic control. However, little is known about soil viruses generally and their impact on terrestrial biogeochemistry; this is partially due to the presence of inhibitory substances (e.g. humic acids) in soils that interfere with sample processing and sequence-based metagenomics surveys. To address this problem, we examined viral populations in three different peat soils along a permafrost thaw gradient. These samples yielded low viral DNA recoveries, and shallow metagenomic sequencing, but still resulted in the recovery of 40 viral genome fragments. Genome- and network-based classification suggested that these new references represented 11 viral clusters, and ecological patterns (based upon non-redundant fragment recruitment) showed that viral populations were distinct in each habitat. Although only 31% of the genes could be functionally classified, pairwise genome comparisons classified 63% of the viruses taxonomically. Additionally, comparison of the 40 viral genome fragments to 53 previously recovered fragments from the same site showed no overlap, suggesting only a small portion of the resident viral community has been sampled. A follow-up experiment was performed to remove more humics during extraction and thereby obtain better viral metagenomes. Three DNA extraction protocols were tested (CTAB, PowerSoil, and Wizard columns) and the DNA was further purified with an AMPure clean-up. The PowerSoil kit maximized DNA yield (3x CTAB and 6x Wizard), and yielded the purest DNA (based on NanoDrop 260:230 ratio). Given the important roles of viruses in biogeochemical cycles in better-studied systems, further research and humic-removal optimization on these thawing permafrost-associated viral communities is needed to clarify their involvement in carbon cycle feedbacks.
Inferring HIV-1 Transmission Dynamics in Germany From Recently Transmitted Viruses.
Pouran Yousef, Kaveh; Meixenberger, Karolin; Smith, Maureen R; Somogyi, Sybille; Gromöller, Silvana; Schmidt, Daniel; Gunsenheimer-Bartmeyer, Barbara; Hamouda, Osamah; Kücherer, Claudia; von Kleist, Max
2016-11-01
Although HIV continues to spread globally, novel intervention strategies such as treatment as prevention (TasP) may bring the epidemic to a halt. However, their effective implementation requires a profound understanding of the underlying transmission dynamics. We analyzed parameters of the German HIV epidemic based on phylogenetic clustering of viral sequences from recently infected seroconverters with known infection dates. Viral baseline and follow-up pol sequences (n = 1943) from 1159 drug-naïve individuals were selected from a nationwide long-term observational study initiated in 1997. Putative transmission clusters were computed based on a maximum likelihood phylogeny. Using individual follow-up sequences, we optimized our clustering threshold to maximize the likelihood of co-clustering individuals connected by direct transmission. The sizes of putative transmission clusters scaled inversely with their abundance and their distribution exhibited a heavy tail. Clusters based on the optimal clustering threshold were significantly more likely to contain members of the same or bordering German federal states. Interinfection times between co-clustered individuals were significantly shorter (26 weeks; interquartile range: 13-83) than in a null model. Viral intraindividual evolution may be used to select criteria that maximize co-clustering of transmission pairs in the absence of strong adaptive selection pressure. Interinfection times of co-clustered individuals may then be an indicator of the typical time to onward transmission. Our analysis suggests that onward transmission may have occurred early after infection, when individuals are typically unaware of their serological status. The latter argues that TasP should be combined with HIV testing campaigns to reduce the possibility of transmission before TasP initiation.
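The threshold optimization described above, choosing the genetic-distance cutoff that best co-clusters sequences known to belong together (here, baseline and follow-up sequences from the same patient), can be sketched with single-linkage clustering over a grid of thresholds. The scoring rule (reward within-patient co-clustering, penalize spurious merges), the distances, and the names are our own simplification, not the study's likelihood criterion.

```python
def single_linkage_clusters(names, dist, threshold):
    """Union-find single-linkage clustering: join sequences whose distance <= threshold."""
    parent = {n: n for n in names}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for (a, b), d in dist.items():
        if d <= threshold:
            parent[find(a)] = find(b)
    return {n: find(n) for n in names}

def threshold_score(names, dist, within_patient_pairs, threshold):
    """Reward co-clustering of known within-patient pairs, penalize spurious merges."""
    c = single_linkage_clusters(names, dist, threshold)
    wanted = set(map(frozenset, within_patient_pairs))
    hits = sum(c[a] == c[b] for a, b in within_patient_pairs)
    spurious = sum(c[a] == c[b] for a in names for b in names
                   if a < b and frozenset((a, b)) not in wanted)
    return hits - spurious

names = ["P1_base", "P1_follow", "P2_base", "P2_follow"]
dist = {("P1_base", "P1_follow"): 0.01, ("P2_base", "P2_follow"): 0.02,
        ("P1_base", "P2_base"): 0.08, ("P1_base", "P2_follow"): 0.08,
        ("P1_follow", "P2_base"): 0.09, ("P1_follow", "P2_follow"): 0.09}
pairs = [("P1_base", "P1_follow"), ("P2_base", "P2_follow")]
print(max([0.005, 0.015, 0.03, 0.1], key=lambda t: threshold_score(names, dist, pairs, t)))
```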
Remien, Robert H.; Bauman, Laurie J.; Mantell, Joanne; Tsoi, Benjamin; Lopez-Rios, Javier; Chhabra, Rosy; DiCarlo, Abby; Watnick, Dana; Rivera, Angelic; Teitelman, Nehama; Cutler, Blayne; Warne, Patricia
2015-01-01
Background Engagement in HIV care helps to maximize viral suppression, which, in turn, reduces morbidity and mortality and prevents further HIV transmission. With more HIV cases than any other US city, New York City reported in 2012 that only 41% of all persons estimated to be living with HIV (PLWH) had a suppressed viral load, while nearly three-quarters of those in clinical care achieved viral suppression. Thus, retaining PLWH in HIV care addresses this central goal of both the US National HIV/AIDS Strategy and Governor Cuomo's plan to end the AIDS epidemic in New York State. Methods We conducted 80 in-depth qualitative interviews with PLWH in four NYC populations that were identified as being inconsistently engaged in HIV medical care: African immigrants, previously incarcerated adults, transgender women, and young men who have sex with men. Results Barriers to and facilitators of HIV care engagement fell into three domains: (1) system factors (e.g., patient-provider relationship, social service agencies, transitions between penal system and community); (2) social factors (e.g., family and other social support; stigma related to HIV, substance use, sexual orientation, gender identity, and incarceration); and (3) individual factors (e.g., mental illness, substance use, resilience). Similarities and differences in these themes across the four populations as well as research and public health implications were identified. Conclusions Engagement in care is maximized when the social challenges confronted by vulnerable groups are addressed; patient-provider communication is strong; and coordinated services are available, including housing, mental health and substance use treatment, and peer navigation. PMID:25867774
Lee, Ji-Hye; Bae, Sun Young; Oh, Mi; Seok, Jong Hyeon; Kim, Sella; Chung, Yeon Bin; Gowda K, Giri; Mun, Ji Young; Chung, Mi Sook; Kim, Kyung Hyun
2016-06-01
Black raspberry seeds, a byproduct of wine and juice production, contain large quantities of polyphenolic compounds. The antiviral effects of black raspberry seed extract (RCS) and its fraction with molecular weight less than 1 kDa (RCS-F1) were examined against food-borne viral surrogates, murine norovirus-1 (MNV-1) and feline calicivirus-F9 (FCV-F9). The maximal antiviral effect was achieved when RCS or RCS-F1 was added simultaneously to cells with MNV-1 or FCV-F9, reaching complete inhibition at 0.1-1 mg/mL. Transmission electron microscopy (TEM) images showed enlarged viral capsids or disruption (from 35 nm to up to 100 nm) by RCS-F1. Our results thus suggest that RCS-F1 can interfere with the attachment of viral surface protein to host cells. Further, two polyphenolic compounds derived from RCS-F1, cyanidin-3-glucoside (C3G) and gallic acid, identified by liquid chromatography-tandem mass spectrometry, showed inhibitory effects against the viruses. C3G was suggested to bind to MNV-1 RNA polymerase and to enlarge viral capsids using differential scanning fluorimetry and TEM, respectively.
A Framework for Human Microbiome Research
2012-06-14
determined that many components of data production and processing can contribute errors and artefacts. We investigated methods that avoid these errors and...protocol that ensured consistency in the high-throughput production. To maximize accuracy and consistency, protocols were evaluated primarily using a...future benefits, this resource may promote the development of novel prophylactic strategies such as the application of prebiotics and probiotics to
NASA Astrophysics Data System (ADS)
Bhupal Dev, P. S.; Pilaftsis, Apostolos
2015-11-01
Here we correct some typesetting errors in ref. [1]. These corrections have been implemented in the latest version of [1] on arXiv and the corrected equations have also been reproduced in ref. [2] for the reader's convenience. We clarify that all numerical results presented in ref. [1] remain unaffected by these typographic errors.
Optimization of Error-Bounded Lossy Compression for Hard-to-Compress HPC Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di, Sheng; Cappello, Franck
Since today's scientific applications are producing vast amounts of data, compressing them before storage/transmission is critical. Results of existing compressors show two types of HPC data sets: highly compressible and hard to compress. In this work, we carefully design and optimize the error-bounded lossy compression for hard-to-compress scientific data. We propose an optimized algorithm that can adaptively partition the HPC data into best-fit consecutive segments each having mutually close data values, such that the compression condition can be optimized. Another significant contribution is the optimization of shifting offset such that the XOR-leading-zero length between two consecutive unpredictable data points can be maximized. We finally devise an adaptive method to select the best-fit compressor at runtime for maximizing the compression factor. We evaluate our solution using 13 benchmarks based on real-world scientific problems, and we compare it with 9 other state-of-the-art compressors. Experiments show that our compressor can always guarantee the compression errors within the user-specified error bounds. Most importantly, our optimization can improve the compression factor effectively, by up to 49% for hard-to-compress data sets with similar compression/decompression time cost.
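The XOR-leading-zero idea mentioned above, shifting two consecutive hard-to-predict values by a common offset so that their bit patterns agree in as many leading bits as possible, can be sketched on IEEE-754 doubles. The brute-force offset search and the sample values below are a naive illustration, not the paper's optimized algorithm.

```python
import struct

def leading_common_bits(a, b):
    """Number of identical leading bits between the IEEE-754 encodings of a and b."""
    xa = struct.unpack(">Q", struct.pack(">d", a))[0]
    xb = struct.unpack(">Q", struct.pack(">d", b))[0]
    x = xa ^ xb
    return 64 if x == 0 else 64 - x.bit_length()

def best_offset(prev, cur, candidates):
    """Pick the shifting offset that maximizes the XOR-leading-zero length
    between two consecutive unpredictable values (brute-force sketch)."""
    return max(candidates, key=lambda off: leading_common_bits(prev + off, cur + off))

prev, cur = 1.0000321, 1.0000458
offsets = [0.0, 0.25, 0.5, 1.0, 2.0]
off = best_offset(prev, cur, offsets)
print(off, leading_common_bits(prev, cur), leading_common_bits(prev + off, cur + off))
```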
NASA Astrophysics Data System (ADS)
Khanal, Manakamana; Barras, Alexandre; Vausselin, Thibaut; Fénéant, Lucie; Boukherroub, Rabah; Siriwardena, Aloysius; Dubuisson, Jean; Szunerits, Sabine
2015-01-01
The search for viral entry inhibitors that selectively target viral envelope glycoproteins has attracted increasing interest in recent years. Amongst the handful of molecules reported to show activity as hepatitis C virus (HCV) entry inhibitors are a variety of glycan-binding proteins including the lectins, cyanovirin-N (CV-N) and griffithsin. We recently demonstrated that boronic acid-modified nanoparticles are able to reduce HCV entry through a similar mechanism to that of lectins. A major obstacle to any further development of these nanostructures as viral entry inhibitors is their only moderate maximal inhibition potential. In the present study, we report that lipid nanocapsules (LNCs), surface-functionalized with amphiphilic boronic acid (BA) through their post-insertion into the semi-rigid shell of the LNCs, are indeed far superior as HCV entry inhibitors when compared with previously reported nanostructures. These 2nd generation particles (BA-LNCs) are shown to prevent HCV infection in the micromolar range (IC50 = 5.4 μM of BA moieties), whereas the corresponding BA monomers show no significant effects even at the highest analyzed concentration (20 μM). The new BA-LNCs are the most promising boronolectin-based HCV entry inhibitors reported to date and are thus observed to show great promise in the development of a pseudolectin-based therapeutic agent.
VirSorter: mining viral signal from microbial genomic data.
Roux, Simon; Enault, Francois; Hurwitz, Bonnie L; Sullivan, Matthew B
2015-01-01
Viruses of microbes impact all ecosystems where microbes drive key energy and substrate transformations including the oceans, humans and industrial fermenters. However, despite this recognized importance, our understanding of viral diversity and impacts remains limited by too few model systems and reference genomes. One way to fill these gaps in our knowledge of viral diversity is through the detection of viral signal in microbial genomic data. While multiple approaches have been developed and applied for the detection of prophages (viral genomes integrated in a microbial genome), new types of microbial genomic data are emerging that are more fragmented and larger scale, such as Single-cell Amplified Genomes (SAGs) of uncultivated organisms or genomic fragments assembled from metagenomic sequencing. Here, we present VirSorter, a tool designed to detect viral signal in these different types of microbial sequence data in both a reference-dependent and reference-independent manner, leveraging probabilistic models and extensive virome data to maximize detection of novel viruses. Performance testing shows that VirSorter's prophage prediction capability compares to that of available prophage predictors for complete genomes, but is superior in predicting viral sequences outside of a host genome (i.e., from extrachromosomal prophages, lytic infections, or partially assembled prophages). Furthermore, VirSorter outperforms existing tools for fragmented genomic and metagenomic datasets, and can identify viral signal in assembled sequence (contigs) as short as 3kb, while providing near-perfect identification (>95% Recall and 100% Precision) on contigs of at least 10kb. Because VirSorter scales to large datasets, it can also be used in "reverse" to more confidently identify viral sequence in viral metagenomes by sorting away cellular DNA whether derived from gene transfer agents, generalized transduction or contamination. Finally, VirSorter is made available through the iPlant Cyberinfrastructure that provides a web-based user interface interconnected with the required computing resources. VirSorter thus complements existing prophage prediction softwares to better leverage fragmented, SAG and metagenomic datasets in a way that will scale to modern sequencing. Given these features, VirSorter should enable the discovery of new viruses in microbial datasets, and further our understanding of uncultivated viral communities across diverse ecosystems.
Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M
2018-04-01
A rough estimate indicated that the use of samples of size not larger than ten is not uncommon in biomedical research and that many such studies are limited to strong effects due to sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown whether the mathematical requirements incorporated in the sample comparison methods are satisfied. Computer-simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. A sample size of 9 and the t-test method with p = 5% ensured an error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is provided by the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
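A miniature version of the kind of computer-simulated experiment described above: draw many control/exposed sample pairs of a given size, apply a two-sample t-test at roughly p = 5%, and count Type I errors (an effect declared when none exists) and Type II errors (a weak effect missed). The effect size, distributions, and the crude |t| > 2 threshold are arbitrary illustrations, not the study's simulation design.

```python
import random, statistics, math

def t_test_significant(a, b):
    """Two-sample Welch t statistic; crude |t| > 2 threshold (~5%) for a sketch."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    t = (statistics.mean(a) - statistics.mean(b)) / math.sqrt(va / na + vb / nb)
    return abs(t) > 2.0

def error_rates(n, effect, trials=2000):
    """Monte Carlo estimates of Type I and Type II error for sample size n."""
    random.seed(0)
    type1 = sum(t_test_significant([random.gauss(0, 1) for _ in range(n)],
                                   [random.gauss(0, 1) for _ in range(n)])
                for _ in range(trials)) / trials
    type2 = sum(not t_test_significant([random.gauss(0, 1) for _ in range(n)],
                                       [random.gauss(effect, 1) for _ in range(n)])
                for _ in range(trials)) / trials
    return type1, type2

for n in (3, 6, 9):
    print(n, error_rates(n, effect=1.5))   # Type II error shrinks markedly as n grows
```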
Li, Deyu; Fedeles, Bogdan I.; Singh, Vipender; Peng, Chunte Sam; Silvestre, Katherine J.; Simi, Allison K.; Simpson, Jeffrey H.; Tokmakoff, Andrei; Essigmann, John M.
2014-01-01
Viral lethal mutagenesis is a strategy whereby the innate immune system or mutagenic pool nucleotides increase the error rate of viral replication above the error catastrophe limit. Lethal mutagenesis has been proposed as a mechanism for several antiviral compounds, including the drug candidate 5-aza-5,6-dihydro-2′-deoxycytidine (KP1212), which causes A-to-G and G-to-A mutations in the HIV genome, both in tissue culture and in HIV positive patients undergoing KP1212 monotherapy. This work explored the molecular mechanism(s) underlying the mutagenicity of KP1212, and specifically whether tautomerism, a previously proposed hypothesis, could explain the biological consequences of this nucleoside analog. Establishing tautomerism of nucleic acid bases under physiological conditions has been challenging because of the lack of sensitive methods. This study investigated tautomerism using an array of spectroscopic, theoretical, and chemical biology approaches. Variable temperature NMR and 2D infrared spectroscopic methods demonstrated that KP1212 existed as a broad ensemble of interconverting tautomers, among which enolic forms dominated. The mutagenic properties of KP1212 were determined empirically by in vitro and in vivo replication of a single-stranded vector containing a single KP1212. It was found that KP1212 paired with both A (10%) and G (90%), which is in accord with clinical observations. Moreover, this mutation frequency is sufficient for pushing a viral population over its error catastrophe limit, as observed before in cell culture studies. Finally, a model is proposed that correlates the mutagenicity of KP1212 with its tautomeric distribution in solution. PMID:25071207
Quantification of type I error probabilities for heterogeneity LOD scores.
Abreu, Paula C; Hodge, Susan E; Greenberg, David A
2002-02-01
Locus heterogeneity is a major confounding factor in linkage analysis. When no prior knowledge of linkage exists, and one aims to detect linkage and heterogeneity simultaneously, classical distribution theory of log-likelihood ratios does not hold. Despite some theoretical work on this problem, no generally accepted practical guidelines exist. Nor has anyone rigorously examined the combined effect of testing for linkage and heterogeneity and simultaneously maximizing over two genetic models (dominant, recessive). The effect of linkage phase represents another uninvestigated issue. Using computer simulation, we investigated type I error (P value) of the "admixture" heterogeneity LOD (HLOD) score, i.e., the LOD score maximized over both recombination fraction θ and admixture parameter α, and we compared this with the P values when one maximizes only with respect to θ (i.e., the standard LOD score). We generated datasets of phase-known and -unknown nuclear families, sizes k = 2, 4, and 6 children, under fully penetrant autosomal dominant inheritance. We analyzed these datasets (1) assuming a single genetic model, and maximizing the HLOD over θ and α; and (2) maximizing the HLOD additionally over two dominance models (dominant vs. recessive), then subtracting a 0.3 correction. For both (1) and (2), P values increased with family size k; rose less for phase-unknown families than for phase-known ones, with the former approaching the latter as k increased; and did not exceed the one-sided mixture distribution ξ = (1/2)χ₁² + (1/2)χ₂². Thus, maximizing the HLOD over θ and α appears to add considerably less than an additional degree of freedom to the associated χ₁² distribution. We conclude with practical guidelines for linkage investigators. Copyright 2002 Wiley-Liss, Inc.
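For readers who want to apply the reported bound, the following sketch evaluates a P value cap from the one-sided mixture distribution ξ = (1/2)χ₁² + (1/2)χ₂². The conversion of an HLOD to a likelihood-ratio scale via 2·ln(10)·HLOD is a standard relation, but treat the snippet as an illustrative sketch rather than the authors' procedure.

```python
import math
from scipy.stats import chi2

def hlod_pvalue_bound(hlod: float) -> float:
    """Upper bound on the P value for an observed HLOD, using the
    one-sided mixture (1/2)*chi-square(1 df) + (1/2)*chi-square(2 df)."""
    lrt = 2.0 * math.log(10.0) * hlod  # convert LOD units to a chi-square scale
    return 0.5 * chi2.sf(lrt, df=1) + 0.5 * chi2.sf(lrt, df=2)

print(hlod_pvalue_bound(3.0))  # example: bound for HLOD = 3
```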
While evidence now supports a causal link between maternal Zika viral infection and microcephaly, genetic errors and chemical stressors may also precipitate this malformation through disruption of neuroprogenitor cell (NPC) proliferation, migration and differentiation in the earl...
The Effect of Image Apodization on Global Mode Parameters and Rotational Inversions
NASA Astrophysics Data System (ADS)
Larson, Tim; Schou, Jesper
2016-10-01
It has long been known that certain systematic errors in the global mode analysis of data from both MDI and HMI depend on how the input images were apodized. Recently, while investigating a six-month periodicity in f-mode frequencies, it has come to light that mode coverage is highest when B0 is maximal. Recalling that the leakage matrix is calculated in the approximation that B0=0, it comes as a surprise that more modes are fitted when the leakage matrix is most incorrect. It is now believed that the six-month oscillation has primarily to do with what portion of the solar surface is visible. Other systematic errors that depend on the part of the disk used include high-latitude anomalies in the rotation rate and a prominent feature in the normalized residuals of odd a-coefficients. Although the most likely cause of all these errors is errors in the leakage matrix, extensive recalculation of the leaks has not made any difference. Thus we conjecture that another effect may be at play, such as errors in the noise model or one that has to do with the alignment of the apodization with the spherical harmonics. In this poster we explore how differently shaped apodizations affect the results of inversions for internal rotation, for both maximal and minimal absolute values of B0.
Conformation of Tax-response elements in the human T-cell leukemia virus type I promoter.
Cox, J M; Sloan, L S; Schepartz, A
1995-12-01
HTLV-I Tax is believed to activate viral gene expression by binding bZIP proteins (such as CREB) and increasing their affinities for proviral TRE target sites. Each 21 bp TRE target site contains an imperfect copy of the intrinsically bent CRE target site (the TRE core) surrounded by highly conserved flanking sequences. These flanking sequences are essential for maximal increases in DNA affinity and transactivation, but they are not, apparently, contacted by protein. Here we employ non-denaturing gel electrophoresis to evaluate TRE conformation in the presence and absence of bZIP proteins, and to explore the role of DNA conformation in viral transactivation. Our results show that the TRE-1 flanking sequences modulate the structure and modestly increase the affinity of a CREB bZIP peptide for the TRE-1 core recognition sequence. These flanking sequences are also essential for a maximal increase in stability of the CREB-DNA complex in the presence of Tax. The CRE-like TRE core and the TRE flanking sequences are both essential for formation of stable CREB-TRE-1 and Tax-CREB-TRE-1 complexes. These two DNA segments may have co-evolved into a unique structure capable of recognizing Tax and a bZIP protein.
NASA Astrophysics Data System (ADS)
Gao, Cheng-Yan; Wang, Guan-Yu; Zhang, Hao; Deng, Fu-Guo
2017-01-01
We present a self-error-correction spatial-polarization hyperentanglement distribution scheme for N-photon systems in a hyperentangled Greenberger-Horne-Zeilinger state over arbitrary collective-noise channels. In our scheme, the errors of spatial entanglement can be first averted by encoding the spatial-polarization hyperentanglement into the time-bin entanglement with identical polarization and defined spatial modes before it is transmitted over the fiber channels. After transmission over the noisy channels, the polarization errors introduced by the depolarizing noise can be corrected resorting to the time-bin entanglement. Finally, the parties in quantum communication can in principle share maximally hyperentangled states with a success probability of 100%.
Adverse effects in dual-feed interferometry
NASA Astrophysics Data System (ADS)
Colavita, M. Mark
2009-11-01
Narrow-angle dual-star interferometric astrometry can provide very high accuracy in the presence of the Earth's turbulent atmosphere. However, to exploit the high atmospherically-limited accuracy requires control of systematic errors in measurement of the interferometer baseline, internal OPDs, and fringe phase. In addition, as high photometric SNR is required, care must be taken to maximize throughput and coherence to obtain high accuracy on faint stars. This article reviews the key aspects of the dual-star approach and implementation, the main contributors to the systematic error budget, and the coherence terms in the photometric error budget.
Innate host barriers to viral trafficking and population diversity: Lessons learned from poliovirus
Pfeiffer, Julie K.
2011-01-01
Poliovirus is an error-prone enteric virus spread by the fecal-oral route, and rarely invades the central nervous system (CNS). However, in the rare instances when poliovirus invades the CNS, the resulting damage to motor neurons is striking and often permanent. In the pre-vaccine era, it is likely that most individuals within an epidemic community were infected; however, only 0.5% of infected individuals developed paralytic poliomyelitis. Paralytic poliomyelitis terrified the public and initiated a huge research effort, which was rewarded with two outstanding vaccines. During research to develop the vaccines, many questions were asked: Why did certain people develop paralysis? How does the virus move from the gut to the CNS? What limits viral trafficking to the CNS in the vast majority of infected individuals? Despite over 100 years of poliovirus research, many of these questions remain unanswered. The goal of this chapter is to review our knowledge of how poliovirus moves within and between hosts, how host barriers limit viral movement, how viral population dynamics impact viral fitness and virulence, and to offer hypotheses to explain the rare incidence of paralytic poliovirus disease. PMID:20951871
Ho, Cynthia K. Y.; Raghwani, Jayna; Koekkoek, Sylvie; Liang, Richard H.; Van der Meer, Jan T. M.; Van Der Valk, Marc; De Jong, Menno; Pybus, Oliver G.
2016-01-01
ABSTRACT In contrast to other available next-generation sequencing platforms, PacBio single-molecule, real-time (SMRT) sequencing has the advantage of generating long reads albeit with a relatively higher error rate in unprocessed data. Using this platform, we longitudinally sampled and sequenced the hepatitis C virus (HCV) envelope genome region (1,680 nucleotides [nt]) from individuals belonging to a cluster of sexually transmitted cases. All five subjects were coinfected with HIV-1 and a closely related strain of HCV genotype 4d. In total, 50 samples were analyzed by using SMRT sequencing. By using 7 passes of circular consensus sequencing, the error rate was reduced to 0.37%, and the median number of sequences was 612 per sample. A further reduction of insertions was achieved by alignment against a sample-specific reference sequence. However, in vitro recombination during PCR amplification could not be excluded. Phylogenetic analysis supported close relationships among HCV sequences from the four male subjects and subsequent transmission from one subject to his female partner. Transmission was characterized by a strong genetic bottleneck. Viral genetic diversity was low during acute infection and increased upon progression to chronicity but subsequently fluctuated during chronic infection, caused by the alternate detection of distinct coexisting lineages. SMRT sequencing combines long reads with sufficient depth for many phylogenetic analyses and can therefore provide insights into within-host HCV evolutionary dynamics without the need for haplotype reconstruction using statistical algorithms. IMPORTANCE Next-generation sequencing has revolutionized the study of genetically variable RNA virus populations, but for phylogenetic and evolutionary analyses, longer sequences than those generated by most available platforms, while minimizing the intrinsic error rate, are desired. Here, we demonstrate for the first time that PacBio SMRT sequencing technology can be used to generate full-length HCV envelope sequences at the single-molecule level, providing a data set with large sequencing depth for the characterization of intrahost viral dynamics. The selection of consensus reads derived from at least 7 full circular consensus sequencing rounds significantly reduced the intrinsic high error rate of this method. We used this method to genetically characterize a unique transmission cluster of sexually transmitted HCV infections, providing insight into the distinct evolutionary pathways in each patient over time and identifying the transmission-associated genetic bottleneck as well as fluctuations in viral genetic diversity over time, accompanied by dynamic shifts in viral subpopulations. PMID:28077634
Casillas, Jean-Marie; Joussain, Charles; Gremeaux, Vincent; Hannequin, Armelle; Rapin, Amandine; Laurent, Yves; Benaïm, Charles
2015-02-01
To develop a new predictive model of maximal heart rate based on two walking tests at different speeds (comfortable and brisk walking) as an alternative to a cardiopulmonary exercise test during cardiac rehabilitation. Evaluation of a clinical assessment tool. A Cardiac Rehabilitation Department in France. A total of 148 patients (133 men), mean age of 59 ± 9 years, at the end of an outpatient cardiac rehabilitation programme. Patients successively performed a 6-minute walk test, a 200 m fast-walk test (200mFWT), and a cardiopulmonary exercise test, with measurement of heart rate at the end of each test. An all-possible regression procedure was used to determine the best predictive regression models of maximal heart rate. The best model was compared with the Fox equation in terms of predictive error of maximal heart rate using the paired t-test. Results of the two walking tests correlated significantly with maximal heart rate determined during the cardiopulmonary exercise test, whereas anthropometric parameters and resting heart rate did not. The simplified predictive model with the most acceptable mean error was: maximal heart rate = 130 - 0.6 × age + 0.3 × HR200mFWT (R² = 0.24). This model was superior to the Fox formula (R² = 0.138). The relationship between training target heart rate calculated from measured reserve heart rate and that established using this predictive model was statistically significant (r = 0.528, p < 10⁻⁶). A formula combining heart rate measured during a safe, simple fast-walk test and age is more efficient than an equation only including age to predict maximal heart rate and training target heart rate. © The Author(s) 2014.
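The simplified model quoted above can be applied directly; the sketch below assumes HR200mFWT is the heart rate (beats/min) measured at the end of the 200 m fast-walk test, and the example inputs are hypothetical.

```python
def predicted_hr_max(age_years: float, hr_200m_fwt: float) -> float:
    """Simplified model from the abstract: HRmax = 130 - 0.6*age + 0.3*HR200mFWT.
    hr_200m_fwt is the heart rate (beats/min) at the end of the 200 m fast-walk test."""
    return 130.0 - 0.6 * age_years + 0.3 * hr_200m_fwt

# Example: a 59-year-old patient finishing the 200mFWT at 110 beats/min
print(predicted_hr_max(59, 110))  # ~127.6 beats/min
```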
Influence maximization in social networks under an independent cascade-based model
NASA Astrophysics Data System (ADS)
Wang, Qiyao; Jin, Yuehui; Lin, Zhen; Cheng, Shiduan; Yang, Tan
2016-02-01
The rapid growth of online social networks is important for viral marketing. Influence maximization refers to the process of finding influential users who maximize the spread of information or product adoption. An independent cascade-based model for influence maximization, called IMIC-OC, was proposed to calculate positive influence. We assumed that influential users spread positive opinions. At the beginning, users held positive or negative opinions as their initial opinions. When more users became involved in the discussions, users balanced their own opinions and those of their neighbors. The number of users who did not change their positive opinions was used to determine positive influence. Corresponding influential users who had maximum positive influence were then obtained. Experiments were conducted on three real networks, namely, Facebook, HEP-PH and Epinions, to calculate maximum positive influence based on the IMIC-OC model and two other baseline methods. The proposed model resulted in larger positive influence, thus indicating better performance compared with the baseline methods.
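For context, the sketch below shows a generic Monte Carlo estimate of expected spread under the standard independent cascade model on which IMIC-OC builds; it does not implement the positive/negative opinion balancing of IMIC-OC itself, and the toy graph and activation probabilities are hypothetical.

```python
import random

def independent_cascade(graph, seeds, trials=1000):
    """Estimate expected spread under the standard independent cascade model.
    graph: dict mapping node -> list of (neighbor, activation_probability).
    seeds: initially active nodes. Returns the average number of activated nodes."""
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v, p in graph.get(u, []):
                    if v not in active and random.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / trials

toy = {"a": [("b", 0.5), ("c", 0.2)], "b": [("c", 0.5)], "c": []}
print(independent_cascade(toy, {"a"}))
```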
Verbist, Bie; Clement, Lieven; Reumers, Joke; Thys, Kim; Vapirev, Alexander; Talloen, Willem; Wetzels, Yves; Meys, Joris; Aerssens, Jeroen; Bijnens, Luc; Thas, Olivier
2015-02-22
Deep-sequencing allows for an in-depth characterization of sequence variation in complex populations. However, technology associated errors may impede a powerful assessment of low-frequency mutations. Fortunately, base calls are complemented with quality scores which are derived from a quadruplet of intensities, one channel for each nucleotide type for Illumina sequencing. The highest intensity of the four channels determines the base that is called. Mismatch bases can often be corrected by the second best base, i.e. the base with the second highest intensity in the quadruplet. A virus variant model-based clustering method, ViVaMBC, is presented that explores quality scores and second best base calls for identifying and quantifying viral variants. ViVaMBC is optimized to call variants at the codon level (nucleotide triplets) which enables immediate biological interpretation of the variants with respect to their antiviral drug responses. Using mixtures of HCV plasmids we show that our method accurately estimates frequencies down to 0.5%. The estimates are unbiased when average coverages of 25,000 are reached. A comparison with the SNP-callers V-Phaser2, ShoRAH, and LoFreq shows that ViVaMBC has a superb sensitivity and specificity for variants with frequencies above 0.4%. Unlike the competitors, ViVaMBC reports a higher number of false-positive findings with frequencies below 0.4% which might partially originate from picking up artificial variants introduced by errors in the sample and library preparation step. ViVaMBC is the first method to call viral variants directly at the codon level. The strength of the approach lies in modeling the error probabilities based on the quality scores. Although the use of second best base calls appeared very promising in our data exploration phase, their utility was limited. They provided a slight increase in sensitivity, which however does not warrant the additional computational cost of running the offline base caller. Apparently a lot of information is already contained in the quality scores enabling the model based clustering procedure to adjust the majority of the sequencing errors. Overall the sensitivity of ViVaMBC is such that technical constraints like PCR errors start to form the bottleneck for low frequency variant detection.
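ViVaMBC's central idea of modeling error probabilities from quality scores rests on the standard Phred relation Q = -10·log10(p_error); the sketch below illustrates that conversion and a naive codon-level extension under an independence assumption that is ours, not necessarily the ViVaMBC model.

```python
def phred_to_error_prob(q: float) -> float:
    """Standard Phred relationship: Q = -10*log10(p_error)."""
    return 10 ** (-q / 10.0)

def codon_correct_prob(quals) -> float:
    """Probability that all three bases of a codon are called correctly,
    assuming independent per-base errors (illustrative assumption)."""
    p = 1.0
    for q in quals:
        p *= 1.0 - phred_to_error_prob(q)
    return p

print(phred_to_error_prob(30))          # 0.001
print(codon_correct_prob([30, 35, 20])) # joint correctness of a codon
```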
Synchronization Design and Error Analysis of Near-Infrared Cameras in Surgical Navigation.
Cai, Ken; Yang, Rongqian; Chen, Huazhou; Huang, Yizhou; Wen, Xiaoyan; Huang, Wenhua; Ou, Shanxing
2016-01-01
The accuracy of optical tracking systems is important to scientists. With the improvements reported in this regard, such systems have been applied to an increasing number of operations. To enhance the accuracy of these systems further and to reduce the effect of synchronization and visual field errors, this study introduces a field-programmable gate array (FPGA)-based synchronization control method, a method for measuring synchronous errors, and an error distribution map in the field of view. Synchronization control maximizes the parallel processing capability of the FPGA, and synchronous error measurement can effectively detect the errors caused by synchronization in an optical tracking system. The distribution of positioning errors can be detected in the field of view through the aforementioned error distribution map. Therefore, doctors can perform surgeries in areas with few positioning errors, and the accuracy of optical tracking systems is considerably improved.
AAV viral vector delivery to the brain by shape-conforming MR-guided infusions.
Bankiewicz, Krystof S; Sudhakar, Vivek; Samaranch, Lluis; San Sebastian, Waldy; Bringas, John; Forsayeth, John
2016-10-28
Gene transfer technology offers great promise as a potential therapeutic approach to the brain but has to be viewed as a very complex technology. Success of ongoing clinical gene therapy trials depends on many factors such as selection of the correct genetic and anatomical target in the brain. In addition, selection of the viral vector capable of transfer of therapeutic gene into target cells, along with long-term expression that avoids immunotoxicity has to be established. As with any drug development strategy, delivery of gene therapy has to be consistent and predictable in each study subject. Failed drug and vector delivery will lead to failed clinical trials. In this article, we describe our experience with AAV viral vector delivery system, that allows us to optimize and monitor in real time viral vector administration into affected regions of the brain. In addition to discussing MRI-guided technology for administration of AAV vectors we have developed and now employ in current clinical trials, we also describe ways in which infusion cannula design and stereotactic trajectory may be used to maximize the anatomical coverage by using fluid backflow. This innovative approach enables more precise coverage by fitting the shape of the infusion to the shape of the anatomical target. Copyright © 2016 Elsevier B.V. All rights reserved.
BVDV vaccination in North America: risks versus benefits.
Griebel, Philip J
2015-06-01
The control and prevention of bovine viral diarrhea virus (BVDV) infections has provided substantial challenges. Viral genetic variation, persistent infections, and viral tropism for immune cells have complicated disease control strategies. Vaccination has, however, provided an effective tool to prevent acute systemic infections and increase reproductive efficiency through fetal protection. There has been substantial controversy about the safety and efficacy of BVDV vaccines, especially when comparing killed versus modified-live viral (MLV) vaccines. Furthermore, numerous vaccination protocols have been proposed to protect the fetus and ensure maternal antibody transfer to the calf. These issues have been further complicated by reports of immune suppression during natural infections and following vaccination. While killed BVDV vaccines provide the greatest safety, their limited immunogenicity makes multiple vaccinations necessary. In contrast, MLV BVDV vaccines induce a broader range of immune responses with a longer duration of immunity, but require strategic vaccination to minimize potential risks. Vaccination strategies for breeding females and young calves, in the face of maternal antibody, are discussed. With intranasal vaccination of young calves it is possible to avoid maternal antibody interference and induce immune memory that persists for 6-8 months. Thus, with an integrated vaccination protocol for both breeding cows and calves it is possible to maximize disease protection while minimizing vaccine risks.
NASA Astrophysics Data System (ADS)
Pernot, Pascal; Savin, Andreas
2018-06-01
Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
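A minimal sketch of the two advocated statistics, computed from the empirical cumulative distribution of unsigned errors; the threshold, confidence level, and toy error sample are illustrative assumptions.

```python
import numpy as np

def prob_abs_error_below(errors, threshold):
    """Empirical probability that a new calculation has |error| < threshold."""
    e = np.abs(np.asarray(errors, dtype=float))
    return float(np.mean(e < threshold))

def max_error_at_confidence(errors, confidence=0.95):
    """Amplitude of |error| not exceeded with the chosen confidence level,
    i.e. the empirical quantile of unsigned errors."""
    e = np.abs(np.asarray(errors, dtype=float))
    return float(np.quantile(e, confidence))

errors = np.random.default_rng(0).normal(0.5, 2.0, size=500)  # toy, non-zero-centered errors
print(prob_abs_error_below(errors, 1.0))
print(max_error_at_confidence(errors, 0.95))
```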
A simple method for measurement of maximal downstroke power on friction-loaded cycle ergometer.
Morin, Jean-Benoît; Belli, Alain
2004-01-01
The aim of this study was to propose and validate a post-hoc correction method to obtain maximal power values taking into account the inertia of the flywheel during sprints on friction-loaded cycle ergometers. This correction method was obtained from a basic postulate of linear deceleration-time evolution during the initial phase (until maximal power) of a sprint and included simple parameters such as flywheel inertia, maximal velocity, time to reach maximal velocity and friction force. The validity of this model was tested by comparing measured and calculated maximal power values for 19 sprint bouts performed by five subjects against 0.6-1 N kg⁻¹ friction loads. Non-significant differences between measured and calculated maximal power (1151 ± 169 vs. 1148 ± 170 W) and a mean error index of 1.31 ± 1.20% (ranging from 0.09% to 4.20%) showed the validity of this method. Furthermore, the differences between measured maximal power and power neglecting inertia (20.4 ± 7.6%, ranging from 9.5% to 33.2%) emphasized the usefulness of correcting power in studies of anaerobic power that do not account for inertia, and also the interest of this simple post-hoc method.
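The abstract does not reproduce the authors' closed-form correction, but the underlying physics is that total power equals friction power plus the rate of change of the flywheel's kinetic energy. The sketch below applies that generic relation to a sampled velocity trace; it is not the paper's published equation, and all parameter values are hypothetical.

```python
import numpy as np

def corrected_max_power(t, v, friction_force, flywheel_inertia, radius):
    """Peak mechanical power including the power stored in the flywheel.
    t, v: sampled time (s) and flywheel rim velocity (m/s); friction_force in N;
    flywheel_inertia in kg*m^2; radius = flywheel radius (m).
    P(t) = F_friction*v + I*omega*d(omega)/dt, with omega = v/radius."""
    t = np.asarray(t, float)
    v = np.asarray(v, float)
    omega = v / radius
    alpha = np.gradient(omega, t)  # angular acceleration
    power = friction_force * v + flywheel_inertia * omega * alpha
    return float(power.max())

# toy velocity trace rising toward a plateau (illustrative values only)
t = np.linspace(0, 5, 200)
v = 10 * (1 - np.exp(-t / 1.5))
print(corrected_max_power(t, v, friction_force=70.0, flywheel_inertia=0.9, radius=0.26))
```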
Rich, Katherine M; Valencia Huamaní, Javier; Kiani, Sara N; Cabello, Robinson; Elish, Paul; Florez Arce, Jorge; Pizzicato, Lia N; Soria, Jaime; Wickersham, Jeffrey A; Sanchez, Jorge; Altice, Frederick L
2018-05-30
In Peru, HIV is concentrated among men who have sex with men (MSM) and transgender women (TGW). Between June 2015 and August 2016, 591 HIV-positive MSM and TGW were recruited at five clinical care sites in Lima, Peru. We found that 82.4% of the participants had achieved viral suppression (VS; VL < 200) and 73.6% had achieved maximal viral suppression (MVS; VL < 50). Multivariable modeling indicated that patients reporting transportation as a barrier to HIV care were less likely to achieve VS (aOR = 0.47; 95% CI = 0.30-0.75) and MVS (aOR = 0.56; 95% CI = 0.37-0.84). Alcohol use disorders were negatively associated with MVS (aOR = 0.62; 95% CI = 0.30-0.75) and age was positively associated with achieving MVS (aOR = 1.29; 95% CI = 1.04-1.59). These findings underscore the need for more accessible HIV care with integrated behavioral health services in Lima, Peru.
Viral abundance distribution in deep waters of the northern South China Sea
NASA Astrophysics Data System (ADS)
He, Lei; Yin, Kedong
2017-04-01
Little is known about the vertical distribution and interaction of viruses and bacteria in the deep ocean water column. The vertical distribution of virus-like particles and bacterial abundance was investigated in the deep water column in the South China Sea during September 2005, along with salinity, temperature and dissolved oxygen. There were two maxima in the ratio of viral to bacterial abundance (VBR) in the water column: the subsurface maximum located at 50-100 m near the pycnocline layer, and the deep maximum at 800-1000 m. At the subsurface maximum of VBR, both viral and bacterial abundance were maximal in the water column, and at the deep maximum of VBR, both viral and bacterial abundance were low, but bacterial abundance was relatively lower than viral abundance. The subsurface VBR maximum coincided with the subsurface chlorophyll maximum while the deep VBR maximum coincided with the minimum in dissolved oxygen (2.91 mg L⁻¹). Therefore, we hypothesize that the two maxima were formed by different mechanisms. The subsurface VBR maximum was formed due to an increase in bacterial abundance resulting from stimulation by the abundant organic supply at the subsurface chlorophyll maximum, whereas the deep VBR maximum was formed due to a decrease in bacterial abundance caused by stronger limitation of organic matter at the oxygen minimum. The evidence suggests that viruses play an important role in controlling bacterial abundance in the deep water column due to the limitation of organic matter supply. In turn, this slows the formation of the oxygen minimum, where oxygen would otherwise be even lower. An important implication of this mechanism is that viruses could control bacterial decomposition of organic matter, oxygen consumption and nutrient remineralization in the deep oceans.
Practical scheme for error control using feedback
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarovar, Mohan; Milburn, Gerard J.; Ahn, Charlene
2004-05-01
We describe a scheme for quantum-error correction that employs feedback and weak measurement rather than the standard tools of projective measurement and fast controlled unitary gates. The advantage of this scheme over previous protocols [for example, Ahn et al. Phys. Rev. A 65, 042301 (2001)], is that it requires little side processing while remaining robust to measurement inefficiency, and is therefore considerably more practical. We evaluate the performance of our scheme by simulating the correction of bit flips. We also consider implementation in a solid-state quantum-computation architecture and estimate the maximal error rate that could be corrected with current technology.
Yang, Shu; Xu, Miao; Lee, Emily M; Gorshkov, Kirill; Shiryaev, Sergey A; He, Shihua; Sun, Wei; Cheng, Yu-Shan; Hu, Xin; Tharappel, Anil Mathew; Lu, Billy; Pinto, Antonella; Farhy, Chen; Huang, Chun-Teng; Zhang, Zirui; Zhu, Wenjun; Wu, Yuying; Zhou, Yi; Song, Guang; Zhu, Heng; Shamim, Khalida; Martínez-Romero, Carles; García-Sastre, Adolfo; Preston, Richard A; Jayaweera, Dushyantha T; Huang, Ruili; Huang, Wenwei; Xia, Menghang; Simeonov, Anton; Ming, Guoli; Qiu, Xiangguo; Terskikh, Alexey V; Tang, Hengli; Song, Hongjun; Zheng, Wei
2018-01-01
The re-emergence of Zika virus (ZIKV) and Ebola virus (EBOV) poses serious and continued threats to global public health. Effective therapeutics for these maladies are an unmet need. Here, we show that emetine, an anti-protozoal agent, potently inhibits ZIKV and EBOV infection with a low nanomolar half maximal inhibitory concentration (IC50) in vitro and potent activity in vivo. Two mechanisms of action for emetine are identified: the inhibition of ZIKV NS5 polymerase activity and disruption of lysosomal function. Emetine also inhibits EBOV entry. Cephaeline, a desmethyl analog of emetine, which may be better tolerated in patients than emetine, exhibits a similar efficacy against both ZIKV and EBOV infections. Hence, emetine and cephaeline offer pharmaceutical therapies against both ZIKV and EBOV infection.
Wind farm optimization using evolutionary algorithms
NASA Astrophysics Data System (ADS)
Ituarte-Villarreal, Carlos M.
In recent years, the wind power industry has focused its efforts on solving the Wind Farm Layout Optimization (WFLO) problem. Wind resource assessment is a pivotal step in optimizing wind-farm design and siting and in determining whether a project is economically feasible. In the present work, three different optimization methods are proposed for the solution of the WFLO: (i) a modified Viral System Algorithm applied to optimizing the location of the components in a wind farm to maximize the energy output for a stated wind environment of the site. The optimization problem is formulated as the minimization of energy cost per unit produced and applies a penalty for lack of system reliability. The viral system algorithm utilized in this research solves three well-known problems in the wind-energy literature; (ii) a new multiple-objective evolutionary algorithm to obtain optimal placement of wind turbines while considering the power output, cost, and reliability of the system. The algorithm presented is based on evolutionary computation, and the objective functions considered are the maximization of power output, the minimization of wind farm cost and the maximization of system reliability. The final solution to this multiple-objective problem is presented as a set of Pareto solutions; and (iii) a hybrid viral-based optimization algorithm adapted to find the proper component configuration for a wind farm, with the introduction of the universal generating function (UGF) analytical approach to discretize the different operating or mechanical levels of the wind turbines in addition to the various wind speed states. The proposed methodology uses the probability distributions of the wind resource to capture the stochastic behavior of the renewable energy components, aiming to increase their power output and the reliability of these systems. The developed heuristic considers a variable number of system components and wind turbines with different operating characteristics and sizes, yielding a more heterogeneous model that can deal with changes in the layout and in the power generation requirements over time. Moreover, the approach evaluates the wake effect of the wind turbines upon one another to quantify the reduction in the system's power production capacity as a function of the turbine layout.
ERIC Educational Resources Information Center
Khovanova, Tanya
2012-01-01
When Martin Gardner first presented the Two-Children problem, he made a mistake in its solution. Later he corrected the error, but unfortunately the incorrect solution is more widely known than his correction. In fact, a Tuesday-Child variation of this problem went viral in 2010, and the same flaw keeps reappearing in proposed solutions of that…
Zhu, Yuan O; Aw, Pauline P K; de Sessions, Paola Florez; Hong, Shuzhen; See, Lee Xian; Hong, Lewis Z; Wilm, Andreas; Li, Chen Hao; Hue, Stephane; Lim, Seng Gee; Nagarajan, Niranjan; Burkholder, William F; Hibberd, Martin
2017-10-27
Viral populations are complex, dynamic, and fast evolving. The evolution of groups of closely related viruses in a competitive environment is termed quasispecies. To fully understand the role that quasispecies play in viral evolution, characterizing the trajectories of viral genotypes in an evolving population is the key. In particular, long-range haplotype information for thousands of individual viruses is critical; yet generating this information is non-trivial. Popular deep sequencing methods generate relatively short reads that do not preserve linkage information, while third generation sequencing methods have higher error rates that make detection of low frequency mutations a bioinformatics challenge. Here we applied BAsE-Seq, an Illumina-based single-virion sequencing technology, to eight samples from four chronic hepatitis B (CHB) patients - once before antiviral treatment and once after viral rebound due to resistance. With single-virion sequencing, we obtained 248-8796 single-virion sequences per sample, which allowed us to find evidence for both hard and soft selective sweeps. We were able to reconstruct population demographic history that was independently verified by clinically collected data. We further verified four of the samples independently through PacBio SMRT and Illumina Pooled deep sequencing. Overall, we showed that single-virion sequencing yields insight into viral evolution and population dynamics in an efficient and high throughput manner. We believe that single-virion sequencing is widely applicable to the study of viral evolution in the context of drug resistance and host adaptation, allows differentiation between soft or hard selective sweeps, and may be useful in the reconstruction of intra-host viral population demographic history.
ATC system error and appraisal of controller proficiency.
DOT National Transportation Integrated Search
1965-07-01
The report presents suggestions for the design of an air traffic control (ATC) incident-reporting system aimed at maximizing the amount of corrective feedback to the ATC system. The approach taken is system-oriented rather than controller-oriented. I...
Fusion of Scores in a Detection Context Based on Alpha Integration.
Soriano, Antonio; Vergara, Luis; Ahmed, Bouziane; Salazar, Addisson
2015-09-01
We present a new method for fusing scores corresponding to different detectors (two-hypothesis case). It is based on alpha integration, which we have adapted to the detection context. Three optimization methods are presented: least mean square error, maximization of the area under the ROC curve, and minimization of the probability of error. Gradient algorithms are proposed for the three methods. Different experiments with simulated and real data are included. Simulated data consider the two-detector case to illustrate the factors influencing alpha integration and demonstrate the improvements obtained by score fusion with respect to individual detector performance. Two real data cases have been considered. In the first, multimodal biometric data have been processed. This case is representative of scenarios in which the probability of detection is to be maximized for a given probability of false alarm. The second case is the automatic analysis of electroencephalogram and electrocardiogram records with the aim of reproducing the medical expert's detections of arousals during sleep. This case is representative of scenarios in which probability of error is to be minimized. The generally superior performance of alpha integration confirms the value of optimizing the fusion parameters.
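A minimal sketch of alpha integration in the Amari sense, treating detector scores as probabilities; the weights and alpha shown are fixed by hand, whereas the paper learns them by least mean square error, AUC maximization, or minimum probability of error.

```python
import math

def alpha_integrate(scores, weights, alpha):
    """Alpha-integration (Amari-style alpha-mean) of detector scores in (0, 1].
    Weights should be nonnegative and sum to 1. alpha = -1 gives the arithmetic
    mean, alpha -> 1 the geometric mean; optimal alpha/weights would be learned."""
    assert abs(sum(weights) - 1.0) < 1e-9
    if abs(alpha - 1.0) < 1e-12:
        return math.exp(sum(w * math.log(s) for w, s in zip(weights, scores)))
    h = (1.0 - alpha) / 2.0
    m = sum(w * s ** h for w, s in zip(weights, scores))
    return m ** (1.0 / h)

print(alpha_integrate([0.9, 0.4], [0.6, 0.4], alpha=-1))   # weighted arithmetic mean = 0.7
print(alpha_integrate([0.9, 0.4], [0.6, 0.4], alpha=0.5))  # interpolates toward geometric mean
```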
An extension of the receiver operating characteristic curve and AUC-optimal classification.
Takenouchi, Takashi; Komori, Osamu; Eguchi, Shinto
2012-10-01
While most proposed methods for solving classification problems focus on minimization of the classification error rate, we are interested in the receiver operating characteristic (ROC) curve, which provides more information about classification performance than the error rate does. The area under the ROC curve (AUC) is a natural measure for overall assessment of a classifier based on the ROC curve. We discuss a class of concave functions for AUC maximization in which a boosting-type algorithm including RankBoost is considered, and the Bayesian risk consistency and the lower bound of the optimum function are discussed. A procedure derived by maximizing a specific optimum function has high robustness, based on gross error sensitivity. Additionally, we focus on the partial AUC, which is the partial area under the ROC curve. For example, in medical screening, a high true-positive rate at a fixed low false-positive rate is preferable, and thus the partial AUC corresponding to low false-positive rates is much more important than the remaining AUC. We extend the class of concave optimum functions for partial AUC optimality with the boosting algorithm. We investigated the validity of the proposed method through several experiments with data sets in the UCI repository.
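The quantities being optimized can be computed directly from scores and labels; the sketch below builds an empirical ROC curve and evaluates the full AUC and an unnormalized partial AUC up to a chosen false-positive-rate cutoff (the cutoff and toy data are illustrative).

```python
import numpy as np

def roc_points(scores, labels):
    """Empirical ROC curve (FPR, TPR) from scores and 0/1 labels."""
    order = np.argsort(-np.asarray(scores, float))
    y = np.asarray(labels, float)[order]
    tpr = np.concatenate(([0.0], np.cumsum(y) / y.sum()))
    fpr = np.concatenate(([0.0], np.cumsum(1.0 - y) / (1.0 - y).sum()))
    return fpr, tpr

def _area(x, y):
    return float(np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2.0))  # trapezoid rule

def auc(scores, labels):
    fpr, tpr = roc_points(scores, labels)
    return _area(fpr, tpr)

def partial_auc(scores, labels, max_fpr=0.1):
    """Unnormalized area under the ROC curve restricted to FPR <= max_fpr."""
    fpr, tpr = roc_points(scores, labels)
    keep = fpr <= max_fpr
    x = np.append(fpr[keep], max_fpr)
    y = np.append(tpr[keep], np.interp(max_fpr, fpr, tpr))
    return _area(x, y)

rng = np.random.default_rng(1)
scores = np.concatenate((rng.normal(1, 1, 200), rng.normal(0, 1, 200)))
labels = np.concatenate((np.ones(200), np.zeros(200)))
print(auc(scores, labels), partial_auc(scores, labels, 0.1))
```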
Error field optimization in DIII-D using extremum seeking control
Lanctot, M. J.; Olofsson, K. E. J.; Capella, M.; ...
2016-06-03
A closed-loop error field control algorithm is implemented in the Plasma Control System of the DIII-D tokamak and used to identify optimal control currents during a single plasma discharge. The algorithm, based on established extremum seeking control theory, exploits the link in tokamaks between maximizing the toroidal angular momentum and minimizing deleterious non-axisymmetric magnetic fields. Slowly-rotating n = 1 fields (the dither), generated by external coils, are used to perturb the angular momentum, monitored in real-time using a charge-exchange spectroscopy diagnostic. Simple signal processing of the rotation measurements extracts information about the rotation gradient with respect to the control coil currents. This information is used to converge the control coil currents to a point that maximizes the toroidal angular momentum. The technique is well-suited for multi-coil, multi-harmonic error field optimizations in disruption sensitive devices as it does not require triggering locked tearing modes or plasma current disruptions. Control simulations highlight the importance of the initial search direction on the rate of the convergence, and identify future algorithm upgrades that may allow more rapid convergence that projects to convergence times in ITER on the order of tens of seconds.
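For readers unfamiliar with extremum seeking, the sketch below is a generic single-parameter loop (dither, demodulate, integrate) that climbs an unknown objective; it is an illustration of the control principle, not the DIII-D Plasma Control System implementation, and the toy objective standing in for rotation versus coil current is hypothetical.

```python
import math

def extremum_seek(objective, u0, dither_amp=0.1, dither_freq=1.0,
                  gain=0.5, dt=0.01, steps=20000, tau=1.0):
    """Generic single-parameter extremum-seeking loop (illustrative sketch only).
    A slow sinusoidal dither perturbs the control input u; the measured objective
    is high-pass filtered and demodulated with the same sinusoid to estimate the
    local gradient, which an integrator uses to push u toward the maximum."""
    u, y_lp = u0, objective(u0)
    for k in range(steps):
        t = k * dt
        s = math.sin(2.0 * math.pi * dither_freq * t)
        y = objective(u + dither_amp * s)   # plant/diagnostic measurement
        y_lp += (y - y_lp) * dt / tau       # low-pass estimate of the DC level
        u += gain * (y - y_lp) * s * dt     # demodulate + integrate (climb uphill)
    return u

# Toy objective: "rotation" peaks when the control current u cancels the error field at u = 2
print(extremum_seek(lambda u: 5.0 - (u - 2.0) ** 2, u0=0.0))  # converges near 2.0
```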
Pavone, Enea Francesco; Tieri, Gaetano; Rizza, Giulia; Tidoni, Emmanuele; Grisoni, Luigi; Aglioti, Salvatore Maria
2016-01-13
Brain monitoring of errors in one's own and others' actions is crucial for a variety of processes, ranging from the fine-tuning of motor skill learning to important social functions, such as reading out and anticipating the intentions of others. Here, we combined immersive virtual reality and EEG recording to explore whether embodying the errors of an avatar by seeing it from a first-person perspective may activate the error monitoring system in the brain of an onlooker. We asked healthy participants to observe, from a first- or third-person perspective, an avatar performing a correct or an incorrect reach-to-grasp movement toward one of two virtual mugs placed on a table. At the end of each trial, participants reported verbally how much they embodied the avatar's arm. Ratings were maximal in first-person perspective, indicating that immersive virtual reality can be a powerful tool to induce embodiment of an artificial agent, even through mere visual perception and in the absence of any cross-modal boosting. Observation of erroneous grasping from a first-person perspective enhanced error-related negativity and medial-frontal theta power in the trials where human onlookers embodied the virtual character, hinting at the tight link between early, automatic coding of error detection and sense of embodiment. Error positivity was similar in the first- and third-person perspectives, suggesting that conscious coding of errors is similar for self and other. Thus, embodiment plays an important role in activating specific components of the action monitoring system when others' errors are coded as if they are one's own errors. Detecting errors in others' actions is crucial for social functions, such as reading out and anticipating the intentions of others. Using immersive virtual reality and EEG recording, we explored how the brain of an onlooker reacted to the errors of an avatar seen from a first-person perspective. We found that mere observation of erroneous actions enhances electrocortical markers of error detection in the trials where human onlookers embodied the virtual character. Thus, the cerebral system for action monitoring is maximally activated when others' errors are coded as if they are one's own errors. The results have important implications for understanding how the brain can control the external world and thus for creating new brain-computer interfaces. Copyright © 2016 the authors.
Influencing Busy People in a Social Network
Sarkar, Kaushik; Sundaram, Hari
2016-01-01
We identify influential early adopters in a social network, where individuals are resource constrained, to maximize the spread of multiple, costly behaviors. A solution to this problem is especially important for viral marketing. The problem of maximizing influence in a social network is challenging since it is computationally intractable. We make three contributions. First, we propose a new model of collective behavior that incorporates individual intent, knowledge of neighbors' actions and resource constraints. Second, we show that multiple-behavior influence maximization is NP-hard. Furthermore, we show that the problem is submodular, implying the existence of a greedy solution that approximates the optimal solution to within a constant. However, since the greedy algorithm is expensive for large networks, we propose efficient heuristics to identify the influential individuals, including heuristics to assign behaviors to the different early adopters. We test our approach on synthetic and real-world topologies with excellent results. We evaluate the effectiveness under three metrics: unique number of participants, total number of active behaviors and network resource utilization. Our heuristics produce a 15-51% increase in expected resource utilization over the naïve approach. PMID:27711127
Polarity Related Influence Maximization in Signed Social Networks
Li, Dong; Xu, Zhi-Ming; Chakraborty, Nilanjan; Gupta, Anika; Sycara, Katia; Li, Sheng
2014-01-01
Influence maximization in social networks has been widely studied, motivated by applications like the spread of ideas or innovations in a network and viral marketing of products. Current studies focus almost exclusively on unsigned social networks containing only positive relationships (e.g. friend or trust) between users. Influence maximization in signed social networks containing both positive relationships and negative relationships (e.g. foe or distrust) between users is still a challenging problem that has not been studied. Thus, in this paper, we propose the polarity-related influence maximization (PRIM) problem, which aims to find the seed node set with maximum positive influence or maximum negative influence in signed social networks. To address the PRIM problem, we first extend the standard Independent Cascade (IC) model to the signed social networks and propose a Polarity-related Independent Cascade (named IC-P) diffusion model. We prove that the influence function of the PRIM problem under the IC-P model is monotonic and submodular. Thus, a greedy algorithm can be used to achieve an approximation ratio of 1-1/e for solving the PRIM problem in signed social networks. Experimental results on two signed social network datasets, Epinions and Slashdot, validate that our approximation algorithm for solving the PRIM problem outperforms state-of-the-art methods. PMID:25061986
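The greedy (1 - 1/e) argument cited above can be sketched independently of the specific diffusion model; in the snippet below, spread_fn stands in for a Monte Carlo estimate under a model such as IC-P, and the set-cover-style toy spread function is purely illustrative.

```python
def greedy_seed_selection(nodes, spread_fn, k):
    """Greedy seed selection for a monotone submodular influence function.
    spread_fn(seed_set) returns the (estimated) expected influence; repeatedly
    adding the node with the largest marginal gain yields the (1 - 1/e)
    approximation guarantee for monotone submodular objectives."""
    seeds = set()
    for _ in range(k):
        base = spread_fn(seeds) if seeds else 0.0
        best, best_gain = None, float("-inf")
        for v in nodes:
            if v in seeds:
                continue
            gain = spread_fn(seeds | {v}) - base
            if gain > best_gain:
                best, best_gain = v, gain
        seeds.add(best)
    return seeds

# Toy spread function (illustrative stand-in for a Monte Carlo diffusion estimate):
# each seed covers a set of users; spread = size of the union of covered sets.
coverage = {1: {1, 2, 3}, 2: {3, 4}, 3: {5}, 4: {1, 5, 6}}
spread = lambda s: len(set().union(*(coverage[v] for v in s))) if s else 0
print(greedy_seed_selection(coverage.keys(), spread, k=2))  # e.g. {1, 4}
```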
Eckey-Kaltenbach, H.; Ernst, D.; Heller, W.; Sandermann, H.
1994-01-01
Parsley (Petroselinum crispum L.) is known to respond to ultraviolet irradiation by the synthesis of flavone glycosides, whereas fungal or elicitor stress leads to the synthesis of furanocoumarin phytoalexins. We tested how these defensive pathways are affected by a single ozone treatment (200 nL L⁻¹; 10 h). Assays were performed at the levels of transcripts, for enzyme activities, and for secondary products. The most rapid transcript accumulation was maximal at 3 h, whereas flavone glycosides and furanocoumarins were maximally induced at 12 and 24 h, respectively, after the start of ozone treatment. Ozone acted as a cross-inducer because the two distinct pathways were simultaneously induced. These results are consistent with the previously observed ozone induction of fungal and viral defense reactions in tobacco, spruce, and pine. PMID:12232062
Depsides: Lichen Metabolites Active against Hepatitis C Virus
Vu, Thi Huyen; Le Lamer, Anne-Cécile; Lalli, Claudia; Boustie, Joël; Samson, Michel
2015-01-01
A thorough phytochemical study of Stereocaulon evolutum was conducted, for the isolation of structurally related atranorin derivatives. Indeed, pilot experiments suggested that atranorin (1), the main metabolite of this lichen, would interfere with the lifecycle of hepatitis C virus (HCV). Eight compounds, including one reported for the first time (2), were isolated and characterized. Two analogs (5, 6) were also synthesized, to enlarge the panel of atranorin-related structures. Most of these compounds were active against HCV, with a half-maximal inhibitory concentration of about 10 to 70 µM, with depsides more potent than monoaromatic phenols. The most effective inhibitors (1, 5 and 6) were then added at different steps of the HCV lifecycle. Interestingly, atranorin (1), bearing an aldehyde function at C-3, inhibited only viral entry, whereas the synthetic compounds 5 and 6, bearing a hydroxymethyl and a methyl function, respectively, at C-3 interfered with viral replication. PMID:25793970
Learning from Bees: An Approach for Influence Maximization on Viral Campaigns
Sankar, C. Prem; S., Asharaf
2016-01-01
Maximisation of influence propagation is a key ingredient of any viral marketing or socio-political campaign. However, it is an NP-hard problem, and various approximate algorithms have been suggested to address the issue, though not with great success. In this paper, we propose a bio-inspired approach to select the initial set of nodes, which is significant for rapid convergence towards a sub-optimal solution in minimal runtime. The performance of the algorithm is evaluated using the re-tweet network of the hashtag #KissofLove on Twitter, associated with the non-violent protest against moral policing that spread to many parts of India. Comparison with existing centrality-based node ranking shows that the proposed method yields a significant improvement in influence propagation. The proposed algorithm is one of only a few bio-inspired algorithms in network theory. We also report the results of an exploratory analysis of the Kiss of Love campaign network. PMID:27992472
Schulte, Michael B; Draghi, Jeremy A; Plotkin, Joshua B; Andino, Raul
2015-01-01
Life history theory posits that the sequence and timing of events in an organism's lifespan are fine-tuned by evolution to maximize the production of viable offspring. In a virus, a life history strategy is largely manifested in its replication mode. Here, we develop a stochastic mathematical model to infer the replication mode shaping the structure and mutation distribution of a poliovirus population in an intact single infected cell. We measure production of RNA and poliovirus particles through the infection cycle, and use these data to infer the parameters of our model. We find that on average the viral progeny produced from each cell are approximately five generations removed from the infecting virus. Multiple generations within a single cell infection provide opportunities for significant accumulation of mutations per viral genome and for intracellular selection. DOI: http://dx.doi.org/10.7554/eLife.03753.001 PMID:25635405
Dog response to inactivated canine parvovirus and feline panleukopenia virus vaccines.
Pollock, R V; Carmichael, L E
1982-01-01
Inactivated canine parvovirus (CPV) and inactivated feline panleukopenia virus (FPV) vaccines were evaluated in dogs. Maximal serologic response occurred within 1-2 weeks after vaccination. Antibody titers then declined rapidly to low levels that persisted at least 20 weeks. Immunity to CPV, defined as complete resistance to infection, was correlated with serum antibody titer and did not persist longer than 6 weeks after vaccination with inactivated virus. However, protection against generalized infection was demonstrated 20 weeks after vaccination. In unvaccinated dogs, viremia and generalized infection occurred after oronasal challenge with virulent CPV. In contrast, viral replication was restricted to the intestinal tract and gut-associated lymphoid tissue of vaccinated dogs. Canine parvovirus was inactivated by formalin, beta-propiolactone (BPL), and binary ethylenimine (BEI) in serum-free media; inactivation kinetics were determined. Formalin resulted in a greater loss of viral HA than either BEI or BPL, and antigenicity was correspondingly reduced.
[Analysis of the results of the SEIMC External Quality Control Program. Year 2013].
de Gopegui Bordes, Enrique Ruiz; Orta Mira, Nieves; Del Remedio Guna Serrano, M; Medina González, Rafael; Rosario Ovies, María; Poveda, Marta; Gimeno Cardona, Concepción
2015-07-01
The External Quality Control Program of the Spanish Society of Infectious Diseases and Clinical Microbiology (SEIMC) includes controls for bacteriology, serology, mycology, parasitology, mycobacteria, virology, molecular microbiology and HIV-1, HCV and HBV viral loads. This manuscript presents the analysis of the results obtained by the participants in the 2013 SEIMC External Quality Control Programme, except for the viral load controls, which are summarized in a separate report. As a whole, the results obtained in 2013 confirm the excellent skill and good technical standards found in previous editions. However, erroneous results can be obtained in any laboratory and in clinically relevant determinations. Once again, the results of this program highlighted the need to implement both internal and external controls in order to assure the maximal quality of the microbiological tests. Copyright © 2015 Elsevier España, S.L.U. All rights reserved.
Ground support system methodology and architecture
NASA Technical Reports Server (NTRS)
Schoen, P. D.
1991-01-01
A synergistic approach to systems test and support is explored. A building block architecture provides transportability of data, procedures, and knowledge. The synergistic approach also lowers cost and risk over the life cycle of a program. The determination of design errors at the earliest phase reduces the cost of vehicle ownership. The distributed, scalable architecture is based on industry standards, maximizing transparency and maintainability. An autonomous control structure provides for distributed and segmented systems. Control of interfaces maximizes compatibility and reuse, reducing long-term program cost. The intelligent data management architecture also reduces analysis time and cost through automation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Depto, A.S.; Stenberg, R.M.
1989-03-01
To better understand the regulation of late gene expression in human cytomegalovirus (CMV)-infected cells, the authors examined expression of the gene that codes for the 65-kilodalton lower-matrix phosphoprotein (pp65). Analysis of RNA isolated at 72 h from cells infected with CMV Towne or ts66, a DNA-negative temperature-sensitive mutant, supported the fact that pp65 is expressed at low levels prior to viral DNA replication but maximally expressed after the initiation of viral DNA replication. To investigate promoter activation in a transient expression assay, the pp65 promoter was cloned into the indicator plasmid containing the gene for chloramphenicol acetyltransferase (CAT). Transfection of the promoter-CAT construct and subsequent superinfection with CMV resulted in activation of the promoter at early times after infection. Cotransfection with plasmids capable of expressing immediate-early (IE) proteins demonstrated that the promoter was activated by IE proteins and that both IE regions 1 and 2 were necessary. These studies suggest that interactions between IE proteins and this octamer sequence may be important for the regulation and expression of this CMV gene.
Atkins, Katherine E; Read, Andrew F; Savill, Nicholas J; Renz, Katrin G; Islam, A F M Fakhrul; Walkden-Brown, Stephen W; Woolhouse, Mark E J
2013-03-01
Marek's disease virus (MDV), the agent of a commercially important disease of poultry, has become substantially more virulent over the last 60 years. This evolution was presumably a consequence of changes in virus ecology associated with the intensification of the poultry industry. Here, we assess whether vaccination or reduced host life span could have generated natural selection that favored more virulent strains. Using previously published experimental data, we estimated viral fitness under a range of cohort durations and vaccine treatments on broiler farms. We found that viral fitness was maximized at intermediate virulence, as a result of a previously reported trade-off between virulence and transmission. Our results suggest that vaccination, acting on this trade-off, could have led to the evolution of increased virulence. By keeping the host alive, vaccination prolongs the infectious periods of virulent strains. Improvements in host genetics and nutrition, which reduced broiler life spans below 50 days, could have also increased the virulence of the circulating MDV strains because shortened cohort duration reduces the impact of host death on viral fitness. These results illustrate the dramatic impact anthropogenic change can potentially have on pathogen virulence. © 2012 The Author(s). Evolution © 2012 The Society for the Study of Evolution.
Liu, Chao; Yao, Yong; Sun, Yun Xu; Xiao, Jun Jun; Zhao, Xin Hui
2010-10-01
A model is proposed to study the average capacity optimization in free-space optical (FSO) channels, accounting for effects of atmospheric turbulence and pointing errors. For a given transmitter laser power, it is shown that both transmitter beam divergence angle and beam waist can be tuned to maximize the average capacity. Meanwhile, their optimum values strongly depend on the jitter and operation wavelength. These results can be helpful for designing FSO communication systems.
BOETTIGER, David C; NGUYEN, Van Kinh; DURIER, Nicolas; BUI, Huy Vu; SIM, Benedict Lim Heng; AZWA, Iskandar; LAW, Matthew; RUXRUNGTHAM, Kiat
2014-01-01
Background: Roughly 4% of the 1.25 million patients on antiretroviral therapy (ART) in Asia are using second-line therapy. To maximize patient benefit and regional resources, it is important to optimize the timing of second-line ART initiation and use the most effective compounds available. Methods: HIV-positive patients enrolled in the TREAT Asia HIV Observational Database who had used second-line ART for ≥6 months were included. ART use and rates and predictors of second-line treatment failure were evaluated. Results: There were 302 eligible patients. Most were male (76.5%) and exposed to HIV via heterosexual contact (71.5%). Median age at second-line initiation was 39.2 years, median CD4 cell count was 146 cells/mm3, and median HIV viral load was 16,224 copies/mL. Patients started second-line ART before 2007 (n=105), 2007-2010 (n=147) and after 2010 (n=50). Ritonavir-boosted lopinavir and atazanavir accounted for the majority of protease inhibitor use after 2006. Median follow-up time on second-line therapy was 2.3 years. The rates of treatment failure and mortality per 100 patient-years were 8.8 (95%CI 7.1 to 10.9) and 1.1 (95%CI 0.6 to 1.9), respectively. Older age, high baseline viral load and use of a protease inhibitor other than lopinavir or atazanavir were associated with a significantly shorter time to second-line failure. Conclusions: Increased access to viral load monitoring to facilitate early detection of first-line ART failure and subsequent treatment switch is important for maximizing the durability of second-line therapy in Asia. Although second-line ART is highly effective in the region, the reported rate of failure emphasizes the need for third-line ART in a small portion of patients. PMID:25590271
Boettiger, David C; Nguyen, Van K; Durier, Nicolas; Bui, Huy V; Heng Sim, Benedict L; Azwa, Iskandar; Law, Matthew; Ruxrungtham, Kiat
2015-02-01
Roughly 4% of the 1.25 million patients on antiretroviral therapy (ART) in Asia are using second-line therapy. To maximize patient benefit and regional resources, it is important to optimize the timing of second-line ART initiation and use the most effective compounds available. HIV-positive patients enrolled in the TREAT Asia HIV Observational Database who had used second-line ART for ≥6 months were included. ART use and rates and predictors of second-line treatment failure were evaluated. There were 302 eligible patients. Most were male (76.5%) and exposed to HIV via heterosexual contact (71.5%). Median age at second-line initiation was 39.2 years, median CD4 cell count was 146 cells per cubic millimeter, and median HIV viral load was 16,224 copies per milliliter. Patients started second-line ART before 2007 (n = 105), 2007-2010 (n = 147) and after 2010 (n = 50). Ritonavir-boosted lopinavir and atazanavir accounted for the majority of protease inhibitor use after 2006. Median follow-up time on second-line therapy was 2.3 years. The rates of treatment failure and mortality per 100 patient/years were 8.8 (95% confidence interval: 7.1 to 10.9) and 1.1 (95% confidence interval: 0.6 to 1.9), respectively. Older age, high baseline viral load, and use of a protease inhibitor other than lopinavir or atazanavir were associated with a significantly shorter time to second-line failure. Increased access to viral load monitoring to facilitate early detection of first-line ART failure and subsequent treatment switch is important for maximizing the durability of second-line therapy in Asia. Although second-line ART is highly effective in the region, the reported rate of failure emphasizes the need for third-line ART in a small portion of patients.
Sequential structures provide insights into the fidelity of RNA replication.
Ferrer-Orta, Cristina; Arias, Armando; Pérez-Luque, Rosa; Escarmís, Cristina; Domingo, Esteban; Verdaguer, Nuria
2007-05-29
RNA virus replication is an error-prone event caused by the low fidelity of viral RNA-dependent RNA polymerases. Replication fidelity can be decreased further by the use of mutagenic ribonucleoside analogs to a point where viral genetic information can no longer be maintained. For foot-and-mouth disease virus, the antiviral analogs ribavirin and 5-fluorouracil have been shown to be mutagenic, contributing to virus extinction through lethal mutagenesis. Here, we report the x-ray structure of four elongation complexes of foot-and-mouth disease virus polymerase 3D obtained in the presence of the natural substrates ATP and UTP or the mutagenic nucleotides ribavirin triphosphate and 5-fluorouridine triphosphate, with different RNAs as template-primer molecules. The ability of these complexes to synthesize RNA in crystals allowed us to capture different successive replication events and to define the critical amino acids involved in (i) the recognition and positioning of the incoming nucleotide or analog; (ii) the positioning of the acceptor base of the template strand; and (iii) the positioning of the 3'-OH group of the primer nucleotide during RNA replication. The structures identify key interactions involved in viral RNA replication and provide insights into the molecular basis of the low fidelity of viral RNA polymerases.
vFitness: a web-based computing tool for improving estimation of in vitro HIV-1 fitness experiments
2010-01-01
Background: The replication rate (or fitness) between viral variants has been investigated in vivo and in vitro for human immunodeficiency virus (HIV). HIV fitness plays an important role in the development and persistence of drug resistance. The accurate estimation of viral fitness relies on complicated computations based on statistical methods. This calls for tools that are easy to access and intuitive to use for various experiments of viral fitness. Results: Based on a mathematical model and several statistical methods (least-squares approach and measurement error models), a Web-based computing tool has been developed for improving estimation of virus fitness in growth competition assays of human immunodeficiency virus type 1 (HIV-1). Conclusions: Unlike the two-point calculation used in previous studies, the estimation here uses linear regression methods with all observed data in the competition experiment to more accurately estimate relative viral fitness parameters. The dilution factor is introduced for making the computational tool more flexible to accommodate various experimental conditions. This Web-based tool is implemented in C# language with Microsoft ASP.NET, and is publicly available on the Web at http://bis.urmc.rochester.edu/vFitness/. PMID:20482791
vFitness: a web-based computing tool for improving estimation of in vitro HIV-1 fitness experiments.
Ma, Jingming; Dykes, Carrie; Wu, Tao; Huang, Yangxin; Demeter, Lisa; Wu, Hulin
2010-05-18
The replication rate (or fitness) between viral variants has been investigated in vivo and in vitro for human immunodeficiency virus (HIV). HIV fitness plays an important role in the development and persistence of drug resistance. The accurate estimation of viral fitness relies on complicated computations based on statistical methods. This calls for tools that are easy to access and intuitive to use for various experiments of viral fitness. Based on a mathematical model and several statistical methods (least-squares approach and measurement error models), a Web-based computing tool has been developed for improving estimation of virus fitness in growth competition assays of human immunodeficiency virus type 1 (HIV-1). Unlike the two-point calculation used in previous studies, the estimation here uses linear regression methods with all observed data in the competition experiment to more accurately estimate relative viral fitness parameters. The dilution factor is introduced for making the computational tool more flexible to accommodate various experimental conditions. This Web-based tool is implemented in C# language with Microsoft ASP.NET, and is publicly available on the Web at http://bis.urmc.rochester.edu/vFitness/.
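A minimal sketch of the regression idea described above (not the vFitness tool itself, which is a C#/ASP.NET web application): relative fitness can be read off as the slope of the log ratio of the two competing variants over time, with a dilution factor applied where samples are diluted between time points. The function name and toy data below are illustrative assumptions.

import numpy as np

def relative_fitness(times, mutant_counts, wildtype_counts, dilution=1.0):
    # Slope of ln(mutant/wild-type) vs. time approximates the net growth-rate
    # difference; `dilution` rescales counts if samples were diluted between
    # time points (illustrative handling only).
    ratio = np.log((np.asarray(mutant_counts, float) * dilution) /
                   np.asarray(wildtype_counts, float))
    slope, _intercept = np.polyfit(times, ratio, 1)   # least-squares line, not two points
    return slope

t = np.array([0, 2, 4, 6, 8], dtype=float)            # days
wt = 1e4 * np.exp(0.50 * t)                            # toy wild-type counts
mut = 1e4 * np.exp(0.60 * t)                           # toy mutant counts
print(relative_fitness(t, mut, wt))                    # ~0.10 per day

Using all time points in the regression, rather than only the first and last, is what makes the estimate less sensitive to measurement error at any single sampling time.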
Making a structured psychiatric diagnostic interview faithful to the nomenclature.
Robins, Lee N; Cottler, Linda B
2004-10-15
Psychiatric diagnostic interviews to be used in epidemiologic studies by lay interviewers have, since the 1970s, attempted to operationalize existing psychiatric nomenclatures. How to maximize the chances that they do so successfully has not previously been spelled out. In this article, the authors discuss strategies for each of the seven steps involved in writing, updating, or modifying a diagnostic interview and its supporting materials: 1) writing questions that match the nomenclature's criteria, 2) checking that respondents will be willing and able to answer the questions, 3) choosing a format acceptable to interviewers that maximizes accurate answering and recording of answers, 4) constructing a data entry and cleaning program that highlights errors to be corrected, 5) creating a diagnostic scoring program that matches the nomenclature's algorithms, 6) developing an interviewer training program that maximizes reliability, and 7) computerizing the interview. For each step, the authors discuss how to identify errors, correct them, and validate the revisions. Although operationalization will never be perfect because of ambiguities in the nomenclature, specifying methods for minimizing divergence from the nomenclature is timely as users modify existing interviews and look forward to updating interviews based on the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, and the International Classification of Diseases, Eleventh Revision.
Maximizing the Detection Probability of Kilonovae Associated with Gravitational Wave Observations
NASA Astrophysics Data System (ADS)
Chan, Man Leong; Hu, Yi-Ming; Messenger, Chris; Hendry, Martin; Heng, Ik Siong
2017-01-01
Estimates of the source sky location for gravitational wave signals are likely to span areas of up to hundreds of square degrees or more, making it very challenging for most telescopes to search for counterpart signals in the electromagnetic spectrum. To boost the chance of successfully observing such counterparts, we have developed an algorithm that optimizes the number of observing fields and their corresponding time allocations by maximizing the detection probability. As a proof-of-concept demonstration, we optimize follow-up observations targeting kilonovae using telescopes including the CTIO-Dark Energy Camera, Subaru-HyperSuprimeCam, Pan-STARRS, and the Palomar Transient Factory. We consider three simulated gravitational wave events with 90% credible error regions spanning areas from ∼30 deg² to ∼300 deg². Assuming a source at 200 Mpc, we demonstrate that to obtain a maximum detection probability, there is an optimized number of fields for any particular event that a telescope should observe. To inform future telescope design studies, we present the maximum detection probability and corresponding number of observing fields for a combination of limiting magnitudes and fields of view over a range of parameters. We show that for large gravitational wave error regions, telescope sensitivity rather than field of view is the dominating factor in maximizing the detection probability.
Guo, Jiin-Huarng; Luh, Wei-Ming
2009-05-01
When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
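A minimal sketch of the validation step described above, assuming SciPy 1.7 or later, where scipy.stats.ttest_ind with trim=0.2 performs Yuen's trimmed-mean test; the simulated distributions, sample sizes, and effect size are illustrative assumptions, not the study's exact conditions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def rejection_rate(n1, n2, shift, sd_ratio, reps=5000, alpha=0.05):
    # Empirical rejection rate of Yuen's test under heterogeneous variances
    # and heavy-tailed (non-normal) data.
    hits = 0
    for _ in range(reps):
        a = rng.standard_t(df=5, size=n1)
        b = rng.standard_t(df=5, size=n2) * sd_ratio + shift
        p = stats.ttest_ind(a, b, equal_var=False, trim=0.2).pvalue
        hits += p < alpha
    return hits / reps

print("Type I error:", rejection_rate(30, 60, shift=0.0, sd_ratio=2.0))  # should be near 0.05
print("Power:", rejection_rate(30, 60, shift=0.8, sd_ratio=2.0))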
Accurate reconstruction of viral quasispecies spectra through improved estimation of strain richness
2015-01-01
Background: Estimating the number of different species (richness) in a mixed microbial population has been a main focus in metagenomic research. Existing methods of species richness estimation ride on the assumption that the reads in each assembled contig correspond to only one of the microbial genomes in the population. This assumption and the underlying probabilistic formulations of existing methods are not useful for quasispecies populations where the strains are highly genetically related. The lack of knowledge on the number of different strains in a quasispecies population is observed to hinder the precision of existing Viral Quasispecies Spectrum Reconstruction (QSR) methods due to the uncontrolled reconstruction of a large number of in silico false positives. In this work, we formulated a novel probabilistic method for strain richness estimation specifically targeting viral quasispecies. By using this approach we improved our recently proposed spectrum reconstruction pipeline ViQuaS to achieve higher levels of precision in reconstructed quasispecies spectra without compromising the recall rates. We also discuss how one other existing popular QSR method named ShoRAH can be improved using this new approach. Results: On benchmark data sets, our estimation method provided accurate richness estimates (< 0.2 median estimation error) and improved the precision of ViQuaS by 2%-13% and F-score by 1%-9% without compromising the recall rates. We also demonstrate that our estimation method can be used to improve the precision and F-score of ShoRAH by 0%-7% and 0%-5% respectively. Conclusions: The proposed probabilistic estimation method can be used to estimate the richness of viral populations with a quasispecies behavior and to improve the accuracy of the quasispecies spectra reconstructed by the existing methods ViQuaS and ShoRAH in the presence of a moderate level of technical sequencing errors. Availability: http://sourceforge.net/projects/viquas/ PMID:26678073
The Association of Viral Hepatitis and Acute Pancreatitis
Geokas, Michael C.; Olsen, Harvey; Swanson, Virginia; Rinderknecht, Heinrich
1972-01-01
The histological features of 24 pancreases obtained from patients who died of causes other than hepatitis, pancreatitis or pancreatic tumors included a variable degree of autolysis and rare foci of inflammatory reaction, but no hemorrhagic fat necrosis or destruction of elastic tissue in vessel walls (elastolysis). Assays of elastase in extracts of these pancreases showed no free enzyme, but varying amounts of proelastase. A review of autopsy findings in 33 patients with fatal liver necrosis attributed to halothane anesthesia demonstrated changes of acute pancreatitis in only two. On the other hand, a review of 16 cases of fulminant viral hepatitis revealed changes characteristic of acute pancreatitis in seven – interstitial edema, hemorrhagic fat necrosis, inflammatory reaction and frequently elastolysis in vessel walls. Determination of elastase in extracts of one pancreas showed the bulk of the enzyme in free form. Furthermore, assays of urinary amylase in 44 patients with viral hepatitis showed increased levels of this enzyme (mean ± standard error, 2583 ± 398 Somogyi units per 100 ml) in 13 patients (29.5 percent). The evidence suggests that acute pancreatitis may at times complicate viral hepatitis. Although direct proof of viral pancreatic involvement is not feasible at present, a rational hypothesis is advanced which underlines similar mechanisms of tissue involvement in both liver and pancreas that may be brought about by the hepatitis viruses. PMID:5070694
ESTIMATION OF THE NUMBER OF INFECTIOUS BACTERIAL OR VIRAL PARTICLES BY THE DILUTION METHOD
Seligman, Stephen J.; Mickey, M. Ray
1964-01-01
Seligman, Stephen J. (University of California, Los Angeles), and M. Ray Mickey. Estimation of the number of infectious bacterial or viral particles by the dilution method. J. Bacteriol. 88:31–36. 1964.—For viral or bacterial systems in which discrete foci of infection are not obtainable, it is possible to obtain an estimate of the number of infectious particles by use of the quantal response if the assay system is such that one infectious particle can elicit the response. Unfortunately, the maximum likelihood estimate is difficult to calculate, but, by the use of a modification of Haldane's approximation, it is possible to construct a table which facilitates calculation of both the average number of infectious particles and its relative error. Additional advantages of the method are that the number of test units per dilution can be varied, the dilutions need not bear any fixed relation to each other, and the one-particle hypothesis can be readily tested. PMID:14197902
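The quantal-response estimate can be sketched under the one-particle (single-hit Poisson) model, where the probability that a test unit receiving a relative dose f is infected is 1 - exp(-m·f); in the sketch below the maximum likelihood estimate of m is found numerically rather than from the published table or Haldane's approximation, and the dilution data are toy numbers.

import numpy as np
from scipy.optimize import minimize_scalar

# dilution data: relative dose, units tested, units infected (toy numbers;
# note the number of test units per dilution may vary)
f = np.array([1.0, 0.1, 0.01])
n = np.array([8, 8, 10])
k = np.array([8, 6, 1])

def neg_log_lik(log_m):
    # binomial log-likelihood under p(infected) = 1 - exp(-m * f)
    p = 1.0 - np.exp(-np.exp(log_m) * f)
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(k * np.log(p) + (n - k) * np.log(1.0 - p))

res = minimize_scalar(neg_log_lik, bounds=(-5, 10), method="bounded")
print("ML estimate of infectious particles per undiluted inoculum:", np.exp(res.x))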
A Combined Fabrication and Instrumentation Platform for Sample Preparation.
Guckenberger, David J; Thomas, Peter C; Rothbauer, Jacob; LaVanway, Alex J; Anderson, Meghan; Gilson, Dan; Fawcett, Kevin; Berto, Tristan; Barrett, Kevin; Beebe, David J; Berry, Scott M
2014-06-01
While potentially powerful, access to molecular diagnostics is substantially limited in the developing world. Here we present an approach to reduced cost molecular diagnostic instrumentation that has the potential to empower developing world communities by reducing costs through streamlining the sample preparation process. In addition, this instrument is capable of producing its own consumable devices on demand, reducing reliance on assay suppliers. Furthermore, this instrument is designed with an "open" architecture, allowing users to visually observe the assay process and make modifications as necessary (as opposed to traditional "black box" systems). This open environment enables integration of microfluidic fabrication and viral RNA purification onto an easy-to-use modular system via the use of interchangeable trays. Here we employ this system to develop a protocol to fabricate microfluidic devices and then use these devices to isolate viral RNA from serum for the measurement of human immunodeficiency virus (HIV) viral load. Results obtained from this method show significantly reduced error compared with similar nonautomated sample preparation processes. © 2014 Society for Laboratory Automation and Screening.
McNair, Peter J; Colvin, Matt; Reid, Duncan
2011-02-01
To compare the accuracy of 12 maximal strength (1-repetition maximum [1-RM]) equations for predicting quadriceps strength in people with osteoarthritis (OA) of the knee joint. Eighteen subjects with OA of the knee joint attended a rehabilitation gymnasium on 3 occasions: 1) a familiarization session, 2) a session where the 1-RM of the quadriceps was established using a weights machine for an open-chain knee extension exercise and a leg press exercise, and 3) a session where the subjects lifted a load that they could lift for approximately 10 repetitions only. The data were used in 12 prediction equations to calculate 1-RM strength and compared to the actual 1-RM data. Data were examined using Bland and Altman graphs and statistics, intraclass correlation coefficients (ICCs), and typical error values between the actual 1-RM and the respective 1-RM prediction equation data. Difference scores (predicted 1-RM minus actual 1-RM) across the injured and control legs were also compared. For the knee extension exercise, the Brown, Brzycki, Epley, Lander, Mayhew et al, Poliquin, and Wathen prediction equations demonstrated the greatest levels of predictive accuracy. All of the ICCs were high (range 0.96-0.99), and typical errors were between 3% and 4%. For the leg press exercise, the Adams, Berger, Kemmler et al, and O'Conner et al equations demonstrated the greatest levels of predictive accuracy. All of the ICCs were high (range 0.95-0.98), and the typical errors ranged from 5.9% to 6.3%. This study provided evidence supporting the use of prediction equations to assess maximal strength in individuals with a knee joint with OA.
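For illustration, two of the commonly cited 1-RM prediction equations have simple closed forms; the versions shown below (Epley and Brzycki) are the standard published forms, though the exact variants used in the study above may differ.

# Epley:   1RM = w * (1 + reps / 30)
# Brzycki: 1RM = w * 36 / (37 - reps)   (valid for reps < 37)
def epley_1rm(weight, reps):
    return weight * (1.0 + reps / 30.0)

def brzycki_1rm(weight, reps):
    return weight * 36.0 / (37.0 - reps)

# e.g. a 50 kg load lifted for 10 repetitions
print(round(epley_1rm(50, 10), 1), round(brzycki_1rm(50, 10), 1))   # ~66.7, ~66.7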
NASA Astrophysics Data System (ADS)
Li, Qiang; Zhang, Ying; Lin, Jingran; Wu, Sissi Xiaoxiao
2017-09-01
Consider a full-duplex (FD) bidirectional secure communication system, where two communication nodes, named Alice and Bob, simultaneously transmit and receive confidential information from each other, and an eavesdropper, named Eve, overhears the transmissions. Our goal is to maximize the sum secrecy rate (SSR) of the bidirectional transmissions by optimizing the transmit covariance matrices at Alice and Bob. To tackle this SSR maximization (SSRM) problem, we develop an alternating difference-of-concave (ADC) programming approach to alternately optimize the transmit covariance matrices at Alice and Bob. We show that the ADC iteration has a semi-closed-form beamforming solution, and is guaranteed to converge to a stationary solution of the SSRM problem. Besides the SSRM design, this paper also deals with a robust SSRM transmit design under a moment-based random channel state information (CSI) model, where only some roughly estimated first and second-order statistics of Eve's CSI are available, but the exact distribution or other high-order statistics is not known. This moment-based error model is new and different from the widely used bounded-sphere error model and the Gaussian random error model. Under the considered CSI error model, the robust SSRM is formulated as an outage probability-constrained SSRM problem. By leveraging the Lagrangian duality theory and DC programming, a tractable safe solution to the robust SSRM problem is derived. The effectiveness and the robustness of the proposed designs are demonstrated through simulations.
Motl, Robert W; Fernhall, Bo
2012-03-01
To examine the accuracy of predicting peak oxygen consumption (VO(2peak)) primarily from peak work rate (WR(peak)) recorded during a maximal, incremental exercise test on a cycle ergometer among persons with relapsing-remitting multiple sclerosis (RRMS) who had minimal disability. Cross-sectional study. Clinical research laboratory. Women with RRMS (n=32) and sex-, age-, height-, and weight-matched healthy controls (n=16) completed an incremental exercise test on a cycle ergometer to volitional termination. Not applicable. Measured and predicted VO(2peak) and WR(peak). There were strong, statistically significant associations between measured and predicted VO(2peak) in the overall sample (R(2)=.89, standard error of the estimate=127.4 mL/min) and subsamples with (R(2)=.89, standard error of the estimate=131.3 mL/min) and without (R(2)=.85, standard error of the estimate=126.8 mL/min) multiple sclerosis (MS) based on the linear regression analyses. Based on the 95% confidence limits for worst-case errors, the equation predicted VO(2peak) within 10% of its true value in 95 of every 100 subjects with MS. Peak VO(2) can be accurately predicted in persons with RRMS who have minimal disability as it is in controls by using established equations and WR(peak) recorded from a maximal, incremental exercise test on a cycle ergometer. Copyright © 2012 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Feys, Marjolein; Anseel, Frederik
2015-03-01
People's affective forecasts are often inaccurate because they tend to overestimate how they will feel after an event. As life decisions are often based on affective forecasts, it is crucial to find ways to manage forecasting errors. We examined the impact of a fair treatment on forecasting errors in candidates in a Belgian reality TV talent show. We found that perceptions of fair treatment increased the forecasting error for losers (a negative audition decision) but decreased it for winners (a positive audition decision). For winners, this effect was even more pronounced when candidates were highly invested in their self-view as a future pop idol whereas for losers, the effect was more pronounced when importance was low. The results in this study point to a potential paradox between maximizing happiness and decreasing forecasting errors. A fair treatment increased the forecasting error for losers, but actually made them happier. © 2014 The British Psychological Society.
Generalized Ordinary Differential Equation Models 1
Miao, Hongyu; Wu, Hulin; Xue, Hongqi
2014-01-01
Existing estimation methods for ordinary differential equation (ODE) models are not applicable to discrete data. The generalized ODE (GODE) model is therefore proposed and investigated for the first time. We develop the likelihood-based parameter estimation and inference methods for GODE models. We propose robust computing algorithms and rigorously investigate the asymptotic properties of the proposed estimator by considering both measurement errors and numerical errors in solving ODEs. The simulation study and application of our methods to an influenza viral dynamics study suggest that the proposed methods have a superior performance in terms of accuracy over the existing ODE model estimation approach and the extended smoothing-based (ESB) method. PMID:25544787
Generalized Ordinary Differential Equation Models.
Miao, Hongyu; Wu, Hulin; Xue, Hongqi
2014-10-01
Existing estimation methods for ordinary differential equation (ODE) models are not applicable to discrete data. The generalized ODE (GODE) model is therefore proposed and investigated for the first time. We develop the likelihood-based parameter estimation and inference methods for GODE models. We propose robust computing algorithms and rigorously investigate the asymptotic properties of the proposed estimator by considering both measurement errors and numerical errors in solving ODEs. The simulation study and application of our methods to an influenza viral dynamics study suggest that the proposed methods have a superior performance in terms of accuracy over the existing ODE model estimation approach and the extended smoothing-based (ESB) method.
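As a point of reference for the GODE comparison above, the conventional ODE-fitting baseline can be sketched as a nonlinear least-squares fit of a simple viral-dynamics ODE to noisy measurements; the model, parameter values, and noise level below are illustrative assumptions and do not reproduce the GODE likelihood for discrete data.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def viral_decay(t, y, c, delta):
    # free virus V cleared at rate c and produced by infected cells I,
    # which die at rate delta (a simple viral-dynamics illustration)
    V, I = y
    return [-c * V + 100.0 * I, -delta * I]

t_obs = np.linspace(0, 7, 15)
true_params = (3.0, 0.5)
sol = solve_ivp(viral_decay, (0, 7), [1e4, 1e2], t_eval=t_obs, args=true_params)
rng = np.random.default_rng(3)
log_v_obs = np.log10(sol.y[0]) + rng.normal(0, 0.1, t_obs.size)   # noisy log10 titers

def residuals(theta):
    s = solve_ivp(viral_decay, (0, 7), [1e4, 1e2], t_eval=t_obs, args=tuple(theta))
    return np.log10(s.y[0]) - log_v_obs

fit = least_squares(residuals, x0=[1.0, 1.0], bounds=([1e-3, 1e-3], [10.0, 10.0]))
print("estimated (c, delta):", fit.x)   # should be close to (3.0, 0.5)

Numerical error in the ODE solution enters the residuals here exactly as the abstract notes, which is one motivation for treating it explicitly in the GODE framework.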
Estimating linear-nonlinear models using Rényi divergences
Kouh, Minjoon; Sharpee, Tatyana O.
2009-01-01
This paper compares a family of methods for characterizing neural feature selectivity using natural stimuli in the framework of the linear-nonlinear model. In this model, the spike probability depends in a nonlinear way on a small number of stimulus dimensions. The relevant stimulus dimensions can be found by optimizing a Rényi divergence that quantifies a change in the stimulus distribution associated with the arrival of single spikes. Generally, good reconstructions can be obtained based on optimization of Rényi divergence of any order, even in the limit of small numbers of spikes. However, the smallest error is obtained when the Rényi divergence of order 1 is optimized. This type of optimization is equivalent to information maximization, and is shown to saturate the Cramér-Rao bound describing the smallest error allowed for any unbiased method. We also discuss conditions under which information maximization provides a convenient way to perform maximum likelihood estimation of linear-nonlinear models from neural data. PMID:19568981
Estimating linear-nonlinear models using Renyi divergences.
Kouh, Minjoon; Sharpee, Tatyana O
2009-01-01
This article compares a family of methods for characterizing neural feature selectivity using natural stimuli in the framework of the linear-nonlinear model. In this model, the spike probability depends in a nonlinear way on a small number of stimulus dimensions. The relevant stimulus dimensions can be found by optimizing a Rényi divergence that quantifies a change in the stimulus distribution associated with the arrival of single spikes. Generally, good reconstructions can be obtained based on optimization of Rényi divergence of any order, even in the limit of small numbers of spikes. However, the smallest error is obtained when the Rényi divergence of order 1 is optimized. This type of optimization is equivalent to information maximization, and is shown to saturate the Cramer-Rao bound describing the smallest error allowed for any unbiased method. We also discuss conditions under which information maximization provides a convenient way to perform maximum likelihood estimation of linear-nonlinear models from neural data.
Beamforming Based Full-Duplex for Millimeter-Wave Communication
Liu, Xiao; Xiao, Zhenyu; Bai, Lin; Choi, Jinho; Xia, Pengfei; Xia, Xiang-Gen
2016-01-01
In this paper, we study beamforming based full-duplex (FD) systems in millimeter-wave (mmWave) communications. A joint transmission and reception (Tx/Rx) beamforming problem is formulated to maximize the achievable rate by mitigating self-interference (SI). Since the optimal solution is difficult to find due to the non-convexity of the objective function, suboptimal schemes are proposed in this paper. A low-complexity algorithm, which iteratively maximizes signal power while suppressing SI, is proposed and its convergence is proven. Moreover, two closed-form solutions, which do not require iterations, are also derived under minimum-mean-square-error (MMSE), zero-forcing (ZF), and maximum-ratio transmission (MRT) criteria. Performance evaluations show that the proposed iterative scheme converges fast (within only two iterations on average) and approaches an upper-bound performance, while the two closed-form solutions also achieve appealing performances, although there are noticeable differences from the upper bound depending on channel conditions. Interestingly, these three schemes show different robustness against the geometry of Tx/Rx antenna arrays and channel estimation errors. PMID:27455256
A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting
Pan, Xiaofei; Pan, Kegang; Ye, Zhan; Gong, Chao
2014-01-01
We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of the short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce the method to check the errors in the input nodes of the decoder by the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than that of some existing decoding algorithms with the same code length. PMID:25540813
Liu, Yang; Chiaromonte, Francesca; Ross, Howard; Malhotra, Raunaq; Elleder, Daniel; Poss, Mary
2015-06-30
Infection with feline immunodeficiency virus (FIV) causes an immunosuppressive disease whose consequences are less severe if cats are co-infected with an attenuated FIV strain (PLV). We use virus diversity measurements, which reflect replication ability and the virus response to various conditions, to test whether diversity of virulent FIV in lymphoid tissues is altered in the presence of PLV. Our data consisted of the 3' half of the FIV genome from three tissues of animals infected with FIV alone, or with FIV and PLV, sequenced by 454 technology. Since rare variants dominate virus populations, we had to carefully distinguish sequence variation from errors due to experimental protocols and sequencing. We considered an exponential-normal convolution model used for background correction of microarray data, and modified it to formulate an error correction approach for minor allele frequencies derived from high-throughput sequencing. Similar to accounting for over-dispersion in counts, this accounts for error-inflated variability in frequencies - and quite effectively reproduces empirically observed distributions. After obtaining error-corrected minor allele frequencies, we applied ANalysis Of VAriance (ANOVA) based on a linear mixed model and found that conserved sites and transition frequencies in FIV genes differ among tissues of dual and single infected cats. Furthermore, analysis of minor allele frequencies at individual FIV genome sites revealed 242 sites significantly affected by infection status (dual vs. single) or infection status by tissue interaction. All together, our results demonstrated a decrease in FIV diversity in bone marrow in the presence of PLV. Importantly, these effects were weakened or undetectable when error correction was performed with other approaches (thresholding of minor allele frequencies; probabilistic clustering of reads). We also queried the data for cytidine deaminase activity on the viral genome, which causes an asymmetric increase in G to A substitutions, but found no evidence for this host defense strategy. Our error correction approach for minor allele frequencies (more sensitive and computationally efficient than other algorithms) and our statistical treatment of variation (ANOVA) were critical for effective use of high-throughput sequencing data in understanding viral diversity. We found that co-infection with PLV shifts FIV diversity from bone marrow to lymph node and spleen.
Sampling Based Influence Maximization on Linear Threshold Model
NASA Astrophysics Data System (ADS)
Jia, Su; Chen, Ling
2018-04-01
A sampling-based influence maximization method for the linear threshold (LT) model is presented. The method samples the routes in the possible worlds of the social network, and uses the Chernoff bound to estimate the number of samples so that the error can be constrained within a given bound. The activation probabilities of the routes in the possible worlds are then calculated and used to compute the influence spread of each node in the network. Our experimental results show that our method can effectively select an appropriate seed node set that spreads larger influence than other similar methods.
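The sample-size step can be illustrated with a Hoeffding-type bound for a [0,1]-bounded Monte Carlo estimate: enough samples are drawn that the estimate deviates from its mean by more than eps with probability at most delta. The constants below follow the generic Hoeffding/Chernoff bound and may differ from the exact bound used in the paper.

import math

def required_samples(eps, delta):
    # Hoeffding: P(|mean_estimate - true_mean| >= eps) <= 2*exp(-2*n*eps^2)
    # so n >= ln(2/delta) / (2*eps^2) samples suffice.
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

print(required_samples(0.01, 0.05))   # ~18445 samples for eps=0.01, delta=0.05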
Improved Quality in Aerospace Testing Through the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
DeLoach, R.
2000-01-01
This paper illustrates how, in the presence of systematic error, the quality of an experimental result can be influenced by the order in which the independent variables are set. It is suggested that in typical experimental circumstances in which systematic errors are significant, the common practice of organizing the set point order of independent variables to maximize data acquisition rate results in a test matrix that fails to produce the highest quality research result. With some care to match the volume of data required to satisfy inference error risk tolerances, it is possible to accept a lower rate of data acquisition and still produce results of higher technical quality (lower experimental error) with less cost and in less time than conventional test procedures, simply by optimizing the sequence in which independent variable levels are set.
A method for calibrating pH meters using standard solutions with low electrical conductivity
NASA Astrophysics Data System (ADS)
Rodionov, A. K.
2011-07-01
A procedure for obtaining standard solutions with low electrical conductivity that reproduce pH values both in acid and alkali regions is proposed. Estimates of the maximal possible error of reproducing the pH values of these solutions are obtained.
(In)sensitivity of GNSS techniques to geocenter motion
NASA Astrophysics Data System (ADS)
Rebischung, Paul; Altamimi, Zuheir; Springer, Tim
2013-04-01
As a satellite-based technique, GNSS should be sensitive to motions of the Earth's center of mass (CM) with respect to the Earth's crust. In theory, the weekly solutions of the IGS Analysis Centers (ACs) should indeed have the "instantaneous" CM as their origin, and the net translations between the weekly AC frames and a secular frame such as ITRF2008 should thus approximate the non-linear motion of CM with respect to the Earth's center of figure. However, the comparison of the AC translation time series with each other, with SLR geocenter estimates or with geophysical models reveals that this way of observing geocenter motion with GNSS currently gives unreliable results. The fact that the origin of the weekly AC solutions should be CM stems from the satellite equations of motion, in which no degree-1 Stokes coefficients are included. It is therefore reasonable to think that any mis-modeling or uncertainty about the forces acting on GNSS satellites can potentially offset the network origin from CM. That is why defects in radiation pressure modeling have long been assumed to be the main origin of the GNSS geocenter errors. In particular, Meindl et al. (2012) incriminate the correlation between the Z component of the origin and the direct radiation pressure parameters D0. We review here the sensitivity of GNSS techniques to geocenter motion from a different perspective. Our approach consists in determining the signature of a geocenter error on GNSS observations, and seeing how and how well such an error can be compensated by all other usual GNSS parameters. (In other words, we look for the linear combinations of parameters which have the maximal partial correlations with each of the 3 components of the origin, and evaluate these maximal partial correlations.) Without setting up any empirical radiation pressure parameter, we obtain maximal partial correlations of 99.98% for all 3 components of the origin: a geocenter error can almost perfectly be absorbed by the other GNSS parameters. Satellite clock offsets, if estimated epoch-wise, especially devastate the sensitivity of GNSS to geocenter motion. The numerous station-related parameters (station positions, station clock offsets, ZWDs and horizontal tropospheric gradients) do the rest of the job. The maximal partial correlations increase a bit more when the classic "ECOM" set of 5 radiation pressure parameters is set up for each satellite. But this increase is almost fully attributable to the once-per-revolution parameters BC & BS. In particular, we do not find the direct radiation pressure parameters D0 to play a predominant role in the GNSS geocenter determination problem.
Synthesis of robust nonlinear autopilots using differential game theory
NASA Technical Reports Server (NTRS)
Menon, P. K. A.
1991-01-01
A synthesis technique for handling unmodeled disturbances in nonlinear control law synthesis was advanced using differential game theory. Two types of modeling inaccuracies can be included in the formulation. The first is a bias-type error, while the second is the scale-factor-type error in the control variables. The disturbances were assumed to satisfy an integral inequality constraint. Additionally, it was assumed that they act in such a way as to maximize a quadratic performance index. Expressions for optimal control and worst-case disturbance were then obtained using optimal control theory.
Analysis of a planetary-rotation system for evaporated optical coatings.
Oliver, J B
2016-10-20
The impact of planetary design considerations for optical coating deposition is analyzed, including the ideal number of planets, variations in system performance, and the deviation of planet motion from the ideal. System capacity is maximized for four planets, although substrate size can significantly influence this result. Guidance is provided in the design of high-performance deposition systems based on the relative impact of different error modes. Errors in planet mounting such that the planet surface is not perpendicular to the axis of rotation are particularly problematic, suggesting planetary design modifications would be appropriate.
Lee, It Ee; Ghassemlooy, Zabih; Ng, Wai Pang; Khalighi, Mohammad-Ali
2013-02-01
Joint beam width and spatial coherence length optimization is proposed to maximize the average capacity in partially coherent free-space optical links, under the combined effects of atmospheric turbulence and pointing errors. An optimization metric is introduced to enable feasible translation of the joint optimal transmitter beam parameters into an analogous level of divergence of the received optical beam. Results show that near-ideal average capacity is best achieved through the introduction of a larger receiver aperture and the joint optimization technique.
A new universal dynamic model to describe eating rate and cumulative intake curves
Paynter, Jonathan; Peterson, Courtney M; Heymsfield, Steven B
2017-01-01
Background: Attempts to model cumulative intake curves with quadratic functions have not simultaneously taken gustatory stimulation, satiation, and maximal food intake into account. Objective: Our aim was to develop a dynamic model for cumulative intake curves that captures gustatory stimulation, satiation, and maximal food intake. Design: We developed a first-principles model of cumulative intake that universally describes gustatory stimulation, satiation, and maximal food intake using 3 key parameters: 1) the initial eating rate, 2) the effective duration of eating, and 3) the maximal food intake. These model parameters were estimated in a study (n = 49) where eating rates were deliberately changed. Baseline data were used to determine the quality of the model's fit to the data compared with the quadratic model. The 3 parameters were also calculated in a second study consisting of restrained and unrestrained eaters. Finally, we calculated when the gustatory stimulation phase is short or absent. Results: The mean sum squared error for the first-principles model was 337.1 ± 240.4 compared with 581.6 ± 563.5 for the quadratic model, or a 43% improvement in fit. Individual comparisons demonstrated lower errors for 94% of the subjects. Both sex (P = 0.002) and eating duration (P = 0.002) were associated with the initial eating rate (adjusted R2 = 0.23). Sex was also associated (P = 0.03 and P = 0.012) with the effective eating duration and maximum food intake (adjusted R2 = 0.06 and 0.11). In participants directed to eat as much as they could, compared with as much as they felt comfortable with, the maximal intake parameter was approximately doubled. The model found that certain parameter regions resulted in both stimulation and satiation phases, whereas others produced only a satiation phase. Conclusions: The first-principles model better quantifies interindividual differences in food intake, shows how aspects of food intake differ across subpopulations, and can be applied to determine how eating behavior factors influence total food intake. PMID:28077377
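To make the error comparison concrete, the classical quadratic cumulative-intake model (the comparison baseline above) can be fitted by least squares and its sum of squared errors computed; the functional form I(t) = r0·t + a·t² is the standard quadratic model, while the meal data below are toy numbers and the first-principles model itself is not reproduced here.

import numpy as np
from scipy.optimize import curve_fit

def quadratic_intake(t, r0, a):
    # cumulative intake: initial rate r0, deceleration (satiation) term a
    return r0 * t + a * t ** 2

# toy within-meal cumulative-intake data (minutes, grams)
t = np.array([0, 2, 4, 6, 8, 10, 12], dtype=float)
intake = np.array([0, 70, 130, 180, 215, 240, 250], dtype=float)

params, _ = curve_fit(quadratic_intake, t, intake, p0=[30.0, -1.0])
sse = np.sum((intake - quadratic_intake(t, *params)) ** 2)
print("r0 = %.1f g/min, a = %.2f, SSE = %.1f" % (params[0], params[1], sse))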
May, Jared; Johnson, Philip; Saleem, Huma
2017-01-01
ABSTRACT To maximize the coding potential of viral genomes, internal ribosome entry sites (IRES) can be used to bypass the traditional requirement of a 5′ cap and some/all of the associated translation initiation factors. Although viral IRES typically contain higher-order RNA structure, an unstructured sequence of about 84 nucleotides (nt) immediately upstream of the Turnip crinkle virus (TCV) coat protein (CP) open reading frame (ORF) has been found to promote internal expression of the CP from the genomic RNA (gRNA) both in vitro and in vivo. An absence of extensive RNA structure was predicted using RNA folding algorithms and confirmed by selective 2′-hydroxyl acylation analyzed by primer extension (SHAPE) RNA structure probing. Analysis of the IRES region in vitro by use of both the TCV gRNA and reporter constructs did not reveal any sequence-specific elements but rather suggested that an overall lack of structure was an important feature for IRES activity. The CP IRES is A-rich, independent of orientation, and strongly conserved among viruses in the same genus. The IRES was dependent on eIF4G, but not eIF4E, for activity. Low levels of CP accumulated in vivo in the absence of detectable TCV subgenomic RNAs, strongly suggesting that the IRES was active in the gRNA in vivo. Since the TCV CP also serves as the viral silencing suppressor, early translation of the CP from the viral gRNA is likely important for countering host defenses. Cellular mRNA IRES also lack extensive RNA structures or sequence conservation, suggesting that this viral IRES and cellular IRES may have similar strategies for internal translation initiation. IMPORTANCE Cap-independent translation is a common strategy among positive-sense, single-stranded RNA viruses for bypassing the host cell requirement of a 5′ cap structure. Viral IRES, in general, contain extensive secondary structure that is critical for activity. In contrast, we demonstrate that a region of viral RNA devoid of extensive secondary structure has IRES activity and produces low levels of viral coat protein in vitro and in vivo. Our findings may be applicable to cellular mRNA IRES that also have little or no sequences/structures in common. PMID:28179526
Flisiak, Robert; Horban, Andrzej; Gallay, Philippe; Bobardt, Michael; Selvarajah, Suganya; Wiercinska-Drapalo, Alicja; Siwak, Ewa; Cielniak, Iwona; Higersberger, Jozef; Kierkus, Jarek; Aeschlimann, Christian; Grosgurin, Pierre; Nicolas-Métral, Valérie; Dumont, Jean-Maurice; Porchet, Hervé; Crabbé, Raf; Scalfaro, Pietro
2008-03-01
Debio-025 is an oral cyclophilin (Cyp) inhibitor with potent anti-hepatitis C virus activity in vitro. Its effect on viral load as well as its influence on intracellular Cyp levels was investigated in a randomized, double-blind, placebo-controlled study. Mean hepatitis C viral load decreased significantly by 3.6 log10 after a 14-day oral treatment with 1200 mg twice daily (P < 0.0001) with an effect against the 3 genotypes (1, 3, and 4) represented in the study. In addition, the absence of viral rebound during treatment indicates that Debio-025 has a high barrier for the selection of resistance. In Debio-025-treated patients, cyclophilin B (CypB) levels in peripheral blood mononuclear cells decreased from 67 ± 6 (standard error) ng/mg protein (baseline) to 5 ± 1 ng/mg protein at day 15 (P < 0.01). Debio-025 induced a strong drop in CypB levels, coinciding with the decrease in hepatitis C viral load. These are the first preliminary human data supporting the hypothesis that CypB may play an important role in hepatitis C virus replication and that Cyp inhibition is a valid target for the development of anti-hepatitis C drugs.
Defining the Relationship Between Human Error Classes and Technology Intervention Strategies
NASA Technical Reports Server (NTRS)
Wiegmann, Douglas A.; Rantanen, Esa; Crisp, Vicki K. (Technical Monitor)
2002-01-01
One of the main factors in all aviation accidents is human error. The NASA Aviation Safety Program (AvSP), therefore, has identified several human-factors safety technologies to address this issue. Some technologies directly address human error either by attempting to reduce the occurrence of errors or by mitigating the negative consequences of errors. However, new technologies and system changes may also introduce new error opportunities or even induce different types of errors. Consequently, a thorough understanding of the relationship between error classes and technology "fixes" is crucial for the evaluation of intervention strategies outlined in the AvSP, so that resources can be effectively directed to maximize the benefit to flight safety. The purpose of the present project, therefore, was to examine the repositories of human factors data to identify the possible relationships between different error classes and technology intervention strategies. The first phase of the project, which is summarized here, involved the development of prototype data structures or matrices that map errors onto "fixes" (and vice versa), with the hope of facilitating the development of standards for evaluating safety products. Possible follow-on phases of this project are also discussed. These additional efforts include a thorough and detailed review of the literature to fill in the data matrix and the construction of a complete database and standards checklists.
Viral symbiosis and the holobiontic nature of the human genome.
Ryan, Francis Patrick
2016-01-01
The human genome is a holobiontic union of the mammalian nuclear genome, the mitochondrial genome and large numbers of endogenized retroviral genomes. This article defines and explores this symbiogenetic pattern of evolution, looking at the implications for human genetics, epigenetics, embryogenesis, physiology and the pathogenesis of inborn errors of metabolism and many other diseases. © 2016 APMIS. Published by John Wiley & Sons Ltd.
An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models
ERIC Educational Resources Information Center
Lee, Taehun
2010-01-01
In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faden, H.; Hong, J.J.; Ogra, P.L.
1986-03-01
The effect of RSV infection on the adherence of Streptococcus pneumoniae (SP), Haemophilus influenzae (HI) and Staphylococcus aureus (SA) to human epithelial cells was determined. RSV-infected HEp-2 cell cultures at different stages of expression of surface viral antigens and bacteria labeled with ³H-thymidine were employed to examine the kinetics of bacterial adherence to virus-infected cells. RSV infection did not alter the magnitude of adherence of HI or SA to HEp-2 cells. However, adherence of SP to HEp-2 cells was significantly (P < 0.01) enhanced by prior RSV infection. The degree of adherence was directly related to the amount of viral antigen expressed on the cell surface. The adherence was temperature dependent, with maximal adherence observed at 37°C. Heat-inactivation of SP did not alter adherence characteristics. These data suggest that RSV infection increases adherence of SP to the surface of epithelial cells in vitro. Since attachment of bacteria to mucosal surfaces is the first step in many infections, it is suggested that viral infections of epithelial cells render them more susceptible to bacterial adherence. Thus, RSV infection in vivo may predispose children to SP infections, such as otitis media, by increasing colonization with SP.
Jiang, Z; Dou, Z; Song, W L; Xu, J; Wu, Z Y
2017-11-10
Objective: To compare the results of different methods for handling HIV viral load (VL) data with missing values under different missingness mechanisms. Methods: We used SPSS 17.0 to simulate complete and missing data with different missing-value mechanisms from HIV viral load data collected from MSM in 16 cities in China in 2013. Maximum likelihood estimation using the expectation-maximization (EM) algorithm, a regression method, mean imputation, deletion, and Markov chain Monte Carlo (MCMC) were used to handle the missing data. The results of the different methods were compared with respect to distribution characteristics, accuracy, and precision. Results: HIV VL data could not be transformed into a normal distribution. All methods performed well for data missing completely at random (MCAR). For the other types of missing data, the regression and MCMC methods preserved the main characteristics of the original data. The means of the imputed data sets obtained with the different methods were all close to the original one. EM, the regression method, mean imputation, and deletion under-estimated VL, while MCMC overestimated it. Conclusion: MCMC can be used as the main imputation method for missing HIV viral load data. The imputed data can be used as a reference for estimating mean HIV VL among the investigated population.
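A hedged sketch of two of the simpler approaches compared above (mean imputation versus regression imputation) on simulated log10 viral-load data; the data-generating model, auxiliary covariate, and missingness pattern are illustrative assumptions, and the EM and MCMC methods used in the study are not reproduced here.

import numpy as np

rng = np.random.default_rng(1)
n = 200
cd4 = rng.normal(400, 100, n)                        # auxiliary covariate
log_vl = 6.0 - 0.004 * cd4 + rng.normal(0, 0.4, n)   # true log10 viral load

# missing-at-random: values are more often missing when CD4 is high (low VL),
# so simply averaging or mean-imputing the observed values is biased upward
miss = (cd4 > 450) & (rng.random(n) < 0.7)
observed = ~miss

vl_mean_imputed = np.where(miss, log_vl[observed].mean(), log_vl)

# regression imputation: predict missing log10 VL from the observed covariate
slope, intercept = np.polyfit(cd4[observed], log_vl[observed], 1)
vl_reg_imputed = np.where(miss, intercept + slope * cd4, log_vl)

print("true mean          :", log_vl.mean())
print("mean imputation    :", vl_mean_imputed.mean())
print("regression imputed :", vl_reg_imputed.mean())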
Receiver Diversity Combining Using Evolutionary Algorithms in Rayleigh Fading Channel
Akbari, Mohsen; Manesh, Mohsen Riahi
2014-01-01
In diversity combining at the receiver, the output signal-to-noise ratio (SNR) is often maximized by using the maximal ratio combining (MRC) provided that the channel is perfectly estimated at the receiver. However, channel estimation is rarely perfect in practice, which results in deteriorating the system performance. In this paper, an imperialistic competitive algorithm (ICA) is proposed and compared with two other evolutionary based algorithms, namely, particle swarm optimization (PSO) and genetic algorithm (GA), for diversity combining of signals travelling across the imperfect channels. The proposed algorithm adjusts the combiner weights of the received signal components in such a way that maximizes the SNR and minimizes the bit error rate (BER). The results indicate that the proposed method eliminates the need of channel estimation and can outperform the conventional diversity combining methods. PMID:25045725
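The maximal ratio combining baseline against which the evolutionary combiners are compared can be sketched directly: with perfect channel knowledge the MRC weights are the conjugate channel gains and the output SNR equals the sum of the branch SNRs, while channel-estimation error degrades this. The branch count, noise level, and error magnitude below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)
L, noise_var = 4, 1.0
h = (rng.normal(size=L) + 1j * rng.normal(size=L)) / np.sqrt(2)    # Rayleigh branch gains

def output_snr(weights, h, noise_var):
    # combiner output y = sum_k w_k * r_k with r_k = h_k * s + n_k
    signal = np.abs(np.sum(weights * h)) ** 2
    noise = noise_var * np.sum(np.abs(weights) ** 2)
    return signal / noise

w_perfect = np.conj(h)                                              # MRC with perfect CSI
h_est = h + 0.3 * (rng.normal(size=L) + 1j * rng.normal(size=L))    # imperfect channel estimate
w_imperfect = np.conj(h_est)

print("sum of branch SNRs :", np.sum(np.abs(h) ** 2) / noise_var)
print("MRC, perfect CSI   :", output_snr(w_perfect, h, noise_var))
print("MRC, imperfect CSI :", output_snr(w_imperfect, h, noise_var))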
Beating the Clauser-Horne-Shimony-Holt and the Svetlichny games with optimal states
NASA Astrophysics Data System (ADS)
Su, Hong-Yi; Ren, Changliang; Chen, Jing-Ling; Zhang, Fu-Lin; Wu, Chunfeng; Xu, Zhen-Peng; Gu, Mile; Vinjanampathy, Sai; Kwek, L. C.
2016-02-01
We study the relation between the maximal violation of Svetlichny's inequality and the mixedness of quantum states and obtain the optimal states (i.e., maximally nonlocal mixed states, or MNMS, for each value of linear entropy) to beat the Clauser-Horne-Shimony-Holt and the Svetlichny games. For the two-qubit and three-qubit MNMS, we show that these states are also the most tolerant against white noise, and thus serve as valuable quantum resources for such games. In particular, the quantum prediction of the MNMS decreases as the linear entropy increases, and then ceases to be nonlocal when the linear entropy reaches the critical points 2/3 and 9/14 for the two- and three-qubit cases, respectively. The MNMS are related to classical errors in the experimental preparation of maximally entangled states.
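For the two-qubit case, the maximal CHSH value of a state can be computed with the standard Horodecki criterion (not the MNMS construction itself): it equals 2·sqrt(t1 + t2), where t1 and t2 are the two largest eigenvalues of TᵀT and T is the correlation matrix T_ij = Tr[ρ(σ_i⊗σ_j)]. The Werner-state example below simply illustrates how the violation shrinks with mixedness.

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

def max_chsh(rho):
    # Horodecki criterion: maximal CHSH value of a two-qubit state rho
    T = np.array([[np.real(np.trace(rho @ np.kron(a, b))) for b in paulis]
                  for a in paulis])
    eig = np.sort(np.linalg.eigvalsh(T.T @ T))
    return 2.0 * np.sqrt(eig[-1] + eig[-2])

phi_plus = np.zeros((4, 1), dtype=complex)
phi_plus[[0, 3]] = 1 / np.sqrt(2)                 # |Phi+> = (|00> + |11>)/sqrt(2)
bell = phi_plus @ phi_plus.conj().T

for p in (1.0, 0.9, 0.7071, 0.5):                 # Werner state: p*|Phi+><Phi+| + (1-p)*I/4
    rho = p * bell + (1 - p) * np.eye(4) / 4
    print(p, round(max_chsh(rho), 4))             # equals 2*sqrt(2)*p; <= 2 means no violation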
Dai, Wenrui; Xiong, Hongkai; Jiang, Xiaoqian; Chen, Chang Wen
2014-01-01
This paper proposes a novel model on intra coding for High Efficiency Video Coding (HEVC), which simultaneously predicts blocks of pixels with optimal rate distortion. It utilizes the spatial statistical correlation for the optimal prediction based on 2-D contexts, in addition to formulating the data-driven structural interdependences to make the prediction error coherent with the probability distribution, which is desirable for successful transform and coding. The structured set prediction model incorporates a max-margin Markov network (M3N) to regulate and optimize multiple block predictions. The model parameters are learned by discriminating the actual pixel value from other possible estimates to maximize the margin (i.e., decision boundary bandwidth). Compared to existing methods that focus on minimizing prediction error, the M3N-based model adaptively maintains the coherence for a set of predictions. Specifically, the proposed model concurrently optimizes a set of predictions by associating the loss for individual blocks to the joint distribution of succeeding discrete cosine transform coefficients. When the sample size grows, the prediction error is asymptotically upper bounded by the training error under the decomposable loss function. As an internal step, we optimize the underlying Markov network structure to find states that achieve the maximal energy using expectation propagation. For validation, we integrate the proposed model into HEVC for optimal mode selection on rate-distortion optimization. The proposed prediction model obtains up to 2.85% bit rate reduction and achieves better visual quality in comparison to the HEVC intra coding. PMID:25505829
Modelling Hepatitis B Virus Antiviral Therapy and Drug Resistant Mutant Strains
NASA Astrophysics Data System (ADS)
Bernal, Julie; Dix, Trevor; Allison, Lloyd; Bartholomeusz, Angeline; Yuen, Lilly
Despite the existence of vaccines, the Hepatitis B virus (HBV) is still a serious global health concern. HBV targets liver cells. It has an unusual replication process involving an RNA pre-genome that the reverse transcriptase domain of the viral polymerase protein reverse transcribes into viral DNA. The reverse transcription process is error prone and, together with the high replication rate of the virus, allows the virus to exist as a heterogeneous population of mutants, known as a quasispecies, that can adapt and become resistant to antiviral therapy. This study presents an individual-based model of HBV inside an artificial liver, and associated blood serum, undergoing antiviral therapy. The model aims to provide insights into the evolution of the HBV quasispecies and the individual contribution of HBV mutations to the outcome of therapy.
NASA Astrophysics Data System (ADS)
Park, Jisang
In this dissertation, we investigate MIMO stability margin inference of a large number of controllers using pre-established stability margins of a small number of nu-gap-wise adjacent controllers. The generalized stability margin and the nu-gap metric are inherently able to handle MIMO system analysis without the necessity of repeating multiple channel-by-channel SISO analyses. This research consists of three parts: (i) development of a decision support tool for inference of the stability margin, (ii) computational considerations for yielding the maximal stability margin with the minimal nu-gap metric in a less conservative manner, and (iii) experiment design for estimating the generalized stability margin with an assured error bound. A modern problem from aerospace control involves the certification of a large set of potential controllers with either a single plant or a fleet of potential plant systems, with both plants and controllers being MIMO and, for the moment, linear. Experiments on a limited number of controller/plant pairs should establish the stability and a certain level of margin of the complete set. We consider this certification problem for a set of controllers and provide algorithms for selecting an efficient subset for testing. This is done for a finite set of candidate controllers and, at least for SISO plants, for an infinite set. In doing this, the nu-gap metric will be the main tool. We provide a theorem restricting a radius of a ball in the parameter space so that the controller can guarantee a prescribed level of stability and performance if parameters of the controllers are contained in the ball. Computational examples are given, including one of certification of an aircraft engine controller. The overarching aim is to introduce truly MIMO margin calculations and to understand their efficacy in certifying stability over a set of controllers and in replacing legacy single-loop gain and phase margin calculations. We consider methods for the computation of maximal MIMO stability margins b_{P̂,C}, minimal nu-gap metrics δ_ν, and the maximal difference between these two values, through the use of scaling and weighting functions. We propose simultaneous scaling selections that attempt to maximize the generalized stability margin and minimize the nu-gap. The minimization of the nu-gap by scaling involves a non-convex optimization. We modify the XY-centering algorithm to handle this non-convexity. This is done for applications in controller certification. Estimating the generalized stability margin with an accurate error bound has significant impact on controller certification. We analyze an error bound of the generalized stability margin as the infinity norm of the MIMO empirical transfer function estimate (ETFE). Input signal design to reduce the error on the estimate is also studied. We suggest running the system for a certain amount of time prior to recording of each output data set. The assured upper bound of estimation error can be tuned by the amount of the pre-experiment.
Dynamically correcting two-qubit gates against any systematic logical error
NASA Astrophysics Data System (ADS)
Calderon Vargas, Fernando Antonio
The reliability of quantum information processing depends on the ability to deal with noise and error in an efficient way. A significant source of error in many settings is coherent, systematic gate error. This work introduces a set of composite pulse sequences that generate maximally entangling gates and correct all systematic errors within the logical subspace to arbitrary order. These sequences are applicable for any two-qubit interaction Hamiltonian, and make no assumptions about the underlying noise mechanism except that it is constant on the timescale of the operation. The prime use for our results will be in cases where one has limited knowledge of the underlying physical noise and control mechanisms, highly constrained control, or both. In particular, we apply these composite pulse sequences to the quantum system formed by two capacitively coupled singlet-triplet qubits, which is characterized by having constrained control and noise sources that are low frequency and of a non-Markovian nature.
Powers, C. D.; Miller, B. A.; Kurtz, H.; Ackermann, W. W.
1969-01-01
Inhibition of HeLa cell deoxyribonucleic acid (DNA) synthesis, which occurred by the 4th to 5th hr after infection with poliovirus, could be blocked completely by guanidine only when it was present before the 2nd hr. At the 2nd hr, there was no significant ribonucleic acid (RNA)-replicase activity, and addition of guanidine inhibited all production of virus but allowed 57% of maximal DNA inhibition to develop. Maximum DNA inhibition developed in cells infected for 4 hr in the presence of guanidine when the guanidine was removed for a 10-min interval. RNA-replicase activity was not enzymatically detectable and viral multiplication did not develop in these cells unless the interval without guanidine was extended to 60 min. The interpretation of the data was that the effect of guanidine on viral-induced inhibition of DNA synthesis was distinct and not a consequence of the inhibition of RNA-replicase. PMID:4305675
SARS-CoV replicates in primary human alveolar type II cell cultures but not in type I-like cells
Mossel, Eric C.; Wang, Jieru; Jeffers, Scott; Edeen, Karen E.; Wang, Shuanglin; Cosgrove, Gregory P.; Funk, C. Joel; Manzer, Rizwan; Miura, Tanya A.; Pearson, Leonard D.; Holmes, Kathryn V.; Mason, Robert J.
2008-01-01
Severe acute respiratory syndrome (SARS) is a disease characterized by diffuse alveolar damage. We isolated alveolar type II cells and maintained them in a highly differentiated state. Type II cell cultures supported SARS-CoV replication as evidenced by RT-PCR detection of viral subgenomic RNA and an increase in virus titer. Virus titers were maximal by 24 hours and peaked at approximately 10^5 pfu/mL. Two cell types within the cultures were infected. One cell type was type II cells, which were positive for SP-A, SP-C, cytokeratin, a type II cell-specific monoclonal antibody, and Ep-CAM. The other cell type was composed of spindle-shaped cells that were positive for vimentin and collagen III and were likely fibroblasts. Viral replication was not detected in type I-like cells or macrophages. Hence, differentiated adult human alveolar type II cells were infectible, but alveolar type I-like cells and alveolar macrophages did not support productive infection. PMID:18022664
A transmission-virulence evolutionary trade-off explains attenuation of HIV-1 in Uganda
Blanquart, François; Grabowski, Mary Kate; Herbeck, Joshua; Nalugoda, Fred; Serwadda, David; Eller, Michael A; Robb, Merlin L; Gray, Ronald; Kigozi, Godfrey; Laeyendecker, Oliver; Lythgoe, Katrina A; Nakigozi, Gertrude; Quinn, Thomas C; Reynolds, Steven J; Wawer, Maria J; Fraser, Christophe
2016-01-01
Evolutionary theory hypothesizes that intermediate virulence maximizes pathogen fitness as a result of a trade-off between virulence and transmission, but empirical evidence remains scarce. We bridge this gap using data from a large and long-standing HIV-1 prospective cohort in Uganda. We use an epidemiological-evolutionary model parameterised with these data to derive evolutionary predictions based on analysis and detailed individual-based simulations. We robustly predict stabilising selection towards a low level of virulence, and rapid attenuation of the virus. Accordingly, set-point viral load, the most common measure of virulence, has declined in the last 20 years. Our model also predicts that subtype A is slowly outcompeting subtype D, with both subtypes becoming less virulent, as observed in the data. The reduction in set-point viral loads should have resulted in a 20% reduction in incidence and a three-year extension of untreated asymptomatic infection, increasing opportunities for timely treatment of infected individuals. DOI: http://dx.doi.org/10.7554/eLife.20492.001 PMID:27815945
Adoptive T Cell Immunotherapy for Patients with Primary Immunodeficiency Disorders.
McLaughlin, Lauren P; Bollard, Catherine M; Keller, Michael
2017-01-01
Primary immunodeficiency disorders (PID) are a group of inborn errors of immunity with a broad range of clinical severity but often associated with recurrent and serious infections. While hematopoietic stem cell transplantation (HSCT) can be curative for some forms of PID, chronic and/or refractory viral infections remain a cause of morbidity and mortality both before and after HSCT. Although antiviral pharmacologic agents exist for many viral pathogens, these are associated with significant costs and toxicities and may not be effective for increasingly drug-resistant pathogens. Thus, the emergence of adoptive immunotherapy with virus-specific T lymphocytes (VSTs) is an attractive option for addressing the underlying impaired T cell immunity in many PID patients. VSTs have been utilized for PID patients following HSCT in many prior phase I trials, and may potentially be beneficial before HSCT in patients with chronic viral infections. We review the various methods of generating VSTs, clinical experience using VSTs for PID patients, and current limitations as well as potential ways to broaden the clinical applicability of adoptive immunotherapy for PID patients.
Implications of segment mismatch for influenza A virus evolution
White, Maria C.; Lowen, Anice C.
2018-01-01
Influenza A virus (IAV) is an RNA virus with a segmented genome. These viral properties allow for the rapid evolution of IAV under selective pressure, due to mutation occurring from error-prone replication and the exchange of gene segments within a co-infected cell, termed reassortment. Both mutation and reassortment give rise to genetic diversity, but constraints shape their impact on viral evolution: just as most mutations are deleterious, most reassortment events result in genetic incompatibilities. The phenomenon of segment mismatch encompasses both RNA- and protein-based incompatibilities between co-infecting viruses and results in the production of progeny viruses with fitness defects. Segment mismatch is an important determining factor of the outcomes of mixed IAV infections and has been addressed in multiple risk assessment studies undertaken to date. However, due to the complexity of genetic interactions among the eight viral gene segments, our understanding of segment mismatch and its underlying mechanisms remains incomplete. Here, we summarize current knowledge regarding segment mismatch and discuss the implications of this phenomenon for IAV reassortment and diversity. PMID:29244017
Body mass index, immune status, and virological control in HIV-infected men who have sex with men.
Blashill, Aaron J; Mayer, Kenneth H; Crane, Heidi M; Grasso, Chris; Safren, Steven A
2013-01-01
Prior cross-sectional studies have found inconsistent relationships between body mass index (BMI) and disease progression in HIV-infected individuals. Cross-sectional and longitudinal analyses were conducted on data from a sample of 864 HIV-infected men who have sex with men (MSM) obtained from a large, nationally distributed HIV clinical cohort. Of the 864 HIV-infected MSM, 394 (46%) were of normal weight, 363 (42%) were overweight, and 107 (12%) were obese at baseline. The baseline CD4 count was 493 (standard error [SE] = 9), with viral load (log10) = 2.4 (SE = .04), and 561 (65%) were virologically suppressed. Over time, controlling for viral load, highly active antiretroviral therapy (HAART) adherence, age, and race/ethnicity, overweight and obese HIV-infected men had higher CD4 counts than normal-weight HIV-infected men. Further, overweight and obese men had lower viral loads than normal-weight HIV-infected men. For HIV-infected MSM in this longitudinal cohort study, possessing a heavier than normal BMI is longitudinally associated with improved immunological health.
Adaptive control of theophylline therapy: importance of blood sampling times.
D'Argenio, D Z; Khakmahd, K
1983-10-01
A two-observation protocol for estimating theophylline clearance during a constant-rate intravenous infusion is used to examine the importance of blood sampling schedules with regard to the information content of resulting concentration data. Guided by a theory for calculating maximally informative sample times, population simulations are used to assess the effect of specific sampling times on the precision of resulting clearance estimates and subsequent predictions of theophylline plasma concentrations. The simulations incorporated noise terms for intersubject variability, dosing errors, sample collection errors, and assay error. Clearance was estimated using Chiou's method, least squares, and a Bayesian estimation procedure. The results of these simulations suggest that clinically significant estimation and prediction errors may result when using the above two-point protocol for estimating theophylline clearance if the time separating the two blood samples is less than one population mean elimination half-life.
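The sampling-time effect described above can be reproduced with a small Monte Carlo sketch: simulate concentrations from a one-compartment, constant-rate-infusion model, add assay noise, and fit clearance from just two samples. This is an illustration under assumed parameter values, not the authors' protocol (which also used Chiou's method and Bayesian estimation).

import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
R0, V = 40.0, 35.0          # infusion rate (mg/h) and volume of distribution (L) -- assumed

def conc(t, cl):
    # One-compartment concentration during a constant-rate infusion, volume fixed at V
    return (R0 / cl) * (1.0 - np.exp(-(cl / V) * t))

def mean_abs_error(t1, t2, n=2000):
    t = np.array([t1, t2])
    errs = []
    for _ in range(n):
        cl_true = rng.lognormal(np.log(2.8), 0.3)                 # intersubject variability (assumed)
        y = conc(t, cl_true) * (1 + rng.normal(0, 0.1, 2))        # 10% assay/collection noise
        fit = least_squares(lambda p: conc(t, p[0]) - y, x0=[3.0], bounds=(0.1, 20.0))
        errs.append(abs(fit.x[0] - cl_true) / cl_true)
    return float(np.mean(errs))

# The simulated half-life is roughly 8 h, so compare closely vs widely spaced samples
for t1, t2 in [(1.0, 3.0), (1.0, 12.0)]:
    print(f"samples at {t1} h and {t2} h: mean |relative error in CL| = {mean_abs_error(t1, t2):.3f}")

With the two samples drawn close together early in the infusion, the data carry little information about clearance and the relative error grows, echoing the conclusion that a separation of less than one mean elimination half-life can produce clinically significant errors.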
[Detection and classification of medication errors at Joan XXIII University Hospital].
Jornet Montaña, S; Canadell Vilarrasa, L; Calabuig Muñoz, M; Riera Sendra, G; Vuelta Arce, M; Bardají Ruiz, A; Gallart Mora, M J
2004-01-01
Medication errors are multifactorial and multidisciplinary, and may originate in processes such as drug prescription, transcription, dispensation, preparation and administration. The goal of this work was to measure the incidence of detectable medication errors that arise within a unit dose drug distribution and control system, from drug prescription to drug administration, by means of an observational method confined to the Pharmacy Department, as well as a voluntary, anonymous report system. The acceptance of this voluntary report system's implementation was also assessed. A prospective descriptive study was conducted. Data collection was performed at the Pharmacy Department from a review of prescribed medical orders, a review of pharmaceutical transcriptions, a review of dispensed medication and a review of medication returned in unit dose medication carts. A voluntary, anonymous report system centralized in the Pharmacy Department was also set up to detect medication errors. Prescription errors were the most frequent (1.12%), closely followed by dispensation errors (1.04%). Transcription errors (0.42%) and administration errors (0.69%) had the lowest overall incidence. Voluntary report involved only 4.25% of all detected errors, whereas unit dose medication cart review contributed the most to error detection. Recognizing the incidence and types of medication errors that occur in a health-care setting allows us to analyze their causes and effect changes in different stages of the process in order to ensure maximal patient safety.
NASA Technical Reports Server (NTRS)
Bugbee, B.; Monje, O.
1992-01-01
Plant scientists have sought to maximize the yield of food crops since the beginning of agriculture. There are numerous reports of record food and biomass yields (per unit area) in all major crop plants, but many of the record yield reports are in error because they exceed the maximal theoretical rates of the component processes. In this article, we review the component processes that govern yield limits and describe how each process can be individually measured. This procedure has helped us validate theoretical estimates and determine what factors limit yields in optimal environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Imam, Neena; Barhen, Jacob; Glover, Charles Wayne
2012-01-01
Multi-sensor networks may face resource limitations in a dynamically evolving multiple target tracking scenario. It is necessary to task the sensors efficiently so that the overall system performance is maximized within the system constraints. The central sensor resource manager may control the sensors to meet objective functions that are formulated to meet system goals such as minimization of track loss, maximization of probability of target detection, and minimization of track error. This paper discusses the variety of techniques that may be utilized to optimize sensor performance for either near term gain or future reward over a longer time horizon.
Hsu, Shih-Feng; Su, Wen-Chi; Jeng, King-Song
2015-01-01
ABSTRACT Influenza A virus (IAV) depends on cellular factors to complete its replication cycle; thus, investigation of the factors utilized by IAV may facilitate antiviral drug development. To this end, a cellular transcriptional repressor, DR1, was identified from a genome-wide RNA interference (RNAi) screen. Knockdown (KD) of DR1 resulted in reductions of viral RNA and protein production, demonstrating that DR1 acts as a positive host factor in IAV replication. Genome-wide transcriptomic analysis showed that there was a strong induction of interferon-stimulated gene (ISG) expression after prolonged DR1 KD. We found that beta interferon (IFN-β) was induced by DR1 KD, thereby activating the JAK-STAT pathway to turn on ISG expression, which led to a strong inhibition of IAV replication. This result suggests that DR1 in normal cells suppresses IFN induction, probably to prevent undesired cytokine production, but that this suppression may create a milieu that favors IAV replication once cells are infected. Furthermore, biochemical assays of viral RNA replication showed that DR1 KD suppressed viral RNA replication. We also showed that DR1 associated with all three subunits of the viral RNA-dependent RNA polymerase (RdRp) complex, indicating that DR1 may interact with individual components of the viral RdRp complex to enhance viral RNA replication. Thus, DR1 may be considered a novel host susceptibility gene for IAV replication via a dual mechanism, not only suppressing the host defense to indirectly favor IAV replication but also directly facilitating viral RNA replication. IMPORTANCE Investigations of virus-host interactions involved in influenza A virus (IAV) replication are important for understanding viral pathogenesis and host defenses, which may manipulate influenza virus infection or prevent the emergence of drug resistance caused by a high error rate during viral RNA replication. For this purpose, a cellular transcriptional repressor, DR1, was identified from a genome-wide RNAi screen as a positive regulator in IAV replication. In the current studies, we showed that DR1 suppressed the gene expression of a large set of host innate immunity genes, which indirectly facilitated IAV replication in the event of IAV infection. Besides this scenario, DR1 also directly enhanced the viral RdRp activity, likely through associating with individual components of the viral RdRp complex. Thus, DR1 represents a novel host susceptibility gene for IAV replication via multiple functions, not only suppressing the host defense but also enhancing viral RNA replication. DR1 may be a potential target for drug development against influenza virus infection. PMID:25589657
Active learning: learning a motor skill without a coach.
Huang, Vincent S; Shadmehr, Reza; Diedrichsen, Jörn
2008-08-01
When we learn a new skill (e.g., golf) without a coach, we are "active learners": we have to choose the specific components of the task on which to train (e.g., iron, driver, putter, etc.). What guides our selection of the training sequence? How do choices that people make compare with choices made by machine learning algorithms that attempt to optimize performance? We asked subjects to learn the novel dynamics of a robotic tool while moving it in four directions. They were instructed to choose their practice directions to maximize their performance in subsequent tests. We found that their choices were strongly influenced by motor errors: subjects tended to immediately repeat an action if that action had produced a large error. This strategy was correlated with better performance on test trials. However, even when participants performed perfectly on a movement, they did not avoid repeating that movement. The probability of repeating an action did not drop below chance even when no errors were observed. This behavior led to suboptimal performance. It also violated a strong prediction of current machine learning algorithms, which solve the active learning problem by choosing a training sequence that will maximally reduce the learner's uncertainty about the task. While we show that these algorithms do not provide an adequate description of human behavior, our results suggest ways to improve human motor learning by helping people choose an optimal training sequence.
Relationship auditing of the FMA ontology
Gu, Huanying (Helen); Wei, Duo; Mejino, Jose L.V.; Elhanan, Gai
2010-01-01
The Foundational Model of Anatomy (FMA) ontology is a domain reference ontology based on a disciplined modeling approach. Due to its large size, semantic complexity and manual data entry process, errors and inconsistencies are unavoidable and might remain within the FMA structure without detection. In this paper, we present computable methods to highlight candidate concepts for various relationship assignment errors. The process starts with locating structures formed by transitive structural relationships (part_of, tributary_of, branch_of) and examine their assignments in the context of the IS-A hierarchy. The algorithms were designed to detect five major categories of possible incorrect relationship assignments: circular, mutually exclusive, redundant, inconsistent, and missed entries. A domain expert reviewed samples of these presumptive errors to confirm the findings. Seven thousand and fifty-two presumptive errors were detected, the largest proportion related to part_of relationship assignments. The results highlight the fact that errors are unavoidable in complex ontologies and that well designed algorithms can help domain experts to focus on concepts with high likelihood of errors and maximize their effort to ensure consistency and reliability. In the future similar methods might be integrated with data entry processes to offer real-time error detection. PMID:19475727
Analysis of a planetary-rotation system for evaporated optical coatings
Oliver, J. B.
2016-01-01
The impact of planetary-design considerations for optical coating deposition is analyzed, including the ideal number of planets, variations in system performance, and the deviation of planet motion from the ideal. System capacity is maximized for four planets, although substrate size can significantly influence this result. Guidance is provided in the design of high-performance deposition systems based on the relative impact of different error modes. As a result, errors in planet mounting such that the planet surface is not perpendicular to its axis of rotation are particularly problematic, suggesting planetary design modifications would be appropriate.
Cost-efficient selection of a marker panel in genetic studies
Jamie S. Sanderlin; Nicole Lazar; Michael J. Conroy; Jaxk Reeves
2012-01-01
Genetic techniques are frequently used to sample and monitor wildlife populations. The goal of these studies is to maximize the ability to distinguish individuals for various genetic inference applications, a process which is often complicated by genotyping error. However, wildlife studies usually have fixed budgets, which limit the number of genetic markers available...
A Strategy to Use Soft Data Effectively in Randomized Controlled Clinical Trials.
ERIC Educational Resources Information Center
Kraemer, Helena Chmura; Thiemann, Sue
1989-01-01
Sees soft data, measures having substantial intrasubject variability due to errors of measurement or response inconsistency, as important measures of response in randomized clinical trials. Shows that using intensive design and slope of response on time as outcome measure maximizes sample retention and decreases within-group variability, thus…
Model-Based Reinforcement Learning under Concurrent Schedules of Reinforcement in Rodents
ERIC Educational Resources Information Center
Huh, Namjung; Jo, Suhyun; Kim, Hoseok; Sul, Jung Hoon; Jung, Min Whan
2009-01-01
Reinforcement learning theories postulate that actions are chosen to maximize a long-term sum of positive outcomes based on value functions, which are subjective estimates of future rewards. In simple reinforcement learning algorithms, value functions are updated only by trial-and-error, whereas they are updated according to the decision-maker's…
Economic optimization of operations for hybrid energy systems under variable markets
Chen, Jen; Garcia, Humberto E.
2016-05-21
We propose hybrid energy systems (HES) as an important element for enabling increasing penetration of clean energy. This paper investigates the operational flexibility of HES and develops a methodology for operations optimization that maximizes economic value based on predicted renewable generation and market information. A multi-environment computational platform for performing such operations optimization is also developed. In order to compensate for prediction error, a control strategy is accordingly designed to operate a standby energy storage element (ESE) to avoid energy imbalance within the HES. The proposed operations optimizer allows systematic control of energy conversion for maximal economic value. Simulation results for two specific HES configurations are included to illustrate the proposed methodology and computational capability. These results demonstrate the economic viability of HES under the proposed operations optimizer, suggesting the diversion of energy to an alternative energy output while participating in the ancillary service market. The economic advantages of such an operations optimizer and the associated flexible operations are illustrated by comparing the economic performance of flexible operations against that of constant operations. Sensitivity analyses with respect to market variability and prediction error are also performed.
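A toy version of the value-maximizing dispatch problem can be written as a linear program: given hourly forecasts of renewable generation and prices, split each hour's energy between grid sales and an alternative product (e.g., fuel synthesis). This Python sketch is only a skeleton of the idea under assumed numbers; it is not the authors' multi-environment platform and omits the storage element and the ancillary-service market.

import numpy as np
from scipy.optimize import linprog

# Hourly forecasts (assumed values, for illustration only)
renewable  = np.array([20., 35., 50., 40., 25., 15.])   # MWh available each hour
grid_price = np.array([30., 25., 18., 22., 45., 60.])   # $/MWh on the electricity market
fuel_value = 35.0                                        # $/MWh-equivalent of the alternative product

# Decision variables per hour: energy to grid, energy to fuel; maximize revenue = minimize -revenue
T = len(renewable)
c = -np.concatenate([grid_price, np.full(T, fuel_value)])

# Each hour, total dispatch cannot exceed the forecast renewable generation
A_ub = np.hstack([np.eye(T), np.eye(T)])
b_ub = renewable

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
to_grid, to_fuel = res.x[:T], res.x[T:]
print("energy to grid:", np.round(to_grid, 1))
print("energy to fuel:", np.round(to_fuel, 1))
print("revenue ($)   :", round(-res.fun, 2))

In hours where the grid price falls below the value of the alternative product, the optimizer diverts energy away from the market, which is the qualitative behaviour the abstract reports; a realistic version would add storage dynamics and a feedback controller to absorb forecast error.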
Achieving High Throughput for Data Transfer over ATM Networks
NASA Technical Reports Server (NTRS)
Johnson, Marjory J.; Townsend, Jeffrey N.
1996-01-01
File-transfer rates for ftp are often reported to be relatively slow, compared to the raw bandwidth available in emerging gigabit networks. While a major bottleneck is disk I/O, protocol issues impact performance as well. Ftp was developed and optimized for use over the TCP/IP protocol stack of the Internet. However, TCP has been shown to run inefficiently over ATM. In an effort to maximize network throughput, data-transfer protocols can be developed to run over UDP or directly over IP, rather than over TCP. If error-free transmission is required, techniques for achieving reliable transmission can be included as part of the transfer protocol. However, selected image-processing applications can tolerate a low level of errors in images that are transmitted over a network. In this paper we report on experimental work to develop a high-throughput protocol for unreliable data transfer over ATM networks. We attempt to maximize throughput by keeping the communications pipe full, but still keep packet loss under five percent. We use the Bay Area Gigabit Network Testbed as our experimental platform.
NASA Technical Reports Server (NTRS)
Bauer, Frank (Technical Monitor); Luquette, Richard J.; Sanner, Robert M.
2003-01-01
Precision Formation Flying is an enabling technology for a variety of proposed space-based observatories, including the Micro-Arcsecond X-ray Imaging Mission (MAXIM), the associated MAXIM pathfinder mission, and the Stellar Imager. An essential element of the technology is the control algorithm. This paper discusses the development of a nonlinear, six-degree-of-freedom (6DOF) control algorithm for maintaining the relative position and attitude of a spacecraft within a formation. The translation dynamics are based on the equations of motion for the restricted three-body problem. The control law guarantees that the tracking error converges to zero, based on a Lyapunov analysis. The simulation, modelled after the MAXIM Pathfinder mission, maintains the relative position and attitude of a Follower spacecraft with respect to a Leader spacecraft, stationed near the L2 libration point in the Sun-Earth system.
Improving the Accuracy of Predicting Maximal Oxygen Consumption (VO2pk)
NASA Technical Reports Server (NTRS)
Downs, Meghan E.; Lee, Stuart M. C.; Ploutz-Snyder, Lori; Feiveson, Alan
2016-01-01
Maximal oxygen consumption (VO2pk) is the maximum amount of oxygen that the body can use during intense exercise and is used for benchmarking endurance exercise capacity. The most accurate method to determine VO2pk requires continuous measurements of ventilation and gas exchange during an exercise test to maximal effort, which necessitates expensive equipment, a trained staff, and time to set up the equipment. For astronauts, accurate VO2pk measures are important to assess mission-critical task performance capabilities and to prescribe exercise intensities to optimize performance. Currently, astronauts perform submaximal exercise tests during flight to predict VO2pk; however, while submaximal VO2pk prediction equations provide reliable estimates of mean VO2pk for populations, they can be unacceptably inaccurate for a given individual. The error in current predictions and the logistical limitations of measuring VO2pk, particularly during spaceflight, highlight the need for improved estimation methods.
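For context, a common class of submaximal prediction methods extrapolates the linear heart-rate/VO2 relationship measured at a few submaximal work rates up to an age-predicted maximal heart rate. The sketch below illustrates that generic approach with made-up stage data; it is not the specific protocol or equation used for astronauts.

import numpy as np

def predict_vo2pk(heart_rate, vo2, age):
    # Fit a line to submaximal HR vs. VO2 and extrapolate to age-predicted HRmax (220 - age)
    slope, intercept = np.polyfit(heart_rate, vo2, 1)
    hr_max = 220 - age
    return slope * hr_max + intercept

# Three submaximal stages (assumed data)
hr  = np.array([105, 125, 150])      # beats per minute
vo2 = np.array([1.4, 1.9, 2.6])      # L/min measured at each stage
print(f"predicted VO2pk ~ {predict_vo2pk(hr, vo2, age=40):.2f} L/min")

The individual-level error discussed in the abstract enters through both the scatter of the submaximal points and the 220 - age approximation of maximal heart rate, which can be off by tens of beats per minute for a given person.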
Li, Ao; Zhao, Haizhou; Lai, Qingying; Huang, Zhihong; Yuan, Meijin
2015-01-01
ABSTRACT Many viruses utilize viral or cellular chromatin machinery for efficient infection. Baculoviruses encode a conserved protamine-like protein, P6.9. This protein plays essential roles in various viral physiological processes during infection. However, the mechanism by which P6.9 regulates transcription remains unknown. In this study, 7 phosphorylated species of P6.9 were resolved in Sf9 cells infected with the baculovirus type species Autographa californica multiple nucleopolyhedrovirus (AcMNPV). Mass spectrometry identified 22 phosphorylation and 10 methylation sites but no acetylation sites in P6.9. Immunofluorescence demonstrated that the P6.9 and virus-encoded serine/threonine kinase PK1 exhibited similar distribution patterns in infected cells, and coimmunoprecipitation confirmed the interaction between them. Upon pk1 deletion, nucleocapsid assembly and polyhedron formation were interrupted and the transcription of viral very late genes was downregulated. Interestingly, we found that the 3 most phosphorylated P6.9 species vanished from Sf9 cells transfected with the pk1 deletion mutant, suggesting that PK1 is involved in the hyperphosphorylation of P6.9. Mass spectrometry suggested that the phosphorylation of the 7 Ser/Thr and 5 Arg residues in P6.9 was PK1 dependent. Replacement of the 7 Ser/Thr residues with Ala resulted in a P6.9 phosphorylation pattern similar to that of the pk1 deletion mutant. Importantly, the decreases in the transcription level of viral very late genes and viral infectivity were consistent. Our findings reveal that P6.9 hyperphosphorylation is a precondition for the maximal hyperexpression of baculovirus very late genes and provide the first experimental insights into the function of the baculovirus protamine-like protein and the related protein kinase in epigenetics. IMPORTANCE Diverse posttranslational modifications (PTMs) of histones constitute a code that creates binding platforms that recruit transcription factors to regulate gene expression. Many viruses also utilize host- or virus-induced chromatin machinery to promote efficient infections. Baculoviruses encode a protamine-like protein, P6.9, which is required for a variety of processes in the infection cycle. Currently, P6.9's PTM sites and its regulating factors remain unknown. Here, we found that P6.9 could be categorized as unphosphorylated, hypophosphorylated, and hyperphosphorylated species and that a virus-encoded serine/threonine kinase, PK1, was essential for P6.9 hyperphosphorylation. Abundant PTM sites on P6.9 were identified, among which 7 Ser/Thr phosphorylated sites were PK1 dependent. Mutation of these Ser/Thr sites reduced very late viral gene transcription and viral infectivity, indicating that the PK1-mediated P6.9 hyperphosphorylation contributes to viral proliferation. These data suggest that a code exists in the sophisticated PTM of viral protamine-like proteins and participates in viral gene transcription. PMID:25972542
Modeling the Violation of Reward Maximization and Invariance in Reinforcement Schedules
La Camera, Giancarlo; Richmond, Barry J.
2008-01-01
It is often assumed that animals and people adjust their behavior to maximize reward acquisition. In visually cued reinforcement schedules, monkeys make errors in trials that are not immediately rewarded, despite having to repeat error trials. Here we show that error rates are typically smaller in trials equally distant from reward but belonging to longer schedules (referred to as “schedule length effect”). This violates the principles of reward maximization and invariance and cannot be predicted by the standard methods of Reinforcement Learning, such as the method of temporal differences. We develop a heuristic model that accounts for all of the properties of the behavior in the reinforcement schedule task but whose predictions are not different from those of the standard temporal difference model in choice tasks. In the modification of temporal difference learning introduced here, the effect of schedule length emerges spontaneously from the sensitivity to the immediately preceding trial. We also introduce a policy for general Markov Decision Processes, where the decision made at each node is conditioned on the motivation to perform an instrumental action, and show that the application of our model to the reinforcement schedule task and the choice task are special cases of this general theoretical framework. Within this framework, Reinforcement Learning can approach contextual learning with the mixture of empirical findings and principled assumptions that seem to coexist in the best descriptions of animal behavior. As examples, we discuss two phenomena observed in humans that often derive from the violation of the principle of invariance: “framing,” wherein equivalent options are treated differently depending on the context in which they are presented, and the “sunk cost” effect, the greater tendency to continue an endeavor once an investment in money, effort, or time has been made. The schedule length effect might be a manifestation of these phenomena in monkeys. PMID:18688266
Modeling the violation of reward maximization and invariance in reinforcement schedules.
La Camera, Giancarlo; Richmond, Barry J
2008-08-08
It is often assumed that animals and people adjust their behavior to maximize reward acquisition. In visually cued reinforcement schedules, monkeys make errors in trials that are not immediately rewarded, despite having to repeat error trials. Here we show that error rates are typically smaller in trials equally distant from reward but belonging to longer schedules (referred to as "schedule length effect"). This violates the principles of reward maximization and invariance and cannot be predicted by the standard methods of Reinforcement Learning, such as the method of temporal differences. We develop a heuristic model that accounts for all of the properties of the behavior in the reinforcement schedule task but whose predictions are not different from those of the standard temporal difference model in choice tasks. In the modification of temporal difference learning introduced here, the effect of schedule length emerges spontaneously from the sensitivity to the immediately preceding trial. We also introduce a policy for general Markov Decision Processes, where the decision made at each node is conditioned on the motivation to perform an instrumental action, and show that the application of our model to the reinforcement schedule task and the choice task are special cases of this general theoretical framework. Within this framework, Reinforcement Learning can approach contextual learning with the mixture of empirical findings and principled assumptions that seem to coexist in the best descriptions of animal behavior. As examples, we discuss two phenomena observed in humans that often derive from the violation of the principle of invariance: "framing," wherein equivalent options are treated differently depending on the context in which they are presented, and the "sunk cost" effect, the greater tendency to continue an endeavor once an investment in money, effort, or time has been made. The schedule length effect might be a manifestation of these phenomena in monkeys.
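To make the comparison baseline concrete, here is a minimal TD(0) sketch of the standard temporal-difference model discussed above, applied to schedules in which reward arrives only after the final trial. The states, schedule lengths, and learning parameters are illustrative assumptions, not the authors' simulation code.

import numpy as np

rng = np.random.default_rng(3)
schedule_lengths = [1, 2, 3]      # reward is delivered only after the last trial of a schedule
alpha, gamma = 0.1, 0.9
V = {}                            # value of state (schedule_length, trials_remaining)

for episode in range(5000):
    L = rng.choice(schedule_lengths)
    for k in range(L, 0, -1):                         # k trials remaining, counting down
        s = (L, k)
        s_next = (L, k - 1) if k > 1 else None
        r = 1.0 if k == 1 else 0.0                    # reward only on the final trial
        v_next = V.get(s_next, 0.0) if s_next else 0.0
        V[s] = V.get(s, 0.0) + alpha * (r + gamma * v_next - V.get(s, 0.0))

for s in sorted(V):
    print(f"schedule length {s[0]}, {s[1]} trial(s) to reward: V = {V[s]:.2f}")

The learned values depend only on the number of trials remaining, not on the schedule length, so error rates derived from them would be identical for equally distant trials; this is exactly the invariance that the observed schedule-length effect violates and that motivates the authors' modification.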
A theoretical framework to predict the most likely ion path in particle imaging.
Collins-Fekete, Charles-Antoine; Volz, Lennart; Portillo, Stephen K N; Beaulieu, Luc; Seco, Joao
2017-03-07
In this work, a generic rigorous Bayesian formalism is introduced to predict the most likely path of any ion crossing a medium between two detection points. The path is predicted based on a combination of the particle scattering in the material and measurements of its initial and final position, direction and energy. The path estimate's precision is compared to the Monte Carlo simulated path. Every ion from hydrogen to carbon is simulated in two scenarios, (1) where the range is fixed and (2) where the initial velocity is fixed. In the scenario where the range is kept constant, the maximal root-mean-square error between the estimated path and the Monte Carlo path drops significantly between the proton path estimate (0.50 mm) and the helium path estimate (0.18 mm), but less so up to the carbon path estimate (0.09 mm). However, this scenario is identified as the configuration that maximizes the dose while minimizing the path resolution. In the scenario where the initial velocity is fixed, the maximal root-mean-square error between the estimated path and the Monte Carlo path drops significantly between the proton path estimate (0.29 mm) and the helium path estimate (0.09 mm) but increases for heavier ions up to carbon (0.12 mm). As a result, helium is found to be the particle with the most accurate path estimate for the lowest dose, potentially leading to tomographic images of higher spatial resolution.
Cooper, Jessica A.; Gorlick, Marissa A.; Denny, Taylor; Worthy, Darrell A.; Beevers, Christopher G.; Maddox, W. Todd
2013-01-01
Depression is often characterized by attentional biases toward negative items and away from positive items, which likely affects reward and punishment processing. Recent work reported that training attention away from negative stimuli reduced this bias and reduced depressive symptoms. However, the effect of attention training on subsequent learning has yet to be explored. In the current study, participants were required to learn to maximize reward during decision-making. Undergraduates with elevated self-reported depressive symptoms received attention training toward positive stimuli prior to performing the decision-making task (n=20; active training). The active training group was compared to two groups: undergraduates with elevated self-reported depressive symptoms who received placebo training (n=22; placebo training) and control subjects with low levels of depressive symptoms (n=33; non-depressive control). The placebo-training depressive group performed worse and switched between options more than non-depressive controls on the reward maximization task. However, depressives that received active training performed as well as non-depressive controls. Computational modeling indicated that the placebo-trained group learned more from negative than from positive prediction errors, leading to more frequent switching. The non-depressive control and active training depressive groups showed similar learning from positive and negative prediction errors, leading to less frequent switching and better performance. Our results indicate that individuals with elevated depressive symptoms are impaired at reward maximization, but that the deficit can be improved with attention training toward positive stimuli. PMID:24197612
Cooper, Jessica A; Gorlick, Marissa A; Denny, Taylor; Worthy, Darrell A; Beevers, Christopher G; Maddox, W Todd
2014-06-01
Depression is often characterized by attentional biases toward negative items and away from positive items, which likely affects reward and punishment processing. Recent work has reported that training attention away from negative stimuli reduced this bias and reduced depressive symptoms. However, the effect of attention training on subsequent learning has yet to be explored. In the present study, participants were required to learn to maximize reward during decision making. Undergraduates with elevated self-reported depressive symptoms received attention training toward positive stimuli prior to performing the decision-making task (n = 20; active training). The active-training group was compared to two other groups: undergraduates with elevated self-reported depressive symptoms who received placebo training (n = 22; placebo training) and a control group with low levels of depressive symptoms (n = 33; nondepressive control). The placebo-training depressive group performed worse and switched between options more than did the nondepressive controls on the reward maximization task. However, depressives that received active training performed as well as the nondepressive controls. Computational modeling indicated that the placebo-trained group learned more from negative than from positive prediction errors, leading to more frequent switching. The nondepressive control and active-training depressive groups showed similar learning from positive and negative prediction errors, leading to less-frequent switching and better performance. Our results indicate that individuals with elevated depressive symptoms are impaired at reward maximization, but that the deficit can be improved with attention training toward positive stimuli.
Lavysh, Daria; Sokolova, Maria; Slashcheva, Marina; Förstner, Konrad U; Severinov, Konstantin
2017-02-14
Bacteriophage AR9 is a recently sequenced jumbo phage that encodes two multisubunit RNA polymerases. Here we investigated the AR9 transcription strategy and the effect of AR9 infection on the transcription of its host, Bacillus subtilis. Analysis of whole-genome transcription revealed early, late, and continuously expressed AR9 genes. Alignment of sequences upstream of the 5' ends of AR9 transcripts revealed consensus sequences that define early and late phage promoters. Continuously expressed AR9 genes have both early and late promoters in front of them. Early AR9 transcription is independent of protein synthesis and must be determined by virion RNA polymerase injected together with viral DNA. During infection, the overall amount of host mRNAs is significantly decreased. Analysis of relative amounts of host transcripts revealed notable differences in the levels of some mRNAs. The physiological significance of up- or downregulation of host genes for AR9 phage infection remains to be established. AR9 infection is significantly affected by rifampin, an inhibitor of host RNA polymerase transcription. The effect is likely caused by the antibiotic-induced killing of host cells, while phage genome transcription is solely performed by viral RNA polymerases. IMPORTANCE Phages regulate the timing of the expression of their own genes to coordinate processes in the infected cell and maximize the release of viral progeny. Phages also alter the levels of host transcripts. Here we present the results of a temporal analysis of the host and viral transcriptomes of Bacillus subtilis infected with a giant phage, AR9. We identify viral promoters recognized by two virus-encoded RNA polymerases that are a unique feature of the phiKZ-related group of phages to which AR9 belongs. Our results set the stage for future analyses of highly unusual RNA polymerases encoded by AR9 and other phiKZ-related phages. Copyright © 2017 Lavysh et al.
Martenot, Claire; Segarra, Amélie; Baillon, Laury; Faury, Nicole; Houssin, Maryline; Renault, Tristan
2016-05-01
Immunohistochemistry (IHC) assays were conducted on paraffin sections from experimentally infected spat and unchallenged spat produced in hatchery to determine the tissue distribution of three viral proteins within the Pacific oyster, Crassostrea gigas. Polyclonal antibodies were produced from recombinant proteins corresponding to two putative membrane proteins and one putative apoptosis inhibitor encoded by ORF 25, 72, and 87, respectively. Results were then compared to those obtained by in situ hybridization performed on the same individuals, and showed a substantial agreement according to Landis and Koch numeric scale. Positive signals were mainly observed in connective tissue of gills, mantle, adductor muscle, heart, digestive gland, labial palps, and gonads of infected spat. Positive signals were also reported in digestive epithelia. However, few positive signals were also observed in healthy appearing oysters (unchallenged spat) and could be due to virus persistence after a primary infection. Cellular localization of staining seemed to be linked to the function of the viral protein targeted. A nucleus staining was preferentially observed with antibodies targeting the putative apoptosis inhibitor protein whereas a cytoplasmic localization was obtained using antibodies recognizing putative membrane proteins. The detection of viral proteins was often associated with histopathological changes previously reported during OsHV-1 infection by histology and transmission electron microscopy. Within the 6h after viral suspension injection, positive signals were almost at the maximal level with the three antibodies and all studied organs appeared infected at 28h post viral injection. Connective tissue appeared to be a privileged site for OsHV-1 replication even if positive signals were observed in the epithelium cells of different organs which may be interpreted as a hypothetical portal of entry or release for the virus. IHC constitutes a suited method for analyzing the early infection stages of OsHV-1 infection and a useful tool to investigate interactions between OsHV-1 and its host at a protein level. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
The DiskMass Survey. II. Error Budget
NASA Astrophysics Data System (ADS)
Bershady, Matthew A.; Verheijen, Marc A. W.; Westfall, Kyle B.; Andersen, David R.; Swaters, Rob A.; Martinsson, Thomas
2010-06-01
We present a performance analysis of the DiskMass Survey. The survey uses collisionless tracers in the form of disk stars to measure the surface density of spiral disks, to provide an absolute calibration of the stellar mass-to-light ratio (Υ_*), and to yield robust estimates of the dark-matter halo density profile in the inner regions of galaxies. We find that a disk inclination range of 25°-35° is optimal for our measurements, consistent with our survey design to select nearly face-on galaxies. Uncertainties in disk scale heights are significant, but can be estimated from radial scale lengths to 25% now, and more precisely in the future. We detail the spectroscopic analysis used to derive line-of-sight velocity dispersions, precise at low surface brightness, and accurate in the presence of composite stellar populations. Our methods take full advantage of large-grasp integral-field spectroscopy and an extensive library of observed stars. We show that the baryon-to-total mass fraction (F_bar) is not a well-defined observational quantity because it is coupled to the halo mass model. This remains true even when the disk mass is known and spatially extended rotation curves are available. In contrast, the fraction of the rotation speed supplied by the disk at 2.2 scale lengths (disk maximality) is a robust observational indicator of the baryonic disk contribution to the potential. We construct the error budget for the key quantities: dynamical disk mass surface density (Σ_dyn), disk stellar mass-to-light ratio (Υ_*^disk), and disk maximality (F_*,max^disk ≡ V_*,max^disk / V_c). Random and systematic errors in these quantities for individual galaxies will be ~25%, while survey precision for sample quartiles is reduced to 10%, largely devoid of systematic errors outside of distance uncertainties.
Optimized 3D stitching algorithm for whole body SPECT based on transition error minimization (TEM)
NASA Astrophysics Data System (ADS)
Cao, Xinhua; Xu, Xiaoyin; Voss, Stephan
2017-02-01
Standard Single Photon Emission Computed Tomography (SPECT) has a limited field of view (FOV) and cannot provide a 3D image of an entire long whole body SPECT. To produce a 3D whole body SPECT image, two to five overlapped SPECT FOVs from head to foot are acquired and assembled using image stitching. Most commercial software from medical imaging manufacturers applies a direct mid-slice stitching method to avoid blurring or ghosting from 3D image blending. Due to intensity changes across the middle slice of overlapped images, direct mid-slice stitching often produces visible seams in the coronal and sagittal views and maximal intensity projection (MIP). In this study, we proposed an optimized algorithm to reduce the visibility of stitching edges. The new algorithm computed, based on transition error minimization (TEM), a 3D stitching interface between two overlapped 3D SPECT images. To test the suggested algorithm, four studies of 2-FOV whole body SPECT were used and included two different reconstruction methods (filtered back projection (FBP) and ordered subset expectation maximization (OSEM)) as well as two different radiopharmaceuticals (Tc-99m MDP for bone metastases and I-131 MIBG for neuroblastoma tumors). Relative transition errors of stitched whole body SPECT using mid-slice stitching and the TEM-based algorithm were measured for objective evaluation. Preliminary experiments showed that the new algorithm reduced the visibility of the stitching interface in the coronal, sagittal, and MIP views. Average relative transition errors were reduced from 56.7% of mid-slice stitching to 11.7% of TEM-based stitching. The proposed algorithm also avoids blurring artifacts by preserving the noise properties of the original SPECT images.
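The core of the stitching step can be sketched in a few lines: within the overlap region of two FOVs, pick, for each transaxial position, the slice at which switching from one volume to the other produces the smallest intensity jump. This is a simplified, greedy variant of transition-error minimization under assumed array shapes, not the authors' exact algorithm.

import numpy as np

def stitch_overlap(vol_a, vol_b):
    # vol_a and vol_b are the two reconstructions restricted to their common overlap,
    # with axis 0 running head-to-foot. Choose a transition slice per (y, x) column
    # that minimizes the local transition error |A - B|, then switch volumes there.
    assert vol_a.shape == vol_b.shape
    transition_error = np.abs(vol_a - vol_b)
    z_cut = transition_error.argmin(axis=0)                    # best switch slice per column
    z_index = np.arange(vol_a.shape[0])[:, None, None]
    stitched = np.where(z_index < z_cut[None, :, :], vol_a, vol_b)
    return stitched, z_cut

# Toy overlap region (assumed size and counts)
rng = np.random.default_rng(4)
a = rng.poisson(100, size=(16, 8, 8)).astype(float)
b = a + rng.normal(0, 5, size=a.shape)                         # second FOV with independent noise
stitched, cut = stitch_overlap(a, b)
print("mean transition slice index:", cut.mean())

Compared with a fixed mid-slice cut, letting the cut surface follow the minimum of the transition error is what suppresses the visible seam in the coronal, sagittal, and MIP views.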
Vanderhoof, Melanie; Distler, Hayley; Mendiola, Di Ana; Lang, Megan
2017-01-01
Natural variability in surface-water extent and associated characteristics presents a challenge to gathering timely, accurate information, particularly in environments that are dominated by small and/or forested wetlands. This study mapped inundation extent across the Upper Choptank River Watershed on the Delmarva Peninsula, occurring within both Maryland and Delaware. We integrated six quad-polarized Radarsat-2 images, Worldview-3 imagery, and an enhanced topographic wetness index in a random forest model. Output maps were filtered using light detection and ranging (lidar)-derived depressions to maximize the accuracy of forested inundation extent. Overall accuracy within the integrated and filtered model was 94.3%, with 5.5% and 6.0% errors of omission and commission for inundation, respectively. Accuracy of inundation maps obtained using Radarsat-2 alone were likely detrimentally affected by less than ideal angles of incidence and recent precipitation, but were likely improved by targeting the period between snowmelt and leaf-out for imagery collection. Across the six Radarsat-2 dates, filtering inundation outputs by lidar-derived depressions slightly elevated errors of omission for water (+1.0%), but decreased errors of commission (−7.8%), resulting in an average increase of 5.4% in overall accuracy. Depressions were derived from lidar datasets collected under both dry and average wetness conditions. Although antecedent wetness conditions influenced the abundance and total area mapped as depression, the two versions of the depression datasets showed a similar ability to reduce error in the inundation maps. Accurate mapping of surface water is critical to predicting and monitoring the effect of human-induced change and interannual variability on water quantity and quality.
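As a schematic of the classification step, the sketch below trains a random forest on a synthetic stack of SAR, optical, and wetness-index features and then indicates how a depression mask would be applied as a post-hoc filter. All feature values, thresholds, and the mask are stand-ins, not the study's Radarsat-2, Worldview-3, or lidar inputs.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
n = 5000
X = np.column_stack([
    rng.normal(-12, 3, n),      # quad-pol SAR backscatter, co-pol (dB)
    rng.normal(-18, 3, n),      # SAR backscatter, cross-pol (dB)
    rng.normal(0.25, 0.1, n),   # optical near-infrared reflectance
    rng.normal(8, 2, n),        # enhanced topographic wetness index
])
y = ((X[:, 3] > 9) & (X[:, 2] < 0.25)).astype(int)   # synthetic "inundated" label

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X[:4000], y[:4000])
pred = clf.predict(X[4000:])
print("hold-out accuracy:", round(float((pred == y[4000:]).mean()), 3))

# In the study, the prediction raster is then intersected with lidar-derived depressions;
# with boolean rasters this amounts to a logical AND:
#     inundation_filtered = inundation_map & depression_mask

Restricting forest-canopy inundation calls to mapped depressions is what trimmed the commission error in the study while adding only a small amount of omission error.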
USDA-ARS's Scientific Manuscript database
When estimating severity of a plant disease, a disease interval (or category) scale comprises a number of categories of known numeric values – with plant disease this is generally based on the percent area with symptoms (e.g. the Horsfall-Barratt (H-B) scale). Studies in plant pathology and plant br...
Motion compensated shape error concealment.
Schuster, Guido M; Katsaggelos, Aggelos K
2006-02-01
The introduction of Video Objects (VOs) is one of the innovations of MPEG-4. The alpha-plane of a VO defines its shape at a given instance in time and hence determines the boundary of its texture. In packet-based networks, shape, motion, and texture are subject to loss. While there has been considerable attention paid to the concealment of texture and motion errors, little has been done in the field of shape error concealment. In this paper we propose a post-processing shape error concealment technique that uses the motion compensated boundary information of the previously received alpha-plane. The proposed approach is based on matching received boundary segments in the current frame to the boundary in the previous frame. This matching is achieved by finding a maximally smooth motion vector field. After the current boundary segments are matched to the previous boundary, the missing boundary pieces are reconstructed by motion compensation. Experimental results demonstrating the performance of the proposed motion compensated shape error concealment method, and comparing it with the previously proposed weighted side matching method are presented.
Estimating errors in least-squares fitting
NASA Technical Reports Server (NTRS)
Richter, P. H.
1995-01-01
While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of the antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.
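A minimal numerical sketch of the idea (an ordinary polynomial fit, not the report's closed-form expressions): the parameter covariance matrix is formed from the residual variance and the design matrix, and the standard error of the fitted function follows by propagating that covariance.

import numpy as np

def polyfit_with_errors(x, y, deg):
    A = np.vander(x, deg + 1)                              # design matrix for an ordinary polynomial
    coef, res, rank, _ = np.linalg.lstsq(A, y, rcond=None)
    dof = len(x) - (deg + 1)
    rss = res[0] if res.size else np.sum((y - A @ coef) ** 2)
    cov = (rss / dof) * np.linalg.inv(A.T @ A)             # parameter covariance matrix
    param_se = np.sqrt(np.diag(cov))                       # standard errors of the fitted parameters
    fit_se = np.sqrt(np.einsum('ij,jk,ik->i', A, cov, A))  # standard error of the fitted function at each x
    return coef, param_se, fit_se

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, x.size)
coef, param_se, fit_se = polyfit_with_errors(x, y, deg=1)
print(coef, param_se, fit_se.max())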
Suppression of vapor cell temperature error for spin-exchange-relaxation-free magnetometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Jixi, E-mail: lujixi@buaa.edu.cn; Qian, Zheng; Fang, Jiancheng
2015-08-15
This paper presents a method to reduce the vapor cell temperature error of the spin-exchange-relaxation-free (SERF) magnetometer. The fluctuation of cell temperature can induce variations of the optical rotation angle, resulting in a scale factor error of the SERF magnetometer. In order to suppress this error, we employ the variation of the probe beam absorption to offset the variation of the optical rotation angle. The theoretical discussion of our method indicates that the scale factor error introduced by the fluctuation of the cell temperature could be suppressed by setting the optical depth close to one. In our experiment, we adjust the probe frequency to obtain various optical depths and then measure the variation of scale factor with respect to the corresponding cell temperature changes. Our experimental results show a good agreement with our theoretical analysis. Under our experimental condition, the error has been reduced significantly compared with those when the probe wavelength is adjusted to maximize the probe signal. The cost of this method is the reduction of the scale factor of the magnetometer. However, according to our analysis, it only has minor effect on the sensitivity under proper operating parameters.
NASA Astrophysics Data System (ADS)
Jun, Brian; Giarra, Matthew; Golz, Brian; Main, Russell; Vlachos, Pavlos
2016-11-01
We present a methodology to mitigate the major sources of error associated with two-dimensional confocal laser scanning microscopy (CLSM) images of nanoparticles flowing through a microfluidic channel. The correlation-based velocity measurements from CLSM images are subject to random error due to the Brownian motion of nanometer-sized tracer particles, and a bias error due to the formation of images by raster scanning. Here, we develop a novel ensemble phase correlation with dynamic optimal filter that maximizes the correlation strength, which diminishes the random error. In addition, we introduce an analytical model of CLSM measurement bias error correction due to two-dimensional image scanning of tracer particles. We tested our technique using both synthetic and experimental images of nanoparticles flowing through a microfluidic channel. We observed that our technique reduced the error by up to a factor of ten compared to ensemble standard cross correlation (SCC) for the images tested in the present work. Subsequently, we will assess our framework further, by interrogating nanoscale flow in the cell culture environment (transport within the lacunar-canalicular system) to demonstrate our ability to accurately resolve flow measurements in a biological system.
Huy, Nguyen Tien; Thao, Nguyen Thanh Hong; Tuan, Nguyen Anh; Khiem, Nguyen Tuan; Moore, Christopher C.; Thi Ngoc Diep, Doan; Hirayama, Kenji
2012-01-01
Background and Purpose: Successful outcomes from bacterial meningitis require rapid antibiotic treatment; however, unnecessary treatment of viral meningitis may lead to increased toxicities and expense. Thus, improved diagnostics are required to maximize treatment and minimize side effects and cost. Thirteen clinical decision rules have been reported to identify bacterial from viral meningitis. However, few rules have been tested and compared in a single study, while several rules are yet to be tested by independent researchers or in pediatric populations. Thus, simultaneous test and comparison of these rules are required to enable clinicians to select an optimal diagnostic rule for bacterial meningitis in settings and populations similar to ours. Methods: A retrospective cross-sectional study was conducted at the Infectious Department of Pediatric Hospital Number 1, Ho Chi Minh City, Vietnam. The performance of the clinical rules was evaluated by area under a receiver operating characteristic curve (ROC-AUC) using the method of DeLong and the McNemar test for specificity comparison. Results: Our study included 129 patients, of whom 80 had bacterial meningitis and 49 had presumed viral meningitis. Spanos's rule had the highest AUC at 0.938 but was not significantly greater than other rules. No rule provided 100% sensitivity with a specificity higher than 50%. Based on our calculation of theoretical sensitivity and specificity, we suggest that a perfect rule requires at least four independent variables that possess both sensitivity and specificity higher than 85–90%. Conclusions: No clinical decision rules provided an acceptable specificity (>50%) with 100% sensitivity when applying our data set in children. More studies in Vietnam and developing countries are required to develop and/or validate clinical rules, and more high-quality biomarkers are required to develop such a perfect rule. PMID:23209715
Huy, Nguyen Tien; Thao, Nguyen Thanh Hong; Tuan, Nguyen Anh; Khiem, Nguyen Tuan; Moore, Christopher C; Thi Ngoc Diep, Doan; Hirayama, Kenji
2012-01-01
Successful outcomes from bacterial meningitis require rapid antibiotic treatment; however, unnecessary treatment of viral meningitis may lead to increased toxicities and expense. Thus, improved diagnostics are required to maximize treatment and minimize side effects and cost. Thirteen clinical decision rules have been reported to identify bacterial from viral meningitis. However, few rules have been tested and compared in a single study, while several rules are yet to be tested by independent researchers or in pediatric populations. Thus, simultaneous test and comparison of these rules are required to enable clinicians to select an optimal diagnostic rule for bacterial meningitis in settings and populations similar to ours. A retrospective cross-sectional study was conducted at the Infectious Department of Pediatric Hospital Number 1, Ho Chi Minh City, Vietnam. The performance of the clinical rules was evaluated by area under a receiver operating characteristic curve (ROC-AUC) using the method of DeLong and the McNemar test for specificity comparison. Our study included 129 patients, of whom 80 had bacterial meningitis and 49 had presumed viral meningitis. Spanos's rule had the highest AUC at 0.938 but was not significantly greater than other rules. No rule provided 100% sensitivity with a specificity higher than 50%. Based on our calculation of theoretical sensitivity and specificity, we suggest that a perfect rule requires at least four independent variables that possess both sensitivity and specificity higher than 85-90%. No clinical decision rules provided an acceptable specificity (>50%) with 100% sensitivity when applying our data set in children. More studies in Vietnam and developing countries are required to develop and/or validate clinical rules, and more high-quality biomarkers are required to develop such a perfect rule.
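As an illustration of the comparison metric (not the study's code, which used DeLong's method for formal testing), the area under the ROC curve for one rule's score can be computed from the Mann-Whitney rank statistic, with hypothetical scores and labels below.

import numpy as np

def roc_auc(scores, labels):
    # labels: 1 = bacterial meningitis, 0 = presumed viral meningitis
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical rule scores for six patients
print(roc_auc([0.9, 0.8, 0.7, 0.4, 0.2, 0.1], [1, 1, 1, 0, 0, 0]))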
Digital Filters for Digital Phase-locked Loops
NASA Technical Reports Server (NTRS)
Simon, M.; Mileant, A.
1985-01-01
An s/z hybrid model for a general phase locked loop is proposed. The impact of the loop filter on the stability, gain margin, noise equivalent bandwidth, steady state error and time response is investigated. A specific digital filter is selected which maximizes the overall gain margin of the loop. This filter can have any desired number of integrators. Three integrators are sufficient in order to track a phase jerk with zero steady state error at loop update instants. This filter has one zero near z = 1.0 for each integrator. The total number of poles of the filter is equal to the number of integrators plus two.
Optimal Design of Low-Density SNP Arrays for Genomic Prediction: Algorithm and Applications.
Wu, Xiao-Lin; Xu, Jiaqi; Feng, Guofei; Wiggans, George R; Taylor, Jeremy F; He, Jun; Qian, Changsong; Qiu, Jiansheng; Simpson, Barry; Walker, Jeremy; Bauck, Stewart
2016-01-01
Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for the optimal design of LD SNP chips. A multiple-objective, local optimization (MOLO) algorithm was developed for design of optimal LD SNP chips that can be imputed accurately to medium-density (MD) or high-density (HD) SNP genotypes for genomic prediction. The objective function facilitates maximization of non-gap map length and system information for the SNP chip, and the latter is computed either as locus-averaged (LASE) or haplotype-averaged Shannon entropy (HASE) and adjusted for uniformity of the SNP distribution. HASE performed better than LASE with ≤1,000 SNPs, but required considerably more computing time. Nevertheless, the differences diminished when >5,000 SNPs were selected. Optimization was accomplished conditionally on the presence of SNPs that were obligated to each chromosome. The frame location of SNPs on a chip can be either uniform (evenly spaced) or non-uniform. For the latter design, a tunable empirical Beta distribution was used to guide location distribution of frame SNPs such that both ends of each chromosome were enriched with SNPs. The SNP distribution on each chromosome was finalized through the objective function that was locally and empirically maximized. This MOLO algorithm was capable of selecting a set of approximately evenly-spaced and highly-informative SNPs, which in turn led to increased imputation accuracy compared with selection solely of evenly-spaced SNPs. Imputation accuracy increased with LD chip size, and imputation error rate was extremely low for chips with ≥3,000 SNPs. Assuming that genotyping or imputation error occurs at random, imputation error rate can be viewed as the upper limit for genomic prediction error. Our results show that about 25% of imputation error rate was propagated to genomic prediction in an Angus population. The utility of this MOLO algorithm was also demonstrated in a real application, in which a 6K SNP panel was optimized conditional on 5,260 obligatory SNP selected based on SNP-trait association in U.S. Holstein animals. With this MOLO algorithm, both imputation error rate and genomic prediction error rate were minimal.
Optimal Design of Low-Density SNP Arrays for Genomic Prediction: Algorithm and Applications
Wu, Xiao-Lin; Xu, Jiaqi; Feng, Guofei; Wiggans, George R.; Taylor, Jeremy F.; He, Jun; Qian, Changsong; Qiu, Jiansheng; Simpson, Barry; Walker, Jeremy; Bauck, Stewart
2016-01-01
Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for the optimal design of LD SNP chips. A multiple-objective, local optimization (MOLO) algorithm was developed for design of optimal LD SNP chips that can be imputed accurately to medium-density (MD) or high-density (HD) SNP genotypes for genomic prediction. The objective function facilitates maximization of non-gap map length and system information for the SNP chip, and the latter is computed either as locus-averaged (LASE) or haplotype-averaged Shannon entropy (HASE) and adjusted for uniformity of the SNP distribution. HASE performed better than LASE with ≤1,000 SNPs, but required considerably more computing time. Nevertheless, the differences diminished when >5,000 SNPs were selected. Optimization was accomplished conditionally on the presence of SNPs that were obligated to each chromosome. The frame location of SNPs on a chip can be either uniform (evenly spaced) or non-uniform. For the latter design, a tunable empirical Beta distribution was used to guide location distribution of frame SNPs such that both ends of each chromosome were enriched with SNPs. The SNP distribution on each chromosome was finalized through the objective function that was locally and empirically maximized. This MOLO algorithm was capable of selecting a set of approximately evenly-spaced and highly-informative SNPs, which in turn led to increased imputation accuracy compared with selection solely of evenly-spaced SNPs. Imputation accuracy increased with LD chip size, and imputation error rate was extremely low for chips with ≥3,000 SNPs. Assuming that genotyping or imputation error occurs at random, imputation error rate can be viewed as the upper limit for genomic prediction error. Our results show that about 25% of imputation error rate was propagated to genomic prediction in an Angus population. The utility of this MOLO algorithm was also demonstrated in a real application, in which a 6K SNP panel was optimized conditional on 5,260 obligatory SNP selected based on SNP-trait association in U.S. Holstein animals. With this MOLO algorithm, both imputation error rate and genomic prediction error rate were minimal. PMID:27583971
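To make the entropy term of the objective concrete, here is a minimal sketch (assumed formulation, biallelic SNPs only) of the locus-averaged Shannon entropy (LASE) that the MOLO algorithm balances against non-gap map length and spacing uniformity.

import numpy as np

def locus_entropy(p):
    # Shannon entropy (bits) of a biallelic SNP with allele frequency p
    freqs = [f for f in (p, 1.0 - p) if f > 0.0]
    return -sum(f * np.log2(f) for f in freqs)

def lase(allele_freqs):
    # Locus-averaged Shannon entropy over a candidate SNP panel
    return float(np.mean([locus_entropy(p) for p in allele_freqs]))

# SNPs with allele frequencies near 0.5 contribute the most information
print(lase([0.50, 0.30, 0.05]))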
NASA Astrophysics Data System (ADS)
Das, Debobrato
Current methods for gene delivery utilize nanocarriers such as liposomes and viral vectors that may produce in vivo toxicity, immunogenicity, or mutagenesis. Moreover, these common high-cost systems have a low efficacy of gene-vehicle transport across the cell plasma membrane followed by inadequate release and weak intracellular stability of the genetic sequence. Thus, this study aims to maximize gene transfection while minimizing cytotoxicity by utilizing supersaturated blood-plasma ions derived from simulated body fluids (SBF). With favorable electrostatic interactions to create biocompatible calcium-phosphate nanoparticles (NPs) derived from biomimetic apatite (BA), results suggest that the SBF system, though naturally sensitive to reaction conditions, can after optimization serve as a tunable and versatile platform for the delivery of various types of nucleic acids. From a systematic exploration of the effects of nucleation pH, incubation temperature, and time on transfection efficiency, the study proposes distinct characteristic trends in SBF BA-NP morphology, cellular uptake, cell viability, and gene modulation. Specifically, with aggressive nucleation and growth of BA-NPs in solution (observed via scanning electron microscopy), the ensuing microenvironment imposes a more toxic cellular interaction (indicated by alamarBlue and BCA assays), limiting particle uptake (fluorescence experiments) and subsequent gene knockdown (quantitative loss-of-function assays). Controlled precipitation of BA-NPs functions to increase particle accessibility to surrounding cells, and subsequently enhances uptake and transfection efficiency. By closely examining such trends, an optimal fabrication condition of pH 6.5 and 37 °C can be identified, where particle growth is more controlled and less chaotic, providing favorable cellular interactions that increase cell uptake and consequently maximize gene transfection without compromising cellular viability.
Adeno-associated virus rep protein synthesis during productive infection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Redemann, B.E.; Mendelson, E.; Carter, B.J.
1989-02-01
Adeno-associated virus (AAV) Rep proteins mediate viral DNA replication and can regulate expression from AAV genes. The authors studied the kinetics of synthesis of the four Rep proteins, Rep78, Rep68, Rep52, and Rep40, during infection of human 293 or KB cells with AAV and helper adenovirus by in vivo labeling with [35S]methionine, immunoprecipitation, and immunoblotting analyses. Rep78 and Rep52 were readily detected concomitantly with detection of viral monomer duplex DNA replicating about 10 to 12 h after infection, and Rep68 and Rep40 were detected 2 h later. Rep78 and Rep52 were more abundant than Rep68 and Rep40 owing to a higher synthesis rate throughout the infectious cycle. In some experiments, very low levels of Rep78 could be detected as early as 4 h after infection. The synthesis rates of Rep proteins were maximal between 14 and 24 h and then decreased later after infection. Isotopic pulse-chase experiments showed that each of the Rep proteins was synthesized independently and was stable for at least 15 h. A slower-migrating, modified form of Rep78 was identified late after infection. AAV capsid protein synthesis was detected at 10 to 12 h after infection and also exhibited synthesis kinetics similar to those of the Rep proteins. AAV DNA replication showed at least two clearly defined stages. Bulk duplex replicating DNA accumulation began around 10 to 12 h and reached a maximum level at about 20 h when Rep and capsid protein synthesis was maximal. Progeny single-stranded DNA accumulation began about 12 to 13 h, but most of this DNA accumulated after 24 h when Rep and capsid protein synthesis had decreased.
Zhang, Hanze; Huang, Yangxin; Wang, Wei; Chen, Henian; Langland-Orban, Barbara
2017-01-01
In longitudinal AIDS studies, it is of interest to investigate the relationship between HIV viral load and CD4 cell counts, as well as the complicated time effect. Most of common models to analyze such complex longitudinal data are based on mean-regression, which fails to provide efficient estimates due to outliers and/or heavy tails. Quantile regression-based partially linear mixed-effects models, a special case of semiparametric models enjoying benefits of both parametric and nonparametric models, have the flexibility to monitor the viral dynamics nonparametrically and detect the varying CD4 effects parametrically at different quantiles of viral load. Meanwhile, it is critical to consider various data features of repeated measurements, including left-censoring due to a limit of detection, covariate measurement error, and asymmetric distribution. In this research, we first establish a Bayesian joint models that accounts for all these data features simultaneously in the framework of quantile regression-based partially linear mixed-effects models. The proposed models are applied to analyze the Multicenter AIDS Cohort Study (MACS) data. Simulation studies are also conducted to assess the performance of the proposed methods under different scenarios.
Zhou, Baozhu; Li, Maoxing; Cao, Xinyuan; Zhang, Quanlong; Liu, Yantong; Ma, Qiang; Qiu, Yan; Luan, Fei; Wang, Xianmin
2016-04-01
Exposure to hypobaric hypoxia causes oxidative stress, neuronal degeneration and apoptosis that leads to memory impairment. Though oxidative stress contributes to neuronal degeneration and apoptosis in hypobaric hypoxia, the ability for phenylethanoid glycosides of Pedicularis muscicola Maxim (PhGs) to reverse high altitude memory impairment has not been studied. Rats were supplemented with PhGs orally for a week. After the fourth day of drug administration, rats were exposed to a 7500 m altitude simulation in a specially designed animal decompression chamber for 3 days. Spatial memory was assessed by the 8-arm radial maze test before and after exposure to hypobaric hypoxia. Histological assessment of neuronal degeneration was performed by hematoxylin-eosin (HE) staining. Changes in oxidative stress markers and changes in the expression of the apoptotic marker, caspase-3, were assessed in the hippocampus. Our results demonstrated that after exposure to hypobaric hypoxia, PhGs ameliorated high altitude memory impairment, as shown by the decreased values obtained for reference memory error (RME), working memory error (WME), and total error (TE). Meanwhile, administration of PhGs decreased hippocampal reactive oxygen species levels and consequent lipid peroxidation by elevating reduced glutathione levels and enhancing the free radical scavenging enzyme system. There was also a decrease in the number of pyknotic neurons and a reduction in caspase-3 expression in the hippocampus. These findings suggest that PhGs may be used therapeutically to ameliorate high altitude memory impairment. Copyright © 2016 Elsevier Inc. All rights reserved.
Why a simulation system doesn't match the plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sowell, R.
1998-03-01
Process simulations, or mathematical models, are widely used by plant engineers and planners to obtain a better understanding of a particular process. These simulations are used to answer questions such as how can feed rate be increased, how can yields be improved, how can energy consumption be decreased, or how should the available independent variables be set to maximize profit? Although current process simulations are greatly improved over those of the '70s and '80s, there are many reasons why a process simulation doesn't match the plant. Understanding these reasons can assist in using simulations to maximum advantage. The reasons simulations do not match the plant may be placed in three main categories: simulation effects or inherent error, sampling and analysis effects or measurement error, and misapplication effects or set-up error.
Deterministic error correction for nonlocal spatial-polarization hyperentanglement
Li, Tao; Wang, Guan-Yu; Deng, Fu-Guo; Long, Gui-Lu
2016-01-01
Hyperentanglement is an effective quantum source for quantum communication networks due to its high capacity, low loss rate, and its unusual capacity for the complete teleportation of a quantum particle. Here we present a deterministic error-correction scheme for nonlocal spatial-polarization hyperentangled photon pairs over collective-noise channels. In our scheme, the spatial-polarization hyperentanglement is first encoded into a spatially defined time-bin entanglement with identical polarization before it is transmitted over collective-noise channels, which leads to the rejection of errors in the spatial entanglement during transmission. The polarization noise affecting the polarization entanglement can be corrected with a proper one-step decoding procedure. The two parties in quantum communication can, in principle, obtain a nonlocal maximally entangled spatial-polarization hyperentanglement in a deterministic way, which makes our protocol more convenient than others for long-distance quantum communication. PMID:26861681
Deterministic error correction for nonlocal spatial-polarization hyperentanglement.
Li, Tao; Wang, Guan-Yu; Deng, Fu-Guo; Long, Gui-Lu
2016-02-10
Hyperentanglement is an effective quantum source for quantum communication networks due to its high capacity, low loss rate, and its unusual capacity for the complete teleportation of a quantum particle. Here we present a deterministic error-correction scheme for nonlocal spatial-polarization hyperentangled photon pairs over collective-noise channels. In our scheme, the spatial-polarization hyperentanglement is first encoded into a spatially defined time-bin entanglement with identical polarization before it is transmitted over collective-noise channels, which leads to the rejection of errors in the spatial entanglement during transmission. The polarization noise affecting the polarization entanglement can be corrected with a proper one-step decoding procedure. The two parties in quantum communication can, in principle, obtain a nonlocal maximally entangled spatial-polarization hyperentanglement in a deterministic way, which makes our protocol more convenient than others for long-distance quantum communication.
Goulet, Eric D B; Baker, Lindsay B
2017-12-01
The B-722 Laqua Twin is a low cost, portable, and battery operated sodium analyzer, which can be used for the assessment of sweat sodium concentration. The Laqua Twin is reliable and provides a degree of accuracy similar to more expensive analyzers; however, its interunit measurement error remains unknown. The purpose of this study was to compare the sodium concentration values of 70 sweat samples measured using three different Laqua Twin units. Mean absolute errors, random errors and constant errors among the different Laqua Twins ranged respectively from 1.7 mmol/L to 3.5 mmol/L, 2.5 mmol/L to 3.7 mmol/L and -0.6 mmol/L to 3.9 mmol/L. Proportional errors among Laqua Twins were all < 2%. Based on a within-subject biological variability in sweat sodium concentration of ± 12%, the maximal allowable imprecision among instruments was considered to be ≤ 6%. In that respect, the within (2.9%), between (4.5%), and total (5.4%) measurement error coefficients of variation were all < 6%. For a given sweat sodium concentration value, the largest observed differences in mean, lower bound, and upper bound error of measurement among instruments were, respectively, 4.7 mmol/L, 2.3 mmol/L, and 7.0 mmol/L. In conclusion, our findings show that the interunit measurement error of the B-722 Laqua Twin is low and methodologically acceptable.
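A minimal sketch (assumed data layout, not the paper's exact variance decomposition) of how a between-unit coefficient of variation can be checked against the 6% allowable-imprecision bound:

import numpy as np

def between_unit_cv_percent(readings):
    # readings: (n_samples, n_units) sweat sodium concentrations in mmol/L,
    # i.e. the same samples measured on each Laqua Twin unit
    readings = np.asarray(readings, float)
    per_sample_cv = readings.std(axis=1, ddof=1) / readings.mean(axis=1)
    return 100.0 * per_sample_cv.mean()

# Hypothetical readings for three samples measured on three units
samples = [[45.0, 46.5, 44.2], [60.1, 61.0, 59.4], [30.2, 31.1, 29.8]]
cv = between_unit_cv_percent(samples)
print(cv, cv <= 6.0)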
Yang, Yang; DeGruttola, Victor
2016-01-01
Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients. PMID:22740584
Yang, Yang; DeGruttola, Victor
2012-06-22
Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients.
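A minimal sketch of the resampling building block described above (using ordinary, non-robust moment estimates for clarity; the paper's proposal replaces these with robust estimators): data are centered by group means and whitened by group sample covariances before the standardized residuals are pooled for resampling.

import numpy as np

def standardized_residuals(groups):
    # groups: list of (n_g, p) data matrices, one per group
    pooled = []
    for x in groups:
        x = np.asarray(x, float)
        centered = x - x.mean(axis=0)
        cov = np.cov(centered, rowvar=False)
        vals, vecs = np.linalg.eigh(cov)
        whitener = vecs @ np.diag(vals ** -0.5) @ vecs.T   # inverse matrix square root
        pooled.append(centered @ whitener)
    return np.vstack(pooled)

rng = np.random.default_rng(1)
g1 = rng.normal(size=(40, 3))
g2 = 2.0 * rng.normal(size=(30, 3))            # group with a different covariance scale
print(standardized_residuals([g1, g2]).shape)  # (70, 3)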
Portilho, Débora M.; Fernandez, Juliette; Ringeard, Mathieu; Machado, Anthony K.; Boulay, Aude; Mayer, Martha; Müller-Trutwin, Michaela; Beignon, Anne-Sophie; Kirchhoff, Frank; Nisole, Sébastien; Arhel, Nathalie J.
2015-01-01
Summary: During retroviral infection, viral capsids are subject to restriction by the cellular factor TRIM5α. Here, we show that dendritic cells (DCs) derived from human and non-human primate species lack efficient TRIM5α-mediated retroviral restriction. In DCs, endogenous TRIM5α accumulates in nuclear bodies (NB) that partly co-localize with Cajal bodies in a SUMOylation-dependent manner. Nuclear sequestration of TRIM5α allowed potent induction of type I interferon (IFN) responses during infection, mediated by sensing of reverse transcribed DNA by cGAS. Overexpression of TRIM5α or treatment with the SUMOylation inhibitor ginkgolic acid (GA) resulted in enforced cytoplasmic TRIM5α expression and restored efficient viral restriction but abrogated type I IFN production following infection. Our results suggest that there is an evolutionary trade-off specific to DCs in which restriction is minimized to maximize sensing. TRIM5α regulation via SUMOylation-dependent nuclear sequestration adds to our understanding of how restriction factors are regulated. PMID:26748714
Morichi, Shinichiro; Yamanaka, Gaku; Ishida, Yu; Oana, Shingo; Kashiwagi, Yasuyo; Kawashima, Hisashi
2014-11-01
We investigated changes in the brain-derived neurotrophic factor (BDNF) and interleukin (IL)-6 levels in pediatric patients with central nervous system (CNS) infections, particularly viral infection-induced encephalopathy. Over a 5-year study period, 24 children hospitalized with encephalopathy were grouped based on their acute encephalopathy type (the excitotoxicity, cytokine storm, and metabolic error types). Children without CNS infections served as controls. In serum and cerebrospinal fluid (CSF) samples, BDNF and IL-6 levels were increased in all encephalopathy groups, and significant increases were noted in the influenza-associated and cytokine storm encephalopathy groups. Children with sequelae showed higher BDNF and IL-6 levels than those without sequelae. In pediatric patients, changes in serum and CSF BDNF and IL-6 levels may serve as a prognostic index of CNS infections, particularly for the diagnosis of encephalopathy and differentiation of encephalopathy types.
Quantification of HCV RNA in Clinical Specimens by Branched DNA (bDNA) Technology.
Wilber, J C; Urdea, M S
1999-01-01
The diagnosis and monitoring of hepatitis C virus (HCV) infection have been aided by the development of HCV RNA quantification assays. A direct measure of viral load, HCV RNA quantification has the advantage of providing information on viral kinetics and offers unique insight into the disease process. Branched DNA (bDNA) signal amplification technology provides a novel approach for the direct quantification of HCV RNA in patient specimens. The bDNA assay measures HCV RNA at physiological levels by boosting the reporter signal, rather than by replicating target sequences as the means of detection, and thus avoids the errors inherent in the extraction and amplification of target sequences. Inherently quantitative and nonradioactive, the bDNA assay is amenable to routine use in a clinical research setting, and has been used by several groups to explore the natural history, pathogenesis, and treatment of HCV infection.
Mapping membrane activity in undiscovered peptide sequence space using machine learning
Fulan, Benjamin M.; Wong, Gerard C. L.
2016-01-01
There are some ∼1,100 known antimicrobial peptides (AMPs), which permeabilize microbial membranes but have diverse sequences. Here, we develop a support vector machine (SVM)-based classifier to investigate ⍺-helical AMPs and the interrelated nature of their functional commonality and sequence homology. SVM is used to search the undiscovered peptide sequence space and identify Pareto-optimal candidates that simultaneously maximize the distance σ from the SVM hyperplane (thus maximize its “antimicrobialness”) and its ⍺-helicity, but minimize mutational distance to known AMPs. By calibrating SVM machine learning results with killing assays and small-angle X-ray scattering (SAXS), we find that the SVM metric σ correlates not with a peptide’s minimum inhibitory concentration (MIC), but rather its ability to generate negative Gaussian membrane curvature. This surprising result provides a topological basis for membrane activity common to AMPs. Moreover, we highlight an important distinction between the maximal recognizability of a sequence to a trained AMP classifier (its ability to generate membrane curvature) and its maximal antimicrobial efficacy. As mutational distances are increased from known AMPs, we find AMP-like sequences that are increasingly difficult for nature to discover via simple mutation. Using the sequence map as a discovery tool, we find an unexpectedly diverse taxonomy of sequences that are just as membrane-active as known AMPs, but with a broad range of primary functions distinct from AMP functions, including endogenous neuropeptides, viral fusion proteins, topogenic peptides, and amyloids. The SVM classifier is useful as a general detector of membrane activity in peptide sequences. PMID:27849600
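A minimal sketch of the σ metric (hypothetical feature vectors and labels, not the authors' descriptor set or training data, and assuming scikit-learn is available): a linear SVM is trained and σ is taken as the signed distance of a candidate peptide's feature vector from the separating hyperplane.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))                    # placeholder physicochemical features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # placeholder AMP / non-AMP labels

clf = SVC(kernel="linear").fit(X, y)
w = clf.coef_.ravel()
# Signed distance from the hyperplane for one candidate sequence's feature vector
sigma = float(clf.decision_function(X[:1])[0] / np.linalg.norm(w))
print(sigma)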
ERIC Educational Resources Information Center
Green, Samuel B.; Thompson, Marilyn S.; Poirier, Jennifer
1999-01-01
The use of Lagrange multiplier (LM) tests in specification searches and the efforts that involve the addition of extraneous parameters to models are discussed. Presented are a rationale and strategy for conducting specification searches in two stages that involve adding parameters to LM tests to maximize fit and then deleting parameters not needed…
Robust Rate Maximization for Heterogeneous Wireless Networks under Channel Uncertainties
Xu, Yongjun; Hu, Yuan; Li, Guoquan
2018-01-01
Heterogeneous wireless networks are a promising technology for next-generation wireless communication networks and have been shown to efficiently reduce the blind areas of mobile communication and improve network coverage compared with traditional wireless communication networks. In this paper, a robust power allocation problem for a two-tier heterogeneous wireless network is formulated based on orthogonal frequency-division multiplexing technology. Under the consideration of imperfect channel state information (CSI), the robust sum-rate maximization problem is built while avoiding severe cross-tier interference to the macrocell user and maintaining the minimum rate requirement of each femtocell user. To be practical, both channel estimation errors from the femtocells to the macrocell and link uncertainties of each femtocell user are simultaneously considered in terms of outage probabilities of users. The optimization problem is analyzed under no CSI feedback with some cumulative distribution function and under partial CSI with a Gaussian distribution of the channel estimation error. The robust optimization problem is converted into a convex optimization problem which is solved by using Lagrange dual theory and a subgradient algorithm. Simulation results demonstrate the effectiveness of the proposed algorithm and show the impact of channel uncertainties on the system performance. PMID:29466315
Inverse problem of HIV cell dynamics using Genetic Algorithms
NASA Astrophysics Data System (ADS)
González, J. A.; Guzmán, F. S.
2017-01-01
In order to describe the cell dynamics of T-cells in a patient infected with HIV, we use a flavour of Perelson's model. This is a non-linear system of Ordinary Differential Equations that describes the evolution of healthy, latently infected, infected T-cell concentrations and the free viral cells. Different parameters in the equations give different dynamics. Considering the concentration of these types of cells is known for a particular patient, the inverse problem consists in estimating the parameters in the model. We solve this inverse problem using a Genetic Algorithm (GA) that minimizes the error between the solutions of the model and the data from the patient. These errors depend on the parameters of the GA, like mutation rate and population, although a detailed analysis of this dependence will be described elsewhere.
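A simplified sketch of the forward model and the fitness function such a GA would minimize (a generic Perelson-style system with assumed symbols and a forward-Euler integrator, not the authors' exact equations or GA implementation):

import numpy as np

def model_rhs(state, params):
    # Healthy (T), latently infected (L), productively infected (I) T cells and free virus (V)
    T, L, I, V = state
    lam, d, beta, f, a, delta, p, c = params
    dT = lam - d * T - beta * T * V
    dL = f * beta * T * V - a * L
    dI = (1.0 - f) * beta * T * V + a * L - delta * I
    dV = p * I - c * V
    return np.array([dT, dL, dI, dV])

def simulate(params, state0, dt, steps):
    traj, state = [np.array(state0, float)], np.array(state0, float)
    for _ in range(steps):
        state = state + dt * model_rhs(state, params)   # forward Euler step
        traj.append(state.copy())
    return np.array(traj)

def fitness(params, t_obs, data, state0, dt=0.01):
    # Error the GA minimizes: squared mismatch between model output and patient data
    steps = int(round(max(t_obs) / dt))
    traj = simulate(params, state0, dt, steps)
    idx = np.round(np.asarray(t_obs) / dt).astype(int)
    return float(np.sum((traj[idx] - np.asarray(data, float)) ** 2))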
Park, Se-yeon; Yoo, Won-gyu
2013-10-01
The aim of this study was to compare muscular activation during five different normalization techniques that induced maximal isometric contraction of the latissimus dorsi. Sixteen healthy men participated in the study. Each participant performed three repetitions each of five types of isometric exertion: (1) conventional shoulder extension in the prone position, (2) caudal shoulder depression in the prone position, (3) body lifting with shoulder depression in the seated position, (4) trunk bending to the right in the lateral decubitus position, and (5) downward bar pulling in the seated position. In most participants, maximal activation of the latissimus dorsi was observed during conventional shoulder extension in the prone position; the percentage of maximal voluntary contraction was significantly greater for this exercise than for all other normalization techniques except downward bar pulling in the seated position. Although differences in electrode placement among various electromyographic studies represent a limitation, normalization techniques for the latissimus dorsi are recommended to minimize error in assessing maximal muscular activation of the latissimus dorsi through the combined use of shoulder extension in the prone position and downward pulling. Copyright © 2013 Elsevier Ltd. All rights reserved.
Concomitant Lethal Mutagenesis of Human Immunodeficiency Virus Type 1
Dapp, Michael J.; Holtz, Colleen M.; Mansky, Louis M.
2012-01-01
RNA virus population dynamics is complex, and sophisticated approaches are needed in many cases for therapeutic intervention. One such approach, termed lethal mutagenesis, is directed at targeting the virus population structure for extinction or error catastrophe. Previous studies have demonstrated the concept of this approach with human immunodeficiency virus type 1 (HIV-1) by use of chemical mutagens (i.e., 5-azacytidine) as well as by host factors with mutagenic properties (i.e., APOBEC3G). In this study, these two unrelated mutagenic agents were used concomitantly to investigate the interplay of these distinct mutagenic mechanisms. Specifically, an HIV-1 was produced from APOBEC3G (A3G)-expressing cells and used to infect permissive target cells treated with 5-azacytidine (5-AZC). Reduced viral infectivity and increased viral mutagenesis were observed with both the viral mutagen (i.e., G-to-C mutations) and the host restriction factor (i.e., G-to-A mutations); when combined, however, the two agents showed complex interactions. Intriguingly, nucleotide sequence analysis revealed that concomitant HIV-1 exposure to both 5-AZC and A3G resulted in an increase of G-to-A viral mutagenesis at the expense of G-to-C mutagenesis. A3G catalytic activity was required for the diminution in G-to-C mutagenesis. Taken together, our findings provide the first demonstration of potentiation of the mutagenic effect of a cytosine analog by A3G expression, resulting in concomitant HIV-1 lethal mutagenesis. PMID:22426127
Optimal Halbach Permanent Magnet Designs for Maximally Pulling and Pushing Nanoparticles
Sarwar, A.; Nemirovski, A.; Shapiro, B.
2011-01-01
Optimization methods are presented to design Halbach arrays to maximize the forces applied on magnetic nanoparticles at deep tissue locations. In magnetic drug targeting, where magnets are used to focus therapeutic nanoparticles to disease locations, the sharp fall off of magnetic fields and forces with distances from magnets has limited the depth of targeting. Creating stronger forces at depth by optimally designed Halbach arrays would allow treatment of a wider class of patients, e.g. patients with deeper tumors. The presented optimization methods are based on semi-definite quadratic programming, yield provably globally optimal Halbach designs in 2 and 3-dimensions, for maximal pull or push magnetic forces (stronger pull forces can collect nano-particles against blood forces in deeper vessels; push forces can be used to inject particles into precise locations, e.g. into the inner ear). These Halbach designs, here tested in simulations of Maxwell’s equations, significantly outperform benchmark magnets of the same size and strength. For example, a 3-dimensional 36 element 2000 cm3 volume optimal Halbach design yields a ×5 greater force at a 10 cm depth compared to a uniformly magnetized magnet of the same size and strength. The designed arrays should be feasible to construct, as they have a similar strength (≤ 1 Tesla), size (≤ 2000 cm3), and number of elements (≤ 36) as previously demonstrated arrays, and retain good performance for reasonable manufacturing errors (element magnetization direction errors ≤ 5°), thus yielding practical designs to improve magnetic drug targeting treatment depths. PMID:23335834
Stroke maximizing and high efficient hysteresis hybrid modeling for a rhombic piezoelectric actuator
NASA Astrophysics Data System (ADS)
Shao, Shubao; Xu, Minglong; Zhang, Shuwen; Xie, Shilin
2016-06-01
Rhombic piezoelectric actuator (RPA), which employs a rhombic mechanism to amplify the small stroke of a PZT stack, has been widely used in many micro-positioning machineries due to its remarkable properties such as high displacement resolution and compact structure. In order to achieve a large actuation range along with high accuracy, stroke maximization and compensation for the hysteresis are two concerns in the use of RPA. However, existing maximization methods based on theoretical models can hardly predict the maximum stroke of RPA accurately because of approximation errors caused by the simplifications that must be made in the analysis. Moreover, despite the high hysteresis modeling accuracy of the Preisach model, its modeling procedure is trivial and time-consuming since a large set of experimental data is required to determine the model parameters. In our research, to improve the accuracy of the theoretical model of RPA, approximation theory is employed, in which the approximation errors can be compensated by two dimensionless coefficients. To simplify the hysteresis modeling procedure, a hybrid modeling method is proposed in which the parameters of the Preisach model can be identified from only a small set of experimental data by combining the discrete Preisach model (DPM) with a particle swarm optimization (PSO) algorithm. The proposed hybrid modeling method not only models the hysteresis with considerable accuracy but also significantly simplifies the modeling procedure. Finally, the inversion of the hysteresis is introduced to compensate for the hysteresis non-linearity of RPA, and consequently a pseudo-linear system can be obtained.
Optimal Halbach Permanent Magnet Designs for Maximally Pulling and Pushing Nanoparticles.
Sarwar, A; Nemirovski, A; Shapiro, B
2012-03-01
Optimization methods are presented to design Halbach arrays to maximize the forces applied on magnetic nanoparticles at deep tissue locations. In magnetic drug targeting, where magnets are used to focus therapeutic nanoparticles to disease locations, the sharp fall off of magnetic fields and forces with distances from magnets has limited the depth of targeting. Creating stronger forces at depth by optimally designed Halbach arrays would allow treatment of a wider class of patients, e.g. patients with deeper tumors. The presented optimization methods are based on semi-definite quadratic programming, yield provably globally optimal Halbach designs in 2 and 3-dimensions, for maximal pull or push magnetic forces (stronger pull forces can collect nano-particles against blood forces in deeper vessels; push forces can be used to inject particles into precise locations, e.g. into the inner ear). These Halbach designs, here tested in simulations of Maxwell's equations, significantly outperform benchmark magnets of the same size and strength. For example, a 3-dimensional 36 element 2000 cm^3 volume optimal Halbach design yields a 5× greater force at a 10 cm depth compared to a uniformly magnetized magnet of the same size and strength. The designed arrays should be feasible to construct, as they have a similar strength (≤ 1 Tesla), size (≤ 2000 cm^3), and number of elements (≤ 36) as previously demonstrated arrays, and retain good performance for reasonable manufacturing errors (element magnetization direction errors ≤ 5°), thus yielding practical designs to improve magnetic drug targeting treatment depths.
Maximizing return on socioeconomic investment in phase II proof-of-concept trials.
Chen, Cong; Beckman, Robert A
2014-04-01
Phase II proof-of-concept (POC) trials play a key role in oncology drug development, determining which therapeutic hypotheses will undergo definitive phase III testing according to predefined Go-No Go (GNG) criteria. The number of possible POC hypotheses likely far exceeds available public or private resources. We propose a design strategy for maximizing return on socioeconomic investment in phase II trials that obtains the greatest knowledge with the minimum patient exposure. We compare efficiency using the benefit-cost ratio, defined to be the risk-adjusted number of truly active drugs correctly identified for phase III development divided by the risk-adjusted total sample size in phase II and III development, for different POC trial sizes, powering schemes, and associated GNG criteria. It is most cost-effective to conduct small POC trials and set the corresponding GNG bars high, so that more POC trials can be conducted under socioeconomic constraints. If δ is the minimum treatment effect size of clinical interest in phase II, the study design with the highest benefit-cost ratio has approximately 5% type I error rate and approximately 20% type II error rate (80% power) for detecting an effect size of approximately 1.5δ. A Go decision to phase III is made when the observed effect size is close to δ. With the phenomenal expansion of our knowledge in molecular biology leading to an unprecedented number of new oncology drug targets, conducting more small POC trials and setting high GNG bars maximize the return on socioeconomic investment in phase II POC trials. ©2014 AACR.
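A minimal sketch of the efficiency metric as described (assumed bookkeeping; the paper's exact risk adjustment may differ): truly active drugs correctly advanced to phase III, divided by the total risk-adjusted sample size across phases II and III.

def benefit_cost_ratio(n_trials, p_active, power, alpha, n_phase2, n_phase3):
    # Expected number of truly active drugs that receive a correct Go decision
    true_go = n_trials * p_active * power
    # Expected number of inactive drugs that receive a false-positive Go decision
    false_go = n_trials * (1.0 - p_active) * alpha
    total_n = n_trials * n_phase2 + (true_go + false_go) * n_phase3
    return true_go / total_n

# Many small POC trials with a high Go bar versus fewer, larger, more lenient trials
print(benefit_cost_ratio(n_trials=20, p_active=0.2, power=0.8, alpha=0.05, n_phase2=50, n_phase3=500))
print(benefit_cost_ratio(n_trials=10, p_active=0.2, power=0.9, alpha=0.20, n_phase2=100, n_phase3=500))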
Balakrishnan, Narayanaswamy; Pal, Suvra
2016-08-01
Recently, a flexible cure rate survival model has been developed by assuming the number of competing causes of the event of interest to follow the Conway-Maxwell-Poisson distribution. This model includes some of the well-known cure rate models discussed in the literature as special cases. Data obtained from cancer clinical trials are often right censored and expectation maximization algorithm can be used in this case to efficiently estimate the model parameters based on right censored data. In this paper, we consider the competing cause scenario and assuming the time-to-event to follow the Weibull distribution, we derive the necessary steps of the expectation maximization algorithm for estimating the parameters of different cure rate survival models. The standard errors of the maximum likelihood estimates are obtained by inverting the observed information matrix. The method of inference developed here is examined by means of an extensive Monte Carlo simulation study. Finally, we illustrate the proposed methodology with a real data on cancer recurrence. © The Author(s) 2013.
Optimal single-shot strategies for discrimination of quantum measurements
NASA Astrophysics Data System (ADS)
Sedlák, Michal; Ziman, Mário
2014-11-01
We study discrimination of m quantum measurements in the scenario when the unknown measurement with n outcomes can be used only once. We show that ancilla-assisted discrimination procedures provide a nontrivial advantage over simple (ancilla-free) schemes for perfect distinguishability and we prove that inevitably m ≤n . We derive necessary and sufficient conditions of perfect distinguishability of general binary measurements. We show that the optimization of the discrimination of projective qubit measurements and their mixtures with white noise is equivalent to the discrimination of specific quantum states. In particular, the optimal protocol for discrimination of projective qubit measurements with fixed failure rate (exploiting maximally entangled test state) is described. While minimum-error discrimination of two projective qubit measurements can be realized without any need of entanglement, we show that discrimination of three projective qubit measurements requires a bipartite probe state. Moreover, when the measurements are not projective, the non-maximally entangled test states can outperform the maximally entangled ones. Finally, we rephrase the unambiguous discrimination of measurements as quantum key distribution protocol.
NASA Astrophysics Data System (ADS)
Ahmadi, Mohammad H.; Amin Nabakhteh, Mohammad; Ahmadi, Mohammad-Ali; Pourfayaz, Fathollah; Bidi, Mokhtar
2017-10-01
The motivation behind this work is to explore a nanoscale irreversible Stirling refrigerator with respect to size effects and to present two novel thermo-ecological criteria. Two distinct strategies were suggested in the optimization process and the consequences of each strategy were examined independently. In the first strategy, a multi-objective optimization algorithm (MOEA) was used with the purpose of maximizing the energetic sustainability index (ESI) and the modified ecological coefficient of performance (MECOP) while minimizing the dimensionless ecological function. In the second strategy, a MOEA was used with the purpose of maximizing the ECOP and MECOP while minimizing the dimensionless ecological function. To conclude the final solution from each strategy, three proficient decision makers were utilized. Additionally, to quantify the deviation of the results gained from each decision maker, two different statistical error indexes were employed. Finally, a comparison of the results achieved from the proposed scenarios reveals that maximizing the MECOP yields the maximum values of ESI and ECOP and a minimum of the dimensionless ecological function.
Optimal design of multichannel equalizers for the structural similarity index.
Chai, Li; Sheng, Yuxia
2014-12-01
The optimization of multichannel equalizers is studied for the structural similarity (SSIM) criteria. The closed-form formula is provided for the optimal equalizer when the mean of the source is zero. The formula shows that the equalizer with maximal SSIM index is equal to the one with minimal mean square error (MSE) multiplied by a positive real number, which is shown to be equal to the inverse of the achieved SSIM index. The relation of the maximal SSIM index to the minimal MSE is also established for given blurring filters and fixed length equalizers. An algorithm is also presented to compute the suboptimal equalizer for the general sources. Various numerical examples are given to demonstrate the effectiveness of the results.
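A one-tap numerical sketch of the stated relation (zero-mean source, toy blurring filter, and a simplified SSIM expression without stabilizing constants; all assumptions, not the paper's derivation): the SSIM-optimal gain equals the MMSE (Wiener) gain divided by the SSIM index the MMSE solution achieves.

import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=100_000)                          # zero-mean source
h = np.array([1.0, 0.4])                              # toy blurring filter
y = np.convolve(x, h, mode="same") + 0.1 * rng.normal(size=x.size)

g_mmse = np.dot(y, x) / np.dot(y, y)                  # one-tap MMSE (Wiener) equalizer
x_hat = g_mmse * y
# Simplified SSIM for zero-mean signals: 2*cov(x, x_hat) / (var(x) + var(x_hat))
ssim_mmse = 2.0 * np.dot(x, x_hat) / (np.dot(x, x) + np.dot(x_hat, x_hat))
g_ssim = g_mmse / ssim_mmse                           # scaling stated in the abstract
print(g_mmse, ssim_mmse, g_ssim)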
Perceptually-Based Adaptive JPEG Coding
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Rosenholtz, Ruth; Null, Cynthia H. (Technical Monitor)
1996-01-01
An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatial adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8 x 8 block, which multiplies the quantization matrix, yielding the new matrix for the block. MPEG 1 and 2 use much the same scheme, except there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers which yield maximally flat perceptual error over the blocks of the image. We investigate the bitrate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.
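A minimal sketch of the per-block multiplier idea (using plain RMS quantization error as a stand-in for the contrast-sensitivity-, adaptation-, and masking-adjusted perceptual error the paper actually uses): each 8x8 block receives the multiplier whose resulting error is closest to a common target, flattening error across blocks.

import numpy as np

def pick_multipliers(dct_blocks, qmatrix, target_error, candidates=np.linspace(0.5, 4.0, 36)):
    # dct_blocks: (n, 8, 8) DCT coefficients; qmatrix: (8, 8) base quantization matrix
    chosen = []
    for block in dct_blocks:
        errors = []
        for m in candidates:
            q = m * qmatrix
            recon = np.round(block / q) * q
            errors.append(np.sqrt(np.mean((block - recon) ** 2)))   # stand-in for perceptual error
        errors = np.asarray(errors)
        chosen.append(candidates[np.argmin(np.abs(errors - target_error))])
    return np.array(chosen)

rng = np.random.default_rng(4)
blocks = rng.normal(scale=50.0, size=(4, 8, 8))       # hypothetical DCT blocks
qmat = np.full((8, 8), 16.0)                          # hypothetical flat base matrix
print(pick_multipliers(blocks, qmat, target_error=5.0))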
Improved Snow Mapping Accuracy with Revised MODIS Snow Algorithm
NASA Technical Reports Server (NTRS)
Riggs, George; Hall, Dorothy K.
2012-01-01
The MODIS snow cover products have been used in over 225 published studies. From those reports, and our ongoing analysis, we have learned about the accuracy and errors in the snow products. Revisions have been made in the algorithms to improve the accuracy of snow cover detection in Collection 6 (C6), the next processing/reprocessing of the MODIS data archive planned to start in September 2012. Our objective in the C6 revision of the MODIS snow-cover algorithms and products is to maximize the capability to detect snow cover while minimizing snow detection errors of commission and omission. While the basic snow detection algorithm will not change, new screens will be applied to alleviate snow detection commission and omission errors, and only the fractional snow cover (FSC) will be output (the binary snow cover area (SCA) map will no longer be included).
Influenza viruses production: Evaluation of a novel avian cell line DuckCelt®-T17.
Petiot, Emma; Proust, Anaïs; Traversier, Aurélien; Durous, Laurent; Dappozze, Frédéric; Gras, Marianne; Guillard, Chantal; Balloul, Jean-Marc; Rosa-Calatrava, Manuel
2018-05-24
The influenza vaccine manufacturing industry is looking for production cell lines that are easily scalable, highly permissive to multiple viruses, and more effective in terms of viral productivity. One critical characteristic of such cell lines is their ability to grow in suspension, in serum-free conditions and at high cell densities. Influenza virus, which causes severe epidemics in both humans and animals, is an important threat to world healthcare. The repeated appearance of influenza pandemic outbreaks in the last 20 years explains why the manufacturing sector is still looking for more effective production processes to replace or supplement the embryonated egg-based process. Cell-based production, with a focus on avian cell lines, is one of the promising solutions. Three avian cell lines, namely duck EB66® cells (Valneva), duck AGE.CR® cells (Probiogen) and quail QOR/2E11 cells (Baxter), are now competing with traditional mammalian cell platforms (Vero and MDCK cells) used for influenza vaccine production and are currently at an advanced stage of commercial development for the manufacture of influenza vaccines. The DuckCelt®-T17 cell line presented in this work is a novel avian cell line developed by Transgene. This cell line was generated from primary duck embryo cells with constitutive expression of the duck telomerase reverse transcriptase (dTERT). The DuckCelt®-T17 cells were able to grow in batch suspension cultures under serum-free conditions up to 6.5×10^6 cells/ml and were easily scaled from 10 ml up to a 3 L bioreactor. In the present study, the DuckCelt®-T17 cell line was tested for its ability to produce various human, avian and porcine influenza strains. Most of the viral strains were produced at significant infectious titers (>5.8 log TCID50/ml) after optimization of the infection conditions. Human H1N1 and H3N2 strains, as well as all the avian strains tested (H5N2, H7N1, H3N8, H11N9, H12N5), were produced most efficiently, with the highest titer of 9.05 log TCID50/ml reached for the A/Panama/2007/99 influenza H3N2 strain. Porcine strains were also rescued efficiently, with titers from 4 to 7 log TCID50/ml depending on the subtype. Interestingly, viral kinetics showed that maximal titers were reached at 24 h post-infection for most of the strains, allowing an early time of harvest (TOH). The B strains showed specific production kinetics, with a delay of 24 h before reaching the maximal viral particle release. Process optimization for the 2009 pandemic human H1N1 strain identified the best operating conditions for production (MOI, trypsin concentration, and cell density at infection), improving the production level by 2 logs. Our results suggest that the DuckCelt®-T17 cell line is a very promising platform for industrial production of influenza viruses, and particularly for avian viral strains. Copyright © 2017 Elsevier Ltd. All rights reserved.
Dunham, Richard M; Gordon, Shari N; Vaccari, Monica; Piatak, Michael; Huang, Yong; Deeks, Steven G; Lifson, Jeffrey; Franchini, Genoveffa; McCune, Joseph M
2013-02-01
Even in the setting of maximally suppressive antiretroviral therapy (ART), HIV persists indefinitely. Several mechanisms might contribute to this persistence, including chronic inflammation and immune dysfunction. In this study, we have explored a preclinical model for the evaluation of potential interventions that might serve to eradicate or to minimize the level of persistent virus. Given data that metabolic products of the inducible enzyme indoleamine 2,3-dioxygenase (IDO) might foster inflammation and viral persistence, chronically simian immunodeficiency virus (SIV)-infected, ART-treated rhesus macaques were treated with the IDO inhibitor 1-methyl tryptophan (1mT). Orally administered 1mT achieved targeted plasma levels, but did not impact tryptophan metabolism or decrease viral RNA or DNA in plasma or in intestinal tissues beyond levels achieved by ART alone. Animals treated with 1mT showed no difference in the levels of T cell activation or differentiation, or in the kinetics or magnitude of viral rebound following cessation of ART. Notwithstanding these negative results, our observations suggest that the chronically SIV-infected rhesus macaque on suppressive ART can serve as a tractable model in which to test and to prioritize the selection of other potential interventions designed to eradicate HIV in vivo. In addition, this model might be used to optimize the route and dose by which such interventions are administered and the methods by which their effects are monitored.
Exploration of multiphoton entangled states by using weak nonlinearities
He, Ying-Qiu; Ding, Dong; Yan, Feng-Li; Gao, Ting
2016-01-01
We propose a scheme for exploring multiphoton entangled states based on linear optics and weak nonlinearities. Compared with previous schemes, the present method is more feasible because the interaction with the Kerr nonlinearities involves only small phase shifts rather than a series of related functions of the photon numbers. In the absence of decoherence we analyze the error probabilities induced by homodyne measurement and show that the maximal error probability can be made small enough even when the number of photons is large. This implies that the present scheme is quite tractable and that it is possible to produce entangled states involving a large number of photons. PMID:26751044
There is more to accommodation of the eye than simply minimizing retinal blur
Marín-Franch, I.; Del Águila-Carrasco, A. J.; Bernal-Molina, P.; Esteve-Taboada, J. J.; López-Gil, N.; Montés-Micó, R.; Kruger, P. B.
2017-01-01
Eyes of children and young adults change their optical power to focus nearby objects at the retina. But does accommodation function by trial and error to minimize blur and maximize contrast as is generally accepted? Three experiments in monocular and monochromatic vision were performed under two conditions while aberrations were being corrected. In the first condition, feedback was available to the eye from both optical vergence and optical blur. In the second, feedback was only available from target blur. Accommodation was less precise for the second condition, suggesting that it is more than a trial-and-error function. Optical vergence itself seems to be an important cue for accommodation. PMID:29082097
Local alignment of two-base encoded DNA sequence
Homer, Nils; Merriman, Barry; Nelson, Stanley F
2009-01-01
Background DNA sequence comparison is based on optimal local alignment of two sequences using a similarity score. However, some new DNA sequencing technologies do not directly measure the base sequence, but rather an encoded form, such as the two-base encoding considered here. In order to compare such data to a reference sequence, the data must be decoded into sequence. The decoding is deterministic, but the possibility of measurement errors requires searching among all possible error modes and resulting alignments to achieve an optimal balance of fewer errors versus greater sequence similarity. Results We present an extension of the standard dynamic programming method for local alignment, which simultaneously decodes the data and performs the alignment, maximizing a similarity score based on a weighted combination of errors and edits, and allowing an affine gap penalty. We also present simulations that demonstrate the performance characteristics of our two base encoded alignment method and contrast those with standard DNA sequence alignment under the same conditions. Conclusion The new local alignment algorithm for two-base encoded data has substantial power to properly detect and correct measurement errors while identifying underlying sequence variants, and facilitating genome re-sequencing efforts based on this form of sequence data. PMID:19508732
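The core of the method is a dynamic-programming local alignment. The sketch below is a plain base-space Smith-Waterman with an affine gap penalty (Gotoh recurrences) to make the recurrence concrete; the simultaneous two-base decoding described above is the paper's extension and is not reproduced here, and the scoring values are illustrative assumptions.

```python
# Minimal Smith-Waterman local alignment with an affine gap penalty (Gotoh
# recurrences), sketched in ordinary base space. The published method extends
# a DP of this kind so that two-base (color-space) decoding and alignment are
# performed simultaneously; that extension is not reproduced here.

def smith_waterman_affine(a, b, match=2, mismatch=-3, gap_open=-5, gap_extend=-1):
    n, m = len(a), len(b)
    NEG = float("-inf")
    # H: best score ending in a match/mismatch; E/F: best score ending in a gap.
    H = [[0.0] * (m + 1) for _ in range(n + 1)]
    E = [[NEG] * (m + 1) for _ in range(n + 1)]
    F = [[NEG] * (m + 1) for _ in range(n + 1)]
    best = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            E[i][j] = max(E[i][j - 1] + gap_extend, H[i][j - 1] + gap_open)
            F[i][j] = max(F[i - 1][j] + gap_extend, H[i - 1][j] + gap_open)
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0.0, H[i - 1][j - 1] + s, E[i][j], F[i][j])
            best = max(best, H[i][j])
    return best

print(smith_waterman_affine("ACACACTA", "AGCACACA"))
```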
A negentropy minimization approach to adaptive equalization for digital communication systems.
Choi, Sooyong; Lee, Te-Won
2004-07-01
In this paper, we introduce and investigate a new adaptive equalization method based on minimizing the approximate negentropy of the estimation error for a finite-length equalizer. We consider an approximate negentropy, computed using nonpolynomial expansions of the estimation error, as a new performance criterion to improve on a linear equalizer based on the minimum mean squared error (MMSE) criterion. Negentropy includes higher-order statistical information, and its minimization provides improved convergence, performance and accuracy compared with traditional methods such as MMSE in terms of bit error rate (BER). The proposed negentropy minimization (NEGMIN) equalizer has two kinds of solutions, the MMSE solution and another solution, depending on the ratio of the normalization parameters. The NEGMIN equalizer has the best BER performance when the ratio of the normalization parameters is properly adjusted to maximize the output power (variance) of the NEGMIN equalizer. Simulation experiments show that the BER performance of the NEGMIN equalizer with the non-MMSE solution has similar characteristics to the adaptive minimum bit error rate (AMBER) equalizer. The main advantage of the proposed equalizer is that it needs significantly fewer training symbols than the AMBER equalizer. Furthermore, the proposed equalizer is more robust to nonlinear distortions than the MMSE equalizer.
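For readers unfamiliar with the MMSE baseline being improved upon, the following is a minimal LMS adaptive linear equalizer trained on the mean-squared-error criterion. It is not the NEGMIN algorithm; the channel, step size and symbol alphabet are illustrative assumptions only.

```python
import numpy as np

# Minimal LMS adaptive linear equalizer trained on the mean-squared-error
# criterion, i.e. an MMSE-style baseline of the kind the negentropy criterion
# is compared against. Channel, step size and alphabet are illustrative only.

rng = np.random.default_rng(0)
n, taps, mu = 5000, 11, 0.01
symbols = rng.choice([-1.0, 1.0], size=n)              # BPSK training symbols
channel = np.array([0.3, 0.9, 0.3])                    # toy dispersive channel
received = np.convolve(symbols, channel, mode="full")[:n]
received += 0.05 * rng.standard_normal(n)              # additive noise

w = np.zeros(taps)
delay = (taps + len(channel)) // 2                     # decision delay
errors = 0
for k in range(taps, n):
    x = received[k - taps:k][::-1]                     # regressor (most recent first)
    y = w @ x                                          # equalizer output
    d = symbols[k - delay]                             # desired (delayed) symbol
    e = d - y                                          # estimation error
    w += mu * e * x                                    # LMS update (stochastic MSE descent)
    if k > n // 2:                                     # count decision errors after convergence
        errors += (np.sign(y) != d)

print("post-convergence symbol errors:", int(errors))
```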
The Nature of the Nodes, Weights and Degree of Precision in Gaussian Quadrature Rules
ERIC Educational Resources Information Center
Prentice, J. S. C.
2011-01-01
We present a comprehensive proof of the theorem that relates the weights and nodes of a Gaussian quadrature rule to its degree of precision. This level of detail is often absent in modern texts on numerical analysis. We show that the degree of precision is maximal, and that the approximation error in Gaussian quadrature is minimal, in a…
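A quick numerical illustration of the theorem, assuming NumPy's Gauss-Legendre routine: an n-point rule reproduces the moments of x^k on [-1, 1] to machine precision for k up to 2n-1 and first fails at k = 2n.

```python
import numpy as np

# Numerical illustration: an n-point Gauss-Legendre rule has degree of
# precision 2n-1, i.e. it integrates x^k over [-1, 1] exactly for k <= 2n-1
# and generally fails at k = 2n.

n = 4
nodes, weights = np.polynomial.legendre.leggauss(n)

def exact_moment(k):
    # integral of x^k over [-1, 1]
    return 0.0 if k % 2 else 2.0 / (k + 1)

for k in range(2 * n + 1):
    approx = np.sum(weights * nodes ** k)
    print(f"degree {k:2d}: quadrature error = {abs(approx - exact_moment(k)):.2e}")
```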
Altered motor control patterns in whiplash and chronic neck pain
Woodhouse, Astrid; Vasseljen, Ottar
2008-01-01
Background Persistent whiplash associated disorders (WAD) have been associated with alterations in kinesthetic sense and motor control. The evidence is however inconclusive, particularly for differences between WAD patients and patients with chronic non-traumatic neck pain. The aim of this study was to investigate motor control deficits in WAD compared to chronic non-traumatic neck pain and healthy controls in relation to cervical range of motion (ROM), conjunct motion, joint position error and ROM-variability. Methods Participants (n = 173) were recruited to three groups: 59 patients with persistent WAD, 57 patients with chronic non-traumatic neck pain and 57 asymptomatic volunteers. A 3D motion tracking system (Fastrak) was used to record maximal range of motion in the three cardinal planes of the cervical spine (sagittal, frontal and horizontal), and concurrent motion in the two associated cardinal planes relative to each primary plane were used to express conjunct motion. Joint position error was registered as the difference in head positions before and after cervical rotations. Results Reduced conjunct motion was found for WAD and chronic neck pain patients compared to asymptomatic subjects. This was most evident during cervical rotation. Reduced conjunct motion was not explained by current pain or by range of motion in the primary plane. Total conjunct motion during primary rotation was 13.9° (95% CI; 12.2–15.6) for the WAD group, 17.9° (95% CI; 16.1–19.6) for the chronic neck pain group and 25.9° (95% CI; 23.7–28.1) for the asymptomatic group. As expected, maximal cervical range of motion was significantly reduced among the WAD patients compared to both control groups. No group differences were found in maximal ROM-variability or joint position error. Conclusion Altered movement patterns in the cervical spine were found for both pain groups, indicating changes in motor control strategies. The changes were not related to a history of neck trauma, nor to current pain, but more likely due to long-lasting pain. No group differences were found for kinaesthetic sense. PMID:18570647
Error baseline rates of five sample preparation methods used to characterize RNA virus populations
Kugelman, Jeffrey R.; Wiley, Michael R.; Nagle, Elyse R.; Reyes, Daniel; Pfeffer, Brad P.; Kuhn, Jens H.; Sanchez-Lockhart, Mariano; Palacios, Gustavo F.
2017-01-01
Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic “no amplification” method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a “targeted” amplification method, sequence-independent single-primer amplification (SISPA) as a “random” amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced “no amplification” method, and Illumina TruSeq RNA Access as a “targeted” enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10⁻⁵) of all compared methods. PMID:28182717
NASA Astrophysics Data System (ADS)
Chupin, Marie; Hasboun, Dominique; Mukuna-Bantumbakulu, Romain; Bardinet, Eric; Baillet, Sylvain; Kinkingnéhun, Serge; Lemieux, Louis; Dubois, Bruno; Garnero, Line
2006-03-01
The hippocampus (Hc) and the amygdala (Am) are two cerebral structures that play a central role in main cognitive processes. Their segmentation allows atrophy in specific neurological illnesses to be quantified, but is made difficult by the complexity of the structures. In this work, a new algorithm for the simultaneous segmentation of Hc and Am based on competitive homotopic region deformations is presented. The deformations are constrained by relational priors derived from anatomical knowledge, namely probabilities for each structure around automatically retrieved landmarks at the border of the objects. The approach is designed to perform well on data from diseased subjects. The segmentation is initialized by extracting a bounding box and positioning two seeds; total execution time for both sides is between 10 and 15 minutes including initialization for the two structures. We present the results of validation based on comparison with manual segmentation, using volume error, spatial overlap and border distance measures. For 8 young healthy subjects the mean volume error was 7% for Hc and 11% for Am, the overlap: 84% for Hc and 83% for Am, the maximal distance: 4.2 mm for Hc and 3.1 mm for Am; for 4 Alzheimer's disease patients the mean volume error was 9% for Hc and Am, the overlap: 83% for Hc and 78% for Am, the maximal distance: 6 mm for Hc and 4.4 mm for Am. We conclude that the performance of the proposed method compares favourably with that of other published approaches in terms of accuracy and has a short execution time.
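As a minimal sketch of the validation measures quoted above, the following computes a relative volume error and a Dice spatial overlap between an automatic and a manual binary mask; the masks and the perturbation level are purely illustrative, not data from the study.

```python
import numpy as np

# Sketch of two of the validation measures used above: relative volume error
# and spatial overlap (Dice coefficient) between an automatic and a manual
# binary segmentation. The masks below are illustrative only.

def volume_error(auto_mask, manual_mask):
    va, vm = auto_mask.sum(), manual_mask.sum()
    return abs(va - vm) / vm * 100.0          # percent volume error

def dice_overlap(auto_mask, manual_mask):
    inter = np.logical_and(auto_mask, manual_mask).sum()
    return 200.0 * inter / (auto_mask.sum() + manual_mask.sum())   # percent overlap

rng = np.random.default_rng(1)
manual = rng.random((64, 64, 64)) > 0.7
auto = manual.copy()
auto[rng.random(auto.shape) < 0.02] ^= True   # perturb ~2% of voxels

print(f"volume error: {volume_error(auto, manual):.1f}%")
print(f"Dice overlap: {dice_overlap(auto, manual):.1f}%")
```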
Monjo, Florian; Forestier, Nicolas
2018-04-01
This study was designed to explore the effects of intrafusal thixotropy, a property affecting muscle spindle sensitivity, on the sense of force. For this purpose, psychophysical measurements of force perception were performed using an isometric force matching paradigm of elbow flexors consisting of matching different force magnitudes (5, 10 and 20% of subjects' maximal voluntary force). We investigated participants' capacity to match these forces after their indicator arm had undergone voluntary isometric conditioning contractions known to alter spindle thixotropy, i.e., contractions performed at long ('hold long') or short muscle lengths ('hold short'). In parallel, their reference arm was conditioned at the intermediate muscle length ('hold-test') at which the matchings were performed. The thixotropy hypothesis predicts that estimation errors should only be observed at low force levels (up to 10% of the maximal voluntary force) with overestimation of the forces produced following 'hold short' conditioning and underestimation following 'hold long' conditioning. We found the complete opposite, especially following 'hold-short' conditioning where subjects underestimated the force they generated with similar relative error magnitudes across force levels. In a second experiment, we tested the hypothesis that estimation errors depended on the degree of afferent-induced facilitation using the Kohnstamm phenomenon as a probe of motor pathway excitability. Because the stronger post-effects were observed following 'hold-short' conditioning, it appears that the conditioning-induced excitation of spindle afferents leads to force misjudgments by introducing a decoupling between the central effort and the cortical motor outputs.
Schaufele, Fred
2013-01-01
Förster resonance energy transfer (FRET) between fluorescent proteins (FPs) provides insights into the proximities and orientations of FPs as surrogates of the biochemical interactions and structures of the factors to which the FPs are genetically fused. As powerful as FRET methods are, technical issues have impeded their broad adoption in the biologic sciences. One hurdle to accurate and reproducible FRET microscopy measurement stems from variable fluorescence backgrounds both within a field and between different fields. Those variations introduce errors into the precise quantification of fluorescence levels on which the quantitative accuracy of FRET measurement is highly dependent. This measurement error is particularly problematic for screening campaigns since minimal well-to-well variation is necessary to faithfully identify wells with altered values. High content screening depends also upon maximizing the numbers of cells imaged, which is best achieved by low magnification high throughput microscopy. But, low magnification introduces flat-field correction issues that degrade the accuracy of background correction to cause poor reproducibility in FRET measurement. For live cell imaging, fluorescence of cell culture media in the fluorescence collection channels for the FPs commonly used for FRET analysis is a high source of background error. These signal-to-noise problems are compounded by the desire to express proteins at biologically meaningful levels that may only be marginally above the strong fluorescence background. Here, techniques are presented that correct for background fluctuations. Accurate calculation of FRET is realized even from images in which a non-flat background is 10-fold higher than the signal. PMID:23927839
Elliot, Catherine A; Hamlin, Michael J; Lizamore, Catherine A
2017-07-28
The purpose of this study was to investigate the validity and reliability of the Hexoskin® vest for measuring respiration and heart rate (HR) in elite cyclists during a progressive test to exhaustion. Ten male elite cyclists (age 28.8 ± 12.5 yr, height 179.3 ± 6.0 cm, weight 73.2 ± 9.1 kg, V̇O2max 60.7 ± 7.8 ml·kg⁻¹·min⁻¹; mean ± SD) conducted a maximal aerobic cycle ergometer test using a ramped protocol (starting at 100 W with 25 W increments each min to failure) during two separate occasions over a 3-4 day period. Compared to the criterion measure (Metamax 3B) the Hexoskin® vest showed mainly small typical errors (1.3-6.2%) for HR and breathing frequency (f), but larger typical errors (9.5-19.6%) for minute ventilation (V̇E) during the progressive test to exhaustion. The typical error indicating the reliability of the Hexoskin® vest at moderate intensity exercise between tests was small for HR (2.6-2.9%) and f (2.5-3.2%) but slightly larger for V̇E (5.3-7.9%). We conclude that the Hexoskin® vest is sufficiently valid and reliable for measurements of HR and f in elite athletes during high intensity cycling but the calculated V̇E value the Hexoskin® vest produces during such exercise should be used with caution due to the lower validity and reliability of this variable.
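As a hedged illustration of the "typical error" statistic reported above, the sketch below computes a test-retest typical error (SD of the paired differences divided by the square root of 2, also expressed as a CV%); the heart-rate values are invented for the example and are not data from the study.

```python
import numpy as np

# Hedged illustration of a test-retest "typical error": the SD of the paired
# differences between two trials divided by sqrt(2), also expressed as a
# coefficient of variation. The heart-rate values are invented for the example.

trial1 = np.array([152., 158., 171., 165., 180., 176., 169., 174.])  # HR, test 1 (bpm)
trial2 = np.array([150., 160., 173., 163., 182., 174., 171., 172.])  # HR, test 2 (bpm)

diff = trial2 - trial1
typical_error = np.std(diff, ddof=1) / np.sqrt(2.0)
cv_percent = 100.0 * typical_error / np.mean(np.concatenate([trial1, trial2]))
print(f"typical error: {typical_error:.2f} bpm ({cv_percent:.1f}% CV)")
```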
Statistical methods for biodosimetry in the presence of both Berkson and classical measurement error
NASA Astrophysics Data System (ADS)
Miller, Austin
In radiation epidemiology, the true dose received by those exposed cannot be assessed directly. Physical dosimetry uses a deterministic function of the source term, distance and shielding to estimate dose. For the atomic bomb survivors, the physical dosimetry system is well established. The classical measurement errors plaguing the location and shielding inputs to the physical dosimetry system are well known. Adjusting for the associated biases requires an estimate for the classical measurement error variance, for which no data-driven estimate exists. In this case, an instrumental variable solution is the most viable option to overcome the classical measurement error indeterminacy. Biological indicators of dose may serve as instrumental variables. Specification of the biodosimeter dose-response model requires identification of the radiosensitivity variables, for which we develop statistical definitions and variables. More recently, researchers have recognized Berkson error in the dose estimates, introduced by averaging assumptions for many components in the physical dosimetry system. We show that Berkson error induces a bias in the instrumental variable estimate of the dose-response coefficient, and then address the estimation problem. This model is specified by developing an instrumental variable mixed measurement error likelihood function, which is then maximized using a Monte Carlo EM Algorithm. These methods produce dose estimates that incorporate information from both physical and biological indicators of dose, as well as the first instrumental variable based data-driven estimate for the classical measurement error variance.
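For reference, the two error structures contrasted above are conventionally written as follows (standard notation, not taken from the dissertation itself), with X the true dose, W the observed or assigned dose, and U an independent error term:

```latex
% Classical error: the observed value W scatters around the true dose X.
% Berkson error: the true dose X scatters around the assigned (group-average) value W.
\begin{aligned}
\text{classical:} \quad & W = X + U_c, \qquad U_c \perp X,\\
\text{Berkson:}   \quad & X = W + U_b, \qquad U_b \perp W.
\end{aligned}
```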
Isakov, Ofer; Bordería, Antonio V; Golan, David; Hamenahem, Amir; Celniker, Gershon; Yoffe, Liron; Blanc, Hervé; Vignuzzi, Marco; Shomron, Noam
2015-07-01
The study of RNA virus populations is a challenging task. Each population of RNA virus is composed of a collection of different, yet related genomes often referred to as mutant spectra or quasispecies. Virologists using deep sequencing technologies face major obstacles when studying virus population dynamics, both experimentally and in natural settings due to the relatively high error rates of these technologies and the lack of high performance pipelines. In order to overcome these hurdles we developed a computational pipeline, termed ViVan (Viral Variance Analysis). ViVan is a complete pipeline facilitating the identification, characterization and comparison of sequence variance in deep sequenced virus populations. Applying ViVan on deep sequenced data obtained from samples that were previously characterized by more classical approaches, we uncovered novel and potentially crucial aspects of virus populations. With our experimental work, we illustrate how ViVan can be used for studies ranging from the more practical, detection of resistant mutations and effects of antiviral treatments, to the more theoretical temporal characterization of the population in evolutionary studies. Freely available on the web at http://www.vivanbioinfo.org. Contact: nshomron@post.tau.ac.il. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
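ViVan's own statistics are not reproduced here, but the kernel of low-frequency variant detection in deep-sequenced populations can be sketched generically: tally per-position base counts and test each minor allele against an assumed sequencing error rate. The function names, error rate and pileup data below are illustrative assumptions, not the ViVan code.

```python
from collections import Counter
from scipy.stats import binomtest

# Generic sketch of low-frequency variant detection from aligned reads: tally
# per-position base counts and test whether a minor allele exceeds what an
# assumed per-base sequencing error rate would produce. Illustrative only.

ERROR_RATE = 0.001   # assumed per-base sequencing error rate

def call_variants(pileup, alpha=1e-6):
    """pileup: dict position -> list of observed bases at that position."""
    calls = []
    for pos, bases in pileup.items():
        counts = Counter(bases)
        depth = sum(counts.values())
        ref, _ = counts.most_common(1)[0]
        for base, k in counts.items():
            if base == ref:
                continue
            p = binomtest(k, depth, ERROR_RATE, alternative="greater").pvalue
            if p < alpha:
                calls.append((pos, ref, base, k / depth, p))
    return calls

pileup = {101: ["A"] * 9930 + ["G"] * 70,    # 0.7% minor allele -> called
          102: ["C"] * 9995 + ["T"] * 5}     # consistent with error alone -> not called
print(call_variants(pileup))
```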
Kim, Yoonjung; Han, Mi-Soon; Kim, Juwon; Kwon, Aerin; Lee, Kyung-A
2014-01-01
A total of 84 nasopharyngeal swab specimens were collected from 84 patients. Viral nucleic acid was extracted by three automated extraction systems: QIAcube (Qiagen, Germany), EZ1 Advanced XL (Qiagen), and MICROLAB Nimbus IVD (Hamilton, USA). Fourteen RNA viruses and two DNA viruses were detected using the Anyplex II RV16 Detection kit (Seegene, Republic of Korea). The EZ1 Advanced XL system demonstrated the best analytical sensitivity for all three viral strains. The nucleic acids extracted by EZ1 Advanced XL showed higher positive rates for virus detection than the others. Meanwhile, the MICROLAB Nimbus IVD system comprised fully automated steps from nucleic acid extraction to PCR setup, which could reduce human error. For the nucleic acids recovered from nasopharyngeal swab specimens, the QIAcube system showed the fewest false negative results and the best concordance rate, and it may be more suitable for detecting various viruses including RNA and DNA virus strains. Each system showed different sensitivity and specificity for the detection of certain viral pathogens and demonstrated different characteristics such as turnaround time and sample capacity. Therefore, these factors should be considered when new nucleic acid extraction systems are introduced into the laboratory.
Kravatsky, Yuri; Chechetkin, Vladimir; Fedoseeva, Daria; Gorbacheva, Maria; Kravatskaya, Galina; Kretova, Olga; Tchurikov, Nickolai
2017-11-23
The efficient development of antiviral drugs, including efficient antiviral small interfering RNAs (siRNAs), requires continuous monitoring of the strict correspondence between a drug and the related highly variable viral DNA/RNA target(s). Deep sequencing is able to provide an assessment of both the general target conservation and the frequency of particular mutations in the different target sites. The aim of this study was to develop a reliable bioinformatic pipeline for the analysis of millions of short, deep sequencing reads corresponding to selected highly variable viral sequences that are drug target(s). The suggested bioinformatic pipeline combines the available programs and the ad hoc scripts based on an original algorithm of the search for the conserved targets in the deep sequencing data. We also present the statistical criteria for the threshold of reliable mutation detection and for the assessment of variations between corresponding data sets. These criteria are robust against the possible sequencing errors in the reads. As an example, the bioinformatic pipeline is applied to the study of the conservation of RNA interference (RNAi) targets in human immunodeficiency virus 1 (HIV-1) subtype A. The developed pipeline is freely available to download at the website http://virmut.eimb.ru/. Brief comments and comparisons between VirMut and other pipelines are also presented.
NASA Technical Reports Server (NTRS)
Patrick, Sean; Oliver, Emerson
2018-01-01
One of the SLS Navigation System's key performance requirements is a constraint on the payload system's delta-v allocation to correct for insertion errors due to vehicle state uncertainty at payload separation. The SLS navigation team has developed a Delta-Delta-V analysis approach to assess the effect of navigation errors on the trajectory correction maneuver (TCM) design needed to correct for them. This approach differs from traditional covariance-analysis-based methods and makes no assumptions with regard to the propagation of the state dynamics. This allows for consideration of non-linearity in the propagation of state uncertainties. The Delta-Delta-V analysis approach re-optimizes perturbed SLS mission trajectories by varying key mission states in accordance with an assumed state error. The state error is developed from detailed vehicle 6-DOF Monte Carlo analysis or generated using covariance analysis. These perturbed trajectories are compared to a nominal trajectory to determine the necessary TCM design. To implement this analysis approach, a tool set was developed which combines the functionality of a 3-DOF trajectory optimization tool, Copernicus, and a detailed 6-DOF vehicle simulation tool, Marshall Aerospace Vehicle Representation in C (MAVERIC). In addition to delta-v allocation constraints on SLS navigation performance, SLS mission requirements dictate successful upper stage disposal. Due to engine and propellant constraints, the SLS Exploration Upper Stage (EUS) must be disposed into heliocentric space by means of a lunar fly-by maneuver. As with payload delta-v allocation, upper stage disposal maneuvers must place the EUS on a trajectory that maximizes the probability of achieving a heliocentric orbit post lunar fly-by, considering all sources of vehicle state uncertainty prior to the maneuver. To ensure disposal, the SLS navigation team has developed an analysis approach to derive optimal disposal guidance targets. This approach maximizes the state error covariance prior to the maneuver to develop and re-optimize a nominal disposal maneuver (DM) target that, if achieved, would maximize the potential for successful upper stage disposal. For EUS disposal analysis, a set of two tools was developed. The first considers only the nominal pre-disposal maneuver state, vehicle constraints, and an a priori estimate of the state error covariance. In the analysis, the optimal nominal disposal target is determined. This is performed by re-formulating the trajectory optimization to consider constraints on the eigenvectors of the error ellipse applied to the nominal trajectory. A bisection search methodology is implemented in the tool to refine these dispersions, resulting in the maximum dispersion feasible for successful disposal via lunar fly-by. Success is defined based on the probability that the vehicle will not impact the lunar surface and will achieve a characteristic energy (C3) relative to the Earth such that it is no longer in the Earth-Moon system. The second tool propagates post-disposal maneuver states to determine the success of disposal for the achieved trajectory states provided. This is performed using the optimized nominal target within the 6-DOF vehicle simulation. This paper will discuss the application of the Delta-Delta-V analysis approach for performance evaluation as well as trajectory re-optimization to demonstrate the system's capability to meet performance constraints. Additionally, the implementation of the disposal analysis will be discussed further.
Design of a robust baseband LPC coder for speech transmission over 9.6 kbit/s noisy channels
NASA Astrophysics Data System (ADS)
Viswanathan, V. R.; Russell, W. H.; Higgins, A. L.
1982-04-01
This paper describes the design of a baseband Linear Predictive Coder (LPC) which transmits speech over 9.6 kbit/sec synchronous channels with random bit errors of up to 1%. Presented are the results of our investigation of a number of aspects of the baseband LPC coder with the goal of maximizing the quality of the transmitted speech. Important among these aspects are: bandwidth of the baseband, coding of the baseband residual, high-frequency regeneration, and error protection of important transmission parameters. The paper discusses these and other issues, presents the results of speech-quality tests conducted during the various stages of optimization, and describes the details of the optimized speech coder. This optimized speech coding algorithm has been implemented as a real-time full-duplex system on an array processor. Informal listening tests of the real-time coder have shown that the coder produces good speech quality in the absence of channel bit errors and introduces only a slight degradation in quality for channel bit error rates of up to 1%.
Schmoll, Martin; Unger, Ewald; Bijak, Manfred; Stoiber, Martin; Lanmüller, Hermann; Jarvis, Jonathan Charles
2017-01-01
Direct measurements of muscular forces usually require a substantial rearrangement of the biomechanical system. To circumvent this problem, various indirect techniques have been used in the past. We introduce a novel direct method, using a lightweight (~0.5 g) miniature (3 x 3 x 7 mm) in-line load-cell to measure tension in the tibialis anterior tendon of rats. A linear motor was used to produce force-profiles to assess linearity, step-response, hysteresis and frequency behavior under controlled conditions. Sensor responses to a series of rectangular force-pulses correlated linearly (R2 = 0.999) within the range of 0-20 N. The maximal relative error at full scale (20 N) was 0.07% of the average measured signal. The standard deviation of the mean response to repeated 20 N force pulses was ± 0.04% of the mean response. The step-response of the load-cell showed the behavior of a PD2T2-element in control-engineering terminology. The maximal hysteretic error was 5.4% of the full-scale signal. Sinusoidal signals were attenuated maximally (-4 dB) at 200 Hz, within a measured range of 0.01-200 Hz. When measuring muscular forces this should be of minor concern as the fusion-frequency of muscles is generally much lower. The newly developed load-cell measured tensile forces of up to 20 N, without inelastic deformation of the sensor. It qualifies for various applications in which it is of interest directly to measure forces within a particular tendon causing only minimal disturbance to the biomechanical system.
Refactoring the Genetic Code for Increased Evolvability
Pines, Gur; Winkler, James D.; Pines, Assaf; ...
2017-11-14
ABSTRACT The standard genetic code is robust to mutations during transcription and translation. Point mutations are likely to be synonymous or to preserve the chemical properties of the original amino acid. Saturation mutagenesis experiments suggest that in some cases the best-performing mutant requires replacement of more than a single nucleotide within a codon. These replacements are essentially inaccessible to common error-based laboratory engineering techniques that alter a single nucleotide per mutation event, due to the extreme rarity of adjacent mutations. In this theoretical study, we suggest a radical reordering of the genetic code that maximizes the mutagenic potential of single nucleotide replacements. We explore several possible genetic codes that allow a greater degree of accessibility to the mutational landscape and may result in a hyperevolvable organism that could serve as an ideal platform for directed evolution experiments. We then conclude by evaluating the challenges of constructing such recoded organisms and their potential applications within the field of synthetic biology. IMPORTANCE The conservative nature of the genetic code prevents bioengineers from efficiently accessing the full mutational landscape of a gene via common error-prone methods. Here, we present two computational approaches to generate alternative genetic codes with increased accessibility. These new codes allow mutational transitions to a larger pool of amino acids and with a greater extent of chemical differences, based on a single nucleotide replacement within the codon, thus increasing evolvability both at the single-gene and at the genome levels. Given the widespread use of these techniques for strain and protein improvement, along with more fundamental evolutionary biology questions, the use of recoded organisms that maximize evolvability should significantly improve the efficiency of directed evolution, library generation, and fitness maximization.
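The quantity being optimized can be made concrete with a short sketch: for the standard genetic code, enumerate the amino acids reachable from a codon by a single nucleotide replacement. The recoded genetic codes proposed above aim to enlarge and chemically diversify this reachable set; the snippet only illustrates the baseline computation for the standard code.

```python
from itertools import product

# Sketch: for the standard genetic code, list the amino acids reachable from a
# codon via a single nucleotide replacement. The size and chemical diversity of
# this reachable set is what the proposed recoded genetic codes aim to enlarge.

BASES = "TCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODE = {"".join(c): a for c, a in zip(product(BASES, repeat=3), AA)}  # codon -> amino acid ('*' = stop)

def single_mutation_neighbors(codon):
    reachable = set()
    for i in range(3):
        for b in BASES:
            if b != codon[i]:
                reachable.add(CODE[codon[:i] + b + codon[i + 1:]])
    return reachable

codon = "GCT"  # alanine
print(codon, CODE[codon], "->", sorted(single_mutation_neighbors(codon)))
```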
Leff, Daniel R; Aggarwal, Rajesh; Rana, Mariam; Nakhjavani, Batool; Purkayastha, Sanjay; Khullar, Vik; Darzi, Ara W
2008-03-01
Research evaluating fatigue-induced skills decline has focused on acute sleep deprivation rather than the effects of circadian desynchronization associated with multiple shifts. As a result, the number of consecutive night shifts that residents can safely be on duty without detrimental effects to their technical skills remains unknown. A prospective observational cohort study was conducted to assess the impact of 7 successive night shifts on the technical surgical performance of junior residents. The interventional strategy included training 21 residents from surgery and allied disciplines on a virtual reality surgical simulator, towards the achievement of preset benchmark scores, followed by 294 technical skills assessments conducted over 1764 manpower night shift hours. Primary outcomes comprised serial technical skills assessments on 2 tasks of a virtual reality surgical simulator. Secondary outcomes included assessments of introspective fatigue, duration of sleep, and prospective recordings of activity (number of "calls" received, steps walked, and patients evaluated). Maximal deterioration in performance was observed following the first night shift. Residents took significantly longer to complete the first (P = 0.002) and second tasks (P = 0.005) compared with baseline. They also committed significantly greater numbers of errors (P = 0.025) on the first task assessed. Improved performance was observed across subsequent shifts towards baseline levels. Newly acquired technical surgical skills deteriorate maximally after the first night shift, emphasizing the importance of adequate preparation for night rotas. Performance improvements across successive shifts may be due to ongoing learning or adaptation to chronic fatigue. Further research should focus on assessments of both technical procedural skills and cognitive abilities to determine the rotas that best minimize errors and maximize patient safety.
Zhang, Guangzhi; Cai, Shaobin; Xiong, Naixue
2018-01-01
One of the remarkable challenges in Wireless Sensor Networks (WSN) is how to transfer the collected data efficiently given the energy limitations of sensor nodes. Network coding can increase the network throughput of WSN dramatically due to the broadcast nature of WSN. However, network coding usually propagates a single original error over the whole network. Due to this error-propagation property, most error correction methods cannot correct more than C/2 corrupted errors, where C is the max-flow min-cut of the network. To maximize the effectiveness of network coding applied in WSN, a new error-correcting mechanism to confront the propagated errors is urgently needed. Based on the social network characteristic inherent in WSN and L1 optimization, we propose a novel scheme which successfully corrects more than C/2 corrupted errors. What is more, even if errors occur on all the links of the network, our scheme can still correct them successfully. By introducing a secret channel and a specially designed matrix which can trap some errors, we improve John and Yi's model so that it can correct the propagated errors in network coding, which usually pollute exactly 100% of the received messages. Taking advantage of the social characteristic inherent in WSN, we propose a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify the informative upstream sensor nodes. Drawing on social network theory, the informative relay nodes are selected and marked with high trust values. The two methods, L1 optimization and the use of the social characteristic, coordinate with each other and can correct propagated errors whose fraction is exactly 100% in WSN where network coding is performed. The effectiveness of the error correction scheme is validated through simulation experiments. PMID:29401668
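The L1-optimization ingredient can be illustrated with a generic sketch: recover a message from sparsely corrupted observations by minimizing the L1 norm of the residual, cast as a linear program. This is not the paper's full scheme (which additionally uses a secret channel, an error-trapping matrix and trust-based relay selection); the dimensions and matrices below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# Generic sketch of L1-based error correction: recover message m from
# y = A m + e, where e is a sparse corruption, by solving the robust
# regression problem  min_m ||y - A m||_1  as a linear program.

rng = np.random.default_rng(3)
n_rx, n_msg, n_err = 60, 10, 12
A = rng.standard_normal((n_rx, n_msg))          # coding matrix seen by the sink
m_true = rng.standard_normal(n_msg)
e = np.zeros(n_rx)
e[rng.choice(n_rx, n_err, replace=False)] = rng.standard_normal(n_err) * 5.0
y = A @ m_true + e                              # received, sparsely corrupted

# Variables: [m (n_msg), t (n_rx)]; minimize sum(t) s.t. -t <= y - A m <= t
c = np.concatenate([np.zeros(n_msg), np.ones(n_rx)])
A_ub = np.block([[ A, -np.eye(n_rx)],
                 [-A, -np.eye(n_rx)]])
b_ub = np.concatenate([y, -y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n_msg + n_rx))
m_hat = res.x[:n_msg]
print("recovery error:", np.linalg.norm(m_hat - m_true))
```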
Approximated mutual information training for speech recognition using myoelectric signals.
Guo, Hua J; Chan, A D C
2006-01-01
A new training algorithm, called approximated maximum mutual information (AMMI), is proposed to improve the accuracy of myoelectric speech recognition using hidden Markov models (HMMs). Previous studies have demonstrated that automatic speech recognition can be performed using myoelectric signals from articulatory muscles of the face. Classification of facial myoelectric signals can be performed using HMMs that are trained using the maximum likelihood (ML) algorithm; however, this algorithm maximizes the likelihood of the observations in the training sequence, which is not directly associated with optimal classification accuracy. The AMMI training algorithm attempts to maximize the mutual information, thereby training the HMMs to optimize their parameters for discrimination. Our results show that AMMI training consistently reduces the error rates compared to those obtained with ML training, increasing the accuracy by approximately 3% on average.
Alterations in Neural Control of Constant Isometric Contraction with the Size of Error Feedback
Hwang, Ing-Shiou; Lin, Yen-Ting; Huang, Wei-Min; Yang, Zong-Ru; Hu, Chia-Ling; Chen, Yi-Ching
2017-01-01
Discharge patterns from a population of motor units (MUs) were estimated with multi-channel surface electromyogram and signal processing techniques to investigate parametric differences in low-frequency force fluctuations, MU discharges, and force-discharge relation during static force-tracking with varying sizes of execution error presented via visual feedback. Fourteen healthy adults produced isometric force at 10% of maximal voluntary contraction through index abduction under three visual conditions that scaled execution errors with different amplification factors. Error-augmentation feedback that used a high amplification factor (HAF) to potentiate visualized error size resulted in higher sample entropy, mean frequency, ratio of high-frequency components, and spectral dispersion of force fluctuations than those of error-reducing feedback using a low amplification factor (LAF). In the HAF condition, MUs with relatively high recruitment thresholds in the dorsal interosseous muscle exhibited a larger coefficient of variation for inter-spike intervals and a greater spectral peak of the pooled MU coherence at 13–35 Hz than did those in the LAF condition. Manipulation of the size of error feedback altered the force-discharge relation, which was characterized with non-linear approaches such as mutual information and cross sample entropy. The association of force fluctuations and global discharge trace decreased with increasing error amplification factor. Our findings provide direct neurophysiological evidence that favors motor training using error-augmentation feedback. Amplification of the visualized error size of visual feedback could enrich force gradation strategies during static force-tracking, pertaining to selective increases in the discharge variability of higher-threshold MUs that receive greater common oscillatory inputs in the β-band. PMID:28125658
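Sample entropy, one of the fluctuation measures used above, can be computed with a short routine of the following kind; m = 2 and r = 0.2 × SD are common defaults and are not necessarily the study's exact settings.

```python
import numpy as np

# Minimal sample entropy implementation of the kind used to quantify the
# irregularity of force-fluctuation signals. m = 2 and r = 0.2*SD are common
# defaults, not necessarily the exact settings used in the study.

def sample_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)
    n = len(x)

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(n - length)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)
        return count

    b = count_matches(m)       # matching template pairs of length m
    a = count_matches(m + 1)   # matching template pairs of length m + 1
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(0)
smooth = np.sin(np.linspace(0, 20 * np.pi, 1000))
noisy = smooth + 0.5 * rng.standard_normal(1000)
print("SampEn (smooth):", round(sample_entropy(smooth), 3))
print("SampEn (noisy): ", round(sample_entropy(noisy), 3))
```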
Polyhedral Interpolation for Optimal Reaction Control System Jet Selection
NASA Technical Reports Server (NTRS)
Gefert, Leon P.; Wright, Theodore
2014-01-01
An efficient algorithm is described for interpolating optimal values for spacecraft Reaction Control System jet firing duty cycles. The algorithm uses the symmetrical geometry of the optimal solution to reduce the number of calculations and data storage requirements to a level that enables implementation on the small real time flight control systems used in spacecraft. The process minimizes acceleration direction errors, maximizes control authority, and minimizes fuel consumption.
UWB channel estimation using new generating TR transceivers
Nekoogar, Faranak [San Ramon, CA; Dowla, Farid U [Castro Valley, CA; Spiridon, Alex [Palo Alto, CA; Haugen, Peter C [Livermore, CA; Benzel, Dave M [Livermore, CA
2011-06-28
The present invention presents a simple and novel channel estimation scheme for UWB communication systems. As disclosed herein, the present invention maximizes the extraction of information by incorporating a new generation of transmitted-reference (Tr) transceivers that utilize a single reference pulse(s) or a preamble of reference pulses to provide improved channel estimation while offering higher Bit Error Rate (BER) performance and data rates without diluting the transmitter power.
NASA Astrophysics Data System (ADS)
Wang, Yiguang; Huang, Xingxing; Shi, Jianyang; Wang, Yuan-quan; Chi, Nan
2016-05-01
Visible light communication (VLC) has no doubt become a promising candidate for future wireless communications due to the increasing usage of light-emitting diodes (LEDs). In addition to indoor high-speed wireless access and positioning applications, VLC usage in outdoor scenarios, such as vehicle networks and intelligent transportation systems, is also attracting significant interest. However, the complex outdoor environment and ambient noise are the key challenges for long-range high-speed VLC outdoor applications. To improve system performance and transmission distance, we propose to use receiver diversity technology in an outdoor VLC system. Maximal ratio combining-based receiver diversity technology is utilized in two receivers to achieve the maximal signal-to-noise ratio. A 400-Mb/s VLC transmission using a phosphor-based white LED and a 1-Gb/s wavelength division multiplexing VLC transmission using a red-green-blue LED are both successfully achieved over a 100-m outdoor distance with the bit error rate below the 7% forward error correction limit of 3.8×10⁻³. To the best of our knowledge, this is the highest data rate ever achieved for 100-m outdoor VLC transmission. The experimental results clearly prove the benefit and feasibility of receiver diversity technology for long-range high-speed outdoor VLC systems.
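A minimal sketch of the maximal-ratio combining step, assuming two independent branches with known gains and noise levels (values invented for illustration): each branch is weighted by gain over noise variance, so the combined SNR equals the sum of the branch SNRs.

```python
import numpy as np

# Sketch of two-branch maximal-ratio combining (MRC): each receiver branch is
# weighted by (branch gain / noise variance). Gains and noise levels are
# illustrative only, not measurements from the experiment.

rng = np.random.default_rng(7)
n = 100_000
bits = rng.integers(0, 2, n)
s = 2.0 * bits - 1.0                          # bipolar symbols

h = np.array([1.0, 0.4])                      # branch gains (e.g. two photodiodes)
sigma = np.array([0.6, 0.5])                  # per-branch noise std dev
r = h[:, None] * s + sigma[:, None] * rng.standard_normal((2, n))

w = h / sigma**2                              # MRC weights
combined = w @ r
ber_mrc = np.mean((combined > 0).astype(int) != bits)
ber_best_single = min(np.mean((r[i] > 0).astype(int) != bits) for i in range(2))
print(f"BER best single branch: {ber_best_single:.4f}, BER after MRC: {ber_mrc:.4f}")
```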
The Dolinar Receiver in an Information Theoretic Framework
NASA Technical Reports Server (NTRS)
Erkmen, Baris I.; Birnbaum, Kevin M.; Moision, Bruce E.; Dolinar, Samuel J.
2011-01-01
Optical communication at the quantum limit requires that measurements on the optical field be maximally informative, but devising physical measurements that accomplish this objective has proven challenging. The Dolinar receiver exemplifies a rare instance of success in distinguishing between two coherent states: an adaptive local oscillator is mixed with the signal prior to photodetection, which yields an error probability that meets the Helstrom lower bound with equality. Here we apply the same local-oscillator-based architecture with an information-theoretic optimization criterion. We begin with an analysis of this receiver in a general framework for an arbitrary coherent-state modulation alphabet, and then we concentrate on two relevant examples. First, we study a binary antipodal alphabet and show that the Dolinar receiver's feedback function not only minimizes the probability of error, but also maximizes the mutual information. Next, we study ternary modulation consisting of antipodal coherent states and the vacuum state. We derive an analytic expression for a near-optimal local oscillator feedback function, and, via simulation, we determine its photon information efficiency (PIE). We provide the PIE versus dimensional information efficiency (DIE) trade-off curve and show that this modulation and receiver combination performs universally better than (generalized) on-off keying plus photon counting, although the advantage asymptotically vanishes as the bits-per-photon diverges towards infinity.
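For reference, the Helstrom bound that the Dolinar receiver attains for two equiprobable antipodal coherent states is the standard expression below (a textbook result, not quoted from the paper itself):

```latex
% Helstrom minimum error probability for equiprobable coherent states |+\alpha> and |-\alpha>,
% using |<\alpha|-\alpha>|^2 = e^{-4|\alpha|^2}:
P_{\min} \;=\; \frac{1}{2}\left(1 - \sqrt{1 - \left|\langle \alpha \mid -\alpha \rangle\right|^{2}}\,\right)
        \;=\; \frac{1}{2}\left(1 - \sqrt{1 - e^{-4|\alpha|^{2}}}\,\right).
```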
Understanding seasonal variability of uncertainty in hydrological prediction
NASA Astrophysics Data System (ADS)
Li, M.; Wang, Q. J.
2012-04-01
Understanding uncertainty in hydrological prediction can be highly valuable for improving the reliability of streamflow prediction. In this study, a monthly water balance model, WAPABA, is combined with error models within a Bayesian joint probability framework to investigate the seasonal dependency of the prediction error structure. A seasonally invariant error model, analogous to traditional time series analysis, uses constant parameters for the model error and accounts for no seasonal variations. In contrast, a seasonally variant error model uses a different set of parameters for bias, variance and autocorrelation for each individual calendar month. Potential connections among model parameters from similar months are not considered within the seasonally variant model and could result in over-fitting and over-parameterization. A hierarchical error model further applies distributional restrictions on the model parameters within a Bayesian hierarchical framework. An iterative algorithm is implemented to expedite the maximum a posteriori (MAP) estimation of the hierarchical error model. The three error models are applied to forecasting streamflow at a catchment in southeast Australia in a cross-validation analysis. This study also presents a number of statistical measures and graphical tools to compare the predictive skills of the different error models. From probability integral transform histograms and other diagnostic graphs, the hierarchical error model conforms better to reliability when compared to the seasonally invariant error model. The hierarchical error model also generally provides the most accurate mean prediction in terms of the Nash-Sutcliffe model efficiency coefficient and the best probabilistic prediction in terms of the continuous ranked probability score (CRPS). The model parameters of the seasonally variant error model are very sensitive to each cross-validation, while the hierarchical error model produces much more robust and reliable model parameters. Furthermore, the results of the hierarchical error model show that most model parameters are not seasonally variant except for the error bias. The seasonally variant error model is likely to use more parameters than necessary to maximize the posterior likelihood. The model flexibility and robustness indicate that the hierarchical error model has great potential for future streamflow predictions.
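The CRPS used above for probabilistic skill can be estimated from an ensemble with the standard empirical identity CRPS = E|X - y| - 0.5 E|X - X'|; the sketch below is generic, and the forecast numbers are invented for illustration rather than taken from the study.

```python
import numpy as np

# Empirical CRPS of an ensemble forecast against a single observation, using
# the identity CRPS = E|X - y| - 0.5 * E|X - X'|. Generic sketch only.

def crps_ensemble(ensemble, obs):
    ensemble = np.asarray(ensemble, dtype=float)
    term1 = np.mean(np.abs(ensemble - obs))
    term2 = 0.5 * np.mean(np.abs(ensemble[:, None] - ensemble[None, :]))
    return term1 - term2

rng = np.random.default_rng(42)
forecast_members = rng.normal(loc=55.0, scale=8.0, size=200)   # e.g. monthly flow (mm)
observed = 60.0
print(f"CRPS = {crps_ensemble(forecast_members, observed):.2f} mm")
```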
Viglianti, G A; Rubinstein, E P; Graves, K L
1992-01-01
The untranslated leader sequences of rhesus macaque simian immunodeficiency virus mRNAs form a stable secondary structure, TAR. This structure can be modified by RNA splicing. In this study, the role of TAR splicing in virus replication was investigated. The proportion of viral RNAs containing a spliced TAR structure is high early after infection and decreases at later times. Moreover, proviruses containing mutations which prevent TAR splicing are significantly delayed in replication. These mutant viruses require approximately 20 days to achieve half-maximal virus production, in contrast to wild-type viruses, which require approximately 8 days. We attribute this delay to the inefficient translation of unspliced-TAR-containing mRNAs. The molecular basis for this translational effect was examined in in vitro assays. We found that spliced-TAR-containing mRNAs were translated up to 8.5 times more efficiently than were similar mRNAs containing an unspliced TAR leader. Furthermore, these spliced-TAR-containing mRNAs were more efficiently associated with ribosomes. We postulate that the level of TAR splicing provides a balance for the optimal expression of both viral proteins and genomic RNA and therefore ultimately controls the production of infectious virions. PMID:1629957
Darwinian demons, evolutionary complexity, and information maximization.
Krakauer, David C
2011-09-01
Natural selection is shown to be an extended instance of a Maxwell's demon device. A demonic selection principle is introduced that states that organisms cannot exceed the complexity of their selective environment. Thermodynamic constraints on error repair impose a fundamental limit to the rate that information can be transferred from the environment (via the selective demon) to the genome. Evolved mechanisms of learning and inference can overcome this limitation, but remain subject to the same fundamental constraint, such that plastic behaviors cannot exceed the complexity of reward signals. A natural measure of evolutionary complexity is provided by mutual information, and niche construction activity--the organismal contribution to the construction of selection pressures--might in principle lead to its increase, bounded by thermodynamic free energy required for error correction.
Force Control Characteristics for Generation and Relaxation in the Lower Limb.
Ohtaka, Chiaki; Fujiwara, Motoko
2018-05-29
We investigated the characteristics of force generation and relaxation using graded isometric contractions of the knee extensors. Participants performed the following tasks as quickly and accurately as possible. For the force generation task, force was increased from 0% to 20%, 40% and 60% of the maximal voluntary force (MVF). For the force relaxation task, force was decreased from 60% to 40%, 20% and 0%. The following parameters of the recorded force were calculated: error, time, and rate of force development. The error was consistently greater for force relaxation than generation. Reaction and adjustment times were independent of the tasks. The control strategy was markedly different for force relaxation and generation; this tendency was particularly evident for the lower limb compared with the upper limb.
Systems Issues Pertaining to Holographic Optical Data Storage in Thick Bacteriorhodopsin Films
NASA Technical Reports Server (NTRS)
Downie, John D.; Timucin, Dogan A.; Gary, Charles K.; Oezcan, Meric; Smithey, Daniel T.; Crew, Marshall; Lau, Sonie (Technical Monitor)
1998-01-01
The optical data storage capacity and raw bit-error rate achievable with thick photochromic bacteriorhodopsin (BR) films are investigated for sequential recording and readout of angularly- and shift-multiplexed digital holograms inside a thick blue-membrane D85N BR film. We address the determination of an exposure schedule that produces equal diffraction efficiencies among each of the multiplexed holograms. This exposure schedule is determined by numerical simulations of the holographic recording process within the BR material, and maximizes the total grating strength. We also experimentally measure the shift selectivity and compare the results to theoretical predictions. Finally, we evaluate the bit-error rate of a single hologram, and of multiple holograms stored within the film.
Using data to make decisions and drive results: a LEAN implementation strategy.
Panning, Rick
2005-03-28
During the process of facility planning, Fairview Laboratory Services utilized LEAN manufacturing to maximize efficiency, simplify processes, and improve laboratory support of patient care services. By incorporating the LEAN program's concepts in our pilot program, we were able to reduce turnaround time by 50%, improve productivity by greater than 40%, reduce costs by 31%, save more than 440 square feet of space, standardize work practices, reduce errors and error potential, continuously measure performance, eliminate excess unused inventory and visual noise, and cross-train 100% of staff in the core laboratory. In addition, we trained a core team of people that is available to coordinate future LEAN projects in the laboratory and other areas of the organization.
Mutation-adapted U1 snRNA corrects a splicing error of the dopa decarboxylase gene.
Lee, Ni-Chung; Lee, Yu-May; Chen, Pin-Wen; Byrne, Barry J; Hwu, Wuh-Liang
2016-12-01
Aromatic l-amino acid decarboxylase (AADC) deficiency is an inborn error of monoamine neurotransmitter synthesis, which results in dopamine, serotonin, epinephrine and norepinephrine deficiencies. The DDC gene founder mutation IVS6 + 4A > T is highly prevalent in Chinese patients with AADC deficiency. In this study, we designed several U1 snRNA vectors to adapt U1 snRNA binding sequences of the mutated DDC gene. We found that only the modified U1 snRNA (IVS-AAA) that completely matched both the intronic and exonic U1 binding sequences of the mutated DDC gene could correct splicing errors of either the mutated human DDC minigene or the mouse artificial splicing construct in vitro. We further injected an adeno-associated viral (AAV) vector to express IVS-AAA in the brain of a knock-in mouse model. This treatment was well tolerated and improved both the survival and brain dopamine and serotonin levels of mice with AADC deficiency. Therefore, mutation-adapted U1 snRNA gene therapy can be a promising method to treat genetic diseases caused by splicing errors, but the efficiency of such a treatment still needs improvements. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Myrmel, Helge; Ulvestad, Elling; Asjø, Birgitta
2009-05-01
Hepatitis C virus (HCV) has a high propensity to establish chronic infection with end-stage liver disease. The high turnover of virus particles and high transcription error rates due to lack of proof-reading function of the viral polymerase imply that HCV exists as quasispecies, thus enabling the virus to evade the host immune response. Clearance of the virus is characterized by a multispecific, vigorous and persistent T-cell response, whereas T-cell responses are weak, narrow and transient in patients who develop chronic infection. At present, standard treatment is a combination of pegylated interferon-alpha and ribavirin, with a sustained viral response rate of 40-80%, depending on genotype. The mechanisms for the observed synergistic effects of the two drugs are still not known in detail, but in addition to direct antiviral mechanisms, the immunomodulatory effects of both drugs seem to be important, with a shift from Th2- to Th1-cytokine profiles in successfully treated patients. This article describes virus-host relations in the natural course of HCV infection and during treatment.
Kim, Young-In; Pareek, Rajat; Murphy, Ryan; Harrison, Lisa; Farrell, Eric; Cook, Robert; DeVincenzo, John
2017-11-01
Respiratory syncytial virus (RSV) viral load and disease severity associate, and the timing of viral load and disease run in parallel. An antiviral must be broadly effective against the natural spectrum of RSV genotypes and must attain concentrations capable of inhibiting viral replication within the human respiratory tract. We evaluated a novel RSV fusion inhibitor, MDT-637, and compared it with ribavirin for therapeutic effect in vitro to identify relative therapeutic doses achievable in humans. MDT-637 and ribavirin were co-incubated with RSV in HEp-2 cells. Quantitative PCR assessed viral concentrations; 50% inhibitory concentrations (IC50) were compared to achievable human MDT-637 and ribavirin peak and trough concentrations. The IC50 for MDT-637 and ribavirin (against RSV-A Long) was 1.42 and 16,973 ng/mL, respectively. The ratio of achievable peak respiratory secretion concentration to IC50 was 6041-fold for MDT-637 and 25-fold for aerosolized ribavirin. The ratio of trough concentration to IC50 was 1481-fold for MDT-637 and 3.29-fold for aerosolized ribavirin. Maximal peak and trough levels of oral or intravenous ribavirin were significantly lower than their IC50s. We also measured MDT-637 IC50s in 3 lab strains and 4 clinical strains. The IC50s ranged from 0.36 to 3.4 ng/mL. Achievable human MDT-637 concentrations in respiratory secretions exceed the IC50s by factors from hundreds to thousands of times greater than does ribavirin. Furthermore, MDT-637 has broad in vitro antiviral activity on clinical strains of different RSV genotypes and clades. Together, these data imply that MDT-637 may produce a superior clinical effect compared to ribavirin on natural RSV infections. © 2017 The Authors. Influenza and Other Respiratory Viruses Published by John Wiley & Sons Ltd.
Segovia, José C.; Gallego, Jesús M.; Bueren, Juan A.; Almendral, José M.
1999-01-01
Parvovirus minute virus of mice strain i (MVMi) infects committed granulocyte-macrophage CFU and erythroid burst-forming unit (CFU-GM and BFU-E, respectively) and pluripotent (CFU-S) mouse hematopoietic progenitors in vitro. To study the effects of MVMi infection on mouse hemopoiesis in the absence of a specific immune response, adult SCID mice were inoculated by the natural intranasal route of infection and monitored for hematopoietic and viral multiplication parameters. Infected animals developed a very severe viral-dose-dependent leukopenia by 30 days postinfection (d.p.i.) that led to death within 100 days, even though the number of circulating platelets and erythrocytes remained unaltered throughout the disease. In the bone marrow of every lethally inoculated mouse, a deep suppression of CFU-GM and BFU-E clonogenic progenitors occurring during the 20- to 35-d.p.i. interval corresponded with the maximal MVMi production, as determined by the accumulation of virus DNA replicative intermediates and the yield of infectious virus. Viral productive infection was limited to a small subset of primitive cells expressing the major replicative viral antigen (NS-1 protein), the numbers of which declined with the disease. However, the infection induced a sharp and lasting unbalance of the marrow hemopoiesis, denoted by a marked depletion of granulomacrophagic cells (GR-1+ and MAC-1+) concomitant with a twofold absolute increase in erythroid cells (TER-119+). A stimulated definitive erythropoiesis in the infected mice was further evidenced by a 12-fold increase per femur of recognizable proerythroblasts, a quantitative apoptosis confined to uninfected TER-119+ cells, as well as by a 4-fold elevation in the number of circulating reticulocytes. Therefore, MVMi targets and suppresses primitive hemopoietic progenitors leading to a very severe leukopenia, but compensatory mechanisms are mounted specifically by the erythroid lineage that maintain an effective erythropoiesis. The results show that infection of SCID mice with the parvovirus MVMi causes a novel dysregulation of murine hemopoiesis in vivo. PMID:9971754
Mesoscale Predictability and Error Growth in Short Range Ensemble Forecasts
NASA Astrophysics Data System (ADS)
Gingrich, Mark
Although it was originally suggested that small-scale, unresolved errors corrupt forecasts at all scales through an inverse error cascade, some authors have proposed that those mesoscale circulations resulting from stationary forcing on the larger scale may inherit the predictability of the large-scale motions. Further, the relative contributions of large- and small-scale uncertainties in producing error growth in the mesoscales remain largely unknown. Here, 100 member ensemble forecasts are initialized from an ensemble Kalman filter (EnKF) to simulate two winter storms impacting the East Coast of the United States in 2010. Four verification metrics are considered: the local snow water equivalence, total liquid water, and 850 hPa temperatures representing mesoscale features; and the sea level pressure field representing a synoptic feature. It is found that while the predictability of the mesoscale features can be tied to the synoptic forecast, significant uncertainty existed on the synoptic scale at lead times as short as 18 hours. Therefore, mesoscale details remained uncertain in both storms due to uncertainties at the large scale. Additionally, the ensemble perturbation kinetic energy did not show an appreciable upscale propagation of error for either case. Instead, the initial condition perturbations from the cycling EnKF were maximized at large scales and immediately amplified at all scales without requiring initial upscale propagation. This suggests that relatively small errors in the synoptic-scale initialization may have more importance in limiting predictability than errors in the unresolved, small-scale initial conditions.
Application of viromics: a new approach to the understanding of viral infections in humans.
Ramamurthy, Mageshbabu; Sankar, Sathish; Kannangai, Rajesh; Nandagopal, Balaji; Sridharan, Gopalan
2017-12-01
This review focuses on exploring the strengths of modern technology-driven data compiled in the areas of virus gene sequencing and virus protein structures, and their implications for viral diagnosis and therapy. The information for virome analysis (viromics) is generated by the study of viral genomes (entire nucleotide sequence) and viral genes (coding for protein). Presently, the study of viral infectious diseases in terms of etiopathogenesis and development of newer therapeutics is undergoing rapid changes. Currently, viromics relies on deep sequencing, next generation sequencing (NGS) data and public domain databases like GenBank and unique virus-specific databases. Two commonly used NGS platforms, Illumina and Ion Torrent, recommend maximum fragment lengths of about 300 and 400 nucleotides for analysis, respectively. Direct detection of viruses in clinical samples is now evolving using these methods. Presently, there are a considerable number of good treatment options for HBV/HIV/HCV. These viruses, however, show development of drug resistance. The drug susceptibility regions of the genomes are sequenced, and the prediction of drug resistance is now possible from three public-domain resources available on the web. This has been made possible through advances in technology with the advent of high-throughput sequencing and meta-analysis through sophisticated and easy-to-use software and the use of high-speed computers for bioinformatics. More recently, NGS technology has been improved with single-molecule real-time sequencing, with which complete long reads can be obtained with fewer errors, overcoming a limitation of NGS, which is inherently prone to software anomalies in the hands of personnel without adequate training. The development in understanding viruses in terms of their genome, pathobiology, transcriptomics and molecular epidemiology constitutes viromics. These developments are expected to bring about radical changes and advancement, especially in the fields of antiviral therapy and diagnostic virology.
A clinical measure of maximal and rapid stepping in older women.
Medell, J L; Alexander, N B
2000-08-01
In older adults, clinical measures have been used to assess fall risk based on the ability to maintain stance or to complete a functional task. However, in an impending fall situation, a stepping response is often used when strategies to maintain stance are inadequate. We examined how maximal and rapid stepping performance might differ among healthy young, healthy older, and balance-impaired older adults, and how this stepping performance related to other measures of balance and fall risk. Young (Y; n = 12; mean age, 21 years), unimpaired older (UO; n = 12; mean age, 69 years), and balance-impaired older women (IO; n = 10; mean age, 77 years) were tested in their ability to take a maximal step (Maximum Step Length or MSL) and in their ability to take rapid steps in three directions (front, side, and back), termed the Rapid Step Test (RST). Time to complete the RST and stepping errors occurring during the RST were noted. The IO group, compared with the Y and UO groups, demonstrated significantly poorer balance and higher fall risk, based on performance on tasks such as unipedal stance. Mean MSL was significantly higher (by 16%) in the Y than in the UO group and in the UO (by 30%) than in the IO group. Mean RST time was significantly faster in the Y group versus the UO group (by 24%) and in the UO group versus the IO group (by 15%). Mean RST errors tended to be higher in the UO than in the Y group, but were significantly higher only in the UO versus the IO group. Both MSL and RST time correlated strongly (0.5 to 0.8) with other measures of balance and fall risk including unipedal stance, tandem walk, leg strength, and the Activities-Specific Balance Confidence (ABC) scale. We found substantial declines in the ability of both unimpaired and balance-impaired older adults to step maximally and to step rapidly. Stepping performance is closely related to other measures of balance and fall risk and might be considered in future studies as a predictor of falls and fall-related injuries.
Dagley, Ashley; Downs, Brittney; Hagloch, Joseph; Tarbet, E. Bart
2014-01-01
The treatment of progressive vaccinia in individuals has involved antiviral drugs, such as cidofovir (CDV), brincidofovir, and/or tecovirimat, combined with vaccinia immune globulin (VIG). VIG is costly, and its supply is limited, so sparing the use of VIG during treatment is an important objective. VIG sparing was modeled in immunosuppressed mice by maximizing the treatment benefits of CDV combined with VIG to determine the effective treatments that delayed the time to death, reduced cutaneous lesion severity, and/or decreased tissue viral titers. SKH-1 hairless mice immunosuppressed with cyclophosphamide and hairless SCID mice (SHO strain) were infected cutaneously with vaccinia virus. Monotherapy, dual combinations (CDV plus VIG), or triple therapy (topical CDV, parenteral CDV, and VIG) were initiated 2 days postinfection and were given every 3 to 4 days through day 11. The efficacy assessment included survival rate, cutaneous lesion severity, and viral titers. Delays in the time to death and the reduction in lesion severity occurred in the following order of efficacy: triple therapy had greater efficacy than double combinations (CDV plus VIG or topical plus parenteral CDV), which had greater efficacy than VIG alone. Parenteral administration of CDV or VIG was necessary to suppress virus titers in internal organs (liver, lung, and spleen). The skin viral titers were significantly reduced by triple therapy only. The greatest efficacy was achieved by triple therapy. In humans, this regimen should translate to a faster cure rate, thus sparing the amount of VIG used for treatment. PMID:25385098
Adaptation to sensory-motor reflex perturbations is blind to the source of errors.
Hudson, Todd E; Landy, Michael S
2012-01-06
In the study of visual-motor control, perhaps the most familiar findings involve adaptation to externally imposed movement errors. Theories of visual-motor adaptation based on optimal information processing suppose that the nervous system identifies the sources of errors to effect the most efficient adaptive response. We report two experiments using a novel perturbation based on stimulating a visually induced reflex in the reaching arm. Unlike adaptation to an external force, our method induces a perturbing reflex within the motor system itself, i.e., perturbing forces are self-generated. This novel method allows a test of the theory that error source information is used to generate an optimal adaptive response. If the self-generated source of the visually induced reflex perturbation is identified, the optimal response will be via reflex gain control. If the source is not identified, a compensatory force should be generated to counteract the reflex. Gain control is the optimal response to reflex perturbation, both because energy cost and movement errors are minimized. Energy is conserved because neither reflex-induced nor compensatory forces are generated. Precision is maximized because endpoint variance is proportional to force production. We find evidence against source-identified adaptation in both experiments, suggesting that sensory-motor information processing is not always optimal.
Bartram, Jack; Mountjoy, Edward; Brooks, Tony; Hancock, Jeremy; Williamson, Helen; Wright, Gary; Moppett, John; Goulden, Nick; Hubank, Mike
2016-07-01
High-throughput sequencing (HTS) (next-generation sequencing) of the rearranged Ig and T-cell receptor genes promises to be less expensive and more sensitive than current methods of monitoring minimal residual disease (MRD) in patients with acute lymphoblastic leukemia. However, the adoption of new approaches by clinical laboratories requires careful evaluation of all potential sources of error and the development of strategies to ensure the highest accuracy. Timely and efficient clinical use of HTS platforms will depend on combining multiple samples (multiplexing) in each sequencing run. Here we examine Ig heavy-chain gene HTS on the Illumina MiSeq platform for MRD. We identify errors associated with multiplexing that could potentially impact the accuracy of MRD analysis. We optimize a strategy that combines high-purity, sequence-optimized oligonucleotides, dual indexing, and an error-aware demultiplexing approach to minimize errors and maximize sensitivity. We present a probability-based demultiplexing pipeline, Error-Aware Demultiplexer, that is suitable for all MiSeq strategies and accurately assigns samples to the correct identifier without excessive loss of data. Finally, using controls quantified by digital PCR, we show that HTS-MRD can accurately detect as few as 1 in 10⁶ copies of specific leukemic MRD. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
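The abstract does not describe the internals of the Error-Aware Demultiplexer, but the general idea of probability-aware index assignment can be illustrated as follows; the barcode table, quality handling, and log-odds margin are illustrative assumptions, not the published pipeline.

```python
import numpy as np

def demultiplex_read(index_read, quals_phred, barcodes, min_log_odds=3.0):
    """Probability-aware assignment of one index read to a sample barcode.
    Each base contributes log P(observed | true base) using its Phred score;
    the read is assigned only if the best barcode beats the runner-up by
    `min_log_odds` (natural-log units), otherwise it is discarded."""
    err = 10.0 ** (-np.asarray(quals_phred, float) / 10.0)  # per-base error prob
    scores = {}
    for sample, bc in barcodes.items():
        logp = 0.0
        for obs, true, e in zip(index_read, bc, err):
            logp += np.log(1.0 - e) if obs == true else np.log(e / 3.0)
        scores[sample] = logp
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    if len(ranked) > 1 and ranked[0][1] - ranked[1][1] < min_log_odds:
        return None  # ambiguous: discard rather than risk cross-sample error
    return ranked[0][0]

barcodes = {"S1": "ACGTAC", "S2": "TGCATG"}
print(demultiplex_read("ACGTAC", [30, 30, 30, 12, 30, 30], barcodes))
```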
Shipitsin, M; Small, C; Choudhury, S; Giladi, E; Friedlander, S; Nardone, J; Hussain, S; Hurley, A D; Ernst, C; Huang, Y E; Chang, H; Nifong, T P; Rimm, D L; Dunyak, J; Loda, M; Berman, D M; Blume-Jensen, P
2014-09-09
Key challenges of biopsy-based determination of prostate cancer aggressiveness include tumour heterogeneity, biopsy-sampling error, and variations in biopsy interpretation. The resulting uncertainty in risk assessment leads to significant overtreatment, with associated costs and morbidity. We developed a performance-based strategy to identify protein biomarkers predictive of prostate cancer aggressiveness and lethality regardless of biopsy-sampling variation. Prostatectomy samples from a large patient cohort with long follow-up were blindly assessed by expert pathologists who identified the tissue regions with the highest and lowest Gleason grade from each patient. To simulate biopsy-sampling error, a core from a high- and a low-Gleason area from each patient sample was used to generate a 'high' and a 'low' tumour microarray, respectively. Using a quantitative proteomics approach, we identified from 160 candidates 12 biomarkers that predicted prostate cancer aggressiveness (surgical Gleason and TNM stage) and lethal outcome robustly in both high- and low-Gleason areas. Conversely, a previously reported lethal outcome-predictive marker signature for prostatectomy tissue was unable to perform under circumstances of maximal sampling error. Our results have important implications for cancer biomarker discovery in general and development of a sampling error-resistant clinical biopsy test for prediction of prostate cancer aggressiveness.
Weatherwax, Ryan M; Harris, Nigel K; Kilding, Andrew E; Dalleck, Lance C
2018-01-01
Even though cardiorespiratory fitness (CRF) training elicits numerous health benefits, not all individuals have positive training responses following a structured CRF intervention. It has been suggested that the technical error (TE), a combination of biological variability and measurement error, should be used to establish specific training responsiveness criteria to gain further insight into the effectiveness of the training program. To date, most training interventions use an absolute change or a TE from previous findings, which do not take into consideration the training site and equipment used to establish training outcomes or the specific cohort being evaluated. The purpose of this investigation was to retrospectively analyze training responsiveness of two CRF training interventions using two common criteria and a site-specific TE. Sixteen men and women completed two maximal graded exercise tests and verification bouts to identify maximal oxygen consumption (VO2max) and establish a site-specific TE. The TE was then used to retrospectively analyze training responsiveness in comparison to commonly used criteria: percent change of >0% and >+5.6% in VO2max. The TE was found to be 7.7% for relative VO2max. χ² testing showed significant differences in all training criteria for each intervention and pooled data from both interventions, except between %Δ >0 and %Δ >+7.7% in one of the investigations. Training nonresponsiveness ranged from 11.5% to 34.6%. Findings from the present study support the utility of a site-specific TE criterion to quantify training responsiveness. A similar methodology of establishing a site-specific, and even cohort-specific, TE should be considered to establish when true cardiorespiratory training adaptations occur.
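As an illustration of the criterion comparison, the sketch below classifies responders from the percent change in relative VO2max, assuming the 7.7% site-specific TE reported above; the pre/post measurements are invented.

```python
import numpy as np

def classify_responders(vo2_pre, vo2_post, te_pct=7.7):
    """Label each participant as a responder under three criteria:
    percent change > 0, > +5.6 %, and > the site-specific TE."""
    pre, post = np.asarray(vo2_pre, float), np.asarray(vo2_post, float)
    pct_change = 100.0 * (post - pre) / pre
    return {
        "pct_change": pct_change,
        "gt_zero": pct_change > 0.0,
        "gt_5_6": pct_change > 5.6,
        "gt_te": pct_change > te_pct,
    }

res = classify_responders([40.1, 45.3, 38.7], [43.0, 45.9, 41.5])
print(res["pct_change"], res["gt_te"])
```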
Comparison of three optical tracking systems in a complex navigation scenario.
Rudolph, Tobias; Ebert, Lars; Kowal, Jens
2010-01-01
Three-dimensional rotational X-ray imaging with the SIREMOBIL Iso-C3D (Siemens AG, Medical Solutions, Erlangen, Germany) has become a well-established intra-operative imaging modality. In combination with a tracking system, the Iso-C3D provides inherently registered image volumes ready for direct navigation. This is achieved by means of a pre-calibration procedure. The aim of this study was to investigate the influence of the tracking system used on the overall navigation accuracy of direct Iso-C3D navigation. Three models of tracking system were used in the study: Two Optotrak 3020s, a Polaris P4 and a Polaris Spectra system, with both Polaris systems being in the passive operation mode. The evaluation was carried out at two different sites using two Iso-C3D devices. To measure the navigation accuracy, a number of phantom experiments were conducted using an acrylic phantom equipped with titanium spheres. After scanning, a special pointer was used to pinpoint these markers. The difference between the digitized and navigated positions served as the accuracy measure. Up to 20 phantom scans were performed for each tracking system. The average accuracy measured was 0.86 mm and 0.96 mm for the two Optotrak 3020 systems, 1.15 mm for the Polaris P4, and 1.04 mm for the Polaris Spectra system. For the Polaris systems a higher maximal error was found, but all three systems yielded similar minimal errors. On average, all tracking systems used in this study could deliver similar navigation accuracy. The passive Polaris system showed – as expected – higher maximal errors; however, depending on the application constraints, this might be negligible.
NASA Astrophysics Data System (ADS)
Hinder, Ian; Buonanno, Alessandra; Boyle, Michael; Etienne, Zachariah B.; Healy, James; Johnson-McDaniel, Nathan K.; Nagar, Alessandro; Nakano, Hiroyuki; Pan, Yi; Pfeiffer, Harald P.; Pürrer, Michael; Reisswig, Christian; Scheel, Mark A.; Schnetter, Erik; Sperhake, Ulrich; Szilágyi, Bela; Tichy, Wolfgang; Wardell, Barry; Zenginoğlu, Anıl; Alic, Daniela; Bernuzzi, Sebastiano; Bode, Tanja; Brügmann, Bernd; Buchman, Luisa T.; Campanelli, Manuela; Chu, Tony; Damour, Thibault; Grigsby, Jason D.; Hannam, Mark; Haas, Roland; Hemberger, Daniel A.; Husa, Sascha; Kidder, Lawrence E.; Laguna, Pablo; London, Lionel; Lovelace, Geoffrey; Lousto, Carlos O.; Marronetti, Pedro; Matzner, Richard A.; Mösta, Philipp; Mroué, Abdul; Müller, Doreen; Mundim, Bruno C.; Nerozzi, Andrea; Paschalidis, Vasileios; Pollney, Denis; Reifenberger, George; Rezzolla, Luciano; Shapiro, Stuart L.; Shoemaker, Deirdre; Taracchini, Andrea; Taylor, Nicholas W.; Teukolsky, Saul A.; Thierfelder, Marcus; Witek, Helvi; Zlochower, Yosef
2013-01-01
The Numerical-Relativity-Analytical-Relativity (NRAR) collaboration is a joint effort between members of the numerical relativity, analytical relativity and gravitational-wave data analysis communities. The goal of the NRAR collaboration is to produce numerical-relativity simulations of compact binaries and use them to develop accurate analytical templates for the LIGO/Virgo Collaboration to use in detecting gravitational-wave signals and extracting astrophysical information from them. We describe the results of the first stage of the NRAR project, which focused on producing an initial set of numerical waveforms from binary black holes with moderate mass ratios and spins, as well as one non-spinning binary configuration which has a mass ratio of 10. All of the numerical waveforms are analysed in a uniform and consistent manner, with numerical errors evaluated using an analysis code created by members of the NRAR collaboration. We compare previously-calibrated, non-precessing analytical waveforms, notably the effective-one-body (EOB) and phenomenological template families, to the newly-produced numerical waveforms. We find that when the binary's total mass is ˜100-200M⊙, current EOB and phenomenological models of spinning, non-precessing binary waveforms have overlaps above 99% (for advanced LIGO) with all of the non-precessing-binary numerical waveforms with mass ratios ⩽4, when maximizing over binary parameters. This implies that the loss of event rate due to modelling error is below 3%. Moreover, the non-spinning EOB waveforms previously calibrated to five non-spinning waveforms with mass ratio smaller than 6 have overlaps above 99.7% with the numerical waveform with a mass ratio of 10, without even maximizing on the binary parameters.
A boundary-optimized rejection region test for the two-sample binomial problem.
Gabriel, Erin E; Nason, Martha; Fay, Michael P; Follmann, Dean A
2018-03-30
Testing the equality of 2 proportions for a control group versus a treatment group is a well-researched statistical problem. In some settings, there may be strong historical data that allow one to reliably expect that the control proportion is one, or nearly so. While one-sample tests or comparisons to historical controls could be used, neither can rigorously control the type I error rate in the event the true control rate changes. In this work, we propose an unconditional exact test that exploits the historical information while controlling the type I error rate. We sequentially construct a rejection region by first maximizing the rejection region in the space where all controls have an event, subject to the constraint that our type I error rate does not exceed α for any true event rate; then with any remaining α we maximize the additional rejection region in the space where one control avoids the event, and so on. When the true control event rate is one, our test is the most powerful nonrandomized test for all points in the alternative space. When the true control event rate is nearly one, we demonstrate that our test has equal or higher mean power, averaging over the alternative space, than a variety of well-known tests. For the comparison of 4 controls and 4 treated subjects, our proposed test has higher power than all comparator tests. We demonstrate the properties of our proposed test by simulation and use our method to design a malaria vaccine trial. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
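The full sequential construction is involved, but the key constraint — that a candidate rejection region keeps its type I error below α for every possible common event rate — can be checked directly. The sketch below evaluates the worst-case size of an illustrative region for 4 controls and 4 treated subjects; the region itself is made up for illustration and is not the test proposed in the article.

```python
import numpy as np
from scipy.stats import binom

def max_type1_error(region, n_control, n_treated, grid=2001):
    """Worst-case type I error of a candidate rejection region under H0:
    both groups share the same event probability p. `region` is a set of
    (events_in_control, events_in_treated) outcomes."""
    ps = np.linspace(0.0, 1.0, grid)
    size = np.zeros_like(ps)
    for xc, xt in region:
        size += binom.pmf(xc, n_control, ps) * binom.pmf(xt, n_treated, ps)
    return ps[size.argmax()], size.max()

# e.g. reject when all 4 controls have the event and at most 1 treated subject does
region = {(4, 0), (4, 1)}
p_star, alpha_star = max_type1_error(region, n_control=4, n_treated=4)
print(p_star, alpha_star)
```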
Kokkinos, Peter; Kaminsky, Leonard A; Arena, Ross; Zhang, Jiajia; Myers, Jonathan
2017-08-15
Impaired cardiorespiratory fitness (CRF) is closely linked to chronic illness and associated with adverse events. The American College of Sports Medicine (ACSM) regression equations (ACSM equations) developed to estimate oxygen uptake have known limitations, leading to well-documented overestimation of CRF, especially at higher work rates. Thus, there is a need to explore alternative equations to more accurately predict CRF. We assessed maximal oxygen uptake (VO2max) obtained directly by open-circuit spirometry in 7,983 apparently healthy subjects who participated in the Fitness Registry and the Importance of Exercise National Database (FRIEND). We randomly sampled 70% of the participants from each of the following age categories: <40, 40 to 50, 50 to 70, and ≥70, and used the remaining 30% for validation. Multivariable linear regression analysis was applied to identify the most relevant variables and construct the best prediction model for VO2max. Treadmill speed and treadmill speed × grade were considered in the final model as predictors of measured VO2max, and the following equation was generated: VO2max in ml O2/kg/min = speed (m/min) × (0.17 + fractional grade × 0.79) + 3.5. The FRIEND equation predicted VO2max with an overall error >4 times lower than the error associated with the traditional ACSM equations (5.1 ± 18.3% vs 21.4 ± 24.9%, respectively). Overestimation associated with the ACSM equation was accentuated when different protocols were considered separately. In conclusion, the FRIEND equation predicts VO2max more precisely than the traditional ACSM equations, with an overall error >4 times lower than that associated with the ACSM equations. Published by Elsevier Inc.
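The reported equation is straightforward to apply. The sketch below implements it as stated in the abstract and, for comparison, the commonly cited ACSM running equation in its usual textbook form (the ACSM form is an assumption for illustration, not quoted from this study).

```python
def vo2max_friend(speed_m_per_min, fractional_grade):
    """FRIEND equation from the abstract:
    VO2max (ml O2/kg/min) = speed * (0.17 + grade * 0.79) + 3.5"""
    return speed_m_per_min * (0.17 + fractional_grade * 0.79) + 3.5

def vo2_acsm_running(speed_m_per_min, fractional_grade):
    """Widely cited ACSM running equation (assumed textbook form):
    VO2 = 0.2*speed + 0.9*speed*grade + 3.5"""
    return 0.2 * speed_m_per_min + 0.9 * speed_m_per_min * fractional_grade + 3.5

# hypothetical peak treadmill stage: about 167 m/min (10 km/h) at 4 % grade
print(vo2max_friend(167.0, 0.04), vo2_acsm_running(167.0, 0.04))
```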
A long-term follow-up evaluation of electronic health record prescribing safety
Abramson, Erika L; Malhotra, Sameer; Osorio, S Nena; Edwards, Alison; Cheriff, Adam; Cole, Curtis; Kaushal, Rainu
2013-01-01
Objective: To be eligible for incentives through the Electronic Health Record (EHR) Incentive Program, many providers using older or locally developed EHRs will be transitioning to new, commercial EHRs. We previously evaluated prescribing errors made by providers in the first year following transition from a locally developed EHR with minimal prescribing clinical decision support (CDS) to a commercial EHR with robust CDS. Following system refinements, we conducted this study to assess the rates and types of errors 2 years after transition and determine the evolution of errors. Materials and methods: We conducted a mixed methods cross-sectional case study of 16 physicians at an academic-affiliated ambulatory clinic from April to June 2010. We utilized standardized prescription and chart review to identify errors. Fourteen providers also participated in interviews. Results: We analyzed 1905 prescriptions. The overall prescribing error rate was 3.8 per 100 prescriptions (95% CI 2.8 to 5.1). Error rates were significantly lower 2 years after transition (p<0.001 compared to pre-implementation, 12 weeks and 1 year after transition). Rates of near misses remained unchanged. Providers positively appreciated most system refinements, particularly reduced alert firing. Discussion: Our study suggests that over time and with system refinements, use of a commercial EHR with advanced CDS can lead to low prescribing error rates, although more serious errors may require targeted interventions to eliminate them. Reducing alert firing frequency appears particularly important. Our results provide support for federal efforts promoting meaningful use of EHRs. Conclusions: Ongoing error monitoring can allow CDS to be optimally tailored and help achieve maximal safety benefits. Clinical Trials Registration: ClinicalTrials.gov, Identifier: NCT00603070. PMID:23578816
NASA Astrophysics Data System (ADS)
Lakshminarayanan, Abirami; Reddy, B. Uma; Raghav, Nallani; Ravi, Vijay Kumar; Kumar, Anuj; Maiti, Prabal K.; Sood, A. K.; Jayaraman, N.; Das, Saumitra
2015-10-01
A RNAi based antiviral strategy holds the promise to impede hepatitis C viral (HCV) infection overcoming the problem of emergence of drug resistant variants, usually encountered in the interferon free direct-acting antiviral therapy. Targeted delivery of siRNA helps minimize adverse `off-target' effects and maximize the efficacy of therapeutic response. Herein, we report the delivery of siRNA against the conserved 5'-untranslated region (UTR) of HCV RNA using a liver-targeted dendritic nano-vector functionalized with a galactopyranoside ligand (DG). Physico-chemical characterization revealed finer details of complexation of DG with siRNA, whereas molecular dynamic simulations demonstrated sugar moieties projecting ``out'' in the complex. Preferential delivery of siRNA to the liver was achieved through a highly specific ligand-receptor interaction between dendritic galactose and the asialoglycoprotein receptor. The siRNA-DG complex exhibited perinuclear localization in liver cells and co-localization with viral proteins. The histopathological studies showed the systemic tolerance and biocompatibility of DG. Further, whole body imaging and immunohistochemistry studies confirmed the preferential delivery of the nucleic acid to mice liver. Significant decrease in HCV RNA levels (up to 75%) was achieved in HCV subgenomic replicon and full length HCV-JFH1 infectious cell culture systems. The multidisciplinary approach provides the `proof of concept' for restricted delivery of therapeutic siRNAs using a target oriented dendritic nano-vector. Electronic supplementary information (ESI) available: Spectral data and experimental details. See DOI: 10.1039/c5nr02898a
Repression of the Chromatin-Tethering Domain of Murine Leukemia Virus p12.
Brzezinski, Jonathon D; Modi, Apexa; Liu, Mengdan; Roth, Monica J
2016-12-15
Murine leukemia virus (MLV) p12, encoded within Gag, binds the viral preintegration complex (PIC) to the mitotic chromatin. This acts to anchor the viral PIC in the nucleus as the nuclear envelope re-forms postmitosis. Mutations within the p12 C terminus (p12 PM13 to PM15) block early stages in viral replication. Within the p12 PM13 region (p12 residues 60-65, PSPMA), our studies indicated that chromatin tethering was not detected when the wild-type (WT) p12 protein (M63) was expressed as a green fluorescent protein (GFP) fusion; however, constructs bearing p12-I63 were tethered. N-terminal truncations of the activated p12-I63-GFP indicated that tethering increased further upon deletion of p12 residues 25-34 (DLLTEDPPPY), which includes the late domain required for viral assembly. The p12 PM15 sequence (residues 70-74, RREPP) is critical for wild-type viral viability; however, virions bearing the PM15 mutation (residues 70-74 mutated to AAAAA) together with a second M63I mutation were viable, with a titer 18-fold lower than that of the WT. The p12 M63I mutation amplified chromatin tethering and compensated for the loss of chromatin binding of p12 PM15. Rescue of the nonviable p12-M63-PM15 mutant with prototype foamy virus (PFV) and Kaposi's sarcoma herpesvirus (KSHV) tethering sequences confirmed the function of p12 residues 70-74 in chromatin binding. Minimally, full-strength tethering was seen with only p12 residues 61-71 (SPIASRLRGRR) fused to GFP. These results indicate that the p12 C terminus alone is sufficient for chromatin binding and that the presence of the p12 25-34 DLLTEDPPPY motif in the N terminus suppresses the ability to tether. This study defines a regulatory mechanism controlling the differential roles of the MLV p12 protein in early and late replication. During viral assembly and egress, the late domain within the p12 N terminus functions to bind host vesicle release factors. During viral entry, the C terminus of p12 is required for tethering to host mitotic chromosomes. Our studies indicate that the p12 domain including the PPPY late sequence temporally represses the p12 chromatin tethering motif. Maximal p12 tethering was identified with only an 11-amino-acid minimal chromatin tethering motif encoded at p12 residues 61-71. Within this region, the p12-M63I substitution switches p12 into a tethering-competent state, partially rescuing the p12-PM15 tethering mutant. A model for how this conformational change regulates early versus late functions is presented. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
Network Dynamics Underlying Speed-Accuracy Trade-Offs in Response to Errors
Agam, Yigal; Carey, Caitlin; Barton, Jason J. S.; Dyckman, Kara A.; Lee, Adrian K. C.; Vangel, Mark; Manoach, Dara S.
2013-01-01
The ability to dynamically and rapidly adjust task performance based on its outcome is fundamental to adaptive, flexible behavior. Over trials of a task, responses speed up until an error is committed and after the error responses slow down. These dynamic adjustments serve to optimize performance and are well-described by the speed-accuracy trade-off (SATO) function. We hypothesized that SATOs based on outcomes reflect reciprocal changes in the allocation of attention between the internal milieu and the task-at-hand, as indexed by reciprocal changes in activity between the default and dorsal attention brain networks. We tested this hypothesis using functional MRI to examine the pattern of network activation over a series of trials surrounding and including an error. We further hypothesized that these reciprocal changes in network activity are coordinated by the posterior cingulate cortex (PCC) and would rely on the structural integrity of its white matter connections. Using diffusion tensor imaging, we examined whether fractional anisotropy of the posterior cingulum bundle correlated with the magnitude of reciprocal changes in network activation around errors. As expected, reaction time (RT) in trials surrounding errors was consistent with predictions from the SATO function. Activation in the default network was: (i) inversely correlated with RT, (ii) greater on trials before than after an error and (iii) maximal at the error. In contrast, activation in the right intraparietal sulcus of the dorsal attention network was (i) positively correlated with RT and showed the opposite pattern: (ii) less activation before than after an error and (iii) the least activation on the error. Greater integrity of the posterior cingulum bundle was associated with greater reciprocity in network activation around errors. These findings suggest that dynamic changes in attention to the internal versus external milieu in response to errors underlie SATOs in RT and are mediated by the PCC. PMID:24069223
Pennation angle dependency in skeletal muscle tissue doppler strain in dynamic contractions.
Lindberg, Frida; Öhberg, Fredrik; Granåsen, Gabriel; Brodin, Lars-Åke; Grönlund, Christer
2011-07-01
Tissue velocity imaging (TVI) is a Doppler based ultrasound technique that can be used to study regional deformation in skeletal muscle tissue. The aim of this study was to develop a biomechanical model to describe the TVI strain's dependency on the pennation angle. We demonstrate its impact as the subsequent strain measurement error using dynamic elbow contractions from the medial and the lateral part of biceps brachii at two different loadings; 5% and 25% of maximum voluntary contraction (MVC). The estimated pennation angles were on average about 4° in extended position and increased to a maximal of 13° in flexed elbow position. The corresponding relative angular error spread from around 7% up to around 40%. To accurately apply TVI on skeletal muscles, the error due to angle changes should be compensated for. As a suggestion, this could be done according to the presented model. Copyright © 2011 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
Precoded spatial multiplexing MIMO system with spatial component interleaver.
Gao, Xiang; Wu, Zhanji
In this paper, the performance of a precoded bit-interleaved coded modulation (BICM) spatial multiplexing multiple-input multiple-output (MIMO) system with a spatial component interleaver is investigated. For the ideal precoded spatial multiplexing MIMO system with a spatial component interleaver based on singular value decomposition (SVD) of the MIMO channel, the average pairwise error probability (PEP) of coded bits is derived. Based on the PEP analysis, an optimum spatial Q-component interleaver design criterion is provided to achieve the minimum error probability. For the proposed limited-feedback precoded scheme with a linear zero-forcing (ZF) receiver, in order to minimize a bound on the average probability of a symbol vector error, a novel effective signal-to-noise ratio (SNR)-based precoding matrix selection criterion and a simplified criterion are proposed. Based on the average mutual information (AMI) maximization criterion, the optimal constellation rotation angles are investigated. Simulation results indicate that the optimized spatial multiplexing MIMO system with a spatial component interleaver achieves significant performance advantages over the conventional spatial multiplexing MIMO system.
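As background for the SVD-based ideal precoding mentioned above, the toy example below shows how precoding with the right singular vectors and filtering with the left singular vectors diagonalizes the channel into parallel subchannels; it is a textbook ideal-CSI sketch, not the paper's limited-feedback or interleaved scheme.

```python
import numpy as np

rng = np.random.default_rng(2)
nt, nr = 4, 4
# random Rayleigh-like MIMO channel
H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)

U, s, Vh = np.linalg.svd(H)
symbols = (rng.choice([-1, 1], size=nt) + 1j * rng.choice([-1, 1], size=nt)) / np.sqrt(2)
x = Vh.conj().T @ symbols          # precode with the right singular vectors
y = H @ x                          # noiseless channel, kept simple for clarity
r = U.conj().T @ y                 # receive filtering with the left singular vectors
print(np.allclose(r, s * symbols)) # channel reduced to parallel scalar subchannels
```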
Analysis of Compression Algorithm in Ground Collision Avoidance Systems (Auto-GCAS)
NASA Technical Reports Server (NTRS)
Schmalz, Tyler; Ryan, Jack
2011-01-01
The Automatic Ground Collision Avoidance System (Auto-GCAS) utilizes Digital Terrain Elevation Data (DTED) stored onboard an aircraft to determine potential recovery maneuvers. Because of the current limitations of computer hardware on military airplanes such as the F-22 and F-35, the DTED must be compressed through a lossy technique called binary-tree tip-tilt. The purpose of this study is to determine the accuracy of the compressed data with respect to the original DTED. This study is mainly interested in the magnitude of the error between the two as well as the overall distribution of the errors throughout the DTED. By understanding how the errors of the compression technique are affected by various factors (topography, density of sampling points, sub-sampling techniques, etc.), modifications can be made to the compression technique resulting in better accuracy. This, in turn, would minimize unnecessary activation of Auto-GCAS during flight as well as maximize its contribution to fighter safety.
NASA Astrophysics Data System (ADS)
Jarabo-Amores, María-Pilar; la Mata-Moya, David de; Gil-Pita, Roberto; Rosa-Zurera, Manuel
2013-12-01
The application of supervised learning machines trained to minimize the cross-entropy error to radar detection is explored in this article. The detector is implemented with a learning machine that realizes a discriminant function, whose output is compared to a threshold selected to fix a desired probability of false alarm. The study is based on the calculation of the function that the learning machine approximates during training, and on the application of a sufficient condition for a discriminant function to be used to approximate the optimum Neyman-Pearson (NP) detector. In this article, the function that a supervised learning machine approximates after being trained to minimize the cross-entropy error is obtained. This discriminant function can be used to implement the NP detector, which maximizes the probability of detection while maintaining the probability of false alarm below or equal to a predefined value. Some experiments on signal detection using neural networks are also presented to test the validity of the study.
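A minimal sketch of the idea — train a discriminant by minimizing cross-entropy, then pick the decision threshold from signal-absent data to fix the desired false-alarm probability — is given below on synthetic data; the Gaussian signal model and logistic discriminant are illustrative assumptions, not the detectors studied in the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic detection problem: H0 noise only, H1 signal plus noise (toy example)
n = 5000
h0 = rng.normal(0.0, 1.0, size=(n, 2))
h1 = rng.normal(0.7, 1.0, size=(n, 2))
X = np.vstack([h0, h1])
y = np.r_[np.zeros(n), np.ones(n)]

# logistic discriminant trained by gradient descent on the cross-entropy error
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# set the detection threshold from H0-only scores to fix the false-alarm probability
pfa_target = 1e-2
scores_h0 = 1.0 / (1.0 + np.exp(-(h0 @ w + b)))
scores_h1 = 1.0 / (1.0 + np.exp(-(h1 @ w + b)))
threshold = np.quantile(scores_h0, 1.0 - pfa_target)
print("empirical Pfa:", np.mean(scores_h0 > threshold))
print("empirical Pd: ", np.mean(scores_h1 > threshold))
```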
Super-linear Precision in Simple Neural Population Codes
NASA Astrophysics Data System (ADS)
Schwab, David; Fiete, Ila
2015-03-01
A widely used tool for quantifying the precision with which a population of noisy sensory neurons encodes the value of an external stimulus is the Fisher Information (FI). Maximizing the FI is also a commonly used objective for constructing optimal neural codes. The primary utility and importance of the FI arises because it gives, through the Cramer-Rao bound, the smallest mean-squared error achievable by any unbiased stimulus estimator. However, it is well-known that when neural firing is sparse, optimizing the FI can result in codes that perform very poorly when considering the resulting mean-squared error, a measure with direct biological relevance. Here we construct optimal population codes by minimizing mean-squared error directly and study the scaling properties of the resulting network, focusing on the optimal tuning curve width. We then extend our results to continuous attractor networks that maintain short-term memory of external stimuli in their dynamics. Here we find similar scaling properties in the structure of the interactions that minimize diffusive information loss.
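For reference, the quantities discussed above take a particularly simple form for a population of independent Poisson-spiking neurons with tuning curves f_i(s); these are standard textbook expressions, not results specific to this work.

```latex
I(s) \;=\; \sum_{i=1}^{N} \frac{f_i'(s)^{2}}{f_i(s)},
\qquad
\operatorname{Var}\!\left(\hat{s}\right) \;\ge\; \frac{1}{I(s)}
\quad\text{(Cramer-Rao bound, for any unbiased estimator } \hat{s}\text{).}
```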
Reis, Victor M.; Silva, António J.; Ascensão, António; Duarte, José A.
2005-01-01
The present study aimed to verify whether the inclusion of intensities above the lactate threshold (LT) in the VO2/running speed regression (RSR) affects the estimation error of the accumulated oxygen deficit (AOD) during treadmill running performed by endurance-trained subjects. Fourteen male endurance-trained runners performed a submaximal treadmill running test followed by an exhaustive supramaximal test 48 h later. The total energy demand (TED) and the AOD during the supramaximal test were calculated from the RSR established on first testing. For those purposes two regressions were used: a complete regression (CR) including all available submaximal VO2 measurements and a sub-threshold regression (STR) including solely the VO2 values measured during exercise intensities below the LT. TED mean values obtained with CR and STR were not significantly different under the two conditions of analysis (177.71 ± 5.99 and 174.03 ± 6.53 ml·kg-1, respectively). Also, the mean values of AOD obtained with CR and STR did not differ under the two conditions (49.75 ± 8.38 and 45.89 ± 9.79 ml·kg-1, respectively). Moreover, the precision of those estimations was also similar under the two procedures. The mean error for TED estimation was 3.27 ± 1.58 and 3.41 ± 1.85 ml·kg-1 (for CR and STR, respectively) and the mean error for AOD estimation was 5.03 ± 0.32 and 5.14 ± 0.35 ml·kg-1 (for CR and STR, respectively). The results indicated that the inclusion of exercise intensities above the LT in the RSR does not improve the precision of the AOD estimation in endurance-trained runners. However, the use of STR may induce an underestimation of AOD compared with the use of CR. Key Points: It has been suggested that the inclusion of exercise intensities above the lactate threshold in the VO2/power regression can significantly affect the estimation of the energy cost and, thus, the estimation of the AOD. However, data on the precision of those AOD measurements are rarely provided. We evaluated the effects of the inclusion of those exercise intensities on the AOD precision. The results indicated that the inclusion of exercise intensities above the lactate threshold in the VO2/running speed regression does not improve the precision of AOD estimation in endurance-trained runners. However, the use of sub-threshold regressions may induce an underestimation of AOD compared with the use of complete regressions. PMID:24501560
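A sketch of the classical accumulated oxygen deficit calculation is given below, assuming the usual workflow (regress submaximal VO2 on running speed, extrapolate to the supramaximal speed to estimate the total energy demand, then subtract the oxygen actually consumed); the numbers are invented and the procedure is not claimed to match the authors' exact protocol.

```python
import numpy as np

def accumulated_o2_deficit(sub_speeds, sub_vo2, supra_speed, t_exhaust_min,
                           measured_vo2_total):
    """Classical MAOD-style estimate: extrapolate the submaximal VO2/speed
    regression to the supramaximal speed and subtract the measured O2 uptake."""
    slope, intercept = np.polyfit(sub_speeds, sub_vo2, 1)
    demand_rate = slope * supra_speed + intercept      # ml/kg/min at supramaximal speed
    total_demand = demand_rate * t_exhaust_min         # ml/kg over the exhaustive bout
    return total_demand - measured_vo2_total           # ml/kg oxygen deficit

# hypothetical submaximal stages (m/min) and steady-state VO2 (ml/kg/min)
sub_speeds = np.array([180, 200, 220, 240])
sub_vo2 = np.array([38.0, 42.5, 47.0, 51.5])
print(accumulated_o2_deficit(sub_speeds, sub_vo2, 300, 3.0, 130.0))
```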
Boehme, Philip; Stellberger, Thorsten; Solanki, Manish; Zhang, Wenli; Schulz, Eric; Bergmann, Thorsten; Liu, Jing; Doerner, Johannes; Baiker, Armin E.
2015-01-01
High-capacity adenoviral vectors (HCAdVs) are promising tools for gene therapy as well as for genetic engineering. However, one limitation of the HCAdV vector system is the complex, time-consuming, and labor-intensive production process and the following quality control procedure. Since HCAdVs are deleted for all viral coding sequences, a helper virus (HV) is needed in the production process to provide the sequences for all viral proteins in trans. For the purification procedure of HCAdV, cesium chloride density gradient centrifugation is usually performed followed by buffer exchange using dialysis or comparable methods. However, performing these steps is technically difficult, potentially error-prone, and not scalable. Here, we establish a new protocol for small-scale production of HCAdV based on commercially available adenovirus purification systems and a standard method for the quality control of final HCAdV preparations. For titration of final vector preparations, we established a droplet digital polymerase chain reaction (ddPCR) that uses a standard free-end-point PCR in small droplets of defined volume. By using different probes, this method is capable of detecting and quantifying HCAdV and HV in one reaction independent of reference material, rendering this method attractive for accurately comparing viral titers between different laboratories. In summary, we demonstrate that it is possible to produce HCAdV in a small scale of sufficient quality and quantity to perform experiments in cell culture, and we established a reliable protocol for vector titration based on ddPCR. Our method significantly reduces time and required equipment to perform HCAdV production. In the future the ddPCR technology could be advantageous for titration of other viral vectors commonly used in gene therapy. PMID:25640117
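For orientation, absolute quantification in droplet digital PCR typically uses a Poisson correction for droplets containing more than one template; the sketch below applies that textbook correction, with the droplet volume given as an assumed default rather than a value from this study.

```python
import numpy as np

def ddpcr_copies_per_ul(n_positive, n_total, droplet_volume_ul=0.00085):
    """Poisson correction commonly used for ddPCR quantification: the mean
    number of templates per droplet is lambda = -ln(1 - p), where p is the
    fraction of positive droplets. The default droplet volume (~0.85 nL)
    is an assumed value for illustration only."""
    p = n_positive / n_total
    lam = -np.log(1.0 - p)          # average copies per droplet
    return lam / droplet_volume_ul  # copies per microliter of reaction

print(ddpcr_copies_per_ul(n_positive=4200, n_total=15000))
```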
Sánchez-Luque, Francisco J.; Stich, Michael; Manrubia, Susanna; Briones, Carlos; Berzal-Herranz, Alfredo
2014-01-01
The human immunodeficiency virus type-1 (HIV-1) genome contains multiple, highly conserved structural RNA domains that play key roles in essential viral processes. Interference with the function of these RNA domains, either by disrupting their structures or by blocking their interaction with viral or cellular factors, may seriously compromise HIV-1 viability. RNA aptamers are amongst the most promising synthetic molecules able to interact with structural domains of viral genomes. However, aptamer shortening down to the minimal active domain is usually necessary for scaling up production, which requires very time-consuming, trial-and-error approaches. Here we report on the in vitro selection of 64 nt-long specific aptamers against the complete 5′-untranslated region of the HIV-1 genome, which inhibit more than 75% of HIV-1 production in a human cell line. The analysis of the selected sequences and structures allowed for the identification of a highly conserved 16 nt-long stem-loop motif containing a common 8 nt-long apical loop. Based on this result, an in silico designed 16 nt-long RNA aptamer, termed RNApt16, was synthesized, with sequence 5′-CCCCGGCAAGGAGGGG-3′. The HIV-1 inhibition efficiency of such an aptamer was close to 85%, thus constituting the shortest RNA molecule so far described that efficiently interferes with HIV-1 replication. PMID:25175101
Cell-Mediated Immunity to Target the Persistent Human Immunodeficiency Virus Reservoir
Montaner, Luis J.
2017-01-01
Abstract Effective clearance of virally infected cells requires the sequential activity of innate and adaptive immunity effectors. In human immunodeficiency virus (HIV) infection, naturally induced cell-mediated immune responses rarely eradicate infection. However, optimized immune responses could potentially be leveraged in HIV cure efforts if epitope escape and lack of sustained effector memory responses were to be addressed. Here we review leading HIV cure strategies that harness cell-mediated control against HIV in stably suppressed antiretroviral-treated subjects. We focus on strategies that may maximize target recognition and eradication by the sequential activation of a reconstituted immune system, together with delivery of optimal T-cell responses that can eliminate the reservoir and serve as means to maintain control of HIV spread in the absence of antiretroviral therapy (ART). As evidenced by the evolution of ART, we argue that a combination of immune-based strategies will be a superior path to cell-mediated HIV control and eradication. Available data from several human pilot trials already identify target strategies that may maximize antiviral pressure by joining innate and engineered T cell responses toward testing for sustained HIV remission and/or cure. PMID:28520969
Experimental test of visuomotor updating models that explain perisaccadic mislocalization.
Van Wetter, Sigrid M C I; Van Opstal, A John
2008-10-23
Localization of a brief visual target is inaccurate when presented around saccade onset. Perisaccadic mislocalization is maximal in the saccade direction and varies systematically with the target-saccade onset disparity. It has been hypothesized that this effect is either due to a sluggish representation of eye position, to low-pass filtering of the visual event, to saccade-induced compression of visual space, or to a combination of these effects. Despite their differences, these schemes all predict that the pattern of localization errors varies systematically with the saccade amplitude and kinematics. We tested these predictions for the double-step paradigm by analyzing the errors for saccades of widely varying amplitudes. Our data show that the measured error patterns are only mildly influenced by the primary-saccade amplitude over a large range of saccade properties. An alternative possibility, better accounting for the data, assumes that around saccade onset perceived target location undergoes a uniform shift in the saccade direction that varies with amplitude only for small saccades. The strength of this visual effect saturates at about 10 deg and also depends on target duration. Hence, we propose that perisaccadic mislocalization results from errors in visual-spatial perception rather than from sluggish oculomotor feedback.
Controlled sound field with a dual layer loudspeaker array
NASA Astrophysics Data System (ADS)
Shin, Mincheol; Fazi, Filippo M.; Nelson, Philip A.; Hirono, Fabio C.
2014-08-01
Controlled sound interference has been extensively investigated using a prototype dual layer loudspeaker array comprising 16 loudspeakers. Results are presented for measures of array performance such as input signal power, directivity of sound radiation and accuracy of sound reproduction resulting from the application of conventional control methods such as minimization of error in mean squared pressure, maximization of energy difference and minimization of weighted pressure error and energy. Procedures for selecting the tuning parameters have also been introduced. With these conventional concepts aimed at the production of acoustically bright and dark zones, all the control methods used require a trade-off between radiation directivity and reproduction accuracy in the bright zone. An alternative solution is proposed that achieves better performance on the presented measures simultaneously by inserting a low-priority zone, termed the “gray” zone. This involves the weighted minimization of mean-squared errors in the bright and dark zones together with the gray zone, in which the minimization error is given less importance. This results in the production of a directional bright zone in which the accuracy of sound reproduction is maintained with less required input power. The results of simulations and experiments are shown to be in excellent agreement.
NASA Technical Reports Server (NTRS)
Hall, W. E., Jr.; Gupta, N. K.; Hansen, R. S.
1978-01-01
An integrated approach to rotorcraft system identification is described. This approach consists of sequential application of (1) data filtering to estimate states of the system and sensor errors, (2) model structure estimation to isolate significant model effects, and (3) parameter identification to quantify the coefficients of the model. An input design algorithm is described that can be used to design control inputs that maximize parameter estimation accuracy. Details of each aspect of the rotorcraft identification approach are given. Examples of both simulated and actual flight data processing are given to illustrate each phase of processing. The procedure is shown to provide a means of calibrating sensor errors in flight data, quantifying high-order state variable models from the flight data, and consequently computing related stability and control design models.
On the problem of data assimilation by means of synchronization
NASA Astrophysics Data System (ADS)
Szendro, Ivan G.; RodríGuez, Miguel A.; López, Juan M.
2009-10-01
The potential use of synchronization as a method for data assimilation is investigated in a Lorenz96 model. Data representing the reality are obtained from a Lorenz96 model with added noise. We study the assimilation scheme by means of synchronization for different noise intensities. We use a novel plot representation of the synchronization error in a phase diagram consisting of two variables: the amplitude and the width of the error after a suitable logarithmic transformation (the so-called mean-variance of logarithms diagram). Our main result concerns the existence of an "optimal" coupling for which the synchronization is maximal. We finally show how this allows us to quantify the degree of assimilation, providing a criterion for the selection of optimal couplings and validity of models.
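A minimal sketch of the synchronization (nudging) scheme described above, under assumed settings: a 40-variable Lorenz96 model with forcing F = 8 integrated by fourth-order Runge-Kutta, noisy observations generated from a "truth" run, and a range of coupling strengths; the mean synchronization error is reported for each coupling. Parameter values are illustrative, not those of the paper.

import numpy as np

N, F, dt = 40, 8.0, 0.05

def lorenz96(x):
    # dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4(x):
    k1 = lorenz96(x)
    k2 = lorenz96(x + 0.5 * dt * k1)
    k3 = lorenz96(x + 0.5 * dt * k2)
    k4 = lorenz96(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(0)
truth = rng.standard_normal(N)
for _ in range(2000):          # spin up onto the attractor
    truth = rk4(truth)

noise_sigma = 0.5
for k in (0.0, 0.5, 1.0, 2.0, 5.0):   # coupling (nudging) strengths
    x_true, model = truth.copy(), rng.standard_normal(N)
    errs = []
    for _ in range(5000):
        x_true = rk4(x_true)
        obs = x_true + noise_sigma * rng.standard_normal(N)  # noisy data from the "reality"
        model = rk4(model) + dt * k * (obs - model)          # synchronization term
        errs.append(np.sqrt(np.mean((model - x_true) ** 2)))
    print(f"coupling k = {k:>3}: mean sync error (last 1000 steps) = {np.mean(errs[-1000:]):.3f}")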
Criteria for the use of regression analysis for remote sensing of sediment and pollutants
NASA Technical Reports Server (NTRS)
Whitlock, C. H.; Kuo, C. Y.; Lecroy, S. R.
1982-01-01
An examination of limitations, requirements, and precision of the linear multiple-regression technique for quantification of marine environmental parameters is conducted. Both environmental and optical physics conditions have been defined for which an exact solution to the signal response equations is of the same form as the multiple regression equation. Various statistical parameters are examined to define a criterion for selection of an unbiased fit when upwelled radiance values contain error and are correlated with each other. Field experimental data are examined to define data smoothing requirements in order to satisfy the criteria of Daniel and Wood (1971). Recommendations are made concerning improved selection of ground-truth locations to maximize variance and to minimize physical errors associated with the remote sensing experiment.
Schistosomiasis and international travel.
Corachan, Manuel
2002-08-15
Infection with Schistosoma species is acquired by exposure to fresh water that harbors cercariae released by infected snails. Although the route of infection is clear, clinical presentation of the established infection in the nonimmune tourist typically differs from that in the local population of areas of endemicity. For the health care practitioner, the traveler's syndrome presents distinctive management problems: water-transmitted bacterial and viral infections may coexist, and identification of the stage of disease at presentation, along with identification of the causative species, will maximize treatment options. Travel medicine clinics serve as epidemiological antennae, helping to identify the dynamics of species transmission in geographically distinct areas. Education of persons traveling to areas of endemicity and the development of mechanical protection against exposure are needed.
Omorczyk, Jarosław; Nosiadek, Leszek; Ambroży, Tadeusz; Nosiadek, Andrzej
2015-01-01
The main aim of this study was to verify the usefulness of selected simple methods of recording and fast biomechanical analysis performed by judges of artistic gymnastics in assessing a gymnast's movement technique. The study participants comprised six artistic gymnastics judges, who assessed back handsprings using two methods: a real-time observation method and a frame-by-frame video analysis method. They also determined flexion angles of the knee and hip joints using a computer program. In the case of the real-time observation method, the judges gave a total of 5.8 error points with an arithmetic mean of 0.16 points for the flexion of the knee joints. In the high-speed video analysis method, the total amounted to 8.6 error points and the mean value amounted to 0.24 error points. For the excessive flexion of the hip joints, the sum of the error values was 2.2 error points and the arithmetic mean was 0.06 error points during real-time observation. The sum obtained using the frame-by-frame analysis method equaled 10.8 and the mean equaled 0.30 error points. Error values obtained through the frame-by-frame video analysis of movement technique were higher than those obtained through the real-time observation method. The judges were able to indicate the number of the frame in which the maximal joint flexion occurred with good accuracy. The real-time observation method, as well as high-speed video analysis performed without determining the exact angle, was found to be an insufficient tool for improving the quality of judging.
Quantum Tomography via Compressed Sensing: Error Bounds, Sample Complexity and Efficient Estimators
2012-09-27
particular, we require no entangling gates or ancillary systems for the procedure. In contrast with [19], our method is not restricted to processes that are ... of states, such as those recently developed for use with permutation-invariant states [60], matrix product states [61] or multi-scale entangled states [62] ... process tomography: first prepare the Jamiołkowski state ρE (by adjoining an ancilla, preparing the maximally entangled state |ψ0⟩, and applying E); then do compressed quantum state tomography on ρE.
Errorless Learning in Cognitive Rehabilitation: A Critical Review
Middleton, Erica L.; Schwartz, Myrna F.
2012-01-01
Cognitive rehabilitation research is increasingly exploring errorless learning interventions, which prioritize the avoidance of errors during treatment. The errorless learning approach was originally developed for patients with severe anterograde amnesia, who were deemed to be at particular risk for error learning. Errorless learning has since been investigated in other memory-impaired populations (e.g., Alzheimer's disease) and acquired aphasia. In typical errorless training, target information is presented to the participant for study or immediate reproduction, a method that prevents participants from attempting to retrieve target information from long-term memory (i.e., retrieval practice). However, assuring error elimination by preventing difficult (and error-permitting) retrieval practice is a potential major drawback of the errorless approach. This review begins with discussion of research in the psychology of learning and memory that demonstrates the importance of difficult (and potentially errorful) retrieval practice for robust learning and prolonged performance gains. We then review treatment research comparing errorless and errorful methods in amnesia and aphasia, where only the latter provides (difficult) retrieval practice opportunities. In each clinical domain we find the advantage of the errorless approach is limited and may be offset by the therapeutic potential of retrieval practice. Gaps in current knowledge are identified that preclude strong conclusions regarding a preference for errorless treatments over methods that prioritize difficult retrieval practice. We offer recommendations for future research aimed at a strong test of errorless learning treatments, which involves direct comparison with methods where retrieval practice effects are maximized for long-term gains. PMID:22247957
Száz, Dénes; Farkas, Alexandra; Barta, András; Kretzer, Balázs; Egri, Ádám; Horváth, Gábor
2016-07-01
The theory of sky-polarimetric Viking navigation has been widely accepted for decades without any information about the accuracy of this method. Previously, we have measured the accuracy of the first and second steps of this navigation method in psychophysical laboratory and planetarium experiments. Now, we have tested the accuracy of the third step in a planetarium experiment, assuming that the first and second steps are errorless. Using the fists of their outstretched arms, 10 test persons had to estimate the elevation angles (measured in numbers of fists and fingers) of black dots (representing the position of the occluded Sun) projected onto the planetarium dome. The test persons performed 2400 elevation estimations, 48% of which were more accurate than ±1°. We selected three test persons with the (i) largest and (ii) smallest elevation errors and (iii) highest standard deviation of the elevation error. From the errors of these three persons, we calculated their error function, from which the North errors (the angles with which they deviated from the geographical North) were determined for summer solstice and spring equinox, two specific dates of the Viking sailing period. The range of possible North errors ΔωN was the lowest and highest at low and high solar elevations, respectively. At high elevations, the maximal ΔωN was 35.6° and 73.7° at summer solstice and 23.8° and 43.9° at spring equinox for the best and worst test person (navigator), respectively. Thus, the best navigator was twice as good as the worst one. At solstice and equinox, high elevations occur the most frequently during the day, thus high North errors could occur more frequently than expected before. According to our findings, the ideal periods for sky-polarimetric Viking navigation are immediately after sunrise and before sunset, because the North errors are the lowest at low solar elevations.
NASA Astrophysics Data System (ADS)
Asadzadeh, M.; Maclean, A.; Tolson, B. A.; Burn, D. H.
2009-05-01
Hydrologic model calibration aims to find a set of parameters that adequately simulates observations of watershed behavior, such as streamflow, or a state variable, such as snow water equivalent (SWE). There are different metrics for evaluating calibration effectiveness that involve quantifying prediction errors, such as the Nash-Sutcliffe (NS) coefficient and bias evaluated for the entire calibration period, on a seasonal basis, for low flows, or for high flows. Many of these metrics are conflicting such that the set of parameters that maximizes the high flow NS differs from the set of parameters that maximizes the low flow NS. Conflicting objectives are very likely when different calibration objectives are based on different fluxes and/or state variables (e.g., NS based on streamflow versus SWE). One of the most popular ways to balance different metrics is to aggregate them based on their importance and find the set of parameters that optimizes a weighted sum of the efficiency metrics. Comparing alternative hydrologic models (e.g., assessing model improvement when a process or more detail is added to the model) based on the aggregated objective might be misleading since it represents one point on the tradeoff of desired error metrics. To derive a more comprehensive model comparison, we solved a bi-objective calibration problem to estimate the tradeoff between two error metrics for each model. Although this approach is computationally more expensive than the aggregation approach, it results in a better understanding of the effectiveness of selected models at each level of every error metric and therefore provides a better rationale for judging relative model quality. The two alternative models used in this study are two MESH hydrologic models (version 1.2) of the Wolf Creek Research basin that differ in their watershed spatial discretization (a single Grouped Response Unit, GRU, versus multiple GRUs). The MESH model, currently under development by Environment Canada, is a coupled land-surface and hydrologic model. Results will demonstrate the conclusions a modeller might make regarding the value of additional watershed spatial discretization under both an aggregated (single-objective) and multi-objective model comparison framework.
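As a concrete illustration of the kind of conflicting error metrics discussed above, the sketch below evaluates the Nash-Sutcliffe (NS) efficiency separately on high-flow and low-flow subsets of an observed/simulated streamflow pair; the synthetic data and the 70th-percentile split are assumptions for illustration, not the study's actual calibration objectives.

import numpy as np

def nash_sutcliffe(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def split_flow_metrics(obs, sim, quantile=0.7):
    # NS evaluated separately on flows above and below an observed-flow quantile.
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    threshold = np.quantile(obs, quantile)
    high, low = obs >= threshold, obs < threshold
    return nash_sutcliffe(obs[high], sim[high]), nash_sutcliffe(obs[low], sim[low])

# Hypothetical daily streamflow (m^3/s): observations and one model's simulation.
rng = np.random.default_rng(1)
obs = np.exp(rng.normal(1.0, 0.8, 365))
sim = obs * rng.normal(1.0, 0.15, 365) + 0.2
ns_high, ns_low = split_flow_metrics(obs, sim)
print(f"NS overall = {nash_sutcliffe(obs, sim):.3f}, high flows = {ns_high:.3f}, low flows = {ns_low:.3f}")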
Use of the HR index to predict maximal oxygen uptake during different exercise protocols.
Haller, Jeannie M; Fehling, Patricia C; Barr, David A; Storer, Thomas W; Cooper, Christopher B; Smith, Denise L
2013-10-01
This study examined the ability of the HRindex model to accurately predict maximal oxygen uptake (VO2max) across a variety of incremental exercise protocols. Ten men completed five incremental protocols to volitional exhaustion. Protocols included three treadmill (Bruce, UCLA running, Wellness Fitness Initiative [WFI]), one cycle, and one field (shuttle) test. The HRindex prediction equation (METs = 6 × HRindex - 5, where HRindex = HRmax/HRrest) was used to generate estimates of energy expenditure, which were converted to body mass-specific estimates of VO2max. Estimated VO2max was compared with measured VO2max. Across all protocols, the HRindex model significantly underestimated VO2max by 5.1 mL·kg-1·min-1 (95% CI: -7.4, -2.7) and the standard error of the estimate (SEE) was 6.7 mL·kg-1·min-1. Accuracy of the model was protocol-dependent, with VO2max significantly underestimated for the Bruce and WFI protocols but not the UCLA, Cycle, or Shuttle protocols. Although no significant differences in VO2max estimates were identified for these three protocols, predictive accuracy among them was not high, with root mean squared errors and SEEs ranging from 7.6 to 10.3 mL·kg-1·min-1 and from 4.5 to 8.0 mL·kg-1·min-1, respectively. Correlations between measured and predicted VO2max were between 0.27 and 0.53. Individual prediction errors indicated that prediction accuracy varied considerably within protocols and among participants. In conclusion, across various protocols the HRindex model significantly underestimated VO2max in a group of aerobically fit young men. Estimates generated using the model did not differ from measured VO2max for three of the five protocols studied; nevertheless, some individual prediction errors were large. The lack of precision among estimates may limit the utility of the HRindex model; however, further investigation to establish the model's predictive accuracy is warranted.
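For reference, the HRindex prediction used above can be computed directly from resting and maximal heart rate; the sketch below converts the METs estimate to a body-mass-specific VO2max using the conventional 3.5 mL·kg-1·min-1 per MET, and the example heart rates are hypothetical.

ML_PER_KG_PER_MET = 3.5  # conventional resting VO2 equivalent of 1 MET

def predict_vo2max_hrindex(hr_rest, hr_max):
    # HRindex model: METs = 6 * (HRmax / HRrest) - 5, converted to mL/kg/min.
    hr_index = hr_max / hr_rest
    mets = 6.0 * hr_index - 5.0
    return mets * ML_PER_KG_PER_MET

# Hypothetical example: resting HR 55 bpm, maximal HR 190 bpm.
print(f"Predicted VO2max: {predict_vo2max_hrindex(55, 190):.1f} mL/kg/min")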
Graf, Alexandra C; Bauer, Peter
2011-06-30
We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) that can result when the sample size and allocation rate to the treatment arms are modified in an interim analysis. It is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than that derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing the sample size to decrease, allowing only an increase in the sample size of the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.
Learning, memory, and the role of neural network architecture.
Hermundstad, Ann M; Brown, Kevin S; Bassett, Danielle S; Carlson, Jean M
2011-06-01
The performance of information processing systems, from artificial neural networks to natural neuronal ensembles, depends heavily on the underlying system architecture. In this study, we compare the performance of parallel and layered network architectures during sequential tasks that require both acquisition and retention of information, thereby identifying tradeoffs between learning and memory processes. During the task of supervised, sequential function approximation, networks produce and adapt representations of external information. Performance is evaluated by statistically analyzing the error in these representations while varying the initial network state, the structure of the external information, and the time given to learn the information. We link performance to complexity in network architecture by characterizing local error landscape curvature. We find that variations in error landscape structure give rise to tradeoffs in performance; these include the ability of the network to maximize accuracy versus minimize inaccuracy and produce specific versus generalizable representations of information. Parallel networks generate smooth error landscapes with deep, narrow minima, enabling them to find highly specific representations given sufficient time. While accurate, however, these representations are difficult to generalize. In contrast, layered networks generate rough error landscapes with a variety of local minima, allowing them to quickly find coarse representations. Although less accurate, these representations are easily adaptable. The presence of measurable performance tradeoffs in both layered and parallel networks has implications for understanding the behavior of a wide variety of natural and artificial learning systems.
Strong, weak, and missing links in a microbial community of the N.W. Mediterranean Sea.
Bettarel, Y; Dolan, J R; Hornak, K; Lemée, R; Masin, M; Pedrotti, M-L; Rochelle-Newall, E; Simek, K; Sime-Ngando, T
2002-12-01
Planktonic microbial communities often appear stable over periods of days and thus tight links are assumed to exist between different functional groups (i.e. producers and consumers). We examined these links by characterizing short-term temporal correspondences in the concentrations and activities of microbial groups sampled from 1 m depth, at a coastal site of the N.W. Mediterranean Sea, in September 2001 every 3 h for 3 days. We estimated the abundance and activity rates of the autotrophic prokaryote Synechococcus, heterotrophic bacteria, viruses, heterotrophic nanoflagellates, as well as dissolved organic carbon concentrations. We found that Synechococcus, heterotrophic bacteria, and viruses displayed distinct patterns. Synechococcus abundance was greatest at midnight and lowest at 21:00 and showed the common pattern of an early evening maximum in dividing cells. In contrast, viral concentrations were minimal at midnight and maximal at 18:00. Viral infection of heterotrophic bacteria was rare (0.5-2.5%) and appeared to peak at 03:00. Heterotrophic bacteria, as % eubacteria-positive cells, peaked at midday, appearing loosely related to relative changes in dissolved organic carbon concentration. Bacterial production as assessed by leucine incorporation showed no consistent temporal pattern but could be related to shifts in the grazing rates of heterotrophic nanoflagellates and viral infection rates. Estimates of virus-induced mortality of heterotrophic bacteria, based on infection frequencies, were only about 10% of cell production. Overall, the dynamics of viruses appeared more closely related to Synechococcus than to heterotrophic bacteria. Thus, we found weak links between dissolved organic carbon concentration, or grazing, and bacterial activity, a possibly strong link between Synechococcus and viruses, and a missing link between light and viruses.
LaRocca, Christopher J.; Han, Joohee; Oliveira, Amanda R.; Davydova, Julia; Herzberg, Mark; Gopalakrishnan, Rajaram; Yamamoto, Masato
2016-01-01
Objectives In recent years, the incidence of Human Papilloma Virus (HPV)-positive head and neck squamous cell carcinomas (HNSCC) has markedly increased. Our aim was to design a novel therapeutic agent through the use of conditionally replicative adenoviruses (CRAds) that are targeted to the HPV E6 and E7 oncoproteins. Methods Each adenovirus included small deletion(s) in the E1a region of the genome (Δ24 or CB016) intended to allow for selective replication in HPV-positive cells. In vitro assays were performed to analyze the transduction efficiency of the vectors and the cell viability following viral infection. Then, the UPCI SCC 090 cell line (HPV-positive) was used to establish subcutaneous tumors in the flanks of nude mice. The tumors were then treated with either one dose of the virus or four doses (injected every fourth day). Results The transduction analysis with luciferase-expressing viruses demonstrated that the 5/3 fiber modification maximized virus infectivity. In vitro, both viruses (5/3Δ24 and 5/3CB016) demonstrated profound oncolytic effects. The 5/3CB016 virus was selective for only HPV-positive HNSCC cells, whereas the 5/3Δ24 virus killed HNSCC cells regardless of HPV status. In vivo, single injections of both viruses demonstrated anti-tumor effects for only 6–8 days following viral inoculation. However, after four viral injections, there was a statistically significant reduction in tumor growth when compared to the control group (p<0.05). Conclusion CRAds targeted to HPV-positive HNSCCs demonstrated excellent in vitro and in vivo therapeutic effects, and they have the potential to be clinically translated as a novel treatment modality for this emerging disease. PMID:27086483
Virus-Specific RNA Synthesis in Cells Infected by Infectious Pancreatic Necrosis Virus
Somogyi, Paul; Dobos, Peter
1980-01-01
Pulse-labeling experiments with [3H]uridine revealed that the rate of infectious pancreatic necrosis virus-specific RNA synthesis was maximal at 8 to 10 h after infection and was completely diminished by 12 to 14 h. Three forms of RNA intermediates were detected: (i) a putative transcription intermediate (TRI) which comigrated in acrylamide gels with virion double-stranded RNA (dsRNA) after RNase treatment; (ii) a 24S genome-length mRNA which could be resolved into two bands by polyacrylamide gel electrophoresis; and (iii) a 14S dsRNA component indistinguishable from virion RNA by gradient centrifugation and gel electrophoresis. The TRI (i) was LiCl precipitable; (ii) sedimented slightly faster and broader (14 to 16S) than the 14S virion dsRNA; (iii) had a lower electrophoretic mobility in acrylamide gels than dsRNA, barely entering acrylamide gels as a heterogeneous component; (iv) yielded genome-sized pieces of dsRNA after RNase digestion; and (v) was the most abundant RNA form early in the infectious cycle. The 24S single-stranded RNA was thought to be the viral mRNA since it: (i) became labeled during short pulses; (ii) was found in the polysomal fraction of infected cells; and (iii) hybridized to denatured viral RNA, forming two segments of RNase-resistant RNA that comigrated with virion dsRNA in gels. The 24S mRNA component was formed before the synthesis of dsRNA, and radioactivity could be chased from 24S single-stranded RNA to dsRNA, indicating that 24S RNA may serve as template for the synthesis of complementary strands to form dsRNA. Similar to reovirus, infectious pancreatic necrosis viral 24S mRNA contained no polyadenylic acid tracts. PMID:16789184
Buggio, Maurizio; Towe, Christopher; Annan, Anand; Kaliberov, Sergey; Lu, Zhi Hong; Stephens, Calvin; Arbeit, Jeffrey M; Curiel, David T
2016-01-01
Gene therapy for inherited serum deficiency disorders has previously been limited by the balance between obtaining adequate expression and causing hepatic toxicity. Our group has previously described modifications of a replication deficient human adenovirus serotype 5 that increase pulmonary vasculature transgene expression. In the present study, we use a modified pulmonary targeted adenovirus to express human alpha-1 antitrypsin (A1AT) in C57BL/6 J mice. Using the targeted adenovirus, we were able to achieve similar increases in serum A1AT levels with less liver viral uptake. We also increased pulmonary epithelial lining fluid A1AT levels by more than an order of magnitude compared to that of untargeted adenovirus expressing A1AT in a mouse model. These gains are achieved along with evidence of decreased systemic inflammation and no evidence for increased inflammation within the vector-targeted end organ. In addition to comprising a step towards clinically viable gene therapy for A1AT, maximization of protein production at the site of action represents a significant technical advancement in the field of systemically delivered pulmonary targeted gene therapy. It also provides an alternative to the previous limitations of hepatic viral transduction and associated toxicities. Copyright © 2016 John Wiley & Sons, Ltd.
Recent Progress in Understanding Coxsackievirus Replication, Dissemination, and Pathogenesis
Sin, Jon; Mangale, Vrushali; Thienphrapa, Wdee; Gottlieb, Roberta A.; Feuer, Ralph
2015-01-01
Coxsackieviruses (CVs) are relatively common viruses associated with a number of serious human diseases, including myocarditis and meningo-encephalitis. These viruses are considered cytolytic yet can persist for extended periods of time within certain host tissues, requiring evasion from the host immune response and a greatly reduced rate of replication. Members of the Picornaviridae family, CVs have historically been considered non-enveloped viruses – although recent evidence suggests that CVs and other picornaviruses hijack host membranes and acquire an envelope. Acquisition of an envelope might provide distinct benefits to CV virions, such as resistance to neutralizing antibodies and efficient nonlytic viral spread. CV exhibits a unique tropism for progenitor cells in the host, which may help to explain the susceptibility of the young host to infection and the establishment of chronic disease in adults. CVs have also been shown to exploit autophagy to maximize viral replication and assist in unconventional release from target cells. In this article, we review recent progress in clarifying virus replication and dissemination within the host cell, identifying determinants of tropism, and defining strategies utilized by the virus to evade the host immune response. We also highlight unanswered questions and provide future perspectives regarding the potential mechanisms of CV pathogenesis. PMID:26142496
Katz, Michael G; Fargnoli, Anthony S; Williams, Richard D; Bridges, Charles R
2013-11-01
Gene therapy is one of the most promising fields for developing new treatments for the advanced stages of ischemic and monogenetic, particularly autosomal or X-linked recessive, cardiomyopathies. The remarkable ongoing efforts in advancing various targets have largely been inspired by the results that have been achieved in several notable gene therapy trials, such as the hemophilia B and Leber's congenital amaurosis. Rate-limiting problems preventing successful clinical application in the cardiac disease area, however, are primarily attributable to inefficient gene transfer, host responses, and the lack of sustainable therapeutic transgene expression. It is arguable that these problems are directly correlated with the choice of vector, dose level, and associated cardiac delivery approach as a whole treatment system. Essentially, a delicate balance exists in maximizing gene transfer required for efficacy while remaining within safety limits. Therefore, the development of safe, effective, and clinically applicable gene delivery techniques for selected nonviral and viral vectors will certainly be invaluable in obtaining future regulatory approvals. The choice of gene transfer vector, dose level, and the delivery system are likely to be critical determinants of therapeutic efficacy. It is here that the interactions between vector uptake and trafficking, delivery route means, and the host's physical limits must be considered synergistically for a successful treatment course.
Synthetic generation of influenza vaccine viruses for rapid response to pandemics.
Dormitzer, Philip R; Suphaphiphat, Pirada; Gibson, Daniel G; Wentworth, David E; Stockwell, Timothy B; Algire, Mikkel A; Alperovich, Nina; Barro, Mario; Brown, David M; Craig, Stewart; Dattilo, Brian M; Denisova, Evgeniya A; De Souza, Ivna; Eickmann, Markus; Dugan, Vivien G; Ferrari, Annette; Gomila, Raul C; Han, Liqun; Judge, Casey; Mane, Sarthak; Matrosovich, Mikhail; Merryman, Chuck; Palladino, Giuseppe; Palmer, Gene A; Spencer, Terika; Strecker, Thomas; Trusheim, Heidi; Uhlendorff, Jennifer; Wen, Yingxia; Yee, Anthony C; Zaveri, Jayshree; Zhou, Bin; Becker, Stephan; Donabedian, Armen; Mason, Peter W; Glass, John I; Rappuoli, Rino; Venter, J Craig
2013-05-15
During the 2009 H1N1 influenza pandemic, vaccines for the virus became available in large quantities only after human infections peaked. To accelerate vaccine availability for future pandemics, we developed a synthetic approach that very rapidly generated vaccine viruses from sequence data. Beginning with hemagglutinin (HA) and neuraminidase (NA) gene sequences, we combined an enzymatic, cell-free gene assembly technique with enzymatic error correction to allow rapid, accurate gene synthesis. We then used these synthetic HA and NA genes to transfect Madin-Darby canine kidney (MDCK) cells that were qualified for vaccine manufacture with viral RNA expression constructs encoding HA and NA and plasmid DNAs encoding viral backbone genes. Viruses for use in vaccines were rescued from these MDCK cells. We performed this rescue with improved vaccine virus backbones, increasing the yield of the essential vaccine antigen, HA. Generation of synthetic vaccine seeds, together with more efficient vaccine release assays, would accelerate responses to influenza pandemics through a system of instantaneous electronic data exchange followed by real-time, geographically dispersed vaccine production.
Local rules simulation of the kinetics of virus capsid self-assembly.
Schwartz, R; Shor, P W; Prevelige, P E; Berger, B
1998-12-01
A computer model is described for studying the kinetics of the self-assembly of icosahedral viral capsids. Solution of this problem is crucial to an understanding of the viral life cycle, which currently cannot be adequately addressed through laboratory techniques. The abstract simulation model employed to address this is based on the local rules theory (Proc. Natl. Acad. Sci. USA 91:7732-7736). It is shown that the principle of local rules, generalized with a model of kinetics and other extensions, can be used to simulate complicated problems in self-assembly. This approach allows for a computationally tractable molecular dynamics-like simulation of coat protein interactions while retaining many relevant features of capsid self-assembly. Three simple simulation experiments are presented to illustrate the use of this model. These show the dependence of growth and malformation rates on the energetics of binding interactions, the tolerance of errors in binding positions, and the concentration of subunits in the examples. These experiments demonstrate a tradeoff within the model between growth rate and fidelity of assembly for the three parameters. A detailed discussion of the computational model is also provided.
Mainsah, B O; Reeves, G; Collins, L M; Throckmorton, C S
2017-08-01
The role of a brain-computer interface (BCI) is to discern a user's intended message or action by extracting and decoding relevant information from brain signals. Stimulus-driven BCIs, such as the P300 speller, rely on detecting event-related potentials (ERPs) in response to a user attending to relevant or target stimulus events. However, this process is error-prone because the ERPs are embedded in noisy electroencephalography (EEG) data, representing a fundamental problem in communication of the uncertainty in the information that is received during noisy transmission. A BCI can be modeled as a noisy communication system and an information-theoretic approach can be exploited to design a stimulus presentation paradigm to maximize the information content that is presented to the user. However, previous methods that focused on designing error-correcting codes failed to provide significant performance improvements due to underestimating the effects of psycho-physiological factors on the P300 ERP elicitation process and a limited ability to predict online performance with their proposed methods. Maximizing the information rate favors the selection of stimulus presentation patterns with increased target presentation frequency, which exacerbates refractory effects and negatively impacts performance within the context of an oddball paradigm. An information-theoretic approach that seeks to understand the fundamental trade-off between information rate and reliability is desirable. We developed a performance-based paradigm (PBP) by tuning specific parameters of the stimulus presentation paradigm to maximize performance while minimizing refractory effects. We used a probabilistic-based performance prediction method as an evaluation criterion to select a final configuration of the PBP. With our PBP, we demonstrate statistically significant improvements in online performance, both in accuracy and spelling rate, compared to the conventional row-column paradigm. By accounting for refractory effects, an information-theoretic approach can be exploited to significantly improve BCI performance across a wide range of performance levels.
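The information-theoretic framing above can be illustrated with the commonly used Wolpaw information transfer rate for an N-choice speller, which is not part of the abstract but shows how selection accuracy and speed jointly determine bits per minute; the 6 x 6 grid size, accuracies, and selection times below are assumptions.

import math

def bits_per_selection(n_choices, accuracy):
    # Wolpaw information transfer per selection for an N-choice speller.
    if accuracy >= 1.0:
        return math.log2(n_choices)
    return (math.log2(n_choices)
            + accuracy * math.log2(accuracy)
            + (1.0 - accuracy) * math.log2((1.0 - accuracy) / (n_choices - 1)))

def itr_bits_per_minute(n_choices, accuracy, seconds_per_selection):
    return bits_per_selection(n_choices, accuracy) * 60.0 / seconds_per_selection

# Hypothetical 6 x 6 P300 speller: a slower but more reliable setting versus a faster, less reliable one.
for acc, sec in [(0.95, 25.0), (0.80, 12.0)]:
    print(f"accuracy {acc:.0%}, {sec:.0f} s/selection -> {itr_bits_per_minute(36, acc, sec):.2f} bits/min")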
de Souza E Silva, Christina G; Kaminsky, Leonard A; Arena, Ross; Christle, Jeffrey W; Araújo, Claudio Gil S; Lima, Ricardo M; Ashley, Euan A; Myers, Jonathan
2018-05-01
Background Maximal oxygen uptake (VO2max) is a powerful predictor of health outcomes. Valid and portable reference values are integral to interpreting measured VO2max; however, available reference standards lack validation and are specific to exercise mode. This study was undertaken to develop and validate a single equation for normal standards for VO2max for the treadmill or cycle ergometer in men and women. Methods Healthy individuals (N = 10,881; 67.8% men, 20-85 years) who performed a maximal cardiopulmonary exercise test on either a treadmill or a cycle ergometer were studied. Of these, 7617 and 3264 individuals were randomly selected for development and validation of the equation, respectively. A Brazilian sample (1619 individuals) constituted a second validation cohort. The prediction equation was determined using multiple regression analysis, and comparisons were made with the widely-used Wasserman and European equations. Results Age, sex, weight, height and exercise mode were significant predictors of VO2max. The regression equation was: VO2max (ml·kg-1·min-1) = 45.2 - 0.35*Age - 10.9*Sex (male = 1; female = 2) - 0.15*Weight (pounds) + 0.68*Height (inches) - 0.46*Exercise Mode (treadmill = 1; bike = 2) (R = 0.79, R2 = 0.62, standard error of the estimate = 6.6 ml·kg-1·min-1). Percentage predicted VO2max for the US and Brazilian validation cohorts were 102.8% and 95.8%, respectively. The new equation performed better than traditional equations, particularly among women and individuals ≥60 years old. Conclusion A combined equation was developed for normal standards for VO2max for different exercise modes derived from a US national registry. The equation provided a lower average error between measured and predicted VO2max than traditional equations even when applied to an independent cohort. Additional studies are needed to determine its portability.
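A direct transcription of the reported prediction equation into code, with percent-predicted VO2max computed against a measured value; the example subject and the measured value are hypothetical, and weight and height must be given in pounds and inches as in the published equation.

def predicted_vo2max(age, sex, weight_lb, height_in, mode):
    # Combined treadmill/cycle normal-standards equation reported above.
    # sex: 1 = male, 2 = female; mode: 1 = treadmill, 2 = cycle ergometer.
    return (45.2 - 0.35 * age - 10.9 * sex
            - 0.15 * weight_lb + 0.68 * height_in - 0.46 * mode)

# Hypothetical subject: 45-year-old man, 176 lb, 70 in, tested on a treadmill.
pred = predicted_vo2max(age=45, sex=1, weight_lb=176, height_in=70, mode=1)
measured = 38.0  # mL/kg/min, hypothetical measured value
print(f"Predicted: {pred:.1f} mL/kg/min, percent predicted: {100.0 * measured / pred:.0f}%")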
Receding and disparity cues aid relaxation of accommodation
Horwood, Anna M; Riddell, Patricia M
2015-01-01
Purpose Accommodation can mask hyperopia and reduce the accuracy of non-cycloplegic refraction. It is therefore important to minimize accommodation to obtain as accurate a measure of hyperopia as possible. In order to characterize the parameters required to measure the maximally hyperopic error using photorefraction, we used different target types and distances to determine which target was most likely to maximally relax accommodation and thus more accurately detect hyperopia in an individual. Methods A PlusoptiX SO4 infra-red photorefractor was mounted in a remote haploscope which presented the targets. All participants were tested with targets at four fixation distances between 0.3 m and 2 m containing all combinations of blur, disparity and proximity/looming cues. 38 infants (6-44 wks) were studied longitudinally, and 104 children (4-15 yrs, mean 6.4) and 85 adults, with a range of refractive errors and binocular vision status, were tested once. Cycloplegic refraction data was available for a sub-set of 59 participants spread across the age range. Results The maximally hyperopic refraction (MHR) found at any time in the session was most frequently found when fixating the most distant targets and those containing disparity and dynamic proximity/looming cues. Presence or absence of blur was less significant, and targets in which only single cues to depth were present were also less likely to produce MHR. MHR correlated closely with cycloplegic refraction (r = 0.93, mean difference 0.07 D, p = n.s., 95% CI ± <0.25 D) after correction by a calibration factor. Conclusion Maximum relaxation of accommodation occurred for binocular targets receding into the distance. Proximal and disparity cues aid relaxation of accommodation to a greater extent than blur, and thus non-cycloplegic refraction targets should incorporate these cues. This is especially important in screening contexts with a brief opportunity to test for significant hyperopia. MHR in our laboratory was found to be a reliable estimation of cycloplegic refraction. PMID:19770814
Mechanical design of a power-adjustable spectacle lens frame.
Zapata, Asuncion; Barbero, Sergio
2011-05-01
Power-adjustable spectacle lenses, based on the Alvarez-Lohmann principle, can be used to provide affordable spectacles for subjective refractive error measurement and correction. A new mechanical frame has been designed to maximize the advantages of this technology. The design includes a mechanism to match the interpupillary distance with that of the optical centers of the lenses. The frame can be manufactured using low-cost plastic injection molding techniques. A prototype has been built to test the functioning of this mechanical design.
Quantum key distribution with passive decoy state selection
NASA Astrophysics Data System (ADS)
Mauerer, Wolfgang; Silberhorn, Christine
2007-05-01
We propose a quantum key distribution scheme which closely matches the performance of a perfect single photon source. It nearly attains the physical upper bound in terms of key generation rate and maximally achievable distance. Our scheme relies on a practical setup based on a parametric downconversion source and present day, nonideal photon-number detection. Arbitrary experimental imperfections which lead to bit errors are included. We select decoy states by classical postprocessing. This allows one to improve the effective signal statistics and achievable distance.
Adverse Effects in Dual-Star Interferometry
NASA Technical Reports Server (NTRS)
Colavita, M. Mark
2008-01-01
Narrow-angle dual-star interferometric astrometry can provide very high accuracy in the presence of the Earth's turbulent atmosphere. However, to exploit the high atmospherically-limited accuracy requires control of systematic errors in measurement of the interferometer baseline, internal OPDs, and fringe phase. In addition, as high photometric SNR is required, care must be taken to maximize throughput and coherence to obtain high accuracy on faint stars. This article reviews: the key aspects of the dual-star approach and implementation; the main contributors to the
Mohsenizadeh, Daniel N; Dehghannasiri, Roozbeh; Dougherty, Edward R
2018-01-01
In systems biology, network models are often used to study interactions among cellular components, a salient aim being to develop drugs and therapeutic mechanisms to change the dynamical behavior of the network to avoid undesirable phenotypes. Owing to limited knowledge, model uncertainty is commonplace and network dynamics can be updated in different ways, thereby giving multiple dynamic trajectories, that is, dynamics uncertainty. In this manuscript, we propose an experimental design method that can effectively reduce the dynamics uncertainty and improve performance in an interaction-based network. Both dynamics uncertainty and experimental error are quantified with respect to the modeling objective, herein, therapeutic intervention. The aim of experimental design is to select among a set of candidate experiments the experiment whose outcome, when applied to the network model, maximally reduces the dynamics uncertainty pertinent to the intervention objective.
Ambiguity resolution for satellite Doppler positioning systems
NASA Technical Reports Server (NTRS)
Argentiero, P. D.; Marini, J. W.
1977-01-01
A test for ambiguity resolution was derived which was the most powerful in the sense that it maximized the probability of a correct decision. When systematic error sources were properly included in the least squares reduction process to yield an optimal solution, the test reduced to choosing the solution which provided the smaller valuation of the least squares loss function. When systematic error sources were ignored in the least squares reduction, the most powerful test was a quadratic form comparison with the weighting matrix of the quadratic form obtained by computing the pseudo-inverse of a reduced rank square matrix. A formula is presented for computing the power of the most powerful test. A numerical example is included in which the power of the test is computed for a situation which may occur during an actual satellite aided search and rescue mission.
Ego-motion based on EM for bionic navigation
NASA Astrophysics Data System (ADS)
Yue, Xiaofeng; Wang, L. J.; Liu, J. G.
2015-12-01
Research has shown that flying insects such as bees can achieve efficient and robust flight control, and biologists have explored some biomimetic principles regarding how they control flight. Based on those basic studies and principles acquired from flying insects, this paper proposes a different solution for recovering ego-motion for low-level navigation. First, a new type of entropy flow is introduced to calculate the motion parameters. Second, an extended Kalman filter (EKF), long used in navigation to correct accumulated error, is combined with expectation-maximization (EM), commonly used for parameter estimation, to determine the ego-motion estimate of aerial vehicles. Numerical simulation in MATLAB showed that this navigation system provides more accurate position estimates and a smaller mean absolute error than pure optical flow navigation. This paper presents pioneering work in applying bionic mechanisms to space navigation.
Nisbet, Elizabeth K; Zelenski, John M
2011-09-01
Modern lifestyles disconnect people from nature, and this may have adverse consequences for the well-being of both humans and the environment. In two experiments, we found that although outdoor walks in nearby nature made participants much happier than indoor walks did, participants made affective forecasting errors, such that they systematically underestimated nature's hedonic benefit. The pleasant moods experienced on outdoor nature walks facilitated a subjective sense of connection with nature, a construct strongly linked with concern for the environment and environmentally sustainable behavior. To the extent that affective forecasts determine choices, our findings suggest that people fail to maximize their time in nearby nature and thus miss opportunities to increase their happiness and relatedness to nature. Our findings suggest a happy path to sustainability, whereby contact with nature fosters individual happiness and environmentally responsible behavior.
Spitzer Telemetry Processing System
NASA Technical Reports Server (NTRS)
Stanboli, Alice; Martinez, Elmain M.; McAuley, James M.
2013-01-01
The Spitzer Telemetry Processing System (SirtfTlmProc) was designed to address objectives of JPL's Multi-mission Image Processing Lab (MIPL) in processing spacecraft telemetry and distributing the resulting data to the science community. To minimize costs and maximize operability, the software design focused on automated error recovery, performance, and information management. The system processes telemetry from the Spitzer spacecraft and delivers Level 0 products to the Spitzer Science Center. SirtfTlmProc is a unique system with automated error notification and recovery, with a real-time continuous service that can go quiescent after periods of inactivity. The software can process 2 GB of telemetry and deliver Level 0 science products to the end user in four hours. It provides analysis tools so the operator can manage the system and troubleshoot problems. It automates telemetry processing in order to reduce staffing costs.
Software reliability experiments data analysis and investigation
NASA Technical Reports Server (NTRS)
Walker, J. Leslie; Caglayan, Alper K.
1991-01-01
The objectives are to investigate the fundamental reasons which cause independently developed software programs to fail dependently, and to examine fault tolerant software structures which maximize reliability gain in the presence of such dependent failure behavior. The authors used 20 redundant programs from a software reliability experiment to analyze the software errors causing coincident failures, to compare the reliability of N-version and recovery block structures composed of these programs, and to examine the impact of diversity on software reliability using subpopulations of these programs. The results indicate that both conceptually related and unrelated errors can cause coincident failures and that recovery block structures offer more reliability gain than N-version structures if acceptance checks that fail independently from the software components are available. The authors present a theory of general program checkers that have potential application for acceptance tests.
Improving the safety of vaccine delivery.
Evans, Huw P; Cooper, Alison; Williams, Huw; Carson-Stevens, Andrew
2016-05-03
Vaccines save millions of lives per annum as an integral part of community primary care provision worldwide. Adverse events due to the vaccine delivery process outnumber those arising from the pharmacological properties of the vaccines themselves. Whilst one in three patients receiving a vaccine will encounter some form of error, little is known about their underlying causes and how to mitigate them in practice. Patient safety incident reporting systems and adverse drug event surveillance offer a rich opportunity for understanding the underlying causes of those errors. Reducing harm relies on the identification and implementation of changes to improve vaccine safety at multiple levels: from patient interventions through to organizational actions at local, national and international levels. Here we highlight the potential for maximizing learning from patient safety incident reports to improve the quality and safety of vaccine delivery.
Acceptance threshold theory can explain occurrence of homosexual behaviour.
Engel, Katharina C; Männer, Lisa; Ayasse, Manfred; Steiger, Sandra
2015-01-01
Same-sex sexual behaviour (SSB) has been documented in a wide range of animals, but its evolutionary causes are not well understood. Here, we investigated SSB in the light of Reeve's acceptance threshold theory. When recognition is not error-proof, the acceptance threshold used by males to recognize potential mating partners should be flexibly adjusted to maximize the fitness pay-off between the costs of erroneously accepting males and the benefits of accepting females. By manipulating male burying beetles' search time for females and their reproductive potential, we influenced their perceived costs of making an acceptance or rejection error. As predicted, when the costs of rejecting females increased, males exhibited more permissive discrimination decisions and showed high levels of SSB; when the costs of accepting males increased, males were more restrictive and showed low levels of SSB. Our results support the idea that in animal species, in which the recognition cues of females and males overlap to a certain degree, SSB is a consequence of an adaptive discrimination strategy to avoid the costs of making rejection errors. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
CP function: an alpha spending function based on conditional power.
Jiang, Zhiwei; Wang, Ling; Li, Chanjuan; Xia, Jielai; Wang, William
2014-11-20
The alpha spending function and stochastic curtailment are two frequently used methods in group sequential design. In the stochastic curtailment approach, the actual type I error probability cannot be well controlled within the specified significance level, but conditional power (CP) in stochastic curtailment is easier for clinicians to accept and understand. In this paper, we develop a spending function based on the concept of conditional power, named the CP function, which combines desirable features of alpha spending and stochastic curtailment. Like other two-parameter functions, the CP function is flexible enough to fit the needs of the trial. A simulation study is conducted to explore the choice of the CP boundary in the CP function that maximizes the trial power. The CP function is equivalent to, or even better than, the classical Pocock, O'Brien-Fleming, and quadratic spending functions, as long as a proper ρ0, the pre-specified CP threshold for efficacy, is given. It also controls the overall type I error rate well and overcomes the disadvantage of stochastic curtailment. Copyright © 2014 John Wiley & Sons, Ltd.
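The two-parameter CP spending function itself is not reproduced in this abstract. As a minimal illustration of the quantity it is built on, the sketch below computes conditional power under the current-trend assumption (the standard B-value form); the interim statistic, information fraction, and one-sided alpha in the example are illustrative inputs, not values from the paper.

```python
from scipy.stats import norm

def conditional_power(z_t, t, alpha=0.025):
    """Conditional power under the current-trend assumption (B-value form).

    z_t   : interim Z-statistic observed at information fraction t
    t     : information fraction spent so far (0 < t < 1)
    alpha : one-sided significance level for the final analysis
    """
    z_alpha = norm.ppf(1 - alpha)
    b_t = z_t * t ** 0.5                 # B-value at the interim look
    theta_hat = z_t / t ** 0.5           # drift estimated from the current trend
    return norm.cdf((b_t + theta_hat * (1 - t) - z_alpha) / (1 - t) ** 0.5)

# Example: interim look at 50% information with Z = 1.5
print(conditional_power(1.5, 0.5))       # roughly 0.59
```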
Kim, Joo Hyoung; Cha, Jung Yul; Hwang, Chung Ju
2012-12-01
This in vitro study was undertaken to evaluate the physical, chemical, and biological properties of commercially available metal orthodontic brackets in South Korea, because national standards for these products are lacking. Four bracket brands were tested for dimensional accuracy (manufacturing errors in angulation and torque), cytotoxicity, composition, elution, and corrosion: Archist (Daeseung Medical), Victory (3M Unitek), Kosaka (Tomy), and Confidence (Shinye Odontology Materials). The tested brackets showed no significant differences in manufacturing errors in angulation, but Confidence brackets showed a significant difference in manufacturing errors in torque. None of the brackets were cytotoxic to mouse fibroblasts. The metal ion components did not show a regular increasing or decreasing trend of elution over time, but the volume of the total eluted metal ions increased: Archist brackets had the maximal Cr elution and Confidence brackets appeared to have the largest volume of total eluted metal ions because of excessive Ni elution. Confidence brackets showed the lowest corrosion resistance during potentiodynamic polarization. The results of this study could potentially be applied in establishing national standards for metal orthodontic brackets and in evaluating commercially available products.
Pageler, Natalie M; Grazier G'Sell, Max Jacob; Chandler, Warren; Mailes, Emily; Yang, Christine; Longhurst, Christopher A
2016-09-01
The objective of this project was to use statistical techniques to determine the completeness and accuracy of data migrated during electronic health record conversion. Data validation during migration consists of mapped record testing and validation of a sample of the data for completeness and accuracy. We statistically determined a randomized sample size for each data type based on the desired confidence level and error limits. The only error identified in the post go-live period was a failure to migrate some clinical notes, which was unrelated to the validation process. No errors in the migrated data were found during the 12-month post-implementation period. Compared to the typical industry approach, we have demonstrated that a statistical approach to sample size for data validation can ensure consistent confidence levels while maximizing efficiency of the validation process during a major electronic health record conversion. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
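The abstract does not give the exact sample-size calculation used. A common choice for this kind of validation, sketched below under that assumption, is the normal-approximation sample size for estimating an error proportion within a fixed margin, plus a finite-population correction; the function name, defaults, and example population are illustrative.

```python
import math
from scipy.stats import norm

def validation_sample_size(population, confidence=0.95, error_limit=0.01, p=0.5):
    """Sample size to estimate an error proportion within +/- error_limit at the
    given confidence level, with a finite-population correction.
    p = 0.5 is the conservative (worst-case) assumption about the error rate."""
    z = norm.ppf(1 - (1 - confidence) / 2)
    n0 = z ** 2 * p * (1 - p) / error_limit ** 2
    n = n0 / (1 + (n0 - 1) / population)          # finite-population correction
    return math.ceil(n)

# Example: 500,000 migrated records, 95% confidence, 1% error limit
print(validation_sample_size(500_000))            # about 9,423 records
```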
Janet, Jon Paul; Chan, Lydia; Kulik, Heather J
2018-03-01
Machine learning (ML) has emerged as a powerful complement to simulation for materials discovery by reducing time for evaluation of energies and properties at accuracy competitive with first-principles methods. We use genetic algorithm (GA) optimization to discover unconventional spin-crossover complexes in combination with efficient scoring from an artificial neural network (ANN) that predicts spin-state splitting of inorganic complexes. We explore a compound space of over 5600 candidate materials derived from eight metal/oxidation state combinations and a 32-ligand pool. We introduce a strategy for error-aware ML-driven discovery by limiting how far the GA travels away from the nearest ANN training points while maximizing property (i.e., spin-splitting) fitness, leading to discovery of 80% of the leads from full chemical space enumeration. Over a 51-complex subset, average unsigned errors (4.5 kcal/mol) are close to the ANN's baseline 3 kcal/mol error. By obtaining leads from the trained ANN within seconds rather than days from a DFT-driven GA, this strategy demonstrates the power of ML for accelerating inorganic material discovery.
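A minimal sketch of the error-aware scoring idea described above, assuming candidates are represented as feature vectors, a trained surrogate model is available as a callable, and the distance cutoff and penalty are illustrative; none of these names or values come from the paper.

```python
import numpy as np

def error_aware_fitness(candidate, train_X, ann_predict, dist_cutoff=2.0, penalty=1e3):
    """Score a candidate by its ANN-predicted spin splitting, but penalize candidates
    lying farther than dist_cutoff (in feature space) from the nearest training point,
    where the surrogate model is unreliable."""
    d_nearest = np.min(np.linalg.norm(train_X - candidate, axis=1))
    fitness = ann_predict(candidate)
    if d_nearest > dist_cutoff:
        fitness -= penalty * (d_nearest - dist_cutoff)   # discourage extrapolation
    return fitness

# Toy usage with a dummy surrogate standing in for the trained ANN
train_X = np.random.rand(100, 8)
dummy_ann = lambda x: -np.sum((x - 0.5) ** 2)
print(error_aware_fitness(np.random.rand(8), train_X, dummy_ann))
```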
Significance of acceleration period in a dynamic strength testing study.
Chen, W L; Su, F C; Chou, Y L
1994-06-01
The acceleration period that occurs during isokinetic tests may provide valuable information regarding neuromuscular readiness to produce maximal contraction. The purpose of this study was to collect the normative data of acceleration time during isokinetic knee testing, to calculate the acceleration work (Wacc), and to determine the errors (ERexp, ERwork, ERpower) due to ignoring Wacc during explosiveness, total work, and average power measurements. Seven male and 13 female subjects attended the test by using the Cybex 325 system and electronic stroboscope machine for 10 testing speeds (30-300 degrees/sec). A three-way ANOVA was used to assess gender, direction, and speed factors on acceleration time, Wacc, and errors. The results indicated that acceleration time was significantly affected by speed and direction; Wacc and ERexp by speed, direction, and gender; and ERwork and ERpower by speed and gender. The errors appeared to increase when testing the female subjects, during the knee flexion test, or when speed increased. To increase validity in clinical testing, it is important to consider the acceleration phase effect, especially in higher velocity isokinetic testing or for weaker muscle groups.
Randhawa, P; Pastrana, D V; Zeng, G; Huang, Y; Shapiro, R; Sood, P; Puttarajappa, C; Berger, M; Hariharan, S; Buck, C B
2015-04-01
Neutralizing antibodies (NAbs) form the basis of immunotherapeutic strategies against many important human viral infections. Accordingly, we studied the prevalence, titer, genotype-specificity, and mechanism of action of anti-polyomavirus BK (BKV) NAbs in commercially available human immune globulin (IG) preparations designed for intravenous (IV) use. Pseudovirions (PsV) of genotypes Ia, Ib2, Ic, II, III, and IV were generated by co-transfecting a reporter plasmid encoding luciferase and expression plasmids containing synthetic codon-modified VP1, VP2, and VP3 capsid protein genes into 293TT cells. NAbs were measured using luminometry. All IG preparations neutralized all BKV genotypes, with mean EC50 titers as high as 254,899 for genotype Ia and 6,666 for genotype IV. Neutralizing titers against genotypes II and III were higher than expected, adding to growing evidence that infections with these genotypes are more common than currently appreciated. Batch-to-batch variation in different lots of IG was within the limits of experimental error. Antibody-mediated virus neutralization was dose-dependent, modestly enhanced by complement, genotype-specific, and achieved without effect on viral aggregation, capsid morphology, elution, or host cell release. IG contains potent NAbs capable of neutralizing all major BKV genotypes. Clinical trials based on sound pharmacokinetic principles are needed to explore prophylactic and therapeutic applications of these anti-viral effects, until effective small molecule inhibitors of BKV replication can be developed. © Copyright 2015 The American Society of Transplantation and the American Society of Transplant Surgeons.
Pranata, Adrian; Perraton, Luke; El-Ansary, Doa; Clark, Ross; Fortin, Karine; Dettmann, Tim; Brandham, Robert; Bryant, Adam
2017-07-01
The ability to control lumbar extensor force output is necessary for daily activities. However, it is unknown whether this ability is impaired in chronic low back pain patients. Similarly, it is unknown whether lumbar extensor force control is related to the disability levels of chronic low back pain patients. Thirty-three chronic low back pain and 20 healthy people performed a lumbar extension force-matching task in which they increased and decreased their force output to match a variable target force within 20%-50% maximal voluntary isometric contraction. Force control was quantified as the root-mean-square error between participants' force output and the target force across the entire force curve, as well as during its increasing and decreasing portions. Within- and between-group differences in force-matching error and the relationship between the back pain group's force-matching results and their Oswestry Disability Index scores were assessed using ANCOVA and linear regression, respectively. The back pain group demonstrated more overall force-matching error (mean difference=1.60 [0.78, 2.43], P<0.01) and more force-matching error while increasing force output (mean difference=2.19 [1.01, 3.37], P<0.01) than the control group. The back pain group demonstrated more force-matching error while increasing than decreasing force output (mean difference=1.74, P<0.001, 95% CI [0.87, 2.61]). A unit increase in force-matching error while decreasing force output is associated with a 47% increase in Oswestry score in the back pain group (R2=0.19, P=0.006). Lumbar extensor muscle force control is compromised in chronic low back pain patients. Force-matching error predicts disability, confirming the validity of our force control protocol for chronic low back pain patients. Copyright © 2017 Elsevier Ltd. All rights reserved.
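A minimal sketch of the force-control metric described above: RMSE between output and target force over the whole trial and separately over the increasing and decreasing portions of the target curve. Splitting by the sign of the target slope and the function name are assumptions; the study's exact segmentation and normalization are not given in the abstract.

```python
import numpy as np

def force_matching_errors(force, target):
    """RMSE between force output and target force over the whole trial, and
    separately over the increasing and decreasing portions of the target curve
    (portions identified by the sign of the target slope)."""
    force, target = np.asarray(force, float), np.asarray(target, float)
    rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
    rising = np.gradient(target) > 0
    return {"overall": rmse(force, target),
            "increasing": rmse(force[rising], target[rising]),
            "decreasing": rmse(force[~rising], target[~rising])}

# Toy example: a triangular target ramp and a noisy response
t = np.linspace(0, 10, 500)
target = np.minimum(t, 10 - t)
force = target + np.random.normal(0, 0.2, t.size)
print(force_matching_errors(force, target))
```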
NASA Astrophysics Data System (ADS)
Morimae, Tomoyuki; Fujii, Keisuke; Nishimura, Harumichi
2017-04-01
The one-clean qubit model (or the DQC1 model) is a restricted model of quantum computing where only a single qubit of the initial state is pure and others are maximally mixed. Although the model is not universal, it can efficiently solve several problems whose classical efficient solutions are not known. Furthermore, it was recently shown that if the one-clean qubit model is classically efficiently simulated, the polynomial hierarchy collapses to the second level. A disadvantage of the one-clean qubit model is, however, that the clean qubit is too clean: for example, in realistic NMR experiments, polarizations are not high enough to have the perfectly pure qubit. In this paper, we consider a more realistic one-clean qubit model, where the clean qubit is not clean, but depolarized. We first show that, for any polarization, a multiplicative-error calculation of the output probability distribution of the model is possible in a classical polynomial time if we take an appropriately large multiplicative error. The result is in strong contrast with that of the ideal one-clean qubit model where the classical efficient multiplicative-error calculation (or even the sampling) with the same amount of error causes the collapse of the polynomial hierarchy. We next show that, for any polarization lower-bounded by an inverse polynomial, a classical efficient sampling (in terms of a sufficiently small multiplicative error or an exponentially small additive error) of the output probability distribution of the model is impossible unless BQP (bounded error quantum polynomial time) is contained in the second level of the polynomial hierarchy, which suggests the hardness of the classical efficient simulation of the one nonclean qubit model.
How to minimize perceptual error and maximize expertise in medical imaging
NASA Astrophysics Data System (ADS)
Kundel, Harold L.
2007-03-01
Visual perception is such an intimate part of human experience that we assume that it is entirely accurate. Yet, perception accounts for about half of the errors made by radiologists using adequate imaging technology. The true incidence of errors that directly affect patient well being is not known but it is probably at the lower end of the reported values of 3 to 25%. Errors in screening for lung and breast cancer are somewhat better characterized than errors in routine diagnosis. About 25% of cancers actually recorded on the images are missed and cancer is falsely reported in about 5% of normal people. Radiologists must strive to decrease error not only because of the potential impact on patient care but also because substantial variation among observers undermines confidence in the reliability of imaging diagnosis. Observer variation also has a major impact on technology evaluation because the variation between observers is frequently greater than the difference in the technologies being evaluated. This has become particularly important in the evaluation of computer aided diagnosis (CAD). Understanding the basic principles that govern the perception of medical images can provide a rational basis for making recommendations for minimizing perceptual error. It is convenient to organize thinking about perceptual error into five steps. 1) The initial acquisition of the image by the eye-brain (contrast and detail perception). 2) The organization of the retinal image into logical components to produce a literal perception (bottom-up, global, holistic). 3) Conversion of the literal perception into a preferred perception by resolving ambiguities in the literal perception (top-down, simulation, synthesis). 4) Selective visual scanning to acquire details that update the preferred perception. 5) Apply decision criteria to the preferred perception. The five steps are illustrated with examples from radiology with suggestions for minimizing error. The role of perceptual learning in the development of expertise is also considered.
NASA Astrophysics Data System (ADS)
Kocifaj, Miroslav; Gueymard, Christian A.
2011-02-01
Aerosol optical depth (AOD) has a crucial importance for estimating the optical properties of the atmosphere, and is constantly present in optical models of aerosol systems. Any error in aerosol optical depth (∂AOD) has direct and indirect consequences. On the one hand, such errors affect the accuracy of radiative transfer models (thus implying, e.g., potential errors in the evaluation of radiative forcing by aerosols). Additionally, any error in determining AOD is reflected in the retrieved microphysical properties of aerosol particles, which might therefore be inaccurate. Three distinct effects (circumsolar radiation, optical mass, and solar disk's brightness distribution) affecting ∂AOD are qualified and quantified in the present study. The contribution of circumsolar (CS) radiation to the measured flux density of direct solar radiation has received more attention than the two other effects in the literature. It varies rapidly with meteorological conditions and size distribution of the aerosol particles, but also with instrument field of view. Numerical simulations of the three effects just mentioned were conducted, assuming otherwise "perfect" experimental conditions. The results show that CS is responsible for the largest error in AOD, while the effect of brightness distribution (BD) has only a negligible impact. The optical mass (OM) effect yields negligible errors in AOD generally, but noticeable errors for low sun (within 10° of the horizon). In general, the OM and BD effects result in negative errors in AOD (i.e. the true AOD is smaller than that of the experimental determination), conversely to CS. Although the rapid increase in optical mass at large zenith angles can change the sign of ∂AOD, the CS contribution frequently plays the leading role in ∂AOD. To maximize the accuracy in AOD retrievals, the CS effect should not be ignored. In practice, however, this effect can be difficult to evaluate correctly unless the instantaneous aerosols size distribution is known from, e.g., inversion techniques.
Caprihan, A; Pearlson, G D; Calhoun, V D
2008-08-15
Principal component analysis (PCA) is often used to reduce the dimension of data before applying more sophisticated data analysis methods such as non-linear classification algorithms or independent component analysis. This practice is based on selecting components corresponding to the largest eigenvalues. If the ultimate goal is separation of the data into two groups, then this set of components need not have the most discriminatory power. We measured the distance between two such populations using the Mahalanobis distance and chose the eigenvectors to maximize it, a modified PCA method which we call the discriminant PCA (DPCA). DPCA was applied to diffusion tensor-based fractional anisotropy images to distinguish age-matched schizophrenia subjects from healthy controls. The performance of the proposed method was evaluated by the leave-one-out method. We show that for this fractional anisotropy data set, the classification error with 60 components was close to the minimum error and that the Mahalanobis distance was twice as large with DPCA as with PCA. Finally, by masking the discriminant function with the white matter tracts of the Johns Hopkins University atlas, we identified the left superior longitudinal fasciculus as the tract which gave the least classification error. In addition, with six optimally chosen tracts the classification error was zero.
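A simplified sketch of the component-selection idea, assuming a per-component standardized mean difference is used to rank PCA directions; the paper's actual criterion is the multivariate Mahalanobis distance over the selected set, so this greedy univariate ranking is only an illustration, and all names and the toy data are invented.

```python
import numpy as np

def dpca_rank_components(X1, X2, n_keep=10):
    """Rank PCA components of the pooled data by the standardized mean difference
    they give between the two groups, then keep the n_keep most discriminative."""
    X = np.vstack([X1, X2])
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)   # PCA by SVD
    P1, P2 = (X1 - mean) @ Vt.T, (X2 - mean) @ Vt.T           # projected groups
    pooled_var = ((len(P1) - 1) * P1.var(axis=0, ddof=1) +
                  (len(P2) - 1) * P2.var(axis=0, ddof=1)) / (len(P1) + len(P2) - 2)
    score = (P1.mean(axis=0) - P2.mean(axis=0)) ** 2 / pooled_var
    order = np.argsort(score)[::-1][:n_keep]
    return Vt[order], score[order]

# Toy example: two 50-dimensional groups with a small mean shift
rng = np.random.default_rng(0)
g1 = rng.normal(0.0, 1.0, size=(30, 50))
g2 = rng.normal(0.3, 1.0, size=(30, 50))
vecs, scores = dpca_rank_components(g1, g2, n_keep=5)
print(scores)
```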
Metabolic emergencies and the emergency physician.
Fletcher, Janice Mary
2016-02-01
Fifty percent of inborn errors of metabolism (IEM) present in later childhood and adulthood, with crises commonly precipitated by minor viral illnesses or increased protein ingestion. Many physicians only consider IEM after more common conditions (such as sepsis) have been excluded. In view of the large number of inborn errors, it might appear that their diagnosis requires precise knowledge of a large number of biochemical pathways and their interrelationship. As a matter of fact, an adequate diagnostic approach can be based on the proper use of only a few screening tests. A detailed history of antecedent events, together with these simple screening tests, can be diagnostic, leading to life-saving, targeted treatments for many disorders. Unrecognised, IEM can lead to significant mortality and morbidity. Advice is available 24/7 through the metabolic service based at the major paediatric hospital in each state and Starship Children's Health in New Zealand. © 2016 The Author. Journal of Paediatrics and Child Health © 2016 Paediatrics and Child Health Division (Royal Australasian College of Physicians).
Successful remediation of patient safety incidents: a tale of two medication errors.
Helmchen, Lorens A; Richards, Michael R; McDonald, Timothy B
2011-01-01
As patient safety acquires strategic importance for all stakeholders in the health care delivery chain, one promising mechanism centers on the proactive disclosure of medical errors to patients. Yet, disclosure and apology alone will not be effective in fully addressing patients' concerns after an adverse event unless they are paired with a remediation component. The purpose of this study was to identify key features of successful remediation efforts that accompany the proactive disclosure of medical errors to patients. We describe and contrast two recent and very similar cases of preventable medical error involving inappropriate medication at a large tertiary-care academic medical center in the Midwestern United States. Despite their similarity, the two medical errors led to very different health outcomes and remediation trajectories for the injured patients. Although one error causing no permanent harm was mismanaged to the lasting dissatisfaction of the patient, the other resulted in the death of the patient but was remediated to the point of allowing the family to come to terms with the loss and even restored a modicum of trust in the providers' sincerity. To maximize the opportunities for successful remediation, as soon as possible after the incident, providers should pledge to injured patients and their relatives that they will assist and accompany them in their recovery as long as necessary and then follow through on their pledge. As the two case studies show, it takes training and vigilance to ensure adherence to these principles and reach an optimal outcome for patients and their relatives.
Structure and inhibition of EV-D68, a virus that causes respiratory illness in children.
Liu, Yue; Sheng, Ju; Fokine, Andrei; Meng, Geng; Shin, Woong-Hee; Long, Feng; Kuhn, Richard J; Kihara, Daisuke; Rossmann, Michael G
2015-01-02
Enterovirus D68 (EV-D68) is a member of Picornaviridae and is a causative agent of recent outbreaks of respiratory illness in children in the United States. We report here the crystal structures of EV-D68 and its complex with pleconaril, a capsid-binding compound that had been developed as an anti-rhinovirus drug. The hydrophobic drug-binding pocket in viral protein 1 contained density that is consistent with a fatty acid of about 10 carbon atoms. This density could be displaced by pleconaril. We also showed that pleconaril inhibits EV-D68 at a half-maximal effective concentration of 430 nanomolar and might, therefore, be a possible drug candidate to alleviate EV-D68 outbreaks. Copyright © 2015, American Association for the Advancement of Science.
Javaugue, François-Charles; Recordon-Pinson, Patricia; Decoin, Madeleine; Masquelier, Bernard; Cazanave, Charles; Neau, Didier; Dupon, Michel; Ragnaud, Jean-Marie; Fleury, Hervé J
2012-09-01
The molecular characterization of non-B HIV type 1 subtypes and the sociodemographic baseline characteristics have been studied for 114 non-B HIV-1-infected patients followed at the University Hospital of Bordeaux, France, and diagnosed as HIV infected between 1989 and 2009. Individuals enrolled in this study were mainly women with heterosexual transmission in West and Central Africa and who have been discovered to be HIV positive during pregnancy. Nevertheless, HIV acquisition among individuals born in France was significantly increasing. Recombinant form CRF02_AG was the most frequent subtype (38%) among a highly diverse viral background since 19 subtypes and CRFs have been characterized with a maximal diversity observed in the past decade.
Structure and inhibition of EV-D68, a virus that causes respiratory illness in children
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yue; Sheng, Ju; Fokine, Andrei
Enterovirus D68 (EV-D68) is a member of Picornaviridae and is a causative agent of recent outbreaks of respiratory illness in children in the United States. We report in this paper the crystal structures of EV-D68 and its complex with pleconaril, a capsid-binding compound that had been developed as an anti-rhinovirus drug. The hydrophobic drug-binding pocket in viral protein 1 contained density that is consistent with a fatty acid of about 10 carbon atoms. This density could be displaced by pleconaril. Finally, we also showed that pleconaril inhibits EV-D68 at a half-maximal effective concentration of 430 nanomolar and might, therefore, be a possible drug candidate to alleviate EV-D68 outbreaks.
Adaptive pre-specification in randomized trials with and without pair-matching.
Balzer, Laura B; van der Laan, Mark J; Petersen, Maya L
2016-11-10
In randomized trials, adjustment for measured covariates during the analysis can reduce variance and increase power. To avoid misleading inference, the analysis plan must be pre-specified. However, it is often unclear a priori which baseline covariates (if any) should be adjusted for in the analysis. Consider, for example, the Sustainable East Africa Research in Community Health (SEARCH) trial for HIV prevention and treatment. There are 16 matched pairs of communities and many potential adjustment variables, including region, HIV prevalence, male circumcision coverage, and measures of community-level viral load. In this paper, we propose a rigorous procedure to data-adaptively select the adjustment set, which maximizes the efficiency of the analysis. Specifically, we use cross-validation to select from a pre-specified library the candidate targeted maximum likelihood estimator (TMLE) that minimizes the estimated variance. For further gains in precision, we also propose a collaborative procedure for estimating the known exposure mechanism. Our small sample simulations demonstrate the promise of the methodology to maximize study power, while maintaining nominal confidence interval coverage. We show how our procedure can be tailored to the scientific question (intervention effect for the study sample vs. for the target population) and study design (pair-matched or not). Copyright © 2016 John Wiley & Sons, Ltd.
Cell-Mediated Immunity to Target the Persistent Human Immunodeficiency Virus Reservoir.
Riley, James L; Montaner, Luis J
2017-03-15
Effective clearance of virally infected cells requires the sequential activity of innate and adaptive immunity effectors. In human immunodeficiency virus (HIV) infection, naturally induced cell-mediated immune responses rarely eradicate infection. However, optimized immune responses could potentially be leveraged in HIV cure efforts if epitope escape and lack of sustained effector memory responses were to be addressed. Here we review leading HIV cure strategies that harness cell-mediated control against HIV in stably suppressed antiretroviral-treated subjects. We focus on strategies that may maximize target recognition and eradication by the sequential activation of a reconstituted immune system, together with delivery of optimal T-cell responses that can eliminate the reservoir and serve as means to maintain control of HIV spread in the absence of antiretroviral therapy (ART). As evidenced by the evolution of ART, we argue that a combination of immune-based strategies will be a superior path to cell-mediated HIV control and eradication. Available data from several human pilot trials already identify target strategies that may maximize antiviral pressure by joining innate and engineered T cell responses toward testing for sustained HIV remission and/or cure. © The Author 2017. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail: journals.permissions@oup.com.
Color coding of control room displays: the psychocartography of visual layering effects.
Van Laar, Darren; Deshe, Ofer
2007-06-01
To evaluate which of three color coding methods (monochrome, maximally discriminable, and visual layering) used to code four types of control room display format (bars, tables, trend, mimic) was superior in two classes of task (search, compare). It has recently been shown that color coding of visual layers, as used in cartography, may be used to color code any type of information display, but this has yet to be fully evaluated. Twenty-four people took part in a 2 (task) x 3 (coding method) x 4 (format) wholly repeated measures design. The dependent variables assessed were target location reaction time, error rates, workload, and subjective feedback. Overall, the visual layers coding method produced significantly faster reaction times than did the maximally discriminable and the monochrome methods for both the search and compare tasks. No significant difference in errors was observed between conditions for either task type. Significantly less perceived workload was experienced with the visual layers coding method, which was also rated more highly than the other coding methods on a 14-item visual display quality questionnaire. The visual layers coding method is superior to other color coding methods for control room displays when the method supports the user's task. The visual layers color coding method has wide applicability to the design of all complex information displays utilizing color coding, from the most maplike (e.g., air traffic control) to the most abstract (e.g., abstracted ecological display).
NASA Astrophysics Data System (ADS)
Ren, Xiaoqiang; Yan, Jiaqi; Mo, Yilin
2018-03-01
This paper studies binary hypothesis testing based on measurements from a set of sensors, a subset of which can be compromised by an attacker. The measurements from a compromised sensor can be manipulated arbitrarily by the adversary. The asymptotic exponential rate with which the probability of error goes to zero is adopted to indicate the detection performance of a detector. In practice, we expect the attack on sensors to be sporadic, and therefore the system may operate with all the sensors being benign for extended periods of time. This motivates us to consider the trade-off between the detection performance of a detector, i.e., the probability of error, when the attacker is absent (defined as efficiency) and the worst-case detection performance when the attacker is present (defined as security). We first provide the fundamental limits of this trade-off, and then propose a detection strategy that achieves these limits. We then consider a special case where there is no trade-off between security and efficiency. In other words, our detection strategy can achieve the maximal efficiency and the maximal security simultaneously. Two extensions of the secure hypothesis testing problem are also studied, and fundamental limits and achievability results are provided: 1) a subset of sensors, namely "secure" sensors, is assumed to be equipped with better security countermeasures and hence guaranteed to be benign; 2) detection performance with an unknown number of compromised sensors. Numerical examples are given to illustrate the main results.
Reduction of ZTD outliers through improved GNSS data processing and screening strategies
NASA Astrophysics Data System (ADS)
Stepniak, Katarzyna; Bock, Olivier; Wielgosz, Pawel
2018-03-01
Though Global Navigation Satellite System (GNSS) data processing has been significantly improved over the years, it is still commonly observed that zenith tropospheric delay (ZTD) estimates contain many outliers which are detrimental to meteorological and climatological applications. In this paper, we show that ZTD outliers in double-difference processing are mostly caused by sub-daily data gaps at reference stations, which cause disconnections of clusters of stations from the reference network and common mode biases due to the strong correlation between stations in short baselines. They can reach a few centimetres in ZTD and usually coincide with a jump in formal errors. The magnitude and sign of these biases are impossible to predict because they depend on different errors in the observations and on the geometry of the baselines. We elaborate and test a new baseline strategy which solves this problem and significantly reduces the number of outliers compared to the standard strategy commonly used for positioning (e.g. determination of national reference frame) in which the pre-defined network is composed of a skeleton of reference stations to which secondary stations are connected in a star-like structure. The new strategy is also shown to perform better than the widely used strategy maximizing the number of observations available in many GNSS programs. The reason is that observations are maximized before processing, whereas the final number of used observations can be dramatically lower because of data rejection (screening) during the processing. The study relies on the analysis of 1 year of GPS (Global Positioning System) data from a regional network of 136 GNSS stations processed using Bernese GNSS Software v.5.2. A post-processing screening procedure is also proposed to detect and remove a few outliers which may still remain due to short data gaps. It is based on a combination of range checks and outlier checks of ZTD and formal errors. The accuracy of the final screened GPS ZTD estimates is assessed by comparison to ERA-Interim reanalysis.
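A minimal sketch of the kind of post-processing screening described above, combining range checks on ZTD and its formal error with a robust median/MAD outlier check; all thresholds, the function interface, and the toy data are illustrative assumptions rather than the values used in the study.

```python
import numpy as np

def screen_ztd(ztd_mm, formal_err_mm, ztd_range=(1000.0, 3000.0),
               err_max_mm=10.0, nsigma=5.0):
    """Combine range checks on ZTD and its formal error with a robust
    median/MAD outlier check; returns a boolean mask of accepted epochs."""
    ztd = np.asarray(ztd_mm, float)
    err = np.asarray(formal_err_mm, float)
    ok = (ztd > ztd_range[0]) & (ztd < ztd_range[1]) & (err < err_max_mm)
    med = np.median(ztd[ok])
    mad = 1.4826 * np.median(np.abs(ztd[ok] - med))    # robust sigma estimate
    ok &= np.abs(ztd - med) < nsigma * mad
    return ok

# Toy example: one ZTD outlier and one epoch with an inflated formal error
ztd = np.array([2300.0, 2310.0, 2295.0, 2500.0, 2305.0])
sig = np.array([1.5, 1.4, 1.6, 1.5, 25.0])
print(screen_ztd(ztd, sig))
```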
NASA Astrophysics Data System (ADS)
Lisson, Jerold B.; Mounts, Darryl I.; Fehniger, Michael J.
1992-08-01
Localized wavefront performance analysis (LWPA) is a system that allows the full utilization of the system optical transfer function (OTF) for the specification and acceptance of hybrid imaging systems. We show that LWPA dictates the correction of wavefront errors with the greatest impact on critical imaging spatial frequencies. This is accomplished by the generation of an imaging performance map, analogous to a map of the optic pupil error, using a local OTF. The resulting performance map, a function of transfer-function spatial frequency, is directly relatable to the primary viewing condition of the end user. In addition to optimizing quality for the viewer, the system has the potential for an improved matching of the optical and electronic bandpass of the imager and for the development of more realistic acceptance specifications. The LWPA system generates a local optical quality factor (LOQF) in the form of a map analogous to that used for the presentation and evaluation of wavefront errors. In conjunction with the local phase transfer function (LPTF), it can be used for maximally efficient specification and correction of imaging system pupil errors. The LOQF and LPTF are respectively equivalent to the global modulation transfer function (MTF) and phase transfer function (PTF) parts of the OTF. The LPTF is related to the difference of the average errors in separated regions of the pupil.
Wiese, Steffen; Teutenberg, Thorsten; Schmidt, Torsten C
2011-09-28
In the present work it is shown that the linear elution strength (LES) model which was adapted from temperature-programming gas chromatography (GC) can also be employed to predict retention times for segmented-temperature gradients based on temperature-gradient input data in liquid chromatography (LC) with high accuracy. The LES model assumes that retention times for isothermal separations can be predicted based on two temperature gradients and is employed to calculate the retention factor of an analyte when changing the start temperature of the temperature gradient. In this study it was investigated whether this approach can also be employed in LC. It was shown that this approximation cannot be transferred to temperature-programmed LC where a temperature range from 60°C up to 180°C is investigated. Major relative errors up to 169.6% were observed for isothermal retention factor predictions. In order to predict retention times for temperature gradients with different start temperatures in LC, another relationship is required to describe the influence of temperature on retention. Therefore, retention times for isothermal separations based on isothermal input runs were predicted using a plot of the natural logarithm of the retention factor vs. the inverse temperature and a plot of the natural logarithm of the retention factor vs. temperature. It could be shown that a plot of lnk vs. T yields more reliable isothermal/isocratic retention time predictions than a plot of lnk vs. 1/T which is usually employed. Hence, in order to predict retention times for temperature-gradients with different start temperatures in LC, two temperature gradient and two isothermal measurements have been employed. In this case, retention times can be predicted with a maximal relative error of 5.5% (average relative error: 2.9%). In comparison, if the start temperature of the simulated temperature gradient is equal to the start temperature of the input data, only two temperature-gradient measurements are required. Under these conditions, retention times can be predicted with a maximal relative error of 4.3% (average relative error: 2.2%). As an example, the systematic method development for an isothermal as well as a temperature gradient separation of selected sulfonamides by means of the adapted LES model is demonstrated using a pure water mobile phase. Both methods are compared and it is shown that the temperature-gradient separation provides some advantages over the isothermal separation in terms of limits of detection and analysis time. Copyright © 2011 Elsevier B.V. All rights reserved.
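A minimal sketch of the ln k versus T relationship the authors found more reliable than ln k versus 1/T: fit a straight line to ln k from two isothermal input runs and interpolate to a new column temperature. The retention factors in the example are invented for illustration.

```python
import numpy as np

def predict_isothermal_k(T_inputs, k_inputs, T_new):
    """Predict an isothermal retention factor at T_new from two isothermal input
    runs, using a straight-line fit of ln k versus column temperature (deg C)."""
    lnk = np.log(np.asarray(k_inputs, float))
    slope, intercept = np.polyfit(np.asarray(T_inputs, float), lnk, 1)
    return float(np.exp(slope * T_new + intercept))

# Example with invented retention factors at 60 and 180 deg C
print(predict_isothermal_k([60.0, 180.0], [12.0, 0.8], 120.0))
```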
Mira, Nieves Orta; Serrano, María del Remedio Guna; Martínez, José Carlos Latorre; Ovies, María Rosario; Pérez, José L; Cardona, Concepción Gimeno
2010-01-01
Human immunodeficiency virus type 1 (HIV-1) and hepatitis C virus (HCV) viral load determinations are among the most relevant markers for the follow up of patients infected with these viruses. External quality control tools are crucial to ensure the accuracy of results obtained by microbiology laboratories. This article summarizes the results obtained from the 2008 SEIMC External Quality Control Program for HIV-1 and HCV viral loads. In the HIV-1 program, a total of five standards were sent. One standard consisted of seronegative human plasma, while the remaining four contained plasma from 3 different viremic patients, in the range of 2-5 log(10) copies/mL; two of these standards were identical, aiming to determine repeatability. The specificity was complete for all commercial methods, and no false positive results were reported by the participants. A significant proportion of the laboratories (24% on average) obtained values out of the accepted range (mean +/- 0.2 log(10) copies/mL), depending on the standard and on the method used for quantification. Repeatability was very good, with up to 95% of laboratories reporting results within the limits (D < 0.5 log(10) copies/mL). The HCV program consisted of two standards with different viral load contents. Most of the participants (88.7%) obtained results within the accepted range (mean +/- 1.96 SD log(10) IU/mL). Post-analytical errors due to mistranscription of the results were detected for HCV, but not for the HIV-1 program. Data from this analysis reinforce the utility of proficiency programmes to ensure the quality of the results obtained by a particular laboratory, as well as the importance of the post-analytical phase on the overall quality. Due to the remarkable interlaboratory variability, it is advisable to use the same method and the same laboratory for patient follow up. 2010 Elsevier España S.L. All rights reserved.
Sequencing artifacts in the type A influenza databases and attempts to correct them.
Suarez, David L; Chester, Nikki; Hatfield, Jason
2014-07-01
There are over 276,000 influenza gene sequences in public databases, with the quality of the sequences determined by the contributor. As part of a high school class project, influenza sequences with possible errors were identified in the public databases based on the size of the gene being longer than expected, with the hypothesis that these sequences would have an error. Students contacted sequence submitters, alerting them to the possible sequence issue(s) and requesting that the suspect sequence(s) be corrected as appropriate. Type A influenza viruses were screened, and gene segments longer than the accepted size were identified for further analysis. Attention was placed on sequences with additional nucleotides upstream or downstream of the highly conserved non-coding ends of the viral segments. A total of 1081 sequences were identified that met this criterion. Three types of errors were commonly observed: non-influenza primer sequence was not removed from the sequence; PCR product was cloned and plasmid sequence was included in the sequence; and Taq polymerase added an adenine at the end of the PCR product. Internal insertions of nucleotide sequence were also commonly observed, but in many cases it was unclear if the sequence was correct or actually contained an error. A total of 215 sequences, or 22.8% of the suspect sequences, were corrected in the public databases in the first year of the student project. Unfortunately, 138 additional sequences with possible errors were added to the databases in the second year. Additional awareness of the need for data integrity of sequences submitted to public databases is needed to fully reap the benefits of these large data sets. © 2014 The Authors. Influenza and Other Respiratory Viruses Published by John Wiley & Sons Ltd.
Forecasting in foodservice: model development, testing, and evaluation.
Miller, J L; Thompson, P A; Orabella, M M
1991-05-01
This study was designed to develop, test, and evaluate mathematical models appropriate for forecasting menu-item production demand in foodservice. Data were collected from residence and dining hall foodservices at Ohio State University. Objectives of the study were to collect, code, and analyze the data; develop and test models using actual operation data; and compare forecasting results with current methods in use. Customer count was forecast using deseasonalized simple exponential smoothing. Menu-item demand was forecast by multiplying the count forecast by a predicted preference statistic. Forecasting models were evaluated using mean squared error, mean absolute deviation, and mean absolute percentage error techniques. All models were more accurate than current methods. A broad spectrum of forecasting techniques could be used by foodservice managers with access to a personal computer and spread-sheet and database-management software. The findings indicate that mathematical forecasting techniques may be effective in foodservice operations to control costs, increase productivity, and maximize profits.
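A minimal sketch of the forecasting and evaluation steps named above: one-step-ahead simple exponential smoothing and the mean squared error, mean absolute deviation, and mean absolute percentage error metrics. The smoothing constant and toy demand series are illustrative, and the deseasonalization step (e.g., dividing by day-of-week indices) is omitted for brevity.

```python
import numpy as np

def ses_forecast(y, alpha=0.3):
    """One-step-ahead simple exponential smoothing forecasts for a demand series."""
    y = np.asarray(y, float)
    f = np.empty_like(y)
    f[0] = y[0]
    for t in range(1, len(y)):
        f[t] = alpha * y[t - 1] + (1 - alpha) * f[t - 1]
    return f

def forecast_errors(actual, forecast):
    """Evaluation metrics named in the study: MSE, MAD and MAPE."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    e = actual - forecast
    return {"MSE": float(np.mean(e ** 2)),
            "MAD": float(np.mean(np.abs(e))),
            "MAPE": float(100.0 * np.mean(np.abs(e / actual)))}

# Toy daily customer counts
counts = [520, 535, 498, 510, 545, 530, 515]
print(forecast_errors(counts, ses_forecast(counts)))
```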
NASA Astrophysics Data System (ADS)
Alexandrou, Constantia; Athenodorou, Andreas; Cichy, Krzysztof; Constantinou, Martha; Horkel, Derek P.; Jansen, Karl; Koutsou, Giannis; Larkin, Conor
2018-04-01
We compare lattice QCD determinations of topological susceptibility using a gluonic definition from the gradient flow and a fermionic definition from the spectral-projector method. We use ensembles with dynamical light, strange and charm flavors of maximally twisted mass fermions. For both definitions of the susceptibility we employ ensembles at three values of the lattice spacing and several quark masses at each spacing. The data are fitted to chiral perturbation theory predictions with a discretization term to determine the continuum chiral condensate in the massless limit and estimate the overall discretization errors. We find that both approaches lead to compatible results in the continuum limit, but the gluonic ones are much more affected by cutoff effects. This finally yields a much smaller total error in the spectral-projector results. We show that there exists, in principle, a value of the spectral cutoff which would completely eliminate discretization effects in the topological susceptibility.
Archetypal Analysis for Sparse Representation-Based Hyperspectral Sub-Pixel Quantification
NASA Astrophysics Data System (ADS)
Drees, L.; Roscher, R.
2017-05-01
This paper focuses on the quantification of land cover fractions in an urban area of Berlin, Germany, using simulated hyperspectral EnMAP data with a spatial resolution of 30 m × 30 m. For this, sparse representation is applied, where each pixel with unknown surface characteristics is expressed by a weighted linear combination of elementary spectra with known land cover class. The elementary spectra are determined from image reference data using simplex volume maximization, which is a fast heuristic technique for archetypal analysis. In the experiments, the estimation of class fractions based on the archetypal spectral library is compared to the estimation obtained by a manually designed spectral library by means of reconstruction error, mean absolute error of the fraction estimates, sum of fractions, and the number of used elementary spectra. We show that a collection of archetypes can be an adequate and efficient alternative to the manually designed spectral library with respect to the criteria mentioned.
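A simplified stand-in for the sparse-representation step described above, expressing a pixel as a non-negative combination of library spectra (here via non-negative least squares) and aggregating the weights per land-cover class; the paper's actual solver and sparsity constraint may differ, and the library, labels, and toy pixel below are invented.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(pixel, library, labels):
    """Express one pixel as a non-negative linear combination of library spectra
    and aggregate the weights per land-cover class into normalized fractions.
    library has shape (n_spectra, n_bands); labels gives the class of each spectrum."""
    weights, recon_err = nnls(library.T, np.asarray(pixel, float))
    fractions = {}
    for w, lab in zip(weights, labels):
        fractions[lab] = fractions.get(lab, 0.0) + w
    total = sum(fractions.values())
    if total > 0:
        fractions = {k: v / total for k, v in fractions.items()}
    return fractions, recon_err

# Toy example: four library spectra (two classes) over five bands
lib = np.abs(np.random.rand(4, 5)) + 0.1
labels = ["vegetation", "vegetation", "roof", "roof"]
pixel = 0.7 * lib[0] + 0.3 * lib[2]
print(unmix_pixel(pixel, lib, labels))
```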
Optimized multiple linear mappings for single image super-resolution
NASA Astrophysics Data System (ADS)
Zhang, Kaibing; Li, Jie; Xiong, Zenggang; Liu, Xiuping; Gao, Xinbo
2017-12-01
Learning piecewise linear regression has been recognized as an effective way for example learning-based single image super-resolution (SR) in literature. In this paper, we employ an expectation-maximization (EM) algorithm to further improve the SR performance of our previous multiple linear mappings (MLM) based SR method. In the training stage, the proposed method starts with a set of linear regressors obtained by the MLM-based method, and then jointly optimizes the clustering results and the low- and high-resolution subdictionary pairs for regression functions by using the metric of the reconstruction errors. In the test stage, we select the optimal regressor for SR reconstruction by accumulating the reconstruction errors of m-nearest neighbors in the training set. Thorough experimental results carried on six publicly available datasets demonstrate that the proposed SR method can yield high-quality images with finer details and sharper edges in terms of both quantitative and perceptual image quality assessments.
Improving z-tracking accuracy in the two-photon single-particle tracking microscope.
Liu, C; Liu, Y-L; Perillo, E P; Jiang, N; Dunn, A K; Yeh, H-C
2015-10-12
Here, we present a method that can improve the z-tracking accuracy of the recently invented TSUNAMI (Tracking of Single particles Using Nonlinear And Multiplexed Illumination) microscope. This method utilizes a maximum likelihood estimator (MLE) to determine the particle's 3D position that maximizes the likelihood of the observed time-correlated photon count distribution. Our Monte Carlo simulations show that the MLE-based tracking scheme can improve the z-tracking accuracy of the TSUNAMI microscope by 1.7-fold. In addition, MLE is also found to reduce the temporal correlation of the z-tracking error. Taking advantage of the smaller and less temporally correlated z-tracking error, we have precisely recovered the hybridization-melting kinetics of a DNA model system from thousands of short single-particle trajectories in silico. Our method can be generally applied to other 3D single-particle tracking techniques.
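A generic sketch of the MLE idea: grid-search the axial position that maximizes a Poisson log-likelihood of the time-gated photon counts. The detection model `expected_counts(z)` stands in for the TSUNAMI-specific model, which is not described in the abstract; the two-channel Gaussian-profile model in the example is invented for illustration.

```python
import numpy as np

def mle_z(counts, z_grid, expected_counts):
    """Grid-search MLE of the axial position z. `counts` are photons in each
    time-gated channel; `expected_counts(z)` returns the model mean count per
    channel. Poisson log-likelihood: sum_i [ k_i * ln(lambda_i) - lambda_i ]."""
    counts = np.asarray(counts, float)
    best_z, best_ll = None, -np.inf
    for z in z_grid:
        lam = np.clip(expected_counts(z), 1e-12, None)
        ll = float(np.sum(counts * np.log(lam) - lam))
        if ll > best_ll:
            best_z, best_ll = z, ll
    return best_z

# Invented two-channel detection model with Gaussian axial profiles
model = lambda z: np.array([50 * np.exp(-(z - 1.0) ** 2),
                            50 * np.exp(-(z + 1.0) ** 2)]) + 1.0
print(mle_z([40, 5], np.linspace(-3, 3, 601), model))
```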
GTARG - The TOPEX/Poseidon ground track maintenance maneuver targeting program
NASA Technical Reports Server (NTRS)
Shapiro, Bruce E.; Bhat, Ramachandra S.
1993-01-01
GTARG is a computer program used to design orbit maintenance maneuvers for the TOPEX/Poseidon satellite. These maneuvers ensure that the ground track is kept within +/-1 km of a 9.9-day exact repeat pattern. Maneuver parameters are determined using either of two targeting strategies: longitude targeting, which maximizes the time between maneuvers, and time targeting, in which maneuvers are targeted to occur at specific intervals. The GTARG algorithm propagates nonsingular mean elements, taking into account anticipated error sigmas in orbit determination, Delta v execution, drag prediction and Delta v quantization. A satellite-unique drag model is used which incorporates an approximate mean orbital Jacchia-Roberts atmosphere and a variable mean area model. Maneuver Delta v magnitudes are targeted to precisely maintain either the unbiased ground track itself, or a comfortable (3 sigma) error envelope about the unbiased ground track.
Nonlinear Quantum Metrology of Many-Body Open Systems
NASA Astrophysics Data System (ADS)
Beau, M.; del Campo, A.
2017-07-01
We introduce general bounds for the parameter estimation error in nonlinear quantum metrology of many-body open systems in the Markovian limit. Given a k-body Hamiltonian and p-body Lindblad operators, the estimation error of a Hamiltonian parameter using a Greenberger-Horne-Zeilinger state as a probe is shown to scale as N^{-[k-(p/2)]}, surpassing the shot-noise limit for 2k > p + 1. Metrology equivalence between initial product states and maximally entangled states is established for p ≥ 1. We further show that one can estimate the system-environment coupling parameter with precision N^{-(p/2)}, while many-body decoherence enhances the precision to N^{-k} in the noise-amplitude estimation of a fluctuating k-body Hamiltonian. For the long-range Ising model, we show that the precision of this parameter beats the shot-noise limit when the range of interactions is below a threshold value.
Umar, Amara; Javaid, Nadeem; Ahmad, Ashfaq; Khan, Zahoor Ali; Qasim, Umar; Alrajeh, Nabil; Hayat, Amir
2015-06-18
Performance enhancement of Underwater Wireless Sensor Networks (UWSNs) in terms of throughput maximization, energy conservation and Bit Error Rate (BER) minimization is a potential research area. However, limited available bandwidth, high propagation delay, highly dynamic network topology, and high error probability lead to performance degradation in these networks. In this regard, many cooperative communication protocols have been developed that investigate either the physical layer or the Medium Access Control (MAC) layer; however, the network layer is still unexplored. More specifically, cooperative routing has not yet been jointly considered with sink mobility. Therefore, this paper aims to enhance network reliability and efficiency via dominating-set-based cooperative routing and sink mobility. The proposed work is validated via simulations, which show relatively improved performance in terms of the selected performance metrics.
Improving z-tracking accuracy in the two-photon single-particle tracking microscope
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, C.; Liu, Y.-L.; Perillo, E. P.
Here, we present a method that can improve the z-tracking accuracy of the recently invented TSUNAMI (Tracking of Single particles Using Nonlinear And Multiplexed Illumination) microscope. This method utilizes a maximum likelihood estimator (MLE) to determine the particle's 3D position that maximizes the likelihood of the observed time-correlated photon count distribution. Our Monte Carlo simulations show that the MLE-based tracking scheme can improve the z-tracking accuracy of the TSUNAMI microscope by 1.7-fold. In addition, MLE is also found to reduce the temporal correlation of the z-tracking error. Taking advantage of the smaller and less temporally correlated z-tracking error, we have precisely recovered the hybridization-melting kinetics of a DNA model system from thousands of short single-particle trajectories in silico. Our method can be generally applied to other 3D single-particle tracking techniques.
Random access in large-scale DNA data storage.
Organick, Lee; Ang, Siena Dumas; Chen, Yuan-Jyue; Lopez, Randolph; Yekhanin, Sergey; Makarychev, Konstantin; Racz, Miklos Z; Kamath, Govinda; Gopalan, Parikshit; Nguyen, Bichlien; Takahashi, Christopher N; Newman, Sharon; Parker, Hsing-Yeh; Rashtchian, Cyrus; Stewart, Kendall; Gupta, Gagan; Carlson, Robert; Mulligan, John; Carmean, Douglas; Seelig, Georg; Ceze, Luis; Strauss, Karin
2018-03-01
Synthetic DNA is durable and can encode digital data with high density, making it an attractive medium for data storage. However, recovering stored data on a large scale currently requires all the DNA in a pool to be sequenced, even if only a subset of the information needs to be extracted. Here, we encode and store 35 distinct files (over 200 MB of data) in more than 13 million DNA oligonucleotides, and show that we can recover each file individually and with no errors, using a random access approach. We design and validate a large library of primers that enable individual recovery of all files stored within the DNA. We also develop an algorithm that greatly reduces the sequencing read coverage required for error-free decoding by maximizing information from all sequence reads. These advances demonstrate a viable, large-scale system for DNA data storage and retrieval.
On the capacity of ternary Hebbian networks
NASA Technical Reports Server (NTRS)
Baram, Yoram
1991-01-01
Networks of ternary neurons storing random vectors over the set {-1, 0, 1} by the so-called Hebbian rule are considered. It is shown that the maximal number of stored patterns that are equilibrium states of the network, with probability tending to one as N tends to infinity, is at least on the order of N^(2-1/alpha)/K, where N is the number of neurons, K is the number of nonzero elements in a pattern, and t = alpha*K, with alpha between 1/2 and 1, is the threshold in the neuron function. While, for small K, this bound is similar to that obtained for fully connected binary networks, the number of interneural connections required in the ternary case is considerably smaller. Similar bounds, incorporating error probabilities, are shown to guarantee, in the same probabilistic sense, the correction of errors in the nonzero elements and in the location of these elements.
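A minimal sketch of the storage and retrieval rule analyzed above: outer-product Hebbian storage of K-sparse ternary patterns and a single thresholded update with t = alpha*K. The network size, pattern count, and alpha in the example are illustrative, not values from the paper.

```python
import numpy as np

def hebbian_store(patterns):
    """Outer-product Hebbian storage of ternary patterns (entries in {-1, 0, 1});
    self-connections are removed."""
    W = patterns.T @ patterns
    np.fill_diagonal(W, 0)
    return W

def recall(W, x, K, alpha=0.75):
    """One synchronous update with threshold t = alpha*K (1/2 < alpha < 1):
    outputs below the threshold in magnitude are set to 0."""
    h = W @ x
    t = alpha * K
    out = np.zeros_like(h)
    out[h >= t] = 1
    out[h <= -t] = -1
    return out

# Store 10 random K-sparse ternary patterns in a 200-neuron network
rng = np.random.default_rng(1)
N, K, M = 200, 20, 10
patterns = np.zeros((M, N))
for p in patterns:
    idx = rng.choice(N, K, replace=False)
    p[idx] = rng.choice([-1, 1], K)
W = hebbian_store(patterns)
print(np.array_equal(recall(W, patterns[0], K), patterns[0]))  # typically True
```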
Maximum Constrained Directivity of Oversteered End-Fire Sensor Arrays
Trucco, Andrea; Traverso, Federico; Crocco, Marco
2015-01-01
For linear arrays with fixed steering and an inter-element spacing smaller than one half of the wavelength, end-fire steering of a data-independent beamformer offers better directivity than broadside steering. The introduction of a lower bound on the white noise gain ensures the necessary robustness against random array errors and sensor mismatches. However, the optimum broadside performance can be obtained using a simple processing architecture, whereas the optimum end-fire performance requires a more complicated system (because complex weight coefficients are needed). In this paper, we reconsider the oversteering technique as a possible way to simplify the processing architecture of equally spaced end-fire arrays. We propose a method for computing the amount of oversteering and the related real-valued weight vector that allows the constrained directivity to be maximized for a given inter-element spacing. Moreover, we verify that the maximized oversteering performance is very close to the optimum end-fire performance. We conclude that optimized oversteering is a viable method for designing end-fire arrays that have better constrained directivity than broadside arrays but with a similar implementation complexity. A numerical simulation is used to perform a statistical analysis, which confirms that the maximized oversteering performance is robust against sensor mismatches. PMID:26066987
Consistency of peak and mean concentric and eccentric force using a novel squat testing device.
Stock, Matt S; Luera, Micheal J
2014-04-01
The ability to examine force curves from multiple-joint assessments combines many of the benefits of dynamic constant external resistance exercise and isokinetic dynamometry. The purpose of this investigation was to examine test-retest reliability statistics for peak and mean force using the Exerbotics eSQ during maximal concentric and eccentric squats. Seventeen resistance-trained men (mean±SD age=21±2 years) visited the laboratory on two occasions. For each trial, the subjects performed two maximal concentric and eccentric squats, and the muscle actions with the highest force values were analyzed. There were no mean differences between the trials (P>.05), and the effect sizes were <0.12. When the entire force curve was examined, the intraclass correlation coefficients (model 2,1) and standard errors of measurement, respectively, were concentric peak force=0.743 (8.8%); concentric mean force=0.804 (6.0%); eccentric peak force=0.696 (10.6%); eccentric mean force=0.736 (9.6%). These findings indicated moderate-to-high reliability for the peak and mean force values obtained from the Exerbotics eSQ during maximal squat testing. The analysis of force curves from multiple-joint testing provides researchers and practitioners with a reliable means of assessing performance, especially during concentric muscle actions.
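For readers who want to reproduce this type of reliability analysis, the sketch below computes ICC(2,1) and the standard error of measurement (expressed as a percentage of the mean) from a subjects-by-trials matrix; the data, the function names and the Shrout-Fleiss ICC(2,1) formula are standard assumptions, not the authors' code.

```python
# Illustrative sketch (not the study's code): ICC(2,1) and SEM from a
# subjects-by-trials matrix of peak/mean force values.
import numpy as np

def icc_2_1(data):
    """data: (n subjects) x (k trials) array. Returns ICC(2,1)."""
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)
    col_means = data.mean(axis=0)
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_total = np.sum((data - grand) ** 2)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)             # between-subjects mean square
    msc = ss_cols / (k - 1)             # between-trials mean square
    mse = ss_err / ((n - 1) * (k - 1))  # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def sem_percent(data, icc):
    """Standard error of measurement as a percentage of the grand mean."""
    sd = data.flatten().std(ddof=1)
    return 100.0 * sd * np.sqrt(1.0 - icc) / data.mean()

# Example with made-up forces (N) for 17 subjects x 2 trials:
rng = np.random.default_rng(0)
forces = 2000 + 200 * rng.standard_normal((17, 1)) + 60 * rng.standard_normal((17, 2))
icc = icc_2_1(forces)
print(f"ICC(2,1) = {icc:.3f}, SEM = {sem_percent(forces, icc):.1f}%")
```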
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albert, J; Labarbe, R; Sterpin, E
2016-06-15
Purpose: To understand the extent to which the prompt gamma camera measurements can be used to predict the residual proton range due to setup errors and errors in the calibration curve. Methods: We generated ten variations on a default calibration curve (CC) and ten corresponding range maps (RM). Starting with the default RM, we chose a square array of N beamlets, which were then rotated by a random angle θ and shifted by a random vector s. We added a 5% distal Gaussian noise to each beamlet in order to introduce discrepancies that exist between the ranges predicted from the prompt gamma measurements and those simulated with Monte Carlo algorithms. For each RM, s, θ, along with an offset u in the CC, were optimized using a simple Euclidean distance between the default ranges and the ranges produced by the given RM. Results: The application of our method led to a maximal overrange of 2.0mm and underrange of 0.6mm on average. Compared to the situations where s, θ, and u were ignored, these values were larger: 2.1mm and 4.3mm. In order to quantify the need for setup error corrections, we also performed computations in which u was corrected for, but s and θ were not. This yielded: 3.2mm and 3.2mm. The average computation time for 170 beamlets was 65 seconds. Conclusion: These results emphasize the necessity to correct for setup errors and the errors in the calibration curve. The simplicity and speed of our method make it a good candidate for implementation as a tool for in-room adaptive therapy. This work also demonstrates that prompt gamma range measurements can indeed be useful in the effort to reduce range errors. Given these results, and barring further refinements, this approach is a promising step towards adaptive proton radiotherapy.
NASA Astrophysics Data System (ADS)
Khaki, M.; Schumacher, M.; Forootan, E.; Kuhn, M.; Awange, J. L.; van Dijk, A. I. J. M.
2017-10-01
Assimilation of terrestrial water storage (TWS) information from the Gravity Recovery And Climate Experiment (GRACE) satellite mission can provide significant improvements in hydrological modelling. However, the rather coarse spatial resolution of GRACE TWS and its spatially correlated errors pose considerable challenges for achieving realistic assimilation results. Consequently, successful data assimilation depends on rigorous modelling of the full error covariance matrix of the GRACE TWS estimates, as well as realistic error behavior for hydrological model simulations. In this study, we assess the application of local analysis (LA) to maximize the contribution of GRACE TWS in hydrological data assimilation. For this, we assimilate GRACE TWS into the World-Wide Water Resources Assessment system (W3RA) over the Australian continent while applying LA and accounting for existing spatial correlations using the full error covariance matrix. GRACE TWS data are applied at different spatial resolutions, including 1° to 5° grids as well as basin averages. The ensemble-based sequential filtering technique of the Square Root Analysis (SQRA) is applied to assimilate TWS data into W3RA. For each spatial scale, the performance of the data assimilation is assessed through comparison with independent in-situ groundwater and soil moisture observations. Overall, the results demonstrate that LA is able to stabilize the inversion process (within the implementation of the SQRA filter), leading to smaller errors for all spatial scales considered, with an average RMSE improvement of 54% (e.g., 52.23 mm down to 26.80 mm) across all cases with respect to groundwater in-situ measurements. Validating the assimilated results with groundwater observations indicates that LA leads to 13% better (in terms of RMSE) assimilation results compared to the cases with Gaussian error assumptions. This highlights the great potential of LA and the use of the full error covariance matrix of GRACE TWS estimates for improved data assimilation results.
Discovering Communicable Scientific Knowledge from Spatio-Temporal Data
NASA Technical Reports Server (NTRS)
Schwabacher, Mark; Langley, Pat; Norvig, Peter (Technical Monitor)
2001-01-01
This paper describes how we used regression rules to improve upon a result previously published in the Earth science literature. In such a scientific application of machine learning, it is crucially important for the learned models to be understandable and communicable. We recount how we selected a learning algorithm to maximize communicability, and then describe two visualization techniques that we developed to aid in understanding the model by exploiting the spatial nature of the data. We also report how evaluating the learned models across time let us discover an error in the data.
HOW TO WRITE A SCIENTIFIC ARTICLE
Manske, Robert C.
2012-01-01
Successful production of a written product for submission to a peer‐reviewed scientific journal requires substantial effort. Such an effort can be maximized by following a few simple suggestions when composing/creating the product for submission. By following some suggested guidelines and avoiding common errors, the process can be streamlined and success realized for even beginning/novice authors as they negotiate the publication process. The purpose of this invited commentary is to offer practical suggestions for achieving success when writing and submitting manuscripts to The International Journal of Sports Physical Therapy and other professional journals. PMID:23091783
Self-optimization and auto-stabilization of receiver in DPSK transmission system.
Jang, Y S
2008-03-17
We propose a self-optimization and auto-stabilization method for a 1-bit DMZI in DPSK transmission. Using the characteristics of eye patterns, the optical frequency transmittance of a 1-bit DMZI is thermally controlled to maximize the power difference between the constructive and destructive output ports. Unlike other techniques, this control method can be realized without additional components, making it simple and cost effective. Experimental results show that error-free performance is maintained when the carrier optical frequency variation is approximately 10% of the data rate.
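The control idea can be illustrated with a simple dither-and-climb loop that adjusts the interferometer heater to maximize the power difference between the two output ports; this is a hedged sketch only, and read_port_powers()/set_heater() are hypothetical hardware interfaces, not part of the published setup.

```python
# Hedged sketch of the control idea only (not the paper's implementation):
# dither the heater setpoint of the 1-bit delay interferometer and climb
# toward the setting that maximizes the constructive/destructive power difference.
import time

def power_difference(read_port_powers):
    p_con, p_des = read_port_powers()   # powers at constructive / destructive ports
    return p_con - p_des

def stabilize_dmzi(set_heater, read_port_powers, setpoint=0.0,
                   dither=0.01, gain=0.5, n_steps=1000):
    for _ in range(n_steps):
        set_heater(setpoint + dither)
        time.sleep(0.01)
        up = power_difference(read_port_powers)
        set_heater(setpoint - dither)
        time.sleep(0.01)
        down = power_difference(read_port_powers)
        # move the setpoint along the estimated gradient of the power difference
        setpoint += gain * (up - down) / (2 * dither)
        set_heater(setpoint)
    return setpoint
```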
Discovering Communicable Models from Earth Science Data
NASA Technical Reports Server (NTRS)
Schwabacher, Mark; Langley, Pat; Potter, Christopher; Klooster, Steven; Torregrosa, Alicia
2002-01-01
This chapter describes how we used regression rules to improve upon results previously published in the Earth science literature. In such a scientific application of machine learning, it is crucially important for the learned models to be understandable and communicable. We recount how we selected a learning algorithm to maximize communicability, and then describe two visualization techniques that we developed to aid in understanding the model by exploiting the spatial nature of the data. We also report how evaluating the learned models across time let us discover an error in the data.
CHD3 facilitates vRNP nuclear export by interacting with NES1 of influenza A virus NS2.
Hu, Yong; Liu, Xiaokun; Zhang, Anding; Zhou, Hongbo; Liu, Ziduo; Chen, Huanchun; Jin, Meilin
2015-03-01
NS2 from influenza A virus mediates Crm1-dependent vRNP nuclear export through interaction with Crm1. However, even though the nuclear export signal 1 (NES1) of NS2 does not play a requisite role in the NS2-Crm1 interaction, there is no doubt that NES1 is crucial for vRNP nuclear export. While the mechanism of NES1 is still unclear, it is speculated that certain host partners might mediate NES1 function through their interaction with NES1. In the present study, chromodomain-helicase-DNA-binding protein 3 (CHD3) was identified as a novel host nuclear protein that locates NS2 and Crm1 on dense chromatin for Crm1-dependent vRNP nuclear export. CHD3 was confirmed to interact with NES1 in NS2, and disruption of this interaction by mutation in NES1 significantly delayed viral vRNP export and viral propagation. Further, knockdown of CHD3 affected the propagation of the wild-type virus but not of the mutant with the weakened NS2-CHD3 interaction. Therefore, this study demonstrates that NES1 is required for maximal binding of NS2 to CHD3, and that the NS2-CHD3 interaction on dense chromatin contributed to NS2-mediated vRNP nuclear export.
Control of virus diseases in soybeans.
Hill, John H; Whitham, Steven A
2014-01-01
Soybean, one of the world's most important sources of animal feed and vegetable oil, can be infected by numerous viruses. However, only a small number of the viruses that can potentially infect soybean are considered as major economic problems to soybean production. Therefore, we consider management options available to control diseases caused by eight viruses that cause, or have the potential to cause, significant economic loss to producers. We summarize management tactics in use and suggest direction for the future. Clearly, the most important tactic is disease resistance. Several resistance genes are available for three of the eight viruses discussed. Other options include use of virus-free seed and avoidance of alternative virus hosts when planting. Attempts at arthropod vector control have generally not provided consistent disease management. In the future, disease management will be considerably enhanced by knowledge of the interaction between soybean and viral proteins. Identification of genes required for soybean defense may represent key regulatory hubs that will enhance or broaden the spectrum of basal resistance to viruses. It may be possible to create new recessive or dominant negative alleles of host proteins that do not support viral functions but perform normal cellular function. The future approach to virus control based on gene editing or exploiting allelic diversity points to necessary research into soybean-virus interactions. This will help to generate the knowledge needed for rational design of durable resistance that will maximize global production.
Katz, Michael G.; Fargnoli, Anthony S.; Williams, Richard D.
2013-01-01
Abstract Gene therapy is one of the most promising fields for developing new treatments for the advanced stages of ischemic and monogenetic, particularly autosomal or X-linked recessive, cardiomyopathies. The remarkable ongoing efforts in advancing various targets have largely been inspired by the results that have been achieved in several notable gene therapy trials, such as the hemophilia B and Leber's congenital amaurosis. Rate-limiting problems preventing successful clinical application in the cardiac disease area, however, are primarily attributable to inefficient gene transfer, host responses, and the lack of sustainable therapeutic transgene expression. It is arguable that these problems are directly correlated with the choice of vector, dose level, and associated cardiac delivery approach as a whole treatment system. Essentially, a delicate balance exists in maximizing gene transfer required for efficacy while remaining within safety limits. Therefore, the development of safe, effective, and clinically applicable gene delivery techniques for selected nonviral and viral vectors will certainly be invaluable in obtaining future regulatory approvals. The choice of gene transfer vector, dose level, and the delivery system are likely to be critical determinants of therapeutic efficacy. It is here that the interactions between vector uptake and trafficking, delivery route means, and the host's physical limits must be considered synergistically for a successful treatment course. PMID:24164239
Novel gemini cationic lipids with carbamate groups for gene delivery
Zhao, Yi-Nan; Qureshi, Farooq; Zhang, Shu-Biao; Cui, Shao-Hui; Wang, Bing; Chen, Hui-Ying; Lv, Hong-Tao; Zhang, Shu-Fen; Huang, Leaf
2014-01-01
To obtain efficient non-viral vectors, a series of Gemini cationic lipids with carbamate linkers between headgroups and hydrophobic tails were synthesized. They have hydrocarbon chains of 12, 14, 16 and 18 carbon atoms as tails, designated as G12, G14, G16 and G18, respectively. These Gemini cationic lipids were prepared into cationic liposomes for the study of their physicochemical properties and gene delivery. The DNA-binding ability of these Gemini cationic liposomes was much better than that of their mono-head counterparts (designated as M12, M14, M16 and M18, respectively). Within the same series of liposomes, binding ability declined with an increase in tail length. They were tested for their gene-transferring capabilities in Hep-2 and A549 cells. They showed higher transfection efficiency than their mono-head counterparts and were comparable or superior in transfection efficiency and cytotoxicity to the commercial liposomes DOTAP and Lipofectamine 2000. Our results convincingly demonstrate that the gene-transferring capabilities of these cationic lipids depended on hydrocarbon chain length. Gene transfection efficiency was maximal at a chain length of 14, as G14 can silence about 80% of luciferase expression in A549 cells. Cell uptake results indicate that Gemini lipid delivery systems could be internalised by cells very efficiently. Thus, the Gemini cationic lipids could be used as synthetic non-viral gene delivery carriers for further study. PMID:25045521
Structure-based drug discovery for combating influenza virus by targeting the PA-PB1 interaction.
Watanabe, Ken; Ishikawa, Takeshi; Otaki, Hiroki; Mizuta, Satoshi; Hamada, Tsuyoshi; Nakagaki, Takehiro; Ishibashi, Daisuke; Urata, Shuzo; Yasuda, Jiro; Tanaka, Yoshimasa; Nishida, Noriyuki
2017-08-25
Influenza virus infections are serious public health concerns throughout the world. The development of compounds with novel mechanisms of action is urgently required due to the emergence of viruses with resistance to the currently-approved anti-influenza viral drugs. We performed in silico screening using a structure-based drug discovery algorithm called Nagasaki University Docking Engine (NUDE), which is optimised for a GPU-based supercomputer (DEstination for Gpu Intensive MAchine; DEGIMA), by targeting influenza viral PA protein. The compounds selected by NUDE were tested for anti-influenza virus activity using a cell-based assay. The most potent compound, designated as PA-49, is a medium-sized quinolinone derivative bearing a tetrazole moiety, and it inhibited the replication of influenza virus A/WSN/33 at a half maximal inhibitory concentration of 0.47 μM. PA-49 has the ability to bind PA and its anti-influenza activity was promising against various influenza strains, including a clinical isolate of A(H1N1)pdm09 and type B viruses. The docking simulation suggested that PA-49 interrupts the PA-PB1 interface where important amino acids are mostly conserved in the virus strains tested, suggesting the strain independent utility. Because our NUDE/DEGIMA system is rapid and efficient, it may help effective drug discovery against the influenza virus and other emerging viruses.
Failed triple therapy in a treatment-experienced patient with genotype 6 hepatitis C infection.
Gammal, Roseann S; Spooner, Linda M; Abraham, George M
2014-02-01
The first published report of the use of triple therapy in a patient with hepatitis C virus (HCV) genotype 6 infection-a treatment that was prescribed due to incorrect HCV genotyping and which ultimately failed-is presented. A 70-year-old male U.S. resident of Vietnamese descent requested treatment for chronic HCV infection acquired decades earlier. He reported experiencing hepatitis C treatment failures twice before-13 years prior (interferon alfa monotherapy for six months) and 7 years prior (standard dual therapy with pegylated interferon alfa-2b and ribavirin for nine months). Initial viral genotyping indicated infection with HCV genotypes 1a and 6c (a form of mixed HCV disease amenable to triple therapy), and treatment with pegylated interferon alfa-2a, ribavirin, and boceprevir was initiated. By week 8 of triple therapy, the patient's viral load had decreased from 15,700,000 (7.20 log) to 462,882 (5.67 log) IU/mL, but the viral load subsequently rebounded to baseline levels, and treatment was discontinued at week 16. When repeat HCV genotyping was performed, it was discovered that initial genotyping was incorrect and that the man's infection involved not mixed genotypes but only genotype 6; he was not an appropriate candidate for triple therapy. The case emphasizes the need for clinicians to be cognizant of potential HCV genotyping errors, particularly with regard to patients of Southeast Asian descent. Three courses of interferon-based treatment, including triple therapy with boceprevir, failed to produce a sustained therapeutic response in a 70-year-old ethnic Vietnamese man with genotype 6 HCV infection.
Kortenhoeven, Cornell; Joubert, Fourie; Bastos, Armanda D S; Abolnik, Celia
2015-02-22
Extensive focus is placed on the comparative analyses of consensus genotypes in the study of West Nile virus (WNV) emergence. Few studies account for genetic change in the underlying WNV quasispecies population variants. These variants are not discernable in the consensus genome at the time of emergence, and the maintenance of mutation-selection equilibria of population variants is greatly underestimated. The emergence of lineage 1 WNV strains has been studied extensively, but recent epidemics caused by lineage 2 WNV strains in Hungary, Austria, Greece and Italy emphasize the increasing importance of this lineage to public health. In this study we explored, using Next Generation Sequencing (NGS) data of a historic lineage 2 WNV strain, the quasispecies dynamics of minority variants that contribute to cell tropism and host determination, i.e. the ability to infect different cell types or cells from different species. Minority variants contributing to host cell membrane association persist in the viral population without contributing to genetic change in the consensus genome. Minority variants are shown to maintain a stable mutation-selection equilibrium under positive selection, particularly in the capsid gene region. This study is the first to infer positive selection and the persistence of WNV haplotype variants that contribute to viral fitness without accompanying genetic change in the consensus genotype, documented solely from NGS sequence data. The approach used in this study streamlines experimental designs that seek viral minority variants accurately from NGS data whilst minimizing the influence of associated sequencing error.
NASA Astrophysics Data System (ADS)
Murray, J. R.
2017-12-01
Earth surface displacements measured at Global Navigation Satellite System (GNSS) sites record crustal deformation due, for example, to slip on faults underground. A primary objective in designing geodetic networks to study crustal deformation is to maximize the ability to recover parameters of interest like fault slip. Given Green's functions (GFs) relating observed displacement to motion on buried dislocations representing a fault, one can use various methods to estimate spatially variable slip. However, assumptions embodied in the GFs, e.g., use of a simplified elastic structure, introduce spatially correlated model prediction errors (MPE) not reflected in measurement uncertainties (Duputel et al., 2014). In theory, selection algorithms should incorporate inter-site correlations to identify measurement locations that give unique information. I assess the impact of MPE on site selection by expanding existing methods (Klein et al., 2017; Reeves and Zhe, 1999) to incorporate this effect. Reeves and Zhe's algorithm sequentially adds or removes a predetermined number of data according to a criterion that minimizes the sum of squared errors (SSE) on parameter estimates. Adapting this method to GNSS network design, Klein et al. select new sites that maximize model resolution, using trade-off curves to determine when additional resolution gain is small. Their analysis uses uncorrelated data errors and GFs for a uniform elastic half space. I compare results using GFs for spatially variable strike slip on a discretized dislocation in a uniform elastic half space, a layered elastic half space, and a layered half space with inclusion of MPE. I define an objective criterion to terminate the algorithm once the next site removal would increase SSE more than the expected incremental SSE increase if all sites had equal impact. Using a grid of candidate sites with 8 km spacing, I find the relative value of the selected sites (defined by the percent increase in SSE that further removal of each site would cause) is more uniform when MPE is included. However, the number and distribution of selected sites depends primarily on site location relative to the fault. For this test case, inclusion of MPE has minimal practical impact; I will investigate whether these findings hold for more densely spaced candidate grids and dipping faults.
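A minimal sketch of the sequential selection idea (in the spirit of the Reeves and Zhe criterion referenced above): greedily remove the candidate site whose removal least increases the sum of squared parameter errors, here taken as the trace of the posterior parameter covariance, with spatially correlated model prediction error entering through the data covariance. The matrix sizes and covariance model below are illustrative assumptions, not the study's configuration.

```python
# Illustrative sketch of sequential site removal: at each step, drop the site whose
# removal least increases trace[(G^T C^-1 G)^-1], where G maps slip parameters to
# displacements and C is the data covariance (optionally including correlated MPE).
import numpy as np

def parameter_sse(G, C):
    Cinv = np.linalg.inv(C)
    return np.trace(np.linalg.inv(G.T @ Cinv @ G))

def greedy_removal(G, C, n_keep):
    keep = list(range(G.shape[0]))
    while len(keep) > n_keep:
        scores = []
        for i in keep:
            trial = [j for j in keep if j != i]
            scores.append((parameter_sse(G[trial], C[np.ix_(trial, trial)]), i))
        best_sse, drop = min(scores)      # removal causing the smallest SSE increase
        keep.remove(drop)
    return keep

# toy example: 30 candidate sites, 5 slip parameters, spatially correlated errors
rng = np.random.default_rng(1)
G = rng.standard_normal((30, 5))
dist = np.abs(np.subtract.outer(np.arange(30), np.arange(30)))
C = np.exp(-dist / 10.0) * 1e-6           # correlated "MPE-like" covariance
print(greedy_removal(G, C, n_keep=10))
```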
Thermodynamic efficiency of nonimaging concentrators
NASA Astrophysics Data System (ADS)
Shatz, Narkis; Bortz, John; Winston, Roland
2009-08-01
The purpose of a nonimaging concentrator is to transfer maximal flux from the phase space of a source to that of a target. A concentrator's performance can be expressed relative to a thermodynamic reference. We discuss consequences of Fermat's principle of geometrical optics. We review étendue dilution and optical loss mechanisms associated with nonimaging concentrators, especially for the photovoltaic (PV) role. We introduce the concept of optical thermodynamic efficiency which is a performance metric combining the first and second laws of thermodynamics. The optical thermodynamic efficiency is a comprehensive metric that takes into account all loss mechanisms associated with transferring flux from the source to the target phase space, which may include losses due to inadequate design, non-ideal materials, fabrication errors, and less than maximal concentration. As such, this metric is a gold standard for evaluating the performance of nonimaging concentrators. Examples are provided to illustrate the use of this new metric. In particular we discuss concentrating PV systems for solar power applications.
Unsupervised Deep Hashing With Pseudo Labels for Scalable Image Retrieval.
Zhang, Haofeng; Liu, Li; Long, Yang; Shao, Ling
2018-04-01
In order to achieve efficient similarity searching, hash functions are designed to encode images into low-dimensional binary codes with the constraint that similar features will have a short distance in the projected Hamming space. Recently, deep learning-based methods have become more popular and outperform traditional non-deep methods. However, without label information, most state-of-the-art unsupervised deep hashing (DH) algorithms suffer from severe performance degradation in unsupervised scenarios. One of the main reasons is that the ad-hoc encoding process cannot properly capture the visual feature distribution. In this paper, we propose a novel unsupervised framework that has two main contributions: 1) we convert the unsupervised DH model into a supervised one by discovering pseudo labels; 2) the framework unifies likelihood maximization, mutual information maximization, and quantization error minimization so that the pseudo labels can maximally preserve the distribution of visual features. Extensive experiments on three popular data sets demonstrate the advantages of the proposed method, which leads to significant performance improvement over the state-of-the-art unsupervised hashing algorithms.
TCP throughput adaptation in WiMax networks using replicator dynamics.
Anastasopoulos, Markos P; Petraki, Dionysia K; Kannan, Rajgopal; Vasilakos, Athanasios V
2010-06-01
The high-frequency segment (10-66 GHz) of the IEEE 802.16 standard seems promising for the implementation of wireless backhaul networks carrying large volumes of Internet traffic. In contrast to wireline backbone networks, where channel errors seldom occur, the TCP protocol in IEEE 802.16 Worldwide Interoperability for Microwave Access networks is conditioned exclusively by wireless channel impairments rather than by congestion. This renders a cross-layer design approach between the transport and physical layers more appropriate during fading periods. In this paper, an adaptive coding and modulation (ACM) scheme for TCP throughput maximization is presented. In the current approach, Internet traffic is modulated and coded employing an adaptive scheme that is mathematically equivalent to the replicator dynamics model. The stability of the proposed ACM scheme is proven, and the dependence of the speed of convergence on various physical-layer parameters is investigated. It is also shown that convergence to the strategy that maximizes TCP throughput may be further accelerated by increasing the amount of information from the physical layer.
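To illustrate the adaptation rule only (not the paper's full cross-layer model), the sketch below applies a discrete replicator-dynamics update over candidate modulation/coding schemes, using a rough Mathis-style TCP throughput proxy as the fitness; all rates and loss probabilities are made-up values.

```python
# Minimal sketch of a discrete replicator-dynamics update over candidate ACM schemes;
# the fitness of each scheme is a hypothetical TCP throughput estimate for the
# current channel state.
import numpy as np

def tcp_throughput(rate_bps, loss_prob, rtt=0.05, mss=1460 * 8):
    """Very rough proxy: PHY rate capped by a Mathis-style TCP throughput limit."""
    if loss_prob <= 0:
        return rate_bps
    return min(rate_bps, (mss / rtt) * np.sqrt(1.5 / loss_prob))

def replicator_step(shares, fitness, step=1.0):
    shares = shares * (1.0 + step * (fitness - np.dot(shares, fitness)))
    shares = np.clip(shares, 1e-12, None)
    return shares / shares.sum()

# three ACM schemes: (PHY rate in bit/s, packet-loss probability at current SNR)
schemes = [(20e6, 0.08), (35e6, 0.02), (50e6, 0.15)]
shares = np.ones(len(schemes)) / len(schemes)
for _ in range(200):
    fitness = np.array([tcp_throughput(r, p) for r, p in schemes])
    fitness = fitness / fitness.max()      # normalize for a stable step size
    shares = replicator_step(shares, fitness)
print(np.round(shares, 3))                 # population concentrates on the best scheme
```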
Johansson, Magnus; Zhang, Jingji; Ehrenberg, Måns
2012-01-03
Rapid and accurate translation of the genetic code into protein is fundamental to life. Yet due to lack of a suitable assay, little is known about the accuracy-determining parameters and their correlation with translational speed. Here, we develop such an assay, based on Mg(2+) concentration changes, to determine maximal accuracy limits for a complete set of single-mismatch codon-anticodon interactions. We found a simple, linear trade-off between efficiency of cognate codon reading and accuracy of tRNA selection. The maximal accuracy was highest for the second codon position and lowest for the third. The results rationalize the existence of proofreading in code reading and have implications for the understanding of tRNA modifications, as well as of translation error-modulating ribosomal mutations and antibiotics. Finally, the results bridge the gap between in vivo and in vitro translation and allow us to calibrate our test tube conditions to represent the environment inside the living cell.
Molecular evolution and emergence of avian gammacoronaviruses.
Jackwood, Mark W; Hall, David; Handel, Andreas
2012-08-01
Coronaviruses, which are single-stranded, positive-sense RNA viruses, are responsible for a wide variety of existing and emerging diseases in humans and other animals. The gammacoronaviruses primarily infect avian hosts. Within this genus of coronaviruses, the avian coronavirus infectious bronchitis virus (IBV) causes a highly infectious upper-respiratory tract disease in commercial poultry. IBV shows rapid evolution in chickens, frequently producing new antigenic types, which adds to the multiple serotypes of the virus that do not cross protect. Rapid evolution in IBV is facilitated by strong selection, large population sizes and high genetic diversity within hosts, and transmission bottlenecks between hosts. Genetic diversity within a host arises primarily by mutation, which includes substitutions, insertions and deletions. Mutations are caused both by the high error rate and limited proofreading capability of the viral RNA-dependent RNA polymerase, and by recombination. Recombination also generates new haplotype diversity by recombining existing variants. Rapid evolution of avian coronavirus IBV makes this virus extremely difficult to diagnose and control, but also makes it an excellent model system to study viral genetic diversity and the mechanisms behind the emergence of coronaviruses in their natural host. Copyright © 2012 Elsevier B.V. All rights reserved.
Segura-Correa, J C; Domínguez-Díaz, D; Avalos-Ramírez, R; Argaez-Sosa, J
2010-09-01
Knowledge of the intraherd correlation coefficient (ICC) and design effect (D) for infectious diseases is of interest for sample size calculation and for providing the correct standard errors of prevalence estimates in cluster or two-stage sampling surveys. Information on 813 animals from 48 non-vaccinated cow-calf herds from North-eastern Mexico was used. The ICCs for bovine viral diarrhoea (BVD), infectious bovine rhinotracheitis (IBR), leptospirosis and neosporosis were calculated using a Bayesian approach adjusting for the sensitivity and specificity of the diagnostic tests. The ICC and D values for BVD, IBR, leptospirosis and neosporosis were 0.31 and 5.91, 0.18 and 3.88, 0.22 and 4.53, and 0.11 and 2.68, respectively. The ICC values were different from 0 and the D values greater than 1; therefore, larger sample sizes are required to obtain the same precision in prevalence estimates as with a simple random sampling design. Reporting ICC and D values is of great help in planning and designing two-stage sampling studies. 2010 Elsevier B.V. All rights reserved.
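As a hedged illustration of how these numbers relate, the design effect follows the usual D = 1 + (m_bar - 1) * ICC relation; the average cluster size of roughly 17 animals per herd (813/48) is inferred from the abstract rather than stated explicitly.

```python
# Hedged illustration of the design effect and the effective sample size implied
# by the reported ICCs; the mean cluster size is an inferred quantity.
def design_effect(icc, mean_cluster_size):
    return 1.0 + (mean_cluster_size - 1.0) * icc

m_bar = 813 / 48                      # ~16.9 animals per herd
for disease, icc in [("BVD", 0.31), ("IBR", 0.18),
                     ("leptospirosis", 0.22), ("neosporosis", 0.11)]:
    d = design_effect(icc, m_bar)
    print(f"{disease}: D = {d:.2f}, effective n = {813 / d:.0f}")
```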
The HIV care continuum: no partial credit given.
McNairy, Margaret L; El-Sadr, Wafaa M
2012-09-10
Despite significant scale-up of HIV care and treatment across the world, overall effectiveness of HIV programs is severely undermined by attrition of patients across the HIV care continuum, both in resource-rich and resource-limited settings. The care continuum has four essential steps: linkage from testing to enrollment in care, determination of antiretroviral therapy (ART) eligibility, ART initiation, and adherence to medications to achieve viral suppression. In order to substantially improve health outcomes for the individual and potentially for prevention of transmission to others, each of the steps of the entire care continuum must be achieved. This will require the adoption of interventions that address the multiplicity of barriers and social contexts faced by individuals and populations across each step, a reconceptualization of services to maximize engagement in care, and ambitious evaluation of program performance using all-or-none measurement.
Predictors and Outcomes of Burnout in Primary Care Physicians.
Rabatin, Joseph; Williams, Eric; Baier Manwell, Linda; Schwartz, Mark D; Brown, Roger L; Linzer, Mark
2016-01-01
To assess relationships between primary care work conditions, physician burnout, quality of care, and medical errors. Cross-sectional and longitudinal analyses of data from the MEMO (Minimizing Error, Maximizing Outcome) Study. Two surveys of 422 family physicians and general internists, administered 1 year apart, queried physician job satisfaction, stress and burnout, organizational culture, and intent to leave within 2 years. A chart audit of 1795 of their adult patients with diabetes and/or hypertension assessed care quality and medical errors. Women physicians were almost twice as likely as men to report burnout (36% vs 19%, P < .001). Burned out clinicians reported less satisfaction (P < .001), more job stress (P < .001), more time pressure during visits (P < .01), more chaotic work conditions (P < .001), and less work control (P < .001). Their workplaces were less likely to emphasize work-life balance (P < .001) and they noted more intent to leave the practice (56% vs 21%, P < .001). There were no consistent relationships between burnout, care quality, and medical errors. Burnout is highly associated with adverse work conditions and a greater intention to leave the practice, but not with adverse patient outcomes. Care quality thus appears to be preserved at great personal cost to primary care physicians. Efforts focused on workplace redesign and physician self-care are warranted to sustain the primary care workforce. © The Author(s) 2015.
NASA Astrophysics Data System (ADS)
Kargoll, Boris; Omidalizarandi, Mohammad; Loth, Ina; Paffenholz, Jens-André; Alkhatib, Hamza
2018-03-01
In this paper, we investigate a linear regression time series model of possibly outlier-afflicted observations and autocorrelated random deviations. This colored noise is represented by a covariance-stationary autoregressive (AR) process, in which the independent error components follow a scaled (Student's) t-distribution. This error model allows for the stochastic modeling of multiple outliers and for an adaptive robust maximum likelihood (ML) estimation of the unknown regression and AR coefficients, the scale parameter, and the degree of freedom of the t-distribution. This approach is meant to be an extension of known estimators, which tend to focus only on the regression model, or on the AR error model, or on normally distributed errors. For the purpose of ML estimation, we derive an expectation conditional maximization either (ECME) algorithm, which leads to an easy-to-implement version of iteratively reweighted least squares. The estimation performance of the algorithm is evaluated via Monte Carlo simulations for a Fourier as well as a spline model in connection with AR colored noise models of different orders and with three different sampling distributions generating the white noise components. We apply the algorithm to a vibration dataset recorded by a high-accuracy, single-axis accelerometer, focusing on the evaluation of the estimated AR colored noise model.
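The core of such an algorithm reduces to iteratively reweighted least squares with Student-t weights; the sketch below shows only that core (fixed degrees of freedom, no AR component), so it is an illustration of the principle rather than the authors' ECME implementation.

```python
# Minimal IRLS sketch for linear regression with Student-t errors; the AR
# coefficient and degree-of-freedom updates of the full algorithm are omitted.
import numpy as np

def irls_t_regression(X, y, nu=4.0, n_iter=50):
    """Robust regression with t-distributed errors (fixed degrees of freedom nu)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]            # ordinary LS start
    sigma2 = np.var(y - X @ beta)
    for _ in range(n_iter):
        r = y - X @ beta
        w = (nu + 1.0) / (nu + r**2 / sigma2)               # E-step weights
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)    # CM-step: weighted LS
        sigma2 = np.sum(w * (y - X @ beta) ** 2) / len(y)   # CM-step: scale
    return beta, sigma2

# toy example: Fourier-type regression with a few gross outliers
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 200)
X = np.column_stack([np.ones_like(t), np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
y = X @ np.array([0.5, 2.0, -1.0]) + 0.1 * rng.standard_normal(t.size)
y[::40] += 3.0                                              # outliers
print(np.round(irls_t_regression(X, y)[0], 3))
```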
Predictors and Outcomes of Burnout in Primary Care Physicians
Rabatin, Joseph; Williams, Eric; Baier Manwell, Linda; Schwartz, Mark D.; Brown, Roger L.; Linzer, Mark
2015-01-01
Objective: To assess relationships between primary care work conditions, physician burnout, quality of care, and medical errors. Methods: Cross-sectional and longitudinal analyses of data from the MEMO (Minimizing Error, Maximizing Outcome) Study. Two surveys of 422 family physicians and general internists, administered 1 year apart, queried physician job satisfaction, stress and burnout, organizational culture, and intent to leave within 2 years. A chart audit of 1795 of their adult patients with diabetes and/or hypertension assessed care quality and medical errors. Key Results: Women physicians were almost twice as likely as men to report burnout (36% vs 19%, P < .001). Burned out clinicians reported less satisfaction (P < .001), more job stress (P < .001), more time pressure during visits (P < .01), more chaotic work conditions (P < .001), and less work control (P < .001). Their workplaces were less likely to emphasize work-life balance (P < .001) and they noted more intent to leave the practice (56% vs 21%, P < .001). There were no consistent relationships between burnout, care quality, and medical errors. Conclusions: Burnout is highly associated with adverse work conditions and a greater intention to leave the practice, but not with adverse patient outcomes. Care quality thus appears to be preserved at great personal cost to primary care physicians. Efforts focused on workplace redesign and physician self-care are warranted to sustain the primary care workforce. PMID:26416697
Reliability of anthropometric measurements in European preschool children: the ToyBox-study.
De Miguel-Etayo, P; Mesana, M I; Cardon, G; De Bourdeaudhuij, I; Góźdź, M; Socha, P; Lateva, M; Iotova, V; Koletzko, B V; Duvinage, K; Androutsos, O; Manios, Y; Moreno, L A
2014-08-01
The ToyBox-study aims to develop and test an innovative and evidence-based obesity prevention programme for preschoolers in six European countries: Belgium, Bulgaria, Germany, Greece, Poland and Spain. In multicentre studies, anthropometric measurements using standardized procedures that minimize errors in the data collection are essential to maximize reliability of measurements. The aim of this paper is to describe the standardization process and reliability (intra- and inter-observer) of height, weight and waist circumference (WC) measurements in preschoolers. All technical procedures and devices were standardized and centralized training was given to the fieldworkers. At least seven children per country participated in the intra- and inter-observer reliability testing. Intra-observer technical error ranged from 0.00 to 0.03 kg for weight and from 0.07 to 0.20 cm for height, with the overall reliability being above 99%. A second training was organized for WC due to low reliability observed in the first training. Intra-observer technical error for WC ranged from 0.12 to 0.71 cm during the first training and from 0.05 to 1.11 cm during the second training, and reliability above 92% was achieved. Epidemiological surveys need standardized procedures and training of researchers to reduce measurement error. In the ToyBox-study, very good intra- and inter-observer agreement was achieved for all anthropometric measurements performed. © 2014 World Obesity.
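For reference, a minimal sketch of the standard intra-observer technical error of measurement (TEM) and reliability computations for duplicate measurements; the waist-circumference values are made up and the formulas are the usual anthropometric conventions, not taken from the ToyBox protocol.

```python
# Illustrative sketch (assumed standard formulas): intra-observer TEM for duplicate
# measurements and the coefficient of reliability R = 1 - TEM^2 / SD^2.
import numpy as np

def tem(first, second):
    d = np.asarray(first) - np.asarray(second)
    return np.sqrt(np.sum(d**2) / (2 * len(d)))

def reliability(first, second):
    t = tem(first, second)
    sd = np.std(np.concatenate([first, second]), ddof=1)
    return 1.0 - t**2 / sd**2

# duplicate waist-circumference measurements (cm) on 7 children (made-up values)
wc1 = np.array([52.3, 54.1, 50.8, 55.0, 53.2, 51.6, 56.4])
wc2 = np.array([52.5, 54.0, 51.1, 54.8, 53.1, 51.9, 56.2])
print(f"TEM = {tem(wc1, wc2):.2f} cm, reliability = {100 * reliability(wc1, wc2):.1f}%")
```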
Stress free configuration of the human eye.
Elsheikh, Ahmed; Whitford, Charles; Hamarashid, Rosti; Kassem, Wael; Joda, Akram; Büchler, Philippe
2013-02-01
Numerical simulations of eye globes often rely on topographies that have been measured in vivo using devices such as the Pentacam or OCT. The topographies, which represent the form of the already stressed eye under the existing intraocular pressure, introduce approximations in the analysis. The accuracy of the simulations could be improved if either the stress state of the eye under the effect of intraocular pressure is determined, or the stress-free form of the eye estimated prior to conducting the analysis. This study reviews earlier attempts to address this problem and assesses the performance of an iterative technique proposed by Pandolfi and Holzapfel [1], which is both simple to implement and promises high accuracy in estimating the eye's stress-free form. A parametric study has been conducted and demonstrated reliance of the error level on the level of flexibility of the eye model, especially in the cornea region. However, in all cases considered 3-4 analysis iterations were sufficient to produce a stress-free form with average errors in node location <10^-6 mm and a maximal error <10^-4 mm. This error level, which is similar to what has been achieved with other methods and orders of magnitude lower than the accuracy of current clinical topography systems, justifies the use of the technique as a pre-processing step in ocular numerical simulations. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Gragne, A. S.; Sharma, A.; Mehrotra, R.; Alfredsen, K. T.
2012-12-01
Accuracy of reservoir inflow forecasts is instrumental for maximizing the value of water resources and significantly influences the operation of hydropower reservoirs. We consider improving hourly reservoir inflow forecasts over a 24-hour lead-time, with the day-ahead (Elspot) market of the Nordic power exchange in perspective. The procedure presented comprises an error model added on top of an unalterable, constant-parameter conceptual model, and a sequential data assimilation routine. The structure of the error model was investigated using freely available software for detecting mathematical relationships in a given dataset (EUREQA) and was kept at minimum complexity for computational reasons. As new streamflow data become available, the extra information manifested in the discrepancies between measurements and conceptual model outputs is extracted and assimilated into the forecasting system recursively using a Sequential Monte Carlo technique. Besides improving forecast skill significantly, the probabilistic inflow forecasts provided by the present approach contain suitable information for reducing uncertainty in decision-making processes related to hydropower system operation. The potential of the procedure for improving the accuracy of inflow forecasts at lead-times up to 24 hours, and its reliability in different seasons of the year, are illustrated and discussed thoroughly.
NASA Astrophysics Data System (ADS)
Bell, Stephen C.; Ginsburg, Marc A.; Rao, Prabhakara P.
An important part of space launch vehicle mission planning for a planetary mission is the integrated analysis of guidance and performance dispersions for both booster and upper stage vehicles. For the Mars Observer mission, an integrated trajectory analysis was used to maximize the scientific payload and to minimize injection errors by optimizing the energy management of both vehicles. This was accomplished by designing the Titan III booster vehicle to inject into a hyperbolic departure plane, and the Transfer Orbit Stage (TOS) to correct any booster dispersions. An integrated Monte Carlo analysis of the performance and guidance dispersions of both vehicles provided sensitivities, an evaluation of their guidance schemes and an injection error covariance matrix. The polynomial guidance schemes used for the Titan III variable flight azimuth computations and the TOS solid rocket motor ignition time and burn direction derivations accounted for a wide variation of launch times, performance dispersions, and target conditions. The Mars Observer spacecraft was launched on 25 September 1992 on the Titan III/TOS vehicle. The post-flight analysis indicated that a near-perfect park orbit injection was achieved, followed by a trans-Mars injection with less than 2-sigma errors.
A open loop guidance architecture for navigationally robust on-orbit docking
NASA Technical Reports Server (NTRS)
Chern, Hung-Sheng
1995-01-01
The development of an open-loop guidance architecture is outlined for autonomous rendezvous and docking (AR&D) missions to determine whether the Global Positioning System (GPS) can be used in place of optical sensors for relative initial position determination of the chase vehicle. Feasible command trajectories for one-, two-, and three-impulse AR&D maneuvers are determined using constrained trajectory optimization. Early AR&D command trajectory results suggest that docking accuracies are most sensitive to vertical position errors at the initial condition of the chase vehicle. Thus, a feasible command trajectory is based on maximizing the size of the locus of initial vertical positions for which a fixed sequence of impulses will translate the chase vehicle into the target while satisfying docking accuracy requirements. Documented accuracies are used to determine whether relative GPS can achieve the vertical position error requirements of the impulsive command trajectories. Preliminary development of a thruster management system for the Cargo Transfer Vehicle (CTV) based on optimal throttle settings is presented to complete the guidance architecture. Results show that a guidance architecture based on two-impulse maneuvers generated the best performance in terms of initial position error and total velocity change for the chase vehicle.
Han, Miaomiao; Guo, Zhirong; Liu, Haifeng; Li, Qinghua
2018-05-01
Tomographic Gamma Scanning (TGS) is a method used for the nondestructive assay of radioactive wastes. In TGS, the actual irregular edge voxels are regarded as regular cubic voxels in the traditional treatment method. In this study, in order to improve the performance of TGS, a novel edge treatment method is proposed that considers the actual shapes of these voxels. The two edge voxel treatment methods were compared by computing the pixel-level relative errors and normalized mean square errors (NMSEs) between the reconstructed transmission images and the ideal images. Both methods were coupled with two different iterative algorithms, the Algebraic Reconstruction Technique (ART) with a non-negativity constraint and Maximum Likelihood Expectation Maximization (MLEM). The results demonstrated that the traditional method for edge voxel treatment can introduce significant error and that the real irregular edge voxel treatment method can improve the performance of TGS by producing better transmission reconstruction images. With the real irregular edge voxel treatment method, the MLEM and ART algorithms are comparable when assaying homogeneous matrices, but MLEM is superior to ART when assaying heterogeneous matrices. Copyright © 2018 Elsevier Ltd. All rights reserved.
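As a sketch of the reconstruction step only, the following implements the generic MLEM multiplicative update; in TGS the system matrix (including the proposed irregular edge-voxel weights) would replace the toy matrix used here, and the ART variant is omitted.

```python
# Generic MLEM multiplicative update as an illustration of the reconstruction step;
# this is not the paper's code, and the toy system matrix stands in for the
# TGS-specific matrix whose row entries would reflect edge-voxel geometry.
import numpy as np

def mlem(A, y, n_iter=50):
    """A: (n_rays x n_voxels) system matrix, y: measured counts per ray."""
    x = np.ones(A.shape[1])                 # flat start image
    sens = A.sum(axis=0)                    # sensitivity (column sums)
    for _ in range(n_iter):
        proj = A @ x
        proj[proj <= 0] = 1e-12             # guard against division by zero
        x *= (A.T @ (y / proj)) / np.maximum(sens, 1e-12)
    return x

# toy 2-voxel / 3-ray example; each row weights a voxel by its intersection length,
# which is where irregular edge-voxel geometry would enter in TGS
A = np.array([[1.0, 0.2], [0.5, 0.5], [0.1, 1.0]])
x_true = np.array([2.0, 4.0])
y = A @ x_true
print(np.round(mlem(A, y), 3))              # recovers approximately [2, 4]
```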
DCS-Neural-Network Program for Aircraft Control and Testing
NASA Technical Reports Server (NTRS)
Jorgensen, Charles C.
2006-01-01
A computer program implements a dynamic-cell-structure (DCS) artificial neural network that can perform such tasks as learning selected aerodynamic characteristics of an airplane from wind-tunnel test data and computing real-time stability and control derivatives of the airplane for use in feedback linearized control. A DCS neural network is one of several types of neural networks that can incorporate additional nodes in order to rapidly learn increasingly complex relationships between inputs and outputs. In the DCS neural network implemented by the present program, the insertion of nodes is based on accumulated error. A competitive Hebbian learning rule (a supervised-learning rule in which connection weights are adjusted to minimize differences between actual and desired outputs for training examples) is used. A Kohonen-style learning rule (a relatively simple training rule that operates on a Delaunay-triangulation layout of neurons) is used to adjust node positions during training. Neighborhood topology determines which nodes are used to estimate new values. The network learns, starting with two nodes, and adds new nodes sequentially in locations chosen to maximize reductions in global error. At any given time during learning, the error becomes homogeneously distributed over all nodes.
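A much-simplified, growing-network sketch of the error-driven node insertion plus Kohonen-style position update described above; it is not the NASA program, and the topology bookkeeping (competitive Hebbian/Delaunay edges) is reduced here to inserting a node next to the highest-error node.

```python
# Simplified growing-network sketch: start with two nodes, move the winning node
# toward each sample, accumulate local squared error, and insert new nodes near
# the node with the largest accumulated error.
import numpy as np

rng = np.random.default_rng(3)

def train(samples, targets, n_nodes_max=20, epochs=30, lr=0.1):
    idx = rng.choice(len(samples), 2, replace=False)
    nodes = samples[idx].copy()              # node positions in input space
    outputs = targets[idx].copy()            # local output stored at each node
    for _ in range(epochs):
        error_acc = np.zeros(len(nodes))
        for x, t in zip(samples, targets):
            d = np.linalg.norm(nodes - x, axis=1)
            best = np.argmin(d)
            error_acc[best] += (outputs[best] - t) ** 2   # accumulate local error
            nodes[best] += lr * (x - nodes[best])          # Kohonen-style move
            outputs[best] += lr * (t - outputs[best])
        if len(nodes) < n_nodes_max:
            worst = np.argmax(error_acc)                   # insert near worst node
            nodes = np.vstack([nodes, nodes[worst] + 0.05 * rng.standard_normal(2)])
            outputs = np.append(outputs, outputs[worst])
    return nodes, outputs

def predict(nodes, outputs, x):
    return outputs[np.argmin(np.linalg.norm(nodes - x, axis=1))]

# toy aerodynamic-style surface: a coefficient as a function of two inputs
X = rng.uniform(0, 1, size=(500, 2))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2
nodes, outs = train(X, y)
print(abs(predict(nodes, outs, np.array([0.3, 0.7])) - (np.sin(0.9) + 0.5 * 0.49)))
```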
Schuchardt, Christiane; Kulkarni, Harshad R.; Shahinfar, Mostafa; Singh, Aviral; Glatting, Gerhard; Baum, Richard P.; Beer, Ambros J.
2016-01-01
In molecular radiotherapy with 177Lu-labeled prostate specific membrane antigen (PSMA) peptides, kidney and/or salivary glands doses limit the activity which can be administered. The aim of this work was to investigate the effect of the ligand amount and injected activity on the tumor-to-normal tissue biologically effective dose (BED) ratio for 177Lu-labeled PSMA peptides. For this retrospective study, a recently developed physiologically based pharmacokinetic model was adapted for PSMA targeting peptides. General physiological parameters were taken from the literature. Individual parameters were fitted to planar gamma camera measurements (177Lu-PSMA I&T) of five patients with metastasizing prostate cancer. Based on the estimated parameters, the pharmacokinetics of tumor, salivary glands, kidneys, total body and red marrow was simulated and time-integrated activity coefficients were calculated for different peptide amounts. Based on these simulations, the absorbed doses and BEDs for normal tissue and tumor were calculated for all activities leading to a maximal tolerable kidney BED of 10 Gy2.5/cycle, a maximal salivary gland absorbed dose of 7.5 Gy/cycle and a maximal red marrow BED of 0.25 Gy15/cycle. The fits yielded coefficients of determination > 0.85, acceptable relative standard errors and low parameter correlations. All estimated parameters were in a physiologically reasonable range. The amounts (for 25−29 nmol) and pertaining activities leading to a maximal tumor dose, considering the defined maximal tolerable doses to organs of risk, were calculated to be 272±253 nmol (452±420 μg) and 7.3±5.1 GBq. Using the actually injected amount (235±155 μg) and the same maximal tolerable doses, the potential improvement for the tumor BED was 1–3 fold. The results suggest that currently given amounts for therapy are in the appropriate order of magnitude for many lesions. However, for lesions with high binding site density or lower perfusion, optimizing the peptide amount and activity might improve the tumor-to-kidney and tumor-to-salivary glands BED ratio considerably. PMID:27611841
Kletting, Peter; Schuchardt, Christiane; Kulkarni, Harshad R; Shahinfar, Mostafa; Singh, Aviral; Glatting, Gerhard; Baum, Richard P; Beer, Ambros J
2016-01-01
In molecular radiotherapy with 177Lu-labeled prostate specific membrane antigen (PSMA) peptides, kidney and/or salivary glands doses limit the activity which can be administered. The aim of this work was to investigate the effect of the ligand amount and injected activity on the tumor-to-normal tissue biologically effective dose (BED) ratio for 177Lu-labeled PSMA peptides. For this retrospective study, a recently developed physiologically based pharmacokinetic model was adapted for PSMA targeting peptides. General physiological parameters were taken from the literature. Individual parameters were fitted to planar gamma camera measurements (177Lu-PSMA I&T) of five patients with metastasizing prostate cancer. Based on the estimated parameters, the pharmacokinetics of tumor, salivary glands, kidneys, total body and red marrow was simulated and time-integrated activity coefficients were calculated for different peptide amounts. Based on these simulations, the absorbed doses and BEDs for normal tissue and tumor were calculated for all activities leading to a maximal tolerable kidney BED of 10 Gy2.5/cycle, a maximal salivary gland absorbed dose of 7.5 Gy/cycle and a maximal red marrow BED of 0.25 Gy15/cycle. The fits yielded coefficients of determination > 0.85, acceptable relative standard errors and low parameter correlations. All estimated parameters were in a physiologically reasonable range. The amounts (for 25-29 nmol) and pertaining activities leading to a maximal tumor dose, considering the defined maximal tolerable doses to organs of risk, were calculated to be 272±253 nmol (452±420 μg) and 7.3±5.1 GBq. Using the actually injected amount (235±155 μg) and the same maximal tolerable doses, the potential improvement for the tumor BED was 1-3 fold. The results suggest that currently given amounts for therapy are in the appropriate order of magnitude for many lesions. However, for lesions with high binding site density or lower perfusion, optimizing the peptide amount and activity might improve the tumor-to-kidney and tumor-to-salivary glands BED ratio considerably.
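For reference, the dose limits quoted above use the convention that the subscript on Gy is the α/β ratio assumed in the linear-quadratic biologically effective dose; a generic form (with the Lea-Catcheside factor G capturing dose protraction and repair, G = 1 for acute delivery) is shown below. This is the standard textbook expression, not necessarily the exact formulation used in the study.

```latex
% Generic linear-quadratic BED; the subscripted Gy values above indicate the
% alpha/beta ratio, and G accounts for protracted dose delivery (G = 1 if acute).
\mathrm{BED} = D\left(1 + G\,\frac{D}{\alpha/\beta}\right), \qquad 0 < G \le 1 .
```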
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, J; Dept of Radiation Oncology, New York Weill Cornell Medical Ctr, New York, NY
Purpose: To develop a generalized statistical model that incorporates the treatment uncertainty from the rotational error of the single iso-center technique, and to calculate the additional PTV (planning target volume) margin required to compensate for this error. Methods: The random vectors for setup and additional rotation errors in the three-dimensional (3D) patient coordinate system were assumed to follow 3D independent normal distributions with zero mean, with standard deviations σx, σy, σz for setup error and a uniform σR for rotational error. Both random vectors were summed, normalized and transformed to spherical coordinates to derive the chi distribution with 3 degrees of freedom for the radial distance ρ. The PTV margin was determined using the critical value of this distribution at the 0.05 significance level, so that 95% of the time the treatment target would be covered by ρ. The additional PTV margin required to compensate for the rotational error was calculated as a function of σx, σy, σz and σR. Results: The effect of the rotational error is more pronounced for treatments that require high accuracy/precision such as stereotactic radiosurgery (SRS) or stereotactic body radiotherapy (SBRT). With a uniform 2 mm PTV margin (or σx=σy=σz=0.7 mm), a σR=0.32 mm will decrease the PTV coverage from 95% to 90% of the time, or an additional 0.2 mm PTV margin is needed to prevent this loss of coverage. If we choose 0.2 mm as the threshold, any σR>0.3 mm will lead to an additional PTV margin that cannot be ignored, and the maximal σR that can be ignored is 0.0064 rad (or 0.37°) for an iso-to-target distance of 5 cm, or 0.0032 rad (or 0.18°) for an iso-to-target distance of 10 cm. Conclusions: The rotational error cannot be ignored for high-accuracy/high-precision treatments like SRS/SBRT, particularly when the distance between the iso-center and the target is large.
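A hedged numerical check of the margin logic above: the radial error is modelled as a chi-distributed variable with 3 degrees of freedom scaled by the per-axis standard deviation, and the PTV margin is its 95th percentile. Combining the rotational term in quadrature per axis is an assumption consistent with the example numbers quoted.

```python
# Numerical check of the chi(3)-based PTV margin; isotropic per-axis setup SD
# combined in quadrature with the per-axis rotational contribution sigma_R
# (roughly iso-to-target distance times the angular SD).
import numpy as np
from scipy.stats import chi

def ptv_margin(sigma_xyz, sigma_rot=0.0, coverage=0.95):
    sigma_total = np.sqrt(sigma_xyz**2 + sigma_rot**2)
    return chi.ppf(coverage, df=3) * sigma_total

m0 = ptv_margin(0.7)                  # ~2.0 mm, matching the uniform 2 mm margin
m1 = ptv_margin(0.7, sigma_rot=0.32)  # rotational SD of 0.32 mm per axis
print(round(m0, 2), round(m1, 2), "additional:", round(m1 - m0, 2))  # ~0.2 mm extra
```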
Effect of electrical coupling on ionic current and synaptic potential measurements.
Rabbah, Pascale; Golowasch, Jorge; Nadim, Farzan
2005-07-01
Recent studies have found electrical coupling to be more ubiquitous than previously thought, and coupling through gap junctions is known to play a crucial role in neuronal function and network output. In particular, current spread through gap junctions may affect the activation of voltage-dependent conductances as well as chemical synaptic release. Using voltage-clamp recordings of two strongly electrically coupled neurons of the lobster stomatogastric ganglion and conductance-based models of these neurons, we identified effects of electrical coupling on the measurement of leak and voltage-gated outward currents, as well as synaptic potentials. Experimental measurements showed that both leak and voltage-gated outward currents are recruited by gap junctions from neurons coupled to the clamped cell. Nevertheless, in spite of the strong coupling between these neurons, the errors made in estimating voltage-gated conductance parameters were relatively minor (<10%). Thus in many cases isolation of coupled neurons may not be required if a small degree of measurement error of the voltage-gated currents or the synaptic potentials is acceptable. Modeling results show, however, that such errors may be as high as 20% if the gap-junction position is near the recording site or as high as 90% when measuring smaller voltage-gated ionic currents. Paradoxically, improved space clamp increases the errors arising from electrical coupling because voltage control across gap junctions is poor for even the highest realistic coupling conductances. Furthermore, the common procedure of leak subtraction can add an extra error to the conductance measurement, the sign of which depends on the maximal conductance.
Ward, Logan; Steel, James; Le Compte, Aaron; Evans, Alicia; Tan, Chia-Siong; Penning, Sophie; Shaw, Geoffrey M; Desaive, Thomas; Chase, J Geoffrey
2012-01-01
Tight glycemic control (TGC) has shown benefits but has been difficult to implement. Model-based methods and computerized protocols offer the opportunity to improve TGC quality and compliance. This research presents an interface design to maximize compliance, minimize real and perceived clinical effort, and minimize error based on simple human factors and end user input. The graphical user interface (GUI) design is presented by construction based on a series of simple, short design criteria based on fundamental human factors engineering and includes the use of user feedback and focus groups comprising nursing staff at Christchurch Hospital. The overall design maximizes ease of use and minimizes (unnecessary) interaction and use. It is coupled to a protocol that allows nurse staff to select measurement intervals and thus self-manage workload. The overall GUI design is presented and requires only one data entry point per intervention cycle. The design and main interface are heavily focused on the nurse end users who are the predominant users, while additional detailed and longitudinal data, which are of interest to doctors guiding overall patient care, are available via tabs. This dichotomy of needs and interests based on the end user's immediate focus and goals shows how interfaces must adapt to offer different information to multiple types of users. The interface is designed to minimize real and perceived clinical effort, and ongoing pilot trials have reported high levels of acceptance. The overall design principles, approach, and testing methods are based on fundamental human factors principles designed to reduce user effort and error and are readily generalizable. © 2012 Diabetes Technology Society.
Ward, Logan; Steel, James; Le Compte, Aaron; Evans, Alicia; Tan, Chia-Siong; Penning, Sophie; Shaw, Geoffrey M; Desaive, Thomas; Chase, J Geoffrey
2012-01-01
Introduction Tight glycemic control (TGC) has shown benefits but has been difficult to implement. Model-based methods and computerized protocols offer the opportunity to improve TGC quality and compliance. This research presents an interface design to maximize compliance, minimize real and perceived clinical effort, and minimize error based on simple human factors and end user input. Method The graphical user interface (GUI) design is presented by construction based on a series of simple, short design criteria based on fundamental human factors engineering and includes the use of user feedback and focus groups comprising nursing staff at Christchurch Hospital. The overall design maximizes ease of use and minimizes (unnecessary) interaction and use. It is coupled to a protocol that allows nurse staff to select measurement intervals and thus self-manage workload. Results The overall GUI design is presented and requires only one data entry point per intervention cycle. The design and main interface are heavily focused on the nurse end users who are the predominant users, while additional detailed and longitudinal data, which are of interest to doctors guiding overall patient care, are available via tabs. This dichotomy of needs and interests based on the end user's immediate focus and goals shows how interfaces must adapt to offer different information to multiple types of users. Conclusions The interface is designed to minimize real and perceived clinical effort, and ongoing pilot trials have reported high levels of acceptance. The overall design principles, approach, and testing methods are based on fundamental human factors principles designed to reduce user effort and error and are readily generalizable. PMID:22401330
Error quantification of abnormal extreme high waves in Operational Oceanographic System in Korea
NASA Astrophysics Data System (ADS)
Jeong, Sang-Hun; Kim, Jinah; Heo, Ki-Young; Park, Kwang-Soon
2017-04-01
In the winter season, large swell-like waves occur on the east coast of Korea, causing property damage and loss of human life. These waves are known to be generated by strong local winds produced by extratropical cyclones moving eastward over the East Sea of the Korean peninsula. Because the waves often occur in clear weather, the damage they cause can be particularly severe. It is therefore necessary to predict and forecast large swell-like waves in order to prevent and respond to coastal damage. In Korea, an operational oceanographic system (KOOS) has been developed by the Korea Institute of Ocean Science and Technology (KIOST); KOOS provides daily 72-hour ocean forecasts of wind, water elevation, currents, water temperature, salinity, and waves, computed from meteorological and hydrodynamic models (WRF, ROMS, MOM, and MOHID) as well as wave models (WW-III and SWAN). To evaluate model performance and guarantee a certain level of forecast accuracy, a skill assessment (SA) system was established as a module of KOOS. Skill assessment is performed by comparing model results with in-situ observations, and model errors are quantified with skill scores. The statistics used include measures of both error and correlation, such as the root-mean-square error (RMSE), root-mean-square error percentage (RMSE%), mean bias (MB), correlation coefficient (R), scatter index (SI), circular correlation (CC), and central frequency (CF), the frequency with which errors lie within acceptable error criteria. These statistics should be used not only to quantify errors but also to improve forecast accuracy by providing feedback interactively. For abnormal phenomena such as large swell-like waves on the east coast of Korea, however, a more advanced and better-optimized error quantification method is required, one that allows the abnormal waves to be predicted well and the forecast accuracy to be improved by supporting modification of the model physics and numerics through sensitivity tests. In this study, we propose an appropriate method of error quantification for abnormal high waves generated by local weather conditions. Furthermore, we show how the quantified errors contribute to improving wind-wave modeling by applying data assimilation and utilizing reanalysis data.
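The skill statistics listed above follow standard definitions; as a rough illustration (not the exact KOOS implementation, whose acceptance criteria and the directional circular-correlation statistic are not reproduced here), they can be computed from paired forecast/observation series as in the sketch below.

```python
import numpy as np

def skill_scores(model, obs, criterion):
    """Conventional wave-forecast skill statistics (definitions may differ
    in detail from those used operationally in KOOS)."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    err = model - obs
    rmse = np.sqrt(np.mean(err**2))
    return {
        "RMSE": rmse,
        "RMSE%": 100.0 * rmse / np.mean(obs),
        "MB": np.mean(err),                              # mean bias
        "R": np.corrcoef(model, obs)[0, 1],              # correlation coefficient
        "SI": rmse / np.mean(obs),                       # scatter index
        "CF": 100.0 * np.mean(np.abs(err) <= criterion), # central frequency (%)
    }

# Example: significant wave height forecasts (m) vs. buoy observations (m),
# with 0.5 m as a hypothetical acceptable-error criterion
print(skill_scores([2.1, 3.4, 4.0, 2.8], [2.0, 3.8, 4.5, 2.6], criterion=0.5))
```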
McManus, A; Leung, M
2000-04-01
Implicit in deciding upon an exercise test strategy to elucidate cardiopulmonary function in children with congenital heart disease are the appropriate application of gas exchange techniques and the significance of the data collected to the specific congenital heart disorder. Post-operative cardiopulmonary responses to exercise in cyanotic disorders are complex and, despite a large body of extant literature in paediatric patients, there has been much difficulty in achieving quality and consistency of data. Maximal oxygen uptake is widely recognised as the best single indicator of cardiopulmonary function and has therefore been the focus of most clinical exercise tests in children. Many children with various heart anomalies are able to exercise to maximum without adverse symptoms, and it is essential that test termination is based on the same criteria for these children. Choosing appropriate, valid indicators of maximum in children with congenital heart disease is beset by difficulties. Such maximal-intensity exercise testing procedures have been challenged on the grounds that they do not give a good indication of cardiopulmonary function that is relevant to real-life situations. Furthermore, they are prone to much interindividual variability and error in the definition of maximal exertion. Alternative strategies have been proposed which focus upon dynamic submaximal and kinetic cardiopulmonary responses, which are thought to be less dependent on maximal voluntary effort and more suited to the daily activity patterns of children. These methods are also not without problems: variability in anaerobic threshold measurements and controversy regarding its physiological meaning have been debated. It is recommended that an appropriate cardiopulmonary exercise gas exchange test strategy, which provides clinically useful information for children with cyanotic congenital heart disease, should include both maximal and submaximal data. The inclusion of oxygen uptake kinetics and ventilatory data is encouraged, since they may allow the distinction between a pulmonary, cardiovascular or inactivity-related exercise limitation.
To repair or not to repair: with FAVOR there is no question
NASA Astrophysics Data System (ADS)
Garetto, Anthony; Schulz, Kristian; Tabbone, Gilles; Himmelhaus, Michael; Scheruebl, Thomas
2016-10-01
In the mask shop, the challenges associated with today's advanced technology nodes, both technical and economic, are becoming increasingly demanding. The constant drive to continue shrinking features means more masks per device, smaller manufacturing tolerances and more complexity along the manufacturing line with respect to the number of manufacturing steps required. Furthermore, the extremely competitive nature of the industry makes it critical for mask shops to optimize asset utilization and processes in order to maximize their competitive advantage and, in the end, profitability. Full maximization of profitability in such a complex and technologically sophisticated environment simply cannot be achieved without the use of smart automation. Smart automation allows productivity to be maximized through better asset utilization and process optimization. Reliability is improved through the minimization of manual interactions, leading to fewer human error contributions and a more efficient manufacturing line. In addition to these improvements in productivity and reliability, extra value can be added through the collection and cross-verification of data from multiple sources, which provides more information about products and processes. When it comes to handling mask defects, for instance, the process consists largely of time-consuming, error-prone manual interactions that often require quick decisions from operators and engineers who are under pressure. The handling of defects itself is a multiple-step process consisting of several iterations of inspection, disposition, repair, review and cleaning steps. Smaller manufacturing tolerances and more complex features lead both to a greater number of defects that must be handled and to greater complexity in handling them. In this paper, the recent efforts undertaken by ZEISS to provide solutions which address these challenges, particularly those associated with defectivity, will be presented. From automation of aerial image analysis to the use of data-driven decision making to predict and propose an optimized back-end-of-line process flow, productivity and reliability improvements are targeted by smart automation. Additionally, the generation of the ideal aerial image from the design, together with several repair enhancement features, offers further capabilities to improve the efficiency and yield associated with defect handling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, H; Chen, Z; Nath, R
Purpose: kV fluoroscopic imaging combined with MV treatment beam imaging has been investigated for intrafractional motion monitoring and correction. It is, however, subject to additional kV imaging dose to normal tissue. To balance tracking accuracy and imaging dose, we previously proposed an adaptive imaging strategy to dynamically decide future imaging type and moments based on motion tracking uncertainty. kV imaging may be used continuously for maximal accuracy or, if a preset imaging dose limit is considered, only when the position uncertainty (probability of being out of threshold) is high. In this work, we propose more accurate methods to estimate tracking uncertainty through analyzing acquired data in real time. Methods: We simulated the motion tracking process based on a previously developed imaging framework (MV + initial seconds of kV imaging) using real-time breathing data from 42 patients. Motion tracking errors for each time point were collected together with the time point's corresponding features, such as tumor motion speed and 2D tracking error of previous time points. We tested three methods for error uncertainty estimation based on the features: conditional probability distribution, logistic regression modeling, and support vector machine (SVM) classification to detect errors exceeding a threshold. Results: For the conditional probability distribution, polynomial regressions on three features (previous tracking error, prediction quality, and cosine of the angle between the trajectory and the treatment beam) showed strong correlation with the variation (uncertainty) of the mean 3D tracking error and its standard deviation: R-square = 0.94 and 0.90, respectively. The logistic regression and SVM classification successfully identified about 95% of tracking errors exceeding the 2.5 mm threshold. Conclusion: The proposed methods can reliably estimate the motion tracking uncertainty in real time, which can be used to guide adaptive additional imaging to confirm the tumor is within the margin or to initialize motion compensation if it is out of the margin.
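As an illustration of the classification step (not the authors' code or data), the sketch below trains the two classifiers named in the abstract, logistic regression and an SVM, on synthetic per-time-point features of the kind listed (previous tracking error, prediction quality, and the trajectory-to-beam angle cosine) to flag time points whose 3D error exceeds 2.5 mm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
n = 2000
# Synthetic stand-ins for the features named in the abstract
prev_err = rng.gamma(2.0, 0.8, n)            # previous tracking error (mm)
pred_quality = rng.uniform(0, 1, n)          # prediction quality score
cos_angle = rng.uniform(-1, 1, n)            # cos(angle between trajectory and beam)
X = np.column_stack([prev_err, pred_quality, cos_angle])

# Synthetic "true" 3D error grows with previous error and poor prediction quality
err3d = (0.8 * prev_err + 1.5 * (1 - pred_quality) + 0.3 * np.abs(cos_angle)
         + rng.normal(0, 0.4, n))
y = (err3d > 2.5).astype(int)                # 1 = error exceeds the 2.5 mm threshold

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
for name, clf in [("logistic", LogisticRegression(max_iter=1000)),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    clf.fit(Xtr, ytr)
    print(name, "recall of >2.5 mm errors:", recall_score(yte, clf.predict(Xte)))
```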
Auto-tracking system for human lumbar motion analysis.
Sui, Fuge; Zhang, Da; Lam, Shing Chun Benny; Zhao, Lifeng; Wang, Dongjun; Bi, Zhenggang; Hu, Yong
2011-01-01
Previous lumbar motion analyses suggest the usefulness of quantitatively characterizing spine motion. However, the application of such measurements is still limited by the lack of user-friendly automatic spine motion analysis systems. This paper describes an automatic analysis system to measure lumbar spine disorders that consists of a spine motion guidance device, an X-ray imaging modality to acquire digitized video fluoroscopy (DVF) sequences and an automated tracking module with a graphical user interface (GUI). DVF sequences of the lumbar spine are recorded during flexion-extension under a guidance device. The automatic tracking software, utilizing a particle filter, locates the vertebra of interest in every frame of the sequence, and the tracking result is displayed on the GUI. Kinematic parameters are also extracted from the tracking results for motion analysis. We observed that, in a bone model test, the maximum fiducial error was 3.7%, and the maximum repeatability error in translation and rotation was 1.2% and 2.6%, respectively. In our simulated DVF sequence study, the automatic tracking was not successful when the noise intensity was greater than 0.50. In a noisy situation, the maximal difference was 1.3 mm in translation and 1° in the rotation angle, and the errors were calculated in translation (fiducial error: 2.4%, repeatability error: 0.5%) and in the rotation angle (fiducial error: 1.0%, repeatability error: 0.7%). However, the automatic tracking software could successfully track simulated sequences contaminated by noise at a density ≤ 0.5 with very high accuracy, providing good reliability and robustness. A clinical trial enrolling 10 healthy subjects and 2 lumbar spondylolisthesis patients was conducted in this study. The measurement with auto-tracking of DVF provided some information not seen in conventional X-ray. The results suggest the potential of the proposed system for clinical applications.
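The tracking module described above is built around a particle filter. As a generic illustration of that idea (a standard bootstrap/SIR filter, not the authors' implementation, and with a Gaussian stand-in for the image-similarity likelihood), one predict/update/resample cycle for tracking a 2-D landmark might look like the following.

```python
import numpy as np

def particle_filter_step(particles, weights, measurement, motion_std=2.0, meas_std=3.0):
    """One predict/update/resample cycle of a bootstrap particle filter tracking
    a 2-D position. `measurement` is the detected (x, y) in the current frame;
    in a real tracker the likelihood would come from an image-similarity score."""
    rng = np.random.default_rng()
    # Predict: random-walk motion model
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: Gaussian likelihood of the measurement given each particle
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std**2)
    weights /= weights.sum()
    # Resample (systematic resampling) to avoid weight degeneracy
    positions = (np.arange(len(weights)) + rng.random()) / len(weights)
    idx = np.searchsorted(np.cumsum(weights), positions)
    particles, weights = particles[idx], np.full(len(weights), 1.0 / len(weights))
    return particles, weights, particles.mean(axis=0)   # state estimate

# Toy usage: track a landmark drifting to the right across frames
particles = np.random.default_rng(1).normal([100, 50], 5, (500, 2))
weights = np.full(500, 1 / 500)
for t in range(5):
    meas = np.array([100 + 2 * t, 50]) + np.random.default_rng(t).normal(0, 1, 2)
    particles, weights, estimate = particle_filter_step(particles, weights, meas)
    print(t, estimate.round(1))
```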
Cobbin, Joanna C. A.; Verity, Erin E.; Gilbertson, Brad P.; Rockman, Steven P.
2013-01-01
The yields of egg-grown influenza vaccines are maximized by the production of a seed strain using a reassortment of the seasonal influenza virus isolate with a highly egg-adapted strain. The seed virus is selected based on high yields of viral hemagglutinin (HA) and expression of the surface antigens from the seasonal isolate. The remaining proteins are usually derived from the high-growth parent. However, a retrospective analysis of vaccine seeds revealed that the seasonal PB1 gene was selected in more than 50% of reassortment events. Using the model seasonal H3N2 virus A/Udorn/307/72 (Udorn) virus and the high-growth A/Puerto Rico/8/34 (PR8) virus, we assessed the influence of the source of the PB1 gene on virus growth and vaccine yield. Classical reassortment of these two strains led to the selection of viruses that predominantly had the Udorn PB1 gene. The presence of Udorn PB1 in the seed virus, however, did not result in higher yields of virus or HA compared to the yields in the corresponding seed virus with PR8 PB1. The 8-fold-fewer virions produced with the seed virus containing the Udorn PB1 were somewhat compensated for by a 4-fold increase in HA per virion. A higher HA/nucleoprotein (NP) ratio was found in past vaccine preparations when the seasonal PB1 was present, also indicative of a higher HA density in these vaccine viruses. As the HA viral RNA (vRNA) and mRNA levels in infected cells were similar, we propose that PB1 selectively alters the translation of viral mRNA. This study helps to explain the variability of vaccine seeds with respect to HA yield. PMID:23468502
Identifying protein phosphorylation sites with kinase substrate specificity on human viruses.
Bretaña, Neil Arvin; Lu, Cheng-Tsung; Chiang, Chiu-Yun; Su, Min-Gang; Huang, Kai-Yao; Lee, Tzong-Yi; Weng, Shun-Long
2012-01-01
Viruses infect humans and progress inside the body, leading to various diseases and complications. The phosphorylation of viral proteins catalyzed by host kinases plays crucial regulatory roles in enhancing replication and inhibiting normal host-cell functions. Due to its biological importance, there is a desire to identify the protein phosphorylation sites on human viruses. However, the use of mass spectrometry-based experiments has proven to be expensive and labor-intensive. Furthermore, previous studies which have identified phosphorylation sites in human viruses do not include the investigation of the responsible kinases. Thus, we are motivated to propose a new method to identify protein phosphorylation sites, together with their kinase substrate specificity, on human viruses. The experimentally verified phosphorylation data were extracted from virPTM, a database containing 301 experimentally verified phosphorylation sites on 104 human kinase-phosphorylated virus proteins. In an attempt to investigate kinase substrate specificities in viral protein phosphorylation sites, maximal dependence decomposition (MDD) is employed to cluster a large set of phosphorylation data into subgroups containing significantly conserved motifs. The experimental human phosphorylation sites are collected from Phospho.ELM, grouped according to their kinase annotation, and compared with the virus MDD clusters. This investigation identifies human kinases such as CK2, PKB, CDK, and MAPK as potential kinases for catalyzing virus protein substrates, as confirmed by published literature. A profile hidden Markov model is then applied to learn a predictive model for each subgroup. A five-fold cross-validation evaluation on the MDD-clustered HMMs yields an average accuracy of 84.93% for serine and 78.05% for threonine. Furthermore, an independent test set collected from UniProtKB and Phospho.ELM is used to compare predictive performance against three popular kinase-specific phosphorylation site prediction tools. In the independent testing, the high sensitivity and specificity of the proposed method demonstrate the predictive effectiveness of the identified substrate motifs and the importance of investigating potential kinases for viral protein phosphorylation sites.
Systemic Immune Activation and HIV Shedding in the Female Genital Tract.
Spencer, LaShonda Y; Christiansen, Shawna; Wang, Chia-Hao H; Mack, Wendy J; Young, Mary; Strickler, Howard D; Anastos, Kathryn; Minkoff, Howard; Cohen, Mardge; Geenblatt, Ruth M; Karim, Roksana; Operskalski, Eva; Frederick, Toni; Homans, James D; Landay, Alan; Kovacs, Andrea
2016-02-01
Plasma HIV RNA is the most significant determinant of cervical HIV shedding. However, shedding is also associated with sexually transmitted infections (STIs) and cervical inflammation. The mechanism by which this occurs is poorly understood. There is evidence that systemic immune activation promotes viral entry, replication, and HIV disease progression. We hypothesized that systemic immune activation would be associated with an increase in HIV genital shedding. Clinical assessments, HIV RNA in plasma and genital secretions, and markers of immune activation (CD38(+)DR(+) and CD38(-)DR(-)) on CD4(+) and CD8(+) T cells in blood were evaluated in 226 HIV+ women enrolled in the Women's Interagency HIV Study. There were 569 genital evaluations of which 159 (28%) exhibited HIV RNA shedding, defined as HIV viral load >80 copies per milliliter. We tested associations between immune activation and shedding using generalized estimating equations with logit link function. In the univariate model, higher levels of CD4(+) and CD8(+) T-cell activation in blood were significantly associated with genital tract shedding. However, in the multivariate model adjusting for plasma HIV RNA, STIs, and genital tract infections, only higher levels of resting CD8(+) T cells (CD38(-)DR(-)) were significantly inversely associated with HIV shedding in the genital tract (odds ratios = 0.44, 95% confidence interval: 0.21 to 0.9, P = 0.02). The association of systemic immune activation with genital HIV shedding is multifactorial. Systemic T-cell activation is associated with genital tract shedding in univariate analysis but not when adjusting for plasma HIV RNA, STIs, and genital tract infections. In addition, women with high percentage of resting T cells are less likely to have HIV shedding compared with those with lower percentages. These findings suggest that a higher percentage of resting cells, as a result of maximal viral suppression with treatment, may decrease local genital activation, HIV shedding, and transmission.
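The shedding analysis described here is a generalized estimating equations (GEE) model with a logit link over repeated within-woman assessments. A minimal sketch of that kind of analysis using statsmodels is shown below; the variable names and the synthetic data are purely illustrative and are not the WIHS data or the study's covariate set.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.genmod.cov_struct import Exchangeable

rng = np.random.default_rng(0)
n_women, visits = 226, 3
df = pd.DataFrame({
    "id": np.repeat(np.arange(n_women), visits),
    "log_plasma_vl": rng.normal(3.0, 1.0, n_women * visits),   # log10 plasma HIV RNA
    "resting_cd8": rng.uniform(10, 80, n_women * visits),      # % CD38-DR- CD8 T cells
    "sti": rng.integers(0, 2, n_women * visits),               # STI present at visit
})
# Synthetic shedding outcome with made-up effect sizes
logit_p = (-2.0 + 0.8 * (df.log_plasma_vl - 3)
           - 0.02 * (df.resting_cd8 - 40) + 0.5 * df.sti)
df["shedding"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# GEE with a logit link and an exchangeable working correlation across repeated visits
model = smf.gee("shedding ~ log_plasma_vl + resting_cd8 + sti", groups="id",
                data=df, family=sm.families.Binomial(), cov_struct=Exchangeable())
result = model.fit()
print(np.exp(result.params))   # odds ratios per unit change in each covariate
```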
NASA Astrophysics Data System (ADS)
Lankford, George Bernard
In this dissertation, we address applying mathematical and numerical techniques in the fields of high energy physics and biomedical sciences. The first portion of this thesis presents a method for optimizing the design of klystron circuits. A klystron is an electron beam tube lined with cavities that emit resonant frequencies to velocity modulate electrons that pass through the tube. Radio frequencies (RF) inserted in the klystron are amplified due to the velocity modulation of the electrons. The routine described in this work automates the selection of cavity positions, resonant frequencies, quality factors, and other circuit parameters to maximize the efficiency with required gain. The method is based on deterministic sampling methods. We will describe the procedure and give several examples for both narrow and wide band klystrons, using the klystron codes AJDISK (Java) and TESLA (Python). The rest of the dissertation is dedicated to developing, calibrating and using a mathematical model for hepatitis C dynamics with triple drug combination therapy. Groundbreaking new drugs, called direct acting antivirals, have been introduced recently to fight off chronic hepatitis C virus infection. The model we introduce is for hepatitis C dynamics treated with the direct acting antiviral drug, telaprevir, along with traditional interferon and ribavirin treatments to understand how this therapy affects the viral load of patients exhibiting different types of response. We use sensitivity and identifiability techniques to determine which parameters can be best estimated from viral load data. We use these estimations to give patient-specific fits of the model to partial viral response, end-of-treatment response, and breakthrough patients. We will then revise the model to incorporate an immune response dynamic to more accurately describe the dynamics. Finally, we will implement a suboptimal control to acquire a drug treatment regimen that will alleviate the systemic cost associated with constant drug treatment.
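For the hepatitis C portion, a standard target-cell-limited viral dynamics model with treatment efficacies (the family of models this kind of work builds on, not the dissertation's exact telaprevir model) can be written and solved as follows; all parameter values are illustrative rather than fitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

def hcv_rhs(t, y, s=1e4, d=0.01, beta=1e-7, delta=0.2, p=10.0, c=6.0,
            eta=0.9, eps=0.999):
    """Standard target-cell-limited HCV model under therapy.
    T: target cells, I: infected cells, V: virus.
    eta blocks new infection, eps blocks virion production (drug efficacies)."""
    T, I, V = y
    dT = s - d * T - (1 - eta) * beta * V * T
    dI = (1 - eta) * beta * V * T - delta * I
    dV = (1 - eps) * p * I - c * V
    return [dT, dI, dV]

y0 = [1e7, 1e5, 1e6]     # illustrative pre-treatment values, not a fitted steady state
sol = solve_ivp(hcv_rhs, (0, 28), y0, t_eval=np.linspace(0, 28, 200))
print("log10 viral load at day 28:", np.log10(sol.y[2, -1]))
```

Patient-specific fitting of the kind described would then adjust a subset of these parameters (chosen by the sensitivity and identifiability analysis) to match measured viral load data.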
Villa, T G; Feijoo-Siota, L; Sánchez-Pérez, A
2018-06-27
The advancement of human knowledge has historically followed the pattern of one-step growth (the same pattern followed by microorganisms in laboratory culture conditions). In this way, each new important discovery opened the door to multiple secondary breakthroughs, eventually reaching a "plateau" until new findings emerged. Microbiology research has usually followed this pattern, but often the conclusions drawn from experimentation and observation were either equivocal or altogether false, causing important delays in the advancement of this science. This mini-review deals with some of these documented scientific errors; the aim is not to include every mistake, but to select those of paramount importance to the advance of Microbiology.
Crosby, Richard A
2017-02-01
The behavioural aspects of pre-exposure prophylaxis (PrEP) are challenging, particularly the issue of condom migration. Three vital questions are: (1) at the population level, will condom migration lead to increases in non-viral sexually transmissible infections?; (2) how can clinic-based counselling best promote the dual use of condoms and PrEP?; and (3) in future PrEP trials, what 'best practices' should be used to avoid the type 1 and type 2 errors that arise when condom use behaviours are not accounted for? This communication addresses each question and suggests that a 'PrEP only' focus risks widening health disparities.
Liste-Calleja, Leticia; Lecina, Martí; Cairó, Jordi Joan
2014-04-01
The increasing demand for biopharmaceuticals produced in mammalian cells has led industries to enhance bioprocess volumetric productivity through different strategies. Among those strategies, cell culture media development is of major interest. In the present work, several commercially available culture media for Human Embryonic Kidney cells (HEK293) were evaluated in terms of the maximal specific growth rate and the maximal viable cell concentration supported. The main objective was to provide different cell culture platforms suitable for a wide range of applications depending on the type and the final use of the product obtained. Performing simple media supplementations with and without animal-derived components, an enhancement of cell concentration from 2 × 10(6) cells/mL to 17 × 10(6) cells/mL was achieved in batch mode operation. Additionally, the media were evaluated for adenovirus production as a specific application case of HEK293 cells. None of the supplements interfered significantly with the adenovirus infection, although some differences were encountered in viral productivity. To the best of our knowledge, the high cell density achieved in the work presented has never been reported before in HEK293 batch cell cultures; thus, our results are highly promising for further study of cell culture strategies in bioreactors towards bioprocess optimization. Copyright © 2013 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
Michel, Christian J
2017-04-18
In 1996, a set X of 20 trinucleotides was identified in genes of both prokaryotes and eukaryotes that has, on average, the highest occurrence in the reading frame compared with its two shifted frames. Furthermore, this set X has an interesting mathematical property, as X is a maximal C3 self-complementary trinucleotide circular code. In 2015, by quantifying the inspection approach used in 1996, the circular code X was confirmed in the genes of bacteria and eukaryotes and was also identified in the genes of plasmids and viruses. The method was based on the preferential occurrence of trinucleotides among the three frames at the gene population level. We extend this definition here to the gene level. This new statistical approach considers all genes, i.e., both long and short, with the same weight when searching for the circular code X. As a consequence, the concept of circular code, in particular the reading frame retrieval, is directly associated with each gene. At the gene level, the circular code X is strengthened in the genes of bacteria, eukaryotes, plasmids, and viruses, and is now also identified in the genes of archaea. The genes of mitochondria and chloroplasts contain a subset of the circular code X. Finally, by studying viral genes, the circular code X was found in DNA genomes, RNA genomes, double-stranded genomes, and single-stranded genomes.
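The frame-preference statistic at the heart of this approach can be illustrated with a small sketch: count how often a given trinucleotide occurs in the reading frame (frame 0) of a coding sequence versus its two shifted frames. The 20-trinucleotide set X itself and the exact per-gene statistic are not reproduced here; the sequence below is a toy example.

```python
from collections import Counter

def frame_counts(cds, trinucleotide):
    """Occurrences of `trinucleotide` in the reading frame (0) and the two
    shifted frames (1, 2) of a coding sequence."""
    counts = []
    for frame in range(3):
        codons = [cds[i:i + 3] for i in range(frame, len(cds) - 2, 3)]
        counts.append(Counter(codons)[trinucleotide])
    return counts

# Toy example; a real analysis would aggregate over all genes of a population or,
# as in this study, evaluate the preference gene by gene.
cds = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG"
print(frame_counts(cds, "GCC"))
```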
Sanchez, Erica L; Pulliam, Thomas H; Dimaio, Terri A; Thalhofer, Angel B; Delgado, Tracie; Lagunoff, Michael
2017-05-15
Kaposi's sarcoma-associated herpesvirus (KSHV) is the etiologic agent of Kaposi's sarcoma (KS). KSHV infection induces and requires multiple metabolic pathways, including the glycolysis, glutaminolysis, and fatty acid synthesis (FAS) pathways, for the survival of latently infected endothelial cells. To determine the metabolic requirements for productive KSHV infection, we induced lytic replication in the presence of inhibitors of different metabolic pathways. We found that glycolysis, glutaminolysis, and FAS are all required for maximal KSHV virus production and that these pathways appear to participate in virus production at different stages of the viral life cycle. Glycolysis and glutaminolysis, but not FAS, inhibit viral genome replication and, interestingly, are required for different early steps of lytic gene expression. Glycolysis is necessary for early gene transcription, while glutaminolysis is necessary for early gene translation but not transcription. Inhibition of FAS resulted in decreased production of extracellular virions but did not reduce intracellular genome levels or block intracellular virion production. However, in the presence of FAS inhibitors, the intracellular virions are noninfectious, indicating that FAS is required for virion assembly or maturation. KS tumors support both latent and lytic KSHV replication. Previous work has shown that multiple cellular metabolic pathways are required for latency, and we now show that these metabolic pathways are required for efficient lytic replication, providing novel therapeutic avenues for KS tumors. IMPORTANCE KSHV is the etiologic agent of Kaposi's sarcoma, the most common tumor of AIDS patients. KS spindle cells, the main tumor cells, all contain KSHV, mostly in the latent state, during which there is limited viral gene expression. However, a percentage of spindle cells support lytic replication and production of virus and these cells are thought to contribute to overall tumor formation. Our previous findings showed that latently infected cells are sensitive to inhibitors of cellular metabolic pathways, including glycolysis, glutaminolysis, and fatty acid synthesis. Here we found that these same inhibitors block the production of infectious virus from lytically infected cells, each at a different stage of viral replication. Therefore, inhibition of specific cellular metabolic pathways can both eliminate latently infected cells and block lytic replication, thereby inhibiting infection of new cells. Inhibition of metabolic pathways provides novel therapeutic approaches for KS tumors. Copyright © 2017 American Society for Microbiology.
Akhrameyeva, Natalie V.; Zhang, Pengwei; Sugiyama, Nao; Behar, Samuel M.; Yao, Feng
2011-01-01
Using the T-REx (Invitrogen, California) gene switch technology and a dominant-negative mutant polypeptide of herpes simplex virus 1 (HSV-1)-origin binding protein UL9, we previously constructed a glycoprotein D-expressing replication-defective and dominant-negative HSV-1 recombinant viral vaccine, CJ9-gD, for protection against HSV infection and disease. It was demonstrated that CJ9-gD is avirulent following intracerebral inoculation in mice, cannot establish detectable latent infection following different routes of infection, and offers highly effective protective immunity against primary HSV-1 and HSV-2 infection and disease in mouse and guinea pig models of HSV infections. Given these favorable safety and immunological profiles of CJ9-gD, aiming to maximize levels of HSV-2 glycoprotein D (gD2) expression, we have constructed an ICP0 null mutant-based dominant-negative and replication-defective HSV-2 recombinant, CJ2-gD2, that contains 2 copies of the gD2 gene driven by the tetracycline operator (tetO)-bearing HSV-1 major immediate-early ICP4 promoter. CJ2-gD2 expresses gD2 as efficiently as wild-type HSV-2 infection and can lead to a 150-fold reduction in wild-type HSV-2 viral replication in cells coinfected with CJ2-gD2 and wild-type HSV-2 at the same multiplicity of infection. CJ2-gD2 is avirulent following intracerebral injection and cannot establish a detectable latent infection following subcutaneous (s.c.) immunization. CJ2-gD2 is a more effective vaccine than HSV-1 CJ9-gD and a non-gD2-expressing dominant-negative and replication-defective HSV-2 recombinant in protection against wild-type HSV-2 genital disease. Using recall response, we showed that immunization with CJ2-gD2 elicited strong HSV-2-specific memory CD4+ and CD8+ T-cell responses. Collectively, given the demonstrated preclinical immunogenicity and its unique safety profiles, CJ2-gD2 represents a new class of HSV-2 replication-defective recombinant viral vaccines in protection against HSV-2 genital infection and disease. PMID:21389121
Autoimmunity: a decision theory model.
Morris, J A
1987-01-01
Concepts from statistical decision theory were used to analyse the detection problem faced by the body's immune system in mounting immune responses to bacteria of the normal body flora. Given that these bacteria are potentially harmful, that there can be extensive cross-reaction between bacterial antigens and host tissues, and that the decisions are made under uncertainty, there is a finite chance of error in the immune response leading to autoimmune disease. A model of ageing in the immune system is proposed that is based on random decay in components of the decision process, leading to a steep age-dependent increase in the probability of error. The age incidence of those autoimmune diseases which peak in early and middle life can be explained as the resultant of two processes: an exponentially falling curve of incidence of first contact with common bacteria, and a rapidly rising error function. Epidemiological data on the variation of incidence with social class, sibship order, climate and culture can be used to predict the likely site of carriage and mode of spread of the causative bacteria. Furthermore, those autoimmune diseases precipitated by common viral respiratory tract infections might represent reactions to nasopharyngeal bacterial overgrowth, and this theory can be tested using monoclonal antibodies to search the bacterial isolates for cross-reacting antigens. If this model is correct, then prevention of autoimmune disease by early exposure to low doses of bacteria might be possible. PMID:3818985
Weder, A B; Torretti, B A; Katch, V L; Rocchini, A P
1984-10-01
Measures of maximal rates of lithium-sodium countertransport and frusemide-sensitive sodium and potassium cotransport have been proposed as biochemical markers for human essential hypertension. The stability of these functions over time within the same individuals has led to the suggestion that maximal transport capacities are genetically determined. The present study confirms the reproducibility of functional assays of countertransport and cotransport in human erythrocytes after overnight storage and over a six-month period in normal volunteers and provides estimates of the magnitude of technical error for each assay. A long-term dietary intervention study in a group of obese adolescents demonstrated marked increases in erythrocyte sodium levels and maximal frusemide-sensitive sodium and potassium fluxes but no changes in cell potassium or water and no effect on lithium-sodium countertransport. A correlation between the decrease in percentage of body fat and the increase in cell sodium content suggests a link between the metabolic effects of dieting and control of erythrocyte cation handling. Although the mechanism linking dietary calorie restriction and changes in erythrocyte cation metabolism is unknown, evaluation of body weight, and especially recent weight loss, is important in studies of erythrocyte transport. Conclusions regarding genetic contributions to the activities of lithium-sodium countertransport and sodium-potassium cotransport systems will be strengthened by clarification of environmental regulators.
Stephens, Byron F; Hebert, Casey T; Azar, Frederick M; Mihalko, William M; Throckmorton, Thomas W
2015-09-01
Baseplate loosening in reverse total shoulder arthroplasty (RTSA) remains a concern. Placing peripheral screws into the 3 pillars of the densest scapular bone is believed to optimize baseplate fixation. Using a 3-dimensional computer-aided design (3D CAD) program, we investigated the optimal rotational baseplate alignment to maximize peripheral locking-screw purchase. Seventy-three arthritic scapulae were reconstructed from computed tomography images and imported into a 3D CAD software program along with representations of an RTSA baseplate that uses 4 fixed-angle peripheral locking screws. The baseplate position was standardized, and the baseplate was rotated to maximize individual and combined peripheral locking-screw purchase in each of the 3 scapular pillars. The mean ± standard error of the mean positions for optimal individual peripheral locking-screw placement (referenced in internal rotation) were 6° ± 2° for the coracoid pillar, 198° ± 2° for the inferior pillar, and 295° ± 3° for the scapular spine pillar. Of note, 78% (57 of 73) of the screws attempting to obtain purchase in the scapular spine pillar could not be placed without an in-out-in configuration. In contrast, 100% of coracoid and 99% of inferior pillar screws achieved full purchase. The position of combined maximal fixation was 11° ± 1°. These results suggest that approximately 11° of internal rotation is the ideal baseplate position for maximal peripheral locking-screw fixation in RTSA. In addition, these results highlight the difficulty in obtaining optimal purchase in the scapular spine. Copyright © 2015 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
A data analysis expert system for large established distributed databases
NASA Technical Reports Server (NTRS)
Gnacek, Anne-Marie; An, Y. Kim; Ryan, J. Patrick
1987-01-01
A design for a natural language database interface system, called the Deductively Augmented NASA Management Decision support System (DANMDS), is presented. The DANMDS system components have been chosen on the basis of the following considerations: maximal employment of the existing NASA IBM-PC computers and supporting software; local structuring and storing of external data via the entity-relationship model; a natural easy-to-use error-free database query language; user ability to alter query language vocabulary and data analysis heuristic; and significant artificial intelligence data analysis heuristic techniques that allow the system to become progressively and automatically more useful.
Optimizing a remote sensing instrument to measure atmospheric surface pressure
NASA Technical Reports Server (NTRS)
Peckham, G. E.; Gatley, C.; Flower, D. A.
1983-01-01
Atmospheric surface pressure can be remotely sensed from a satellite by an active instrument which measures return echoes from the ocean at frequencies near the 60 GHz oxygen absorption band. The instrument is optimized by selecting its frequencies of operation, transmitter powers and antenna size through a new procedure based on numerical simulation which maximizes the retrieval accuracy. The predicted standard deviation error in the retrieved surface pressure is 1 mb. In addition, the measurements can be used to retrieve water vapor, cloud liquid water and sea state, which is related to wind speed.
Anytime synthetic projection: Maximizing the probability of goal satisfaction
NASA Technical Reports Server (NTRS)
Drummond, Mark; Bresina, John L.
1990-01-01
A projection algorithm is presented for incremental control rule synthesis. The algorithm synthesizes an initial set of goal achieving control rules using a combination of situation probability and estimated remaining work as a search heuristic. This set of control rules has a certain probability of satisfying the given goal. The probability is incrementally increased by synthesizing additional control rules to handle 'error' situations the execution system is likely to encounter when following the initial control rules. By using situation probabilities, the algorithm achieves a computationally effective balance between the limited robustness of triangle tables and the absolute robustness of universal plans.
Locating influential nodes in complex networks
Malliaros, Fragkiskos D.; Rossi, Maria-Evgenia G.; Vazirgiannis, Michalis
2016-01-01
Understanding and controlling spreading processes in networks is an important topic with many diverse applications, including information dissemination, disease propagation and viral marketing. It is of crucial importance to identify which entities act as influential spreaders that can propagate information to a large portion of the network, in order to ensure efficient information diffusion, optimize available resources or even control the spreading. In this work, we capitalize on the properties of the K-truss decomposition, a triangle-based extension of the core decomposition of graphs, to locate individual influential nodes. Our analysis on real networks indicates that the nodes belonging to the maximal K-truss subgraph show better spreading behavior than nodes selected by previously used importance criteria, including node degree and k-core index, leading to faster and wider epidemic spreading. We further show that nodes belonging to such dense subgraphs dominate the small set of nodes that achieve the optimal spreading in the network. PMID:26776455
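As a concrete illustration (a sketch using networkx on a stand-in graph, not the paper's datasets or spreading simulations), the maximal K-truss subgraph can be located by increasing k until the k-truss becomes empty, and its nodes compared against degree and k-core rankings.

```python
import networkx as nx

G = nx.karate_club_graph()   # stand-in for a real social/contact network

# Find the largest k for which the k-truss subgraph is non-empty
k = 2
truss = nx.k_truss(G, k)
while True:
    nxt = nx.k_truss(G, k + 1)
    if nxt.number_of_nodes() == 0:
        break
    k, truss = k + 1, nxt

core = nx.core_number(G)
print(f"maximal truss: k={k}, nodes={sorted(truss.nodes())}")
print("their degrees:", {v: G.degree(v) for v in truss})
print("their k-core indices:", {v: core[v] for v in truss})
```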
Desiccant-assisted air conditioner improves IAQ and comfort
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meckler, M.
1994-10-01
This article describes a system which offers the advantage of downsizing the evaporator coil and condensing unit capacities for comparable design loads, which in turn provides numerous benefits. Airborne microorganisms, which are responsible for many acute diseases, infections, and allergies, are well protected indoors by the moisture surrounding them. While the human body is generally the host for various bacteria and viruses, fungi can grow in moist places. It has been concluded that an optimum relative humidity (RH) range of 40 to 60 percent is necessary to minimize or eliminate the bacterial, viral, and fungal growth. In addition, humidity also has an effect on air cleanliness--it reduces the presence of dust particles--and on the deterioration of the building structure and its contents. Therefore, controlling humidity is a very important factor to human comfort in minimizing adverse health effects and maximizing the structural longevity of the building.
Yoshida, Asuka; Kawabata, Ryoko; Honda, Tomoyuki; Sakai, Kouji; Ami, Yasushi; Sakaguchi, Takemasa; Irie, Takashi
2018-03-01
One of the first defenses against infecting pathogens is the innate immune system activated by cellular recognition of pathogen-associated molecular patterns (PAMPs). Although virus-derived RNA species, especially copyback (cb)-type defective interfering (DI) genomes, have been shown to serve as real PAMPs, which strongly induce interferon-beta (IFN-β) during mononegavirus infection, the mechanisms underlying DI generation remain unclear. Here, for the first time, we identified a single amino acid substitution causing production of cbDI genomes by successful isolation of two distinct types of viral clones with cbDI-producing and cbDI-nonproducing phenotypes from the stock Sendai virus (SeV) strain Cantell, which has been widely used in a number of studies on antiviral innate immunity as a representative IFN-β-inducing virus. IFN-β induction was totally dependent on the presence of a significant amount of cbDI genome-containing viral particles (DI particles) in the viral stock, but not on deficiency of the IFN-antagonistic viral accessory proteins C and V. Comparison of the isolates indicated that a single amino acid substitution found within the N protein of the cbDI-producing clone was enough to cause the emergence of DI genomes. The mutated N protein of the cbDI-producing clone resulted in a lower density of nucleocapsids than that of the DI-nonproducing clone, probably causing both production of the DI genomes and their formation of a stem-loop structure, which serves as an ideal ligand for RIG-I. These results suggested that the integrity of mononegaviral nucleocapsids might be a critical factor in avoiding the undesirable recognition of infection by host cells. IMPORTANCE The type I interferon (IFN) system is a pivotal defense against infecting RNA viruses that is activated by sensing viral RNA species. RIG-I is a major sensor for infection with most mononegaviruses, and copyback (cb)-type defective interfering (DI) genomes have been shown to serve as strong RIG-I ligands in real infections. However, the mechanism underlying production of cbDI genomes remains unclear, although DI genomes emerge as the result of an error during viral replication with high doses of viruses. Sendai virus has been extensively studied and is unique in that its interaction with innate immunity reveals opposing characteristics, such as high-level IFN-β induction and strong inhibition of type I IFN pathways. Our findings provide novel insights into the mechanism of production of mononegaviral cbDI genomes, as well as virus-host interactions during innate immunity. Copyright © 2018 American Society for Microbiology.
Acero Fernández, Doroteo; Ferri Iglesias, María José; López Nuñez, Carme; Louvrie Freire, René; Aldeguer Manté, Xavier
2013-01-01
For years, many clinical laboratories have routinely classified both undetectable and unquantifiable levels of hepatitis C virus RNA (HCV-RNA) determined by RT-PCR as below the limit of quantification (BLOQ). This practice might result in erroneous clinical decisions. Our aim was to assess the frequency and clinical relevance of assuming that samples that are BLOQ are negative. We performed a retrospective analysis of RNA determinations performed between 2009 and 2011 (Cobas/TaqMan, lower LOQ: 15 IU/mL). We distinguished between samples classified as «undetectable» and those classified as «<1.50E+01 IU/mL» (BLOQ). We analyzed 2,432 HCV-RNA measurements in 1,371 patients. RNA was BLOQ in 26 samples (1.07%) from 23 patients (1.68%). BLOQ results were highly prevalent among patients receiving Peg-Riba: 23 of 216 samples (10.6%) from 20 of 88 patients receiving treatment (22.7%). The clinical impact of BLOQ RNA samples was as follows: a) 2 patients initially considered to have negative results subsequently showed quantifiable RNA; b) 8 of 9 patients (88.9%) with BLOQ RNA at week 4 of treatment later showed sustained viral response; c) 3 patients with BLOQ RNA at weeks 12 and 48 of treatment relapsed; d) 4 patients with BLOQ RNA at week 24 and/or later had partial or breakthrough treatment responses, and e) in 5 patients the impact was null or could not be ascertained. This study suggests that BLOQ HCV-RNA indicates viremia and that equating a BLOQ result with a negative result can lead to treatment errors. BLOQ results are highly prevalent in on-treatment patients. The results of HCV-RNA quantification should be classified clearly, distinguishing between undetectable levels and levels that are BLOQ. Copyright © 2013 Elsevier España, S.L. and AEEH y AEG. All rights reserved.
Automated spike sorting algorithm based on Laplacian eigenmaps and k-means clustering.
Chah, E; Hok, V; Della-Chiesa, A; Miller, J J H; O'Mara, S M; Reilly, R B
2011-02-01
This study presents a new automatic spike sorting method based on feature extraction by Laplacian eigenmaps combined with k-means clustering. The performance of the proposed method was compared against previously reported algorithms such as principal component analysis (PCA) and amplitude-based feature extraction. Two types of classifier (namely k-means and classification expectation-maximization) were incorporated within the spike sorting algorithms, in order to find a suitable classifier for the feature sets. Simulated data sets and in-vivo tetrode multichannel recordings were employed to assess the performance of the spike sorting algorithms. The results show that the proposed algorithm yields significantly improved performance, with a mean sorting accuracy of 73% and a sorting error of 10%, compared with PCA combined with k-means, which had a sorting accuracy of 58% and a sorting error of 10%.
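The described pipeline, nonlinear feature extraction by Laplacian eigenmaps followed by k-means, can be sketched with scikit-learn on synthetic waveforms as below. This is an illustration of the combination only, not the authors' implementation or their evaluation data.

```python
import numpy as np
from itertools import permutations
from sklearn.manifold import SpectralEmbedding   # Laplacian eigenmaps
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 48)

def make_spikes(amplitude, width, n):
    """Synthetic spike waveforms: a negative peak followed by a small rebound."""
    base = (-amplitude * np.exp(-((t - 0.3) / width) ** 2)
            + 0.3 * amplitude * np.exp(-((t - 0.6) / (2 * width)) ** 2))
    return base + rng.normal(0, 0.05, (n, t.size))

waveforms = np.vstack([make_spikes(1.0, 0.05, 200),
                       make_spikes(0.7, 0.10, 200),
                       make_spikes(1.3, 0.03, 200)])
true_units = np.repeat([0, 1, 2], 200)

# Feature extraction by Laplacian eigenmaps, then clustering with k-means
features = SpectralEmbedding(n_components=3, n_neighbors=15).fit_transform(waveforms)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

# Crude accuracy check: best agreement over label permutations
acc = max(np.mean(np.array(p)[labels] == true_units) for p in permutations(range(3)))
print(f"sorting accuracy on synthetic data: {acc:.2f}")
```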
Association between cardiovascular fitness and metabolic syndrome among American workers.
Lewis, John E; Cutrono, Stacy E; Hodgson, Nicole; LeBlanc, William G; Arheart, Kristopher L; Fleming, Lora E; Lee, David J
2015-02-01
To explore the association between cardiovascular fitness and metabolic syndrome across occupational groups using a nationally representative sample of the US population. Respondents aged 18 to 49 years from the 1999 to 2004 National Health and Nutrition Examination Survey were evaluated for cardiovascular fitness and classified with regard to metabolic syndrome. Comparisons were made across 40 occupational categories. For all occupations with and without metabolic syndrome, the estimated maximal oxygen consumption (VO2max) was 38.8 mL/kg/min (standard error = 0.5) and 41.1 mL/kg/min (standard error = 0.2), respectively. The estimated VO2max was higher for those without metabolic syndrome for most occupational groups, particularly for sales supervisors and proprietors, sales representatives, finance, business, and commodities, and freight, stock, and material movers. Low estimated VO2max among workers with metabolic syndrome can be addressed, in part, by workplace interventions designed to increase fitness. This study identifies priority occupational groups for these interventions.
THE TWO-WAVELENGTH METHOD OF MICROSPECTROPHOTOMETRY
Mendelsohn, Mortimer L.
1961-01-01
In connection with the potential development of automatic two-wavelength microspectrophotometry, a new version of the two-wavelength method has been formulated. Unlike its predecessors, the Ornstein and Patau versions, the new method varies the area of the photometric field seeking to maximize a relationship between distributional errors at the two wavelengths. Stating this distributional error relationship in conventional photometric terms, the conditions at the maximum are defined by taking the first derivative with respect to field size and setting it equal to zero. This operation supplies two equations; one relates the transmittances at the two wavelengths, and a second states the relative amount of chromophore in the field in terms of transmittance at one wavelength. With the first equation to drive a servomechanism which sets the appropriate field size, the desired answer can then be obtained directly and continuously from the second equation. The result is identical in theory with those of the earlier methods, but the technique is more suitable for electronic computing. PMID:14472536
Visually Lossless Data Compression for Real-Time Frame/Pushbroom Space Science Imagers
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Venbrux, Jack; Bhatia, Prakash; Miller, Warner H.
2000-01-01
A visually lossless data compression technique is currently being developed for space science applications under the requirement of high-speed push-broom scanning. The technique is also applicable to frame-based imaging and is error-resilient in that error propagation is contained within a few scan lines. The algorithm is based on a block transform of a hybrid of modulated lapped transform (MLT) and discrete cosine transform (DCT), or a 2-dimensional lapped transform, followed by bit-plane encoding; this combination results in an embedded bit string with exactly the compression rate desired by the user. The approach requires no unique table to maximize its performance. The compression scheme performs well on a suite of test images typical of images from spacecraft instruments. Flight-qualified hardware implementations are in development; a functional chip set is expected by the end of 2001. The chip set is being designed to compress data in excess of 20 Msamples/sec and support quantizations from 2 to 16 bits.
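The key property is the embedded, bit-plane-ordered stream: cutting it at any point yields a valid, coarser reconstruction, which is how an exact user-specified rate can be met. The sketch below illustrates that idea on a single 8x8 block with a plain DCT (not the MLT/DCT hybrid or the flight implementation), and without the entropy-coding and error-containment details.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embedded_block_stream(block, n_planes=8):
    """Very simplified illustration: transform an 8x8 block, quantize, and emit
    coefficient bits most-significant-plane first, so truncating the stream at
    any point yields a coarser (higher-compression) reconstruction."""
    coeffs = dctn(block.astype(float), norm="ortho")
    q = np.round(coeffs).astype(np.int32)
    signs, mags = np.sign(q), np.abs(q)
    stream = []
    for plane in range(n_planes - 1, -1, -1):          # MSB plane first
        stream.extend(((mags >> plane) & 1).ravel().tolist())
    return signs, stream

def reconstruct(signs, stream, n_planes=8, keep_bits=None):
    if keep_bits is None:
        keep_bits = len(stream)
    bits = np.array(list(stream[:keep_bits]) + [0] * (len(stream) - keep_bits))
    planes = bits.reshape(n_planes, 8, 8)
    mags = sum(planes[i] << (n_planes - 1 - i) for i in range(n_planes))
    return idctn(signs * mags.astype(float), norm="ortho")

block = (np.arange(64).reshape(8, 8) % 17).astype(float)   # toy image block
signs, stream = embedded_block_stream(block)
full = reconstruct(signs, stream)
half = reconstruct(signs, stream, keep_bits=len(stream) // 2)
print("RMS error, full vs half stream:",
      round(float(np.sqrt(np.mean((block - full) ** 2))), 2),
      round(float(np.sqrt(np.mean((block - half) ** 2))), 2))
```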
Somato-dendritic Synaptic Plasticity and Error-backpropagation in Active Dendrites
Schiess, Mathieu; Urbanczik, Robert; Senn, Walter
2016-01-01
In the last decade dendrites of cortical neurons have been shown to nonlinearly combine synaptic inputs by evoking local dendritic spikes. It has been suggested that these nonlinearities raise the computational power of a single neuron, making it comparable to a 2-layer network of point neurons. But how these nonlinearities can be incorporated into the synaptic plasticity to optimally support learning remains unclear. We present a theoretically derived synaptic plasticity rule for supervised and reinforcement learning that depends on the timing of the presynaptic, the dendritic and the postsynaptic spikes. For supervised learning, the rule can be seen as a biological version of the classical error-backpropagation algorithm applied to the dendritic case. When modulated by a delayed reward signal, the same plasticity is shown to maximize the expected reward in reinforcement learning for various coding scenarios. Our framework makes specific experimental predictions and highlights the unique advantage of active dendrites for implementing powerful synaptic plasticity rules that have access to downstream information via backpropagation of action potentials. PMID:26841235
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gurkin, N V; Konyshev, V A; Novikov, A G
2015-01-31
We have studied, experimentally and using numerical simulations and a phenomenological analytical model, the dependences of the bit error rate (BER) on the signal power and length of a coherent single-span communication line with transponders employing polarisation division multiplexing and four-level phase modulation (100 Gbit s^-1 DP-QPSK format). In comparing the data of the experiment, numerical simulations and theoretical analysis, we have found two optimal powers: the power at which the BER is minimal and the power at which the fade margin in the line is maximal. We have derived and analysed the dependences of the BER on the optical signal power at the fibre line input, as well as the admissible range of input signal powers, for communication lines with lengths from 30-50 km up to a maximum length of 250 km.
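The existence of an optimum launch power can be illustrated with a generic sketch: if amplified-spontaneous-emission noise is fixed and nonlinear interference grows roughly as the cube of the launch power (a GN-model-style assumption, not the paper's measured parameters), the SNR, and hence the Gray-coded QPSK bit error rate, first improves and then degrades with power. All coefficients below are made up for illustration.

```python
import numpy as np
from scipy.special import erfc

# Illustrative only: GN-model-style scaling, not the paper's measured line.
p_dbm = np.linspace(-10, 10, 81)
p = 1e-3 * 10 ** (p_dbm / 10)          # launch power, W
p_ase = 5e-5                            # accumulated ASE noise power, W (made up)
eta_nl = 1e4                            # nonlinear interference coefficient, 1/W^2 (made up)

snr = p / (p_ase + eta_nl * p ** 3)     # electrical SNR per symbol (illustrative)
ber = 0.5 * erfc(np.sqrt(snr / 2))      # Gray-coded QPSK: per-bit SNR = SNR_sym / 2

best = np.argmin(ber)
print(f"minimum BER {ber[best]:.1e} at {p_dbm[best]:.1f} dBm launch power")
```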
Miniaturized force/torque sensor for in vivo measurements of tissue characteristics.
Hessinger, M; Pilic, T; Werthschutzky, R; Pott, P P
2016-08-01
This paper presents the development of a surgical instrument to measure interaction forces/torques with organic tissue during surgery. The focus is on the design progress of the sensor element, consisting of a spoke wheel deformation element with a diameter of 12 mm and eight inhomogeneously doped piezoresistive silicon strain gauges on an integrated full-bridge assembly with an edge length of 500 μm. The silicon chips are contacted to flex-circuits via flip-chip bonding and bonded to the substrate with a single-component adhesive. A signal processing board with an 18-bit serial A/D converter is integrated into the sensor. The design concept of the handheld surgical sensor device consists of an instrument coupling, the six-axis sensor, a wireless communication interface and a battery. The nominal force of the sensing element is 10 N and the nominal torque is 1 N-m in all spatial directions. A first characterization of the force sensor yields a maximal systematic error of 4.92% and a random error of 1.13%.
Optimization of Dish Solar Collectors with and without Secondary Concentrators
NASA Technical Reports Server (NTRS)
Jaffe, L. D.
1982-01-01
Methods for optimizing parabolic dish solar collectors and the consequent effects of various optical, thermal, mechanical, and cost variables are examined. The most important performance optimization is adjusting the receiver aperture to maximize collector efficiency. Other parameters that can be adjusted to optimize efficiency include focal length, and, if a heat engine is used, the receiver temperature. The efficiency maxima associated with focal length and receiver temperature are relatively broad; it may, accordingly, be desirable to design somewhat away from the maxima. Performance optimization is sensitive to the slope and specularity errors of the concentrator. Other optical and thermal variables affecting optimization are the reflectance and blocking factor of the concentrator, the absorptance and losses of the receiver, and, if a heat engine is used, the shape of the engine efficiency versus temperature curve. Performance may sometimes be improved by use of an additional optical element (a secondary concentrator) or a receiver window if the errors of the primary concentrator are large or the receiver temperature is high.
A Bayesian approach to microwave precipitation profile retrieval
NASA Technical Reports Server (NTRS)
Evans, K. Franklin; Turk, Joseph; Wong, Takmeng; Stephens, Graeme L.
1995-01-01
A multichannel passive microwave precipitation retrieval algorithm is developed. Bayes theorem is used to combine statistical information from numerical cloud models with forward radiative transfer modeling. A multivariate lognormal prior probability distribution contains the covariance information about hydrometeor distribution that resolves the nonuniqueness inherent in the inversion process. Hydrometeor profiles are retrieved by maximizing the posterior probability density for each vector of observations. The hydrometeor profile retrieval method is tested with data from the Advanced Microwave Precipitation Radiometer (10, 19, 37, and 85 GHz) of convection over ocean and land in Florida. The CP-2 multiparameter radar data are used to verify the retrieved profiles. The results show that the method can retrieve approximate hydrometeor profiles, with larger errors over land than water. There is considerably greater accuracy in the retrieval of integrated hydrometeor contents than of profiles. Many of the retrieval errors are traced to problems with the cloud model microphysical information, and future improvements to the algorithm are suggested.
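A minimal sketch of the Bayes-theorem machinery described above, under simplifying assumptions: the hydrometeor profile is lognormal a priori (so its logarithm is Gaussian), the radiative transfer is replaced by a hypothetical linear operator H, and observation noise is Gaussian; the retrieval then maximizes the posterior by minimizing the negative log-posterior. None of the matrices correspond to the paper's cloud-model statistics.

```python
# Toy sketch of a Bayes-theorem retrieval (not the paper's forward model): hydrometeor
# profile x is lognormal a priori (so z = log x is Gaussian), a hypothetical linear
# operator H maps z to brightness temperatures, and observation noise is Gaussian.
# The retrieval maximizes the posterior by minimizing the negative log-posterior.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_layers, n_chan = 5, 4
mu_z = np.zeros(n_layers)                  # prior mean of log-contents (assumed)
S_z = 0.5 * np.eye(n_layers)               # prior covariance (assumed)
H = rng.normal(size=(n_chan, n_layers))    # hypothetical linearized forward model
R = 1.0 * np.eye(n_chan)                   # observation-error covariance (assumed)

z_true = rng.multivariate_normal(mu_z, S_z)
y_obs = H @ z_true + rng.multivariate_normal(np.zeros(n_chan), R)

def neg_log_post(z):
    r_obs = y_obs - H @ z
    r_pri = z - mu_z
    return 0.5 * r_obs @ np.linalg.solve(R, r_obs) + 0.5 * r_pri @ np.linalg.solve(S_z, r_pri)

z_map = minimize(neg_log_post, mu_z).x
print("retrieved hydrometeor contents:", np.exp(z_map))
```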
A game theory approach to target tracking in sensor networks.
Gu, Dongbing
2011-02-01
In this paper, we investigate a moving-target tracking problem with sensor networks. Each sensor node has a sensor to observe the target and a processor to estimate the target position. It also has wireless communication capability but with limited range and can only communicate with neighbors. The moving target is assumed to be an intelligent agent, which is "smart" enough to escape from the detection by maximizing the estimation error. This adversary behavior makes the target tracking problem more difficult. We formulate this target estimation problem as a zero-sum game in this paper and use a minimax filter to estimate the target position. The minimax filter is a robust filter that minimizes the estimation error by considering the worst case noise. Furthermore, we develop a distributed version of the minimax filter for multiple sensor nodes. The distributed computation is implemented via modeling the information received from neighbors as measurements in the minimax filter. The simulation results show that the target tracking algorithm proposed in this paper provides a satisfactory result.
Measurement configuration optimization for dynamic metrology using Stokes polarimetry
NASA Astrophysics Data System (ADS)
Liu, Jiamin; Zhang, Chuanwei; Zhong, Zhicheng; Gu, Honggang; Chen, Xiuguo; Jiang, Hao; Liu, Shiyuan
2018-05-01
As dynamic loading experiments such as a shock compression test are usually characterized by short duration, unrepeatability and high costs, high temporal resolution and precise accuracy of the measurements is required. Due to high temporal resolution up to a ten-nanosecond-scale, a Stokes polarimeter with six parallel channels has been developed to capture such instantaneous changes in optical properties in this paper. Since the measurement accuracy heavily depends on the configuration of the probing beam incident angle and the polarizer azimuth angle, it is important to select an optimal combination from the numerous options. In this paper, a systematic error propagation-based measurement configuration optimization method corresponding to the Stokes polarimeter was proposed. The maximal Frobenius norm of the combinatorial matrix of the configuration error propagating matrix and the intrinsic error propagating matrix is introduced to assess the measurement accuracy. The optimal configuration for thickness measurement of a SiO2 thin film deposited on a Si substrate has been achieved by minimizing the merit function. Simulation and experimental results show a good agreement between the optimal measurement configuration achieved experimentally using the polarimeter and the theoretical prediction. In particular, the experimental result shows that the relative error in the thickness measurement can be reduced from 6% to 1% by using the optimal polarizer azimuth angle when the incident angle is 45°. Furthermore, the optimal configuration for the dynamic metrology of a nickel foil under quasi-dynamic loading is investigated using the proposed optimization method.
Bressel, Eadric; Yonker, Joshua C; Kras, John; Heath, Edward M
2007-01-01
Context: How athletes from different sports perform on balance tests is not well understood. When prescribing balance exercises to athletes in different sports, it may be important to recognize performance variations. Objective: To compare static and dynamic balance among collegiate athletes competing or training in soccer, basketball, and gymnastics. Design: A quasi-experimental, between-groups design. Independent variables included limb (dominant and nondominant) and sport played. Setting: A university athletic training facility. Patients or Other Participants: Thirty-four female volunteers who competed in National Collegiate Athletic Association Division I soccer (n = 11), basketball (n = 11), or gymnastics (n = 12). Intervention(s): To assess static balance, participants performed 3 stance variations (double leg, single leg, and tandem leg) on 2 surfaces (stiff and compliant). For assessment of dynamic balance, participants performed multidirectional maximal single-leg reaches from a unilateral base of support. Main Outcome Measure(s): Errors from the Balance Error Scoring System and normalized leg reach distances from the Star Excursion Balance Test were used to assess static and dynamic balance, respectively. Results: Balance Error Scoring System error scores for the gymnastics group were 55% lower than for the basketball group (P = .01), and Star Excursion Balance Test scores were 7% higher in the soccer group than the basketball group (P = .04). Conclusions: Gymnasts and soccer players did not differ in terms of static and dynamic balance. In contrast, basketball players displayed inferior static balance compared with gymnasts and inferior dynamic balance compared with soccer players. PMID:17597942
Gerns Storey, Helen L; Richardson, Barbra A; Singa, Benson; Naulikha, Jackie; Prindle, Vivian C; Diaz-Ochoa, Vladimir E; Felgner, Phil L; Camerini, David; Horton, Helen; John-Stewart, Grace; Walson, Judd L
2014-01-01
The role of HIV-1-specific antibody responses in HIV disease progression is complex and would benefit from analysis techniques that examine clusterings of responses. Protein microarray platforms facilitate the simultaneous evaluation of numerous protein-specific antibody responses, though excessive data are cumbersome in analyses. Principal components analysis (PCA) reduces data dimensionality by generating fewer composite variables that maximally account for variance in a dataset. To identify clusters of antibody responses involved in disease control, we investigated the association of HIV-1-specific antibody responses by protein microarray, and assessed their association with disease progression using PCA in a nested cohort design. Associations observed among collections of antibody responses paralleled protein-specific responses. At baseline, greater antibody responses to the transmembrane glycoprotein (TM) and reverse transcriptase (RT) were associated with higher viral loads, while responses to the surface glycoprotein (SU), capsid (CA), matrix (MA), and integrase (IN) proteins were associated with lower viral loads. Over 12 months greater antibody responses were associated with smaller decreases in CD4 count (CA, MA, IN), and reduced likelihood of disease progression (CA, IN). PCA and protein microarray analyses highlighted a collection of HIV-specific antibody responses that together were associated with reduced disease progression, and may not have been identified by examining individual antibody responses. This technique may be useful to explore multifaceted host-disease interactions, such as HIV coinfections.
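As a minimal illustration of the PCA step described above (synthetic data only, with hypothetical antigens in place of the actual microarray panel), the composite variables are simply the leading singular-vector scores of the column-centered response matrix:

```python
# Minimal PCA sketch on a synthetic subjects-x-antigens response matrix (illustrative
# data only): center the columns, take the SVD, and keep the first few component
# scores as composite variables for association analyses.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 12))              # 40 subjects, 12 hypothetical antigens
Xc = X - X.mean(axis=0)                    # column-center

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
scores = Xc @ Vt.T[:, :3]                  # first 3 principal-component scores

print("variance explained by PC1-PC3:", np.round(explained[:3], 3))
print("per-subject composite scores shape:", scores.shape)
```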
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwon, Jeanny; Kim, Dan Hyo; Park, Ji Min
Purpose: To investigate which isotype of phosphatidylinositol 4-kinase (PI4K) may affect radiosensitivity and examine whether anti-hepatitis C viral (HCV) agents, some of which have been shown to inhibit PI4K IIIα activity, could be repositioned as a radiosensitizer in human cancer cells. Methods and Materials: U251, BT474, and HepG2 cell lines and normal human astrocyte were used. Ribonucleic acid interference, clonogenic assays, Western blotting, immunofluorescence, annexin V assay, lysotracker staining, and β-galactosidase assay were performed. Results: Of the 4 PI4K isotypes, specific inhibition of IIIα increased radiosensitivity. For pharmacologic inhibition of PI4K IIIα, we screened 9 anti-HCV agents by half-maximal inhibitory concentration assay. Simeprevir was selected, and its inhibition of PI4K IIIα activity was confirmed. Combination of simeprevir treatment and radiation significantly attenuated expression of phospho-PKC and phospho-Akt and increased radiation-induced cell death in tested cell lines. Pretreatment with simeprevir prolonged γH2AX foci formation and down-regulation of phospho-DNA-PKcs, indicating impairment of nonhomologous end-joining repair. Cells pretreated with simeprevir exhibited mixed modes of cell death, including apoptosis and autophagy. Conclusion: These data demonstrate that targeting PI4K IIIα using an anti-HCV agent is a viable approach to enhance the therapeutic efficacy of radiation therapy in various human cancers, such as glioma, breast, and hepatocellular carcinoma.
Decision-Making under Risk of Loss in Children
Steelandt, Sophie; Broihanne, Marie-Hélène; Romain, Amélie; Thierry, Bernard; Dufour, Valérie
2013-01-01
In human adults, judgment errors are known to often lead to irrational decision-making in risky contexts. While these errors can affect the accuracy of profit evaluation, they may have once enhanced survival in dangerous contexts following a “better be safe than sorry” rule of thumb. Such a rule can be critical for children, and it could develop early on. Here, we investigated the rationality of choices and the possible occurrence of judgment errors in children aged 3 to 9 years when exposed to a risky trade. Children were given a piece of cookie that they could either keep or risk in exchange for the content of one cup among 6, visible in front of them. In the cups, cookies could be of larger, equal or smaller sizes than the initial allocation. Chances of losing or winning were manipulated by presenting different combinations of cookie sizes in the cups (for example 3 large, 2 equal and 1 small cookie). We investigated the rationality of children's responses using the theoretical models of Expected Utility Theory (EUT) and Cumulative Prospect Theory. Children aged 3 to 4 years old were unable to discriminate the profitability of exchanging in the different combinations. From 5 years, children were better at maximizing their benefit in each combination, their decisions were negatively influenced by the probability of losing, and they exhibited a framing effect, a judgment error found in adults. Comparing the data with the EUT indicated that children aged over 5 were risk-seekers but also revealed inconsistencies in their choices. According to a complementary model, the Cumulative Prospect Theory (CPT), they exhibited loss aversion, a pattern also found in adults. These findings confirm that adult-like judgment errors occur in children, which suggests that they possess a survival value. PMID:23349682
Statistically Self-Consistent and Accurate Errors for SuperDARN Data
NASA Astrophysics Data System (ADS)
Reimer, A. S.; Hussey, G. C.; McWilliams, K. A.
2018-01-01
The Super Dual Auroral Radar Network (SuperDARN)-fitted data products (e.g., spectral width and velocity) are produced using weighted least squares fitting. We present a new First-Principles Fitting Methodology (FPFM) that utilizes the first-principles approach of Reimer et al. (2016) to estimate the variance of the real and imaginary components of the mean autocorrelation function (ACF) lags. SuperDARN ACFs fitted by the FPFM do not use ad hoc or empirical criteria. Currently, the weighting used to fit the ACF lags is derived from ad hoc estimates of the ACF lag variance. Additionally, an overcautious lag filtering criterion is used that sometimes discards data that contain useful information. In low signal-to-noise (SNR) and/or low signal-to-clutter regimes the ad hoc variance and empirical criterion lead to underestimated errors for the fitted parameters because the relative contributions of signal, noise, and clutter to the ACF variance are not taken into consideration. The FPFM variance expressions include contributions of signal, noise, and clutter. The clutter is estimated using the maximal power-based self-clutter estimator derived by Reimer and Hussey (2015). The FPFM was successfully implemented and tested using synthetic ACFs generated with the radar data simulator of Ribeiro, Ponomarenko, et al. (2013). The fitted parameters and the fitted-parameter errors produced by the FPFM are compared with the current SuperDARN fitting software, FITACF. Using self-consistent statistical analysis, the FPFM produces reliable or trustworthy quantitative measures of the errors of the fitted parameters. For an SNR in excess of 3 dB and velocity error below 100 m/s, the FPFM produces 52% more data points than FITACF.
Decision-making under risk of loss in children.
Steelandt, Sophie; Broihanne, Marie-Hélène; Romain, Amélie; Thierry, Bernard; Dufour, Valérie
2013-01-01
In human adults, judgment errors are known to often lead to irrational decision-making in risky contexts. While these errors can affect the accuracy of profit evaluation, they may have once enhanced survival in dangerous contexts following a "better be safe than sorry" rule of thumb. Such a rule can be critical for children, and it could develop early on. Here, we investigated the rationality of choices and the possible occurrence of judgment errors in children aged 3 to 9 years when exposed to a risky trade. Children were given a piece of cookie that they could either keep or risk in exchange for the content of one cup among 6, visible in front of them. In the cups, cookies could be of larger, equal or smaller sizes than the initial allocation. Chances of losing or winning were manipulated by presenting different combinations of cookie sizes in the cups (for example 3 large, 2 equal and 1 small cookie). We investigated the rationality of children's responses using the theoretical models of Expected Utility Theory (EUT) and Cumulative Prospect Theory. Children aged 3 to 4 years old were unable to discriminate the profitability of exchanging in the different combinations. From 5 years, children were better at maximizing their benefit in each combination, their decisions were negatively influenced by the probability of losing, and they exhibited a framing effect, a judgment error found in adults. Comparing the data with the EUT indicated that children aged over 5 were risk-seekers but also revealed inconsistencies in their choices. According to a complementary model, the Cumulative Prospect Theory (CPT), they exhibited loss aversion, a pattern also found in adults. These findings confirm that adult-like judgment errors occur in children, which suggests that they possess a survival value.
Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...
2017-11-08
Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
Safety Strategies in an Academic Radiation Oncology Department and Recommendations for Action
Terezakis, Stephanie A.; Pronovost, Peter; Harris, Kendra; DeWeese, Theodore; Ford, Eric
2013-01-01
Background: Safety initiatives in the United States continue to work on providing guidance as to how the average practitioner might make patients safer in the face of the complex process by which radiation therapy (RT), an essential treatment used in the management of many patients with cancer, is prepared and delivered. Quality control measures can uncover certain specific errors such as machine dose mis-calibration or misalignments of the patient in the radiation treatment beam. However, they are less effective at uncovering less common errors that can occur anywhere along the treatment planning and delivery process, and even when the process is functioning as intended, errors still occur. Prioritizing Risks and Implementing Risk-Reduction Strategies: Activities undertaken at the radiation oncology department at the Johns Hopkins Hospital (Baltimore) include Failure Mode and Effects Analysis (FMEA), risk-reduction interventions, and voluntary error and near-miss reporting systems. A visual process map portrayed 269 RT steps occurring among four subprocesses, including consult, simulation, treatment planning, and treatment delivery. Two FMEAs revealed 127 and 159 possible failure modes, respectively. Risk-reduction interventions for 15 “top-ranked” failure modes were implemented. Since the error and near-miss reporting system’s implementation in the department in 2007, 253 events have been logged. However, the system may be insufficient for radiation oncology, for which a greater level of practice-specific information is required to fully understand each event. Conclusions: The “basic science” of radiation treatment has received considerable support and attention in developing novel therapies to benefit patients. The time has come to apply the same focus and resources to ensuring that patients safely receive the maximal benefits possible. PMID:21819027
Orta Mira, Nieves; Serrano, María del Remedio Guna; Martínez, José-Carlos Latorre; Ovies, María Rosario; Poveda, Marta; de Gopegui, Enrique Ruiz; Cardona, Concepción Gimeno
2011-12-01
Human immunodeficiency virus type 1 (HIV-1) and hepatitis B (HBV) and C virus (HCV) viral load determinations are among the most important markers for the follow-up of patients infected with these viruses. External quality control tools are crucial to ensure the accuracy of the results obtained by microbiology laboratories. This article summarizes the results obtained in the 2010 External Quality Control Program of the Spanish Society of Infectious Diseases and Clinical Microbiology for HIV-1, HCV, and HBV viral loads and HCV genotyping. In the HIV-1 program, a total of five standards were sent. One standard consisted of seronegative human plasma, while the remaining four contained plasma from three different viremic patients, in the range of 3-5 log(10) copies/mL; two of these standards were identical, with the aim of determining repeatability. A significant proportion of the laboratories (22.6% on average) obtained values out of the accepted range (mean ± 0.2 log(10) copies/mL), depending on the standard and on the method used for quantification. Repeatability was very good, with up to 95% of laboratories reporting results within the limits (Δ < 0.5 log(10) copies/mL). The HBV and HCV program consisted of two standards with different viral load contents. Most of the participants, 86.1% in the case of HCV and 87.1% in HBV, obtained all the results within the accepted range (mean ± 1.96 SD log(10) IU/mL). Post-analytical errors due to mistranscription of the results were detected in these controls. Data from this analysis reinforce the utility of proficiency programs to ensure the quality of the results obtained by a particular laboratory, as well as the importance of the post-analytical phase in overall quality. Due to interlaboratory variability, use of the same method and the same laboratory for patient follow-up is advisable. Copyright © 2011 Elsevier España S.L. All rights reserved.
An 802.11n wireless local area network transmission scheme for wireless telemedicine applications.
Lin, C F; Hung, S I; Chiang, I H
2010-10-01
In this paper, an 802.11n transmission scheme is proposed for wireless telemedicine applications. IEEE 802.11n standards, a power assignment strategy, space-time block coding (STBC), and an object composition Petri net (OCPN) model are adopted. With the proposed wireless system, G.729 audio bit streams, Joint Photographic Experts Group 2000 (JPEG 2000) clinical images, and Moving Picture Experts Group 4 (MPEG-4) video bit streams simultaneously achieve transmission bit error rates (BERs) of 10(-7), 10(-4), and 10(-3), respectively. The proposed system meets the requirements prescribed for wireless telemedicine applications. An essential feature of this proposed transmission scheme is that clinical information that requires a high quality of service (QoS) is transmitted at a high power transmission rate with significant error protection. For maximizing resource utilization and minimizing the total transmission power, STBC and adaptive modulation techniques are used in the proposed 802.11n wireless telemedicine system. Further, low power, direct mapping (DM), a low-error-protection scheme, and high-level modulation are adopted for messages that can tolerate a high BER. With the proposed transmission scheme, the required reliability of communication can be achieved. Our simulation results have shown that the proposed 802.11n transmission scheme can be used for developing effective wireless telemedicine systems.
Derivation and precision of mean field electrodynamics with mesoscale fluctuations
NASA Astrophysics Data System (ADS)
Zhou, Hongzhe; Blackman, Eric G.
2018-06-01
Mean field electrodynamics (MFE) facilitates practical modelling of secular, large scale properties of astrophysical or laboratory systems with fluctuations. Practitioners commonly assume wide scale separation between mean and fluctuating quantities, to justify equality of ensemble and spatial or temporal averages. Often however, real systems do not exhibit such scale separation. This raises two questions: (I) What are the appropriate generalized equations of MFE in the presence of mesoscale fluctuations? (II) How precise are theoretical predictions from MFE? We address both by first deriving the equations of MFE for different types of averaging, along with mesoscale correction terms that depend on the ratio of averaging scale to variation scale of the mean. We then show that even if these terms are small, predictions of MFE can still have a significant precision error. This error has an intrinsic contribution from the dynamo input parameters and a filtering contribution from differences in the way observations and theory are projected through the measurement kernel. Minimizing the sum of these contributions can produce an optimal scale of averaging that makes the theory maximally precise. The precision error is important to quantify when comparing to observations because it quantifies the resolution of predictive power. We exemplify these principles for galactic dynamos, comment on broader implications, and identify possibilities for further work.
Maximum entropy approach to statistical inference for an ocean acoustic waveguide.
Knobles, D P; Sagers, J D; Koch, R A
2012-02-01
A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations. © 2012 Acoustical Society of America
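A small numerical sketch of the construction described above, with invented error values: the conditional distribution has the canonical form p_m proportional to exp(-beta*E_m), and the sensitivity factor beta is fixed by requiring the expected error to equal a specified constraint value, here found by bisection.

```python
# Sketch of the maximum-entropy construction described above (illustrative numbers):
# given error-function values E_m for a set of candidate parameter vectors, the MaxEnt
# distribution is p_m ~ exp(-beta * E_m), with beta chosen so that the expected error
# <E> equals a specified constraint value, solved here by bisection.
import numpy as np

rng = np.random.default_rng(2)
E = rng.uniform(0.5, 2.0, size=1000)       # hypothetical error values of model solutions
E_target = 0.8                             # assumed expectation-value constraint

def mean_E(beta):
    w = np.exp(-beta * (E - E.min()))      # shift for numerical stability
    p = w / w.sum()
    return np.sum(p * E)

lo, hi = 0.0, 1e3                          # bisection bracket for the sensitivity factor
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mean_E(mid) > E_target else (lo, mid)

beta = 0.5 * (lo + hi)
p = np.exp(-beta * (E - E.min()))
p /= p.sum()
print(f"beta = {beta:.2f}, <E> = {np.sum(p * E):.3f}")
```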
The Weighted-Average Lagged Ensemble.
DelSole, T; Trenary, L; Tippett, M K
2017-11-01
A lagged ensemble is an ensemble of forecasts from the same model initialized at different times but verifying at the same time. The skill of a lagged ensemble mean can be improved by assigning weights to different forecasts in such a way as to maximize skill. If the forecasts are bias corrected, then an unbiased weighted lagged ensemble requires the weights to sum to one. Such a scheme is called a weighted-average lagged ensemble. In the limit of uncorrelated errors, the optimal weights are positive and decay monotonically with lead time, so that the least skillful forecasts have the least weight. In more realistic applications, the optimal weights do not always behave this way. This paper presents a series of analytic examples designed to illuminate conditions under which the weights of an optimal weighted-average lagged ensemble become negative or depend nonmonotonically on lead time. It is shown that negative weights are most likely to occur when the errors grow rapidly and are highly correlated across lead time. The weights are most likely to behave nonmonotonically when the mean square error is approximately constant over the range of forecasts included in the lagged ensemble. An extreme example of the latter behavior is presented in which the optimal weights vanish everywhere except at the shortest and longest lead times.
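A compact sketch of the weighted-average lagged ensemble on synthetic forecast errors: with bias-corrected forecasts and weights constrained to sum to one, the MSE-optimal weights follow from the lead-time error covariance C as w = C^(-1)1 / (1'C^(-1)1); correlated, growing errors of the kind discussed above can make some weights negative. The error standard deviations and correlation used below are assumptions.

```python
# Sketch of a weighted-average lagged ensemble (synthetic data): with bias-corrected
# forecasts and weights constrained to sum to one, the MSE-optimal weights depend only
# on the lead-time error covariance C, via w = C^{-1} 1 / (1' C^{-1} 1).  Errors that
# grow with lead time and are correlated across leads are simulated below.
import numpy as np

rng = np.random.default_rng(3)
n_leads, n_cases = 4, 500
sd = np.array([1.0, 1.3, 1.7, 2.2])        # error std dev growing with lead time (assumed)
rho = 0.6                                  # assumed correlation between adjacent leads
C_true = np.array([[sd[i] * sd[j] * rho ** abs(i - j) for j in range(n_leads)]
                   for i in range(n_leads)])

errors = rng.multivariate_normal(np.zeros(n_leads), C_true, size=n_cases)
C = np.cov(errors, rowvar=False)           # sample error covariance across cases

ones = np.ones(n_leads)
w = np.linalg.solve(C, ones)
w /= ones @ w                              # enforce the sum-to-one constraint
print("optimal lagged-ensemble weights:", np.round(w, 3))
print("weighted MSE:", w @ C @ w, " equal-weight MSE:", (ones / n_leads) @ C @ (ones / n_leads))
```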
Comparison between goal programming and cointegration approaches in enhanced index tracking
NASA Astrophysics Data System (ADS)
Lam, Weng Siew; Jamaan, Saiful Hafizah Hj.
2013-04-01
Index tracking is a popular form of passive fund management in the stock market. Passive management is a buy-and-hold strategy that aims to achieve a rate of return similar to the market return. The index tracking problem is the problem of reproducing the performance of a stock market index without purchasing all of the stocks that make up the index. This can be done by establishing an optimal portfolio that minimizes risk or tracking error. Improved index tracking (enhanced index tracking) is a dual-objective optimization problem, a trade-off between maximizing the mean return and minimizing the tracking error. Enhanced index tracking aims to generate excess return over the return achieved by the index. The objective of this study is to compare the portfolio compositions and performances obtained by two different approaches to the enhanced index tracking problem, namely goal programming and cointegration. The result of this study shows that the optimal portfolios for both approaches are able to outperform the Malaysian market index, the Kuala Lumpur Composite Index. The two approaches give different optimal portfolio compositions. Moreover, the cointegration approach outperforms the goal programming approach because it gives a higher mean return and a lower risk or tracking error. Therefore, the cointegration approach is more appropriate for investors in Malaysia.
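The sketch below illustrates the enhanced-index-tracking trade-off in a generic optimization form (synthetic returns rather than Kuala Lumpur Composite Index data, and not the study's specific goal-programming or cointegration formulations): long-only weights summing to one are chosen to reduce tracking-error variance while rewarding mean excess return, with an assumed trade-off parameter lam.

```python
# Illustrative enhanced index tracking sketch (synthetic returns): choose portfolio
# weights to trade off tracking-error variance against mean excess return over the
# index, with long-only weights summing to one.  The trade-off parameter lam and the
# return-generating process are assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
T, n = 250, 8                              # 250 days, 8 candidate stocks (hypothetical)
R = rng.normal(0.0004, 0.01, size=(T, n))  # stock returns
r_index = R @ np.full(n, 1.0 / n) + rng.normal(0, 0.002, size=T)  # index returns

lam = 0.5                                  # return-vs-tracking-error trade-off (assumed)

def objective(w):
    diff = R @ w - r_index                 # active (excess) return series
    return np.var(diff) - lam * np.mean(diff)

cons = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
bounds = [(0.0, 1.0)] * n
res = minimize(objective, np.full(n, 1.0 / n), bounds=bounds, constraints=cons, method="SLSQP")
print("weights:", np.round(res.x, 3))
print("annualised tracking error:", np.sqrt(252 * np.var(R @ res.x - r_index)))
```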
Satellite-Tracking Millimeter-Wave Reflector Antenna System For Mobile Satellite-Tracking
NASA Technical Reports Server (NTRS)
Densmore, Arthur C. (Inventor); Jamnejad, Vahraz (Inventor); Woo, Kenneth E. (Inventor)
2001-01-01
A miniature dual-band two-way mobile satellite-tracking antenna system mounted on a movable vehicle includes a miniature parabolic reflector dish having an elliptical aperture with major and minor elliptical axes aligned horizontally and vertically, respectively, to maximize azimuthal directionality and minimize elevational directionality to an extent corresponding to expected pitch excursions of the movable ground vehicle. A feed-horn has a back end and an open front end facing the reflector dish and has vertical side walls opening out from the back end to the front end at a lesser horn angle and horizontal top and bottom walls opening out from the back end to the front end at a greater horn angle. An RF circuit couples two different signal bands between the feed-horn and the user. An antenna attitude controller maintains an antenna azimuth direction relative to the satellite by rotating it in azimuth in response to sensed yaw motions of the movable ground vehicle so as to compensate for the yaw motions to within a pointing error angle. The controller sinusoidally dithers the antenna through a small azimuth dither angle greater than the pointing error angle while sensing a signal from the satellite received at the reflector dish, and deduces the pointing angle error from dither-induced fluctuations in the received signal.
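A conceptual sketch of how dithering reveals the pointing error (illustrative Gaussian beam model, not the patented controller): the component of the received power in phase with the dither is proportional to the local slope of the beam pattern, so its sign and magnitude indicate which way, and roughly how far, to correct. Beamwidth, dither amplitude and error values are assumptions.

```python
# Conceptual sketch of deducing a pointing error from dither-induced signal
# fluctuations (illustrative beam model): the azimuth is dithered sinusoidally, and
# lock-in style demodulation of the received power at the dither frequency yields a
# term proportional to the beam-pattern slope, whose sign reveals which way to steer.
import numpy as np

beamwidth = 2.0                                   # deg, assumed one-sigma beam parameter
A, f = 0.3, 10.0                                  # dither amplitude (deg) and rate (Hz)
t = np.linspace(0, 1, 2000)
dither = A * np.sin(2 * np.pi * f * t)

def received_power(pointing_error_deg):
    offset = pointing_error_deg + dither          # instantaneous off-axis angle
    return np.exp(-0.5 * (offset / beamwidth) ** 2)

for err in (-0.5, 0.0, 0.5):
    p = received_power(err)
    in_phase = 2 * np.mean(p * np.sin(2 * np.pi * f * t))   # lock-in style demodulation
    print(f"true error {err:+.1f} deg -> demodulated component {in_phase:+.4f}")
```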
An Optimal t-Δv Guidance Law for Intercepting a Boosting Target
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ng, L.C.; Breitfeller, E.; Ledebuhr, A.G.
2002-06-30
Lawrence Livermore National Laboratory (LLNL) has developed a new missile guidance law for intercepting a missile during boost phase. Unlike other known missile guidance laws in use today, the new t-Δv guidance law optimally trades an interceptor's onboard fuel capacity against time-to-go before impact. In particular, this guidance law allows a missile designer to program the interceptor to impact a boosting missile as early as possible before burnout or burn termination, thus negating its ability to achieve the maximum kinetic velocity. For an intercontinental range ballistic missile (ICBM), it can be shown that for every second of earlier intercept prior to burnout, the ICBM ground range is reduced by 350 km. Therefore, intercepting a mere 15 seconds earlier would result in a miss of 5,250 km from the intended target, approximately the distance across the continental US. This paper also shows how the t-Δv guidance law can incorporate uncertainties in target burnout time, predicted intercept point (PIP) error, time-to-go error, and other track estimation errors. The authors believe that the t-Δv guidance law is a step toward the development of a new and smart missile guidance law that would enhance the probability of achieving a boost phase intercept.
A satellite-tracking millimeter-wave reflector antenna system for mobile satellite-tracking
NASA Technical Reports Server (NTRS)
Densmore, Arthur C. (Inventor); Jamnejad, Vahraz (Inventor); Woo, Kenneth E. (Inventor)
1995-01-01
A miniature dual-band two-way mobile satellite tracking antenna system mounted on a movable ground vehicle includes a miniature parabolic reflector dish having an elliptical aperture with major and minor elliptical axes aligned horizontally and vertically, respectively, to maximize azimuthal directionality and minimize elevational directionality to an extent corresponding to expected pitch excursions of the movable ground vehicle. A feed-horn has a back end and an open front end facing the reflector dish and has vertical side walls opening out from the back end to the front end at a lesser horn angle and horizontal top and bottom walls opening out from the back end to the front end at a greater horn angle. An RF circuit couples two different signal bands between the feed-horn and the user. An antenna attitude controller maintains an antenna azimuth direction relative to the satellite by rotating it in azimuth in response to sensed yaw motions of the movable ground vehicle so as to compensate for the yaw motions to within a pointing error angle. The controller sinusoidally dithers the antenna through a small azimuth dither angle greater than the pointing error angle while sensing a signal from the satellite received at the reflector dish, and deduces the pointing angle error from dither-induced fluctuations in the received signal.
NASA Technical Reports Server (NTRS)
Schlegel, E.; Norris, Jay P. (Technical Monitor)
2002-01-01
This project was awarded funding from the CGRO program to support ROSAT and ground-based observations of unidentified sources from data obtained by the EGRET instrument on the Compton Gamma-Ray Observatory. The critical items in the project are the individual ROSAT observations that are used to cover the 99% error circle of the unidentified EGRET source. Each error circle is a degree or larger in diameter. Each ROSAT field is about 30 arcmin in diameter. Hence, a number (>4) of ROSAT pointings must be obtained for each EGRET source to cover the field. The scheduling of ROSAT observations is carried out to maximize the efficiency of the total schedule. As a result, each pointing is broken into one or more sub-pointings of various exposure times. This project was awarded ROSAT observing time for four unidentified EGRET sources, summarized in the table. The column headings are defined as follows: 'Coverings' = number of observations to cover the error circle; 'SubPtg' = total number of sub-pointings to observe all of the coverings; 'Rec'd' = number of individual sub-pointings received to date; 'CompFlds' = number of individual coverings for which the requested complete exposure has been received. Processing of the data cannot occur until a complete exposure has been accumulated for each covering.
NASA Astrophysics Data System (ADS)
Liu, Yang; Pu, Huangsheng; Zhang, Xi; Li, Baojuan; Liang, Zhengrong; Lu, Hongbing
2017-03-01
Arterial spin labeling (ASL) provides a noninvasive measurement of cerebral blood flow (CBF). Due to its relatively low spatial resolution, the accuracy of CBF measurement is affected by the partial volume (PV) effect. To obtain accurate CBF estimation, the contribution of each tissue type in the mixture is desirable. In current ASL studies, this is generally obtained by registering the ASL data to a structural image. This approach yields the probability of each tissue type inside each voxel, but it also introduces error, including error from the registration algorithm and error from the imaging itself in the separate ASL and structural acquisitions. Therefore, estimation of the mixture percentage directly from the ASL data is greatly needed. Under the assumption that the ASL signal follows a Gaussian distribution and that the tissue types are independent, a maximum a posteriori expectation-maximization (MAP-EM) approach was formulated to estimate the contribution of each tissue type to the observed perfusion signal at each voxel. Considering the sensitivity of MAP-EM to initialization, an approximately accurate initialization was obtained using a 3D fuzzy c-means method. Our preliminary results demonstrated that the GM and WM pattern across the perfusion image can be sufficiently visualized by the voxel-wise tissue mixtures, which may be promising for the diagnosis of various brain diseases.
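A simplified sketch in the spirit of the estimation problem above (not the authors' MAP-EM): perfusion values are modeled as a two-component Gaussian mixture, and plain EM alternates posterior membership estimates with parameter updates; a crude percentile initialization stands in for the 3D fuzzy c-means step, and the data are synthetic.

```python
# Simplified EM sketch (not the authors' MAP-EM): perfusion values are modelled as a
# two-component Gaussian mixture (e.g. GM-like and WM-like signal), and EM alternates
# posterior membership estimates with parameter updates.
import numpy as np

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(60, 8, 400), rng.normal(25, 6, 300)])  # synthetic CBF values

# crude initialization (stands in for the fuzzy c-means step)
mu = np.percentile(x, [25, 75]).astype(float)
sigma = np.array([x.std(), x.std()])
mix = np.array([0.5, 0.5])

for _ in range(50):                               # EM iterations
    # E-step: posterior probability that each voxel belongs to each component
    dens = mix * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    gamma = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update component means, standard deviations and mixing fractions
    Nk = gamma.sum(axis=0)
    mu = (gamma * x[:, None]).sum(axis=0) / Nk
    sigma = np.sqrt((gamma * (x[:, None] - mu) ** 2).sum(axis=0) / Nk)
    mix = Nk / len(x)

print("component means:", np.round(mu, 1), "mixing fractions:", np.round(mix, 2))
```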
Prabhu, David; Mehanna, Emile; Gargesha, Madhusudhana; Brandt, Eric; Wen, Di; van Ditzhuijzen, Nienke S; Chamie, Daniel; Yamamoto, Hirosada; Fujino, Yusuke; Alian, Ali; Patel, Jaymin; Costa, Marco; Bezerra, Hiram G; Wilson, David L
2016-04-01
Evidence suggests high-resolution, high-contrast, [Formula: see text] intravascular optical coherence tomography (IVOCT) can distinguish plaque types, but further validation is needed, especially for automated plaque characterization. We developed experimental and three-dimensional (3-D) registration methods to provide validation of IVOCT pullback volumes using microscopic, color, and fluorescent cryo-image volumes with optional registered cryo-histology. A specialized registration method matched IVOCT pullback images acquired in the catheter reference frame to a true 3-D cryo-image volume. Briefly, an 11-parameter registration model including a polynomial virtual catheter was initialized within the cryo-image volume, and perpendicular images were extracted, mimicking IVOCT image acquisition. Virtual catheter parameters were optimized to maximize cryo and IVOCT lumen overlap. Multiple assessments suggested that the registration error was better than the [Formula: see text] spacing between IVOCT image frames. Tests on a digital synthetic phantom gave a registration error of only [Formula: see text] (signed distance). Visual assessment of randomly presented nearby frames suggested registration accuracy within 1 IVOCT frame interval ([Formula: see text]). This would eliminate potential misinterpretations confronted by the typical histological approaches to validation, with estimated 1-mm errors. The method can be used to create annotated datasets and automated plaque classification methods and can be extended to other intravascular imaging modalities.
Biallelic mutations in IRF8 impair human NK cell maturation and function
Mace, Emily M.; Gunesch, Justin T.; Chinn, Ivan K.; Angelo, Laura S.; Maisuria, Sheetal; Keller, Michael D.; Togi, Sumihito; Watkin, Levi B.; LaRosa, David F.; Jhangiani, Shalini N.; Muzny, Donna M.; Stray-Pedersen, Asbjørg; Coban Akdemir, Zeynep; Smith, Jansen B.; Hernández-Sanabria, Mayra; Le, Duy T.; Hogg, Graham D.; Cao, Tram N.; Freud, Aharon G.; Szymanski, Eva P.; Collin, Matthew; Cant, Andrew J.; Gibbs, Richard A.; Holland, Steven M.; Caligiuri, Michael A.; Ozato, Keiko; Paust, Silke; Doody, Gina M.; Lupski, James R.; Orange, Jordan S.
2016-01-01
Human NK cell deficiencies are rare yet result in severe and often fatal disease, particularly as a result of viral susceptibility. NK cells develop from hematopoietic stem cells, and few monogenic errors that specifically interrupt NK cell development have been reported. Here we have described biallelic mutations in IRF8, which encodes an interferon regulatory factor, as a cause of familial NK cell deficiency that results in fatal and severe viral disease. Compound heterozygous or homozygous mutations in IRF8 in 3 unrelated families resulted in a paucity of mature CD56dim NK cells and an increase in the frequency of the immature CD56bright NK cells, and this impairment in terminal maturation was also observed in Irf8–/–, but not Irf8+/–, mice. We then determined that impaired maturation was NK cell intrinsic, and gene expression analysis of human NK cell developmental subsets showed that multiple genes were dysregulated by IRF8 mutation. The phenotype was accompanied by deficient NK cell function and was stable over time. Together, these data indicate that human NK cells require IRF8 for development and functional maturation and that dysregulation of this function results in severe human disease, thereby emphasizing a critical role for NK cells in human antiviral defense. PMID:27893462
Biallelic mutations in IRF8 impair human NK cell maturation and function.
Mace, Emily M; Bigley, Venetia; Gunesch, Justin T; Chinn, Ivan K; Angelo, Laura S; Care, Matthew A; Maisuria, Sheetal; Keller, Michael D; Togi, Sumihito; Watkin, Levi B; LaRosa, David F; Jhangiani, Shalini N; Muzny, Donna M; Stray-Pedersen, Asbjørg; Coban Akdemir, Zeynep; Smith, Jansen B; Hernández-Sanabria, Mayra; Le, Duy T; Hogg, Graham D; Cao, Tram N; Freud, Aharon G; Szymanski, Eva P; Savic, Sinisa; Collin, Matthew; Cant, Andrew J; Gibbs, Richard A; Holland, Steven M; Caligiuri, Michael A; Ozato, Keiko; Paust, Silke; Doody, Gina M; Lupski, James R; Orange, Jordan S
2017-01-03
Human NK cell deficiencies are rare yet result in severe and often fatal disease, particularly as a result of viral susceptibility. NK cells develop from hematopoietic stem cells, and few monogenic errors that specifically interrupt NK cell development have been reported. Here we have described biallelic mutations in IRF8, which encodes an interferon regulatory factor, as a cause of familial NK cell deficiency that results in fatal and severe viral disease. Compound heterozygous or homozygous mutations in IRF8 in 3 unrelated families resulted in a paucity of mature CD56dim NK cells and an increase in the frequency of the immature CD56bright NK cells, and this impairment in terminal maturation was also observed in Irf8-/-, but not Irf8+/-, mice. We then determined that impaired maturation was NK cell intrinsic, and gene expression analysis of human NK cell developmental subsets showed that multiple genes were dysregulated by IRF8 mutation. The phenotype was accompanied by deficient NK cell function and was stable over time. Together, these data indicate that human NK cells require IRF8 for development and functional maturation and that dysregulation of this function results in severe human disease, thereby emphasizing a critical role for NK cells in human antiviral defense.
Correlational Neural Networks.
Chandar, Sarath; Khapra, Mitesh M; Larochelle, Hugo; Ravindran, Balaraman
2016-02-01
Common representation learning (CRL), wherein different descriptions (or views) of the data are embedded in a common subspace, has been receiving a lot of attention recently. Two popular paradigms here are canonical correlation analysis (CCA)-based approaches and autoencoder (AE)-based approaches. CCA-based approaches learn a joint representation by maximizing correlation of the views when projected to the common subspace. AE-based methods learn a common representation by minimizing the error of reconstructing the two views. Each of these approaches has its own advantages and disadvantages. For example, while CCA-based approaches outperform AE-based approaches for the task of transfer learning, they are not as scalable as the latter. In this work, we propose an AE-based approach, correlational neural network (CorrNet), that explicitly maximizes correlation among the views when projected to the common subspace. Through a series of experiments, we demonstrate that the proposed CorrNet is better than AE and CCA with respect to its ability to learn correlated common representations. We employ CorrNet for several cross-language tasks and show that the representations learned using it perform better than the ones learned using other state-of-the-art approaches.
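A minimal sketch of a CorrNet-style objective with linear encoders and decoders on synthetic two-view data (the training loop is omitted): the loss sums self- and cross-reconstruction errors and subtracts a simple overall correlation of the centered projections, so minimizing it both reconstructs the views and maximizes their correlation in the common subspace. The dimensions, the correlation weight and the linear parameterization are assumptions, not the published architecture.

```python
# Sketch of a CorrNet-style objective on synthetic two-view data with linear
# encoders/decoders (untrained): the loss sums self- and cross-reconstruction errors
# and subtracts a simple correlation measure of the two projections.
import numpy as np

rng = np.random.default_rng(6)
n, d, k = 200, 10, 3                       # samples, per-view dimensionality, common dim
Z = rng.normal(size=(n, k))
X = Z @ rng.normal(size=(k, d)) + 0.1 * rng.normal(size=(n, d))   # view 1
Y = Z @ rng.normal(size=(k, d)) + 0.1 * rng.normal(size=(n, d))   # view 2

Wx, Wy = rng.normal(size=(d, k)), rng.normal(size=(d, k))         # encoders (untrained)
Vx, Vy = rng.normal(size=(k, d)), rng.normal(size=(k, d))         # decoders (untrained)
lam = 1.0                                                         # correlation weight (assumed)

def corrnet_loss(X, Y):
    Hx, Hy = X @ Wx, Y @ Wy                                       # common-space projections
    recon = (np.mean((Hx @ Vx - X) ** 2) + np.mean((Hy @ Vy - Y) ** 2)     # self-reconstruction
             + np.mean((Hx @ Vy - Y) ** 2) + np.mean((Hy @ Vx - X) ** 2))  # cross-reconstruction
    Hxc, Hyc = Hx - Hx.mean(0), Hy - Hy.mean(0)
    corr = np.sum(Hxc * Hyc) / (np.linalg.norm(Hxc) * np.linalg.norm(Hyc))
    return recon - lam * corr

print("CorrNet-style loss for random parameters:", corrnet_loss(X, Y))
```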
Letting the daylight in: Reviewing the reviewers and other ways to maximize transparency in science
Wicherts, Jelte M.; Kievit, Rogier A.; Bakker, Marjan; Borsboom, Denny
2012-01-01
With the emergence of online publishing, opportunities to maximize transparency of scientific research have grown considerably. However, these possibilities are still only marginally used. We argue for the implementation of (1) peer-reviewed peer review, (2) transparent editorial hierarchies, and (3) online data publication. First, peer-reviewed peer review entails a community-wide review system in which reviews are published online and rated by peers. This ensures accountability of reviewers, thereby increasing academic quality of reviews. Second, reviewers who write many highly regarded reviews may move to higher editorial positions. Third, online publication of data ensures the possibility of independent verification of inferential claims in published papers. This counters statistical errors and overly positive reporting of statistical results. We illustrate the benefits of these strategies by discussing an example in which the classical publication system has gone awry, namely controversial IQ research. We argue that this case would have likely been avoided using more transparent publication practices. We argue that the proposed system leads to better reviews, meritocratic editorial hierarchies, and a higher degree of replicability of statistical analyses. PMID:22536180
Slope Estimation in Noisy Piecewise Linear Functions
Ingle, Atul; Bucklew, James; Sethares, William; Varghese, Tomy
2014-01-01
This paper discusses the development of a slope estimation algorithm called MAPSlope for piecewise linear data that is corrupted by Gaussian noise. The number and locations of slope change points (also known as breakpoints) are assumed to be unknown a priori though it is assumed that the possible range of slope values lies within known bounds. A stochastic hidden Markov model that is general enough to encompass real world sources of piecewise linear data is used to model the transitions between slope values and the problem of slope estimation is addressed using a Bayesian maximum a posteriori approach. The set of possible slope values is discretized, enabling the design of a dynamic programming algorithm for posterior density maximization. Numerical simulations are used to justify choice of a reasonable number of quantization levels and also to analyze mean squared error performance of the proposed algorithm. An alternating maximization algorithm is proposed for estimation of unknown model parameters and a convergence result for the method is provided. Finally, results using data from political science, finance and medical imaging applications are presented to demonstrate the practical utility of this procedure. PMID:25419020
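The sketch below captures the dynamic-programming idea in a simplified form (it is not the published MAPSlope algorithm): slopes are discretized, modeled as a sticky Markov chain, first differences of the noisy data serve as Gaussian emissions around slope*dt, and a Viterbi recursion returns the MAP slope sequence. The slope grid, stay probability and simulated signal are assumptions.

```python
# Simplified MAP slope decoder in the spirit of MAPSlope (not the published algorithm):
# slopes are discretized into a few levels, modelled as a "sticky" Markov chain, and
# first differences of the noisy data are treated as Gaussian emissions around
# slope*dt.  Viterbi dynamic programming then returns the MAP slope sequence.
import numpy as np

rng = np.random.default_rng(7)
dt, sigma = 1.0, 0.5
true_slopes = np.concatenate([np.full(40, 0.0), np.full(40, 1.0), np.full(40, -0.5)])
y = np.cumsum(true_slopes * dt) + sigma * rng.normal(size=len(true_slopes))

slope_grid = np.linspace(-1.0, 1.5, 11)    # discretized slope levels (assumed range)
p_stay = 0.95                              # prior probability of keeping the same slope
logA = np.log(np.where(np.eye(len(slope_grid)) == 1, p_stay,
                       (1 - p_stay) / (len(slope_grid) - 1)))

d = np.diff(y)                             # differenced observations, approx N(slope*dt, 2*sigma^2)
loglik = -0.5 * (d[:, None] - slope_grid * dt) ** 2 / (2 * sigma ** 2)

# Viterbi recursion for the MAP sequence of slope states
delta = loglik[0].copy()
back = np.zeros((len(d), len(slope_grid)), dtype=int)
for t in range(1, len(d)):
    scores = delta[:, None] + logA
    back[t] = np.argmax(scores, axis=0)
    delta = scores[back[t], np.arange(len(slope_grid))] + loglik[t]

path = [int(np.argmax(delta))]
for t in range(len(d) - 1, 0, -1):
    path.append(back[t, path[-1]])
map_slopes = slope_grid[np.array(path[::-1])]
print("estimated change points near indices:", np.nonzero(np.diff(map_slopes))[0])
```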
Optimizing cosmological surveys in a crowded market
NASA Astrophysics Data System (ADS)
Bassett, Bruce A.
2005-04-01
Optimizing the major next-generation cosmological surveys (such as SNAP, KAOS, etc.) is a key problem given our ignorance of the physics underlying cosmic acceleration and the plethora of surveys planned. We propose a Bayesian design framework which (1) maximizes the discrimination power of a survey without assuming any underlying dark-energy model, (2) finds the best niche survey geometry given current data and future competing experiments, (3) maximizes the cross section for serendipitous discoveries and (4) can be adapted to answer specific questions (such as “is dark energy dynamical?”). Integrated parameter-space optimization (IPSO) is a design framework that integrates projected parameter errors over an entire dark energy parameter space and then extremizes a figure of merit (such as Shannon entropy gain which we show is stable to off-diagonal covariance matrix perturbations) as a function of survey parameters using analytical, grid or MCMC techniques. We discuss examples where the optimization can be performed analytically. IPSO is thus a general, model-independent and scalable framework that allows us to appropriately use prior information to design the best possible surveys.
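A toy sketch of the IPSO idea with invented numbers: a single design parameter splits observing time between two hypothetical probes, the figure of merit is the Shannon entropy gain of the posterior over the prior Fisher information, averaged over a grid of fiducial dark-energy models, and the design maximizing the average gain is selected. The Fisher matrices and their model dependence are assumptions.

```python
# Toy IPSO-style sketch (all numbers invented): a single survey design parameter
# controls the split of observing time between two probes with different parameter
# sensitivities.  The figure of merit is the Shannon entropy gain of the posterior
# over the prior, averaged over fiducial models, maximized over the design parameter.
import numpy as np

prior_F = np.diag([1.0, 1.0])                       # assumed prior Fisher matrix
fiducials = np.linspace(-1.2, -0.8, 9)              # grid of fiducial w0 values

def fisher(split, w0):
    # hypothetical per-probe Fisher matrices whose strength depends weakly on the model
    Fa = (1.0 + 0.2 * (w0 + 1)) * np.array([[40.0, 5.0], [5.0, 2.0]])
    Fb = (1.0 - 0.2 * (w0 + 1)) * np.array([[5.0, 1.0], [1.0, 20.0]])
    return split * Fa + (1 - split) * Fb

def entropy_gain(split):
    gains = [0.5 * np.log(np.linalg.det(prior_F + fisher(split, w0)) / np.linalg.det(prior_F))
             for w0 in fiducials]
    return np.mean(gains)                            # integrate over the parameter space

splits = np.linspace(0, 1, 101)
best = splits[np.argmax([entropy_gain(s) for s in splits])]
print(f"time fraction on probe A that maximizes average entropy gain: {best:.2f}")
```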
Xing, Chao; Elston, Robert C
2006-07-01
The multipoint lod score and mod score methods have been advocated for their superior power in detecting linkage. However, little has been done to determine the distribution of multipoint lod scores or to examine the properties of mod scores. In this paper we study the distribution of multipoint lod scores both analytically and by simulation. We also study by simulation the distribution of maximum multipoint lod scores when maximized over different penetrance models. The multipoint lod score is approximately normally distributed with mean and variance that depend on marker informativity, marker density, specified genetic model, number of pedigrees, pedigree structure, and pattern of affection status. When the multipoint lod scores are maximized over a set of assumed penetrance models, an excess of false positive indications of linkage appears under dominant analysis models with low penetrances and under recessive analysis models with high penetrances. Therefore, caution should be taken in interpreting results when employing multipoint lod score and mod score approaches, in particular when inferring the level of linkage significance and the mode of inheritance of a trait.
Slope Estimation in Noisy Piecewise Linear Functions.
Ingle, Atul; Bucklew, James; Sethares, William; Varghese, Tomy
2015-03-01
This paper discusses the development of a slope estimation algorithm called MAPSlope for piecewise linear data that is corrupted by Gaussian noise. The number and locations of slope change points (also known as breakpoints) are assumed to be unknown a priori though it is assumed that the possible range of slope values lies within known bounds. A stochastic hidden Markov model that is general enough to encompass real world sources of piecewise linear data is used to model the transitions between slope values and the problem of slope estimation is addressed using a Bayesian maximum a posteriori approach. The set of possible slope values is discretized, enabling the design of a dynamic programming algorithm for posterior density maximization. Numerical simulations are used to justify choice of a reasonable number of quantization levels and also to analyze mean squared error performance of the proposed algorithm. An alternating maximization algorithm is proposed for estimation of unknown model parameters and a convergence result for the method is provided. Finally, results using data from political science, finance and medical imaging applications are presented to demonstrate the practical utility of this procedure.
Head repositioning accuracy in patients with whiplash-associated disorders.
Feipel, Veronique; Salvia, Patrick; Klein, Helene; Rooze, Marcel
2006-01-15
Controlled study, measuring head repositioning error (HRE) using an electrogoniometric device. To compare HRE in neutral position, axial rotation and complex postures of patients with whiplash-associated disorders (WAD) with that of control subjects. The presence of kinesthetic alterations in patients with WAD is controversial. In 26 control subjects and 29 patients with WAD (aged 22-74 years), head kinematics was sampled using a 3-dimensional electrogoniometer mounted using a harness and a helmet. All tasks were performed in a seated position. The repositioning tasks included neutral repositioning after maximal flexion-extension, eyes open and blindfolded, repositioning at 50 degrees of axial rotation, and repositioning at 50 degrees of axial rotation combined with 20 degrees of ipsilateral bending. The flexion-extension, ipsilateral bending, and axial rotation components of HRE were considered. A multiple-way repeated-measures analysis of variance was used to compare tasks and groups. The WAD group displayed a reduced flexion-extension range (P = 1.9 x 10(-4)), and larger HRE during flexion-extension and repositioning tasks (P = 0.009) than controls. Neither group nor task affected maximal motion velocity. Neutral HRE of the flexion-extension component was larger in the blindfolded condition (P = 0.03). Ipsilateral bending and axial rotation HRE components were smaller than the flexion-extension component (P = 7.1 x 10(-23)). For pure rotation repositioning, axial rotation HRE was significantly larger than flexion-extension and ipsilateral bending repositioning error (P = 3.0 x 10(-23)). The ipsilateral bending component of HRE was significantly larger for combined tasks than for pure rotation tasks (P = 0.004). In patients with WAD, range of motion and head repositioning accuracy were reduced. However, the differences were small. Vision suppression and task type influenced HRE.
Multi-frequency EIT system with radially symmetric architecture: KHU Mark1.
Oh, Tong In; Woo, Eung Je; Holder, David
2007-07-01
We describe the development of a multi-frequency electrical impedance tomography (EIT) system (KHU Mark1) with a single balanced current source and multiple voltmeters. It was primarily designed for imaging brain function with a flexible strategy for addressing electrodes and a frequency range from 10 Hz-500 kHz. The maximal number of voltmeters is 64, and all of them can simultaneously acquire and demodulate voltage signals. Each voltmeter measures a differential voltage between a pair of electrodes. All voltmeters are configured in a radially symmetric architecture in order to optimize the routing of wires and minimize cross-talk. We adopted several techniques from existing EIT systems including digital waveform generation, a Howland current generator with a generalized impedance converter (GIC), digital phase-sensitive demodulation and tri-axial cables. New features of the KHU Mark1 system include multiple GIC circuits to maximize the output impedance of the current source at multiple frequencies. The voltmeter employs contact impedance measurements, data overflow detection, spike noise rejection, automatic gain control and programmable data averaging. The KHU Mark1 system measures both in-phase and quadrature components of trans-impedances. By using a script file describing an operating mode, the system setup can be easily changed. The performance of the developed multi-frequency EIT system was evaluated in terms of a common-mode rejection ratio, signal-to-noise ratio, linearity error and reciprocity error. Time-difference and frequency-difference images of a saline phantom with a banana object are presented showing a frequency-dependent complex conductivity of the banana. Future design of a more innovative system is suggested including miniaturization and wireless techniques.
The acceleration dependent validity and reliability of 10 Hz GPS.
Akenhead, Richard; French, Duncan; Thompson, Kevin G; Hayes, Philip R
2014-09-01
To examine the validity and inter-unit reliability of 10 Hz GPS for measuring instantaneous velocity during maximal accelerations. Experimental. Two 10 Hz GPS devices secured to a sliding platform mounted on a custom-built monorail were towed whilst sprinting maximally over 10 m. Displacement of the GPS devices was measured using a laser sampling at 2000 Hz, from which velocity and mean acceleration were derived. Velocity data were pooled into acceleration thresholds according to mean acceleration. Agreement between laser and GPS measures of instantaneous velocity within each acceleration threshold was examined using least squares linear regression and Bland-Altman limits of agreement (LOA). Inter-unit reliability was expressed as typical error (TE) and a Pearson correlation coefficient. Mean bias ± 95% LOA during accelerations of 0-0.99 m·s^-2 was 0.12 ± 0.27 m·s^-1, decreasing to -0.40 ± 0.67 m·s^-1 during accelerations >4 m·s^-2. The standard error of the estimate ± 95% CI (SEE) increased from 0.12 ± 0.02 m·s^-1 during accelerations of 0-0.99 m·s^-2 to 0.32 ± 0.06 m·s^-1 during accelerations >4 m·s^-2. TE increased from 0.05 ± 0.01 to 0.12 ± 0.01 m·s^-1 for accelerations of 0-0.99 m·s^-2 and >4 m·s^-2, respectively. The validity and reliability of 10 Hz GPS for the measurement of instantaneous velocity were shown to be inversely related to acceleration. Those using 10 Hz GPS should be aware that during accelerations of over 4 m·s^-2, accuracy is compromised. Copyright © 2013 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
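The following sketch shows how agreement statistics of the kind reported above can be computed: Bland-Altman mean bias with 95% limits of agreement against a criterion measure, the inter-unit typical error, and the inter-unit Pearson correlation. The velocity arrays are placeholders, and the binning by acceleration threshold is omitted.

```python
# Sketch: Bland-Altman bias and 95% limits of agreement (criterion vs GPS),
# inter-unit typical error, and inter-unit correlation. Placeholder data.
import numpy as np

laser_v = np.array([1.2, 2.5, 3.8, 5.1, 6.0, 6.8])   # criterion velocity (m/s)
gps1_v  = np.array([1.3, 2.4, 3.9, 5.3, 6.2, 7.1])   # GPS unit 1 (m/s)
gps2_v  = np.array([1.1, 2.6, 3.7, 5.2, 6.1, 7.0])   # GPS unit 2 (m/s)

# Bland-Altman bias and 95% limits of agreement (GPS unit 1 vs laser)
diff = gps1_v - laser_v
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"bias {bias:+.2f} m/s, 95% LOA +/-{loa:.2f} m/s")

# Inter-unit typical error: SD of the between-unit differences divided by sqrt(2)
te = (gps1_v - gps2_v).std(ddof=1) / np.sqrt(2)
print(f"typical error {te:.2f} m/s")

# Pearson correlation between the two units
r = np.corrcoef(gps1_v, gps2_v)[0, 1]
print(f"inter-unit r = {r:.3f}")
```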
Morin, Mélanie; Gravel, Denis; Bourbonnais, Daniel; Dumoulin, Chantale; Ouellet, Stéphane
2008-01-01
The passive properties of the pelvic floor muscles (PFM) might play a role in the pathophysiology of stress urinary incontinence (SUI). To investigate the test-retest reliability of dynamometric measurements of the passive properties of the PFM in postmenopausal women with SUI. Thirty-two postmenopausal women with SUI attended two sessions 2 weeks apart. In each session, the measurements were repeated twice. The pelvic floor musculature was evaluated in four different conditions: (1) forces recorded at minimal aperture (initial passive resistance); (2) passive resistance at maximal aperture; (3) five lengthening and shortening cycles, during which forces and passive elastic stiffness (PES) were evaluated at different vaginal apertures and hysteresis was calculated; and (4) the percentage of passive resistance lost after 1 min of sustained stretching. Generalizability theory was used to calculate two reliability estimates, the dependability index (Phi) and the standard error of measurement (SEM), for one session involving one measurement or the mean of two measurements. Overall, the reliability of the passive properties was good, with dependability indices of 0.75-0.93. The SEMs for forces and PES were 0.24-0.67 N and 0.03-0.10 N/mm, respectively, for mean, maximal, and 20-mm apertures, representing an error of between 13% and 23%. Passive forces at minimal aperture showed lower reliability (Phi = 0.51-0.57) than at other vaginal openings. The aperture at a common force of 0.5 N was the only parameter demonstrating poor reliability (Phi = 0.35). This new approach for assessing PFM passive properties showed sufficient reliability to strongly recommend its inclusion in the PFM assessment of postmenopausal women with SUI. (c) 2008 Wiley-Liss, Inc.
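As a rough illustration of how a generalizability-theory style dependability index and SEM can be obtained from a test-retest design, the sketch below estimates variance components from a two-way (subjects × sessions) decomposition and derives Phi and SEM for a single measurement. The data matrix and the simple one-facet design are assumptions; the study's actual G-theory model may differ.

```python
# Sketch: variance components from a subjects x sessions table, then the
# dependability index (Phi) and standard error of measurement (SEM) for a
# single measurement. Hypothetical data.
import numpy as np

# rows = subjects, columns = sessions (e.g. passive force in N)
x = np.array([
    [2.1, 2.3],
    [3.4, 3.1],
    [1.8, 2.0],
    [2.9, 3.2],
    [2.5, 2.4],
])
n_subj, n_sess = x.shape
grand = x.mean()

# Mean squares of the two-way (subjects x sessions) decomposition
ms_subj = n_sess * ((x.mean(axis=1) - grand) ** 2).sum() / (n_subj - 1)
ms_sess = n_subj * ((x.mean(axis=0) - grand) ** 2).sum() / (n_sess - 1)
resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + grand
ms_res = (resid ** 2).sum() / ((n_subj - 1) * (n_sess - 1))

# Variance components (negative estimates truncated at zero)
var_res  = ms_res
var_sess = max((ms_sess - ms_res) / n_subj, 0.0)
var_subj = max((ms_subj - ms_res) / n_sess, 0.0)

# Absolute error variance for a single measurement in one session
var_abs = var_sess + var_res
phi = var_subj / (var_subj + var_abs)
sem = np.sqrt(var_abs)
print(f"Phi = {phi:.2f}, SEM = {sem:.2f} N")
```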
MIMO radar waveform design with peak and sum power constraints
NASA Astrophysics Data System (ADS)
Arulraj, Merline; Jeyaraman, Thiruvengadam S.
2013-12-01
Optimal power allocation for multiple-input multiple-output radar waveform design subject to combined peak and sum power constraints is addressed in this paper using two different criteria: maximizing the mutual information between the random target impulse response and the reflected waveforms, and minimizing the mean square error in estimating the target impulse response. It is assumed that the radar transmitter has knowledge of the target's second-order statistics. Conventionally, power is allocated to the transmit antennas based on a sum power constraint at the transmitter. However, the wide power variations across the transmit antennas pose a severe constraint on the dynamic range and peak power of the power amplifier at each antenna. In practice, each antenna has the same absolute peak power limitation, so it is desirable to impose a peak power constraint on the transmit antennas. A generalized constraint that jointly meets the peak power constraint and the average sum power constraint, bounding the dynamic range of the power amplifier at each transmit antenna, has recently been proposed. The conventional optimal power allocation by water-filling under the sum power constraint alone is recovered as the special case p = 1 of this generalized constraint. The optimal solutions for maximizing the mutual information and minimizing the mean square error are obtained through the Karush-Kuhn-Tucker (KKT) approach, and numerical solutions are found through a nested Newton-type algorithm. The simulation results show that the system with both sum and peak power constraints gives better detection performance than one with only the sum power constraint at low signal-to-noise ratio.
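To make the water-filling special case concrete, the sketch below allocates a total power budget across transmit "modes" by bisecting on the water level, with an optional per-mode peak cap applied by clipping. The gains, budget, and cap are assumptions; this is a generic water-filling illustration, not the authors' KKT/nested-Newton solver for the generalized constraint.

```python
# Sketch: water-filling power allocation under a sum power budget with an
# optional per-mode peak cap. The bisection assumes the cap leaves enough
# headroom for the full budget to be spent.
import numpy as np

def waterfill(gains, p_total, p_peak=np.inf, tol=1e-9):
    """Return powers p_k = clip(mu - 1/gains_k, 0, p_peak) with sum(p) = p_total."""
    lo, hi = 0.0, p_total + 1.0 / gains.min()
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)                       # candidate water level
        p = np.clip(mu - 1.0 / gains, 0.0, p_peak)
        if p.sum() > p_total:
            hi = mu
        else:
            lo = mu
    return np.clip(0.5 * (lo + hi) - 1.0 / gains, 0.0, p_peak)

gains = np.array([2.0, 1.0, 0.5, 0.1])   # effective channel/target gains (assumed)
p = waterfill(gains, p_total=4.0, p_peak=1.5)
print("allocated powers:", np.round(p, 3), " sum =", round(p.sum(), 3))
```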
Bounding Averages Rigorously Using Semidefinite Programming: Mean Moments of the Lorenz System
NASA Astrophysics Data System (ADS)
Goluskin, David
2018-04-01
We describe methods for proving bounds on infinite-time averages in differential dynamical systems. The methods rely on the construction of nonnegative polynomials with certain properties, similarly to the way nonlinear stability can be proved using Lyapunov functions. Nonnegativity is enforced by requiring the polynomials to be sums of squares, a condition which is then formulated as a semidefinite program (SDP) that can be solved computationally. Although such computations are subject to numerical error, we demonstrate two ways to obtain rigorous results: using interval arithmetic to control the error of an approximate SDP solution, and finding exact analytical solutions to relatively small SDPs. Previous formulations are extended to allow for bounds depending analytically on parametric variables. These methods are illustrated using the Lorenz equations, a system with three state variables (x, y, z) and three parameters (β, σ, r). Bounds are reported for infinite-time averages of all eighteen moments x^l y^m z^n up to quartic degree that are symmetric under (x, y) → (-x, -y). These bounds apply to all solutions regardless of stability, including chaotic trajectories, periodic orbits, and equilibrium points. The analytical approach yields two novel bounds that are sharp: the mean of z^3 can be no larger than its value of (r-1)^3 at the nonzero equilibria, and the mean of xy^3 must be nonnegative. The interval arithmetic approach is applied at the standard chaotic parameters to bound eleven average moments that all appear to be maximized on the shortest periodic orbit. Our best upper bound on each such average exceeds its value on the maximizing orbit by less than 1%. Many bounds reported here are much tighter than would be possible without computer assistance.
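For context, the sketch below integrates the Lorenz system at the standard chaotic parameters and estimates the long-time average of z^3 along one trajectory, which can be compared with the sharp bound (r-1)^3 stated above. The integration settings and transient cutoff are assumptions; the paper's bounds themselves are proved with sum-of-squares/SDP methods, not by simulation.

```python
# Sketch: numerically estimate the time average of z^3 on a Lorenz trajectory
# at the standard chaotic parameters, for comparison with the rigorous
# upper bound (r-1)^3 attained at the nonzero equilibria.
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0

def lorenz(t, s):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_transient, t_final = 50.0, 550.0
sol = solve_ivp(lorenz, (0.0, t_final), [1.0, 1.0, 1.0],
                max_step=0.01, rtol=1e-9, atol=1e-9)

mask = sol.t > t_transient                 # discard the initial transient
z, t = sol.y[2, mask], sol.t[mask]
mean_z3 = trapezoid(z ** 3, t) / (t[-1] - t[0])   # time average of z^3
print(f"trajectory average of z^3 ~ {mean_z3:.0f}  (bound (r-1)^3 = {(rho - 1) ** 3:.0f})")
```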
Higher criticism thresholding: Optimal feature selection when useful features are rare and weak.
Donoho, David; Jin, Jiashun
2008-09-30
In important application fields today - genomics and proteomics are examples - selecting a small subset of useful features is crucial for success of Linear Classification Analysis. We study feature selection by thresholding of feature Z-scores and introduce a principle of threshold selection, based on the notion of higher criticism (HC). For i = 1, 2, ..., p, let π_i denote the two-sided P-value associated with the ith feature Z-score and π_(i) denote the ith order statistic of the collection of P-values. The HC threshold is the absolute Z-score corresponding to the P-value maximizing the HC objective (i/p − π_(i)) / sqrt((i/p)(1 − i/p)). We consider a rare/weak (RW) feature model, where the fraction of useful features is small and the useful features are each too weak to be of much use on their own. HC thresholding (HCT) has interesting behavior in this setting, with an intimate link between maximizing the HC objective and minimizing the error rate of the designed classifier, and very different behavior from popular threshold selection procedures such as false discovery rate thresholding (FDRT). In the most challenging RW settings, HCT uses an unconventionally low threshold; this keeps the missed-feature detection rate under better control than FDRT and yields a classifier with improved misclassification performance. Replacing cross-validated threshold selection in the popular Shrunken Centroid classifier with the computationally less expensive and simpler HCT reduces the variance of the selected threshold and the error rate of the constructed classifier. Results on standard real datasets and in asymptotic theory confirm the advantages of HCT.
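The sketch below implements the thresholding rule described above on synthetic Z-scores: compute two-sided P-values, sort them, evaluate the HC objective at each order statistic, and take the absolute Z-score of the maximizing P-value as the selection threshold. The simulated data, the restriction of the search to the smallest half of the P-values (a common convention), and the omission of the classifier step are assumptions for illustration.

```python
# Sketch of higher criticism thresholding on feature Z-scores.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
p_dim = 1000
z = rng.standard_normal(p_dim)
z[:20] += 3.0                          # a few rare, moderately weak useful features

pvals = 2.0 * norm.sf(np.abs(z))       # two-sided P-values
order = np.argsort(pvals)
p_sorted = pvals[order]

# HC objective evaluated over the smallest half of the P-values
i = np.arange(1, p_dim // 2 + 1)
hc = (i / p_dim - p_sorted[: p_dim // 2]) / np.sqrt((i / p_dim) * (1.0 - i / p_dim))

k = int(np.argmax(hc))                 # index maximizing the HC objective
threshold = np.abs(z[order[k]])        # HC threshold on |Z|

selected = np.abs(z) >= threshold
print(f"HC threshold |Z| = {threshold:.2f}, features selected: {selected.sum()}")
```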
3-O-galloylated procyanidins from Rumex acetosa L. inhibit the attachment of influenza A virus.
Derksen, Andrea; Hensel, Andreas; Hafezi, Wali; Herrmann, Fabian; Schmidt, Thomas J; Ehrhardt, Christina; Ludwig, Stephan; Kühn, Joachim
2014-01-01
Infections with influenza A viruses (IAV) are a major health burden to mankind. The current antiviral arsenal against IAV is limited, and novel drugs are urgently required. Medicinal plants are known as an abundant source of bioactive compounds, including antiviral agents. The aim of the present study was to characterize the anti-IAV potential of a proanthocyanidin-enriched extract derived from the aerial parts of Rumex acetosa (RA), and to identify active compounds of RA, their mode of action, and the structural features conferring anti-IAV activity. In a modified MTT (MTT(IAV)) assay, RA was shown to inhibit growth of the IAV strain PR8 (H1N1) and a clinical isolate of IAV(H1N1)pdm09 with half-maximal inhibitory concentrations (IC50) of 2.5 µg/mL and 2.2 µg/mL, and selectivity indices (SI; half-maximal cytotoxic concentration (CC50)/IC50) of 32 and 36, respectively. At RA concentrations >1 µg/mL, plaque formation of IAV(H1N1)pdm09 was abrogated. RA was also active against an oseltamivir-resistant isolate of IAV(H1N1)pdm09. TNF-α- and EGF-induced signal transduction in A549 cells was not affected by RA. The dimeric proanthocyanidin epicatechin-3-O-gallate-(4β→8)-epicatechin-3'-O-gallate (procyanidin B2-di-gallate) was identified as the main active principle of RA (IC50 approx. 15 µM, SI ≥ 13). RA and procyanidin B2-di-gallate blocked attachment of IAV and interfered with viral penetration at higher concentrations. Galloylation of the procyanidin core structure was shown to be a prerequisite for anti-IAV activity; o-trihydroxylation in the B-ring increased the anti-IAV activity. In silico docking studies indicated that procyanidin B2-di-gallate is able to interact with the receptor binding site of IAV(H1N1)pdm09 hemagglutinin (HA). In conclusion, the proanthocyanidin-enriched extract RA and its main active constituent procyanidin B2-di-gallate protect cells from IAV infection by inhibiting viral entry into the host cell. RA and procyanidin B2-di-gallate appear to be a promising expansion of the currently available anti-influenza agents.
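As a generic illustration of how quantities such as IC50 and the selectivity index are derived from dose-response data, the sketch below fits a four-parameter logistic curve and computes SI = CC50/IC50. The concentrations, responses, and the CC50 value are invented placeholders, not the study's MTT(IAV) assay data or analysis.

```python
# Sketch: estimate IC50 from dose-response data via a four-parameter logistic
# fit, then compute the selectivity index SI = CC50 / IC50. Placeholder data.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])         # µg/mL (assumed)
infection = np.array([98.0, 92.0, 75.0, 42.0, 12.0, 4.0])  # % of untreated control (assumed)

popt, _ = curve_fit(four_pl, conc, infection,
                    p0=[0.0, 100.0, 2.0, 1.0], maxfev=10000)
ic50 = popt[2]

cc50 = 80.0                      # half-maximal cytotoxic concentration (assumed)
print(f"IC50 ~ {ic50:.2f} µg/mL, SI = CC50/IC50 ~ {cc50 / ic50:.0f}")
```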
Bandeira, Vanessa S; Tomás, Hélio A; Alici, Evren; Carrondo, Manuel J T; Coroadinha, Ana S
2017-04-01
Gammaretroviruses and lentiviruses are the preferred viral vectors for genetically modifying T and natural killer cells to be used in immune cell therapies. The transduction of hematopoietic and T cells is more efficient using gibbon ape leukemia virus (GaLV) pseudotyping, and in this context stable gammaretroviral vector producer cells offer competitively higher titers than transient lentiviral vector productions. The main aim of this work was to identify the key parameters governing GaLV-pseudotyped gammaretroviral vector productivity in stable producer cells, using a retroviral vector expression cassette enabling positive (facilitating cell enrichment) and negative cell selection (allowing cell elimination). The retroviral vector contains a thymidine kinase suicide gene fused with a ouabain-resistant Na+,K+-ATPase gene, a potentially safer and faster marker. The establishment of retroviral vector producer cells is traditionally performed by randomly integrating the retroviral vector expression cassette encoding the transgene. More recently, recombinase-mediated cassette exchange methodologies have been introduced to achieve targeted integration. Herein we compared random and targeted integration of the retroviral vector transgene construct. Two retroviral producer cell lines, 293 OuaS and 293 FlexOuaS, were generated by random and targeted integration, respectively, producing high titers (on the order of 10^7 infectious particles·ml^-1). Results showed that the retroviral vector transgene cassette is the key retroviral vector component determining viral titers; notwithstanding, single-copy integration is sufficient to provide high titers. The expression levels of the three retroviral constructs (gag-pol, GaLV env, and the retroviral vector transgene) were analyzed. Although gag-pol and GaLV env expression levels should surpass a minimal threshold, we found that relatively modest expression levels of these two cassettes are required and that their expression need not be maximized. We concluded that, to establish a high-producer retroviral vector cell line, only the expression level of the genomic retroviral RNA, that is, the retroviral vector transgene cassette, should be maximized, both through (1) optimization of its design (i.e., its genetic elements) and (2) selection of a high-expressing chromosomal locus for its integration. Methodologies that identify and promote integration into high-expression loci, such as targeted integration or high-throughput screening, are in this perspective highly valuable.
The Bayesian Approach to Association
NASA Astrophysics Data System (ADS)
Arora, N. S.
2017-12-01
The Bayesian approach to association focuses mainly on quantifying the physics of the domain. In the case of seismic association, for instance, let X be the set of all significant events (above some threshold) and their attributes, such as location, time, and magnitude; Y1 be the set of detections that are caused by significant events and their attributes, such as seismic phase, arrival time, amplitude, etc.; Y2 be the set of detections that are not caused by significant events; and finally Y be the set of observed detections. We then define the joint distribution P(X, Y1, Y2, Y) = P(X) P(Y1 | X) P(Y2) I(Y = Y1 + Y2), where the last term simply states that Y1 and Y2 are a partitioning of Y. Given this joint distribution, the inference problem is simply to find the X, Y1, and Y2 that maximize the posterior probability P(X, Y1, Y2 | Y), which reduces to maximizing P(X) P(Y1 | X) P(Y2) I(Y = Y1 + Y2). In this expression P(X) captures our prior belief about event locations. P(Y1 | X) captures notions of travel time, residual error distributions, and detection and mis-detection probabilities, while P(Y2) captures the false detection rate of the seismic network. The elegance of this approach is that all of the assumptions are stated clearly in the models for P(X), P(Y1 | X), and P(Y2); the implementation of the inference is merely a by-product of this model. In contrast, some other methods, such as GA, hide a number of assumptions in the implementation details of the inference, such as the so-called "driver cells." The other important aspect of this approach is that all seismic knowledge, including knowledge from other domains such as infrasound and hydroacoustics, can be included in the same model, so we do not need to separately account for misdetections or merge seismic and infrasound events as a separate step. Finally, it should be noted that the objective of automatic association is to simplify the job of the humans who publish seismic bulletins based on this output. The error metric for association should accordingly count errors such as missed events much more heavily than spurious events, because the former require more work from humans. Furthermore, the error rate needs to be weighted more strongly during periods of high seismicity, such as an aftershock sequence, when the human effort tends to increase.
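The toy sketch below illustrates the scoring at the heart of this formulation for a single candidate event and one station: each assignment of detections to "event-caused" (Y1) versus "false" (Y2) is scored by log P(X) + log P(Y1 | X) + log P(Y2), and the maximizing assignment is kept. All distributions, rates, and numbers are invented for illustration; a real associator searches jointly over events and assignments.

```python
# Toy sketch: maximum a posteriori assignment of detections to a candidate
# event versus the false-detection process. Hypothetical parameters.
import itertools
import math

event_time = 10.0                       # candidate event origin time (s), assumed
travel_time = 5.0                       # predicted travel time to the station (s), assumed
residual_sd = 1.0                       # travel-time residual std dev (s), assumed
false_rate = 0.1                        # false detections per second (Poisson), assumed
detect_prob = 0.9                       # probability the event is detected, assumed
log_prior_event = math.log(1e-3)        # prior weight of the candidate event, assumed

detections = [14.8, 22.0]               # observed arrival times (s), assumed

def log_normal_pdf(x, mu, sd):
    return -0.5 * ((x - mu) / sd) ** 2 - math.log(sd * math.sqrt(2 * math.pi))

best = None
for assign in itertools.product([True, False], repeat=len(detections)):
    caused = [d for d, a in zip(detections, assign) if a]
    if len(caused) > 1:                 # at most one arrival of this phase per event
        continue
    score = log_prior_event
    # P(Y1 | X): Gaussian travel-time residual if detected, miss probability otherwise
    if caused:
        score += math.log(detect_prob) + log_normal_pdf(caused[0], event_time + travel_time, residual_sd)
    else:
        score += math.log(1.0 - detect_prob)
    # P(Y2): each unassociated detection is a false alarm at the Poisson rate
    score += sum(math.log(false_rate) for a in assign if not a)
    if best is None or score > best[0]:
        best = (score, assign)

print("best assignment (True = event-caused):", best[1], " log-score:", round(best[0], 2))
```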