Sample records for minimum variance LCMV

  1. Null steering of adaptive beamforming using linear constraint minimum variance assisted by particle swarm optimization, dynamic mutated artificial immune system, and gravitational search algorithm.

    PubMed

    Darzi, Soodabeh; Kiong, Tiong Sieh; Islam, Mohammad Tariqul; Ismail, Mahamod; Kibria, Salehin; Salem, Balasem

    2014-01-01

    Linear constraint minimum variance (LCMV) is an adaptive beamforming technique commonly applied to cancel interfering signals and to steer a strong beam towards the desired signal through its computed weight vectors. However, the weights computed by LCMV are usually unable to form the radiation beam precisely towards the target user and are not effective enough at reducing interference by placing nulls at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through a conventional empirical approach. To address this problem, artificial intelligence (AI) techniques are explored to enhance the LCMV beamforming ability. In this paper, particle swarm optimization (PSO), dynamic mutated artificial immune system (DM-AIS), and gravitational search algorithm (GSA) are incorporated into the existing LCMV technique to improve the LCMV weights. Simulation results demonstrate that the received signal-to-interference-and-noise ratio (SINR) of the target user can be significantly improved by integrating PSO, DM-AIS, and GSA into LCMV through the suppression of interference in undesired directions. Furthermore, GSA proves more effective than PSO for LCMV beamforming optimization. The algorithms were implemented in MATLAB.
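
For reference, the closed-form LCMV weights that the PSO/DM-AIS/GSA variants set out to improve can be sketched in a few lines of NumPy. The array geometry (an 8-element half-wavelength ULA), the angles, and the interference power below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def steering_vector(theta_deg, n_elements=8, spacing=0.5):
    """Steering vector of a uniform linear array (spacing in wavelengths)."""
    theta = np.deg2rad(theta_deg)
    n = np.arange(n_elements)
    return np.exp(2j * np.pi * spacing * n * np.sin(theta))

def lcmv_weights(R, C, f):
    """Classic LCMV solution: w = R^{-1} C (C^H R^{-1} C)^{-1} f."""
    Ri_C = np.linalg.solve(R, C)
    return Ri_C @ np.linalg.solve(C.conj().T @ Ri_C, f)

# Desired user at 0 degrees, interferer at 40 degrees (illustrative).
a_sig = steering_vector(0.0)
a_int = steering_vector(40.0)

# Interference-plus-noise covariance: interferer 20 dB above unit noise.
R = 100.0 * np.outer(a_int, a_int.conj()) + np.eye(8)

# Constraints: unit gain towards the user, a hard null on the interferer.
C = np.column_stack([a_sig, a_int])
f = np.array([1.0, 0.0])

w = lcmv_weights(R, C, f)
print(abs(w.conj() @ a_sig))  # ~1.0 (distortionless towards the user)
print(abs(w.conj() @ a_int))  # ~0.0 (null on the interferer)
```

The metaheuristics described in the abstract would perturb these weights further; only the baseline closed form is shown here.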

  2. Null Steering of Adaptive Beamforming Using Linear Constraint Minimum Variance Assisted by Particle Swarm Optimization, Dynamic Mutated Artificial Immune System, and Gravitational Search Algorithm

    PubMed Central

    Sieh Kiong, Tiong; Tariqul Islam, Mohammad; Ismail, Mahamod; Salem, Balasem

    2014-01-01

    Linear constraint minimum variance (LCMV) is an adaptive beamforming technique commonly applied to cancel interfering signals and to steer a strong beam towards the desired signal through its computed weight vectors. However, the weights computed by LCMV are usually unable to form the radiation beam precisely towards the target user and are not effective enough at reducing interference by placing nulls at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through a conventional empirical approach. To address this problem, artificial intelligence (AI) techniques are explored to enhance the LCMV beamforming ability. In this paper, particle swarm optimization (PSO), dynamic mutated artificial immune system (DM-AIS), and gravitational search algorithm (GSA) are incorporated into the existing LCMV technique to improve the LCMV weights. Simulation results demonstrate that the received signal-to-interference-and-noise ratio (SINR) of the target user can be significantly improved by integrating PSO, DM-AIS, and GSA into LCMV through the suppression of interference in undesired directions. Furthermore, GSA proves more effective than PSO for LCMV beamforming optimization. The algorithms were implemented in MATLAB. PMID:25147859

  3. LCMV beamforming for a novel wireless local positioning system: a stationarity analysis

    NASA Astrophysics Data System (ADS)

    Tong, Hui; Zekavat, Seyed A.

    2005-05-01

    In this paper, we discuss the implementation of Linear Constrained Minimum Variance (LCMV) beamforming (BF) for a novel Wireless Local Positioning System (WLPS). The main WLPS components are: (a) a dynamic base station (DBS), and (b) a transponder (TRX), both mounted on mobiles. WLPS might be considered a node in a Mobile Ad hoc NETwork (MANET). Each TRX is assigned an identification (ID) code. The DBS transmits periodic short bursts of energy which contain an ID request (IDR) signal. The TRX transmits back its ID code (a signal of limited duration) to the DBS as soon as it detects the IDR signal. Hence, the DBS receives non-continuous signals transmitted by the TRX. In this work, we assume asynchronous Direct-Sequence Code Division Multiple Access (DS-CDMA) transmission from the TRX, with an antenna array and LCMV BF mounted at the DBS, and we discuss the estimation of the observed signal covariance matrix for LCMV BF. In LCMV BF, the observed covariance matrix must be estimated. Usually, the sample covariance matrix (SCM) is used to estimate this covariance matrix, assuming a stationary model for the observed data, which is the case in many communication systems. However, due to the non-stationary behavior of the received signal in WLPS systems, the SCM does not lead to high WLPS performance, even compared to a conventional beamformer. A modified covariance matrix estimation method that exploits the cyclostationarity of the WLPS signal is introduced as a solution to this problem. It is shown that this method leads to a significant improvement in WLPS performance.
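
The baseline SCM estimator discussed above can be sketched as follows. The paper's cyclostationarity-based modification is not reproduced here, and the array size and snapshot count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_covariance(X):
    """Sample covariance matrix (SCM) from an (elements x snapshots) block.

    R_hat = (1/N) X X^H -- the standard estimator under a stationarity
    assumption, which the paper argues breaks down for bursty WLPS signals.
    """
    n_snapshots = X.shape[1]
    return (X @ X.conj().T) / n_snapshots

# Illustrative data: a 4-element array, 256 snapshots of complex white noise.
X = (rng.standard_normal((4, 256)) + 1j * rng.standard_normal((4, 256))) / np.sqrt(2)
R_hat = sample_covariance(X)

# The SCM is Hermitian and positive semi-definite by construction.
print(np.allclose(R_hat, R_hat.conj().T))           # True
print(np.all(np.linalg.eigvalsh(R_hat) >= -1e-12))  # True
```

For non-stationary bursts, the paper's approach would replace the single long-term average above with averaging synchronized to the signal's cyclic structure.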

  4. A Multipath Mitigation Algorithm for Vehicles with Smart Antennas

    NASA Astrophysics Data System (ADS)

    Ji, Jing; Zhang, Jiantong; Chen, Wei; Su, Deliang

    2018-01-01

    In this paper, an adaptive antenna-array method is used to eliminate multipath interference at the GPS L1 frequency. The power inversion (PI) algorithm and the minimum variance distortionless response (MVDR) algorithm are combined for anti-multipath processing: the antenna array is simulated and verified, the program is implemented on an FPGA, and actual tests are conducted on a CBD road. Theoretical analysis of the LCMV criterion and of the PI and MVDR algorithm principles and characteristics, together with the road tests, verifies that the anti-multipath performance of the MVDR algorithm is better than that of the PI algorithm. This work offers guidance and reference for satellite navigation in vehicle engineering practice.
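
A minimal sketch of the power inversion (PI) beamformer compared above, assuming a toy 4-element array and a single strong interferer (all values are illustrative, not from the paper):

```python
import numpy as np

def power_inversion_weights(R, ref=0):
    """Power-inversion (PI) weights: minimize array output power subject to
    w[ref] = 1.  PI needs no knowledge of the satellite direction, which is
    why it is popular in GNSS anti-jamming front ends."""
    e = np.zeros(R.shape[0], dtype=complex)
    e[ref] = 1.0
    w = np.linalg.solve(R, e)
    return w / w[ref]

# 4-element array; one strong multipath/jamming source, 20 dB above noise.
rng = np.random.default_rng(3)
a_int = np.exp(2j * np.pi * rng.random(4))
R = 100.0 * np.outer(a_int, a_int.conj()) + np.eye(4)

w = power_inversion_weights(R)
print(abs(w[0]))              # 1.0: reference-element constraint
print(abs(w.conj() @ a_int))  # strongly attenuated: a deep spatial null
```

MVDR differs by steering a distortionless constraint towards the known satellite direction, which is what gives it the edge the abstract reports.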

  5. Point focusing using loudspeaker arrays from the perspective of optimal beamforming.

    PubMed

    Bai, Mingsian R; Hsieh, Yu-Hao

    2015-06-01

    Sound focusing aims to create a concentrated acoustic field in the region surrounded by a loudspeaker array. This problem was tackled in previous research via the Helmholtz integral approach, brightness control, acoustic contrast control, etc. In this paper, the same problem is revisited from the perspective of beamforming. A source array model is reformulated in terms of the steering matrix between the source and field points, which lends itself to the use of beamforming algorithms such as minimum variance distortionless response (MVDR) and linearly constrained minimum variance (LCMV), originally intended for sensor arrays. The beamforming methods are compared with the conventional methods in terms of beam pattern, directivity index, and control effort. Objective tests are conducted to assess audio quality using perceptual evaluation of audio quality (PEAQ). Experiments on the produced sound field and listening tests are conducted in a listening room, with results processed using analysis of variance and regression analysis. In contrast to the conventional energy-based methods, the results show that the proposed methods are phase-sensitive, owing to the distortionless constraint in formulating the array filters, which helps enhance audio quality and focusing performance.
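
The MVDR weights referred to above have a standard closed form. A minimal NumPy sketch follows, with toy steering vectors standing in for columns of the source-to-field steering matrix; the vectors, array size, and regularization are illustrative assumptions:

```python
import numpy as np

def mvdr_weights(R, a):
    """MVDR (Capon) weights: w = R^{-1} a / (a^H R^{-1} a)."""
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

# Toy steering vectors for the focus point and an off-focus control point;
# in the paper these would come from the source-to-field steering matrix.
a_focus = np.exp(1j * 2 * np.pi * np.linspace(0.0, 1.5, 6))
a_dark  = np.exp(1j * 2 * np.pi * np.linspace(0.0, 0.7, 6))

# Covariance modelling energy at the off-focus point plus regularization.
R = np.outer(a_dark, a_dark.conj()) + 0.1 * np.eye(6)

w = mvdr_weights(R, a_focus)
print(abs(w.conj() @ a_focus))  # ~1.0: distortionless response at the focus
print(abs(w.conj() @ a_dark))   # much smaller: off-focus energy suppressed
```

The phase-sensitivity the paper highlights is visible here: the distortionless constraint fixes both magnitude and phase at the focus point, unlike energy-only contrast methods.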

  6. Non-Gaussian probabilistic MEG source localisation based on kernel density estimation

    PubMed Central

    Mohseni, Hamid R.; Kringelbach, Morten L.; Woolrich, Mark W.; Baker, Adam; Aziz, Tipu Z.; Probert-Smith, Penny

    2014-01-01

    There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore carry the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of the widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate. PMID:24055702
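
The kernel density estimation step underlying the method can be sketched as follows. This is a generic Gaussian-kernel KDE applied to synthetic bimodal data, not the authors' MEG pipeline; the bandwidth and sample sizes are illustrative assumptions:

```python
import numpy as np

def gaussian_kde(samples, x, bandwidth):
    """Multivariate Gaussian kernel density estimate at query points x.

    samples: (n, d) draws from the unknown source distribution
    x:       (m, d) evaluation points
    """
    n, d = samples.shape
    diff = (x[:, None, :] - samples[None, :, :]) / bandwidth   # (m, n, d)
    kern = np.exp(-0.5 * np.sum(diff**2, axis=-1))             # (m, n)
    norm = n * (bandwidth * np.sqrt(2 * np.pi)) ** d
    return kern.sum(axis=1) / norm

rng = np.random.default_rng(1)
# Non-Gaussian (bimodal) 1-D "source amplitude" samples.
samples = np.concatenate([rng.normal(-2, 0.5, 500),
                          rng.normal(2, 0.5, 500)])[:, None]

x = np.linspace(-5, 5, 201)[:, None]
pdf = gaussian_kde(samples, x, bandwidth=0.3)

# The estimate recovers both modes -- something a single Gaussian cannot.
print(pdf[np.argmin(abs(x[:, 0] + 2))] > pdf[np.argmin(abs(x[:, 0]))])  # True
```

With Gaussian data the kernel estimate collapses towards a single Gaussian pdf, which is consistent with the paper's claim of equivalence to the LCMV beamformer in that case.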

  7. Transcranial Electrical Neuromodulation Based on the Reciprocity Principle

    PubMed Central

    Fernández-Corazza, Mariano; Turovets, Sergei; Luu, Phan; Anderson, Erik; Tucker, Don

    2016-01-01

    A key challenge in multi-electrode transcranial electrical stimulation (TES) or transcranial direct current stimulation (tDCS) is to find a current injection pattern that delivers the necessary current density at a target and minimizes it in the rest of the head, which is mathematically modeled as an optimization problem. Such an optimization with the Least Squares (LS) or Linearly Constrained Minimum Variance (LCMV) algorithms is generally computationally expensive and requires multiple independent current sources. Based on the reciprocity principle in electroencephalography (EEG) and TES, it could be possible to find the optimal TES patterns quickly whenever the solution of the forward EEG problem is available for a brain region of interest. Here, we investigate the reciprocity principle as a guideline for finding optimal current injection patterns in TES that comply with safety constraints. We define four different trial cortical targets in a detailed seven-tissue finite element head model, and analyze the performance of the reciprocity family of TES methods in terms of electrode density, targeting error, focality, intensity, and directionality, using the LS and LCMV solutions as the reference standards. It is found that the reciprocity algorithms show performance comparable to the LCMV and LS solutions. Comparing the 128- and 256-electrode cases, we found that greater electrode density improves focality, directionality, and intensity parameters. The results show that the reciprocity principle can be used to quickly determine optimal current injection patterns in TES and help simplify TES protocols consistent with hardware and software availability and with safety constraints. PMID:27303311
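
A least-squares targeting step of the kind described above can be sketched as follows, using a hypothetical random lead-field matrix and only the zero-net-current constraint; the paper's full safety constraints, head model, and LCMV variant are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical linearized forward model: current density at k cortical
# points as a linear function of currents injected at m scalp electrodes.
m, k = 16, 40
A = rng.standard_normal((k, m))
t = np.zeros(k)
t[0] = 1.0  # desired current density: focus on target point 0 only

# Injected electrode currents must sum to zero (conservation of charge).
# Project the unknowns onto the sum-zero subspace, then solve ordinary LS.
P = np.eye(m) - np.ones((m, m)) / m           # projector onto sum(x) = 0
x, *_ = np.linalg.lstsq(A @ P, t, rcond=None)
x = P @ x                                     # enforce the constraint exactly

print(abs(x.sum()))               # ~0: zero-net-current constraint holds
print(np.linalg.norm(A @ x - t))  # residual of the targeting objective
```

The reciprocity approach the paper advocates would instead read the injection pattern directly off the forward EEG solution for the target, avoiding this optimization entirely.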

  8. Transcranial Electrical Neuromodulation Based on the Reciprocity Principle.

    PubMed

    Fernández-Corazza, Mariano; Turovets, Sergei; Luu, Phan; Anderson, Erik; Tucker, Don

    2016-01-01

    A key challenge in multi-electrode transcranial electrical stimulation (TES) or transcranial direct current stimulation (tDCS) is to find a current injection pattern that delivers the necessary current density at a target and minimizes it in the rest of the head, which is mathematically modeled as an optimization problem. Such an optimization with the Least Squares (LS) or Linearly Constrained Minimum Variance (LCMV) algorithms is generally computationally expensive and requires multiple independent current sources. Based on the reciprocity principle in electroencephalography (EEG) and TES, it could be possible to find the optimal TES patterns quickly whenever the solution of the forward EEG problem is available for a brain region of interest. Here, we investigate the reciprocity principle as a guideline for finding optimal current injection patterns in TES that comply with safety constraints. We define four different trial cortical targets in a detailed seven-tissue finite element head model, and analyze the performance of the reciprocity family of TES methods in terms of electrode density, targeting error, focality, intensity, and directionality, using the LS and LCMV solutions as the reference standards. It is found that the reciprocity algorithms show performance comparable to the LCMV and LS solutions. Comparing the 128- and 256-electrode cases, we found that greater electrode density improves focality, directionality, and intensity parameters. The results show that the reciprocity principle can be used to quickly determine optimal current injection patterns in TES and help simplify TES protocols consistent with hardware and software availability and with safety constraints.

  9. Selection of genetic variants of lymphocytic choriomeningitis virus in spleens of persistently infected mice. Role in suppression of cytotoxic T lymphocyte response and viral persistence

    PubMed Central

    1984-01-01

    We studied the mechanism of lymphocytic choriomeningitis virus (LCMV) persistence and the suppression of cytotoxic T lymphocyte (CTL) responses in BALB/c WEHI mice infected at birth with LCMV Armstrong strain. Using adoptive transfer experiments we found that spleen cells from persistently infected (carrier) mice actively suppressed the expected LCMV-specific CTL response of spleen cells from normal adult mice. The suppression was specific for the CTL response and LCMV-specific antibody responses were not affected. Associated with the specific CTL suppression was the establishment of persistent LCMV infection. The transfer of spleen or lymph node cells containing LCMV-specific CTL resulted in virus clearance and prevented establishment of the carrier state. The suppression of LCMV-specific CTL responses by carrier spleen cells is not mediated by a suppressor cell, but is due to the presence of genetic variants of LCMV in spleens of carrier mice. Such virus variants selectively suppress LCMV-specific CTL responses and cause persistent infections in immunocompetent mice. In striking contrast, wild-type LCMV Armstrong, from which these variants were generated, induces a potent CTL response in immunocompetent mice and the LCMV infection is rapidly cleared. Our results show that LCMV variants that emerge during infection in vivo play a crucial role in the suppression of virus-specific CTL responses and in the maintenance of virus persistence. PMID:6332167

  10. Resistance of human plasmacytoid dendritic CAL-1 cells to infection with lymphocytic choriomeningitis virus (LCMV) is caused by restricted virus cell entry, which is overcome by contact of CAL-1 cells with LCMV-infected cells.

    PubMed

    Iwasaki, Masaharu; Sharma, Siddhartha M; Marro, Brett S; de la Torre, Juan C

    2017-11-01

    Plasmacytoid dendritic cells (pDCs), a main source of type I interferon in response to viral infection, are an early cell target during lymphocytic choriomeningitis virus (LCMV) infection, which has been associated with LCMV's ability to establish chronic infections. Human blood-derived pDCs have been reported to be refractory to ex vivo LCMV infection. In the present study we show that human pDC CAL-1 cells are refractory to infection with cell-free LCMV, but highly susceptible to infection with recombinant LCMVs carrying the surface glycoprotein of VSV, indicating that LCMV infection of CAL-1 cells is restricted at the cell-entry step. Co-culture of uninfected CAL-1 cells with LCMV-infected HEK293 cells enabled LCMV to infect CAL-1 cells. This cell-to-cell spread required direct cell-cell contact and did not involve the exosome pathway. Our findings indicate the presence of a novel entry pathway utilized by LCMV to infect pDCs. Copyright © 2017. Published by Elsevier Inc.

  11. Lymphocytic choriomeningitis virus (LCMV) infection of macaques: a model for Lassa fever

    PubMed Central

    Zapata, Juan C.; Pauza, C. David; Djavani, Mahmoud M.; Rodas, Juan D.; Moshkoff, Dmitry; Bryant, Joseph; Ateh, Eugene; Garcia, Cybele; Lukashevich, Igor S.; Salvato, Maria S.

    2011-01-01

    Arenaviruses such as Lassa fever virus (LASV) and lymphocytic choriomeningitis virus (LCMV) are benign in their natural reservoir hosts, and can occasionally cause severe viral hemorrhagic fever (VHF) in non-human primates and in human beings. LCMV is considerably more benign for human beings than Lassa virus, however certain strains, like the LCMV-WE strain, can cause severe disease when the virus is delivered as a high-dose inoculum. Here we describe a rhesus macaque model for Lassa fever that employs a virulent strain of LCMV. Since LASV must be studied within Biosafety Level-4 (BSL-4) facilities, the LCMV-infected macaque model has the advantage that it can be used at BSL-3. LCMV-induced disease is rarely as severe as other VHF, but it is similar in cases where vascular leakage leads to lethal systemic failure. The LCMV-infected macaque has been valuable for describing the course of disease with differing viral strains, doses and routes of infection. By monitoring system-wide changes in physiology and gene expression in a controlled experimental setting, it is possible to identify events that are pathognomonic for developing VHF and potential treatment targets. PMID:21820469

  12. Serological study of the lymphochoriomeningitis virus (LCMV) in an inner city of Argentina.

    PubMed

    Riera, Laura; Castillo, Ernesto; Del Carmen Saavedra, María; Priotto, José; Sottosanti, Josefa; Polop, Jaime; Ambrosio, Ana María

    2005-06-01

    Lymphocytic choriomeningitis virus (LCMV) is the prototype of the family Arenaviridae and is associated with its natural reservoir, Mus domesticus (Md). It causes meningitis and a flu-like illness characterized by malaise, myalgia, retro-orbital headache, and photophobia. This study presents data obtained in a rodent and human serological study conducted over 6 years (1998-2003) in the city of Rio Cuarto, Argentina. Anti-LCMV antibodies were sought by ELISA in rodents and humans. LCMV was found only in the Md species, in 9.4% of animals. The results also show seasonal but non-significant variations in the prevalence of infection. The distribution of positive mice was not modified significantly by trapping site, sex, or age of the animals. The prevalence of LCMV-positive urban residents was consistently low (1-3.6%) over the study period, with an average prevalence of 3.3% and values in males (4.6%) significantly higher than in females (2.6%) (P < 0.05). Seven of 432 pregnant women were found to be LCMV positive, but the absence of LCMV antibodies in the newborns demonstrated that the mothers were infected before pregnancy. This study provides the first evidence of endemic LCMV in an Argentine city located outside the endemic area of Argentine hemorrhagic fever (AHF) and highlights the need to study other areas and increase awareness of this viral infection. Copyright 2005 Wiley-Liss, Inc.

  13. Replicating viral vector platform exploits alarmin signals for potent CD8+ T cell-mediated tumour immunotherapy

    PubMed Central

    Kallert, Sandra M.; Darbre, Stephanie; Bonilla, Weldy V.; Kreutzfeldt, Mario; Page, Nicolas; Müller, Philipp; Kreuzaler, Matthias; Lu, Min; Favre, Stéphanie; Kreppel, Florian; Löhning, Max; Luther, Sanjiv A.; Zippelius, Alfred; Merkler, Doron; Pinschewer, Daniel D.

    2017-01-01

    Viral infections lead to alarmin release and elicit potent cytotoxic effector T lymphocyte (CTLeff) responses. Conversely, the induction of protective tumour-specific CTLeff and their recruitment into the tumour remain challenging tasks. Here we show that lymphocytic choriomeningitis virus (LCMV) can be engineered to serve as a replication competent, stably-attenuated immunotherapy vector (artLCMV). artLCMV delivers tumour-associated antigens to dendritic cells for efficient CTL priming. Unlike replication-deficient vectors, artLCMV targets also lymphoid tissue stroma cells expressing the alarmin interleukin-33. By triggering interleukin-33 signals, artLCMV elicits CTLeff responses of higher magnitude and functionality than those induced by replication-deficient vectors. Superior anti-tumour efficacy of artLCMV immunotherapy depends on interleukin-33 signalling, and a massive CTLeff influx triggers an inflammatory conversion of the tumour microenvironment. Our observations suggest that replicating viral delivery systems can release alarmins for improved anti-tumour efficacy. These mechanistic insights may outweigh safety concerns around replicating viral vectors in cancer immunotherapy. PMID:28548102

  14. Pathogenesis of Lassa fever virus infection: I. Susceptibility of mice to recombinant Lassa Gp/LCMV chimeric virus.

    PubMed

    Lee, Andrew M; Cruite, Justin; Welch, Megan J; Sullivan, Brian; Oldstone, Michael B A

    2013-08-01

    Lassa virus (LASV) is a BSL-4 restricted agent. To allow study of infection by LASV under BSL-2 conditions, we generated a recombinant virus in which the LASV glycoprotein (Gp) was placed on the backbone of lymphocytic choriomeningitis virus (LCMV) Cl13 nucleoprotein, Z and polymerase genes (rLCMV Cl13/LASV Gp). The recombinant virus displayed high tropism for dendritic cells following in vitro or in vivo infection. Inoculation of immunocompetent adults resulted in an acute infection, generation of virus-specific CD8(+) T cells and clearance of the infection. Inoculation of newborn mice with rLCMV Cl13/LASV Gp resulted in a life-long persistent infection. Interestingly, adoptive transfer of rLCMV Cl13/LASV Gp immune memory cells into such persistently infected mice failed to purge virus but, in contrast, cleared virus from mice persistently infected with wt LCMV Cl13. Copyright © 2013 Elsevier Inc. All rights reserved.

  15. Toll-like receptor 7 is required for effective adaptive immune responses that prevent persistent virus infection.

    PubMed

    Walsh, Kevin B; Teijaro, John R; Zuniga, Elina I; Welch, Megan J; Fremgen, Daniel M; Blackburn, Shawn D; von Tiehl, Karl F; Wherry, E John; Flavell, Richard A; Oldstone, Michael B A

    2012-06-14

    TLR7 is an innate signaling receptor that recognizes single-stranded viral RNA and is activated by viruses that cause persistent infections. We show that TLR7 signaling dictates either clearance or establishment of life-long chronic infection by lymphocytic choriomeningitis virus (LCMV) Cl 13 but does not affect clearance of the acute LCMV Armstrong 53b strain. TLR7(-/-) mice infected with LCMV Cl 13 remained viremic throughout life from defects in the adaptive antiviral immune response-notably, diminished T cell function, exacerbated T cell exhaustion, decreased plasma cell maturation, and negligible antiviral antibody production. Adoptive transfer of TLR7(+/+) LCMV immune memory cells that enhanced clearance of persistent LCMV Cl 13 infection in TLR7(+/+) mice failed to purge LCMV Cl 13 infection in TLR7(-/-) mice, demonstrating that a TLR7-deficient environment renders antiviral responses ineffective. Therefore, methods that promote TLR7 signaling are promising treatment strategies for chronic viral infections. Copyright © 2012 Elsevier Inc. All rights reserved.

  16. Brief report: Lymphocytic choriomeningitis virus transmitted through solid organ transplantation--Massachusetts, 2008.

    PubMed

    2008-07-25

    Lymphocytic choriomeningitis virus (LCMV) is a rodent-borne arenavirus found worldwide. House mice (Mus musculus) are the natural reservoir, but LCMV also can infect other wild, pet, and laboratory rodents (e.g., rats, mice, guinea pigs, and hamsters). Humans can be infected through exposure to rodent excreta. Person-to-person transmission has occurred only through maternal-fetal transmission and solid organ transplantation. LCMV infection in humans can be asymptomatic or cause a spectrum of illness ranging from isolated fever to meningitis and encephalitis. Overall case fatality is <1%. Fetal infections can result in congenital abnormalities or death. Immunosuppressed patients, such as organ transplant recipients, can develop fatal hemorrhagic fever-like disease. Transmission of LCMV and an LCMV-like arenavirus via organ transplantation has been documented in three previous clusters. Of 11 recipients described in those clusters, 10 died of multisystem organ failure, with LCMV-associated hepatitis as a prominent feature. The surviving patient was treated with ribavirin (an antiviral with in vitro activity against LCMV) and reduction of immunosuppressive therapy. On April 15, 2008, an organ procurement organization (OPO) notified CDC of severe illness in two kidney transplant recipients from a common donor; at the time of notification, one of the recipients had died. Samples from the donor and both recipients were tested at CDC; on April 22, test results revealed evidence of acute LCMV infection in the donor and both recipients. This report summarizes the results of the subsequent public health investigation.

  17. Meningitis in a College Student in Connecticut, 2007

    ERIC Educational Resources Information Center

    Sosa, Lynn E.; Gupta, Shaili; Juthani-Mehta, Manisha; Hadler, James L.

    2009-01-01

    The authors describe a case of aseptic meningitis in a college student that was ultimately attributed to infection with lymphocytic choriomeningitis virus (LCMV). The authors also provide a review of LCMV infection, epidemiology, and public health implications. Providers should be aware of LCMV as a cause of meningitis in college students,…

  18. Origin and fate of lymphocytic choriomeningitis virus-specific CD8+ T cells coexpressing the inhibitory NK cell receptor Ly49G2.

    PubMed

    Peacock, Craig D; Welsh, Raymond M

    2004-07-01

    CD8+ T cells that coexpress the inhibitory NK cell receptor, Ly49G2 (G2), are present in immunologically naive C57BL/6 mice but display Ags found on memory T cells. To assess how G2+CD8+ cells relate to bona fide memory cells, we examined the origin and fate of lymphocytic choriomeningitis virus (LCMV)-induced G2+CD8+ cells. During early (day 4) acute LCMV infection, both G2+ and G2-CD8+ T cell subsets underwent an attrition in number and displayed an activation (CD69(high)1B11(high)CD62L(low)) phenotype. By day 8, both subsets synthesized IFN-gamma in response to immunodominant LCMV peptides, though the expansion of G2+ cells was less than that of G2- cells. Adoptive transfer experiments with purified G2- or G2+CD8+ cells from naive mice indicated that the LCMV-specific G2+ subset was derived from a pre-existing G2+ population and not generated from G2- cells responding to LCMV infection. Their participation in the LCMV-specific T cell response increased with age, reflecting an increase in the size of the pre-existing G2+ pool. Following establishment of stable LCMV memory, the proportion of CD8+ cells coexpressing G2 was reduced in comparison to naive controls, presumably due to displacement by G2- LCMV-specific memory cells. LCMV-specific G2+ cells were present in the memory pool, but at low frequencies, and they did not exhibit the typical phenotypic changes of reactivation during secondary challenge. We suggest that G2+CD8+ cells represent a cell lineage distinct from bona fide memory T cells, but that they can participate in an acute virus-specific T cell response.

  19. A novel phosphoserine motif in the LCMV matrix protein Z regulates the release of infectious virus and defective interfering particles.

    PubMed

    Ziegler, Christopher M; Eisenhauer, Philip; Bruce, Emily A; Beganovic, Vedran; King, Benjamin R; Weir, Marion E; Ballif, Bryan A; Botten, Jason

    2016-09-01

    We report that the lymphocytic choriomeningitis virus (LCMV) matrix protein, which drives viral budding, is phosphorylated at serine 41 (S41). A recombinant (r)LCMV bearing a phosphomimetic mutation (S41D) was impaired in infectious and defective interfering (DI) particle release, while a non-phosphorylatable mutant (S41A) was not. The S41D mutant was disproportionately impaired in its ability to release DI particles relative to infectious particles. Thus, DI particle production by LCMV may be dynamically regulated via phosphorylation of S41.

  20. Lymphocytic Choriomeningitis Virus in Employees and Mice at Multipremises Feeder-Rodent Operation, United States, 2012

    PubMed Central

    Ströher, Ute; Edison, Laura; Albariño, César G.; Lovejoy, Jodi; Armeanu, Emilian; House, Jennifer; Cory, Denise; Horton, Clayton; Fowler, Kathy L.; Austin, Jessica; Poe, John; Humbaugh, Kraig E.; Guerrero, Lisa; Campbell, Shelley; Gibbons, Aridth; Reed, Zachary; Cannon, Deborah; Manning, Craig; Petersen, Brett; Metcalf, Douglas; Marsh, Bret; Nichol, Stuart T.; Rollin, Pierre E.

    2014-01-01

    We investigated the extent of lymphocytic choriomeningitis virus (LCMV) infection in employees and rodents at 3 commercial breeding facilities. Of 97 employees tested, 31 (32%) had IgM and/or IgG to LCMV, and aseptic meningitis was diagnosed in 4 employees. Of 1,820 rodents tested in 1 facility, 382 (21%) mice (Mus musculus) had detectable IgG, and 13 (0.7%) were positive by reverse transcription PCR; LCMV was isolated from 8. Rats (Rattus norvegicus) were not found to be infected. S-segment RNA sequence was similar to strains previously isolated in North America. Contact by wild mice with colony mice was the likely source for LCMV, and shipments of infected mice among facilities spread the infection. The breeding colonies were depopulated to prevent further human infections. Future outbreaks can be prevented with monitoring and management, and employees should be made aware of LCMV risks and prevention. PMID:24447605

  21. Trace-Forward Investigation of Mice in Response to Lymphocytic Choriomeningitis Virus Outbreak

    PubMed Central

    Knust, Barbara; Petersen, Bret; Gabel, Julie; Manning, Craig; Drenzek, Cherie; Ströher, Ute; Rollin, Pierre E.; Thoroughman, Douglas; Nichol, Stuart T.

    2014-01-01

    During follow-up of a 2012 US outbreak of lymphocytic choriomeningitis virus (LCMV), we conducted a trace-forward investigation. LCMV-infected feeder mice originating from a US rodent breeding facility had been distributed to >500 locations in 21 states. All mice from the facility were euthanized, and no additional persons tested positive for LCMV infection. PMID:24447898

  22. Independent Lineage of Lymphocytic Choriomeningitis Virus in Wood Mice (Apodemus sylvaticus), Spain

    PubMed Central

    Ledesma, Juan; Fedele, Cesare Giovanni; Carro, Francisco; Lledó, Lourdes; Sánchez-Seco, María Paz; Tenorio, Antonio; Soriguer, Ramón Casimiro; Saz, José Vicente; Domínguez, Gerardo; Rosas, María Flora; Barandika, Jesús Félix

    2009-01-01

    To clarify the presence of lymphocytic choriomeningitis virus (LCMV) in Spain, we examined blood and tissue specimens from 866 small mammals. LCMV RNA was detected in 3 of 694 wood mice (Apodemus sylvaticus). Phylogenetic analyses suggest that the strains constitute a new evolutionary lineage. LCMV antibodies were detected in 4 of 10 rodent species tested. PMID:19861074

  23. PC61 (Anti-CD25) Treatment Inhibits Influenza A Virus-Expanded Regulatory T Cells and Severe Lung Pathology during a Subsequent Heterologous Lymphocytic Choriomeningitis Virus Infection

    PubMed Central

    Kraft, Anke R. M.; Wlodarczyk, Myriam F.; Kenney, Laurie L.

    2013-01-01

    Prior immunity to influenza A virus (IAV) in mice changes the outcome to a subsequent lymphocytic choriomeningitis virus (LCMV) infection and can result in severe lung pathology, similar to that observed in patients that died of the 1918 H1N1 pandemic. This pathology is induced by IAV-specific memory CD8+ T cells cross-reactive with LCMV. Here, we discovered that IAV-immune mice have enhanced CD4+ Foxp3+ T-regulatory (Treg) cells in their lungs, leading us to question whether a modulation in the normal balance of Treg and effector T-cell responses also contributes to enhancing lung pathology upon LCMV infection of IAV-immune mice. Treg cell and interleukin-10 (IL-10) levels remained elevated in the lungs and mediastinal lymph nodes (mLNs) throughout the acute LCMV response of IAV-immune mice. PC61 treatment, used to decrease Treg cell levels, did not change LCMV titers but resulted in a surprising decrease in lung pathology upon LCMV infection in IAV-immune but not in naive mice. Associated with this decrease in pathology was a retention of Treg in the mLN and an unexpected partial clonal exhaustion of LCMV-specific CD8+ T-cell responses only in IAV-immune mice. PC61 treatment did not affect cross-reactive memory CD8+ T-cell proliferation. These results suggest that in the absence of IAV-expanded Treg cells and in the presence of cross-reactive memory, the LCMV-specific response was overstimulated and became partially exhausted, resulting in a decreased effector response. These studies suggest that Treg cells generated during past infections can influence the characteristics of effector T-cell responses and immunopathology during subsequent heterologous infections. Thus, in humans with complex infection histories, PC61 treatment may lead to unexpected results. PMID:24049180

  4. Translating insights from persistent LCMV infection into anti-HIV immunity.

    PubMed

    Wilson, Elizabeth B; Brooks, David G

    2010-12-01

    Human immunodeficiency virus (HIV) is a major global health concern with more than 30 million individuals currently infected worldwide. To date, attempts to stimulate protective immunity to viral components of HIV have been unsuccessful in preventing or clearing infection. Lymphocytic choriomeningitis virus (LCMV) is an established murine model of persistent viral infection that has been instrumental in illuminating several critical aspects of antiviral immunity. Although virologically the course of LCMV infection differs significantly from HIV, the immune responses and regulatory mechanisms elicited by these two viruses are markedly similar. In this review we discuss important recent findings in the LCMV model, highlighting the role of host-derived proteins in shaping immune responses to persistent infections, and explore the therapeutic potential of manipulating these pathways to enhance HIV vaccination strategies.

  5. Characterization of host proteins interacting with the lymphocytic choriomeningitis virus L protein.

    PubMed

    Khamina, Kseniya; Lercher, Alexander; Caldera, Michael; Schliehe, Christopher; Vilagos, Bojan; Sahin, Mehmet; Kosack, Lindsay; Bhattacharya, Anannya; Májek, Peter; Stukalov, Alexey; Sacco, Roberto; James, Leo C; Pinschewer, Daniel D; Bennett, Keiryn L; Menche, Jörg; Bergthaler, Andreas

    2017-12-01

    RNA-dependent RNA polymerases (RdRps) play a key role in the life cycle of RNA viruses and impact their immunobiology. The arenavirus lymphocytic choriomeningitis virus (LCMV) strain Clone 13 provides a benchmark model for studying chronic infection. A major genetic determinant for its ability to persist maps to a single amino acid exchange in the viral L protein, which exhibits RdRp activity, yet its functional consequences remain elusive. To unravel the L protein interactions with the host proteome, we engineered infectious L protein-tagged LCMV virions by reverse genetics. A subsequent mass-spectrometric analysis of L protein pulldowns from infected human cells revealed a comprehensive network of interacting host proteins. The obtained LCMV L protein interactome was bioinformatically integrated with known host protein interactors of RdRps from other RNA viruses, emphasizing interconnected modules of human proteins. Functional characterization of selected interactors highlighted proviral (DDX3X) as well as antiviral (NKRF, TRIM21) host factors. To corroborate these findings, we infected Trim21-/- mice with LCMV and found impaired virus control in chronic infection. These results provide insights into the complex interactions of the arenavirus LCMV and other viral RdRps with the host proteome and contribute to a better molecular understanding of how chronic viruses interact with their host.

  6. Alcohol intake alters immune responses and promotes CNS viral persistence in mice.

    PubMed

    Loftis, Jennifer M; Taylor, Jonathan; Raué, Hans-Peter; Slifka, Mark K; Huang, Elaine

    2016-10-01

    Chronic hepatitis C virus (HCV) infection leads to progressive liver disease and is associated with a variety of extrahepatic effects, including central nervous system (CNS) damage and neuropsychiatric impairments. Alcohol abuse can exacerbate these adverse effects on brain and behavior, but the molecular mechanisms are not well understood. This study investigated the role of alcohol in regulating viral persistence and CNS immunopathology in mice infected with lymphocytic choriomeningitis virus (LCMV), a model for HCV infections in humans. Female and male BALB/c mice (n=94) were exposed to alcohol (ethanol; EtOH) and water (or water only) using a two-bottle choice paradigm, followed one week later by infection with either LCMV clone 13 (causes chronic infection similar to chronic HCV), LCMV Armstrong (causes acute infection), or vehicle. Mice were monitored for 60 days post-infection and continued to receive 24-h access to EtOH and water. Animals infected with LCMV clone 13 drank more EtOH, as compared to those with an acute or no viral infection. Six weeks after infection with LCMV clone 13, mice with EtOH exposure showed higher serum viral titers, as compared to mice without EtOH exposure. EtOH intake was also associated with reductions in virus-specific CD8+ T cell frequencies (particularly CD11a-hi subsets) and evidence of persistent CNS viremia in chronically infected mice. These findings support the hypothesis that EtOH use and chronic viral infection can result in combined toxic effects accelerating CNS damage and neuropsychiatric dysfunction and suggest that examining the role of EtOH in regulating viral persistence and CNS immunopathology in mice infected with LCMV can lead to a more comprehensive understanding of comorbid alcohol use disorder and chronic viral infection.

  7. T cell priming versus T cell tolerance induced by synthetic peptides

    PubMed Central

    1995-01-01

    It is well known that synthetic peptides are able to both induce and tolerize T cells. We have examined the parameters leading either to priming or tolerance of CD8+ cytotoxic T lymphocytes (CTL) in vivo with a major histocompatibility complex class I (H-2 Db) binding peptide derived from the glycoprotein (GP aa33-41) of lymphocytic choriomeningitis virus (LCMV). By varying dose, route, and frequency of LCMV GP peptide application, we found that a single local subcutaneous injection of 50-500 micrograms peptide emulsified in incomplete Freund's adjuvant protected mice against LCMV infection, whereas repetitive and systemic intraperitoneal application of the same dose caused tolerance of LCMV-specific CTL. The peptide-induced tolerance was transient in euthymic mice but permanent in thymectomized mice. These findings are relevant for a selective use of peptides as a therapeutic approach: peptide-induced priming of T cells for vaccination and peptide-mediated T cell tolerance for intervention in immunopathologies and autoimmune diseases. PMID:7540654

  8. The impact of illegal waste sites on a transmission of zoonotic viruses.

    PubMed

    Duh, Darja; Hasic, Sandra; Buzan, Elena

    2017-07-20

    Illegal waste disposal impacts public health and causes aesthetic and environmental pollution. Waste disposed in places without permitted and controlled facilities can provide a ready source of nutrition and shelter for rodents and thus promote the spread of their ecto- and endoparasites. The presence of two distinct zoonotic viruses, lymphocytic choriomeningitis virus (LCMV) and tick-borne encephalitis virus (TBEV), was investigated at illegal waste sites. The aim of this study was to determine the prevalence of infection with both viruses in rodents and to discuss virus-rodent relationships in such environments. Rodents sampled between October 2011 and April 2013 at 7 locations in the Istrian peninsula were identified morphologically and genetically to minimize misidentification. Serological and molecular techniques were used to determine seroprevalence of infection in rodents and to detect viral RNAs. Serological testing was performed by immunofluorescence assay for detection of LCMV- and TBEV-specific antibodies. Real-time RT-PCR was used for the detection of the LCMV nucleoprotein gene and the TBEV 3' non-coding region. Data were statistically analysed using SPSS Statistics v2.0. Out of 82 rodent sera tested, the presence of LCMV antibodies was demonstrated in 24.93%. The highest prevalence of LCMV infection was found in commensal Mus musculus (47.37%), followed by 11.53%, 19.04% and 25% prevalence of infection in A. agrarius, A. flavicollis and A. sylvaticus, respectively. The highest prevalence of infection in rodents (53.33%) was found in locations with large waste sites and high anthropogenic influence. LCMV seroprevalence was significantly lower in rodents sampled from natural habitats. Viral nucleic acids were screened in 46 samples but yielded no amplicons of LCMV or TBEV. In addition, TBEV-specific antibodies were not detected. Illegal waste sites have considerable impact on the area where they are located. Results have shown that the transmission of human pathogens can be significantly increased by the presence of waste sites. However, the pathogen must be endemic in the environment where the waste site is located. The introduction of a human pathogen as a consequence of the waste site in the area of interest could not be proven.

  9. Inflammatory Monocytes Recruited to the Liver within 24 Hours after Virus-Induced Inflammation Resemble Kupffer Cells but Are Functionally Distinct

    PubMed Central

    Movita, Dowty; Biesta, Paula; Kreefft, Kim; Haagmans, Bart; Zuniga, Elina; Herschke, Florence; De Jonghe, Sandra; Janssen, Harry L. A.; Gama, Lucio; Boonstra, Andre

    2015-01-01

    Due to a scarcity of immunocompetent animal models for viral hepatitis, little is known about the early innate immune responses in the liver. In various hepatotoxic models, both pro- and anti-inflammatory activities of recruited monocytes have been described. In this study, we compared the effect of liver inflammation induced by the Toll-like receptor 4 ligand lipopolysaccharide (LPS) with that of a persistent virus, lymphocytic choriomeningitis virus (LCMV) clone 13, on early innate intrahepatic immune responses in mice. LCMV infection induces a remarkable influx of inflammatory monocytes in the liver within 24 h, accompanied by increased transcript levels of several proinflammatory cytokines and chemokines in whole liver. Importantly, while a single LPS injection results in similar recruitment of inflammatory monocytes to the liver, the functional properties of the infiltrating cells are dramatically different in response to LPS versus LCMV infection. In fact, intrahepatic inflammatory monocytes are skewed toward a secretory phenotype with impaired phagocytosis in LCMV-induced liver inflammation but exhibit increased endocytic capacity after LPS challenge. In contrast, F4/80-high Kupffer cells retain their steady-state endocytic functions upon LCMV infection. Strikingly, the gene expression levels of inflammatory monocytes dramatically change upon LCMV exposure and resemble those of Kupffer cells. Since inflammatory monocytes outnumber Kupffer cells 24 h after LCMV infection, it is highly likely that inflammatory monocytes contribute to the intrahepatic inflammatory response during the early phase of infection. Our findings are instrumental in understanding the early immunological events during virus-induced liver disease and point toward inflammatory monocytes as potential target cells for future treatment options in viral hepatitis.
IMPORTANCE Insights into how the immune system deals with hepatitis B virus (HBV) and hepatitis C virus (HCV) are scarce due to the lack of adequate animal model systems. This knowledge is, however, crucial to developing new antiviral strategies aimed at eradicating these chronic infections. We model virus-host interactions during the initial phase of liver inflammation 24 h after inoculating mice with LCMV. We show that infected Kupffer cells are rapidly outnumbered by infiltrating inflammatory monocytes, which secrete proinflammatory cytokines but are less phagocytic. Nevertheless, these recruited inflammatory monocytes start to resemble Kupffer cells on a transcript level. The specificity of these cellular changes for virus-induced liver inflammation is corroborated by demonstrating opposite functions of monocytes after LPS challenge. Overall, this demonstrates the enormous functional and genetic plasticity of infiltrating monocytes and identifies them as an important target cell for future treatment regimens. PMID:25673700

  10. Self-Association of Lymphocytic Choriomeningitis Virus Nucleoprotein Is Mediated by Its N-Terminal Region and Is Not Required for Its Anti-Interferon Function

    PubMed Central

    Ortiz-Riaño, Emilio; Cheng, Benson Yee Hin

    2012-01-01

    Arenaviruses have a bisegmented, negative-strand RNA genome. Both the large (L) and small (S) genome segments use an ambisense coding strategy to direct the synthesis of two viral proteins. The L segment encodes the virus polymerase (L protein) and the matrix Z protein, whereas the S segment encodes the nucleoprotein (NP) and the glycoprotein precursor (GPC). NPs are the most abundant viral protein in infected cells and virions and encapsidate genomic RNA species to form an NP-RNA complex that, together with the virus L polymerase, forms the virus ribonucleoprotein (RNP) core capable of directing both replication and transcription of the viral genome. RNP formation predicts a self-association property of NPs. Here we document self-association (homotypic interaction) of the NP of the prototypic arenavirus lymphocytic choriomeningitis virus (LCMV), as well as those of the hemorrhagic fever (HF) arenaviruses Lassa virus (LASV) and Machupo virus (MACV). We also show heterotypic interaction between NPs from both closely (LCMV and LASV) and distantly (LCMV and MACV) genetically related arenaviruses. LCMV NP self-association was dependent on the presence of single-stranded RNA and mediated by an N-terminal region of the NP that did not overlap with the previously described C-terminal NP domain involved in either counteracting the host type I interferon response or interacting with LCMV Z. PMID:22258244

  11. Improving the Nulling Beamformer Using Subspace Suppression.

    PubMed

    Rana, Kunjan D; Hämäläinen, Matti S; Vaina, Lucia M

    2018-01-01

    Magnetoencephalography (MEG) captures the magnetic fields generated by neuronal current sources with sensors outside the head. In MEG analysis these current sources are estimated from the measured data to identify the locations and time courses of neural activity. Since there is no unique solution to this so-called inverse problem, multiple source estimation techniques have been developed. The nulling beamformer (NB), a modified form of the linearly constrained minimum variance (LCMV) beamformer, is specifically used in the process of inferring interregional interactions and is designed to eliminate shared signal contributions, or cross-talk, between regions of interest (ROIs) that would otherwise interfere with the connectivity analyses. The nulling beamformer applies the truncated singular value decomposition (TSVD) to remove small signal contributions from a ROI to the sensor signals. However, ROIs with strong cross-talk will have high separating power in the weaker components, which may be removed by the TSVD operation. To address this issue we propose a new method, the nulling beamformer with subspace suppression (NBSS). This method, controlled by a tuning parameter, reweights the singular values of the gain matrix mapping from source to sensor space such that components with high overlap are reduced. By doing so, we are able to measure signals between nearby source locations with limited cross-talk interference, allowing for reliable cortical connectivity analysis between them. In two simulations, we demonstrated that NBSS reduces cross-talk while retaining ROIs' signal power, and has higher separating power than both the minimum norm estimate (MNE) and the nulling beamformer without subspace suppression. We also showed that NBSS successfully localized the auditory M100 event-related field in primary auditory cortex, measured from a subject undergoing an auditory localizer task, and suppressed cross-talk in a nearby region in the superior temporal sulcus.
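    The subspace operations described above can be sketched numerically. The following NumPy example is a minimal illustration under assumed toy dimensions: it forms classic LCMV weights W = C^-1 L (L^T C^-1 L)^-1 for a small ROI gain matrix L, then shows the TSVD truncation used by the nulling beamformer and a soft singular-value shrinkage in the spirit of NBSS (the shrinkage formula and the tuning parameter lam are illustrative assumptions, not the authors' exact method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative, not from the paper): 32 sensors,
# a 3-column gain (forward) matrix for one ROI, 2000 time samples.
n_sensors, n_src, n_samples = 32, 3, 2000
L = rng.standard_normal((n_sensors, n_src))            # ROI gain matrix
src = rng.standard_normal((n_src, n_samples))          # source time courses
data = L @ src + 0.1 * rng.standard_normal((n_sensors, n_samples))
C = np.cov(data)                                       # sensor covariance

# Classic LCMV weights: W = C^-1 L (L^T C^-1 L)^-1, with a small
# Tikhonov term so the covariance inverse is well conditioned.
C_inv = np.linalg.inv(C + 1e-6 * (np.trace(C) / n_sensors) * np.eye(n_sensors))
W = C_inv @ L @ np.linalg.inv(L.T @ C_inv @ L)

# The unit-gain constraint W^T L = I holds by construction.
print(np.allclose(W.T @ L, np.eye(n_src)))             # prints True

# Nulling-beamformer TSVD step: drop the weakest singular component
# of the gain matrix before building the nulling constraints.
U, s, Vt = np.linalg.svd(L, full_matrices=False)
k = 2                                                  # components kept
L_tsvd = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# NBSS-style soft suppression (sketch): instead of a hard cutoff,
# shrink weak singular values with a tuning parameter lam.
lam = 0.1
shrink = s**2 / (s**2 + lam * s[0] ** 2)
L_soft = U @ np.diag(shrink * s) @ Vt
```

    For real MEG data, MNE-Python's `mne.beamformer.make_lcmv` computes regularized LCMV weights of this form; the truncation and shrinkage steps above only sketch the NB/NBSS idea.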

  12. Weak vaccinia virus-induced NK cell regulation of CD4 T cells is associated with reduced NK cell differentiation and cytolytic activity.

    PubMed

    Hatfield, Steven D; Daniels, Keith A; O'Donnell, Carey L; Waggoner, Stephen N; Welsh, Raymond M

    2018-06-01

    Natural killer (NK) cells control antiviral adaptive immune responses in mice during some virus infections, but the universality of this phenomenon remains unknown. Lymphocytic choriomeningitis virus (LCMV) infection of mice triggered potent cytotoxic activity of NK cells (NK(LCMV)) against activated CD4 T cells, tumor cells, and allogeneic lymphocytes. In contrast, NK cells activated by vaccinia virus (VACV) infection (NK(VACV)) exhibited weaker cytolytic activity against each of these target cells. Relative to NK(LCMV) cells, NK(VACV) cells exhibited a more immature (CD11b- CD27+) phenotype and lower expression levels of the activation marker CD69, cytotoxic effector molecules (perforin, granzyme B), and the transcription factor IRF4. NK(VACV) cells expressed higher levels of the inhibitory molecule NKG2A than NK(LCMV) cells. Consistent with this apparent lethargy, NK(VACV) cells only weakly constrained VACV-specific CD4 T-cell responses. This suggests that NK cell regulation of adaptive immunity, while universal, may be limited with viruses that poorly activate NK cells.

  13. Mining a Kröhnke Pyridine Library for Anti-Arenavirus Activity.

    PubMed

    Miranda, Pedro O; Cubitt, Beatrice; Jacob, Nicholas T; Janda, Kim D; de la Torre, Juan C

    2018-05-11

    Several arenaviruses cause hemorrhagic fever (HF) disease in humans and represent important public health problems in their endemic regions. In addition, evidence indicates that the worldwide-distributed prototypic arenavirus lymphocytic choriomeningitis virus is a neglected human pathogen of clinical significance. There are no licensed arenavirus vaccines, and current antiarenavirus therapy is limited to an off-label use of ribavirin that is only partially effective. Therefore, there is an unmet need for novel therapeutics to combat human pathogenic arenaviruses, a task that will be facilitated by the identification of compounds with antiarenaviral activity that could serve as probes to identify arenavirus-host interactions suitable for targeting, as well as lead compounds to develop future antiarenaviral drugs. Screening of a combinatorial library of Kröhnke pyridines identified compound KP-146 [5-(5-(2,3-dihydrobenzo[b][1,4]dioxin-6-yl)-4'-methoxy-[1,1'-biphenyl]-3-yl)thiophene-2-carboxamide] as having strong anti-lymphocytic choriomeningitis virus (LCMV) activity in cultured cells. KP-146 did not inhibit LCMV cell entry but rather interfered with the activity of the LCMV ribonucleoprotein (vRNP) responsible for directing virus RNA replication and gene transcription, as well as with the budding process mediated by the LCMV matrix Z protein. LCMV variants with increased resistance to KP-146 did not emerge after serial passages in the presence of KP-146. Our findings support the consideration of the Kröhnke pyridine scaffold as a valuable source to identify compounds that could serve as tools to dissect arenavirus-host interactions, as well as lead candidate structures to develop antiarenaviral drugs.

  14. Genomic and Biological Characterization of Aggressive and Docile Strains of LCMV Rescued from a Plasmid-Based Reverse Genetics System

    PubMed Central

    Chen, Minjie; Lan, Shuiyun; Ou, Rong; Price, Graeme E.; Jiang, Hong; de la Torre, Juan Carlos; Moskophidis, Demetrius

    2008-01-01

    Arenaviruses include several causative agents of hemorrhagic fever disease in humans. In addition, the prototypic arenavirus lymphocytic choriomeningitis virus (LCMV) is a superb model for the study of virus-host interactions, including the basis of viral persistence and associated diseases. The molecular mechanisms that regulate viral proteins, and the specific roles these proteins play in modulating arenavirus-host cell interactions during acute or persistent infection and associated disease, remain poorly understood. Here we report the genomic and biological characterization of LCMV strains Docile (persistent) and Aggressive (not persistent) recovered from cloned cDNA via reverse genetics. Our results confirmed that the cloned viruses accurately recreated the in vivo phenotypes associated with the corresponding natural Docile and Aggressive viral isolates. In addition, we provide evidence that the ability of the Docile strain to persist is determined by the nature of both S and L RNA segments. Thus, our findings provide the foundation for studies aimed at gaining a detailed understanding of viral determinants of LCMV persistence in its natural host that may aid in the development of vaccines to prevent or treat the diseases caused by arenaviruses in humans. PMID:18474558

  15. Peptide-Induced Antiviral Protection by Cytotoxic T Cells

    NASA Astrophysics Data System (ADS)

    Schulz, Manfred; Zinkernagel, Rolf M.; Hengartner, Hans

    1991-02-01

    A specific antiviral cytotoxic immune response in vivo could be induced by the subcutaneous injection of the T-cell epitope of the lymphocytic choriomeningitis virus (LCMV) nucleoprotein as an unmodified free synthetic peptide (Arg-Pro-Gln-Ala-Ser-Gly-Val-Tyr-Met-Gly-Asn-Leu-Thr-Ala-Gln) emulsified in incomplete Freund's adjuvant. This immunization induced an LCMV-specific protective state in mice, as shown by the inhibition of LCMV replication in their spleens. The level of protection correlated with the mice's ability to respond to the peptide challenge with CD8+ virus-specific cytotoxic T cells. This is a direct demonstration that peptide vaccines can be antivirally protective in vivo, thus encouraging further search for appropriate mixtures of stable peptides that may be used as T-cell vaccines.

  16. Replication-defective lymphocytic choriomeningitis virus vectors expressing guinea pig cytomegalovirus gB and pp65 homologs are protective against congenital guinea pig cytomegalovirus infection.

    PubMed

    Cardin, Rhonda D; Bravo, Fernando J; Pullum, Derek A; Orlinger, Klaus; Watson, Elizabeth M; Aspoeck, Andreas; Fuhrmann, Gerhard; Guirakhoo, Farshad; Monath, Thomas; Bernstein, David I

    2016-04-12

    Congenital cytomegalovirus infection can be life-threatening and often results in significant developmental deficits and/or hearing loss. Thus, there is a critical need for an effective anti-CMV vaccine. To determine the efficacy of replication-defective lymphocytic choriomeningitis virus (rLCMV) vectors expressing the guinea pig CMV (GPCMV) antigens, gB and pp65, in the guinea pig model of congenital CMV infection. Female Hartley strain guinea pigs were divided into three groups: Buffer control group (n = 9), rLCMV-gB group (n = 11), and rLCMV-pp65 (n = 11). The vaccines were administered three times IM at 1.54 × 10^6 FFU per dose at 21-day intervals. At two weeks after vaccination, the female guinea pigs underwent breeding. Pregnant guinea pigs were challenged SQ at ∼45-55 days of gestation with 1 × 10^5 PFU of GPCMV. Viremia in the dams, pup survival, weights of pups at delivery, and viral load in both dam and pup tissues were determined. Pup survival was significantly increased in the rLCMV-gB vaccine group. There was 23% pup mortality in the gB vaccine group (p = 0.044) and 26% pup mortality in the pp65 vaccine group (p = 0.054) compared to 49% control pup mortality. The gB vaccine induced high levels of gB binding and detectable neutralizing antibodies, reduced dam viremia, and significantly reduced viral load in dam tissues compared to control dams (p < 0.03). Reduced viral load and transmission in pups born to gB-vaccinated dams was observed compared to pups from pp65-vaccinated or control dams. The rLCMV-gB vaccine significantly improved pup survival and also increased pup weights and gestation time. The gB vaccine was also more effective at decreasing viral load in dams and pups and limiting congenital transmission. Thus, rLCMV vectors that express CMV antigens may be an effective vaccine strategy for congenital CMV infection.

  17. The C-terminal region of lymphocytic choriomeningitis virus nucleoprotein contains distinct and segregable functional domains involved in NP-Z interaction and counteraction of the type I interferon response.

    PubMed

    Ortiz-Riaño, Emilio; Cheng, Benson Yee Hin; de la Torre, Juan Carlos; Martínez-Sobrido, Luis

    2011-12-01

    Several arenaviruses cause hemorrhagic fever (HF) disease in humans that is associated with high morbidity and significant mortality. Arenavirus nucleoprotein (NP), the most abundant viral protein in infected cells and virions, encapsidates the viral genome RNA, and this NP-RNA complex, together with the viral L polymerase, forms the viral ribonucleoprotein (vRNP) that directs viral RNA replication and gene transcription. Formation of infectious arenavirus progeny requires packaging of vRNPs into budding particles, a process in which arenavirus matrix-like protein (Z) plays a central role. In the present study, we have characterized the NP-Z interaction for the prototypic arenavirus lymphocytic choriomeningitis virus (LCMV). The LCMV NP domain that interacted with Z overlapped with a previously documented C-terminal domain that counteracts the host type I interferon (IFN) response. However, we found that single amino acid mutations that affect the anti-IFN function of LCMV NP did not disrupt the NP-Z interaction, suggesting that within the C-terminal region of NP different amino acid residues critically contribute to these two distinct and segregable NP functions. A similar NP-Z interaction was confirmed for the HF arenavirus Lassa virus (LASV). Notably, LCMV NP interacted similarly with both LCMV Z and LASV Z, while LASV NP interacted only with LASV Z. Our results also suggest the presence of a conserved protein domain within NP but with specific amino acid residues playing key roles in determining the specificity of NP-Z interaction that may influence the viability of reassortant arenaviruses. In addition, this NP-Z interaction represents a potential target for the development of antiviral drugs to combat human-pathogenic arenaviruses.

  18. In vivo induction of a high-avidity, high-frequency cytotoxic T-lymphocyte response is associated with antiviral protective immunity.

    PubMed

    Sedlik, C; Dadaglio, G; Saron, M F; Deriaud, E; Rojas, M; Casal, S I; Leclerc, C

    2000-07-01

    Many approaches are currently being developed to deliver exogenous antigen into the major histocompatibility complex class I-restricted antigen pathway, leading to in vivo priming of CD8+ cytotoxic T cells. One attractive possibility consists of targeting the antigen to phagocytic or macropinocytic antigen-presenting cells. In this study, we demonstrate that strong CD8+ class I-restricted cytotoxic responses are induced upon intraperitoneal immunization of mice with different peptides, characterized as CD8+ T-cell epitopes, bound to 1-μm synthetic latex microspheres and injected in the absence of adjuvant. The cytotoxic response induced against a lymphocytic choriomeningitis virus (LCMV) peptide linked to these microspheres was compared to the cytotoxic T-lymphocyte (CTL) response obtained upon immunization with the nonreplicative porcine parvovirus-like particles (PPV:VLP) carrying the same peptide (PPV:VLP-LCMV) previously described (C. Sedlik, M. F. Saron, J. Sarraseca, I. Casal, and C. Leclerc, Proc. Natl. Acad. Sci. USA 94:7503-7508, 1997). We show that the induction of specific CTL activity by peptides bound to microspheres requires CD4+ T-cell help in contrast to the CTL response obtained with the peptide delivered by viral pseudoparticles. Furthermore, PPV:VLP are 100-fold more efficient than microspheres in generating a strong CTL response characterized by a high frequency of specific T cells of high avidity. Moreover, PPV:VLP-LCMV are able to protect mice against a lethal LCMV challenge whereas microspheres carrying the LCMV epitope fail to confer such protection. This study demonstrates the crucial involvement of the frequency and avidity of CTLs in conferring antiviral protective immunity and highlights the importance of considering these parameters when developing new vaccine strategies.

  20. Virus-specific antibodies allow viral replication in the marginal zone, thereby promoting CD8+ T-cell priming and viral control

    PubMed Central

    Duhan, Vikas; Khairnar, Vishal; Friedrich, Sarah-Kim; Zhou, Fan; Gassa, Asmae; Honke, Nadine; Shaabani, Namir; Gailus, Nicole; Botezatu, Lacramioara; Khandanpour, Cyrus; Dittmer, Ulf; Häussinger, Dieter; Recher, Mike; Hardt, Cornelia; Lang, Philipp A.; Lang, Karl S.

    2016-01-01

    Clinically used human vaccination aims to induce specific antibodies that can guarantee long-term protection against a pathogen. The reasons that other immune components often fail to induce protective immunity are still debated. Recently we found that enforced viral replication in secondary lymphoid organs is essential for immune activation. In this study we used the lymphocytic choriomeningitis virus (LCMV) to determine whether enforced virus replication occurs in the presence of virus-specific antibodies or virus-specific CD8+ T cells. We found that after systemic recall infection with LCMV-WE the presence of virus-specific antibodies allowed intracellular replication of virus in the marginal zone of spleen. In contrast, specific antibodies limited viral replication in liver, lung, and kidney. Upon recall infection with the persistent virus strain LCMV-Docile, viral replication in spleen was essential for the priming of CD8+ T cells and for viral control. In contrast to specific antibodies, memory CD8+ T cells inhibited viral replication in marginal zone but failed to protect mice from persistent viral infection. We conclude that virus-specific antibodies limit viral infection in peripheral organs but still allow replication of LCMV in the marginal zone, a mechanism that allows immune boosting during recall infection and thereby guarantees control of persistent virus. PMID:26805453

  1. Evaluation of the immunomodulatory and antiviral effects of the cytokine combination IFN-α and IL-7 in the lymphocytic choriomeningitis virus and Friend retrovirus mouse infection models.

    PubMed

    Audigé, Annette; Hofer, Ursula; Dittmer, Ulf; van den Broek, Maries; Speck, Roberto F

    2011-10-01

    Existing therapies for chronic viral infections are still suboptimal or have considerable side effects, so new therapeutic strategies need to be developed. One option is to boost the host's immune response with cytokines. We have recently shown in an acute ex vivo HIV infection model that co-administration of interferon (IFN)-α and interleukin (IL)-7 allows us to combine the potent anti-HIV activity of IFN-α with the beneficial effects of IL-7 on T-cell survival and function. Here we evaluated the effect of combining IFN-α and IL-7 on viral replication in vivo in the chronic lymphocytic choriomeningitis virus (LCMV) and acute Friend retrovirus (FV) infection models. In the chronic LCMV model, cytokine treatment was started during the early replication phase (i.e., on day 7 post-infection [pi]). Under the experimental conditions used, exogenous IFN-α inhibited FV replication, but had no effect on viral replication in the LCMV model. There was no therapeutic benefit of IL-7 either alone or in combination with IFN-α in either of the two infection models. In the LCMV model, dose-dependent effects of the cytokine combination on T-cell phenotype/function were observed. It is possible that these effects would translate into antiviral activity in re-challenged mice. It is also possible that another type of IFN-α/β or induction of endogenous IFN-α/β alone or in combination with IL-7 would have antiviral activity in the LCMV model. Furthermore, we cannot exclude that some effect on viral titers would have been seen at later time points not investigated here (i.e., beyond day 34 pi). Finally, IFN-α/IL-7 may inhibit the replication of other viruses. Thus it might be worth testing these cytokines in other in vivo models of chronic viral infections.

  2. Spatiotemporally restricted arenavirus replication induces immune surveillance and type I interferon-dependent tumour regression

    PubMed Central

    Kalkavan, Halime; Sharma, Piyush; Kasper, Stefan; Helfrich, Iris; Pandyra, Aleksandra A.; Gassa, Asmae; Virchow, Isabel; Flatz, Lukas; Brandenburg, Tim; Namineni, Sukumar; Heikenwalder, Mathias; Höchst, Bastian; Knolle, Percy A.; Wollmann, Guido; von Laer, Dorothee; Drexler, Ingo; Rathbun, Jessica; Cannon, Paula M.; Scheu, Stefanie; Bauer, Jens; Chauhan, Jagat; Häussinger, Dieter; Willimsky, Gerald; Löhning, Max; Schadendorf, Dirk; Brandau, Sven; Schuler, Martin; Lang, Philipp A.; Lang, Karl S.

    2017-01-01

    Immune-mediated effector molecules can limit cancer growth, but lack of sustained immune activation in the tumour microenvironment restricts antitumour immunity. New therapeutic approaches that induce a strong and prolonged immune activation would represent a major immunotherapeutic advance. Here we show that the arenaviruses lymphocytic choriomeningitis virus (LCMV) and the clinically used Junin virus vaccine (Candid#1) preferentially replicate in tumour cells in a variety of murine and human cancer models. Viral replication leads to prolonged local immune activation, rapid regression of localized and metastatic cancers, and long-term disease control. Mechanistically, LCMV induces antitumour immunity, which depends on the recruitment of interferon-producing Ly6C+ monocytes and additionally enhances tumour-specific CD8+ T cells. In comparison with other clinically evaluated oncolytic viruses and to PD-1 blockade, LCMV treatment shows promising antitumoural benefits. In conclusion, therapeutically administered arenavirus replicates in cancer cells and induces tumour regression by enhancing local immune responses. PMID:28248314

  3. An inducible transgenic mouse breast cancer model for the analysis of tumor antigen specific CD8+ T-cell responses

    PubMed Central

    Bruns, Michael; Wanger, Jara; Utermöhlen, Olaf; Deppert, Wolfgang

    2015-01-01

    In Simian virus 40 (SV40) transgenic BALB/c WAP-T mice, tumor development and progression are driven by SV40 tumor antigens encoded by inducible transgenes. WAP-T mice constitute a well characterized mouse model for breast cancer with strong similarities to the corresponding human disease. BALB/c mice mount only a weak cellular immune response against SV40 T-antigen (T-Ag). For studying tumor antigen specific CD8+ T-cell responses against transgene expressing cells, we created WAP-TNP mice, in which the transgene additionally codes for the NP118–126-epitope contained within the nucleoprotein of lymphocytic choriomeningitis virus (LCMV), the immune-dominant T-cell epitope in BALB/c mice. We then investigated in WAP-TNP mice the immune responses against SV40 tumor antigens and the NP-epitope within the chimeric T-Ag/NP protein (T-AgNP). Analysis of the immune-reactivity against T-Ag in WAP-T and of T-AgNP in WAP-TNP mice revealed that, in contrast to wild type (wt) BALB/c mice, WAP-T and WAP-TNP mice were non-reactive against T-Ag. However, like wtBALB/c mice, WAP-T as well as WAP-TNP mice were highly reactive against the immune-dominant LCMV NP-epitope, thereby allowing the analysis of NP-epitope specific cellular immune responses in WAP-TNP mice. LCMV infection of WAP-TNP mice induced a strong, LCMV NP-epitope specific CD8+ T-cell response, which was able to specifically eliminate T-AgNP expressing mammary epithelial cells both prior to tumor formation (i.e. in cells of lactating mammary glands), as well as in invasive tumors. Elimination of tumor cells, however, was only transient, even after repeated LCMV infections. Further studies showed that already non-infected WAP-TNP tumor mice contained LCMV NP-epitope specific CD8+ T-cells, albeit with strongly reduced, though measurable activity.
Functional impairment of these ‘endogenous’ NP-epitope specific T-cells seems to be caused by expression of the programmed death-1 protein (PD1), as anti-PD1 treatment of splenocytes from WAP-TNP tumor mice restored their activity. These characteristics are similar to those found in many tumor patients and render WAP-TNP mice a suitable model for analyzing parameters to overcome the blockade of immune checkpoints in tumor patients. PMID:26513294

  4. Impaired Subset Progression and Polyfunctionality of T Cells in Mice Exposed to Methamphetamine during Chronic LCMV Infection

    PubMed Central

    Sriram, Uma; Hill, Beth L.; Cenna, Jonathan M.; Gofman, Larisa; Fernandes, Nicole C.; Haldar, Bijayesh; Potula, Raghava

    2016-01-01

    Methamphetamine (METH) is a widely used psychostimulant that severely impacts the host’s innate and adaptive immune systems and has profound immunological implications. T cells play a critical role in orchestrating immune responses. We have shown recently how chronic exposure to METH affects T cell activation using a murine model of lymphocytic choriomeningitis virus (LCMV) infection. Using the TriCOM (trinary state combinations) feature of GemStone™ to study the polyfunctionality of T cells, we have analyzed how METH affected the cytokine production pattern over the course of chronic LCMV infection. Furthermore, we have studied in detail the effects of METH on splenic T cell functions, such as cytokine production and degranulation, and how they regulate each other. We used the Probability State Modeling (PSM) program to visualize the differentiation of effector/memory T cell subsets during LCMV infection and analyze the effects of METH on T cell subset progression. We recently demonstrated that METH increased PD-1 expression on T cells during viral infection. In this study, we further analyzed the impact of PD-1 expression on T cell functional markers as well as its expression in the effector/memory subsets. Overall, our study indicates that analyzing polyfunctionality of T cells can provide additional insight into T cell effector functions. Analysis of T cell heterogeneity is important to highlight changes in the evolution of memory/effector functions during chronic viral infections. Our study also highlights the impact of METH on PD-1 expression and its consequences on T cell responses. PMID:27760221

  6. Murine Models for Viral Hemorrhagic Fever.

    PubMed

    Gonzalez-Quintial, Rosana; Baccala, Roberto

    2018-01-01

    Hemorrhagic fever (HF) viruses, such as Lassa, Ebola, and dengue viruses, represent major human health risks due to their highly contagious nature, the severity of the clinical manifestations induced, the lack of vaccines, and the very limited therapeutic options currently available. Appropriate animal models are obviously critical to study disease pathogenesis and develop efficient therapies. We recently reported that the clone 13 (Cl13) variant of the lymphocytic choriomeningitis virus (LCMV-Cl13), a prototype arenavirus closely related to Lassa virus, causes endothelial damage, vascular leakage, platelet loss, and death in some mouse strains, mimicking pathological aspects typically observed in Lassa and other HF syndromes. This model has the advantage that the mice used are fully immunocompetent, allowing studies on the contribution of the immune response to disease progression. Moreover, LCMV is very well characterized and exhibits limited pathogenicity in humans, allowing handling in convenient BSL-2 facilities. In this chapter we outline protocols for the induction and analysis of arenavirus-mediated pathogenesis in the NZB/LCMV model, including mouse infection, virus titer determination, platelet counting, phenotypic analysis of virus-specific T cells, and assessment of vascular permeability.

  7. Childhood blindness and visual loss: an assessment at two institutions including a "new" cause.

    PubMed Central

    Mets, M B

    1999-01-01

    PURPOSE: This study was initiated to investigate the causes of childhood blindness and visual impairment in the United States. We also sought a particular etiology--congenital lymphocytic choriomeningitis virus (LCMV)--which has been considered exceedingly rare, in a fixed target population of children, the severely mentally retarded. METHODS: We undertook a library-based study of the world literature to shed light on the causes of childhood blindness internationally and to put our data in context. We prospectively examined all consented children (159) at 2 institutions in the United States to determine their ocular status and the etiology of any visual loss present. One of the institutions is a school for the visually impaired (hereafter referred to as Location V), in which most of the students have normal mentation. The other is a home for severely mentally retarded, nonambulatory children (hereafter referred to as Location M). This institution was selected specifically to provide a sample of visual loss associated with severe retardation because the handful of cases of LCMV in the literature have been associated with severe central nervous system insults. Histories were obtained from records on site, and all children received a complete cycloplegic ophthalmic examination at their institution performed by the author. Patients at Location M with chorioretinal scars consistent with intrauterine infection (a possible sign of LCMV) had separate consents for blood drawing. Sera were obtained and sent for standard TORCHS titers, toxoplasmosis titers (Jack S. Remington, MD, Palo Alto, Calif), and ELISA testing for LCMV (Centers for Disease Control and Prevention, Atlanta, Ga). 
RESULTS: The diagnoses at Location V were varied and included retinopathy of prematurity (19.4%), optic atrophy (19.4%), retinitis pigmentosa (14.5%), optic nerve hypoplasia (12.9%), cataracts (8.1%), foveal hypoplasia (8.1%), persistent hyperplastic primary vitreous (4.8%), and microphthalmos (3.2%). The most common diagnosis at Location M was bilateral optic atrophy, which was found in 65% of the patients examined who had visual loss. Of these, the insults were most often congenital (42.6%), with birth trauma, prematurity, and genetics each responsible for about 15% of the optic atrophy. The second most common diagnosis was cortical visual impairment (24%), followed by chorioretinal scars (5%), which are strongly suggestive of intrauterine infection. Of 95 patients examined at Location M, 4 had chorioretinal scars. Two of these had dramatically elevated titers for LCMV, as did one of their mothers. One of the other 2 children died before serum could be drawn, and the fourth had negative titers for both TORCHS and LCMV. CONCLUSIONS: At both locations studied, visual loss was most often due to congenital insults, whether genetic or simply prenatal. The visual loss at Location V was twice as likely as that at Location M to be caused by a genetic disorder. The genetic disorders at Location V were more often isolated eye diseases, while those among the severely retarded at Location M were more generalized genetic disorders. Our study identified optic atrophy as a common diagnosis among the severely mentally retarded with vision loss, a finding that is supported by previous studies in other countries. In our population of severely retarded children, the target etiology of lymphocytic choriomeningitis virus was responsible for half the visual loss secondary to chorioretinitis from intrauterine infection. 
This is more common than would be predicted by the few cases previously described in the literature, and strongly suggests that LCMV may be a more common cause of visual loss than previously appreciated. We believe that serology for LCMV should be part of the workup for congenital chorioretinitis, especially if the TORCHS titers are negative, and that perhaps the mnemonic should be revised to "TORCHS + L." Childhood blindness and visual impairment are tragic and co… PMID:10703143

  8. Interactome analysis of the lymphocytic choriomeningitis virus nucleoprotein in infected cells reveals ATPase Na+/K+ transporting subunit Alpha 1 and prohibitin as host-cell factors involved in the life cycle of mammarenaviruses

    PubMed Central

    Iwasaki, Masaharu; Caì, Yíngyún; de la Torre, Juan C.

    2018-01-01

    Several mammalian arenaviruses (mammarenaviruses) cause hemorrhagic fevers in humans and pose serious public health concerns in their endemic regions. Additionally, mounting evidence indicates that the worldwide-distributed, prototypic mammarenavirus, lymphocytic choriomeningitis virus (LCMV), is a neglected human pathogen of clinical significance. Concerns about human-pathogenic mammarenaviruses are exacerbated by the lack of licensed vaccines, and current anti-mammarenavirus therapy is limited to off-label use of ribavirin that is only partially effective. Detailed understanding of virus/host-cell interactions may facilitate the development of novel anti-mammarenavirus strategies by targeting components of the host-cell machinery that are required for efficient virus multiplication. Here we document the generation of a recombinant LCMV encoding a nucleoprotein (NP) containing an affinity tag (rLCMV/Strep-NP) and its use to capture the NP-interactome in infected cells. Our proteomic approach combined with genetics and pharmacological validation assays identified ATPase Na+/K+ transporting subunit alpha 1 (ATP1A1) and prohibitin (PHB) as pro-viral factors. Cell-based assays revealed that ATP1A1 and PHB are involved in different steps of the virus life cycle. Accordingly, we observed a synergistic inhibitory effect on LCMV multiplication with a combination of ATP1A1 and PHB inhibitors. We show that ATP1A1 inhibitors suppress multiplication of Lassa virus and Candid#1, a live-attenuated vaccine strain of Junín virus, suggesting that the requirement of ATP1A1 in virus multiplication is conserved among genetically distantly related mammarenaviruses. Our findings suggest that clinically approved inhibitors of ATP1A1, like digoxin, could be repurposed to treat infections by mammarenaviruses pathogenic for humans. PMID:29462184

  9. Virus zoonoses and their potential for contamination of cell cultures.

    PubMed

    Mahy, B W; Dykewicz, C; Fisher-Hoch, S; Ostroff, S; Tipple, M; Sanchez, A

    1991-01-01

    Silent virus infections of laboratory animals present a human health hazard, from direct exposure and from contamination of biological products for human use. Here we report two recent examples. In 1989, an outbreak of lymphocytic choriomeningitis virus (LCMV) infections was recognized among workers at a cancer research center after an animal caretaker developed viral meningitis. Investigation revealed that multiple tumor cell lines at the facility were infected with LCMV, as were research animals injected with these cell lines. Of 82 workers tested, eight (10%) were found to have been infected. The infected workers were more likely than other animal handlers to report handling athymic (nude) mice (p < 0.007). The number of nude mice used in this facility had increased five-fold in the previous year, possibly explaining the timing of the outbreak. This is the first reported LCMV outbreak since 1975, and the first to implicate nude mice as a source of human LCMV infections. In November 1989 and January 1990, infections caused by two distinct Ebola-like filoviruses were discovered in non-human primates at quarantine facilities in Virginia and Pennsylvania. Although 22 persons were considered to have high- or medium-risk exposures for Ebola infection, no Ebola-compatible illnesses occurred. One of the medium-risk persons had Ebola IgG antibodies confirmed by IFA and Western blot. Rigorous use of barrier precautions may have limited exposure and infection with these filoviruses. In February 1990, new groups of filovirus-infected monkeys were identified in Virginia and in Texas. Seroconversion occurred in four animal handlers, including one to very high titer, but again no illness was observed.(ABSTRACT TRUNCATED AT 250 WORDS)

  10. Role of PD-1 during effector CD8 T cell differentiation.

    PubMed

    Ahn, Eunseon; Araki, Koichi; Hashimoto, Masao; Li, Weiyan; Riley, James L; Cheung, Jeanne; Sharpe, Arlene H; Freeman, Gordon J; Irving, Bryan A; Ahmed, Rafi

    2018-05-01

    PD-1 (programmed cell death-1) is the central inhibitory receptor regulating CD8 T cell exhaustion during chronic viral infection and cancer. Interestingly, PD-1 is also expressed transiently by activated CD8 T cells during acute viral infection, but the role of PD-1 in modulating T cell effector differentiation and function is not well defined. To address this question, we examined the expression kinetics and role of PD-1 during acute lymphocytic choriomeningitis virus (LCMV) infection of mice. PD-1 was rapidly up-regulated in vivo upon activation of naive virus-specific CD8 T cells within 24 h after LCMV infection and in less than 4 h after peptide injection, well before any cell division had occurred. This rapid PD-1 expression by CD8 T cells was driven predominantly by antigen receptor signaling since infection with a LCMV strain with a mutation in the CD8 T cell epitope did not result in the increase of PD-1 on antigen-specific CD8 T cells. Blockade of the PD-1 pathway using anti-PD-L1 or anti-PD-1 antibodies during the early phase of acute LCMV infection increased mTOR signaling and granzyme B expression in virus-specific CD8 T cells and resulted in faster clearance of the infection. These results show that PD-1 plays an inhibitory role during the naive-to-effector CD8 T cell transition and that the PD-1 pathway can also be modulated at this stage of T cell differentiation. These findings have implications for developing therapeutic vaccination strategies in combination with PD-1 blockade.

  11. Recombinant Listeria monocytogenes as a Live Vaccine Vehicle for the Induction of Protective Anti-Viral Cell-Mediated Immunity

    NASA Astrophysics Data System (ADS)

    Shen, Hao; Slifka, Mark K.; Matloubian, Mehrdad; Jensen, Eric R.; Ahmed, Rafi; Miller, Jeff F.

    1995-04-01

    Listeria monocytogenes (LM) is a Gram-positive bacterium that is able to enter host cells, escape from the endocytic vesicle, multiply within the cytoplasm, and spread directly from cell to cell without encountering the extracellular milieu. The ability of LM to gain access to the host cell cytosol allows proteins secreted by the bacterium to efficiently enter the pathway for major histocompatibility complex class I antigen processing and presentation. We have established a genetic system for expression and secretion of foreign antigens by recombinant strains, based on stable site-specific integration of expression cassettes into the LM genome. The ability of LM recombinants to induce protective immunity against a heterologous pathogen was demonstrated with lymphocytic choriomeningitis virus (LCMV). LM strains expressing the entire LCMV nucleoprotein or an H-2L^d-restricted nucleoprotein epitope (aa 118-126) were constructed. Immunization of mice with LM vaccine strains conferred protection against challenge with virulent strains of LCMV that otherwise establish chronic infection in naive adult mice. In vivo depletion of CD8^+ T cells from vaccinated mice abrogated their ability to clear viral infection, showing that protective anti-viral immunity was due to CD8^+ T cells.

  12. Arenavirus reverse genetics for vaccine development

    PubMed Central

    Ortiz-Riaño, Emilio; Cheng, Benson Yee Hin; Carlos de la Torre, Juan

    2013-01-01

    Arenaviruses are important human pathogens with no Food and Drug Administration (FDA)-licensed vaccines available and current antiviral therapy being limited to an off-label use of the nucleoside analogue ribavirin of limited prophylactic efficacy. The development of reverse genetics systems represented a major breakthrough in arenavirus research. However, rescue of recombinant arenaviruses using current reverse genetics systems has been restricted to rodent cells. In this study, we describe the rescue of recombinant arenaviruses from human 293T cells and Vero cells, an FDA-approved line for vaccine development. We also describe the generation of novel vectors that mediate synthesis of both negative-sense genome RNA and positive-sense mRNA species of lymphocytic choriomeningitis virus (LCMV) directed by the human RNA polymerases I and II, respectively, within the same plasmid. This approach reduces by half the number of vectors required for arenavirus rescue, which could facilitate virus rescue in cell lines approved for human vaccine production but that cannot be transfected at high efficiencies. We have shown the feasibility of this approach by rescuing both the Old World prototypic arenavirus LCMV and the live-attenuated vaccine Candid#1 strain of the New World arenavirus Junín. Moreover, we show the feasibility of using these novel strategies for efficient rescue of recombinant tri-segmented versions of both LCMV and Candid#1. PMID:23364194

  13. Rapid activation of spleen dendritic cell subsets following lymphocytic choriomeningitis virus infection of mice: analysis of the involvement of type 1 IFN.

    PubMed

    Montoya, Maria; Edwards, Matthew J; Reid, Delyth M; Borrow, Persephone

    2005-02-15

    In this study, we report the dynamic changes in activation and functions that occur in spleen dendritic cell (sDC) subsets following infection of mice with a natural murine pathogen, lymphocytic choriomeningitis virus (LCMV). Within 24 h postinfection (pi), sDCs acquired the ability to stimulate naive LCMV-specific CD8+ T cells ex vivo. Conventional (CD11chigh CD8+ and CD4+) sDC subsets rapidly up-regulated expression of costimulatory molecules and began to produce proinflammatory cytokines. Their tendency to undergo apoptosis ex vivo simultaneously increased, and in vivo the number of conventional DCs in the spleen decreased markedly, dropping approximately 2-fold by day 3 pi. Conversely, the number of plasmacytoid (CD11clowB220+) DCs in the spleen increased, so that they constituted almost 40% of sDCs by day 3 pi. Type 1 IFN production was up-regulated in plasmacytoid DCs by 24 h pi. Analysis of DC activation and maturation in mice unable to respond to type 1 IFNs implicated these cytokines in driving infection-associated phenotypic activation of conventional DCs and their enhanced tendency to undergo apoptosis, but also indicated the existence of type 1 IFN-independent pathways for the functional maturation of DCs during LCMV infection.

  14. A Highly Conserved Leucine in Mammarenavirus Matrix Z Protein Is Required for Z Interaction with the Virus L Polymerase and Z Stability in Cells Harboring an Active Viral Ribonucleoprotein.

    PubMed

    Iwasaki, Masaharu; de la Torre, Juan C

    2018-06-01

    Mammarenaviruses cause chronic infections in their natural rodent hosts. Infected rodents shed infectious virus into excreta. Humans are infected through mucosal exposure to aerosols or direct contact of abraded skin with fomites, resulting in a wide range of manifestations from asymptomatic or mild febrile illness to severe life-threatening hemorrhagic fever. The mammarenavirus matrix Z protein has been shown to be a main driving force of virus budding and to act as a negative regulator of viral RNA synthesis. To gain a better understanding of how the Z protein exerts its several different functions, we investigated the interaction between Z and viral polymerase L protein using the prototypic mammarenavirus, lymphocytic choriomeningitis virus (LCMV). We found that in the presence of an active viral ribonucleoprotein (vRNP), the Z protein translocated from nonionic detergent-resistant, membrane-rich structures to a subcellular compartment with a different membrane composition susceptible to disruption by nonionic detergents. Alanine (A) substitution of a highly conserved leucine (L) at position 72 in LCMV Z protein abrogated Z-L interaction. The L72A mutation did not affect the stability or budding activity of Z when expressed alone, but in the presence of an active vRNP, mutation L72A promoted rapid degradation of Z via a proteasome- and lysosome-independent pathway. Accordingly, L72A mutation in the Z protein resulted in nonviable LCMV. Our findings have uncovered novel aspects of the dynamics of the Z protein for which a highly conserved L residue was strictly required. IMPORTANCE Several mammarenaviruses, chiefly Lassa virus (LASV), cause hemorrhagic fever disease in humans and pose important public health concerns in their regions of endemicity. Moreover, mounting evidence indicates that the worldwide-distributed, prototypic mammarenavirus, lymphocytic choriomeningitis virus (LCMV), is a neglected human pathogen of clinical significance. 
The mammarenavirus matrix Z protein plays critical roles in different steps of the viral life cycle by interacting with viral and host cellular components. Here we report that alanine substitution of a highly conserved leucine residue, located at position 72 in LCMV Z protein, abrogated Z-L interaction. The L72A mutation did not affect Z budding activity but promoted its rapid degradation in the presence of an active viral ribonucleoprotein (vRNP). Our findings have uncovered novel aspects of the dynamics of the Z protein for which a highly conserved L residue was strictly required. Copyright © 2018 American Society for Microbiology.

  15. 75 FR 40797 - Upper Peninsula Power Company; Notice of Application for Temporary Amendment of License and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-14

    ... for drought-based temporary variance of the reservoir elevations and minimum flow releases at the Dead... temporary variance to the reservoir elevation and minimum flow requirements at the Hoist Development. The...: (1) Releasing a minimum flow of 75 cubic feet per second (cfs) from the Hoist Reservoir, instead of...

  16. The Variance of Solar Wind Magnetic Fluctuations: Solutions and Further Puzzles

    NASA Technical Reports Server (NTRS)

    Roberts, D. A.; Goldstein, M. L.

    2006-01-01

    We study the dependence of the variance directions of the magnetic field in the solar wind as a function of scale, radial distance, and Alfvenicity. The study resolves the question of why different studies have arrived at widely differing values for the maximum to minimum power (≈3:1 up to ≈20:1). This is due to the decreasing anisotropy with increasing time interval chosen for the variance, and is a direct result of the "spherical polarization" of the waves which follows from the near constancy of |B|. The reason for the magnitude preserving evolution is still unresolved. Moreover, while the long-known tendency for the minimum variance to lie along the mean field also follows from this view (as shown by Barnes many years ago), there is no theory for why the minimum variance follows the field direction as the Parker angle changes. We show that this turning is quite generally true in Alfvenic regions over a wide range of heliocentric distances. The fact that non-Alfvenic regions, while still showing strong power anisotropies, tend to have a much broader range of angles between the minimum variance and the mean field makes it unlikely that the cause of the variance turning is to be found in a turbulence mechanism. There are no obvious alternative mechanisms, leaving us with another intriguing puzzle.
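    The variance directions and max-to-min power ratios discussed in this abstract are conventionally computed by eigendecomposition of the covariance matrix of the field components (classic minimum variance analysis). A minimal NumPy sketch, using synthetic data in place of actual solar wind measurements (the function name and parameter choices are illustrative, not from the paper):

    ```python
    import numpy as np

    def minimum_variance_analysis(B):
        """Eigendecompose the 3x3 covariance matrix of field samples.
        B: (N, 3) array of (Bx, By, Bz). Returns eigenvalues in
        ascending order and eigenvectors as columns, so evecs[:, 0]
        is the minimum variance direction."""
        M = np.cov(B, rowvar=False)       # 3x3 covariance of components
        evals, evecs = np.linalg.eigh(M)  # symmetric matrix, sorted ascending
        return evals, evecs

    # Synthetic field: fluctuations mostly transverse to a mean field,
    # mimicking the anisotropy the abstract describes
    rng = np.random.default_rng(0)
    b0 = np.array([5.0, 0.0, 0.0])                       # mean field along x
    B = b0 + rng.normal(0.0, [0.2, 1.0, 1.0], size=(10000, 3))

    evals, evecs = minimum_variance_analysis(B)
    anisotropy = evals[2] / evals[0]                     # max-to-min power ratio
    min_dir = evecs[:, 0]
    cos_angle = abs(min_dir @ b0) / np.linalg.norm(b0)   # alignment with mean field
    ```

    For these synthetic fluctuations the minimum variance direction aligns closely with the mean field, the tendency the abstract attributes to spherical polarization; shortening or lengthening the sample interval changes the measured anisotropy, which is the resolution the paper proposes for the discrepant published ratios.
    
    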

  17. Visualization of Arenavirus RNA Species in Individual Cells by Single-Molecule Fluorescence In Situ Hybridization Suggests a Model of Cyclical Infection and Clearance during Persistence.

    PubMed

    King, Benjamin R; Samacoits, Aubin; Eisenhauer, Philip L; Ziegler, Christopher M; Bruce, Emily A; Zenklusen, Daniel; Zimmer, Christophe; Mueller, Florian; Botten, Jason

    2018-06-15

    Lymphocytic choriomeningitis mammarenavirus (LCMV) is an enveloped, negative-strand RNA virus that causes serious disease in humans but establishes an asymptomatic, lifelong infection in reservoir rodents. Different models have been proposed to describe how arenaviruses regulate the replication and transcription of their bisegmented, single-stranded RNA genomes, particularly during persistent infection. However, these models were based largely on viral RNA profiling data derived from entire populations of cells. To better understand LCMV replication and transcription at the single-cell level, we established a high-throughput, single-molecule fluorescence in situ hybridization (smFISH) image acquisition and analysis pipeline and examined viral RNA species at discrete time points from virus entry through the late stages of persistent infection in vitro. We observed the transcription of viral nucleoprotein and polymerase mRNAs from the incoming S and L segment genomic RNAs, respectively, within 1 h of infection, whereas the transcription of glycoprotein mRNA from the S segment antigenome required ∼4 to 6 h. This confirms the temporal separation of viral gene expression expected due to the ambisense coding strategy of arenaviruses and also suggests that antigenomic RNA contained in virions is not transcriptionally active upon entry. Viral replication and transcription peaked at 36 h postinfection, followed by a progressive loss of viral RNAs over the next several days. During persistence, the majority of cells showed repeating cyclical waves of viral transcription and replication followed by the clearance of viral RNA. Thus, our data support a model of LCMV persistence whereby infected cells can spontaneously clear infection and become reinfected by viral reservoir cells that remain in the population. IMPORTANCE Arenaviruses are human pathogens that can establish asymptomatic, lifelong infections in their rodent reservoirs. 
Several models have been proposed to explain how arenavirus spread is restricted within host rodents, including the periodic accumulation and loss of replication-competent, but transcriptionally incompetent, viral genomes. A limitation of previous studies was the inability to enumerate viral RNA species at the single-cell level. We developed a high-throughput, smFISH assay and used it to quantitate lymphocytic choriomeningitis mammarenavirus (LCMV) replicative and transcriptional RNA species in individual cells at distinct time points following infection. Our findings support a model whereby productively infected cells can clear infection, including viral RNAs and antigen, and later be reinfected. This information improves our understanding of the timing and possible regulation of LCMV genome replication and transcription during infection. Importantly, the smFISH assay and data analysis pipeline developed here is easily adaptable to other RNA viruses. Copyright © 2018 American Society for Microbiology.

  18. Minimum variance geographic sampling

    NASA Technical Reports Server (NTRS)

    Terrell, G. R. (Principal Investigator)

    1980-01-01

    Resource inventories require samples with geographical scatter, sometimes not as widely spaced as would be hoped. A simple model of correlation over distances is used to construct a minimum variance unbiased estimate of population means. The fitting procedure is illustrated with data used to estimate Missouri corn acreage.

  19. Mechanism of lymphocytic choriomeningitis virus entry into cells.

    PubMed

    Borrow, P; Oldstone, M B

    1994-01-01

    The path that the arenavirus lymphocytic choriomeningitis virus (LCMV) uses to enter rodent fibroblastic cell lines was dissected by infectivity and inhibition studies and immunoelectron microscopy. Lysosomotropic weak bases (chloroquine and ammonium chloride) and carboxylic ionophores (monensin and nigericin) inhibited virus entry, assessed as virus nucleoprotein expression at early times post-infection, indicating that the entry process involved a pH-dependent fusion step in intracellular vesicles. That entry occurred in vesicles rather than by direct fusion of virions with the plasma membrane was confirmed by immunoelectron microscopy. The vesicles involved were large (150-300 nm diameter), smooth-walled, and not associated with clathrin. Unlike classical phagocytosis, virus uptake in these vesicles was a microfilament-independent process, as it was not blocked by cytochalasins. LCMV entry into rodent fibroblast cell lines thus involves viropexis in large smooth-walled vesicles, followed by a pH-dependent fusion event inside the cell.

  20. Portfolio optimization with mean-variance model

    NASA Astrophysics Data System (ADS)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

    Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk and can achieve the target rate of return. The mean-variance model has been proposed in portfolio optimization. The mean-variance model is an optimization model that aims to minimize the portfolio risk which is the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consists of weekly returns of 20 component stocks of FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the portfolio composition of the stocks is different. Moreover, investors can get the return at minimum level of risk with the constructed optimal mean-variance portfolio.
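
    The equality-constrained mean-variance problem described above (minimize portfolio variance subject to a target return and fully invested weights) has a closed-form solution via Lagrange multipliers. Below is a minimal sketch with hypothetical three-asset data, short selling allowed; it is not the paper's FBMKLCI data or method details:

```python
import numpy as np

def min_variance_portfolio(mu, cov, target_return):
    """Markowitz minimum-variance weights: minimize w' cov w subject to
    w' mu = target_return and sum(w) = 1 (short selling allowed)."""
    ones = np.ones(len(mu))
    inv = np.linalg.inv(cov)
    # 2x2 linear system arising from the two Lagrange-multiplier constraints
    A = np.array([[mu @ inv @ mu, mu @ inv @ ones],
                  [ones @ inv @ mu, ones @ inv @ ones]])
    lam = np.linalg.solve(A, np.array([target_return, 1.0]))
    return inv @ (lam[0] * mu + lam[1] * ones)

# hypothetical weekly mean returns and covariance for three assets
mu = np.array([0.002, 0.004, 0.003])
cov = np.array([[0.0010, 0.0002, 0.0001],
                [0.0002, 0.0020, 0.0003],
                [0.0001, 0.0003, 0.0015]])
w = min_variance_portfolio(mu, cov, target_return=0.003)
print(w, w.sum(), w @ mu)  # weights sum to 1 and meet the target return
```

    By construction the returned weights satisfy both constraints exactly; adding a no-short-selling constraint would require a quadratic programming solver instead of the closed form.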

  1. CD70-deficiency impairs effector CD8 T cell generation and viral clearance but is dispensable for the recall response to LCMV

    PubMed Central

    Munitic, Ivana; Kuka, Mirela; Allam, Atef; Scoville, Jonathan P.; Ashwell, Jonathan D.

    2012-01-01

    CD27 interactions with its ligand, CD70, are thought to be necessary for optimal primary and memory adaptive immune responses to a variety of pathogens. Thus far all studies addressing the function of the CD27-CD70 axis have been performed either in mice lacking CD27, overexpressing CD70, or in which these receptors were blocked or mimicked by antibodies or recombinant soluble CD70. Because these methods have in some cases led to divergent results, we generated CD70-deficient mice to directly assess its role in vivo. We find that lack of CD70-mediated stimulation during primary responses to LCMV lowered the magnitude of CD8 antigen-specific T cell response, resulting in impaired viral clearance, without affecting CD4 T cell responses. Unexpectedly, CD70-CD27 costimulation was not needed for memory CD8 T cell generation or the ability to mount a recall response to LCMV. Adoptive transfers of wild type (WT) memory T cells into CD70−/− or WT hosts also showed no need for CD70-mediated stimulation during the course of the recall response. Moreover, CD70-expression by CD8 T cells could not rescue endogenous CD70−/− cells from defective expansion, arguing against a role for CD70-mediated T:T help in this model. Therefore, CD70 appears to be an important factor in the initiation of a robust and effective primary response but dispensable for CD8 T cell memory responses. PMID:23269247

  2. 76 FR 1145 - Alabama Power Company; Notice of Application for Amendment of License and Soliciting Comments...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-07

    ... drought-based temporary variance of the Martin Project rule curve and minimum flow releases at the Yates... requesting a drought- based temporary variance to the Martin Project rule curve. The rule curve variance...

  3. Methamphetamine mediates immune dysregulation in a murine model of chronic viral infection

    PubMed Central

    Sriram, Uma; Haldar, Bijayesh; Cenna, Jonathan M.; Gofman, Larisa; Potula, Raghava

    2015-01-01

    Methamphetamine (METH) is a highly addictive psychostimulant that not only affects the brain and cognitive functions but also greatly impacts the host immune system, rendering the body susceptible to infections and exacerbating the severity of disease. Although there is gathering evidence about METH abuse and increased incidence of HIV and other viral infections, not much is known about the effects on the immune system in a chronic viral infection setting. We have used the lymphocytic choriomeningitis virus (LCMV) chronic mouse model of viral infection in a chronic METH environment and demonstrate that METH significantly increases CD3 marker on splenocytes and programmed death-1 (PD-1) expression on T cells, a cell surface signaling molecule known to inhibit T cell function and cause exhaustion in a lymphoid organ. Many of these METH effects were more pronounced during early stage of infection, which are gradually attenuated during later stages of infection. An essential cytokine for T-lymphocyte homeostasis, Interleukin-2 (IL-2) in serum was prominently reduced in METH-exposed infected mice. In addition, the serum pro-inflammatory (TNF, IL12 p70, IL1β, IL-6, and KC-GRO) and Th2 (IL-2, IL-10, and IL-4) cytokine profiles were also altered in the presence of METH. Interestingly, CXCR3, an inflammatory chemokine receptor, showed a significant increase in the METH-treated LCMV-infected mice. Similarly, compared to infected-only mice, epidermal growth factor receptor (EGFR) was upregulated in METH-exposed LCMV-infected mice. Collectively, our data suggest that METH alters systemic, peripheral immune responses and modulates key markers on T cells involved in pathogenesis of chronic viral infection. PMID:26322025

  4. Evolutionary analysis of Old World arenaviruses reveals a major adaptive contribution of the viral polymerase.

    PubMed

    Pontremoli, Chiara; Forni, Diego; Cagliani, Rachele; Pozzoli, Uberto; Riva, Stefania; Bravo, Ignacio G; Clerici, Mario; Sironi, Manuela

    2017-10-01

    The Old World (OW) arenavirus complex includes several species of rodent-borne viruses, some of which (i.e., Lassa virus, LASV and Lymphocytic choriomeningitis virus, LCMV) cause human diseases. Most LCMV and LASV infections are caused by rodent-to-human transmissions. Thus, viral evolution is largely determined by events that occur in the wildlife reservoirs. We used a set of human- and rodent-derived viral sequences to investigate the evolutionary history underlying OW arenavirus speciation, as well as the more recent selective events that accompanied LASV spread in West Africa. We show that the viral RNA polymerase (L protein) was a major positive selection target in OW arenaviruses and during LASV out-of-Nigeria migration. No evidence of selection was observed for the glycoprotein, whereas positive selection acted on the nucleoprotein (NP) during LCMV speciation. Positively selected sites in L and NP are surrounded by highly conserved residues, and the bulk of the viral genome evolves under purifying selection. Several positively selected sites are likely to modulate viral replication/transcription. In both L and NP, structural features (solvent exposed surface area) are important determinants of site-wise evolutionary rate variation. By incorporating several rodent-derived sequences, we also performed an analysis of OW arenavirus codon adaptation to the human host. Results do not support a previously hypothesized role of codon adaptation in disease severity for non-Nigerian strains. In conclusion, L and NP represent the major selection targets and possible determinants of disease presentation; these results suggest that field surveys and experimental studies should primarily focus on these proteins. © 2017 John Wiley & Sons Ltd.

  5. Blockade of interferon Beta, but not interferon alpha, signaling controls persistent viral infection.

    PubMed

    Ng, Cherie T; Sullivan, Brian M; Teijaro, John R; Lee, Andrew M; Welch, Megan; Rice, Stephanie; Sheehan, Kathleen C F; Schreiber, Robert D; Oldstone, Michael B A

    2015-05-13

    Although type I interferon (IFN-I) is thought to be beneficial against microbial infections, persistent viral infections are characterized by high interferon signatures suggesting that IFN-I signaling may promote disease pathogenesis. During persistent lymphocytic choriomeningitis virus (LCMV) infection, IFNα and IFNβ are highly induced early after infection, and blocking IFN-I receptor (IFNAR) signaling promotes virus clearance. We assessed the specific roles of IFNβ versus IFNα in controlling LCMV infection. While blockade of IFNβ alone does not alter early viral dissemination, it is important in determining lymphoid structure, lymphocyte migration, and anti-viral T cell responses that lead to accelerated virus clearance, approximating what occurs during attenuation of IFNAR signaling. Comparatively, blockade of IFNα was not associated with improved viral control, but with early dissemination of virus. Thus, despite their use of the same receptor, IFNβ and IFNα have unique and distinguishable biologic functions, with IFNβ being mainly responsible for promoting viral persistence. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Superresolution SAR Imaging Algorithm Based on Mvm and Weighted Norm Extrapolation

    NASA Astrophysics Data System (ADS)

    Zhang, P.; Chen, Q.; Li, Z.; Tang, Z.; Liu, J.; Zhao, L.

    2013-08-01

    In this paper, we present an extrapolation approach, which uses minimum weighted norm constraint and minimum variance spectrum estimation, for improving synthetic aperture radar (SAR) resolution. Minimum variance method is a robust high resolution method to estimate spectrum. Based on the theory of SAR imaging, the signal model of SAR imagery is analyzed to be feasible for using data extrapolation methods to improve the resolution of SAR image. The method is used to extrapolate the efficient bandwidth in phase history field and better results are obtained compared with adaptive weighted norm extrapolation (AWNE) method and traditional imaging method using simulated data and actual measured data.
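
    The minimum variance (Capon) spectrum estimator underlying MVM can be sketched as follows. The autocorrelation-matrix order, regularization constant, and test signal are illustrative assumptions, not details from the paper:

```python
import numpy as np

def capon_spectrum(x, order, freqs):
    """Minimum variance (Capon) spectral estimate:
    P(f) = 1 / (a(f)^H R^-1 a(f)), where R is the sample autocorrelation
    matrix built from lagged snapshots of the signal."""
    N = len(x)
    snaps = np.array([x[i:i + order] for i in range(N - order + 1)])
    R = snaps.conj().T @ snaps / len(snaps)
    Rinv = np.linalg.inv(R + 1e-9 * np.eye(order))  # light regularization
    n = np.arange(order)
    P = np.empty(len(freqs))
    for j, f in enumerate(freqs):
        a = np.exp(2j * np.pi * f * n)  # steering vector at frequency f
        P[j] = 1.0 / np.real(a.conj() @ Rinv @ a)
    return P

# sanity check: a sinusoid at normalized frequency 0.2 in light noise
rng = np.random.default_rng(0)
t = np.arange(256)
x = np.cos(2 * np.pi * 0.2 * t) + 0.05 * rng.standard_normal(t.size)
freqs = np.linspace(0.05, 0.45, 81)
P = capon_spectrum(x, order=24, freqs=freqs)
print(freqs[np.argmax(P)])  # spectral peak lands at (or next to) 0.2
```

    The data-adaptive weighting is what gives Capon-type estimators sharper peaks than the periodogram; the SAR application in the abstract couples this estimate with weighted-norm extrapolation of the phase history, which is not shown here.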

  7. Analysis and application of minimum variance discrete time system identification

    NASA Technical Reports Server (NTRS)

    Kaufman, H.; Kotob, S.

    1975-01-01

    An on-line minimum variance parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.
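
    As an illustration of on-line parameter identification of this general kind, here is a standard recursive least-squares sketch. It is not the paper's estimator: the paper's formulation also handles multiplicative noise and proves mean square convergence and consistency, none of which is captured by this simpler update:

```python
import numpy as np

def rls_identify(phi, y, lam=1.0, delta=100.0):
    """Recursive least-squares parameter identification.
    phi: (N, p) regressor rows; y: (N,) observations;
    lam: forgetting factor; delta: initial covariance scale."""
    p = phi.shape[1]
    theta = np.zeros(p)          # parameter estimate
    P = delta * np.eye(p)        # covariance of the estimate
    for phi_k, y_k in zip(phi, y):
        k = P @ phi_k / (lam + phi_k @ P @ phi_k)   # gain vector
        theta = theta + k * (y_k - phi_k @ theta)   # innovation update
        P = (P - np.outer(k, phi_k @ P)) / lam      # covariance update
    return theta

# identify y = 2*u1 - 3*u2 from noisy data
rng = np.random.default_rng(1)
phi = rng.standard_normal((500, 2))
y = phi @ np.array([2.0, -3.0]) + 0.01 * rng.standard_normal(500)
theta = rls_identify(phi, y)
print(theta)  # close to [2, -3]
```

    Each observation refines both the estimate and its covariance, which is the property that makes this family of identifiers usable inside an adaptive state and parameter estimation loop.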

  8. Synthesis of correlation filters: a generalized space-domain approach for improved filter characteristics

    NASA Astrophysics Data System (ADS)

    Sudharsanan, Subramania I.; Mahalanobis, Abhijit; Sundareshan, Malur K.

    1990-12-01

    Discrete frequency domain design of Minimum Average Correlation Energy filters for optical pattern recognition introduces an implementational limitation of circular correlation. An alternative methodology which uses space domain computations to overcome this problem is presented. The technique is generalized to construct an improved synthetic discriminant function which satisfies the conflicting requirements of reduced noise variance and sharp correlation peaks to facilitate ease of detection. A quantitative evaluation of the performance characteristics of the new filter is conducted and is shown to compare favorably with the well known Minimum Variance Synthetic Discriminant Function and the space domain Minimum Average Correlation Energy filter, which are special cases of the present design.

  9. H-2 compatibility requirement for virus-specific T-cell-mediated cytolysis. Evaluation of the role of H-2I region and non-H-2 genes in regulating immune response

    PubMed Central

    1976-01-01

    Lymphocytic choriomeningitis virus (LCMV) and ectromelia virus-specific T-cell-mediated cytotoxicity was assayed in various strain combinations using as targets peritoneal macrophages which have been shown to express Ia antigens. Virus-specific cytotoxicity was found only in H-2K- or D-region compatible combinations. I-region compatibility was not necessary nor alone sufficient for lysis. Six different I-region specificities had no obvious effect on the capacity to generate in vivo specific cytotoxicity (expressed in vitro) associated with Dd. Low LCMV- specific cytotoxic activity generated in DBA/2 mice was caused by the non-H-2 genetic background. This trait was inversely related to the infectious virus dose and recessive. Non-H-2 genes, possibly involved in controlling initial spread and multiplication of virus, seem to be, at least in the examples tested, more important in determining virus- specific cytotoxic T-cell activity in spleens than are Ir genes coded in H-2. PMID:1085331

  10. Protective Capacity of Memory CD8+ T Cells is Dictated by Antigen Exposure History and Nature of the Infection

    PubMed Central

    Nolz, Jeffrey C.; Harty, John T.

    2011-01-01

    SUMMARY Infection or vaccination confers heightened resistance to pathogen re-challenge due to quantitative and qualitative differences between naïve and primary memory T cells. Herein, we show that secondary (boosted) memory CD8+ T cells were better than primary memory CD8+ T cells in controlling some, but not all acute infections with diverse pathogens. However, secondary memory CD8+ T cells were less efficient than an equal number of primary memory cells at preventing chronic LCMV infection and are more susceptible to functional exhaustion. Importantly, localization of memory CD8+ T cells within lymph nodes, which is reduced by antigen re-stimulation, was critical for both viral control in lymph nodes and for the sustained CD8+ T cell response required to prevent chronic LCMV infection. Thus, repeated antigen-stimulation shapes memory CD8+ T cell populations to either enhance or decrease per cell protective immunity in a pathogen-specific manner, a concept of importance in vaccine design against specific diseases. PMID:21549619

  11. The role of proteolytic processing and the stable signal peptide in expression of the Old World arenavirus envelope glycoprotein ectodomain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burri, Dominique J.; Pasquato, Antonella; Ramos da Palma, Joel

    2013-02-05

    Maturation of the arenavirus GP precursor (GPC) involves proteolytic processing by cellular signal peptidase and the proprotein convertase subtilisin kexin isozyme 1 (SKI-1)/site 1 protease (S1P), yielding a tripartite complex comprised of a stable signal peptide (SSP), the receptor-binding GP1, and the fusion-active transmembrane GP2. Here we investigated the roles of SKI-1/S1P processing and SSP in the biosynthesis of the recombinant GP ectodomains of lymphocytic choriomeningitis virus (LCMV) and Lassa virus (LASV). When expressed in mammalian cells, the LCMV and LASV GP ectodomains underwent processing by SKI-1/S1P, followed by dissociation of GP1 from GP2. The GP2 ectodomain spontaneously formed trimers, as revealed by chemical cross-linking. The endogenous SSP, known to be crucial for maturation and transport of full-length arenavirus GPC, was dispensable for processing and secretion of the soluble GP ectodomain, suggesting a specific role of SSP in the stable prefusion conformation and transport of full-length GPC.

  12. Antiviral activity of NK 1.1+ natural killer cells in C57BL/6 scid mice infected with murine cytomegalovirus.

    PubMed

    Welsh, R M; O'Donnell, C L; Shultz, L D

    1994-01-01

    The activation, proliferation, and antiviral effects of natural killer (NK) cells were examined in a newly developed stock of mice, C57BL/6JSz mice homozygous for the severe combined immunodeficiency (scid) mutation. These mice lack functional T and B cells and express the NK 1.1 alloantigen. Such NK 1.1 expression facilitates the analysis of NK cells and their depletion in vivo with a monoclonal anti-NK 1.1 antibody. These mice, therefore, provide an excellent model to examine unambiguously the interactions between viral infections and NK cells in a system devoid of adaptive immune response mechanisms. Here we show that murine cytomegalovirus (MCMV) and lymphocytic choriomeningitis virus (LCMV) infections resulted in profound levels of NK cell activation. NK cells also proliferated greatly in response to LCMV but generally to a lesser degree in response to MCMV. Depletion of the NK cell activity in vivo caused substantial increases in MCMV synthesis and MCMV-induced pathology. These results further support the concept that NK cells are major regulators of MCMV pathogenesis.

  13. A comparison of maximum likelihood and other estimators of eigenvalues from several correlated Monte Carlo samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beer, M.

    1980-12-01

    The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.
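
    For the multivariate normal model, the maximum likelihood combination of several correlated estimates of the same quantity reduces to a generalized-least-squares weighted mean. A sketch with made-up eigenvalue estimates and covariance (not the SAM-CE/VIM numbers):

```python
import numpy as np

def blue_combine(estimates, cov):
    """Minimum-variance unbiased linear combination of correlated
    estimates of one quantity: weights w = C^-1 1 / (1' C^-1 1),
    combined variance 1 / (1' C^-1 1)."""
    ones = np.ones(len(estimates))
    cinv_1 = np.linalg.solve(cov, ones)     # C^-1 1 without explicit inverse
    w = cinv_1 / (ones @ cinv_1)            # weights sum to 1 (unbiasedness)
    return w @ estimates, 1.0 / (ones @ cinv_1)

# two correlated eigenvalue estimates (hypothetical values)
x = np.array([1.002, 0.998])
C = np.array([[4e-6, 1e-6],
              [1e-6, 2e-6]])
mean, var = blue_combine(x, C)
print(mean, var)  # combined variance falls below the better input (2e-6)
```

    When sample variances and correlations replace the population values, as the abstract notes, the same formula is used and remains nearly minimum variance for sufficiently many histories.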

  14. Applications of active adaptive noise control to jet engines

    NASA Technical Reports Server (NTRS)

    Shoureshi, Rahmat; Brackney, Larry

    1993-01-01

    During phase 2 research on the application of active noise control to jet engines, the development of multiple-input/multiple-output (MIMO) active adaptive noise control algorithms and acoustic/controls models for turbofan engines were considered. Specific goals for this research phase included: (1) implementation of a MIMO adaptive minimum variance active noise controller; and (2) turbofan engine model development. A minimum variance control law for adaptive active noise control has been developed, simulated, and implemented for single-input/single-output (SISO) systems. Since acoustic systems tend to be distributed, multiple sensors, and actuators are more appropriate. As such, the SISO minimum variance controller was extended to the MIMO case. Simulation and experimental results are presented. A state-space model of a simplified gas turbine engine is developed using the bond graph technique. The model retains important system behavior, yet is of low enough order to be useful for controller design. Expansion of the model to include multiple stages and spools is also discussed.

  15. Large amplitude MHD waves upstream of the Jovian bow shock

    NASA Technical Reports Server (NTRS)

    Goldstein, M. L.; Smith, C. W.; Matthaeus, W. H.

    1983-01-01

    Observations of large amplitude magnetohydrodynamics (MHD) waves upstream of Jupiter's bow shock are analyzed. The waves are found to be right circularly polarized in the solar wind frame which suggests that they are propagating in the fast magnetosonic mode. A complete spectral and minimum variance eigenvalue analysis of the data was performed. The power spectrum of the magnetic fluctuations contains several peaks. The fluctuations at 2.3 mHz have a direction of minimum variance along the direction of the average magnetic field. The direction of minimum variance of these fluctuations lies at approximately 40 deg. to the magnetic field and is parallel to the radial direction. We argue that these fluctuations are waves excited by protons reflected off the Jovian bow shock. The inferred speed of the reflected protons is about two times the solar wind speed in the plasma rest frame. A linear instability analysis is presented which suggests an explanation for many of the observed features of the observations.
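
    The minimum variance eigenvalue analysis used here reduces to diagonalizing the 3x3 covariance matrix of the measured field: the eigenvector with the smallest eigenvalue is the minimum variance direction. A sketch with a synthetic field (the function name and test signal are illustrative, not the Jovian data):

```python
import numpy as np

def minimum_variance_direction(B):
    """Minimum variance analysis of a field time series: eigen-decompose
    the covariance matrix; the eigenvector belonging to the smallest
    eigenvalue is the minimum variance direction (candidate wave normal).
    B: (N, 3) array of field samples."""
    M = np.cov(B, rowvar=False)           # 3x3 covariance of the components
    eigvals, eigvecs = np.linalg.eigh(M)  # eigenvalues in ascending order
    return eigvals, eigvecs[:, 0]         # smallest-variance eigenvector

# synthetic field: circular fluctuations confined to the x-y plane,
# so the minimum variance direction should come out along z
rng = np.random.default_rng(2)
t = np.linspace(0, 20 * np.pi, 2000)
B = np.column_stack([np.cos(t) + 0.01 * rng.standard_normal(t.size),
                     np.sin(t) + 0.01 * rng.standard_normal(t.size),
                     0.01 * rng.standard_normal(t.size)])
vals, n = minimum_variance_direction(B)
print(np.abs(n))  # approximately [0, 0, 1]
```

    Comparing this direction with the mean-field and radial directions is what lets studies like the one above classify the fluctuations as field-aligned or obliquely propagating waves.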

  16. Modeling Multiplicative Error Variance: An Example Predicting Tree Diameter from Stump Dimensions in Baldcypress

    Treesearch

    Bernard R. Parresol

    1993-01-01

    In the context of forest modeling, it is often reasonable to assume a multiplicative heteroscedastic error structure to the data. Under such circumstances ordinary least squares no longer provides minimum variance estimates of the model parameters. Through study of the error structure, a suitable error variance model can be specified and its parameters estimated. This...

  17. Minimum number of measurements for evaluating soursop (Annona muricata L.) yield.

    PubMed

    Sánchez, C F B; Teodoro, P E; Londoño, S; Silva, L A; Peixoto, L A; Bhering, L L

    2017-05-31

    Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of soursop (Annona muricata L.) genotypes based on fruit yield. Sixteen measurements of fruit yield from 71 soursop genotypes were carried out between 2000 and 2016. In order to estimate r with the best accuracy, four procedures were used: analysis of variance, principal component analysis based on the correlation matrix, principal component analysis based on the phenotypic variance and covariance matrix, and structural analysis based on the correlation matrix. The minimum number of measurements needed to predict the actual value of individuals was estimated. Principal component analysis using the phenotypic variance and covariance matrix provided the most accurate estimates of both r and the number of measurements required for accurate evaluation of fruit yield in soursop. Our results indicate that selection of soursop genotypes with high fruit yield can be performed based on the third and fourth measurements in the early years and/or based on the eighth and ninth measurements at more advanced stages.
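
    The ANOVA-based estimate of the repeatability coefficient, the first of the four procedures compared above, can be sketched as follows. The data are simulated yields, not the soursop measurements, and the required-measurements step uses the standard Spearman-Brown-type prediction:

```python
import numpy as np

def repeatability_anova(data):
    """Repeatability coefficient from one-way ANOVA:
    r = (MSg - MSe) / (MSg + (k - 1) * MSe),
    data: (g genotypes, k repeated measurements)."""
    g, k = data.shape
    grand = data.mean()
    msg = k * ((data.mean(axis=1) - grand) ** 2).sum() / (g - 1)
    mse = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (g * (k - 1))
    return (msg - mse) / (msg + (k - 1) * mse)

def n_measurements(r, target_r2=0.9):
    """Minimum measurements needed to reach a desired coefficient of
    determination, via the Spearman-Brown-type prediction."""
    return int(np.ceil(target_r2 * (1 - r) / (r * (1 - target_r2))))

# simulated yields: genotype effect sd = 2, measurement error sd = 1,
# so the true repeatability is 4 / (4 + 1) = 0.8
rng = np.random.default_rng(3)
geno = rng.normal(0, 2, size=(50, 1))
data = 10 + geno + rng.normal(0, 1, size=(50, 8))
r = repeatability_anova(data)
print(r, n_measurements(r))
```

    The principal-component variants favored by the paper differ in how the between-genotype variance is extracted, but the prediction of the required number of measurements uses r in the same way.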

  18. Analysis of 20 magnetic clouds at 1 AU during a solar minimum

    NASA Astrophysics Data System (ADS)

    Gulisano, A. M.; Dasso, S.; Mandrini, C. H.; Démoulin, P.

    We study 20 magnetic clouds, observed in situ by the spacecraft Wind, at the Lagrangian point L1, from 22 August, 1995, to 7 November, 1997. In previous works, assuming a cylindrical symmetry for the local magnetic configuration and a satellite trajectory crossing the axis of the cloud, we obtained their orientations using a minimum variance analysis. In this work we compute the orientations and magnetic configurations using a non-linear simultaneous fit of the geometric and physical parameters for a linear force-free model, including the possibility of a non-zero impact parameter. We quantify global magnitudes such as the relative magnetic helicity per unit length and compare the values found with both methods (minimum variance and the simultaneous fit). FULL TEXT IN SPANISH

  19. Timing and magnitude of type I interferon responses by distinct sensors impact CD8 T cell exhaustion and chronic viral infection

    PubMed Central

    Wang, Yaming; Swiecki, Melissa; Cella, Marina; Alber, Gottfried; Schreiber, Robert D; Gilfillan, Susan; Colonna, Marco

    2013-01-01

    Summary Type I Interferons (IFN-I) promote antiviral CD8+T cell responses, but the contribution of different IFN-I sources and signaling pathways are ill-defined. While plasmacytoid dendritic cells (pDCs) produce IFN-I upon TLR stimulation, IFN-I are induced in most cells by helicases like MDA5. Using acute and chronic lymphocytic choriomeningitis virus (LCMV) infection models, we determined that pDCs transiently produce IFN-I that minimally impacts CD8+T cell responses and viral persistence. Rather, MDA5 is the key sensor that induces IFN-I required for CD8+T cell responses. In the absence of MDA5, CD8+T cell responses to acute infection rely on CD4+T cell help, and loss of both CD4+T cells and MDA5 results in CD8+T cell exhaustion and persistent infection. Chronic LCMV infection rapidly attenuates IFN-I responses, but early administration of exogenous IFN-I rescues CD8+T cells, promoting viral clearance. Thus, effective antiviral CD8+T cell responses depend on the timing and magnitude of IFN-I responses. PMID:22704623

  20. Timing and magnitude of type I interferon responses by distinct sensors impact CD8 T cell exhaustion and chronic viral infection.

    PubMed

    Wang, Yaming; Swiecki, Melissa; Cella, Marina; Alber, Gottfried; Schreiber, Robert D; Gilfillan, Susan; Colonna, Marco

    2012-06-14

    Type I interferon (IFN-I) promotes antiviral CD8(+)T cell responses, but the contribution of different IFN-I sources and signaling pathways are ill defined. While plasmacytoid dendritic cells (pDCs) produce IFN-I upon TLR stimulation, IFN-I is induced in most cells by helicases like MDA5. Using acute and chronic lymphocytic choriomeningitis virus (LCMV) infection models, we determined that pDCs transiently produce IFN-I that minimally impacts CD8(+)T cell responses and viral persistence. Rather, MDA5 is the key sensor that induces IFN-I required for CD8(+)T cell responses. In the absence of MDA5, CD8(+)T cell responses to acute infection rely on CD4(+)T cell help, and loss of both CD4(+)T cells and MDA5 results in CD8(+)T cell exhaustion and persistent infection. Chronic LCMV infection rapidly attenuates IFN-I responses, but early administration of exogenous IFN-I rescues CD8(+)T cells, promoting viral clearance. Thus, effective antiviral CD8(+)T cell responses depend on the timing and magnitude of IFN-I production. Copyright © 2012 Elsevier Inc. All rights reserved.

  1. Anti-IFNγ and peptide-tolerization therapies inhibit acute lung injury induced by crossreactive influenza-A (IAV)-specific memory T-cells

    PubMed Central

    Wlodarczyk, Myriam F.; Kraft, Anke R.; Chen, Hong D.; Kenney, Laurie L.; Selin, Liisa K.

    2013-01-01

    Viral infections have variable outcomes with severe disease occurring in only few individuals. We hypothesized that this variable outcome could correlate with the nature of responses made to previous microbes. To test this, mice were infected initially with IAV and in memory-phase challenged with LCMV, which we show here to have relatively minor cross-reactivity with IAV. The outcome in genetically identical mice varied from mild pneumonitis to severe acute lung injury with extensive pneumonia and bronchiolization, similar to that observed in patients that died of the 1918 H1N1 pandemic. Lesion expression did not correlate with virus titers. Instead, disease severity directly correlated with and was predicted by the frequency of IAV-PB1703- and -PA224-specific responses, which crossreacted with LCMV-GP34 and -GP276, respectively. Eradication or functional ablation of these pathogenic memory T-cell populations, using mutant-viral strains, peptide-based tolerization strategies, or short-term anti-IFNγ treatment inhibited severe lesions such as bronchiolization from occurring. Heterologous immunity can shape outcome of infections and likely individual responses to vaccination, and can be manipulated to treat or prevent severe pathology. PMID:23408839

  2. Interferons direct Th2 cell reprogramming to generate a stable GATA-3(+)T-bet(+) cell subset with combined Th2 and Th1 cell functions.

    PubMed

    Hegazy, Ahmed N; Peine, Michael; Helmstetter, Caroline; Panse, Isabel; Fröhlich, Anja; Bergthaler, Andreas; Flatz, Lukas; Pinschewer, Daniel D; Radbruch, Andreas; Löhning, Max

    2010-01-29

    Current T cell differentiation models invoke separate T helper 2 (Th2) and Th1 cell lineages governed by the lineage-specifying transcription factors GATA-3 and T-bet. However, knowledge on the plasticity of Th2 cell lineage commitment is limited. Here we show that infection with Th1 cell-promoting lymphocytic choriomeningitis virus (LCMV) reprogrammed otherwise stably committed GATA-3(+) Th2 cells to adopt a GATA-3(+)T-bet(+) and interleukin-4(+)interferon-gamma(+) "Th2+1" phenotype that was maintained in vivo for months. Th2 cell reprogramming required T cell receptor stimulation, concerted type I and type II interferon and interleukin-12 signals, and T-bet. LCMV-triggered T-bet induction in adoptively transferred virus-specific Th2 cells was crucial to prevent viral persistence and fatal immunopathology. Thus, functional reprogramming of unfavorably differentiated Th2 cells may facilitate the establishment of protective immune responses. Stable coexpression of GATA-3 and T-bet provides a molecular concept for the long-term coexistence of Th2 and Th1 cell lineage characteristics in single memory T cells. Copyright 2010 Elsevier Inc. All rights reserved.

  3. Estimation of transformation parameters for microarray data.

    PubMed

    Durbin, Blythe; Rocke, David M

    2003-07-22

    Durbin et al. (2002), Huber et al. (2002) and Munson (2001) independently introduced a family of transformations (the generalized-log family) which stabilizes the variance of microarray data up to the first order. We introduce a method for estimating the transformation parameter in tandem with a linear model based on the procedure outlined in Box and Cox (1964). We also discuss means of finding transformations within the generalized-log family which are optimal under other criteria, such as minimum residual skewness and minimum mean-variance dependency. R and Matlab code and test data are available from the authors on request.
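
    The generalized-log transform referred to here has the form ln(y + sqrt(y^2 + lambda)). A minimal sketch with an arbitrary lambda; in the paper lambda is a transformation parameter estimated from the data in tandem with a linear model, which is not reproduced here:

```python
import numpy as np

def glog(y, lam):
    """Generalized-log transform ln(y + sqrt(y^2 + lam)), which
    stabilizes variance (to first order) for data with combined
    additive and multiplicative error."""
    return np.log(y + np.sqrt(y ** 2 + lam))

# behavior check: for y >> sqrt(lam) it approaches ln(2y) (log-like),
# while at y = 0 it stays finite (unlike the plain log)
y = np.array([0.0, 1.0, 1000.0])
out = glog(y, lam=100.0)
print(out)
```

    The finite value at zero is what makes the transform usable for low-intensity microarray spots where a plain log transform diverges.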

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luis, Alfredo

    The use of Renyi entropy as an uncertainty measure alternative to variance leads to the study of states with quantum fluctuations below the levels established by Gaussian states, which are the position-momentum minimum uncertainty states according to variance. We examine the quantum properties of states with exponential wave functions, which combine reduced fluctuations with practical feasibility.

  5. Maximum Likelihood and Minimum Distance Applied to Univariate Mixture Distributions.

    ERIC Educational Resources Information Center

    Wang, Yuh-Yin Wu; Schafer, William D.

    This Monte-Carlo study compared modified Newton (NW), expectation-maximization algorithm (EM), and minimum Cramer-von Mises distance (MD), used to estimate parameters of univariate mixtures of two components. Data sets were fixed at size 160 and manipulated by mean separation, variance ratio, component proportion, and non-normality. Results…

  6. A Minimum Variance Algorithm for Overdetermined TOA Equations with an Altitude Constraint.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romero, Louis A; Mason, John J.

We present a direct (non-iterative) method for solving for the location of a radio frequency (RF) emitter, or an RF navigation receiver, using four or more time of arrival (TOA) measurements and an assumed altitude above an ellipsoidal earth. Both the emitter tracking problem and the navigation application are governed by the same equations, but with slightly different interpretations of several variables. We treat the assumed altitude as a soft constraint with a specified noise level, just as the TOA measurements are handled with their respective noise levels. With 4 or more TOA measurements and the assumed altitude, the problem is overdetermined and is solved in the weighted least squares sense for the 4 unknowns, the 3-dimensional position and time. We call the new technique the TAQMV (TOA Altitude Quartic Minimum Variance) algorithm, and it achieves the minimum possible error variance for given levels of TOA and altitude estimate noise. The method algebraically produces four solutions: the least-squares solution and potentially three other low-residual solutions, if they exist. In the lightly overdetermined cases where multiple local minima in the residual error surface are more likely to occur, this algebraic approach can produce all of the minima even when an iterative approach fails to converge. Algorithm performance in terms of solution error variance and divergence rate for the baseline (iterative) and proposed approaches is given in tables.
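The paper's central device, treating the assumed altitude exactly like a TOA measurement with its own noise level, amounts to appending one extra weighted row to an overdetermined least-squares system. A generic sketch follows; the matrix A, the noise levels, and the constrained coordinate are made-up stand-ins for the linearized TOA geometry, not the TAQMV quartic algebra:

```python
import numpy as np

# Hypothetical linearized system A x ~ b from four TOA equations,
# with per-measurement noise standard deviations sigma.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))            # stand-in for the TOA geometry
x_true = np.array([1.0, -2.0, 0.5, 3.0])
b = A @ x_true                          # noise-free for the sketch
sigma = np.array([0.1, 0.1, 0.1, 0.1])

# Soft altitude constraint: one extra row, handled exactly like a
# measurement with its own noise level (the paper's key idea).
a_alt = np.array([0.0, 0.0, 1.0, 0.0])  # constrains the 3rd unknown
b_alt = x_true[2]
sigma_alt = 0.05

A_aug = np.vstack([A, a_alt])
b_aug = np.append(b, b_alt)
W = np.diag(1.0 / np.concatenate([sigma, [sigma_alt]])**2)

# Weighted least squares: x_hat = (A^T W A)^{-1} A^T W b
x_hat = np.linalg.solve(A_aug.T @ W @ A_aug, A_aug.T @ W @ b_aug)
```

With noise-free data the augmented system is consistent, so the weighted solution recovers x_true exactly; with noisy data the weights trade off TOA accuracy against confidence in the assumed altitude.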

  7. Dampened antiviral immunity to intravaginal exposure to RNA viral pathogens allows enhanced viral replication

    PubMed Central

    Woodruff, Erik M.; Trapecar, Martin; Fontaine, Krystal A.; Ezaki, Ashley; Ott, Melanie

    2016-01-01

    Understanding the host immune response to vaginal exposure to RNA viruses is required to combat sexual transmission of this class of pathogens. In this study, using lymphocytic choriomeningitis virus (LCMV) and Zika virus (ZIKV) in wild-type mice, we show that these viruses replicate in the vaginal mucosa with minimal induction of antiviral interferon and inflammatory response, causing dampened innate-mediated control of viral replication and a failure to mature local antigen-presenting cells (APCs). Enhancement of innate-mediated inflammation in the vaginal mucosa rescues this phenotype and completely inhibits ZIKV replication. To gain a better understanding of how this dampened innate immune activation in the lower female reproductive tract may also affect adaptive immunity, we modeled CD8 T cell responses using vaginal LCMV infection. We show that the lack of APC maturation in the vaginal mucosa leads to a delay in CD8 T cell activation in the draining lymph node and hinders the timely appearance of effector CD8 T cells in vaginal mucosa, thus further delaying viral control in this tissue. Our study demonstrates that vaginal tissue is exceptionally vulnerable to infection by RNA viruses and provides a conceptual framework for the male to female sexual transmission observed during ZIKV infection. PMID:27852793

  8. Conserved residues in Lassa fever virus Z protein modulate viral infectivity at the level of the ribonucleoprotein.

    PubMed

    Capul, Althea A; de la Torre, Juan Carlos; Buchmeier, Michael J

    2011-04-01

    Arenaviruses are negative-strand RNA viruses that cause human diseases such as lymphocytic choriomeningitis, Bolivian hemorrhagic fever, and Lassa hemorrhagic fever. No licensed vaccines exist, and current treatment is limited to ribavirin. The prototypic arenavirus, lymphocytic choriomeningitis virus (LCMV), is a model for dissecting virus-host interactions in persistent and acute disease. The RING finger protein Z has been identified as the driving force of arenaviral budding and acts as the viral matrix protein. While residues in Z required for viral budding have been described, residues that govern the Z matrix function(s) have yet to be fully elucidated. Because this matrix function is integral to viral assembly, we reasoned that this would be reflected in sequence conservation. Using sequence alignment, we identified several conserved residues in Z outside the RING and late domains. Nine residues were each mutated to alanine in Lassa fever virus Z. All of the mutations affected the expression of an LCMV minigenome and the infectivity of virus-like particles, but to greatly varying degrees. Interestingly, no mutations appeared to affect Z-mediated budding or association with viral GP. Our findings provide direct experimental evidence supporting a role for Z in the modulation of the activity of the viral ribonucleoprotein (RNP) complex and its packaging into mature infectious viral particles.

  9. A method for minimum risk portfolio optimization under hybrid uncertainty

    NASA Astrophysics Data System (ADS)

    Egorova, Yu E.; Yazenin, A. V.

    2018-03-01

    In this paper, we investigate a minimum risk portfolio model under hybrid uncertainty when the profitability of financial assets is described by fuzzy random variables. According to Feng, the variance of a portfolio is defined as a crisp value. To aggregate fuzzy information the weakest (drastic) t-norm is used. We construct an equivalent stochastic problem of the minimum risk portfolio model and specify the stochastic penalty method for solving it.

  10. Kalman filter for statistical monitoring of forest cover across sub-continental regions [Symposium

    Treesearch

    Raymond L. Czaplewski

    1991-01-01

The Kalman filter is a generalization of the composite estimator. The univariate composite estimate combines two prior estimates of a population parameter with a weighted average in which each scalar weight is inversely proportional to the corresponding variance. The composite estimator is a minimum variance estimator that requires no distributional assumptions other than estimates of the...
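The inverse-variance weighting behind the composite estimate is compact enough to write out directly; the numbers below are illustrative, not forest-cover data:

```python
def composite(x1, var1, x2, var2):
    """Minimum-variance combination of two independent estimates:
    each estimate is weighted inversely to its variance."""
    w1 = var2 / (var1 + var2)               # weight on x1
    x = w1 * x1 + (1.0 - w1) * x2
    var = (var1 * var2) / (var1 + var2)     # variance of the combination
    return x, var

# The more precise estimate (variance 1.0) dominates the average
x, v = composite(10.0, 4.0, 14.0, 1.0)
```

The combined variance (0.8 here) is always below the smaller input variance, which is the minimum-variance property the abstract refers to.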

  11. Solving portfolio selection problems with minimum transaction lots based on conditional-value-at-risk

    NASA Astrophysics Data System (ADS)

    Setiawan, E. P.; Rosadi, D.

    2017-01-01

Portfolio selection conventionally means ‘minimizing the risk, given a certain level of return’ from some financial assets. This problem is frequently solved with quadratic or linear programming methods, depending on the risk measure used in the objective function. However, the solutions obtained by these methods are real numbers, which may cause problems in real applications because each asset usually has a minimum transaction lot. Classical approaches considering minimum transaction lots were developed based on linear mean absolute deviation (MAD), variance (as in Markowitz’s model), and semi-variance as risk measures. In this paper we investigate portfolio selection with minimum transaction lots using conditional value at risk (CVaR) as the risk measure. The mean-CVaR methodology involves only the part of the tail of the distribution that contributes to high losses, which makes it better suited to non-symmetric return distributions. Solutions of this model can be found with genetic algorithm (GA) methods. We provide real examples using stocks from the Indonesian stock market.
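For reference, CVaR itself reduces to averaging the worst tail of the loss distribution. A minimal empirical sketch follows; the estimator and the 95% level are illustrative, and the paper's actual contribution, the integer lot-size constraint solved by a GA, is not shown:

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Conditional value-at-risk: the average of the worst (1 - alpha)
    fraction of losses (losses = negative returns)."""
    losses = np.sort(np.asarray(losses, dtype=float))
    k = int(np.ceil(alpha * len(losses)))   # index of the VaR quantile
    return losses[k:].mean() if k < len(losses) else losses[-1]

# Toy loss sample 1..100: the worst 5% are the losses 96..100
losses = np.arange(1.0, 101.0)
c = cvar(losses, alpha=0.95)
```

On this toy sample the 95% CVaR is the mean of the five largest losses, 98.0, whereas the 95% VaR alone would report only the quantile value 95.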

  12. Movement trajectory smoothness is not associated with the endpoint accuracy of rapid multi-joint arm movements in young and older adults

    PubMed Central

    Poston, Brach; Van Gemmert, Arend W.A.; Sharma, Siddharth; Chakrabarti, Somesh; Zavaremi, Shahrzad H.; Stelmach, George

    2013-01-01

    The minimum variance theory proposes that motor commands are corrupted by signal-dependent noise and smooth trajectories with low noise levels are selected to minimize endpoint error and endpoint variability. The purpose of the study was to determine the contribution of trajectory smoothness to the endpoint accuracy and endpoint variability of rapid multi-joint arm movements. Young and older adults performed arm movements (4 blocks of 25 trials) as fast and as accurately as possible to a target with the right (dominant) arm. Endpoint accuracy and endpoint variability along with trajectory smoothness and error were quantified for each block of trials. Endpoint error and endpoint variance were greater in older adults compared with young adults, but decreased at a similar rate with practice for the two age groups. The greater endpoint error and endpoint variance exhibited by older adults were primarily due to impairments in movement extent control and not movement direction control. The normalized jerk was similar for the two age groups, but was not strongly associated with endpoint error or endpoint variance for either group. However, endpoint variance was strongly associated with endpoint error for both the young and older adults. Finally, trajectory error was similar for both groups and was weakly associated with endpoint error for the older adults. The findings are not consistent with the predictions of the minimum variance theory, but support and extend previous observations that movement trajectories and endpoints are planned independently. PMID:23584101

  13. Effects of important parameters variations on computing eigenspace-based minimum variance weights for ultrasound tissue harmonic imaging

    NASA Astrophysics Data System (ADS)

    Haji Heidari, Mehdi; Mozaffarzadeh, Moein; Manwar, Rayyan; Nasiriavanaki, Mohammadreza

    2018-02-01

In recent years, the minimum variance (MV) beamformer has been widely studied due to its high resolution and contrast in B-mode ultrasound imaging (USI). However, the performance of the MV beamformer degrades in the presence of noise, as a result of inaccurate covariance matrix estimation, which leads to a low-quality image. Second harmonic imaging (SHI) provides many advantages over conventional pulse-echo USI, such as enhanced axial and lateral resolution. However, low signal-to-noise ratio (SNR) is a major problem in SHI. In this paper, an eigenspace-based minimum variance (EIBMV) beamformer is employed for second harmonic USI. Tissue harmonic imaging (THI) is achieved by the pulse inversion (PI) technique. Using the EIBMV weights, instead of the MV ones, leads to reduced sidelobes and improved contrast without compromising the high resolution of the MV beamformer, even in the presence of strong noise. In addition, we have investigated the effects of variations of the important parameters in computing the EIBMV weights, i.e., K, L, and δ, on the resolution and contrast obtained in SHI. The results are evaluated using numerical data (point target and cyst phantoms), and proper EIBMV parameters are indicated for THI.
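The core of EIBMV can be sketched as the MV (Capon) weights projected onto the dominant eigen-subspace of the covariance matrix; directions dominated by noise are discarded. The synthetic covariance, steering vector, and subspace rank L below are illustrative assumptions, not the paper's beamforming chain:

```python
import numpy as np

def mv_weights(R, a):
    """Minimum-variance (Capon) weights: w = R^{-1} a / (a^H R^{-1} a)."""
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

def eibmv_weights(R, a, L):
    """Eigenspace-based MV: project the MV weights onto the subspace
    spanned by the L largest eigenvectors of R (the signal subspace)."""
    w = mv_weights(R, a)
    vals, vecs = np.linalg.eigh(R)   # eigenvalues in ascending order
    Es = vecs[:, -L:]                 # columns spanning the signal subspace
    return Es @ (Es.conj().T @ w)

# Synthetic 8-element covariance: one strong signal plus unit white noise
n = 8
a = np.ones(n) / np.sqrt(n)           # unit-norm steering vector
R = 10.0 * np.outer(a, a) + np.eye(n)
w = eibmv_weights(R, a, L=1)
```

The distortionless response in the look direction (w^H a = 1) survives the projection, which is why the approach keeps MV resolution while suppressing noise-driven sidelobes.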

  14. Hydraulic geometry of river cross sections; theory of minimum variance

    USGS Publications Warehouse

    Williams, Garnett P.

    1978-01-01

    This study deals with the rates at which mean velocity, mean depth, and water-surface width increase with water discharge at a cross section on an alluvial stream. Such relations often follow power laws, the exponents in which are called hydraulic exponents. The Langbein (1964) minimum-variance theory is examined in regard to its validity and its ability to predict observed hydraulic exponents. The variables used with the theory were velocity, depth, width, bed shear stress, friction factor, slope (energy gradient), and stream power. Slope is often constant, in which case only velocity, depth, width, shear and friction factor need be considered. The theory was tested against a wide range of field data from various geographic areas of the United States. The original theory was intended to produce only the average hydraulic exponents for a group of cross sections in a similar type of geologic or hydraulic environment. The theory does predict these average exponents with a reasonable degree of accuracy. An attempt to forecast the exponents at any selected cross section was moderately successful. Empirical equations are more accurate than the minimum variance, Gauckler-Manning, or Chezy methods. Predictions of the exponent of width are most reliable, the exponent of depth fair, and the exponent of mean velocity poor. (Woodard-USGS)
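The hydraulic exponents discussed above are the slopes of the log-log power laws v = kQ^m, d = cQ^f, w = aQ^b, and continuity (Q = w d v) forces b + f + m = 1. A sketch with synthetic at-a-station data follows; the exponent values and discharges are illustrative, not the study's field data:

```python
import numpy as np

# Hypothetical at-a-station data generated from power laws with
# exponents satisfying continuity: b + f + m = 1
Q = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])   # discharge
m, f = 0.34, 0.40                                  # velocity, depth exponents
v = 1.2 * Q**m            # mean velocity
d = 0.8 * Q**f            # mean depth
w = Q / (v * d)           # width implied by continuity (exponent 0.26)

def exponent(y, Q):
    """Hydraulic exponent: the slope of log y versus log Q."""
    slope, _intercept = np.polyfit(np.log(Q), np.log(y), 1)
    return slope

m_hat, f_hat, b_hat = exponent(v, Q), exponent(d, Q), exponent(w, Q)
```

On field data the fitted exponents carry scatter, and the minimum-variance theory's job is to predict their typical values for a given set of governing variables.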

  15. Linear-array photoacoustic imaging using minimum variance-based delay multiply and sum adaptive beamforming algorithm

    NASA Astrophysics Data System (ADS)

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2018-02-01

In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, an algorithm named delay-multiply-and-sum (DMAS) was introduced, which has lower sidelobes than DAS. To improve the resolution of DMAS, a beamformer is introduced that combines minimum variance (MV) adaptive beamforming with DMAS, called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation yields multiple terms representing a DAS algebra, and it is proposed to use the MV adaptive beamformer in place of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at a depth of 45 mm, MVB-DMAS results in about 31, 18, and 8 dB sidelobe reduction compared to DAS, MV, and DMAS, respectively. The quantitative simulation results show that MVB-DMAS improves full-width-half-maximum by about 96%, 94%, and 45% and signal-to-noise ratio by about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. At a depth of 33 mm in the experimental images, MVB-DMAS results in about 20 dB sidelobe reduction in comparison with the other beamformers.

  16. Mesoscale Gravity Wave Variances from AMSU-A Radiances

    NASA Technical Reports Server (NTRS)

    Wu, Dong L.

    2004-01-01

A variance analysis technique is developed here to extract gravity wave (GW) induced temperature fluctuations from NOAA AMSU-A (Advanced Microwave Sounding Unit-A) radiance measurements. By carefully removing the instrument/measurement noise, the algorithm can produce reliable GW variances with a minimum detectable value as small as 0.1 K². Preliminary analyses with AMSU-A data show that GW variance maps in the stratosphere have very similar distributions to those found with the UARS MLS (Upper Atmosphere Research Satellite Microwave Limb Sounder). However, the AMSU-A offers better horizontal and temporal resolution for observing regional GW variability, such as activity over sub-Antarctic islands.
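The noise-removal step amounts to subtracting a known instrument-noise variance from the measured perturbation variance. The sketch below uses a toy sinusoidal "wave" and Gaussian noise; all numbers are illustrative and this is not the AMSU-A processing chain:

```python
import numpy as np

def gw_variance(radiances, noise_var):
    """Gravity-wave variance estimate: variance of the mean-removed
    radiance perturbations minus the known instrument noise variance,
    clipped at zero."""
    perturb = radiances - radiances.mean()
    total_var = np.mean(perturb**2)
    return max(total_var - noise_var, 0.0)

# Toy scan: wave-like signal (variance 0.125) plus noise of known variance
rng = np.random.default_rng(1)
x = np.linspace(0, 4 * np.pi, 500)
signal = 0.5 * np.sin(x)
noise = rng.normal(scale=0.3, size=x.size)    # noise variance 0.09
est = gw_variance(signal + noise, noise_var=0.09)
```

The clip at zero reflects the detection floor: scans whose total variance falls below the noise level report no detectable wave variance.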

  17. Analysis of conditional genetic effects and variance components in developmental genetics.

    PubMed

    Zhu, J

    1995-12-01

    A genetic model with additive-dominance effects and genotype x environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t-1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects.

  18. Analysis of Conditional Genetic Effects and Variance Components in Developmental Genetics

    PubMed Central

    Zhu, J.

    1995-01-01

    A genetic model with additive-dominance effects and genotype X environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t - 1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects. PMID:8601500

  19. Some refinements on the comparison of areal sampling methods via simulation

    Treesearch

    Jeffrey Gove

    2017-01-01

    The design of forest inventories and development of new sampling methods useful in such inventories normally have a two-fold target of design unbiasedness and minimum variance in mind. Many considerations such as costs go into the choices of sampling method for operational and other levels of inventory. However, the variance in terms of meeting a specified level of...

  20. A comparison of coronal and interplanetary current sheet inclinations

    NASA Technical Reports Server (NTRS)

    Behannon, K. W.; Burlaga, L. F.; Hundhausen, A. J.

    1983-01-01

The HAO white-light K-coronameter observations show that the inclination of the heliospheric current sheet at the base of the corona can be either large (nearly vertical with respect to the solar equator) or small during Carrington rotations 1660-1666, even within a single solar rotation. Voyager 1 and 2 magnetic field observations captured crossings of the heliospheric current sheet at distances from the Sun of 1.4 and 2.8 AU. Two cases are considered: one in which the corresponding coronameter data indicate a nearly vertical (north-south) current sheet, and another in which a nearly horizontal, near-equatorial current sheet is indicated. For the crossings of the vertical current sheet, a variance analysis based on hour averages of the magnetic field data gave a minimum variance direction consistent with a steep inclination. The horizontal current sheet was observed by Voyager as a region of mixed polarity and low speeds lasting several days, consistent with multiple crossings of a horizontal but irregular and fluctuating current sheet at 1.4 AU. However, variance analysis of individual current sheet crossings in this interval, using 1.92 s averages, did not give minimum variance directions consistent with a horizontal current sheet.

  1. Minimum Variance Distortionless Response Beamformer with Enhanced Nulling Level Control via Dynamic Mutated Artificial Immune System

    PubMed Central

    Kiong, Tiong Sieh; Salem, S. Balasem; Paw, Johnny Koh Siaw; Sankar, K. Prajindra

    2014-01-01

In smart antenna applications, the adaptive beamforming technique is used to cancel interfering signals (by placing nulls) and to produce or steer a strong beam toward the target signal according to the calculated weight vectors. Minimum variance distortionless response (MVDR) beamforming is capable of determining the weight vectors for beam steering; however, its nulling level on the interference sources remains unsatisfactory. Beamforming can be treated as an optimization problem in which an optimal weight vector is obtained through computation. Hence, in this paper, a new dynamic mutated artificial immune system (DM-AIS) is proposed to enhance MVDR beamforming by controlling the null steering of interference and increasing the signal-to-interference-plus-noise ratio (SINR) for wanted signals. PMID:25003136

  2. Minimum variance distortionless response beamformer with enhanced nulling level control via dynamic mutated artificial immune system.

    PubMed

    Kiong, Tiong Sieh; Salem, S Balasem; Paw, Johnny Koh Siaw; Sankar, K Prajindra; Darzi, Soodabeh

    2014-01-01

In smart antenna applications, the adaptive beamforming technique is used to cancel interfering signals (by placing nulls) and to produce or steer a strong beam toward the target signal according to the calculated weight vectors. Minimum variance distortionless response (MVDR) beamforming is capable of determining the weight vectors for beam steering; however, its nulling level on the interference sources remains unsatisfactory. Beamforming can be treated as an optimization problem in which an optimal weight vector is obtained through computation. Hence, in this paper, a new dynamic mutated artificial immune system (DM-AIS) is proposed to enhance MVDR beamforming by controlling the null steering of interference and increasing the signal-to-interference-plus-noise ratio (SINR) for wanted signals.
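The MVDR weights that DM-AIS starts from have a closed form, w = R⁻¹a / (aᴴ R⁻¹ a). The sketch below checks the distortionless constraint and the null on a single interferer for a hypothetical 8-element half-wavelength array; the geometry and powers are illustrative, and the DM-AIS mutation step itself is not shown:

```python
import numpy as np

def steering(n, theta_deg, d=0.5):
    """Steering vector of an n-element uniform linear array
    (d = element spacing in wavelengths)."""
    k = 2 * np.pi * d * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(n))

n = 8
a_sig = steering(n, 0.0)     # desired signal direction (broadside)
a_int = steering(n, 20.0)    # interferer direction

# Covariance: one strong interferer (power 100) plus unit white noise
R = 100.0 * np.outer(a_int, a_int.conj()) + np.eye(n)

# MVDR weights: w = R^{-1} a / (a^H R^{-1} a)
Ri_a = np.linalg.solve(R, a_sig)
w = Ri_a / (a_sig.conj() @ Ri_a)

gain_sig = abs(w.conj() @ a_sig)   # distortionless constraint: exactly 1
gain_int = abs(w.conj() @ a_int)   # deep null toward the interferer
```

The closed form guarantees unit gain in the look direction; the depth of the null at 20° is what schemes such as DM-AIS then try to push further.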

  3. 25 CFR 542.18 - How does a gaming operation apply for a variance from the standards of the part?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

Title 25 (Indians), § 542.18: NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR, HUMAN SERVICES, MINIMUM INTERNAL CONTROL STANDARDS. How does a gaming operation apply for a...

  4. Enhancing T cell activation and antiviral protection by introducing the HIV-1 protein transduction domain into a DNA vaccine.

    PubMed

    Leifert, J A; Lindencrona, J A; Charo, J; Whitton, J L

    2001-10-10

    Protein transduction domains (PTD), which can transport proteins or peptides across biological membranes, have been identified in several proteins of viral, invertebrate, and vertebrate origin. Here, we evaluate the immunological and biological consequences of including PTD in synthetic peptides and in DNA vaccines that contain CD8(+) T cell epitopes from lymphocytic choriomeningitis virus (LCMV). Synthetic PTD-peptides did not induce detectable CD8(+) T cell responses. However, fusion of an open reading frame encoding a PTD to an epitope minigene caused transfected tissue culture cells to stimulate epitope-specific T cells much more effectively. Kinetic studies indicated that the epitope reached the surface of transfected cells more rapidly and that the number of transfected cells needed to stimulate T cell responses was reduced by 35- to 50-fold when compared to cells transfected with a standard minigene plasmid. The mechanism underlying the effect of PTD linkage is not clear, but transit of the PTD-attached epitope from transfected cells to nontransfected cells (cross presentation) seemed to play, at most, a minimal role. Mice immunized once with the plasmid encoding the PTD-linked epitope showed a markedly accelerated CD8(+) T cell response and, unlike mice immunized with a standard plasmid, were completely protected against a normally lethal LCMV challenge administered only 8 days post-immunization.

  5. Transient FTY720 treatment promotes immune-mediated clearance of a chronic viral infection.

    PubMed

    Premenko-Lanier, Mary; Moseley, Nelson B; Pruett, Sarah T; Romagnoli, Pablo A; Altman, John D

    2008-08-14

    For a wide variety of microbial pathogens, the outcome of the infection is indeterminate. In some individuals the microbe is cleared, but in others it establishes a chronic infection, and the factors that tip this balance are often unknown. In a widely used model of chronic viral infection, C57BL/6 mice clear the Armstrong strain of lymphocytic choriomeningitis virus (LCMV), but the clone 13 strain persists. Here we show that the Armstrong strain induces a profound lymphopenia at days 1-3 after infection, but the clone 13 strain does not. If we transiently augment lymphopenia by treating the clone-13-infected mice with the drug FTY720 at days 0-2 after infection, the mice successfully clear the infection by day 30. Clearance does not occur when CD4 T cells are absent at the time of treatment, indicating that the drug is not exerting direct antiviral effects. Notably, FTY720 treatment of an already established persistent infection also leads to viral clearance. In both models, FTY720 treatment preserves or augments LCMV-specific CD4 and CD8 T-cell responses, a result that is counter-intuitive because FTY720 is generally regarded as a new immunosuppressive agent. Because FTY720 targets host pathways that are completely evolutionarily conserved, our results may be translatable into new immunotherapies for the treatment of chronic microbial infections in humans.

  6. Intranasal delivery of recombinant parvovirus-like particles elicits cytotoxic T-cell and neutralizing antibody responses.

    PubMed

    Sedlik, C; Dridi, A; Deriaud, E; Saron, M F; Rueda, P; Sarraseca, J; Casal, J I; Leclerc, C

    1999-04-01

    We previously demonstrated that chimeric porcine parvovirus-like particles (PPV:VLP) carrying heterologous epitopes, when injected intraperitoneally into mice without adjuvant, activate strong CD4(+) and CD8(+) T-cell responses specific for the foreign epitopes. In the present study, we investigated the immunogenicity of PPV:VLP carrying a CD8(+) T-cell epitope from the lymphocytic choriomeningitis virus (LCMV) administered by mucosal routes. Mice immunized intranasally with recombinant PPV:VLP, in the absence of adjuvant, developed high levels of PPV-specific immunoglobulin G (IgG) and/or IgA in their serum, as well as in mucosal sites such as the bronchoalveolar and intestinal fluids. Antibodies in sera from mice immunized parenterally or intranasally with PPV:VLP were strongly neutralizing in vitro. Intranasal immunization with PPV:VLP carrying the LCMV CD8(+) T-cell epitope also elicited a strong peptide-specific cytotoxic-T-cell (CTL) response. In contrast, mice orally immunized with recombinant PPV:VLP did not develop any antibody or CTL responses. We also showed that mice primed with PPV:VLP are still able to develop strong CTL responses after subsequent immunization with chimeric PPV:VLP carrying a foreign CD8(+) T-cell epitope. These results highlight the attractive potential of PPV:VLP as a safe, nonreplicating antigen carrier to stimulate systemic and mucosal immunity after nasal administration.

  7. “Viral déjà vu” elicits organ-specific immune disease independent of reactivity to self

    PubMed Central

Merkler, Doron; Horvath, Edit; Bruck, Wolfgang; Zinkernagel, Rolf M.; de la Torre, Juan Carlos; Pinschewer, Daniel D.

    2006-01-01

Autoimmune diseases are often precipitated by viral infections. Yet our current understanding fails to explain how viruses trigger organ-specific autoimmunity despite thymic tolerance extending to many nonlymphohematopoietic self antigens. Additionally, a key epidemiological finding needs to be explained: In genetically susceptible individuals, early childhood infections seem to predispose them to multiple sclerosis (MS) or type 1 diabetes years or even decades before clinical onset. In the present work, we show that the innate immune system of neonatal mice was sufficient to eliminate an attenuated lymphocytic choriomeningitis virus (LCMV) from most tissues except for the CNS, where the virus persisted in neurons (predisposing virus). Virus-specific cytotoxic T cells (CTLs) were neither deleted nor sufficiently primed to cause disease, but they were efficiently triggered in adulthood upon WT LCMV infection (precipitating virus). This defined sequence of viral infections caused severe CNS inflammation that was histomorphologically reminiscent of Rasmussen encephalitis, a fatal human autoimmune disease. Yet disease in mice was mediated by antiviral CTLs targeting an epitope shared by the precipitating virus and the predisposing virus persisting in neurons (déjà vu). Thus the concept of “viral déjà vu” demonstrates how 2 related but independently encountered viral infections can cause organ-specific immune disease without molecular mimicry of self and without breaking self tolerance. PMID:16604192

  8. Conserved region C functions to regulate PD-1 expression and subsequent CD8 T cell memory

    PubMed Central

    Bally, Alexander P. R.; Tang, Yan; Lee, Joshua T.; Barwick, Benjamin G.; Martinez, Ryan; Evavold, Brian D.; Boss, Jeremy M.

    2016-01-01

    Expression of programmed death 1 (PD-1) on CD8 T cells promotes T cell exhaustion during chronic antigen exposure. During acute infections, PD-1 is transiently expressed and has the potential to modulate CD8 T cell memory formation. Conserved Region C (CR-C), a promoter proximal cis-regulatory element that is critical to PD-1 expression in vitro, responds to NFATc1, FoxO1, and/or NF-κB signaling pathways. Here, a CR-C knockout mouse (CRC−) was established to determine its role on PD-1 expression and corresponding effects on T cell function in vivo. Deletion of CR-C decreased PD-1 expression on CD4 T cells and antigen-specific CD8 T cells during acute and chronic lymphocytic choriomeningitis virus (LCMV) challenges, but did not affect the ability to clear an infection. Following acute LCMV infection, memory CD8 T cells in the CRC− mouse were formed in greater numbers, were more functional, and were more effective at responding to a melanoma tumor than wild-type memory cells. These data implicate a critical role for CR-C in governing PD-1 expression, and a subsequent role in guiding CD8 T cell differentiation. The data suggest the possibility that titrating PD-1 expression during CD8 T cell activation could have important ramifications in vaccine development and clinical care. PMID:27895178

  9. Intranasal Delivery of Recombinant Parvovirus-Like Particles Elicits Cytotoxic T-Cell and Neutralizing Antibody Responses

    PubMed Central

    Sedlik, C.; Dridi, A.; Deriaud, E.; Saron, M. F.; Rueda, P.; Sarraseca, J.; Casal, J. I.; Leclerc, C.

    1999-01-01

    We previously demonstrated that chimeric porcine parvovirus-like particles (PPV:VLP) carrying heterologous epitopes, when injected intraperitoneally into mice without adjuvant, activate strong CD4+ and CD8+ T-cell responses specific for the foreign epitopes. In the present study, we investigated the immunogenicity of PPV:VLP carrying a CD8+ T-cell epitope from the lymphocytic choriomeningitis virus (LCMV) administered by mucosal routes. Mice immunized intranasally with recombinant PPV:VLP, in the absence of adjuvant, developed high levels of PPV-specific immunoglobulin G (IgG) and/or IgA in their serum, as well as in mucosal sites such as the bronchoalveolar and intestinal fluids. Antibodies in sera from mice immunized parenterally or intranasally with PPV:VLP were strongly neutralizing in vitro. Intranasal immunization with PPV:VLP carrying the LCMV CD8+ T-cell epitope also elicited a strong peptide-specific cytotoxic-T-cell (CTL) response. In contrast, mice orally immunized with recombinant PPV:VLP did not develop any antibody or CTL responses. We also showed that mice primed with PPV:VLP are still able to develop strong CTL responses after subsequent immunization with chimeric PPV:VLP carrying a foreign CD8+ T-cell epitope. These results highlight the attractive potential of PPV:VLP as a safe, nonreplicating antigen carrier to stimulate systemic and mucosal immunity after nasal administration. PMID:10074120

  10. A test of source-surface model predictions of heliospheric current sheet inclination

    NASA Technical Reports Server (NTRS)

    Burton, M. E.; Crooker, N. U.; Siscoe, G. L.; Smith, E. J.

    1994-01-01

    The orientation of the heliospheric current sheet predicted from a source surface model is compared with the orientation determined from minimum-variance analysis of International Sun-Earth Explorer (ISEE) 3 magnetic field data at 1 AU near solar maximum. Of the 37 cases analyzed, 28 have minimum variance normals that lie orthogonal to the predicted Parker spiral direction. For these cases, the correlation coefficient between the predicted and measured inclinations is 0.6. However, for the subset of 14 cases for which transient signatures (either interplanetary shocks or bidirectional electrons) are absent, the agreement in inclinations improves dramatically, with a correlation coefficient of 0.96. These results validate not only the use of the source surface model as a predictor but also the previously questioned usefulness of minimum variance analysis across complex sector boundaries. In addition, the results imply that interplanetary dynamics have little effect on current sheet inclination at 1 AU. The dependence of the correlation on transient occurrence suggests that the leading edge of a coronal mass ejection (CME), where transient signatures are detected, disrupts the heliospheric current sheet but that the sheet re-forms between the trailing legs of the CME. In this way the global structure of the heliosphere, reflected both in the source surface maps and in the interplanetary sector structure, can be maintained even when the CME occurrence rate is high.
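
    The minimum-variance analysis used here infers the current sheet normal as the eigenvector of the magnetic field's covariance matrix belonging to the smallest eigenvalue. A toy two-component sketch of that idea (illustrative only; actual MVA diagonalizes the full 3x3 covariance of the vector field, and the data below are synthetic):

```python
import math

def min_variance_direction_2d(bx, by):
    """Return the unit vector along which the field samples have minimum
    variance: the smaller-eigenvalue eigenvector of the 2x2 covariance
    matrix, found in closed form via the orientation angle."""
    n = len(bx)
    mx = sum(bx) / n
    my = sum(by) / n
    sxx = sum((x - mx) ** 2 for x in bx) / n
    syy = sum((y - my) ** 2 for y in by) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(bx, by)) / n
    # Orientation of the maximum-variance eigenvector of [[sxx,sxy],[sxy,syy]].
    theta_max = 0.5 * math.atan2(2.0 * sxy, sxx - syy)
    # For a symmetric 2x2 matrix the other eigenvector is orthogonal.
    theta_min = theta_max + math.pi / 2.0
    return math.cos(theta_min), math.sin(theta_min)

# Synthetic field varying mostly along x: the minimum-variance
# direction should come out close to +/- y.
bx = [math.sin(0.1 * k) for k in range(200)]
by = [0.01 * math.cos(0.3 * k) for k in range(200)]
nx, ny = min_variance_direction_2d(bx, by)
```

    In the 3-D case used for current sheet crossings, the same construction runs on the 3x3 covariance matrix and the normal is the eigenvector with the smallest eigenvalue.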

  11. Linear-array photoacoustic imaging using minimum variance-based delay multiply and sum adaptive beamforming algorithm.

    PubMed

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2018-02-01

    In photoacoustic imaging, the delay-and-sum (DAS) beamformer is widely used because of its simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, the delay-multiply-and-sum (DMAS) algorithm was introduced, which yields lower sidelobes than DAS. To improve the resolution of DMAS, a beamformer is introduced that combines minimum variance (MV) adaptive beamforming with DMAS, called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms, each representing a DAS algebra. It is proposed to use the MV adaptive beamformer in place of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at a depth of 45 mm, MVB-DMAS reduces sidelobes by about 31, 18, and 8 dB compared to DAS, MV, and DMAS, respectively. The quantitative simulation results show that MVB-DMAS improves the full-width-half-maximum by about 96%, 94%, and 45% and the signal-to-noise ratio by about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. In particular, at a depth of 33 mm in the experimental images, MVB-DMAS reduces sidelobes by about 20 dB in comparison with the other beamformers. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
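
    The DMAS expansion mentioned above replaces the plain channel sum of DAS with signed square roots of pairwise channel products, so coherent channels reinforce while sign-mixed clutter largely cancels. A minimal sketch at a single time sample (illustrative; the channel values are made up and assumed to be already delay-aligned):

```python
import math

def das(samples):
    """Delay-and-sum output at one time instant (delays already applied)."""
    return sum(samples)

def dmas(samples):
    """Delay-multiply-and-sum: signed square roots of pairwise products.
    The square root restores amplitude units, since a raw product of
    two channels would have units of amplitude squared."""
    out = 0.0
    n = len(samples)
    for i in range(n - 1):
        for j in range(i + 1, n):
            prod = samples[i] * samples[j]
            out += math.copysign(math.sqrt(abs(prod)), prod)
    return out

coherent = [1.0, 0.9, 1.1, 1.0]      # echoes from an on-axis absorber line up
incoherent = [1.0, -0.9, 1.1, -1.0]  # off-axis clutter does not
```

    Coherent channels reinforce under both beamformers, but the sign-mixed set is strongly suppressed by DMAS, which is the mechanism behind its lower sidelobes.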

  12. Vegetation greenness impacts on maximum and minimum temperatures in northeast Colorado

    USGS Publications Warehouse

    Hanamean, J. R.; Pielke, R.A.; Castro, C. L.; Ojima, D.S.; Reed, Bradley C.; Gao, Z.

    2003-01-01

    The impact of vegetation on the microclimate has not been adequately considered in the analysis of temperature forecasting and modelling. To fill part of this gap, the following study was undertaken. A daily 850–700 mb layer mean temperature, computed from the National Center for Environmental Prediction-National Center for Atmospheric Research (NCEP-NCAR) reanalysis, and satellite-derived greenness values, as defined by NDVI (Normalised Difference Vegetation Index), were correlated with surface maximum and minimum temperatures at six sites in northeast Colorado for the years 1989–98. The NDVI values, representing landscape greenness, act as a proxy for latent heat partitioning via transpiration. These sites encompass a wide array of environments, from irrigated-urban to short-grass prairie. The explained variance (r2 value) of surface maximum and minimum temperature by only the 850–700 mb layer mean temperature was subtracted from the corresponding explained variance by the 850–700 mb layer mean temperature and NDVI values. The subtraction shows that by including NDVI values in the analysis, the r2 values, and thus the degree of explanation of the surface temperatures, increase by a mean of 6% for the maxima and 8% for the minima over the period March–October. At most sites, there is a seasonal dependence in the explained variance of the maximum temperatures because of the seasonal cycle of plant growth and senescence. Between individual sites, the highest increase in explained variance occurred at the site with the least amount of anthropogenic influence. This work suggests the vegetation state needs to be included as a factor in surface temperature forecasting, numerical modeling, and climate change assessments.

  13. Change in mean temperature as a predictor of extreme temperature change in the Asia-Pacific region

    NASA Astrophysics Data System (ADS)

    Griffiths, G. M.; Chambers, L. E.; Haylock, M. R.; Manton, M. J.; Nicholls, N.; Baek, H.-J.; Choi, Y.; della-Marta, P. M.; Gosai, A.; Iga, N.; Lata, R.; Laurent, V.; Maitrepierre, L.; Nakamigawa, H.; Ouprasitwong, N.; Solofa, D.; Tahani, L.; Thuy, D. T.; Tibig, L.; Trewin, B.; Vediapan, K.; Zhai, P.

    2005-08-01

    Trends (1961-2003) in daily maximum and minimum temperatures, extremes and variance were found to be spatially coherent across the Asia-Pacific region. The majority of stations exhibited significant trends: increases in mean maximum and mean minimum temperature, decreases in cold nights and cool days, and increases in warm nights. No station showed a significant increase in cold days or cold nights, but a few sites showed significant decreases in hot days and warm nights. Significant decreases were observed in both maximum and minimum temperature standard deviation in China, Korea and some stations in Japan (probably reflecting urbanization effects), but also for some Thailand and coastal Australian sites. The South Pacific convergence zone (SPCZ) region between Fiji and the Solomon Islands showed a significant increase in maximum temperature variability. Correlations between mean temperature and the frequency of extreme temperatures were strongest in the tropical Pacific Ocean from French Polynesia to Papua New Guinea, Malaysia, the Philippines, Thailand and southern Japan. Correlations were weaker at continental or higher latitude locations, which may partly reflect urbanization. For non-urban stations, the dominant distribution change for both maximum and minimum temperature involved a change in the mean, impacting on one or both extremes, with no change in standard deviation. This occurred from French Polynesia to Papua New Guinea (except for maximum temperature changes near the SPCZ), in Malaysia, the Philippines, and several outlying Japanese islands. For urbanized stations the dominant change was a change in the mean and variance, impacting on one or both extremes. This result was particularly evident for minimum temperature. The results presented here, for non-urban tropical and maritime locations in the Asia-Pacific region, support the hypothesis that changes in mean temperature may be used to predict changes in extreme temperatures. At urbanized or higher latitude locations, changes in variance should be incorporated.

  14. Obtaining Reliable Predictions of Terrestrial Energy Coupling From Real-Time Solar Wind Measurement

    NASA Technical Reports Server (NTRS)

    Weimer, Daniel R.

    2001-01-01

    The first draft of a manuscript titled "Variable time delays in the propagation of the interplanetary magnetic field" has been completed, for submission to the Journal of Geophysical Research. In the preparation of this manuscript all data and analysis programs had been updated to the highest temporal resolution possible, at 16 seconds or better. The program which computes the "measured" IMF propagation time delays from these data has also undergone another improvement. In another significant development, a technique has been developed in order to predict IMF phase plane orientations, and the resulting time delays, using only measurements from a single satellite at L1. The "minimum variance" method is used for this computation. Further work will be done on optimizing the choice of several parameters for the minimum variance calculation.

  15. The influence of SO4 and NO3 to the acidity (pH) of rainwater using minimum variance quadratic unbiased estimation (MIVQUE) and maximum likelihood methods

    NASA Astrophysics Data System (ADS)

    Dilla, Shintia Ulfa; Andriyana, Yudhie; Sudartianto

    2017-03-01

    Acid rain causes many harmful effects. It is formed by two strong acids, sulfuric acid (H2SO4) and nitric acid (HNO3), where sulfuric acid is derived from SO2 and nitric acid from NOx {x=1,2}. The purpose of this research is to determine the influence of the SO4 and NO3 levels contained in rain on the acidity (pH) of rainwater. The data are incomplete panel data with a two-way error component model. Panel data are a collection of observations recorded over time; they are said to be incomplete if individuals have different numbers of observations. The model used in this research is a random effects model (REM). Minimum variance quadratic unbiased estimation (MIVQUE) is used to estimate the error variance components, while maximum likelihood estimation is used to estimate the parameters. As a result, we obtain the following model: Ŷ* = 0.41276446 - 0.00107302X1 + 0.00215470X2.

  16. Diallel analysis for sex-linked and maternal effects.

    PubMed

    Zhu, J; Weir, B S

    1996-01-01

    Genetic models including sex-linked and maternal effects as well as autosomal gene effects are described. Monte Carlo simulations were conducted to compare efficiencies of estimation by minimum norm quadratic unbiased estimation (MINQUE) and restricted maximum likelihood (REML) methods. MINQUE(1), which has 1 for all prior values, has a similar efficiency to MINQUE(θ), which requires prior estimates of parameter values. MINQUE(1) has the advantage over REML of unbiased estimation and convenient computation. An adjusted unbiased prediction (AUP) method is developed for predicting random genetic effects. AUP is desirable for its easy computation and unbiasedness of both mean and variance of predictors. The jackknife procedure is appropriate for estimating the sampling variances of estimated variances (or covariances) and of predicted genetic effects. A t-test based on jackknife variances is applicable for detecting significance of variation. Worked examples from mice and silkworm data are given in order to demonstrate variance and covariance estimation and genetic effect prediction.
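
    The delete-one jackknife recommended above for sampling variances can be sketched generically. A minimal illustration using the sample mean as the statistic (the paper applies the same procedure to variance components and predicted genetic effects; the data below are made up):

```python
def jackknife_variance(data, statistic):
    """Delete-one jackknife estimate of the sampling variance of
    `statistic` evaluated on `data`."""
    n = len(data)
    # Recompute the statistic with each observation left out in turn.
    leave_one_out = [statistic(data[:i] + data[i + 1:]) for i in range(n)]
    mean_loo = sum(leave_one_out) / n
    return (n - 1) / n * sum((t - mean_loo) ** 2 for t in leave_one_out)

def mean(xs):
    return sum(xs) / len(xs)

data = [4.2, 5.1, 3.8, 6.0, 5.5, 4.9]
jk_var = jackknife_variance(data, mean)
# For the sample mean, the jackknife variance reduces exactly to s^2 / n,
# which makes this toy case easy to check.
```

    The same function works unchanged for more complicated statistics (for instance, a plug-in variance-component estimator), which is the appeal of the jackknife in this setting.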

  17. IFN-Gamma-Dependent and Independent Mechanisms of CD4⁺ Memory T Cell-Mediated Protection from Listeria Infection.

    PubMed

    Meek, Stephanie M; Williams, Matthew A

    2018-02-13

    While CD8⁺ memory T cells can promote long-lived protection from secondary exposure to intracellular pathogens, less is known regarding the direct protective mechanisms of CD4⁺ T cells. We utilized a prime/boost model in which mice are initially exposed to an acutely infecting strain of lymphocytic choriomeningitis virus (LCMV), followed by a heterologous rechallenge with Listeria monocytogenes recombinantly expressing the MHC Class II-restricted LCMV epitope, GP 61-80 (Lm-gp61). We found that heterologous Lm-gp61 rechallenge resulted in robust activation of CD4⁺ memory T cells and that they were required for rapid bacterial clearance. We further assessed the relative roles of TNF and IFNγ in the direct anti-bacterial function of CD4⁺ memory T cells. We found that disruption of TNF resulted in a complete loss of protection mediated by CD4⁺ memory T cells, whereas disruption of IFNγ signaling to macrophages results in only a partial loss of protection. The protective effect mediated by CD4⁺ T cells corresponded to the rapid accumulation of pro-inflammatory macrophages in the spleen and an altered inflammatory environment in vivo. Overall, we conclude that protection mediated by CD4⁺ memory T cells from heterologous Listeria challenge is most directly dependent on TNF, whereas IFNγ only plays a minor role.

  18. Mixed model approaches for diallel analysis based on a bio-model.

    PubMed

    Zhu, J; Weir, B S

    1996-12-01

    A MINQUE(1) procedure, which is the minimum norm quadratic unbiased estimation (MINQUE) method with 1 for all the prior values, is suggested for estimating variance and covariance components in a bio-model for diallel crosses. Unbiasedness and efficiency of estimation were compared for MINQUE(1), restricted maximum likelihood (REML), and MINQUE(θ), which uses parameter values as the prior values. MINQUE(1) is almost as efficient as MINQUE(θ) for unbiased estimation of genetic variance and covariance components. The bio-model is efficient and robust for estimating variance and covariance components for maternal and paternal effects as well as for nuclear effects. A procedure of adjusted unbiased prediction (AUP) is proposed for predicting random genetic effects in the bio-model. The jackknife procedure is suggested for estimating the sampling variances of estimated variance and covariance components and of predicted genetic effects. Worked examples are given for estimation of variance and covariance components and for prediction of genetic merits.

  19. Minimum number of measurements for evaluating Bertholletia excelsa.

    PubMed

    Baldoni, A B; Tonini, H; Tardin, F D; Botelho, S C C; Teodoro, P E

    2017-09-27

    Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of Brazil nut tree (Bertholletia excelsa) genotypes based on fruit yield. For this, we assessed the number of fruits and dry mass of seeds of 75 Brazil nut genotypes, from native forest, located in the municipality of Itaúba, MT, for 5 years. To better estimate r, four procedures were used: analysis of variance (ANOVA), principal component analysis based on the correlation matrix (CPCOR), principal component analysis based on the phenotypic variance and covariance matrix (CPCOV), and structural analysis based on the correlation matrix (mean r - AECOR). There was a significant effect of genotypes and measurements, which reveals the need to study the minimum number of measurements for selecting superior Brazil nut genotypes for a production increase. Estimates of r by ANOVA were lower than those observed with the principal component methodology and close to AECOR. The CPCOV methodology provided the highest estimate of r, which resulted in a lower number of measurements needed to identify superior Brazil nut genotypes for the number of fruits and dry mass of seeds. Based on this methodology, three measurements are necessary to predict the true value of the Brazil nut genotypes with a minimum accuracy of 85%.
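
    The ANOVA route to the repeatability coefficient, and the number of measurements implied by a target accuracy, can be sketched as follows. This is a hedged illustration with made-up yield data, using the standard formulas r = (MSG - MSE) / (MSG + (m - 1) MSE) and m0 = R²(1 - r) / [(1 - R²) r], rather than this paper's exact procedures or data:

```python
def repeatability_anova(table):
    """One-way ANOVA repeatability: rows are genotypes, columns are
    repeated measurements of the same trait on that genotype."""
    g = len(table)          # number of genotypes
    m = len(table[0])       # measurements per genotype
    grand = sum(sum(row) for row in table) / (g * m)
    row_means = [sum(row) / m for row in table]
    msg = m * sum((rm - grand) ** 2 for rm in row_means) / (g - 1)
    mse = sum(
        (x - rm) ** 2 for row, rm in zip(table, row_means) for x in row
    ) / (g * (m - 1))
    return (msg - mse) / (msg + (m - 1) * mse)

def min_measurements(r, target_r2=0.85):
    """Measurements needed so the genotype mean predicts the true
    genotypic value with coefficient of determination target_r2."""
    return target_r2 * (1 - r) / ((1 - target_r2) * r)

# Made-up fruit-yield table: 4 genotypes x 5 yearly measurements.
yields = [
    [120, 130, 125, 118, 127],
    [80, 85, 78, 90, 82],
    [150, 148, 155, 160, 152],
    [100, 95, 105, 98, 102],
]
r = repeatability_anova(yields)
m0 = min_measurements(r, 0.85)
```

    With strongly differentiated genotypes, as in this toy table, r is close to 1 and very few measurements suffice; noisier traits drive r down and m0 up.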

  20. On the design of classifiers for crop inventories

    NASA Technical Reports Server (NTRS)

    Heydorn, R. P.; Takacs, H. C.

    1986-01-01

    Crop proportion estimators that use classifications of satellite data to correct, in an additive way, a given estimate acquired from ground observations are discussed. A linear version of these estimators is optimal, in terms of minimum variance, when the regression of the ground observations onto the satellite observations is linear. When this regression is not linear, but the reverse regression (satellite observations onto ground observations) is linear, the estimator is suboptimal but still has certain appealing variance properties. In this paper expressions are derived for those regressions which relate the intercepts and slopes to conditional classification probabilities. These expressions are then used to discuss the question of classifier designs that can lead to low-variance crop proportion estimates. Variance expressions for these estimates in terms of classifier omission and commission errors are also derived.

  1. Minimum-variance Brownian motion control of an optically trapped probe.

    PubMed

    Huang, Yanan; Zhang, Zhipeng; Menq, Chia-Hsiang

    2009-10-20

    This paper presents a theoretical and experimental investigation of the Brownian motion control of an optically trapped probe. The Langevin equation is employed to describe the motion of the probe experiencing random thermal force and optical trapping force. Since active feedback control is applied to suppress the probe's Brownian motion, actuator dynamics and measurement delay are included in the equation. The equation of motion is simplified to a first-order linear differential equation and transformed to a discrete model for the purpose of controller design and data analysis. The derived model is experimentally verified by comparing the model prediction to the measured response of a 1.87 µm trapped probe subject to proportional control. It is then employed to design the optimal controller that minimizes the variance of the probe's Brownian motion. Theoretical analysis is derived to evaluate the control performance of a specific optical trap. Both experiment and simulation are used to validate the design as well as the theoretical analysis, and to illustrate the performance envelope of the active control. Moreover, adaptive minimum variance control is implemented to maintain optimal performance when the system is time varying, as when operating the actively controlled optical trap in a complex environment.
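
    A discrete first-order model of the kind described above, x[k+1] = a·x[k] + b·u[k] + w[k], admits a simple minimum-variance feedback law: u[k] = -(a/b)·x[k] cancels the deterministic term each step, leaving only the unavoidable one-step noise. A simulation sketch with assumed parameter values (not the paper's identified trap dynamics):

```python
import random

def simulate(a, b, sigma_w, steps, controlled, seed=1):
    """Simulate x[k+1] = a*x[k] + b*u[k] + w[k] and return the sample
    variance of x. The minimum-variance law u = -(a/b)*x reduces the
    closed loop to x[k+1] = w[k]."""
    rng = random.Random(seed)
    x, xs = 0.0, []
    for _ in range(steps):
        u = -(a / b) * x if controlled else 0.0
        x = a * x + b * u + rng.gauss(0.0, sigma_w)
        xs.append(x)
    m = sum(xs) / len(xs)
    return sum((v - m) ** 2 for v in xs) / len(xs)

a, b, sigma_w = 0.9, 1.0, 1.0
var_open = simulate(a, b, sigma_w, 20000, controlled=False)
var_closed = simulate(a, b, sigma_w, 20000, controlled=True)
# Open loop approaches sigma_w^2 / (1 - a^2); closed loop approaches sigma_w^2.
```

    The adaptive variant mentioned in the abstract would re-estimate a and b online as the trap environment changes and keep applying the same cancellation law.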

  2. Cost effective stream-gaging strategies for the Lower Colorado River basin; the Blythe field office operations

    USGS Publications Warehouse

    Moss, Marshall E.; Gilroy, Edward J.

    1980-01-01

    This report describes the theoretical developments and illustrates the applications of techniques that recently have been assembled to analyze the cost-effectiveness of federally funded stream-gaging activities in support of the Colorado River compact and subsequent adjudications. The cost effectiveness of 19 stream gages in terms of minimizing the sum of the variances of the errors of estimation of annual mean discharge is explored by means of a sequential-search optimization scheme. The search is conducted over a set of decision variables that describes the number of times that each gaging route is traveled in a year. A gage route is defined as the most expeditious circuit that is made from a field office to visit one or more stream gages and return to the office. The error variance is defined as a function of the frequency of visits to a gage by using optimal estimation theory. Currently a minimum of 12 visits per year is made to any gage. By changing to a six-visit minimum, the same total error variance can be attained for the 19 stations with a budget of 10% less than the current one. Other strategies are also explored. (USGS)
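
    The flavor of the sequential search (spend a fixed travel budget where it buys the largest drop in summed error variance) can be sketched with a greedy marginal-benefit loop. This is a toy illustration assuming a simple variance-versus-visits model V_i = c_i / n_i; it is not the report's actual error model, routes, or costs:

```python
def allocate_visits(var_coeff, trip_cost, budget, min_visits):
    """Greedily add one trip at a time wherever it most reduces the
    summed error variance V_i = c_i / n_i per dollar, within budget."""
    visits = [min_visits] * len(var_coeff)
    spent = sum(min_visits * cost for cost in trip_cost)
    while True:
        best, best_gain = None, 0.0
        for i, (c, cost) in enumerate(zip(var_coeff, trip_cost)):
            if spent + cost > budget:
                continue
            gain = c / visits[i] - c / (visits[i] + 1)  # variance drop
            if gain / cost > best_gain:
                best, best_gain = i, gain / cost
        if best is None:
            break
        visits[best] += 1
        spent += trip_cost[best]
    return visits, spent

# Three hypothetical gage routes: variance coefficients and per-trip costs.
visits, spent = allocate_visits([40.0, 10.0, 4.0], [2.0, 1.0, 1.0],
                                budget=40.0, min_visits=6)
```

    Lowering min_visits (the report's move from 12 to 6 visits per year) frees budget that the search can redirect to the routes where extra visits reduce error variance the most.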

  3. River meanders - Theory of minimum variance

    USGS Publications Warehouse

    Langbein, Walter Basil; Leopold, Luna Bergere

    1966-01-01

    Meanders are the result of erosion-deposition processes tending toward the most stable form in which the variability of certain essential properties is minimized. This minimization involves the adjustment of the planimetric geometry and the hydraulic factors of depth, velocity, and local slope. The planimetric geometry of a meander is that of a random walk whose most frequent form minimizes the sum of the squares of the changes in direction in each successive unit length. The direction angles are then sine functions of channel distance. This yields a meander shape typically present in meandering rivers and has the characteristic that the ratio of meander length to average radius of curvature in the bend is 4.7. Depth, velocity, and slope are shown by field observations to be adjusted so as to decrease the variance of shear and the friction factor in a meander curve over that in an otherwise comparable straight reach of the same river. Since theory and observation indicate meanders achieve the minimum variance postulated, it follows that for channels in which alternating pools and riffles occur, meandering is the most probable form of channel geometry and thus is a more stable geometry than a straight or nonmeandering alinement.
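
    The sine-generated planform can be traced directly from the statement above that direction angles are sine functions of channel distance: theta(s) = omega * sin(2*pi*s / wavelength). A sketch (illustrative reconstruction; the maximum angle and wavelength are arbitrary choices, not values from the paper):

```python
import math

def sine_generated_curve(omega, wavelength, n=400):
    """Trace one meander planform whose direction angle is a sine
    function of along-channel distance s, integrating the unit
    tangent (cos theta, sin theta) by midpoint steps."""
    ds = wavelength / n
    x, y, pts = 0.0, 0.0, [(0.0, 0.0)]
    for k in range(n):
        s = (k + 0.5) * ds
        theta = omega * math.sin(2.0 * math.pi * s / wavelength)
        x += ds * math.cos(theta)
        y += ds * math.sin(theta)
        pts.append((x, y))
    return pts

# One full meander wavelength with a maximum direction angle of 70 degrees.
path = sine_generated_curve(math.radians(70.0), 100.0)
```

    By symmetry the net transverse drift over one wavelength is zero, while the down-valley distance covered is shorter than the channel length because the channel wanders.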

  4. RFI in hybrid loops - Simulation and experimental results.

    NASA Technical Reports Server (NTRS)

    Ziemer, R. E.; Nelson, D. R.; Raghavan, H. R.

    1972-01-01

    A digital simulation of an imperfect second-order hybrid phase-locked loop (HPLL) operating in radio frequency interference (RFI) is described. Its performance is characterized in terms of phase error variance and phase error probability density function (PDF). Monte Carlo simulation is used to show that the HPLL can be superior to conventional phase-locked loops in RFI backgrounds when minimum phase error variance is the goodness criterion. Similar experimentally obtained data are given in support of the simulation data.

  5. Eigenspace-based minimum variance adaptive beamformer combined with delay multiply and sum: experimental study

    NASA Astrophysics Data System (ADS)

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Nasiriavanaki, Mohammadreza; Orooji, Mahdi

    2018-02-01

    Delay and sum (DAS) is the most common beamforming algorithm in linear-array photoacoustic imaging (PAI) as a result of its simple implementation. However, it leads to low resolution and high sidelobes. Delay multiply and sum (DMAS) was used to address the shortcomings of DAS, providing higher image quality. However, its resolution improvement does not match that of eigenspace-based minimum variance (EIBMV). In this paper, the EIBMV beamformer has been combined with DMAS algebra, called EIBMV-DMAS, using the expansion of the DMAS algorithm. The proposed method is used as the reconstruction algorithm in linear-array PAI. EIBMV-DMAS is experimentally evaluated, and the quantitative and qualitative results show that it outperforms DAS, DMAS and EIBMV. The proposed method suppresses the sidelobes by about 365%, 221% and 40% compared to DAS, DMAS and EIBMV, respectively. Moreover, EIBMV-DMAS improves the SNR by about 158%, 63% and 20%, respectively.

  6. Nonlinear unbiased minimum-variance filter for Mars entry autonomous navigation under large uncertainties and unknown measurement bias.

    PubMed

    Xiao, Mengli; Zhang, Yongbo; Fu, Huimin; Wang, Zhihua

    2018-05-01

    A high-precision navigation algorithm is essential for the future Mars pinpoint landing mission. The unknown inputs caused by large uncertainties in atmospheric density and aerodynamic coefficients, as well as unknown measurement biases, may cause large estimation errors in conventional Kalman filters. This paper proposes a derivative-free version of the nonlinear unbiased minimum variance filter for Mars entry navigation. The filter solves this problem by estimating the state and the unknown measurement biases simultaneously, in a derivative-free manner, leading to a high-precision algorithm for Mars entry navigation. IMU/radio beacon integrated navigation is introduced in the simulation, and the results show that, with or without radio blackout, the proposed filter achieves accurate state estimation, much better than the conventional unscented Kalman filter, demonstrating its suitability as a high-precision Mars entry navigation algorithm. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  7. Charged particle tracking at Titan, and further applications

    NASA Astrophysics Data System (ADS)

    Bebesi, Zsofia; Erdos, Geza; Szego, Karoly

    2016-04-01

    We use the CAPS ion data of Cassini to investigate the dynamics and origin of Titan's atmospheric ions. We developed a 4th order Runge-Kutta method to calculate particle trajectories in a time reversed scenario. The test particle magnetic field environment imitates the curved magnetic environment in the vicinity of Titan. The minimum variance directions along the S/C trajectory have been calculated for all available Titan flybys, and we assumed a homogeneous field that is perpendicular to the minimum variance direction. Using this method the magnetic field lines have been calculated along the flyby orbits so we could select those observational intervals when Cassini and the upper atmosphere of Titan were magnetically connected. We have also taken the Kronian magnetodisc into consideration, and used different upstream magnetic field approximations depending on whether Titan was located inside of the magnetodisc current sheet, or in the lobe regions. We also discuss the code's applicability to comets.
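
    A classical 4th-order Runge-Kutta step for a charged particle, dv/dt = (q/m) v × B, can be sketched as below (an illustrative uniform-field case, not the authors' Titan field model; since the magnetic force does no work, the integrated speed should stay constant):

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def rk4_step(pos, vel, qm, B, dt):
    """One classical RK4 step for dr/dt = v, dv/dt = (q/m) v x B,
    with qm = q/m in consistent (here dimensionless) units."""
    def acc(v):
        return tuple(qm * c for c in cross(v, B))
    k1v = acc(vel)
    k1r = vel
    k2r = tuple(v + 0.5 * dt * k for v, k in zip(vel, k1v))
    k2v = acc(k2r)
    k3r = tuple(v + 0.5 * dt * k for v, k in zip(vel, k2v))
    k3v = acc(k3r)
    k4r = tuple(v + dt * k for v, k in zip(vel, k3v))
    k4v = acc(k4r)
    pos = tuple(p + dt / 6.0 * (a + 2 * b_ + 2 * c + d)
                for p, a, b_, c, d in zip(pos, k1r, k2r, k3r, k4r))
    vel = tuple(v + dt / 6.0 * (a + 2 * b_ + 2 * c + d)
                for v, a, b_, c, d in zip(vel, k1v, k2v, k3v, k4v))
    return pos, vel

pos, vel = (0.0, 0.0, 0.0), (1.0, 0.0, 0.2)
B, qm, dt = (0.0, 0.0, 1.0), 1.0, 0.01
for _ in range(1000):
    pos, vel = rk4_step(pos, vel, qm, B, dt)
speed = sum(v * v for v in vel) ** 0.5
```

    The resulting trajectory is a helix about the field direction; time-reversed tracing, as in the study, simply runs the same step with dt negated.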

  8. Microstructure of the IMF turbulences at 2.5 AU

    NASA Technical Reports Server (NTRS)

    Mavromichalaki, H.; Vassilaki, A.; Marmatsouri, L.; Moussas, X.; Quenby, J. J.; Smith, E. J.

    1995-01-01

    A detailed analysis of short-period (15-900 sec) magnetohydrodynamic (MHD) turbulences of the interplanetary magnetic field (IMF) has been made using Pioneer-11 high time resolution data (0.75 sec) inside a Corotating Interaction Region (CIR) at a heliocentric distance of 2.5 AU in 1973. The methods used are hodogram analysis, minimum variance matrix analysis and coherence analysis. The minimum variance analysis gives evidence of linearly polarized wave modes. Coherence analysis has shown that the field fluctuations are dominated by fast magnetosonic modes with periods of 15 sec to 15 min. However, it is also shown that some small-amplitude Alfven waves are present in the trailing edge of this region with characteristic periods of 15-200 sec. The observed wave modes are locally generated and possibly attributed to the scattering of Alfven wave energy into random magnetosonic waves.

  9. Optical tomographic detection of rheumatoid arthritis with computer-aided classification schemes

    NASA Astrophysics Data System (ADS)

    Klose, Christian D.; Klose, Alexander D.; Netz, Uwe; Beuthan, Jürgen; Hielscher, Andreas H.

    2009-02-01

    A recent research study has shown that combining multiple parameters drawn from optical tomographic images leads to better classification results in identifying human finger joints that are or are not affected by rheumatoid arthritis (RA). Building on the findings of that study, this article presents an advanced computer-aided classification approach for interpreting optical image data to detect RA in finger joints. Additional data are used, including, for example, maximum and minimum values of the absorption coefficient as well as their ratios and image variances. Classification performances obtained by the proposed method were evaluated in terms of sensitivity, specificity, Youden index and area under the curve (AUC). Results were compared to different benchmarks ("gold standards"): magnetic resonance, ultrasound and clinical evaluation. Maximum accuracies (AUC=0.88) were reached when combining minimum/maximum ratios and image variances and using ultrasound as the gold standard.

  10. An analytic technique for statistically modeling random atomic clock errors in estimation

    NASA Technical Reports Server (NTRS)

    Fell, P. J.

    1981-01-01

    Minimum variance estimation requires that the statistics of random observation errors be modeled properly. If measurements are derived through the use of atomic frequency standards, then one source of error affecting the observable is random fluctuation in frequency. This is the case, for example, with range and integrated Doppler measurements from satellites of the Global Positioning System, and with baseline determination for geodynamic applications. An analytic method is presented which approximates the statistics of this random process. The procedure starts with a model of the Allan variance for a particular oscillator and develops the statistics of range and integrated Doppler measurements. A series of five first-order Markov processes is used to approximate the power spectral density obtained from the Allan variance.
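
    The Allan variance underlying this procedure is computed from adjacent averages of fractional frequency over an interval tau: sigma_y^2(tau) = (1/2) <(ybar_{k+1} - ybar_k)^2>. A minimal sketch on synthetic white frequency noise (illustrative; the paper proceeds from a modeled Allan variance for a given oscillator, not from raw data):

```python
import random

def allan_variance(y, m):
    """Non-overlapping Allan variance of fractional-frequency samples
    `y` at averaging factor m (tau = m * sample interval)."""
    # Average the data in non-overlapping blocks of length m.
    blocks = [sum(y[i:i + m]) / m for i in range(0, len(y) - m + 1, m)]
    diffs = [(b2 - b1) ** 2 for b1, b2 in zip(blocks, blocks[1:])]
    return 0.5 * sum(diffs) / len(diffs)

# White frequency noise: the Allan variance should fall roughly as 1/tau.
rng = random.Random(7)
y = [rng.gauss(0.0, 1.0) for _ in range(100000)]
avar1 = allan_variance(y, 1)
avar10 = allan_variance(y, 10)
```

    The slope of the Allan variance versus tau identifies the noise type (white FM falls as 1/tau, flicker FM is flat, random-walk FM rises), which is what the Markov-process fit in the paper exploits.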

  11. Approximate sample size formulas for the two-sample trimmed mean test with unequal variances.

    PubMed

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2007-05-01

    Yuen's two-sample trimmed mean test statistic is one of the most robust methods to apply when variances are heterogeneous. The present study develops formulas for the sample size required for the test. The formulas are applicable for cases of unequal variances, non-normality and unequal sample sizes. Given a specified alpha and power (1-beta), the minimum sample size given by the proposed formulas under various conditions is less than that given by the conventional formulas. Moreover, for sample sizes calculated with the proposed formulas, simulation results show that Yuen's test achieves statistical power generally superior to that of the approximate t test. A numerical example is provided.
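
    Yuen's statistic itself compares trimmed means using winsorized variances, with Welch-type degrees of freedom. A minimal sketch with made-up data (illustrative; the paper's contribution is the sample-size formulas built on this statistic, which are not reproduced here):

```python
import math

def yuen_statistic(x, y, trim=0.2):
    """Yuen's two-sample trimmed-mean test statistic and its
    Welch-type degrees of freedom."""
    def trimmed_winsorized(data):
        s = sorted(data)
        n = len(s)
        g = int(trim * n)               # observations trimmed per tail
        h = n - 2 * g                   # effective sample size
        tmean = sum(s[g:n - g]) / h
        # Winsorize: clamp the tails to the trimming thresholds.
        w = [min(max(v, s[g]), s[n - g - 1]) for v in s]
        wmean = sum(w) / n
        wvar = sum((v - wmean) ** 2 for v in w) / (n - 1)
        return tmean, wvar, n, h
    t1, v1, n1, h1 = trimmed_winsorized(x)
    t2, v2, n2, h2 = trimmed_winsorized(y)
    d1 = (n1 - 1) * v1 / (h1 * (h1 - 1))
    d2 = (n2 - 1) * v2 / (h2 * (h2 - 1))
    t = (t1 - t2) / math.sqrt(d1 + d2)
    df = (d1 + d2) ** 2 / (d1 ** 2 / (h1 - 1) + d2 ** 2 / (h2 - 1))
    return t, df

# A gross outlier in the second sample barely moves the trimmed comparison.
a = [10.1, 10.3, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 9.7, 10.2]
b = [11.0, 11.2, 10.9, 11.1, 11.3, 10.8, 11.0, 11.1, 95.0, 11.2]
t, df = yuen_statistic(a, b)
```

    The robustness comes from the trimming (the 95.0 outlier is discarded from the mean and clamped in the variance), which is why the statistic remains strongly negative here.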

  12. Software for the grouped optimal aggregation technique

    NASA Technical Reports Server (NTRS)

    Brown, P. M.; Shaw, G. W. (Principal Investigator)

    1982-01-01

    The grouped optimal aggregation technique produces minimum variance, unbiased estimates of acreage and production for countries, zones (states), or any designated collection of acreage strata. It uses yield predictions, historical acreage information, and direct acreage estimates from satellite data. The acreage strata are grouped in such a way that the ratio model over historical acreage provides a smaller variance than if the model were applied to each individual stratum. An optimal weighting matrix based on historical acreages provides the link between incomplete direct acreage estimates and the total, current acreage estimate.

  13. Source-space ICA for MEG source imaging.

    PubMed

    Jonmohamadi, Yaqub; Jones, Richard D

    2016-02-01

    One of the most widely used approaches in electroencephalography (EEG)/magnetoencephalography (MEG) source imaging is application of an inverse technique (such as dipole modelling or sLORETA) to the components extracted by independent component analysis (ICA) (sensor-space ICA + inverse technique). The advantage of this approach over an inverse technique alone is that it can identify and localize multiple concurrent sources. Among inverse techniques, the minimum-variance beamformers offer high spatial resolution. However, sensor-space ICA + beamformer is not an ideal combination for obtaining both the high spatial resolution of the beamformer and the ability to handle multiple concurrent sources. We propose source-space ICA for MEG as a powerful alternative approach which can provide the high spatial resolution of the beamformer and handle multiple concurrent sources. The concept of source-space ICA for MEG is to apply the beamformer first and then singular value decomposition + ICA. In this paper we have compared source-space ICA with sensor-space ICA in both simulation and real MEG. The simulations included two challenging scenarios of correlated/concurrent cluster sources. Source-space ICA provided superior performance in spatial reconstruction of source maps, even though both techniques performed equally from a temporal perspective. Real MEG from two healthy subjects with visual stimuli was also used to compare the performance of sensor-space ICA and source-space ICA. We have also proposed a new variant of the minimum-variance beamformer called weight-normalized linearly-constrained minimum-variance with orthonormal lead-field. As sensor-space ICA-based source reconstruction is popular in EEG and MEG imaging, and given that source-space ICA has superior spatial performance, it is expected that source-space ICA will supersede its predecessor in many applications.
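
    The minimum-variance beamformer at the heart of this approach chooses, for each source location, weights that minimize output power subject to unit gain on the lead field: w = R⁻¹a / (aᵀR⁻¹a). A small self-contained sketch with a made-up 3-sensor covariance (illustrative; real MEG pipelines use regularized sensor covariances and vector lead fields):

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def minimum_variance_weights(R, a):
    """w = R^{-1} a / (a^T R^{-1} a): minimizes w^T R w subject to w^T a = 1."""
    Ria = solve(R, a)
    denom = sum(ai * ri for ai, ri in zip(a, Ria))
    return [ri / denom for ri in Ria]

# Hypothetical 3-sensor covariance and lead-field vector.
R = [[2.0, 0.3, 0.1],
     [0.3, 1.5, 0.2],
     [0.1, 0.2, 1.0]]
a = [1.0, 0.5, 0.25]
w = minimum_variance_weights(R, a)
gain = sum(wi * ai for wi, ai in zip(w, a))  # unit-gain constraint
```

    Because the constraint pins the gain on the modeled source to one while minimizing total output power, interference from other directions is suppressed adaptively, which is the source of the beamformer's spatial resolution.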

  14. Design of a compensation for an ARMA model of a discrete time system. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Mainemer, C. I.

    1978-01-01

    The design of an optimal dynamic compensator for a multivariable discrete-time system is studied, along with the design of compensators that achieve minimum-variance control for single-input single-output systems. In the first problem, the initial conditions of the plant are random variables with known first- and second-order moments, and the cost is the expected value of the standard cost, quadratic in the states and controls. The compensator is based on the minimum-order Luenberger observer and is found optimally by minimizing a performance index. Necessary and sufficient conditions for optimality of the compensator are derived. The second problem is solved in three different ways: two working directly in the frequency domain and one in the time domain. The first- and second-order moments of the initial conditions are irrelevant to the solution. Necessary and sufficient conditions are derived for the compensator to minimize the variance of the output.

  15. Thermospheric mass density model error variance as a function of time scale

    NASA Astrophysics Data System (ADS)

    Emmert, J. T.; Sutton, E. K.

    2017-12-01

    In the increasingly crowded low-Earth orbit environment, accurate estimation of orbit prediction uncertainties is essential for collision avoidance. Poor characterization of such uncertainty can result in unnecessary and costly avoidance maneuvers (false positives) or disregard of a collision risk (false negatives). Atmospheric drag is a major source of orbit prediction uncertainty, and is particularly challenging to account for because it exerts a cumulative influence on orbital trajectories and is therefore not amenable to representation by a single uncertainty parameter. To address this challenge, we examine the variance of measured accelerometer-derived and orbit-derived mass densities with respect to predictions by thermospheric empirical models, using the data-minus-model variance as a proxy for model uncertainty. Our analysis focuses mainly on the power spectrum of the residuals, and we construct an empirical model of the variance as a function of time scale (from 1 hour to 10 years), altitude, and solar activity. We find that the power spectral density approximately follows a power-law process but with an enhancement near the 27-day solar rotation period. The residual variance increases monotonically with altitude between 250 and 550 km. There are two components to the variance dependence on solar activity: one component is 180 degrees out of phase (largest variance at solar minimum), and the other component lags 2 years behind solar maximum (largest variance in the descending phase of the solar cycle).

  16. 25 CFR 543.18 - What are the minimum internal control standards for the cage, vault, kiosk, cash and cash...

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... available upon demand for each day, shift, and drop cycle (this is not required if the system does not track..., beverage containers, etc., into and out of the cage. (j) Variances. The operation must establish, as...

  17. 25 CFR 543.18 - What are the minimum internal control standards for the cage, vault, kiosk, cash and cash...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... available upon demand for each day, shift, and drop cycle (this is not required if the system does not track..., beverage containers, etc., into and out of the cage. (j) Variances. The operation must establish, as...

  18. Use of parvovirus-like particles for vaccination and induction of multiple immune responses.

    PubMed

    Casal, J I

    1999-04-01

    Expression of the VP2 gene of autonomous parvoviruses in insect cells with the use of the baculovirus system has led to the production of virus-like particles (VLPs) formed by the self-assembly of VP2. These VLPs are expressed at high levels and can easily be purified by salt fractionation. They are highly immunogenic in the corresponding host, being fully protective at doses as low as 1-2 microg of purified material per animal. No special adjuvants are required. An interesting property of these particles is their usefulness as a diagnostic reagent for ELISA kits, which have successfully replaced conventional methods for parvovirus diagnostics based on haemagglutination. Another application of the hybrid recombinant parvovirus-like particles of pig parvovirus (PPV) and canine parvovirus (CPV) is their use as an antigen delivery system. PPV:VLPs containing a CD8(+) epitope from the lymphocytic choriomeningitis virus (LCMV) nucleoprotein are able to evoke a potent cytolytic T-lymphocyte response and to protect mice against a lethal infection with LCMV. Also, PPV:VLPs containing the C3:T epitope from poliovirus elicited a T helper response in mice. These T-cell epitopes were inserted into the N-terminus of the VP2 protein. Unfortunately, the N-terminus is not adequate for antibody responses because it is inside the particle. Recent findings have shown that fine tailoring of the point of insertion around the tip of loop 2 of the surface of CPV allowed the elicitation of a potent antibody response. Thus mice immunized with chimaeric C3:B CPV:VLPs were able to elicit a strong neutralizing antibody response (>3 log10 units) against poliovirus. We now have the possibility of using these particles to elicit different immune responses against single or multiple pathogens in a simple and economic way.

  19. Rodents and Risk in the Mekong Delta of Vietnam: Seroprevalence of Selected Zoonotic Viruses in Rodents and Humans

    PubMed Central

    Van Cuong, Nguyen; Carrique-Mas, Juan; Vo Be, Hien; An, Nguyen Ngoc; Tue, Ngo Tri; Anh, Nguyet Lam; Anh, Pham Hong; Phuc, Nguyen The; Baker, Stephen; Voutilainen, Liina; Jääskeläinen, Anne; Huhtamo, Eili; Utriainen, Mira; Sironen, Tarja; Vaheri, Antti; Henttonen, Heikki; Vapalahti, Olli; Chaval, Yannick

    2015-01-01

    Abstract In the Mekong Delta in southern Vietnam, rats are commonly traded in wet markets and sold live for food consumption. We investigated seroprevalence to selected groups of rodent-borne viruses among human populations with high levels of animal exposure and among co-located rodent populations. The indirect fluorescence antibody test (IFAT) was used to determine seropositivity to representative reference strains of hantaviruses (Dobrava virus [DOBV], Seoul virus [SEOV]), cowpox virus, arenaviruses (lymphocytic choriomeningitis virus [LCMV]), flaviviruses (tick-borne encephalitis virus [TBEV]), and rodent parechoviruses (Ljungan virus), using sera from 245 humans living in Dong Thap Province and 275 rodents representing the five common rodent species sold in wet markets and present in peridomestic and farm settings. Combined seropositivity to DOBV and SEOV among the rodents and humans was 6.9% (19/275) and 3.7% (9/245), respectively; 1.1% (3/275) and 4.5% (11/245) to cowpox virus; 5.4% (15/275) and 47.3% (116/245) for TBEV; and exposure to Ljungan virus was 18.8% (46/245) in humans, but 0% in rodents. Very little seroreactivity was observed to LCMV in either rodents (1/275, 0.4%) or humans (2/245, 0.8%). Molecular screening of rodent liver tissues using consensus primers for flaviviruses did not yield any amplicons, whereas molecular screening of rodent lung tissues for hantavirus yielded one hantavirus sequence (SEOV). In summary, these results indicate low to moderate levels of endemic hantavirus circulation, possible circulation of a flavivirus in rodent reservoirs, and the first available data on human exposures to parechoviruses in Vietnam. Although the current evidence suggests only limited exposure of humans to known rodent-borne diseases, further research is warranted to assess public health implications of the rodent trade. PMID:25629782

  20. Real-time Neuroimaging and Cognitive Monitoring Using Wearable Dry EEG

    PubMed Central

    Mullen, Tim R.; Kothe, Christian A.E.; Chi, Mike; Ojeda, Alejandro; Kerth, Trevor; Makeig, Scott; Jung, Tzyy-Ping; Cauwenberghs, Gert

    2015-01-01

    Goal We present and evaluate a wearable high-density dry electrode EEG system and an open-source software framework for online neuroimaging and state classification. Methods The system integrates a 64-channel dry EEG form-factor with wireless data streaming for online analysis. A real-time software framework is applied, including adaptive artifact rejection, cortical source localization, multivariate effective connectivity inference, data visualization, and cognitive state classification from connectivity features using a constrained logistic regression approach (ProxConn). We evaluate the system identification methods on simulated 64-channel EEG data. Then we evaluate system performance, using ProxConn and a benchmark ERP method, in classifying response errors in 9 subjects using the dry EEG system. Results Simulations yielded high accuracy (AUC=0.97±0.021) for real-time cortical connectivity estimation. Response error classification using cortical effective connectivity (sdDTF) was significantly above chance with similar performance (AUC) for cLORETA (0.74±0.09) and LCMV (0.72±0.08) source localization. Cortical ERP-based classification was equivalent to ProxConn for cLORETA (0.74±0.16) but significantly better for LCMV (0.82±0.12). Conclusion We demonstrated the feasibility for real-time cortical connectivity analysis and cognitive state classification from high-density wearable dry EEG. Significance This paper is the first validated application of these methods to 64-channel dry EEG. The work addresses a need for robust real-time measurement and interpretation of complex brain activity in the dynamic environment of the wearable setting. Such advances can have broad impact in research, medicine, and brain-computer interfaces. The pipelines are made freely available in the open-source SIFT and BCILAB toolboxes. PMID:26415149

  1. Patterns and Prevalence of Core Profile Types in the WPPSI Standardization Sample.

    ERIC Educational Resources Information Center

    Glutting, Joseph J.; McDermott, Paul A.

    1990-01-01

    Found most representative subtest profiles for 1,200 children comprising standardization sample of Wechsler Preschool and Primary Scale of Intelligence (WPPSI). Grouped scaled scores from WPPSI subtests according to similar level and shape using sequential minimum-variance cluster analysis with independent replications. Obtained final solution of…

  2. A Review on Sensor, Signal, and Information Processing Algorithms (PREPRINT)

    DTIC Science & Technology

    2010-01-01

    processing [214], ambiguity surface averaging [215], optimum uncertain field tracking, and optimal minimum variance track-before-detect [216]. In [217, 218...2) (2001) 739–746. [216] S. L. Tantum, L. W. Nolte, J. L. Krolik, K. Harmanci, The performance of matched-field track-before-detect methods using

  3. A Comparison of Item Selection Techniques for Testlets

    ERIC Educational Resources Information Center

    Murphy, Daniel L.; Dodd, Barbara G.; Vaughn, Brandon K.

    2010-01-01

    This study examined the performance of the maximum Fisher's information, the maximum posterior weighted information, and the minimum expected posterior variance methods for selecting items in a computerized adaptive testing system when the items were grouped in testlets. A simulation study compared the efficiency of ability estimation among the…

  4. Low genetic variance in the duration of the incubation period in a collared flycatcher (Ficedula albicollis) population.

    PubMed

    Husby, Arild; Gustafsson, Lars; Qvarnström, Anna

    2012-01-01

    The avian incubation period is associated with high energetic costs and mortality risks suggesting that there should be strong selection to reduce the duration to the minimum required for normal offspring development. Although there is much variation in the duration of the incubation period across species, there is also variation within species. It is necessary to estimate to what extent this variation is genetically determined if we want to predict the evolutionary potential of this trait. Here we use a long-term study of collared flycatchers to examine the genetic basis of variation in incubation duration. We demonstrate limited genetic variance as reflected in the low and nonsignificant additive genetic variance, with a corresponding heritability of 0.04 and coefficient of additive genetic variance of 2.16. Any selection acting on incubation duration will therefore be inefficient. To our knowledge, this is the first time heritability of incubation duration has been estimated in a natural bird population. © 2011 by The University of Chicago.

  5. Overlap between treatment and control distributions as an effect size measure in experiments.

    PubMed

    Hedges, Larry V; Olkin, Ingram

    2016-03-01

    The proportion π of treatment group observations that exceed the control group mean has been proposed as an effect size measure for experiments that randomly assign independent units into 2 groups. We give the exact distribution of a simple estimator of π based on the standardized mean difference and use it to study the small sample bias of this estimator. We also give the minimum variance unbiased estimator of π under 2 models, one in which the variance of the mean difference is known and one in which the variance is unknown. We show how to use the relation between the standardized mean difference and the overlap measure to compute confidence intervals for π and show that these results can be used to obtain unbiased estimators, large sample variances, and confidence intervals for 3 related effect size measures based on the overlap. Finally, we show how the effect size π can be used in a meta-analysis. (c) 2016 APA, all rights reserved.
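
    The link between the standardized mean difference and the overlap measure rests on π = Φ(δ) for normal populations. A minimal sketch of the simple plug-in estimator (without the small-sample unbiasedness correction the paper derives) is:

```python
import math

def overlap_pi(d):
    """Plug-in estimate of pi = P(treatment observation > control mean),
    equal to Phi(d) under normality, where d is the standardized mean
    difference. (The minimum variance unbiased estimator in the paper
    adds a small-sample correction on top of this.)"""
    return 0.5 * (1.0 + math.erf(d / math.sqrt(2.0)))

print(overlap_pi(0.0))            # 0.5: no effect, half the treatment group exceeds the control mean
print(round(overlap_pi(1.0), 3))  # 0.841 for a standardized difference of 1
```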

  6. Wavelet-based multiscale analysis of minimum toe clearance variability in the young and elderly during walking.

    PubMed

    Khandoker, Ahsan H; Karmakar, Chandan K; Begg, Rezaul K; Palaniswami, Marimuthu

    2007-01-01

    As humans age or are influenced by pathology of the neuromuscular system, gait patterns are known to adjust, accommodating for reduced function in the balance control system. The aim of this study was to investigate the effectiveness of a wavelet-based multiscale analysis of a gait variable [minimum toe clearance (MTC)] in deriving indexes for understanding age-related declines in gait performance and for screening balance impairments in the elderly. MTC during treadmill walking was analyzed for 30 healthy young, 27 healthy elderly, and 10 falls-risk elderly subjects with a history of tripping falls. The MTC signal from each subject was decomposed into eight detailed signals at different wavelet scales using the discrete wavelet transform. The variances of the detailed signals at scales 8 to 1 were calculated. The multiscale exponent (beta) was then estimated from the slope of the variance progression at successive scales. The variance at scale 5 was significantly (p<0.01) different between the young and healthy elderly groups. Results also suggest that the beta between scales 1 and 2 is effective for recognizing falls-risk gait patterns. These results have implications for quantifying gait dynamics in normal, ageing, and pathological conditions. Early detection of gait pattern changes due to ageing and balance impairments using wavelet-based multiscale analysis might provide the opportunity to initiate preemptive measures to avoid injurious falls.

  7. Determining size and dispersion of minimum viable populations for land management planning and species conservation

    NASA Astrophysics Data System (ADS)

    Lehmkuhl, John F.

    1984-03-01

    The concept of minimum populations of wildlife and plants has only recently been discussed in the literature. Population genetics has emerged as a basic underlying criterion for determining minimum population size. This paper presents a genetic framework and procedure for determining minimum viable population size and dispersion strategies in the context of multiple-use land management planning. A procedure is presented for determining minimum population size based on maintenance of genetic heterozygosity and reduction of inbreeding. A minimum effective population size (Ne) of 50 breeding animals is taken from the literature as the minimum short-term size to keep inbreeding below 1% per generation. Steps in the procedure adjust Ne to account for variance in progeny number, unequal sex ratios, overlapping generations, population fluctuations, and the period of habitat/population constraint. The result is an approximate census number that falls within a range of effective population sizes of 50-500 individuals. This population range defines the span from short- to long-term population fitness and evolutionary potential; the length of the term is a relative function of the species' generation time. Two population dispersion strategies are proposed: core population and dispersed population.
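
    One of the adjustments the procedure applies, for unequal sex ratios, is Wright's classical effective-size formula; a minimal sketch with made-up breeding numbers:

```python
def effective_size_unequal_sexes(n_males, n_females):
    """Wright's effective population size under unequal sex ratios:
    Ne = 4 * Nm * Nf / (Nm + Nf)."""
    return 4.0 * n_males * n_females / (n_males + n_females)

# 10 breeding males and 40 breeding females: census size is 50, but Ne = 32
print(effective_size_unequal_sexes(10, 40))  # 32.0
```

    With an even sex ratio the formula recovers the census number of breeders, which is why skewed ratios require a larger census population to reach the Ne = 50 target.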

  8. Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process

    PubMed Central

    Haines, Aaron M.; Zak, Matthew; Hammond, Katie; Scott, J. Michael; Goble, Dale D.; Rachlow, Janet L.

    2013-01-01

    Simple Summary The objective of our study was to evaluate the mention of uncertainty (i.e., variance) associated with population size estimates within U.S. recovery plans for endangered animals. To do this we reviewed all finalized recovery plans for listed terrestrial vertebrate species. We found that more recent recovery plans reported more estimates of population size and uncertainty. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty. We recommend that updated recovery plans combine uncertainty of population size estimates with a minimum detectable difference to aid in successful recovery. Abstract United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) if a current population size was given, (2) if a measure of uncertainty or variance was associated with current estimates of population size and (3) if population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty compared to reptiles and amphibians. 
We suggest the use of calculating minimum detectable differences to improve confidence when delisting endangered animals and we identified incentives for individuals to get involved in recovery planning to improve access to quantitative data. PMID:26479531

  9. Sampling intraspecific variability in leaf functional traits: Practical suggestions to maximize collected information.

    PubMed

    Petruzzellis, Francesco; Palandrani, Chiara; Savi, Tadeja; Alberti, Roberto; Nardini, Andrea; Bacaro, Giovanni

    2017-12-01

    The choice of the best sampling strategy to capture mean values of functional traits for a species/population, while maintaining information about trait variability and minimizing sampling size and effort, is an open issue in functional trait ecology. Intraspecific variability (ITV) of functional traits strongly influences sampling size and effort. However, while adequate information is available about intraspecific variability between individuals (ITV_BI) and among populations (ITV_POP), relatively few studies have analyzed intraspecific variability within individuals (ITV_WI). Here, we provide an analysis of ITV_WI of two foliar traits, namely specific leaf area (SLA) and osmotic potential (π), in a population of Quercus ilex L. We assessed the baseline ITV_WI level of variation between the two traits and provided the minimum and optimal sampling sizes needed to take ITV_WI into account, comparing sampling optimization outputs with those previously proposed in the literature. Different factors accounted for different amounts of variance in the two traits. SLA variance was mostly spread within individuals (43.4% of the total variance), while π variance was mainly spread between individuals (43.2%). Strategies that did not account for all the canopy strata produced mean values not representative of the sampled population. The minimum size to adequately capture the studied functional traits corresponded to 5 leaves taken randomly from 5 individuals, while the most accurate and feasible sampling size was 4 leaves taken randomly from 10 individuals. We demonstrate that the spatial structure of the canopy can significantly affect trait variability. Moreover, different strategies for different traits could be implemented during sampling surveys. We partially confirm sampling sizes previously proposed in the recent literature and encourage future analyses involving different traits.

  10. REML/BLUP and sequential path analysis in estimating genotypic values and interrelationships among simple maize grain yield-related traits.

    PubMed

    Olivoto, T; Nardino, M; Carvalho, I R; Follmann, D N; Ferrari, M; Szareski, V J; de Pelegrin, A J; de Souza, V Q

    2017-03-22

    Methodologies using restricted maximum likelihood/best linear unbiased prediction (REML/BLUP) in combination with sequential path analysis in maize are still limited in the literature. Therefore, the aims of this study were: i) to use REML/BLUP-based procedures to estimate variance components, genetic parameters, and genotypic values of simple maize hybrids, and ii) to fit stepwise regressions on genotypic values to form a path diagram with multi-order predictors and minimum multicollinearity that explains the cause-and-effect relationships among grain yield-related traits. Fifteen commercial simple maize hybrids were evaluated in multi-environment trials in a randomized complete block design with four replications. The environmental variance (78.80%) and genotype-by-environment variance (20.83%) accounted for more than 99% of the phenotypic variance of grain yield, which hampers direct selection by breeders for this trait. The sequential path analysis model allowed the selection of traits with high explanatory power and minimum multicollinearity, resulting in models with an elevated fit (R² > 0.9 and ε < 0.3). The number of kernels per ear (NKE) and thousand-kernel weight (TKW) are the traits with the largest direct effects on grain yield (r = 0.66 and 0.73, respectively). The high accuracy of selection (0.86 and 0.89) associated with the high heritability of the average (0.732 and 0.794) for NKE and TKW, respectively, indicated good reliability and prospects of success in the indirect selection of hybrids with high yield potential through these traits. The negative direct effect of NKE on TKW (r = -0.856), however, must be considered. The joint use of mixed models and sequential path analysis is effective in the evaluation of maize-breeding trials.

  11. A de-noising method using the improved wavelet threshold function based on noise variance estimation

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Wang, Weida; Xiang, Changle; Han, Lijin; Nie, Haizhao

    2018-01-01

    Precise and efficient noise variance estimation is very important when using the wavelet transform to analyze signals and extract signal features. Because traditional noise variance estimation is strongly affected by fluctuations in the noise values, this study proposes classifying the high-frequency wavelet coefficients at the finest scale with a two-state Gaussian mixture model, taking both efficiency and accuracy into account. Building on this noise variance estimate, a novel improved wavelet threshold function is proposed that combines the advantages of the hard and soft threshold functions, and on the basis of the noise variance estimation algorithm and the improved threshold function, a novel wavelet threshold de-noising method is put forward. The method is tested and validated using random signals and bench-test data from an electro-mechanical transmission system. The test results indicate that the proposed de-noising method performs well on the test signals of the electro-mechanical transmission system: it effectively eliminates the interference of transient signals, including voltage, current, and oil pressure, while favorably preserving the dynamic characteristics of the signals.
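
    For contrast with the paper's Gaussian-mixture classifier, the classical baseline it improves on — the MAD noise estimate of Donoho and Johnstone applied to the finest-scale detail coefficients, combined with standard soft thresholding — can be sketched as follows (synthetic noise data for illustration):

```python
import numpy as np

def noise_sigma_mad(finest_detail):
    """Classical robust noise estimate from the finest-scale detail
    coefficients: sigma ~= median(|d|) / 0.6745 (Donoho & Johnstone).
    The paper replaces this with a two-state Gaussian mixture classifier."""
    return np.median(np.abs(finest_detail)) / 0.6745

def soft_threshold(d, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(d) * np.maximum(np.abs(d) - t, 0.0)

rng = np.random.default_rng(1)
d = rng.normal(0.0, 0.5, 4096)              # synthetic pure-noise details, true sigma = 0.5
sigma = noise_sigma_mad(d)                  # robust estimate, close to 0.5
t = sigma * np.sqrt(2.0 * np.log(d.size))   # universal threshold
print(soft_threshold(np.array([2.0, -2.0, 0.3]), 1.0))  # [ 1. -1.  0.]
```

    The improved threshold function in the paper interpolates between the hard and soft rules shown here; the exact form is given in the article.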

  12. Efficient design of cluster randomized trials with treatment-dependent costs and treatment-dependent unknown variances.

    PubMed

    van Breukelen, Gerard J P; Candel, Math J J M

    2018-06-10

    Cluster randomized trials evaluate the effect of a treatment on persons nested within clusters, where treatment is randomly assigned to clusters. Current equations for the optimal sample size at the cluster and person level assume that the outcome variances and/or the study costs are known and homogeneous between treatment arms. This paper presents efficient yet robust designs for cluster randomized trials with treatment-dependent costs and treatment-dependent unknown variances, and compares these with 2 practical designs. First, the maximin design (MMD) is derived, which maximizes the minimum efficiency (minimizes the maximum sampling variance) of the treatment effect estimator over a range of treatment-to-control variance ratios. The MMD is then compared with the optimal design for homogeneous variances and costs (balanced design), and with that for homogeneous variances and treatment-dependent costs (cost-considered design). The results show that the balanced design is the MMD if the treatment-to-control cost ratio is the same at both design levels (cluster, person) and within the range for the treatment-to-control variance ratio. It still is highly efficient and better than the cost-considered design if the cost ratio is within the range for the squared variance ratio. Outside that range, the cost-considered design is better and highly efficient, but it is not the MMD. An example shows sample size calculation for the MMD, and the computer code (SPSS and R) is provided as supplementary material. The MMD is recommended for trial planning if the study costs are treatment-dependent and homogeneity of variances cannot be assumed. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

  13. Numerically stable algorithm for combining census and sample estimates with the multivariate composite estimator

    Treesearch

    R. L. Czaplewski

    2009-01-01

    The minimum variance multivariate composite estimator is a relatively simple sequential estimator for complex sampling designs (Czaplewski 2009). Such designs combine a probability sample of expensive field data with multiple censuses and/or samples of relatively inexpensive multi-sensor, multi-resolution remotely sensed data. Unfortunately, the multivariate composite...

  14. Multiple Signal Classification for Determining Direction of Arrival of Frequency Hopping Spread Spectrum Signals

    DTIC Science & Technology

    2014-03-27

    Contents excerpt: 4.2.3 Number of Hops Hs … 45; 4.2.4 Number of Sensors M … 45; 4.5 Standard deviation vs. Ns … 46; 4.6 Bias … Acronym list excerpt: … laboratory; MTM multiple taper method; MUSIC multiple signal classification; MVDR minimum variance distortionless response; PSK phase shift keying; QAM

  15. Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter

    PubMed Central

    Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Gu, Chengfan

    2018-01-01

    This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. The methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as the local filter, improving the adaptability and robustness of local state estimation against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate a globally optimal state estimate by fusing the local estimates. The proposed methodology effectively suppresses the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems, and it achieves globally optimal fusion results under the linear minimum-variance principle. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation. PMID:29415509

  16. Fast computation of an optimal controller for large-scale adaptive optics.

    PubMed

    Massioni, Paolo; Kulcsár, Caroline; Raynaud, Henri-François; Conan, Jean-Marc

    2011-11-01

    The linear quadratic Gaussian regulator provides the minimum-variance control solution for a linear time-invariant system. For adaptive optics (AO) applications, under the hypothesis of a deformable mirror with instantaneous response, such a controller boils down to a minimum-variance phase estimator (a Kalman filter) and a projection onto the mirror space. The Kalman filter gain can be computed by solving an algebraic Riccati matrix equation, whose computational complexity grows very quickly with the size of the telescope aperture. This "curse of dimensionality" makes the standard solvers for Riccati equations very slow in the case of extremely large telescopes. In this article, we propose a way of computing the Kalman gain for AO systems by means of an approximation that considers the turbulence phase screen as the cropped version of an infinite-size screen. We demonstrate the advantages of the method for both off-line and on-line computational time, and we evaluate its performance for classical AO as well as for wide-field tomographic AO with multiple natural guide stars. Simulation results are reported.
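
    The Riccati computation at the heart of the argument can be sketched with a plain fixed-point iteration on a toy model; large-scale AO solvers such as the paper's replace this with structure-exploiting approximations (all numbers here are illustrative):

```python
import numpy as np

def kalman_gain(A, C, Q, R, n_iter=500):
    """Steady-state Kalman gain by fixed-point iteration of the discrete
    algebraic Riccati equation (plain iteration; specialized large-scale
    solvers exploit problem structure, as the paper does for AO)."""
    P = Q.copy()
    for _ in range(n_iter):
        # predicted-covariance form of the Riccati recursion
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        P = A @ (P - K @ C @ P) @ A.T + Q
    return P @ C.T @ np.linalg.inv(C @ P @ C.T + R)

# toy scalar example: x_{k+1} = 0.9 x_k + w,  y_k = x_k + v
A = np.array([[0.9]]); C = np.array([[1.0]])
Q = np.array([[0.1]]); R = np.array([[0.2]])
K = kalman_gain(A, C, Q, R)
print(K.shape)  # (1, 1)
```

    The cost of this iteration is dominated by the matrix inversions and products, which scale cubically with the state dimension; this is exactly the growth that motivates the paper's cropped-infinite-screen approximation.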

  17. Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter.

    PubMed

    Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Zhong, Yongmin; Gu, Chengfan

    2018-02-06

    This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. The methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as the local filter, improving the adaptability and robustness of local state estimation against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate a globally optimal state estimate by fusing the local estimates. The proposed methodology effectively suppresses the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems, and it achieves globally optimal fusion results under the linear minimum-variance principle. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation.
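
    For independent, unbiased local estimates, the linear minimum-variance fusion principle used at the top level reduces to inverse-variance weighting; a minimal sketch with made-up sensor readings:

```python
import numpy as np

def lmv_fuse(estimates, variances):
    """Linear minimum-variance fusion of independent, unbiased local
    estimates: inverse-variance weighting. Returns the fused estimate
    and its variance (never larger than the smallest input variance)."""
    v = np.asarray(variances, dtype=float)
    w = (1.0 / v) / np.sum(1.0 / v)
    fused = float(np.dot(w, estimates))
    fused_var = 1.0 / np.sum(1.0 / v)
    return fused, fused_var

# made-up local estimates of the same state from two sensors
x, v = lmv_fuse([10.2, 9.8], [1.0, 1.0])
print(x, v)  # equal variances -> simple average; fused variance is halved
```

    The full methodology handles correlated, vector-valued local estimates; this scalar independent case shows why fusing local filters tightens the global estimate.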

  18. Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices

    NASA Astrophysics Data System (ADS)

    Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah@Rozita

    2014-06-01

    Traditional portfolio optimization methods such as Markowitz's mean-variance model and the semi-variance model utilize static expected return and volatility risk from historical data to generate an optimal portfolio. The optimal portfolio may not truly be optimal in reality because maximum and minimum values in the data may strongly influence the expected return and volatility risk values. This paper considers distributions of assets' return and volatility risk to determine a more realistic optimized portfolio. For illustration purposes, the sectorial indices data in FTSE Bursa Malaysia is employed. The results show that stochastic optimization provides a more stable information ratio.
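For background, the static Markowitz setup that the abstract contrasts with stochastic optimization can be sketched as the global minimum-variance portfolio computed from a fixed historical covariance matrix (illustrative toy numbers, not the FTSE Bursa Malaysia data):

```python
import numpy as np

def min_variance_weights(Sigma):
    """Global minimum-variance portfolio: w = Sigma^-1 1 / (1^T Sigma^-1 1),
    i.e. the fully-invested portfolio with the smallest variance."""
    ones = np.ones(Sigma.shape[0])
    w = np.linalg.solve(Sigma, ones)
    return w / w.sum()                 # weights sum to 1

Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])  # toy sectorial covariance matrix
w = min_variance_weights(Sigma)
```

The resulting variance can never exceed that of the least-volatile single asset, since holding that asset alone is a feasible fully-invested portfolio.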

  19. SU-F-T-18: The Importance of Immobilization Devices in Brachytherapy Treatments of Vaginal Cuff

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shojaei, M; Dumitru, N; Pella, S

    2016-06-15

    Purpose: High dose rate brachytherapy is a highly localized radiation therapy with a very steep dose gradient; thus one of the most important parts of the treatment is immobilization. The smallest movement of the patient or applicator can result in dose variation to the surrounding tissues as well as to the tumor to be treated. We review the ML Cylinder treatments and their localization challenges. Methods: A retrospective study of 25 patients with 5 treatments each, looking into the applicator's placement in regard to the organs at risk. Motion possibilities for each applicator, intra- and inter-fraction, with their dosimetric implications were covered and measured with regard to their dose variance. The localization and immobilization devices used were assessed for their capability to prevent motion before and during treatment delivery. Results: We focused on the 100% isodose on the central axis and a 15 degree displacement due to possible rotation, analyzing the dose variations to the bladder and rectum walls. The average dose variation for the bladder was 15% of the accepted tolerance, with a minimum variance of 11.1% and a maximum of 23.14% on the central axis. For the off-axis measurements we found an average variation of 16.84% of the accepted tolerance, with a minimum variance of 11.47% and a maximum of 27.69%. For the rectum we focused on the rectum wall closest to the 120% isodose line. The average dose variation was 19.4%, with a minimum of 11.3% and a maximum of 34.02% of the accepted tolerance values. Conclusion: Improved immobilization devices are recommended. For inter-fractionation, localization devices are recommended in place, with planning consistent with the initial fraction. Many of the present immobilization devices produced for external radiotherapy can be used to improve the localization of HDR applicators during transportation of the patient and during treatment.

  20. Prediction of episodic acidification in North-eastern USA: An empirical/mechanistic approach

    USGS Publications Warehouse

    Davies, T.D.; Tranter, M.; Wigington, P.J.; Eshleman, K.N.; Peters, N.E.; Van Sickle, J.; DeWalle, David R.; Murdoch, Peter S.

    1999-01-01

    Observations from the US Environmental Protection Agency's Episodic Response Project (ERP) in the North-eastern United States are used to develop an empirical/mechanistic scheme for prediction of the minimum values of acid neutralizing capacity (ANC) during episodes. An acidification episode is defined as a hydrological event during which ANC decreases. The pre-episode ANC is used to index the antecedent condition, and the stream flow increase reflects how much the relative contributions of sources of waters change during the episode. As much as 92% of the total variation in the minimum ANC in individual catchments can be explained (with levels of explanation >70% for nine of the 13 streams) by a multiple linear regression model that includes pre-episode ANC and change in discharge as independent variables. The predictive scheme is demonstrated to be regionally robust, with the regional variance explained ranging from 77 to 83%. The scheme is not successful for each ERP stream, and reasons are suggested for the individual failures. The potential for applying the predictive scheme to other watersheds is demonstrated by testing the model with data from the Panola Mountain Research Watershed in the South-eastern United States, where the variance explained by the model was 74%. The model can also be utilized to assess 'chemically new' and 'chemically old' water sources during acidification episodes.

  1. Variable variance Preisach model for multilayers with perpendicular magnetic anisotropy

    NASA Astrophysics Data System (ADS)

    Franco, A. F.; Gonzalez-Fuentes, C.; Morales, R.; Ross, C. A.; Dumas, R.; Åkerman, J.; Garcia, C.

    2016-08-01

    We present a variable variance Preisach model that fully accounts for the different magnetization processes of a multilayer structure with perpendicular magnetic anisotropy by adjusting the evolution of the interaction variance as the magnetization changes. We successfully compare in a quantitative manner the results obtained with this model to experimental hysteresis loops of several [CoFeB/Pd]_n multilayers. The effect of the number of repetitions and the thicknesses of the CoFeB and Pd layers on the magnetization reversal of the multilayer structure is studied, and it is found that many of the observed phenomena can be attributed to an increase of the magnetostatic interactions and subsequent decrease of the size of the magnetic domains. Increasing the CoFeB thickness leads to the disappearance of the perpendicular anisotropy, and a minimum thickness of the Pd layer is necessary to achieve an out-of-plane magnetization.

  2. Experimental study on an FBG strain sensor

    NASA Astrophysics Data System (ADS)

    Liu, Hong-lin; Zhu, Zheng-wei; Zheng, Yong; Liu, Bang; Xiao, Feng

    2018-01-01

    Landslides and other geological disasters occur frequently and often cause high financial and humanitarian costs. Real-time, early-warning monitoring of landslides is of great significance for reducing casualties and property losses. In this paper, taking advantage of the high initial precision and high sensitivity of FBGs, an FBG strain sensor is designed by combining FBGs with an inclinometer. The sensor is regarded as a cantilever beam with one end fixed. According to the anisotropic material properties of the inclinometer, a theoretical formula relating the FBG wavelength to the deflection of the sensor was established using elastic mechanics principles. The accuracy of the established formula was verified through laboratory calibration testing and model slope monitoring experiments. The displacement of the landslide could be calculated from the established theoretical formula using the changes in FBG central wavelength obtained remotely by the demodulation instrument. Results showed that the maximum error at different heights was 9.09%; the average of the maximum error was 6.35%, and its corresponding variance was 2.12; the minimum error was 4.18%; the average of the minimum error was 5.99%, and its corresponding variance was 0.50. The maximum error between the theoretical and the measured displacement decreases gradually, and the variance of the error also decreases gradually. This indicates that the theoretical results are increasingly reliable. It also shows that the sensor and the theoretical formula established in this paper can be used for remote, real-time, high-precision early-warning monitoring of slopes.

  3. Post-stratified estimation: with-in strata and total sample size recommendations

    Treesearch

    James A. Westfall; Paul L. Patterson; John W. Coulston

    2011-01-01

    Post-stratification is used to reduce the variance of estimates of the mean. Because the stratification is not fixed in advance, within-strata sample sizes can be quite small. The survey statistics literature provides some guidance on minimum within-strata sample sizes; however, the recommendations and justifications are inconsistent and apply broadly for many...

  4. Limited variance control in statistical low thrust guidance analysis. [stochastic algorithm for SEP comet Encke flyby mission

    NASA Technical Reports Server (NTRS)

    Jacobson, R. A.

    1975-01-01

    Difficulties arise in guiding a solar electric propulsion spacecraft due to nongravitational accelerations caused by random fluctuations in the magnitude and direction of the thrust vector. These difficulties may be handled by using a low thrust guidance law based on the linear-quadratic-Gaussian problem of stochastic control theory with a minimum terminal miss performance criterion. Explicit constraints are imposed on the variances of the control parameters, and an algorithm based on the Hilbert space extension of a parameter optimization method is presented for calculation of gains in the guidance law. The terminal navigation of a 1980 flyby mission to the comet Encke is used as an example.

  5. Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah Rozita

    2014-06-19

    Traditional portfolio optimization methods such as Markowitz's mean-variance model and the semi-variance model utilize static expected return and volatility risk from historical data to generate an optimal portfolio. The optimal portfolio may not truly be optimal in reality because maximum and minimum values in the data may strongly influence the expected return and volatility risk values. This paper considers distributions of assets' return and volatility risk to determine a more realistic optimized portfolio. For illustration purposes, the sectorial indices data in FTSE Bursa Malaysia is employed. The results show that stochastic optimization provides a more stable information ratio.

  6. The Z Proteins of Pathogenic but Not Nonpathogenic Arenaviruses Inhibit RIG-i-Like Receptor-Dependent Interferon Production

    PubMed Central

    Xing, Junji; Ly, Hinh

    2014-01-01

    ABSTRACT Arenavirus pathogens cause a wide spectrum of diseases in humans ranging from central nervous system disease to lethal hemorrhagic fevers with few treatment options. The reason why some arenaviruses can cause severe human diseases while others cannot is unknown. We find that the Z proteins of all known pathogenic arenaviruses, lymphocytic choriomeningitis virus (LCMV) and Lassa, Junin, Machupo, Sabia, Guanarito, Chapare, Dandenong, and Lujo viruses, can inhibit retinoic acid-inducible gene 1 (RIG-i) and Melanoma Differentiation-Associated protein 5 (MDA5), in sharp contrast to those of 14 other nonpathogenic arenaviruses. Inhibition of the RIG-i-like receptors (RLRs) by pathogenic Z proteins is mediated by the protein-protein interactions of Z and RLRs, which lead to the disruption of the interactions between RLRs and mitochondrial antiviral signaling (MAVS). The Z-RLR interactive interfaces are located within the N-terminal domain (NTD) of the Z protein and the N-terminal CARD domains of RLRs. Swapping of the LCMV Z NTD into the nonpathogenic Pichinde virus (PICV) genome does not affect virus growth in Vero cells but significantly inhibits the type I interferon (IFN) responses and increases viral replication in human primary macrophages. In summary, our results show for the first time an innate immune-system-suppressive mechanism shared by the diverse pathogenic arenaviruses and thus shed important light on the pathogenic mechanism of human arenavirus pathogens. IMPORTANCE We show that all known human-pathogenic arenaviruses share an innate immune suppression mechanism that is based on viral Z protein-mediated RLR inhibition. Our report offers important insights into the potential mechanism of arenavirus pathogenesis, provides a convenient way to evaluate the pathogenic potential of known and/or emerging arenaviruses, and reveals a novel target for the development of broad-spectrum therapies to treat this group of diverse pathogens. 
More broadly, our report provides a better understanding of the mechanisms of viral immune suppression and host-pathogen interactions. PMID:25552708

  7. Intelligent ensemble T-S fuzzy neural networks with RCDPSO_DM optimization for effective handling of complex clinical pathway variances.

    PubMed

    Du, Gang; Jiang, Zhibin; Diao, Xiaodi; Yao, Yang

    2013-07-01

    Takagi-Sugeno (T-S) fuzzy neural networks (FNNs) can be used to handle complex, fuzzy, uncertain clinical pathway (CP) variances. However, there are many drawbacks, such as slow training rate, propensity to become trapped in a local minimum and poor ability to perform a global search. In order to improve overall performance of variance handling by T-S FNNs, a new CP variance handling method is proposed in this study. It is based on random cooperative decomposing particle swarm optimization with double mutation mechanism (RCDPSO_DM) for T-S FNNs. Moreover, the proposed integrated learning algorithm, combining the RCDPSO_DM algorithm with a Kalman filtering algorithm, is applied to optimize antecedent and consequent parameters of constructed T-S FNNs. Then, a multi-swarm cooperative immigrating particle swarm algorithm ensemble method is used for intelligent ensemble T-S FNNs with RCDPSO_DM optimization to further improve stability and accuracy of CP variance handling. Finally, two case studies on liver and kidney poisoning variances in osteosarcoma preoperative chemotherapy are used to validate the proposed method. The result demonstrates that intelligent ensemble T-S FNNs based on the RCDPSO_DM achieves superior performances, in terms of stability, efficiency, precision and generalizability, over PSO ensemble of all T-S FNNs with RCDPSO_DM optimization, single T-S FNNs with RCDPSO_DM optimization, standard T-S FNNs, standard Mamdani FNNs and T-S FNNs based on other algorithms (cooperative particle swarm optimization and particle swarm optimization) for CP variance handling. Therefore, it makes CP variance handling more effective. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. Assessing the Minimum Number of Synchronization Triggers Necessary for Temporal Variance Compensation in Commercial Electroencephalography (EEG) Systems

    DTIC Science & Technology

    2012-09-01

    by the ARL Translational Neuroscience Branch. It covers the Emotiv EPOC, Advanced Brain Monitoring (ABM) B-Alert X10, and QUASAR DSI helmet-based systems (ARL-TR-5945; U.S. Army Research Laboratory: Aberdeen Proving Ground, MD, 2012).

  9. Foreign Language Training in U.S. Undergraduate IB Programs: Are We Providing Students What They Need to Be Successful?

    ERIC Educational Resources Information Center

    Johnson, Jim

    2017-01-01

    A growing number of U.S. business schools now offer an undergraduate degree in international business (IB), for which training in a foreign language is a requirement. However, there appears to be considerable variance in the minimum requirements for foreign language training across U.S. business schools, including the provision of…

  10. Iterative Minimum Variance Beamformer with Low Complexity for Medical Ultrasound Imaging.

    PubMed

    Deylami, Ali Mohades; Asl, Babak Mohammadzadeh

    2018-06-04

    Minimum variance beamformer (MVB) improves the resolution and contrast of medical ultrasound images compared with the delay and sum (DAS) beamformer. The weight vector of this beamformer must be calculated for each imaging point independently, at the cost of increased computational complexity. The large number of necessary calculations limits this beamformer's application in real-time systems. A beamformer is proposed based on the MVB with lower computational complexity while preserving its advantages. This beamformer avoids matrix inversion, which is the most complex part of the MVB, by solving the optimization problem iteratively. The received signals from two imaging points close together do not vary much in medical ultrasound imaging. Therefore, using the previously optimized weight vector for one point as the initial weight vector for the new neighboring point can improve the convergence speed and decrease the computational complexity. The proposed method was applied to several data sets, and it has been shown that the method can regenerate the results obtained by the MVB while the order of complexity is decreased from O(L³) to O(L²). Copyright © 2018 World Federation for Ultrasound in Medicine and Biology. Published by Elsevier Inc. All rights reserved.
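The idea can be sketched as follows: instead of inverting R, the MVB weight w = R⁻¹a / (aᴴR⁻¹a) is obtained by solving R z = a iteratively, and a neighboring point's solution can seed the next solve. This is an illustrative steepest-descent variant with a hypothetical warm-start interface, not the authors' exact update; each iteration costs only O(L²) matrix-vector work:

```python
import numpy as np

def mvb_weights(R, a, z0=None, iters=50):
    """Solve R z = a by steepest descent, then normalize to the
    distortionless MVB weight w = z / (a^H z)."""
    z = np.zeros_like(a, dtype=float) if z0 is None else z0.copy()
    for _ in range(iters):
        r = a - R @ z                       # residual
        rr = r @ r
        if rr < 1e-30:                      # converged
            break
        z = z + rr / (r @ R @ r) * r        # exact line search step
    return z / (a.conj() @ z), z            # w, plus z for warm starts

rng = np.random.default_rng(0)
L = 8
snapshots = rng.standard_normal((L, 200))
R = snapshots @ snapshots.T / 200 + 0.01 * np.eye(L)  # loaded sample covariance
a = np.ones(L)                                        # broadside steering vector
w, z = mvb_weights(R, a)
```

Passing the previous imaging point's z as z0 warm-starts the neighboring solve, which is the source of the speedup the abstract describes. Note the distortionless constraint aᴴw = 1 holds exactly after the final normalization, regardless of how far the iteration has converged.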

  11. GIS-based niche modeling for mapping species' habitats

    USGS Publications Warehouse

    Rotenberry, J.T.; Preston, K.L.; Knick, S.

    2006-01-01

    Ecological 'niche modeling' using presence-only locality data and large-scale environmental variables provides a powerful tool for identifying and mapping suitable habitat for species over large spatial extents. We describe a niche modeling approach that identifies a minimum (rather than an optimum) set of basic habitat requirements for a species, based on the assumption that constant environmental relationships in a species' distribution (i.e., variables that maintain a consistent value where the species occurs) are most likely to be associated with limiting factors. Environmental variables that take on a wide range of values where a species occurs are less informative because they do not limit a species' distribution, at least over the range of variation sampled. This approach is operationalized by partitioning Mahalanobis D² (the standardized difference between values of a set of environmental variables for any point and mean values for those same variables calculated from all points at which a species was detected) into independent components. The smallest of these components represents the linear combination of variables with minimum variance; increasingly larger components represent larger variances and are increasingly less limiting. We illustrate this approach using the California Gnatcatcher (Polioptila californica Brewster) and provide SAS code to implement it.
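The partitioning of D² can be sketched with a principal-component decomposition (an illustrative reconstruction on synthetic data, not the authors' SAS code): eigenvectors of the presence-point covariance give the independent axes, and the smallest-eigenvalue axis is the minimum-variance combination, i.e. the candidate limiting factor.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 300, 3
# Synthetic presence points: the third variable is nearly constant where
# the species occurs, i.e. it acts as the limiting factor.
pres = rng.standard_normal((n, p)) * np.array([2.0, 1.0, 0.1])
mu = pres.mean(axis=0)
cov = np.cov(pres, rowvar=False)
lam, V = np.linalg.eigh(cov)              # eigenvalues in ascending order

def d2_components(x):
    """Independent components of Mahalanobis D^2; the first (smallest
    variance) component is the most limiting combination."""
    z = V.T @ (x - mu)
    return z**2 / lam                     # terms sum to the full D^2

x = np.array([0.5, 0.5, 0.5])
comps = d2_components(x)
D2 = (x - mu) @ np.linalg.inv(cov) @ (x - mu)
```

The identity Σⱼ zⱼ²/λⱼ = D² holds exactly, so the components are a lossless partition of the Mahalanobis distance.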

  12. Spectral analysis comparisons of Fourier-theory-based methods and minimum variance (Capon) methods

    NASA Astrophysics Data System (ADS)

    Garbanzo-Salas, Marcial; Hocking, Wayne. K.

    2015-09-01

    In recent years, adaptive (data-dependent) methods have been introduced into many areas where Fourier spectral analysis has traditionally been used. Although the data-dependent methods are often advanced as being superior to Fourier methods, they do require some finesse in choosing the order of the relevant filters. In performing comparisons, we have found some concerns about the mappings, particularly in cases involving many spectral lines or even continuous spectral signals. Using numerical simulations, several comparisons between Fourier transform procedures and the minimum variance method (MVM) have been performed. For multiple-frequency signals, the MVM resolves most of the frequency content only for filters that have more degrees of freedom than the number of distinct spectral lines in the signal. In the case of Gaussian spectral approximation, the MVM will always underestimate the width, and can misplace a spectral line in some circumstances. Large filters can be used to improve results with multiple-frequency signals, but are computationally inefficient. Significant biases can occur when using the MVM to study spectral information or echo power from the atmosphere; artifacts and artificial narrowing of turbulent layers are one such impact.
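A minimal numerical sketch of the Capon/MVM estimator under comparison (illustrative; the filter order m plays the role of the "degrees of freedom" discussed above, and all names and parameters are hypothetical):

```python
import numpy as np

def capon_spectrum(x, m, freqs):
    """Minimum variance (Capon) estimate P(f) = m / (e^H R^-1 e) using
    an m x m sample autocovariance matrix with light diagonal loading."""
    N = len(x)
    snaps = np.array([x[i:i + m] for i in range(N - m + 1)]).T
    R = snaps @ snaps.T / snaps.shape[1] + 1e-3 * np.eye(m)
    Rinv = np.linalg.inv(R)
    e = np.exp(2j * np.pi * np.outer(np.arange(m), freqs))  # m x F steering
    return m / np.real(np.einsum('mf,mn,nf->f', e.conj(), Rinv, e))

rng = np.random.default_rng(3)
N = 512
t = np.arange(N)
# Two spectral lines plus weak noise: well resolved since their spacing
# (0.15) exceeds the filter's resolution limit of roughly 1/m.
x = (np.cos(2 * np.pi * 0.10 * t)
     + 0.5 * np.cos(2 * np.pi * 0.25 * t)
     + 0.05 * rng.standard_normal(N))
freqs = np.linspace(0.0, 0.5, 256)
P = capon_spectrum(x, m=16, freqs=freqs)
```

With m = 16 degrees of freedom and only two lines, the MVM resolves both; shrinking m toward the line count reproduces the failure mode the abstract describes.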

  13. An Experience Oriented-Convergence Improved Gravitational Search Algorithm for Minimum Variance Distortionless Response Beamforming Optimum.

    PubMed

    Darzi, Soodabeh; Tiong, Sieh Kiong; Tariqul Islam, Mohammad; Rezai Soleymanpour, Hassan; Kibria, Salehin

    2016-01-01

    An experience oriented-convergence improved gravitational search algorithm (ECGSA) based on two new modifications, searching through the best experiences and use of a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses those as the agents' positions in the searching process. In this way, the optimal found trajectories are retained and the search starts from these trajectories, which allows the algorithm to avoid local optima. Also, the agents can move faster in the search space to obtain better exploration during the first stage of the searching process, and they can converge rapidly to the optimal solution at the final stage of the search by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to the adaptive beamforming problem as a practical issue, to improve the weight vectors computed by the minimum variance distortionless response (MVDR) beamforming technique. The results of the proposed algorithm are compared with some well-known heuristic methods, verifying the proposed method in both reaching optimal solutions and robustness.

  14. Significant improvements of electrical discharge machining performance by step-by-step updated adaptive control laws

    NASA Astrophysics Data System (ADS)

    Zhou, Ming; Wu, Jianyang; Xu, Xiaoyi; Mu, Xin; Dou, Yunping

    2018-02-01

    In order to obtain improved electrical discharge machining (EDM) performance, we have dedicated more than a decade to correcting one essential EDM defect, the weak stability of the machining, by developing adaptive control systems. The instabilities of machining are mainly caused by complicated disturbances in discharging. To counteract the effects of the disturbances on machining, we theoretically developed three control laws: from a minimum variance (MV) control law to a coupled minimum variance and pole placement (MVPPC) control law, and then to a two-step-ahead prediction (TP) control law. Based on real-time estimation of EDM process model parameters and the measured ratio of arcing pulses, which is also called the gap state, the electrode discharging cycle was directly and adaptively tuned so that stable machining could be achieved. To this end, we not only theoretically provide three proven control laws for a developed EDM adaptive control system, but also practically prove the TP control law to be the best in dealing with machining instability and machining efficiency, though the MVPPC control law provided much better EDM performance than the MV control law. It was also shown that the TP control law provided burn-free machining.

  15. The performance of matched-field track-before-detect methods using shallow-water Pacific data.

    PubMed

    Tantum, Stacy L; Nolte, Loren W; Krolik, Jeffrey L; Harmanci, Kerem

    2002-07-01

    Matched-field track-before-detect processing, which extends the concept of matched-field processing to include modeling of the source dynamics, has recently emerged as a promising approach for maintaining the track of a moving source. In this paper, optimal Bayesian and minimum variance beamforming track-before-detect algorithms which incorporate a priori knowledge of the source dynamics in addition to the underlying uncertainties in the ocean environment are presented. A Markov model is utilized for the source motion as a means of capturing the stochastic nature of the source dynamics without assuming uniform motion. In addition, the relationship between optimal Bayesian track-before-detect processing and minimum variance track-before-detect beamforming is examined, revealing how an optimal tracking philosophy may be used to guide the modification of existing beamforming techniques to incorporate track-before-detect capabilities. Further, the benefits of implementing an optimal approach over conventional methods are illustrated through application of these methods to shallow-water Pacific data collected as part of the SWellEX-1 experiment. The results show that incorporating Markovian dynamics for the source motion provides marked improvement in the ability to maintain target track without the use of a uniform velocity hypothesis.

  16. Number-phase minimum-uncertainty state with reduced number uncertainty in a Kerr nonlinear interferometer

    NASA Astrophysics Data System (ADS)

    Kitagawa, M.; Yamamoto, Y.

    1987-11-01

    An alternative scheme for generating amplitude-squeezed states of photons based on unitary evolution which can properly be described by quantum mechanics is presented. This scheme is a nonlinear Mach-Zehnder interferometer containing an optical Kerr medium. The quasi-probability density (QPD) and photon-number distribution of the output field are calculated, and it is demonstrated that the reduced photon-number uncertainty and enhanced phase uncertainty maintain the minimum-uncertainty product. A self-phase-modulation of the single-mode quantized field in the Kerr medium is described based on localized operators. The spatial evolution of the state is demonstrated by QPD in the Schroedinger picture. It is shown that photon-number variance can be reduced to a level far below the limit for an ordinary squeezed state, and that the state prepared using this scheme remains a number-phase minimum-uncertainty state until the maximum reduction of number fluctuations is surpassed.

  17. Relation between Pressure Balance Structures and Polar Plumes from Ulysses High Latitude Observations

    NASA Technical Reports Server (NTRS)

    Yamauchi, Yohei; Suess, Steven T.; Sakurai, Takashi

    2002-01-01

    Ulysses observations have shown that pressure balance structures (PBSs) are a common feature in high-latitude, fast solar wind near solar minimum. Previous studies of Ulysses/SWOOPS plasma data suggest these PBSs may be remnants of coronal polar plumes. Here we find support for this suggestion in an analysis of PBS magnetic structure. We used Ulysses magnetometer data and applied a minimum variance analysis to magnetic discontinuities in PBSs. We found that PBSs preferentially contain tangential discontinuities, as opposed to rotational discontinuities and to non-PBS regions in the solar wind. This suggests that PBSs contain structures like current sheets or plasmoids that may be associated with network activity at the base of plumes.
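Classical minimum variance analysis of this kind can be sketched as follows (illustrative synthetic data, not the Ulysses measurements): the eigenvector of the field covariance matrix with the smallest eigenvalue estimates the discontinuity normal, and a near-zero mean normal component B_n identifies a tangential discontinuity.

```python
import numpy as np

def mva(B):
    """Minimum variance analysis: B is (N, 3) magnetic field samples
    across the discontinuity; returns the estimated normal, the
    eigenvalue spectrum, and the mean field component along the normal."""
    lam, V = np.linalg.eigh(np.cov(B, rowvar=False))
    n_hat = V[:, 0]                    # minimum-variance eigenvector
    return n_hat, lam, float(np.mean(B @ n_hat))

rng = np.random.default_rng(2)
N = 200
theta = np.linspace(0.0, np.pi, N)
# Synthetic tangential discontinuity: the field rotates in the x-y plane
# while only noise appears along z, the true normal.
B = np.column_stack([5 * np.cos(theta),
                     5 * np.sin(theta),
                     0.1 * rng.standard_normal(N)])
n_hat, lam, Bn_mean = mva(B)
```

A rotational discontinuity would instead show a substantial, roughly constant B_n along the recovered normal; the tangential case here gives B_n near zero.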

  18. Relation Between Pressure Balance Structures and Polar Plumes from Ulysses High Latitude Observations

    NASA Technical Reports Server (NTRS)

    Yamauchi, Y.; Suess, Steven T.; Sakurai, T.; Whitaker, Ann F. (Technical Monitor)

    2001-01-01

    Ulysses observations have shown that pressure balance structures (PBSs) are a common feature in high-latitude, fast solar wind near solar minimum. Previous studies of Ulysses/SWOOPS plasma data suggest these PBSs may be remnants of coronal polar plumes. Here we find support for this suggestion in an analysis of PBS magnetic structure. We used Ulysses magnetometer data and applied a minimum variance analysis to discontinuities. We found that PBSs preferentially contain tangential discontinuities, as opposed to rotational discontinuities and to non-PBS regions in the solar wind. This suggests that PBSs contain structures like current sheets or plasmoids that may be associated with network activity at the base of plumes.

  19. Attempts to Simulate Anisotropies of Solar Wind Fluctuations Using MHD with a Turning Magnetic Field

    NASA Technical Reports Server (NTRS)

    Ghosh, Sanjoy; Roberts, D. Aaron

    2010-01-01

    We examine a "two-component" model of the solar wind to see if any of the observed anisotropies of the fields can be explained in light of the need for various quantities, such as the magnetic minimum variance direction, to turn along with the Parker spiral. Previous results used a 3-D MHD spectral code to show that neither Q2D nor slab-wave components will turn their wave vectors in a turning Parker-like field, and that nonlinear interactions between the components are required to reproduce observations. In these new simulations we use higher resolution in both decaying and driven cases, and with and without a turning background field, to see what, if any, conditions lead to variance anisotropies similar to observations. We focus especially on the middle spectral range, and not the energy-containing scales, of the simulation for comparison with the solar wind. Preliminary results have shown that it is very difficult to produce the required variances with a turbulent cascade.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aziz, Mohd Khairul Bazli Mohd, E-mail: mkbazli@yahoo.com; Yusof, Fadhilah, E-mail: fadhilahy@utm.my; Daud, Zalina Mohd, E-mail: zalina@ic.utm.my

    Recently, many rainfall network design techniques have been developed, discussed and compared by many researchers. Present day hydrological studies require higher levels of accuracy from collected data. In numerous basins, the rain gauge stations are located without clear scientific understanding. In this study, an attempt is made to redesign the rain gauge network for Johor, Malaysia in order to meet the required level of accuracy preset by rainfall data users. The existing network of 84 rain gauges in Johor is optimized and redesigned into new locations by using rainfall, humidity, solar radiation, temperature and wind speed data collected during the monsoon season (November - February) of 1975 until 2008. This study used the combination of a geostatistics method (variance-reduction method) and simulated annealing as the optimization algorithm during the redesign process. The result shows that the new rain gauge locations provide the minimum value of estimated variance. This shows that the combination of the geostatistics method (variance-reduction method) and simulated annealing is successful in the development of the new optimum rain gauge system.

  1. Experimental demonstration of quantum teleportation of a squeezed state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takei, Nobuyuki; Aoki, Takao; Yonezawa, Hidehiro

    2005-10-15

    Quantum teleportation of a squeezed state is demonstrated experimentally. Due to some inevitable losses in experiments, a squeezed vacuum necessarily becomes a mixed state which is no longer a minimum uncertainty state. We establish an operational method of evaluation for quantum teleportation of such a state using fidelity and discuss the classical limit for the state. The measured fidelity for the input state is 0.85 ± 0.05, which is higher than the classical case of 0.73 ± 0.04. We also verify that the teleportation process operates properly for the nonclassical state input and that its squeezed variance is certainly transferred through the process. We observe a smaller variance of the teleported squeezed state than that for the vacuum state input.

  2. Quantizing and sampling considerations in digital phased-locked loops

    NASA Technical Reports Server (NTRS)

    Hurst, G. T.; Gupta, S. C.

    1974-01-01

    The quantizer problem is first considered. The conditions under which the uniform white sequence model for the quantizer error is valid are established independent of the sampling rate. An equivalent spectral density is defined for the quantizer error resulting in an effective SNR value. This effective SNR may be used to determine quantized performance from infinitely fine quantized results. Attention is given to sampling rate considerations. Sampling rate characteristics of the digital phase-locked loop (DPLL) structure are investigated for the infinitely fine quantized system. The predicted phase error variance equation is examined as a function of the sampling rate. Simulation results are presented and a method is described which enables the minimum required sampling rate to be determined from the predicted phase error variance equations.
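
    The white-sequence quantizer-error model and the resulting effective SNR can be checked numerically. A minimal sketch (function and variable names are illustrative): for a busy input, the error of a uniform mid-tread quantizer is well modeled as white noise of variance Δ²/12, which yields the effective SNR used in place of infinitely fine quantization.

```python
import numpy as np

def quantizer_error_stats(signal, n_bits, full_scale=1.0):
    """Uniformly quantize `signal` and compare the empirical error variance
    with the white-sequence model prediction delta**2 / 12."""
    delta = 2.0 * full_scale / 2 ** n_bits        # quantizer step size
    quantized = delta * np.round(signal / delta)  # mid-tread quantizer
    error = signal - quantized
    model_var = delta ** 2 / 12.0                 # uniform white-noise model
    snr_db = 10.0 * np.log10(np.var(signal) / model_var)  # effective SNR
    return np.var(error), model_var, snr_db

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 100_000)               # "busy" input: model applies
emp_var, mod_var, snr = quantizer_error_stats(x, n_bits=8)
```

    For a full-scale uniform input the empirical error variance matches the Δ²/12 model closely; for slowly varying or low-amplitude inputs the white-sequence assumption breaks down, which is the condition examined in the abstract.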

  3. Statistical procedures for determination and verification of minimum reporting levels for drinking water methods.

    PubMed

    Winslow, Stephen D; Pepich, Barry V; Martin, John J; Hallberg, George R; Munch, David J; Frebis, Christopher P; Hedrick, Elizabeth J; Krop, Richard A

    2006-01-01

    The United States Environmental Protection Agency's Office of Ground Water and Drinking Water has developed a single-laboratory quantitation procedure: the lowest concentration minimum reporting level (LCMRL). The LCMRL is the lowest true concentration for which future recovery is predicted to fall, with high confidence (99%), between 50% and 150%. The procedure takes into account precision and accuracy. Multiple concentration replicates are processed through the entire analytical method and the data are plotted as measured sample concentration (y-axis) versus true concentration (x-axis). If the data support an assumption of constant variance over the concentration range, an ordinary least-squares regression line is drawn; otherwise, a variance-weighted least-squares regression is used. Prediction interval lines of 99% confidence are drawn about the regression. At the points where the prediction interval lines intersect with data quality objective lines of 50% and 150% recovery, lines are dropped to the x-axis. The higher of the two values is the LCMRL. The LCMRL procedure is flexible because the data quality objectives (50-150%) and the prediction interval confidence (99%) can be varied to suit program needs. The LCMRL determination is performed during method development only. A simpler procedure for verification of data quality objectives at a given minimum reporting level (MRL) is also presented. The verification procedure requires a single set of seven samples taken through the entire method procedure. If the calculated prediction interval is contained within data quality recovery limits (50-150%), the laboratory performance at the MRL is verified.
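
    The regression-and-prediction-interval step can be sketched for the constant-variance case. This is a generic OLS prediction interval, not EPA's exact implementation; the spiked-replicate numbers are illustrative, and the Student-t quantile is passed in directly to keep the sketch dependency-free.

```python
import numpy as np

def ols_prediction_interval(true_conc, measured, x_new, t_crit):
    """OLS fit of measured vs. true concentration plus a two-sided
    prediction interval for a single future measurement at x_new.
    t_crit is the Student-t quantile for the chosen confidence and n-2 df."""
    x = np.asarray(true_conc, float)
    y = np.asarray(measured, float)
    n = len(x)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    s = np.sqrt(resid @ resid / (n - 2))          # residual standard error
    se = s * np.sqrt(1 + 1 / n + (x_new - x.mean()) ** 2
                     / ((x - x.mean()) ** 2).sum())
    center = slope * x_new + intercept
    return center - t_crit * se, center + t_crit * se

# Spiked replicates (illustrative numbers), recovery close to 100%
x = np.array([0.5, 0.5, 1.0, 1.0, 2.0, 2.0, 4.0, 4.0])
y = np.array([0.47, 0.56, 0.93, 1.08, 1.96, 2.10, 3.85, 4.12])
lo, hi = ols_prediction_interval(x, y, x_new=2.0, t_crit=3.707)  # t(0.995, df=6)
```

    The LCMRL would then be read off where the prediction-interval curves, expressed as recoveries lo/x_new and hi/x_new, cross the 50% and 150% data quality objective lines.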

  4. The effectiveness of texture analysis for mapping forest land using the panchromatic bands of Landsat 7, SPOT, and IRS imagery

    Treesearch

    Michael L. Hoppus; Rachel I. Riemann; Andrew J. Lister; Mark V. Finco

    2002-01-01

    The panchromatic bands of Landsat 7, SPOT, and IRS satellite imagery provide an opportunity to evaluate the effectiveness of texture analysis of satellite imagery for mapping of land use/cover, especially forest cover. A variety of texture algorithms, including standard deviation, Ryherd-Woodcock minimum variance adaptive window, low pass etc., were applied to moving...

  5. Solution Methods for Certain Evolution Equations

    NASA Astrophysics Data System (ADS)

    Vega-Guzman, Jose Manuel

    Solution methods for certain linear and nonlinear evolution equations are presented in this dissertation. Emphasis is placed mainly on the analytical treatment of nonautonomous differential equations, which are challenging to solve despite the existing numerical and symbolic computational software programs available. Ideas from transformation theory are adopted, allowing one to solve the problems under consideration from a non-traditional perspective. First, the Cauchy initial value problem is considered for a class of nonautonomous and inhomogeneous linear diffusion-type equations on the entire real line. Explicit transformations are used to reduce the equations under study to their corresponding standard forms, emphasizing natural relations with certain Riccati- (and/or Ermakov-) type systems. These relations give solvability results for the Cauchy problem of the parabolic equation considered. The superposition principle allows one to solve this problem formally from an unconventional point of view. An eigenfunction expansion approach is also considered for this general evolution equation. Examples considered to corroborate the efficacy of the proposed solution methods include the Fokker-Planck equation, the Black-Scholes model and the one-factor Gaussian Hull-White model. The results obtained in the first part are used to solve the Cauchy initial value problem for certain inhomogeneous Burgers-type equations. The connection between linear (diffusion-type) and nonlinear (Burgers-type) parabolic equations is stressed in order to establish a strong commutative relation. Traveling wave solutions of a nonautonomous Burgers equation are also investigated. Finally, the minimum-uncertainty squeezed states for quantum harmonic oscillators are constructed explicitly. They are derived by the action of the corresponding maximal kinematical invariance group on the standard ground state solution. 
It is shown that the product of the variances attains the required minimum value only at the instants when one variance is a minimum and the other is a maximum, that is, when squeezing of one of the variances occurs. Such explicit construction is possible due to the relation between the diffusion-type equation studied in the first part and the time-dependent Schrödinger equation. A modification of the radiation field operators for squeezed photons in a perfect cavity is also suggested with the help of a nonstandard solution of Heisenberg's equation of motion.
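
    The variance statement above can be illustrated with the textbook squeezed-state relations (a generic illustration, not the dissertation's specific construction). For squeezing parameter r, at the instants of extremal squeezing the quadrature variances are

```latex
\sigma_x^2(t_{\min}) = \frac{\hbar}{2m\omega}\, e^{-2r}, \qquad
\sigma_p^2(t_{\min}) = \frac{\hbar m\omega}{2}\, e^{+2r}, \qquad
\sigma_x(t_{\min})\,\sigma_p(t_{\min}) = \frac{\hbar}{2},
```

    so the uncertainty product reaches the Heisenberg minimum exactly when one variance is minimal and the other maximal; at intermediate times the product exceeds ħ/2.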

  6. Eigenspace-based minimum variance beamformer combined with Wiener postfilter for medical ultrasound imaging.

    PubMed

    Zeng, Xing; Chen, Cheng; Wang, Yuanyuan

    2012-12-01

    In this paper, a new beamformer which combines the eigenspace-based minimum variance (ESBMV) beamformer with the Wiener postfilter is proposed for medical ultrasound imaging. The primary goal of this work is to further improve the medical ultrasound imaging quality on the basis of the ESBMV beamformer. In this method, we optimize the ESBMV weights with a Wiener postfilter. With the optimization of the Wiener postfilter, the output power of the new beamformer becomes closer to the actual signal power at the imaging point than that of the ESBMV beamformer. Different from the ordinary Wiener postfilter, the output signal and noise power needed in calculating the Wiener postfilter are estimated respectively by the orthogonal signal subspace and noise subspace constructed from the eigenstructure of the sample covariance matrix. We demonstrate the performance of the new beamformer when resolving point scatterers and a cyst phantom using both simulated data and experimental data and compare it with the delay-and-sum (DAS), the minimum variance (MV) and the ESBMV beamformers. We use the full width at half maximum (FWHM) and the peak-side-lobe level (PSL) to quantify the imaging resolution and the contrast ratio (CR) to quantify the imaging contrast. The FWHM of the new beamformer is only 15%, 50% and 50% of those of the DAS, MV and ESBMV beamformers, while the PSL is 127.2 dB, 115 dB and 60 dB lower. What is more, an improvement of 239.8%, 232.5% and 32.9% in CR using simulated data and an improvement of 814%, 1410.7% and 86.7% in CR using experimental data are achieved compared to the DAS, MV and ESBMV beamformers, respectively. In addition, the effect of the sound speed error is investigated by artificially overestimating the speed used in calculating the propagation delay, and the results show that the new beamformer provides better robustness against sound speed errors. 
Therefore, the proposed beamformer offers a better performance than the DAS, MV and ESBMV beamformer, showing its potential in medical ultrasound imaging. Copyright © 2012 Elsevier B.V. All rights reserved.
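
    The beamformers compared here share a common core. As a hedged sketch (not the paper's implementation): the MV (Capon) weights minimize output power subject to a distortionless constraint on the steering vector a, and ESBMV then projects those weights onto the dominant eigenvectors of the sample covariance matrix.

```python
import numpy as np

def mv_weights(R, a, eps=1e-3):
    """Minimum-variance (Capon) weights: minimize w^H R w subject to
    w^H a = 1, with light diagonal loading for robustness."""
    n = len(R)
    Rl = R + eps * (np.trace(R).real / n) * np.eye(n)
    Ri_a = np.linalg.solve(Rl, a)
    return Ri_a / (a.conj() @ Ri_a)

def esbmv_weights(R, a, n_signal=1, eps=1e-3):
    """Eigenspace-based MV: project the MV weights onto the signal subspace
    spanned by the dominant eigenvectors of the sample covariance matrix."""
    w = mv_weights(R, a, eps)
    _, vecs = np.linalg.eigh(R)          # eigenvalues in ascending order
    Es = vecs[:, -n_signal:]             # dominant (signal) eigenvectors
    return Es @ (Es.conj().T @ w)

# Toy covariance: strong signal along steering vector a plus unit noise
a = np.ones(4, dtype=complex)
R = 10.0 * np.outer(a, a.conj()) + np.eye(4)
w_mv = mv_weights(R, a)
w_esbmv = esbmv_weights(R, a)
```

    In this noise-free toy case the signal subspace already contains the MV weights, so the two weight vectors coincide; with realistic sample covariance matrices the projection is what suppresses the off-axis noise contribution.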

  7. Solar-cycle dependence of a model turbulence spectrum using IMP and ACE observations over 38 years

    NASA Astrophysics Data System (ADS)

    Burger, R. A.; Nel, A. E.; Engelbrecht, N. E.

    2014-12-01

    Ab initio modulation models require a number of turbulence quantities as input for any reasonable diffusion tensor. While turbulence transport models describe the radial evolution of such quantities, they in turn require observations in the inner heliosphere as input values. So far we have concentrated on solar minimum conditions (e.g. Engelbrecht and Burger 2013, ApJ), but are now looking at long-term modulation, which requires turbulence data over at least a solar magnetic cycle. As a start we analyzed 1-minute resolution data for the N-component of the magnetic field, from 1974 to 2012, covering about two solar magnetic cycles (initially using IMP and then ACE data). We assume a very simple three-stage power-law frequency spectrum, calculate the integral from the highest to the lowest frequency, and fit it to variances calculated with lags from 5 minutes to 80 hours. From the fit we then obtain not only the asymptotic variance at large lags, but also the spectral indices of the inertial and energy ranges, as well as the breakpoint between the inertial and energy ranges (bendover scale) and between the energy and cutoff ranges (cutoff scale). All values given here are preliminary. The cutoff range is a constraint imposed in order to ensure a finite energy density; the spectrum is forced to be either flat or to decrease with decreasing frequency in this range. Given that cosmic rays sample magnetic fluctuations over long periods in their transport through the heliosphere, we average the spectra over at least 27 days. We find that the variance of the N-component has a clear solar cycle dependence, with smaller values (~6 nT²) during solar minimum and larger values during solar maximum periods (~17 nT²), well correlated with the magnetic field magnitude (e.g. Smith et al. 2006, ApJ). 
Whereas the inertial range spectral index (-1.65 ± 0.06) does not show a significant solar cycle variation, the energy range index (-1.1 ± 0.3) seems to be anti-correlated with the variance (Bieber et al. 1993, JGR); both indices show close to normal distributions. In contrast, the variance (e.g. Burlaga and Ness, 1998, JGR), as well as both the bendover scale (see Ruiz et al. 2014, Solar Physics) and the cutoff scale, appear to be log-normally distributed.

  8. A new Method for Determining the Interplanetary Current-Sheet Local Orientation

    NASA Astrophysics Data System (ADS)

    Blanco, J. J.; Rodríguez-pacheco, J.; Sequeiros, J.

    2003-03-01

    In this work we have developed a new method for determining the interplanetary current sheet local parameters. The method, called `HYTARO' (from Hyperbolic Tangent Rotation), is based on a modified Harris magnetic field. This method has been applied to a pool of 57 events, all of them recorded during solar minimum conditions. The model performance has been tested by comparing both its outputs and its noise response with those of the classic MVM (Minimum Variance Method). The results suggest that, despite the fact that in many cases they behave in a similar way, there are specific crossing conditions that produce an erroneous MVM response. Moreover, our method shows a lower noise-level sensitivity than that of MVM.
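
    The classic MVM against which HYTARO is compared reduces to an eigenproblem: the estimated current-sheet normal is the eigenvector of the field covariance matrix belonging to the smallest eigenvalue. A minimal sketch on a synthetic Harris-like crossing (the 0.05 normal-field value is an arbitrary test choice):

```python
import numpy as np

def minimum_variance_direction(B):
    """Classic MVA: estimate the sheet normal as the eigenvector of the
    magnetic-field covariance matrix with the smallest eigenvalue.
    B is an (N, 3) array of field samples across the crossing."""
    M = np.cov(B, rowvar=False)        # 3x3 covariance of the components
    vals, vecs = np.linalg.eigh(M)     # eigenvalues in ascending order
    return vecs[:, 0], vals            # min-variance direction, spectrum

# Synthetic crossing: the field rotates in the x-y plane while the
# z-component stays constant, so z is the minimum variance (normal) direction.
t = np.linspace(-1, 1, 200)
bx = np.tanh(3 * t)                    # Harris-like reversal component
B = np.column_stack([bx, np.sqrt(1 - bx ** 2), 0.05 * np.ones_like(t)])
n, eigvals = minimum_variance_direction(B)
```

    The ratio of the intermediate to the smallest eigenvalue is the usual quality check on an MVA normal; the degenerate-crossing conditions where that ratio is small are exactly where the abstract reports MVM becoming unreliable.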

  9. Determining Metacarpophalangeal Flexion Angle Tolerance for Reliable Volumetric Joint Space Measurements by High-resolution Peripheral Quantitative Computed Tomography.

    PubMed

    Tom, Stephanie; Frayne, Mark; Manske, Sarah L; Burghardt, Andrew J; Stok, Kathryn S; Boyd, Steven K; Barnabe, Cheryl

    2016-10-01

    The position-dependence of a method to measure the joint space of metacarpophalangeal (MCP) joints using high-resolution peripheral quantitative computed tomography (HR-pQCT) was studied. Cadaveric MCP joints were imaged at 7 flexion angles between 0 and 30 degrees. The variability in reproducibility for mean, minimum, and maximum joint space widths and volume measurements was calculated for increasing degrees of flexion. Root mean square coefficient of variation values were < 5% under 20 degrees of flexion for mean, maximum, and volumetric joint spaces. Values for minimum joint space width were optimized under 10 degrees of flexion. MCP joint space measurements should be acquired at < 10 degrees of flexion in longitudinal studies.

  10. Comparison of reproducibility of natural head position using two methods.

    PubMed

    Khan, Abdul Rahim; Rajesh, R N G; Dinesh, M R; Sanjay, N; Girish, K S; Venkataraghavan, Karthik

    2012-01-01

    Lateral cephalometric radiographs have become virtually indispensable to orthodontists in the treatment of patients. They are important in orthodontic growth analysis, diagnosis, treatment planning, monitoring of therapy and evaluation of the final treatment outcome. The purpose of this study was to evaluate and compare the maximum reproducibility with minimum variation of natural head position using two methods, i.e. the mirror method and the fluid level device method. The study included two sets of 40 lateral cephalograms taken using two methods of obtaining natural head position: (1) the mirror method and (2) the fluid level device method, with a time interval of 2 months. Inclusion criteria: subjects were randomly selected, aged between 18 and 26 years. Exclusion criteria: history of orthodontic treatment; any history of respiratory tract problems or chronic mouth breathing; any congenital deformity; history of traumatically-induced deformity; history of myofascial pain syndrome; any previous history of head and neck surgery. The results showed that the two methods for obtaining natural head position were comparable, but reproducibility was higher with the fluid level device, as shown by Dahlberg's coefficient and a Bland-Altman plot, and the minimum variance was seen with the fluid level device method, as shown by precision and Pearson correlation. In conclusion, the mirror method and the fluid level device method were comparable without any significant difference, but the fluid level device method was more reproducible and showed less variance.

  11. A two-step sensitivity analysis for hydrological signatures in Jinhua River Basin, East China

    NASA Astrophysics Data System (ADS)

    Pan, S.; Fu, G.; Chiang, Y. M.; Xu, Y. P.

    2016-12-01

    Owing to model complexity and the large number of parameters, calibration and sensitivity analysis are difficult processes for distributed hydrological models. In this study, a two-step sensitivity analysis approach is proposed for analyzing the hydrological signatures in Jinhua River Basin, East China, using the Distributed Hydrology-Soil-Vegetation Model (DHSVM). A rough sensitivity analysis is first conducted to obtain preliminary influential parameters via analysis of variance, which reduces the number of parameters from eighty-three to sixteen. Afterwards, the sixteen parameters are further analyzed with a variance-based global sensitivity analysis, i.e., Sobol's method, to achieve robust sensitivity rankings and parameter contributions. Parallel computing is applied to reduce the computational burden of the variance-based sensitivity analysis. The results reveal that only a small number of model parameters are significantly sensitive, including the rain LAI multiplier, lateral conductivity, porosity, field capacity, wilting point of clay loam, understory monthly LAI, understory minimum resistance and root zone depths of croplands. Finally, several hydrological signatures are used to investigate the performance of DHSVM. Results show that high values of the efficiency criteria did not indicate excellent performance on the hydrological signatures. For most samples from the Sobol analysis, water yield was simulated very well; however, the lowest and maximum annual daily runoffs were underestimated, and most seven-day minimum runoffs were overestimated. Nevertheless, good performance on these three signatures still occurs in a number of samples. Analysis of peak flow shows that small and medium floods are simulated well, while slight underestimation occurs for large floods. The work in this study helps further the multi-objective calibration of the DHSVM model and indicates where to improve the reliability and credibility of model simulation.

  12. Dynamic and Geometric Analyses of Nudaurelia capensis ωVirus Maturation Reveal the Energy Landscape of Particle Transitions

    PubMed Central

    Tang, Jinghua; Kearney, Bradley M.; Wang, Qiu; Doerschuk, Peter C.; Baker, Timothy S.; Johnson, John E.

    2014-01-01

    Quasi-equivalent viruses that infect animals and bacteria require a maturation process in which particles transition from initially assembled procapsids to infectious virions. Nudaurelia capensis ω virus (NωV) is a T=4, eukaryotic, ssRNA virus that has proved to be an excellent model system for studying the mechanisms of viral maturation. Structures of NωV procapsids (diam. = 480 Å), a maturation intermediate (410 Å), and the mature virion (410 Å) were determined by electron cryo-microscopy and three-dimensional image reconstruction (cryoEM). The cryoEM density for each particle type was analyzed with a recently developed Maximum Likelihood Variance (MLV) method for characterizing microstates occupied in the ensemble of particles used for the reconstructions. The procapsid and the mature capsid had overall low variance (i.e. uniform particle populations) while the maturation intermediate (that had not undergone post-assembly autocatalytic cleavage) had roughly 2-4 times the variance of the first two particles. Without maturation cleavage the particles assume a variety of microstates, as the frustrated subunits cannot reach a minimum energy configuration. Geometric analyses of subunit coordinates provided a quantitative description of the particle reorganization during maturation. Superposition of the four quasi-equivalent subunits in the procapsid had an average root mean square deviation (RMSD) of 3Å while the mature particle had an RMSD of 11Å, showing that the subunits differentiate from near equivalent environments in the procapsid to strikingly non-equivalent environments during maturation. Autocatalytic cleavage is clearly required for the reorganized mature particle to reach the minimum energy state required for stability and infectivity. PMID:24591180

  13. Dynamic and geometric analyses of Nudaurelia capensis ω virus maturation reveal the energy landscape of particle transitions.

    PubMed

    Tang, Jinghua; Kearney, Bradley M; Wang, Qiu; Doerschuk, Peter C; Baker, Timothy S; Johnson, John E

    2014-04-01

    Quasi-equivalent viruses that infect animals and bacteria require a maturation process in which particles transition from initially assembled procapsids to infectious virions. Nudaurelia capensis ω virus (NωV) is a T = 4, eukaryotic, single-stranded ribonucleic acid virus that has proved to be an excellent model system for studying the mechanisms of viral maturation. Structures of NωV procapsids (diameter = 480 Å), a maturation intermediate (410 Å), and the mature virion (410 Å) were determined by electron cryo-microscopy and three-dimensional image reconstruction (cryoEM). The cryoEM density for each particle type was analyzed with a recently developed maximum likelihood variance (MLV) method for characterizing microstates occupied in the ensemble of particles used for the reconstructions. The procapsid and the mature capsid had overall low variance (i.e., uniform particle populations) while the maturation intermediate (that had not undergone post-assembly autocatalytic cleavage) had roughly two to four times the variance of the first two particles. Without maturation cleavage, the particles assume a variety of microstates, as the frustrated subunits cannot reach a minimum energy configuration. Geometric analyses of subunit coordinates provided a quantitative description of the particle reorganization during maturation. Superposition of the four quasi-equivalent subunits in the procapsid had an average root mean square deviation (RMSD) of 3 Å while the mature particle had an RMSD of 11 Å, showing that the subunits differentiate from near equivalent environments in the procapsid to strikingly non-equivalent environments during maturation. Autocatalytic cleavage is clearly required for the reorganized mature particle to reach the minimum energy state required for stability and infectivity. Copyright © 2014 John Wiley & Sons, Ltd.

  14. An Experience Oriented-Convergence Improved Gravitational Search Algorithm for Minimum Variance Distortionless Response Beamforming Optimum

    PubMed Central

    Darzi, Soodabeh; Tiong, Sieh Kiong; Tariqul Islam, Mohammad; Rezai Soleymanpour, Hassan; Kibria, Salehin

    2016-01-01

    An experience oriented-convergence improved gravitational search algorithm (ECGSA), based on two new modifications, searching through the best experiences and the use of a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses those as the agents’ positions in the searching process. In this way, the optimal trajectories found are retained and the search starts from these trajectories, which allows the algorithm to avoid local optima. Also, the agents can move faster in the search space to obtain better exploration during the first stage of the searching process, and they can converge rapidly to the optimal solution at the final stage of the search by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to the adaptive beamforming problem as a practical issue, to improve the weight vectors computed by the minimum variance distortionless response (MVDR) beamforming technique. The results of the implementation of the proposed algorithm are compared with some well-known heuristic methods and verify the proposed method in terms of both reaching optimal solutions and robustness. PMID:27399904
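
    The baseline GSA that ECGSA modifies can be sketched generically. This is a minimal implementation under simplifying assumptions (all agents in the Kbest set, an exponential decay for G, untuned coefficients), not the paper's algorithm, shown minimizing a 5-D sphere function:

```python
import numpy as np

def gsa_minimize(f, dim, bounds, n_agents=30, iters=200, g0=100.0, alpha=20.0, seed=0):
    """Minimal gravitational search algorithm: agents attract one another with
    'masses' derived from fitness; G decays as g0 * exp(-alpha * t / iters)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_agents, dim))
    V = np.zeros((n_agents, dim))
    best_x, best_f = None, np.inf
    for t in range(iters):
        fit = np.array([f(xi) for xi in X])
        if fit.min() < best_f:
            best_f, best_x = fit.min(), X[fit.argmin()].copy()
        worst, best = fit.max(), fit.min()
        m = (worst - fit) / (worst - best + 1e-12)   # better fitness -> larger mass
        M = m / (m.sum() + 1e-12)
        G = g0 * np.exp(-alpha * t / iters)          # decaying gravitational constant
        acc = np.zeros_like(X)
        for i in range(n_agents):
            diff = X - X[i]
            dist = np.linalg.norm(diff, axis=1) + 1e-12
            # randomly weighted sum of pairwise attractions (Kbest = all agents)
            acc[i] = G * np.sum((rng.random(n_agents) * M / dist)[:, None] * diff, axis=0)
        V = rng.random(X.shape) * V + acc            # stochastic velocity memory
        X = np.clip(X + V, lo, hi)
    return best_x, best_f

x, fx = gsa_minimize(lambda z: np.sum(z ** 2), dim=5, bounds=(-5.0, 5.0))
```

    ECGSA's two modifications would slot into this loop: restart positions from the archive of best evaluations, and replace the fixed α with a dynamically adapted damping coefficient.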

  15. A New Look at Some Solar Wind Turbulence Puzzles

    NASA Technical Reports Server (NTRS)

    Roberts, Aaron

    2006-01-01

    Some aspects of solar wind turbulence have defied explanation. While it seems likely that the evolution of Alfvenicity and power spectra are largely explained by the shearing of an initial population of solar-generated Alfvenic fluctuations, the evolution of the anisotropies of the turbulence does not fit into the model so far. A two-component model, consisting of slab waves and quasi-two-dimensional fluctuations, offers some ideas, but does not account for the turning of both wave-vector-space power anisotropies and minimum variance directions in the fluctuating vectors as the Parker spiral turns. We will show observations that indicate that the minimum variance evolution is likely not due to traditional turbulence mechanisms, and offer arguments that the idea of two-component turbulence is at best a local approximation that is of little help in explaining the evolution of the fluctuations. Finally, time-permitting, we will discuss some observations that suggest that the low Alfvenicity of many regions of the solar wind in the inner heliosphere is not due to turbulent evolution, but rather to the existence of convected structures, including mini-clouds and other twisted flux tubes, that were formed with low Alfvenicity. There is still a role for turbulence in the above picture, but it is highly modified from the traditional views.

  16. Stream-temperature patterns of the Muddy Creek basin, Anne Arundel County, Maryland

    USGS Publications Warehouse

    Pluhowski, E.J.

    1981-01-01

    Using a water-balance equation based on a 4.25-year gaging-station record on North Fork Muddy Creek, the following mean annual values were obtained for the Muddy Creek basin: precipitation, 49.0 inches; evapotranspiration, 28.0 inches; runoff, 18.5 inches; and underflow, 2.5 inches. Average freshwater outflow from the Muddy Creek basin to the Rhode River estuary was 12.2 cfs during the period October 1, 1971, to December 31, 1975. Harmonic equations were used to describe seasonal maximum and minimum stream-temperature patterns at 12 sites in the basin. These equations were fitted to continuous water-temperature data obtained periodically at each site between November 1970 and June 1978. The harmonic equations explain at least 78 percent of the variance in maximum stream temperatures and 81 percent of the variance in minimum temperatures. Standard errors of estimate averaged 2.3°C (Celsius) for daily maximum water temperatures and 2.1°C for daily minimum temperatures. Mean annual water temperatures developed for a 5.4-year base period ranged from 11.9°C at Muddy Creek to 13.1°C at Many Fork Branch. The largest variations in stream temperatures were detected at thermograph sites below ponded reaches and where forest coverage was sparse or missing. At most sites the largest variations in daily water temperatures were recorded in April whereas the smallest were in September and October. The low thermal inertia of streams in the Muddy Creek basin tends to amplify the impact of surface energy-exchange processes on short-period stream-temperature patterns. Thus, in response to meteorologic events, wide-ranging stream-temperature perturbations of as much as 6°C have been documented in the basin. (USGS)
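
    A harmonic equation of the kind fitted above, T(t) = M + A·sin(2πt/365 + φ), is linear in its sin/cos form, so it can be fitted by ordinary least squares. A sketch with a synthetic annual cycle (the toy values are illustrative, not the Muddy Creek data):

```python
import numpy as np

def fit_harmonic(day_of_year, temperature):
    """Fit T(t) = M + A*sin(2*pi*t/365 + phi) by linear least squares on the
    equivalent form M + a*sin(w*t) + b*cos(w*t)."""
    w = 2.0 * np.pi / 365.0
    t = np.asarray(day_of_year, float)
    D = np.column_stack([np.ones_like(t), np.sin(w * t), np.cos(w * t)])
    (M, a, b), *_ = np.linalg.lstsq(D, np.asarray(temperature, float), rcond=None)
    return M, np.hypot(a, b), np.arctan2(b, a)   # mean, amplitude, phase

# Synthetic daily temperatures: mean 12 C, amplitude 10 C, zero phase
t = np.arange(365)
T = 12.0 + 10.0 * np.sin(2 * np.pi * t / 365.0)
M, A, phi = fit_harmonic(t, T)
```

    The fitted mean M corresponds to the mean annual water temperature reported above, and the explained-variance percentages come from comparing the residuals of this fit with the raw temperature variance.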

  17. Predicting minimum uncertainties in the inversion of ocean color geophysical parameters based on Cramer-Rao bounds.

    PubMed

    Jay, Sylvain; Guillaume, Mireille; Chami, Malik; Minghelli, Audrey; Deville, Yannick; Lafrance, Bruno; Serfaty, Véronique

    2018-01-22

    We present an analytical approach based on Cramer-Rao Bounds (CRBs) to investigate the uncertainties in estimated ocean color parameters resulting from the propagation of uncertainties in the bio-optical reflectance modeling through the inversion process. Based on given bio-optical and noise probabilistic models, CRBs can be computed efficiently for any set of ocean color parameters and any sensor configuration, directly providing the minimum estimation variance that can possibly be attained by any unbiased estimator of any targeted parameter. Here, CRBs are explicitly developed using (1) two water reflectance models corresponding to deep and shallow waters, respectively, and (2) four probabilistic models describing the environmental noises observed within four Sentinel-2 MSI, HICO, Sentinel-3 OLCI and MODIS images, respectively. For both deep and shallow waters, CRBs are shown to be consistent with the experimental estimation variances obtained using two published remote-sensing methods, while not requiring one to perform any inversion. CRBs are also used to investigate to what extent perfect a priori knowledge on one or several geophysical parameters can improve the estimation of the remaining unknown parameters. For example, using pre-existing knowledge of bathymetry (e.g., derived from LiDAR) within the inversion is shown to greatly improve the retrieval of bottom cover for shallow waters. Finally, CRBs are shown to provide valuable information on the best estimation performances that may be achieved with the MSI, HICO, OLCI and MODIS configurations for a variety of oceanic, coastal and inland waters. CRBs are thus demonstrated to be an informative and efficient tool to characterize minimum uncertainties in inverted ocean color geophysical parameters.
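
    The structure of such a bound can be sketched for the Gaussian-noise case used here: the CRB is the inverse of the Fisher information built from the model Jacobian. A generic sketch (not the paper's bio-optical reflectance models):

```python
import numpy as np

def gaussian_crb(jacobian, noise_cov):
    """CRB for the parameters of a deterministic model mu(theta) observed in
    zero-mean Gaussian noise of known covariance Sigma: the Fisher information
    is F = J^T Sigma^{-1} J and the bound is diag(F^{-1})."""
    J = np.atleast_2d(jacobian)
    F = J.T @ np.linalg.solve(noise_cov, J)
    return np.diag(np.linalg.inv(F))

# Sanity check: estimating the mean of n iid N(mu, sigma^2) samples.
# Here J = ones(n, 1) and Sigma = sigma^2 * I, so the bound is sigma^2 / n.
n, sigma2 = 25, 0.5
crb = gaussian_crb(np.ones((n, 1)), sigma2 * np.eye(n))
```

    Fixing a parameter a priori (as with the LiDAR bathymetry example) amounts to deleting its column from J, which can only shrink the diagonal of the inverted Fisher matrix for the remaining parameters.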

  18. Influential input classification in probabilistic multimedia models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maddalena, Randy L.; McKone, Thomas E.; Hsieh, Dennis P.H.

    1999-05-01

    Monte Carlo analysis is a statistical simulation method that is often used to assess and quantify the outcome variance in complex environmental fate and effects models. Total outcome variance of these models is a function of (1) the uncertainty and/or variability associated with each model input and (2) the sensitivity of the model outcome to changes in the inputs. To propagate variance through a model using Monte Carlo techniques, each variable must be assigned a probability distribution. The validity of these distributions directly influences the accuracy and reliability of the model outcome. To efficiently allocate resources for constructing distributions, one should first identify the most influential set of variables in the model. Although existing sensitivity and uncertainty analysis methods can provide a relative ranking of the importance of model inputs, they fail to identify the minimum set of stochastic inputs necessary to sufficiently characterize the outcome variance. In this paper, we describe and demonstrate a novel sensitivity/uncertainty analysis method for assessing the importance of each variable in a multimedia environmental fate model. Our analyses show that for a given scenario, a relatively small number of input variables influence the central tendency of the model and an even smaller set determines the shape of the outcome distribution. For each input, the level of influence depends on the scenario under consideration. This information is useful for developing site-specific models and improving our understanding of the processes that have the greatest influence on the variance in outcomes from multimedia models.

  19. MSEBAG: a dynamic classifier ensemble generation based on `minimum-sufficient ensemble' and bagging

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Kamel, Mohamed S.

    2016-01-01

    In this paper, we propose a dynamic classifier system, MSEBAG, which is characterised by searching for the 'minimum-sufficient ensemble' and bagging at the ensemble level. It adopts an 'over-generation and selection' strategy and aims to achieve a good bias-variance trade-off. In the training phase, MSEBAG first searches for the 'minimum-sufficient ensemble', which maximises the in-sample fitness with the minimal number of base classifiers. Then, starting from the 'minimum-sufficient ensemble', a backward stepwise algorithm is employed to generate a collection of ensembles. The objective is to create a collection of ensembles with a descending fitness on the data, as well as a descending complexity in the structure. MSEBAG dynamically selects the ensembles from the collection for the decision aggregation. The extended adaptive aggregation (EAA) approach, a bagging-style algorithm performed at the ensemble level, is employed for this task. EAA searches for the competent ensembles using a score function, which takes into consideration both the in-sample fitness and the confidence of the statistical inference, and averages the decisions of the selected ensembles to label the test pattern. The experimental results show that the proposed MSEBAG outperforms the benchmarks on average.

  20. Response to selection while maximizing genetic variance in small populations.

    PubMed

    Cervantes, Isabel; Gutiérrez, Juan Pablo; Meuwissen, Theo H E

    2016-09-20

    Rare breeds represent a valuable resource for future market demands. These populations are usually well-adapted, but their low census compromises the genetic diversity and future of these breeds. Since improvement of a breed for commercial traits may also confer higher probabilities of survival for the breed, it is important to achieve good responses to artificial selection. Therefore, efficient genetic management of these populations is essential to ensure that they respond adequately to genetic selection in possible future artificial selection scenarios. Scenarios that maximize the genetic variance in a single population could be a valuable option. The aim of this work was to study the effect of the maximization of genetic variance on selection response and on the capacity of a population to adapt to a new environment/production system. We simulated a random scenario (A), a full-sib scenario (B), a scenario applying the maximum variance total (MVT) method (C), an MVT scenario with a restriction on increases in average inbreeding (D), an MVT scenario with a restriction on average individual increases in inbreeding (E), and a minimum coancestry scenario (F). Twenty replicates of each scenario were simulated for 100 generations, followed by 10 generations of selection. Effective population size was used to monitor the outcomes of these scenarios. Although the best response to selection was achieved in scenarios B and C, they were discarded because they are impractical. Scenario A was also discarded because of its low response to selection. Scenario D yielded a smaller response to selection and a smaller effective population size than scenario E, for which response to selection was higher during early generations because of the moderately structured population. In scenario F, response to selection was slightly higher than in scenario E in the last generations. 
Application of MVT with a restriction on individual increases in inbreeding resulted in the largest response to selection during early generations, but if inbreeding depression is a concern, a minimum coancestry scenario is then a valuable alternative, in particular for a long-term response to selection.

  1. A high-resolution speleothem record of western equatorial Pacific rainfall: Implications for Holocene ENSO evolution

    NASA Astrophysics Data System (ADS)

    Chen, Sang; Hoffmann, Sharon S.; Lund, David C.; Cobb, Kim M.; Emile-Geay, Julien; Adkins, Jess F.

    2016-05-01

    The El Niño-Southern Oscillation (ENSO) is the primary driver of interannual climate variability in the tropics and subtropics. Despite substantial progress in understanding ocean-atmosphere feedbacks that drive ENSO today, relatively little is known about its behavior on centennial and longer timescales. Paleoclimate records from lakes, corals, molluscs and deep-sea sediments generally suggest that ENSO variability was weaker during the mid-Holocene (4-6 kyr BP) than the late Holocene (0-4 kyr BP). However, discrepancies amongst the records preclude a clear timeline of Holocene ENSO evolution and therefore the attribution of ENSO variability to specific climate forcing mechanisms. Here we present δ18 O results from a U-Th dated speleothem in Malaysian Borneo sampled at sub-annual resolution. The δ18 O of Borneo rainfall is a robust proxy of regional convective intensity and precipitation amount, both of which are directly influenced by ENSO activity. Our estimates of stalagmite δ18 O variance at ENSO periods (2-7 yr) show a significant reduction in interannual variability during the mid-Holocene (3240-3380 and 5160-5230 yr BP) relative to both the late Holocene (2390-2590 yr BP) and early Holocene (6590-6730 yr BP). The Borneo results are therefore inconsistent with lacustrine records of ENSO from the eastern equatorial Pacific that show little or no ENSO variance during the early Holocene. Instead, our results support coral, mollusc and foraminiferal records from the central and eastern equatorial Pacific that show a mid-Holocene minimum in ENSO variance. Reduced mid-Holocene interannual δ18 O variability in Borneo coincides with an overall minimum in mean δ18 O from 3.5 to 5.5 kyr BP. Persistent warm pool convection would tend to enhance the Walker circulation during the mid-Holocene, which likely contributed to reduced ENSO variance during this period. 
This finding implies that both convective intensity and interannual variability in Borneo are driven by coupled air-sea dynamics that are sensitive to precessional insolation forcing. Isolating the exact mechanisms that drive long-term ENSO evolution will require additional high-resolution paleoclimatic reconstructions and further investigation of Holocene tropical climate evolution using coupled climate models.

  2. The Impact of Truth Surrogate Variance on Quality Assessment/Assurance in Wind Tunnel Testing

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard

    2016-01-01

    Minimum data volume requirements for wind tunnel testing are reviewed and shown to depend on error tolerance, response model complexity, random error variance in the measurement environment, and maximum acceptable levels of inference error risk. Distinctions are made between such related concepts as quality assurance and quality assessment in response surface modeling, as well as between precision and accuracy. Earlier research on the scaling of wind tunnel tests is extended to account for variance in the truth surrogates used at confirmation sites in the design space to validate proposed response models. A model adequacy metric is presented that represents the fraction of the design space within which model predictions can be expected to satisfy prescribed quality specifications. The impact of inference error on the assessment of response model residuals is reviewed. The number of sites where reasonably well-fitted response models actually predict inadequately is shown to be considerably less than the number of sites where residuals are out of tolerance. The significance of such inference error effects on common response model assessment strategies is examined.

  3. [Determination and principal component analysis of mineral elements based on ICP-OES in Nitraria roborowskii fruits from different regions].

    PubMed

    Yuan, Yuan-Yuan; Zhou, Yu-Bi; Sun, Jing; Deng, Juan; Bai, Ying; Wang, Jie; Lu, Xue-Feng

    2017-06-01

    The contents of mineral elements in Nitraria roborowskii samples from fifteen different regions were determined by inductively coupled plasma-atomic emission spectrometry (ICP-OES), and the elemental characteristics were analyzed by principal component analysis. Eighteen mineral elements were detected in N. roborowskii, whereas V was below the detection limit. Na, K and Ca were present at high concentrations; Ti showed the largest content variance and K the smallest. Four principal components were extracted from the original data, with a cumulative variance contribution of 81.542%; the first principal component contributed 44.997%, indicating that Cr, Fe, P and Ca were the characteristic elements of N. roborowskii. Thus, the established method is simple and precise, and can be used for the determination of mineral elements in N. roborowskii Kom. fruits. Principal component analysis clearly revealed that the elemental distribution characteristics of N. roborowskii fruits are related to their geographical origins. These results provide a good basis for the comprehensive utilization of N. roborowskii. Copyright© by the Chinese Pharmaceutical Association.
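
    The PCA workflow described (per-component variance contributions and their cumulative rate) can be sketched with numpy; the 15 × 18 data matrix below is random stand-in data, not the measured element contents.

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in data: 15 samples x 18 elements (random, for illustration only)
X = rng.normal(size=(15, 18))

# Standardise each element, then PCA via SVD of the centred matrix
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
explained = s**2 / np.sum(s**2)           # variance contribution of each component
cumulative = np.cumsum(explained)

# Number of components needed to reach e.g. 80% cumulative contribution
n_pc = int(np.searchsorted(cumulative, 0.8) + 1)
print(f"{n_pc} components explain {cumulative[n_pc - 1]:.1%} of the variance")
```

The rows of `Vt` corresponding to the retained components give the element loadings, which is how characteristic elements such as Cr, Fe, P and Ca would be identified.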

  4. Plasma dynamics on current-carrying magnetic flux tubes

    NASA Technical Reports Server (NTRS)

    Swift, Daniel W.

    1992-01-01

    A 1D numerical simulation is used to investigate the evolution of a plasma in a current-carrying magnetic flux tube of variable cross section. A large potential difference, parallel to the magnetic field, is applied across the domain. The result is that density minimum tends to deepen, primarily in the cathode end, and the entire potential drop becomes concentrated across the region of density minimum. The evolution of the simulation shows some sensitivity to particle boundary conditions, but the simulations inevitably evolve into a final state with a nearly stationary double layer near the cathode end. The simulation results are at sufficient variance with observations that it appears unlikely that auroral electrons can be explained by a simple process of acceleration through a field-aligned potential drop.

  5. Transcutaneous immunization with a novel imiquimod nanoemulsion induces superior T cell responses and virus protection.

    PubMed

    Lopez, Pamela Aranda; Denny, Mark; Hartmann, Ann-Kathrin; Alflen, Astrid; Probst, Hans Christian; von Stebut, Esther; Tenzer, Stefan; Schild, Hansjörg; Stassen, Michael; Langguth, Peter; Radsak, Markus P

    2017-09-01

    Transcutaneous immunization (TCI) is a novel vaccination strategy utilizing the skin-associated lymphatic tissue to induce immune responses. TCI using a cytotoxic T lymphocyte (CTL) epitope and the Toll-like receptor 7 (TLR7) agonist imiquimod mounts strong CTL responses through the activation and maturation of skin-derived dendritic cells (DCs) and their migration to lymph nodes. However, TCI based on the commercial formulation Aldara induces only transient CTL responses, which need further improvement for the induction of durable therapeutic immune responses. We therefore aimed to develop a novel imiquimod solid nanoemulsion (IMI-Sol) for TCI with superior vaccination properties, suited to induce the high-quality T cell responses needed for enhanced protection against infections. TCI was performed by applying an MHC class I- or II-restricted epitope along with IMI-Sol or Aldara (each containing 5% imiquimod) on the shaved dorsum of C57BL/6, IL-1R-, MyD88-, TLR7- or CCR7-deficient mice. T cell responses as well as DC migration upon TCI were subsequently analyzed by flow cytometry. To determine the in vivo efficacy of TCI-induced immune responses, CTL responses and the frequency of peptide-specific T cells were evaluated on day 8 or 35 post vaccination, and protection was assessed in a lymphocytic choriomeningitis virus (LCMV) infection model. TCI with the imiquimod formulation IMI-Sol displayed skin penetration of imiquimod equal to that of Aldara, but elicited superior CD8+ as well as CD4+ T cell responses. T cell responses induced by IMI-Sol TCI were dependent on the TLR7/MyD88 pathway and independent of IL-1R. IMI-Sol TCI activated skin-derived DCs in skin-draining lymph nodes more efficiently than Aldara, leading to enhanced protection in an LCMV infection model. 
Our data demonstrate that IMI-Sol TCI can overcome current limitations of previous imiquimod based TCI approaches opening new perspectives for transcutaneous vaccination strategies and allowing the use of this enhanced cutaneous drug-delivery system to be tailored for the improved prevention and treatment of infectious diseases and cancers. Copyright © 2017 Japanese Society for Investigative Dermatology. Published by Elsevier B.V. All rights reserved.

  6. Antigen-specific primed cytotoxic T cells eliminate tumour cells in vivo and prevent tumour development, regardless of the presence of anti-apoptotic mutations conferring drug resistance.

    PubMed

    Jaime-Sánchez, Paula; Catalán, Elena; Uranga-Murillo, Iratxe; Aguiló, Nacho; Santiago, Llipsy; M Lanuza, Pilar; de Miguel, Diego; A Arias, Maykel; Pardo, Julián

    2018-05-09

    Cytotoxic CD8+ T (Tc) cells are the main executors of transformed and cancer cells during cancer immunotherapy. The latest clinical results evidence the high efficacy of novel immunotherapy agents that modulate Tc cell activity against poor-prognosis cancers. However, it has not yet been determined whether the efficacy of these treatments can be affected by selection of tumour cells with mutations in the cell death machinery, known to promote drug resistance and cancer recurrence. Here, using a model of prophylactic tumour vaccination based on the LCMV-gp33 antigen and the mouse EL4 T lymphoma, we analysed the molecular mechanism employed by Tc cells to eliminate cancer cells in vivo and the impact of mutations in the apoptotic machinery on tumour development. First of all, we found that Tc cells, and specifically perforin (perf) and granzyme B (gzmB), are required to efficiently eliminate EL4.gp33 cells after LCMV immunisation during short-term assays (1-4 h), and to prevent tumour development in the long term. Furthermore, we show that antigen-pulsed chemoresistant EL4 cells overexpressing Bcl-XL or a dominant negative form of caspase-3 are specifically eliminated from the peritoneum of infected animals, as fast as parental EL4 cells. Notably, antigen-specific Tc cells control the tumour growth of the mutated cells as efficiently as that of parental cells. Altogether, expression of the anti-apoptotic mutations confers no advantage on tumour cells, either in short-term survival or in long-term tumour formation. Although the mechanism involved in the elimination of the apoptosis-resistant tumour cells is not completely elucidated, neither necroptosis nor pyroptosis seems to be involved. 
Our results provide the first experimental proof that chemoresistant cancer cells with mutations in the main cell death pathways are efficiently eliminated by Ag-specific Tc cells in vivo during immunotherapy and, thus, provide the molecular basis to treat chemoresistant cancer cells with CD8 Tc-based immunotherapy.

  7. A New Method for Estimating the Effective Population Size from Allele Frequency Changes

    PubMed Central

    Pollak, Edward

    1983-01-01

    A new procedure is proposed for estimating the effective population size, given that information is available on changes in frequencies of the alleles at one or more independently segregating loci and the population is observed at two or more separate times. Approximate expressions are obtained for the variances of the new statistic, as well as others, also based on allele frequency changes, that have been discussed in the literature. This analysis indicates that the new statistic will generally have a smaller variance than the others. Estimates of effective population sizes and of the standard errors of the estimates are computed for data on two fly populations that have been discussed in earlier papers. In both cases, there is evidence that the effective population size is very much smaller than the minimum census size of the population. PMID:17246147

  8. Evaluating climate change impacts on streamflow variability based on a multisite multivariate GCM downscaling method in the Jing River of China

    NASA Astrophysics Data System (ADS)

    Li, Zhi; Jin, Jiming

    2017-11-01

    Projected hydrological variability is important for future resource and hazard management of water supplies because changes in hydrological variability can cause more disasters than changes in the mean state. However, climate change scenarios downscaled from Earth System Models (ESMs) at single sites cannot meet the requirements of distributed hydrologic models for simulating hydrological variability. This study developed multisite multivariate climate change scenarios via three steps: (i) spatial downscaling of ESMs using a transfer function method, (ii) temporal downscaling of ESMs using a single-site weather generator, and (iii) reconstruction of spatiotemporal correlations using a distribution-free shuffle procedure. Multisite precipitation and temperature change scenarios for 2011-2040 were generated from five ESMs under four representative concentration pathways to project changes in streamflow variability using the Soil and Water Assessment Tool (SWAT) for the Jing River, China. The correlation reconstruction method performed realistically for intersite and intervariable correlation reproduction and hydrological modeling. The SWAT model was found to be well calibrated with monthly streamflow with a model efficiency coefficient of 0.78. It was projected that the annual mean precipitation would not change, while the mean maximum and minimum temperatures would increase significantly by 1.6 ± 0.3 and 1.3 ± 0.2 °C; the variance ratios of 2011-2040 to 1961-2005 were 1.15 ± 0.13 for precipitation, 1.15 ± 0.14 for mean maximum temperature, and 1.04 ± 0.10 for mean minimum temperature. A warmer climate was predicted for the flood season, while the dry season was projected to become wetter and warmer; the findings indicated that the intra-annual and interannual variations in the future climate would be greater than in the current climate. 
The total annual streamflow was found to change insignificantly but its variance ratios of 2011-2040 to 1961-2005 increased by 1.25 ± 0.55. Streamflow variability was predicted to become greater over most months on the seasonal scale because of the increased monthly maximum streamflow and decreased monthly minimum streamflow. The increase in streamflow variability was attributed mainly to larger positive contributions from increased precipitation variances rather than negative contributions from increased mean temperatures.

  9. Identification, Characterization, and Utilization of Adult Meniscal Progenitor Cells

    DTIC Science & Technology

    2017-11-01

    approach including row scaling and Ward’s minimum variance method was chosen. This analysis revealed two groups of four samples each. For the selected...articular cartilage in an ovine model. Am J Sports Med. 2008;36(5):841-50. 7. Deshpande BR, Katz JN, Solomon DH, Yelin EH, Hunter DJ, Messier SP, et al...Miosge1,* 1Tissue Regeneration Work Group , Department of Prosthodontics, Medical Faculty, Georg-August-University, 37075 Goettingen, Germany 2Institute of

  10. An Analysis Of The Benefits And Application Of Earned Value Management (EVM) Project Management Techniques For Dod Programs That Do Not Meet Dod Policy Thresholds

    DTIC Science & Technology

    2017-12-01

    carefully to ensure only minimum information needed for effective management control is requested.  Requires cost-benefit analysis and PM...baseline offers metrics that highlights performance treads and program variances. This information provides Program Managers and higher levels of...The existing training philosophy is effective only if the managers using the information have well trained and experienced personnel that can

  11. Ways to improve your correlation functions

    NASA Technical Reports Server (NTRS)

    Hamilton, A. J. S.

    1993-01-01

    This paper describes a number of ways to improve on the standard method for measuring the two-point correlation function of large scale structure in the Universe. Issues addressed are: (1) the problem of the mean density, and how to solve it; (2) how to estimate the uncertainty in a measured correlation function; (3) minimum variance pair weighting; (4) unbiased estimation of the selection function when magnitudes are discrete; and (5) analytic computation of angular integrals in background pair counts.
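
    As an illustration of the pair-counting machinery these improvements build on, here is a toy 1-D version of Hamilton's xi = DD*RR/DR^2 - 1 estimator applied to an unclustered catalogue. The catalogues and the separation cut are invented; real analyses work in 3-D with a survey selection function and minimum variance pair weights.

```python
import numpy as np

rng = np.random.default_rng(2)

def pair_counts(a, b, r_max=0.1):
    """Count ordered pairs of 1-D points closer than r_max."""
    d = np.abs(a[:, None] - b[None, :])
    return np.count_nonzero(d < r_max)

# Toy catalogues on the unit interval: 'data' and a denser unclustered 'random'
data = rng.random(200)
rand = rng.random(2000)

dd = pair_counts(data, data) - len(data)   # drop self-pairs
dr = pair_counts(data, rand)
rr = pair_counts(rand, rand) - len(rand)

# Normalise by the number of ordered pairs, then apply Hamilton's estimator
nd, nr = len(data), len(rand)
DD = dd / (nd * (nd - 1))
DR = dr / (nd * nr)
RR = rr / (nr * (nr - 1))
xi = DD * RR / DR**2 - 1
print(f"xi(<0.1) = {xi:.3f}")   # near zero for an unclustered catalogue
```

This estimator's appeal, discussed in the paper, is that the mean density cancels between numerator and denominator.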

  12. The Three-Dimensional Power Spectrum Of Galaxies from the Sloan Digital Sky Survey

    DTIC Science & Technology

    2004-05-10

    aspects of the three-dimensional clustering of a much larger data set involving over 200,000 galaxies with redshifts. This paper is focused on measuring... papers , we will constrain galaxy bias empirically by using clustering measurements on smaller scales (e.g., I. Zehavi et al. 2004, in preparation...minimum-variance measurements in 22 k-bands of both the clustering power and its anisotropy due to redshift-space distortions, with narrow and well

  13. Waveform-based spaceborne GNSS-R wind speed observation: Demonstration and analysis using UK TechDemoSat-1 data

    NASA Astrophysics Data System (ADS)

    Wang, Feng; Yang, Dongkai; Zhang, Bo; Li, Weiqiang

    2018-03-01

    This paper explores two types of mathematical functions for fitting the single- and full-frequency waveforms, respectively, of spaceborne Global Navigation Satellite System-Reflectometry (GNSS-R). The metrics of the waveforms, such as the noise floor, peak magnitude, mid-point position of the leading edge, leading edge slope and trailing edge slope, can be derived from the parameters of the proposed models. Because the quality of the UK TDS-1 data is not at the level required by a remote sensing mission, waveforms buried in noise or originating from ice/land are removed, using the peak-to-mean ratio and the cosine similarity of the waveform, before wind speed is retrieved. Single-parameter retrieval models are developed by comparing the peak magnitude, leading edge slope and trailing edge slope derived from the parameters of the proposed models with in situ wind speeds from the ASCAT scatterometer. To improve the retrieval accuracy, three types of multi-parameter observations based on principal component analysis (PCA), a minimum variance (MV) estimator and a Back Propagation (BP) network are implemented. The results indicate that, compared to the best results of the single-parameter observation, the approaches based on principal component analysis and minimum variance do not significantly improve retrieval accuracy; however, the BP networks obtain an improvement, with RMSEs of 2.55 m/s and 2.53 m/s for the single- and full-frequency waveforms, respectively.

  14. Fast Minimum Variance Beamforming Based on Legendre Polynomials.

    PubMed

    Bae, MooHo; Park, Sung Bae; Kwon, Sung Jae

    2016-09-01

    Currently, minimum variance beamforming (MV) is actively investigated as a method that can improve the performance of an ultrasound beamformer, in terms of the lateral and contrast resolution. However, this method has the disadvantage of excessive computational complexity since the inverse spatial covariance matrix must be calculated. Some noteworthy methods among various attempts to solve this problem include beam space adaptive beamforming methods and the fast MV method based on principal component analysis, which are similar in that the original signal in the element space is transformed to another domain using an orthonormal basis matrix and the dimension of the covariance matrix is reduced by approximating the matrix only with important components of the matrix, hence making the inversion of the matrix very simple. Recently, we proposed a new method with further reduced computational demand that uses Legendre polynomials as the basis matrix for such a transformation. In this paper, we verify the efficacy of the proposed method through Field II simulations as well as in vitro and in vivo experiments. The results show that the approximation error of this method is less than or similar to those of the above-mentioned methods and that the lateral response of point targets and the contrast-to-speckle noise in anechoic cysts are also better than or similar to those methods when the dimensionality of the covariance matrices is reduced to the same dimension.
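
    For reference, the full-rank minimum variance (Capon) weight computation that such fast methods approximate can be sketched as follows; the array size, snapshot count and diagonal loading are arbitrary illustration values, not the paper's imaging setup.

```python
import numpy as np

rng = np.random.default_rng(3)
m = 8                                    # array elements
a = np.ones(m, dtype=complex)            # steering vector for the look direction

# Sample spatial covariance from complex noise snapshots, plus diagonal loading
snapshots = (rng.normal(size=(m, 200)) + 1j * rng.normal(size=(m, 200))) / np.sqrt(2)
R = snapshots @ snapshots.conj().T / 200 + 1e-2 * np.eye(m)

# Minimum variance weights: w = R^-1 a / (a^H R^-1 a)
Ri_a = np.linalg.solve(R, a)             # the costly step the fast methods avoid
w = Ri_a / (a.conj() @ Ri_a)

print(abs(w.conj() @ a))                 # distortionless constraint: unit gain toward a
```

The `solve` call is the expensive inversion; the beam-space, PCA and Legendre-basis methods all replace `R` by a low-dimensional approximation before this step.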

  15. Demographics of an ornate box turtle population experiencing minimal human-induced disturbances

    USGS Publications Warehouse

    Converse, S.J.; Iverson, J.B.; Savidge, J.A.

    2005-01-01

    Human-induced disturbances may threaten the viability of many turtle populations, including populations of North American box turtles. Evaluation of the potential impacts of these disturbances can be aided by long-term studies of populations subject to minimal human activity. In such a population of ornate box turtles (Terrapene ornata ornata) in western Nebraska, we examined survival rates and population growth rates from 1981-2000 based on mark-recapture data. The average annual apparent survival rate of adult males was 0.883 (SE = 0.021) and of adult females was 0.932 (SE = 0.014). Minimum winter temperature was the best of five climate variables as a predictor of adult survival. Survival rates were highest in years with low minimum winter temperatures, suggesting that global warming may result in declining survival. We estimated an average adult population growth rate (λ) of 1.006 (SE = 0.065), with an estimated temporal process variance (σ²) of 0.029 (95% CI = 0.005-0.176). Stochastic simulations suggest that this mean and temporal process variance would result in a 58% probability of a population decrease over a 20-year period. This research provides evidence that, unless unknown density-dependent mechanisms are operating in the adult age class, significant human disturbances, such as commercial harvest or turtle mortality on roads, represent a potential risk to box turtle populations. © 2005 by the Ecological Society of America.
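
    A stochastic simulation of this kind can be sketched with the reported values (mean growth rate 1.006, process variance 0.029), assuming lognormally distributed yearly growth rates; the paper's own simulation details may differ.

```python
import numpy as np

rng = np.random.default_rng(4)
lam_mean, proc_var, years, n_sims = 1.006, 0.029, 20, 100_000

# Yearly growth rates drawn as lognormal with the reported mean and process variance
sigma2_ln = np.log(1 + proc_var / lam_mean**2)
mu_ln = np.log(lam_mean) - sigma2_ln / 2
log_lams = rng.normal(mu_ln, np.sqrt(sigma2_ln), size=(n_sims, years))

# The population declines over 20 years when the product of yearly rates is < 1,
# i.e. when the summed log growth rates are negative
declined = log_lams.sum(axis=1) < 0
print(f"P(decline over {years} yr) ~ {declined.mean():.2f}")
```

Under this lognormal assumption the simulated probability comes out close to the 58% reported in the abstract: a mean growth rate just above 1 offers little protection when process variance is appreciable.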

  16. Engineering parvovirus-like particles for the induction of B-cell, CD4(+) and CTL responses.

    PubMed

    Rueda, P; Martínez-Torrecuadrada, J L; Sarraseca, J; Sedlik, C; del Barrio, M; Hurtado, A; Leclerc, C; Casal, J I

    1999-09-01

    An antigen delivery system based on hybrid recombinant parvovirus-like particles (VLPs), formed by the self-assembly of the capsid VP2 protein of porcine (PPV) or canine parvovirus (CPV) expressed in insect cells with the baculovirus system, has been developed. PPV:VLPs containing a CD8(+) epitope from the LCMV nucleoprotein evoked a potent CTL response and were able to protect mice against a lethal infection with the virus. Also, PPV:VLPs containing the C3:T epitope from poliovirus elicited a CD4(+) response and neutralizing antibodies (3 log(10) units) against poliovirus. The possibility of combining different types of epitopes in different positions of a single particle to stimulate different branches of the immune system paves the way to the production of more potent vaccines in a simple and cheap way.

  17. Scores on Riley's stuttering severity instrument versions three and four for samples of different length and for different types of speech material.

    PubMed

    Todd, Helena; Mirawdeli, Avin; Costelloe, Sarah; Cavenagh, Penny; Davis, Stephen; Howell, Peter

    2014-12-01

    Riley stated that the minimum speech sample length necessary to compute his stuttering severity estimates was 200 syllables. This was investigated. Procedures supplied for the assessment of readers and non-readers were examined to see whether they give equivalent scores. Recordings of spontaneous speech samples from 23 young children (aged between 2 years 8 months and 6 years 3 months) and 31 older children (aged between 10 years 0 months and 14 years 7 months) were made. Riley's severity estimates were scored on extracts of different lengths. The older children provided spontaneous and read samples, which were scored for severity according to reader and non-reader procedures. Analysis of variance supported the use of 200-syllable-long samples as the minimum necessary for obtaining severity scores. There was no significant difference in SSI-3 scores for the older children when the reader and non-reader procedures were used. Samples that are 200-syllables long are the minimum that is appropriate for obtaining stable Riley's severity scores. The procedural variants provide similar severity scores.

  18. Future mission studies: Preliminary comparisons of solar flux models

    NASA Technical Reports Server (NTRS)

    Ashrafi, S.

    1991-01-01

    The results of comparisons of the solar flux models are presented. (The wavelength lambda = 10.7 cm radio flux is the best indicator of the strength of ionizing radiations, such as solar ultraviolet and x-ray emissions, that directly affect the atmospheric density, thereby changing the orbit lifetime of satellites. Thus, accurate forecasting of the solar flux F10.7 is crucial for orbit determination of spacecraft.) The measured solar flux recorded by the National Oceanic and Atmospheric Administration (NOAA) is compared against the forecasts made by Schatten, MSFC, and NOAA itself. The possibility of a combined linear, unbiased minimum-variance estimate that properly combines all three models into one that minimizes the variance is also discussed. All the physics inherent in each model is thereby combined. This is considered to be the dead-end statistical approach to solar flux forecasting, before any nonlinear chaotic approach.
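
    The combined linear, unbiased minimum-variance estimate alluded to is classic inverse-variance weighting; a sketch with invented error variances and forecast values (the abstract gives no numbers for the Schatten, MSFC or NOAA models):

```python
import numpy as np

# Hypothetical F10.7 forecasts and error variances for three models (illustration only)
forecasts = np.array([150.0, 165.0, 142.0])
variances = np.array([120.0, 200.0, 90.0])

# Minimum-variance unbiased combination: weights proportional to 1/variance
w = (1 / variances) / np.sum(1 / variances)
combined = w @ forecasts
combined_var = 1 / np.sum(1 / variances)

print(f"weights = {np.round(w, 3)}, combined = {combined:.1f}, var = {combined_var:.1f}")
```

The combined variance is always below the best individual model's variance, which is the point of the combination.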

  19. Optimal portfolio strategy with cross-correlation matrix composed by DCCA coefficients: Evidence from the Chinese stock market

    NASA Astrophysics Data System (ADS)

    Sun, Xuelian; Liu, Zixian

    2016-02-01

    In this paper, a new estimator of correlation matrix is proposed, which is composed of the detrended cross-correlation coefficients (DCCA coefficients), to improve portfolio optimization. In contrast to Pearson's correlation coefficients (PCC), DCCA coefficients acquired by the detrended cross-correlation analysis (DCCA) method can describe the nonlinear correlation between assets, and can be decomposed in different time scales. These properties of DCCA make it possible to improve the investment effect and more valuable to investigate the scale behaviors of portfolios. The minimum variance portfolio (MVP) model and the Mean-Variance (MV) model are used to evaluate the effectiveness of this improvement. Stability analysis shows the effect of two kinds of correlation matrices on the estimation error of portfolio weights. The observed scale behaviors are significant to risk management and could be used to optimize the portfolio selection.
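
    A minimal sketch of the MVP computation, assuming a small invented correlation matrix in place of the DCCA-coefficient matrix and made-up asset volatilities:

```python
import numpy as np

def mvp_weights(cov):
    """Global minimum variance portfolio weights: w = C^-1 1 / (1' C^-1 1)."""
    ones = np.ones(cov.shape[0])
    ci = np.linalg.solve(cov, ones)
    return ci / (ones @ ci)

# Illustrative correlation matrix standing in for the DCCA-coefficient matrix
corr = np.array([[1.0, 0.3, 0.1],
                 [0.3, 1.0, 0.4],
                 [0.1, 0.4, 1.0]])
vols = np.array([0.20, 0.30, 0.25])        # assumed asset volatilities
cov = corr * np.outer(vols, vols)

w = mvp_weights(cov)
print(np.round(w, 3), "portfolio variance:", round(float(w @ cov @ w), 5))
```

Swapping DCCA coefficients for Pearson correlations changes only the `corr` input, which is why the paper can compare the two estimators (and repeat the exercise at different DCCA time scales) within the same MVP framework.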

  20. Demodulation of messages received with low signal to noise ratio

    NASA Astrophysics Data System (ADS)

    Marguinaud, A.; Quignon, T.; Romann, B.

    The implementation of this all-digital demodulator is derived from maximum likelihood considerations applied to an analytical representation of the received signal. Traditional matched filters and phase-locked loops are replaced by minimum variance estimators and hypothesis tests. These statistical tests become very simple when working on the phase signal. Combined with rigorous control of the data representation, these methods allow significant computation savings compared to conventional realizations. Nominal operation has been verified down to a signal energy-to-noise ratio of -3 dB on a QPSK demodulator.

  1. An adaptive technique for estimating the atmospheric density profile during the AE mission

    NASA Technical Reports Server (NTRS)

    Argentiero, P.

    1973-01-01

    A technique is presented for processing accelerometer data obtained during the AE missions in order to estimate the atmospheric density profile. A minimum variance, adaptive filter is utilized. The trajectory of the probe and the probe parameters are treated in a consider mode: their estimates are not improved, but their associated uncertainties are allowed to influence filter behavior. Simulations indicate that the technique is effective in estimating a density profile to within a few percent.

  2. Real-time performance assessment and adaptive control for a water chiller unit in an HVAC system

    NASA Astrophysics Data System (ADS)

    Bai, Jianbo; Li, Yang; Chen, Jianhao

    2018-02-01

    The paper proposes an adaptive control method for a water chiller unit in an HVAC system. Based on minimum variance evaluation, the adaptive control method was used to achieve better control of the water chiller unit. To verify its performance, the proposed method was compared with a conventional PID controller; the simulation results showed that the adaptive control method had control performance superior to that of the conventional PID controller.

  3. Optimization of data analysis for the in vivo neutron activation analysis of aluminum in bone.

    PubMed

    Mohseni, H K; Matysiak, W; Chettle, D R; Byun, S H; Priest, N; Atanackovic, J; Prestwich, W V

    2016-10-01

    An existing system at McMaster University has been used for the in vivo measurement of aluminum in human bone. Precise and detailed analysis approaches are necessary to determine the aluminum concentration because of the low levels of aluminum found in the bone and the challenges associated with its detection. Phantoms resembling the composition of the human hand with varying concentrations of aluminum were made for testing the system prior to the application to human studies. A spectral decomposition model and a photopeak fitting model involving the inverse-variance weighted mean and a time-dependent analysis were explored to analyze the results and determine the model with the best performance and lowest minimum detection limit. The results showed that the spectral decomposition and the photopeak fitting model with the inverse-variance weighted mean both provided better results compared to the other methods tested. The spectral decomposition method resulted in a marginally lower detection limit (5μg Al/g Ca) compared to the inverse-variance weighted mean (5.2μg Al/g Ca), rendering both equally applicable to human measurements. Copyright © 2016 Elsevier Ltd. All rights reserved.
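    The photopeak-fitting model above combines repeated measurements with an inverse-variance weighted mean, which is the unbiased linear combination of minimum variance. A minimal sketch with invented numbers (not the study's data):

```python
def ivw_mean(values, variances):
    """Inverse-variance weighted mean and its (minimum) variance.

    Weighting each measurement by 1/variance yields the unbiased
    linear combination with the smallest possible variance; the
    combined variance is the reciprocal of the summed weights.
    """
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * x for w, x in zip(weights, values)) / total
    return mean, 1.0 / total

# Two repeated measurements of the same quantity: the more precise
# one (smaller variance) dominates the combined estimate.
mean, var = ivw_mean([5.0, 5.2], [0.04, 0.16])  # mean 5.04, variance 0.032
```

Note the combined variance (0.032) is smaller than either input variance, which is why pooling runs this way lowers the detection limit.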

  4. Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process.

    PubMed

    Haines, Aaron M; Zak, Matthew; Hammond, Katie; Scott, J Michael; Goble, Dale D; Rachlow, Janet L

    2013-08-13

    United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide a basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) whether a current population size was given, (2) whether a measure of uncertainty or variance was associated with current estimates of population size and (3) whether population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty compared to those for reptiles and amphibians. We suggest calculating minimum detectable differences to improve confidence when delisting endangered animals, and we identified incentives for individuals to get involved in recovery planning to improve access to quantitative data.

  5. Lekking without a paradox in the buff-breasted sandpiper

    USGS Publications Warehouse

    Lanctot, Richard B.; Scribner, Kim T.; Kempenaers, Bart; Weatherhead, Patrick J.

    1997-01-01

    Females in lek‐breeding species appear to copulate with a small subset of the available males. Such strong directional selection is predicted to decrease additive genetic variance in the preferred male traits, yet females continue to mate selectively, thus generating the lek paradox. In a study of buff‐breasted sandpipers (Tryngites subruficollis), we combine detailed behavioral observations with paternity analyses using single‐locus minisatellite DNA probes to provide the first evidence from a lek‐breeding species that the variance in male reproductive success is much lower than expected. In 17 and 30 broods sampled in two consecutive years, a minimum of 20 and 39 males, respectively, sired offspring. This low variance in male reproductive success resulted from effective use of alternative reproductive tactics by males, females mating with solitary males off leks, and multiple mating by females. Thus, the results of this study suggest that sexual selection through female choice is weak in buff‐breasted sandpipers. The behavior of other lek‐breeding birds is sufficiently similar to that of buff‐breasted sandpipers that paternity studies of those species should be conducted to determine whether leks generally are less paradoxical than they appear.

  6. Kriging analysis of mean annual precipitation, Powder River Basin, Montana and Wyoming

    USGS Publications Warehouse

    Karlinger, M.R.; Skrivan, James A.

    1981-01-01

    Kriging is a statistical estimation technique for regionalized variables which exhibit an autocorrelation structure. Such structure can be described by a semi-variogram of the observed data. The kriging estimate at any point is a weighted average of the data, where the weights are determined using the semi-variogram and an assumed drift, or lack of drift, in the data. Block, or areal, estimates can also be calculated. The kriging algorithm, based on unbiased and minimum-variance estimates, involves a linear system of equations to calculate the weights. Kriging variances can then be used to give confidence intervals of the resulting estimates. Mean annual precipitation in the Powder River basin, Montana and Wyoming, is an important variable when considering restoration of coal-strip-mining lands of the region. Two kriging analyses involving data at 60 stations were made--one assuming no drift in precipitation, and one a partial quadratic drift simulating orographic effects. Contour maps of estimates of mean annual precipitation were similar for both analyses, as were the corresponding contours of kriging variances. Block estimates of mean annual precipitation were made for two subbasins. Runoff estimates were 1-2 percent of the kriged block estimates. (USGS)
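    The kriging system described above can be sketched in a few lines: a semi-variogram model supplies the coefficients, and a Lagrange multiplier enforces unbiasedness while the solution delivers the minimum-variance weights. The spherical variogram and its sill/range values below are illustrative assumptions, not the model fitted for the Powder River basin:

```python
import numpy as np

def ordinary_kriging_weights(points, target, sill=1.0, vrange=10.0):
    """Ordinary-kriging weights for one target location.

    A spherical semi-variogram (sill, vrange) stands in for a model
    fitted to the data. The bordered linear system enforces that the
    weights sum to 1 (unbiasedness); its solution is the
    minimum-variance linear estimator's weight vector.
    """
    def gamma(h):
        h = np.minimum(h, vrange)
        return sill * (1.5 * h / vrange - 0.5 * (h / vrange) ** 3)

    pts = np.asarray(points, float)
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)       # semi-variogram between data points
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(pts - np.asarray(target, float), axis=1))
    w = np.linalg.solve(A, b)
    return w[:n], w[n]         # weights and Lagrange multiplier

# Three stations and one target point: the nearest station gets the
# largest weight, and the weights sum to exactly 1.
weights, mu = ordinary_kriging_weights([(0, 0), (1, 0), (0, 1)], (0.4, 0.4))
```

The same solved system also yields the kriging variance, which is what the abstract uses to put confidence intervals on the precipitation estimates.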

  7. Aircrew coordination and decisionmaking: Peer ratings of video tapes made during a full mission simulation

    NASA Technical Reports Server (NTRS)

    Murphy, M. R.; Awe, C. A.

    1986-01-01

    Six professionally active, retired captains rated the coordination and decisionmaking performances of sixteen aircrews while viewing videotapes of a simulated commercial air transport operation. The scenario featured a required diversion and a probable minimum fuel situation. Seven-point Likert-type scales were used in rating variables on the basis of a model of crew coordination and decisionmaking. The variables were based on concepts such as decision difficulty, efficiency, and outcome quality, and on leader-subordinate concepts such as person- and task-oriented leader behavior and competency motivation of subordinate crewmembers. Five front-end variables of the model were in turn dependent variables for a hierarchical regression procedure. The variance in safety performance was explained 46% by decision efficiency, command reversal, and decision quality. The variance of decision quality, an alternative substantive dependent variable to safety performance, was explained 60% by decision efficiency and the captain's quality of within-crew communications. The variances of decision efficiency, crew coordination, and command reversal were in turn explained 78%, 80%, and 60% by small numbers of preceding independent variables. A principal-component varimax factor analysis supported the model structure suggested by the regression analyses.

  8. Signal-dependent noise determines motor planning

    NASA Astrophysics Data System (ADS)

    Harris, Christopher M.; Wolpert, Daniel M.

    1998-08-01

    When we make saccadic eye movements or goal-directed arm movements, there is an infinite number of possible trajectories that the eye or arm could take to reach the target. However, humans show highly stereotyped trajectories in which velocity profiles of both the eye and hand are smooth and symmetric for brief movements. Here we present a unifying theory of eye and arm movements based on the single physiological assumption that the neural control signals are corrupted by noise whose variance increases with the size of the control signal. We propose that in the presence of such signal-dependent noise, the shape of a trajectory is selected to minimize the variance of the final eye or arm position. This minimum-variance theory accurately predicts the trajectories of both saccades and arm movements and the speed-accuracy trade-off described by Fitts' law. These profiles are robust to changes in the dynamics of the eye or arm, as found empirically. Moreover, the relation between path curvature and hand velocity during drawing movements reproduces the empirical 'two-thirds power law'. This theory provides a simple and powerful unifying perspective for both eye and arm movement control.
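    A toy discrete-time sketch (not Harris and Wolpert's eye or arm model) shows why signal-dependent noise favors smooth profiles: for a simple position integrator whose per-step noise standard deviation is proportional to the control magnitude, endpoint variance is k² Σ u², which for a fixed total displacement is minimized by a constant, smooth control profile.

```python
def endpoint_variance(controls, k=0.1):
    """Final-position variance of a position integrator
    x[t+1] = x[t] + u[t] + eps[t], with signal-dependent noise
    eps[t] ~ N(0, (k*u[t])**2).  Independent steps give
    Var(x[T]) = k**2 * sum(u**2).
    """
    return k ** 2 * sum(u * u for u in controls)

# Two control sequences that move the same total distance (10 units):
smooth = [1.0] * 10               # constant, stereotyped profile
abrupt = [5.0, 5.0] + [0.0] * 8   # fast start, then coast

# The smooth profile yields the smaller endpoint variance, which is
# the minimum-variance theory's qualitative prediction.
```

In this linear caricature the minimizer of Σ u² subject to Σ u = D is exactly the constant profile, mirroring the stereotyped trajectories observed empirically.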

  9. 3D facial landmarks: Inter-operator variability of manual annotation

    PubMed Central

    2014-01-01

    Background: Manual annotation of landmarks is a known source of variance, which exists in all fields of medical imaging and influences the accuracy and interpretation of results. However, the variability of human facial landmarks is only sparsely addressed in the current literature, as opposed to research fields such as orthodontics and cephalometrics. We present a full facial 3D annotation procedure and a sparse set of manually annotated landmarks, in an effort to reduce operator time and minimize the variance. Method: Facial scans from 36 voluntary unrelated blood donors from the Danish Blood Donor Study were randomly chosen. Six operators twice manually annotated 73 anatomical and pseudo-landmarks, using a three-step scheme producing a dense point correspondence map. We analyzed both the intra- and inter-operator variability using mixed-model ANOVA. We then compared four sparse sets of landmarks in order to construct a dense correspondence map of the 3D scans with a minimum point variance. Results: The anatomical landmarks of the eye were associated with the lowest variance, particularly the center of the pupils, whereas points on the jaw and eyebrows had the highest variation. We saw marginal variability with regard to intra-operator effects and portraits. Using a sparse set of landmarks (n=14) that captures the whole face, the dense point mean variance was reduced from 1.92 to 0.54 mm. Conclusion: The inter-operator variability was primarily associated with particular landmarks, where more leniently defined landmarks had the highest variability. The variables embedded in the portrait and the reliability of a trained operator had only marginal influence on the variability. Further, using 14 of the annotated landmarks we were able to reduce the variability and create a dense correspondence mesh capturing all facial features. PMID:25306436

  10. How good is crude MDL for solving the bias-variance dilemma? An empirical investigation based on Bayesian networks.

    PubMed

    Cruz-Ramírez, Nicandro; Acosta-Mesa, Héctor Gabriel; Mezura-Montes, Efrén; Guerra-Hernández, Alejandro; Hoyos-Rivera, Guillermo de Jesús; Barrientos-Martínez, Rocío Erandi; Gutiérrez-Fragoso, Karina; Nava-Fernández, Luis Alonso; González-Gaspar, Patricia; Novoa-del-Toro, Elva María; Aguilera-Rueda, Vicente Josué; Ameca-Alducin, María Yaneli

    2014-01-01

    The bias-variance dilemma is a well-known and important problem in Machine Learning. It basically relates the generalization capability (goodness of fit) of a learning method to its corresponding complexity. When we have enough data at hand, it is possible to use these data in such a way so as to minimize overfitting (the risk of selecting a complex model that generalizes poorly). Unfortunately, there are many situations where we simply do not have this required amount of data. Thus, we need to find methods capable of efficiently exploiting the available data while avoiding overfitting. Different metrics have been proposed to achieve this goal: the Minimum Description Length principle (MDL), Akaike's Information Criterion (AIC) and Bayesian Information Criterion (BIC), among others. In this paper, we focus on crude MDL and empirically evaluate its performance in selecting models with a good balance between goodness of fit and complexity: the so-called bias-variance dilemma, decomposition or tradeoff. Although the graphical interaction between these dimensions (bias and variance) is ubiquitous in the Machine Learning literature, few works present experimental evidence to recover such interaction. In our experiments, we argue that the resulting graphs allow us to gain insights that are difficult to unveil otherwise: that crude MDL naturally selects balanced models in terms of bias-variance, which need not necessarily be the gold-standard ones. We carry out these experiments using a specific model: a Bayesian network. In spite of these motivating results, we should also not overlook three other components that may significantly affect the final model selection: the search procedure, the noise rate and the sample size.

  11. How Good Is Crude MDL for Solving the Bias-Variance Dilemma? An Empirical Investigation Based on Bayesian Networks

    PubMed Central

    Cruz-Ramírez, Nicandro; Acosta-Mesa, Héctor Gabriel; Mezura-Montes, Efrén; Guerra-Hernández, Alejandro; Hoyos-Rivera, Guillermo de Jesús; Barrientos-Martínez, Rocío Erandi; Gutiérrez-Fragoso, Karina; Nava-Fernández, Luis Alonso; González-Gaspar, Patricia; Novoa-del-Toro, Elva María; Aguilera-Rueda, Vicente Josué; Ameca-Alducin, María Yaneli

    2014-01-01

    The bias-variance dilemma is a well-known and important problem in Machine Learning. It basically relates the generalization capability (goodness of fit) of a learning method to its corresponding complexity. When we have enough data at hand, it is possible to use these data in such a way so as to minimize overfitting (the risk of selecting a complex model that generalizes poorly). Unfortunately, there are many situations where we simply do not have this required amount of data. Thus, we need to find methods capable of efficiently exploiting the available data while avoiding overfitting. Different metrics have been proposed to achieve this goal: the Minimum Description Length principle (MDL), Akaike’s Information Criterion (AIC) and Bayesian Information Criterion (BIC), among others. In this paper, we focus on crude MDL and empirically evaluate its performance in selecting models with a good balance between goodness of fit and complexity: the so-called bias-variance dilemma, decomposition or tradeoff. Although the graphical interaction between these dimensions (bias and variance) is ubiquitous in the Machine Learning literature, few works present experimental evidence to recover such interaction. In our experiments, we argue that the resulting graphs allow us to gain insights that are difficult to unveil otherwise: that crude MDL naturally selects balanced models in terms of bias-variance, which need not necessarily be the gold-standard ones. We carry out these experiments using a specific model: a Bayesian network. In spite of these motivating results, we should also not overlook three other components that may significantly affect the final model selection: the search procedure, the noise rate and the sample size. PMID:24671204

  12. Estimation of stable boundary-layer height using variance processing of backscatter lidar data

    NASA Astrophysics Data System (ADS)

    Saeed, Umar; Rocadenbosch, Francesc

    2017-04-01

    The stable boundary layer (SBL) is one of the most complex and least understood topics in atmospheric science. The type and height of the SBL are important parameters for several applications, such as understanding the formation of haze and fog and the accuracy of chemical and pollutant dispersion models [1]. This work addresses nocturnal Stable Boundary-Layer Height (SBLH) estimation by using variance processing of attenuated backscatter lidar measurements, covering its principles and limitations. It is shown that temporal and spatial variance profiles of the attenuated backscatter signal are related to the stratification of aerosols in the SBL. A minimum-variance SBLH estimator using local minima in the variance profiles of backscatter lidar signals is introduced. The method is validated using data from the HD(CP)2 Observational Prototype Experiment (HOPE) campaign at Jülich, Germany [2], under different atmospheric conditions. This work has received funding from the European Union Seventh Framework Programme, FP7 People, ITN Marie Curie Actions Programme (2012-2016) in the frame of the ITaRS project (GA 289923), the H2020 programme under the ACTRIS-2 project (GA 654109), the Spanish Ministry of Economy and Competitiveness - European Regional Development Funds under project TEC2015-63832-P, and the Generalitat de Catalunya (Grup de Recerca Consolidat) 2014-SGR-583. [1] R. B. Stull, An Introduction to Boundary Layer Meteorology, chapter 12, Stable Boundary Layer, pp. 499-543, Springer, Netherlands, 1988. [2] U. Löhnert, J. H. Schween, C. Acquistapace, K. Ebell, M. Maahn, M. Barrera-Verdejo, A. Hirsikko, B. Bohn, A. Knaps, E. O'Connor, C. Simmer, A. Wahner, and S. Crewell, "JOYCE: Jülich Observatory for Cloud Evolution," Bull. Amer. Meteor. Soc., vol. 96, no. 7, pp. 1157-1174, 2015.
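    The variance-processing idea can be sketched on synthetic data: compute the temporal variance of backscatter at each range gate and take the height of the first local minimum as the layer top. The gate spacing, noise levels, and first-local-minimum rule below are illustrative assumptions, not the HOPE configuration:

```python
import numpy as np

def sblh_from_variance(backscatter, heights):
    """Estimate the stable-boundary-layer height as the height of the
    first interior local minimum in the temporal variance profile of
    attenuated backscatter (rows = time samples, columns = range gates).
    """
    var_profile = backscatter.var(axis=0)
    for i in range(1, len(var_profile) - 1):
        if var_profile[i] < var_profile[i - 1] and var_profile[i] < var_profile[i + 1]:
            return heights[i]
    return None  # no interior local minimum found

# Synthetic profiles: strong fluctuations in the aerosol-rich layer
# below ~200 m, a quiet minimum near 250 m, fluctuations again aloft.
rng = np.random.default_rng(0)
heights = np.arange(50, 500, 50)  # nine gates, 50 m spacing
sigma = np.array([1.0, 0.8, 0.6, 0.4, 0.05, 0.4, 0.6, 0.8, 1.0])
data = rng.normal(0.0, sigma, size=(500, 9))
```

Running `sblh_from_variance(data, heights)` on this synthetic case picks out the quiet gate, consistent with the aerosol-stratification argument in the abstract.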

  13. K48-linked KLF4 ubiquitination by E3 ligase Mule controls T-cell proliferation and cell cycle progression.

    PubMed

    Hao, Zhenyue; Sheng, Yi; Duncan, Gordon S; Li, Wanda Y; Dominguez, Carmen; Sylvester, Jennifer; Su, Yu-Wen; Lin, Gloria H Y; Snow, Bryan E; Brenner, Dirk; You-Ten, Annick; Haight, Jillian; Inoue, Satoshi; Wakeham, Andrew; Elford, Alisha; Hamilton, Sara; Liang, Yi; Zúñiga-Pflücker, Juan C; He, Housheng Hansen; Ohashi, Pamela S; Mak, Tak W

    2017-01-13

    T-cell proliferation is regulated by ubiquitination but the underlying molecular mechanism remains obscure. Here we report that Lys-48-linked ubiquitination of the transcription factor KLF4 mediated by the E3 ligase Mule promotes T-cell entry into S phase. Mule is elevated in T cells upon TCR engagement, and Mule deficiency in T cells blocks proliferation because KLF4 accumulates and drives upregulation of its transcriptional targets E2F2 and the cyclin-dependent kinase inhibitors p21 and p27. T-cell-specific Mule knockout (TMKO) mice develop exacerbated experimental autoimmune encephalomyelitis (EAE), show impaired generation of antigen-specific CD8+ T cells with reduced cytokine production, and fail to clear LCMV infections. Thus, Mule-mediated ubiquitination of the novel substrate KLF4 regulates T-cell proliferation, autoimmunity and antiviral immune responses in vivo.

  14. The Lymphocytic Choriomeningitis Virus Matrix Protein PPXY Late Domain Drives the Production of Defective Interfering Particles

    PubMed Central

    Ziegler, Christopher M.; Eisenhauer, Philip; Bruce, Emily A.; Weir, Marion E.; King, Benjamin R.; Klaus, Joseph P.; Krementsov, Dimitry N.; Shirley, David J.; Ballif, Bryan A.; Botten, Jason

    2016-01-01

    Arenaviruses cause severe diseases in humans but establish asymptomatic, lifelong infections in rodent reservoirs. Persistently-infected rodents harbor high levels of defective interfering (DI) particles, which are thought to be important for establishing persistence and mitigating virus-induced cytopathic effect. Little is known about what drives the production of DI particles. We show that neither the PPXY late domain encoded within the lymphocytic choriomeningitis virus (LCMV) matrix protein nor a functional endosomal sorting complex transport (ESCRT) pathway is absolutely required for the generation of standard infectious virus particles. In contrast, DI particle release critically requires the PPXY late domain and is ESCRT-dependent. Additionally, the terminal tyrosine in the PPXY motif is reversibly phosphorylated and our findings indicate that this posttranslational modification may regulate DI particle formation. Thus we have uncovered a new role for the PPXY late domain and a possible mechanism for its regulation. PMID:27010636

  15. Measuring the Power Spectrum with Peculiar Velocities

    NASA Astrophysics Data System (ADS)

    Macaulay, Edward; Feldman, H. A.; Ferreira, P. G.; Jaffe, A. H.; Agarwal, S.; Hudson, M. J.; Watkins, R.

    2012-01-01

    The peculiar velocities of galaxies are an inherently valuable cosmological probe, providing an unbiased estimate of the distribution of matter on scales much larger than the depth of the survey. Much research interest has been motivated by the high dipole moment of our local peculiar velocity field, which suggests a large scale excess in the matter power spectrum, and can appear to be in some tension with the LCDM model. We use a composite catalogue of 4,537 peculiar velocity measurements with a characteristic depth of 33 h-1 Mpc to estimate the matter power spectrum. We compare the constraints with this method, directly studying the full peculiar velocity catalogue, to results from Macaulay et al. (2011), studying minimum variance moments of the velocity field, as calculated by Watkins, Feldman & Hudson (2009) and Feldman, Watkins & Hudson (2010). We find good agreement with the LCDM model on scales of k > 0.01 h Mpc-1. We find an excess of power on scales of k < 0.01 h Mpc-1, although with a 1 sigma uncertainty which includes the LCDM model. We find that the uncertainty in the excess at these scales is larger than an alternative result studying only moments of the velocity field, which is due to the minimum variance weights used to calculate the moments. At small scales, we are able to clearly discriminate between linear and nonlinear clustering in simulated peculiar velocity catalogues, and find some evidence (although less clear) for linear clustering in the real peculiar velocity data.

  16. Power spectrum estimation from peculiar velocity catalogues

    NASA Astrophysics Data System (ADS)

    Macaulay, E.; Feldman, H. A.; Ferreira, P. G.; Jaffe, A. H.; Agarwal, S.; Hudson, M. J.; Watkins, R.

    2012-09-01

    The peculiar velocities of galaxies are an inherently valuable cosmological probe, providing an unbiased estimate of the distribution of matter on scales much larger than the depth of the survey. Much research interest has been motivated by the high dipole moment of our local peculiar velocity field, which suggests a large-scale excess in the matter power spectrum and can appear to be in some tension with the Λ cold dark matter (ΛCDM) model. We use a composite catalogue of 4537 peculiar velocity measurements with a characteristic depth of 33 h-1 Mpc to estimate the matter power spectrum. We compare the constraints with this method, directly studying the full peculiar velocity catalogue, to results by Macaulay et al., studying minimum variance moments of the velocity field, as calculated by Feldman, Watkins & Hudson. We find good agreement with the ΛCDM model on scales of k > 0.01 h Mpc-1. We find an excess of power on scales of k < 0.01 h Mpc-1 with a 1σ uncertainty which includes the ΛCDM model. We find that the uncertainty in excess at these scales is larger than an alternative result studying only moments of the velocity field, which is due to the minimum variance weights used to calculate the moments. At small scales, we are able to clearly discriminate between linear and non-linear clustering in simulated peculiar velocity catalogues and find some evidence (although less clear) for linear clustering in the real peculiar velocity data.

  17. Automatic quantification of mammary glands on non-contrast x-ray CT by using a novel segmentation approach

    NASA Astrophysics Data System (ADS)

    Zhou, Xiangrong; Kano, Takuya; Cai, Yunliang; Li, Shuo; Zhou, Xinxin; Hara, Takeshi; Yokoyama, Ryujiro; Fujita, Hiroshi

    2016-03-01

    This paper describes a brand-new automatic segmentation method for quantifying the volume and density of mammary gland regions on non-contrast CT images. The proposed method uses two processing steps, (1) breast region localization and (2) breast region decomposition, to accomplish a robust mammary gland segmentation task on CT images. The first step detects the minimum bounding boxes of the left and right breast regions, respectively, based on a machine-learning approach that adapts to the large variance of breast appearances at different ages. The second step divides the whole breast region on each side into mammary gland, fat tissue, and other regions by using a spectral clustering technique that focuses on intra-region similarities for each patient and aims to overcome the image variance caused by different scan parameters. The whole approach is designed as a simple structure with a minimal number of parameters to gain superior robustness and computational efficiency in a real clinical setting. We applied this approach to a dataset of 300 CT scans, sampled in equal numbers from women aged 30 to 50 years. Compared to human annotations, the proposed approach successfully measured the volume and quantified the distribution of CT numbers of mammary gland regions. The experimental results demonstrated that the proposed approach achieves results consistent with manual annotations. Through our proposed framework, an efficient and effective low-cost clinical screening scheme may be easily implemented to predict breast cancer risk, especially from already acquired scans.

  18. Influence of Layer Thickness, Raster Angle, Deformation Temperature and Recovery Temperature on the Shape-Memory Effect of 3D-Printed Polylactic Acid Samples

    PubMed Central

    Wu, Wenzheng; Ye, Wenli; Wu, Zichao; Geng, Peng; Wang, Yulei; Zhao, Ji

    2017-01-01

    The success of the 3D-printing process depends upon the proper selection of process parameters. However, the majority of current related studies focus on the influence of process parameters on the mechanical properties of the parts. The influence of process parameters on the shape-memory effect has been little studied. This study used the orthogonal experimental design method to evaluate the influence of the layer thickness H, raster angle θ, deformation temperature Td and recovery temperature Tr on the shape-recovery ratio Rr and maximum shape-recovery rate Vm of 3D-printed polylactic acid (PLA). The order and contribution of every experimental factor on the target index were determined by range analysis and ANOVA, respectively. The experimental results indicated that the recovery temperature exerted the greatest effect with a variance ratio of 416.10, whereas the layer thickness exerted the smallest effect on the shape-recovery ratio with a variance ratio of 4.902. The recovery temperature exerted the most significant effect on the maximum shape-recovery rate with the highest variance ratio of 1049.50, whereas the raster angle exerted the minimum effect with a variance ratio of 27.163. The results showed that the shape-memory effect of 3D-printed PLA parts depended strongly on recovery temperature, and depended more weakly on the deformation temperature and 3D-printing parameters. PMID:28825617

  19. Genetic parameters of legendre polynomials for first parity lactation curves.

    PubMed

    Pool, M H; Janss, L L; Meuwissen, T H

    2000-11-01

    Variance components of the covariance function coefficients in a random regression test-day model were estimated by Legendre polynomials up to a fifth order for first-parity records of Dutch dairy cows using Gibbs sampling. Two Legendre polynomials of equal order were used to model the random part of the lactation curve, one for the genetic component and one for permanent environment. Test-day records from cows registered between 1990 and 1996 and collected by regular milk recording were available. For the data set, 23,700 complete lactations were selected from 475 herds, sired by 262 sires. Because the application of a random regression model is limited by computing capacity, we investigated the minimum order needed to fit the variance structure in the data sufficiently. Predictions of genetic and permanent environmental variance structures were compared with bivariate estimates on 30-d intervals. A third-order or higher polynomial modeled the shape of variance curves over DIM with sufficient accuracy for the genetic and permanent environment part. Also, the genetic correlation structure was fitted with sufficient accuracy by a third-order polynomial, but, for the permanent environmental component, a fourth order was needed. Because equal orders are suggested in the literature, a fourth-order Legendre polynomial is recommended in this study. However, a rank of three for the genetic covariance matrix and of four for permanent environment allows a simpler covariance function with a reduced number of parameters based on the eigenvalues and eigenvectors.
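    The Legendre modeling step can be sketched with NumPy's Legendre module: days in milk are mapped to [-1, 1], the standard support of the basis, and the curve is expressed in Legendre coefficients. The test-day records below are invented for illustration, and a plain least-squares fit stands in for the Gibbs-sampled random regression model of the paper:

```python
import numpy as np

def legendre_fit(dim, yields, order=3):
    """Fit a lactation curve with a Legendre polynomial basis.

    Days in milk (DIM) are rescaled to [-1, 1] and an ordinary
    least-squares fit of the given order is returned (the paper
    instead estimates variance components of these coefficients
    in a random regression model).
    """
    t = 2.0 * (np.asarray(dim) - min(dim)) / (max(dim) - min(dim)) - 1.0
    coefs = np.polynomial.legendre.legfit(t, yields, order)
    return coefs, np.polynomial.legendre.legval(t, coefs)

# Invented test-day records: rise to a peak near DIM 60, then decline.
dim = [10, 40, 70, 100, 160, 220, 280]
kg = [24.0, 29.0, 30.0, 28.5, 26.0, 23.0, 20.0]
coefs, fitted = legendre_fit(dim, kg)  # third-order: 4 coefficients
```

A third-order basis gives four coefficients per curve, which is why the recommended order keeps the covariance matrices small in the random regression setting.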

  20. The High Degree of Sequence Plasticity of the Arenavirus Noncoding Intergenic Region (IGR) Enables the Use of a Nonviral Universal Synthetic IGR To Attenuate Arenaviruses

    PubMed Central

    Iwasaki, Masaharu; Cubitt, Beatrice; Sullivan, Brian M.

    2016-01-01

    ABSTRACT Hemorrhagic fever arenaviruses (HFAs) pose important public health problems in regions where they are endemic. Concerns about human-pathogenic arenaviruses are exacerbated because of the lack of FDA-licensed arenavirus vaccines and because current antiarenaviral therapy is limited to an off-label use of ribavirin that is only partially effective. We have recently shown that the noncoding intergenic region (IGR) present in each arenavirus genome segment, the S and L segments (S-IGR and L-IGR, respectively), plays important roles in the control of virus protein expression and that this knowledge could be harnessed for the development of live-attenuated vaccine strains to combat HFAs. In this study, we further investigated the sequence plasticity of the arenavirus IGR. We demonstrate that recombinants of the prototypic arenavirus lymphocytic choriomeningitis virus (rLCMVs), whose S-IGRs were replaced by the S-IGR of Lassa virus (LASV) or an entirely nonviral S-IGR-like sequence (Ssyn), are viable, indicating that the function of S-IGR tolerates a high degree of sequence plasticity. In addition, rLCMVs whose L-IGRs were replaced by Ssyn or S-IGRs of the very distantly related reptarenavirus Golden Gate virus (GGV) were viable and severely attenuated in vivo but able to elicit protective immunity against a lethal challenge with wild-type LCMV. Our findings indicate that replacement of L-IGR by a nonviral Ssyn could serve as a universal molecular determinant of arenavirus attenuation. IMPORTANCE Hemorrhagic fever arenaviruses (HFAs) cause high rates of morbidity and mortality and pose important public health problems in regions where they are endemic. Implementation of live-attenuated vaccines (LAVs) will represent a major step to combat HFAs. Here we document that the arenavirus noncoding intergenic region (IGR) has a high degree of plasticity compatible with virus viability. 
This observation led us to generate recombinant LCMVs containing nonviral synthetic IGRs. These rLCMVs were severely attenuated in vivo but able to elicit protective immunity against a lethal challenge with wild-type LCMV. These nonviral synthetic IGRs can be used as universal molecular determinants of arenavirus attenuation for the rapid development of safe and effective, as well as stable, LAVs to combat HFA. PMID:26739049

  1. Lassa-Vesicular Stomatitis Chimeric Virus Safely Destroys Brain Tumors

    PubMed Central

    Wollmann, Guido; Drokhlyansky, Eugene; Davis, John N.; Cepko, Connie

    2015-01-01

    ABSTRACT High-grade tumors in the brain are among the deadliest of cancers. Here, we took a promising oncolytic virus, vesicular stomatitis virus (VSV), and tested the hypothesis that the neurotoxicity associated with the virus could be eliminated without blocking its oncolytic potential in the brain by replacing the neurotropic VSV glycoprotein with the glycoprotein from one of five different viruses, including Ebola virus, Marburg virus, lymphocytic choriomeningitis virus (LCMV), rabies virus, and Lassa virus. Based on in vitro infections of normal and tumor cells, we selected two viruses to test in vivo. Wild-type VSV was lethal when injected directly into the brain. In contrast, a novel chimeric virus (VSV-LASV-GPC) containing genes from both the Lassa virus glycoprotein precursor (GPC) and VSV showed no adverse actions within or outside the brain and targeted and completely destroyed brain cancer, including high-grade glioblastoma and melanoma, even in metastatic cancer models. When mice had two brain tumors, intratumoral VSV-LASV-GPC injection in one tumor (glioma or melanoma) led to complete tumor destruction; importantly, the virus moved contralaterally within the brain to selectively infect the second noninjected tumor. A chimeric virus combining VSV genes with the gene coding for the Ebola virus glycoprotein was safe in the brain and also selectively targeted brain tumors but was substantially less effective in destroying brain tumors and prolonging survival of tumor-bearing mice. A tropism for multiple cancer types combined with an exquisite tumor specificity opens a new door to widespread application of VSV-LASV-GPC as a safe and efficacious oncolytic chimeric virus within the brain. IMPORTANCE Many viruses have been tested for their ability to target and kill cancer cells. 
Vesicular stomatitis virus (VSV) has shown substantial promise, but a key problem is that if it enters the brain, it can generate adverse neurologic consequences, including death. We tested a series of chimeric viruses containing genes coding for VSV, together with a gene coding for the glycoprotein from other viruses, including Ebola virus, Lassa virus, LCMV, rabies virus, and Marburg virus, which was substituted for the VSV glycoprotein gene. Ebola and Lassa chimeric viruses were safe in the brain and targeted brain tumors. Lassa-VSV was particularly effective, showed no adverse side effects even when injected directly into the brain, and targeted and destroyed two different types of deadly brain cancer, including glioblastoma and melanoma. PMID:25878115

  2. VizieR Online Data Catalog: AGNs in submm-selected Lockman Hole galaxies (Serjeant+, 2010)

    NASA Astrophysics Data System (ADS)

    Serjeant, S.; Negrello, M.; Pearson, C.; Mortier, A.; Austermann, J.; Aretxaga, I.; Clements, D.; Chapman, S.; Dye, S.; Dunlop, J.; Dunne, L.; Farrah, D.; Hughes, D.; Lee, H. M.; Matsuhara, H.; Ibar, E.; Im, M.; Jeong, W.-S.; Kim, S.; Oyabu, S.; Takagi, T.; Wada, T.; Wilson, G.; Vaccari, M.; Yun, M.

    2013-11-01

    We present a comparison of the SCUBA half degree extragalactic survey (SHADES) at 450μm, 850μm and 1100μm with deep guaranteed time 15μm AKARI FU-HYU survey data and Spitzer guaranteed time data at 3.6-24μm in the Lockman hole east. The AKARI data was analysed using bespoke software based in part on the drizzling and minimum-variance matched filtering developed for SHADES, and was cross-calibrated against ISO fluxes. (2 data files).

  3. A Bayesian approach to parameter and reliability estimation in the Poisson distribution.

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1972-01-01

    For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions of that parameter. A Monte Carlo procedure is implemented to make possible an empirical mean-squared error comparison between Bayes and existing minimum variance unbiased, as well as maximum likelihood, estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
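The Monte Carlo mean-squared-error comparison described above is easy to reproduce in a small sketch (a hypothetical illustration, not the paper's code; the Gamma(a, b) prior with posterior mean (a + Σx)/(b + n) is the standard conjugate choice, and the prior here is deliberately centred on the true intensity):

```python
import math
import random

def sample_poisson(lam, rng):
    # Knuth's method: multiply uniforms until the product drops below exp(-lam)
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= limit:
            return k - 1

def compare_mse(lam=2.0, n=10, a=4.0, b=2.0, reps=5000, seed=1):
    """Monte Carlo MSE of the MLE (sample mean) versus the Bayes
    posterior mean under a Gamma(a, b) prior for the Poisson intensity."""
    rng = random.Random(seed)
    se_mle = se_bayes = 0.0
    for _ in range(reps):
        xs = [sample_poisson(lam, rng) for _ in range(n)]
        se_mle += (sum(xs) / n - lam) ** 2
        se_bayes += ((a + sum(xs)) / (b + n) - lam) ** 2
    return se_mle / reps, se_bayes / reps
```

With the prior mean matching the true intensity, the Bayes estimator's shrinkage trades a small bias for a large variance reduction, reproducing the qualitative finding of the abstract.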

  4. Statistical analysis of nonlinearly reconstructed near-infrared tomographic images: Part I--Theory and simulations.

    PubMed

    Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D

    2002-07-01

    Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
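The bias/variance behaviour described above can be reproduced with a toy shrinkage estimator (a deliberately simplified scalar stand-in for the regularized image reconstruction; all names and parameter values are illustrative only):

```python
import random

def mse_components(x_true=1.0, sigma=0.5, alpha=0.5, reps=20000, seed=7):
    """Empirical squared bias and variance of the shrinkage estimate
    y / (1 + alpha) of x_true from noisy data y = x_true + noise.
    Their sum is the estimator's mean-squared error."""
    rng = random.Random(seed)
    est = [(x_true + rng.gauss(0.0, sigma)) / (1.0 + alpha) for _ in range(reps)]
    mean = sum(est) / reps
    bias2 = (mean - x_true) ** 2
    var = sum((e - mean) ** 2 for e in est) / reps
    return bias2, var
```

Sweeping alpha shows the trade-off from the abstract: squared bias dominates at strong regularization, while variance dominates as alpha approaches zero.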

  5. Global-scale high-resolution (~1 km) modelling of mean, maximum and minimum annual streamflow

    NASA Astrophysics Data System (ADS)

    Barbarossa, Valerio; Huijbregts, Mark; Hendriks, Jan; Beusen, Arthur; Clavreul, Julie; King, Henry; Schipper, Aafke

    2017-04-01

    Quantifying mean, maximum and minimum annual flow (AF) of rivers at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. AF metrics can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict AF metrics based on climate and catchment characteristics. Yet, so far, regression models have mostly been developed at a regional scale, and the extent to which they can be extrapolated to other regions is not known. We developed global-scale regression models that quantify mean, maximum and minimum AF as a function of catchment area and catchment-averaged slope, elevation, and mean, maximum and minimum annual precipitation and air temperature. We then used these models to obtain global 30 arc-seconds (~1 km) maps of mean, maximum and minimum AF for each year from 1960 through 2015, based on a newly developed hydrologically conditioned digital elevation model. We calibrated our regression models on observations of discharge and catchment characteristics from about 4,000 catchments worldwide, ranging from 10^0 to 10^6 km2 in size, and validated them against independent measurements as well as the output of a number of process-based global hydrological models (GHMs). The variance explained by our regression models ranged up to 90%, and the performance of the models compared well with the performance of existing GHMs. Yet, our AF maps provide a level of spatial detail that cannot yet be achieved by current GHMs.
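A regression of the kind described can be sketched as a log-log (power-law) fit of annual flow against catchment area alone (a hypothetical single-predictor simplification of the multi-predictor models in the abstract; the coefficients and synthetic catchments are invented for illustration):

```python
import math
import random

def fit_power_law(areas, flows):
    """OLS fit of log(Q) = log(c) + k * log(A); returns (c, k)."""
    xs = [math.log(a) for a in areas]
    ys = [math.log(q) for q in flows]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope and intercept of the ordinary least-squares line in log space
    k = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return math.exp(my - k * mx), k

# Synthetic catchments spanning 10^0 to 10^6 km2: Q = 0.01 * A^0.9 with scatter
rng = random.Random(3)
areas = [10 ** rng.uniform(0, 6) for _ in range(200)]
flows = [0.01 * a ** 0.9 * math.exp(rng.gauss(0.0, 0.1)) for a in areas]
c, k = fit_power_law(areas, flows)
```

Fitting in log space keeps the multiplicative scatter roughly homoscedastic, which is why hydrological scaling relations are usually estimated this way.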

  6. Law of the Minimum paradoxes.

    PubMed

    Gorban, Alexander N; Pokidysheva, Lyudmila I; Smirnova, Elena V; Tyukina, Tatiana A

    2011-09-01

    The "Law of the Minimum" states that growth is controlled by the scarcest resource (limiting factor). This concept was originally applied to plant or crop growth (Justus von Liebig, 1840; Salisbury, Plant Physiology, 4th edn., Wadsworth, Belmont, 1992) and has been quantitatively supported by many experiments. Some generalizations based on more complicated "dose-response" curves have been proposed. Violations of this law in natural and experimental ecosystems have also been reported. We study models of adaptation in ensembles of similar organisms under the load of environmental factors and prove that violation of Liebig's law follows from adaptation effects. If the fitness of an organism in a fixed environment satisfies the Law of the Minimum, then adaptation equalizes the pressure of essential factors and, therefore, acts against Liebig's law. This is the Law of the Minimum paradox: if for a randomly chosen pair "organism-environment" the Law of the Minimum typically holds, then in a well-adapted system we have to expect violations of this law. For the opposite interaction of factors (a synergistic system of factors which amplify each other), adaptation leads from factor equivalence to limitation by a smaller number of factors. For the analysis of adaptation, we develop a system of models based on Selye's idea of a universal adaptation resource (adaptation energy). These models predict that under the load of an environmental factor a population separates into two groups (phases): a less correlated, well-adapted group and a highly correlated group with a larger variance of attributes, which experiences problems with adaptation. Some empirical data are presented, and evidence of interdisciplinary applications to econometrics is discussed. © Society for Mathematical Biology 2010

  7. MOnthly TEmperature DAtabase of Spain 1951-2010: MOTEDAS (2): The Correlation Decay Distance (CDD) and the spatial variability of maximum and minimum monthly temperature in Spain during (1981-2010).

    NASA Astrophysics Data System (ADS)

    Cortesi, Nicola; Peña-Angulo, Dhais; Simolo, Claudia; Stepanek, Peter; Brunetti, Michele; Gonzalez-Hidalgo, José Carlos

    2014-05-01

    One of the key points in the development of the MOTEDAS dataset (see Poster 1, MOTEDAS) within the HIDROCAES Project (Impactos Hidrológicos del Calentamiento Global en España, Spanish Ministry of Research CGL2011-27574-C02-01) is the set of reference series, for which no generalized metadata exist. In this poster we present an analysis of the spatial variability of monthly minimum and maximum temperatures in the conterminous land of Spain (Iberian Peninsula, IP), using the Correlation Decay Distance function (CDD), with the aim of evaluating, at sub-regional level, the optimal threshold distance between neighbouring stations for producing the set of reference series used in the quality control (see MOTEDAS Poster 1) and the reconstruction (see MOREDAS Poster 3). The CDD analysis for Tmax and Tmin was performed by calculating a monthly-scale correlation matrix for 1981-2010 among monthly mean values of maximum (Tmax) and minimum (Tmin) temperature series (with at least 90% of data), free of anomalous data and homogenized (see MOTEDAS Poster 1), obtained from the archives of AEMET (the Spanish National Meteorological Agency). Monthly anomalies (differences between the data and the 1981-2010 mean) were used to prevent the annual cycle from dominating the annual CDD estimation. For each station and time scale, the common variance r2 (the square of Pearson's correlation coefficient) was calculated between all neighbouring temperature series, and the relation between r2 and distance was modelled according to the following equation (1): Log(r2ij) = b*dij (1), where Log(r2ij) is the common variance between the target (i) and neighbouring (j) series, dij is the distance between them, and b is the slope of the ordinary least-squares linear regression model, fitted taking into account only the surrounding stations within a starting radius of 50 km and with a minimum of 5 stations required. Finally, monthly, seasonal and annual CDD values were interpolated using Ordinary Kriging with a spherical variogram over the conterminous land of Spain and converted to a regular 10 km2 grid (a resolution similar to the mean distance between stations) to map the results. In the conterminous land of Spain, the distance at which pairs of stations have a common variance in temperature (both maximum, Tmax, and minimum, Tmin) above the selected threshold (50%, Pearson r ~0.70) does not on average exceed 400 km, with relevant spatial and temporal differences. The spatial distribution of the CDD shows a clear coastland-to-inland gradient at annual, seasonal and monthly scales, with the highest spatial variability along coastland areas and lower variability inland. The highest spatial variability particularly coincides with coastland areas surrounded by mountain chains, which suggests that orography is one of the main factors driving interstation variability. Moreover, there are some differences between the behaviour of Tmax and Tmin: Tmin is spatially more homogeneous than Tmax, but its lower CDD values indicate that night-time temperature is more variable than the diurnal one. The results suggest that, in general, local factors affect the spatial variability of monthly Tmin more than that of Tmax, so a denser network would be necessary to capture the higher spatial variability highlighted for Tmin with respect to Tmax. A conservative distance for reference series can be estimated at 200 km, which we propose for the conterminous land of Spain and use in the development of MOTEDAS.
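Equation (1) is a regression through the origin, so the slope and the derived CDD threshold distance follow directly. A minimal sketch with synthetic station pairs (the 300 km decay scale is invented for illustration):

```python
import math

def fit_cdd(distances, r2_values, threshold=0.5):
    """Slope of log(r2) = b * d fitted through the origin, and the
    distance at which the common variance falls to `threshold`."""
    # Least-squares slope through the origin: b = sum(d * y) / sum(d^2)
    b = sum(d * math.log(r2) for d, r2 in zip(distances, r2_values)) / \
        sum(d * d for d in distances)
    return b, math.log(threshold) / b

# Station pairs whose common variance decays as r2 = exp(-d / 300)
dists = [50.0, 100.0, 150.0, 250.0, 400.0]
r2s = [math.exp(-d / 300.0) for d in dists]
b, cdd = fit_cdd(dists, r2s)  # cdd = 300 * ln 2, about 208 km
```

With a clean exponential decay the fit recovers b = -1/300 exactly, and the 50% common-variance distance is log(0.5)/b.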

  8. Minimum variance optimal rate allocation for multiplexed H.264/AVC bitstreams.

    PubMed

    Tagliasacchi, Marco; Valenzise, Giuseppe; Tubaro, Stefano

    2008-07-01

    Consider the problem of transmitting multiple video streams to fulfill a constant bandwidth constraint. The available bit budget needs to be distributed across the sequences in order to meet some optimality criterion. For example, one might want to minimize the average distortion or, alternatively, minimize the distortion variance, in order to keep almost constant quality among the encoded sequences. By working in the rho-domain, we propose a low-delay rate allocation scheme that, at each time instant, provides a closed-form solution for either of the aforementioned problems. We show that minimizing the distortion variance instead of the average distortion leads, for each of the multiplexed sequences, to a coding penalty of less than 0.5 dB in terms of average PSNR. In addition, our analysis provides an explicit relationship between the model parameters and this loss. In order to smooth the distortion along time as well, we accommodate a shared encoder buffer to compensate for rate fluctuations. Although the proposed scheme is general, and it can be adopted for any video and image coding standard, we provide experimental evidence by transcoding bitstreams encoded using the state-of-the-art H.264/AVC standard. The results of our simulations reveal that it is possible to achieve distortion smoothing both in time and across the sequences without sacrificing coding efficiency.

  9. Using geostatistical methods to estimate snow water equivalence distribution in a mountain watershed

    USGS Publications Warehouse

    Balk, B.; Elder, K.; Baron, Jill S.

    1998-01-01

    Knowledge of the spatial distribution of snow water equivalence (SWE) is necessary to adequately forecast the volume and timing of snowmelt runoff. In April 1997, peak accumulation snow depth and density measurements were independently taken in the Loch Vale watershed (6.6 km2), Rocky Mountain National Park, Colorado. Geostatistics and classical statistics were used to estimate the SWE distribution across the watershed. Snow depths were spatially distributed across the watershed through kriging interpolation methods, which provide unbiased estimates with minimum variance. Snow densities were spatially modeled through regression analysis. Combining the modeled depth and density with snow-covered area (SCA) produced an estimate of the spatial distribution of SWE. The kriged estimates of snow depth explained 37-68% of the observed variance in the measured depths. Steep slopes, variably strong winds, and a complex energy balance in the watershed contribute to a large degree of heterogeneity in snow depth.
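The kriging step can be illustrated with a minimal 1-D ordinary kriging solver (a sketch only: a hypothetical linear variogram and made-up depth samples, not the watershed data; the weights are constrained to sum to one, which is what makes the estimator unbiased):

```python
def solve(a, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(a)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def ordinary_kriging(xs, zs, x0, gamma):
    """Estimate at x0 from samples (xs, zs) given a variogram gamma(h).
    The last equation enforces the unbiasedness constraint sum(w) = 1."""
    n = len(xs)
    a = [[gamma(abs(xs[i] - xs[j])) for j in range(n)] + [1.0] for i in range(n)]
    a.append([1.0] * n + [0.0])
    b = [gamma(abs(xi - x0)) for xi in xs] + [1.0]
    sol = solve(a, b)  # first n entries are weights, last is the Lagrange multiplier
    w = sol[:n]
    return sum(wi * zi for wi, zi in zip(w, zs)), w

# Hypothetical linear variogram and snow-depth samples at positions 0, 1, 3
gamma = lambda h: 0.8 * h
est, w = ordinary_kriging([0.0, 1.0, 3.0], [1.2, 1.5, 2.1], 2.0, gamma)
```

Because there is no nugget term, the estimator interpolates the data exactly: querying at a sampled location returns that sample's value.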

  10. Unbiased estimation in seamless phase II/III trials with unequal treatment effect variances and hypothesis-driven selection rules.

    PubMed

    Robertson, David S; Prevost, A Toby; Bowden, Jack

    2016-09-30

    Seamless phase II/III clinical trials offer an efficient way to select an experimental treatment and perform confirmatory analysis within a single trial. However, combining the data from both stages in the final analysis can induce bias into the estimates of treatment effects. Methods for bias adjustment developed thus far have made restrictive assumptions about the design and selection rules followed. In order to address these shortcomings, we apply recent methodological advances to derive the uniformly minimum variance conditionally unbiased estimator for two-stage seamless phase II/III trials. Our framework allows for the precision of the treatment arm estimates to take arbitrary values, can be utilised for all treatments that are taken forward to phase III and is applicable when the decision to select or drop treatment arms is driven by a multiplicity-adjusted hypothesis testing procedure. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  11. Additive-Multiplicative Approximation of Genotype-Environment Interaction

    PubMed Central

    Gimelfarb, A.

    1994-01-01

    A model of genotype-environment interaction in quantitative traits is considered. The model represents an expansion of the traditional additive (first degree polynomial) approximation of genotypic and environmental effects to a second degree polynomial incorporating a multiplicative term besides the additive terms. An experimental evaluation of the model is suggested and applied to a trait in Drosophila melanogaster. The environmental variance of a genotype in the model is shown to be a function of the genotypic value: it is a convex parabola. The broad sense heritability in a population depends not only on the genotypic and environmental variances, but also on the position of the genotypic mean in the population relative to the minimum of the parabola. It is demonstrated, using the model, that G×E interaction may cause a substantial non-linearity in offspring-parent regression and a reversed response to directional selection. It is also shown that directional selection may be accompanied by an increase in the heritability. PMID:7896113

  12. Combinatorics of least-squares trees.

    PubMed

    Mihaescu, Radu; Pachter, Lior

    2008-09-09

    A recurring theme in the least-squares approach to phylogenetics has been the discovery of elegant combinatorial formulas for the least-squares estimates of edge lengths. These formulas have proved useful for the development of efficient algorithms, and have also been important for understanding connections among popular phylogeny algorithms. For example, the selection criterion of the neighbor-joining algorithm is now understood in terms of the combinatorial formulas of Pauplin for estimating tree length. We highlight a phylogenetically desirable property that weighted least-squares methods should satisfy, and provide a complete characterization of methods that satisfy the property. The necessary and sufficient condition is a multiplicative four-point condition that the variance matrix needs to satisfy. The proof is based on the observation that the Lagrange multipliers in the proof of the Gauss-Markov theorem are tree-additive. Our results generalize and complete previous work on ordinary least squares, balanced minimum evolution, and the taxon-weighted variance model. They also provide a time-optimal algorithm for computation.

  13. A Quantitative Microscopy Technique for Determining the Number of Specific Proteins in Cellular Compartments

    PubMed Central

    Mutch, Sarah A.; Gadd, Jennifer C.; Fujimoto, Bryant S.; Kensel-Hammes, Patricia; Schiro, Perry G.; Bajjalieh, Sandra M.; Chiu, Daniel T.

    2013-01-01

    This protocol describes a method to determine both the average number and variance of proteins in the few to tens of copies in isolated cellular compartments, such as organelles and protein complexes. Other currently available protein quantification techniques either provide an average number but lack information on the variance or are not suitable for reliably counting proteins present in the few to tens of copies. This protocol entails labeling the cellular compartment with fluorescent primary-secondary antibody complexes, TIRF (total internal reflection fluorescence) microscopy imaging of the cellular compartment, digital image analysis, and deconvolution of the fluorescence intensity data. A minimum of 2.5 days is required to complete the labeling, imaging, and analysis of a set of samples. As an illustrative example, we describe in detail the procedure used to determine the copy number of proteins in synaptic vesicles. The same procedure can be applied to other organelles or signaling complexes. PMID:22094731

  14. Estimating contaminant loads in rivers: An application of adjusted maximum likelihood to type 1 censored data

    USGS Publications Warehouse

    Cohn, Timothy A.

    2005-01-01

    This paper presents an adjusted maximum likelihood estimator (AMLE) that can be used to estimate fluvial transport of contaminants, like phosphorus, that are subject to censoring because of analytical detection limits. The AMLE is a generalization of the widely accepted minimum variance unbiased estimator (MVUE), and Monte Carlo experiments confirm that it shares essentially all of the MVUE's desirable properties, including high efficiency and negligible bias. In particular, the AMLE exhibits substantially less bias than alternative censored-data estimators such as the MLE (Tobit) or the MLE followed by a jackknife. As with the MLE and the MVUE, the AMLE comes close to achieving the theoretical Fréchet-Cramér-Rao bounds on its variance. This paper also presents a statistical framework, applicable to both censored and complete data, for understanding and estimating the components of uncertainty associated with load estimates. This can serve to lower the cost and improve the efficiency of both traditional and real-time water quality monitoring.

  15. Random regression models using Legendre orthogonal polynomials to evaluate the milk production of Alpine goats.

    PubMed

    Silva, F G; Torres, R A; Brito, L F; Euclydes, R F; Melo, A L P; Souza, N O; Ribeiro, J I; Rodrigues, M T

    2013-12-11

    The objective of this study was to identify the best random regression model using Legendre orthogonal polynomials to evaluate Alpine goats genetically and to estimate the parameters for test-day milk yield. We analyzed 20,710 test-day milk yield records of 667 goats from the Goat Sector of the Universidade Federal de Viçosa. The evaluated models combined distinct fitting orders for the fixed curve (2-5), the random genetic curve (1-7) and the permanent environmental curve (1-7), and different numbers of classes for the residual variance (2, 4, 5, and 6). WOMBAT software was used for all genetic analyses. The best Legendre-polynomial random regression model for genetic evaluation of test-day milk yield of Alpine goats considered a fixed curve of order 4, a curve of additive genetic effects of order 2, a curve of permanent environmental effects of order 7, and a minimum of 5 classes of residual variance, because it was the most economical model among those equivalent to the complete model by the likelihood ratio test. Phenotypic variance and heritability were higher at the end of the lactation period, indicating that the length of lactation has more genetic components relative to the production peak and persistence. It is very important that the evaluation utilizes the best combination of fixed, additive genetic and permanent environmental regressions and number of classes of heterogeneous residual variance for genetic evaluation using random regression models, thereby enhancing the precision and accuracy of the estimates of parameters and the prediction of genetic values.

  16. A training image evaluation and selection method based on minimum data event distance for multiple-point geostatistics

    NASA Astrophysics Data System (ADS)

    Feng, Wenjie; Wu, Shenghe; Yin, Yanshu; Zhang, Jiajia; Zhang, Ke

    2017-07-01

    A training image (TI) can be regarded as a database of spatial structures and their low to higher order statistics used in multiple-point geostatistics (MPS) simulation. Presently, there are a number of methods to construct a series of candidate TIs (CTIs) for MPS simulation based on a modeler's subjective criteria. The spatial structures of TIs are often various, meaning that the compatibilities of different CTIs with the conditioning data are different. Therefore, evaluation and optimal selection of CTIs before MPS simulation is essential. This paper proposes a CTI evaluation and optimal selection method based on minimum data event distance (MDevD). In the proposed method, a set of MDevD properties are established through calculation of the MDevD of conditioning data events in each CTI. Then, CTIs are evaluated and ranked according to the mean value and variance of the MDevD properties. The smaller the mean value and variance of an MDevD property are, the more compatible the corresponding CTI is with the conditioning data. In addition, data events with low compatibility in the conditioning data grid can be located to help modelers select a set of complementary CTIs for MPS simulation. The MDevD property can also help to narrow the range of the distance threshold for MPS simulation. The proposed method was evaluated using three examples: a 2D categorical example, a 2D continuous example, and an actual 3D oil reservoir case study. To illustrate the method, a C++ implementation of the method is attached to the paper.
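The core of the method, scanning a TI for the conditioning data event with the smallest mismatch fraction, can be sketched as follows (a toy 2-D categorical example with hypothetical grids; the real method aggregates these distances into mean/variance properties per CTI):

```python
def mdevd(ti, data_event):
    """Minimum data-event distance: slide the conditioning data event
    (dict of (di, dj) offset -> facies value) over the training image
    and return the smallest mismatch fraction found."""
    rows, cols = len(ti), len(ti[0])
    best = 1.0
    for i in range(rows):
        for j in range(cols):
            mismatches = []
            for (di, dj), v in data_event.items():
                ii, jj = i + di, j + dj
                if not (0 <= ii < rows and 0 <= jj < cols):
                    break  # event does not fit at this anchor
                mismatches.append(ti[ii][jj] != v)
            else:
                best = min(best, sum(mismatches) / len(mismatches))
    return best

# Two hypothetical categorical TIs and one conditioning data event
ti_a = [[0, 0, 1], [0, 1, 1], [1, 1, 1]]
ti_b = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
event = {(0, 0): 0, (0, 1): 0, (1, 1): 1}
```

Here ti_a contains the data event exactly (distance 0), while the checkerboard ti_b never matches better than one mismatch in three, so ti_a would rank as the more compatible CTI.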

  17. Cortical neuron activation induced by electromagnetic stimulation: a quantitative analysis via modelling and simulation.

    PubMed

    Wu, Tiecheng; Fan, Jie; Lee, Kim Seng; Li, Xiaoping

    2016-02-01

    Previous simulation work concerned with the mechanism of non-invasive neuromodulation has isolated many of the factors that can influence stimulation potency, but an inclusive account of the interplay between these factors on realistic neurons is still lacking. To give a comprehensive investigation of stimulation-evoked neuronal activation, we developed a simulation scheme which incorporates highly detailed physiological and morphological properties of pyramidal cells. The model was implemented on a multitude of neurons; their thresholds and corresponding activation points with respect to various field directions and pulse waveforms were recorded. The results showed that the simulated thresholds had a minor anisotropy and reached a minimum when the field direction was parallel to the dendritic-somatic axis; the layer 5 pyramidal cells always had lower thresholds, but substantial variances were also observed within layers; reducing the pulse length could magnify the threshold values as well as the variance; and tortuosity and arborization of axonal segments could obstruct action potential initiation. The dependence of the initiation sites on both the orientation and the duration of the stimulus implies that cellular excitability might represent the result of competition between various firing-capable axonal components, each with a unique susceptibility determined by the local geometry. Moreover, the measurements obtained in simulation closely resemble recordings in physiological and clinical studies, which suggests that, with minimum simplification of the neuron model, the cable theory-based simulation approach can have sufficient verisimilitude to give quantitatively accurate evaluations of cell activities in response to the externally applied field.

  18. Potential Seasonal Terrestrial Water Storage Monitoring from GPS Vertical Displacements: A Case Study in the Lower Three-Rivers Headwater Region, China.

    PubMed

    Zhang, Bao; Yao, Yibin; Fok, Hok Sum; Hu, Yufeng; Chen, Qiang

    2016-09-19

    This study uses the observed vertical displacements of Global Positioning System (GPS) time series obtained from the Crustal Movement Observation Network of China (CMONOC), with careful pre- and post-processing, to estimate the seasonal crustal deformation in response to the hydrological loading in the lower three-rivers headwater region of southwest China, followed by inferring the annual EWH changes through geodetic inversion methods. The Helmert Variance Component Estimation (HVCE) and the Minimum Mean Square Error (MMSE) criterion were successfully employed. The GPS-inferred EWH changes agree well qualitatively with the Gravity Recovery and Climate Experiment (GRACE)-inferred and the Global Land Data Assimilation System (GLDAS)-inferred EWH changes, with discrepancies of 3.2-3.9 cm and 4.8-5.2 cm, respectively. In the research areas, the EWH changes in the Lancang basin are larger than in the other regions, with a maximum of 21.8-24.7 cm and a minimum of 3.1-6.9 cm.

  19. Development of an Empirical Model for Optimization of Machining Parameters to Minimize Power Consumption

    NASA Astrophysics Data System (ADS)

    Kant Garg, Girish; Garg, Suman; Sangwan, K. S.

    2018-04-01

    The manufacturing sector has a huge energy demand, and the machine tools used in this sector have very low energy efficiency. Selection of the optimum machining parameters for machine tools is significant for energy saving and for the reduction of environmental emissions. In this work an empirical model is developed to minimize power consumption using response surface methodology. The experiments are performed on a lathe machine tool during the turning of AISI 6061 aluminum with coated tungsten inserts. The relationship between the power consumption and the machining parameters is adequately modeled. This model is used for the formulation of a minimum power consumption criterion as a function of the optimal machining parameters using the desirability function approach. The influence of the machining parameters on the energy consumption has been found using analysis of variance. The validity of the developed empirical model is proved using confirmation experiments. The results indicate that the developed model is effective and has the potential to be adopted by industry for minimum power consumption of machine tools.

  20. Blind Channel Equalization with Colored Source Based on Constrained Optimization Methods

    NASA Astrophysics Data System (ADS)

    Wang, Yunhua; DeBrunner, Linda; DeBrunner, Victor; Zhou, Dayong

    2008-12-01

    Tsatsanis and Xu have applied the constrained minimum output variance (CMOV) principle to directly blind equalize a linear channel—a technique that has proven effective with white inputs. It is generally assumed in the literature that their CMOV method can also effectively equalize a linear channel with a colored source. In this paper, we prove that colored inputs will cause the equalizer to incorrectly converge due to inadequate constraints. We also introduce a new blind channel equalizer algorithm that is based on the CMOV principle, but with a different constraint that will correctly handle colored sources. Our proposed algorithm works for channels with either white or colored inputs and performs equivalently to the trained minimum mean-square error (MMSE) equalizer under high SNR. Thus, our proposed algorithm may be regarded as an extension of the CMOV algorithm proposed by Tsatsanis and Xu. We also introduce several methods to improve the performance of our introduced algorithm in the low SNR condition. Simulation results show the superior performance of our proposed methods.

  1. Comparative efficacy of storage bags, storability and damage potential of bruchid beetle.

    PubMed

    Harish, G; Nataraja, M V; Ajay, B C; Holajjer, Prasanna; Savaliya, S D; Gedia, M V

    2014-12-01

    Groundnut during storage is attacked by a number of stored grain pests, and management of these insect pests, particularly the bruchid beetle, Caryedon serratus (Olivier), is of prime importance as they directly damage the pods and kernels. In this regard, the different storage bags that could be used and the duration for which groundnut can be stored were studied. The super grain bag recorded the minimum number of eggs laid, less damage, and minimum weight loss in pods and kernels in comparison to the other storage bags. Analysis of variance for the multiple regression models was significant in all bags for the variables, viz. number of eggs laid, damage to pods and kernels, and weight loss in pods and kernels, throughout the season. Multiple comparison results showed a high probability of egg laying and pod damage in the lino bag, fertilizer bag and gunny bag, whereas the super grain bag was more effective in managing C. serratus owing to its very low air circulation.

  2. A Robust Statistics Approach to Minimum Variance Portfolio Optimization

    NASA Astrophysics Data System (ADS)

    Yang, Liusha; Couillet, Romain; McKay, Matthew R.

    2015-12-01

    We study the design of portfolios under a minimum risk criterion. The performance of the optimized portfolio relies on the accuracy of the estimated covariance matrix of the portfolio asset returns. For large portfolios, the number of available market returns is often of similar order to the number of assets, so that the sample covariance matrix performs poorly as a covariance estimator. Additionally, financial market data often contain outliers which, if not correctly handled, may further corrupt the covariance estimation. We address these shortcomings by studying the performance of a hybrid covariance matrix estimator based on Tyler's robust M-estimator and on Ledoit-Wolf's shrinkage estimator while assuming samples with heavy-tailed distribution. Employing recent results from random matrix theory, we develop a consistent estimator of (a scaled version of) the realized portfolio risk, which is minimized by optimizing online the shrinkage intensity. Our portfolio optimization method is shown via simulations to outperform existing methods both for synthetic and real market data.
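    A minimal numpy sketch of the global minimum variance weights with shrinkage toward a scaled identity (a fixed Ledoit-Wolf-style intensity on synthetic data; the paper's hybrid Tyler/Ledoit-Wolf estimator and online intensity tuning are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy returns: n observations of p assets (hypothetical data).
n, p = 60, 10
returns = rng.standard_normal((n, p)) * 0.02

# Sample covariance, then shrink toward a scaled identity.
S = np.cov(returns, rowvar=False)
mu = np.trace(S) / p
delta = 0.3                      # shrinkage intensity (fixed here; the paper tunes it online)
Sigma = (1 - delta) * S + delta * mu * np.eye(p)

# Global minimum variance weights: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1).
ones = np.ones(p)
w = np.linalg.solve(Sigma, ones)
w /= ones @ w

assert abs(w.sum() - 1.0) < 1e-10   # weights sum to one by construction
```

By construction, w attains the lowest portfolio variance under Sigma among all fully invested portfolios, e.g. lower than the equal-weight portfolio.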

  3. Noise sensitivity of portfolio selection in constant conditional correlation GARCH models

    NASA Astrophysics Data System (ADS)

    Varga-Haszonits, I.; Kondor, I.

    2007-11-01

    This paper investigates the efficiency of minimum variance portfolio optimization for stock price movements following the Constant Conditional Correlation GARCH process proposed by Bollerslev. Simulations show that the quality of portfolio selection can be improved substantially by computing optimal portfolio weights from conditional covariances instead of unconditional ones. Measurement noise can be further reduced by applying some filtering method on the conditional correlation matrix (such as Random Matrix Theory based filtering). As an empirical support for the simulation results, the analysis is also carried out for a time series of S&P500 stock prices.

  4. A Sparse Matrix Approach for Simultaneous Quantification of Nystagmus and Saccade

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.; Stone, Lee; Boyle, Richard D.

    2012-01-01

    The vestibulo-ocular reflex (VOR) consists of two intermingled non-linear subsystems, namely nystagmus and saccade. Typically, nystagmus is analysed using a single sufficiently long signal or a concatenation of such signals. Saccade information is discarded and not analysed because the segments are too short to provide consistent, minimum-variance estimates. This paper presents a novel sparse matrix approach to system identification of the VOR. It allows for the simultaneous estimation of both nystagmus and saccade signals. We show via simulation of the VOR that our technique provides consistent and unbiased estimates in the presence of output additive noise.

  5. Statistical indicators of collective behavior and functional clusters in gene networks of yeast

    NASA Astrophysics Data System (ADS)

    Živković, J.; Tadić, B.; Wick, N.; Thurner, S.

    2006-03-01

    We analyze gene expression time-series data of yeast (S. cerevisiae) measured along two full cell-cycles. We quantify these data by using q-exponentials, gene expression ranking and a temporal mean-variance analysis. We construct gene interaction networks based on correlation coefficients and study the formation of the corresponding giant components and minimum spanning trees. By coloring genes according to their cell function we find functional clusters in the correlation networks and functional branches in the associated trees. Our results suggest that a percolation point of functional clusters can be identified on these gene expression correlation networks.

  6. Gravity anomalies, compensation mechanisms, and the geodynamics of western Ishtar Terra, Venus

    NASA Technical Reports Server (NTRS)

    Grimm, Robert E.; Phillips, Roger J.

    1991-01-01

    Pioneer Venus line-of-sight orbital accelerations were utilized to calculate the geoid and vertical gravity anomalies for western Ishtar Terra on various planes of altitude z₀. The apparent depth of isostatic compensation at z₀ = 1400 km is 180 ± 20 km based on the usual method of minimum variance in the isostatic anomaly. An attempt is made here to explain this observation, as well as the regional elevation, peripheral mountain belts, and inferred age of western Ishtar Terra, in terms of one of three broad geodynamic models.

  7. Minimal Model of Prey Localization through the Lateral-Line System

    NASA Astrophysics Data System (ADS)

    Franosch, Jan-Moritz P.; Sobotka, Marion C.; Elepfandt, Andreas; van Hemmen, J. Leo

    2003-10-01

    The clawed frog Xenopus is an aquatic predator that catches prey at night by detecting the water movements its prey causes. We present a general method, a “minimal model” based on a minimum-variance estimator, to explain prey detection through the frog's many lateral-line organs, even when several of them are defunct. We show how waveform reconstruction allows Xenopus' neuronal system to determine both the direction and the character of the prey, and even to distinguish two simultaneous wave sources. The results can be applied to many aquatic amphibians, fish, or reptiles such as crocodilians.

  8. Beamforming approaches for untethered, ultrasonic neural dust motes for cortical recording: a simulation study.

    PubMed

    Bertrand, Alexander; Seo, Dongjin; Maksimovic, Filip; Carmena, Jose M; Maharbiz, Michel M; Alon, Elad; Rabaey, Jan M

    2014-01-01

    In this paper, we examine the use of beamforming techniques to interrogate a multitude of neural implants in a distributed, ultrasound-based intra-cortical recording platform known as Neural Dust. We propose a general framework to analyze system design tradeoffs in the ultrasonic beamformer that extracts neural signals from modulated ultrasound waves that are backscattered by free-floating neural dust (ND) motes. Simulations indicate that high-resolution linearly-constrained minimum variance beamforming sufficiently suppresses interference from unselected ND motes and can be incorporated into the ND-based cortical recording system.
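    The linearly-constrained minimum variance beamformer referenced above minimizes w^H R w subject to C^H w = f, with closed form w = R^{-1}C (C^H R^{-1} C)^{-1} f. A minimal sketch with a hypothetical uniform linear array standing in for the ND interrogator (the directions, array size, and noise levels are illustrative assumptions, not the Neural Dust system parameters):

```python
import numpy as np

rng = np.random.default_rng(2)

def steering(theta_deg, m=8, d=0.5):
    """Uniform linear array steering vector, half-wavelength spacing."""
    k = np.arange(m)
    return np.exp(2j * np.pi * d * k * np.sin(np.deg2rad(theta_deg)))

m = 8
a_des = steering(0.0, m)        # desired source direction (assumed)
a_int = steering(30.0, m)       # interfering source direction (assumed)

# Snapshot covariance from simulated desired + interfering + noise signals.
n_snap = 2000
s = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
i = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
noise = 0.1 * (rng.standard_normal((m, n_snap)) + 1j * rng.standard_normal((m, n_snap)))
X = np.outer(a_des, s) + np.outer(a_int, i) + noise
R = X @ X.conj().T / n_snap

# LCMV: minimize w^H R w subject to C^H w = f.
C = np.column_stack([a_des, a_int])
f = np.array([1.0, 0.0])        # unit gain on the target, null on the interferer
RinvC = np.linalg.solve(R, C)
w = RinvC @ np.linalg.solve(C.conj().T @ RinvC, f)

assert abs(np.vdot(a_des, w) - 1.0) < 1e-6   # distortionless toward the target
assert abs(np.vdot(a_int, w)) < 1e-6         # null on the interferer
```

The constraint matrix C plays the role of the selected/unselected mote directions: gains in f other than {1, 0} would trade interference suppression for gain control.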

  9. Statistical evaluation of metal fill widths for emulated metal fill in parasitic extraction methodology

    NASA Astrophysics Data System (ADS)

    J-Me, Teh; Noh, Norlaili Mohd.; Aziz, Zalina Abdul

    2015-05-01

    In the chip industry today, the key goal of a chip development organization is to develop and market chips within a short time frame to gain a foothold on market share. This paper proposes a design flow around parasitic extraction to improve design cycle time. The proposed flow uses metal fill emulation, as opposed to the current flow, which performs metal fill insertion directly. Replacing metal fill structures with an emulation methodology in earlier iterations of the design flow is targeted to help reduce runtime in the fill insertion stage. A statistical design-of-experiments methodology using a randomized complete block design was applied to select an appropriate emulated metal fill width to improve emulation accuracy. The experiment was conducted on test cases of different sizes, ranging from 1000 gates to 21000 gates. The metal width was varied from 1 x to 6 x the minimum metal width. Two-way analysis of variance and Fisher's least significant difference test were used to analyze the interconnect net capacitance values of the different test cases. This paper presents the results of the statistical analysis for the 45 nm process technology. The recommended emulated metal fill width was found to be 4 x the minimum metal width.

  10. Claw length recommendations for dairy cow foot trimming

    PubMed Central

    Archer, S. C.; Newsome, R.; Dibble, H.; Sturrock, C. J.; Chagunda, M. G. G.; Mason, C. S.; Huxley, J. N.

    2015-01-01

    The aim was to describe variation in length of the dorsal hoof wall in contact with the dermis for cows on a single farm, and hence, derive minimum appropriate claw lengths for routine foot trimming. The hind feet of 68 Holstein-Friesian dairy cows were collected post mortem, and the internal structures were visualised using x-ray µCT. The internal distance from the proximal limit of the wall horn to the distal tip of the dermis was measured from cross-sectional sagittal images. A constant was added to allow for a minimum sole thickness of 5 mm and an average wall thickness of 8 mm. Data were evaluated using descriptive statistics and two-level linear regression models with claw nested within cow. Based on 219 claws, the recommended dorsal wall length from the proximal limit of hoof horn was up to 90 mm for 96 per cent of claws, and the median value was 83 mm. Dorsal wall length increased by 1 mm per year of age, yet 85 per cent of the null model variance remained unexplained. Overtrimming can have severe consequences; the authors propose that the minimum recommended claw length stated in training materials for all Holstein-Friesian cows should be increased to 90 mm. PMID:26220848

  11. Estimating multilevel logistic regression models when the number of clusters is low: a comparison of different statistical software procedures.

    PubMed

    Austin, Peter C

    2010-04-22

    Multilevel logistic regression models are increasingly being used to analyze clustered data in medical, public health, epidemiological, and educational research. Procedures for estimating the parameters of such models are available in many statistical software packages. There is currently little evidence on the minimum number of clusters necessary to reliably fit multilevel regression models. We conducted a Monte Carlo study to compare the performance of different statistical software procedures for estimating multilevel logistic regression models when the number of clusters was low. We examined procedures available in BUGS, HLM, R, SAS, and Stata. We found that there were qualitative differences in the performance of different software procedures for estimating multilevel logistic models when the number of clusters was low. Among the likelihood-based procedures, estimation methods based on adaptive Gauss-Hermite approximations to the likelihood (glmer in R and xtlogit in Stata) or adaptive Gaussian quadrature (Proc NLMIXED in SAS) tended to have superior performance for estimating variance components when the number of clusters was small, compared to software procedures based on penalized quasi-likelihood. However, only Bayesian estimation with BUGS allowed for accurate estimation of variance components when there were fewer than 10 clusters. For all statistical software procedures, estimation of variance components tended to be poor when there were only five subjects per cluster, regardless of the number of clusters.

  12. Production of IL-10 by CD4+ regulatory T cells during the resolution of infection promotes the maturation of memory CD8+ T cells

    PubMed Central

    Laidlaw, Brian J; Cui, Weiguo; Amezquita, Robert A; Gray, Simon M; Guan, Tianxia; Lu, Yisi; Kobayashi, Yasushi; Flavell, Richard A; Kleinstein, Steven H; Craft, Joe; Kaech, Susan M

    2016-01-01

    Memory CD8+ T cells are critical for host defense upon reexposure to intracellular pathogens. We found that interleukin 10 (IL-10) derived from CD4+ regulatory T cells (Treg cells) was necessary for the maturation of memory CD8+ T cells following acute infection with lymphocytic choriomeningitis virus (LCMV). Treg cell–derived IL-10 was most important during the resolution phase, calming inflammation and the activation state of dendritic cells. Adoptive transfer of IL-10-sufficient Treg cells during the resolution phase ‘restored’ the maturation of memory CD8+ T cells in IL-10-deficient mice. Our data indicate that Treg cell–derived IL-10 is needed to insulate CD8+ T cells from inflammatory signals, and reveal that the resolution phase of infection is a critical period that influences the quality and function of developing memory CD8+ T cells. PMID:26147684

  13. Blimp-1 represses CD8 T cell expression of PD-1 using a feed-forward transcriptional circuit during acute viral infection

    PubMed Central

    Lu, Peiyuan; Youngblood, Benjamin A.; Austin, James W.; Rasheed Mohammed, Ata Ur; Butler, Royce; Ahmed, Rafi

    2014-01-01

    Programmed cell death 1 (PD-1) is an inhibitory immune receptor that regulates T cell function, yet the molecular events that control its expression are largely unknown. We show here that B lymphocyte–induced maturation protein 1 (Blimp-1)–deficient CD8 T cells fail to repress PD-1 during the early stages of CD8 T cell differentiation after acute infection with lymphocytic choriomeningitis virus (LCMV) strain Armstrong. Blimp-1 represses PD-1 through a feed-forward repressive circuit by regulating PD-1 directly and by repressing NFATc1 expression, an activator of PD-1 expression. Blimp-1 binding induces a repressive chromatin structure at the PD-1 locus, leading to the eviction of NFATc1 from its site. These data place Blimp-1 at an important phase of the CD8 T cell effector response and provide a molecular mechanism for its repression of PD-1. PMID:24590765

  14. Nature of Fluctuations on Directional Discontinuities Inside a Solar Ejection: Wind and IMP 8 Observations

    NASA Technical Reports Server (NTRS)

    Vasquez, Bernard J.; Farrugia, Charles J.; Markovskii, Sergei A.; Hollweg, Joseph V.; Richardson, Ian G.; Ogilvie, Keith W.; Lepping, Ronald P.; Lin, Robert P.; Larson, Davin; White, Nicholas E. (Technical Monitor)

    2001-01-01

    A solar ejection passed the Wind spacecraft between December 23 and 26, 1996. On closer examination, we find a sequence of ejecta material, as identified by abnormally low proton temperatures, separated by plasmas with typical solar wind temperatures at 1 AU. Large and abrupt changes in field and plasma properties occurred near the separation boundaries of these regions. At the one boundary we examine here, a series of directional discontinuities was observed. We argue that Alfvenic fluctuations in the immediate vicinity of these discontinuities distort minimum variance normals, introducing uncertainty into the identification of the discontinuities as either rotational or tangential. Carrying out a series of tests on plasma and field data including minimum variance, velocity and magnetic field correlations, and jump conditions, we conclude that the discontinuities are tangential. Furthermore, we find waves superposed on these tangential discontinuities (TDs). The presence of discontinuities allows the existence of both surface waves and ducted body waves. Both probably form in the solar atmosphere where many transverse nonuniformities exist and where theoretically they have been expected. We add to prior speculation that waves on discontinuities may in fact be a common occurrence. In the solar wind, these waves can attain large amplitudes and low frequencies. We argue that such waves can generate dynamical changes at TDs through advection or forced reconnection. The dynamics might so extensively alter the internal structure that the discontinuity would no longer be identified as tangential. Such processes could help explain why the occurrence frequency of TDs observed throughout the solar wind falls off with increasing heliocentric distance. The presence of waves may also alter the nature of the interactions of TDs with the Earth's bow shock in so-called hot flow anomalies.
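    Minimum variance analysis of the kind used here to estimate discontinuity normals can be sketched in a few lines: eigen-decompose the covariance matrix of the magnetic field samples; the eigenvector of the smallest eigenvalue estimates the normal. The data below are synthetic; real events require the eigenvalue-ratio quality checks and the correlation/jump-condition tests the authors apply.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic magnetic field series: large variance in the x-y plane,
# small variance along z, so the MVA normal should come out near z.
n = 500
B = np.column_stack([
    2.0 * rng.standard_normal(n),    # maximum-variance component
    1.0 * rng.standard_normal(n),    # intermediate-variance component
    0.05 * rng.standard_normal(n),   # minimum-variance (normal) component
])

# Minimum variance analysis: eigen-decompose the field covariance matrix;
# the eigenvector of the smallest eigenvalue estimates the discontinuity normal.
M = np.cov(B, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(M)     # eigenvalues in ascending order
normal = eigvecs[:, 0]

# A common reliability check is the intermediate-to-minimum eigenvalue ratio;
# here it is large by construction, so the normal is well determined.
ratio = eigvals[1] / eigvals[0]
```

Superposed Alfvenic fluctuations add power along the true normal, shrinking this eigenvalue ratio; that is exactly the distortion of minimum variance normals the abstract describes.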

  15. Brillouin Frequency Shift of Fiber Distributed Sensors Extracted from Noisy Signals by Quadratic Fitting.

    PubMed

    Zheng, Hanrong; Fang, Zujie; Wang, Zhaoyong; Lu, Bin; Cao, Yulong; Ye, Qing; Qu, Ronghui; Cai, Haiwen

    2018-01-31

    It is a basic task in Brillouin distributed fiber sensors to extract the peak frequency of the scattering spectrum, since the peak frequency shift gives information on the fiber temperature and strain changes. Because of high-level noise, quadratic fitting is often used in the data processing. Formulas of the dependence of the minimum detectable Brillouin frequency shift (BFS) on the signal-to-noise ratio (SNR) and frequency step have been presented in publications, but in different expressions. A detailed deduction of new formulas of BFS variance and its average is given in this paper, showing especially their dependences on the data range used in fitting, including its length and its center respective to the real spectral peak. The theoretical analyses are experimentally verified. It is shown that the center of the data range has a direct impact on the accuracy of the extracted BFS. We propose and demonstrate an iterative fitting method to mitigate such effects and improve the accuracy of BFS measurement. The different expressions of BFS variances presented in previous papers are explained and discussed.
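    The quadratic-fit peak extraction, including one recentering iteration in the spirit of the proposed iterative method, can be sketched as follows (a toy Lorentzian spectrum; the true BFS, linewidth, noise level, and window size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic Lorentzian-like Brillouin gain spectrum with additive noise.
f = np.arange(10.80, 11.00, 0.002)       # GHz; frequency step 2 MHz (assumed)
f0, width = 10.90, 0.03                  # true BFS and linewidth (assumed)
gain = 1.0 / (1.0 + ((f - f0) / (width / 2)) ** 2)
noisy = gain + 0.02 * rng.standard_normal(f.size)

# Quadratic fit over a window around the measured maximum; the parabola's
# vertex -b/(2a) estimates the peak frequency (the BFS).
half = 10                                # fit-window half width in samples (assumed)
center = np.argmax(noisy)
sl = slice(max(center - half, 0), center + half + 1)
a, b, c = np.polyfit(f[sl], noisy[sl], 2)
bfs = -b / (2 * a)

# One iterative refinement: recenter the window on the fitted peak and refit,
# mitigating the bias from an off-center data range discussed in the paper.
center = int(np.argmin(np.abs(f - bfs)))
sl = slice(max(center - half, 0), center + half + 1)
a, b, c = np.polyfit(f[sl], noisy[sl], 2)
bfs = -b / (2 * a)
```

The paper's point is visible in this setup: shifting the window center relative to the true peak biases the vertex estimate, which the recentering step suppresses.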

  16. Sleep and nutritional deprivation and performance of house officers.

    PubMed

    Hawkins, M R; Vichick, D A; Silsby, H D; Kruzich, D J; Butler, R

    1985-07-01

    A study was conducted by the authors to compare cognitive functioning in acutely and chronically sleep-deprived house officers. A multivariate analysis of variance revealed significant deficits in primary mental tasks involving basic rote memory, language, and numeric skills as well as in tasks requiring high-order cognitive functioning and traditional intellective abilities. These deficits existed only for the acutely sleep-deprived group. The finding of deficits in individuals who reported five hours or less of sleep in a 24-hour period suggests that the minimum standard of four hours that has been considered by some to be adequate for satisfactory performance may be insufficient for more complex cognitive functioning.

  17. An empirical Bayes approach for the Poisson life distribution.

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1973-01-01

    A smooth empirical Bayes estimator is derived for the intensity parameter (hazard rate) in the Poisson distribution as used in life testing. The reliability function is also estimated either by using the empirical Bayes estimate of the parameter, or by obtaining the expectation of the reliability function. The behavior of the empirical Bayes procedure is studied through Monte Carlo simulation in which estimates of mean-squared errors of the empirical Bayes estimators are compared with those of conventional estimators such as minimum variance unbiased or maximum likelihood. Results indicate a significant reduction in mean-squared error of the empirical Bayes estimators over the conventional variety.
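    The flavor of the comparison can be sketched with a simplified parametric (Gamma-Poisson) empirical Bayes estimator standing in for the paper's smooth EB estimator: hyperparameters are moment-matched from the pooled counts, and the posterior-mean shrinkage is compared against the maximum likelihood estimate.

```python
import numpy as np

rng = np.random.default_rng(5)

# Many past test items, each with a true Poisson rate drawn from a Gamma prior.
true_shape, true_scale = 3.0, 0.5
lams = rng.gamma(true_shape, true_scale, size=200)
counts = rng.poisson(lams)               # one observed count per item

# Method-of-moments Gamma hyperparameters from the pooled counts:
# for the Gamma-Poisson mixture, E[X] = a*s and Var[X] = a*s*(1 + s).
m, v = counts.mean(), counts.var()
s_hat = max(v / m - 1.0, 1e-6)           # estimated scale
a_hat = m / s_hat                        # estimated shape

# Parametric EB (posterior mean) shrinks the raw MLE toward the prior mean.
lam_eb = (counts + a_hat) * (s_hat / (1.0 + s_hat))
lam_mle = counts.astype(float)

mse_eb = np.mean((lam_eb - lams) ** 2)
mse_mle = np.mean((lam_mle - lams) ** 2)
```

The shrinkage estimator trades a small bias for a large variance reduction, which is the mean-squared-error advantage over minimum variance unbiased and maximum likelihood estimators that the abstract reports.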

  18. What determines the direction of minimum variance of the magnetic field fluctuations in the solar wind?

    NASA Technical Reports Server (NTRS)

    Grappin, R.; Velli, M.

    1995-01-01

    The solar wind is not an isotropic medium; two symmetry axes are provided: first, the radial direction (because the mean wind is radial), and second, the spiral direction of the mean magnetic field, which depends on heliocentric distance. Observations show very different anisotropy directions depending on the frequency waveband; while the large-scale velocity fluctuations are essentially radial, the smaller-scale magnetic field fluctuations are mostly perpendicular to the mean field direction, which is not the expected linear (WKB) result. We attempt to explain how these properties are related with the help of numerical simulations.

  19. Reliability analysis of the objective structured clinical examination using generalizability theory.

    PubMed

    Trejo-Mejía, Juan Andrés; Sánchez-Mendiola, Melchor; Méndez-Ramírez, Ignacio; Martínez-González, Adrián

    2016-01-01

    The objective structured clinical examination (OSCE) is a widely used method for assessing clinical competence in health sciences education. Studies using this method have shown evidence of validity and reliability. There are no published studies of OSCE reliability measurement with generalizability theory (G-theory) in Latin America. The aims of this study were to assess the reliability of an OSCE in medical students using G-theory and explore its usefulness for quality improvement. An observational cross-sectional study was conducted at National Autonomous University of Mexico (UNAM) Faculty of Medicine in Mexico City. A total of 278 fifth-year medical students were assessed with an 18-station OSCE in a summative end-of-career final examination. There were four exam versions. G-theory with a crossover random effects design was used to identify the main sources of variance. Examiners, standardized patients, and cases were considered as a single facet of analysis. The exam was applied to 278 medical students. The OSCE had a generalizability coefficient of 0.93. The major components of variance were stations, students, and residual error. The sites and the versions of the tests had minimum variance. Our study achieved a G coefficient similar to that found in other reports, which is acceptable for summative tests. G-theory allows the estimation of the magnitude of multiple sources of error and helps decision makers to determine the number of stations, test versions, and examiners needed to obtain reliable measurements.

  1. Multiple regression analysis of anthropometric measurements influencing the cephalic index of male Japanese university students.

    PubMed

    Hossain, Md Golam; Saw, Aik; Alam, Rashidul; Ohtsuki, Fumio; Kamarul, Tunku

    2013-09-01

    Cephalic index (CI), the ratio of head breadth to head length, is widely used to categorise human populations. The aim of this study was to assess the impact of anthropometric measurements on the CI of male Japanese university students. This study included 1,215 male university students from Tokyo and Kyoto, selected using convenience sampling. Multiple regression analysis was used to determine the effect of anthropometric measurements on CI. The variance inflation factor (VIF) showed no evidence of a multicollinearity problem among the independent variables. The coefficients of the regression line demonstrated a significant positive relationship between CI and minimum frontal breadth (p < 0.01), bizygomatic breadth (p < 0.01) and head height (p < 0.05), and a negative relationship between CI and morphological facial height (p < 0.01) and head circumference (p < 0.01). Moreover, the coefficients and odds ratios of logistic regression analysis showed a greater likelihood for minimum frontal breadth (p < 0.01) and bizygomatic breadth (p < 0.01) to predict round-headedness, and for morphological facial height (p < 0.05) and head circumference (p < 0.01) to predict long-headedness. Stepwise regression analysis revealed bizygomatic breadth, head circumference, minimum frontal breadth, head height and morphological facial height to be the best craniofacial predictors of CI. The results suggest that most of the variables considered in this study appear to influence the CI of adult male Japanese students.
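    The VIF diagnostic used in this record has a simple definition: regress each predictor on the remaining ones and set VIF_j = 1/(1 - R_j^2). A sketch on hypothetical predictors (the collinear construction below is illustrative, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(6)

def vif(X):
    """Variance inflation factor for each column of the design matrix X:
    VIF_j = 1 / (1 - R_j^2), where R_j^2 is from regressing column j
    on the remaining columns (with an intercept)."""
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
        out[j] = 1.0 / (1.0 - r2)
    return out

# Hypothetical predictors: two nearly independent, one strongly collinear.
n = 300
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)
x3 = x1 + 0.1 * rng.standard_normal(n)   # strongly collinear with x1
X = np.column_stack([x1, x2, x3])

v = vif(X)   # a common rule of thumb flags VIF > 5 or 10 as problematic
```

Here the collinear pair produces large VIFs while the independent predictor stays near 1, which is what "no evidence of a multicollinearity problem" rules out in the study.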

  2. Scale-dependent correlation of seabirds with schooling fish in a coastal ecosystem

    USGS Publications Warehouse

    Schneider, David C.; Piatt, John F.

    1986-01-01

    The distribution of piscivorous seabirds relative to schooling fish was investigated by repeated censusing of 2 intersecting transects in the Avalon Channel, which carries the Labrador Current southward along the east coast of Newfoundland. Murres (primarily common murres Uria aalge), Atlantic puffins Fratercula arctica, and schooling fish (primarily capelin Mallotus villosus) were highly aggregated at spatial scales ranging from 0.25 to 15 km. Patchiness of murres, puffins and schooling fish was scale-dependent, as indicated by significantly higher variance-to-mean ratios at large measurement distances than at the minimum distance, 0.25 km. Patch scale of puffins ranged from 2.5 to 15 km, of murres from 3 to 8.75 km, and of schooling fish from 1.25 to 15 km. Patch scale of birds and schooling fish was similar in 6 out of 9 comparisons. Correlation between seabirds and schooling fish was significant at the minimum measurement distance in 6 out of 12 comparisons. Correlation was scale-dependent, as indicated by significantly higher coefficients at large measurement distances than at the minimum distance. Tracking scale, as indicated by the maximum significant correlation between birds and schooling fish, ranged from 2 to 6 km. Our analysis showed that extended aggregations of seabirds are associated with extended aggregations of schooling fish and that the correlation of these marine carnivores with their prey is scale-dependent.

  3. Solar Control of Earth's Ionosphere: Observations from Solar Cycle 23

    NASA Astrophysics Data System (ADS)

    Doe, R. A.; Thayer, J. P.; Solomon, S. C.

    2005-05-01

    A nine year database of sunlit E-region electron density altitude profiles (Ne(z)) measured by the Sondrestrom ISR has been partitioned over a 30-bin parameter space of averaged 10.7 cm solar radio flux (F10.7) and solar zenith angle (χ) to investigate long-term solar and thermospheric variability, and to validate contemporary EUV photoionization models. A two-stage filter, based on rejection of Ne(z) profiles with large Hall to Pedersen ratio, is used to minimize auroral contamination. Resultant filtered mean Ne(z) compares favorably with subauroral Ne measured for the same F10.7 and χ conditions at the Millstone Hill ISR. Mean Ne, as expected, increases with solar activity and decreases with large χ, and the variance around mean Ne is shown to be greatest at low F10.7 (solar minimum). ISR-derived mean Ne is compared with two EUV models: (1) a simple model without photoelectrons and based on the 5 -- 105 nm EUVAC model solar flux [Richards et al., 1994] and (2) the GLOW model [Solomon et al., 1988; Solomon and Abreu, 1989] suitably modified for inclusion of XUV spectral components and photoelectron flux. Across parameter space and for all altitudes, Model 2 provides a closer match to ISR mean Ne and suggests that the photoelectron and XUV enhancements are essential to replicate measured plasma densities below 150 km. Simulated Ne variance envelopes, given by perturbing the Model 2 neutral atmosphere input by the measured extremum in Ap, F10.7, and Te, are much narrower than ISR-derived geophysical variance envelopes. We thus conclude that long-term variability of the EUV spectra dominates over thermospheric variability and that EUV spectral variability is greatest at solar minimum. ISR -- model comparison also provides evidence for the emergence of an H (Lyman β) Ne feature at solar maximum. Richards, P. G., J. A. Fennelly, and D. G. Torr, EUVAC: A solar EUV flux model for aeronomic calculations, J. Geophys. Res., 99, 8981, 1994. Solomon, S. C., P. B. Hays, and V. J. Abreu, The auroral 6300 Å emission: Observations and Modeling, J. Geophys. Res., 93, 9867, 1988. Solomon, S. C. and V. J. Abreu, The 630 nm dayglow, J. Geophys. Res., 94, 6817, 1989.

  4. Optimal design of minimum mean-square error noise reduction algorithms using the simulated annealing technique.

    PubMed

    Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan

    2009-02-01

    The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search for the optimal parameters of the MMSE-TRA-NR algorithm. The objective function is based on a regression model, whereas the optimization process is carried out with the simulated annealing algorithm, which is well suited for problems with many local optima. Another NR algorithm proposed in the paper employs linear predictive coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of the subjective tests were processed using analysis of variance to assess statistical significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise differences between the NR algorithms.

  5. Electron Pitch-Angle Distribution in Pressure Balance Structures Measured by Ulysses/SWOOPS

    NASA Technical Reports Server (NTRS)

    Yamauchi, Yohei; Suess, Steven T.; Sakurai, Takashi; Six, N. Frank (Technical Monitor)

    2002-01-01

    Pressure balance structures (PBSs) are a common feature in the high-latitude solar wind near solar minimum. From previous studies, PBSs are believed to be remnants of coronal plumes. Yamauchi et al. [2002] investigated the magnetic structures of PBSs by applying a minimum variance analysis to Ulysses magnetometer data. They found that PBSs contain structures like current sheets or plasmoids, and suggested that PBSs are associated with network activity such as magnetic reconnection in the photosphere at the base of polar plumes. We have investigated energetic electron data from Ulysses/SWOOPS to see whether bi-directional electron flow exists, and we have found evidence supporting the earlier conclusions. We find that 45 out of 53 PBSs show local bi-directional or isotropic electron flux, or flux associated with current-sheet structure. Only five events show the pitch-angle distribution expected for Alfvenic fluctuations. We conclude that PBSs do contain magnetic structures such as current sheets or plasmoids that are expected as a result of network activity at the base of polar plumes.

  6. Potential Seasonal Terrestrial Water Storage Monitoring from GPS Vertical Displacements: A Case Study in the Lower Three-Rivers Headwater Region, China

    PubMed Central

    Zhang, Bao; Yao, Yibin; Fok, Hok Sum; Hu, Yufeng; Chen, Qiang

    2016-01-01

    This study uses the observed vertical displacements of Global Positioning System (GPS) time series obtained from the Crustal Movement Observation Network of China (CMONOC) with careful pre- and post-processing to estimate the seasonal crustal deformation in response to the hydrological loading in the lower three-rivers headwater region of southwest China, followed by inferring the annual EWH changes through geodetic inversion methods. The Helmert Variance Component Estimation (HVCE) and the Minimum Mean Square Error (MMSE) criterion were successfully employed. The GPS-inferred EWH changes agree well qualitatively with the Gravity Recovery and Climate Experiment (GRACE)-inferred and the Global Land Data Assimilation System (GLDAS)-inferred EWH changes, with discrepancies of 3.2–3.9 cm and 4.8–5.2 cm, respectively. In the research areas, the EWH changes in the Lancang basin are larger than in the other regions, with a maximum of 21.8–24.7 cm and a minimum of 3.1–6.9 cm. PMID:27657064

  7. Minimum of the order parameter fluctuations of seismicity before major earthquakes in Japan.

    PubMed

    Sarlis, Nicholas V; Skordas, Efthimios S; Varotsos, Panayiotis A; Nagao, Toshiyasu; Kamogawa, Masashi; Tanaka, Haruo; Uyeda, Seiya

    2013-08-20

    It has been shown that some dynamic features hidden in the time series of complex systems can be uncovered if we analyze them in a time domain called natural time χ. The order parameter of seismicity introduced in this time domain is the variance of χ weighted for normalized energy of each earthquake. Here, we analyze the Japan seismic catalog in natural time from January 1, 1984 to March 11, 2011, the day of the M9 Tohoku earthquake, by considering a sliding natural time window of fixed length comprising the number of events that would occur in a few months. We find that the fluctuations of the order parameter of seismicity exhibit distinct minima a few months before all of the shallow earthquakes of magnitude 7.6 or larger that occurred during this 27-y period in the Japanese area. Among the minima, the minimum before the M9 Tohoku earthquake was the deepest. It appears that there are two kinds of minima, namely precursory and nonprecursory, to large earthquakes.
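
    The order parameter described here, the variance of natural time χ weighted by the normalized energy of each event, is short to compute (a minimal sketch with synthetic equal-energy events, not the Japan catalog analysis):

```python
def natural_time_variance(energies):
    """Order parameter of seismicity in natural time: the variance of
    chi_k = k/N weighted by the normalized energy p_k of each event."""
    n = len(energies)
    total = sum(energies)
    p = [e / total for e in energies]
    chi = [(k + 1) / n for k in range(n)]
    mean = sum(pk * ck for pk, ck in zip(p, chi))
    return sum(pk * (ck - mean) ** 2 for pk, ck in zip(p, chi))

# Equal-energy events reduce to the variance of a uniform grid on (0, 1],
# which approaches 1/12 for large N.
k1 = natural_time_variance([1.0] * 100)
```

    In the sliding-window analysis this quantity is recomputed for each window of a fixed number of events, and its fluctuations over time are what exhibit the precursory minima.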

  8. Applications of GARCH models to energy commodities

    NASA Astrophysics Data System (ADS)

    Humphreys, H. Brett

    This thesis uses GARCH methods to examine different aspects of the energy markets. The first part of the thesis examines seasonality in the variance. This study modifies the standard univariate GARCH models to test for seasonal components in both the constant and the persistence in natural gas, heating oil and soybeans. These commodities exhibit seasonal price movements and, therefore, may exhibit seasonal variances. In addition, the heating oil model is tested for a structural change in variance during the Gulf War. The results indicate the presence of an annual seasonal component in the persistence for all commodities. Out-of-sample volatility forecasting for natural gas outperforms standard forecasts. The second part of this thesis uses a multivariate GARCH model to examine volatility spillovers within the crude oil forward curve and between the London and New York crude oil futures markets. Using these results the effect of spillovers on dynamic hedging is examined. In addition, this research examines cointegration within the oil markets using investable returns rather than fixed prices. The results indicate the presence of strong volatility spillovers between both markets, weak spillovers from the front of the forward curve to the rest of the curve, and cointegration between the long term oil price on the two markets. The spillover dynamic hedge models lead to a marginal benefit in terms of variance reduction, but a substantial decrease in the variability of the dynamic hedge; thereby decreasing the transactions costs associated with the hedge. The final portion of the thesis uses portfolio theory to demonstrate how the energy mix consumed in the United States could be chosen given a national goal to reduce the risks to the domestic macroeconomy of unanticipated energy price shocks. An efficient portfolio frontier of U.S. energy consumption is constructed using a covariance matrix estimated with GARCH models. The results indicate that while the electric utility industry is operating close to the minimum variance position, a shift towards coal consumption would reduce price volatility for overall U.S. energy consumption. With the inclusion of potential externality costs, the shift remains away from oil but towards natural gas instead of coal.
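
    The univariate GARCH(1,1) variance recursion at the core of these models can be sketched in Python (a minimal illustration with hypothetical parameter values; the thesis's seasonal specification would additionally let the coefficients vary with the calendar period):

```python
import math
import random

def garch11_variances(returns, omega, alpha, beta):
    """GARCH(1,1) conditional-variance recursion:
    sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]."""
    sigma2 = [omega / (1.0 - alpha - beta)]  # start at unconditional variance
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2

# Simulate returns from the model, then filter the variances back out.
rng = random.Random(1)
omega, alpha, beta = 0.1, 0.1, 0.8
rets, path, s2 = [], [], omega / (1.0 - alpha - beta)
for _ in range(500):
    path.append(s2)
    r = math.sqrt(s2) * rng.gauss(0.0, 1.0)
    rets.append(r)
    s2 = omega + alpha * r * r + beta * s2
sig2 = garch11_variances(rets, omega, alpha, beta)
```

    The filtered variances reproduce the simulated path exactly because the same recursion and parameters are used; in practice omega, alpha and beta would be estimated by maximum likelihood.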

  9. Unique relation between surface-limited evaporation and relative humidity profiles holds in both field data and climate model simulations

    NASA Astrophysics Data System (ADS)

    Salvucci, G.; Rigden, A. J.; Gentine, P.; Lintner, B. R.

    2013-12-01

    A new method was recently proposed for estimating evapotranspiration (ET) from weather station data without requiring measurements of surface limiting factors (e.g. soil moisture, leaf area, canopy conductance) [Salvucci and Gentine, 2013, PNAS, 110(16): 6287-6291]. Required measurements include diurnal air temperature, specific humidity, wind speed, net shortwave radiation, and either measured or estimated incoming longwave radiation and ground heat flux. The approach is built around the idea that the key, rate-limiting, parameter of typical ET models, the land-surface resistance to water vapor transport, can be estimated from an emergent relationship between the diurnal cycle of the relative humidity profile and ET. The emergent relation is that the vertical variance of the relative humidity profile is less than what would occur for increased or decreased evaporation rates, suggesting that land-atmosphere feedback processes minimize this variance. This relation was found to hold over a wide range of climate conditions (arid to humid) and limiting factors (soil moisture, leaf area, energy) at a set of Ameriflux field sites. While the field tests in Salvucci and Gentine (2013) supported the minimum variance hypothesis, the analysis did not reveal the mechanisms responsible for the behavior. Instead the paper suggested, heuristically, that the results were due to an equilibration of the relative humidity between the land surface and the surface layer of the boundary layer. Here we apply this method using surface meteorological fields simulated by a global climate model (GCM), and compare the predicted ET to that simulated by the climate model. Similar to the field tests, the GCM-simulated ET is in agreement with that predicted by minimizing the profile relative humidity variance. A reasonable interpretation of these results is that the feedbacks responsible for the minimization of the profile relative humidity variance in nature are represented in the climate model. The climate model components, in particular the land surface model and boundary layer representation, can thus be analyzed in controlled numerical experiments to discern the specific processes leading to the observed behavior. Results of this analysis will be presented.

  10. Solar Drivers of 11-yr and Long-Term Cosmic Ray Modulation

    NASA Technical Reports Server (NTRS)

    Cliver, E. W.; Richardson, I. G.; Ling, A. G.

    2011-01-01

    In the current paradigm for the modulation of galactic cosmic rays (GCRs), diffusion is taken to be the dominant process during solar maxima while drift dominates at minima. Observations during the recent solar minimum challenge the pre-eminence of drift at such times. In 2009, the ~2 GV GCR intensity measured by the Newark neutron monitor increased by ~5% relative to its maximum value two cycles earlier, even though the average tilt angle in 2009 was slightly larger than that in 1986 (~20 deg vs. ~14 deg), while solar wind B was significantly lower (~3.9 nT vs. ~5.4 nT). A decomposition of the solar wind into high-speed streams, slow solar wind, and coronal mass ejections (CMEs; including postshock flows) reveals that the Sun transmits its message of changing magnetic field (diffusion coefficient) to the heliosphere primarily through CMEs at solar maximum and high-speed streams at solar minimum. Long-term reconstructions of solar wind B are in general agreement for the ~1900-present interval and can be used to reliably estimate GCR intensity over this period. For earlier epochs, however, a recent Be-10-based reconstruction covering the past ~10^4 years shows nine abrupt and relatively short-lived drops of B to near 0 nT, with the first of these corresponding to the Spörer minimum. Such dips are at variance with the recent suggestion that B has a minimum or floor value of ~2.8 nT. A floor in solar wind B implies a ceiling in the GCR intensity (a permanent modulation of the local interstellar spectrum) at a given energy/rigidity. The 30-40% increase in the intensity of 2.5 GV electrons observed by Ulysses during the recent solar minimum raises an interesting paradox that will need to be resolved.

  11. T cell responses in experimental viral retinitis: mechanisms, peculiarities and implications for gene therapy with viral vectors.

    PubMed

    Zinkernagel, Martin S; McMenamin, Paul G; Forrester, John V; Degli-Esposti, Mariapia A

    2011-07-01

    T lymphocytes play a decisive role in the course and clinical outcome of viral retinal infection. This review focuses on aspects of the adaptive cellular immune response against viral pathogens in the retina. Two distinct models to study adaptive cell mediated immune responses in viral retinitis are presented: (i) experimental retinitis induced by murine cytomegalovirus (MCMV), where the immune system prevents necrotizing damage to the retina and (ii) retinitis induced by the non-cytopathic lymphocytic choriomeningitis virus (LCMV), where the retinal microanatomy is compromised not by the virus, but by the immune response itself. From these studies it is clear that, in the context of viral infections, the cytotoxic T cell response against a pathogen in the retina does not differ from that seen in other organs, and that once such a response has been initiated, clearing of virus from retinal tissue has priority over preservation of retinal architecture and function. Furthermore, implications drawn from these models for gene therapy in retinal diseases are discussed. Copyright © 2011. Published by Elsevier Ltd.

  12. Interleukin-10 from CD4+ follicular regulatory T cells promotes the germinal center response.

    PubMed

    Laidlaw, Brian J; Lu, Yisi; Amezquita, Robert A; Weinstein, Jason S; Vander Heiden, Jason A; Gupta, Namita T; Kleinstein, Steven H; Kaech, Susan M; Craft, Joe

    2017-10-20

    CD4+ follicular regulatory T (Tfr) cells suppress B cell responses through modulation of follicular helper T (Tfh) cells and germinal center (GC) development. We found that Tfr cells can also promote the GC response through provision of interleukin-10 (IL-10) after acute infection with lymphocytic choriomeningitis virus (LCMV). Sensing of IL-10 by B cells was necessary for optimal development of the GC response. GC B cells formed in the absence of Treg cell-derived IL-10 displayed an altered dark zone state and decreased expression of the transcription factor Forkhead box protein 1 (FOXO1). IL-10 promoted nuclear translocation of FOXO1 in activated B cells. These data indicate that Tfr cells play a multifaceted role in the fine-tuning of the GC response and identify IL-10 as an important mediator by which Tfr cells support the GC reaction. Copyright © 2017 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  13. Pancreatic Tissue Transplanted in TheraCyte Encapsulation Devices Is Protected and Prevents Hyperglycemia in a Mouse Model of Immune-Mediated Diabetes.

    PubMed

    Boettler, Tobias; Schneider, Darius; Cheng, Yang; Kadoya, Kuniko; Brandon, Eugene P; Martinson, Laura; von Herrath, Matthias

    2016-01-01

    Type 1 diabetes (T1D) is characterized by destruction of glucose-responsive insulin-producing pancreatic β-cells and exhibits immune infiltration of pancreatic islets, where CD8 lymphocytes are most prominent. Curative transplantation of pancreatic islets is seriously hampered by the persistence of autoreactive immune cells that require high doses of immunosuppressive drugs. An elegant approach to confer graft protection while obviating the need for immunosuppression is the use of encapsulation devices that allow for the transfer of oxygen and nutrients, yet prevent immune cells from making direct contact with the islet grafts. Here we demonstrate that macroencapsulation devices (TheraCyte) loaded with neonatal pancreatic tissue and transplanted into RIP-LCMV.GP mice prevented disease onset in a model of virus-induced diabetes mellitus. Histological analyses revealed that insulin-producing cells survived within the device in animal models of diabetes. Our results demonstrate that these encapsulation devices can protect from an immune-mediated attack and can contain a sufficient amount of insulin-producing cells to prevent overt hyperglycemia.

  14. Persistence of viral infection despite similar killing efficacy of antiviral CD8(+) T cells during acute and chronic phases of infection.

    PubMed

    Ganusov, Vitaly V; Lukacher, Aron E; Byers, Anthony M

    2010-09-15

    Why some viruses establish chronic infections while others do not is poorly understood. One possibility is that the host's immune response is impaired during chronic infections and is unable to clear the virus from the host. In this report, we use a recently proposed framework to estimate the per capita killing efficacy of CD8(+) T cells, specific for the polyoma virus (PyV), which establishes a chronic infection in mice. Surprisingly, the estimated per cell killing efficacy of PyV-specific effector CD8(+) T cells during the acute phase of the infection was very similar to the efficacy of effector CD8(+) T cells specific to lymphocytic choriomeningitis virus (LCMV-Armstrong), which is cleared from the host. Our results suggest that persistence of PyV does not result from the generation of an inefficient PyV-specific CD8(+) T cell response, and that other host or viral factors are responsible for the ability of PyV to establish chronic infection. Copyright 2010 Elsevier Inc. All rights reserved.

  15. Bioenergetic Insufficiencies Due to Metabolic Alterations Regulated by the Inhibitory Receptor PD-1 Are an Early Driver of CD8(+) T Cell Exhaustion.

    PubMed

    Bengsch, Bertram; Johnson, Andy L; Kurachi, Makoto; Odorizzi, Pamela M; Pauken, Kristen E; Attanasio, John; Stelekati, Erietta; McLane, Laura M; Paley, Michael A; Delgoffe, Greg M; Wherry, E John

    2016-08-16

    Dynamic reprogramming of metabolism is essential for T cell effector function and memory formation. However, the regulation of metabolism in exhausted CD8(+) T (Tex) cells is poorly understood. We found that during the first week of chronic lymphocytic choriomeningitis virus (LCMV) infection, before severe dysfunction develops, virus-specific CD8(+) T cells were already unable to match the bioenergetics of effector T cells generated during acute infection. Suppression of T cell bioenergetics involved restricted glucose uptake and use, despite persisting mechanistic target of rapamycin (mTOR) signaling and upregulation of many anabolic pathways. PD-1 regulated early glycolytic and mitochondrial alterations and repressed transcriptional coactivator PGC-1α. Improving bioenergetics by overexpression of PGC-1α enhanced function in developing Tex cells. Therapeutic reinvigoration by anti-PD-L1 reprogrammed metabolism in a subset of Tex cells. These data highlight a key metabolic control event early in exhaustion and suggest that manipulating glycolytic and mitochondrial metabolism might enhance checkpoint blockade outcomes. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Modeling the Lymphocytic Choriomeningitis Virus: Insights into understanding its epidemiology in the wild

    NASA Astrophysics Data System (ADS)

    Contreras, Christy; McKay, John; Blattman, Joseph; Holechek, Susan

    2015-03-01

    The lymphocytic choriomeningitis virus (LCMV) is a rodent-spread virus commonly recognized as causing neurological disease, although infection is frequently asymptomatic. The virus is a pathogen normally carried among rodents that can be transmitted to humans by direct or indirect contact with the virus in excretions and secretions from rodents, and can cause aseptic meningitis and other conditions in humans. We consider an epidemiological system within rodent populations modeled by a system of ordinary differential equations that captures the dynamics of the disease's transmission and present our findings. The frequently asymptomatic nature of the pathogen plays a large role in its spread within a given population, which has motivated us to expand upon an existing SIRC model (Holechek et al., in preparation) that accounts for susceptible, infected, recovered, and carrier mice on the basis of their gender. We are interested in observing and determining the conditions under which the carrier population will reach a disease-free equilibrium, and we focus our investigation on the sensitivity of our model to gender, pregnancy-related infection, and reproduction rate conditions.

  17. Cryogenic sapphire oscillator using a low-vibration design pulse-tube cryocooler: first results.

    PubMed

    Hartnett, John; Nand, Nitin; Wang, Chao; Floch, Jean-Michel

    2010-05-01

    A cryogenic sapphire oscillator (CSO) has been implemented at 11.2 GHz using a low-vibration design pulse-tube cryocooler. Compared with a state-of-the-art liquid-helium-cooled CSO in the same laboratory, the square root Allan variance of their combined fractional frequency instability is sigma_y = 1.4 x 10^(-15) tau^(-1/2) for integration times 1 < tau < 10 s, dominated by white frequency noise. The minimum sigma_y = 5.3 x 10^(-16) for the two oscillators was reached at tau = 20 s. Assuming equal contributions from both CSOs, the single-oscillator phase noise is S_phi ~ -96 dB rad^2/Hz at 1 Hz offset from the carrier.
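
    The square root Allan variance quoted above can be computed from fractional-frequency data with a short routine (a non-overlapped sketch at a single averaging time, on toy data, not the instrument's analysis code):

```python
import math

def allan_deviation(y):
    """Non-overlapped Allan deviation of adjacent fractional-frequency
    averages y[k]: sigma_y = sqrt(0.5 * mean((y[k+1] - y[k])**2))."""
    diffs = [(b - a) ** 2 for a, b in zip(y, y[1:])]
    return math.sqrt(0.5 * sum(diffs) / len(diffs))

sigma_const = allan_deviation([5.0] * 10)     # noiseless series: no instability
sigma_alt = allan_deviation([1.0, -1.0] * 5)  # maximally alternating series
```

    For white frequency noise the statistic falls off as tau^(-1/2) when the data are re-averaged over longer intervals, which is the slope reported in the abstract.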

  18. Analysis of portfolio optimization with lot of stocks amount constraint: case study index LQ45

    NASA Astrophysics Data System (ADS)

    Chin, Liem; Chendra, Erwinna; Sukmana, Agus

    2018-01-01

    To form an optimum portfolio (in the sense of minimizing risk and/or maximizing return), the commonly used model is the mean-variance model of Markowitz. However, that model imposes no constraint on the number of lots of stocks, and retail investors in Indonesia cannot short sell. In this study we therefore develop the existing model by adding lot-of-stocks and short-selling constraints to obtain the minimum-risk portfolio with and without a target return. We analyse the stocks listed in the LQ45 index based on stock market capitalization. To perform this analysis, we use the Solver add-in available in Microsoft Excel.
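
    With integer lot counts and no short selling, the feasible set is discrete, so a small instance can be solved by exhaustive search. A Python sketch with hypothetical two-stock data (illustrative prices and covariances, not LQ45 figures):

```python
from itertools import product

def min_variance_lots(prices, cov, lot_size, budget, max_lots):
    """Exhaustively search integer lot counts (no short selling) that fit
    the budget and minimize the variance of the invested weights."""
    best = None
    for lots in product(range(max_lots + 1), repeat=len(prices)):
        cost = sum(n * lot_size * p for n, p in zip(lots, prices))
        if cost == 0 or cost > budget:
            continue
        w = [n * lot_size * p / cost for n, p in zip(lots, prices)]
        var = sum(w[i] * w[j] * cov[i][j]
                  for i in range(len(w)) for j in range(len(w)))
        if best is None or var < best[0]:
            best = (var, lots)
    return best

# Hypothetical two-stock example (prices per share, 100-share lots):
prices = [10.0, 20.0]
cov = [[0.04, 0.0], [0.0, 0.01]]
var, lots = min_variance_lots(prices, cov, lot_size=100, budget=10000, max_lots=10)
```

    Here the discrete optimum (one lot of the first stock, two of the second) happens to reproduce the unconstrained inverse-variance weights of 0.2 and 0.8; in general the lot constraint forces the solution away from the continuous Markowitz frontier, which is why a search or integer solver is needed.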

  19. Robust human machine interface based on head movements applied to assistive robotics.

    PubMed

    Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano

    2013-01-01

    This paper presents an interface that uses two different sensing techniques and combines both results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. The sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. A control algorithm for an assistive technology system is also presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed in order to objectively evaluate the performance. The results obtained are promising, since most users could perform the proposed tasks with the robotic wheelchair.

  20. Robust Human Machine Interface Based on Head Movements Applied to Assistive Robotics

    PubMed Central

    Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano

    2013-01-01

    This paper presents an interface that uses two different sensing techniques and combines both results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. The sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. A control algorithm for an assistive technology system is also presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed in order to objectively evaluate the performance. The results obtained are promising, since most users could perform the proposed tasks with the robotic wheelchair. PMID:24453877
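
    For two independent, unbiased estimates of the same quantity, the minimum-variance fusion reduces (in the scalar case) to inverse-variance weighting. A minimal sketch with hypothetical orientation readings, not the paper's fusion algorithm:

```python
def fuse(est1, var1, est2, var2):
    """Minimum-variance (inverse-variance weighted) fusion of two unbiased,
    independent scalar estimates, e.g. head orientation from an inertial
    sensor and from vision."""
    w1 = var2 / (var1 + var2)
    w2 = var1 / (var1 + var2)
    fused = w1 * est1 + w2 * est2
    fused_var = var1 * var2 / (var1 + var2)
    return fused, fused_var

# Hypothetical readings in degrees: inertial (var 4.0) vs. vision (var 1.0).
angle, var = fuse(10.0, 4.0, 14.0, 1.0)
```

    The fused variance is always smaller than either input variance, which is the point of combining the two sensing modalities.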

  1. Quaternion-valued single-phase model for three-phase power system

    NASA Astrophysics Data System (ADS)

    Gou, Xiaoming; Liu, Zhiwen; Liu, Wei; Xu, Yougen; Wang, Jiabin

    2018-03-01

    In this work, a quaternion-valued model is proposed in lieu of Clarke's αβ transformation to convert three-phase quantities to a hypercomplex single-phase signal. The concatenated signal can be used for harmonic distortion detection in three-phase power systems. In particular, the proposed model maps all the harmonic frequencies into frequencies in the quaternion domain, whereas Clarke's transformation-based methods fail to detect the zero-sequence voltages. Based on the quaternion-valued model, the Fourier transform, the minimum variance distortionless response (MVDR) algorithm and the multiple signal classification (MUSIC) algorithm are presented as examples to detect harmonic distortion. Simulations are provided to demonstrate the potential of this new modeling method.
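
    The MVDR idea invoked here can be illustrated in the ordinary complex domain (a minimal two-tap sketch with a closed-form 2x2 inverse and diagonal loading; the paper's quaternion-valued formulation is not reproduced): the pseudospectrum P(f) = 1 / (a(f)^H R^{-1} a(f)) peaks where the steering vector matches the data covariance.

```python
import cmath

def mvdr_spectrum(x, freqs):
    """MVDR pseudospectrum for a two-tap 'array' of successive samples,
    with steering vector a(f) = [1, exp(j*2*pi*f)]."""
    pairs = list(zip(x, x[1:]))
    n = len(pairs)
    # Sample covariance R (2x2, Hermitian) from sliding pairs.
    r00 = sum(abs(a) ** 2 for a, _ in pairs) / n
    r11 = sum(abs(b) ** 2 for _, b in pairs) / n
    r01 = sum(a * b.conjugate() for a, b in pairs) / n
    eps = 1e-6                      # diagonal loading keeps R invertible
    r00 += eps
    r11 += eps
    det = r00 * r11 - (r01 * r01.conjugate()).real
    spectrum = []
    for f in freqs:
        a1 = cmath.exp(2j * cmath.pi * f)
        # a^H R^{-1} a simplifies to a real scalar for |a1| = 1.
        q = (r00 + r11 - 2.0 * (r01 * a1).real) / det
        spectrum.append(1.0 / q)
    return spectrum

# Single complex exponential at normalized frequency 0.2 (synthetic data):
sig = [cmath.exp(2j * cmath.pi * 0.2 * n) for n in range(64)]
freqs = [i / 40 for i in range(1, 20)]
spec = mvdr_spectrum(sig, freqs)
peak = freqs[spec.index(max(spec))]
```

    The sharp peak at the signal frequency, with everything else strongly suppressed, is what makes MVDR useful for picking out individual harmonics.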

  2. Object aggregation using Neyman-Pearson analysis

    NASA Astrophysics Data System (ADS)

    Bai, Li; Hinman, Michael L.

    2003-04-01

    This paper presents a novel approach to: 1) distinguish military vehicle groups, and 2) identify names of military vehicle convoys in the level-2 fusion process. The data are generated from a generic Ground Moving Target Indication (GMTI) simulator that utilizes Matlab and Microsoft Access. These data are processed to identify the convoys and the number of vehicles in each convoy, using the minimum timed distance variance (MTDV) measurement. Once the vehicle groups are formed, convoy association is done using hypothesis-testing techniques based upon the Neyman-Pearson (NP) criterion. One characteristic of NP is its low error probability when a priori information is unknown. The NP approach was demonstrated with this advantage over a Bayesian technique.

  3. Correction of gene expression data: Performance-dependency on inter-replicate and inter-treatment biases.

    PubMed

    Darbani, Behrooz; Stewart, C Neal; Noeparvar, Shahin; Borg, Søren

    2014-10-20

    This report investigates, for the first time, cell number as a potential source of inter-treatment bias in gene expression studies. Cell-number bias can affect gene expression analysis when comparing samples with unequal total cellular RNA content or with different RNA extraction efficiencies. For maximal reliability of analysis, therefore, comparisons should be performed at the cellular level. This could be accomplished using an appropriate correction method that can detect and remove the inter-treatment bias in cell number. Based on inter-treatment variations of reference genes, we introduce an analytical approach to examine the suitability of correction methods by considering the inter-treatment bias as well as the inter-replicate variance, which allows use of the best correction method with minimum residual bias. Analyses of RNA sequencing and microarray data showed that the efficiencies of correction methods are influenced by the inter-treatment bias as well as the inter-replicate variance. Therefore, we recommend inspecting both bias sources in order to apply the most efficient correction method. As an alternative correction strategy, sequential application of different correction approaches is also advised. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Refining a case-mix measure for nursing homes: Resource Utilization Groups (RUG-III).

    PubMed

    Fries, B E; Schneider, D P; Foley, W J; Gavazzi, M; Burke, R; Cornelius, E

    1994-07-01

    A case-mix classification system for nursing home residents is developed, based on a sample of 7,658 residents in seven states. Data included a broad assessment of resident characteristics, corresponding to items of the Minimum Data Set, and detailed measurement of nursing staff care time over a 24-hour period and therapy staff time over a 1-week period. The Resource Utilization Groups, Version III (RUG-III) system, with 44 distinct groups, achieves 55.5% variance explanation of total (nursing and therapy) per diem cost and meets goals of clinical validity and payment incentives. The mean resource use (case-mix index) of groups spans a nine-fold range. The RUG-III system improves on an earlier version not only by increasing the variance explanation (from 43%), but, more importantly, by identifying residents with "high tech" procedures (e.g., ventilators, respirators, and parenteral feeding) and those with cognitive impairments; by using better multiple activities of daily living; and by providing explicit qualifications for the Medicare nursing home benefit. RUG-III is being implemented for nursing home payment in 11 states (six as part of a federal multistate demonstration) and can be used in management, staffing level determination, and quality assurance.

  5. TARGETED SEQUENTIAL DESIGN FOR TARGETED LEARNING INFERENCE OF THE OPTIMAL TREATMENT RULE AND ITS MEAN REWARD.

    PubMed

    Chambaz, Antoine; Zheng, Wenjing; van der Laan, Mark J

    2017-01-01

    This article studies the targeted sequential inference of an optimal treatment rule (TR) and its mean reward in the non-exceptional case, i.e., assuming that there is no stratum of the baseline covariates where treatment is neither beneficial nor harmful, and under a companion margin assumption. Our pivotal estimator, whose definition hinges on the targeted minimum loss estimation (TMLE) principle, actually infers the mean reward under the current estimate of the optimal TR. This data-adaptive statistical parameter is worthy of interest on its own. Our main result is a central limit theorem which enables the construction of confidence intervals on both mean rewards under the current estimate of the optimal TR and under the optimal TR itself. The asymptotic variance of the estimator takes the form of the variance of an efficient influence curve at a limiting distribution, allowing one to discuss the efficiency of inference. As a by-product, we also derive confidence intervals on two cumulated pseudo-regrets, a key notion in the study of bandit problems. A simulation study illustrates the procedure. One of the cornerstones of the theoretical study is a new maximal inequality for martingales with respect to the uniform entropy integral.

  6. The Principle of Energetic Consistency

    NASA Technical Reports Server (NTRS)

    Cohn, Stephen E.

    2009-01-01

    A basic result in estimation theory is that the minimum variance estimate of the dynamical state, given the observations, is the conditional mean estimate. This result holds independently of the specifics of any dynamical or observation nonlinearity or stochasticity, requiring only that the probability density function of the state, conditioned on the observations, has two moments. For nonlinear dynamics that conserve a total energy, this general result implies the principle of energetic consistency: if the dynamical variables are taken to be the natural energy variables, then the sum of the total energy of the conditional mean and the trace of the conditional covariance matrix (the total variance) is constant between observations. Ensemble Kalman filtering methods are designed to approximate the evolution of the conditional mean and covariance matrix. For them the principle of energetic consistency holds independently of ensemble size, even with covariance localization. However, full Kalman filter experiments with advection dynamics have shown that a small amount of numerical dissipation can cause a large, state-dependent loss of total variance, to the detriment of filter performance. The principle of energetic consistency offers a simple way to test whether this spurious loss of variance limits ensemble filter performance in full-blown applications. The classical second-moment closure (third-moment discard) equations also satisfy the principle of energetic consistency, independently of the rank of the conditional covariance matrix. Low-rank approximation of these equations offers an energetically consistent, computationally viable alternative to ensemble filtering. Current formulations of long-window, weak-constraint, four-dimensional variational methods are designed to approximate the conditional mode rather than the conditional mean. Thus they neglect the nonlinear bias term in the second-moment closure equation for the conditional mean. The principle of energetic consistency implies that, to precisely the extent that growing modes are important in data assimilation, this term is also important.
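
    The moment identity underlying the principle, that the mean total energy equals the energy of the mean plus the total variance, can be checked numerically (a sketch over an arbitrary small ensemble, not the filter formulation itself):

```python
def energy_decomposition(samples):
    """For an ensemble of state vectors, the mean total energy splits into
    the energy of the ensemble mean plus the total variance (trace of the
    ensemble covariance): <|x|^2> = |<x>|^2 + tr(Cov)."""
    n = len(samples)
    d = len(samples[0])
    mean = [sum(s[i] for s in samples) / n for i in range(d)]
    mean_energy = sum(sum(x * x for x in s) for s in samples) / n
    mean_sq = sum(m * m for m in mean)
    total_var = sum(
        sum((s[i] - mean[i]) ** 2 for s in samples) / n for i in range(d)
    )
    return mean_energy, mean_sq + total_var

# Toy 2-component ensemble (hypothetical states in energy variables):
samples = [[1.0, 2.0], [3.0, -1.0], [0.0, 0.5], [2.0, 2.5]]
lhs, rhs = energy_decomposition(samples)
```

    Because the identity is exact for sample moments, any gap between the two sides in a real ensemble filter run would indicate a spurious source or sink of total variance of the kind the abstract describes.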

  7. Fine-scale variability of isopycnal salinity in the California Current System

    NASA Astrophysics Data System (ADS)

    Itoh, Sachihiko; Rudnick, Daniel L.

    2017-09-01

    This paper examines the fine-scale structure and seasonal fluctuations of the isopycnal salinity of the California Current System from 2007 to 2013 using temperature and salinity profiles obtained from a series of underwater glider surveys. The seasonal mean distributions of the spectral power of the isopycnal salinity gradient averaged over submesoscale (12-30 km) and mesoscale (30-60 km) ranges along three survey lines off Monterey Bay, Point Conception, and Dana Point were obtained from 298 transects. The mesoscale and submesoscale variance increased as coastal upwelling caused the isopycnal salinity gradient to steepen. Areas of elevated variance were clearly observed around the salinity front during the summer then spread offshore through the fall and winter. The high fine-scale variances were typically observed above 25.8 kg m^-3 and decreased with depth to a minimum at around 26.3 kg m^-3. The mean spectral slope of the isopycnal salinity gradient with respect to wavenumber was 0.19 ± 0.27 over the horizontal scale of 12-60 km, and 31%-35% of the spectra had significantly positive slopes. In contrast, the spectral slope over 12-30 km was mostly flat, with mean values of -0.025 ± 0.32. An increase in submesoscale variability accompanying the steepening of the spectral slope was often observed in inshore areas; e.g., off Monterey Bay in winter, where a sharp front developed between the California Current and the California Undercurrent, and in the lower layers of the Southern California Bight, where vigorous interaction between a synoptic current and bottom topography is to be expected.

  8. A weighted least squares estimation of the polynomial regression model on paddy production in the area of Kedah and Perlis

    NASA Astrophysics Data System (ADS)

    Musa, Rosliza; Ali, Zalila; Baharum, Adam; Nor, Norlida Mohd

    2017-08-01

    The linear regression model assumes that all random error components are identically and independently distributed with constant variance. Hence, each data point provides equally precise information about the deterministic part of the total variation. In other words, the standard deviations of the error terms are constant over all values of the predictor variables. When the assumption of constant variance is violated, the ordinary least squares estimator of the regression coefficients loses its minimum-variance property in the class of linear unbiased estimators. Weighted least squares estimation is often used to maximize the efficiency of parameter estimation. A procedure that treats all of the data equally would give less precisely measured points more influence than they should have and would give highly precise points too little influence. Optimizing the weighted fitting criterion to find the parameter estimates allows the weights to determine the contribution of each observation to the final parameter estimates. This study used a polynomial model with weighted least squares estimation to investigate paddy production of different paddy lots based on paddy cultivation characteristics and environmental characteristics in the area of Kedah and Perlis. The results indicated that the factors affecting paddy production are mixture fertilizer application cycle, average temperature, the squared effect of average rainfall, the squared effect of pest and disease, the interaction between acreage and amount of mixture fertilizer, the interaction between paddy variety and NPK fertilizer application cycle, and the interaction between pest and disease and NPK fertilizer application cycle.
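
    A closed-form weighted least squares fit for a simple linear model can be sketched as follows (illustrative data on an exact line, not the study's paddy dataset):

```python
def weighted_least_squares(x, y, w):
    """Weighted least squares fit of y = b0 + b1*x, weighting each point by
    w_i (typically 1/variance_i) so that imprecisely measured points get
    proportionally less influence on the estimates."""
    sw = sum(w)
    xb = sum(wi * xi for wi, xi in zip(w, x)) / sw
    yb = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b1 = (sum(wi * (xi - xb) * (yi - yb) for wi, xi, yi in zip(w, x, y))
          / sum(wi * (xi - xb) ** 2 for wi, xi in zip(w, x)))
    return yb - b1 * xb, b1

# Points lying exactly on y = 2 + 3x are recovered whatever the weights.
x = [0.0, 1.0, 2.0, 3.0]
y = [2.0, 5.0, 8.0, 11.0]
b0, b1 = weighted_least_squares(x, y, [1.0, 0.5, 2.0, 1.0])
```

    With noisy, heteroscedastic data the weighted and unweighted fits differ, and the weighted one is the efficient estimator in the class the abstract describes.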

  9. Security practices and regulatory compliance in the healthcare industry.

    PubMed

    Kwon, Juhee; Johnson, M Eric

    2013-01-01

    Securing protected health information is a critical responsibility of every healthcare organization. We explore information security practices and identify practice patterns that are associated with improved regulatory compliance. We employed Ward's cluster analysis using minimum variance based on the adoption of security practices. Variance between organizations was measured using dichotomous data indicating the presence or absence of each security practice. Using t tests, we identified the relationships between the clusters of security practices and their regulatory compliance. We utilized the results from the Kroll/Healthcare Information and Management Systems Society telephone-based survey of 250 US healthcare organizations including adoption status of security practices, breach incidents, and perceived compliance levels on Health Information Technology for Economic and Clinical Health, Health Insurance Portability and Accountability Act, Red Flags rules, Centers for Medicare and Medicaid Services, and state laws governing patient information security. Our analysis identified three clusters (which we call leaders, followers, and laggers) based on the variance of security practice patterns. The clusters have significant differences among non-technical practices rather than technical practices, and the highest level of compliance was associated with hospitals that employed a balanced approach between technical and non-technical practices (or between one-off and cultural practices). Hospitals in the highest level of compliance were significantly managing third parties' breaches and training. Audit practices were important to those who scored in the middle of the pack on compliance. Our results provide security practice benchmarks for healthcare administrators and can help policy makers in developing strategic and practical guidelines for practice adoption.
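    As a hedged illustration of Ward's minimum-variance method on dichotomous adoption data, the sketch below implements a naive agglomerative version: at each step it merges the pair of clusters whose union least increases within-cluster variance. The organizations and practice vectors (1 = practice adopted) are invented, and real analyses would use a library routine rather than this O(n^3) loop.

```python
# Naive Ward's minimum-variance agglomerative clustering on binary vectors.
def ward_cost(a, b):
    """Increase in within-cluster variance if clusters a and b are merged."""
    na, nb = len(a), len(b)
    dim = len(a[0])
    ca = [sum(p[k] for p in a) / na for k in range(dim)]  # centroid of a
    cb = [sum(p[k] for p in b) / nb for k in range(dim)]  # centroid of b
    d2 = sum((x - y) ** 2 for x, y in zip(ca, cb))
    return na * nb / (na + nb) * d2

def ward_cluster(points, n_clusters):
    clusters = [[p] for p in points]
    while len(clusters) > n_clusters:
        # find the pair whose merge least increases total within-cluster variance
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: ward_cost(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# Two obvious groups: "leaders" adopt most practices, "laggers" adopt few
orgs = [(1, 1, 1, 1, 0), (1, 1, 1, 0, 1), (1, 1, 1, 1, 1),
        (0, 0, 1, 0, 0), (0, 0, 0, 0, 1), (1, 0, 0, 0, 0)]
groups = ward_cluster(orgs, 2)
```

    On this toy input the procedure recovers the two adoption-pattern groups, mirroring how the study's three clusters (leaders, followers, laggers) emerge from variance in practice patterns.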

  10. Security practices and regulatory compliance in the healthcare industry

    PubMed Central

    Kwon, Juhee; Johnson, M Eric

    2013-01-01

    Objective Securing protected health information is a critical responsibility of every healthcare organization. We explore information security practices and identify practice patterns that are associated with improved regulatory compliance. Design We employed Ward's cluster analysis using minimum variance based on the adoption of security practices. Variance between organizations was measured using dichotomous data indicating the presence or absence of each security practice. Using t tests, we identified the relationships between the clusters of security practices and their regulatory compliance. Measurement We utilized the results from the Kroll/Healthcare Information and Management Systems Society telephone-based survey of 250 US healthcare organizations including adoption status of security practices, breach incidents, and perceived compliance levels on Health Information Technology for Economic and Clinical Health, Health Insurance Portability and Accountability Act, Red Flags rules, Centers for Medicare and Medicaid Services, and state laws governing patient information security. Results Our analysis identified three clusters (which we call leaders, followers, and laggers) based on the variance of security practice patterns. The clusters have significant differences among non-technical practices rather than technical practices, and the highest level of compliance was associated with hospitals that employed a balanced approach between technical and non-technical practices (or between one-off and cultural practices). Conclusions Hospitals in the highest level of compliance were significantly managing third parties’ breaches and training. Audit practices were important to those who scored in the middle of the pack on compliance. Our results provide security practice benchmarks for healthcare administrators and can help policy makers in developing strategic and practical guidelines for practice adoption. PMID:22955497

  11. Estimating fluvial wood discharge from timelapse photography with varying sampling intervals

    NASA Astrophysics Data System (ADS)

    Anderson, N. K.

    2013-12-01

    There is recent focus on calculating wood budgets for streams and rivers to help inform management decisions, ecological studies and carbon/nutrient cycling models. Most work has measured in situ wood in temporary storage along stream banks or estimated wood inputs from banks. Little effort has gone into monitoring and quantifying wood in transport during high flows. This paper outlines a procedure for estimating total seasonal wood loads using non-continuous coarse interval sampling and examines differences in estimation between sampling at 1, 5, 10 and 15 minutes. Analysis is performed on wood transport for the Slave River in Northwest Territories, Canada. Relative to the 1 minute dataset, precision decreased by 23%, 46% and 60% for the 5, 10 and 15 minute datasets, respectively. Five and 10 minute sampling intervals provided unbiased equal variance estimates of 1 minute sampling, whereas 15 minute intervals were biased towards underestimation by 6%. Stratifying estimates by day and by discharge increased precision over non-stratification by 4% and 3%, respectively. Not including wood transported during ice break-up, the total minimum wood load estimated at this site is 3300 ± 800 m3 for the 2012 runoff season. The vast majority of the imprecision in total wood volumes came from variance in estimating average volume per log. [Figure caption: Comparison of proportions and variance across sample intervals using bootstrap sampling to achieve equal n; each trial sampled n = 100, repeated 10,000 times and averaged, and trials were averaged to obtain an estimate for each sample interval. Dashed lines represent values from the 1 minute dataset.]
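    The bootstrap comparison described above can be sketched with synthetic data. This is only an illustration of the approach, not the study's analysis: the 1-minute "wood count" record below is invented, and the 5-minute record is obtained by keeping every fifth observation.

```python
# Sketch: how much precision is lost when a 1-minute record is subsampled
# at a coarser interval, assessed by bootstrap resampling at equal n.
import random

random.seed(42)
# Synthetic 1-minute wood-count record for one high-flow period (hypothetical)
counts_1min = [max(0, int(random.gauss(5, 3))) for _ in range(600)]
counts_5min = counts_1min[::5]  # keep every 5th observation

def bootstrap_sd_of_mean(data, n=100, trials=2000):
    """SD of bootstrap means, each computed from n resampled observations."""
    means = []
    for _ in range(trials):
        sample = [random.choice(data) for _ in range(n)]
        means.append(sum(sample) / n)
    m = sum(means) / trials
    return (sum((x - m) ** 2 for x in means) / trials) ** 0.5

sd_1 = bootstrap_sd_of_mean(counts_1min)
sd_5 = bootstrap_sd_of_mean(counts_5min)
```

    Resampling both records at the same n, as in the figure described in the abstract, puts their precision estimates on a common footing before comparing intervals.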

  12. Predicting dense nonaqueous phase liquid dissolution using a simplified source depletion model parameterized with partitioning tracers

    NASA Astrophysics Data System (ADS)

    Basu, Nandita B.; Fure, Adrian D.; Jawitz, James W.

    2008-07-01

    Simulations of nonpartitioning and partitioning tracer tests were used to parameterize the equilibrium stream tube model (ESM) that predicts the dissolution dynamics of dense nonaqueous phase liquids (DNAPLs) as a function of the Lagrangian properties of DNAPL source zones. Lagrangian, or stream-tube-based, approaches characterize source zones with as few as two trajectory-integrated parameters, in contrast to the potentially thousands of parameters required to describe the point-by-point variability in permeability and DNAPL in traditional Eulerian modeling approaches. The spill and subsequent dissolution of DNAPLs were simulated in two-dimensional domains having different hydrologic characteristics (variance of the log conductivity field = 0.2, 1, and 3) using the multiphase flow and transport simulator UTCHEM. Nonpartitioning and partitioning tracers were used to characterize the Lagrangian properties (travel time and trajectory-integrated DNAPL content statistics) of DNAPL source zones, which were in turn shown to be sufficient for accurate prediction of source dissolution behavior using the ESM throughout the relatively broad range of hydraulic conductivity variances tested here. The results were found to be relatively insensitive to travel time variability, suggesting that dissolution could be accurately predicted even if the travel time variance was only coarsely estimated. Estimation of the ESM parameters was also demonstrated using an approximate technique based on Eulerian data in the absence of tracer data; however, determining the minimum amount of such data required remains for future work. Finally, the stream tube model was shown to be a more unique predictor of dissolution behavior than approaches based on the ganglia-to-pool model for source zone characterization.

  13. Adaptive color halftoning for minimum perceived error using the blue noise mask

    NASA Astrophysics Data System (ADS)

    Yu, Qing; Parker, Kevin J.

    1997-04-01

    Color halftoning using a conventional screen requires careful selection of screen angles to avoid Moire patterns. An obvious advantage of halftoning using a blue noise mask (BNM) is that no conventional screen angle or Moire patterns are produced. However, a simple strategy of employing the same BNM on all color planes is unacceptable in cases where a small registration error can cause objectionable color shifts. In a previous paper by Yao and Parker, strategies were presented for shifting or inverting the BNM as well as using mutually exclusive BNMs for different color planes. In this paper, the above schemes are studied in CIE-LAB color space in terms of root mean square error and variance for the luminance and chrominance channels, respectively. We demonstrate that the dot-on-dot scheme results in minimum chrominance error but maximum luminance error, the 4-mask scheme results in minimum luminance error but maximum chrominance error, and the shift scheme falls in between. Based on this study, we propose a new adaptive color halftoning algorithm that takes colorimetric color reproduction into account by applying two mutually exclusive BNMs on two color planes and an adaptive scheme on the other planes to reduce color error. We show that by having one adaptive color channel, we obtain increased flexibility to manipulate the output so as to reduce colorimetric error while permitting customization to specific printing hardware.

  14. Variation of Care Time Between Nursing Units in Classification-Based Nurse-to-Resident Ratios: A Multilevel Analysis

    PubMed Central

    Planer, Katarina; Hagel, Anja

    2018-01-01

    A validity test was conducted to determine how care level–based nurse-to-resident ratios compare with actual daily care times per resident in Germany. Stability across different long-term care facilities was tested. Care level–based nurse-to-resident ratios were compared with the standard minimum nurse-to-resident ratios. Levels of care are determined by classification authorities in long-term care insurance programs and are used to distribute resources. Care levels are a powerful tool for classifying authorities in long-term care insurance. We used observer-based measurement of assignable direct and indirect care time in 68 nursing units for 2028 residents across 2 working days. Organizational data were collected at the end of the quarter in which the observation was made. Data were collected from January to March, 2012. We used a null multilevel model with random intercepts and multilevel models with fixed and random slopes to analyze data at both the organization and resident levels. A total of 14% of the variance in total care time per day was explained by membership in nursing units. The impact of care levels on care time differed significantly between nursing units. Forty percent of residents at the lowest care level received less than the standard minimum registered nursing time per day. For facilities that have been significantly disadvantaged in the current staffing system, a higher minimum standard will function more effectively than a complex classification system without scientific controls. PMID:29442533

  15. Variation of Care Time Between Nursing Units in Classification-Based Nurse-to-Resident Ratios: A Multilevel Analysis.

    PubMed

    Brühl, Albert; Planer, Katarina; Hagel, Anja

    2018-01-01

    A validity test was conducted to determine how care level-based nurse-to-resident ratios compare with actual daily care times per resident in Germany. Stability across different long-term care facilities was tested. Care level-based nurse-to-resident ratios were compared with the standard minimum nurse-to-resident ratios. Levels of care are determined by classification authorities in long-term care insurance programs and are used to distribute resources. Care levels are a powerful tool for classifying authorities in long-term care insurance. We used observer-based measurement of assignable direct and indirect care time in 68 nursing units for 2028 residents across 2 working days. Organizational data were collected at the end of the quarter in which the observation was made. Data were collected from January to March, 2012. We used a null multilevel model with random intercepts and multilevel models with fixed and random slopes to analyze data at both the organization and resident levels. A total of 14% of the variance in total care time per day was explained by membership in nursing units. The impact of care levels on care time differed significantly between nursing units. Forty percent of residents at the lowest care level received less than the standard minimum registered nursing time per day. For facilities that have been significantly disadvantaged in the current staffing system, a higher minimum standard will function more effectively than a complex classification system without scientific controls.
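    A toy sketch of the variance share reported above ("14% of the variance in total care time per day was explained by membership in nursing units"): in a one-way random-intercept (null) model this share is the intraclass correlation (ICC), which can be estimated from ANOVA mean squares. The care times and units below are invented, and the estimator assumes balanced groups for simplicity.

```python
# ANOVA estimator of the intraclass correlation for balanced groups.
def icc_oneway(groups):
    """ICC = between-group variance / (between + within variance)."""
    n = len(groups[0])   # observations per group (assumed balanced)
    k = len(groups)      # number of groups (nursing units)
    grand = sum(sum(g) for g in groups) / (n * k)
    ms_between = n * sum((sum(g) / n - grand) ** 2 for g in groups) / (k - 1)
    ms_within = (sum(sum((x - sum(g) / n) ** 2 for x in g) for g in groups)
                 / (k * (n - 1)))
    var_between = max(0.0, (ms_between - ms_within) / n)
    return var_between / (var_between + ms_within)

# Hypothetical daily care times (minutes) for residents in three nursing units
units = [[90, 95, 100, 92], [120, 118, 125, 122], [100, 104, 98, 101]]
icc = icc_oneway(units)
```

    Here the units are deliberately well separated, so most of the variance is between-unit; in the study, unit membership accounted for a more modest 14%.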

  16. A six hundred-year annual minimum temperature history for the central Tibetan Plateau derived from tree-ring width series

    NASA Astrophysics Data System (ADS)

    He, Minhui; Yang, Bao; Datsenko, Nina M.

    2014-08-01

    The recent unprecedented warming found in different regions has attracted much attention in recent years. How temperature has really changed on the Tibetan Plateau (TP) remains uncertain, since very few high-resolution temperature series are available for this region, where large areas of snow and ice exist. Herein, we develop two Juniperus tibetica Kom. tree-ring width chronologies from different elevations. We found that the two tree-ring series only share high-frequency variability. Correlation, response function and partial correlation analysis indicate that prior-year annual (January-December) minimum temperature is most responsible for juniper radial growth at the higher belt, while the tree-ring width chronology at the lower belt contains more of a precipitation signal and is thus excluded from further analysis. The tree growth-climate model accounted for 40% of the total variance in actual temperature during the common period 1957-2010. The detected temperature signal is further robustly verified by other results. Consequently, a six-century-long annual minimum temperature history was recovered for the first time for the Yushu region, central TP. Interestingly, the rapid warming trend during the past five decades is identified as a significant cold phase in the context of the past 600 years. The recovered temperature series reflects low-frequency variability consistent with other temperature reconstructions over the whole TP region. Furthermore, the present recovered temperature series is associated with the Asian monsoon strength on decadal to multidecadal scales over the past 600 years.

  17. Variogram analysis of stable oxygen isotope composition of daily precipitation over the British Isles

    NASA Astrophysics Data System (ADS)

    Kohán, Balázs; Tyler, Jonathan; Jones, Matthew; Kern, Zoltán

    2017-04-01

    Water stable isotopes are important natural tracers in the hydrological cycle on global, regional and local scales. Daily precipitation water samples were collected from 70 sites over the British Isles on the 23rd, 24th, and 25th January, 2012 [1]. Samples were collected as part of a pilot study for the British Isotopes in Rainfall Project, a community engagement initiative, in collaboration with volunteer weather observers and the UK Met Office. The spatial correlation structure of daily precipitation stable oxygen isotope composition (δ18OP) was explored by variogram analysis [2]. Since the variograms from the raw data suggested a pronounced trend, owing to the spatial trend discussed in the original study [1], a second order polynomial trend was removed from the raw δ18OP data and variograms were calculated on the residuals. Directional experimental semivariograms were calculated (steps: 10°, tolerance: 30°) and aggregated into variogram surface plots to explore the spatial dependence structure of daily δ18OP. Each daily data set produced distinct variogram plots. -A well expressed anisotropic structure can be seen for Jan 23. The lowest and highest variance was observed in the SW-NE and NNE-SSW directions, respectively. Meteorological observations showed that the majority of the atmospheric flow was SW on this day, so the direction of low variance seems to reflect this flow direction, while the maximum variance might reflect the moisture variance near the elongation of the frontal system. -A less characteristic but still expressed anisotropic structure was found for Jan 24, when a warm front passed the British Isles perpendicular to the east coast, leading to a characteristic east-west δ18OP gradient suggestive of progressive rainout. The low variance central zone has a 100 km radius, which might correspond to the width of the warm front zone. Although the axis of minimum variance was similarly SW-NE, the zone of maximum variance was broader and practically perpendicular to it. In this case, however, the directions of the axes appear misaligned with the flow direction. -We could not observe similar characteristic patterns in the last variogram, calculated from the Jan 25 data set. These preliminary results suggest that variogram analysis is a promising approach to link δ18OP patterns to atmospheric processes. Funding: NKFIH SNN118205 / ARRS N1-0054. References: 1. Tyler, J. J., Jones, M., Arrowsmith, C., Allott, T., & Leng, M. J. (2016). Spatial patterns in the oxygen isotope composition of daily rainfall in the British Isles. Climate Dynamics 47:1971-1987. 2. Webster, R., & Oliver, M. A. (2007). Geostatistics for Environmental Scientists. John Wiley & Sons, Chichester.
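    The basic computation behind the directional variograms above is the experimental semivariogram. The sketch below is a simplified, non-directional 1-D version with invented station coordinates and δ18O values; the study's analysis binned pairs by direction as well as distance.

```python
# Experimental semivariogram: gamma(h) = mean of 0.5*(z_i - z_j)^2 over
# station pairs whose separation falls within tol of the lag h.
def semivariogram(coords, values, lag, tol):
    sq, npairs = 0.0, 0
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            d = abs(coords[i] - coords[j])
            if abs(d - lag) <= tol:
                sq += 0.5 * (values[i] - values[j]) ** 2
                npairs += 1
    return sq / npairs if npairs else None

# Spatially correlated toy transect: nearby stations have similar values
x = [0, 10, 20, 30, 40, 50, 60, 70]          # station positions, km
z = [-6.0, -6.2, -6.1, -6.8, -7.0, -7.4, -7.9, -8.1]  # hypothetical d18O
gamma_short = semivariogram(x, z, lag=10, tol=5)   # short-range lag
gamma_long = semivariogram(x, z, lag=50, tol=5)    # long-range lag
```

    For spatially correlated (or trended) data like this, semivariance grows with lag; computing it in direction bins, as in the study, reveals the anisotropy tied to atmospheric flow.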

  18. SU-D-218-05: Material Quantification in Spectral X-Ray Imaging: Optimization and Validation.

    PubMed

    Nik, S J; Thing, R S; Watts, R; Meyer, J

    2012-06-01

    To develop and validate a multivariate statistical method to optimize scanning parameters for material quantification in spectral x-ray imaging. An optimization metric was constructed by extensively sampling the thickness space for the expected number of counts for m (two or three) materials. This resulted in an m-dimensional confidence region of material quantities, e.g., thicknesses. Minimization of the ellipsoidal confidence region leads to the optimization of energy bins. For the given spectrum, the minimum counts required for effective material separation can be determined by predicting the signal-to-noise ratio (SNR) of the quantification. A Monte Carlo (MC) simulation framework using BEAM was developed to validate the metric. Projection data of the m materials was generated and material decomposition was performed for combinations of iodine, calcium and water by minimizing the z-score between the expected spectrum and binned measurements. The mean square error (MSE) and variance were calculated to measure the accuracy and precision of this approach, respectively. The minimum MSE corresponds to the optimal energy bins in the BEAM simulations. In the optimization metric, this is equivalent to the smallest confidence region. The SNR of the simulated images was also compared to the predictions from the metric. The MSE was dominated by the variance for the given material combinations, which demonstrates accurate material quantification. The BEAM simulations revealed that the optimization of energy bins was accurate to within 1 keV. The SNRs predicted by the optimization metric yielded satisfactory agreement but were expectedly higher for the BEAM simulations due to the inclusion of scattered radiation. The validation showed that the multivariate statistical method provides accurate material quantification, correct location of optimal energy bins and adequate prediction of image SNR. The BEAM code system is suitable for generating spectral x-ray imaging simulations. 
© 2012 American Association of Physicists in Medicine.

  19. Read distance performance and variation of 5 low-frequency radio frequency identification panel transceiver manufacturers.

    PubMed

    Ryan, S E; Blasi, D A; Anglin, C O; Bryant, A M; Rickard, B A; Anderson, M P; Fike, K E

    2010-07-01

    Use of electronic animal identification technologies by livestock managers is increasing, but performance of these technologies can be variable when used in livestock production environments. This study was conducted to determine whether 1) read distance of low-frequency radio frequency identification (RFID) transceivers is affected by type of transponder being interrogated; 2) read distance variation of low-frequency RFID transceivers is affected by transceiver manufacturer; and 3) read distance of various transponder-transceiver manufacturer combinations meets the 2004 United States Animal Identification Plan (USAIP) bovine standards subcommittee minimum read distance recommendation of 60 cm. Twenty-four transceivers (n = 5 transceivers per manufacturer for Allflex, Boontech, Farnam, and Osborne; n = 4 transceivers for Destron Fearing) were tested with 60 transponders [n = 10 transponders per type for Allflex full duplex B (FDX-B), Allflex half duplex (HDX), Destron Fearing FDX-B, Farnam FDX-B, and Y-Tex FDX-B; n = 6 for Temple FDX-B (EM Microelectronic chip); and n = 4 for Temple FDX-B (HiTag chip)] presented in the parallel orientation. All transceivers and transponders met International Organization for Standardization 11784 and 11785 standards. Transponders represented both half duplex and full duplex low-frequency air interface technologies. Use of a mechanical trolley device enabled the transponders to be presented to the center of each transceiver at a constant rate, thereby reducing human error. Transponder and transceiver manufacturer interacted (P < 0.0001) to affect read distance, indicating that transceiver performance was greatly dependent upon the transponder type being interrogated. Twenty-eight of 30 combinations of transceivers and transponders evaluated met the minimum recommended USAIP read distance. Mean read distances across the 30 combinations ranged from 45.1 to 129.4 cm. 
Transceiver manufacturer and transponder type interacted to affect read distance variance (P < 0.05). Maximum read distance performance of low-frequency RFID technologies with low variance can be achieved by selecting specific transponder-transceiver combinations.

  20. Small-scale Pressure-balanced Structures Driven by Mirror-mode Waves in the Solar Wind

    NASA Astrophysics Data System (ADS)

    Yao, Shuo; He, J.-S.; Tu, C.-Y.; Wang, L.-H.; Marsch, E.

    2013-10-01

    Recently, small-scale pressure-balanced structures (PBSs) have been studied with regard to their dependence on the direction of the local mean magnetic field B0. The present work continues these studies by investigating the compressive wave mode forming small PBSs, here for B0 quasi-perpendicular to the x-axis of Geocentric Solar Ecliptic coordinates (GSE-x). All the data used were measured by WIND in the quiet solar wind. From the distribution of PBSs on the plane determined by the temporal scale and angle θxB between the GSE-x and B0, we notice that at θxB = 115° the PBSs appear at temporal scales ranging from 700 s to 60 s. In the corresponding temporal segment, the correlations between the plasma thermal pressure Pth and the magnetic pressure PB, as well as that between the proton density Np and the magnetic field strength B, are investigated. In addition, we use the proton velocity distribution functions to calculate the proton temperatures T⊥ and T∥. Minimum Variance Analysis is applied to find the magnetic field minimum variance vector BN. We also study the time variation of the cross-helicity σc and the compressibility Cp and compare these with values from numerical predictions for the mirror mode. In this way, we finally identify a short segment that has T⊥ > T∥, proton β ≈ 1, both the Pth-PB and Np-B pairs showing anti-correlation, and σc ≈ 0 with Cp > 0. Although the examination of σc and Cp is not conclusive, it provides helpful additional information for the wave mode identification. Additionally, BN is found to be highly oblique to B0. Thus, this work suggests that a candidate mechanism for forming small-scale PBSs in the quiet solar wind is mirror-mode waves.
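    Minimum Variance Analysis, used above to find BN, diagonalizes the covariance matrix of the measured field and takes the eigenvector of the smallest eigenvalue as the minimum-variance direction. The sketch below is a pared-down version restricted to two field components, so the eigenproblem has a closed form; the field samples are synthetic, with large fluctuations along (1, 1)/√2 and small ones along (1, -1)/√2.

```python
# 2-D Minimum Variance Analysis sketch on synthetic field data.
import math
import random

random.seed(1)
samples = []
for _ in range(500):
    big, small = random.gauss(0, 3.0), random.gauss(0, 0.3)
    bx = (big + small) / math.sqrt(2)
    by = (big - small) / math.sqrt(2)
    samples.append((bx, by))

n = len(samples)
mx = sum(b[0] for b in samples) / n
my = sum(b[1] for b in samples) / n
# Covariance matrix M_ij = <B_i B_j> - <B_i><B_j>
cxx = sum((b[0] - mx) ** 2 for b in samples) / n
cyy = sum((b[1] - my) ** 2 for b in samples) / n
cxy = sum((b[0] - mx) * (b[1] - my) for b in samples) / n
# Smaller eigenvalue of the 2x2 symmetric matrix, via the quadratic formula
lam_min = (cxx + cyy) / 2 - math.hypot((cxx - cyy) / 2, cxy)
# Its eigenvector: (M - lam*I) v = 0 gives v = (cxy, lam - cxx), assuming cxy != 0
vx, vy = cxy, lam_min - cxx
norm = math.hypot(vx, vy)
bn = (vx / norm, vy / norm)   # minimum-variance direction
```

    For real 3-component spacecraft data the covariance matrix is 3x3 and one takes a numerical eigendecomposition, but the principle is identical.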

  1. A Study of the Southern Ocean: Mean State, Eddy Genesis & Demise, and Energy Pathways

    NASA Astrophysics Data System (ADS)

    Zajaczkovski, Uriel

    The Southern Ocean (SO), due to its deep penetrating jets and eddies, is well-suited for studies that combine surface and sub-surface data. This thesis explores the use of Argo profiles and sea surface height (SSH) altimeter data from a statistical point of view. A linear regression analysis of SSH and hydrographic data reveals that the altimeter can explain, on average, about 35% of the variance contained in the hydrographic fields and more than 95% if estimated locally. Correlation maxima are found at mid-depth, where dynamics are dominated by geostrophy. Near the surface, diabatic processes are significant, and the variance explained by the altimeter is lower. Since SSH variability is associated with eddies, the regression of SSH with temperature (T) and salinity (S) shows the relative importance of S vs T in controlling density anomalies. The AAIW salinity minimum separates two distinct regions; above the minimum, density changes are dominated by T, while below the minimum S dominates over T. The regression analysis provides a method to remove eddy variability, effectively reducing the variance of the hydrographic fields. We use satellite altimetry and output from an assimilating numerical model to show that the SO has two distinct eddy motion regimes. North and south of the Antarctic Circumpolar Current (ACC), eddies propagate westward with a mean meridional drift directed poleward for cyclonic eddies (CEs) and equatorward for anticyclonic eddies (AEs). Eddies formed within the boundaries of the ACC have an effective eastward propagation with respect to the mean deep ACC flow, and the mean meridional drift is reversed, with warm-core AEs propagating poleward and cold-core CEs propagating equatorward. This circulation pattern drives downgradient eddy heat transport, which could potentially transport a significant fraction (24 to 60 × 10^13 W) of the net poleward ACC eddy heat flux. 
We show that the generation of relatively large amplitude eddies is not a ubiquitous feature of the SO but rather a phenomenon that is constrained to five isolated, well-defined "hotspots". These hotspots are located downstream of major topographic features, with their boundaries closely following f/H contours. Eddies generated in these locations show no evidence of a bias in polarity and decay within the boundaries of the generation area. Eddies tend to disperse along f/H contours rather than following lines of latitude. We found enhanced values of both buoyancy (BP) and shear production (SP) inside the hotspots, with BP one order of magnitude larger than SP. This is consistent with baroclinic instability being the main mechanism of eddy generation. The mean potential density field estimated from Argo floats shows that inside the hotspots, isopycnal slopes are steep, indicating availability of potential energy. The hotspots identified in this thesis overlap with previously identified regions of standing meanders. We provide evidence that hotspot locations can be explained by the combined effect of topography, standing meanders that enhance baroclinic instability, and availability of potential energy to generate eddies via baroclinic instabilities.

  2. Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood

    NASA Astrophysics Data System (ADS)

    Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. In order to estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules used as optimization criteria should be able to locate the same, unknown optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
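    Minimum CRPS estimation with a Gaussian assumption rests on the closed-form CRPS of a normal forecast, CRPS(N(μ, σ²), y) = σ[z(2Φ(z) − 1) + 2φ(z) − 1/√π] with z = (y − μ)/σ. The sketch below only evaluates this score and finds the minimum-CRPS (μ, σ) by crude grid search; the observations are invented, and a real post-processing study would use a proper optimizer over regression coefficients.

```python
# Closed-form CRPS of a Gaussian forecast, minimized by grid search.
import math

def crps_gaussian(mu, sigma, y):
    """CRPS of a N(mu, sigma^2) forecast against observation y."""
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)   # phi(z)
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))            # Phi(z)
    return sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / math.sqrt(math.pi))

obs = [0.3, -0.2, 0.1, 0.4, -0.1]   # hypothetical forecast errors

def mean_crps(mu, sigma):
    return sum(crps_gaussian(mu, sigma, y) for y in obs) / len(obs)

# Crude grid search for the minimum-CRPS location and spread
best = min(((m / 10, s / 10) for m in range(-10, 11) for s in range(1, 21)),
           key=lambda p: mean_crps(*p))
```

    Maximum likelihood would instead minimize the negative Gaussian log-likelihood over the same grid; under a correct distributional assumption the two optima nearly coincide, which is the point the abstract tests.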

  3. Electron Heat Flux in Pressure Balance Structures at Ulysses

    NASA Technical Reports Server (NTRS)

    Yamauchi, Yohei; Suess, Steven T.; Sakurai, Takashi; Whitaker, Ann F. (Technical Monitor)

    2001-01-01

    Pressure balance structures (PBSs) are a common feature in the high-latitude solar wind near solar minimum. From previous studies, PBSs are believed to be remnants of coronal plumes and to be related to network activity such as magnetic reconnection in the photosphere. We investigated the magnetic structures of the PBSs, applying a minimum variance analysis to Ulysses/Magnetometer data. At the 2001 AGU Spring Meeting, we reported that PBSs have structures like current sheets or plasmoids, and suggested that they are associated with network activity at the base of polar plumes. In this paper, we have analyzed high-energy electron data from Ulysses/SWOOPS to see whether bi-directional electron flow exists and to confirm the conclusions more precisely. As a result, although most events show a typical flux directed away from the Sun, we have obtained evidence that some PBSs show bi-directional electron flux and others show an isotropic distribution of electron pitch angles. The evidence shows that plasmoids are flowing away from the Sun, changing their flow direction dynamically in a way not caused by Alfven waves. From this, we have concluded that PBSs are generated by network activity at the base of polar plumes and that their magnetic structures are current sheets or plasmoids.

  4. Antimicrobial Effect of Jasminum grandiflorum L. and Hibiscus rosa-sinensis L. Extracts Against Pathogenic Oral Microorganisms--An In Vitro Comparative Study.

    PubMed

    Nagarajappa, Ramesh; Batra, Mehak; Sharda, Archana J; Asawa, Kailash; Sanadhya, Sudhanshu; Daryani, Hemasha; Ramesh, Gayathri

    2015-01-01

    To assess and compare the antimicrobial potential and determine the minimum inhibitory concentration (MIC) of Jasminum grandiflorum and Hibiscus rosa-sinensis extracts as potential anti-pathogenic agents in dental caries. Aqueous and ethanol (cold and hot) extracts prepared from leaves of Jasminum grandiflorum and Hibiscus rosa-sinensis were screened for in vitro antimicrobial activity against Streptococcus mutans and Lactobacillus acidophilus using the agar well diffusion method. The lowest concentration of each extract that inhibited growth, taken as the minimum inhibitory concentration (MIC), was determined for both test organisms. Statistical analysis was performed with one-way analysis of variance (ANOVA). At lower concentrations, hot ethanol Jasminum grandiflorum (10 μg/ml) and Hibiscus rosa-sinensis (25 μg/ml) extracts were found to have statistically significant (P≤0.05) antimicrobial activity against S. mutans and L. acidophilus, with MIC values of 6.25 μg/ml and 25 μg/ml, respectively. A proportional increase in their antimicrobial activity (zone of inhibition) was observed with concentration. Both extracts were found to be antimicrobially active and to contain compounds with therapeutic potential. Nevertheless, clinical trials on the effect of these plants are essential before advocating large-scale therapy.

  5. A negentropy minimization approach to adaptive equalization for digital communication systems.

    PubMed

    Choi, Sooyong; Lee, Te-Won

    2004-07-01

    In this paper, we introduce and investigate a new adaptive equalization method based on minimizing the approximate negentropy of the estimation error for a finite-length equalizer. We consider an approximate negentropy using nonpolynomial expansions of the estimation error as a new performance criterion to improve on a linear equalizer based on minimum mean squared error (MMSE). Negentropy includes higher-order statistical information, and its minimization provides improved convergence, performance, and accuracy compared with traditional methods such as MMSE in terms of bit error rate (BER). The proposed negentropy minimization (NEGMIN) equalizer has two kinds of solutions, the MMSE solution and another, depending on the ratio of the normalization parameters. The NEGMIN equalizer has the best BER performance when the ratio of the normalization parameters is adjusted to maximize the output power (variance) of the NEGMIN equalizer. Simulation experiments show that the BER performance of the NEGMIN equalizer with the solution other than the MMSE one has characteristics similar to the adaptive minimum bit error rate (AMBER) equalizer. The main advantage of the proposed equalizer is that it needs significantly fewer training symbols than the AMBER equalizer. Furthermore, the proposed equalizer is more robust to nonlinear distortions than the MMSE equalizer.
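
    The MMSE baseline that NEGMIN is compared against can be illustrated with a finite-length linear equalizer solved by least squares, a finite-data approximation to the Wiener/MMSE solution. The channel taps, noise level, and decision delay below are hypothetical, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# BPSK symbols through a hypothetical 3-tap channel plus noise (illustrative only)
s = rng.choice([-1.0, 1.0], size=5000)
h = np.array([1.0, 0.4, 0.2])
x = np.convolve(s, h)[:len(s)] + 0.05 * rng.normal(size=len(s))

# finite-data least squares ~ Wiener/MMSE solution for an L-tap equalizer, delay d
L, d = 7, 2
X = np.array([x[i - np.arange(L)] for i in range(L - 1, len(x))])  # tapped delay lines
target = s[L - 1 - d : len(x) - d]                                 # delayed training symbols
w = np.linalg.lstsq(X, target, rcond=None)[0]
ber = np.mean(np.sign(X @ w) != target)
```

    With a mild channel and low noise, the 7-tap MMSE-style equalizer recovers the symbols with essentially no bit errors.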

  6. Summertime Minimum Streamflow Elasticity to Antecedent Winter Precipitation, Peak Snow Water Equivalent and Summertime Evaporative Demand in the Western US Maritime Mountains

    NASA Astrophysics Data System (ADS)

    Schaperow, J.; Cooper, M. G.; Cooley, S. W.; Alam, S.; Smith, L. C.; Lettenmaier, D. P.

    2017-12-01

    As climate regimes shift, streamflows and our ability to predict them will change as well. Elasticity of summer minimum streamflow is estimated for 138 unimpaired headwater river basins across the maritime western US mountains to better understand how climatic variables and geologic characteristics interact to determine the response of summer low flows to winter precipitation (PPT), spring snow water equivalent (SWE), and summertime potential evapotranspiration (PET). Elasticities are calculated using log-log linear regression, and linear reservoir storage coefficients are used to represent basin geology. Storage coefficients are estimated using baseflow recession analysis. On average, SWE, PET, and PPT explain about one third of the summertime low-flow variance. Snow-dominated basins with long timescales of baseflow recession are least sensitive to changes in SWE, PPT, and PET, while rainfall-dominated, faster-draining basins are most sensitive. There are also implications for the predictability of summer low flows. The R2 between streamflow and SWE drops from 0.62 to 0.47 from snow-dominated to rain-dominated basins, while there is no corresponding increase in R2 between streamflow and PPT.
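
    Elasticity from a log-log linear regression is simply the fitted slope, i.e. d ln Q / d ln SWE. A toy sketch with a synthetic power-law relation (the exponent 0.6 and prefactor are made up, not values from the study):

```python
import numpy as np

# hypothetical power-law relation Q = 0.5 * SWE^0.6 with lognormal noise
rng = np.random.default_rng(2)
swe = rng.uniform(100.0, 1000.0, size=200)
q = 0.5 * swe**0.6 * np.exp(rng.normal(0.0, 0.1, size=200))

# elasticity = d ln Q / d ln SWE = slope of the log-log linear regression
slope, intercept = np.polyfit(np.log(swe), np.log(q), 1)
```

    The fitted slope recovers the elasticity (here, near 0.6): a 1% change in SWE maps to roughly a 0.6% change in low flow.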

  7. Evaluation of an active humidification system for inspired gas.

    PubMed

    Roux, Nicolás G; Plotnikow, Gustavo A; Villalba, Darío S; Gogniat, Emiliano; Feld, Vivivana; Ribero Vairo, Noelia; Sartore, Marisa; Bosso, Mauro; Scapellato, José L; Intile, Dante; Planells, Fernando; Noval, Diego; Buñirigo, Pablo; Jofré, Ricardo; Díaz Nielsen, Ernesto

    2015-03-01

    The effectiveness of the active humidification systems (AHS) in patients already weaned from mechanical ventilation and with an artificial airway has not been very well described. The objective of this study was to evaluate the performance of an AHS in chronically tracheostomized and spontaneously breathing patients. Measurements were quantified at three levels of temperature (T°) of the AHS: level I, low; level II, middle; and level III, high and at different flow levels (20 to 60 L/minute). Statistical analysis of repeated measurements was performed using analysis of variance and significance was set at a P<0.05. While the lowest temperature setting (level I) did not condition gas to the minimum recommended values for any of the flows that were used, the medium temperature setting (level II) only conditioned gas with flows of 20 and 30 L/minute. Finally, at the highest temperature setting (level III), every flow reached the minimum absolute humidity (AH) recommended of 30 mg/L. According to our results, to obtain appropiate relative humidity, AH and T° of gas one should have a device that maintains water T° at least at 53℃ for flows between 20 and 30 L/m, or at T° of 61℃ at any flow rate.

  8. Reexamining the minimum viable population concept for long-lived species.

    PubMed

    Shoemaker, Kevin T; Breisch, Alvin R; Jaycox, Jesse W; Gibbs, James P

    2013-06-01

    For decades conservation biologists have proposed general rules of thumb for minimum viable population size (MVP); typically, they range from hundreds to thousands of individuals. These rules have shifted conservation resources away from small and fragmented populations. We examined whether iteroparous, long-lived species might constitute an exception to general MVP guidelines. On the basis of results from a 10-year capture-recapture study in eastern New York (U.S.A.), we developed a comprehensive demographic model for the globally threatened bog turtle (Glyptemys muhlenbergii), which was designated as endangered by the IUCN in 2011. We assessed population viability across a wide range of initial abundances and carrying capacities. Not accounting for inbreeding, our results suggest that bog turtle colonies with as few as 15 breeding females have >90% probability of persisting for >100 years, provided vital rates and environmental variance remain at currently estimated levels. On the basis of our results, we suggest that MVP thresholds may be 1-2 orders of magnitude too high for many long-lived organisms. Consequently, protection of small and fragmented populations may constitute a viable conservation option for such species, especially in a regional or metapopulation context. © 2013 Society for Conservation Biology.
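
    The kind of viability calculation described can be illustrated with a crude female-only count model with demographic stochasticity. All vital rates and the carrying-capacity cap below are hypothetical placeholders, not the authors' bog turtle model:

```python
import numpy as np

def persistence_probability(n0=15, years=100, reps=500,
                            survival=0.96, recruits_per_female=0.1, seed=3):
    """Fraction of simulated populations with >=1 female after `years`.

    Crude female-only count model: binomial survival plus Poisson
    recruitment each year, capped at a carrying capacity. All rates are
    hypothetical placeholders.
    """
    rng = np.random.default_rng(seed)
    alive = 0
    for _ in range(reps):
        n = n0
        for _ in range(years):
            n = rng.binomial(n, survival) + rng.poisson(n * recruits_per_female)
            n = min(n, 200)  # carrying capacity cap
        alive += n > 0
    return alive / reps

p = persistence_probability()
```

    With a per-capita growth rate slightly above 1, even 15 founding females persist in nearly all replicates, echoing the qualitative point of the abstract.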

  9. Framework to trade optimality for local processing in large-scale wavefront reconstruction problems.

    PubMed

    Haber, Aleksandar; Verhaegen, Michel

    2016-11-15

    We show that the minimum variance wavefront estimation problems permit localized approximate solutions, in the sense that the wavefront value at a point (excluding unobservable modes, such as the piston mode) can be approximated by a linear combination of the wavefront slope measurements in the point's neighborhood. This enables us to efficiently compute a wavefront estimate by performing a single sparse matrix-vector multiplication. Moreover, our results open the possibility for the development of wavefront estimators that can be easily implemented in a decentralized/distributed manner, and in which the estimate optimality can be easily traded for computational efficiency. We numerically validate our approach on Hudgin wavefront sensor geometries, and the results can be easily generalized to Fried geometries.
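
    The localized-estimator idea reduces wavefront estimation to a single banded (sparse) matrix-vector product: each wavefront value depends only on slope measurements in a small neighborhood. A toy NumPy illustration with placeholder local averaging weights (not the paper's actual reconstructor; in practice the matrix would be stored in a sparse format):

```python
import numpy as np

# toy banded estimator: each output depends on a 5-point neighborhood of slopes
n = 100
E = np.zeros((n, n))
for i in range(n):
    lo, hi = max(0, i - 2), min(n, i + 3)
    E[i, lo:hi] = 1.0 / (hi - lo)   # placeholder local averaging weights

slopes = np.ones(n)                 # dummy slope measurements
wavefront_est = E @ slopes          # one (sparse) matrix-vector product
```

    Because each row of E only touches a few columns, the cost of the estimate scales with the neighborhood size rather than with the full aperture, which is the computational point of the paper.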

  10. Theory of Financial Risk and Derivative Pricing

    NASA Astrophysics Data System (ADS)

    Bouchaud, Jean-Philippe; Potters, Marc

    2009-01-01

    Foreword; Preface; 1. Probability theory: basic notions; 2. Maximum and addition of random variables; 3. Continuous time limit, Ito calculus and path integrals; 4. Analysis of empirical data; 5. Financial products and financial markets; 6. Statistics of real prices: basic results; 7. Non-linear correlations and volatility fluctuations; 8. Skewness and price-volatility correlations; 9. Cross-correlations; 10. Risk measures; 11. Extreme correlations and variety; 12. Optimal portfolios; 13. Futures and options: fundamental concepts; 14. Options: hedging and residual risk; 15. Options: the role of drift and correlations; 16. Options: the Black and Scholes model; 17. Options: some more specific problems; 18. Options: minimum variance Monte-Carlo; 19. The yield curve; 20. Simple mechanisms for anomalous price statistics; Index of most important symbols; Index.

  11. Theory of Financial Risk and Derivative Pricing - 2nd Edition

    NASA Astrophysics Data System (ADS)

    Bouchaud, Jean-Philippe; Potters, Marc

    2003-12-01

    Foreword; Preface; 1. Probability theory: basic notions; 2. Maximum and addition of random variables; 3. Continuous time limit, Ito calculus and path integrals; 4. Analysis of empirical data; 5. Financial products and financial markets; 6. Statistics of real prices: basic results; 7. Non-linear correlations and volatility fluctuations; 8. Skewness and price-volatility correlations; 9. Cross-correlations; 10. Risk measures; 11. Extreme correlations and variety; 12. Optimal portfolios; 13. Futures and options: fundamental concepts; 14. Options: hedging and residual risk; 15. Options: the role of drift and correlations; 16. Options: the Black and Scholes model; 17. Options: some more specific problems; 18. Options: minimum variance Monte-Carlo; 19. The yield curve; 20. Simple mechanisms for anomalous price statistics; Index of most important symbols; Index.

  12. Constraints on the power spectrum of the primordial density field from large-scale data - Microwave background and predictions of inflation

    NASA Technical Reports Server (NTRS)

    Kashlinsky, A.

    1992-01-01

    It is shown here that, by using galaxy catalog correlation data as input, measurements of microwave background radiation (MBR) anisotropies should soon be able to test two of the inflationary scenario's most basic predictions: (1) that the primordial density fluctuations produced were scale-invariant and (2) that the universe is flat. They should also be able to detect anisotropies of large-scale structure formed by gravitational evolution of density fluctuations present at the last scattering epoch. Computations of MBR anisotropies corresponding to the minimum of the large-scale variance of the MBR anisotropy are presented which favor an open universe with P(k) significantly different from the Harrison-Zeldovich spectrum predicted by most inflationary models.

  13. MRI brain tumor segmentation based on improved fuzzy c-means method

    NASA Astrophysics Data System (ADS)

    Deng, Wankai; Xiao, Wei; Pan, Chao; Liu, Jianguo

    2009-10-01

    This paper focuses on image segmentation, one of the key problems in medical image processing. A new medical image segmentation method is proposed based on the fuzzy c-means algorithm and spatial information. First, we classify the image into the region of interest and the background using the fuzzy c-means algorithm. Then we use information on the tissues' gradients and the intensity inhomogeneities of regions to improve the quality of segmentation. The sum of the mean variance within the region and the reciprocal of the mean gradient along the edge of the region is chosen as the objective function; the minimum of this sum gives the optimal result. The results show that the clustering segmentation algorithm is effective.
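
    The fuzzy c-means algorithm the paper builds on alternates Bezdek's membership and center updates. A minimal sketch of plain FCM, without the paper's spatial-information extension:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=4):
    """Standard FCM (Bezdek): X is (N, d); returns centers and memberships U."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # rows of U sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ik = d_ik^{-2/(m-1)} / sum_j d_jk^{-2/(m-1)}
        U = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
    return centers, U

# two well-separated synthetic blobs
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(5.0, 0.3, (50, 2))])
centers, U = fuzzy_c_means(X)
```

    On separated blobs the soft memberships become nearly crisp and the centers land on the blob means; the paper then adds gradient and inhomogeneity terms on top of this baseline.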

  14. Model-based Acceleration Control of Turbofan Engines with a Hammerstein-Wiener Representation

    NASA Astrophysics Data System (ADS)

    Wang, Jiqiang; Ye, Zhifeng; Hu, Zhongzhi; Wu, Xin; Dimirovsky, Georgi; Yue, Hong

    2017-05-01

    Acceleration control of turbofan engines is conventionally designed through either schedule-based or acceleration-based approach. With the widespread acceptance of model-based design in aviation industry, it becomes necessary to investigate the issues associated with model-based design for acceleration control. In this paper, the challenges for implementing model-based acceleration control are explained; a novel Hammerstein-Wiener representation of engine models is introduced; based on the Hammerstein-Wiener model, a nonlinear generalized minimum variance type of optimal control law is derived; the feature of the proposed approach is that it does not require the inversion operation that usually upsets those nonlinear control techniques. The effectiveness of the proposed control design method is validated through a detailed numerical study.

  15. Intermediate energy proton-deuteron elastic scattering

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.

    1973-01-01

    A fully symmetrized multiple scattering series is considered for the description of proton-deuteron elastic scattering. An off-shell continuation of the experimentally known two-body amplitudes that retains the exchange symmetries required for the calculation is presented. The one-boson-exchange terms of the two-body amplitudes are evaluated exactly in this off-shell prescription. The first two terms of the multiple scattering series are calculated explicitly, whereas multiple scattering effects are obtained as minimum variance estimates from the 146-MeV data of Postma and Wilson. The multiple scattering corrections indeed consist of low-order partial waves, as suggested by Sloan on the basis of model studies with separable interactions. The Hamada-Johnston wave function is shown to be consistent with the data for internucleon distances greater than about 0.84 fm.

  16. Stochastic investigation of wind process for climatic variability identification

    NASA Astrophysics Data System (ADS)

    Deligiannis, Ilias; Tyrogiannis, Vassilis; Daskalou, Olympia; Dimitriadis, Panayiotis; Markonis, Yannis; Iliopoulou, Theano; Koutsoyiannis, Demetris

    2016-04-01

    The wind process is considered one of the hydrometeorological processes that generates and drives the climate dynamics. We use a dataset comprising hourly wind records to identify statistical variability with emphasis on the last period. Specifically, we investigate the occurrence of mean, maximum and minimum values and we estimate statistical properties such as marginal probability distribution function and the type of decay of the climacogram (i.e., mean process variance vs. scale) for various time periods. Acknowledgement: This research is conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.
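
    The climacogram mentioned here, the variance of the time-averaged process as a function of averaging scale, can be computed directly; for white noise the variance decays like 1/k with scale k, while long-range-dependent processes decay more slowly:

```python
import numpy as np

def climacogram(x, scales):
    """Variance of the locally averaged process versus averaging scale k."""
    out = []
    for k in scales:
        n = len(x) // k
        means = x[: n * k].reshape(n, k).mean(axis=1)  # non-overlapping k-averages
        out.append(means.var(ddof=1))
    return np.array(out)

# white noise benchmark: variance should decay roughly like 1/k
rng = np.random.default_rng(6)
x = rng.normal(size=100000)
g = climacogram(x, [1, 10, 100])
```

    Comparing a record's empirical climacogram against this 1/k benchmark is one way to detect the kind of long-term persistence the abstract investigates.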

  17. Stochastic investigation of precipitation process for climatic variability identification

    NASA Astrophysics Data System (ADS)

    Sotiriadou, Alexia; Petsiou, Amalia; Feloni, Elisavet; Kastis, Paris; Iliopoulou, Theano; Markonis, Yannis; Tyralis, Hristos; Dimitriadis, Panayiotis; Koutsoyiannis, Demetris

    2016-04-01

    The precipitation process is important not only to hydrometeorology but also to renewable energy resources management. We use a dataset consisting of daily and hourly records around the globe to identify statistical variability with emphasis on the last period. Specifically, we investigate the occurrence of mean, maximum and minimum values and we estimate statistical properties such as marginal probability distribution function and the type of decay of the climacogram (i.e., mean process variance vs. scale). Acknowledgement: This research is conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.

  18. Transition of Attention in Terminal Area NextGen Operations Using Synthetic Vision Systems

    NASA Technical Reports Server (NTRS)

    Ellis, Kyle K. E.; Kramer, Lynda J.; Shelton, Kevin J.; Arthur, J. J., III; Prinzel, Lance J., III; Norman, Robert M.

    2011-01-01

    This experiment investigates the capability of Synthetic Vision Systems (SVS) to provide significant situation awareness in terminal area operations, specifically in low visibility conditions. The use of a Head-Up Display (HUD) and Head-Down Displays (HDD) with SVS is contrasted to baseline standard head down displays in terms of induced workload and pilot behavior in 1400 RVR visibility levels. Variances across performance and pilot behavior were reviewed for acceptability when using HUD or HDD with SVS under reduced minimums to acquire the necessary visual components to continue to land. The data suggest superior performance for HUD implementations. Improved attentional behavior is also suggested for HDD implementations of SVS for low-visibility approach and landing operations.

  19. Estimating gene function with least squares nonnegative matrix factorization.

    PubMed

    Wang, Guoli; Ochs, Michael F

    2007-01-01

    Nonnegative matrix factorization is a machine learning algorithm that has extracted information from data in a number of fields, including imaging and spectral analysis, text mining, and microarray data analysis. One limitation with the method for linking genes through microarray data in order to estimate gene function is the high variance observed in transcription levels between different genes. Least squares nonnegative matrix factorization uses estimates of the uncertainties on the mRNA levels for each gene in each condition, to guide the algorithm to a local minimum in normalized chi2, rather than a Euclidean distance or divergence between the reconstructed data and the data itself. Herein, application of this method to microarray data is demonstrated in order to predict gene function.
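
    Uncertainty-weighted NMF of the kind described can be sketched with multiplicative updates for a weighted Euclidean (chi-squared-like) objective, where each entry is weighted by 1/sigma^2. This is a generic sketch of the technique, not the authors' exact algorithm:

```python
import numpy as np

def weighted_nmf(V, sigma, r=2, iters=1000, seed=7):
    """Multiplicative updates for min sum_ij (V_ij - (A@H)_ij)^2 / sigma_ij^2.

    Generic uncertainty-weighted NMF sketch (weights = 1/sigma^2).
    """
    rng = np.random.default_rng(seed)
    W = 1.0 / sigma**2
    n, m = V.shape
    A = rng.random((n, r)) + 0.1
    H = rng.random((r, m)) + 0.1
    eps = 1e-12
    for _ in range(iters):
        H *= (A.T @ (W * V)) / (A.T @ (W * (A @ H)) + eps)
        A *= ((W * V) @ H.T) / ((W * (A @ H)) @ H.T + eps)
    return A, H

# noiseless rank-2 nonnegative data; with constant sigma this reduces to plain NMF
rng = np.random.default_rng(8)
V = rng.random((20, 2)) @ rng.random((2, 30))
A, H = weighted_nmf(V, sigma=np.ones_like(V))
rel_err = np.linalg.norm(V - A @ H) / np.linalg.norm(V)
```

    The multiplicative form keeps A and H nonnegative throughout; with per-gene uncertainties in sigma, noisy entries contribute less to the fit, which is the idea behind guiding the factorization by measurement error.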

  20. Effects on Vibration and Surface Roughness in High Speed Micro End-Milling of Inconel 718 with Minimum Quantity Lubrication

    NASA Astrophysics Data System (ADS)

    Rahman, Mohamed Abd; Yeakub Ali, Mohammad; Saddam Khairuddin, Amir

    2017-03-01

    This paper presents a study of the vibration and surface roughness of an Inconel 718 workpiece produced by micro end-milling using a Mikrotools Integrated Multi-Process machine tool DT-110 with the control parameters spindle speed (15,000 rpm and 30,000 rpm), feed rate (2 mm/min and 4 mm/min) and depth of cut (0.10 mm and 0.15 mm). Vibration was measured using a DYTRAN accelerometer and the average surface roughness Ra was measured using a Wyko NT1100. Analysis of variance (ANOVA) using Design Expert software revealed that feed rate and depth of cut are the most significant factors for vibration, while for average surface roughness Ra, spindle speed is the most significant factor.

  1. Estimation of the simple correlation coefficient.

    PubMed

    Shieh, Gwowen

    2010-11-01

    This article investigates some unfamiliar properties of the Pearson product-moment correlation coefficient for the estimation of simple correlation coefficient. Although Pearson's r is biased, except for limited situations, and the minimum variance unbiased estimator has been proposed in the literature, researchers routinely employ the sample correlation coefficient in their practical applications, because of its simplicity and popularity. In order to support such practice, this study examines the mean squared errors of r and several prominent formulas. The results reveal specific situations in which the sample correlation coefficient performs better than the unbiased and nearly unbiased estimators, facilitating recommendation of r as an effect size index for the strength of linear association between two variables. In addition, related issues of estimating the squared simple correlation coefficient are also considered.
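
    A common closed-form correction toward the minimum variance unbiased estimator discussed here is the second-order approximation to the Olkin-Pratt estimator, which inflates r slightly to offset its bias toward zero:

```python
import numpy as np

def approx_unbiased_r(r, n):
    """Second-order approximation to the Olkin-Pratt unbiased estimator of rho.

    Pearson's r is slightly biased toward zero; this correction shrinks
    that bias for sample size n.
    """
    return r * (1.0 + (1.0 - r**2) / (2.0 * (n - 3.0)))

# example: sample r = 0.5 from n = 20 observations
r_adj = approx_unbiased_r(0.5, 20)
```

    The adjustment is largest for moderate r and small n, and vanishes as n grows, which is consistent with the article's finding that plain r is often adequate in practice.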

  2. Evidence for the presence of quasi-two-dimensional nearly incompressible fluctuations in the solar wind

    NASA Technical Reports Server (NTRS)

    Matthaeus, William H.; Goldstein, Melvyn L.; Roberts, D. Aaron

    1990-01-01

    Assuming that the slab and isotropic models of solar wind turbulence need modification (largely due to the observed anisotropy of the interplanetary fluctuations and the results of laboratory plasma experiments), this paper proposes a model of the solar wind. The solar wind is seen as a fluid which contains both classical transverse Alfvenic fluctuations and a population of quasi-transverse fluctuations. In quasi-two-dimensional turbulence, the pitch angle scattering by resonant wave-particle interactions is suppressed, and the direction of minimum variance of interplanetary fluctuations is parallel to the mean magnetic field. The assumed incompressibility is consistent with the fact that the density fluctuations are small and anticorrelated, and that the total pressure at small scales is nearly constant.

  3. Assumption-free estimation of the genetic contribution to refractive error across childhood.

    PubMed

    Guggenheim, Jeremy A; St Pourcain, Beate; McMahon, George; Timpson, Nicholas J; Evans, David M; Williams, Cathy

    2015-01-01

    Studies in relatives have generally yielded high heritability estimates for refractive error: twins 75-90%, families 15-70%. However, because related individuals often share a common environment, these estimates are inflated (via misallocation of unique/common environment variance). We calculated a lower-bound heritability estimate for refractive error free from such bias. Between the ages 7 and 15 years, participants in the Avon Longitudinal Study of Parents and Children (ALSPAC) underwent non-cycloplegic autorefraction at regular research clinics. At each age, an estimate of the variance in refractive error explained by single nucleotide polymorphism (SNP) genetic variants was calculated using genome-wide complex trait analysis (GCTA) using high-density genome-wide SNP genotype information (minimum N at each age=3,404). The variance in refractive error explained by the SNPs ("SNP heritability") was stable over childhood: Across age 7-15 years, SNP heritability averaged 0.28 (SE=0.08, p<0.001). The genetic correlation for refractive error between visits varied from 0.77 to 1.00 (all p<0.001) demonstrating that a common set of SNPs was responsible for the genetic contribution to refractive error across this period of childhood. Simulations suggested lack of cycloplegia during autorefraction led to a small underestimation of SNP heritability (adjusted SNP heritability=0.35; SE=0.09). To put these results in context, the variance in refractive error explained (or predicted) by the time participants spent outdoors was <0.005 and by the time spent reading was <0.01, based on a parental questionnaire completed when the child was aged 8-9 years old. Genetic variation captured by common SNPs explained approximately 35% of the variation in refractive error between unrelated subjects. 
This value sets an upper limit for predicting refractive error using existing SNP genotyping arrays, although higher-density genotyping in larger samples and inclusion of interaction effects is expected to raise this figure toward twin- and family-based heritability estimates. The same SNPs influenced refractive error across much of childhood. Notwithstanding the strong evidence of association between time outdoors and myopia, and time reading and myopia, less than 1% of the variance in myopia at age 15 was explained by crude measures of these two risk factors, indicating that their effects may be limited, at least when averaged over the whole population.

  4. Minimizing the Standard Deviation of Spatially Averaged Surface Cross-Sectional Data from the Dual-Frequency Precipitation Radar

    NASA Technical Reports Server (NTRS)

    Meneghini, Robert; Kim, Hyokyung

    2016-01-01

    For an airborne or spaceborne radar, the precipitation-induced path attenuation can be estimated from measurements of the normalized surface cross section, sigma-0, in the presence and absence of precipitation. In one implementation, the mean rain-free estimate and its variability are found from a lookup table (LUT) derived from previously measured data. For the dual-frequency precipitation radar aboard the Global Precipitation Measurement satellite, the nominal table consists of the statistics of the rain-free sigma-0 over a 0.5 deg x 0.5 deg latitude-longitude grid using a three-month set of input data. However, a problem with the LUT is an insufficient number of samples in many cells. An alternative table is constructed by a stepwise procedure that begins with the statistics over a 0.25 deg x 0.25 deg grid. If the number of samples at a cell is too few, the area is expanded, cell by cell, choosing at each step the cell that minimizes the variance of the data. The question arises, however, as to whether the selected region corresponds to the smallest variance. To address this question, a second type of variable-averaging grid is constructed using all possible spatial configurations and computing the variance of the data within each region. Comparisons of the standard deviations for the fixed and variable-averaged grids are given as a function of incidence angle and surface type using a three-month set of data. The advantage of variable spatial averaging is that the average standard deviation can be reduced relative to the fixed grid while satisfying the minimum sample requirement.
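
    The stepwise cell-expansion idea can be sketched as greedy region growing: starting from one cell, repeatedly add the neighboring cell that minimizes the variance of the pooled data until the minimum sample count is met. This is a sketch of the idea, not the operational DPR code; the cell data and neighbor graph below are toy inputs:

```python
import numpy as np

def expand_until(samples_by_cell, start, neighbors, min_n):
    """Greedy region growing: add the neighbor minimizing pooled variance
    until at least min_n samples are collected (or no neighbors remain)."""
    region = [start]
    pooled = list(samples_by_cell[start])
    while len(pooled) < min_n:
        candidates = {c for cell in region for c in neighbors[cell]} - set(region)
        if not candidates:
            break
        best = min(candidates,
                   key=lambda c: np.var(pooled + list(samples_by_cell[c])))
        region.append(best)
        pooled += list(samples_by_cell[best])
    return region, np.array(pooled)

# toy grid: cell 0 is sample-starved; cell 1 matches it, cell 2 does not
samples_by_cell = {0: [1.0, 1.0, 1.0], 1: [1.0, 1.0], 2: [5.0, 5.0]}
neighbors = {0: [1, 2], 1: [0], 2: [0]}
region, pooled = expand_until(samples_by_cell, 0, neighbors, 5)
```

    The procedure absorbs the statistically similar cell 1 rather than the dissimilar cell 2, which is exactly the variance-minimizing choice the abstract describes.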

  5. Relationship between distal radius fracture malunion and arm-related disability: A prospective population-based cohort study with 1-year follow-up

    PubMed Central

    2011-01-01

    Background Distal radius fracture is a common injury and may result in substantial dysfunction and pain. The purpose was to investigate the relationship between distal radius fracture malunion and arm-related disability. Methods The prospective population-based cohort study included 143 consecutive patients above 18 years with an acute distal radius fracture treated with closed reduction and either cast (55 patients) or external and/or percutaneous pin fixation (88 patients). The patients were evaluated with the disabilities of the arm, shoulder and hand (DASH) questionnaire at baseline (concerning disabilities before fracture) and one year after fracture. The 1-year follow-up included the SF-12 health status questionnaire and clinical and radiographic examinations. Patients were classified into three hypothesized severity categories based on fracture malunion; no malunion, malunion involving either dorsal tilt (>10 degrees) or ulnar variance (≥1 mm), and combined malunion involving both dorsal tilt and ulnar variance. Multivariate regression analyses were performed to determine the relationship between the 1-year DASH score and malunion and the relative risk (RR) of obtaining DASH score ≥15 and the number needed to harm (NNH) were calculated. Results The mean DASH score at one year after fracture was significantly higher by a minimum of 10 points with each malunion severity category. The RR for persistent disability was 2.5 if the fracture healed with malunion involving either dorsal tilt or ulnar variance and 3.7 if the fracture healed with combined malunion. The NNH was 2.5 (95% CI 1.8-5.4). Malunion had a statistically significant relationship with worse SF-12 score (physical health) and grip strength. Conclusion Malunion after distal radius fracture was associated with higher arm-related disability regardless of age. PMID:21232088

  6. Climate Drivers of Blue Intensity from Two Eastern North American Conifers

    NASA Astrophysics Data System (ADS)

    Rayback, S. A.; Kilbride, J.; Pontius, J.; Tait, E.; Little, J.

    2016-12-01

    Gaining a comprehensive understanding of the climatic factors that drive tree radial growth over time is important in the context of global climate change. Herein, we explore minimum blue intensity (BI), a measure of lignin content in the latewood of tree rings, with the objective of developing BI chronologies for two eastern North American conifers to identify and explore climatic drivers and to compare BI-climate relationships to those of tree-ring widths (TRW). Using dendrochronological techniques, Tsuga canadensis and Picea rubens TRW and BI chronologies were developed at Abbey Pond (ABP) and The Cape National Research Area (CAPE), Vermont, USA, respectively. Climate drivers (1901-2010) were investigated using correlation and response function analyses and generalized linear mixed models. The ABP T. canadensis BI model explained the highest amount of variance (R2 = 0.350, adjR2 = 0.324), with September Tmin and June total percent cloudiness as predictors. The ABP T. canadensis TRW model explained 34% of the variance (R2 = 0.340, adjR2 = 0.328), with summer total precipitation and June PDSI as predictors. The CAPE P. rubens TRW and BI models explained 31% of the variance (R2 = 0.33, adjR2 = 0.310), based on previous-year July Tmax, previous-year August Tmean and fall Tmin as predictors, and 7% (R2 = 0.068, adjR2 = 0.060), based on spring Tmin as the predictor, respectively. Moving window analyses confirm the moisture sensitivity of T. canadensis TRW and now BI and suggest an extension of the growing season. Similarly, P. rubens TRW responded consistently negatively to high growing season temperatures, but TRW and BI benefited from a longer growing season. This study introduces two new BI chronologies, the first from northeastern North America, and highlights shifts underway in tree response to changing climate.

  7. How many days of accelerometer monitoring predict weekly physical activity behaviour in obese youth?

    PubMed

    Vanhelst, Jérémy; Fardy, Paul S; Duhamel, Alain; Béghin, Laurent

    2014-09-01

    The aim of this study was to determine the type and number of accelerometer monitoring days needed to predict weekly sedentary behaviour and physical activity in obese youth. Fifty-three obese youth wore a triaxial accelerometer for 7 days to measure physical activity in free-living conditions. Analyses of variance for repeated measures, intraclass correlation coefficients (ICC) and linear regression analyses were used. Obese youth spent significantly less time in physical activity on weekends or free days compared with school days. ICC analyses indicated a minimum of 2 days is needed to estimate physical activity behaviour. ICCs were 0.80 between weekly physical activity and weekdays and 0.92 between physical activity and weekend days. The model has to include a weekday and a weekend day: using any combination of one weekday and one weekend day, the percentage of variance explained is >90%. Results indicate that 2 days of monitoring are needed to estimate weekly physical activity behaviour in obese youth with an accelerometer. Our results also showed the importance of taking into consideration school day versus free day and weekday versus weekend day in assessing physical activity in obese youth. © 2013 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.

  8. MANUSCRIPT IN PRESS: DEMENTIA & GERIATRIC COGNITIVE DISORDERS

    PubMed Central

    O’Bryant, Sid E.; Xiao, Guanghua; Barber, Robert; Cullum, C. Munro; Weiner, Myron; Hall, James; Edwards, Melissa; Grammas, Paula; Wilhelmsen, Kirk; Doody, Rachelle; Diaz-Arrastia, Ramon

    2015-01-01

    Background Prior work on the link between blood-based biomarkers and cognitive status has largely been based on dichotomous classifications rather than detailed neuropsychological functioning. The current project was designed to create serum-based biomarker algorithms that predict neuropsychological test performance. Methods A battery of neuropsychological measures was administered. Random forest analyses were utilized to create neuropsychological test-specific biomarker risk scores in a training set that were entered into linear regression models predicting the respective test scores in the test set. Serum multiplex biomarker data were analyzed on 108 proteins from 395 participants (197 AD cases and 198 controls) from the Texas Alzheimer’s Research and Care Consortium. Results The biomarker risk scores were significant predictors (p<0.05) of scores on all neuropsychological tests. With the exception of premorbid intellectual status (6.6%), the biomarker risk scores alone accounted for a minimum of 12.9% of the variance in neuropsychological scores. Biomarker algorithms (biomarker risk scores + demographics) accounted for substantially more variance in scores. Review of the variable importance plots indicated differential patterns of biomarker significance for each test, suggesting the possibility of domain-specific biomarker algorithms. Conclusions Our findings provide proof-of-concept for a novel area of scientific discovery, which we term “molecular neuropsychology.” PMID:24107792

  9. CMB bispectrum, trispectrum, non-Gaussianity, and the Cramer-Rao bound

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamionkowski, Marc; Smith, Tristan L.; Heavens, Alan

    Minimum-variance estimators for the parameter f_NL that quantifies local-model non-Gaussianity can be constructed from the cosmic microwave background (CMB) bispectrum (three-point function) and also from the trispectrum (four-point function). Some have suggested that a comparison between the estimates for the values of f_NL from the bispectrum and trispectrum allows a consistency test for the model. But others argue that the saturation of the Cramer-Rao bound--which gives a lower limit to the variance of an estimator--by the bispectrum estimator implies that no further information on f_NL can be obtained from the trispectrum. Here, we elaborate the nature of the correlation between the bispectrum and trispectrum estimators for f_NL. We show that the two estimators become statistically independent in the limit of a large number of CMB pixels, and thus that the trispectrum estimator does indeed provide additional information on f_NL beyond that obtained from the bispectrum. We explain how this conclusion is consistent with the Cramer-Rao bound. Our discussion of the Cramer-Rao bound may be of interest to those doing Fisher-matrix parameter-estimation forecasts or data analysis in other areas of physics as well.
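    As a generic aside on the Cramer-Rao bound invoked above (unrelated to the CMB analysis itself), the sketch below checks numerically that the sample mean saturates the bound sigma^2/N for Gaussian data with known variance; every parameter is an illustrative assumption.

```python
# Generic Cramer-Rao illustration: for N Gaussian samples with known sigma,
# the bound on the variance of any unbiased estimator of the mean is
# sigma^2 / N, and the sample mean attains it. Verified by Monte Carlo.
import numpy as np

rng = np.random.default_rng(0)
N, sigma, trials = 100, 1.0, 20000
estimates = rng.normal(loc=0.0, scale=sigma, size=(trials, N)).mean(axis=1)

crb = sigma**2 / N               # Cramer-Rao lower bound
empirical_var = estimates.var()  # observed variance of the sample-mean estimator
print(crb, empirical_var)        # the two should nearly coincide
```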

  10. Data mining on long-term barometric data within the ARISE2 project

    NASA Astrophysics Data System (ADS)

    Hupe, Patrick; Ceranna, Lars; Pilger, Christoph

    2016-04-01

    The Comprehensive Nuclear-Test-Ban Treaty (CTBT) led to the implementation of an international infrasound array network. The International Monitoring System (IMS) network includes 48 certified stations, each providing data for up to 15 years. As part of work package 3 of the ARISE2 project (Atmospheric dynamics Research InfraStructure in Europe, phase 2), the data sets will be statistically evaluated with regard to atmospheric dynamics. The current study focuses on fluctuations of absolute air pressure. Time series have been analysed for 17 monitoring stations located around the world between Greenland and Antarctica, spanning the latitudes so as to represent different climate zones and characteristic atmospheric conditions; this enables quantitative comparisons between those regions. Analyses shown include wavelet power spectra, multi-annual time series of average variances with regard to long-wave scales, and spectral densities used to derive characteristics and special events. Evaluations reveal periodicities in average variances on the 2- to 20-day scale, with a maximum in the winter months and a minimum in summer of the respective hemisphere. This basically applies to time series of IMS stations beyond the tropics, where the dominance of cyclones and anticyclones changes with the seasons. Furthermore, spectral density analyses illustrate striking signals for several dynamic activities within one day, e.g., the semidiurnal tide.

  11. [Development and Application of a Performance Prediction Model for Home Care Nursing Based on a Balanced Scorecard using the Bayesian Belief Network].

    PubMed

    Noh, Wonjung; Seomun, Gyeongae

    2015-06-01

    This study was conducted to develop key performance indicators (KPIs) for home care nursing (HCN) based on a balanced scorecard, and to construct a performance prediction model of strategic objectives using the Bayesian Belief Network (BBN). This methodological study included four steps: establishment of KPIs, performance prediction modeling, development of a performance prediction model using BBN, and simulation of a suggested nursing management strategy. An HCN expert group and a staff group participated. The content validity index was analyzed using STATA 13.0, and BBN was analyzed using HUGIN 8.0. We generated a list of KPIs composed of 4 perspectives, 10 strategic objectives, and 31 KPIs. In the validity test of the performance prediction model, the factor with the greatest variance for increasing profit was maximum cost reduction of HCN services. The factor with the smallest variance for increasing profit was a minimum image improvement for HCN. During sensitivity analysis, the probability of the expert group did not affect the sensitivity. Furthermore, simulation of a 10% image improvement predicted the most effective way to increase profit. KPIs of HCN can estimate financial and non-financial performance. The performance prediction model for HCN will be useful to improve performance.

  12. Examining the Prey Mass of Terrestrial and Aquatic Carnivorous Mammals: Minimum, Maximum and Range

    PubMed Central

    Tucker, Marlee A.; Rogers, Tracey L.

    2014-01-01

    Predator-prey body mass relationships are a vital part of food webs across ecosystems and provide key information for predicting the susceptibility of carnivore populations to extinction. Despite this, there has been limited research on the minimum and maximum prey size of mammalian carnivores. Without information on large-scale patterns of prey mass, we limit our understanding of predation pressure, trophic cascades and susceptibility of carnivores to decreasing prey populations. The majority of studies that examine predator-prey body mass relationships focus on either a single or a subset of mammalian species, which limits the strength of our models as well as their broader application. We examine the relationship between predator body mass and the minimum, maximum and range of their prey's body mass across 108 mammalian carnivores, from weasels to baleen whales (Carnivora and Cetacea). We test whether mammals show a positive relationship between prey and predator body mass, as in reptiles and birds, as well as examine how environment (aquatic and terrestrial) and phylogenetic relatedness play a role in this relationship. We found that phylogenetic relatedness is a strong driver of predator-prey mass patterns in carnivorous mammals and accounts for a higher proportion of variance compared with the biological drivers of body mass and environment. We show a positive predator-prey body mass pattern for terrestrial mammals as found in reptiles and birds, but no relationship for aquatic mammals. Our results will benefit our understanding of trophic interactions, the susceptibility of carnivores to population declines and the role of carnivores within ecosystems. PMID:25162695

  13. Examining the prey mass of terrestrial and aquatic carnivorous mammals: minimum, maximum and range.

    PubMed

    Tucker, Marlee A; Rogers, Tracey L

    2014-01-01

    Predator-prey body mass relationships are a vital part of food webs across ecosystems and provide key information for predicting the susceptibility of carnivore populations to extinction. Despite this, there has been limited research on the minimum and maximum prey size of mammalian carnivores. Without information on large-scale patterns of prey mass, we limit our understanding of predation pressure, trophic cascades and susceptibility of carnivores to decreasing prey populations. The majority of studies that examine predator-prey body mass relationships focus on either a single or a subset of mammalian species, which limits the strength of our models as well as their broader application. We examine the relationship between predator body mass and the minimum, maximum and range of their prey's body mass across 108 mammalian carnivores, from weasels to baleen whales (Carnivora and Cetacea). We test whether mammals show a positive relationship between prey and predator body mass, as in reptiles and birds, as well as examine how environment (aquatic and terrestrial) and phylogenetic relatedness play a role in this relationship. We found that phylogenetic relatedness is a strong driver of predator-prey mass patterns in carnivorous mammals and accounts for a higher proportion of variance compared with the biological drivers of body mass and environment. We show a positive predator-prey body mass pattern for terrestrial mammals as found in reptiles and birds, but no relationship for aquatic mammals. Our results will benefit our understanding of trophic interactions, the susceptibility of carnivores to population declines and the role of carnivores within ecosystems.

  14. Variable selection for confounder control, flexible modeling and Collaborative Targeted Minimum Loss-based Estimation in causal inference

    PubMed Central

    Schnitzer, Mireille E.; Lok, Judith J.; Gruber, Susan

    2015-01-01

    This paper investigates the appropriateness of the integration of flexible propensity score modeling (nonparametric or machine learning approaches) in semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low- and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios. PMID:26226129

  15. Variable Selection for Confounder Control, Flexible Modeling and Collaborative Targeted Minimum Loss-Based Estimation in Causal Inference.

    PubMed

    Schnitzer, Mireille E; Lok, Judith J; Gruber, Susan

    2016-05-01

    This paper investigates the appropriateness of the integration of flexible propensity score modeling (nonparametric or machine learning approaches) in semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010 [27]) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low- and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios.
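    The IPTW estimator discussed in this abstract can be sketched in a few lines. The simulation below is a toy setup of our own (made-up data-generating process with a known propensity score), not the authors' C-TMLE implementation; it only shows how inverse-probability weighting removes confounding by W.

```python
# Minimal IPTW sketch under assumed simulated data. W confounds treatment A
# and outcome Y; weighting treated units by 1/P(A=1|W) recovers the mean
# outcome under treatment, here E[Y | do(A=1)] = 1 + E[W] + 1 = 2.5.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
W = rng.binomial(1, 0.5, n)              # confounder
p = 0.3 + 0.4 * W                        # true propensity score P(A=1|W)
A = rng.binomial(1, p)                   # treatment assignment
Y = 1.0 + W + A + rng.normal(0, 1, n)    # outcome; true effect of A is +1

weights = A / p                          # inverse probability of treatment
iptw_mean = np.sum(weights * Y) / np.sum(weights)  # Hajek-style estimator
naive_mean = Y[A == 1].mean()            # confounded comparison, biased upward
print(iptw_mean, naive_mean)
```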

  16. General deterrence effects of U.S. statutory DUI fine and jail penalties: long-term follow-up in 32 states.

    PubMed

    Wagenaar, Alexander C; Maldonado-Molina, Mildred M; Erickson, Darin J; Ma, Linan; Tobler, Amy L; Komro, Kelli A

    2007-09-01

    We examined effects of state statutory changes in DUI fine or jail penalties for first-time offenders from 1976 to 2002. A quasi-experimental time-series design was used (n=324 monthly observations). Four outcome measures of drivers involved in alcohol-related fatal crashes were: single-vehicle nighttime, low BAC (0.01-0.07 g/dl), medium BAC (0.08-0.14 g/dl), and high BAC (≥0.15 g/dl). All analyses of BAC outcomes included multiple imputation procedures for cases with missing data. Comparison series of non-alcohol-related crashes were included to efficiently control for effects of other factors. Statistical models included state-specific Box-Jenkins ARIMA models and pooled general linear mixed models. Twenty-six states implemented mandatory minimum fine policies and 18 states implemented mandatory minimum jail penalties. Estimated effects varied widely from state to state. Using variance-weighted meta-analysis methods to aggregate results across states, mandatory fine policies are associated with an average reduction in fatal crash involvement by drivers with BAC ≥0.08 g/dl of 8% (averaging 13 per state per year). Mandatory minimum jail policies are associated with a decline in single-vehicle nighttime fatal crash involvement of 6% (averaging 5 per state per year), and a decline in low-BAC cases of 9% (averaging 3 per state per year). No significant effects were observed for the other outcome measures. The overall pattern of results suggests a possible effect of mandatory fine policies in some states, but little effect of mandatory jail policies.
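    The variance-weighted meta-analysis used to aggregate state results is, in its generic fixed-effect form, an inverse-variance weighted average. The sketch below uses made-up state-level estimates purely for illustration.

```python
# Fixed-effect, inverse-variance weighted meta-analysis sketch. The per-state
# effect estimates and sampling variances below are invented for illustration.
import numpy as np

effects = np.array([-0.10, -0.06, -0.12, -0.02])    # per-state effect estimates
variances = np.array([0.002, 0.004, 0.003, 0.008])  # their sampling variances

w = 1.0 / variances                     # weight each state by its precision
pooled = np.sum(w * effects) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))    # standard error of the pooled effect
print(round(pooled, 4), round(pooled_se, 4))
```

States measured more precisely (smaller variance) pull the pooled estimate toward their values, which is why a single large state can dominate an unweighted average but not this one.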

  17. The association of remotely-sensed outdoor temperature with blood pressure levels in REGARDS: a cross-sectional study of a large, national cohort of African-American and white participants

    PubMed Central

    2011-01-01

    Background: Evidence is mounting regarding the clinically significant effect of temperature on blood pressure. Methods: In this cross-sectional study the authors obtained minimum and maximum temperatures and their respective previous-week variances at the geographic locations of the self-reported residences of 26,018 participants from a national cohort of blacks and whites, aged 45+. Linear regression of data from 20,623 participants was used in final multivariable models to determine if these temperature measures were associated with levels of systolic or diastolic blood pressure, and whether these relations were modified by stroke-risk region, race, education, income, sex, hypertensive medication status, or age. Results: After adjustment for confounders, same-day maximum temperatures 20°F lower had significant associations with 1.4 mmHg (95% CI: 1.0, 1.9) higher systolic and 0.5 mmHg (95% CI: 0.3, 0.8) higher diastolic blood pressures. Same-day minimum temperatures 20°F lower had a significant association with 0.7 mmHg (95% CI: 0.3, 1.0) higher systolic blood pressures but no significant association with diastolic blood pressure differences. Maximum and minimum previous-week temperature variabilities showed significant but weak relationships with blood pressures. Parameter estimates showed effect modification of negligible magnitude. Conclusions: This study found significant associations between outdoor temperature and blood pressure levels, which remained after adjustment for various confounders including season. This relationship showed negligible effect modification. PMID:21247466

  18. Forecasting Total Water Storage Changes in the Amazon basin using Atlantic and Pacific Sea Surface Temperatures

    NASA Astrophysics Data System (ADS)

    De Linage, C.; Famiglietti, J. S.; Randerson, J. T.

    2013-12-01

    Floods and droughts frequently affect the Amazon River basin, impacting the transportation, river navigation, agriculture, economy and the carbon balance and biodiversity of several South American countries. The present study aims to find the main variables controlling the natural interannual variability of terrestrial water storage in the Amazon region and to propose a modeling framework for flood and drought forecasting. We propose three simple empirical models using a linear combination of lagged spatial averages of central Pacific (Niño 4 index) and tropical North Atlantic (TNAI index) sea surface temperatures (SST) to predict a decade-long record of monthly, 3°-resolution terrestrial water storage anomalies (TWSA) observed by the Gravity Recovery And Climate Experiment (GRACE) mission. In addition to a SST forcing term, the models included a relaxation term to simulate the memory of water storage anomalies in response to external variability in forcing. Model parameters were spatially variable and individually optimized for each 3° grid cell. We also investigated the evolution of the predictive capability of our models with increasing minimum lead times for TWSA forecasts. TNAI was the primary external forcing for the central and western regions of the southern Amazon (35% of variance explained with a 3-month forecast), whereas Niño 4 was dominant in the northeastern part of the basin (61% of variance explained with a 3-month forecast). Forcing the model with a combination of the two indices improved the fit significantly (p<0.05) for at least 64% of the grid cells, compared to models forced solely with Niño 4 or TNAI. The combined model was able to explain 43% of the variance in the Amazon basin as a whole with a 3-month lead time.
While 66% of the observed variance was explained in the northeastern Amazon, only 39% of the variance was captured by the combined model in the central and western regions, suggesting that other, more local, forcing sources were important in these regions. The predictive capability of the combined model was monotonically degraded with increasing lead times. Degradation was smaller in the northeastern Amazon (where 49% of the variance was explained using an 8-month lead time versus 69% for a 1-month lead time) compared to the western and central regions of the southern Amazon (where 22% of the variance was explained at 8 months versus 43% at 1 month). Our model may provide early warning information about flooding in the northeastern region of the Amazon basin, where floodplain areas are extensive and the sensitivity of floods to external SST forcing was shown to be high. This work also strengthens our understanding of the mechanisms regulating interannual variability in Amazon fires, as TWSA deficits may subsequently lead to atmospheric water vapor deficits and reduced cloudiness via water-limited evapotranspiration. Finally, this work helps to bridge the gap between the current GRACE mission and the follow-on gravity mission.

  19. Simulating future uncertainty to guide the selection of survey designs for long-term monitoring

    USGS Publications Warehouse

    Garman, Steven L.; Schweiger, E. William; Manier, Daniel J.; Gitzen, Robert A.; Millspaugh, Joshua J.; Cooper, Andrew B.; Licht, Daniel S.

    2012-01-01

    A goal of environmental monitoring is to provide sound information on the status and trends of natural resources (Messer et al. 1991, Theobald et al. 2007, Fancy et al. 2009). When monitoring observations are acquired by measuring a subset of the population of interest, probability sampling as part of a well-constructed survey design provides the most reliable and legally defensible approach to achieve this goal (Cochran 1977, Olsen et al. 1999, Schreuder et al. 2004; see Chapters 2, 5, 6, 7). Previous works have described the fundamentals of sample surveys (e.g. Hansen et al. 1953, Kish 1965). Interest in survey designs and monitoring over the past 15 years has led to extensive evaluations and new developments of sample selection methods (Stevens and Olsen 2004), of strategies for allocating sample units in space and time (Urquhart et al. 1993, Overton and Stehman 1996, Urquhart and Kincaid 1999), and of estimation (Lesser and Overton 1994, Overton and Stehman 1995) and variance properties (Larsen et al. 1995, Stevens and Olsen 2003) of survey designs. Carefully planned, “scientific” (Chapter 5) survey designs have become a standard in contemporary monitoring of natural resources. Based on our experience with the long-term monitoring program of the US National Park Service (NPS; Fancy et al. 2009; Chapters 16, 22), operational survey designs tend to be selected using the following procedures. For a monitoring indicator (i.e. variable or response), a minimum detectable trend requirement is specified, based on the minimum level of change that would result in meaningful change (e.g. degradation). A probability of detecting this trend (statistical power) and an acceptable level of uncertainty (Type I error; see Chapter 2) within a specified time frame (e.g. 10 years) are specified to ensure timely detection. 
Explicit statements of the minimum detectable trend, the time frame for detecting the minimum trend, power, and acceptable probability of Type I error (α) collectively form the quantitative sampling objective.
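    The quantitative sampling objective described above (minimum detectable trend, time frame, power, alpha) can be explored by simulation. The following sketch is a hedged illustration with assumed numbers (a 0.5 units/yr trend, unit residual SD, 10 annual surveys), not a prescription from the chapter.

```python
# Simulation sketch: estimate statistical power to detect an assumed minimum
# trend over a 10-year monitoring frame with a two-sided t-test on the slope.
# All numbers (trend, noise SD, critical value) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
years = np.arange(10.0)
trend, sd, t_crit = 0.5, 1.0, 2.306            # slope/yr, residual SD, t_{0.975, df=8}

def slope_t(y):
    """t-statistic of the least-squares slope of y against year."""
    x = years - years.mean()
    b = np.sum(x * y) / np.sum(x**2)           # least-squares slope
    resid = y - y.mean() - b * x
    se = np.sqrt(np.sum(resid**2) / (len(y) - 2) / np.sum(x**2))
    return b / se

sims = [slope_t(trend * years + rng.normal(0, sd, 10)) for _ in range(2000)]
power = np.mean(np.abs(np.array(sims)) > t_crit)
print(round(power, 2))                         # probability of detecting the trend
```

Repeating this for candidate survey designs (more sites, longer frames, different revisit schedules) is one way to compare how quickly each design reaches the specified power.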

  20. Changing climate and endangered high mountain ecosystems in Colombia.

    PubMed

    Ruiz, Daniel; Moreno, Hernán Alonso; Gutiérrez, María Elena; Zapata, Paula Andrea

    2008-07-15

    High mountain ecosystems are among the most sensitive environments to changes in climatic conditions occurring on global, regional and local scales. The article describes the changing conditions observed over recent years in the high mountain basin of the Claro River, on the west flank of the Colombian Andean Central mountain range. Local ground truth data gathered at 4150 m, regional data available at nearby weather stations, and satellite information were used to analyze changes in the mean and the variance, and significant trends in climatic time series. Records included minimum, mean and maximum temperatures, relative humidity, rainfall, sunshine, and cloud characteristics. At high elevations, minimum and maximum temperatures during the coldest days increased at a rate of about 0.6 degrees C/decade, whereas maximum temperatures during the warmest days increased at a rate of about 1.3 degrees C/decade. Rates of increase in maximum, mean and minimum diurnal temperature range reached 0.6, 0.7, and 0.5 degrees C/decade. Maximum, mean and minimum relative humidity records showed reductions of about 1.8, 3.9 and 6.6%/decade. The total number of sunny days per month increased by almost 2.1 days. The headwaters exhibited no changes in rainfall totals, but evidenced an increased occurrence of unusually heavy rainfall events. Reductions in the amount of all cloud types over the area reached 1.9%/decade. At low elevations, changes in mean monthly temperatures and monthly rainfall totals exceeded + 0.2 degrees C and - 4% per decade, respectively. These striking changes might have contributed to the retreat of glacier icecaps and to the disappearance of high altitude water bodies, as well as to the occurrence and rapid spread of natural and man-induced forest fires.
Significant reductions in water supply, important disruptions of the integrity of high mountain ecosystems, and dramatic losses of biodiversity are now a steady menu of the severe climatic conditions experienced by these fragile tropical environments.

  1. Sustained IFN-I Expression during Established Persistent Viral Infection: A “Bad Seed” for Protective Immunity

    PubMed Central

    Murira, Armstrong; Laulhé, Xavier; Stäger, Simona; Lamarre, Alain; van Grevenynghe, Julien

    2017-01-01

    Type I interferons (IFN-I) are one of the primary immune defenses against viruses. Similar to all other molecular mechanisms that are central to eliciting protective immune responses, IFN-I expression is subject to homeostatic controls that regulate cytokine levels upon clearing the infection. However, in the case of established persistent viral infection, sustained elevation of IFN-I expression bears deleterious effects to the host and is today considered as the major driver of inflammation and immunosuppression. In fact, numerous emerging studies place sustained IFN-I expression as a common nexus in the pathogenesis of multiple chronic diseases including persistent infections with the human immunodeficiency virus type 1 (HIV-1), simian immunodeficiency virus (SIV), as well as the rodent-borne lymphocytic choriomeningitis virus clone 13 (LCMV clone 13). In this review, we highlight recent studies illustrating the molecular dysregulation and resultant cellular dysfunction in both innate and adaptive immune responses driven by sustained IFN-I expression. Here, we place particular emphasis on the efficacy of IFN-I receptor (IFNR) blockade towards improving immune responses against viral infections given the emerging therapeutic approach of blocking IFNR using neutralizing antibodies (Abs) in chronically infected patients. PMID:29301196

  2. Genetic absence of PD-1 promotes accumulation of terminally differentiated exhausted CD8+ T cells

    PubMed Central

    Odorizzi, Pamela M.; Pauken, Kristen E.; Paley, Michael A.; Sharpe, Arlene

    2015-01-01

    Programmed Death-1 (PD-1) has received considerable attention as a key regulator of CD8+ T cell exhaustion during chronic infection and cancer because blockade of this pathway partially reverses T cell dysfunction. Although the PD-1 pathway is critical in regulating established “exhausted” CD8+ T cells (TEX cells), it is unclear whether PD-1 directly causes T cell exhaustion. We show that PD-1 is not required for the induction of exhaustion in mice with chronic lymphocytic choriomeningitis virus (LCMV) infection. In fact, some aspects of exhaustion are more severe with genetic deletion of PD-1 from the onset of infection. Increased proliferation between days 8 and 14 postinfection is associated with subsequent decreased CD8+ T cell survival and disruption of a critical proliferative hierarchy necessary to maintain exhausted populations long term. Ultimately, the absence of PD-1 leads to the accumulation of more cytotoxic, but terminally differentiated, CD8+ TEX cells. These results demonstrate that CD8+ T cell exhaustion can occur in the absence of PD-1. They also highlight a novel role for PD-1 in preserving TEX cell populations from overstimulation, excessive proliferation, and terminal differentiation. PMID:26034050

  3. Cooperativity Between CD8+ T Cells, Non-Neutralizing Antibodies, and Alveolar Macrophages Is Important for Heterosubtypic Influenza Virus Immunity

    PubMed Central

    Laidlaw, Brian J.; Decman, Vilma; Ali, Mohammed-Alkhatim A.; Abt, Michael C.; Wolf, Amaya I.; Monticelli, Laurel A.; Mozdzanowska, Krystyna; Angelosanto, Jill M.; Artis, David; Erikson, Jan; Wherry, E. John

    2013-01-01

    Seasonal epidemics of influenza virus result in ∼36,000 deaths annually in the United States. Current vaccines against influenza virus elicit an antibody response specific for the envelope glycoproteins. However, high mutation rates result in the emergence of new viral serotypes, which elude neutralization by preexisting antibodies. T lymphocytes have been reported to be capable of mediating heterosubtypic protection through recognition of internal, more conserved, influenza virus proteins. Here, we demonstrate using a recombinant influenza virus expressing the LCMV GP33-41 epitope that influenza virus-specific CD8+ T cells and virus-specific non-neutralizing antibodies are each relatively ineffective at conferring heterosubtypic protective immunity alone. However, when combined, virus-specific CD8+ T cells and non-neutralizing antibodies cooperatively elicit robust protective immunity. This synergistic improvement in protective immunity is dependent, at least in part, on alveolar macrophages and/or other lung phagocytes. Overall, our studies suggest that an influenza vaccine capable of eliciting both CD8+ T cells and antibodies specific for highly conserved influenza proteins may be able to provide heterosubtypic protection in humans, and act as the basis for a potential “universal” vaccine. PMID:23516357

  4. Logistic quantile regression provides improved estimates for bounded avian counts: A case study of California Spotted Owl fledgling production

    USGS Publications Warehouse

    Cade, Brian S.; Noon, Barry R.; Scherer, Rick D.; Keane, John J.

    2017-01-01

    Counts of avian fledglings, nestlings, or clutch size that are bounded below by zero and above by some small integer form a discrete random variable distribution that is not approximated well by conventional parametric count distributions such as the Poisson or negative binomial. We developed a logistic quantile regression model to provide estimates of the empirical conditional distribution of a bounded discrete random variable. The logistic quantile regression model requires that counts are randomly jittered to a continuous random variable, logit transformed to bound them between specified lower and upper values, then estimated in conventional linear quantile regression, repeating the 3 steps and averaging estimates. Back-transformation to the original discrete scale relies on the fact that quantiles are equivariant to monotonic transformations. We demonstrate this statistical procedure by modeling 20 years of California Spotted Owl fledgling production (0−3 per territory) on the Lassen National Forest, California, USA, as related to climate, demographic, and landscape habitat characteristics at territories. Spotted Owl fledgling counts increased nonlinearly with decreasing precipitation in the early nesting period, in the winter prior to nesting, and in the prior growing season; with increasing minimum temperatures in the early nesting period; with adult compared to subadult parents; when there was no fledgling production in the prior year; and when percentage of the landscape surrounding nesting sites (202 ha) with trees ≥25 m height increased. Changes in production were primarily driven by changes in the proportion of territories with 2 or 3 fledglings. Average variances of the discrete cumulative distributions of the estimated fledgling counts indicated that temporal changes in climate and parent age class explained 18% of the annual variance in owl fledgling production, which was 34% of the total variance. 
Prior fledgling production explained as much of the variance in the fledgling counts as climate, parent age class, and landscape habitat predictors. Our logistic quantile regression model can be used for any discrete response variables with fixed upper and lower bounds.
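    The jitter-and-logit step described in this abstract, and the equivariance of quantiles to monotonic transformations that justifies back-transformation, can be sketched as follows; the bounds and counts are illustrative, not the owl data.

```python
# Sketch of the jitter-and-logit transform for bounded counts (here 0..3,
# jittered into the open interval (0, 4)). Back-transformation relies on
# quantiles being equivariant to monotonic transformations.
import numpy as np

rng = np.random.default_rng(2)
counts = rng.integers(0, 4, size=500)            # bounded discrete counts, 0..3
lower, upper = 0.0, 4.0                          # bounds for the jittered values

s = counts + rng.uniform(0.0, 1.0, size=500)     # jitter to continuous (0, 4)
z = np.log((s - lower) / (upper - s))            # logit transform to the real line

def back(z):
    """Inverse logit, mapping values back to the bounded (lower, upper) scale."""
    return (upper * np.exp(z) + lower) / (1.0 + np.exp(z))

# Equivariance check: the q-th order statistic transforms exactly, so a
# quantile estimated on the logit scale maps back to the bounded scale.
q_index = int(0.75 * 500)
z_q = np.sort(z)[q_index]
s_q = np.sort(s)[q_index]
print(np.isclose(back(z_q), s_q))                # True
```

In the full procedure, linear quantile regression would be fit to z, the jittering repeated and estimates averaged, and fitted quantiles back-transformed with `back`.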

  5. Design and grayscale fabrication of beamfanners in a silicon substrate

    NASA Astrophysics Data System (ADS)

    Ellis, Arthur Cecil

    2001-11-01

    This dissertation addresses important first steps in the development of a grayscale fabrication process for multiple-phase diffractive optical elements (DOEs) in silicon. Specifically, this process was developed through the design, fabrication, and testing of 1-2 and 1-4 beamfanner arrays for 5-micron illumination. The 1-2 beamfanner arrays serve as a test-of-concept and basic developmental step toward the construction of the 1-4 beamfanners. The beamfanners are 50 microns wide, and have features with dimensions of between 2 and 10 microns. The Iterative Annular Spectrum Approach (IASA) method, developed by Steve Mellin of UAH, and the Boundary Element Method (BEM) are the design and testing tools used to create the beamfanner profiles and predict their performance. Fabrication of the beamfanners required the techniques of grayscale photolithography and reactive ion etching (RIE). A 2-3 micron feature size 1-4 silicon beamfanner array was fabricated, but the small features and contact photolithographic techniques available prevented its construction to specifications. A second and more successful attempt was made in which both 1-4 and 1-2 beamfanner arrays were fabricated with a 5-micron minimum feature size. Photolithography for the UAH array was contracted to MEMS-Optical of Huntsville, Alabama. A repeatability study was performed, using statistical techniques, of 14 photoresist arrays and the subsequent RIE process used to etch the arrays in silicon. The variance in selectivity between the 14 processes was far greater than the variance between the individual etched features within each process. Specifically, the ratio of the variance of the selectivities averaged over each of the 14 etch processes to the variance of individual feature selectivities within the processes yielded a significance level below 0.1% by F-test, indicating that good etch-to-etch process repeatability was not attained.
One of the 14 arrays had feature etch-depths close enough to design specifications for optical testing, but 5-micron IR illumination of the 1-4 and 1-2 beamfanners yielded no convincing results of beam splitting in the detector plane 340 microns from the surface of the beamfanner array.

  6. Sequential Modelling of Building Rooftops by Integrating Airborne LIDAR Data and Optical Imagery: Preliminary Results

    NASA Astrophysics Data System (ADS)

    Sohn, G.; Jung, J.; Jwa, Y.; Armenakis, C.

    2013-05-01

This paper presents a sequential rooftop modelling method that refines initial rooftop models derived from airborne LiDAR data by integrating them with linear cues retrieved from single imagery. Cue integration between the two datasets is facilitated by creating new topological features connecting the initial model and image lines, from which new model hypotheses (variants of the initial model) are produced. We adopt the Minimum Description Length (MDL) principle for comparing the model candidates and selecting the optimal model, considering the balanced trade-off between model closeness and model complexity. Our preliminary results on the Vaihingen data provided by ISPRS WG III/4 demonstrate that image-driven modelling cues can compensate for the limitations posed by LiDAR data in rooftop modelling.

  7. An analysis of relational complexity in an air traffic control conflict detection task.

    PubMed

    Boag, Christine; Neal, Andrew; Loft, Shayne; Halford, Graeme S

    2006-11-15

    Theoretical analyses of air traffic complexity were carried out using the Method for the Analysis of Relational Complexity. Twenty-two air traffic controllers examined static air traffic displays and were required to detect and resolve conflicts. Objective measures of performance included conflict detection time and accuracy. Subjective perceptions of mental workload were assessed by a complexity-sorting task and subjective ratings of the difficulty of different aspects of the task. A metric quantifying the complexity of pair-wise relations among aircraft was able to account for a substantial portion of the variance in the perceived complexity and difficulty of conflict detection problems, as well as reaction time. Other variables that influenced performance included the mean minimum separation between aircraft pairs and the amount of time that aircraft spent in conflict.

  8. Stochastic investigation of temperature process for climatic variability identification

    NASA Astrophysics Data System (ADS)

    Lerias, Eleutherios; Kalamioti, Anna; Dimitriadis, Panayiotis; Markonis, Yannis; Iliopoulou, Theano; Koutsoyiannis, Demetris

    2016-04-01

    The temperature process is considered as the most characteristic hydrometeorological process and has been thoroughly examined in the climate-change framework. We use a dataset comprising hourly temperature and dew point records to identify statistical variability with emphasis on the last period. Specifically, we investigate the occurrence of mean, maximum and minimum values and we estimate statistical properties such as marginal probability distribution function and the type of decay of the climacogram (i.e., mean process variance vs. scale) for various time periods. Acknowledgement: This research is conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.

  9. Self-tuning regulators for multicyclic control of helicopter vibration

    NASA Technical Reports Server (NTRS)

    Johnson, W.

    1982-01-01

    A class of algorithms for the multicyclic control of helicopter vibration and loads is derived and discussed. This class is characterized by a linear, quasi-static, frequency-domain model of the helicopter response to control; identification of the helicopter model by least-squared-error or Kalman filter methods; and a minimum variance or quadratic performance function controller. Previous research on such controllers is reviewed. The derivations and discussions cover the helicopter model; the identification problem, including both off-line and on-line (recursive) algorithms; the control problem, including both open-loop and closed-loop feedback; and the various regulator configurations possible within this class. Conclusions from analysis and numerical simulations of the regulators provide guidance in the design and selection of algorithms for further development, including wind tunnel and flight tests.
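The controller class described above (a linear quasi-static response model identified by least-squared-error methods, followed by a quadratic-performance controller) can be sketched as follows; the response matrix, weights, and dimensions below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical quasi-static model (names are illustrative): measured
# vibration z = z0 + T @ theta + noise, with multicyclic control vector theta.
T_true = np.array([[2.0, 0.5], [0.3, 1.5], [0.1, 0.8]])
z0 = np.array([1.0, -2.0, 0.5])

# Identify T and z0 from probe inputs by a least-squared-error fit.
Theta = rng.normal(size=(20, 2))                     # probe control settings
Z = Theta @ T_true.T + z0 + 0.01 * rng.normal(size=(20, 3))
A = np.hstack([Theta, np.ones((20, 1))])             # unknowns: rows of T, z0
coef, *_ = np.linalg.lstsq(A, Z, rcond=None)
T_hat, z0_hat = coef[:2].T, coef[2]

# Quadratic performance function J = z'z + theta' Wt theta gives the
# minimum variance / quadratic controller in closed form.
Wt = 0.01 * np.eye(2)
theta = -np.linalg.solve(T_hat.T @ T_hat + Wt, T_hat.T @ z0_hat)
print(np.linalg.norm(z0), np.linalg.norm(z0 + T_true @ theta))
```

In an on-line (recursive) variant, the least-squares fit would be replaced by a recursive least squares or Kalman filter update at each rotor revolution.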

  10. Obtaining Reliable Predictions of Terrestrial Energy Coupling From Real-Time Solar Wind Measurements

    NASA Technical Reports Server (NTRS)

    Weimer, Daniel R.

    2002-01-01

    Measurements of the interplanetary magnetic field (IMF) from the ACE (Advanced Composition Explorer), Wind, IMP-8 (Interplanetary Monitoring Platform), and Geotail spacecraft have revealed that the IMF variations are contained in phase planes that are tilted with respect to the propagation direction, resulting in continuously variable changes in propagation times between spacecraft, and therefore, to the Earth. Techniques for using 'minimum variance analysis' have been developed in order to be able to measure the phase front tilt angles, and better predict the actual propagation times from the L1 orbit to the Earth, using only the real-time IMF measurements from one spacecraft. The use of empirical models with the IMF measurements at L1 from ACE (or future satellites) for predicting 'space weather' effects has also been demonstrated.
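Minimum variance analysis of the kind used above amounts to an eigen-decomposition of the field covariance matrix: the eigenvector with the smallest eigenvalue estimates the phase-front normal. The synthetic phase-plane data below are an illustrative assumption, not ACE or Wind measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic IMF time series whose fluctuations lie in a tilted phase plane:
# variance along the plane normal n_true is nearly zero, so n_true should
# be recovered as the minimum-variance eigenvector.
n_true = np.array([1.0, 2.0, 2.0]) / 3.0
e1 = np.cross(n_true, [0.0, 0.0, 1.0]); e1 /= np.linalg.norm(e1)
e2 = np.cross(n_true, e1)
B = (rng.normal(size=(500, 1)) * e1 + rng.normal(size=(500, 1)) * e2
     + 0.01 * rng.normal(size=(500, 3)))   # small out-of-plane noise

# Minimum variance analysis: eigenvector of the covariance matrix with the
# smallest eigenvalue estimates the phase-front normal (up to sign).
M = np.cov(B, rowvar=False)
w, V = np.linalg.eigh(M)                   # eigenvalues in ascending order
n_hat = V[:, 0]
print(abs(n_hat @ n_true))                 # near 1 when recovery succeeds
```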

  11. Special Effects: Antenna Wetting, Short Distance Diversity and Depolarization

    NASA Technical Reports Server (NTRS)

    Acosta, Roberto J.

    2000-01-01

The Advanced Communication Technology Satellite (ACTS) communications system operates in the Ka frequency band. ACTS uses multiple hopping narrow beams and very small aperture terminal (VSAT) technology to establish a system availability of 99.5% for bit-error-rates of 5 x 10(exp -7) or better over the continental United States. In order to maintain this minimum system availability in all US rain zones, ACTS uses an adaptive rain fade compensation protocol to reduce the impact of signal attenuation resulting from propagation effects. The purpose of this paper is to present the results of system and sub-system characterizations considering the statistical effects of system variances due to antenna wetting and depolarization effects. In addition, the availability enhancements using short distance diversity in a sub-tropical rain zone are investigated.

  12. Layer-oriented multigrid wavefront reconstruction algorithms for multi-conjugate adaptive optics

    NASA Astrophysics Data System (ADS)

    Gilles, Luc; Ellerbroek, Brent L.; Vogel, Curtis R.

    2003-02-01

Multi-conjugate adaptive optics (MCAO) systems with 10^4-10^5 degrees of freedom have been proposed for future giant telescopes. Using standard matrix methods to compute, optimize, and implement wavefront control algorithms for these systems is impractical, since the number of calculations required to compute and apply the reconstruction matrix scales respectively with the cube and the square of the number of AO degrees of freedom. In this paper, we develop an iterative sparse matrix implementation of minimum variance wavefront reconstruction for telescope diameters up to 32 m with more than 10^4 actuators. The basic approach is the preconditioned conjugate gradient method, using a multigrid preconditioner incorporating a layer-oriented (block) symmetric Gauss-Seidel iterative smoothing operator. We present open-loop numerical simulation results to illustrate algorithm convergence.
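The preconditioned conjugate gradient approach can be illustrated on a toy symmetric positive definite system; the Jacobi preconditioner below is only a stand-in for the paper's multigrid smoother, and the 1-D Laplacian is a convenient surrogate for the reconstruction matrix.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, max_iter=200):
    """Preconditioned conjugate gradient for SPD A; M_inv applies the
    preconditioner (here a stand-in for a multigrid smoother)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol * np.linalg.norm(b):
            break
        z_new = M_inv(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

# Toy SPD system standing in for the minimum-variance reconstruction
# equations, solved with a simple Jacobi (diagonal) preconditioner.
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D Laplacian
b = np.ones(n)
d = np.diag(A)
x = pcg(A, b, lambda r: r / d)
print(np.linalg.norm(A @ x - b))
```

In the full MCAO setting, `A` would be stored and applied as a sparse operator, which is what makes the iterative approach tractable at 10^4-10^5 degrees of freedom.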

  13. Polynomial-Time Approximation Algorithm for the Problem of Cardinality-Weighted Variance-Based 2-Clustering with a Given Center

    NASA Astrophysics Data System (ADS)

    Kel'manov, A. V.; Motkova, A. V.

    2018-01-01

    A strongly NP-hard problem of partitioning a finite set of points of Euclidean space into two clusters is considered. The solution criterion is the minimum of the sum (over both clusters) of weighted sums of squared distances from the elements of each cluster to its geometric center. The weights of the sums are equal to the cardinalities of the desired clusters. The center of one cluster is given as input, while the center of the other is unknown and is determined as the point of space equal to the mean of the cluster elements. A version of the problem is analyzed in which the cardinalities of the clusters are given as input. A polynomial-time 2-approximation algorithm for solving the problem is constructed.
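The criterion described above is easy to state in code. The sketch below only evaluates the objective being minimized (no approximation algorithm is implemented); the points, the cluster assignment, and the fixed center are illustrative.

```python
import numpy as np

def objective(points, mask, fixed_center):
    """Cardinality-weighted sum of squared distances: one center is the
    given fixed_center, the other is the mean of its own cluster."""
    C1, C2 = points[mask], points[~mask]     # C1 has the unknown center
    y = C1.mean(axis=0)                      # centroid of C1
    return (len(C1) * np.sum((C1 - y) ** 2)
            + len(C2) * np.sum((C2 - fixed_center) ** 2))

# Two tight groups: one near the given center (the origin), one far away.
pts = np.array([[0.1, 0.0], [-0.1, 0.0], [5.0, 5.1], [5.1, 4.9]])
good = objective(pts, np.array([False, False, True, True]), np.zeros(2))
bad = objective(pts, np.array([False, True, True, False]), np.zeros(2))
print(good, bad)   # the natural partition scores lower
```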

  14. Robust design of a 2-DOF GMV controller: a direct self-tuning and fuzzy scheduling approach.

    PubMed

    Silveira, Antonio S; Rodríguez, Jaime E N; Coelho, Antonio A R

    2012-01-01

This paper presents a study on self-tuning control strategies with generalized minimum variance control in a fixed two-degree-of-freedom structure, or simply GMV2DOF, within two adaptive perspectives. One, from the process model point of view, uses a recursive least squares estimator algorithm for direct self-tuning design; the other uses a Mamdani fuzzy GMV2DOF parameter-scheduling technique based on analytical and physical interpretations from a robustness analysis of the system. Both strategies are assessed in simulation and in real-plant experimental environments composed of a damped pendulum and a wind tunnel under development at the Department of Automation and Systems of the Federal University of Santa Catarina. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.

  15. On the robustness of a Bayes estimate. [in reliability theory

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1974-01-01

    This paper examines the robustness of a Bayes estimator with respect to the assigned prior distribution. A Bayesian analysis for a stochastic scale parameter of a Weibull failure model is summarized in which the natural conjugate is assigned as the prior distribution of the random parameter. The sensitivity analysis is carried out by the Monte Carlo method in which, although an inverted gamma is the assigned prior, realizations are generated using distribution functions of varying shape. For several distributional forms and even for some fixed values of the parameter, simulated mean squared errors of Bayes and minimum variance unbiased estimators are determined and compared. Results indicate that the Bayes estimator remains squared-error superior and appears to be largely robust to the form of the assigned prior distribution.

  16. A new approach to importance sampling for the simulation of false alarms. [in radar systems

    NASA Technical Reports Server (NTRS)

    Lu, D.; Yao, K.

    1987-01-01

In this paper a modified importance sampling technique for improving the convergence of Importance Sampling is given. By using this approach to estimate low false alarm rates in radar simulations, the number of Monte Carlo runs can be reduced significantly. For one-dimensional exponential, Weibull, and Rayleigh distributions, a uniformly minimum variance unbiased estimator is obtained. For the Gaussian distribution, the estimator in this approach is uniformly better than that of the previously known Importance Sampling approach. For a cell averaging system, by combining this technique and group sampling, the reduction of Monte Carlo runs for a reference cell of 20 and a false alarm rate of 1E-6 is on the order of 170 as compared to the previously known Importance Sampling approach.
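The core importance-sampling idea, shifting the sampling density toward the rare-event region and reweighting by the likelihood ratio, can be sketched for a Gaussian false-alarm probability near 1E-6; the threshold and sample count below are illustrative assumptions, not the paper's estimator.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(2)

# Estimate the false-alarm probability P(X > t) for standard Gaussian
# noise. Naive Monte Carlo would need on the order of 1e8 runs to observe
# any exceedances; mean-shifted sampling needs only a few thousand.
t = 4.75                                 # threshold, rate near 1e-6
n = 20000

x = rng.normal(loc=t, size=n)            # sample from the shifted density
w = np.exp(-t * x + t**2 / 2.0)          # likelihood ratio phi(x)/phi(x-t)
p_hat = np.mean((x > t) * w)

p_true = 0.5 * erfc(t / sqrt(2.0))       # exact Gaussian tail for reference
print(p_hat, p_true)
```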

  17. A CLT on the SNR of Diagonally Loaded MVDR Filters

    NASA Astrophysics Data System (ADS)

    Rubio, Francisco; Mestre, Xavier; Hachem, Walid

    2012-08-01

This paper studies the fluctuations of the signal-to-noise ratio (SNR) of minimum variance distortionless response (MVDR) filters implementing diagonal loading in the estimation of the covariance matrix. Previous results in the signal processing literature are generalized and extended by considering both spatially and temporally correlated samples. Specifically, a central limit theorem (CLT) is established for the fluctuations of the SNR of the diagonally loaded MVDR filter, under both supervised and unsupervised training settings in adaptive filtering applications. Our second-order analysis is based on the Nash-Poincaré inequality and the integration by parts formula for Gaussian functionals, as well as classical tools from statistical asymptotic theory. Numerical evaluations validating the accuracy of the CLT confirm the asymptotic Gaussianity of the fluctuations of the SNR of the MVDR filter.
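A minimal sketch of the diagonally loaded MVDR filter under study; the uniform linear array geometry, loading level, and snapshot model are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(3)

# Narrowband half-wavelength uniform linear array (illustrative).
m, n_snap = 8, 50
k = np.arange(m)
steer = lambda theta: np.exp(1j * np.pi * k * np.sin(theta)) / np.sqrt(m)
a = steer(0.0)                         # look direction (broadside)
a_i = steer(0.5)                       # interferer direction

# Signal-free training snapshots: strong interferer plus white noise.
X = (np.sqrt(100.0) * a_i[:, None] * rng.normal(size=(1, n_snap))
     + (rng.normal(size=(m, n_snap)) + 1j * rng.normal(size=(m, n_snap)))
     / np.sqrt(2.0))
R_hat = X @ X.conj().T / n_snap        # sample covariance matrix

alpha = 1.0                            # diagonal loading level
R_dl = R_hat + alpha * np.eye(m)
w = np.linalg.solve(R_dl, a)
w /= (a.conj() @ w)                    # distortionless constraint w^H a = 1

# Gain toward the look direction stays 1; the interferer is attenuated.
print(abs(w.conj() @ a), abs(w.conj() @ a_i))
```

Diagonal loading keeps `R_dl` well conditioned when the snapshot count is small relative to the array size, which is exactly the regime the paper's asymptotic analysis addresses.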

  18. Noise level in a neonatal intensive care unit in Santa Marta - Colombia.

    PubMed

    Garrido Galindo, Angélica Patricia; Camargo Caicedo, Yiniva; Velez-Pereira, Andres M

    2017-09-30

The environment of neonatal intensive care units is influenced by numerous sources of noise emission, which contribute to raised noise levels and may cause hearing impairment and other physiological and psychological changes in the newborn, as well as problems for care staff. The aim was to evaluate the level and sources of noise in the neonatal intensive care unit. Noise was sampled for 20 consecutive days, every 60 seconds, A-weighted in fast mode with a Type I sound level meter. The average, maximum, and minimum levels and the 10th, 50th and 90th percentiles were recorded. The values were integrated by hour and work shift and studied by analysis of variance. The sources were characterized in thirds of octaves. The average level was 64.00 ±3.62 dB(A), with a maximum of 76.04 ±5.73 dB(A), a minimum of 54.84 ±2.61 dB(A), and background noise of 57.95 ±2.83 dB(A). We found four sources with levels between 16.8-63.3 dB(A). Statistical analysis showed significant differences between hours and work shifts, with higher values in the early hours of the day. The values presented exceed the standards suggested by several organizations. The identified sources recorded high values at low frequencies.

  19. Development of the psychometric property of a Minimum Data-Set-Based Depression Rating Scale for use in long-term care facilities in Taiwan.

    PubMed

    Hsiao, C Y; Lan, C F; Chang, P L; Li, I C

    2015-01-01

Our aim is to develop the psychometric properties of the Minimum Data-Set-Based Depression Rating Scale (MDS-DRS) to ensure its use to assess service needs and guide care plans for institutionalized residents. A total of 378 residents were recruited from the Haoran Senior Citizen Home in northern Taiwan. The MDS-DRS and GDS-SF were used to identify observable features of depression symptoms in the elderly residents. The receiver operating characteristic (ROC) curve indicated that the MDS-DRS has 43.3% sensitivity and 90.6% specificity when screening for depression symptoms. The total variance explained by the two factors, 'sadness' and 'distress,' was 58.1% based on the factor analysis. Reliable assessment tools for nurses are important because they allow the early detection of depression symptoms. The MDS-DRS items perform as well as the GDS-SF items in detecting depression symptoms. Furthermore, the MDS-DRS has the advantage of providing information to staff about care process implementation, which can facilitate the identification of areas that need improvement. Further research is needed to validate the use of the MDS-DRS in long-term care facilities.

  20. Evaluation of an Active Humidification System for Inspired Gas

    PubMed Central

    Roux, Nicolás G.; Villalba, Darío S.; Gogniat, Emiliano; Feld, Vivivana; Ribero Vairo, Noelia; Sartore, Marisa; Bosso, Mauro; Scapellato, José L.; Intile, Dante; Planells, Fernando; Noval, Diego; Buñirigo, Pablo; Jofré, Ricardo; Díaz Nielsen, Ernesto

    2015-01-01

Objectives The effectiveness of active humidification systems (AHS) in patients already weaned from mechanical ventilation and with an artificial airway has not been well described. The objective of this study was to evaluate the performance of an AHS in chronically tracheostomized and spontaneously breathing patients. Methods Measurements were quantified at three temperature (T°) levels of the AHS (level I, low; level II, middle; level III, high) and at different flow levels (20 to 60 L/minute). Statistical analysis of repeated measurements was performed using analysis of variance, and significance was set at P<0.05. Results While the lowest temperature setting (level I) did not condition gas to the minimum recommended values for any of the flows that were used, the medium temperature setting (level II) only conditioned gas with flows of 20 and 30 L/minute. Finally, at the highest temperature setting (level III), every flow reached the minimum recommended absolute humidity (AH) of 30 mg/L. Conclusion According to our results, to obtain appropriate relative humidity, AH, and gas T°, one should have a device that maintains water T° at least at 53℃ for flows between 20 and 30 L/minute, or at a T° of 61℃ at any flow rate. PMID:25729499

  1. High resolution beamforming on large aperture vertical line arrays: Processing synthetic data

    NASA Astrophysics Data System (ADS)

    Tran, Jean-Marie Q.; Hodgkiss, William S.

    1990-09-01

This technical memorandum studies the beamforming of large aperture line arrays deployed vertically in the water column. The work concentrates on the use of high resolution techniques. Two processing strategies are envisioned: (1) full aperture coherent processing, which offers in theory the best processing gain; and (2) subaperture processing, which consists of extracting subapertures from the array and recombining the angular spectra estimated from these subarrays. The conventional beamformer, the minimum variance distortionless response (MVDR) processor, the multiple signal classification (MUSIC) algorithm, and the minimum norm method are used in this study. To validate the various processing techniques, the ATLAS normal mode program is used to generate synthetic data which constitute a realistic signal environment. A deep-water, range-independent sound velocity profile environment, characteristic of the North-East Pacific, is studied for two different 128-sensor arrays: a very long one cut for 30 Hz and operating at 20 Hz, and a shorter one cut for 107 Hz and operating at 100 Hz. The simulated sound source is 5 m deep. The full aperture and subaperture processing are implemented with curved and plane wavefront replica vectors. The beamforming results are examined and compared to the ray-theory results produced by the generic sonar model.

  2. Radial orbit error reduction and sea surface topography determination using satellite altimetry

    NASA Technical Reports Server (NTRS)

    Engelis, Theodossios

    1987-01-01

A method is presented in satellite altimetry that attempts to simultaneously determine the geoid and sea surface topography with minimum wavelengths of about 500 km and to reduce the radial orbit error caused by geopotential errors. The modeling of the radial orbit error is made using the linearized Lagrangian perturbation theory. Secular and second order effects are also included. After a rather extensive validation of the linearized equations, alternative expressions of the radial orbit error are derived. Numerical estimates for the radial orbit error and geoid undulation error are computed using the differences of two geopotential models as potential coefficient errors, for a SEASAT orbit. To provide statistical estimates of the radial distances and the geoid, a covariance propagation is made based on the full geopotential covariance. Accuracy estimates for the SEASAT orbits are given which agree quite well with already published results. Observation equations are developed using sea surface heights and crossover discrepancies as observables. A minimum variance solution with prior information provides estimates of parameters representing the sea surface topography and corrections to the gravity field that is used for the orbit generation. The simulation results show that the method can be used to effectively reduce the radial orbit error and recover the sea surface topography.

  3. Beltless translocation domain of botulinum neurotoxin A embodies a minimum ion-conductive channel.

    PubMed

    Fischer, Audrey; Sambashivan, Shilpa; Brunger, Axel T; Montal, Mauricio

    2012-01-13

    Botulinum neurotoxin, the causative agent of the paralytic disease botulism, is an endopeptidase composed of a catalytic domain (or light chain (LC)) and a heavy chain (HC) encompassing the translocation domain (TD) and receptor-binding domain. Upon receptor-mediated endocytosis, the LC and TD are proposed to undergo conformational changes in the acidic endocytic environment resulting in the formation of an LC protein-conducting TD channel. The mechanism of channel formation and the conformational changes in the toxin upon acidification are important but less well understood aspects of botulinum neurotoxin intoxication. Here, we have identified a minimum channel-forming truncation of the TD, the "beltless" TD, that forms transmembrane channels with ion conduction properties similar to those of the full-length TD. At variance with the holotoxin and the HC, channel formation for both the TD and the beltless TD occurs independent of a transmembrane pH gradient. Furthermore, acidification in solution induces moderate secondary structure changes. The subtle nature of the conformational changes evoked by acidification on the TD suggests that, in the context of the holotoxin, larger structural rearrangements and LC unfolding occur preceding or concurrent to channel formation. This notion is consistent with the hypothesis that although each domain of the holotoxin functions individually, each domain serves as a chaperone for the others.

  4. Computer-aided interpretation approach for optical tomographic images

    NASA Astrophysics Data System (ADS)

    Klose, Christian D.; Klose, Alexander D.; Netz, Uwe J.; Scheel, Alexander K.; Beuthan, Jürgen; Hielscher, Andreas H.

    2010-11-01

A computer-aided interpretation approach is proposed to detect rheumatoid arthritis (RA) in human finger joints using optical tomographic images. The image interpretation method employs a classification algorithm that makes use of a so-called self-organizing mapping scheme to classify fingers as either affected or unaffected by RA. Unlike in previous studies, this allows for combining multiple image features, such as minimum and maximum values of the absorption coefficient, for identifying affected and unaffected joints. Classification performances obtained by the proposed method were evaluated in terms of sensitivity, specificity, Youden index, and mutual information. Different methods (i.e., clinical diagnostics, ultrasound imaging, magnetic resonance imaging, and inspection of optical tomographic images) were used to produce ground truth benchmarks to determine the performance of image interpretations. Using data from 100 finger joints, findings suggest that some parameter combinations lead to higher sensitivities, while others lead to higher specificities, when compared to single-parameter classifications employed in previous studies. Maximum performances are reached when combining the minimum/maximum ratio of the absorption coefficient and image variance. In this case, sensitivities and specificities over 0.9 can be achieved. These values are much higher than those obtained when only single-parameter classifications were used, where sensitivities and specificities remained well below 0.8.

  5. Dynamic association rules for gene expression data analysis.

    PubMed

    Chen, Shu-Chuan; Tsai, Tsung-Hsien; Chung, Cheng-Han; Li, Wen-Hsiung

    2015-10-14

The purpose of gene expression analysis is to look for the association between regulation of gene expression levels and phenotypic variations. This association, based on the gene expression profile, has been used to determine whether the induction/repression of genes corresponds to phenotypic variations including cell regulation, clinical diagnoses, and drug development. Statistical analyses on microarray data have been developed to resolve the gene selection issue. However, these methods do not inform us of causality between genes and phenotypes. In this paper, we propose the dynamic association rule algorithm (DAR algorithm), which helps one to efficiently select a subset of significant genes for subsequent analysis. The DAR algorithm is based on association rules from market basket analysis in marketing. We first propose a statistical way, based on constructing a one-sided confidence interval and hypothesis testing, to determine if an association rule is meaningful. Based on the proposed statistical method, we then developed the DAR algorithm for gene expression data analysis. The method was applied to analyze four microarray datasets and one Next Generation Sequencing (NGS) dataset: the Mice Apo A1 dataset, the whole genome expression dataset of mouse embryonic stem cells, expression profiling of the bone marrow of Leukemia patients, the Microarray Quality Control (MAQC) dataset, and the RNA-seq dataset of a mouse genomic imprinting study. A comparison of the proposed method with the t-test on the expression profiling of the bone marrow of Leukemia patients was conducted. We developed a statistical way, based on the concept of a confidence interval, to determine the minimum support and minimum confidence for mining association relationships among items. With the minimum support and minimum confidence, one can find significant rules in one single step. The DAR algorithm was then developed for gene expression data analysis.
Four gene expression datasets showed that the proposed DAR algorithm not only was able to identify a set of differentially expressed genes that largely agreed with that of other methods, but also provided an efficient and accurate way to find influential genes of a disease. In the paper, the well-established association rule mining technique from marketing has been successfully modified to determine the minimum support and minimum confidence based on the concept of confidence interval and hypothesis testing. It can be applied to gene expression data to mine significant association rules between gene regulation and phenotype. The proposed DAR algorithm provides an efficient way to find influential genes that underlie the phenotypic variance.
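A minimal sketch of the confidence-interval idea, not the authors' exact DAR algorithm: a rule A -> B is scored by its support and by a one-sided normal-approximation lower bound on its confidence, so a rule is kept only when the bound clears a minimum-confidence threshold. The transactions and gene names below are invented for illustration.

```python
import math

def rule_stats(transactions, A, B, z=1.645):   # z for a 95% one-sided bound
    """Support, confidence, and a one-sided lower confidence bound for A -> B."""
    n = len(transactions)
    n_A = sum(A <= t for t in transactions)    # transactions containing A
    n_AB = sum((A | B) <= t for t in transactions)
    support = n_AB / n
    conf = n_AB / n_A if n_A else 0.0
    # Normal-approximation one-sided lower bound on the true confidence.
    half = z * math.sqrt(conf * (1 - conf) / n_A) if n_A else 0.0
    return support, conf, conf - half

# Invented gene "transactions": sets of up-regulated genes per sample.
transactions = [frozenset(t) for t in
                [{"g1", "g2"}, {"g1", "g2", "g3"}, {"g1"}, {"g2"}] * 25]
s, c, lo = rule_stats(transactions, frozenset({"g1"}), frozenset({"g2"}))
print(s, c, lo)
```

A rule passes when `lo` exceeds the chosen minimum confidence, which folds the hypothesis test into a single screening step, matching the one-step spirit described above.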

  6. Geophysical Inversion with Adaptive Array Processing of Ambient Noise

    NASA Astrophysics Data System (ADS)

    Traer, James

    2011-12-01

Land-based seismic observations of microseisms generated during Tropical Storms Ernesto and Florence are dominated by signals in the 0.15-0.5 Hz band. Data from seafloor hydrophones in shallow water (70 m depth, 130 km off the New Jersey coast) show dominant signals in the gravity-wave frequency band, 0.02-0.18 Hz, and low amplitudes from 0.18-0.3 Hz, suggesting that the significant opposing wave components necessary for double-frequency (DF) microseism generation were negligible at the site. Both storms produced similar spectra, despite differing sizes, suggesting near-coastal shallow water as the dominant region for observed microseism generation. A mathematical explanation for a sign inversion induced in the passive fathometer response by minimum variance distortionless response (MVDR) beamforming is presented. This shows that, in the region containing the bottom reflection, the MVDR fathometer response is identical to that obtained with conventional processing multiplied by a negative factor. A model is presented for the complete passive fathometer response to ocean surface noise, interfering discrete noise sources, and locally uncorrelated noise in an ideal waveguide. The leading order term of the ocean surface noise produces the cross-correlation of vertical multipaths and yields the depth of sub-bottom reflectors. Discrete noise incident on the array via multipaths gives multiple peaks in the fathometer response. These peaks may obscure the sub-bottom reflections but can be attenuated with use of MVDR steering vectors. A theory is presented for the signal-to-noise ratio (SNR) of the seabed reflection peak in the passive fathometer response as a function of seabed depth, seabed reflection coefficient, averaging time, bandwidth, and spatial directivity of the noise field. The passive fathometer algorithm was applied to data from two drifting array experiments in the Mediterranean, Boundary 2003 and 2004, with 0.34 s of averaging time.
In the 2004 experiment, the response showed that the array depth varied periodically with an amplitude of 1 m and a period of 7 s, consistent with wave-driven motion of the array. This introduced a destructive interference which prevents the SNR from growing with averaging time unless the motion is removed by use of a peak tracker.

  7. An update on modeling dose-response relationships: Accounting for correlated data structure and heterogeneous error variance in linear and nonlinear mixed models.

    PubMed

    Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D

    2016-05-01

    Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). 
Problems with the estimation process rendered results from the BLQ model questionable. Importantly, accounting for heterogeneous variance enhanced inferential precision as the breadth of the confidence interval for the mean breakpoint decreased by approximately 44%. In summary, the article illustrates the use of linear and nonlinear mixed models for dose-response relationships accounting for heterogeneous residual variances, discusses important diagnostics and their implications for inference, and provides practical recommendations for computational troubleshooting.
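The grid-search advice above can be illustrated with a broken-line linear (BLL) ascending model fitted by ordinary least squares over a grid of candidate breakpoints; the synthetic data and the simple OLS treatment are illustrative assumptions, not the paper's mixed-model procedure or its nursery-pig dataset.

```python
import numpy as np

def fit_bll(x, y, grid):
    """Broken-line linear fit y = plateau + slope * min(x - bp, 0),
    scanning candidate breakpoints bp and keeping the best least-squares fit."""
    best = None
    for bp in grid:
        X = np.column_stack([np.ones_like(x), np.minimum(x - bp, 0.0)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((X @ coef - y) ** 2)
        if best is None or sse < best[0]:
            best = (sse, bp, coef)
    return best   # (sse, breakpoint, [plateau, slope])

# Synthetic dose-response: ascending up to a breakpoint at 16.5, then flat.
x = np.linspace(10.0, 20.0, 41)
y = np.where(x < 16.5, 0.6 + 0.02 * (x - 16.5), 0.6)
sse, bp, coef = fit_bll(x, y, np.linspace(12.0, 19.0, 141))
print(bp, coef)
```

In practice each grid point (or the best few) would seed the nonlinear mixed-model optimizer as initial parameter values, which is the convergence aid the paragraph describes.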

  8. Anthropogenic noise decreases urban songbird diversity and may contribute to homogenization.

    PubMed

    Proppe, Darren S; Sturdy, Christopher B; St Clair, Colleen Cassady

    2013-04-01

    More humans reside in urban areas than at any other time in history. Protected urban green spaces and transportation greenbelts support many species, but diversity in these areas is generally lower than in undeveloped landscapes. Habitat degradation and fragmentation contribute to lowered diversity and urban homogenization, but less is known about the role of anthropogenic noise. Songbirds are especially vulnerable to anthropogenic noise because they rely on acoustic signals for communication. Recent studies suggest that anthropogenic noise reduces the density and reproductive success of some bird species, but that species which vocalize at frequencies above those of anthropogenic noise are more likely to inhabit noisy areas. We hypothesize that anthropogenic noise is contributing to declines in urban diversity by reducing the abundance of select species in noisy areas, and that species with low-frequency songs are those most likely to be affected. To examine this relationship, we calculated the noise-associated change in overall species richness and in abundance for seven common songbird species. After accounting for variance due to vegetative differences, species richness and the abundance of three of seven species were reduced in noisier locations. Acoustic analysis revealed that minimum song frequency was highly predictive of a species' response to noise, with lower minimum song frequencies incurring greater noise-associated reduction in abundance. These results suggest that anthropogenic noise affects some species independently of vegetative conditions, exacerbating the exclusion of some songbird species in otherwise suitable habitat. Minimum song frequency may provide a useful metric to predict how particular species will be affected by noise. In sum, mitigation of noise may enhance habitat suitability for many songbird species, especially for species with songs that include low-frequency elements. © 2012 Blackwell Publishing Ltd.

  9. Fisher information and Cramér-Rao lower bound for experimental design in parallel imaging.

    PubMed

    Bouhrara, Mustapha; Spencer, Richard G

    2018-06-01

    The Cramér-Rao lower bound (CRLB) is widely used in the design of magnetic resonance (MR) experiments for parameter estimation. Previous work has considered only Gaussian or Rician noise distributions in this calculation. However, the noise distribution for multi-coil acquisitions, such as in parallel imaging, obeys the noncentral χ-distribution under many circumstances. The purpose of this paper is to present the CRLB calculation for parameter estimation from multi-coil acquisitions. We perform explicit calculations of Fisher matrix elements and the associated CRLB for noise distributions following the noncentral χ-distribution. The special case of diffusion kurtosis is examined as an important example. For comparison with analytic results, Monte Carlo (MC) simulations were conducted to evaluate experimental minimum standard deviations (SDs) in the estimation of diffusion kurtosis model parameters. Results were obtained for a range of signal-to-noise ratios (SNRs), and for both the conventional case of Gaussian noise distribution and noncentral χ-distribution with different numbers of coils, m. At low-to-moderate SNR, the noncentral χ-distribution deviates substantially from the Gaussian distribution. Our results indicate that this departure is more pronounced for larger values of m. As expected, the minimum SDs (i.e., CRLB) in derived diffusion kurtosis model parameters assuming a noncentral χ-distribution provided a closer match to the MC simulations as compared to the Gaussian results. Estimates of minimum variance for parameter estimation and experimental design provided by the CRLB must account for the noncentral χ-distribution of noise in multi-coil acquisitions, especially in the low-to-moderate SNR regime. Magn Reson Med 79:3249-3255, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
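
    For the Gaussian-noise special case mentioned above, the Fisher matrix and CRLB take a particularly simple form: J = (1/σ²) GᵀG, where G is the Jacobian of the mean signal with respect to the parameters. The sketch below applies this to a mono-exponential decay as a simpler stand-in for the diffusion-kurtosis model; the noncentral χ expressions derived in the paper are not reproduced here, and all parameter values are illustrative.

```python
import numpy as np

def crlb_gaussian(jacobian, sigma):
    """CRLB standard deviations for unbiased estimates under Gaussian noise.

    jacobian[k, i] = d(mean signal at measurement k) / d(parameter i);
    Fisher matrix J = (1/sigma^2) * jacobian.T @ jacobian, CRLB = diag(J^-1).
    """
    fisher = jacobian.T @ jacobian / sigma**2
    return np.sqrt(np.diag(np.linalg.inv(fisher)))

# Mono-exponential decay S(b) = S0 * exp(-b * D): a simpler stand-in for the
# diffusion-kurtosis model discussed in the abstract.
S0, D, sigma = 1.0, 1.0e-3, 0.02
b = np.linspace(0.0, 2000.0, 16)
J = np.column_stack([np.exp(-b * D),               # dS/dS0
                     -S0 * b * np.exp(-b * D)])    # dS/dD
sd_S0, sd_D = crlb_gaussian(J, sigma)
print(sd_S0, sd_D)   # minimum achievable SDs for S0 and D at this noise level
```

The same scaffolding applies to any noise model once the Fisher matrix elements are replaced by the noncentral χ versions the paper derives.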

  10. Running Technique is an Important Component of Running Economy and Performance

    PubMed Central

    FOLLAND, JONATHAN P.; ALLEN, SAM J.; BLACK, MATTHEW I.; HANDSAKER, JOSEPH C.; FORRESTER, STEPHANIE E.

    2017-01-01

    ABSTRACT Despite an intuitive relationship between technique and both running economy (RE) and performance, and the diverse techniques used by runners to achieve forward locomotion, the objective importance of overall technique and the key components therein remain to be elucidated. Purpose This study aimed to determine the relationship between individual and combined kinematic measures of technique with both RE and performance. Methods Ninety-seven endurance runners (47 females) of diverse competitive standards performed a discontinuous protocol of incremental treadmill running (4-min stages, 1-km·h−1 increments). Measurements included three-dimensional full-body kinematics, respiratory gases to determine energy cost, and velocity of lactate turn point. Five categories of kinematic measures (vertical oscillation, braking, posture, stride parameters, and lower limb angles) and locomotory energy cost (LEc) were averaged across 10–12 km·h−1 (the highest common velocity < velocity of lactate turn point). Performance was measured as season's best (SB) time converted to a sex-specific z-score. Results Numerous kinematic variables were correlated with RE and performance (LEc, 19 variables; SB time, 11 variables). Regression analysis found three variables (pelvis vertical oscillation during ground contact normalized to height, minimum knee joint angle during ground contact, and minimum horizontal pelvis velocity) explained 39% of LEc variability. In addition, four variables (minimum horizontal pelvis velocity, shank touchdown angle, duty factor, and trunk forward lean) combined to explain 31% of the variability in performance (SB time). Conclusions This study provides novel and robust evidence that technique explains a substantial proportion of the variance in RE and performance. We recommend that runners and coaches are attentive to specific aspects of stride parameters and lower limb angles in part to optimize pelvis movement, and ultimately enhance performance. 
PMID:28263283

  11. An Approach to Maximize Weld Penetration During TIG Welding of P91 Steel Plates by Utilizing Image Processing and Taguchi Orthogonal Array

    NASA Astrophysics Data System (ADS)

    Singh, Akhilesh Kumar; Debnath, Tapas; Dey, Vidyut; Rai, Ram Naresh

    2017-10-01

    P-91 is a modified 9Cr-1Mo steel. Fabricated structures and components of P-91 have many applications in the power and chemical industries owing to excellent properties such as high-temperature stress-corrosion resistance and low susceptibility to thermal fatigue at high operating temperatures. The weld quality and surface finish of fabricated P91 structures are very good when welded by Tungsten Inert Gas (TIG) welding. However, the process has its limitation regarding weld penetration. The success of a welding process lies in fabricating with such a combination of parameters that gives maximum weld penetration and minimum weld width. To investigate the effect of the autogenous TIG welding parameters on weld penetration and weld width, bead-on-plate welds were carried out on P91 plates of 6 mm thickness in accordance with a Taguchi L9 design. Welding current, welding speed, and gas flow rate were the three control variables in the investigation. After autogenous TIG welding, the dimensions of the weld width, weld penetration, and weld area were successfully measured by an image analysis technique developed for the study. The maximum error for the measured dimensions of the weld width, penetration, and area with the developed image analysis technique was only 2 % compared to the measurements of the Leica-Q-Win-V3 software installed in the optical microscope. The measurements with the developed software, unlike the measurements under a microscope, required minimal human intervention. An Analysis of Variance (ANOVA) confirms the significance of the selected parameters. Thereafter, Taguchi's method was successfully used to trade off between maximum penetration and minimum weld width while keeping the weld area at a minimum.
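
    The structure of an L9 Taguchi analysis can be sketched briefly: each of the three factors appears at each of its three levels in exactly three runs, so main effects are simply level means. The penetration numbers below are invented for illustration and are not the paper's measurements.

```python
import numpy as np

# Standard L9(3^3) orthogonal array: each row gives the level index (0, 1, 2)
# of (welding current, welding speed, gas flow rate) for one run.
L9 = np.array([[0, 0, 0], [0, 1, 1], [0, 2, 2],
               [1, 0, 1], [1, 1, 2], [1, 2, 0],
               [2, 0, 2], [2, 1, 0], [2, 2, 1]])

# Hypothetical weld penetration measurements (mm) for the 9 runs.
pen = np.array([2.1, 2.4, 2.6, 2.9, 3.3, 2.8, 3.6, 3.1, 3.4])

# Main effect of each factor: mean response at each of its three levels;
# the balance of the array makes these fair comparisons (3 runs per level).
for f, name in enumerate(["current", "speed", "gas flow"]):
    means = [pen[L9[:, f] == lvl].mean() for lvl in range(3)]
    print(name, [round(m, 2) for m in means], "best level:", int(np.argmax(means)))
```

A full Taguchi analysis would add signal-to-noise ratios and the penetration/width trade-off; this only shows the orthogonal-array bookkeeping.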

  12. Calculating Higher-Order Moments of Phylogenetic Stochastic Mapping Summaries in Linear Time.

    PubMed

    Dhar, Amrit; Minin, Vladimir N

    2017-05-01

    Stochastic mapping is a simulation-based method for probabilistically mapping substitution histories onto phylogenies according to continuous-time Markov models of evolution. This technique can be used to infer properties of the evolutionary process on the phylogeny and, unlike parsimony-based mapping, conditions on the observed data to randomly draw substitution mappings that do not necessarily require the minimum number of events on a tree. Most stochastic mapping applications simulate substitution mappings only to estimate the mean and/or variance of two commonly used mapping summaries: the number of particular types of substitutions (labeled substitution counts) and the time spent in a particular group of states (labeled dwelling times) on the tree. Fast, simulation-free algorithms for calculating the mean of stochastic mapping summaries exist. Importantly, these algorithms scale linearly in the number of tips/leaves of the phylogenetic tree. However, to our knowledge, no such algorithm exists for calculating higher-order moments of stochastic mapping summaries. We present one such simulation-free dynamic programming algorithm that calculates prior and posterior mapping variances and scales linearly in the number of phylogeny tips. Our procedure suggests a general framework that can be used to efficiently compute higher-order moments of stochastic mapping summaries without simulations. We demonstrate the usefulness of our algorithm by extending previously developed statistical tests for rate variation across sites and for detecting evolutionarily conserved regions in genomic sequences.
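
    As a toy illustration of the simulation-based side of stochastic mapping, the sketch below simulates substitution histories on a single branch for a two-state chain that leaves either state at a constant rate; in this special case the number of jumps over a branch of length t is Poisson with mean rate·t, so the simulated mean and variance of the substitution count can be checked analytically. This is a minimal sketch under those assumptions, not the linear-time algorithm the paper develops.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_mapping(rate, t, n_sims):
    """Simulate substitution histories on one branch of length t for a
    two-state chain that leaves either state at `rate`; return jump counts."""
    counts = np.empty(n_sims, dtype=int)
    for i in range(n_sims):
        elapsed, n = 0.0, 0
        while True:
            elapsed += rng.exponential(1.0 / rate)   # waiting time to next jump
            if elapsed > t:
                break
            n += 1
        counts[i] = n
    return counts

counts = simulate_mapping(rate=1.0, t=2.0, n_sims=20000)
print(counts.mean(), counts.var())   # both should sit near rate * t = 2
```

The paper's contribution is precisely to avoid this simulation loop: the moments it computes exactly are what the Monte Carlo mean and variance above only approximate.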

  13. Estimation of distribution overlap of urn models.

    PubMed

    Hampton, Jerrad; Lladser, Manuel E

    2012-01-01

    A classical problem in statistics is estimating the expected coverage of a sample, which has had applications in gene expression, microbial ecology, optimization, and even numismatics. Here we consider a related extension of this problem to random samples of two discrete distributions. Specifically, we estimate what we call the dissimilarity probability of a sample, i.e., the probability of a draw from one distribution not being observed in n draws from another distribution. We show our estimator of dissimilarity to be a U-statistic and a uniformly minimum variance unbiased estimator of dissimilarity over the largest appropriate range of n. Furthermore, despite the non-Markovian nature of our estimator when applied sequentially over n, we show it converges uniformly in probability to the dissimilarity parameter, and we present criteria when it is approximately normally distributed and admits a consistent jackknife estimator of its variance. As proof of concept, we analyze V35 16S rRNA data to discern between various microbial environments. Other potential applications concern any situation where dissimilarity of two discrete distributions may be of interest. For instance, in SELEX experiments, each urn could represent a random RNA pool and each draw a possible solution to a particular binding site problem over that pool. The dissimilarity of these pools is then related to the probability of finding binding site solutions in one pool that are absent in the other.
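
    For known urn compositions the dissimilarity probability has a closed form by independence: the probability that one draw from p lands in a category unseen in n draws from q is Σᵢ pᵢ(1 − qᵢ)ⁿ. The sketch below computes it exactly and checks it by naive Monte Carlo; the paper's U-statistic estimator, which works from samples alone, is not reproduced here, and the urn compositions are invented.

```python
import numpy as np

def dissimilarity_exact(p, q, n):
    """P(a draw from p falls in a category not seen in n draws from q)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * (1.0 - q) ** n))

def dissimilarity_mc(p, q, n, n_sims=20000, seed=0):
    """Naive Monte Carlo check of the same probability."""
    rng = np.random.default_rng(seed)
    k = len(p)
    hits = 0
    for _ in range(n_sims):
        draw_p = rng.choice(k, p=p)            # one draw from the first urn
        sample_q = rng.choice(k, size=n, p=q)  # n draws from the second urn
        hits += draw_p not in sample_q
    return hits / n_sims

p = [0.5, 0.3, 0.2]   # illustrative urn compositions
q = [0.1, 0.6, 0.3]
print(dissimilarity_exact(p, q, n=5))
```

As n grows the probability shrinks toward the mass that p places on categories where q is exactly zero.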

  14. On optimal current patterns for electrical impedance tomography.

    PubMed

    Demidenko, Eugene; Hartov, Alex; Soni, Nirmal; Paulsen, Keith D

    2005-02-01

    We develop a statistical criterion for optimal patterns in planar circular electrical impedance tomography. These patterns minimize the total variance of the estimation for the resistance or conductance matrix. It is shown that trigonometric patterns (Isaacson, 1986), originally derived from the concept of distinguishability, are a special case of our optimal statistical patterns. New optimal random patterns are introduced. Recovering the electrical properties of the measured body is greatly simplified when optimal patterns are used. The Neumann-to-Dirichlet map and the optimal patterns are derived for a homogeneous medium with an arbitrary distribution of the electrodes on the periphery. As a special case, optimal patterns are developed for a practical EIT system with a finite number of electrodes. For a general nonhomogeneous medium, with no a priori restriction, the optimal patterns for the resistance and conductance matrix are the same. However, for a homogeneous medium, the best current pattern is the worst voltage pattern and vice versa. We study the effect of the number and the width of the electrodes on the estimate of resistivity and conductivity in a homogeneous medium. We confirm experimentally that the optimal patterns produce minimum conductivity variance in a homogeneous medium. Our statistical model is able to discriminate between a homogeneous agar phantom and one with a 2 mm air hole with error probability (p-value) 1/1000.
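
    The trigonometric patterns mentioned above are simple to construct: for L equally spaced electrodes, the currents are cos(jθ) and sin(jθ) evaluated at the electrode angles. The sketch below is my own construction following that standard definition, not code from the paper; it checks that every pattern injects zero net current and that the patterns are mutually orthogonal.

```python
import numpy as np

L = 16                                     # number of electrodes
theta = 2.0 * np.pi * np.arange(L) / L     # equally spaced electrode angles

patterns = []
for j in range(1, L // 2):
    patterns.append(np.cos(j * theta))     # current at electrode k: cos(j * theta_k)
    patterns.append(np.sin(j * theta))
patterns.append(np.cos((L // 2) * theta))  # highest-frequency pattern (+1, -1, +1, ...)
T = np.array(patterns)                     # (L-1) x L: a full set of current patterns

print(T.shape)
print(np.abs(T.sum(axis=1)).max())         # each pattern carries zero net current
```

Zero row sums reflect charge conservation (current in equals current out), and orthogonality is what makes the patterns convenient for estimating the resistance or conductance matrix.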

  15. Calculating Higher-Order Moments of Phylogenetic Stochastic Mapping Summaries in Linear Time

    PubMed Central

    Dhar, Amrit

    2017-01-01

    Abstract Stochastic mapping is a simulation-based method for probabilistically mapping substitution histories onto phylogenies according to continuous-time Markov models of evolution. This technique can be used to infer properties of the evolutionary process on the phylogeny and, unlike parsimony-based mapping, conditions on the observed data to randomly draw substitution mappings that do not necessarily require the minimum number of events on a tree. Most stochastic mapping applications simulate substitution mappings only to estimate the mean and/or variance of two commonly used mapping summaries: the number of particular types of substitutions (labeled substitution counts) and the time spent in a particular group of states (labeled dwelling times) on the tree. Fast, simulation-free algorithms for calculating the mean of stochastic mapping summaries exist. Importantly, these algorithms scale linearly in the number of tips/leaves of the phylogenetic tree. However, to our knowledge, no such algorithm exists for calculating higher-order moments of stochastic mapping summaries. We present one such simulation-free dynamic programming algorithm that calculates prior and posterior mapping variances and scales linearly in the number of phylogeny tips. Our procedure suggests a general framework that can be used to efficiently compute higher-order moments of stochastic mapping summaries without simulations. We demonstrate the usefulness of our algorithm by extending previously developed statistical tests for rate variation across sites and for detecting evolutionarily conserved regions in genomic sequences. PMID:28177780

  16. Doppler color imaging. Principles and instrumentation.

    PubMed

    Kremkau, F W

    1992-01-01

    DCI acquires Doppler-shifted echoes from a cross-section of tissue scanned by an ultrasound beam. These echoes are then presented in color and superimposed on the gray-scale anatomic image of non-Doppler-shifted echoes received during the scan. The flow echoes are assigned colors according to the color map chosen. Usually red, yellow, or white indicates positive Doppler shifts (approaching flow) and blue, cyan, or white indicates negative shifts (receding flow). Green is added to indicate variance (disturbed or turbulent flow). Several pulses (the number is called the ensemble length) are needed to generate a color scan line. Linear, convex, phased, and annular arrays are used to acquire the gray-scale and color-flow information. Doppler color-flow instruments are pulsed-Doppler instruments and are subject to the same limitations, such as Doppler angle dependence and aliasing, as other Doppler instruments. Color controls include gain, TGC, map selection, variance on/off, persistence, ensemble length, color/gray priority, Nyquist limit (PRF), baseline shift, wall filter, and color window angle, location, and size. Doppler color-flow instruments generally have output intensities intermediate between those of gray-scale imaging and pulsed-Doppler duplex instruments. Although there is no known risk with the use of color-flow instruments, prudent practice dictates that they be used for medical indications and with the minimum exposure time and instrument output required to obtain the needed diagnostic information.
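
    A back-of-envelope calculation connects two of the controls listed above: the Nyquist limit ties the PRF to the highest Doppler shift, and hence flow speed, that can be displayed without aliasing. A hedged sketch using the standard Doppler equation f_D = 2·f0·v·cos(θ)/c; the probe frequency and PRF below are illustrative.

```python
import math

# Doppler equation: f_D = 2 * f0 * v * cos(theta) / c, with c ~ 1540 m/s in soft tissue.
# Aliasing occurs once f_D exceeds the Nyquist limit PRF / 2.
def max_unaliased_velocity(f0_hz, prf_hz, theta_deg, c=1540.0):
    """Highest flow speed (m/s) detectable without aliasing (f_D = PRF/2)."""
    return (prf_hz / 2.0) * c / (2.0 * f0_hz * math.cos(math.radians(theta_deg)))

v = max_unaliased_velocity(f0_hz=5e6, prf_hz=10e3, theta_deg=0.0)
print(round(v, 3))   # 0.77 m/s for a 5-MHz probe at PRF 10 kHz, 0 degree angle
```

Raising the PRF or lowering the transmit frequency raises this ceiling, which is why both appear among the aliasing-related controls.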

  17. Multifractal Properties of Process Control Variables

    NASA Astrophysics Data System (ADS)

    Domański, Paweł D.

    2017-06-01

    A control system is an inevitable element of any industrial installation, and its quality significantly affects overall process performance. Assessing whether a control system needs improvement requires relevant and constructive measures. Various methods exist, such as time-domain measures, Minimum Variance benchmarks, Gaussian and non-Gaussian statistical factors, and fractal and entropy indexes. The majority of approaches use time series of control variables and are able to cover many phenomena, but process complexities and human interventions cause effects that are hardly visible to standard measures. It is shown that signals originating from industrial installations have multifractal properties and that such an analysis may extend the standard approach with further observations. The work is based on industrial and simulation data. The analysis delivers additional insight into the properties of the control system and the process, and helps to discover internal dependencies and human factors that are otherwise hard to detect.

  18. Computation of Optimal Actuator/Sensor Locations

    DTIC Science & Technology

    2013-12-26

    weighting matrices Q = I and R = 0.01, and a minimum variance LQ-cost (with V = I), a plot of the L2 norm of the control signal versus actuator location... [figure residue: panels plotting relative linear-quadratic cost against actuator location for Q = I with R = 100, 1, 0.01, and 0.0001]

  19. Spatio-temporal Reconstruction of Neural Sources Using Indirect Dominant Mode Rejection.

    PubMed

    Jafadideh, Alireza Talesh; Asl, Babak Mohammadzadeh

    2018-04-27

    Adaptive minimum variance based beamformers (MVB) have been successfully applied to magnetoencephalogram (MEG) and electroencephalogram (EEG) data to localize brain activities. However, the performance of these beamformers degrades when correlated or interference sources exist. To overcome this problem, we propose applying the indirect dominant mode rejection (iDMR) beamformer to brain source localization. By modifying the measurement covariance matrix, this method makes MVB applicable to source localization in the presence of correlated and interference sources. Numerical results on both EEG and MEG data demonstrate that the presented approach accurately reconstructs the time courses of active sources and localizes those sources with high spatial resolution. In addition, results on real AEF data show the good performance of iDMR in empirical situations. Hence, iDMR can be reliably used for brain source localization, especially when correlated and interference sources are present.
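
    For context, the minimum variance beamformer weights this record builds on are the classic LCMV/MVDR solution: minimize output power subject to unit gain toward the source. A generic sketch with a synthetic covariance matrix (my own illustration, not the iDMR modification the paper proposes):

```python
import numpy as np

def mvdr_weights(R, a):
    """LCMV/MVDR weights: w = R^{-1} a / (a^H R^{-1} a), the minimum variance
    beamformer with a distortionless (unit-gain) constraint toward the source."""
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / np.vdot(a, Ri_a)

rng = np.random.default_rng(3)
m = 8                                                  # sensors/channels
a = np.exp(1j * np.pi * np.arange(m) * np.sin(0.3))    # steering vector for one source
N = rng.standard_normal((m, 1000)) + 1j * rng.standard_normal((m, 1000))
R = N @ N.conj().T / 1000 + np.eye(m)                  # sample covariance + diagonal loading

w = mvdr_weights(R, a)
print(abs(np.vdot(w, a)))   # unit gain toward the source, as constrained
```

The failure mode the paper addresses arises when a source correlated with the target leaks into R; iDMR reworks the covariance matrix before these weights are formed.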

  20. Modelling and Simulation of the Dynamics of the Antigen-Specific T Cell Response Using Variable Structure Control Theory.

    PubMed

    Anelone, Anet J N; Spurgeon, Sarah K

    2016-01-01

    Experimental and mathematical studies in immunology have revealed that the dynamics of the programmed T cell response to vigorous infection can be conveniently modelled using a sigmoidal or a discontinuous immune response function. This paper hypothesizes strong synergies between this existing work and the dynamical behaviour of engineering systems with a variable structure control (VSC) law. These findings motivate the interpretation of the immune system as a variable structure control system. It is shown that dynamical properties as well as conditions to analytically assess the transition from health to disease can be developed for the specific T cell response from the theory of variable structure control. In particular, it is shown that the robustness properties of the specific T cell response as observed in experiments can be explained analytically using a VSC perspective. Further, the predictive capacity of the VSC framework to determine the T cell help required to overcome chronic Lymphocytic Choriomeningitis Virus (LCMV) infection is demonstrated. The findings demonstrate that studying the immune system using variable structure control theory provides a new framework for evaluating immunological dynamics and experimental observations. A modelling and simulation tool results with predictive capacity to determine how to modify the immune response to achieve healthy outcomes which may have application in drug development and vaccine design.
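
    A minimal sketch of the VSC idea the paper builds on: a discontinuous sign-function control law rejects any bounded disturbance it dominates, which is the robustness property mapped onto the T cell response. The plant, gains, and disturbance below are all invented for illustration.

```python
import math

# First-order plant x' = u + d(t) with unknown bounded disturbance |d| <= 0.5.
# The variable structure law u = -k * sign(x), with k > sup|d|, drives x to the
# sliding surface x = 0 and keeps it there despite the disturbance.
k, dt = 1.0, 1e-3
x = 2.0
for step in range(10_000):
    t = step * dt
    d = 0.5 * math.sin(2.0 * t)                          # disturbance, unknown to the law
    u = -k * math.copysign(1.0, x) if x != 0.0 else 0.0  # discontinuous control
    x += (u + d) * dt                                    # Euler step
print(abs(x))   # small residual chatter about x = 0
```

The discrete switching produces the characteristic chattering about the surface; in the paper's analogy, the immune response function plays the role of this switching term.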

  1. Impact of clinical input variable uncertainties on ten-year atherosclerotic cardiovascular disease risk using new pooled cohort equations.

    PubMed

    Gupta, Himanshu; Schiros, Chun G; Sharifov, Oleg F; Jain, Apurva; Denney, Thomas S

    2016-08-31

    The recently released American College of Cardiology/American Heart Association (ACC/AHA) guideline recommends the Pooled Cohort equations for evaluating the atherosclerotic cardiovascular risk of individuals. The impact of clinical input variable uncertainties on the estimates of ten-year cardiovascular risk based on ACC/AHA guidelines is not known. Using the publicly available National Health and Nutrition Examination Survey dataset (2005-2010), we computed maximum and minimum ten-year cardiovascular risks by assuming clinically relevant variations/uncertainties in input of age (0-1 year) and ±10 % variation in total cholesterol, high density lipoprotein cholesterol, and systolic blood pressure, and by assuming uniform distribution of the variance of each variable. We analyzed the changes in risk category compared to the actual inputs at 5 % and 7.5 % risk limits, as these limits define the thresholds for consideration of drug therapy in the new guidelines. The new Pooled Cohort equations for risk estimation were implemented in a custom software package. Based on our input variances, changes in risk category were possible in up to 24 % of the population cohort at both 5 % and 7.5 % risk boundary limits. This trend was consistently noted across all subgroups except in African American males, where most of the cohort had ≥7.5 % baseline risk regardless of the variation in the variables. The uncertainties in the input variables can alter the risk categorization. The impact of these variances on the ten-year risk needs to be incorporated into the patient/clinician discussion and clinical decision making. Incorporating good clinical practices for the measurement of critical clinical variables and robust standardization of laboratory parameters to more stringent reference standards is extremely important for successful implementation of the new guidelines.
Furthermore, the ability to customize the risk calculator inputs to better represent unique clinical circumstances would be highly desirable in future versions of the risk calculator.
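
    The uncertainty-propagation idea is easy to reproduce with a stand-in model. The sketch below pushes the stated input uncertainties (0-1 year in age, ±10 % in the laboratory and blood pressure inputs) through a hypothetical logistic risk function, whose coefficients are invented and are NOT the published Pooled Cohort equations, and reports whether the risk category could change at the 7.5 % threshold.

```python
import itertools
import math

def risk(age, tc, hdl, sbp):
    """Hypothetical logistic risk model; coefficients invented for illustration,
    NOT the ACC/AHA Pooled Cohort equations."""
    z = -10.8 + 0.11 * age + 0.008 * tc - 0.02 * hdl + 0.012 * sbp
    return 1.0 / (1.0 + math.exp(-z))

base = {"age": 55, "tc": 200.0, "hdl": 50.0, "sbp": 130.0}

# Uncertainty ranges: 0-1 year in age, +/-10% in the other inputs.
ranges = {
    "age": (base["age"], base["age"] + 1),
    "tc":  (base["tc"] * 0.9,  base["tc"] * 1.1),
    "hdl": (base["hdl"] * 0.9, base["hdl"] * 1.1),
    "sbp": (base["sbp"] * 0.9, base["sbp"] * 1.1),
}

# This stand-in is monotone in each input, so the extremes occur at range corners.
risks = [risk(*combo) for combo in itertools.product(*ranges.values())]
lo, hi = min(risks), max(risks)
print(f"risk {lo:.3f} to {hi:.3f}; category may change at 7.5%: {lo < 0.075 <= hi}")
```

With a non-monotone model a grid or interval analysis would replace the corner evaluation, but the categorization question at the 5 % and 7.5 % limits is the same.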

  2. African-American adolescents’ stress responses after the 9/11/01 terrorist attacks

    PubMed Central

    Barnes, Vernon A.; Treiber, Frank A.; Ludwig, David A.

    2012-01-01

    Purpose To examine the impact of indirect exposure to the 9/11/01 attacks upon physical and emotional stress-related responses in a community sample of African-American (AA) adolescents. Methods Three months after the 9/11/01 terrorist attacks, 406 AA adolescents (mean age [SD] of 16.1 ± 1.3 years) from an inner-city high school in Augusta, GA were evaluated with a 12-item 5-point Likert scale measuring loss of psychosocial resources (PRS) such as control, hope, optimism, and perceived support, a 17-item 5-point Likert scale measuring post-traumatic stress symptomatology (PCL), and measures of state and trait anger, anger expression, and hostility. Given the observational nature of the study, statistical differences and correlations were evaluated for effect size before statistical testing (5% minimum variance explained). Bootstrapping was used for testing mean differences and differences between correlations. Results PCL scores indicated that approximately 10% of the sample was experiencing probable clinically significant levels of post-traumatic distress (PCL score > 50). The PCL and PRS were moderately correlated with a r = .59. Gender differences for the PCL and PRS were small, accounting for 1% of the total variance. Higher PCL scores were associated with higher state anger (r = .47), as well as measures of anger-out (r = .32) and trait anger (r = .34). Higher PRS scores were associated only with higher state anger (r = .27). Scores on the two 9/11/01-related scales were not statistically associated (i.e., less than 5% of the variance explained) with traits of anger control, anger-in, or hostility. Conclusions The majority of students were not overly stressed by indirect exposure to the events of 9/11/01, perhaps owing to the temporal, social, and/or geographical distance from the event. 
Those who reported greater negative impact appeared to also be experiencing higher levels of current anger and exhibited a characterologic style of higher overt anger expression. PMID:15737775
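
    The bootstrap testing mentioned in the Methods can be sketched in a few lines: resample each group with replacement, recompute the difference in means, and read a percentile confidence interval off the resampling distribution. The data below are synthetic, not the study's.

```python
import numpy as np

rng = np.random.default_rng(7)

def bootstrap_mean_diff_ci(a, b, n_boot=10_000, alpha=0.05):
    """Percentile bootstrap confidence interval for mean(a) - mean(b)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        # Resample each group independently, with replacement.
        diffs[i] = rng.choice(a, a.size).mean() - rng.choice(b, b.size).mean()
    return np.quantile(diffs, [alpha / 2.0, 1.0 - alpha / 2.0])

girls = rng.normal(45.0, 10.0, 200)   # synthetic PCL-style scores
boys = rng.normal(44.0, 10.0, 200)
lo, hi = bootstrap_mean_diff_ci(girls, boys)
print(lo, hi)   # 95% CI for the mean difference
```

An interval that contains 0 is consistent with the study's finding of only a small gender difference; the same machinery extends to differences between correlations.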

  3. Management of fresh water weeds (macrophytes) by vermicomposting using Eisenia fetida.

    PubMed

    Najar, Ishtiyaq Ahmed; Khan, Anisa B

    2013-09-01

    In the present study, the potential of Eisenia fetida to recycle different types of fresh water weeds (macrophytes) used as substrate in different reactors (Azolla pinnata reactor, Trapa natans reactor, Ceratophyllum demersum reactor, free-floating macrophytes mixture reactor, and submerged macrophytes mixture reactor) during a 2-month experiment is investigated. E. fetida showed significant variation in number and weight among the reactors and during the different fortnights (P < 0.05), with the maximum in the A. pinnata reactor (number 343.3 ± 10.23 %; weight 98.62 ± 4.23 %) and the minimum in the submerged macrophytes mixture reactor (number 105 ± 5.77 %; weight 41.07 ± 3.97 %). ANOVA showed significant variation in cocoon production (F4 = 15.67, P < 0.05) and mean body weight (F4 = 13.49, P < 0.05) among the different reactors, whereas growth rate (F3 = 23.62, P < 0.05) and relative growth rate (F3 = 4.91, P < 0.05) exhibited significant variation during the different fortnights. Reactors showed significant variation (P < 0.05) in pH, electrical conductivity (EC), organic carbon (OC), organic nitrogen (ON), and C/N ratio during the different fortnights, with increases in pH, EC, N, and K and decreases in OC and C/N ratio. Hierarchical cluster analysis grouped the five substrates (weeds) into three clusters: poor vermicompost substrates, moderate vermicompost substrates, and excellent vermicompost substrates. Two principal components (PCs) were identified by factor analysis with a cumulative variance of 90.43 %. PC1 accounts for 47.17 % of the total variance and represents a "reproduction factor," while PC2, explaining 43.26 % of the variance, represents a "growth factor." Thus, the nature of the macrophyte affects the growth and reproduction pattern of E. fetida among the different reactors; further, the addition of A. pinnata to other macrophyte reactors can improve their recycling by E. fetida.

  4. Noise level in a neonatal intensive care unit in Santa Marta - Colombia.

    PubMed Central

    Garrido Galindo, Angélica Patricia; Velez-Pereira, Andres M

    2017-01-01

    Abstract Introduction: The environment of neonatal intensive care units is influenced by numerous sources of noise emission, which contribute to raising noise levels and may cause hearing impairment and other physiological and psychological changes in the newborn, as well as problems for care staff. Objective: To evaluate the level and sources of noise in the neonatal intensive care unit. Methods: Noise was sampled for 20 consecutive days, every 60 seconds, with A-weighting and fast mode on a Type I sound level meter. We recorded the average, maximum, and minimum levels and the 10th, 50th, and 90th percentiles. The values were aggregated by hour and work shift and studied by analysis of variance. The sources were characterized in thirds of octaves. Results: The average level was 64.00 ± 3.62 dB(A), with a maximum of 76.04 ± 5.73 dB(A), a minimum of 54.84 ± 2.61 dB(A), and background noise of 57.95 ± 2.83 dB(A). We found four sources with levels between 16.8 and 63.3 dB(A). Statistical analysis showed significant differences between hours and work shifts, with higher values in the early hours of the day. Conclusion: The values presented exceed the standards suggested by several organizations. The sources identified and measured recorded high values at low frequencies. PMID:29213154
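
    One practical note on the reported averages: sound levels in dB(A) are energy-averaged, not arithmetically averaged, because the decibel is a logarithmic scale. A small sketch of the equivalent continuous level (Leq) computation, with made-up readings:

```python
import math

def leq(levels_db):
    """Equivalent continuous level: energy average of dB(A) samples,
    Leq = 10 * log10(mean(10^(L/10)))."""
    return 10.0 * math.log10(sum(10.0 ** (L / 10.0) for L in levels_db) / len(levels_db))

samples = [54.8, 64.0, 76.0]   # illustrative quiet, typical, and peak readings
print(round(leq(samples), 1))
print(round(sum(samples) / len(samples), 1))   # arithmetic mean, always <= Leq
```

The gap between the two numbers shows why loud but brief events (alarms, closing incubator doors) dominate the equivalent level even when most samples are quiet.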

  5. Distribution of kriging errors, the implications and how to communicate them

    NASA Astrophysics Data System (ADS)

    Li, Hong Yi; Milne, Alice; Webster, Richard

    2016-04-01

    Kriging in one form or another has become perhaps the most popular method for spatial prediction in environmental science. Each prediction is unbiased and of minimum variance, which itself is estimated. The kriging variances depend on the mathematical model chosen to describe the spatial variation; different models, however plausible, give rise to different minimized variances. Practitioners often compare models by so-called cross-validation before finally choosing the most appropriate for their kriging. One proceeds as follows. One removes a unit (a sampling point) from the whole set, kriges the value there and compares the kriged value with the value observed to obtain the deviation or error. One repeats the process for each and every point in turn and for all plausible models. One then computes the mean errors (MEs) and the mean of the squared errors (MSEs). Ideally a squared error should equal the corresponding kriging variance (σ_K^2), and so one is advised to choose the model for which on average the squared errors most nearly equal the kriging variances, i.e. the ratio MSDR = MSE/σ_K^2 ≈ 1. Maximum likelihood estimation of models almost guarantees that the MSDR equals 1, and so the kriging variances are unbiased predictors of the squared error across the region. The method is based on the assumption that the errors have a normal distribution. The squared deviation ratio (SDR) should therefore be distributed as χ2 with one degree of freedom with a median of 0.455. We have found that often the median of the SDR (MedSDR) is less, in some instances much less, than 0.455 even though the mean of the SDR is close to 1. It seems that in these cases the distributions of the errors are leptokurtic, i.e. they have an excess of predictions close to the true values, excesses near the extremes and a dearth of predictions in between. In these cases the kriging variances are poor measures of the uncertainty at individual sites. 
The uncertainty is typically under-estimated for the extreme observations and compensated for by over-estimating it for the other observations. Statisticians must tell users of this when they present maps of predictions. We illustrate the situation with results from mapping salinity in land reclaimed from the Yangtze delta in the Gulf of Hangzhou, China. There the apparent electrical conductivity (ECa) of the topsoil was measured at 525 points in a field of 2.3 ha. The marginal distribution of the observations was strongly positively skewed, and so the observed ECas were transformed to their logarithms to give an approximately symmetric distribution. That distribution was strongly platykurtic with short tails and no evident outliers. The logarithms were analysed as a mixed model of quadratic drift plus correlated random residuals with a spherical variogram. The kriged predictions deviated from their true values with an MSDR of 0.993, but with a MedSDR = 0.324. The coefficient of kurtosis of the deviations was 1.45, i.e. substantially larger than the 0 expected of a normal distribution. The reasons for this behaviour are being sought. The most likely explanation is that there are spatial outliers, i.e. points at which the observed values differ markedly from those at their closest neighbours.
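
    The diagnostics described above are easy to compute once the cross-validation errors and kriging variances are in hand. The sketch below checks the two summary statistics on synthetic, well-calibrated errors: the MSDR should be close to 1 and the median SDR close to 0.455, the median of χ² with one degree of freedom. (Illustrative code, not the authors' analysis.)

```python
import numpy as np

rng = np.random.default_rng(42)

def msdr_diagnostics(errors, kriging_var):
    """Mean and median of the squared deviation ratio SDR = error^2 / kriging variance."""
    sdr = np.asarray(errors) ** 2 / np.asarray(kriging_var)
    return sdr.mean(), np.median(sdr)

# Well-calibrated case: cross-validation errors truly N(0, kriging variance).
kv = np.full(100_000, 1.0)
err = rng.normal(0.0, np.sqrt(kv))
msdr, med = msdr_diagnostics(err, kv)
print(round(msdr, 3), round(med, 3))   # MSDR near 1, MedSDR near 0.455
```

A MedSDR well below 0.455 alongside an MSDR near 1, as in the salinity example, is exactly the leptokurtic signature the abstract warns about.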

  6. Distribution of kriging errors, the implications and how to communicate them

    NASA Astrophysics Data System (ADS)

    Li, HongYi; Milne, Alice; Webster, Richard

    2015-04-01

    Kriging in one form or another has become perhaps the most popular method for spatial prediction in environmental science. Each prediction is unbiased and of minimum variance, which itself is estimated. The kriging variances depend on the mathematical model chosen to describe the spatial variation; different models, however plausible, give rise to different minimized variances. Practitioners often compare models by so-called cross-validation before finally choosing the most appropriate for their kriging. One proceeds as follows. One removes a unit (a sampling point) from the whole set, kriges the value there and compares the kriged value with the value observed to obtain the deviation or error. One repeats the process for each and every point in turn and for all plausible models. One then computes the mean errors (MEs) and the mean of the squared errors (MSEs). Ideally a squared error should equal the corresponding kriging variance (σ_K^2), and so one is advised to choose the model for which on average the squared errors most nearly equal the kriging variances, i.e. the ratio MSDR = MSE/σ_K^2 ≈ 1. Maximum likelihood estimation of models almost guarantees that the MSDR equals 1, and so the kriging variances are unbiased predictors of the squared error across the region. The method is based on the assumption that the errors have a normal distribution. The squared deviation ratio (SDR) should therefore be distributed as χ^2 with one degree of freedom, with a median of 0.455. We have found that often the median of the SDR (MedSDR) is less, in some instances much less, than 0.455 even though the mean of the SDR is close to 1. It seems that in these cases the distributions of the errors are leptokurtic, i.e. they have an excess of predictions close to the true values, excesses near the extremes and a dearth of predictions in between. In these cases the kriging variances are poor measures of the uncertainty at individual sites. The uncertainty is typically underestimated for the extreme observations and compensated for by overestimating it for other observations. Statisticians must tell users of this when they present maps of predictions. We illustrate the situation with results from mapping salinity in land reclaimed from the Yangtze delta in the Gulf of Hangzhou, China. There the apparent electrical conductivity (EC_a) of the topsoil was measured at 525 points in a field of 2.3 ha. The marginal distribution of the observations was strongly positively skewed, and so the observed EC_a values were transformed to their logarithms to give an approximately symmetric distribution. That distribution was strongly platykurtic with short tails and no evident outliers. The logarithms were analysed as a mixed model of quadratic drift plus correlated random residuals with a spherical variogram. The kriged predictions deviated from their true values with an MSDR of 0.993, but with a MedSDR of 0.324. The coefficient of kurtosis of the deviations was 1.45, i.e. substantially larger than the 0 expected of a normal distribution. The reasons for this behaviour are being sought. The most likely explanation is that there are spatial outliers, i.e. points at which the observed values differ markedly from those at their closest neighbours.
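The cross-validation diagnostics described above (ME, MSE, MSDR and the median SDR) are straightforward to compute once the leave-one-out errors and the kriging variances are in hand. A minimal sketch in Python, assuming the errors and variances have already been produced by a kriging routine (the input numbers are illustrative only):

```python
import numpy as np

def cv_diagnostics(errors, kriging_variances):
    """Cross-validation summary statistics for a kriging model.

    errors            : leave-one-out errors (observed minus kriged)
    kriging_variances : kriging variance at each held-out point
    """
    e = np.asarray(errors, dtype=float)
    s2 = np.asarray(kriging_variances, dtype=float)
    sdr = e ** 2 / s2                    # squared deviation ratios
    return {
        "ME": e.mean(),                  # should be near 0 (unbiasedness)
        "MSE": (e ** 2).mean(),
        "MSDR": sdr.mean(),              # ideally near 1
        "MedSDR": np.median(sdr),        # ideally near 0.455, the median of chi-squared with 1 d.f.
    }

d = cv_diagnostics([1.0, -1.0, 2.0, -2.0], [1.0, 1.0, 4.0, 4.0])
```

A MedSDR well below 0.455 alongside an MSDR near 1 is exactly the leptokurtic pattern the abstract describes.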

  7. Spatial Prediction and Optimized Sampling Design for Sodium Concentration in Groundwater

    PubMed Central

    Shabbir, Javid; M. AbdEl-Salam, Nasser; Hussain, Tajammal

    2016-01-01

    Sodium is an integral part of water, and its excessive amount in drinking water causes high blood pressure and hypertension. In the present paper, the spatial distribution of sodium concentration in drinking water is modeled, and optimized sampling designs for selecting sampling locations are calculated for three divisions in Punjab, Pakistan. Universal kriging and Bayesian universal kriging are used to predict the sodium concentrations. Spatial simulated annealing is used to generate optimized sampling designs. Different estimation methods (i.e., maximum likelihood, restricted maximum likelihood, ordinary least squares, and weighted least squares) are used to estimate the parameters of the variogram models (i.e., exponential, Gaussian, spherical, and cubic). It is concluded that Bayesian universal kriging fits better than universal kriging. It is also observed that the universal kriging predictor provides minimum mean universal kriging variance for both adding and deleting locations during sampling design. PMID:27683016
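The variogram models named above have simple closed forms. Two of them are sketched below in Python; the parameter names c0 (nugget), c (partial sill) and a (range) are conventional, not taken from the paper:

```python
import numpy as np

def spherical(h, c0, c, a):
    """Spherical variogram for lag h > 0 (gamma(0) = 0 by definition):
    rises to the sill c0 + c at the range a and is flat beyond it."""
    h = np.asarray(h, dtype=float)
    inside = c0 + c * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h < a, inside, c0 + c)

def exponential(h, c0, c, a):
    """Exponential variogram for lag h > 0: approaches the sill asymptotically."""
    h = np.asarray(h, dtype=float)
    return c0 + c * (1.0 - np.exp(-h / a))
```

Fitting any of these by maximum likelihood, REML, OLS or WLS then amounts to choosing c0, c and a to match the empirical variogram.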

  8. Low-noise encoding of active touch by layer 4 in the somatosensory cortex.

    PubMed

    Hires, Samuel Andrew; Gutnisky, Diego A; Yu, Jianing; O'Connor, Daniel H; Svoboda, Karel

    2015-08-06

    Cortical spike trains often appear noisy, with the timing and number of spikes varying across repetitions of stimuli. Spiking variability can arise from internal (behavioral state, unreliable neurons, or chaotic dynamics in neural circuits) and external (uncontrolled behavior or sensory stimuli) sources. The amount of irreducible internal noise in spike trains, an important constraint on models of cortical networks, has been difficult to estimate, since behavior and brain state must be precisely controlled or tracked. We recorded from excitatory barrel cortex neurons in layer 4 during active behavior, where mice control tactile input through learned whisker movements. Touch was the dominant sensorimotor feature, with >70% spikes occurring in millisecond timescale epochs after touch onset. The variance of touch responses was smaller than expected from Poisson processes, often reaching the theoretical minimum. Layer 4 spike trains thus reflect the millisecond-timescale structure of tactile input with little noise.
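Sub-Poisson variability of the kind reported here is commonly quantified by the Fano factor, the variance of the spike count across repeated trials divided by its mean: it equals 1 for a Poisson process and approaches 0 at the theoretical minimum. A minimal sketch with invented counts:

```python
import numpy as np

def fano_factor(spike_counts):
    """Variance-to-mean ratio of spike counts across repeated trials."""
    counts = np.asarray(spike_counts, dtype=float)
    return counts.var() / counts.mean()

# A perfectly reliable response (same count on every trial) reaches the
# theoretical minimum of 0; Poisson-like variability gives a value near 1.
reliable = fano_factor([2, 2, 2, 2, 2])
variable = fano_factor([0, 1, 2, 3, 4])
```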

  9. Magnetopause surface fluctuations observed by Voyager 1

    NASA Technical Reports Server (NTRS)

    Lepping, R. P.; Burlaga, L. F.

    1979-01-01

    Moving out of the dawnside of the earth's magnetosphere, Voyager 1 crossed the magnetopause apparently seven times, despite the high spacecraft speed of 11 km/sec. Normals to the magnetopause and their associated error cones were estimated for each of the crossings using a minimum variance analysis of the internal magnetic field. The oscillating nature of the ecliptic plane component of these normals indicates that most of the multiple crossings were due to a wave-like surface disturbance moving tailward along the magnetopause. The wave, which was aperiodic, was modeled as a sequence of sine waves. The amplitude, wavelength, and speed were determined for two pairs of intervals from the measured slopes, occurrence times, and relative positions of six magnetopause crossings. The magnetopause thickness was estimated to lie in the range 300 to 700 km with higher values possible. The estimated amplitude of these waves was obviously small compared to their wavelengths.
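Minimum variance analysis of the kind used here estimates the boundary normal as the eigenvector of the magnetic-field covariance matrix with the smallest eigenvalue. A minimal sketch with synthetic field samples (the data are invented for illustration, not Voyager measurements):

```python
import numpy as np

def mva_normal(B):
    """Minimum variance analysis.

    B is an (N, 3) array of magnetic field samples; returns the unit
    eigenvector of the covariance matrix with the smallest eigenvalue,
    i.e. the estimated boundary normal (sign is arbitrary)."""
    M = np.cov(np.asarray(B, dtype=float), rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(M)   # eigenvalues in ascending order
    return eigvecs[:, 0]

# Synthetic field rotating in the x-y plane with a constant z component:
# the variance along z is zero, so the estimated normal points along z.
t = np.linspace(0.0, 2.0 * np.pi, 200)
B = np.column_stack([np.cos(t), np.sin(t), np.full_like(t, 5.0)])
n = mva_normal(B)
```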

  10. Variation and extrema of human interpupillary distance

    NASA Astrophysics Data System (ADS)

    Dodgson, Neil A.

    2004-05-01

    Mean interpupillary distance (IPD) is an important and oft-quoted measure in stereoscopic work. However, there is startlingly little agreement on what it should be. Mean IPD has been quoted in the stereoscopic literature as being anything from 58 mm to 70 mm. It is known to vary with respect to age, gender and race. Furthermore, the stereoscopic industry requires information on not just mean IPD, but also its variance and its extrema, because our products need to be able to cope with all possible users, including those with the smallest and largest IPDs. This paper brings together those statistics on IPD which are available. The key results are that mean adult IPD is around 63 mm, the vast majority of adults have IPDs in the range 50-75 mm, the wider range of 45-80 mm is likely to include (almost) all adults, and the minimum IPD for children (down to five years old) is around 40 mm.

  11. Determination of nitrosourea compounds in brain tissue by gas chromatography and electron capture detection.

    PubMed

    Hassenbusch, S J; Colvin, O M; Anderson, J H

    1995-07-01

    A relatively simple, high-sensitivity gas chromatographic assay is described for nitrosourea compounds, such as BCNU [1,3-bis(2-chloroethyl)-1-nitrosourea] and MeCCNU [1-(2-chloroethyl)-3-(trans-4-methylcyclohexyl)-1-nitrosourea], in small biopsy samples of brain and other tissues. After extraction with ethyl acetate, secondary amines in BCNU and MeCCNU are derivatized with trifluoroacetic anhydride. Compounds are separated and quantitated by gas chromatography using a capillary column with temperature programming and an electron capture detector. Standard curves of BCNU indicate a coefficient of variation of 0.066 +/- 0.018, a correlation coefficient of 0.929, and an extraction efficiency from whole brain of 68%, with a minimum detectable amount of 20 ng in 5-10 mg samples. The assay has been facile and sensitive in over 1000 brain biopsy specimens after intravenous and intraarterial infusions of BCNU.

  12. A robust pseudo-inverse spectral filter applied to the Earth Radiation Budget Experiment (ERBE) scanning channels

    NASA Technical Reports Server (NTRS)

    Avis, L. M.; Green, R. N.; Suttles, J. T.; Gupta, S. K.

    1984-01-01

    Computer simulations of a least squares estimator operating on the ERBE scanning channels are discussed. The estimator is designed to minimize the errors produced by nonideal spectral response to spectrally varying and uncertain radiant input. The three ERBE scanning channels cover a shortwave band, a longwave band, and a "total" band, from which the pseudo-inverse spectral filter estimates the radiance components in the shortwave and longwave bands. The radiance estimator draws on instantaneous field of view (IFOV) scene type information supplied by another algorithm of the ERBE software, and on a priori probabilistic models of the responses of the scanning channels to the IFOV scene types for given Sun-scene-spacecraft geometry. It is found that the pseudo-inverse spectral filter is stable, tolerant of errors in scene identification and in channel response modeling, and, in the absence of such errors, yields minimum variance and essentially unbiased radiance estimates.
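The core of such an estimator can be sketched as follows: given a modeled response matrix mapping the shortwave and longwave radiances to the three channel readings, the least-squares radiance estimate is recovered with the Moore-Penrose pseudo-inverse. The response matrix and radiances below are invented for illustration, not ERBE calibration values:

```python
import numpy as np

# Hypothetical response of the three scanning channels (rows: shortwave,
# longwave, total) to the two radiance components (shortwave, longwave).
A = np.array([[1.00, 0.05],
              [0.02, 1.00],
              [0.95, 0.98]])

true_radiance = np.array([120.0, 240.0])
measurements = A @ true_radiance          # noise-free channel readings

# Pseudo-inverse estimate of the two radiance components from three channels.
estimate = np.linalg.pinv(A) @ measurements
```

With noisy measurements the same pseudo-inverse gives the least-squares solution rather than exact recovery; a weighted variant would fold in the a priori channel-response statistics the abstract mentions.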

  13. Educators' perceptions and attitudes toward school counseling and student personnel services: A cultural perspective

    NASA Astrophysics Data System (ADS)

    McPhee, Sidney A.

    1985-12-01

    This study was designed to survey and compare attitudes and perceptions toward school counseling and student personnel programs as held by educators in the Caribbean. The subjects in the study comprised 275 teachers and administrators employed in public and private junior and senior high schools in Nassau, Bahamas. The statistical tests used to analyze the data were the Kruskal-Wallis one-way analysis of variance and the Friedman two-way analysis for repeated measures. The findings indicate that administrators at all levels expressed significantly more favorable attitudes and perceptions toward counseling and student personnel programs in the schools than teachers. Teachers in the study expressed the following: (a) serious concern regarding the competency of practicing counselors in their schools; (b) a need for clarification of their role and function in the guidance process and a clarification of the counselor's role; and (c) a need for minimum acceptable standards for school counseling positions.

  14. ESTIMATING LOW-FLOW FREQUENCIES OF UNGAGED STREAMS IN NEW ENGLAND.

    USGS Publications Warehouse

    Wandle, S. William

    1987-01-01

    Equations to estimate low flows were developed using multiple-regression analysis with a sample of 48 river basins, which were selected from the U. S. Geological Survey's network of gaged river basins in Massachusetts, New Hampshire, Rhode Island, Vermont, and southwestern Maine. Low-flow characteristics are represented by the 7Q2 and 7Q10 (the annual minimum 7-day mean low flow at the 2- and 10-year recurrence intervals). These statistics for each of the 48 basins were determined from a low-flow frequency analysis of streamflow records for 1942-71, or from a graphical or mathematical relationship if the record did not cover this 30-year period. Estimators for the mean and variance of the 7-day low flows at the index and short-term sites were used for two stations where discharge measurements of base flow were available and for two sites where the graphical technique was unsatisfactory.

  15. Application of inertial instruments for DSN antenna pointing and tracking

    NASA Technical Reports Server (NTRS)

    Eldred, D. B.; Nerheim, N. M.; Holmes, K. G.

    1990-01-01

    The feasibility of using inertial instruments to determine the pointing attitude of the NASA Deep Space Network antennas is examined. The objective is to obtain 1 mdeg pointing knowledge in both blind pointing and tracking modes to facilitate operation of the Deep Space Network 70 m antennas at 32 GHz. A measurement system employing accelerometers, an inclinometer, and optical gyroscopes is proposed. The initial pointing attitude is established by determining the direction of the local gravity vector using the accelerometers and the inclinometer, and the Earth's spin axis using the gyroscopes. Pointing during long-term tracking is maintained by integrating the gyroscope rates and augmenting these measurements with knowledge of the local gravity vector. A minimum-variance estimator is used to combine measurements to obtain the antenna pointing attitude. A key feature of the algorithm is its ability to recalibrate accelerometer parameters during operation. A survey of available inertial instrument technologies is also given.
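A minimum-variance combination of independent measurements weights each one by its inverse variance. A minimal sketch of that fusion step (the sensor readings and variances are invented for illustration):

```python
import numpy as np

def min_variance_combine(estimates, variances):
    """Inverse-variance weighted average: the unbiased linear combination
    of independent measurements with the smallest variance."""
    x = np.asarray(estimates, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = (1.0 / v) / np.sum(1.0 / v)        # weights sum to 1
    fused = np.sum(w * x)
    fused_var = 1.0 / np.sum(1.0 / v)      # never exceeds the smallest input variance
    return fused, fused_var

# e.g. a gyroscope-derived and an accelerometer-derived attitude angle
fused, var = min_variance_combine([10.0, 12.0], [1.0, 3.0])
```

A full antenna-pointing estimator would apply this recursively (as a Kalman-style filter) while recalibrating the accelerometer parameters, but the weighting principle is the same.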

  16. Stochastic Leader Gravitational Search Algorithm for Enhanced Adaptive Beamforming Technique

    PubMed Central

    Darzi, Soodabeh; Islam, Mohammad Tariqul; Tiong, Sieh Kiong; Kibria, Salehin; Singh, Mandeep

    2015-01-01

    In this paper, stochastic leader gravitational search algorithm (SL-GSA) based on randomized k is proposed. Standard GSA (SGSA) utilizes the best agents without any randomization, thus it is more prone to converge at suboptimal results. Initially, the new approach randomly chooses k agents from the set of all agents to improve the global search ability. Gradually, the set of agents is reduced by eliminating the agents with the poorest performances to allow rapid convergence. The performance of the SL-GSA was analyzed for six well-known benchmark functions, and the results are compared with SGSA and some of its variants. Furthermore, the SL-GSA is applied to the minimum variance distortionless response (MVDR) beamforming technique to ensure compatibility with real world optimization problems. The proposed algorithm demonstrates superior convergence rate and quality of solution for both real world problems and benchmark functions compared to the original algorithm and other recent variants of SGSA. PMID:26552032
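The MVDR weight vector such algorithms tune has a closed form, w = R^(-1)a / (a^H R^(-1) a), which minimizes output power subject to unit gain in the look direction. A sketch for a small uniform linear array; the array geometry and covariance below are illustrative, not from the paper:

```python
import numpy as np

def mvdr_weights(R, a):
    """MVDR beamformer weights: minimize w^H R w subject to w^H a = 1,
    for covariance matrix R and steering vector a."""
    Ri_a = np.linalg.solve(R, a)            # R^{-1} a without forming the inverse
    return Ri_a / (a.conj() @ Ri_a)

# 4-element half-wavelength array steered to broadside (0 degrees)
n = np.arange(4)
a = np.exp(1j * np.pi * n * np.sin(0.0))    # steering vector (all ones at broadside)
R = np.eye(4) + 0.5 * np.outer(a, a.conj()) # toy covariance matrix
w = mvdr_weights(R, a)
```

The distortionless constraint w^H a = 1 holds by construction; the optimizers in the paper search over related weight parameterizations rather than evaluating this closed form directly.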

  17. Grey Comprehensive Evaluation of Biomass Power Generation Project Based on Group Judgement

    NASA Astrophysics Data System (ADS)

    Xia, Huicong; Niu, Dongxiao

    2017-06-01

    The comprehensive evaluation of benefits is an important task that needs to be carried out at all stages of a biomass power generation project. This paper proposed an improved grey comprehensive evaluation method based on the triangular whitenization function. To improve the objectivity of weights calculated from a single reference-comparison judgment method, this paper introduced group judgment into the weighting process. In the process of grey comprehensive evaluation, this paper invited a number of experts to estimate the benefit level of projects, and optimized the basic estimations based on the minimum variance principle to improve the accuracy of the evaluation result. Taking a biomass power generation project as an example, the grey comprehensive evaluation result showed that the benefit level of this project was good. This example demonstrates the feasibility of the grey comprehensive evaluation method based on group judgment for benefit evaluation of biomass power generation projects.

  18. Optimal coordination and control of posture and movements.

    PubMed

    Johansson, Rolf; Fransson, Per-Anders; Magnusson, Måns

    2009-01-01

    This paper presents a theoretical model of stability and coordination of posture and locomotion, together with algorithms for continuous-time quadratic optimization of motion control. Explicit solutions to the Hamilton-Jacobi equation for optimal control of rigid-body motion are obtained by solving an algebraic matrix equation. The stability is investigated with Lyapunov function theory and it is shown that global asymptotic stability holds. It is also shown how optimal control and adaptive control may act in concert in the case of unknown or uncertain system parameters. The solution describes motion strategies of minimum effort and variance. The proposed optimal control is formulated to be suitable as a posture and movement model for experimental validation and verification. The combination of adaptive and optimal control makes this algorithm a candidate for coordination and control of functional neuromuscular stimulation as well as of prostheses. Validation examples with experimental data are provided.

  19. Computer simulations and real-time control of ELT AO systems using graphical processing units

    NASA Astrophysics Data System (ADS)

    Wang, Lianqi; Ellerbroek, Brent

    2012-07-01

    The adaptive optics (AO) simulations at the Thirty Meter Telescope (TMT) have been carried out using the efficient, C based multi-threaded adaptive optics simulator (MAOS, http://github.com/lianqiw/maos). By porting time-critical parts of MAOS to graphical processing units (GPU) using NVIDIA CUDA technology, we achieved a 10-fold speed-up for each GTX 580 GPU used compared to a modern quad core CPU. Each time step of a full scale end to end simulation for the TMT narrow field infrared AO system (NFIRAOS) takes only 0.11 seconds in a desktop with two GTX 580s. We also demonstrate that the TMT minimum variance reconstructor can be assembled in matrix vector multiply (MVM) format in 8 seconds with 8 GTX 580 GPUs, meeting the TMT requirement for updating the reconstructor. Analysis shows that it is also possible to apply the MVM within the required latency using 8 GTX 580s.

  20. Analysis of linkage effects among industry sectors in China's stock market before and after the financial crisis

    NASA Astrophysics Data System (ADS)

    Yang, Rui; Li, Xiangyang; Zhang, Tong

    2014-10-01

    This paper uses two physics-derived techniques, the minimum spanning tree and the hierarchical tree, to investigate the networks formed by CITIC (China International Trust and Investment Corporation) industry indices in three periods from 2006 to 2013. The study demonstrates that obvious industry clustering effects exist in the networks, and Durable Consumer Goods, Industrial Products, Information Technology, Frequently Consumption and Financial Industry are the core nodes in the networks. We also use the rolling window technique to investigate the dynamic evolution of the networks' stability, by calculating the mean correlations and mean distances, as well as the variance of correlations and the distances of these indices. China's stock market is still immature and subject to administrative interventions. Therefore, through this analysis, regulators can focus on monitoring the core nodes to ensure the overall stability of the entire market, while investors can enhance their portfolio allocations or investment decision-making.

  1. First observation of lion roar-like emissions in Saturn's magnetosheath

    NASA Astrophysics Data System (ADS)

    Pisa, David; Sulaiman, Ali H.; Santolik, Ondrej; Hospodarsky, George B.; Kurth, William S.; Gurnett, Donald A.

    2017-04-01

    Electromagnetic whistler mode waves known as "lion roars" have been reported by many missions inside the terrestrial magnetosheath. We show the observation of similar intense emissions in Saturn's magnetosheath as detected by the Cassini spacecraft. The emissions were observed inside the dawn sector (MLT ~0730) of the magnetosheath over a time period of nine hours before the satellite crossed the bow shock and entered the solar wind. The emissions were narrow-banded, with a typical frequency of about 15 Hz, well below the local electron cyclotron frequency (fce ~100 Hz). Using the minimum variance analysis method, we show that the waves are right hand circularly polarized and propagate at small wave normal angles with respect to the ambient magnetic field. Here, for the first time, we report evidence of lion roar-like emissions in Saturn's magnetosheath, which represents a new and unique parameter regime.

  2. Method and computer product to increase accuracy of time-based software verification for sensor networks

    DOEpatents

    Foo Kune, Denis (Saint Paul, MN); Mahadevan, Karthikeyan (Mountain View, CA)

    2011-01-25

    A recursive verification protocol to reduce the time variance due to delays in the network by putting the subject node at most one hop from the verifier node provides for an efficient manner to test wireless sensor nodes. Since the software signatures are time based, recursive testing will give a much cleaner signal for positive verification of the software running on any one node in the sensor network. In this protocol, the main verifier checks its neighbor, which in turn checks its neighbor, and so on until all nodes have been verified. This ensures minimum time delays for the software verification. Should a node fail the test, the software verification downstream is halted until an alternative path (one not including the failed node) is found. Utilizing techniques well known in the art, having a node tested twice, or not at all, can be avoided.

  3. 2dFLenS and KiDS: determining source redshift distributions with cross-correlations

    NASA Astrophysics Data System (ADS)

    Johnson, Andrew; Blake, Chris; Amon, Alexandra; Erben, Thomas; Glazebrook, Karl; Harnois-Deraps, Joachim; Heymans, Catherine; Hildebrandt, Hendrik; Joudaki, Shahab; Klaes, Dominik; Kuijken, Konrad; Lidman, Chris; Marin, Felipe A.; McFarland, John; Morrison, Christopher B.; Parkinson, David; Poole, Gregory B.; Radovich, Mario; Wolf, Christian

    2017-03-01

    We develop a statistical estimator to infer the redshift probability distribution of a photometric sample of galaxies from its angular cross-correlation in redshift bins with an overlapping spectroscopic sample. This estimator is a minimum-variance weighted quadratic function of the data: a quadratic estimator. This extends and modifies the methodology presented by McQuinn & White. The derived source redshift distribution is degenerate with the source galaxy bias, which must be constrained via additional assumptions. We apply this estimator to constrain source galaxy redshift distributions in the Kilo-Degree imaging survey through cross-correlation with the spectroscopic 2-degree Field Lensing Survey, presenting results first as a binned step-wise distribution in the range z < 0.8, and then building a continuous distribution using a Gaussian process model. We demonstrate the robustness of our methodology using mock catalogues constructed from N-body simulations, and comparisons with other techniques for inferring the redshift distribution.

  4. Bayesian estimation of the discrete coefficient of determination.

    PubMed

    Chen, Ting; Braga-Neto, Ulisses M

    2016-12-01

    The discrete coefficient of determination (CoD) measures the nonlinear interaction between discrete predictor and target variables and has had far-reaching applications in Genomic Signal Processing. Previous work has addressed the inference of the discrete CoD using classical parametric and nonparametric approaches. In this paper, we introduce a Bayesian framework for the inference of the discrete CoD. We derive analytically the optimal minimum mean-square error (MMSE) CoD estimator, as well as a CoD estimator based on the Optimal Bayesian Predictor (OBP). For the latter estimator, exact expressions for its bias, variance, and root-mean-square (RMS) are given. The accuracy of both Bayesian CoD estimators with non-informative and informative priors, under fixed or random parameters, is studied via analytical and numerical approaches. We also demonstrate the application of the proposed Bayesian approach in the inference of gene regulatory networks, using gene-expression data from a previously published study on metastatic melanoma.
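The quantity being estimated here is CoD = (e0 - e)/e0, where e0 is the optimal prediction error for the target without the predictors and e is the error with them. A plug-in sketch from a joint probability table for a binary target, not the Bayesian MMSE estimator derived in the paper; the probabilities are invented:

```python
import numpy as np

def discrete_cod(joint):
    """Discrete coefficient of determination from a joint pmf table
    joint[i, j] = P(X = i, Y = j), with a binary target Y."""
    joint = np.asarray(joint, dtype=float)
    p_y = joint.sum(axis=0)                # marginal of the target
    eps0 = 1.0 - p_y.max()                 # best error ignoring X
    eps = 1.0 - joint.max(axis=1).sum()    # best error predicting Y from X
    return (eps0 - eps) / eps0

joint = np.array([[0.45, 0.05],            # X = 0
                  [0.05, 0.45]])           # X = 1
cod = discrete_cod(joint)
```

Here the predictor nearly determines the target, so the CoD is high (0.8); the Bayesian estimators in the paper replace the plug-in probabilities with posterior expectations.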

  5. QUANTUM MECHANICS. Quantum squeezing of motion in a mechanical resonator.

    PubMed

    Wollman, E E; Lei, C U; Weinstein, A J; Suh, J; Kronwald, A; Marquardt, F; Clerk, A A; Schwab, K C

    2015-08-28

    According to quantum mechanics, a harmonic oscillator can never be completely at rest. Even in the ground state, its position will always have fluctuations, called the zero-point motion. Although the zero-point fluctuations are unavoidable, they can be manipulated. Using microwave frequency radiation pressure, we have manipulated the thermal fluctuations of a micrometer-scale mechanical resonator to produce a stationary quadrature-squeezed state with a minimum variance of 0.80 times that of the ground state. We also performed phase-sensitive, back-action evading measurements of a thermal state squeezed to 1.09 times the zero-point level. Our results are relevant to the quantum engineering of states of matter at large length scales, the study of decoherence of large quantum systems, and for the realization of ultrasensitive sensing of force and motion. Copyright © 2015, American Association for the Advancement of Science.

  6. Ionic strength and DOC determinations from various freshwater sources to the San Francisco Bay

    USGS Publications Warehouse

    Hunter, Y.R.; Kuwabara, J.S.

    1994-01-01

    An accurate estimation of dissolved organic carbon (DOC) across the salinity gradient is significant for understanding the extent to which DOC could influence the speciation of metals such as zinc and copper. A low-temperature persulfate/oxygen/ultraviolet wet oxidation procedure was utilized for analyzing DOC samples, adjusted for ionic strength, from major freshwater sources of the northern and southern regions of San Francisco Bay. The ionic strength of samples was modified with a chemically defined seawater medium up to 0.7 M. Based on the results, a minimal effect of ionic strength on oxidation efficiency for DOC sources to the Bay over an ionic strength gradient of 0.0 to 0.7 M was observed. There were no major impacts of ionic strength on two Suwannee River fulvic acids. In general, the noted effects associated with ionic strength were smaller than the variances seen in the aquatic environment between high- and low-temperature methods.

  7. Climatological variables and the incidence of Dengue fever in Barbados.

    PubMed

    Depradine, Colin; Lovell, Ernest

    2004-12-01

    A retrospective study to determine relationships between the incidence of dengue cases and climatological variables and to obtain a predictive equation was carried out for the relatively small Caribbean island of Barbados which is divided into 11 parishes. The study used the weekly dengue cases and precipitation data for the years (1995 - 2000) that occurred in the small area of a single parish. Other climatological data were obtained from the local meteorological offices. The study used primarily cross correlation analysis and found the strongest correlation with the vapour pressure at a lag of 6 weeks. A weaker correlation occurred at a lag of 7 weeks for the precipitation. The minimum temperature had its strongest correlation at a lag of 12 weeks and the maximum temperature a lag of 16 weeks. There was a negative correlation with the wind speed at a lag of 3 weeks. The predictive models showed a maximum explained variance of 35%.
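Lagged cross-correlations of the kind used in this study can be computed by correlating the case series with a shifted copy of each climatological series. A sketch with synthetic data; the series are invented so that the cases follow the climate variable with a 6-week delay, mirroring the vapour-pressure result:

```python
import numpy as np

def lagged_correlation(cases, climate, lag):
    """Pearson correlation between cases[t] and climate[t - lag]."""
    cases = np.asarray(cases, dtype=float)
    climate = np.asarray(climate, dtype=float)
    return np.corrcoef(cases[lag:], climate[:len(climate) - lag])[0, 1]

# Synthetic weekly series: cases reproduce the climate series 6 weeks later.
rng = np.random.default_rng(0)
climate = rng.normal(size=200)
cases = np.roll(climate, 6)

# Scan lags 1..12 weeks for the strongest correlation.
best = max(range(1, 13), key=lambda k: lagged_correlation(cases, climate, k))
```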

  8. The effect of covariate mean differences on the standard error and confidence interval for the comparison of treatment means.

    PubMed

    Liu, Xiaofeng Steven

    2011-05-01

    The use of covariates is commonly believed to reduce the unexplained error variance and the standard error for the comparison of treatment means, but the reduction in the standard error is neither guaranteed nor uniform over different sample sizes. The covariate mean differences between the treatment conditions can inflate the standard error of the covariate-adjusted mean difference and can actually produce a larger standard error for the adjusted mean difference than that for the unadjusted mean difference. When the covariate observations are conceived of as randomly varying from one study to another, the covariate mean differences can be related to a Hotelling's T(2) statistic. Using this Hotelling's T(2) statistic, one can always find a minimum sample size to achieve a high probability of reducing the standard error and confidence interval width for the adjusted mean difference. ©2010 The British Psychological Society.

  9. Minimum Energy-Variance Filters for the detection of compact sources in crowded astronomical images

    NASA Astrophysics Data System (ADS)

    Herranz, D.; Sanz, J. L.; López-Caniego, M.; González-Nuevo, J.

    2006-10-01

    In this paper we address the common problem of the detection and identification of compact sources, such as stars or faraway galaxies, in astronomical images. The common approach, which consists of applying a matched filter to the data in order to remove noise and then searching for intensity peaks above a certain detection threshold, does not work well when the sources to be detected appear in large numbers over small regions of the sky, owing to source overlapping and interference among the filtered profiles of the sources. A new class of filter that balances noise removal with signal spatial concentration is introduced and then applied to simulated astronomical images of the sky at 857 GHz. We show that with the new filter it is possible to improve the ratio between true detections and false alarms with respect to the matched filter. For low detection thresholds, the improvement is ~40%.

  10. Texture and haptic cues in slant discrimination: reliability-based cue weighting without statistically optimal cue combination

    NASA Astrophysics Data System (ADS)

    Rosas, Pedro; Wagemans, Johan; Ernst, Marc O.; Wichmann, Felix A.

    2005-05-01

    A number of models of depth-cue combination suggest that the final depth percept results from a weighted average of independent depth estimates based on the different cues available. The weight of each cue in such an average is thought to depend on the reliability of each cue. In principle, such a depth estimation could be statistically optimal in the sense of producing the minimum-variance unbiased estimator that can be constructed from the available information. Here we test such models by using visual and haptic depth information. Different texture types produce differences in slant-discrimination performance, thus providing a means for testing a reliability-sensitive cue-combination model with texture as one of the cues to slant. Our results show that the weights for the cues were generally sensitive to their reliability but fell short of statistically optimal combination: we find reliability-based reweighting but not statistically optimal cue combination.

  11. A fully redundant double difference algorithm for obtaining minimum variance estimates from GPS observations

    NASA Technical Reports Server (NTRS)

    Melbourne, William G.

    1986-01-01

    In double differencing a regression system obtained from concurrent Global Positioning System (GPS) observation sequences, one either undersamples the system to avoid introducing colored measurement statistics, or one fully samples the system incurring the resulting non-diagonal covariance matrix for the differenced measurement errors. A suboptimal estimation result will be obtained in the undersampling case and will also be obtained in the fully sampled case unless the color noise statistics are taken into account. The latter approach requires a least squares weighting matrix derived from inversion of a non-diagonal covariance matrix for the differenced measurement errors instead of inversion of the customary diagonal one associated with white noise processes. Presented is the so-called fully redundant double differencing algorithm for generating a weighted double differenced regression system that yields equivalent estimation results, but features for certain cases a diagonal weighting matrix even though the differenced measurement error statistics are highly colored.
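
    The estimation issue can be illustrated with generalized least squares: differencing white-noise observations produces colored errors with a non-diagonal covariance C, and the minimum-variance estimate must weight by C^-1 rather than by the customary diagonal matrix. A minimal synthetic sketch (the design matrix and differencing operator are invented stand-ins, not the paper's GPS geometry):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 8
A = np.column_stack([np.ones(m), np.arange(m, dtype=float)])  # toy design matrix
x_true = np.array([2.0, 0.5])

# Differenced errors: e = D @ w with white noise w, so Cov(e) = D @ D.T,
# which is non-diagonal (colored) even though w itself is white.
D = np.eye(m) + np.diag(-np.ones(m - 1), k=1)   # simple differencing operator
C = D @ D.T
y = A @ x_true + D @ rng.normal(size=m)

Cinv = np.linalg.inv(C)
# Minimum-variance (GLS) estimate: weight by the full inverse covariance.
x_gls = np.linalg.solve(A.T @ Cinv @ A, A.T @ Cinv @ y)
# Suboptimal estimate that ignores the color (diagonal/identity weighting).
x_ols = np.linalg.solve(A.T @ A, A.T @ y)
print(x_gls, x_ols)
```

    The paper's contribution is an algebraic rearrangement of the double-differenced system that recovers a diagonal weighting matrix in certain cases, avoiding the explicit inversion of C.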

  12. Generation of uniformly distributed dose points for anatomy-based three-dimensional dose optimization methods in brachytherapy.

    PubMed

    Lahanas, M; Baltas, D; Giannouli, S; Milickovic, N; Zamboglou, N

    2000-05-01

    We have studied the accuracy of statistical parameters of dose distributions in brachytherapy using actual clinical implants. These include the mean, minimum and maximum dose values and the variance of the dose distribution inside the PTV (planning target volume), and on the surface of the PTV. These properties have been studied as a function of the number of uniformly distributed sampling points. These parameters, or the variants of these parameters, are used directly or indirectly in optimization procedures or for a description of the dose distribution. The accurate determination of these parameters depends on the sampling point distribution from which they have been obtained. Some optimization methods ignore catheters and critical structures surrounded by the PTV or alternatively consider as surface dose points only those on the contour lines of the PTV. D(min) and D(max) are extreme dose values which are either on the PTV surface or within the PTV. They must be avoided for specification and optimization purposes in brachytherapy. Using D(mean) and the variance of D which we have shown to be stable parameters, achieves a more reliable description of the dose distribution on the PTV surface and within the PTV volume than does D(min) and D(max). Generation of dose points on the real surface of the PTV is obligatory and the consideration of catheter volumes results in a realistic description of anatomical dose distributions.
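
    The stability claim can be illustrated with a toy Monte Carlo: as the number of uniformly drawn sampling points grows, the mean and variance of the sampled dose settle quickly, while the minimum and maximum keep drifting toward ever more extreme values. The lognormal "dose" below is a synthetic stand-in for a real implant dose distribution.

```python
import numpy as np

rng = np.random.default_rng(2)
dose = rng.lognormal(mean=0.0, sigma=0.5, size=200_000)   # synthetic doses

# Statistics as a function of the number of uniform sampling points:
# D_mean and the variance converge; D_min and D_max do not.
for n in (100, 1_000, 10_000, 100_000):
    s = dose[:n]
    print(n, round(s.mean(), 3), round(s.var(), 3),
          round(s.min(), 3), round(s.max(), 3))
```

    This is the practical reason the abstract recommends D_mean and the variance, rather than the extreme values, for dose specification and optimization.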

  13. Minimum variance rooting of phylogenetic trees and implications for species tree reconstruction.

    PubMed

    Mai, Uyen; Sayyari, Erfan; Mirarab, Siavash

    2017-01-01

    Phylogenetic trees inferred using commonly-used models of sequence evolution are unrooted, but the root position matters both for interpretation and downstream applications. This issue has been long recognized; however, whether the potential for discordance between the species tree and gene trees impacts methods of rooting a phylogenetic tree has not been extensively studied. In this paper, we introduce a new method of rooting a tree based on its branch length distribution; our method, which minimizes the variance of root to tip distances, is inspired by the traditional midpoint rerooting and is justified when deviations from the strict molecular clock are random. Like midpoint rerooting, the method can be implemented in a linear time algorithm. In extensive simulations that consider discordance between gene trees and the species tree, we show that the new method is more accurate than midpoint rerooting, but its relative accuracy compared to using outgroups to root gene trees depends on the size of the dataset and levels of deviations from the strict clock. We show high levels of error for all methods of rooting estimated gene trees due to factors that include effects of gene tree discordance, deviations from the clock, and gene tree estimation error. Our simulations, however, did not reveal significant differences between two equivalent methods for species tree estimation that use rooted and unrooted input, namely, STAR and NJst. Nevertheless, our results point to limitations of existing scalable rooting methods.
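
    The rooting criterion can be sketched directly: place the root so that the variance of root-to-tip distances is minimized. The toy version below restricts candidate roots to existing internal nodes and brute-forces the search (the published method also considers points along edges and runs in linear time); the 4-taxon tree is invented for illustration.

```python
import numpy as np

def tip_distances(adj, tips, root):
    """Depth-first traversal returning root-to-tip path lengths."""
    dist, stack, seen = {root: 0.0}, [root], {root}
    while stack:
        u = stack.pop()
        for v, w in adj[u]:
            if v not in seen:
                seen.add(v)
                dist[v] = dist[u] + w
                stack.append(v)
    return np.array([dist[t] for t in tips])

def minvar_root(adj, tips):
    """Candidate root (internal node) minimizing root-to-tip distance variance."""
    internal = [n for n in adj if n not in tips]
    return min(internal, key=lambda n: tip_distances(adj, tips, n).var())

# Unrooted tree ((A:1,B:1)u:1,(C:1,D:3)v) as an undirected adjacency map.
adj = {"u": [("A", 1.0), ("B", 1.0), ("v", 1.0)],
       "v": [("C", 1.0), ("D", 3.0), ("u", 1.0)],
       "A": [("u", 1.0)], "B": [("u", 1.0)],
       "C": [("v", 1.0)], "D": [("v", 3.0)]}
tips = ["A", "B", "C", "D"]
print(minvar_root(adj, tips))
```

    Here node v wins: its root-to-tip distances (2, 2, 1, 3) are more even than u's (1, 1, 2, 4), which is the sense in which random deviations from a strict clock are absorbed rather than interpreted as root displacement.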

  14. Visualizing Hyolaryngeal Mechanics in Swallowing Using Dynamic MRI

    PubMed Central

    Pearson, William G.; Zumwalt, Ann C.

    2013-01-01

    Introduction Coordinates of anatomical landmarks are captured using dynamic MRI to explore whether a proposed two-sling mechanism underlies hyolaryngeal elevation in pharyngeal swallowing. A principal components analysis (PCA) is applied to coordinates to determine the covariant function of the proposed mechanism. Methods Dynamic MRI (dMRI) data were acquired from eleven healthy subjects during a repeated swallows task. Coordinates mapping the proposed mechanism are collected from each dynamic (frame) of a dynamic MRI swallowing series of a randomly selected subject in order to demonstrate shape changes in a single subject. Coordinates representing minimum and maximum hyolaryngeal elevation of all 11 subjects were also mapped to demonstrate shape changes of the system among all subjects. MorphoJ software was used to perform PCA and determine vectors of shape change (eigenvectors) for elements of the two-sling mechanism of hyolaryngeal elevation. Results For both single subject and group PCAs, hyolaryngeal elevation accounted for the first principal component of variation. For the single subject PCA, the first principal component accounted for 81.5% of the variance. For the between-subjects PCA, the first principal component accounted for 58.5% of the variance. Eigenvectors and shape changes associated with this first principal component are reported. Discussion Eigenvectors indicate that two-muscle slings and associated skeletal elements function as components of a covariant mechanism to elevate the hyolaryngeal complex. Morphological analysis is useful to model shape changes in the two-sling mechanism of hyolaryngeal elevation. PMID:25090608
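
    The core of the landmark analysis is an eigendecomposition of the covariance of flattened (x, y) coordinates: the leading eigenvector is the dominant direction of shape change and its eigenvalue fraction is the "percent of variance" reported. A minimal numpy sketch on synthetic data (the study used MorphoJ on dMRI landmark configurations; the one-factor structure below is invented):

```python
import numpy as np

rng = np.random.default_rng(3)
n_frames, n_coords = 40, 10
# Synthetic data with one dominant mode of shape change ("elevation").
elevation = rng.normal(size=(n_frames, 1))
direction = rng.normal(size=(1, n_coords))
X = elevation @ direction + 0.1 * rng.normal(size=(n_frames, n_coords))

Xc = X - X.mean(axis=0)                    # center each coordinate
cov = Xc.T @ Xc / (n_frames - 1)           # landmark covariance matrix
evals, evecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
order = np.argsort(evals)[::-1]
explained = evals[order] / evals.sum()     # fraction of variance per PC
pc1 = evecs[:, order[0]]                   # eigenvector of shape change
print(f"PC1 explains {explained[0]:.1%} of the variance")
```

    In the study the analogous PC1 fractions were 81.5% (single subject) and 58.5% (between subjects), with the eigenvector interpreted as the covariant two-sling elevation mechanism.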

  15. Do drug treatment variables predict cognitive performance in multidrug-treated opioid-dependent patients? A regression analysis study

    PubMed Central

    2012-01-01

    Background Cognitive deficits and multiple psychoactive drug regimens are both common in patients treated for opioid-dependence. Therefore, we examined whether the cognitive performance of patients in opioid-substitution treatment (OST) is associated with their drug treatment variables. Methods Opioid-dependent patients (N = 104) who were treated either with buprenorphine or methadone (n = 52 in both groups) were given attention, working memory, verbal, and visual memory tests after they had been a minimum of six months in treatment. Group-wise results were analysed by analysis of variance. Predictors of cognitive performance were examined by hierarchical regression analysis. Results Buprenorphine-treated patients performed statistically significantly better in a simple reaction time test than methadone-treated ones. No other significant differences between groups in cognitive performance were found. In each OST drug group, approximately 10% of the attention performance could be predicted by drug treatment variables. Use of benzodiazepine medication predicted about 10% of performance variance in working memory. Treatment with more than one other psychoactive drug (than opioid or BZD) and frequent substance abuse during the past month predicted about 20% of verbal memory performance. Conclusions Although this study does not prove a causal relationship between multiple prescription drug use and poor cognitive functioning, the results are relevant for psychosocial recovery, vocational rehabilitation, and psychological treatment of OST patients. Especially for patients with BZD treatment, other treatment options should be actively sought. PMID:23121989

  16. A statistical mechanical theory for a two-dimensional model of water

    PubMed Central

    Urbic, Tomaz; Dill, Ken A.

    2010-01-01

    We develop a statistical mechanical model for the thermal and volumetric properties of waterlike fluids. Each water molecule is a two-dimensional disk with three hydrogen-bonding arms. Each water interacts with neighboring waters through a van der Waals interaction and an orientation-dependent hydrogen-bonding interaction. This model, which is largely analytical, is a variant of the Truskett and Dill (TD) treatment of the “Mercedes-Benz” (MB) model. The present model gives better predictions than TD for hydrogen-bond populations in liquid water by distinguishing strong cooperative hydrogen bonds from weaker ones. We explore properties versus temperature T and pressure p. We find that the volumetric and thermal properties follow the same trends with T as real water and are in good general agreement with Monte Carlo simulations of MB water, including the density anomaly, the minimum in the isothermal compressibility, and the decreased number of hydrogen bonds for increasing temperature. The model reproduces that pressure squeezes out water’s heat capacity and leads to a negative thermal expansion coefficient at low temperatures. In terms of water structuring, the variance in hydrogen-bonding angles increases with both T and p, while the variance in water density increases with T but decreases with p. Hydrogen bonding is an energy storage mechanism that leads to water’s large heat capacity (for its size) and to the fragility in its cagelike structures, which are easily melted by temperature and pressure to a more van der Waals-like liquid state. PMID:20550408

  17. A statistical mechanical theory for a two-dimensional model of water

    NASA Astrophysics Data System (ADS)

    Urbic, Tomaz; Dill, Ken A.

    2010-06-01

    We develop a statistical mechanical model for the thermal and volumetric properties of waterlike fluids. Each water molecule is a two-dimensional disk with three hydrogen-bonding arms. Each water interacts with neighboring waters through a van der Waals interaction and an orientation-dependent hydrogen-bonding interaction. This model, which is largely analytical, is a variant of the Truskett and Dill (TD) treatment of the "Mercedes-Benz" (MB) model. The present model gives better predictions than TD for hydrogen-bond populations in liquid water by distinguishing strong cooperative hydrogen bonds from weaker ones. We explore properties versus temperature T and pressure p. We find that the volumetric and thermal properties follow the same trends with T as real water and are in good general agreement with Monte Carlo simulations of MB water, including the density anomaly, the minimum in the isothermal compressibility, and the decreased number of hydrogen bonds for increasing temperature. The model reproduces that pressure squeezes out water's heat capacity and leads to a negative thermal expansion coefficient at low temperatures. In terms of water structuring, the variance in hydrogen-bonding angles increases with both T and p, while the variance in water density increases with T but decreases with p. Hydrogen bonding is an energy storage mechanism that leads to water's large heat capacity (for its size) and to the fragility in its cagelike structures, which are easily melted by temperature and pressure to a more van der Waals-like liquid state.

  18. A statistical mechanical theory for a two-dimensional model of water.

    PubMed

    Urbic, Tomaz; Dill, Ken A

    2010-06-14

    We develop a statistical mechanical model for the thermal and volumetric properties of waterlike fluids. Each water molecule is a two-dimensional disk with three hydrogen-bonding arms. Each water interacts with neighboring waters through a van der Waals interaction and an orientation-dependent hydrogen-bonding interaction. This model, which is largely analytical, is a variant of the Truskett and Dill (TD) treatment of the "Mercedes-Benz" (MB) model. The present model gives better predictions than TD for hydrogen-bond populations in liquid water by distinguishing strong cooperative hydrogen bonds from weaker ones. We explore properties versus temperature T and pressure p. We find that the volumetric and thermal properties follow the same trends with T as real water and are in good general agreement with Monte Carlo simulations of MB water, including the density anomaly, the minimum in the isothermal compressibility, and the decreased number of hydrogen bonds for increasing temperature. The model reproduces that pressure squeezes out water's heat capacity and leads to a negative thermal expansion coefficient at low temperatures. In terms of water structuring, the variance in hydrogen-bonding angles increases with both T and p, while the variance in water density increases with T but decreases with p. Hydrogen bonding is an energy storage mechanism that leads to water's large heat capacity (for its size) and to the fragility in its cagelike structures, which are easily melted by temperature and pressure to a more van der Waals-like liquid state.

  19. Minimum variance rooting of phylogenetic trees and implications for species tree reconstruction

    PubMed Central

    Sayyari, Erfan; Mirarab, Siavash

    2017-01-01

    Phylogenetic trees inferred using commonly-used models of sequence evolution are unrooted, but the root position matters both for interpretation and downstream applications. This issue has been long recognized; however, whether the potential for discordance between the species tree and gene trees impacts methods of rooting a phylogenetic tree has not been extensively studied. In this paper, we introduce a new method of rooting a tree based on its branch length distribution; our method, which minimizes the variance of root to tip distances, is inspired by the traditional midpoint rerooting and is justified when deviations from the strict molecular clock are random. Like midpoint rerooting, the method can be implemented in a linear time algorithm. In extensive simulations that consider discordance between gene trees and the species tree, we show that the new method is more accurate than midpoint rerooting, but its relative accuracy compared to using outgroups to root gene trees depends on the size of the dataset and levels of deviations from the strict clock. We show high levels of error for all methods of rooting estimated gene trees due to factors that include effects of gene tree discordance, deviations from the clock, and gene tree estimation error. Our simulations, however, did not reveal significant differences between two equivalent methods for species tree estimation that use rooted and unrooted input, namely, STAR and NJst. Nevertheless, our results point to limitations of existing scalable rooting methods. PMID:28800608

  20. Multiscale field-aligned current analyzer

    NASA Astrophysics Data System (ADS)

    Bunescu, C.; Marghitu, O.; Constantinescu, D.; Narita, Y.; Vogt, J.; Blǎgǎu, A.

    2015-11-01

    The magnetosphere-ionosphere coupling is achieved, essentially, by a superposition of quasi-stationary and time-dependent field-aligned currents (FACs), over a broad range of spatial and temporal scales. The planarity of the FAC structures observed by satellite data and the orientation of the planar FAC sheets can be investigated by the well-established minimum variance analysis (MVA) of the magnetic perturbation. However, such investigations are often constrained to a predefined time window, i.e., to a specific scale of the FAC. The multiscale field-aligned current analyzer, introduced here, relies on performing MVA continuously and over a range of scales by varying the width of the analyzing window, appropriate for the complexity of the magnetic field signatures above the auroral oval. The proposed technique provides multiscale information on the planarity and orientation of the observed FACs. A new approach, based on the derivative of the largest eigenvalue of the magnetic variance matrix with respect to the length of the analysis window, makes possible the inference of the current structures' location (center) and scale (thickness). The capabilities of the FAC analyzer are explored analytically for the magnetic field profile of the Harris sheet and tested on synthetic FAC structures with uniform current density and infinite or finite geometry in the cross-section plane of the FAC. The method is illustrated with data observed by the Cluster spacecraft on crossing the nightside auroral region, and the results are cross checked with the optical observations from the Time History of Events and Macroscale Interactions during Substorms ground network.
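
    The multiscale idea can be sketched by running the magnetic variance analysis over a range of window widths centered on a crossing and tracking the largest eigenvalue: it grows while the window still sits inside the current structure and saturates once the window covers it, which is what lets the analyzer infer the structure's center and thickness. The Harris-sheet profile and noise levels below are illustrative, not the paper's values.

```python
import numpy as np

t = np.linspace(-50, 50, 1001)
L = 10.0                                     # Harris-sheet half-thickness
B = np.column_stack([np.tanh(t / L),         # reversing tangential component
                     0.3 * np.ones_like(t),  # constant guide field
                     0.05 * np.random.default_rng(4).normal(size=t.size)])

center = t.size // 2
lams = []
for half in (50, 100, 200, 400):             # window half-widths (samples)
    seg = B[center - half:center + half]
    M = np.cov(seg.T)                        # 3x3 magnetic variance matrix
    lam_max = np.linalg.eigvalsh(M)[-1]      # largest eigenvalue
    lams.append(lam_max)
    print(2 * half, round(lam_max, 4))
```

    The paper's refinement is to use the derivative of this largest eigenvalue with respect to window length, whose behavior localizes the current structure's center and scale.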

  1. More about arc-polarized structures in the solar wind

    NASA Astrophysics Data System (ADS)

    Haaland, S.; Sonnerup, B.; Paschmann, G.

    2012-05-01

    We report results from a Cluster-based study of the properties of 28 arc-polarized magnetic structures (also called rotational discontinuities) in the solar wind. These Alfvénic events were selected from the database created and analyzed by Knetter (2005) by use of criteria chosen to eliminate ambiguous cases. His studies showed that standard, four-spacecraft timing analysis in most cases lacks sufficient accuracy to identify the small normal magnetic field components expected to accompany such structures, leaving unanswered the question of their existence. Our study aims to break this impasse. By careful application of minimum variance analysis of the magnetic field (MVAB) from each individual spacecraft, we show that, in most cases, a small but significantly non-zero magnetic field component was present in the direction perpendicular to the discontinuity. In the very few cases where this component was found to be large, examination revealed that MVAB had produced an unusual and unexplained orientation of the normal vector. On the whole, MVAB shows that many verifiable rotational discontinuities (Bn ≠ 0) exist in the solar wind and that their eigenvalue ratio (EVR = intermediate/minimum variance) can be extremely large (up to EVR = 400). Each of our events comprises four individual spacecraft crossings. The events include 17 ion-polarized cases and 11 electron-polarized ones. Fifteen of the ion events have widths ranging from 9 to 21 ion inertial lengths, with two outliers at 46 and 54. The electron-polarized events are generally thicker: nine cases fall in the range 20-71 ion inertial lengths, with two outliers at 9 and 13. In agreement with theoretical predictions from a one-dimensional, ideal, Hall-MHD description (Sonnerup et al., 2010), the ion-polarized events show a small depression in field magnitude, while the electron-polarized ones tend to show a small enhancement. This effect was also predicted by Wu and Lee (2000). 
Judging only from the sense of the plasma flow across our DDs, their propagation appears to be sunward as often as anti-sunward. However, we argue that this result can be misleading as a consequence of the possible presence of magnetic islands within the DDs. How the rotational discontinuities come into existence, how they evolve with time, and what roles they play in the solar wind remain open questions.
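
    Minimum variance analysis of the magnetic field (MVAB), as applied above, estimates the discontinuity normal as the eigenvector of the magnetic variance matrix with the smallest eigenvalue; the intermediate-to-minimum eigenvalue ratio (EVR) gauges how well that normal is determined. A minimal sketch with a synthetic crossing (field rotating in the x-y plane plus a small constant Bn along z, so the normal should come out close to z; values are illustrative):

```python
import numpy as np

def mvab(B):
    """Return the minimum-variance direction (normal estimate) and the
    intermediate/minimum eigenvalue ratio for an (N, 3) field series."""
    M = np.cov(B.T)                          # 3x3 magnetic variance matrix
    evals, evecs = np.linalg.eigh(M)         # ascending eigenvalues
    normal = evecs[:, 0]                     # min-variance eigenvector
    evr = evals[1] / evals[0]                # intermediate / minimum
    return normal, evr

rng = np.random.default_rng(5)
phi = np.linspace(0, np.pi, 400)
B = np.column_stack([np.cos(phi), np.sin(phi),
                     0.1 + 0.01 * rng.normal(size=phi.size)])  # small Bn + noise
normal, evr = mvab(B)
print(np.round(np.abs(normal), 3), round(evr, 1))
```

    A nonzero mean of the field component along the recovered normal is the Bn signature that distinguishes a rotational discontinuity from a tangential one; a large EVR (up to ~400 in the study) means the normal direction is well constrained.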

  2. A comparison between temporal and subband minimum variance adaptive beamforming

    NASA Astrophysics Data System (ADS)

    Diamantis, Konstantinos; Voxen, Iben H.; Greenaway, Alan H.; Anderson, Tom; Jensen, Jørgen A.; Sboros, Vassilis

    2014-03-01

    This paper compares the performance between temporal and subband Minimum Variance (MV) beamformers for medical ultrasound imaging. Both adaptive methods provide an optimized set of apodization weights but are implemented in the time and frequency domains respectively. Their performance is evaluated with simulated synthetic aperture data obtained from Field II and is quantified by the Full-Width-Half-Maximum (FWHM), the Peak-Side-Lobe level (PSL) and the contrast level. From a point phantom, a full sequence of 128 emissions, with one transducer element transmitting and all 128 elements receiving each time, provides a FWHM of 0.03 mm (0.14λ) for both implementations at a depth of 40 mm. This value is more than 20 times lower than the one achieved by conventional beamforming. The corresponding values of PSL are -58 dB and -63 dB for time and frequency domain MV beamformers, while a value no lower than -50 dB can be obtained from either Boxcar or Hanning weights. Interestingly, a single emission with central element #64 as the transmitting aperture provides results comparable to the full sequence. The values of FWHM are 0.04 mm and 0.03 mm and those of PSL are -42 dB and -46 dB for temporal and subband approaches. From a cyst phantom and for 128 emissions, the contrast level is calculated at -54 dB and -63 dB respectively at the same depth, with the initial shape of the cyst being preserved in contrast to conventional beamforming. The difference between the two adaptive beamformers is less significant in the case of a single emission, with the contrast level being estimated at -42 dB for the time domain and -43 dB for the frequency domain implementation. For the estimation of a single MV weight of a low resolution image formed by a single emission, 0.44 × 10^9 calculations per second are required for the temporal approach. The same numbers for the subband approach are 0.62 × 10^9 for the point and 1.33 × 10^9 for the cyst phantom. 
The comparison demonstrates similar resolution but slightly lower side-lobes and higher contrast for the subband approach at the expense of increased computation time.
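
    Both beamformers solve the same constrained optimization: minimize output power subject to unit gain in the look direction, giving the Capon weights w = R^-1 a / (a^H R^-1 a). A minimal narrowband array sketch (a uniform linear array with an invented interferer; medical ultrasound adds subarray averaging and broadband processing on top of this core step):

```python
import numpy as np

def mv_weights(R, a):
    """Minimum variance (Capon) weights: w = R^-1 a / (a^H R^-1 a)."""
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

n = 8                                        # array elements
def steering(theta, d=0.5, lam=1.0):
    """Steering vector for a uniform linear array, half-wavelength spacing."""
    return np.exp(1j * 2 * np.pi / lam * d * np.arange(n) * np.sin(theta))

rng = np.random.default_rng(6)
look = steering(np.deg2rad(0))               # desired direction
interferer = steering(np.deg2rad(30))        # strong interferer at 30 degrees
snaps = (np.outer(interferer, 10 * rng.normal(size=200))
         + 0.1 * (rng.normal(size=(n, 200)) + 1j * rng.normal(size=(n, 200))))
R = snaps @ snaps.conj().T / 200 + 1e-3 * np.eye(n)   # loaded sample covariance

w = mv_weights(R, look)
print(abs(w.conj() @ look))                  # unit gain in the look direction
print(abs(w.conj() @ interferer))            # deep null toward the interferer
```

    The adaptive null on the interferer is what buys the lower side-lobes and better contrast reported above, at the computational cost of forming and inverting R for every image point.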

  3. Long-term temperature observations from the troposphere to upper mesosphere over Mauna Loa, HI (19.5N, 155.6W) and Table Mountain, CA (34.4N, 117.7W) by JPL Lidars and nearby Radiosondes

    NASA Astrophysics Data System (ADS)

    Li, T.; Leblanc, T.; McDermid, S.; Wu, D. L.

    2007-12-01

    The JPL Rayleigh lidars at Mauna Loa Observatory (MLO), HI (19.5N, 155.6W) and Table Mountain Observatory (TMO), CA (34.4N, 117.7W) have been operated for regular nighttime acquisition of temperature data since 1994 and 1989 respectively. Using the monthly mean temperature vertical profiles observed by the JPL lidars (35-85 km) and nearby radiosondes (5-30 km), and with linear regression analysis, we are able to extract the temperature trend, solar cycle, El Niño Southern Oscillation (ENSO), and Quasi-Biennial Oscillation (QBO) signals from the troposphere to the upper mesosphere over MLO and TMO. The temperature trends behave differently at the two sites: a minor trend at MLO but a more negative trend at TMO. The solar cycle responses in temperature are generally positive above the middle stratosphere at both sites; below it, the response is negative at MLO and positive at TMO. During El Niño events, warmer temperatures in the troposphere and upper mesosphere and colder temperatures in the stratosphere and lower mesosphere were observed at MLO, and almost vice versa at TMO. Significant QBO oscillations were observed in the stratosphere with amplitudes of ~2-3 K, with clearer downward phase progression at MLO than at TMO. The mesospheric QBO near 75-85 km is clearly present at both sites with an amplitude of ~2 K and a longer vertical wavelength than in the stratosphere. In addition, we calculated the GW variances using lidar temperature profiles with 30-min and 1-km resolutions in the upper stratosphere (38-50 km) and lower mesosphere (50-62 km), and nearby radiosondes in the lower stratosphere (18-30 km). The monthly mean GW variances clearly show an annual oscillation with a maximum in winter and a minimum in summer. The QBO signature could be clearly seen in the lower stratosphere. 
    In the upper stratosphere, a longer-period oscillation (~5-6 years) with maxima in 2000-2001 and 2006 was revealed, synchronized with the solar maximum and minimum. No clear signature of GW activity in the lower mesosphere could be associated with that in the upper stratosphere, suggesting that part of the gravity waves may be either dissipated or reflected when crossing the stratopause region.
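
    The regression step is a multiple linear fit of the temperature series onto a trend term plus proxy time series for the periodic signals. A minimal synthetic sketch (invented monthly data and sinusoidal proxies; real analyses regress onto measured indices such as F10.7 for the solar cycle and equatorial winds for the QBO):

```python
import numpy as np

rng = np.random.default_rng(7)
months = np.arange(240)                            # 20 years of monthly data
trend = -0.05 / 12 * months                        # assumed -0.5 K/decade
solar = 0.8 * np.sin(2 * np.pi * months / 132)     # ~11-year cycle proxy
qbo = 0.5 * np.sin(2 * np.pi * months / 28)        # ~28-month QBO proxy
T = 250 + trend + solar + qbo + 0.1 * rng.normal(size=months.size)

# Design matrix: intercept, linear trend, solar proxy, QBO proxy.
X = np.column_stack([np.ones_like(months, dtype=float), months,
                     np.sin(2 * np.pi * months / 132),
                     np.sin(2 * np.pi * months / 28)])
coef, *_ = np.linalg.lstsq(X, T, rcond=None)
print(np.round(coef, 3))   # intercept, trend per month, solar amp, QBO amp
```

    The fitted coefficients recover the injected decadal trend and signal amplitudes; at each altitude bin the same fit yields the trend and response profiles discussed above.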

  4. Curative procedures of oral health and structural characteristics of primary dental care.

    PubMed

    Baumgarten, Alexandre; Hugo, Fernando Neves; Bulgarelli, Alexandre Fávero; Hilgert, Juliana Balbinot

    2018-04-09

    To evaluate if the provision of clinical dental care, by means of the main curative procedures recommended in Primary Health Care, is associated with team structural characteristics, considering the presence of a minimum set of equipment, instruments, and supplies in Brazil's primary health care services. A cross-sectional exploratory study based on data collected from 18,114 primary healthcare services with dental health teams in Brazil, in 2014. The outcome was created from the confirmation of five clinical procedures performed by the dentist, accounting for the presence of the minimum equipment, instruments, and supplies to carry them out. Covariates were related to structural characteristics. Poisson regression with robust variance was used to obtain crude and adjusted prevalence ratios, with 95% confidence intervals. A total of 1,190 (6.5%) dental health teams did not have the minimum equipment to provide clinical dental care and only 2,498 (14.8%) had all the instruments and supplies needed and provided the five curative procedures assessed. There was a positive association between the outcome and the composition of dental health teams, higher workload, performing analysis of health condition, and monitoring of oral health indicators. Additionally, the dental health teams that planned and programmed oral health actions with the primary care team monthly provided the procedures more frequently. Dentists with better employment status, career plans, graduation in public health or those who underwent permanent education activities provided the procedures more frequently. A relevant number of Primary Health Care services did not have the infrastructure to provide clinical dental care. 
However, better results were found in dental health teams with oral health technicians, with higher workload and that plan their activities, as well as in those that employed dentists with better working relationships, who had dentists with degrees in public health and who underwent permanent education activities.

  5. Counseling Received by Adolescents Undergoing Voluntary Medical Male Circumcision: Moving Toward Age-Equitable Comprehensive Human Immunodeficiency Virus Prevention Measures.

    PubMed

    Kaufman, Michelle R; Patel, Eshan U; Dam, Kim H; Packman, Zoe R; Van Lith, Lynn M; Hatzold, Karin; Marcell, Arik V; Mavhu, Webster; Kahabuka, Catherine; Mahlasela, Lusanda; Njeuhmeli, Emmanuel; Seifert Ahanda, Kim; Ncube, Getrude; Lija, Gissenge; Bonnecwe, Collen; Tobian, Aaron A R

    2018-04-03

    The minimum package of voluntary medical male circumcision (VMMC) services, as defined by the World Health Organization, includes human immunodeficiency virus (HIV) testing, HIV prevention counseling, screening/treatment for sexually transmitted infections, condom promotion, and the VMMC procedure. The current study aimed to assess whether adolescents received these key elements. Quantitative surveys were conducted among male adolescents aged 10-19 years (n = 1293) seeking VMMC in South Africa, Tanzania, and Zimbabwe. We used a summative index score of 8 self-reported binary items to measure receipt of important elements of the World Health Organization-recommended HIV minimum package and the US President's Emergency Plan for AIDS Relief VMMC recommendations. Counseling sessions were observed for a subset of adolescents (n = 44). To evaluate factors associated with counseling content, we used Poisson regression models with generalized estimating equations and robust variance estimation. Although counseling included VMMC benefits, little attention was paid to risks, including how to identify complications, what to do if they arise, and why avoiding sex and masturbation could prevent complications. Overall, older adolescents (aged 15-19 years) reported receiving more items in the recommended minimum package than younger adolescents (aged 10-14 years; adjusted β, 0.17; 95% confidence interval [CI], .12-.21; P < .001). Older adolescents were also more likely to report receiving HIV test education and promotion (42.7% vs 29.5%; adjusted prevalence ratio [aPR], 1.53; 95% CI, 1.16-2.02) and a condom demonstration with condoms to take home (16.8% vs 4.4%; aPR, 2.44; 95% CI, 1.30-4.58). No significant age differences appeared in reports of explanations of VMMC risks and benefits or uptake of HIV testing. These self-reported findings were confirmed during counseling observations. 
Moving toward age-equitable HIV prevention services during adolescent VMMC likely requires standardizing counseling content, as there are significant age differences in HIV prevention content received by adolescents.

  6. Single-row, double-row, and transosseous equivalent techniques for isolated supraspinatus tendon tears with minimal atrophy: A retrospective comparative outcome and radiographic analysis at minimum 2-year followup

    PubMed Central

    McCormick, Frank; Gupta, Anil; Bruce, Ben; Harris, Josh; Abrams, Geoff; Wilson, Hillary; Hussey, Kristen; Cole, Brian J.

    2014-01-01

    Purpose: The purpose of this study was to measure and compare the subjective, objective, and radiographic healing outcomes of single-row (SR), double-row (DR), and transosseous equivalent (TOE) suture techniques for arthroscopic rotator cuff repair. Materials and Methods: A retrospective comparative analysis of arthroscopic rotator cuff repairs by one surgeon from 2004 to 2010 at minimum 2-year followup was performed. Cohorts were matched for age, sex, and tear size. Subjective outcome variables included ASES, Constant, SST, UCLA, and SF-12 scores. Objective outcome variables included strength and active range of motion (ROM). Radiographic healing was assessed by magnetic resonance imaging (MRI). Statistical analysis was performed using analysis of variance (ANOVA), Mann-Whitney and Kruskal-Wallis tests, and the Fisher exact probability test, with significance set at P < 0.05. Results: Sixty-three patients completed the study requirements (20 SR, 21 DR, 22 TOE). There was a clinically and statistically significant improvement in outcomes with all repair techniques (ASES mean improvement, P < 0.0001). The mean final ASES scores were: SR 83 (SD 21.4); DR 87 (SD 18.2); TOE 87 (SD 13.2); (P = 0.73). There was a statistically significant improvement in strength for each repair technique (P < 0.001). There was no significant difference between techniques across all secondary outcome assessments: ASES improvement, Constant, SST, UCLA, SF-12, ROM, strength, and MRI re-tear rates. There was a decrease in re-tear rates from single-row (22%) to double-row (18%) to transosseous equivalent (11%); however, this difference was not statistically significant (P = 0.6). Conclusions: Compared to preoperatively, arthroscopic rotator cuff repair, using SR, DR, or TOE techniques, yielded a clinically and statistically significant improvement in subjective and objective outcomes at a minimum 2-year follow-up. Level of Evidence: Therapeutic level 3. PMID:24926159

  7. Cardiac data increase association between self-report and both expert ratings of task load and task performance in flight simulator tasks: An exploratory study.

    PubMed

    Lehrer, Paul; Karavidas, Maria; Lu, Shou-En; Vaschillo, Evgeny; Vaschillo, Bronya; Cheng, Andrew

    2010-05-01

    Seven professional airplane pilots participated in a one-session test in a Boeing 737-800 simulator. Mental workload for 18 flight tasks was rated by experienced test pilots (hereinafter called "expert ratings") and by study participants' self-report on NASA's Task Load Index (TLX) scale. Pilot performance was rated by a check pilot. The standard deviation of R-R intervals (SDNN) significantly added 3.7% improvement over the TLX in distinguishing high from moderate-load tasks and 2.3% improvement in distinguishing high from combined moderate and low-load tasks. Minimum RRI in the task significantly discriminated high- from medium- and low-load tasks, but did not add significant predictive variance to the TLX. The low-frequency/high-frequency (LF:HF) RRI ratio based on spectral analysis of R-R intervals, and ventricular relaxation time were each negatively related to pilot performance ratings independently of TLX values, while minimum and average RRI were positively related, showing added contribution of these cardiac measures for predicting performance. Cardiac results were not affected by controlling either for respiration rate or motor activity assessed by accelerometry. The results suggest that cardiac assessment can be a useful addition to self-report measures for determining flight task mental workload and risk for performance decrements. Replication on a larger sample is needed to confirm and extend the results. Copyright 2010 Elsevier B.V. All rights reserved.
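
    Two of the time-domain cardiac measures used above are straightforward to compute from an R-R interval series: SDNN is simply the standard deviation of the intervals, and minimum RRI is their smallest value. A minimal sketch with synthetic intervals (the LF:HF ratio additionally requires spectral analysis of the resampled series and is omitted here):

```python
import numpy as np

rng = np.random.default_rng(8)
rr = 800 + 40 * rng.normal(size=300)   # synthetic R-R intervals (ms)

sdnn = rr.std(ddof=1)                  # SDNN: std of normal-to-normal intervals
rr_min = rr.min()                      # minimum RRI during the task
rr_mean = rr.mean()                    # average RRI
print(round(sdnn, 1), round(rr_min, 1), round(rr_mean, 1))
```

    In the study, SDNN added predictive variance beyond the TLX for task load, while minimum and average RRI related to performance ratings.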

  8. Correlation of root dentin thickness and length of roots in mesial roots of mandibular molars.

    PubMed

    Dwivedi, Shweta; Dwivedi, Chandra Dhar; Mittal, Neelam

    2014-09-01

    The purpose of this study was to analyze the relation between tooth length and distal wall thickness of mesial roots in mandibular molars at different locations (i.e., 2 mm below the furcation and at the junction between the middle and apical thirds). Forty-five mandibular first molars were taken, and the length of each tooth was measured. Specimens were then divided into three groups according to their length: group I, long (24.2 mm ± 1.8); group II, medium (21 mm ± 1.5); and group III, short (16.8 mm ± 1.8). The mesial root of each tooth was marked at two levels: 2 mm below the furcation and at the junction of the apical and middle thirds of the root. The minimum thickness of the distal root dentine associated with the buccal and lingual canals of the mesial roots was measured. The distance between the buccal and lingual canals and the depth of concavity in the distal surface of the mesial roots were also measured. Statistical analysis was performed by using analysis of variance and the Student-Newman-Keuls test. The minimum thickness of the distal wall of the mesiobuccal canal was significantly different (P < .001) between groups I (long) and III (short). The distal wall thickness of the mesiobuccal root and the distal concavity of the mesial root of mandibular first molars were found to be thinner in longer teeth compared with shorter teeth. Copyright © 2014 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  9. Experimental design and analysis of activators regenerated by electron transfer-atom transfer radical polymerization experimental conditions for grafting sodium styrene sulfonate from titanium substrates.

    PubMed

    Foster, Rami N; Johansson, Patrik K; Tom, Nicole R; Koelsch, Patrick; Castner, David G

    2015-09-01

    A 2⁴ factorial design was used to optimize the activators regenerated by electron transfer-atom transfer radical polymerization (ARGET-ATRP) grafting of sodium styrene sulfonate (NaSS) films from trichlorosilane/10-undecen-1-yl 2-bromo-2-methylpropionate (ester ClSi) functionalized titanium substrates. The process variables explored were: (1) ATRP initiator surface functionalization reaction time; (2) grafting reaction time; (3) CuBr₂ concentration; and (4) reducing agent (vitamin C) concentration. All samples were characterized using X-ray photoelectron spectroscopy (XPS). Two statistical methods were used to analyze the results: (1) analysis of variance with [Formula: see text], using average [Formula: see text] XPS atomic percent as the response; and (2) principal component analysis using a peak list compiled from all the XPS composition results. Through this analysis combined with follow-up studies, the following conclusions are reached: (1) ATRP-initiator surface functionalization reaction times have no discernable effect on NaSS film quality; (2) minimum (≤24 h for this system) grafting reaction times should be used on titanium substrates, since NaSS film quality decreased and variability increased with increasing reaction times; (3) minimum (≤0.5 mg cm⁻² for this system) CuBr₂ concentrations should be used to graft thicker NaSS films; and (4) no deleterious effects were detected with increasing vitamin C concentration.

  10. Aroma types of flue-cured tobacco in China: spatial distribution and association with climatic factors

    NASA Astrophysics Data System (ADS)

    Yang, Chao; Wu, Wei; Wu, Shu-Cheng; Liu, Hong-Bin; Peng, Qing

    2014-02-01

    Aroma types of flue-cured tobacco (FCT) are classified into light, medium, and heavy in China. However, the spatial distribution of FCT aroma types and the relationships among aroma types, chemical parameters, and climatic variables were still unknown at the national scale. In the current study, multi-year averaged chemical parameters (total sugars, reducing sugars, nicotine, total nitrogen, chloride, and K2O) of FCT samples with grade C3F and climatic variables (mean, minimum, and maximum temperatures, rainfall, relative humidity, and sunshine hours) during the growth periods were collected from the main planting areas across China. Significant relationships were found between chemical parameters and climatic variables (p < 0.05). A spatial distribution map of FCT aroma types was produced using support vector machine algorithms and chemical parameters. Significant differences in chemical parameters and climatic variables were observed among the three aroma types based on one-way analysis of variance (p < 0.05). Areas with the light aroma type had significantly lower values of mean, maximum, and minimum temperatures than regions with the medium and heavy aroma types (p < 0.05). Areas with the heavy aroma type had significantly lower values of rainfall and relative humidity and higher values of sunshine hours than regions with the light and medium aroma types (p < 0.05). The output produced by classification and regression trees showed that sunshine hours, rainfall, and maximum temperature were the most important factors affecting FCT aroma types at the national scale.

  11. Heartbeat of the Sun from Principal Component Analysis and prediction of solar activity on a millenium timescale

    PubMed Central

    Zharkova, V. V.; Shepherd, S. J.; Popova, E.; Zharkov, S. I.

    2015-01-01

    We derive two principal components (PCs) of temporal magnetic field variations over solar cycles 21–24 from full disk magnetograms, covering about 39% of data variance, with σ = 0.67. These PCs are attributed to two main magnetic waves travelling from the opposite hemispheres with close frequencies and increasing phase shift. Using symbolic regression analysis we also derive mathematical formulae for these waves and calculate their summary curve, which we show is linked to the solar activity index. Extrapolation of the PCs backward for 800 years reveals two 350-year grand cycles superimposed on 22-year cycles, with features showing a remarkable resemblance to sunspot activity reported in the past, including the Maunder and Dalton minima. The summary curve calculated for the next millennium predicts three further grand cycles, with the closest grand minimum occurring in the forthcoming cycles 26–27, when the two magnetic field waves separate into the opposite hemispheres, leading to strongly reduced solar activity. These grand cycle variations are probed by an α-Ω dynamo model with meridional circulation. Dynamo waves are found to be generated with close frequencies whose interaction leads to beating effects responsible for the grand cycles (350–400 years) superimposed on a standard 22-year cycle. This approach opens a new era in the investigation and confident prediction of solar activity on a millennium timescale. PMID:26511513
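
    The principal components referred to in this record come from a standard eigen-decomposition of the data covariance matrix. A generic sketch on toy 2-D data (not solar magnetograms), showing how the explained-variance ratio of each PC is obtained:

```python
import numpy as np

def pca(X):
    """Return eigenvalues, eigenvectors, and explained-variance ratios."""
    Xc = X - X.mean(axis=0)                  # center each column
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)         # ascending eigenvalues
    order = np.argsort(vals)[::-1]           # sort descending
    vals, vecs = vals[order], vecs[:, order]
    return vals, vecs, vals / vals.sum()

# toy nearly-collinear data: the first PC should dominate
X = np.array([[1.0, 1.1], [2.0, 1.9], [3.0, 3.2], [4.0, 3.8], [5.0, 5.1]])
_, _, ratio = pca(X)
print(ratio[0] > 0.95)  # -> True
```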

  12. Deterring sales and provision of alcohol to minors: a study of enforcement in 295 counties in four states.

    PubMed

    Wagenaar, A C; Wolfson, M

    1995-01-01

    The authors analyzed patterns of criminal and administrative enforcement of the legal minimum drinking age across 295 counties in four states. Data on all arrests and other actions for liquor law violations from 1988 through 1990 were collected from the Federal Bureau of Investigation Uniform Crime Reporting System, State Uniform Crime Reports, and State Alcohol Beverage Control Agencies. Analytic methods used include Spearman rank-order correlation, single-linkage cluster analysis, and multiple regression modeling. Results confirmed low rates of enforcement of the legal drinking age, particularly for actions against those who sell or provide alcohol to underage youth. More than a quarter of all counties examined had no Alcoholic Beverage Control Agency actions against retailers for sales of alcohol to minors during the three years studied. Analyses indicate that 58 percent of the county-by-county variance in enforcement of the youth liquor law can be accounted for by eight community characteristics. The rate of arrests for general minor crime was strongly related to the rate of arrests for violations of the youth liquor law, while the number of law enforcement officers per population was not related to arrests for underage drinking. Raising the legal drinking age to 21 years had substantial benefits in terms of reduced drinking and reduced automobile crashes among youths, despite the low level of enforcement. The potential benefits of active enforcement of minimum drinking age statutes are substantial, particularly if efforts are focused on those who provide alcohol to youth.

  13. Comparison of the in vitro Effect of Chemical and Herbal Mouthwashes on Candida albicans

    PubMed Central

    Talebi, Somayeh; Sabokbar, Azar; Riazipour, Majid; Saffari, Mohsen

    2014-01-01

    Background: During recent decades, research has focused on finding scientific evidence for the effects of herbal medicines. Researchers are interested in herbal remedies for medication and aim to substitute herbal materials, which have limited side effects for humans, for chemical formulations. Objectives: The aim of the current study was to compare the in vitro effect of herbal and chemical mouthwashes against Candida albicans. Materials and Methods: In this research, we used a standard strain of C. albicans, PTCC 5027. A suspension was made from a fresh (24-hour) culture of C. albicans and the optical density (turbidity equating to a McFarland standard of 0.5) was read at 530 nm. The C. albicans suspension was cultured on a Sabouraud dextrose agar plate. Next, two wells were filled with mouthwashes and, after incubation at 30°C for 24 hours, the inhibition zone was measured. The minimum inhibitory concentration (MIC) and minimum fungicidal concentration (MFC) of the mouthwashes were determined. Data were analyzed using the SPSS software, independent t-tests, and one-way analysis of variance (ANOVA). Results: Based on the agar diffusion (P = 0.764) and the MIC and MFC tests (P = 0.879), there were no significant differences between the antifungal effects of herbal and chemical mouthwashes. Conclusions: This study showed that chemical mouthwashes acted better than herbal mouthwashes and that, among the different chemical mouthwashes, Oral B was the most effective. PMID:25741429
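
    For context, an MIC is read from a dilution series as the lowest tested concentration that fully inhibits visible growth. A schematic helper with made-up readings (not this study's data):

```python
def mic(concentrations, growth_observed):
    """Lowest tested concentration with no visible growth, else None."""
    inhibitory = [c for c, grew in zip(concentrations, growth_observed) if not grew]
    return min(inhibitory) if inhibitory else None

# two-fold dilution series (mg/mL) with hypothetical growth readings
print(mic([0.5, 1, 2, 4, 8], [True, True, False, False, False]))  # -> 2
```

    The MFC is determined analogously, but by subculturing each well and requiring killing (no regrowth) rather than mere inhibition.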

  14. The effect of substrate, season, and agroecological zone on mycoflora and aflatoxin contamination of poultry feed from Khyber Pakhtunkhwa, Pakistan.

    PubMed

    Alam, Sahib; Shah, Hamid Ullah; Khan, Habibullah; Magan, Naresh

    2012-10-01

    To study the effects of and interactions among feed types, seasons, and agroecological zones on the total fungal viable count and aflatoxin B1 (AFB1), B2 (AFB2), G1 (AFG1), and G2 (AFG2) production in poultry feed, an experiment was conducted using a three-factorial design. A total of 216 samples of poultry feed ingredients, viz. maize, wheat, rice, cotton seed meal (CSM), and finished products, that is, starter and finisher broilers' rations, were collected from the Peshawar, Swat, and D. I. Khan districts of Khyber Pakhtunkhwa, Pakistan, during the winter, spring, summer, and autumn seasons of the year 2007/2008. Analysis of variance showed that there was a complex interaction among all these factors and that this influenced the total fungal viable count and the relative concentrations of the aflatoxins produced. The minimum total culturable fungi (6.43 × 10³ CFUs/g) were counted in CSM from the D. I. Khan region in the winter season, while the maximum (26.68 × 10³ CFUs/g) was counted in starter ration from the Peshawar region in summer. Maximum concentrations of AFB1 (191.65 ng/g), AFB2 (86.85 ng/g), and AFG2 (89.90 ng/g) were observed during the summer season, whereas the concentration of AFG1 was maximum (167.82 ng/g) in autumn in finisher ration from the Peshawar region. Minimum aflatoxins were produced in the winter season across all three agroecological zones.

  15. Consequences of exposure to ionizing radiation for effector T cell function in vivo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rouse, B.T.; Hartley, D.; Doherty, P.C.

    1989-01-01

    The adoptive transfer of acutely primed and memory virus-immune CD8+ T cells causes enhanced meningitis in both cyclophosphamide (Cy) suppressed, and unsuppressed, recipients infected with lymphocytic choriomeningitis virus (LCMV). The severity of meningitis is assessed by counting cells in cerebrospinal fluid (CSF) obtained from the cisterna magna, which allows measurement of significant inflammatory processes ranging from 3 to more than 300 times the background number of cells found in mice injected with virus alone. Exposure of the donor immune population to ionizing radiation prior to transfer has shown that activated T cells from mice primed 7 or 8 days previously with virus may still promote a low level of meningitis in unsuppressed recipients following as much as 800 rads, while this effect is lost totally in Cy-suppressed mice at 600 rads. Memory T cells are more susceptible and show no evidence of in vivo effector function in either recipient population subsequent to 400 rads, a dose level which also greatly reduces the efficacy of acutely-primed T cells. The results are interpreted as indicating that heavily irradiated cells that are already fully functional show evidence of primary localization to the CNS and a limited capacity to cause pathology. Secondary localization, and events that require further proliferation of the T cells in vivo, are greatly inhibited by irradiation.

  16. Targeting of non-dominant antigens as a vaccine strategy to broaden T-cell responses during chronic viral infection.

    PubMed

    Holst, Peter J; Jensen, Benjamin A H; Ragonnaud, Emeline; Thomsen, Allan R; Christensen, Jan P

    2015-01-01

    In this study, we compared adenoviral vaccine vectors with the capacity to induce equally potent immune responses against non-dominant and immunodominant epitopes of murine lymphocytic choriomeningitis virus (LCMV). Our results demonstrate that vaccination targeting non-dominant epitopes facilitates potent virus-induced T-cell responses against immunodominant epitopes during subsequent challenge with highly invasive virus. In contrast, when an immunodominant epitope was included in the vaccine, the T-cell response associated with viral challenge remained focused on that epitope. Early after challenge with live virus, the CD8+ T cells specific for vaccine-encoded epitopes displayed a phenotype typically associated with prolonged/persistent antigenic stimulation, marked by high levels of KLRG-1, as compared to T cells reacting to epitopes not included in the vaccine. Notably, this association was lost over time in T cells specific for the dominant T cell epitopes, and these cells were fully capable of expanding in response to a new viral challenge. Overall, our data suggest a potential for broadening the antiviral CD8+ T-cell response by selecting non-dominant antigens to be targeted by vaccination. In addition, our findings suggest that prior adenoviral vaccination is not likely to negatively impact the long-term and protective immune response induced and maintained by a vaccine-attenuated chronic viral infection.

  17. Analysis of the role of tripeptidyl peptidase II in MHC class I antigen presentation in vivo1

    PubMed Central

    Kawahara, Masahiro; York, Ian A.; Hearn, Arron; Farfan, Diego; Rock, Kenneth L.

    2015-01-01

    Previous experiments using enzyme inhibitors and RNAi in cell lysates and cultured cells have suggested that tripeptidyl peptidase II (TPPII) plays a role in creating and destroying MHC class I-presented peptides. However, its precise contribution to these processes has been controversial. To elucidate the importance of TPPII in MHC class I antigen presentation, we analyzed TPPII-deficient gene-trapped mice and cell lines from these animals. In these mice, the expression level of TPPII was reduced by >90% compared to wild-type mice. Thymocytes from TPPII gene-trapped mice displayed more MHC class I on the cell surface, suggesting that TPPII normally limits antigen presentation by destroying peptides overall. TPPII gene-trapped mice responded as well as did wild-type mice to four epitopes from lymphocytic choriomeningitis virus (LCMV). The processing and presentation of peptide precursors with long N-terminal extensions in TPPII gene-trapped embryonic fibroblasts was modestly reduced, but in vivo immunization with recombinant lentiviral or vaccinia virus vectors revealed that such peptide precursors induced an equivalent CD8 T cell response in wild-type and TPPII-deficient mice. These data indicate that while TPPII contributes to the trimming of peptides with very long N-terminal extensions, TPPII is not essential for generating most MHC class I-presented peptides or for stimulating CTL responses to several antigens in vivo. PMID:19841172

  18. Impact of interpatient variability on organ dose estimates according to MIRD schema: Uncertainty and variance-based sensitivity analysis.

    PubMed

    Zvereva, Alexandra; Kamp, Florian; Schlattl, Helmut; Zankl, Maria; Parodi, Katia

    2018-05-17

    Variance-based sensitivity analysis (SA) is described and applied to the radiation dosimetry model proposed by the Committee on Medical Internal Radiation Dose (MIRD) for organ-level absorbed dose calculations in nuclear medicine. The uncertainties in the dose coefficients thus calculated are also evaluated. A Monte Carlo approach was used to compute first-order and total-effect SA indices, which rank the input factors according to their influence on the uncertainty in the output organ doses. These methods were applied to the radiopharmaceutical (S)-4-(3-¹⁸F-fluoropropyl)-L-glutamic acid (¹⁸F-FSPG) as an example. Since ¹⁸F-FSPG has 11 notable source regions, a 22-dimensional model was considered here, where 11 input factors are the time-integrated activity coefficients (TIACs) in the source regions and 11 input factors correspond to the sets of specific absorbed fractions (SAFs) employed in the dose calculation. The SA was restricted to the foregoing 22 input factors. The distributions of the input factors were built based on TIACs of five individuals to whom the radiopharmaceutical ¹⁸F-FSPG was administered and six anatomical models, representing two reference, two overweight, and two slim individuals. The self-absorption SAFs were mass-scaled to correspond to the reference organ masses. The estimated relative uncertainties were in the range 10%-30%, with a minimum and a maximum for the absorbed dose coefficients for the urinary bladder wall and the heart wall, respectively. The applied global variance-based SA enabled us to identify the input factors that have the highest influence on the uncertainty in the organ doses. With the applied mass-scaling of the self-absorption SAFs, these factors included the TIACs for absorbed dose coefficients in the source regions and the SAFs from blood as source region for absorbed dose coefficients in highly vascularized target regions. For some combinations of proximal target and source regions, the corresponding cross-fire SAFs were found to have an impact. Global variance-based SA has here been applied for the first time to the MIRD schema for internal dose calculation. Our findings suggest that uncertainties in computed organ doses can be substantially reduced by performing an accurate determination of TIACs in the source regions, accompanied by the estimation of individual source region masses, along with the usage of an appropriate blood distribution in a patient's body and, in a few cases, the cross-fire SAFs from proximal source regions. © 2018 American Association of Physicists in Medicine.
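
    The first-order and total-effect indices described in this record are commonly estimated with a Monte Carlo pick-and-freeze scheme (the Jansen estimators popularized by Saltelli et al.). The sketch below uses a toy additive model, not the MIRD dose model: for Y = X1 + 2·X2 with independent uniform inputs, the analytic first-order (and total) indices are 0.2 and 0.8.

```python
import numpy as np

def sobol_indices(model, d, n=4096, rng=None):
    """Jansen estimators of first-order (S) and total-effect (ST) Sobol indices."""
    rng = rng or np.random.default_rng(0)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S, ST = [], []
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                 # vary only input i between A and ABi
        fABi = model(ABi)
        S.append(1 - np.mean((fB - fABi) ** 2) / (2 * var))
        ST.append(np.mean((fA - fABi) ** 2) / (2 * var))
    return S, ST

# toy additive model with known indices S = (0.2, 0.8)
S, ST = sobol_indices(lambda X: X[:, 0] + 2 * X[:, 1], d=2)
```

    For an additive model like this one, first-order and total indices coincide; a gap between them would signal interaction effects.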

  19. Factors associated to quality of life in active elderly.

    PubMed

    Alexandre, Tiago da Silva; Cordeiro, Renata Cereda; Ramos, Luiz Roberto

    2009-08-01

    To analyze whether quality of life in active, healthy elderly individuals is influenced by functional status and sociodemographic characteristics, as well as psychological parameters. Study conducted in a sample of 120 active elderly subjects recruited from two open universities of the third age in the cities of São Paulo and São José dos Campos (Southeastern Brazil) between May 2005 and April 2006. Quality of life was measured using the abbreviated Brazilian version of the World Health Organization Quality of Life (WHOQOL-bref) questionnaire. Sociodemographic, clinical, and functional variables were measured through cross-culturally validated assessments: the Mini Mental State Examination, Geriatric Depression Scale, Functional Reach, One-Leg Balance Test, Timed Up and Go Test, Six-Minute Walk Test, Human Activity Profile, and a complementary questionnaire. Simple descriptive analyses, Pearson's correlation coefficient, Student's t-test for non-related samples, analyses of variance, linear regression analyses, and variance inflation factors were performed. The significance level for all statistical tests was set at 0.05. Linear regression analysis showed an independent correlation, without collinearity, between depressive symptoms measured by the Geriatric Depression Scale and four domains of the WHOQOL-bref. Not having a conjugal life implied greater perception in the social domain; developing leisure activities and having an income over five minimum wages implied greater perception in the environment domain. Functional status had no influence on the quality of life variable in the analysis models in active elderly subjects. In contrast, psychological factors, as assessed by the Geriatric Depression Scale, and sociodemographic characteristics, such as marital status, income, and leisure activities, had an impact on quality of life.
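
    The variance inflation factor used here to rule out collinearity is 1/(1 - R²), where R² comes from regressing each predictor on all the others. A generic numpy sketch on toy data (orthogonal predictors give VIF ≈ 1; values above roughly 5-10 usually flag collinearity):

```python
import numpy as np

def vifs(X):
    """Variance inflation factor for each column of the design matrix X."""
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        # regress column j on an intercept plus the remaining columns
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        r2 = 1 - np.var(y - A @ coef) / np.var(y)
        out.append(1.0 / (1.0 - r2))
    return out

# two orthogonal toy predictors
X = np.array([[1.0, 1.0], [-1.0, 1.0], [1.0, -1.0], [-1.0, -1.0]])
print([round(v, 2) for v in vifs(X)])  # -> [1.0, 1.0]
```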

  20. Towards understanding temporal and spatial dynamics of Bactrocera oleae (Rossi) infestations using decade-long agrometeorological time series

    NASA Astrophysics Data System (ADS)

    Marchi, Susanna; Guidotti, Diego; Ricciolini, Massimo; Petacchi, Ruggero

    2016-11-01

    Insect dynamics depend on temperature patterns, and therefore, global warming may lead to increasing frequencies and intensities of insect outbreaks. The aim of this work was to analyze the dynamics of the olive fruit fly, Bactrocera oleae (Rossi), in Tuscany (Italy). We profited from long-term records of insect infestation and weather data available from the regional database and agrometeorological network. We tested whether the analysis of 13 years of monitoring campaigns can be used as a basis for prediction models of B. oleae infestation. We related the percentage of infestation observed in the first part of the host-pest interaction and throughout the whole year to agrometeorological indices formulated for different time periods. A two-step approach was adopted to inspect the effect of weather on infestation: a generalized linear model with a binomial error distribution and principal component regression to reduce the number of agrometeorological factors and remove their collinearity. We found a consistent relationship between the degree of infestation and the temperature-based indices calculated for the previous period. The relationship was strongest with the minimum temperature of the winter season. Higher infestation was observed in years following warmer winters. The temperature of the previous winter and spring explained 66% of the variance of early-season infestation. The temperature of the previous winter and spring, and of the current summer, explained 72% of the variance of total annual infestation. These results highlight the importance of multiannual monitoring activity to fully understand the dynamics of B. oleae populations at a regional scale.

  1. Estimation of lipids and lean mass of migrating sandpipers

    USGS Publications Warehouse

    Skagen, Susan K.; Knopf, Fritz L.; Cade, Brian S.

    1993-01-01

    Estimation of lean mass and lipid levels in birds involves the derivation of predictive equations that relate morphological measurements and, more recently, total body electrical conductivity (TOBEC) indices to known lean and lipid masses. Using cross-validation techniques, we evaluated the ability of several published and new predictive equations to estimate the lean and lipid mass of Semipalmated Sandpipers (Calidris pusilla) and White-rumped Sandpipers (C. fuscicollis). We also tested the ideas of Morton et al. (1991), who stated that current statistical approaches to TOBEC methodology misrepresent precision in estimating body fat. Three published interspecific equations using TOBEC indices predicted the lean and lipid masses of our sample of birds with average errors of 8-28% and 53-155%, respectively. A new two-species equation relating lean mass and TOBEC indices yielded average errors of 4.6% and 23.2% in predicting lean and lipid mass, respectively. New intraspecific equations that estimate lipid mass directly from body mass, morphological measurements, and TOBEC indices yielded about a 13% error in lipid estimates. Body mass and morphological measurements explained a substantial portion of the variance (about 90%) in fat mass of both species. Addition of TOBEC indices improved the predictive model more for the smaller than for the larger sandpiper. TOBEC indices explained an additional 7.8% and 2.6% of the variance in fat mass and reduced the minimum breadth of prediction intervals by 0.95 g (32%) and 0.39 g (13%) for Semipalmated and White-rumped Sandpipers, respectively. The breadth of prediction intervals for models used to predict fat levels of individual birds must be considered when interpreting the resultant lipid estimates.
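
    Cross-validating a predictive equation, as done for the lean/lipid models in this record, can be sketched as leave-one-out prediction error: refit the equation with each observation held out, predict it, and average the percentage errors. Toy linear data below, not sandpiper measurements:

```python
import numpy as np

def loocv_mape(x, y):
    """Leave-one-out mean absolute percentage error of a line fit y = a + b*x."""
    errs = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        b, a = np.polyfit(x[mask], y[mask], 1)   # slope, intercept
        errs.append(abs(a + b * x[i] - y[i]) / abs(y[i]))
    return 100 * float(np.mean(errs))

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(loocv_mape(x, 2 * x + 1) < 1e-8)  # perfectly linear data -> True
```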

  2. Towards understanding temporal and spatial dynamics of Bactrocera oleae (Rossi) infestations using decade-long agrometeorological time series.

    PubMed

    Marchi, Susanna; Guidotti, Diego; Ricciolini, Massimo; Petacchi, Ruggero

    2016-11-01

    Insect dynamics depend on temperature patterns, and therefore, global warming may lead to increasing frequencies and intensities of insect outbreaks. The aim of this work was to analyze the dynamics of the olive fruit fly, Bactrocera oleae (Rossi), in Tuscany (Italy). We profited from long-term records of insect infestation and weather data available from the regional database and agrometeorological network. We tested whether the analysis of 13 years of monitoring campaigns can be used as a basis for prediction models of B. oleae infestation. We related the percentage of infestation observed in the first part of the host-pest interaction and throughout the whole year to agrometeorological indices formulated for different time periods. A two-step approach was adopted to inspect the effect of weather on infestation: a generalized linear model with a binomial error distribution and principal component regression to reduce the number of agrometeorological factors and remove their collinearity. We found a consistent relationship between the degree of infestation and the temperature-based indices calculated for the previous period. The relationship was strongest with the minimum temperature of the winter season. Higher infestation was observed in years following warmer winters. The temperature of the previous winter and spring explained 66% of the variance of early-season infestation. The temperature of the previous winter and spring, and of the current summer, explained 72% of the variance of total annual infestation. These results highlight the importance of multiannual monitoring activity to fully understand the dynamics of B. oleae populations at a regional scale.
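
    Principal component regression, the second step in this record's approach, regresses the response on a few leading PCs of the standardized predictors, which removes their collinearity. A minimal numpy sketch on toy collinear data (not the agrometeorological series):

```python
import numpy as np

def pcr_predict(X, y, k):
    """Fit y on the first k principal components of standardized X."""
    Z = (X - X.mean(0)) / X.std(0)                # standardize predictors
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    T = Z @ Vt[:k].T                              # scores on first k components
    D = np.column_stack([np.ones(len(y)), T])
    coef, *_ = np.linalg.lstsq(D, y, rcond=None)
    return D @ coef

x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
X = np.column_stack([x1, 2 * x1])     # two perfectly collinear predictors
yhat = pcr_predict(X, x1, k=1)
print(np.allclose(yhat, x1))          # one component suffices -> True
```

    Ordinary least squares would be ill-conditioned on this X; projecting onto one PC sidesteps the singularity.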

  3. Visibility characteristics and the impacts of air pollutants and meteorological conditions over Shanghai, China.

    PubMed

    Xue, Dan; Li, Chengfan; Liu, Qian

    2015-06-01

    In China, visibility has become an important issue that concerns both society and the scientific community. In order to study visibility characteristics and their influencing factors, visibility data, air pollutant data, and meteorological data for the year 2013 were obtained over Shanghai. The temporal variation of atmospheric visibility was analyzed. The mean daily visibility in Shanghai was 19.1 km. Visibility exhibited an obvious seasonal cycle. The maximum and minimum visibility occurred in September and December, with values of 27.5 and 7.7 km, respectively. The relationships between visibility and air pollutant data were calculated. Visibility had negative correlations with NO2, CO, PM2.5, PM10, and SO2 and a weak positive correlation with O3. Meteorological data were clustered into four groups to reveal the joint contribution of meteorological variables to daily average visibility. Usually, under meteorological conditions of high temperature and wind speed, the visibility of Shanghai reached about 25 km, while visibility decreased to 16 km under the weather type of low wind speed, low temperature, and high relative humidity. Principal component analysis was also applied to identify the main causes of visibility variance. The results showed that the low visibility over Shanghai was mainly due to high air pollutant concentrations associated with low wind speed, which explained 44.99% of the total variance. These results provide new knowledge for better understanding the variations of visibility and have direct implications for sound policy on visibility improvement in Shanghai.

  4. An Empirical Study of Design Parameters for Assessing Differential Impacts for Students in Group Randomized Trials.

    PubMed

    Jaciw, Andrew P; Lin, Li; Ma, Boya

    2016-10-18

    Prior research has investigated design parameters for assessing average program impacts on achievement outcomes with cluster randomized trials (CRTs). Less is known about parameters important for assessing differential impacts. This article develops a statistical framework for designing CRTs to assess differences in impact among student subgroups and presents initial estimates of critical parameters. Effect sizes and minimum detectable effect sizes for average and differential impacts are calculated before and after conditioning on effects of covariates, using results from several CRTs. Relative sensitivities to detect average and differential impacts are also examined. Student outcomes from six CRTs are analyzed: achievement in math, science, reading, and writing. The ratio of between-cluster variation in the slope of the moderator divided by the total variance, the "moderator gap variance ratio", is important for designing studies to detect differences in impact between student subgroups. This quantity is the analogue of the intraclass correlation coefficient. Typical values were .02 for gender and .04 for socioeconomic status. For the studies considered, in many cases estimates of differential impact were larger than those of average impact, and after conditioning on effects of covariates, similar power was achieved for detecting average and differential impacts of the same size. Measuring differential impacts is important for addressing questions of equity and generalizability and for guiding interpretation of subgroup impact findings. Adequate power for doing this is in some cases reachable with CRTs designed to measure average impacts. Continuing collection of parameters for assessing differential impacts is the next step. © The Author(s) 2016.
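
    For orientation, a common closed form for the minimum detectable effect size of a simple two-level CRT (no covariates, person-level outcome, balanced design) is MDES = M·√(ρ/(P(1-P)J) + (1-ρ)/(P(1-P)Jn)), where ρ is the ICC (or, for moderators, the analogous variance ratio above), J the number of clusters, n the cluster size, P the treated fraction, and M a multiplier (≈ 2.8 for 80% power at α = .05). This is a textbook sketch under those assumptions, not the article's own calculation:

```python
import math

def mdes(J, n, icc, P=0.5, M=2.8):
    """Minimum detectable effect size for a simple two-level CRT."""
    q = P * (1 - P)  # variance of the treatment indicator
    return M * math.sqrt(icc / (q * J) + (1 - icc) / (q * J * n))

# e.g. 40 clusters of 20 students with ICC .04 (cf. the moderator ratios above)
print(round(mdes(J=40, n=20, icc=0.04), 3))  # -> 0.263
```

    Doubling the number of clusters shrinks both variance terms, which is why adding clusters usually buys more power than enlarging them.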

  5. Fractal dimension and the navigational information provided by natural scenes.

    PubMed

    Shamsyeh Zahedi, Moosarreza; Zeil, Jochen

    2018-01-01

    Recent work on virtual reality navigation in humans has suggested that navigational success is inversely correlated with the fractal dimension (FD) of artificial scenes. Here we investigate the generality of this claim by analysing the relationship between the fractal dimension of natural insect navigation environments and a quantitative measure of the navigational information content of natural scenes. We show that the fractal dimension of natural scenes is in general inversely proportional to the information they provide to navigating agents on heading direction, as measured by the rotational image difference function (rotIDF). The rotIDF determines the precision and accuracy with which the orientation of a reference image can be recovered or maintained, and the range over which a gradient descent in image differences will find the minimum of the rotIDF, that is, the reference orientation. However, scenes with similar fractal dimension can differ significantly in the depth of the rotIDF, because FD does not discriminate between the orientations of edges, while the rotIDF is mainly affected by edge orientation parallel to the axis of rotation. We present a new equation for the rotIDF relating navigational information to quantifiable image properties such as contrast, showing (1) that for any given scene the maximum value of the rotIDF (its depth) is proportional to pixel variance and (2) that FD is inversely proportional to pixel variance. This contrast dependence, together with scene differences in orientation statistics, explains why there is no strict relationship between FD and navigational information. Our experimental data and their numerical analysis corroborate these results.
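
    The rotational image difference function compares a view with rotated copies of itself; for a 1-D panoramic intensity profile it reduces to an RMS difference under circular shifts. A toy sketch with a sinusoidal scene (not the paper's natural-image data):

```python
import numpy as np

def rot_idf(panorama, shift):
    """RMS pixel difference between a panorama and a circularly shifted copy."""
    return float(np.sqrt(np.mean((panorama - np.roll(panorama, shift)) ** 2)))

# toy 360-pixel panoramic intensity profile, one pixel per degree
scene = np.sin(np.linspace(0, 2 * np.pi, 360, endpoint=False))
print(rot_idf(scene, 0))                        # zero at the reference orientation
print(rot_idf(scene, 10) < rot_idf(scene, 90))  # differences grow away from it -> True
```

    The minimum at zero shift is the "reference orientation" the abstract refers to; the function's depth (its maximum over shifts) scales with pixel variance, which is the contrast dependence the paper analyses.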

  6. Sleep and Cognitive Function in Multiple Sclerosis.

    PubMed

    Braley, Tiffany J; Kratz, Anna L; Kaplish, Neeraj; Chervin, Ronald D

    2016-08-01

    To examine associations between cognitive performance and polysomnographic measures of obstructive sleep apnea in patients with multiple sclerosis (MS). Participants underwent a comprehensive MS-specific cognitive testing battery (the Minimal Assessment of Cognitive Function in MS, or MACFIMS) and in-laboratory overnight PSG. In adjusted linear regression models, the oxygen desaturation index (ODI) and minimum oxygen saturation (MinO2) were significantly associated with performance on multiple MACFIMS measures, including the Paced Auditory Serial Addition Test (PASAT; 2-sec and 3-sec versions), which assesses working memory, processing speed, and attention, and on the Brief Visuospatial Memory Test-Revised, a test of delayed visual memory. The respiratory disturbance index (RDI) was also significantly associated with PASAT-3 scores as well as the California Verbal Learning Test-II (CVLT-II) Discriminability Index, a test of verbal memory and response inhibition. Among these associations, apnea severity measures accounted for between 12% and 23% of the variance in cognitive test performance. Polysomnographic measures of sleep fragmentation (as reflected by the total arousal index) and total sleep time also showed significant associations with a component of the CVLT-II that assesses response inhibition, explaining 18% and 27% of the variance in performance. Among patients with MS, obstructive sleep apnea and sleep disturbance are significantly associated with diminished visual memory, verbal memory, executive function (as reflected by response inhibition), attention, processing speed, and working memory. If sleep disorders degrade these cognitive functions, effective treatment could offer new opportunities to improve cognitive functioning in patients with MS. A commentary on this article appears in this issue on page 1489. © 2016 Associated Professional Sleep Societies, LLC.

  7. The role of early fine and gross motor development on later motor and cognitive ability.

    PubMed

    Piek, Jan P; Dawson, Lisa; Smith, Leigh M; Gasson, Natalie

    2008-10-01

    The aim of this study was to determine whether information obtained from measures of motor performance taken from birth to 4 years of age predicted motor and cognitive performance of children once they reached school age. Participants included 33 children aged from 6 years to 11 years and 6 months who had been assessed at ages 4 months to 4 years using the ages and stages questionnaires (ASQ: [Squires, J. K., Potter, L., & Bricker, D. (1995). The ages and stages questionnaire users guide. Baltimore: Brookes]). These scores were used to obtain trajectory information consisting of the age of asymptote, maximum or minimum score, and the variance of ASQ scores. At school age, both motor and cognitive ability were assessed using the McCarron Assessment of Neuromuscular Development (MAND: [McCarron, L. (1997). McCarron assessment of neuromuscular development: Fine and gross motor abilities (revised ed.). Dallas, TX: Common Market Press.]), and the Wechsler Intelligence Scale for Children-Version IV (WISC-IV: [Wechsler, D. (2004). WISC-IV integrated technical and interpretive manual. San Antonio, Texas: Harcourt Assessment]). In contrast to previous research, results demonstrated that, although socio-economic status (SES) predicted fine motor performance and three of four cognitive domains at school age, gestational age was not a significant predictor of later development. This may have been due to the low-risk nature of the sample. After controlling for SES, fine motor trajectory information did not account for a significant proportion of the variance in school aged fine motor performance or cognitive performance. The ASQ gross motor trajectory set of predictors accounted for a significant proportion of the variance for cognitive performance once SES was controlled for. Further analysis showed a significant predictive relationship for gross motor trajectory information and the subtests of working memory and processing speed. 
These results provide evidence for detecting children at risk of developmental delays or disorders with a parent report questionnaire prior to school age. The findings also add to recent investigations into the relationship between early motor development and later cognitive function, and support the need for ongoing research into a potential etiological relationship.

  8. Regional melt-pond fraction and albedo of thin Arctic first-year drift ice in late summer

    NASA Astrophysics Data System (ADS)

    Divine, D. V.; Granskog, M. A.; Hudson, S. R.; Pedersen, C. A.; Karlsen, T. I.; Divina, S. A.; Renner, A. H. H.; Gerland, S.

    2015-02-01

    The paper presents a case study of the regional (≈150 km) morphological and optical properties of a relatively thin, 70-90 cm modal thickness, first-year Arctic sea ice pack in an advanced stage of melt. The study combines in situ broadband albedo measurements representative of the four main surface types (bare ice, dark melt ponds, bright melt ponds and open water) and images acquired by a helicopter-borne camera system during ice-survey flights. The data were collected during the 8-day ICE12 drift experiment carried out by the Norwegian Polar Institute in the Arctic, north of Svalbard at 82.3° N, from 26 July to 3 August 2012. A set of > 10 000 classified images covering about 28 km2 revealed a homogeneous melt across the study area with melt-pond coverage of ≈ 0.29 and open-water fraction of ≈ 0.11. A decrease in pond fractions observed in the 30 km marginal ice zone (MIZ) occurred in parallel with an increase in open-water coverage. The moving block bootstrap technique applied to sequences of classified sea-ice images and albedo of the four surface types yielded a regional albedo estimate of 0.37 (0.35; 0.40) and regional sea-ice albedo of 0.44 (0.42; 0.46). Random sampling from the set of classified images allowed assessment of the aggregate scale of at least 0.7 km2 for the study area. For the current setup configuration, this implies that a minimum set of 300 images must be processed to obtain adequate statistics on the state of the ice cover. Variance analysis also emphasized the importance of longer series of in situ albedo measurements conducted for each surface type when performing regional upscaling. The uncertainty in the mean estimates of surface type albedo from in situ measurements contributed up to 95% of the variance of the estimated regional albedo, with the remaining variance resulting from the spatial inhomogeneity of sea-ice cover.
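
    The regional upscaling step amounts to an area-weighted average of surface-type albedos by their image-derived fractions. The sketch below uses the fractions reported in the abstract (melt ponds ≈ 0.29, open water ≈ 0.11), but the individual albedo values are illustrative assumptions, not the measured ones.

```python
# Surface-type fractions follow the classified-image results in the
# abstract; the albedo values are illustrative assumptions only.
fractions = {"bare_ice": 0.60, "melt_pond": 0.29, "open_water": 0.11}
albedo = {"bare_ice": 0.55, "melt_pond": 0.25, "open_water": 0.07}

# Regional albedo: area-weighted mean over all surface types.
regional = sum(fractions[s] * albedo[s] for s in fractions)

# Regional sea-ice albedo: the same average restricted to ice surfaces.
ice_frac = fractions["bare_ice"] + fractions["melt_pond"]
sea_ice = (fractions["bare_ice"] * albedo["bare_ice"]
           + fractions["melt_pond"] * albedo["melt_pond"]) / ice_frac
print(round(regional, 3), round(sea_ice, 3))
```

    In the study, the moving block bootstrap resamples image sequences to attach confidence intervals to these weighted means.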

  9. Development of a method of robust rain gauge network optimization based on intensity-duration-frequency results

    NASA Astrophysics Data System (ADS)

    Chebbi, A.; Bargaoui, Z. K.; da Conceição Cunha, M.

    2012-12-01

    Based on rainfall intensity-duration-frequency (IDF) curves, a robust optimization approach is proposed to identify the best locations to install new rain gauges. The advantage of robust optimization is that the resulting design solutions yield networks which behave acceptably under hydrological variability. Robust optimization can overcome the problem of selecting representative rainfall events when building the optimization process. This paper reports an original approach based on Montana IDF model parameters. The latter are assumed to be geostatistical variables and their spatial interdependence is taken into account through the adoption of cross-variograms in the kriging process. The problem of optimally locating a fixed number of new monitoring stations based on an existing rain gauge network is addressed. The objective function is based on the mean spatial kriging variance and rainfall variogram structure using a variance-reduction method. Hydrological variability was taken into account by considering and implementing several return periods to define the robust objective function. Variance minimization is performed using a simulated annealing algorithm. In addition, knowledge of the time horizon is needed for the computation of the robust objective function. A short- and a long-term horizon were studied, and optimal networks are identified for each. The method developed is applied to north Tunisia (area = 21 000 km2). Data inputs for the variogram analysis were IDF curves provided by the hydrological bureau and available for 14 tipping bucket type rain gauges. The recording period was from 1962 to 2001, depending on the station. The study concerns an imaginary network augmentation based on the network configuration in 1973, which is a very significant year in Tunisia because there was an exceptional regional flood event in March 1973. 
This network consisted of 13 stations and did not meet World Meteorological Organization (WMO) recommendations for the minimum spatial density. So, it is proposed to virtually augment it by 25, 50, 100 and 160%, the last being the rate that would meet WMO requirements. Results suggest that, for a given augmentation, robust networks remain stable overall for the two time horizons.
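
    The variance-minimization step can be illustrated with a toy simulated annealing loop. As a stand-in for the mean spatial kriging variance, the sketch below uses the average squared distance from grid points to the nearest gauge; the coordinates, candidate sites, and cooling schedule are all assumptions for illustration.

```python
import math
import random

random.seed(1)
# Hypothetical coordinates on a unit square: the existing 13-gauge
# network, candidate sites for new gauges, and an evaluation grid.
existing = [(random.random(), random.random()) for _ in range(13)]
candidates = [(random.random(), random.random()) for _ in range(100)]
grid = [(x / 10.0, y / 10.0) for x in range(11) for y in range(11)]

def objective(new_sites):
    """Stand-in for the mean spatial kriging variance: mean squared
    distance from each grid point to its nearest gauge (lower when
    the network covers the area more evenly)."""
    gauges = existing + new_sites
    return sum(min((px - gx) ** 2 + (py - gy) ** 2 for gx, gy in gauges)
               for px, py in grid) / len(grid)

k = 4                                    # number of new gauges to add
state = random.sample(candidates, k)
cur_cost = objective(state)
best, best_cost = state[:], cur_cost
initial_cost = cur_cost
temp = 1.0
for _ in range(1000):
    neighbour = state[:]
    neighbour[random.randrange(k)] = random.choice(candidates)
    new_cost = objective(neighbour)
    # Always accept improvements; accept worse moves with a probability
    # that shrinks as the temperature cools.
    if new_cost < cur_cost or random.random() < math.exp((cur_cost - new_cost) / temp):
        state, cur_cost = neighbour, new_cost
        if cur_cost < best_cost:
            best, best_cost = state[:], cur_cost
    temp *= 0.995                        # geometric cooling schedule
print(round(best_cost, 4))
```

    The robust variant in the paper evaluates such an objective over several return periods and time horizons rather than a single design storm.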

  10. Development of a method of robust rain gauge network optimization based on intensity-duration-frequency results

    NASA Astrophysics Data System (ADS)

    Chebbi, A.; Bargaoui, Z. K.; da Conceição Cunha, M.

    2013-10-01

    Based on rainfall intensity-duration-frequency (IDF) curves, fitted in several locations of a given area, a robust optimization approach is proposed to identify the best locations to install new rain gauges. The advantage of robust optimization is that the resulting design solutions yield networks which behave acceptably under hydrological variability. Robust optimization can overcome the problem of selecting representative rainfall events when building the optimization process. This paper reports an original approach based on Montana IDF model parameters. The latter are assumed to be geostatistical variables, and their spatial interdependence is taken into account through the adoption of cross-variograms in the kriging process. The problem of optimally locating a fixed number of new monitoring stations based on an existing rain gauge network is addressed. The objective function is based on the mean spatial kriging variance and rainfall variogram structure using a variance-reduction method. Hydrological variability was taken into account by considering and implementing several return periods to define the robust objective function. Variance minimization is performed using a simulated annealing algorithm. In addition, knowledge of the time horizon is needed for the computation of the robust objective function. A short- and a long-term horizon were studied, and optimal networks are identified for each. The method developed is applied to north Tunisia (area = 21 000 km2). Data inputs for the variogram analysis were IDF curves provided by the hydrological bureau and available for 14 tipping bucket type rain gauges. The recording period was from 1962 to 2001, depending on the station. The study concerns an imaginary network augmentation based on the network configuration in 1973, which is a very significant year in Tunisia because there was an exceptional regional flood event in March 1973. 
This network consisted of 13 stations and did not meet World Meteorological Organization (WMO) recommendations for the minimum spatial density. Therefore, it is proposed to augment it by 25, 50, 100 and 160% virtually, which is the rate that would meet WMO requirements. Results suggest that for a given augmentation robust networks remain stable overall for the two time horizons.

  11. Feasibility of digital image colorimetry--application for water calcium hardness determination.

    PubMed

    Lopez-Molinero, Angel; Tejedor Cubero, Valle; Domingo Irigoyen, Rosa; Sipiera Piazuelo, Daniel

    2013-01-15

    Interpretation and relevance of basic RGB colors in Digital Image-Based Colorimetry have been treated in this paper. The studies were carried out using the chromogenic model formed by the reaction between Ca(II) ions and glyoxal bis(2-hydroxyanil). It produced orange-red colored solutions in alkaline media. Individual basic color data (RGB) and also the total intensity of colors, I(tot), were the original variables treated by Factorial Analysis. The evaluation showed that the highest variance of the system and the highest analytical sensitivity were associated with the G color. However, after Fourier-transform analysis, the basic R color was recognized as an important feature in the information. It appeared as an intrinsic characteristic, differentiated at low frequency in the Fourier transform. The Principal Components Analysis study showed that the variance of the system could be mostly retained in the first principal component, but was dependent on all basic colors. The colored complex was also applied and validated as a Digital Image Colorimetric method for the determination of Ca(II) ions. RGB intensities were linearly correlated with Ca(II) in the range 0.2-2.0 mg L(-1). Under the best conditions, using the green color, a simple and reliable method for Ca determination could be developed. Its detection limit was established (criterion 3s) as 0.07 mg L(-1), and the reproducibility was below 6% for 1.0 mg L(-1) Ca. Other chromatic parameters were evaluated as dependent calibration variables. Their representativeness, variance and sensitivity were discussed in order to select the best analytical variable. The potential of the procedure as a ready-to-use field method, applicable 'in situ' with minimal experimental requirements, was demonstrated. Applications of the analysis of Ca in different real water samples were carried out. 
Tap water from the city network, bottled mineral water, and natural river water were analyzed, and the results were compared and evaluated statistically. The validity was assessed against the alternative techniques of flame atomic absorption spectroscopy and titrimetry. Differences were observed, but they were consistent with the methods applied. Copyright © 2012 Elsevier B.V. All rights reserved.
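
    The calibration logic of digital image colorimetry can be sketched as a straight-line fit of channel intensity against concentration, inverted to predict unknowns, with a 3s detection limit. The intensity values and blank standard deviation below are synthetic assumptions, not the paper's data.

```python
import numpy as np

# Synthetic calibration data (assumed): the green-channel intensity
# falls linearly as the orange-red Ca(II) complex absorbs more light.
ca = np.array([0.2, 0.5, 1.0, 1.5, 2.0])               # mg/L
green = np.array([200.1, 185.9, 162.0, 138.2, 114.0])  # mean G intensity

slope, intercept = np.polyfit(ca, green, 1)

def predict_ca(g):
    """Invert the calibration line to estimate concentration."""
    return (g - intercept) / slope

# 3s detection limit from an assumed blank standard deviation.
s_blank = 1.1
lod = 3 * s_blank / abs(slope)
print(round(predict_ca(150.0), 2), round(lod, 3))
```

    In practice each RGB channel (and derived chromatic parameters) can be calibrated this way, and the channel with the best sensitivity and variance, here green, is chosen as the analytical variable.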

  12. Effective water surface mapping in macrophyte-covered reservoirs in NE Brazil based on TerraSAR-X time series

    NASA Astrophysics Data System (ADS)

    Zhang, Shuping; Foerster, Saskia; Medeiros, Pedro; de Araújo, José Carlos; Waske, Bjoern

    2018-07-01

    Water supplies in northeastern Brazil strongly depend on the numerous surface water reservoirs of various sizes there. However, the seasonal and long-term water surface dynamics of these reservoirs, particularly the large number of small ones, remain inadequately known. Remote sensing techniques have shown great potential for mapping water bodies. Yet, the widespread presence of macrophytes in most of the reservoirs often impedes the delineation of the effective water surfaces. Knowledge of the dynamics of the effective water surfaces in the reservoirs is essential for understanding, managing, and modelling the local and regional water resources. In this study, a two-year time series of TerraSAR-X (TSX) satellite data was used to monitor the effective water surface areas in nine reservoirs in NE Brazil. Calm open water surfaces were obtained by segmenting the backscattering coefficients of TSX images with minimum error thresholding. Linear unmixing was implemented on the distributions of gray-level co-occurrence matrix (GLCM) variance in the reservoirs to quantify the proportions of sub-populations dominated by different types of scattering along the TSX time series. By referring to the statistics and the seasonal proportions of the GLCM variance sub-populations, the GLCM variance was segmented to map the vegetated water surfaces. The effective water surface areas that include the vegetation-covered waters as well as calm open water in the reservoirs were mapped with accuracies >77%. The temporal and spatial change patterns of water surfaces in the nine reservoirs over a period of two consecutive dry and wet seasons were derived. Precipitation-related soil moisture changes, topography and the dense macrophyte canopies are the main sources of errors in the effective water surfaces derived in this way. Independent of in situ data, the approach employed in this study shows great potential in monitoring water surfaces of different complexity and macrophyte coverage. 
The effective water surface areas obtained for the reservoirs can provide valuable input for efficient water management and improve the hydrological modelling in this region.
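
    The GLCM variance texture measure used for the unmixing can be sketched as follows: co-occurrence counts of quantized gray levels at a fixed pixel offset are normalised to probabilities, and the Haralick variance is computed over the gray-level index. The quantization depth and offset below are assumptions; a calm, uniform patch should score lower than a textured, vegetation-like patch.

```python
import numpy as np

def glcm_variance(img, levels=8):
    """Haralick (GLCM) variance for a horizontal 1-pixel offset."""
    # Quantize pixel values in [0, 1) into `levels` gray levels.
    q = np.clip((img * levels).astype(int), 0, levels - 1)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    p = glcm / glcm.sum()               # co-occurrence probabilities
    i = np.arange(levels)[:, None]
    mu = (i * p).sum()
    return float((((i - mu) ** 2) * p).sum())

rng = np.random.default_rng(0)
smooth = np.full((32, 32), 0.4)         # calm open water: uniform patch
textured = rng.random((32, 32))         # rough, vegetation-like patch
print(glcm_variance(smooth) < glcm_variance(textured))   # prints True
```
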

  13. The response of neurons in areas V1 and MT of the alert rhesus monkey to moving random dot patterns.

    PubMed

    Snowden, R J; Treue, S; Andersen, R A

    1992-01-01

    We studied the response of single units to moving random dot patterns in areas V1 and MT of the alert macaque monkey. Most cells could be driven by such patterns; however, many cells in V1 did not give a consistent response but fired at a particular point during stimulus presentation. Thus different dot patterns can produce a markedly different response at any particular time, though the time averaged response is similar. A comparison of the directionality of cells in both V1 and MT using random dot patterns shows the cells of MT to be far more directional. In addition our estimates of the percentage of directional cells in both areas are consistent with previous reports using other stimuli. However, we failed to find a bimodality of directionality in V1 which has been reported in some other studies. The variance associated with response was determined for individual cells. In both areas the variance was found to be approximately equal to the mean response, indicating little difference between extrastriate and striate cortex. These estimates are in broad agreement (though the variance appears a little lower) with those of V1 cells of the anesthetized cat. The response of MT cells was simulated on a computer from the estimates derived from the single unit recordings. While the direction tuning of MT cells is quite wide (mean half-width at half-height approximately 50 degrees) it is shown that the cells can reliably discriminate much smaller changes in direction, and the performance of the cells with the smallest discriminanda were comparable to thresholds measured with human subjects using the same stimuli (approximately 1.1 degrees). Minimum discriminanda for individual cells occurred not at the preferred direction, that is, the peak of their tuning curves, but rather on the steep flanks of their tuning curves. This result suggests that the cells which may mediate the discrimination of motion direction may not be the cells most sensitive to that direction.
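
    The closing observation, that direction is discriminated best on the steep flanks of a tuning curve rather than at its peak, follows from the reported variance ≈ mean (Poisson-like) property: the just-discriminable direction change scales as sqrt(f(theta))/|f'(theta)|. The tuning-curve amplitude and baseline below are assumptions, chosen only to match the reported half-width at half-height of about 50 degrees.

```python
import numpy as np

# Assumed Gaussian tuning curve with ~50 degree half-width at
# half-height and Poisson-like (variance = mean) response noise.
theta = np.linspace(-90.0, 90.0, 181)
width = 50.0 / np.sqrt(2.0 * np.log(2.0))
f = 30.0 * np.exp(-theta ** 2 / (2.0 * width ** 2)) + 1.0

# For Poisson noise the just-discriminable direction change scales as
# sqrt(f) / |f'|: small where the curve is steep, infinite at the peak.
df = np.gradient(f, theta)
with np.errstate(divide="ignore"):
    threshold = np.sqrt(f) / np.abs(df)
best_dir = theta[np.argmin(threshold)]
print(best_dir)   # lies on a flank, well away from the preferred direction
```
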

  14. Precision gravimetric survey at the conditions of urban agglomerations

    NASA Astrophysics Data System (ADS)

    Sokolova, Tatiana; Lygin, Ivan; Fadeev, Alexander

    2014-05-01

    The growth and aging of large cities lead to irreversible negative changes underground. The study of these changes in urban areas is mainly based on shallow geophysical methods, whose extensive use is restricted by technogenic noise. Among these methods, precision gravimetry stands out for its good resistance to urban noise. The main objects of an urban gravimetric survey are zones of soil decompaction, which lead to violations of rock strength, and karst formation. Their gravity effects are very small, so their investigation requires modern high-precision equipment and special measurement methods. The Gravimetry division of Lomonosov Moscow State University has been examining modern precision Scintrex CG-5 Autograv gravimeters since 2006. The main performance characteristics of over 20 precision gravimeters were examined in various operational modes. Stationary mode: long-term gravimetric measurements were carried out at a base station. They show that the records obtained differ in their high-frequency and mid-frequency (period 5-12 hours) components. The high-frequency component, determined as the standard deviation of the measurement, characterizes the system's sensitivity to external noise and varies between devices from 2 to 5-7 μGal. The mid-frequency component, which corresponds closely to the residual nonlinearity of the gravimeter drift, is partially compensated by the equipment. This factor is very important in gravimetric monitoring or observations in which mid-range anomalies are the targets. For the examined gravimeters, amplitude deviations associated with this parameter may reach 10 μGal. Various transportation modes were tested: walking (the softest mode), lift (vertical overload), vehicle (horizontal overloads), boat (vertical plus horizontal overloads) and helicopter. Survey quality was compared by the variance of the measurement results and the internal convergence of the series. 
The variance of the measurement results (from ±2 to ±4 μGal) and its internal convergence are independent of transportation mode; the measurements differ only in processing time and the corresponding number of readings. Importantly, the internal convergence is an individual attribute of the particular device; for the investigated gravimeters it varies from ±3 up to ±8 μGal. Various stabilities of the gravimeter base were also examined. The most stable basis (minimum microseisms) in this experiment was a concrete pedestal; the least stable was a point on the 28th floor. There is no direct dependence of the variance of the measurement results on the external noise level. Moreover, the dispersion between different gravimeters is minimal at the point with the highest microseisms. Conclusions: the measurement quality of the modern high-precision Scintrex CG-5 Autograv gravimeters is determined by the stability of the particular device, its standard deviation and the degree of drift nonlinearity. Although these parameters of the tested gravimeters generally corresponded to the factory specifications, for surveys requiring an accuracy of ±2-5 μGal the best gravimeters should be selected. A practical gravimetric survey with such accuracy allowed reliable determination of the position of technical communication boxes and an underground walkway in the urban area, indicated by gravity minima with amplitudes of 6-8 μGal and widths of 1-15 m. The parameters of the cavities, obtained as a result of interpretation, are well aligned with a priori data.

  15. Increasing selection response by Bayesian modeling of heterogeneous environmental variances

    USDA-ARS?s Scientific Manuscript database

    Heterogeneity of environmental variance among genotypes reduces selection response because genotypes with higher variance are more likely to be selected than low-variance genotypes. Modeling heterogeneous variances to obtain weighted means corrected for heterogeneous variances is difficult in likel...

  16. Modelling heterogeneity variances in multiple treatment comparison meta-analysis--are informative priors the better solution?

    PubMed

    Thorlund, Kristian; Thabane, Lehana; Mills, Edward J

    2013-01-11

    Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variance for all involved treatment comparisons are equal (i.e., the 'common variance' assumption). This approach 'borrows strength' for heterogeneity estimation across treatment comparisons, and thus adds valuable precision when data are sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently, 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. In this paper we describe four novel approaches to modeling heterogeneity variance: two novel model structures, and two approaches for use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. In both data sets, use of moderately informative variance priors constructed from the pairwise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. 
Lastly, treatment rankings were similar among the novel approaches, but considerably different when compared with the homogeneous variance approach. MTC models using a homogeneous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice.

  17. Dominance Genetic Variance for Traits Under Directional Selection in Drosophila serrata

    PubMed Central

    Sztepanacz, Jacqueline L.; Blows, Mark W.

    2015-01-01

    In contrast to our growing understanding of patterns of additive genetic variance in single- and multi-trait combinations, the relative contribution of nonadditive genetic variance, particularly dominance variance, to multivariate phenotypes is largely unknown. While mechanisms for the evolution of dominance genetic variance have been, and to some degree remain, subject to debate, the pervasiveness of dominance is widely recognized and may play a key role in several evolutionary processes. Theoretical and empirical evidence suggests that the contribution of dominance variance to phenotypic variance may increase with the correlation between a trait and fitness; however, direct tests of this hypothesis are few. Using a multigenerational breeding design in an unmanipulated population of Drosophila serrata, we estimated additive and dominance genetic covariance matrices for multivariate wing-shape phenotypes, together with a comprehensive measure of fitness, to determine whether there is an association between directional selection and dominance variance. Fitness, a trait unequivocally under directional selection, had no detectable additive genetic variance, but significant dominance genetic variance contributing 32% of the phenotypic variance. For single and multivariate morphological traits, however, no relationship was observed between trait–fitness correlations and dominance variance. A similar proportion of additive and dominance variance was found to contribute to phenotypic variance for single traits, and double the amount of additive compared to dominance variance was found for the multivariate trait combination under directional selection. These data suggest that for many fitness components a positive association between directional selection and dominance genetic variance may not be expected. PMID:25783700

  18. VARIANCE ANISOTROPY IN KINETIC PLASMAS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parashar, Tulasi N.; Matthaeus, William H.; Oughton, Sean

    Solar wind fluctuations admit well-documented anisotropies of the variance matrix, or polarization, related to the mean magnetic field direction. Typically, one finds a ratio of perpendicular variance to parallel variance of the order of 9:1 for the magnetic field. Here we study the question of whether a kinetic plasma spontaneously generates and sustains parallel variances when initiated with only perpendicular variance. We find that parallel variance grows and saturates at about 5% of the perpendicular variance in a few nonlinear times irrespective of the Reynolds number. For sufficiently large systems (Reynolds numbers) the variance approaches values consistent with the solar wind observations.

  19. Landsat 8 and ICESat-2: Performance and potential synergies for quantifying dryland ecosystem vegetation cover and biomass

    USGS Publications Warehouse

    Glenn, Nancy F.; Neuenschwander, Amy; Vierling, Lee A.; Spaete, Lucas; Li, Aihua; Shinneman, Douglas; Pilliod, David S.; Arkle, Robert; McIlroy, Susan

    2016-01-01

    To estimate the potential synergies of OLI and ICESat-2 we used simulated ICESat-2 photon data to predict vegetation structure. In a shrubland environment with a vegetation mean height of 1 m and mean vegetation cover of 33%, vegetation photons are able to explain nearly 50% of the variance in vegetation height. These results, and those from a comparison site, suggest that a lower detection threshold of ICESat-2 may be in the range of 30% canopy cover and roughly 1 m height in comparable dryland environments and these detection thresholds could be used to combine future ICESat-2 photon data with OLI spectral data for improved vegetation structure. Overall, the synergistic use of Landsat 8 and ICESat-2 may improve estimates of above-ground biomass and carbon storage in drylands that meet these minimum thresholds, increasing our ability to monitor drylands for fuel loading and the potential to sequester carbon.

  20. Self-Averaging Property of Minimal Investment Risk of Mean-Variance Model.

    PubMed

    Shinzato, Takashi

    2015-01-01

    In portfolio optimization problems, the minimum expected investment risk is not always smaller than the expected minimal investment risk. That is, using a well-known approach from operations research, it is possible to derive a strategy that minimizes the expected investment risk, but this strategy does not always result in the best rate of return on assets. Prior to making investment decisions, it is important for an investor to know the potential minimal investment risk (or the expected minimal investment risk) and to determine the strategy that will maximize the return on assets. We use the self-averaging property to analyze the potential minimal investment risk and the concentrated investment level for the strategy that gives the best rate of return. We compare the results from our method with the results obtained by the operations research approach and with those obtained by a numerical simulation using the optimal portfolio. The results of our method and the numerical simulation are in agreement, but they differ from those of the operations research approach.
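
    The mean-variance objective discussed here has a closed form for the global minimum-variance portfolio under a budget constraint, which the following numpy sketch illustrates on synthetic return data (the self-averaging analysis itself is not reproduced; all data are assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)
n_assets, n_periods = 5, 250
# Synthetic return history (assumed data, ~1% per-period volatility).
returns = rng.standard_normal((n_periods, n_assets)) * 0.01

cov = np.cov(returns, rowvar=False)      # sample covariance of returns

# Global minimum-variance portfolio under the budget constraint
# sum(w) = 1; closed form w = C^{-1} 1 / (1' C^{-1} 1).
ones = np.ones(n_assets)
w = np.linalg.solve(cov, ones)
w /= ones @ w

risk = w @ cov @ w                       # minimized portfolio variance
print(round(float(risk), 6))
```

    The paper's point is that this in-sample minimum of the expected risk need not coincide with the expected minimal risk across return realizations, which is where the self-averaging analysis comes in.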

  1. A comparison of different postures for scaffold end-frame disassembly.

    PubMed

    Cutlip, R; Hsiao, H; Garcia, R; Becker, E; Mayeux, B

    2000-10-01

    Overexertion and fall injuries comprise the largest category of nonfatal injuries among scaffold workers. This study was conducted to identify the most favourable scaffold end-frame disassembly techniques and evaluate the associated slip potential by measuring whole-body isometric strength capability and required coefficient of friction (RCOF), with the aim of reducing the incidence of injury. Forty-six male construction workers participated in a study of seven typical postures associated with scaffold end-frame disassembly. An analysis of variance (ANOVA) showed that the isometric forces (334.4-676.3 N) resulting from the seven postures were significantly different (p < 0.05). Three of the disassembly postures resulted in considerable biomechanical stress to workers. The symmetric front-lift method with hand locations at knuckle height would be the most favourable posture; at least 93% of the male construction worker population could handle the end frame with minimum overexertion risk. The static RCOF value resulting from this posture during the disassembly phase was less than 0.2, thus the likelihood of a slip should be low.

  2. Optimisation of process parameters on thin shell part using response surface methodology (RSM)

    NASA Astrophysics Data System (ADS)

    Faiz, J. M.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Rashidi, M. M.

    2017-09-01

    This study focuses on the optimisation of process parameters by simulation using Autodesk Moldflow Insight (AMI) software. The process parameters are taken as the input in order to analyse the warpage value, which is the output in this study. The significant parameters used are melt temperature, mould temperature, packing pressure, and cooling time. A plastic part made of polypropylene (PP) has been selected as the study part. Optimisation of the process parameters is carried out in Design Expert software with the aim of minimising the obtained warpage value. Response Surface Methodology (RSM) has been applied in this study together with Analysis of Variance (ANOVA) in order to investigate the interactions between the parameters that are significant to the warpage value. The optimised warpage value can thus be obtained from the RSM model, which has the minimum error value; the study shows that the warpage is improved by using RSM.

  3. Health beliefs of blue collar workers. Increasing self efficacy and removing barriers.

    PubMed

    Wilson, S; Sisk, R J; Baldwin, K A

    1997-05-01

    The study compared the health beliefs of participants and non-participants in a blood pressure and cholesterol screening held at the worksite. A cross sectional, ex-post facto design was used. Questionnaires measuring health beliefs related to cardiac screening and prevention of cardiac problems were distributed to a convenience sample of 200 blue-collar workers in a large manufacturing plant in the Midwest. One hundred fifty-one (75.5%) completed questionnaires were returned, of whom 45 respondents had participated in cardiac worksite screening in the past month. A multivariate analysis of variance was used to analyze data. Participants perceived significantly fewer barriers to cardiac screening and scored significantly higher on self efficacy than non-participants. These findings concur with other studies identifying barriers and self efficacy as important predictors of health behavior. Occupational health nurses' efforts are warranted to reduce barriers and improve self efficacy by advertising screenings, scheduling them at convenient times and locations, assuring privacy, and keeping time inconvenience to a minimum.

  4. Reliability Stress-Strength Models for Dependent Observations with Applications in Clinical Trials

    NASA Technical Reports Server (NTRS)

    Kushary, Debashis; Kulkarni, Pandurang M.

    1995-01-01

    We consider the applications of stress-strength models in studies involving clinical trials. When studying the effects and side effects of certain procedures (treatments), it is often the case that observations are correlated due to subject effect, repeated measurements and observing many characteristics simultaneously. We develop maximum likelihood estimator (MLE) and uniform minimum variance unbiased estimator (UMVUE) of the reliability which in clinical trial studies could be considered as the chances of increased side effects due to a particular procedure compared to another. The results developed apply to both univariate and multivariate situations. Also, for the univariate situations we develop simple to use lower confidence bounds for the reliability. Further, we consider the cases when both stress and strength constitute time dependent processes. We define the future reliability and obtain methods of constructing lower confidence bounds for this reliability. Finally, we conduct simulation studies to evaluate all the procedures developed and also to compare the MLE and the UMVUE.

  5. The relationship between seasonal mood change and personality: more apparent than real?

    PubMed

    Jang, K L; Lam, R W; Livesley, W J; Vernon, P A

    1997-06-01

    A number of recent studies have reported significant relationships between seasonal mood change (seasonality) and personality. However, some of the results are difficult to interpret because of inherent methodological problems, the most important of which is the use of samples drawn from the southern as opposed to the northern hemisphere, where the phenomenon of seasonality may be quite different. The present study examined the relationship between personality and seasonality in a sample from the northern hemisphere (minimum latitude = 49 degrees N). A total of 297 adults drawn from the general population (112 male and 185 female subjects) completed the Seasonal Pattern Assessment Questionnaire, and the results obtained confirmed most of the previously reported relationships and showed that these are reliable across (i) different hemispheres, (ii) different measures of personality and (iii) clinical and general population samples. However, the impact of the relationship seems to be more apparent than real, with personality accounting for just under 15% of the total variance.

  6. [A Review on the Use of Effect Size in Nursing Research].

    PubMed

    Kang, Hyuncheol; Yeon, Kyupil; Han, Sang Tae

    2015-10-01

    The purpose of this study was to introduce the main concepts of statistical testing and effect size and to provide researchers in nursing science with guidance on how to calculate the effect size for the statistical analysis methods mainly used in nursing. For t-test, analysis of variance, correlation analysis, regression analysis which are used frequently in nursing research, the generally accepted definitions of the effect size were explained. Some formulae for calculating the effect size are described with several examples in nursing research. Furthermore, the authors present the required minimum sample size for each example utilizing G*Power 3 software that is the most widely used program for calculating sample size. It is noted that statistical significance testing and effect size measurement serve different purposes, and the reliance on only one side may be misleading. Some practical guidelines are recommended for combining statistical significance testing and effect size measure in order to make more balanced decisions in quantitative analyses.
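
The link between effect size and required sample size for the two-sample t-test case can be sketched with the standard library alone. This is a normal-approximation sketch, not G*Power's exact noncentral-t computation, so its answer can differ slightly (by about one subject per group) from G*Power 3.

```python
from math import ceil
from statistics import NormalDist

def cohens_d(mean1, mean2, sd_pooled):
    """Cohen's d: standardized difference between two group means."""
    return (mean1 - mean2) / sd_pooled

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided two-sample t-test
    (normal approximation; G*Power refines this with the noncentral t)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return ceil(2 * (z_a + z_b) ** 2 / d ** 2)

d = cohens_d(10.0, 8.0, 4.0)  # d = 0.5, conventionally a "medium" effect
n = n_per_group(d)            # about 63 per group under the approximation
```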

  7. Tuning PID controller using particle swarm optimization algorithm on automatic voltage regulator system

    NASA Astrophysics Data System (ADS)

    Aranza, M. F.; Kustija, J.; Trisno, B.; Hakim, D. L.

    2016-04-01

    PID (Proportional Integral Derivative) controllers were invented around 1910 and are still used in industry today, even though many kinds of modern controllers, such as fuzzy controllers and neural network controllers, are being developed. The performance of a PID controller depends on its Proportional Gain (Kp), Integral Gain (Ki) and Derivative Gain (Kd). These gains can be obtained using methods such as Ziegler-Nichols (ZN), gain-phase margin, Root Locus, Minimum Variance and Gain Scheduling; however, these methods are not optimal for systems that are nonlinear and of high order, and some of them are relatively hard to apply. To overcome these obstacles, the particle swarm optimization (PSO) algorithm is proposed to obtain optimal Kp, Ki and Kd. PSO is proposed because it produces convergent results and does not require many iterations. In this research, the PID controller is applied to an AVR (Automatic Voltage Regulator). Based on analysis of the transient response, Root Locus stability and frequency response, the performance of the PSO-tuned PID controller is better than that obtained with Ziegler-Nichols.
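
The PSO gain search described in this record can be sketched as follows. The cost function here is a stand-in quadratic with a known optimum at hypothetical gains Kp=2.0, Ki=1.0, Kd=0.5; a real AVR study would instead simulate the closed loop and integrate the tracking error (e.g. ITAE). The particle count, inertia and acceleration coefficients are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_minimize(cost, bounds, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer: each particle tracks its personal
    best and is attracted toward both it and the swarm's global best."""
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pcost = np.array([cost(p) for p in x])
    g = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, len(lo)))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        improved = c < pcost
        pbest[improved] = x[improved]
        pcost[improved] = c[improved]
        g = pbest[pcost.argmin()].copy()
    return g

# Stand-in cost with a known optimum at [Kp, Ki, Kd] = [2.0, 1.0, 0.5]
target = np.array([2.0, 1.0, 0.5])
best = pso_minimize(lambda k: float(np.sum((k - target) ** 2)),
                    (np.zeros(3), np.full(3, 5.0)))
```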

  8. Filtering for networked control systems with single/multiple measurement packets subject to multiple-step measurement delays and multiple packet dropouts

    NASA Astrophysics Data System (ADS)

    Moayedi, Maryam; Foo, Yung Kuan; Chai Soh, Yeng

    2011-03-01

    The minimum-variance filtering problem in networked control systems, where both random measurement transmission delays and packet dropouts may occur, is investigated in this article. Instead of following the many existing results that solve the problem by using probabilistic approaches based on the probabilities of the uncertainties occurring between the sensor and the filter, we propose a non-probabilistic approach by time-stamping the measurement packets. Both single-measurement and multiple measurement packets are studied. We also consider the case of burst arrivals, where more than one packet may arrive between the receiver's previous and current sampling times; the scenario where the control input is non-zero and subject to delays and packet dropouts is examined as well. It is shown that, in such a situation, the optimal state estimate would generally be dependent on the possible control input. Simulations are presented to demonstrate the performance of the various proposed filters.

  9. Optimized two-frequency phase-measuring-profilometry light-sensor temporal-noise sensitivity.

    PubMed

    Li, Jielin; Hassebrook, Laurence G; Guan, Chun

    2003-01-01

    Temporal frame-to-frame noise in multipattern structured light projection can significantly corrupt depth measurement repeatability. We present a rigorous stochastic analysis of phase-measuring-profilometry temporal noise as a function of the pattern parameters and the reconstruction coefficients. The analysis is used to optimize the two-frequency phase measurement technique. In phase-measuring profilometry, a sequence of phase-shifted sine-wave patterns is projected onto a surface. In two-frequency phase measurement, two sets of pattern sequences are used. The first, low-frequency set establishes a nonambiguous depth estimate, and the second, high-frequency set is unwrapped, based on the low-frequency estimate, to obtain an accurate depth estimate. If the second frequency is too low, then depth error is caused directly by temporal noise in the phase measurement. If the second frequency is too high, temporal noise triggers ambiguous unwrapping, resulting in depth measurement error. We present a solution for finding the second frequency, where intensity noise variance is at its minimum.
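
The two-frequency unwrapping step described above, using the nonambiguous low-frequency phase to resolve the 2π ambiguity of the high-frequency phase, can be sketched as below. The frequency ratio and phase values are hypothetical; real phases would come from the phase-shifted sine-wave pattern measurements.

```python
import numpy as np

def two_frequency_unwrap(phi_low, phi_high, f_low, f_high):
    """Unwrap the precise high-frequency phase using the nonambiguous
    low-frequency estimate (both phases wrapped to [0, 2*pi))."""
    ratio = f_high / f_low
    k = np.round((ratio * phi_low - phi_high) / (2 * np.pi))
    return phi_high + 2 * np.pi * k   # absolute high-frequency phase

# Hypothetical absolute phase of 7.0 rad at the high frequency
f_low, f_high = 1.0, 4.0
phi_high = 7.0 % (2 * np.pi)                      # wrapped measurement
phi_low = (7.0 / (f_high / f_low)) % (2 * np.pi)  # 1.75 rad, unambiguous
unwrapped = two_frequency_unwrap(phi_low, phi_high, f_low, f_high)
```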

  10. Changes in seasonal streamflow extremes experienced in rivers of Northwestern South America (Colombia)

    NASA Astrophysics Data System (ADS)

    Pierini, J. O.; Restrepo, J. C.; Aguirre, J.; Bustamante, A. M.; Velásquez, G. J.

    2017-04-01

    A measure of the variability in seasonal extreme streamflow was estimated for the Colombian Caribbean coast, using monthly time series of freshwater discharge from ten watersheds. The aim was to detect modifications in the monthly streamflow distribution, seasonal trends, variance and extreme monthly values. A 20-year moving window, shifted in successive 1-year steps, was applied to the monthly series to analyze the seasonal variability of streamflow. The seasonal windowed data were statistically fitted with the Gamma distribution function. Scale and shape parameters were computed using Maximum Likelihood Estimation (MLE) and the bootstrap method with 1000 resamples. A trend analysis was performed for each windowed series, allowing detection of the window with the maximum absolute trend values. Significant temporal shifts in the seasonal streamflow distribution and quantiles (QT) were obtained for different frequencies. Wet and dry extreme periods increased significantly in the last decades. This increase did not occur simultaneously across the region; some locations exhibited continuous increases only in the minimum QT.
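
The Gamma-fit-plus-bootstrap step can be sketched as below. The data are synthetic, and the fit uses moment estimates as a simple stand-in for the study's MLE fit; the bootstrap resampling logic is the same either way.

```python
import numpy as np

rng = np.random.default_rng(1)

def gamma_moment_fit(x):
    """Moment estimates of the Gamma shape and scale (a stand-in here
    for the MLE fit used in the study)."""
    m, v = x.mean(), x.var()
    return m * m / v, v / m          # shape k, scale theta

def bootstrap_ci(x, n_boot=1000, level=0.95):
    """Percentile-bootstrap confidence intervals for shape and scale."""
    est = np.array([gamma_moment_fit(rng.choice(x, size=x.size))
                    for _ in range(n_boot)])
    lo = (1 - level) / 2
    return np.quantile(est, [lo, 1 - lo], axis=0)

# Synthetic "monthly streamflow" with known shape 3 and scale 50
flow = rng.gamma(shape=3.0, scale=50.0, size=240)
k_hat, theta_hat = gamma_moment_fit(flow)
ci = bootstrap_ci(flow)   # rows: lower/upper bound; cols: shape/scale
```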

  11. Interplanetary sector boundaries, 1971 - 1973

    NASA Technical Reports Server (NTRS)

    Klein, L.; Burlaga, L. F.

    1979-01-01

    Eighteen interplanetary sector boundary crossings observed at 1 AU by the magnetometer on the IMP-6 spacecraft are discussed. The events were examined on many different time scales ranging from days on either side of the boundary to high resolution measurements of 12.5 vectors per second. Two categories of boundaries were found, one group being relatively thin and the other being thick. In many cases the field vector rotated in a plane from one polarity to the other. Only two of the transitions were null sheets. Using the minimum variance analysis to determine the normals to the plane of rotation, and assuming that this is the same as the normal to the sector boundary surface, it was found that the normals were close to the ecliptic plane. An analysis of tangential discontinuities contained in 4-day periods about the events showed that their orientations were generally not related to the orientations of the sector boundary surface, but rather their characteristics were about the same as those for discontinuities outside the sector boundaries.
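
The minimum variance analysis used above to estimate boundary normals reduces to an eigenvalue problem: the eigenvector of the magnetic variance (covariance) matrix with the smallest eigenvalue estimates the normal to the boundary surface. A minimal sketch on synthetic data, where the field rotates in the x-y plane so the recovered normal should lie along z:

```python
import numpy as np

def minimum_variance_normal(B):
    """Minimum variance analysis: the eigenvector of the magnetic
    variance matrix with the smallest eigenvalue estimates the normal
    to the boundary surface (sign is ambiguous)."""
    M = np.cov(B.T)                  # 3x3 variance matrix of the field
    vals, vecs = np.linalg.eigh(M)   # eigh sorts eigenvalues ascending
    return vecs[:, 0]                # minimum-variance direction

# Synthetic boundary crossing: field rotates in the x-y plane with a
# little noise along z, so the normal should come out along z.
rng = np.random.default_rng(2)
t = np.linspace(0.0, np.pi, 200)
B = np.column_stack([5 * np.cos(t), 5 * np.sin(t),
                     0.1 * rng.standard_normal(200)])
n = minimum_variance_normal(B)
```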

  12. Waves associated to COMPLEX EVENTS observed by STEREO

    NASA Astrophysics Data System (ADS)

    Siu Tapia, A. L.; Blanco-Cano, X.; Kajdic, P.; Aguilar-Rodriguez, E.; Russell, C. T.; Jian, L. K.; Luhmann, J. G.

    2012-12-01

    Complex Events are formed by two or more large-scale solar wind structures that interact in space. Typical cases are interactions of: (i) a Magnetic Cloud/Interplanetary Coronal Mass Ejection (MC/ICME) with another MC/ICME transient; and (ii) an ICME followed by a Stream Interaction Region (SIR). Complex Events are important for space weather studies, and studying them can enhance our understanding of collisionless plasma physics. Some of these structures can produce or enhance southward magnetic fields, a key factor in geomagnetic storm generation. Using data from the STEREO mission during the years 2006-2011, we found 17 Complex Events preceded by a shock wave. We use magnetic field and plasma data to study the micro-scale structure of the shocks and the waves associated with these shocks and within Complex Event structures. To determine wave characteristics we perform power spectral and minimum variance analyses. We also use PLASTIC WAP proton data to study foreshock extensions and the relationship between Complex Regions and particle acceleration to suprathermal energies.

  13. Characterization of large price variations in financial markets

    NASA Astrophysics Data System (ADS)

    Johansen, Anders

    2003-06-01

    Statistics of drawdowns (loss from the last local maximum to the next local minimum) plays an important role in risk assessment of investment strategies. As they incorporate higher (> two) order correlations, they offer a better measure of real market risks than the variance or other cumulants of daily (or some other fixed time scale) returns. Previous results have shown that the vast majority of drawdowns occurring on the major financial markets have a distribution which is well represented by a stretched exponential, while the largest drawdowns occur at a significantly larger rate than predicted by the bulk of the distribution and should thus be characterized as outliers (Eur. Phys. J. B 1 (1998) 141; J. Risk 2001). In the present analysis, the definition of drawdowns is generalized to coarse-grained drawdowns, or so-called ε-drawdowns, and a link between such ε-outliers and the preceding log-periodic power law bubbles previously identified (Quantitative Finance 1 (2001) 452) is established.
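
The drawdown statistic defined above, the loss from a local maximum down to the following local minimum, i.e. over a maximal run of consecutive price decreases, can be computed in a few lines of pure Python; the price path below is a made-up toy example.

```python
def drawdowns(prices):
    """Sum each maximal run of consecutive price decreases: the loss
    from a local maximum down to the following local minimum."""
    out, loss = [], 0.0
    for prev, cur in zip(prices, prices[1:]):
        if cur < prev:
            loss += prev - cur       # still falling
        elif loss > 0:
            out.append(loss)         # the decline has ended
            loss = 0.0
    if loss > 0:
        out.append(loss)             # series ended mid-decline
    return out

# Toy price path with two declines: 102 -> 99 and 103 -> 100
dd = drawdowns([100, 102, 101, 99, 103, 100, 104])
```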

  14. A SURVEY OF MAGNETIC WAVES EXCITED BY NEWBORN INTERSTELLAR He+ OBSERVED BY THE ACE SPACECRAFT AT 1 au

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fisher, Meghan K.; Argall, Matthew R.; Joyce, Colin J., E-mail: mkl54@wildcats.unh.edu, E-mail: Matthew.Argall@unh.edu, E-mail: cjl46@wildcats.unh.edu

    We report observations of low-frequency waves at 1 au by the magnetic field instrument on the Advanced Composition Explorer (ACE/MAG) and show evidence that they arise due to newborn interstellar pickup He+. Twenty-five events are studied. They possess the generally predicted attributes: spacecraft-frame frequencies slightly greater than the He+ cyclotron frequency, left-hand polarization in the spacecraft frame, and transverse fluctuations with minimum variance directions that are quasi-parallel to the mean magnetic field. Their occurrence spans the first 18 years of ACE operations, with no more than 3 such observations in any given year. Thus, the events are relatively rare. As with past observations by the Ulysses and Voyager spacecraft, we argue that the waves are seen only when the background turbulence is sufficiently weak as to allow for the slow accumulation of wave energy over many hours.

  15. Nanometric depth resolution from multi-focal images in microscopy.

    PubMed

    Dalgarno, Heather I C; Dalgarno, Paul A; Dada, Adetunmise C; Towers, Catherine E; Gibson, Gavin J; Parton, Richard M; Davis, Ilan; Warburton, Richard J; Greenaway, Alan H

    2011-07-06

    We describe a method for tracking the position of small features in three dimensions from images recorded on a standard microscope with an inexpensive attachment between the microscope and the camera. The depth-measurement accuracy of this method is tested experimentally on a wide-field, inverted microscope and is shown to give approximately 8 nm depth resolution, over a specimen depth of approximately 6 µm, when using a 12-bit charge-coupled device (CCD) camera and very bright but unresolved particles. To assess low-flux limitations a theoretical model is used to derive an analytical expression for the minimum variance bound. The approximations used in the analytical treatment are tested using numerical simulations. It is concluded that approximately 14 nm depth resolution is achievable with flux levels available when tracking fluorescent sources in three dimensions in live-cell biology and that the method is suitable for three-dimensional photo-activated localization microscopy resolution. Sub-nanometre resolution could be achieved with photon-counting techniques at high flux levels.

  16. Nanometric depth resolution from multi-focal images in microscopy

    PubMed Central

    Dalgarno, Heather I. C.; Dalgarno, Paul A.; Dada, Adetunmise C.; Towers, Catherine E.; Gibson, Gavin J.; Parton, Richard M.; Davis, Ilan; Warburton, Richard J.; Greenaway, Alan H.

    2011-01-01

    We describe a method for tracking the position of small features in three dimensions from images recorded on a standard microscope with an inexpensive attachment between the microscope and the camera. The depth-measurement accuracy of this method is tested experimentally on a wide-field, inverted microscope and is shown to give approximately 8 nm depth resolution, over a specimen depth of approximately 6 µm, when using a 12-bit charge-coupled device (CCD) camera and very bright but unresolved particles. To assess low-flux limitations a theoretical model is used to derive an analytical expression for the minimum variance bound. The approximations used in the analytical treatment are tested using numerical simulations. It is concluded that approximately 14 nm depth resolution is achievable with flux levels available when tracking fluorescent sources in three dimensions in live-cell biology and that the method is suitable for three-dimensional photo-activated localization microscopy resolution. Sub-nanometre resolution could be achieved with photon-counting techniques at high flux levels. PMID:21247948

  17. Cosmic Microwave Background Mapmaking with a Messenger Field

    NASA Astrophysics Data System (ADS)

    Huffenberger, Kevin M.; Næss, Sigurd K.

    2018-01-01

    We apply a messenger field method to solve the linear minimum-variance mapmaking equation in the context of Cosmic Microwave Background (CMB) observations. In simulations, the method produces sky maps that converge significantly faster than those from a conjugate gradient descent algorithm with a diagonal preconditioner, even though the computational cost per iteration is similar. The messenger method recovers large scales in the map better than conjugate gradient descent, and yields a lower overall χ2. In the single, pencil beam approximation, each iteration of the messenger mapmaking procedure produces an unbiased map, and the iterations become more optimal as they proceed. A variant of the method can handle differential data or perform deconvolution mapmaking. The messenger method requires no preconditioner, but a high-quality solution needs a cooling parameter to control the convergence. We study the convergence properties of this new method and discuss how the algorithm is feasible for the large data sets of current and future CMB experiments.

  18. The applicability of ordinary least squares to consistently short distances between taxa in phylogenetic tree construction and the normal distribution test consequences.

    PubMed

    Roux, C Z

    2009-05-01

    Short phylogenetic distances between taxa occur, for example, in studies on ribosomal RNA-genes with slow substitution rates. For consistently short distances, it is proved that in the completely singular limit of the covariance matrix ordinary least squares (OLS) estimates are minimum variance or best linear unbiased (BLU) estimates of phylogenetic tree branch lengths. Although OLS estimates are in this situation equal to generalized least squares (GLS) estimates, the GLS chi-square likelihood ratio test will be inapplicable as it is associated with zero degrees of freedom. Consequently, an OLS normal distribution test or an analogous bootstrap approach will provide optimal branch length tests of significance for consistently short phylogenetic distances. As the asymptotic covariances between branch lengths will be equal to zero, it follows that the product rule can be used in tree evaluation to calculate an approximate simultaneous confidence probability that all interior branches are positive.

  19. Radiation exposure and performance of multiple burn LEO-GEO orbit transfer trajectories

    NASA Technical Reports Server (NTRS)

    Gorland, S. H.

    1985-01-01

    Many potential strategies exist for the transfer of spacecraft from low Earth orbit (LEO) to geosynchronous (GEO) orbit. One strategy has generally been utilized, that being a single impulsive burn at perigee and a GEO insertion burn at apogee. Multiple burn strategies were discussed for orbit transfer vehicles (OTVs) but the transfer times and radiation exposure, particularly for potentially manned missions, were used as arguments against those options. Quantitative results concerning the trip time and radiation encountered by multiple burn orbit transfer missions in order to establish the feasibility of manned missions, the vulnerability of electronics, and the shielding requirements are presented. The performance of these multiple burn missions is quantified in terms of the payload and propellant variances from the minimum energy mission transfer. The missions analyzed varied from one to eight perigee burns and ranged from a high thrust, 1 g acceleration, cryogenic hydrogen-oxygen chemical propulsion system to a continuous burn, 0.001 g acceleration, hydrogen fueled resistojet propulsion system with a trip time of 60 days.

  20. A Fast and Robust Beamspace Adaptive Beamformer for Medical Ultrasound Imaging.

    PubMed

    Mohades Deylami, Ali; Mohammadzadeh Asl, Babak

    2017-06-01

    The minimum variance beamformer (MVB) increases the resolution and contrast of medical ultrasound imaging compared with nonadaptive beamformers. These advantages come at the expense of high computational complexity, which prevents this adaptive beamformer from being applied in a real-time imaging system. A new beamspace (BS) based on the discrete cosine transform is proposed, in which medical ultrasound signals can be represented with fewer dimensions than in the standard BS. This is because the beams in the proposed BS have symmetric beampatterns, in contrast to the asymmetric ones in the standard BS. This lets us reduce the dimension of the data to two, so a highly complex algorithm such as the MVB can be applied faster in this BS. The results indicate that, by keeping only two beams, the MVB in the proposed BS provides very similar resolution and better contrast compared with the standard MVB (SMVB), with only 0.44% of the required flops. This beamformer is also more robust against sound-speed estimation errors than the SMVB.
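
The minimum variance (Capon) weights underlying the MVB have the closed form w = R⁻¹a / (aᴴR⁻¹a), which keeps unit gain toward the steering vector a while minimizing output power. A minimal sketch on a toy four-element array; the diagonal loading level is an illustrative robustness choice, not a parameter from the paper.

```python
import numpy as np

def mv_weights(R, a, loading=1e-3):
    """Capon / minimum variance weights w = R^{-1} a / (a^H R^{-1} a),
    with light diagonal loading for numerical robustness."""
    n = R.shape[0]
    Rl = R + loading * (np.trace(R).real / n) * np.eye(n)
    Ria = np.linalg.solve(Rl, a)
    return Ria / (a.conj() @ Ria)   # unit gain toward the steering vector

# Toy 4-element array: steering vector of ones (broadside arrival) and a
# white-noise covariance, for which the MV weights reduce to uniform 1/4.
a = np.ones(4, dtype=complex)
R = np.eye(4, dtype=complex)
w = mv_weights(R, a)
```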

Top