Sample records for LLP variational method

  1. Learning from label proportions in brain-computer interfaces: Online unsupervised learning with guarantees

    PubMed Central

    Hübner, David; Verhoeven, Thibault; Schmid, Konstantin; Müller, Klaus-Robert; Tangermann, Michael; Kindermans, Pieter-Jan

    2017-01-01

    Objective: Using traditional approaches, a brain-computer interface (BCI) requires the collection of calibration data for new subjects prior to online use. Calibration time can be reduced or eliminated, e.g., by subject-to-subject transfer of a pre-trained classifier or by unsupervised adaptive classification methods which learn from scratch and adapt over time. While such heuristics work well in practice, none of them can provide theoretical guarantees. Our objective is to modify an event-related potential (ERP) paradigm to work in unison with the machine learning decoder, and thus to achieve reliable unsupervised, calibrationless decoding with a guarantee to recover the true class means. Method: We introduce learning from label proportions (LLP) to the BCI community as a new unsupervised and easy-to-implement classification approach for ERP-based BCIs. LLP estimates the mean target and non-target responses based on known proportions of these two classes in different groups of the data. We present a visual ERP speller to meet the requirements of LLP. For evaluation, we ran simulations on artificially created data sets and conducted an online BCI study with 13 subjects performing a copy-spelling task. Results: Theoretical considerations show that LLP is guaranteed to minimize the loss function similar to a corresponding supervised classifier. LLP performed well in simulations and in the online application, where 84.5% of characters were spelled correctly on average without prior calibration. Significance: The continuously adapting LLP classifier is the first unsupervised decoder for ERP BCIs guaranteed to find the optimal decoder. This makes it an ideal solution to avoid tedious calibration sessions. Additionally, LLP works on complementary principles compared to existing unsupervised methods, opening the door for their further enhancement when combined with LLP. PMID:28407016
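    The mean-estimation step that LLP relies on can be made concrete with a small sketch: when two groups of stimuli contain targets in different, known proportions, the observed group means are linear mixtures of the class means, and inverting that mixture recovers the class means without any labels. All numbers below (feature values, noise level, group proportions) are illustrative assumptions, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D "ERP feature" class means (illustrative values only).
mu_target, mu_nontarget = 2.0, -1.0

# Two stimulus groups whose target proportions are fixed by paradigm design.
p1, p2 = 0.4, 0.1
n = 100_000

def sample_group(p, n):
    """Draw unlabeled feature samples; a fraction p comes from the target class."""
    is_target = rng.random(n) < p
    return np.where(is_target, mu_target, mu_nontarget) + rng.normal(0.0, 1.0, n)

m1 = sample_group(p1, n).mean()
m2 = sample_group(p2, n).mean()

# Each group mean mixes the class means linearly:
#   m_i = p_i * mu_T + (1 - p_i) * mu_N
# Invert the 2x2 mixing matrix to recover the class means, labels never used.
A = np.array([[p1, 1 - p1],
              [p2, 1 - p2]])
mu_T_hat, mu_N_hat = np.linalg.solve(A, np.array([m1, m2]))
```

    Because the mixing matrix is fixed by the paradigm design, the estimates converge to the true class means as unlabeled data accumulates, which is the kind of guarantee the abstract refers to.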

  2. Learning from label proportions in brain-computer interfaces: Online unsupervised learning with guarantees.

    PubMed

    Hübner, David; Verhoeven, Thibault; Schmid, Konstantin; Müller, Klaus-Robert; Tangermann, Michael; Kindermans, Pieter-Jan

    2017-01-01

    Using traditional approaches, a brain-computer interface (BCI) requires the collection of calibration data for new subjects prior to online use. Calibration time can be reduced or eliminated, e.g., by subject-to-subject transfer of a pre-trained classifier or by unsupervised adaptive classification methods which learn from scratch and adapt over time. While such heuristics work well in practice, none of them can provide theoretical guarantees. Our objective is to modify an event-related potential (ERP) paradigm to work in unison with the machine learning decoder, and thus to achieve reliable unsupervised, calibrationless decoding with a guarantee to recover the true class means. We introduce learning from label proportions (LLP) to the BCI community as a new unsupervised and easy-to-implement classification approach for ERP-based BCIs. LLP estimates the mean target and non-target responses based on known proportions of these two classes in different groups of the data. We present a visual ERP speller to meet the requirements of LLP. For evaluation, we ran simulations on artificially created data sets and conducted an online BCI study with 13 subjects performing a copy-spelling task. Theoretical considerations show that LLP is guaranteed to minimize the loss function similar to a corresponding supervised classifier. LLP performed well in simulations and in the online application, where 84.5% of characters were spelled correctly on average without prior calibration. The continuously adapting LLP classifier is the first unsupervised decoder for ERP BCIs guaranteed to find the optimal decoder. This makes it an ideal solution to avoid tedious calibration sessions. Additionally, LLP works on complementary principles compared to existing unsupervised methods, opening the door for their further enhancement when combined with LLP.

  3. PET Imaging of Very Late Antigen-4 in Melanoma: Comparison of 68Ga- and 64Cu-Labeled NODAGA and CB-TE1A1P-LLP2A Conjugates

    PubMed Central

    Beaino, Wissam; Anderson, Carolyn J.

    2014-01-01

    Melanoma is a malignant tumor derived from epidermal melanocytes, and it is known for its aggressiveness, therapeutic resistance, and predisposition for late metastasis. Very late antigen-4 (VLA-4; also called integrin α4β1) is a transmembrane noncovalent heterodimer overexpressed in melanoma tumors that plays an important role in tumor growth, angiogenesis, and metastasis by promoting adhesion and migration of cancer cells. In this study, we evaluated 2 conjugates of a high-affinity VLA-4 peptidomimetic ligand, LLP2A, for PET/CT imaging in subcutaneous and metastatic melanoma tumor models. Methods: LLP2A was conjugated to 1,4,8,11-tetraazacyclotetradecane-1-(methane phosphonic acid)-8-(methane carboxylic acid) (CB-TE1A1P) and 2-(4,7-bis(carboxymethyl)-1,4,7-triazonan-1-yl)pentanedioic acid (NODAGA) chelators for 68Ga and 64Cu labeling. The conjugates were synthesized by solid-phase peptide synthesis, purified by reversed-phase high-performance liquid chromatography, and verified by liquid chromatography mass spectrometry. Saturation and competitive binding assays with B16F10 melanoma cells determined the affinity of the compounds for VLA-4. The biodistributions of the LLP2A conjugates were evaluated in murine B16F10 subcutaneous tumor–bearing C57BL/6 mice. Melanoma metastasis was induced by intracardiac injection of B16F10 cells. PET/CT imaging was performed at 2, 4, and 24 h after injection for the 64Cu tracers and 1 h after injection for the 68Ga tracer. Results: 64Cu-labeled CB-TE1A1P-PEG4-LLP2A and NODAGA-PEG4-LLP2A showed high affinity to VLA-4, with comparable dissociation constants (0.28 vs. 0.23 nM) and receptor concentrations (296 vs. 243 fmol/mg). The tumor uptake at 2 h after injection was comparable for the 2 probes, but 64Cu-CB-TE1A1P-PEG4-LLP2A trended toward higher uptake than 64Cu-NODAGA-PEG4-LLP2A (16.9 ± 2.2 vs. 13.4 ± 1.7 percentage injected dose per gram, P = 0.07). Tumor-to-muscle and tumor-to-blood ratios from biodistribution and PET/CT images were significantly higher for 64Cu-CB-TE1A1P-PEG4-LLP2A than for 64Cu-NODAGA-PEG4-LLP2A (all P values < 0.05). PET/CT imaging of metastatic melanoma with 68Ga-NODAGA-PEG4-LLP2A and 64Cu-NODAGA-PEG4-LLP2A showed high uptake of the probes at the sites of metastasis, correlating with bioluminescence imaging of the tumor. Conclusion: These data demonstrate that 64Cu-labeled CB-TE1A1P/NODAGA LLP2A conjugates and 68Ga-labeled NODAGA-LLP2A are excellent imaging agents for melanoma and potentially other VLA-4–positive tumors. 64Cu-CB-TE1A1P-PEG4-LLP2A had the most favorable tumor–to–nontarget tissue ratios for translation into humans as a PET imaging agent for melanoma. PMID:25256059

  4. Rational site-directed mutations of the LLP-1 and LLP-2 lentivirus lytic peptide domains in the intracytoplasmic tail of human immunodeficiency virus type 1 gp41 indicate common functions in cell-cell fusion but distinct roles in virion envelope incorporation.

    PubMed

    Kalia, Vandana; Sarkar, Surojit; Gupta, Phalguni; Montelaro, Ronald C

    2003-03-01

    Two highly conserved cationic amphipathic alpha-helical motifs, designated lentivirus lytic peptides 1 and 2 (LLP-1 and LLP-2), have been characterized in the carboxyl terminus of the transmembrane (TM) envelope glycoprotein (Env) of lentiviruses. Although various properties have been attributed to these domains, their structural and functional significance is not clearly understood. To determine the specific contributions of the Env LLP domains to Env expression, processing, and incorporation and to viral replication and syncytium induction, site-directed LLP mutants of a primary dualtropic infectious human immunodeficiency virus type 1 (HIV-1) isolate (ME46) were examined. Substitutions were made for highly conserved arginine residues in either the LLP-1 or LLP-2 domain (MX1 or MX2, respectively) or in both domains (MX4). The HIV-1 mutants with altered LLP domains demonstrated distinct phenotypes. The LLP-1 mutants (MX1 and MX4) were replication defective and showed an average of 85% decrease in infectivity, which was associated with an evident decrease in gp41 incorporation into virions without a significant decrease in Env expression or processing in transfected 293T cells. In contrast, MX2 virus was replication competent and incorporated a full complement of Env into its virions, indicating a differential role for the LLP-1 domain in Env incorporation. Interestingly, the replication-competent MX2 virus was impaired in its ability to induce syncytia in T-cell lines. This defect in cell-cell fusion did not correlate with apparent defects in the levels of cell surface Env expression, oligomerization, or conformation. The lack of syncytium formation, however, correlated with a decrease of about 90% in MX2 Env fusogenicity compared to that of wild-type Env in quantitative luciferase-based cell-cell fusion assays. The LLP-1 mutant MX1 and MX4 Envs also exhibited an average of 80% decrease in fusogenicity. 
Altogether, these results demonstrate for the first time that the highly conserved LLP domains perform critical but distinct functions in Env incorporation and fusogenicity.

  5. UV exposure modulates hemidesmosome plasticity, contributing to long-term pigmentation in human skin

    PubMed Central

    Coelho, Sergio G.; Valencia, Julio C.; Yin, Lanlan; Smuda, Christoph; Mahns, Andre; Kolbe, Ludger; Miller, Sharon A.; Beer, Janusz Z.; Zhang, Guofeng; Tuma, Pamela L.; Hearing, Vincent J.

    2014-01-01

    Human skin color, i.e. pigmentation, differs widely among individuals as do their responses to various types of ultraviolet radiation (UV) and their risks of skin cancer. In some individuals UV-induced pigmentation persists for months to years in a phenomenon termed long-lasting pigmentation (LLP). It is unclear whether LLP is an indicator of potential risk for skin cancer. LLP seems to have similar features to other forms of hyperpigmentation, e.g. solar lentigines or age spots, which are clinical markers of photodamage and risk factors for precancerous lesions. To investigate what UV-induced molecular changes may persist in individuals with LLP, clinical specimens from non-sunburn-inducing repeated UV exposures (UVA, UVB or UVA+UVB) at 4 months post-exposure (short-term LLP) were evaluated by microarray analysis and dataset mining. Validated targets were further evaluated in clinical specimens from 6 healthy individuals (3 LLP+ and 3 LLP-) followed for more than 9 months (long-term LLP) who initially received a single sunburn-inducing UVA+UVB exposure. The results support a UV-induced hyperpigmentation model in which basal keratinocytes have an impaired ability to remove melanin that leads to a compensatory mechanism by neighboring keratinocytes with increased proliferative capacity to maintain skin homeostasis. The attenuated expression of SOX7 and other hemidesmosomal components (integrin α6β4 and plectin) leads to increased melanosome uptake by keratinocytes and points to a spatial regulation within the epidermis. The reduced density of hemidesmosomes provides supporting evidence for plasticity at the epidermal-dermal junction. Altered hemidesmosome plasticity, and the sustained nature of LLP, may be mediated by the role of SOX7 in basal keratinocytes. The long-term sustained subtle changes detected are modest, but sufficient to create dramatic visual differences in skin color. 
These results suggest that the hyperpigmentation phenomenon leading to increased interdigitation develops in order to maintain normal skin homeostasis in individuals with LLP. PMID:25488118

  6. UV exposure modulates hemidesmosome plasticity, contributing to long-term pigmentation in human skin.

    PubMed

    Coelho, Sergio G; Valencia, Julio C; Yin, Lanlan; Smuda, Christoph; Mahns, Andre; Kolbe, Ludger; Miller, Sharon A; Beer, Janusz Z; Zhang, Guofeng; Tuma, Pamela L; Hearing, Vincent J

    2015-05-01

    Human skin colour, ie pigmentation, differs widely among individuals, as do their responses to various types of ultraviolet radiation (UV) and their risks of skin cancer. In some individuals, UV-induced pigmentation persists for months to years in a phenomenon termed long-lasting pigmentation (LLP). It is unclear whether LLP is an indicator of potential risk for skin cancer. LLP seems to have similar features to other forms of hyperpigmentation, eg solar lentigines or age spots, which are clinical markers of photodamage and risk factors for precancerous lesions. To investigate what UV-induced molecular changes may persist in individuals with LLP, clinical specimens from non-sunburn-inducing repeated UV exposures (UVA, UVB or UVA + UVB) at 4 months post-exposure (short-term LLP) were evaluated by microarray analysis and dataset mining. Validated targets were further evaluated in clinical specimens from six healthy individuals (three LLP+ and three LLP-) followed for more than 9 months (long-term LLP) who initially received a single sunburn-inducing UVA + UVB exposure. The results support a UV-induced hyperpigmentation model in which basal keratinocytes have an impaired ability to remove melanin that leads to a compensatory mechanism by neighbouring keratinocytes with increased proliferative capacity to maintain skin homeostasis. The attenuated expression of SOX7 and other hemidesmosomal components (integrin α6β4 and plectin) leads to increased melanosome uptake by keratinocytes and points to a spatial regulation within the epidermis. The reduced density of hemidesmosomes provides supporting evidence for plasticity at the epidermal-dermal junction. Altered hemidesmosome plasticity, and the sustained nature of LLP, may be mediated by the role of SOX7 in basal keratinocytes. The long-term sustained subtle changes detected are modest, but sufficient to create dramatic visual differences in skin colour. 
These results suggest that the hyperpigmentation phenomenon leading to increased interdigitation develops in order to maintain normal skin homeostasis in individuals with LLP. Copyright © 2014 Pathological Society of Great Britain and Ireland. Published by John Wiley & Sons, Ltd.

  7. Quality Control Review of the PricewaterhouseCoopers LLP FY 2014 Single Audit of Carnegie Mellon University

    DTIC Science & Technology

    2015-12-17

    No. DODIG-2016-034, December 17, 2015. Quality Control Review of the PricewaterhouseCoopers LLP FY 2014 Single Audit of Carnegie ...ALEXANDRIA, VIRGINIA 22350-1500, December 17, 2015. Audit Partner, PricewaterhouseCoopers LLP; Board of Trustees, Carnegie Mellon University; Director, Sponsored...Projects Accounting, Carnegie Mellon University. SUBJECT: Quality Control Review of the PricewaterhouseCoopers LLP FY 2014 Single Audit of Carnegie

  8. 17 CFR 230.437a - Written consents.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...) Are filing a registration statement containing financial statements in which Arthur Andersen LLP (or a foreign affiliate of Arthur Andersen LLP) had been acting as the independent public accountant. (b... dispense with the requirement for the registrant to file the written consent of Arthur Andersen LLP (or a...

  9. Hartree-Fock treatment of Fermi polarons using the Lee-Low-Pine transformation

    NASA Astrophysics Data System (ADS)

    Kain, Ben; Ling, Hong Y.

    2017-09-01

    We consider the Fermi polaron problem at zero temperature, where a single impurity interacts with noninteracting host fermions. We approach the problem starting with a Fröhlich-like Hamiltonian where the impurity is described with canonical position and momentum operators. We apply the Lee-Low-Pine (LLP) transformation to change the fermionic Fröhlich Hamiltonian into the fermionic LLP Hamiltonian, which describes a many-body system containing host fermions only. We adapt the self-consistent Hartree-Fock (HF) approach, first proposed by Edwards, to the fermionic LLP Hamiltonian, in which a pair of host fermions with momenta k and k' interact with a potential proportional to k · k'. We apply the HF theory, which has the advantage of not restricting the number of particle-hole pairs, to repulsive Fermi polarons in one dimension. When the impurity and host fermion masses are equal, our variational ansatz, where HF orbitals are expanded in terms of free-particle states, produces results in excellent agreement with McGuire's exact analytical results based on the Bethe ansatz. This work raises the prospect of using the HF ansatz and its time-dependent generalization as building blocks for developing all-coupling theories for both equilibrium and nonequilibrium Fermi polarons in higher dimensions.
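    The transformation step described above can be written compactly. This is a sketch of the standard Lee-Low-Pine construction in a common notation; the symbols are illustrative and not necessarily the authors' exact conventions:

```latex
% LLP unitary: boost to the frame co-moving with the impurity,
% with the impurity position coupled to the total fermion momentum.
U = \exp\!\Big( i\,\hat{\mathbf{x}}_{\mathrm{imp}} \cdot
      \sum_{\mathbf{k}} \mathbf{k}\,\hat{c}^{\dagger}_{\mathbf{k}}\hat{c}_{\mathbf{k}} \Big),
\qquad
U^{\dagger}\,\frac{\hat{\mathbf{p}}_{\mathrm{imp}}^{2}}{2M}\,U
\;\to\;
\frac{1}{2M}\Big( \mathbf{Q} -
      \sum_{\mathbf{k}} \mathbf{k}\,\hat{c}^{\dagger}_{\mathbf{k}}\hat{c}_{\mathbf{k}} \Big)^{2}
```

    Expanding the square generates the pairwise k · k'/M couplings between host fermions referred to in the abstract, with the total momentum Q entering only as a conserved c-number; this is the price of eliminating the impurity coordinate.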

  10. 76 FR 15826 - Fisheries of the Exclusive Economic Zone Off Alaska; Gulf of Alaska License Limitation Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-22

    ... or non-trawl gear designations); and (4) designate the type of vessel operation permitted (i.e., LLP... catcher/processor). The endorsements for specific regulatory areas, gear designations, and vessel operational types are non-severable from the LLP license (i.e., once an LLP license is issued, the components...

  11. Improved Mobilization of Exogenous Mesenchymal Stem Cells to Bone for Fracture Healing and Sex Difference

    PubMed Central

    Yao, Wei; Evan Lay, Yu-An; Kot, Alexander; Liu, Ruiwu; Zhang, Hongliang; Chen, Haiyan; Lam, Kit; Lane, Nancy E.

    2017-01-01

    Mesenchymal stem cell (MSC) transplantation has been tested in animal and clinical fracture studies. We have developed a bone-seeking compound, LLP2A-Alendronate (LLP2A-Ale), that augments MSC homing to bone. The purpose of this study was to determine whether treatment with LLP2A-Ale or a combination of LLP2A-Ale and MSCs would accelerate bone healing in a mouse closed fracture model, and whether the effects are sex dependent. A right mid-femur fracture was induced in two-month-old osterix-mCherry (Osx-mCherry) male and female reporter mice. The mice were subsequently treated with placebo, LLP2A-Ale (500 µg/kg, IV), MSCs derived from wild-type female Osx-mCherry adipose tissue (ADSC, 3 × 10^5, IV), or ADSC + LLP2A-Ale. In phosphate buffered saline-treated mice, females had higher systemic and surface-based bone formation than males. However, male mice formed a larger callus and had higher volumetric bone mineral density and bone strength than females. LLP2A-Ale treatment increased exogenous MSC homing to the fracture gaps, enhanced incorporation of these cells into callus formation, and stimulated endochondral bone formation. Additionally, higher engraftment of exogenous MSCs in fracture gaps seemed to contribute to overall fracture healing and improved bone strength. These effects were sex-independent. There was a sex difference in the rate of fracture healing. ADSC and LLP2A-Ale combination treatment was superior with respect to callus formation, independent of sex. Increased mobilization of exogenous MSCs to fracture sites accelerated endochondral bone formation and enhanced bone tissue regeneration. PMID:27334693

  12. Evaluation of (68)Ga- and (177)Lu-DOTA-PEG4-LLP2A for VLA-4-Targeted PET Imaging and Treatment of Metastatic Melanoma.

    PubMed

    Beaino, Wissam; Nedrow, Jessie R; Anderson, Carolyn J

    2015-06-01

    Malignant melanoma is a highly aggressive cancer, and the incidence of this disease is increasing worldwide at an alarming rate. Despite advances in the treatment of melanoma, patients with metastatic disease still have a poor prognosis and low survival rate. New strategies, including targeted radiotherapy, would provide options for patients who become resistant to therapies such as BRAF inhibitors. Very late antigen-4 (VLA-4) is expressed on melanoma tumor cells in higher levels in more aggressive and metastatic disease and may provide an ideal target for drug delivery and targeted radiotherapy. In this study, we evaluated (177)Lu- and (68)Ga-labeled DOTA-PEG4-LLP2A as a VLA-4-targeted radiotherapeutic with a companion PET agent for diagnosis and monitoring metastatic melanoma treatment. DOTA-PEG4-LLP2A was synthesized by solid-phase synthesis. The affinity of (177)Lu- and (68)Ga-labeled DOTA-PEG4-LLP2A to VLA-4 was determined in B16F10 melanoma cells by saturation binding and competitive binding assays, respectively. Biodistribution of the LLP2A conjugates was determined in C57BL/6 mice bearing B16F10 subcutaneous tumors, while PET/CT imaging was performed in subcutaneous and metastatic models. (177)Lu-DOTA-PEG4-LLP2A showed high affinity to VLA-4 with a Kd of 4.1 ± 1.5 nM and demonstrated significant accumulation in the B16F10 melanoma tumor after 4 h (31.5 ± 7.8%ID/g). The tumor/blood ratio of (177)Lu-DOTA-PEG4-LLP2A was highest at 24 h (185 ± 26). PET imaging of metastatic melanoma with (68)Ga-DOTA-PEG4-LLP2A showed high uptake in sites of metastases and correlated with bioluminescence imaging of the tumors. These data demonstrate that (177)Lu-DOTA-PEG4-LLP2A has potential as a targeted therapeutic for treating melanoma as well as other VLA-4-expressing tumors. In addition, (68)Ga-DOTA-PEG4-LLP2A is a readily translatable companion PET tracer for imaging of metastatic melanoma.

  13. Synthesis of Lectin-Like Protein in Developing Cotyledons of Normal and Phytohemagglutinin-Deficient Phaseolus vulgaris.

    PubMed

    Vitale, A; Zoppè, M; Fabbrini, M S; Genga, A; Rivas, L; Bollini, R

    1989-07-01

    The genome of the common bean Phaseolus vulgaris contains a small gene family that encodes lectin and lectin-like proteins (phytohemagglutinin, arcelin, and others). One of these phytohemagglutinin-like genes was cloned by L. M. Hoffman et al. ([1982] Nucleic Acids Res 10: 7819-7828), but its product in bean cells has never been identified. We identified the product of this gene, referred to as lectin-like protein (LLP), as an abundant polypeptide synthesized on the endoplasmic reticulum (ER) of developing bean cotyledons. The gene product was first identified in extracts of Xenopus oocytes injected with either cotyledonary bean RNA or LLP-mRNA obtained by hybrid-selection with an LLP cDNA clone. A tryptic map of this protein was identical with a tryptic map of a polypeptide with the same SDS-PAGE mobility detectable in the ER of bean cotyledons pulse-labeled with either [(3)H]glucosamine or [(3)H]amino acids, both in a normal and in a phytohemagglutinin-deficient cultivar (cultivars Greensleeves and Pinto UI 111). Greensleeves LLP has M(r) 40,000 and most probably has four asparagine-linked glycans. Pinto UI 111 LLP has M(r) 38,500. Unlike phytohemagglutinin which is a tetramer, LLP appears to be a monomer by gel filtration analysis. Incorporation of [(3)H]amino acids indicates that synthesis of LLP accounts for about 3% of the proteins synthesized on the ER, a level similar to that of phytohemagglutinin.

  14. Space station automation of common module power management and distribution, volume 2

    NASA Technical Reports Server (NTRS)

    Ashworth, B.; Riedesel, J.; Myers, C.; Jakstas, L.; Smith, D.

    1990-01-01

    The new Space Station Module Power Management and Distribution System (SSM/PMAD) testbed automation system is described. The subjects discussed include the testbed 120 volt dc star bus configuration and operation, the SSM/PMAD automation system architecture, fault recovery and management expert system (FRAMES) rules in English representation, the SSM/PMAD user interface, and the SSM/PMAD future direction. Several appendices are presented, including the following: SSM/PMAD interface user manual version 1.0, SSM/PMAD lowest level processor (LLP) reference, SSM/PMAD technical reference version 1.0, SSM/PMAD LLP visual control logic representations (VCLRs), SSM/PMAD LLP/FRAMES interface control document (ICD), and SSM/PMAD LLP switchgear interface controller (SIC) ICD.

  15. "The Help I Didn't Know I Needed": How a Living-Learning Program "FITS" into the First-Year Experience

    ERIC Educational Resources Information Center

    Mach, Kane P.; Gordon, Sarah R.; Tearney, Katie; McClinton, Leon, Jr.

    2018-01-01

    Living-learning programs (LLPs) can have a positive influence on student success. The purpose of this study is to provide an example of an LLP that effectively involves faculty and college academic staff as well as to understand student experiences in the LLP related to personal, professional, and academic growth. The LLP at the focus of this…

  16. Evidence that long-lasting potentiation in limbic circuits mediating defensive behaviour in the right hemisphere underlies pharmacological stressor (FG-7142) induced lasting increases in anxiety-like behaviour: role of benzodiazepine receptors.

    PubMed

    Adamec, R E

    2000-01-01

    The hypothesis that benzodiazepine receptors mediate initiation of lasting behavioural changes induced by FG-7142 was supported in this study. Behavioural changes normally induced by FG-7142 were blocked by prior administration of the competitive benzodiazepine receptor blocker, Flumazenil. When cats were subsequently given FG-7142 alone, the drug produced lasting behavioural changes in species-characteristic defensive responses to rodent and cat vocal threat. FG-7142 also induced long-lasting potentiation (LLP) of evoked potentials in a number of efferent pathways from the amygdala in both hemispheres. Flumazenil given prior to FG-7142 blocked LLP in all but one of the amygdala efferent pathways, suggesting benzodiazepine receptor dependence of initiation of LLP. Three physiological changes were most closely correlated with behavioural changes. LLP in the right amygdalo-ventromedial hypothalamic (VMH) and amygdalo-periaqueductal gray (PAG) pathways coincided closely with behavioural changes, as did a reduced threshold for the right amygdalo-VMH evoked potential. Administration of Flumazenil after FG-7142 returned defensive behaviour to the pre-FG-7142 baseline in a drug-dependent manner. At the same time, LLP only in the right amygdalo-PAG pathway was reduced by Flumazenil. LLP in other pathways and the amygdalo-VMH threshold were unaltered by Flumazenil. Moreover, covariance analyses indicated that increased defensiveness depended solely on LLP in the right amygdalo-PAG pathway. These findings support the view that maintenance of lasting increases in defensive behaviour depends upon LLP of excitatory neural transmission between the amygdala and the lateral column of the PAG in the right hemisphere. Moreover, FG-7142 may be a useful model of the effects of traumatic stressors on limbic system function in anxiety, especially in view of the recent data in humans implicating right hemispheric function in persisting negative affective states in post-traumatic stress disorder.

  17. Pig lumbar spine anatomy and imaging-guided lateral lumbar puncture: a new large animal model for intrathecal drug delivery.

    PubMed

    Pleticha, Josef; Maus, Timothy P; Jeng-Singh, Christian; Marsh, Michael P; Al-Saiegh, Fadi; Christner, Jodie A; Lee, Kendall H; Beutler, Andreas S

    2013-05-30

    Intrathecal (IT) administration is an important route of drug delivery, and its modelling in a large animal species is of critical value. Although domestic swine is the preferred species for preclinical pharmacology, no minimally invasive method has been established to deliver agents into the IT space. While a "blind" lumbar puncture (LP) can sample cerebrospinal fluid (CSF), it is unreliable for drug delivery in pigs. Using computed tomography (CT), we determined the underlying anatomical reasons for this irregularity. The pig spinal cord was visualised terminating at the S2-S3 level. The lumbar region contained only small amounts of CSF found in the lateral recess. Additional anatomical constraints included ossification of the midline ligaments, overlapping lamina with small interlaminar spaces, and a large bulk of epidural adipose tissue. Accommodating the pig CT anatomy, we developed a lateral LP (LLP) injection technique that employs advanced planning of the needle path and monitoring of the IT injection progress. The key features of the LLP procedure involved choosing a vertebral level without overlapping lamina or spinal ligament ossification, a needle trajectory crossing the midline, and entering the IT space in its lateral recess. Effective IT delivery was validated by the injection of contrast media to obtain a CT myelogram. LLP represents a safe and reliable method to deliver agents to the lumbar pig IT space, which can be implemented in a straightforward way by any laboratory with access to CT equipment. Therefore, LLP is an attractive large animal model for preclinical studies of IT therapies. Copyright © 2013 Elsevier B.V. All rights reserved.

  18. Comparison of two cross-bridged macrocyclic chelators for the evaluation of 64Cu-labeled-LLP2A, a peptidomimetic ligand targeting VLA-4-positive tumors

    PubMed Central

    Jiang, Majiong; Ferdani, Riccardo; Shokeen, Monica; Anderson, Carolyn J.

    2013-01-01

    Integrin α4β1 (also called very late antigen-4 or VLA-4) plays an important role in tumor growth, angiogenesis and metastasis, and there has been increasing interest in targeting this receptor for cancer imaging and therapy. In this study, we conjugated a peptidomimetic ligand known to have good binding affinity for α4β1 integrin to a cross-bridged macrocyclic chelator with a methane phosphonic acid pendant arm, CB-TE1A1P. CB-TE1A1P-LLP2A was labeled with 64Cu under mild conditions in high specific activity, in contrast to conjugates based on the "gold standard" di-acid cross-bridged chelator, CB-TE2A, which require high temperatures for efficient radiolabeling. Saturation binding assays demonstrated that 64Cu-CB-TE1A1P-LLP2A had comparable binding affinity (1.2 nM vs 1.6 nM) but more binding sites (Bmax = 471 fmol/mg) in B16F10 melanoma tumor cells than 64Cu-CB-TE2A-LLP2A (Bmax = 304 fmol/mg, p < 0.03). In biodistribution studies, 64Cu-CB-TE1A1P-LLP2A had less renal retention but higher uptake in tumor (11.4 ± 2.3 %ID/g versus 3.1 ± 0.6 %ID/g, p < 0.001) and other receptor-rich tissues compared to 64Cu-CB-TE2A-LLP2A. At 2 h post-injection, 64Cu-CB-TE1A1P-LLP2A also had significantly higher tumor:blood and tumor:muscle ratios than 64Cu-CB-TE2A-LLP2A (CB-TE1A1P = 19.5 ± 3.0 and 13.0 ± 1.4, respectively; CB-TE2A = 4.2 ± 1.4 and 5.5 ± 0.9, respectively; p < 0.001). These data demonstrate that 64Cu-CB-TE1A1P-LLP2A is an excellent PET radiopharmaceutical for the imaging of α4β1-positive tumors and also has potential for imaging other α4β1-positive cells such as those of the pre-metastatic niche. PMID:23265977
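    The saturation-binding parameters quoted in these records (a dissociation constant Kd and a site density Bmax) parameterize the one-site binding isotherm. A minimal sketch using the reported 64Cu-CB-TE1A1P-LLP2A values as inputs; the function and checks are generic illustrations, not the study's actual fitting code:

```python
# One-site saturation binding isotherm: B(L) = Bmax * L / (Kd + L)
# Kd and Bmax are the values reported above for 64Cu-CB-TE1A1P-LLP2A.
Kd = 1.2      # nM, dissociation constant
Bmax = 471.0  # fmol/mg, receptor (binding site) concentration

def specific_binding(L):
    """Specifically bound tracer (fmol/mg) at free ligand concentration L (nM)."""
    return Bmax * L / (Kd + L)

# Sanity checks on the model: half-occupancy at L = Kd,
# and near-saturation once L is 100x Kd.
assert abs(specific_binding(Kd) - Bmax / 2) < 1e-9
assert specific_binding(120.0) > 0.98 * Bmax
```

    Fitting this curve to bound-vs-free measurements is what yields the Kd and Bmax comparisons between the two chelator conjugates.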

  19. Army’s Audit Readiness at Risk Because of Unreliable Data in the Appropriation Status Report

    DTIC Science & Technology

    2014-06-26

    for data reviewed from the December 2012 report. Material differences existed between reported data from the General Fund Enterprise Business System...GFEBS. An independent public accounting firm, KPMG LLP, performed examinations of SBR business processes at Army activities using GFEBS. KPMG LLP...to its processes. A second report from KPMG LLP, issued April 9, 2013, reported that the Army did not meet the FIAR guidance requirements for

  20. Data-driven model-independent searches for long-lived particles at the LHC

    NASA Astrophysics Data System (ADS)

    Coccaro, Andrea; Curtin, David; Lubatti, H. J.; Russell, Heather; Shelton, Jessie

    2016-12-01

    Neutral long-lived particles (LLPs) are highly motivated by many beyond the Standard Model scenarios, such as theories of supersymmetry, baryogenesis, and neutral naturalness, and present both tremendous discovery opportunities and experimental challenges for the LHC. A major bottleneck for current LLP searches is the prediction of Standard Model backgrounds, which are often impossible to simulate accurately. In this paper, we propose a general strategy for obtaining differential, data-driven background estimates in LLP searches, thereby notably extending the range of LLP masses and lifetimes that can be discovered at the LHC. We focus on LLPs decaying in the ATLAS muon system, where triggers providing both signal and control samples are available at LHC run 2. While many existing searches require two displaced decays, a detailed knowledge of backgrounds will allow for very inclusive searches that require just one detected LLP decay. As we demonstrate for the h → XX signal model of LLP pair production in exotic Higgs decays, this results in dramatic sensitivity improvements for proper lifetimes ≳ 10 m. In theories of neutral naturalness, this extends reach to glueball masses far below the bb̄ threshold. Our strategy readily generalizes to other signal models and other detector subsystems. This framework therefore lends itself to the development of a systematic, model-independent LLP search program, in analogy to the highly successful simplified-model framework of prompt searches.

  1. 75 FR 41179 - Sunshine Act Notices

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-15

    .... Laham, Esq., and D. Mark Renaud, Esq., of Wiley Rein LLP. Draft Advisory Opinion 2010-10: National Right... Ten, by its counsel, Marc E. Elias, Esq., and Ezra Reese, Esq., of Perkins Coie LLP. Management and...

  2. Targeted delivery of mesenchymal stem cells to the bone.

    PubMed

    Yao, Wei; Lane, Nancy E

    2015-01-01

    Osteoporosis is a disease of excess skeletal fragility that results from estrogen loss and aging. Age-related bone loss has been attributed to both elevated bone resorption and insufficient bone formation. We developed a hybrid compound, LLP2A-Ale, in which LLP2A has high affinity for the α4β1 integrin on mesenchymal stem cells (MSCs) and alendronate has high affinity for bone. When LLP2A-Ale was injected into mice, the compound directed MSCs to both trabecular and cortical bone surfaces and increased bone mass and bone strength. Additional studies are underway to further characterize this hybrid compound, LLP2A-Ale, and how it can be utilized for the treatment of bone loss resulting from hormone deficiency, aging, and inflammation, and to augment bone fracture healing. This article is part of a Special Issue entitled "Stem Cells and Bone". Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Nanoparticle targeted therapy against childhood acute lymphoblastic leukemia

    NASA Astrophysics Data System (ADS)

    Satake, Noriko; Lee, Joyce; Xiao, Kai; Luo, Juntao; Sarangi, Susmita; Chang, Astra; McLaughlin, Bridget; Zhou, Ping; Kenney, Elaina; Kraynov, Liliya; Arnott, Sarah; McGee, Jeannine; Nolta, Jan; Lam, Kit

    2011-06-01

    The goal of our project is to develop a unique ligand-conjugated nanoparticle (NP) therapy against childhood acute lymphoblastic leukemia (ALL). LLP2A, discovered by Dr. Kit Lam, is a high-affinity and high-specificity peptidomimetic ligand against the activated α4β1 integrin. Our study using 11 fresh primary ALL samples (10 precursor B ALL and 1 T ALL) showed that childhood ALL cells expressed activated α4β1 integrin and bound to LLP2A. Normal hematopoietic cells such as activated lymphocytes and monocytes expressed activated α4β1 integrin; however, normal hematopoietic stem cells showed low expression of α4β1 integrin. Therefore, we believe that LLP2A can be used as a targeted therapy for childhood ALL. The Lam lab has developed novel telodendrimer-based nanoparticles (NPs) which can carry drugs efficiently. We have also developed a human leukemia mouse model using immunodeficient NOD/SCID/IL2Rγ null mice engrafted with primary childhood ALL cells from our patients. LLP2A-conjugated NPs will be evaluated both in vitro and in vivo using primary leukemia cells and this mouse model. NPs will be loaded first with the near-infrared dye DiD, and then with the chemotherapeutic agents daunorubicin or vincristine. Both drugs are mainstays of current chemotherapy for childhood ALL. The targeting properties of LLP2A-conjugated NPs will be evaluated by fluorescence microscopy, flow cytometry, MTS assay, and mouse survival after treatment. We expect that LLP2A-conjugated NPs will be preferentially delivered to and endocytosed by leukemia cells as an effective targeted therapy.

  4. Clinical performance of an objective methodology to categorize tear film lipid layer patterns

    NASA Astrophysics Data System (ADS)

    Garcia-Resua, Carlos; Pena-Verdeal, Hugo; Giraldez, Maria J.; Yebra-Pimentel, Eva

    2017-08-01

    Purpose: To validate the performance of a new objective application, designated iDEAS (Dry Eye Assessment System), to categorize different zones of lipid layer patterns (LLPs) in one image. Material and methods: Using the Tearscope Plus and a digital camera attached to a slit lamp, 50 images were captured and analyzed by 4 experienced optometrists. In each image the observers outlined tear film zones that they clearly identified as a specific LLP. The categorization made by the 4 optometrists (called observers 1, 2, 3 and 4) was then compared with the automatic system included in iDEAS (the 5th observer). Results: In general, observer 3 classified worse than all other observers (observers 1, 2, 4 and the automatic application; Wilcoxon test, p < 0.05). The automatic system behaved similarly to the remaining three observers (observers 1, 2 and 4), showing differences only for the open meshwork LLP when compared with observer 4 (Wilcoxon test, p = 0.02). For the remaining two observers (observers 1 and 2), no statistically significant differences were found (Wilcoxon test, p > 0.05). Furthermore, we obtained a set of photographs per LLP category on which all optometrists agreed when using the new tool. After examining them, we identified the most characteristic features of each LLP to enhance the description of the patterns proposed by Guillon. Conclusions: The automatic application included in the iDEAS framework is able to provide zones similar to the annotations made by experienced optometrists. Thus, the manual process done by experts can be automated, with the benefit of being unaffected by subjective factors.

  5. Hydro-thermal processes and thermal offsets of peat soils in the active layer in an alpine permafrost region, NE Qinghai-Tibet plateau

    NASA Astrophysics Data System (ADS)

    Wang, Qingfeng; Jin, Huijun; Zhang, Tingjun; Cao, Bin; Peng, Xiaoqing; Wang, Kang; Xiao, Xiongxin; Guo, Hong; Mu, Cuicui; Li, Lili

    2017-09-01

    Observation data of the hydrothermal processes in the active layer are vital for the verification of permafrost formation and evolution, eco-hydrology, ground-atmosphere interactions, and climate models at various time and spatial scales. Based on measurements of ground temperatures in boreholes, of temperatures and moisture contents of soils in the active layer, and of the mean annual air temperatures at the Qilian, Yeniugou and Tuole meteorological stations in the upper Heihe River Basin (UHRB) and the adjacent areas, a series of observations were made concerning changes in the lower limit of permafrost (LLP) and the related hydrothermal dynamics of soils in the active layer. Because of the thermal diode effect of peat soils, the LLP (at 3600 m) was lower on the northern slope of the Eboling Mountains at the eastern branch of the UHRB than that (at 3650-3700 m) on the alluvial plain at the western branch of the UHRB. The mean temperature of soils at depths of 5 to 77 cm in the active layer on peatlands was higher during periods with subzero temperatures and lower during periods with above-zero temperatures in the vicinity of the LLP on the northern slope of the Eboling Mountains than those at the LLP at the western branch of the UHRB. The thawing and downward freezing rates of soils in the active layer near the LLP on the northern slope of the Eboling Mountains were 0.2 and 1.6 times those found at the LLP at the western branch of the UHRB. From early May to late August, the soil water contents at the depths of 20 to 60 cm in the active layer near the LLP on the northern slope of the Eboling Mountains were significantly lower than those found at the LLP at the western branch of the UHRB. 
The annual ranges of soil temperatures (ARSTs) and mean annual soil temperatures (MASTs) in the active layer on peatlands, and the mean annual ground temperature (MAGT) at a depth of 14 m in the underlying permafrost, were all significantly lower near the LLP on the northern slope of the Eboling Mountains. Moreover, the thermophysical properties of peat soils and the high moisture contents in the active layer on peatlands resulted in lower warm-season soil temperatures in the active layer close to the LLP on the northern slope of the Eboling Mountains than at the LLP at the western branch of the UHRB, especially at greater depths (20-77 cm). They also resulted in a smaller freezing index (FI) and thawing index (TI) and larger FI/TI ratios of soils at depths of 5 to 77 cm in the active layer near the LLP on the northern slope of the Eboling Mountains. In short, peatlands have unique thermophysical properties that reduce heat absorption in the warm season and limit heat release in the cold season. However, the permafrost zone has shrunk by 10-20 km along the major highways at the western branch of the UHRB since 1985, and a medium-scale retrogressive slump has occurred on the peatlands on the northern slope of the Eboling Mountains in recent decades. The results can provide basic data for further studies of the hydrological functions of different landscapes in alpine permafrost regions. Such studies can also enable evaluation and forecasting of the hydrological impacts of changing frozen ground in the UHRB and in other alpine mountain regions of West China.
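
    The freezing index (FI) and thawing index (TI) used above are conventionally computed as accumulated degree-days of sub-zero and above-zero mean daily temperatures, respectively. A minimal sketch under that standard definition (the temperature series below is invented for illustration):

```python
def freezing_thawing_indices(daily_mean_temps):
    """Return (FI, TI) in degree-days from daily mean temperatures in degC.

    FI accumulates the magnitude of sub-zero daily means (a positive number);
    TI accumulates the above-zero daily means.
    """
    fi = -sum(t for t in daily_mean_temps if t < 0)
    ti = sum(t for t in daily_mean_temps if t > 0)
    return fi, ti

# Illustrative one-week series of daily mean soil temperatures (degC)
fi, ti = freezing_thawing_indices([-5.0, -2.5, 0.0, 1.5, 3.0, -1.0, 4.0])
print(fi, ti)  # 8.5 8.5
```

    A larger FI/TI ratio, as reported for the peatland site, then indicates that freezing degree-days dominate over thawing degree-days.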

  6. Predictive Accuracy of the Liverpool Lung Project Risk Model for Stratifying Patients for Computed Tomography Screening for Lung Cancer

    PubMed Central

    Raji, Olaide Y.; Duffy, Stephen W.; Agbaje, Olorunshola F.; Baker, Stuart G.; Christiani, David C.; Cassidy, Adrian; Field, John K.

    2013-01-01

    Background External validation of existing lung cancer risk prediction models is limited. Using such models in clinical practice to guide the referral of patients for computed tomography (CT) screening for lung cancer depends on external validation and evidence of predicted clinical benefit. Objective To evaluate the discrimination of the Liverpool Lung Project (LLP) risk model and demonstrate its predicted benefit for stratifying patients for CT screening by using data from 3 independent studies from Europe and North America. Design Case–control and prospective cohort study. Setting Europe and North America. Patients Participants in the European Early Lung Cancer (EUELC) and Harvard case–control studies and the LLP population-based prospective cohort (LLPC) study. Measurements 5-year absolute risks for lung cancer predicted by the LLP model. Results The LLP risk model had good discrimination in both the Harvard (area under the receiver-operating characteristic curve [AUC], 0.76 [95% CI, 0.75 to 0.78]) and the LLPC (AUC, 0.82 [CI, 0.80 to 0.85]) studies and modest discrimination in the EUELC (AUC, 0.67 [CI, 0.64 to 0.69]) study. The decision utility analysis, which incorporates the harms and benefit of using a risk model to make clinical decisions, indicates that the LLP risk model performed better than smoking duration or family history alone in stratifying high-risk patients for lung cancer CT screening. Limitations The model cannot assess whether including other risk factors, such as lung function or genetic markers, would improve accuracy. Lack of information on asbestos exposure in the LLPC limited the ability to validate the complete LLP risk model. Conclusion Validation of the LLP risk model in 3 independent external data sets demonstrated good discrimination and evidence of predicted benefits for stratifying patients for lung cancer CT screening. 
Further studies are needed to prospectively evaluate model performance and evaluate the optimal population risk thresholds for initiating lung cancer screening. Primary Funding Source Roy Castle Lung Cancer Foundation. PMID:22910935
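
    The AUC values reported above measure discrimination: the probability that a randomly chosen case is assigned a higher predicted 5-year risk than a randomly chosen control. A minimal rank-based sketch of the computation (toy risks and outcomes, not LLP study data):

```python
import numpy as np

def auc(risks, outcomes):
    """Rank-based AUC: probability that a random case outranks a random control.

    Equivalent to the Mann-Whitney U statistic scaled to [0, 1]; ties count 0.5.
    """
    risks = np.asarray(risks, dtype=float)
    outcomes = np.asarray(outcomes, dtype=bool)
    cases, controls = risks[outcomes], risks[~outcomes]
    wins = (cases[:, None] > controls[None, :]).sum()
    ties = (cases[:, None] == controls[None, :]).sum()
    return (wins + 0.5 * ties) / (cases.size * controls.size)

# Toy predicted 5-year risks and observed outcomes (1 = lung cancer)
print(auc([0.08, 0.02, 0.15, 0.01, 0.05], [1, 0, 1, 0, 0]))  # 1.0
```

    An AUC of 0.5 corresponds to no discrimination; the 0.67-0.82 range reported for the LLP model sits between modest and good by the usual reading of this scale.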

  7. 75 FR 66797 - PricewaterhouseCoopers LLP (“PwC”) Internal Firm Services Client Account Administrators Group...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-29

    ... LLP (``PwC'') Internal Firm Services Client Account Administrators Group, Charlotte, NC; Amended... Firm Services Client Account Administrators Group. Accordingly, the Department is amending this... Firm Services Client Account Administrators Group. The amended notice applicable to TA-W-73,608 is...

  8. 75 FR 66796 - Pricewaterhousecoopers LLP (“PwC”), Internal Firm Services Client Account Administrators Group...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-29

    ... LLP (``PwC''), Internal Firm Services Client Account Administrators Group Atlanta, GA; Amended...''), Internal Firm Services Client Account Administrators Group. Accordingly, the Department is amending this... Firm Services Client Account Administrators Group. The amended notice applicable to TA-W-73,630 is...

  9. GATEWAY Demonstrations: OLED Lighting in the Offices of DeJoy, Knauf & Blood, LLP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Naomi J.

    At the offices of the accounting firm of DeJoy, Knauf & Blood, LLP in Rochester, NY, the GATEWAY program evaluated a new lighting system that incorporates a number of different OLED luminaires. Evaluation of the OLED products included efficacy performance, field measurements of panel color, flicker measurements, and staff feedback.

  10. 78 FR 52426 - Retail Commodity Transactions Under Commodity Exchange Act

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-23

    ... enacted to reduce risk, increase transparency, and promote market integrity within the financial system by... typical commercial practice in cash or spot markets for the commodity involved.\\11\\ \\10\\ 7 U.S.C. 2(c)(2..., LLP (GBT), and Rothgerber Johnson & Lyons LLP (RJL). \\17\\ National Energy Markets Association (NEM...

  11. 78 FR 19149 - Small Generator Interconnection Agreements and Procedures; Supplemental Notice of Workshop

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-29

    ..., National Grid (Edison Electric Institute) → Michael Sheehan, P.E., Keyes, Fox & Wiedman L.L.P... Association of Regulatory Utility Commissioners → Sky Stanfield, Attorney, Keyes, Fox & Wiedman L.L.P... Policy, National Grid (Edison Electric Institute) → Michael Sheehan, P.E., Keyes, Fox & Wiedman L...

  12. 78 FR 79363 - Petitions for Reconsideration of Action in Rulemaking Proceeding

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-30

    ...In this document, Petitions for Reconsideration (Petitions) have been filed in the Commission's Rulemaking proceeding, one by Gerard J. Duffy of Blooston, Modkofsky, Dickens, Duffy & Prendergrast, LLP, on behalf of Blooston Private Microwave Licenses and a second by David L. Nace, of Lukas, Nace, Gutierrez & Sachs, LLP, on behalf of Small Purchasers Coalition.

  13. GATEWAY Report Brief: Evaluating OLED Lighting in the Accounting Office of DeJoy, Knauf & Blood LLP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Summary of GATEWAY report evaluating a new lighting system, at the offices of the accounting firm of DeJoy, Knauf & Blood, LLP in Rochester, NY, that incorporates a number of different OLED luminaires. Evaluation of the OLED products included efficacy performance, field measurements of panel color, flicker measurements, and staff feedback.

  14. Developing Educational Leaders: A Partnership between Two Universities to Bring about System-Wide Change

    ERIC Educational Resources Information Center

    Naicker, Suraiya R.; Mestry, Raj

    2015-01-01

    This study investigated a system-wide change strategy in a South African school district, which sought to build the leadership capacity of principals and district officials to improve instruction. The three-year venture was called the Leadership for Learning Programme (LLP). A distinctive feature of the LLP was that it was based on a partnership…

  15. Lifelong Learning Programs of Education Faculty in Sinop: Evaluation of Participants' Problems and Worries

    ERIC Educational Resources Information Center

    Usakli, Hakan

    2009-01-01

    In this paper Lifelong Learning Program of Education Faculty in Sinop was evaluated in terms of interrelations between LLP and cultural shock. The barriers of LLP in Education Faculty in Sinop can be examined in two main parts: difficulties of finding suitable partner and students' difficulty in deciding whether to apply or not. These two main…

  16. Lightning studies using LDAR and LLP data

    NASA Technical Reports Server (NTRS)

    Forbes, Gregory S.

    1993-01-01

    This study intercompared lightning data from LDAR and LLP systems in order to learn more about the spatial relationships between thunderstorm electrical discharges aloft and lightning strikes to the surface. The ultimate goal of the study is to provide information that can be used to improve the process of real-time detection and warning of lightning by weather forecasters who issue lightning advisories. The Lightning Detection and Ranging (LDAR) System provides data on electrical discharges from thunderstorms that includes cloud-ground flashes as well as lightning aloft (within cloud, cloud-to-cloud, and sometimes emanating from cloud to clear air outside or above cloud). The Lightning Location and Protection (LLP) system detects primarily ground strikes from lightning. Thunderstorms typically produce LDAR signals aloft prior to the first ground strike, so that knowledge of preferred positions of ground strikes relative to the LDAR data pattern from a thunderstorm could allow advance estimates of enhanced ground strike threat. Studies described in the report examine the position of LLP-detected ground strikes relative to the LDAR data pattern from the thunderstorms. The report also describes other potential approaches to the use of LDAR data in the detection and forecasting of lightning ground strikes.

  17. 78 FR 59908 - Fisheries of the Exclusive Economic Zone Off Alaska; Bering Sea and Aleutian Islands Management...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-30

    ... authorizing a designated vessel to catch and process Pacific cod in the BSAI hook-and-line fisheries to use... (MLOA) to 220 feet (67 m) on LLP licenses authorizing vessels to catch and process Pacific cod with hook... cod with hook-and-line and pot gear in the BSAI to increase the MLOA on the LLP license to 220 feet...

  18. Benchmarking for maximum value.

    PubMed

    Baldwin, Ed

    2009-03-01

    Speaking at the most recent Healthcare Estates conference, Ed Baldwin, of international built asset consultancy EC Harris LLP, examined the role of benchmarking and market-testing--two of the key methods used to evaluate the quality and cost-effectiveness of hard and soft FM services provided under PFI healthcare schemes to ensure they are offering maximum value for money.

  19. It Takes an Eco-System: A Review of the Research Administration Technology Landscape

    ERIC Educational Resources Information Center

    Saas, Tyler; Kemp, James

    2017-01-01

    Deloitte Consulting LLP conducted a review of publicly available data sources with the goal of identifying the pre- and post-award systems used in higher education. The number and type of pre- and post-award systems identified not only show that higher education institutions (HEIs) use a variety of methods to facilitate research activities, but…

  20. Design and application of a fish-shaped lateral line probe for flow measurement

    NASA Astrophysics Data System (ADS)

    Tuhtan, J. A.; Fuentes-Pérez, J. F.; Strokina, N.; Toming, G.; Musall, M.; Noack, M.; Kämäräinen, J. K.; Kruusmaa, M.

    2016-04-01

    We introduce the lateral line probe (LLP) as a measurement device for natural flows. Hydraulic surveys in rivers and hydraulic structures are currently based on time-averaged velocity measurements using propellers or acoustic Doppler devices. The long-term goal is thus to develop a sensor system that captures spatial gradients of the flow field along a fish-shaped sensor body. Interpreting the biological relevance of a collection of point velocity measurements is complicated by the fact that fish and other aquatic vertebrates experience the flow field through highly dynamic fluid-body interactions. To collect body-centric flow data, a bioinspired fish-shaped probe is equipped with a lateral line pressure-sensing array, which can be applied both in the laboratory and in the field. Our objective is to introduce a new type of measurement device for body-centric data and compare its output to estimates from conventional point-based technologies. We first provide the calibration workflow for laboratory investigations. We then provide a review of two velocity estimation workflows that are independent of calibration. Such workflows are required because existing field investigations consist of measurements in environments where calibration is not feasible. The mean difference for uncalibrated LLP velocity estimates from 0 to 50 cm/s in a closed flow tunnel and an open channel flume was within 4 cm/s when compared to conventional measurement techniques. Finally, spatial flow maps in a scaled vertical slot fishway are compared for the LLP, direct measurements, and 3D numerical models, where it was found that the LLP slightly overestimated the current velocity in the jet and underestimated the velocity in the recirculation zone.

  1. Hyper-heuristics with low level parameter adaptation.

    PubMed

    Ren, Zhilei; Jiang, He; Xuan, Jifeng; Luo, Zhongxuan

    2012-01-01

    Recent years have witnessed the great success of hyper-heuristics applied to numerous real-world applications. Hyper-heuristics raise the generality of search methodologies by manipulating a set of low level heuristics (LLHs) to solve problems, and aim to automate the algorithm design process. However, those LLHs are usually parameterized, which may contradict the domain-independent motivation of hyper-heuristics. In this paper, we show how to automatically maintain low level parameters (LLPs) using a hyper-heuristic with LLP adaptation (AD-HH), and exemplify the feasibility of AD-HH by adaptively maintaining the LLPs for two hyper-heuristic models. Furthermore, to tackle the search space expansion caused by the LLP adaptation, we apply a heuristic space reduction (SAR) mechanism to improve the AD-HH framework. The integration of the LLP adaptation and the SAR mechanism is able to explore the heuristic space more effectively and efficiently. To evaluate the performance of the proposed algorithms, we choose the p-median problem as a case study. The empirical results show that, with the adaptation of the LLPs and the SAR mechanism, the proposed algorithms are able to achieve competitive results over the three heterogeneous classes of benchmark instances.
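
    The core loop of a hyper-heuristic with low level parameter adaptation can be sketched roughly as: pick an LLH, apply it with its current parameter, and nudge that parameter according to whether the move improved the solution. This is an illustrative toy, not the AD-HH algorithm itself; the objective, heuristics, and update factors below are invented:

```python
import random

def hyper_heuristic(objective, x0, llhs, params, iters=200, seed=0):
    """Minimize `objective` by choosing among low-level heuristics (LLHs)
    while adapting each LLH's parameter (its LLP) online.

    llhs: functions f(x, p, rng) -> candidate solution.
    params: one step-size parameter per LLH; grown on improvement, shrunk otherwise.
    """
    rng = random.Random(seed)
    x, best = x0, objective(x0)
    for _ in range(iters):
        i = rng.randrange(len(llhs))                 # select a low-level heuristic
        cand = llhs[i](x, params[i], rng)
        val = objective(cand)
        if val < best:                               # keep improving moves only
            x, best = cand, val
            params[i] *= 1.1                         # reward this LLH's parameter
        else:
            params[i] = max(params[i] * 0.9, 1e-3)   # shrink it after a failure
    return x, best

# Toy objective (minimize x^2) with two invented perturbation heuristics
llhs = [lambda x, p, rng: x + rng.uniform(-p, p),
        lambda x, p, rng: x * (1 + rng.uniform(-p, p))]
x, best = hyper_heuristic(lambda x: x * x, 10.0, llhs, [1.0, 0.5])
print(best)
```

    Adapting `params` enlarges the search space, which is the expansion the paper's SAR mechanism is designed to counter.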

  2. Automation in the Space Station module power management and distribution Breadboard

    NASA Technical Reports Server (NTRS)

    Walls, Bryan; Lollar, Louis F.

    1990-01-01

    The Space Station Module Power Management and Distribution (SSM/PMAD) Breadboard, located at NASA's Marshall Space Flight Center (MSFC) in Huntsville, Alabama, models the power distribution within a Space Station Freedom Habitation or Laboratory module. Originally designed for 20 kHz ac power, the system is now being converted to high voltage dc power with power levels on a par with those expected for a space station module. In addition to the power distribution hardware, the system includes computer control through a hierarchy of processes. The lowest level process consists of fast, simple (from a computing standpoint) switchgear, capable of quickly safing the system. The next level consists of local load center processors called Lowest Level Processors (LLP's). These LLP's execute load scheduling, perform redundant switching, and shed loads which use more than scheduled power. The level above the LLP's contains a Communication and Algorithmic Controller (CAC) which coordinates communications with the highest level. Finally, at this highest level, three cooperating Artificial Intelligence (AI) systems manage load prioritization, load scheduling, load shedding, and fault recovery and management. The system provides an excellent venue for developing and examining advanced automation techniques. The current system and the plans for its future are examined.
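
    The load-shedding duty of the LLPs described above (dropping loads that exceed their scheduled power, then the lowest-priority loads) can be sketched as follows. This is a hypothetical illustration, not the SSM/PMAD code; the load names, fields, and priority scheme are invented:

```python
from dataclasses import dataclass

@dataclass
class Load:
    name: str
    priority: int      # lower number = more critical, shed last
    scheduled_w: float
    actual_w: float

def shed_loads(loads, bus_capacity_w):
    """Shed over-budget loads, then lowest-priority loads, until draw fits capacity."""
    shed = [l.name for l in loads if l.actual_w > l.scheduled_w]  # over-schedule first
    active = [l for l in loads if l.actual_w <= l.scheduled_w]
    active.sort(key=lambda l: l.priority)
    while sum(l.actual_w for l in active) > bus_capacity_w:
        shed.append(active.pop().name)  # drop the least critical remaining load
    return shed

loads = [Load("life-support", 0, 5000, 4800),
         Load("experiment-A", 2, 2000, 2500),   # exceeds its schedule -> shed
         Load("galley", 1, 3000, 2900)]
print(shed_loads(loads, 7000))  # ['experiment-A', 'galley']
```

    In the breadboard the analogous decisions are distributed: the switchgear safes faults quickly, while scheduling and priority-based shedding run at the LLP and AI levels.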

  3. Improving zero-training brain-computer interfaces by mixing model estimators

    NASA Astrophysics Data System (ADS)

    Verhoeven, T.; Hübner, D.; Tangermann, M.; Müller, K. R.; Dambre, J.; Kindermans, P. J.

    2017-06-01

    Objective. Brain-computer interfaces (BCI) based on event-related potentials (ERP) incorporate a decoder to classify recorded brain signals and subsequently select a control signal that drives a computer application. Standard supervised BCI decoders require a tedious calibration procedure prior to every session. Several unsupervised classification methods have been proposed that tune the decoder during actual use and as such omit this calibration. Each of these methods has its own strengths and weaknesses. Our aim is to improve overall accuracy of ERP-based BCIs without calibration. Approach. We consider two approaches for unsupervised classification of ERP signals. Learning from label proportions (LLP) was recently shown to be guaranteed to converge to a supervised decoder when enough data is available. In contrast, the formerly proposed expectation maximization (EM) based decoding for ERP-BCI does not have this guarantee. However, while this decoder has high variance due to random initialization of its parameters, it obtains a higher accuracy faster than LLP when the initialization is good. We introduce a method to optimally combine these two unsupervised decoding methods, letting one method’s strengths compensate for the weaknesses of the other and vice versa. The new method is compared to the aforementioned methods in a resimulation of an experiment with a visual speller. Main results. Analysis of the experimental results shows that the new method exceeds the performance of the previous unsupervised classification approaches in terms of ERP classification accuracy and symbol selection accuracy during the spelling experiment. Furthermore, the method shows less dependency on random initialization of model parameters and is consequently more reliable. Significance. Improving the accuracy and subsequent reliability of calibrationless BCIs makes these systems more appealing for frequent use.
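
    A standard way to optimally combine two unbiased estimators of the same quantity is the inverse-variance weighted average, which reflects the spirit of mixing a steadier LLP estimate with a noisier EM estimate. A minimal sketch (toy numbers; the variance estimation in the actual method is more involved):

```python
def mix_estimators(mu_a, var_a, mu_b, var_b):
    """Inverse-variance weighted combination of two unbiased estimators.

    Weighting each estimator by the inverse of its variance minimizes the
    variance of the combined estimate.
    """
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    mu = w_a * mu_a + (1.0 - w_a) * mu_b
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    return mu, var

# Toy numbers: a steadier LLP-style estimate vs a noisier EM-style estimate
mu, var = mix_estimators(mu_a=1.0, var_a=0.5, mu_b=2.0, var_b=2.0)
print(round(mu, 3), round(var, 3))  # 1.2 0.4
```

    Note that the combined variance (0.4) is below that of either input estimator, which is why mixing can beat each unsupervised decoder on its own.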

  4. Influence of the Oil Phase and Topical Formulation on the Wound Healing Ability of a Birch Bark Dry Extract

    PubMed Central

    Steinbrenner, Isabel; Houdek, Pia; Pollok, Simone; Brandner, Johanna M.; Daniels, Rolf

    2016-01-01

    Triterpenes from the outer bark of birch are known for various pharmacological effects including enhanced wound healing (WH). A birch bark dry extract (TE) obtained by accelerated solvent extraction showed the ability to form oleogels when it is suspended in oils. The consistency of the oleogels and the dissolved amount of triterpenes vary largely with the oil used. Here we wanted to know to what extent different oils and formulations (oleogel versus o/w emulsion) influence WH. Looking at the plain oils, medium-chain triglycerides (MCT) enhanced WH (ca. 1.4-fold), while e.g. castor oil (ca. 0.3-fold) or light liquid paraffin (LLP; ca. 0.5-fold) significantly decreased WH. Concerning the respective oleogels, TE-MCT showed no improvement although the solubility of the TE was high. In contrast, the oleogel of sunflower oil, which alone showed a slight tendency to impair WH, enhanced WH significantly (ca. 1.6-fold). These results can be explained by release experiments in which the release rate of betulin, the main component of TE, from MCT oleogels was significantly lower than from sunflower oil oleogels. LLP impaired WH as a plain oil, and even though its oleogel released betulin comparably to sunflower oil, the oleogel still had an overall negative effect on WH. As a further formulation option, surfactant-free o/w emulsions were also prepared using MCT, sunflower oil and LLP as a nonpolar oil phase. Depending on the preparation method (suspension or oleogel method), the distribution of the TE varied markedly, which also affected release kinetics. However, the released betulin was clearly below the values measured with the respective oleogels. Consequently, none of the emulsions showed a significantly positive effect on WH. In conclusion, our data show that the oil used as a vehicle influences wound healing not only by affecting the release of the extract, but also by having its own vehicle effect on wound healing. 
This is also of importance for other applications where drugs have to be applied in non-polar vehicles because these solvents likely influence the outcome of the experiment substantially. PMID:27219110

  5. Augmentation of the space station module power management and distribution breadboard

    NASA Technical Reports Server (NTRS)

    Walls, Bryan; Hall, David K.; Lollar, Louis F.

    1991-01-01

    The space station module power management and distribution (SSM/PMAD) breadboard models power distribution and management, including scheduling, load prioritization, and a fault detection, identification, and recovery (FDIR) system within a Space Station Freedom habitation or laboratory module. This 120 VDC system is capable of distributing up to 30 kW of power among more than 25 loads. In addition to the power distribution hardware, the system includes computer control through a hierarchy of processes. The lowest level consists of fast, simple (from a computing standpoint) switchgear that is capable of quickly safing the system. At the next level are local load center processors, (LLP's) which execute load scheduling, perform redundant switching, and shed loads which use more than scheduled power. Above the LLP's are three cooperating artificial intelligence (AI) systems which manage load prioritizations, load scheduling, load shedding, and fault recovery and management. Recent upgrades to hardware and modifications to software at both the LLP and AI system levels promise a drastic increase in speed, a significant increase in functionality and reliability, and potential for further examination of advanced automation techniques. The background, SSM/PMAD, interface to the Lewis Research Center test bed, the large autonomous spacecraft electrical power system, and future plans are discussed.

  6. Induction of cervical dilation for transcervical embryo transfer in ewes

    PubMed Central

    2014-01-01

    Background A major limitation in the application of assisted reproductive technologies in sheep arises from the inability to easily traverse the uterine cervix. The cervix of the non-pregnant ewe is a narrow and rigid structure, with 5–7 spiral folds and crypts that block its lumen. The first two folds closest to the vagina appear to be the greatest obstacle for instrument insertion into the sheep cervix. Therefore, the dilation of the distal part of the cervix could provide the conformational change necessary to perform non-invasive transcervical procedures. The present study set out to assess the efficacy of Cervidil®, a patented dinoprostone (PGE2)-containing vaginal insert with a controlled-release mechanism, to safely induce sufficient cervical dilation for the purpose of transcervical embryo transfer (TCET) in cyclic ewes. Methods The transfer of frozen-thawed ovine embryos was attempted in 22 cross-bred Rideau Arcott x Polled Dorset ewes, with or without the pre-treatment with Cervidil® for 12 or 24 h prior to TCET. Results Cervical penetration rate was significantly improved after Cervidil® pre-treatment, with 55% (6/11) of treated versus 9% (1/11) of control animals successfully penetrated (χ2-test, p < 0.05). Within the treated ewes that were penetrated, 67% (4/6) had been exposed to Cervidil® for 24 h and 33% (2/6) had had a 12-h exposure (p > 0.05). Variations in the age, weight, genotype, parity, lifetime lamb production (LLP) and post-partum interval (PPI) between penetrated and non-penetrated ewes were not significant (p > 0.05). The time taken to traverse the uterine cervix was negatively correlated (p < 0.05) with the age, parity, LLP and PPI. Progesterone assays and ultrasonographic examinations performed 25 days after ET confirmed pregnancy in 2 of 7 penetrated ewes, but no fetuses were detected ultrasonographically 55 days post-TCET. 
Conclusions The present results indicate a significant benefit of using Cervidil® for inducing cervical dilation during the mid-luteal phase in ewes but the reason(s) for impaired fertility after the transfer of frozen-thawed ovine embryos remains to be elucidated. PMID:24467737
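The penetration-rate comparison above (6/11 treated vs. 1/11 control) can be reproduced with a plain 2x2 Pearson chi-square computation. This is a minimal sketch without continuity correction; the original analysis may have applied a corrected or exact test.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    # Expected counts under independence: (row total * column total) / grand total
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Treated: 6 penetrated, 5 not; control: 1 penetrated, 10 not.
stat = chi_square_2x2(6, 5, 1, 10)
# stat is about 5.24, above the 3.84 critical value for p < 0.05 at 1 degree of freedom
```

The statistic of about 5.24 is consistent with the reported significance (p < 0.05) of the Cervidil® pre-treatment effect.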

  7. Enhanced photoluminescence and phosphorescence properties of green phosphor Zn2GeO4:Mn(2+) via composition modification with GeO2 and MgF2.

    PubMed

    Pan, Yuexiao; Li, Li; Lu, Jing; Pang, Ran; Wan, Li; Huang, Shaoming

    2016-06-21

    A green long-lasting phosphorescence (LLP) phosphor, Zn2GeO4:Mn(2+) (ZGOM), has been synthesized by a solid-state method at 1100 °C in air. The luminescence intensity was improved up to 9-fold and 6-fold by mixing GeO2 and MgF2, respectively, into the composition. The phosphorescence duration of the sample has been prolonged to 5 h. The phosphor, composed of a mixture of Zn2GeO4 (ZGO), GeO2, and MgGeO3 phases, emits enhanced green luminescence with a broad excitation band between 250 and 400 nm. Under identical measurement conditions, the optimized ZGOM phosphor shows higher emission intensity and longer-wavelength emission than the commercial green LLP phosphor SrAl2O4:Eu,Dy (SAOED) under excitation at 336 nm. The quantum yield of the sample modified by GeO2 and MgF2 is as high as 95.0%. Understanding the mechanism behind the enhanced emission intensity and prolonged phosphorescence duration of ZGOM is fundamentally important and might be extended to other known solid-state inorganic phosphor materials to achieve advanced properties.

  8. Broadband 0.25-um Gallium Nitride (GaN) Power Amplifier Designs

    DTIC Science & Technology

    2017-08-14

    [Schematic netlist residue from the report's figures; recoverable design values:] The shunt parasitics scale with device size as RP = 87.5 Ω/mm and CP ≈ 0.31 pF/mm, so a 1.75 mm device gives RP = 50 Ω and CP = 0.54 pF. The matching networks consist of shunt R-C elements (RP, CP1) and series/shunt L-C elements (Lser2, Cser2, LP1) between 50 Ω ports.

  9. 75 FR 52944 - Sunshine Act Notices

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-30

    ... be closed to the public. Items To Be Discussed: Compliance matters pursuant to 2 U.S.C. 437g. Audits...., of Perkins Coie, LLP. Explanation and Justification and Final Rules on Coordinated Communications...

  10. 78 FR 28205 - Notice of Second Prehearing Conference

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-14

    ....m. Eastern. ADDRESSES: Members of the public are welcome to attend the prehearing conference to be..., LLP, Counsel for BABY MATTERS LLC; and, Larry W. Bennett, Esq. and Geoffrey S. Wagner, Esq., of...

  11. 75 FR 22595 - Sunshine Act Notices

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-29

    ... (NDRT), by Marc E. Elias and Kate S. Keane of Perkins Coie LLP, counsel. Report of the Audit Division on the Tennessee Democratic Party (TDP). Report of the Audit Division on Friends for Menor Committee...

  12. 75 FR 2141 - Notice of Agreements Filed

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-14

    ...; Seabourn Cruise Line; SeaDream Yacht Club; Silversea Cruises, Ltd.; Uniworld River Cruises, Inc.; and... Services Ltd. Filing Party: John Longstreth, Esq.; K & L Gates LLP; 1601 K Street NW.; Washington, DC 20006...

  13. 77 FR 26759 - Sunshine Act Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-07

    ... Minutes for the Meeting of April 26, 2012. Draft Advisory Opinion 2012-07: Feinstein for Senate. Draft Advisory Opinion 2012-16: Angus King for U.S. Senate Campaign and Pierce Atwood LLP. Audit Division...

  14. POLARIZED LINE FORMATION WITH LOWER-LEVEL POLARIZATION AND PARTIAL FREQUENCY REDISTRIBUTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Supriya, H. D.; Sampoorna, M.; Nagendra, K. N.

    2016-09-10

    In the well-established theories of polarized line formation with partial frequency redistribution (PRD) for a two-level and two-term atom, it is generally assumed that the lower level of the scattering transition is unpolarized. However, the existence of unexplained spectral features in some lines of the Second Solar Spectrum points toward a need to relax this assumption. There exists a density matrix theory that accounts for the polarization of all the atomic levels, but it is based on the flat-spectrum approximation (corresponding to complete frequency redistribution). In the present paper we propose a numerical algorithm to solve the problem of polarized line formation in magnetized media, which includes both the effects of PRD and the lower level polarization (LLP) for a two-level atom. First we derive a collisionless redistribution matrix that includes the combined effects of the PRD and the LLP. We then solve the relevant transfer equation using a two-stage approach. For illustration purposes, we consider two case studies in the non-magnetic regime, namely, J_a = 1, J_b = 0 and J_a = J_b = 1, where J_a and J_b represent the total angular momentum quantum numbers of the lower and upper states, respectively. Our studies show that the effects of LLP are significant only in the line core. This leads us to propose a simplified numerical approach to solve the concerned radiative transfer problem.

  15. Introduction to patent strategies for medical device inventions.

    PubMed

    Gutman, Siegmond Y; Capraro, Joe; Chen, Tom

    2016-11-01

    Siegmund Gutman is the Chair of the Life Sciences Patent practice and a partner at the global law firm of Proskauer Rose LLP. Siegmund's practice focuses on developing and executing business-oriented patent strategies for medical device, biotechnology, and biopharmaceutical clients, including early-stage and mature companies, as well as academic and other research organizations. His background combines a graduate degree in biophysical chemistry and molecular and cell biology with more than 25 years of experience in the life sciences industry, including serving as senior counsel at Amgen. Joe Capraro is a partner and the Boston Office Head at the law firm of Proskauer Rose LLP. Joe has more than 25 years of experience advising start-ups and established companies on intellectual property issues. Joe has amassed broad intellectual property and transactional experience over the years and provides clients with practical, business-oriented advice. Tom Chen is a senior associate in the Los Angeles office of Proskauer Rose LLP, where his practice focuses on patent litigation and counseling in the life sciences sector. Tom holds an A.B. in chemistry and pharmacology from Duke University, and an M.S. in biotechnology from Johns Hopkins University. Prior to joining Proskauer, Tom served as a judicial law clerk for the Honorable Alvin A. Schall of the U.S. Court of Appeals for the Federal Circuit, and the Honorable Leonard P. Stark of the U.S. District Court for the District of Delaware. Copyright © 2016. Published by Elsevier Inc.

  16. 78 FR 20106 - Farm Credit System Insurance Corporation Board; Regular Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-03

    ... Institutions Presentation of 2012 Audit Results by External Auditor Clifton Larson Allen L.L.P. Executive Session Executive Session of the FCSIC Board Audit Committee with the External Auditor Dated: March 29...

  17. 77 FR 24709 - Farm Credit System Insurance Corporation Board; Regular Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-25

    ... Presentation of 2011 Audit Results by External Auditor Clifton Larson Allen LLP Executive Session Executive Session of the FCSIC Board Audit Committee with the External Auditor Dated: April 19, 2012. Dale L...

  18. 76 FR 34972 - USG Pipeline Company; Notice of Application

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-15

    .... Forshay or Sandra E. Safro, Sutherland Ashbill & Brennan LLP, 1275 Pennsylvania Avenue, Washington [email protected]sutherland.com , [email protected]sutherland.com or [email protected]sutherland.com . There are two ways...

  19. 75 FR 44792 - Sunshine Act Notices

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-29

    ...: Club for Growth, by its counsel, Carol A. Laham, Esq., and D. Mark Renaud, Esq., of Wiley Rein LLP. Draft Advisory Opinion 2010-11: Commonsense Ten, by its counsel, Marc E. Elias, Esq., and Ezra Reese...

  20. 75 FR 42444 - Sunshine Act Notices

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-21

    ... Opinion 2010-09: Club for Growth, by its counsel, Carol A. Laham, Esq., and D. Mark Renaud, Esq., of Wiley Rein LLP. Draft Advisory Opinion 2010-11: Commonsense Ten, by its counsel, Marc E. Elias, Esq., and...

  1. 75 FR 43983 - Sunshine Act Notices

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-27

    ...: Club for Growth, by its counsel, Carol A. Laham, Esq., and D. Mark Renaud, Esq., of Wiley Rein LLP. Draft Advisory Opinion 2010-11: Commonsense Ten, by its counsel, Marc E. Elias, Esq., and Ezra Reese...

  2. Cytotoxic and Antifungal Constituents Isolated from the Metabolites of Endophytic Fungus DO14 from Dendrobium officinale.

    PubMed

    Wu, Ling-Shang; Jia, Min; Chen, Ling; Zhu, Bo; Dong, Hong-Xiu; Si, Jin-Ping; Peng, Wei; Han, Ting

    2015-12-22

    Two novel cytotoxic and antifungal constituents, (4S,6S)-6-[(1S,2R)-1,2-dihydroxybutyl]-4-hydroxy-4-methoxytetrahydro-2H-pyran-2-one (1) and (6S,2E)-6-hydroxy-3-methoxy-5-oxodec-2-enoic acid (2), together with three known compounds, LL-P880γ (3), LL-P880α (4), and ergosta-5,7,22-trien-3β-ol (5), were isolated from the metabolites of endophytic fungi from Dendrobium officinale. The chemical structures were determined by spectroscopic methods. All the isolated compounds 1-5 were evaluated for cytotoxic and antifungal effects. Our present results indicated that compounds 1-4 showed notable antifungal activities (minimal inhibitory concentration (MIC) ≤ 50 μg/mL) against all the tested pathogens, including Candida albicans, Cryptococcus neoformans, Trichophyton rubrum, and Aspergillus fumigatus. In addition, compounds 1-4 possessed notable cytotoxicities against the human cancer cell line HL-60, with IC50 values below 100 μM. Besides, compounds 1, 2, 4 and 5 showed strong cytotoxicities against the LOVO cell line, with IC50 values also below 100 μM. In conclusion, our study suggested that endophytic fungi of D. officinale are potentially rich resources for discovering novel agents to prevent or treat pathogens and tumors.

  3. Photovoltaic systems sizing for Algeria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arab, A.H.; Driss, B.A.; Amimeur, R.

    1995-02-01

    The purpose of this work is to develop an optimization method applicable to stand-alone photovoltaic systems as a function of their reliability. For a given loss-of-load probability (LLP), there are many combinations of battery capacity and photovoltaic array peak power. The problem consists in determining the pair that corresponds to a minimum total system cost. The method has been applied to various areas all over Algeria, taking into account the various climatic zones. The parameter used to define the different climatic zones is the clearness index KT for all the considered sites. The period of the simulation is 10 years. 5 refs., 4 figs., 5 tabs.
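The sizing logic described above can be sketched as a brute-force search: among all (battery capacity, array peak power) pairs that meet the target LLP, pick the cheapest. This is not the paper's model; the `llp()` and `cost()` functions below are illustrative placeholders for its 10-year simulation and cost data.

```python
def cheapest_system(pairs, llp, target_llp, cost):
    """Return the lowest-cost (capacity, peak_power) pair with llp <= target, or None."""
    feasible = [(c, p) for (c, p) in pairs if llp(c, p) <= target_llp]
    if not feasible:
        return None
    return min(feasible, key=lambda cp: cost(*cp))

# Toy illustration: reliability improves with larger storage and larger array.
candidates = [(c, p) for c in (1, 2, 4, 8) for p in (1, 2, 4, 8)]
toy_llp = lambda c, p: 1.0 / (c * p)          # placeholder reliability model
toy_cost = lambda c, p: 100 * c + 60 * p      # placeholder cost model
best = cheapest_system(candidates, toy_llp, 0.3, toy_cost)
# best == (2, 2): the cheapest combination among those meeting LLP <= 0.3
```

In practice the feasible set would come from simulating each candidate system against multi-year irradiance data for the site's climatic zone.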

  4. 77 FR 77122 - Pyxis Capital, L.P., et al.;

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-31

    ... 20549. Applicants, c/o W. John McGuire, Esq. and Michael Berenson, Esq., Bingham McCutchen LLP, 2020 K... (202) 551-6878 or Mary Kay Frech, Branch Chief, at (202) 551-6821 (Division of Investment Management...

  5. 78 FR 56684 - Woonsocket Falls Project, City of Woonsocket; Notice of Application Accepted for Filing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-13

    ... Coffey, Burns & Levinson LLP, One Citizens Plaza, Suite 1100, Providence, RI 02903, (401) 831-8173. i... dam; (2) two 8-foot diameter concrete penstocks; (3) a powerhouse located about 240 feet downstream...

  6. 76 FR 62498 - Finger Lakes Railway Corp.-Acquisition and Operation Exemption-CSX Transportation, Inc.

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-07

    ... filed no later than October 17, 2011 (at least 7 days before the exemption becomes effective). An... served on Eric M. Hocky, Thorp Reed & Armstrong, LLP, One Commerce Square, 2005 Market Street, Suite 1000...

  7. Applying a machine learning model using a locally preserving projection based feature regeneration algorithm to predict breast cancer risk

    NASA Astrophysics Data System (ADS)

    Heidari, Morteza; Zargari Khuzani, Abolfazl; Danala, Gopichandh; Mirniaharikandehei, Seyedehnafiseh; Qian, Wei; Zheng, Bin

    2018-03-01

    Both conventional and deep machine learning have been used to develop decision-support tools in medical imaging informatics. To take advantage of both the conventional and deep learning approaches, this study investigates the feasibility of applying a locality preserving projection (LPP) based feature regeneration algorithm to build a new machine learning classifier model to predict short-term breast cancer risk. First, a computer-aided image processing scheme was used to segment and quantify breast fibro-glandular tissue volume. Next, 44 initially computed image features related to bilateral mammographic tissue density asymmetry were extracted. Then, an LPP-based feature combination method was applied to regenerate a new operational feature vector using a maximal variance approach. Last, a k-nearest neighborhood (KNN) machine learning classifier using the LPP-generated feature vectors was developed to predict breast cancer risk. A testing dataset involving negative mammograms acquired from 500 women was used. Among them, 250 were positive and 250 remained negative in the next subsequent mammography screening. Applied to this dataset, the LPP-based regeneration reduced the number of features from 44 to 4. Using a leave-one-case-out validation method, the area under the ROC curve produced by the KNN classifier significantly increased from 0.62 to 0.68 (p < 0.05), and the odds ratio was 4.60 with a 95% confidence interval of [3.16, 6.70]. The study demonstrated that this new LPP-based feature regeneration approach produces an optimal feature vector and yields improved performance in predicting the risk of women having breast cancer detected in the next subsequent mammography screening.
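The pipeline described above (locality preserving projection to shrink the feature vector, then a KNN classifier) can be sketched in a few dozen lines. This is not the authors' code: it is a minimal dense implementation of standard LPP (He & Niyogi) followed by a brute-force KNN vote, with illustrative parameter values (`sigma`, `reg`, `k`).

```python
import numpy as np

def lpp(X, n_components=4, sigma=1.0, reg=1e-6):
    """Return a (d, n_components) LPP projection matrix for data X of shape (n, d)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    W = np.exp(-sq / (2.0 * sigma ** 2))                  # Gaussian affinity graph
    np.fill_diagonal(W, 0.0)
    D = np.diag(W.sum(axis=1))
    L = D - W                                             # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ D @ X + reg * np.eye(X.shape[1])            # regularized for stability
    # Solve the generalized eigenproblem A a = lambda B a by whitening with B^(-1/2)
    w, V = np.linalg.eigh(B)
    B_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    lam, U = np.linalg.eigh(B_inv_sqrt @ A @ B_inv_sqrt)  # eigenvalues in ascending order
    return B_inv_sqrt @ U[:, :n_components]               # smallest eigenvalues preserve locality

def knn_predict(train_X, train_y, test_X, k=3):
    """Majority vote among the k nearest training points (nonnegative integer labels)."""
    d = ((test_X[:, None, :] - train_X[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d, axis=1)[:, :k]
    return np.array([np.bincount(train_y[i]).argmax() for i in idx])
```

In the study's setting, `X` would hold the 44 mammographic asymmetry features and `lpp(X, 4)` would mirror the reported 44-to-4 reduction before KNN classification.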

  8. House Hearing Independent Audit

    NASA Image and Video Library

    2009-12-03

    Daniel Murrin, Partner, Assurance and Advisory Business Services, Ernst & Young LLP, testifies during a Joint Hearing before the House Committee on Science and Technology, Transportation and Infrastructure Committee, Subcommittee on Investigations and Oversight, Thursday, Dec. 3, 2009, on Capitol Hill in Washington. Photo Credit: (NASA/Bill Ingalls)

  9. House Hearing Independent Audit

    NASA Image and Video Library

    2009-12-03

    Daniel Murrin, Partner, Assurance and Advisory Business Service, Ernst & Young LLP, testifies during a Joint Hearing before the House Committee on Science and Technology, Transportation and Infrastructure Committee, Subcommittee on Investigations and Oversight, Thursday, Dec. 3, 2009, on Capitol Hill in Washington. Photo Credit: (NASA/Bill Ingalls)

  10. 77 FR 45656 - Notice Pursuant to the National Cooperative Research and Production Act of 1993-Pistoia Alliance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-01

    ..., PORTUGAL; Deloitte Consulting LLP, New York, NY; Mary Chitty (individual member), Needham, MA; and Hewlett- Packard Company, Palo Alto, CA, have been added as parties to this venture. No other changes have been...

  11. 50 CFR Table 31 to Part 679 - List of Amendment 80 Vessels and LLP Licenses Originally Assigned to an Amendment 80 Vessel

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE (CONTINUED) FISHERIES OF THE EXCLUSIVE ECONOMIC ZONE OFF ALASKA Pt. 679, Table 31 Table 31 to Part...

  12. 75 FR 20392 - Investigations Regarding Certifications of Eligibility To Apply for Worker Adjustment Assistance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-19

    .../08/10 03/05/10 73656 JK Products and Services, Jonesboro, AR......... 03/08/10 03/05/10 Inc. (Wkrs... LLP (State)... Houston, TX 03/11/10 05/27/09 73697 Federal Coach (Wkrs)....... Fort Smith, AR...

  13. 78 FR 67146 - Change in Bank Control Notices; Acquisitions of Shares of a Bank or Bank Holding Company

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-08

    .... Cummings, III; Nanette Weaver Cummings; George W. Cummings, Jr.; Dewey F. Weaver Jr.; Colby Weaver, all of Monroe, Louisiana; Twist Family, LLP; Randall Twist, both of Dallas, Texas; and Dewey Weaver, III, West...

  14. The bi-directional leader observation in positive cloud-to-ground lightning flashes during summer thunderstorm season

    NASA Astrophysics Data System (ADS)

    Nakamura, Y.; Manabu, A.; Morimoto, T.; Ushio, T.; Kawasaki, Z.; Miki, M.; Shimizu, M.

    2009-12-01

    In this paper, we present observations of positive cloud-to-ground (+CG) lightning flashes obtained with the VHF BDITF (VHF Broadband Digital InTerFerometer) and the ALPS (Automatic Lightning Discharge Progressing Feature Observation System). The VHF BDITF observed two- (2D) and three-dimensional (3D) developments of lightning flashes with high time resolution. The ALPS observed the luminous propagation of the local process at low altitudes within its observational range. At 2028:59 JST on 8 August 2008, we observed the 3D spatiotemporal development of the channels of a +CG lightning flash with the VHF BDITF, and its return stroke (RS) with the lightning location and protection (LLP) system. This flash divides into stages before and after the RS. In the former stage, the in-cloud negative breakdown (NB) progressed about 15 km horizontally at altitudes between 6 and 10 km. The LLP system detected the RS near the initiation point of that NB at the end of the former stage. In the latter stage, a new NB ran through the same path as the first NB before the RS. The luminous intensity of the RS near the ground obtained with the ALPS is synchronized with the development of the new NB. The time variation of luminous intensity recorded by the ALPS has two peaks, whose time difference corresponds to the gap in the VHF radiation. Since the new NB following the RS runs through the path of the first NB, the positive breakdown (PB), which is not visualized by the VHF BDITF, can be considered to progress from the starting point of the first NB and to reach the ground. The RS current propagates and penetrates in the opposite direction to the visualized subsequent NB. This suggests that the first NB and the PB progressed together, i.e., this +CG lightning flash involved a bi-directional leader. Assuming the path of the PB is a straight line, the velocity of the PB is about 4 × 10⁴ m/s.

  15. Lightning forecasting studies using LDAR, LLP, field mill, surface mesonet, and Doppler radar data

    NASA Technical Reports Server (NTRS)

    Forbes, Gregory S.; Hoffert, Steven G.

    1995-01-01

    The ultimate goal of this research is to develop rules, algorithms, display software, and training materials that can be used by the operational forecasters who issue weather advisories for daily ground operations and launches by NASA and the United States Air Force to improve real-time forecasts of lightning. Doppler radar, Lightning Detection and Ranging (LDAR), Lightning Location and Protection (LLP), field mill (Launch Pad Lightning Warning System -- LPLWS), wind tower (surface mesonet) and additional data sets have been utilized in 10 case studies of thunderstorms in the vicinity of KSC during the summers of 1994 and 1995. These case studies reveal many intriguing aspects of cloud-to-ground, cloud-to-cloud, in-cloud, and cloud-to-air lightning discharges in relation to radar thunderstorm structure and evolution. They also enable the formulation of some preliminary working rules of potential use in the forecasting of initial and final ground strike threat. In addition, LDAR and LLP data sets from 1993 have been used to quantify the lightning threat relative to the center and edges of LDAR discharge patterns. Software has been written to overlay and display the various data sets as color imagery. However, human intervention is required to configure the data sets for proper intercomparison. Future efforts will involve additional software development to automate the data set intercomparisons, to display multiple overlay combinations in a windows format, and to allow for animation of the imagery. The software package will then be used as a tool to examine more fully the current cases and to explore additional cases in a timely manner. This will enable the formulation of more general and reliable forecasting guidelines and rules.

  16. Best Practices & Outstanding Initiatives

    ERIC Educational Resources Information Center

    Training, 2012

    2012-01-01

    In this article, "Training" editors recognize innovative and successful learning and development programs and practices submitted in the 2012 Training Top 125 application. Best practices: (1) Edward Jones: Practice Makes Perfect (sales training); (2) Grant Thornton LLP: Senior Manager Development Program (SMDP); (3) MetLife, Inc.: Top Advisor…

  17. Rand Project AIR FORCE Annual Report 2010

    DTIC Science & Technology

    2010-01-01

    LLC Michael Lynton, Chairman and Chief Executive Officer, Sony Pictures Entertainment Ronald L. Olson, Partner, Munger, Tolles & Olson LLP Paul H...management, and his Air Force career centered on research. Even a shared love of music finds them in different parts of the orchestra, with Ray as a

  18. Inspector General, DOD, Oversight of the Audit of the Military Retirement Trust Fund Financial Statements for FY 1998

    DTIC Science & Technology

    1999-03-05

    Our objective was to determine the accuracy and completeness of the Deloitte & Touche LLP audit of the Military Retirement Trust Fund Financial Statements for FY 1998. See Appendix A for a discussion of the audit process.

  19. 77 FR 60343 - Petition for Reconsideration of Action in Rulemaking Proceeding

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-03

    ... Murray, Petersen & Shuster LLP, on behalf of Professional Association for Customer Engagement, and Anthony S. Mendoza, Esq. on behalf of SatCom Marketing, LLC. DATES: Oppositions to the Petitions must be... Implementing the Telephone Consumer Protection Act of 1991, Professional Association for Customer Engagement's...

  20. Report: McGladrey & Pullen, LLP Single Audit of Geothermal Heat Pump Consortium, Inc., for Year Ended December 31, 2003

    EPA Pesticide Factsheets

    Report #2005-S-00006, June 28, 2005. McGladrey & Pullen’s audit work met generally accepted government auditing standards and the requirements in Office of Management and Budget Circular A-133 and its related supplements.

  1. ICT Security Curriculum or How to Respond to Current Global Challenges

    ERIC Educational Resources Information Center

    Poboroniuc, Marian Silviu; Naaji, Antoanela; Ligusova, Jana; Grout, Ian; Popescu, Dorin; Ward, Tony; Grindei, Laura; Ruseva, Yoana; Bencheva, Nina; Jackson, Noel

    2017-01-01

    The paper presents some results obtained through the implementation of the Erasmus LLP "SALEIE" (Strategic Alignment of Electrical and Information Engineering in European Higher Education Institutions). The aim of the project was to bring together experts from European universities to enhance the competitiveness of Electrical and…

  2. FY 2004 Performance and Accountability Report

    ERIC Educational Resources Information Center

    US Department of Education, 2004

    2004-01-01

    The Department's financial statements, which appear on pp. 115–119, received an unqualified audit opinion issued by the independent accounting firm of Ernst & Young LLP for the third consecutive year. Preparing these statements is part of the Department's continuing efforts to achieve financial management excellence and to provide accurate and…

  3. 46 CFR 402.320 - Working rules.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Vincent, N.Y., dated May 1, 1980, amended to March 24, 1999. (2) The Working Rules and Dispatch Procedures... March 30, 1999. (4) The Working Rules for District No. 3, adopted by the Western Great Lakes Pilots Association, LLP, Superior, WI., dated February 24, 2001 amended to February 28, 2007. (b) [Reserved] [USCG...

  4. Report: PricewaterhouseCoopers, LLP Single Audit of Natural Resources Defense Council, Inc., for Year Ended June 30, 2003

    EPA Pesticide Factsheets

    Report #2006-S-00002, May 25, 2006. We did not adequately test and document the auditee’s compliance with Federal procurement regulations, and did not properly report the auditee’s lack of compliance with indirect cost proposal requirements.

  5. 78 FR 19758 - Stetson Capital Fund LP and Davis Polk & Wardwell LLP; Notice of Application

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-02

    ... investments in operating businesses, separate accounts with registered or unregistered investment advisers... Partner will register as an investment adviser under the Investment Advisers Act of 1940 (the ``Advisers Act''), if such registration is required under the Advisers Act and the rules thereunder. 6...

  6. 50 CFR 679.80 - Allocation and transfer of rockfish QS.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... from which that LLP license was derived during the calendar years 2000 and 2001, unless clear and...) Determine the Total Entry Level Trawl Fishery Transition Rockfish QS pool for each rockfish primary species... Rockfish QS pools. (v) Multiply the Percentage of the Total Entry Level Trawl Fishery Transition Rockfish...

  7. 50 CFR 679.80 - Allocation and transfer of rockfish QS.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... from which that LLP license was derived during the calendar years 2000 and 2001, unless clear and...) Determine the Total Entry Level Trawl Fishery Transition Rockfish QS pool for each rockfish primary species... Rockfish QS pools. (v) Multiply the Percentage of the Total Entry Level Trawl Fishery Transition Rockfish...

  8. 50 CFR 679.80 - Allocation and transfer of rockfish QS.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... from which that LLP license was derived during the calendar years 2000 and 2001, unless clear and...) Determine the Total Entry Level Trawl Fishery Transition Rockfish QS pool for each rockfish primary species... Rockfish QS pools. (v) Multiply the Percentage of the Total Entry Level Trawl Fishery Transition Rockfish...

  9. 2011 Women in Defense (WID) National Fall Conference

    DTIC Science & Technology

    2011-10-19

    UCLA. She is also a graduate of the UCLA Executive Management Course and the University of Chicago Business Leadership Program. A member of the Air...Supercircuits Ms. Beth A. Shepard -Savery Cox Communications Hampton Roads, LLC Ms. Erin B. Sheppard McKenna Long & Aldridge, LLP Ms. Heidi L Shyu

  10. lcmodels

    Cancer.gov

    The R package provides individual risks of lung cancer and lung cancer death based on various published papers: Bach et al., 2003; Spitz et al., 2007; Cassidy et al., 2008 (LLP); Hoggart et al., 2012; Tammemagi et al., 2013; Marcus et al., 2015 (LLPi); Wilson and Weissfeld, 2015 (Pittsburgh); Katki et al., 2016 (LCRAT and LCDRAT)

  11. 75 FR 75091 - Mortgage Assistance Relief Services

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-01

    ... ; Stephanie Armour, Home Foreclosure Rates Posts First Annual Decline in Five Years, USA Today (May 13, 2010... (ANX), Mem. Supp. Pls. Ex Parte App. at 3 (Aug. 3, 2009) (alleging that defendants engaged in... companies. \\67\\ See FTC v. Fed. Loan Modification Law Ctr., LLP, No. SACV09-401 CJC (MLGx), Mem. Supp. Ex...

  12. 78 FR 75413 - Self-Regulatory Organizations; National Securities Clearing Corporation; Order Approving Proposed...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-11

    ..., Bracewell & Giuliani LLP, on behalf of Investment Technology Group, Inc. (``ITG''), dated April 25, 2013... (``Citadel Letter IV''); and Mark Solomon, Managing Director and Deputy General Counsel, ITG, dated August 5... well as one industry trade group, SIFMA.\\23\\ NSCC also submitted two responses to comment letters...

  13. Counting on a More Diverse Workforce

    ERIC Educational Resources Information Center

    Chew, Cassie M.

    2007-01-01

    Big Four auditing firm Deloitte & Touche LLP USA may have found a way to address two of the top challenges for public accounting firms of the decade: finding and retaining qualified staff and recruiting new leadership. Deloitte is going to campuses across the United States in search of its next generation of talented accounting professionals…

  14. 50 CFR 679.82 - Rockfish Program use caps and sideboard limits.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... not participate in directed fishing for arrowtooth flounder, deep-water flatfish, and rex sole in the GOA (or in waters adjacent to the GOA when arrowtooth flounder, deep-water flatfish, and rex sole... authority of all eligible LLP licenses in the catcher/processor sector. (ii) For the deep-water halibut PSC...

  15. 50 CFR 679.82 - Rockfish Program use caps and sideboard limits.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... not participate in directed fishing for arrowtooth flounder, deep-water flatfish, and rex sole in the GOA (or in waters adjacent to the GOA when arrowtooth flounder, deep-water flatfish, and rex sole... authority of all eligible LLP licenses in the catcher/processor sector. (ii) For the deep-water halibut PSC...

  16. The Manager's Role in Financial Reporting: A Risk Consultant's Perspective

    ERIC Educational Resources Information Center

    Bell, Reginald L.

    2007-01-01

    This article presents an interview with Ray Gonzalez, a risk consultant at Deloitte & Touche LLP, in Houston, Texas, about the financial reporting responsibilities of top, middle, and frontline managers in large and medium-size firms. This interview spotlights the necessity for timely and accurate reporting of financial information relating to…

  17. 76 FR 18265 - Fairholme VP Series Fund, Inc. and Fairholme Capital Management LLC

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-01

    ... VLI Accounts, the Plans and the participants in participant- directed Plans can make decisions quickly.... Miller, Esq., Seward & Kissel LLP, 1200 G Street, NW., Washington, DC 20005. FOR FURTHER INFORMATION... INFORMATION: The following is a summary of the application. The complete application may be obtained for a fee...

  18. Luigi Osmieri | NREL

    Science.gov Websites

    Synthesizing, characterizing, and testing innovative non-precious-metal electrocatalysts (mainly based on Fe-N-C and Co-N-C). Visiting student at Universidad Autónoma de Madrid (Madrid, Spain), Department of Applied Physical Chemistry; European Union Mobility Program scholarship (LLP - ERASMUS) at Universitat Politècnica de Catalunya (Barcelona).

  19. 75 FR 60459 - Sunshine Act Notices

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-30

    ...: Thursday, September 23, 2010, at 10 a.m. PLACE: 999 E Street, NW., Washington, DC (Ninth Floor). STATUS... Democratic-Farmer-Labor Party by its counsel, Marc E. Elias, Esq. and Jonathan S. Berkon, Esq. of Perkins Coie, LLP. Draft Advisory Opinion 2010-19: Google by its counsel, Marc E. Elias, Esq. and Jonathan S...

  20. 76 FR 81925 - Freeport LNG Development, L.P.; Notice of Application

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-29

    ... Development, L.P.; Notice of Application Take notice that on December 9, 2011, Freeport LNG Development, L.P... questions regarding this application should be directed to Lisa M. Tonery, Fulbright & Jaworski L.L.P., 666...-Filing'' link. The Commission strongly encourages electronic filings. Comment Date: 5 p.m. Eastern Time...

  1. Contrasting Roles of the Apoplastic Aspartyl Protease APOPLASTIC, ENHANCED DISEASE SUSCEPTIBILITY1-DEPENDENT1 and LEGUME LECTIN-LIKE PROTEIN1 in Arabidopsis Systemic Acquired Resistance

    PubMed Central

    Breitenbach, Heiko H.; Wenig, Marion; Wittek, Finni; Jordá, Lucia; Maldonado-Alconada, Ana M.; Sarioglu, Hakan; Colby, Thomas; Knappe, Claudia; Bichlmeier, Marlies; Pabst, Elisabeth; Mackey, David; Parker, Jane E.; Vlot, A. Corina

    2014-01-01

    Systemic acquired resistance (SAR) is an inducible immune response that depends on ENHANCED DISEASE SUSCEPTIBILITY1 (EDS1). Here, we show that Arabidopsis (Arabidopsis thaliana) EDS1 is required for both SAR signal generation in primary infected leaves and SAR signal perception in systemic uninfected tissues. In contrast to SAR signal generation, local resistance remains intact in eds1 mutant plants in response to Pseudomonas syringae delivering the effector protein AvrRpm1. We utilized the SAR-specific phenotype of the eds1 mutant to identify new SAR regulatory proteins in plants conditionally expressing AvrRpm1. Comparative proteomic analysis of apoplast-enriched extracts from AvrRpm1-expressing wild-type and eds1 mutant plants led to the identification of 12 APOPLASTIC, EDS1-DEPENDENT (AED) proteins. The genes encoding AED1, a predicted aspartyl protease, and another AED, LEGUME LECTIN-LIKE PROTEIN1 (LLP1), were induced locally and systemically during SAR signaling and locally by salicylic acid (SA) or its functional analog, benzo(1,2,3)thiadiazole-7-carbothioic acid S-methyl ester. Because conditional overaccumulation of AED1-hemagglutinin inhibited SA-induced resistance and SAR but not local resistance, the data suggest that AED1 is part of a homeostatic feedback mechanism regulating systemic immunity. In llp1 mutant plants, SAR was compromised, whereas the local resistance that is normally associated with EDS1 and SA as well as responses to exogenous SA appeared largely unaffected. Together, these data indicate that LLP1 promotes systemic rather than local immunity, possibly in parallel with SA. Our analysis reveals new positive and negative components of SAR and reinforces the notion that SAR represents a distinct phase of plant immunity beyond local resistance. PMID:24755512

  2. [Diagnostic approach of an IgM monoclonal gammopathy and clinical importance of gene MYD88 L265P mutation].

    PubMed

    Cilla, N; Vercruyssen, M; Ameye, L; Paesmans, M; de Wind, A; Heimann, P; Meuleman, N; Bron, D

    2018-05-30

    An IgM monoclonal gammopathy points to a diagnosis of Waldenstrom's Macroglobulinemia. Other B-cell lymphoproliferative disorders should be ruled out, but the limits are sometimes difficult to define. The discovery of the L265P mutation of the MYD88 gene has potentially simplified the situation. 383 patients of the Jules Bordet Institute with an IgM level above 2 g/L were reviewed. For the 49 who had a monoclonal peak, we analysed the underlying pathology in terms of general, clinical and biological characteristics. We checked whether the MYD88 mutation had been detected. The overall survival rate was studied. 5 histological groups were identified: Waldenstrom's Macroglobulinemia (MW, N = 27), lymphoplasmacytic lymphoma (LLP, N = 10), marginal zone lymphoma (LMZ, N = 7), and monoclonal gammopathy of undetermined significance and multiple myeloma (MGUS/MM, N = 5). The MW group was compared to the other groups. Regarding biological characteristics, the IgM level upon diagnosis was statistically higher in the MW group, with a median level of 19.5 g/L (2.3-101 g/L) (p = 0.0001). Concerning the clinical characteristics, splenomegaly was more frequent in the LMZ group (p = 0.04). The L265P mutation of the MYD88 gene was found in 77 % of patients in the MW group, 60 % of patients in the LLP group and 67 % in the LMZ group (p = 0.38). For the 49 patients, the 10-year overall survival was 85 % (95 % CI, 67 % to 94 %) and the 15-year overall survival was 65 % (95 % CI, 41 % to 81 %). A monoclonal IgM peak suggests MW, but other B-cell lymphoproliferative disorders should be excluded. Even if the L265P mutation is frequent in LLP/MW, it is not specific. A precise diagnosis requires collating clinical, histological, immunophenotypic and genetic data.

  3. Overview of progesterone profiles in dairy cows.

    PubMed

    Blavy, P; Derks, M; Martin, O; Höglund, J K; Friggens, N C

    2016-09-01

    The aim of this study was to gain a better understanding of the variability in shape and features of progesterone profiles during estrus cycles in cows and to create templates for cycle shapes and features as a base for further research. Milk progesterone data from 1418 estrus cycles, coming from 1009 lactations, were obtained from the Danish Cattle Research Centre in Foulum, Denmark. Milk samples were analyzed daily using a Ridgeway ELISA kit. Estrus cycles with fewer than 10 data points or shorter than 4 days were discarded, after which 1006 cycles remained in the analysis. A median kernel of three data points was used to smooth the progesterone time series. The time between the start of progesterone rise and the end of progesterone decline was identified by fitting a simple model, consisting of a base length and a quadratic curve, to the progesterone data, and this luteal-like phase (LLP) was used for further analysis. The data set of 1006 LLPs was divided into five quantiles based on length. Within quantiles, a cluster analysis was performed on the basis of shape distance. Height, upward and downward slope, and progesterone level on Day 5 were compared between quantiles. Also, the ratio of typical versus atypical shapes was described, using a reference curve based on the data in Q1-Q4. The main results of this article were that (1) most of the progesterone profiles showed a typical profile, including the ones that exceeded the optimum cycle length of 24 days; and (2) cycles in Q2 and Q3 had steeper slopes and higher peak progesterone levels than cycles in Q1 and Q4 but, when normalized, had a similar shape. The results were used to define differences between quantiles that can be used as templates. Compared to Q1, LLPs in Q2 had a shape that is 1.068 times steeper and 1.048 times higher. Luteal-like phases in Q3 were 1.053 times steeper and 1.018 times higher. Luteal-like phases in Q4 were 0.977 times steeper and 0.973 times higher than LLPs in Q1. This article adds to our knowledge about the variability of progesterone profiles and their shape differences. The profile clustering procedure described in this article can be used as a means to classify progesterone profiles without recourse to an a priori set of rules, which arbitrarily segment the natural variability in these profiles. Using data-derived profile shapes may allow a more accurate assessment of the effects of, e.g., nutritional management or breeding system on progesterone profiles. Copyright © 2016 Elsevier Inc. All rights reserved.
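    The preprocessing steps described above (3-point median smoothing of the progesterone series, then grouping the luteal-like phases into five length quantiles) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function names and example values are assumptions.

```python
import numpy as np

def median3_smooth(series):
    """3-point median-kernel smoothing of a progesterone time series,
    as described in the abstract (endpoints are left unchanged)."""
    x = np.asarray(series, dtype=float)
    out = x.copy()
    for i in range(1, len(x) - 1):
        out[i] = np.median(x[i - 1:i + 2])
    return out

def quintile_groups(llp_lengths):
    """Assign each luteal-like-phase (LLP) length to one of five length
    quantiles (0..4 correspond to Q1..Q5), mirroring the grouping step."""
    lengths = np.asarray(llp_lengths, dtype=float)
    edges = np.quantile(lengths, [0.2, 0.4, 0.6, 0.8])
    return np.digitize(lengths, edges)

# Illustrative values only (not data from the study).
profile = [1.0, 9.0, 2.0, 3.0, 4.0, 20.0, 5.0]
print(median3_smooth(profile))  # smoothed values: 1, 2, 3, 3, 4, 5, 5
```

The median kernel removes single-sample spikes (like the 9.0 and 20.0 above) without flattening genuine rises, which is why it is preferred over a mean filter for assay noise.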

  4. 75 FR 82420 - Self-Regulatory Organizations, The NASDAQ Stock Market LLC; Order Approving Proposed Rule Change...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-30

    .... Chamberlain, Bingham McCutchen LLP, dated November 22, 2010 (``Bingham Letter''); David Alan Miller, Managing..., the SPAC must complete one or more business combinations having an aggregate fair market value of at..., investor base, and trading interest to provide the depth and liquidity necessary to promote fair and...

  5. 76 FR 30396 - Deloitte Financial Advisory Services LLP, Real Estate Consulting, Houston, TX; Amended...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-25

    ... Apply for Worker Adjustment Assistance In accordance with Section 223 of the Trade Act of 1974, as... Eligibility to Apply for Worker Adjustment Assistance on November 19, 2010, applicable to workers of Deloitte... Notice was published in the Federal Register on December 6, 2010 (75 FR 75700). The subject worker group...

  6. U.S. EPA, Pesticide Product Label, PROKIL DIURON 80W, 05/26/1971

    EPA Pesticide Factsheets

    2011-04-21


  7. 76 FR 32258 - Access to Aircraft Situation Display (ASDI) and National Airspace System Status Information (NASSI)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-03

    ..., Global Business Travel Association (GBTA), McAfee & Taft P.C. (a law firm), and Patton Boggs LLP (a law... complete information, due primarily to concerns of the National Business Aviation Association (NBAA) to... business according to a published listing of service and schedule, general aviation operators do not. It is...

  8. 75 FR 11958 - Self-Regulatory Organizations; The NASDAQ Stock Market LLC; Order Granting Approval of Proposed...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-12

    ... the application, entry and annual fees currently charged to issuers listed on the Nasdaq Global and Nasdaq Global Select Markets, as well as the fee for written interpretations of Nasdaq listing rules. The.... Markham, Jr., Roger Myers, and Stephen Ryerson, Holme Roberts & Owen LLP (writing on behalf of Business...

  9. 77 FR 17530 - Order Granting an Application of Edward Jones & Co. LLP Exemption From Exchange Act Section 11(d...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-26

    ... margin on newly-purchased shares of mutual funds not managed or sponsored by Edward Jones or any affiliate of Edward Jones (``non-proprietary mutual funds'') in instances in which the customer makes a dollar-for-dollar substitution by selling an already-margined non-proprietary mutual fund and buying...

  10. Look Ahead: Long-Range Learning Plans

    ERIC Educational Resources Information Center

    Weinstein, Margery

    2010-01-01

    Faced with an unsteady economy and fluctuating learning needs, planning a learning strategy designed to last longer than the next six months can be a tall order. But a long-range learning plan can provide a road map for success. In this article, four companies (KPMG LLP, CarMax, DPR Construction, and EMC Corp.) describe their learning plans, and…

  11. Academies: A Model for School Improvement? Key Findings from a Five-Year Longitudinal Evaluation

    ERIC Educational Resources Information Center

    Armstrong, David; Bunting, Valerie; Larsen, Judy

    2009-01-01

    Academies were launched by David Blunkett, the then Secretary of State for Education, in March 2000 in a speech on transforming secondary education. PricewaterhouseCoopers LLP (PwC) was commissioned by the predecessor of the Department for Children, Schools and Families (DCSF) in February 2003 to conduct an independent longitudinal evaluation of…

  12. 50 CFR 679.82 - Rockfish Program use caps and sideboard limits.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... with that LLP license that is subsequently approved by NMFS may not fish for that fishing year in any... of Alaska for which it adopts the applicable Federal fishing season for that species with any vessel... sideboard restrictions apply to fishing activities during July 1 through July 31 of each year in each...

  13. 76 FR 77728 - Process for a Designated Contract Market or Swap Execution Facility To Make a Swap Available To...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-14

    ..., 2011 at 9; Letter from Robert Pickel and Kenneth Bentsen, International Swaps and Derivatives..., dated Apr. 5, 2011 at 19; Letter from Robert Pickel and Kenneth Bentsen, International Swaps and... LLP, on behalf of certain dealers, dated Apr. 5, 2011 at 19; Letter from Robert Pickel and Kenneth...

  14. 77 FR 10775 - United States v. SG Interests I LTD., et al.; Proposed Final Judgment and Competitive Impact...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-23

    ... a related qui tam case also filed in United States District Court for the District of Colorado... ENERGY CORPORATION, 1801 Broadway, Suite 1200, Denver, CO 80202, Defendants. COMPLAINT The United States... & Jaworski, LLP, Republic Plaza, 370 Seventeenth Street, Suite 2150, Denver, CO 80202. Telephone: (303) 801...

  15. Quality Control Review of the Dixon Hughes Goodman LLP FY 2014 Single Audit of Logistics Management Institute

    DTIC Science & Technology

    2016-09-29

    independent, relevant, and timely oversight of the Department of Defense that supports the warfighter; promotes accountability, integrity, and...compliance testing for the allowable costs/cost principles compliance requirement to ensure the review of indirect costs is adequately performed...consulting services in logistics, acquisition and financial management, infrastructure management, information management, organizational improvement, and

  16. 75 FR 62421 - Notice of Lodging of Consent Decree Under the Clean Air Act

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-08

    ... States of America v. Dakota Ethanol, LLC, Civil Action No. 4:10-CV-04144-LLP, was lodged with the United... by the United States against Dakota Ethanol, LLC pursuant to Sections 111 and 502(a) of the Clean Air... penalties for Defendant's alleged violations of the Act. Dakota Ethanol, LLC owns and operates an ethanol...

  17. 76 FR 63953 - Notice of Lodging of Consent Decree Under the Clean Air Act

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-14

    ... & Co., Civil Action No. 4:11-cv-04143-LLP, D.J. Ref. 90-5-1-1-3973/1, was lodged with the United States... and injunctive relief in connection with Defendant John Morrell & Co.'s (``JMC'') violations of... has agreed to institute a new nameplate and label creation procedure to fix the remaining deficiency...

  18. A Downtown Denver Law Firm Leverages Tenant Improvement Funds to Cut Operating Expenses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Bryan Cave HRO (formerly Holme Roberts & Owen LLP, headquartered in Denver, Colorado), an international law firm, partnered with the U.S. Department of Energy (DOE) to develop and implement solutions to retrofit existing buildings to reduce annual energy consumption by at least 30% versus pre-retrofit energy use as part of DOE’s Commercial Building Partnership (CBP) program.

  19. 78 FR 66909 - Sabine Pass Liquefaction, LLC; Sabine Pass LNG, L.P.; Notice of Application to Amend...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-07

    ..., or call (713) 375-5000, or by email [email protected] . Or contact Lisa M. Tonery, Partner, Fulbright & Jaworski LLP, 666 Fifth Avenue, New York, NY 10103, or call (212)318-3009, or by email lisa...) and place it into the Commission's public record (eLibrary) for this proceeding; or issue a Notice of...

  20. 78 FR 38703 - LNG Development Company (d/b/a Oregon LNG); Oregon Pipeline Company, LLC; Notice of Application

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-27

    ...://www.ferc.gov using the ``eLibrary'' link. Enter the docket number excluding the last three digits in..., Vancouver, WA 98662, (503) 298-4967, [email protected] or Lisa M. Tonery, Fulbright & Jaworski LLP, 666 Fifth Avenue, New York, NY 10103, (212) 318-3009, lisa[email protected] . On July 16, 2012...

  1. The Nunn-McCurdy Act: Background, Analysis, and Issues for Congress

    DTIC Science & Technology

    2016-05-12

    December 2001, the Navy Area Defense (NAD) program was cancelled. 49 According to DOD, “the cancellation came, in part, as a result of a Nunn...Consulting LLP, Can We Afford Our Own Future? Why A&D Programs are Late and Over-budget—and What Can Be Done to Fix the Problem, 2008, p. 2. 49 NAD was

  2. 76 FR 13010 - Self-Regulatory Organizations; NASDAQ OMX BX, Inc.; Notice of Designation of Longer Period for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-09

    ..., dated September 28, 2010; Michael R. Trocchio, Bingham McCutchen LLP, on behalf of Pink OTC Markets Inc... Listing Market on the Exchange March 3, 2011. On August 20, 2010, NASDAQ OMX BX, Inc. (the ``Exchange... change to create a listing market on the Exchange. The proposed rule change was published for comment in...

  3. 75 FR 38452 - Fisheries of the Exclusive Economic Zone Off Alaska; Central Gulf of Alaska License Limitation...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-02

    ... gear'' which includes hook-and-line, pot, and jig gear. A vessel owner received an LLP license endorsed... operations. Specifically, fixed gear participants were concerned about the potential effects of additional... designated for (1) pot, hook-and-line, and jig gear; (2) specific GOA regulatory areas (i.e., CG and WG); (3...

  4. 77 FR 37941 - Order Granting a Limited Exemption From Exchange Act Rule 10b-17 to Certain Actively Managed...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-25

    .... John McGuire, Esq., Morgan Lewis & Bockius LLP regarding AdvisorShares Trust (December 16, 2011... or rights or other subscription offering, of the amount in cash to be paid or distributed per share... or distribution and the rate of the dividend or distribution. \\2\\ If the exact per share cash...

  5. Preparing Elementary and Secondary Pre-Service Teachers for Everyday Science

    ERIC Educational Resources Information Center

    Evagorou, Maria; Guven, Devrim; Mugaloglu, Ebru

    2014-01-01

    The purpose of the paper is to present the framework and design of modules aiming to teach socio-scientific issues and the related pedagogy to pre-service teachers. Specifically, the work presented in this paper is part of the PreSEES project, a Comenius/LLP project with the main aim of engaging elementary and secondary pre-service teachers in…

  6. 75 FR 1118 - BNSF Railway Company-Temporary Trackage Rights Exemption-Union Pacific Railroad Company

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-08

    ... trackage rights are temporary in nature and are for a period from January 22, 2010 through December 10... effective). An original and 10 copies of all pleadings, referring to STB Finance Docket No. 35340, must be..., a copy of each pleading must be served on Adrian L. Steel, Jr., Mayer Brown LLP, 1999 K Street, NW...

  7. Learning to Become an Intercultural Practitioner: The Case of Lifelong Learning Intensive Programme Interdisciplinary Course of Intercultural Competences

    ERIC Educational Resources Information Center

    Onorati, Maria Giovanna; Bednarz, Furio

    2010-01-01

    This paper dates back to 2009 (it was first presented at the CRLL Conference at Stirling University) and deals with the advances in lifelong learning introduced by an ERASMUS LLP-IP named Interdisciplinary Course of Intercultural Competences (ICIC). The programme, that involves academic and non-academic institutions concerned with higher education…

  8. 75 FR 37878 - Soo Line Railroad Company-Discontinuance of Trackage Rights Exemption-in Wayne, Washtenaw...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-30

    ..., LaPorte, Porter, and Lake Counties, IN, and Cook County, IL Soo Line Railroad Company (Soo Line) \\1\\ has... must be filed by July 20, 2010, with: Surface Transportation Board, 395 E Street, SW., Washington, DC... with the Board should be sent to Soo Line's representative: Terence M. Hynes, Sidley Austin LLP, 1501 K...

  9. Study of Large Data Resources for Multilingual Training and System Porting (Pub Version, Open Access)

    DTIC Science & Technology

    2016-05-03

    extraction trained on a large database corpus – English Fisher. Although the performance of the ported monolingual system would be worse in comparison...

    Language       TE     LI     HA     LA     ZU
    LLP hours      8.6    9.6    7.9    8.1    8.4
    LM sentences   11935  10743  9861   11577  10644
    LM words       68175  83157  93131  93328  60832
    dictionary     14505

  10. Affirmative Action Redux: Who Is behind the Latest Effort to End the Consideration of Race in College Admissions?

    ERIC Educational Resources Information Center

    Smith, Susan

    2012-01-01

    The homepage of the Project on Fair Representation (POFR) features a smiling photo of Abigail Fisher, the young White woman at the center of "Fisher v. the University of Texas," which could end race as a criterion in university admissions. Edward Blum, founder of POFR, a conservative advocacy group, connected Fisher with Wiley Rein LLP,…

  11. Seizing Opportunities: Genie Tyburski--Ballard Spahr Andrews & Ingersoll, LLP, Philadelphia

    ERIC Educational Resources Information Center

    Library Journal, 2004

    2004-01-01

    Genie Tyburski did not set out to be a law librarian. When asked at Drexel's library school what kind of librarian she wanted to be, she was surprised that "a good one" was not one of the options. But six weeks into the semester, she landed a part-time cataloging job at Community Legal Services in Philadelphia; six months later she was…

  12. Improving Air Force Enterprise Resource Planning-Enabled Business Transformation

    DTIC Science & Technology

    2013-01-01

    of the time of the research. At RAND, we thank Mr. Jerry Sollinger for helping us to organize our material and Dr. Laura Baldwin for...complex technology effort most public-sector organizations will ever attempt” (KPMG, 2011). While many of the challenges listed above may manifest... KPMG LLP, 2011). 1 Multi-echelon means one person is in charge and has responsibility for

  13. Department of Defense Office of the Inspector General FY 2013 Audit Plan

    DTIC Science & Technology

    2012-11-01

    oversight procedures to review KPMG LLP's work; and if applicable, disclose instances where KPMG LLP does not comply, in all material respects, with U.S...decisions. Pervasive material internal control weaknesses impact the accuracy, reliability and timeliness of budgetary and accounting data and...reported the same 13 material internal control weaknesses as in the previous year. These pervasive and longstanding financial management challenges

  14. 77 FR 68722 - Petition for Reconsideration of Action in Rulemaking Proceeding

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-16

    ... Manager of Navtech Radar Ltd., on behalf of Navtech Radar Ltd. and Bruce A. Olcott, for Squire Sanders LLP... 15.35 and 15.253 of the Commission's Rules Regarding Operation of Radar Systems in the 76-77 GHz Band; Amendment of Section 15.253 of the Commission's Rules to Permit Fixed Use of Radar in the 76-77 GHz Band (ET...

  15. 75 FR 64772 - Self-Regulatory Organizations; NASDAQ OMX BX, Inc.; Notice of Designation of a Longer Period for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-20

    ..., Secretary, Commission, from Michael R. Trocchio, Bingham McCutchen LLP, on behalf of Pink OTC Markets Inc... Rule Change To Create a Listing Market on the Exchange October 14, 2010. On August 20, 2010, NASDAQ OMX... thereunder,\\3\\ a proposed rule change to create a listing market, which will be called ``BX.'' The proposed...

  16. Universal aspects of conformations and transverse fluctuations of a two-dimensional semi-flexible chain

    NASA Astrophysics Data System (ADS)

    Hsu, Hsiao-Ping; Huang, Aiqun; Bhattacharya, Aniket; Binder, Kurt

    2015-03-01

    In this talk we compare the results obtained from Monte Carlo (MC) and Brownian dynamics (BD) simulation for the universal properties of a semi-flexible chain. Specifically, we compare MC results obtained using the pruned-enriched Rosenbluth method (PERM) with those obtained from BD simulation. We find that the scaled plots of the root-mean-square (RMS) end-to-end distance, ⟨R²⟩/(2Llp), and of the RMS transverse fluctuations, scaled by lp, as a function of L/lp (where L and lp are the contour length and the persistence length, respectively) are universal and independent of the definition of the persistence length used in the MC and BD schemes. We further investigate to what extent these results agree for a semi-flexible polymer confined in a quasi-one-dimensional channel.
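    For context, the classical baseline against which such scaled plots are usually compared is the ideal (Kratky-Porod) worm-like-chain result ⟨R²⟩ = 2lpL − 2lp²(1 − e^(−L/lp)), which interpolates between the rod limit (⟨R²⟩ ≈ L²) and the Gaussian-coil limit (⟨R²⟩ ≈ 2lpL). The sketch below evaluates its scaled form; it is the textbook ideal-chain curve, not the PERM/BD data of the abstract, and it omits the excluded-volume effects that matter in 2D.

```python
import math

def scaled_r2(x):
    """Ideal Kratky-Porod prediction for <R^2>/(2*L*lp) as a function
    of x = L/lp. Limits: x/2 as x -> 0 (rod), 1 as x -> inf (coil)."""
    if x == 0.0:
        return 0.0
    return 1.0 - (1.0 - math.exp(-x)) / x

for x in (0.1, 1.0, 10.0, 100.0):
    print(f"L/lp = {x:6.1f}  <R^2>/(2Llp) = {scaled_r2(x):.4f}")
```

At x = 1 the scaled value is exactly e^(−1) ≈ 0.368, a convenient check of the crossover region between the two limits.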

  17. Fostered Learning: Exploring Effects of Faculty and Student Affairs Staff Roles within Living-Learning Programs on Undergraduate Student Perceptions of Growth in Cognitive Dimensions

    ERIC Educational Resources Information Center

    Long, Nicole Natasha

    2012-01-01

    The purpose of this study was to explore effects of faculty and student affairs staff roles within living-learning programs (LLPs) on perceptions of growth in critical thinking/analysis abilities, cognitive complexity, and liberal learning among LLP participants. This study used two data sources from the National Study of Living-Learning Programs…

  18. Tobacco plants transformed with the bean αai gene express an inhibitor of insect α-amylase in their seeds [Nicotiana tabacum; Tenebrio molitor]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altabella, T.; Chrispeels, M.J.

    Bean (Phaseolus vulgaris L.) seeds contain a putative plant defense protein that inhibits insect and mammalian but not plant α-amylases. We recently presented strong circumstantial evidence that this α-amylase inhibitor (αAI) is encoded by an already-identified lectin gene whose product is referred to as lectin-like-protein (LLP). We have now made a chimeric gene consisting of the coding sequence of the lectin gene that encodes LLP and the 5′ and 3′ flanking sequences of the lectin gene that encodes phytohemagglutinin-L. When this chimeric gene was expressed in transgenic tobacco (Nicotiana tabacum), we observed in the seeds a series of polypeptides (Mr 10,000-18,000) that cross-react with antibodies to the bean α-amylase inhibitor. Most of these polypeptides bind to a pig pancreas α-amylase affinity column. An extract of the seeds of the transformed tobacco plants inhibits pig pancreas α-amylase activity as well as the α-amylase present in the midgut of Tenebrio molitor. We suggest that introduction of this lectin gene (to be called αai) into other leguminous plants may be a strategy to protect the seeds from the seed-eating larvae of Coleoptera.

  19. Ernst and Young LLP South Carolina Research Authority Fiscal Year Ended June 30, 1995.

    DTIC Science & Technology

    1997-06-30

    The objective of a quality control review is to ensure that the audit was conducted in accordance with applicable standards and meets the auditing...requirements of OMB Circular A-133. As the Federal oversight agency for SCRA, we conducted a quality control review of the audit working papers. We...focused our review on the following qualitative aspects of the audit : due professional care, planning, supervision, independence, quality control

  20. The Effect of Alternative Work Schedules (AWS) on Performance During Acquisition Based Testing at the U.S. Army Aberdeen Test Center

    DTIC Science & Technology

    2014-09-01

    profile .............................................................. 11 Table 5. Eastman Kodak company profile...schedules. Company profiles for KPMG LLP, Eastman Kodak and Texas Instruments (TI) are presented in Tables 4–6. Following each profile is a summary of the...and business continuity (Giglio n.d.-a). 2. Case Two A company profile (see Table 5) and case study summary on Eastman Kodak are presented in the

  1. A Management Case Analysis of the Department of Defense Contractor Risk Assessment Guide Program

    DTIC Science & Technology

    1990-12-01

    ORGANIZATION ......... 17 1. Past Government Organization Problems... 17 2. Current Government Organization ......... 17 3. The Contractor's Organization ...Program and urged CODSIA to take a leadership role in encouraging its members to participate. The DCAA [Ref ll:p. 45] informed defense contractors in...conferences, and through the leadership of officials including the Under Secretary of the Navy for Acquisition, DOD Comptroller General, DOD Inspector General

  2. 50 CFR Table 9 to Part 679 - Groundfish LLP Licenses Eligible for Use in the BSAI Longline Catcher/Processor Subsector, Column...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... in the BSAI Longline Catcher/Processor Subsector, Column A. X Indicates Whether Column B or Column C... the BSAI Longline Catcher/Processor Subsector, Column A. X Indicates Whether Column B or Column C...)(D)(2) LLG 4508 X LLG 1785 X LLG 3681 X LLG 3676 X LLG 3609 X LLG 1400 X LLG 1401 X LLG 3617 X LLG...

  3. SERDP and ESTCP Workshop on Long Term Management of Contaminated Groundwater Sites

    DTIC Science & Technology

    2013-11-01

    Yasmin Shafiq HydroGeoLogic, Inc. Allen Shapiro, Ph.D. U.S. Geological Survey Thomas Simpkin, Ph.D. CH2M HILL Mike Singletary NAVFAC...Buffalo ALLEN SHAPIRO, U.S. Geological Survey LENNY SIEGEL, Center for Public Environmental Oversight WILLIAM WALSH, Pepper Hamilton LLP Presenters...MNA Challenges Not appropriate for many CVOC sites (McGuire et al., Historical and Retrospective Survey of MNA... WSRC-TR-2003-00333) MNA Challenges

  4. Quality Control Review of Coopers & Lybrand L.L.P. Polytechnic University. Fiscal Year Ended June 30, 1996

    DTIC Science & Technology

    1998-05-18

    The objective of a quality control review is to assure that the audit was conducted in accordance with applicable standards and meets the auditing...requirements of the OMB Circular A-I 33. As the cognizant Federal agency for the University, we conducted a quality control review of the audit working...papers. We focused our review on the following qualitative aspects of the audit : due professional care, planning, supervision, independence, quality

  5. KPMG Peat Marwick LLP GreatLakes Composites Consortium, Inc. Fiscal Year Ended December 31, 1995

    DTIC Science & Technology

    1997-06-25

    The objective of a quality control review is to assure that the audit was conducted in accordance with applicable standards and meets the auditing...requirements of the OMB Circular A-133. As the cognizant agency for the Institute, we conducted a quality control review of the audit working papers. We...focused our review on the qualitative aspects of the audit : due professional care, planning, supervision, independence, quality control, internal

  6. KPMG Peat Marwick LLP Corporation of Mercer University Fiscal Year Ended June 30, 1995

    DTIC Science & Technology

    1997-06-11

    The objective of a quality control review is to ensure that the audit was conducted in accordance with applicable standards and meets the auditing...requirements of the OMB Circular A-133. We conducted a quality control review of the audit working papers. We focused our review on the following...qualitative aspects of the audit : due professional care, planning, supervision, independence, quality control, internal controls, substantive testing, general and specific compliance testing, and the Schedule of Federal Awards.

  7. Report on Quality Control Review of the Raich Ende Malter & Co. LLP FY 2009 Single Audit of the Riverside Research Institute

    DTIC Science & Technology

    2012-03-07

    compliance was based on a determination that 10 of the 14 compliance requirements were applicable to the Institute. However, the audit working papers...for all 14 of the compliance requirements were not adequate to support conclusions on applicability, internal control, and the audit opinion on...compliance with laws, regulations, and award provisions applicable to the R&D cluster program. In addition, the audit firm did not appropriately report an

  8. "Best-in-class": an interview with John T. Bigalke.

    PubMed

    Bigalke, J T

    2000-07-01

    John T. Bigalke, FHFMA, MBA, CPA, is national director, healthcare assurance & advisory services, Deloitte & Touche LLP. Bigalke began his career in health care in 1977 as a staff auditor and consultant with a Big Five accounting firm. In 1983, he moved to another Big Five firm, where he spent 15 years in leadership positions, including vice chairman of the healthcare practice, before moving to Deloitte & Touche in 1998. A member of HFMA since 1980, he served on HFMA's National Board of Directors from 1997 to 2000.

  9. 106-17 Telemetry Standards Chapter 7 Packet Telemetry Downlink

    DTIC Science & Technology

    2017-07-31

Acronyms IP Internet Protocol IPv4 Internet Protocol, Version 4 IPv6 Internet Protocol, Version 6 LLP low-latency PTDP MAC media access control...o 4'b0101: PT Internet Protocol (IP) Packet o 4'b0110: PT Chapter 24 TmNSMessage Packet o 4'b0111 – 4'b1111: Reserved • Fragment (bits 17 – 16...packet is defined as a free-running 12-bit counter. The PT test counter packet shall consist of one 12-bit word and shall be encoded as one 24-bit
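The 4-bit packet-type (PT) codes listed in the excerpt can be sketched as a small decoder. This is an illustrative assumption-laden sketch, not the normative IRIG 106-17 field layout; only the code values quoted above are taken from the record.

```python
# Hypothetical sketch: map the 4-bit PT codes quoted in the Chapter 7
# excerpt to names. Codes 0b0111-0b1111 are reserved per the excerpt;
# everything else here is an assumption for illustration.
PT_CODES = {
    0b0101: "IP Packet",
    0b0110: "Chapter 24 TmNSMessage Packet",
}

def decode_packet_type(nibble: int) -> str:
    """Decode a 4-bit PT field value to a descriptive name."""
    if not 0 <= nibble <= 0xF:
        raise ValueError("PT field is 4 bits wide")
    if 0b0111 <= nibble <= 0b1111:
        return "Reserved"
    return PT_CODES.get(nibble, "Unknown")
```

A real implementation would extract the nibble from the packet header word before decoding; the header layout is not recoverable from this snippet.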

  10. Estimation and Detection with Chaotic Systems

    DTIC Science & Technology

    1994-02-01

125 For each nonsingular transformation f: X → X on (X, 𝓑, μ), there is a unique operator P_f: L¹(μ) → L¹(μ), known as the Frobenius-Perron operator, which...satisfies ∫_B P_f(p(x)) dμ(x) = ∫_{f⁻¹(B)} p(x) dμ(x) (6.10) for each B ∈ 𝓑 and p ∈ L¹(μ) [50, 55]. This rather abstract definition of the Frobenius-Perron ... Frobenius-Perron operator for the n-fold composition of the transformation f is the same as the n-fold composition of the Frobenius-Perron operator for f. As
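The density-transport property defined above can be illustrated numerically. The sketch below is not from the report: it pushes samples from a uniform density through the logistic map f(x) = 4x(1-x), approximating the repeated action of the Frobenius-Perron operator, and compares the result to that map's known invariant density 1/(π√(x(1-x))).

```python
import numpy as np

rng = np.random.default_rng(0)

def push_forward(samples: np.ndarray) -> np.ndarray:
    """One application of the logistic map f(x) = 4x(1-x).
    Clipped to [0, 1] to guard against floating-point round-off."""
    return np.clip(4.0 * samples * (1.0 - samples), 0.0, 1.0)

# Samples from an initial uniform density on [0, 1]; iterating the map
# transports the density toward the invariant density of f.
x = rng.uniform(0.0, 1.0, 100_000)
for _ in range(20):
    x = push_forward(x)

# Compare the empirical histogram to the exact invariant density
# 1/(pi*sqrt(x(1-x))) on interior bins (it diverges at 0 and 1).
bins = np.linspace(0.01, 0.99, 50)
hist, edges = np.histogram(x, bins=bins, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
exact = 1.0 / (np.pi * np.sqrt(centers * (1.0 - centers)))
```

The histogram reproduces the characteristic U-shape of the invariant density, which is exactly the fixed point of the Frobenius-Perron operator for this map.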

  11. The Exponentially Embedded Family of Distributions for Effective Data Representation, Information Extraction, and Decision Making

    DTIC Science & Technology

    2013-03-01

information extraction and learning from data. First of all, it admits sufficient statistics and therefore provides the means for selecting good models...readily found since the Kullback-Leibler divergence can be used to ascertain distances between PDFs for various hypothesis testing scenarios. We...t1, t2) The information content of T2(x) is D(p_{T1,T2}(t1, t2) || p_{T1,T2; η2=0}(t1, t2)) = reduction in distance to the true PDF, where D(p1||p2) is the Kullback-Leibler divergence.
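The distance measure D(p1||p2) referenced above is the Kullback-Leibler divergence; a minimal sketch for discretized PDFs (names and example distributions are mine, not from the report):

```python
import numpy as np

def kl_divergence(p1: np.ndarray, p2: np.ndarray) -> float:
    """D(p1||p2) = sum_i p1_i * log(p1_i / p2_i).
    Assumes p2 > 0 wherever p1 > 0; the 0*log(0) terms are taken as 0."""
    p1 = np.asarray(p1, dtype=float)
    p2 = np.asarray(p2, dtype=float)
    mask = p1 > 0
    return float(np.sum(p1[mask] * np.log(p1[mask] / p2[mask])))
```

Note that D(p||p) = 0 and the divergence is asymmetric in general, which is why it measures a directed "distance" from one PDF to another rather than a metric.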

  12. Report on Quality Control Review of the Deloitte & Touche, LLP and Defense Contract Audit Agency FY 2008 Single Audit of The Aerospace Corporation

    DTIC Science & Technology

    2010-10-29

  13. Damage Models for Delamination and Transverse Fracture.

    DTIC Science & Technology

    1987-08-01

  14. Current Capabilities, Requirements and a Proposed Strategy for Interdependency Analysis in the UK

    NASA Astrophysics Data System (ADS)

    Bloomfield, Robin; Chozos, Nick; Salako, Kizito

The UK government recently commissioned a research study to identify the state of the art in Critical Infrastructure modelling and analysis, and the government/industry requirements for such tools and services. This study (Cetifs) concluded with a strategy aiming to bridge the gaps between capabilities and requirements and to establish interdependency analysis as a commercially viable service in the near future. This paper presents the findings of this study, which was carried out by CSR (City University London), Adelard LLP (a safety/security consultancy), and Cranfield University (the defence academy of the UK).

  15. Modelling income distribution impacts of water sector projects in Bangladesh.

    PubMed

    Ahmed, C S; Jones, S

    1991-09-01

Dynamic analysis was conducted to assess the long-term impacts of water sector projects on agricultural income distribution, and sensitivity analysis was conducted to check the robustness of the 5 assumptions in this study of income distribution and water sector projects in Bangladesh. 7 transitions are analyzed for mutually exclusive irrigation and flooding projects: nonirrigation to 1) LLP irrigation, 2) STW irrigation, 3) DTW irrigation, 4) major gravity irrigation, and 5) manually operated shallow tubewell irrigation (MOSTI); and Flood Control Projects (FCD) of 6) medium flooded to shallow flooded, and 7) deeply flooded to shallow flooded. 5 analytical stages are involved: 1) farm budgets are derived with and without project cropping patterns for each transition. 2) Estimates are generated for value added/hectare from each transition. 3) Assumptions are made about the number of social classes, distribution of land ownership between classes, extent of tenancy for each social class, term of tenancy contracts, and extent of hiring of labor for each social class. 4) Annual value added/hectare is distributed among social classes. 5) Using Gini coefficients and simple ratios, the distribution of income between classes is estimated with and without the transition. Assumption I is that there are 4 social classes defined by land acreage: large farmers (5 acres), medium farmers (1.5-5.0), small farmers (0.01-1.49), and landless. Assumption II is that land distribution follows the 1978 Land Occupancy Survey (LOS). Biases, if any, are indicated. Assumption III is that large farmers sharecrop out 15% of land to small farmers. Assumption IV is that landlords provide nonirrigated crop land and take 50% of the crop, and, under irrigation, provide 50% of the fertilizer, pesticide, and irrigation costs and take 50% of the crop. Assumption V is that hired and family labor is 40% for small farmers, 60% for medium farmers, and 80% for large farmers.
It is understood that the analysis is partially complete, since there is no assessment of the impact on nonagricultural income and employment, or of secondary impacts such as demand for irrigation equipment, services for processing, manufacture and transport, or investment of new agricultural surpluses. Few empirical studies have been done and the estimates apply only to individual project areas. The results show that inequality is greatest with major (gravity) irrigation, followed by STW, DTW and LLP, FCD (medium to shallow), FCD (deep to shallow), and the most equitable, MOSTI. Changes in the absolute income accruing to the rural poor would rank major gravity irrigation as raising the most people above the poverty line, followed by MOSTI, minor irrigation (STW, DTW, and LLP), and FCD schemes.
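Stage 5 above compares income distributions via Gini coefficients. A minimal sketch of a standard Gini computation follows; the class income figures are hypothetical placeholders, not values from the study.

```python
import numpy as np

def gini(incomes) -> float:
    """Gini coefficient via the sorted-index formula:
    G = sum_i (2i - n - 1) * x_i / (n * sum(x)), i = 1..n over sorted x.
    0 = perfect equality, (n-1)/n = all income held by one class."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    return float(np.sum((2 * i - n - 1) * x) / (n * np.sum(x)))

# Hypothetical per-class incomes (landless, small, medium, large farmers)
# with and without a project transition, for illustration only.
with_project = [120, 300, 800, 2400]
without_project = [150, 300, 700, 1800]
```

Comparing `gini(with_project)` against `gini(without_project)` is the kind of with/without-transition contrast the abstract describes.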

  16. SU-F-BRB-01: How Effective Is Abdominal Compression at Reducing Lung Motion? An Analysis Using Deformable Image Registration Within Different Sub-Regions of the Lung

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paradiso, D; Pearce, A; Leszczynski, K

    2015-06-15

Purpose: To investigate the effectiveness of employing abdominal compression (AC) in reducing motion for the target region and sub-regions of the lung as part of the planning process for radiation therapy. Methods: Fourteen patients with early lung cancer were scanned with 4DCT; where target motion exceeded our institutional limit of >8 mm, a repeat 4DCT was acquired with AC. For each 4DCT, deformable image registration (DIR) was used to map the max inhale to the max exhale phase to determine the deformation vector fields (DVF). DIR was performed with Morphons and Demons algorithms. The mean DVF was used to represent that sub-region for each patient. The magnitudes of the mean DVF were quantified for the target and 12 sub-regions in the AP, LR, and SI directions. The sub-regions were contoured on each lung as (with prefix R or L for the lung): Upper-Anterior (UA), Upper-Posterior (UP), Mid-Anterior (MA), Mid-Posterior (MP), Lower-Anterior (LA) and Lower-Posterior (LP). Results: The min/max SI motion for the target on the uncompressed 4DCT was 8 mm/24.5 mm. The magnitude of decrease in SI was greatest in the RLP region (3.7±4.0 mm), followed by the target region (3.3±2.2 mm) and finally the LLP region (3.0±3.5 mm). The magnitude of decrease in the 3D vector followed the same trend: RLP (3.5±2.2 mm), then GTV (3.5±2.6 mm), then LLP (2.7±3.8 mm). 79% of the cases had a SI decrease of >12.5%, 43% had a SI decrease of >25% and 21% had a SI decrease of >50% as compared to the motion on the uncompressed 4DCT. Conclusion: AC is useful in reducing motion with the largest decreases observed in the lower posterior regions of the lungs. However, it should be noted that AC will not greatly decrease motion for all cases, as 21% of cases did not reduce SI motion more than 12.5% of initial motion.
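The quantification step described in Methods, taking the mean deformation vector inside a sub-region and its 3D magnitude, can be sketched as follows. The DVF and mask here are tiny synthetic stand-ins, not output of an actual Morphons or Demons registration.

```python
import numpy as np

def mean_region_vector(dvf: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Mean displacement vector over a sub-region.
    dvf: (Z, Y, X, 3) displacement field in mm; mask: (Z, Y, X) bool."""
    return dvf[mask].mean(axis=0)

def vector_magnitude(v: np.ndarray) -> float:
    """3D magnitude of a mean displacement vector, in mm."""
    return float(np.linalg.norm(v))

# Synthetic example: a uniform 3 mm SI displacement inside a masked
# sub-region of a 4x4x4 field (components ordered AP, LR, SI here).
dvf = np.zeros((4, 4, 4, 3))
mask = np.zeros((4, 4, 4), dtype=bool)
mask[1:3, 1:3, 1:3] = True
dvf[mask] = [0.0, 0.0, 3.0]

mean_vec = mean_region_vector(dvf, mask)
```

With a real registration output, one mask per contoured sub-region (RUA, RUP, ..., LLP) yields the per-region AP/LR/SI and 3D magnitudes tabulated in the Results.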

  17. Use of spent mushroom substrate for production of Bacillus thuringiensis by solid-state fermentation.

    PubMed

    Wu, Songqing; Lan, Yanjiao; Huang, Dongmei; Peng, Yan; Huang, Zhipeng; Xu, Lei; Gelbic, Ivan; Carballar-Lejarazu, Rebeca; Guan, Xiong; Zhang, Lingling; Zou, Shuangquan

    2014-02-01

    The aim of this study was to explore a cost-effective method for the mass production of Bacillus thuringiensis (Bt) by solid-state fermentation. As a locally available agroindustrial byproduct, spent mushroom substrate (SMS) was used as raw material for Bt cultivation, and four combinations of SMS-based media were designed. Fermentation conditions were optimized on the best medium and the optimal conditions were determined as follows: temperature 32 degrees C, initial pH value 6, moisture content 50%, the ratio of sieved material to initial material 1:3, and inoculum volume 0.5 ml. Large scale production of B. thuringiensis subsp. israelensis (Bti) LLP29 was conducted on the optimal medium at optimal conditions. High toxicity (1,487 international toxic units/milligram) and long larvicidal persistence of the product were observed in the study, which illustrated that SMS-based solid-state fermentation medium was efficient and economical for large scale industrial production of Bt-based biopesticides. The cost of production of 1 kg of Bt was approximately US$0.075.

  18. Remote direct memory access over datagrams

    DOEpatents

    Grant, Ryan Eric; Rashti, Mohammad Javad; Balaji, Pavan; Afsahi, Ahmad

    2014-12-02

    A communication stack for providing remote direct memory access (RDMA) over a datagram network is disclosed. The communication stack has a user level interface configured to accept datagram related input and communicate with an RDMA enabled network interface card (NIC) via an NIC driver. The communication stack also has an RDMA protocol layer configured to supply one or more data transfer primitives for the datagram related input of the user level. The communication stack further has a direct data placement (DDP) layer configured to transfer the datagram related input from a user storage to a transport layer based on the one or more data transfer primitives by way of a lower layer protocol (LLP) over the datagram network.

  19. A Revision of the Argyritarsis Section of the Subgenus Nyssorhynchus of Anopheles (Diptera: Culicidae)

    DTIC Science & Technology

    1988-01-01

    specimens: lM, 1M gen). Cayo: Cayo, BHL 40, lM, 1M gen. BOLIVIA (37 specimens: 21M, 1M gen, 15F). La Paz: Apolo, 15 Jan 1946, S. Blatman, 3M, 3F. Chulumani... La 152 Dorada, 25 Jun 1943, KO 112-6, lM, 1F. Meta: Cumaral, COB 45, IlpF, llpM, 1M gen. Pt. Lopez, COM 547, 1F. Restrepo, 21 Aug 1935, W. H. Kemp...and collector, 16 Feb 1927, La Ese, CR 32, 41pF, lpF, 21p, 2F. Orotina, 20 Dee 1920, A. Alfaro, 3F. Rio Tiribi, 1921, A. Alfaro, 2M, 1F. San Isidro

  20. The Proteome of Seed Development in the Model Legume Lotus japonicus

    PubMed Central

    Dam, Svend; Laursen, Brian S.; Ørnfelt, Jane H.; Jochimsen, Bjarne; Stærfeldt, Hans Henrik; Friis, Carsten; Nielsen, Kasper; Goffard, Nicolas; Besenbacher, Søren; Krusell, Lene; Sato, Shusei; Tabata, Satoshi; Thøgersen, Ida B.; Enghild, Jan J.; Stougaard, Jens

    2009-01-01

    We have characterized the development of seeds in the model legume Lotus japonicus. Like soybean (Glycine max) and pea (Pisum sativum), Lotus develops straight seed pods and each pod contains approximately 20 seeds that reach maturity within 40 days. Histological sections show the characteristic three developmental phases of legume seeds and the presence of embryo, endosperm, and seed coat in desiccated seeds. Furthermore, protein, oil, starch, phytic acid, and ash contents were determined, and this indicates that the composition of mature Lotus seed is more similar to soybean than to pea. In a first attempt to determine the seed proteome, both a two-dimensional polyacrylamide gel electrophoresis approach and a gel-based liquid chromatography-mass spectrometry approach were used. Globulins were analyzed by two-dimensional polyacrylamide gel electrophoresis, and five legumins, LLP1 to LLP5, and two convicilins, LCP1 and LCP2, were identified by matrix-assisted laser desorption ionization quadrupole/time-of-flight mass spectrometry. For two distinct developmental phases, seed filling and desiccation, a gel-based liquid chromatography-mass spectrometry approach was used, and 665 and 181 unique proteins corresponding to gene accession numbers were identified for the two phases, respectively. All of the proteome data, including the experimental data and mass spectrometry spectra peaks, were collected in a database that is available to the scientific community via a Web interface (http://www.cbs.dtu.dk/cgi-bin/lotus/db.cgi). This database establishes the basis for relating physiology, biochemistry, and regulation of seed development in Lotus. Together with a new Web interface (http://bioinfoserver.rsbs.anu.edu.au/utils/PathExpress4legumes/) collecting all protein identifications for Lotus, Medicago, and soybean seed proteomes, this database is a valuable resource for comparative seed proteomics and pathway analysis within and beyond the legume family. PMID:19129418

  1. Optical element for full spectral purity from IR-generated EUV light sources

    NASA Astrophysics Data System (ADS)

    van den Boogaard, A. J. R.; Louis, E.; van Goor, F. A.; Bijkerk, F.

    2009-03-01

Laser produced plasma (LLP) sources are generally considered attractive for high power EUV production in next generation lithography equipment. Such plasmas are most efficiently excited by the relatively long, infrared wavelengths of CO2-lasers, but a significant part of the rotational-vibrational excitation lines of the CO2 radiation will be backscattered by the plasma's critical density surface and consequently will be present as parasitic radiation in the spectrum of such sources. Since most optical elements in the EUV collecting and imaging train have a high reflection coefficient for IR radiation, undesirable heating phenomena at the resist level are likely to occur. In this study a completely new principle is employed to obtain full separation of EUV and IR radiation from the source by a single optical component. While the application of a transmission filter would come at the expense of EUV throughput, this technique potentially enables wavelength separation without losing reflectance compared to a conventional Mo/Si multilayer coated element. As a result this method provides full spectral purity from the source without loss in EUV throughput. Detailed calculations on the principle of operation are presented.

  2. 77 FR 76770 - Proposed Exemptions From Certain Prohibited Transaction Restrictions

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-28

    ...This document contains notices of pendency before the Department of Labor (the Department) of proposed exemptions from certain of the prohibited transaction restrictions of the Employee Retirement Income Security Act of 1974 (ERISA or the Act) and/or the Internal Revenue Code of 1986 (the Code). This notice includes the following proposed exemptions: D-11664, Atlas Energy, Inc. Employee Stock Ownership Plan (the Plan); D-11718, Notice of Proposed Amendment to Prohibited Transaction Exemption (PTE) 2007-05, Involving Prudential Securities Incorporated; L-11720, The Mo-Kan Teamsters Apprenticeship and Training Fund (the Fund); L-11738, The Coca-Cola Company (TCCC) and Red Re, Inc. (Red Re) (together, the Applicants); and D-11671, Silchester International Investors LLP (Silchester or the Applicant).

  3. World pharmaceuticals--Financial Times tenth annual conference. 22-23 April 1999, London, UK.

    PubMed

    Muhsin, M

    1999-07-01

    This two-day conference was organized by The Financial Times, in association with PriceWaterhouseCoopers LLP. The general theme of the event was the state of the healthcare industry, past, present and future. The main areas covered included addressing the challenges of the 1990s, anticipating the challenges of the next decade, the changing shape of global marketing, IT in healthcare, consolidation challenges, shareholder expectations, and new strategies and technologies for growth sustenance within the industry. Key speakers within the industry addressed these issues to an audience of approximately 200 healthcare business executives. The first day was chaired by Mr Robert Cawthorn (Chairman Emeritus, Rhone-Poulenc Rorer Inc) and the second by Professor Trevor Jones (Director General, Association of the British Pharmaceutical Industry).

  4. Intellectual Property Is No Game: An Interview with James G. Gatto, JD.

    PubMed

    2012-12-01

    Copying within the games industry is reportedly widespread. Some people attribute this to the belief that this is just the way it is and has always been based on the notion that the "idea" for a game is not protectable. But as the game market grows, so too do the losses from copying suffered by game innovators. A contributing factor is that many game developers do not develop comprehensive strategies for protecting the valuable intellectual property that they create. In the following interview, Bill Ferguson, PhD, Editor of Games for Health Journal, discusses the hazards and ways to protect health game assets with intellectual property expert Jim Gatto, Leader of the Social Media, Entertainment & Technology Team at the respected law firm of Pillsbury Winthrop Shaw Pittman LLP.

  5. FUELS IN SOIL TEST KIT: FIELD USE OF DIESEL DOG SOIL TEST KITS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unknown

    2001-05-31

Western Research Institute (WRI) is commercializing Diesel Dog Portable Soil Test Kits for performing analysis of fuel-contaminated soils in the field. The technology consists of a method developed by WRI (U.S. Patents 5,561,065 and 5,976,883) and hardware developed by WRI that allows the method to be performed in the field (patent pending). The method is very simple and does not require the use of highly toxic reagents. The aromatic components in a soil extract are measured by absorption at 254 nm with a field-portable photometer. WRI added significant value to the technology by taking the method through the American Society for Testing and Materials (ASTM) approval and validation processes. The method is designated ASTM Method D-5831-96, Standard Test Method for Screening Fuels in Soils. This ASTM designation allows the method to be used for federal compliance activities. In FY 99, twenty-five preproduction kits were successfully constructed in cooperation with CF Electronics, Inc., of Laramie, Wyoming. The kit components work well and the kits are fully operational. In the calendar year 2000, kits were provided to the following entities who agreed to participate as FY 99 and FY 00 JSR (Jointly Sponsored Research) cosponsors and use the kits as opportunities arose for field site work: Wyoming Department of Environmental Quality (DEQ) (3 units), F.E. Warren Air Force Base, Gradient Corporation, The Johnson Company (2 units), IT Corporation (2 units), TRC Environmental Corporation, Stone Environmental, ENSR, Action Environmental, Laco Associates, Barenco, Brown and Caldwell, Dames and Moore Lebron LLP, Phillips Petroleum, GeoSyntek, and the State of New Mexico. By early 2001, ten kits had been returned to WRI following the six-month evaluation period. On return, the components of all ten kits were fully functional. The kits were upgraded with circuit modifications, new polyethylene foam inserts, and updated instruction manuals.

  6. Extraction and the Fatty Acid Profile of Rosa acicularis Seed Oil.

    PubMed

    Du, Huanan; Zhang, Xu; Zhang, Ruchun; Zhang, Lu; Yu, Dianyu; Jiang, Lianzhou

    2017-12-01

Rosa acicularis seed oil was extracted from Rosa acicularis seeds by the ultrasonic-assisted aqueous enzymatic method using cellulase and protease. Based on single-factor experiments, a Plackett-Burman design was applied to ultrasonic-assisted aqueous enzymatic extraction of wild rose seed oil. The effects of enzyme amount, hydrolysis temperature and initial pH on the total extraction rate of wild rose seed oil were studied using Box-Behnken optimization methodology. Chemical characteristics of a sample of Rosa acicularis seeds and Rosa acicularis seed oil were determined in this work. The tocopherol content was 200.6±0.3 mg/100 g oil. The Rosa acicularis seed oil was rich in linoleic acid (56.5%) and oleic acid (34.2%). The saturated fatty acids included palmitic acid (4%) and stearic acid (2.9%). The major fatty acids in the sn-2 position of triacylglycerol in Rosa acicularis oil were linoleic acid (60.6%), oleic acid (33.6%) and linolenic acid (3.2%). According to the 1,3-random-2-random hypothesis, the dominant triacylglycerols were LLL (18%), LLnL (1%), LLP (2%), LOL (10%), LLSt (1.2%), PLP (0.2%), LLnP (0.1%), LLnO (0.6%) and LOP (1.1%). This work could be useful for developing applications for Rosa acicularis seed oil.
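Under the 1,3-random-2-random hypothesis cited above, fatty acids at sn-1/sn-3 are drawn from one pool and sn-2 from another, so the fraction of a triacylglycerol a-b-c (b at sn-2) is f13(a)·f2(b)·f13(c), doubled when a ≠ c to count both sn-1/sn-3 orders. The sketch below uses hypothetical mole fractions, not the measured Rosa acicularis values.

```python
def tag_fraction(f13: dict, f2: dict, a: str, b: str, c: str) -> float:
    """Fraction (0-1) of TAG with acids a and c at sn-1/sn-3 and b at sn-2,
    under the 1,3-random-2-random hypothesis."""
    p = f13[a] * f2[b] * f13[c]
    # Two positional orders (a-b-c and c-b-a) when the outer acids differ.
    return p if a == c else 2.0 * p

# Hypothetical sn-1/sn-3 and sn-2 mole fractions for illustration
# (L = linoleic, O = oleic, P = palmitic).
f13 = {"L": 0.55, "O": 0.35, "P": 0.10}
f2 = {"L": 0.60, "O": 0.34, "P": 0.02}
```

For example, `tag_fraction(f13, f2, "L", "L", "L")` gives the predicted LLL fraction; summing over all species recovers the kind of composition table reported in the abstract.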

  7. Development of feeding systems and strategies of supplementation to enhance rumen fermentation and ruminant production in the tropics

    PubMed Central

    2013-01-01

The availability of local feed resources in various seasons can contribute as essential sources of carbohydrate and protein which significantly impact rumen fermentation and the subsequent productivity of the ruminant. Recent developments, based on enriching protein in cassava chips, have yielded yeast fermented cassava chip protein (YEFECAP) providing up to 47.5% crude protein (CP), which can be used to replace soybean meal. The use of fodder trees has been developed through the process of pelleting; Leucaena leucocephala leaf pellets (LLP), mulberry leaf pellets (MUP) and mangosteen peel and/or garlic pellets can be used as good sources of protein to supplement ruminant feeding. Apart from producing volatile fatty acids and microbial proteins, greenhouse gases such as methane are also produced in the rumen. Several methods have been used to reduce rumen methane. However, among many approaches, nutritional manipulation using feed formulation and feeding management, especially the use of plant extracts or plants containing secondary compounds (condensed tannins and saponins) and plant oils, has been reported. This approach could help to decrease rumen protozoa and methanogens and thus mitigate the production of methane. At present, this pressing issue, the role of livestock in global warming, warrants further research with regard to economic viability and practical feasibility. PMID:23981662

  8. Assessing Soil Organic Carbon Stocks in Fire-Affected Pinus Palustris Forests

    NASA Astrophysics Data System (ADS)

    Butnor, J. R.; Johnsen, K. H.; Jackson, J. A.; Anderson, P. H.; Samuelson, L. J.; Lorenz, K.

    2014-12-01

    This study aimed to quantify the vertical distribution of soil organic carbon (SOC) and its biochemically resistant fraction (SOCR; defined as residual SOC following H2O2 treatment and dilute HNO3 digestion) in managed longleaf pine (LLP) stands located at Fort Benning, Georgia, USA (32.38 N., 84.88 W.). Although it is unclear how to increase SOCR via land management, it is a relatively stable carbon (C) pool that is important for terrestrial C sequestration. SOC concentration declines with soil depth on upland soils without a spodic horizon; however, the portion that is SOCR and the residence time of this fraction on LLP stands is unknown. Soils were collected by depth at five sites with common land use history, present use for active military training and a three-year prescribed fire return cycle. Soils were treated with H2O2 and dilute HNO3 to isolate SOCR. In the upper 1-m of soil SOC stocks averaged 72.1 ± 6.6 Mg C ha-1 and SOCR averaged 25.8 ± 3.2 Mg C ha-1. Depending on the site, the ratio of SOCR:SOC ranged from 0.25 to 0.50 in the upper 1-m of soil. On clayey soils the ratio of SOCR:SOC increased with soil depth. One site containing 33% clay at 50 to 100 cm depth had a SOCR:SOC ratio of 0.68. The radiocarbon age of SOCR increased with soil depth, ranging from approximately 2,000 years before present (YBP) at 0 to 10 cm to over 5,500 YBP at 50 to 100 cm depth. Across all sites, SOCR makes up a considerable portion of SOC. What isn't clear is the proportion of SOCR that is of pyrogenic origin (black carbon), versus SOCR that is stabilized by association with the mineral phase. Ongoing analysis with 13C nuclear magnetic resonance spectroscopy will provide data on the degree of aromaticity of the SOCR and some indication of the nature of its biochemical stability.

  9. Expression of antimicrobial peptide genes in Bombyx mori gut modulated by oral bacterial infection and development.

    PubMed

    Wu, Shan; Zhang, Xiaofeng; He, Yongqiang; Shuai, Jiangbing; Chen, Xiaomei; Ling, Erjun

    2010-11-01

Although Bombyx mori systemic immunity is extensively studied, little is known about the silkworm's intestine-specific responses to bacterial infection. Antimicrobial peptide (AMP) gene expression analysis of B. mori intestinal tissue upon oral infection with Gram-positive (Staphylococcus aureus) and Gram-negative (Escherichia coli) bacteria revealed that there is specificity in the interaction between host immune responses and parasite types. Neither Att1 nor Leb could be stimulated by S. aureus or E. coli. However, CecA1, Glo1, Glo2, Glo3, Glo4 and Lys could only be triggered by S. aureus. On the contrary, E. coli stimulation caused a decrease in the expression of CecA1, Glo3 and Glo4 at some time points. Interestingly, there is regional specificity in the silkworm's local gut immunity. During the immune response, the increase in Def, Hem and LLP3 was only detected in the foregut and midgut. For CecB1, CecD, LLP2 and Mor, after oral administration of E. coli, the up-regulation was limited to the midgut and hindgut. CecE was the only AMP that responded positively to both bacteria in all testing situations. With development, the expression levels of the AMPs also changed dramatically. That is, at the spinning and prepupa stages, a large increase in the expression of CecA1, CecB1, CecD, CecE, Glo1, Glo2, Glo3, Glo4, Leb, Def, Hem, Mor and Lys was detected in the gut. Unexpectedly, in addition to the IMD pathway genes, the Toll and JAK/STAT pathway genes in the silkworm gut can also be activated by microbial oral infection. But in the developmental course, corresponding to the increased expression of AMPs at the spinning and prepupa stages, only the Toll pathway genes in the gut exhibited a similar increasing trend. Our results imply that the immune responses in the silkworm gut are synergistically regulated by the Toll, JAK/STAT and IMD pathways.
However, as pupation approaches, the Toll pathway may play a role in AMP expression. Copyright © 2010 Elsevier Ltd. All rights reserved.

  10. Severe storm electricity

    NASA Technical Reports Server (NTRS)

    Arnold, R. T.; Rust, W. D.

    1984-01-01

Successful ground truth support of U-2 overflights has been accomplished. Data have been reduced for 4 June 1984 and some of the results have been integrated into MSFC's efforts. Staccato lightning (a multiply branched, single-stroke flash with no continuing current) is prevalent within the rain-free region around the main storm updraft, and this is believed to be important; i.e., staccato flashes might be an important indicator of severe storm electrification. Results from data analysis from two stations appear to indicate that charge center heights can be estimated from a combination of intercept data with data from the fixed laboratory at NSSL. An excellent database has been provided for determining the site errors and efficiency of NSSL's LLP system. Cloud structures which appear to be related to electrical activity, observable in a low radar reflectivity region and on a scale smaller than is currently resolved by radar, are studied.

  11. Report on Quality Control Review of Grant Thornton, LLP FY 2009 Single Audit of Concurrent Technologies Corporation

    DTIC Science & Technology

    2011-12-05

  12. New detectors to explore the lifetime frontier

    NASA Astrophysics Data System (ADS)

    Chou, John Paul; Curtin, David; Lubatti, H. J.

    2017-04-01

Long-lived particles (LLPs) are a common feature in many beyond the Standard Model theories, including supersymmetry, and are generically produced in exotic Higgs decays. Unfortunately, no existing or proposed search strategy will be able to observe the decay of non-hadronic electrically neutral LLPs with masses above ∼ GeV and lifetimes near the limit set by Big Bang Nucleosynthesis (BBN), cτ ≲ 10^7-10^8 m. We propose the MATHUSLA surface detector concept (MAssive Timing Hodoscope for Ultra Stable neutraL pArticles), which can be implemented with existing technology and in time for the high luminosity LHC upgrade to find such ultra-long-lived particles (ULLPs), whether produced in exotic Higgs decays or more general production modes. We also advocate a dedicated LLP detector at a future 100 TeV collider, where a modestly sized underground design can discover ULLPs with lifetimes at the BBN limit produced in sub-percent level exotic Higgs decays.
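The sensitivity of a displaced detector like the one proposed above is driven by the exponential decay law: the probability that an LLP decays inside a volume spanning lab-frame distances L1 to L2 is exp(-L1/λ) - exp(-L2/λ), with mean lab-frame decay length λ = βγ·cτ. The sketch below is a back-of-envelope illustration with hypothetical geometry and boost, not numbers from the paper.

```python
import math

def decay_probability(c_tau_m: float, beta_gamma: float,
                      l1_m: float, l2_m: float) -> float:
    """Probability that a particle with proper decay length c*tau (m) and
    boost beta*gamma decays between lab-frame distances l1 and l2 (m)."""
    lab_decay_length = beta_gamma * c_tau_m
    return math.exp(-l1_m / lab_decay_length) - math.exp(-l2_m / lab_decay_length)

# Hypothetical example: a surface detector whose decay volume spans
# 200-220 m from the interaction point, and an LLP with c*tau = 1e7 m
# (near the BBN limit) and boost beta*gamma = 3.
p = decay_probability(c_tau_m=1e7, beta_gamma=3.0, l1_m=200.0, l2_m=220.0)
```

For cτ far larger than the detector distance, the probability reduces to roughly (L2-L1)/λ, which is why BBN-limit lifetimes demand either huge decay volumes or huge LLP production rates.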

  13. Time-Resolved Detection of Fingermarks on Non-Porous and Semi-Porous Substrates Using Sr2MgSi2O7:Eu2+, Dy3+ Phosphors.

    PubMed

    Xiong, Xiaobo; Yuan, Ximing; Song, Jiangqi; Yin, Guoxiang

    2016-06-01

    Eu(2+), Dy(3+) co-doped strontium-magnesium silicate phosphors, Sr2MgSi2O7:Eu(2+), Dy(3+) (SMSEDs), have shown great potential in optoelectronic devices due to their unique luminescent properties. However, their potential applications in forensic science, latent fingermark detection in particular, are still being investigated. In this contribution, SMSEDs were successfully employed to detect latent fingermarks on a variety of non-porous and semi-porous surfaces, including aluminum foil, porcelain, glass, painted wood, colored paper, and leather. All the results illustrated that this luminescent powder, as a long-lasting phosphorescence (LLP) material, was an ideal time-resolved detection reagent for fingermarks, eliminating background interferences from various difficult substrates, and offered good contrast allowing identification without the need for further enhancement, compared to nanosized organic fluorescent powder. © The Author(s) 2016.

  14. Space Station module Power Management And Distribution (PMAD) system

    NASA Technical Reports Server (NTRS)

    Walls, Bryan

    1990-01-01

    This project consists of several tasks which are unified toward experimentally demonstrating the operation of a highly autonomous, user-supportive power management and distribution system for Space Station Freedom (SSF) habitation/laboratory modules. This goal will be extended to a demonstration of autonomous, cooperative power system operation for the whole SSF power system through a joint effort with NASA's Lewis Research Center, using their Autonomous Power System. Short-term goals for the space station module power management and distribution include having an operational breadboard reflecting current plans for SSF, improving performance of the system communications, and improving the organization and mutability of the artificial intelligence (AI) systems. In the middle term, intermediate levels of autonomy will be added, user interfaces will be modified, and enhanced modeling capabilities will be integrated in the system. Long-term goals involve conversion of all software into Ada, vigorous verification and validation efforts and, finally, seeing an impact of this research on the operation of SSF. Conversion of the system to a DC Star configuration is now in progress, and should be completed by the end of October 1989. This configuration reflects the latest SSF module architecture. Hardware is now being procured which will improve system communications significantly. An initial version of the Knowledge-Based Management System (KBMS) has been developed, and the rules from FRAMES have been implemented in the KBMS. Rules in the other two AI systems are also being grouped modularly, making them more tractable and easier to eventually move into the KBMS. Adding an intermediate level of autonomy will require development of a planning utility, which will also be built using the KBMS. These changes will require having the user interface for the whole system available from one interface. 
An Enhanced Model will be developed, which will allow exercise of the system through the interface without requiring all of the power hardware to be operational. The functionality of the AI systems will continue to be advanced, including incipient failure detection. Ada conversion will begin with the lowest level processor (LLP) code. Then selected pieces of the higher-level functionality will be recoded in Ada and, where possible, moved to the LLP level. Validation and verification will be done on the Ada code, and will be completed sometime after the Ada conversion.

  15. Additive manufacturing integrated energy—enabling innovative solutions for buildings of the future

    DOE PAGES

    Biswas, Kaushik; Rose, James; Eikevik, Leif; ...

    2016-11-10

    Here, the AMIE (Additive Manufacturing Integrated Energy) demonstration utilized 3D printing as an enabling technology in the pursuit of construction methods that use less material, create less waste, and require less energy to build and operate. Developed by Oak Ridge National Laboratory (ORNL) in collaboration with the Governor's Chair for Energy and Urbanism, a research partnership of the University of Tennessee (UT) and ORNL led by Skidmore, Owings & Merrill LLP (SOM), AMIE embodies a suite of innovations demonstrating a transformative future for designing, constructing and operating buildings. Subsequent UT College of Architecture and Design studios taught in collaboration with SOM professionals also explored forms and shapes based on biological systems that naturally integrate structure and enclosure. AMIE, a compact micro-dwelling developed by ORNL research scientists and SOM designers, incorporates next-generation modified atmosphere insulation, self-shading windows, and the ability to produce, store and share solar power with a paired hybrid vehicle. It establishes, for the first time, a platform for investigating solutions integrating the energy systems in buildings, vehicles, and the power grid. The project was built with broad-based support from local industry and national material suppliers. Designed and constructed in a span of only nine months, AMIE 1.0 serves as an example of the rapid innovation that can be accomplished when research, design, academic and industrial partners work in collaboration toward the common goal of a more sustainable and resilient built environment.

  16. Implications of Nine Risk Prediction Models for Selecting Ever-Smokers for Computed Tomography Lung Cancer Screening.

    PubMed

    Katki, Hormuzd A; Kovalchik, Stephanie A; Petito, Lucia C; Cheung, Li C; Jacobs, Eric; Jemal, Ahmedin; Berg, Christine D; Chaturvedi, Anil K

    2018-05-15

    Lung cancer screening guidelines recommend using individualized risk models to refer ever-smokers for screening. However, different models select different screening populations. The performance of each model in selecting ever-smokers for screening is unknown. To compare the U.S. screening populations selected by 9 lung cancer risk models (the Bach model; the Spitz model; the Liverpool Lung Project [LLP] model; the LLP Incidence Risk Model [LLPi]; the Hoggart model; the Prostate, Lung, Colorectal, and Ovarian Cancer Screening Trial Model 2012 [PLCOM2012]; the Pittsburgh Predictor; the Lung Cancer Risk Assessment Tool [LCRAT]; and the Lung Cancer Death Risk Assessment Tool [LCDRAT]) and to examine their predictive performance in 2 cohorts. Population-based prospective studies. United States. Models selected U.S. screening populations by using data from the National Health Interview Survey from 2010 to 2012. Model performance was evaluated using data from 337 388 ever-smokers in the National Institutes of Health-AARP Diet and Health Study and 72 338 ever-smokers in the CPS-II (Cancer Prevention Study II) Nutrition Survey cohort. Model calibration (ratio of model-predicted to observed cases [expected-observed ratio]) and discrimination (area under the curve [AUC]). At a 5-year risk threshold of 2.0%, the models chose U.S. screening populations ranging from 7.6 million to 26 million ever-smokers. These disagreements occurred because, in both validation cohorts, 4 models (the Bach model, PLCOM2012, LCRAT, and LCDRAT) were well-calibrated (expected-observed ratio range, 0.92 to 1.12) and had higher AUCs (range, 0.75 to 0.79) than 5 models that generally overestimated risk (expected-observed ratio range, 0.83 to 3.69) and had lower AUCs (range, 0.62 to 0.75). The 4 best-performing models also had the highest sensitivity at a fixed specificity (and vice versa) and similar discrimination at a fixed risk threshold. 
These models showed better agreement on the size of the screening population (7.6 million to 10.9 million) and achieved consensus on 73% of persons chosen. A limitation is the lack of consensus on risk thresholds for screening. The 9 lung cancer risk models chose widely differing U.S. screening populations. However, 4 models (the Bach model, PLCOM2012, LCRAT, and LCDRAT) most accurately predicted risk and performed best in selecting ever-smokers for screening. Funded by the Intramural Research Program of the National Institutes of Health/National Cancer Institute.
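
    The two performance measures reported above can be computed directly. The sketch below is illustrative (toy risk scores, not study data), using the expected-observed ratio for calibration and the Mann-Whitney formulation of the AUC for discrimination.

```python
import numpy as np

def expected_observed_ratio(pred_risk, outcome):
    """Calibration: total model-predicted risk over the observed case count."""
    return pred_risk.sum() / outcome.sum()

def auc(pred_risk, outcome):
    """Discrimination: probability a randomly chosen case is ranked above a
    randomly chosen non-case (Mann-Whitney formulation of the AUC)."""
    cases = pred_risk[outcome == 1]
    controls = pred_risk[outcome == 0]
    greater = (cases[:, None] > controls[None, :]).sum()
    ties = (cases[:, None] == controls[None, :]).sum()
    return (greater + 0.5 * ties) / (cases.size * controls.size)

pred = np.array([0.9, 0.8, 0.2, 0.1])   # hypothetical 5-year risks
outcome = np.array([1, 1, 0, 0])        # 1 = lung cancer case
# A perfectly calibrated, perfectly discriminating toy example:
print(expected_observed_ratio(pred, outcome), auc(pred, outcome))
```

An expected-observed ratio near 1 indicates good calibration; values well above 1 correspond to the overestimating models described above.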

  17. Altmetric: Top 50 dental articles in 2014.

    PubMed

    Kolahi, J; Khazaei, S

    2016-06-10

    Introduction Altmetrics is a new and emerging scholarly tool that measures online attention surrounding journal articles. Altmetric data resources include: policy documents, news outlets, blogs, online reference managers (eg Mendeley and CiteULike), post-publication peer-review forums (eg PubPeer and Publons), social media (eg Twitter, Facebook, Weibo, Google+, Pinterest, Reddit), Wikipedia, sites running Stack Exchange (Q&A), and reviews on F1000 and YouTube.Methods To identify the top 50 dental articles in 2014, PubMed was searched using the following query "("2014/1/1"[PDAT]:"2014/12/31"[PDAT]) and jsubsetd[text]" in December 2015. Consequently, all PubMed records were extracted and sent to Altmetric LLP (London, UK) as a CSV file for examination. Data were analysed by Microsoft Office Excel 2010 using descriptive statistics and charts.Results Using PubMed searches, 15,132 dental articles were found in 2014. The mean Altmetric score of the 50 top dental articles in 2014 was 69.5 ± 73.3 (95% CI: -74.14 to 213.14). The British Dental Journal (48%) and Journal of Dental Research (16%) had the maximum number of top articles. Twitter (67.13%), Mendeley (15.89%) and news outlets (10.92%) were the most popular altmetric data resources.Discussion Altmetrics are intended to supplement bibliometrics, not replace them. Altmetrics is a fresh and emerging arena for the dental research community. We believe that dental clinical practitioners, research scientists, research directors and journal editors must pay more attention to altmetrics as a new and rapid tool to measure the social impact of scholarly articles.

  18. STENCIL: Science Teaching European Network for Creativity and Innovation in Learning

    NASA Astrophysics Data System (ADS)

    Cattadori, M.; Magrefi, F.

    2013-12-01

    STENCIL is a European educational project funded with the support of the European Commission within the framework of the Lifelong Learning Programme (LLP) for a period of 3 years (2011-2013). STENCIL includes 21 members from 9 European countries (Bulgaria, Germany, Greece, France, Italy, Malta, Portugal, Slovenia, Turkey) working together to contribute to the general objective of improving science teaching by promoting innovative methodologies and creative solutions. Among the innovative methods adopted, of particular interest is a joint partnership between a wide spectrum of institutions such as schools, school authorities, research centres, universities, science museums, and other organizations, representing differing perspectives on science education. STENCIL offers practitioners in science education from all over Europe a platform: the web portal - www.stencil-science.eu - that provides high visibility to schools and institutions involved in Comenius and other similar European-funded projects in science education. STENCIL takes advantage of the positive results achieved by the former European projects STELLA - Science Teaching in a Lifelong Learning Approach (2007-2009) and GRID - Growing Interest in the Development of Teaching Science (2004-2006). The specific objectives of the project are: 1) to identify and promote innovative practices in science teaching through the publication of Annual Reports on Science Education; 2) to bring together science education practitioners to share different experiences and learn from each other through the organisation of periodical study visits and workshops; 3) to disseminate materials and outcomes coming from previous EU-funded projects and from isolated science education initiatives through the STENCIL web portal, as well as through international conferences and national events. 
This contribution aims to explain the main features of the project together with the results achieved during its 3-year lifetime.

  19. Prediction of breast cancer risk using a machine learning approach embedded with a locality preserving projection algorithm.

    PubMed

    Heidari, Morteza; Khuzani, Abolfazl Zargari; Hollingsworth, Alan B; Danala, Gopichandh; Mirniaharikandehei, Seyedehnafiseh; Qiu, Yuchen; Liu, Hong; Zheng, Bin

    2018-01-30

    In order to automatically identify a set of effective mammographic image features and build an optimal breast cancer risk stratification model, this study aims to investigate advantages of applying a machine learning approach embedded with a locality preserving projection (LPP) based feature combination and regeneration algorithm to predict short-term breast cancer risk. A dataset involving negative mammograms acquired from 500 women was assembled. This dataset was divided into two age-matched classes of 250 high risk cases in which cancer was detected in the next subsequent mammography screening and 250 low risk cases, which remained negative. First, a computer-aided image processing scheme was applied to segment fibro-glandular tissue depicted on mammograms and initially compute 44 features related to the bilateral asymmetry of mammographic tissue density distribution between left and right breasts. Next, a multi-feature fusion based machine learning classifier was built to predict the risk of cancer detection in the next mammography screening. A leave-one-case-out (LOCO) cross-validation method was applied to train and test the machine learning classifier embedded with the LPP algorithm, which generated a new operational vector with 4 features using a maximal variance approach in each LOCO process. Results showed a 9.7% increase in risk prediction accuracy when using this LPP-embedded machine learning approach. An increased trend of adjusted odds ratios was also detected in which odds ratios increased from 1.0 to 11.2. This study demonstrated that applying the LPP algorithm effectively reduced feature dimensionality, and yielded higher and potentially more robust performance in predicting short-term breast cancer risk.
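
    For readers unfamiliar with LPP, a minimal numpy/scipy sketch of the projection step is given below. The heat-kernel affinity, the regularization constant, and the toy data are illustrative assumptions, not details from the study.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def lpp(X, n_components=2, t=1.0):
    """Project X (n_samples x n_features) onto directions that preserve
    local neighborhood structure (locality preserving projection)."""
    # Heat-kernel affinity between all pairs of samples
    W = np.exp(-cdist(X, X, "sqeuclidean") / t)
    D = np.diag(W.sum(axis=1))
    L = D - W                                     # graph Laplacian
    # Generalized eigenproblem X^T L X a = lambda X^T D X a;
    # the smallest eigenvalues give the most locality-preserving directions.
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-8 * np.eye(X.shape[1])   # regularize for stability
    vals, vecs = eigh(A, B)
    return X @ vecs[:, :n_components]

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))    # 50 samples, 8 features (toy stand-in)
Z = lpp(X, n_components=4)      # reduced 4-feature representation
print(Z.shape)                  # (50, 4)
```

The study's pipeline additionally selects components by a maximal variance criterion inside each LOCO fold; the sketch shows only the core projection.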

  20. Prediction of breast cancer risk using a machine learning approach embedded with a locality preserving projection algorithm

    NASA Astrophysics Data System (ADS)

    Heidari, Morteza; Zargari Khuzani, Abolfazl; Hollingsworth, Alan B.; Danala, Gopichandh; Mirniaharikandehei, Seyedehnafiseh; Qiu, Yuchen; Liu, Hong; Zheng, Bin

    2018-02-01

    In order to automatically identify a set of effective mammographic image features and build an optimal breast cancer risk stratification model, this study aims to investigate advantages of applying a machine learning approach embedded with a locality preserving projection (LPP) based feature combination and regeneration algorithm to predict short-term breast cancer risk. A dataset involving negative mammograms acquired from 500 women was assembled. This dataset was divided into two age-matched classes of 250 high risk cases in which cancer was detected in the next subsequent mammography screening and 250 low risk cases, which remained negative. First, a computer-aided image processing scheme was applied to segment fibro-glandular tissue depicted on mammograms and initially compute 44 features related to the bilateral asymmetry of mammographic tissue density distribution between left and right breasts. Next, a multi-feature fusion based machine learning classifier was built to predict the risk of cancer detection in the next mammography screening. A leave-one-case-out (LOCO) cross-validation method was applied to train and test the machine learning classifier embedded with the LPP algorithm, which generated a new operational vector with 4 features using a maximal variance approach in each LOCO process. Results showed a 9.7% increase in risk prediction accuracy when using this LPP-embedded machine learning approach. An increased trend of adjusted odds ratios was also detected in which odds ratios increased from 1.0 to 11.2. This study demonstrated that applying the LPP algorithm effectively reduced feature dimensionality, and yielded higher and potentially more robust performance in predicting short-term breast cancer risk.

  1. Measurement System Analyses - Gauge Repeatability and Reproducibility Methods

    NASA Astrophysics Data System (ADS)

    Cepova, Lenka; Kovacikova, Andrea; Cep, Robert; Klaput, Pavel; Mizera, Ondrej

    2018-02-01

    The submitted article focuses on a detailed explanation of the average and range method (Automotive Industry Action Group, Measurement System Analysis approach) and of the honest Gauge Repeatability and Reproducibility method (Evaluating the Measurement Process approach). The measured data (thickness of plastic parts) were evaluated by both methods and their results were compared on the basis of numerical evaluation. Both methods were additionally compared and their advantages and disadvantages were discussed. One difference between both methods is the calculation of variation components. The AIAG method calculates the variation components based on standard deviation (then a sum of variation components does not give 100 %) and the honest GRR study calculates the variation components based on variance, where the sum of all variation components (part to part variation, EV & AV) gives the total variation of 100 %. Acceptance of both methods among the professional society, future use, and acceptance by manufacturing industry were also discussed. Nowadays, the AIAG is the leading method in the industry.
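
    As an illustration of why the variance-based components sum to 100%, the two-way ANOVA decomposition behind an honest GRR study can be sketched as follows (array shapes and toy data are illustrative assumptions, not the article's measured data):

```python
import numpy as np

def grr_variance_components(x):
    """x: measurements with shape (parts, operators, trials).
    Returns percentage contributions of the variance components."""
    p, o, r = x.shape
    grand = x.mean()
    part_means = x.mean(axis=(1, 2))
    oper_means = x.mean(axis=(0, 2))
    cell_means = x.mean(axis=2)
    # Two-way ANOVA mean squares
    ms_part = o * r * ((part_means - grand) ** 2).sum() / (p - 1)
    ms_oper = p * r * ((oper_means - grand) ** 2).sum() / (o - 1)
    inter_dev = cell_means - part_means[:, None] - oper_means[None, :] + grand
    ms_po = r * (inter_dev ** 2).sum() / ((p - 1) * (o - 1))
    ms_e = ((x - cell_means[:, :, None]) ** 2).sum() / (p * o * (r - 1))
    # Variance components (negative estimates clipped at zero)
    ev = ms_e                                    # repeatability (equipment)
    av = max(0.0, (ms_oper - ms_po) / (p * r))   # reproducibility (appraiser)
    po = max(0.0, (ms_po - ms_e) / r)            # part-by-operator interaction
    pv = max(0.0, (ms_part - ms_po) / (o * r))   # part-to-part variation
    total = ev + av + po + pv
    return {k: 100 * v / total for k, v in
            {"EV": ev, "AV": av, "PxO": po, "PV": pv}.items()}

rng = np.random.default_rng(42)
parts = rng.normal(0, 2.0, size=(10, 1, 1))   # true part-to-part spread
noise = rng.normal(0, 0.3, size=(10, 3, 2))   # repeatability error
pct = grr_variance_components(parts + noise)
print(round(sum(pct.values()), 6))            # 100.0 by construction
```

Because the components are variances rather than standard deviations, their percentages add to exactly 100%, which is the difference from the AIAG average-and-range calculation discussed above.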

  2. Severe storm electricity

    NASA Technical Reports Server (NTRS)

    Rust, W. D.; Macgorman, D. R.; Taylor, W.; Arnold, R. T.

    1984-01-01

    Severe storms and lightning were measured with a NASA U2 and ground-based facilities, both fixed-base and mobile. Aspects of this program are reported. The following results are presented: (1) ground truth measurements of lightning for comparison with those obtained by the U2; these measurements include flash type identification, electric field changes, optical waveforms, and ground strike location; (2) simultaneous extremely low frequency (ELF) waveforms for cloud-to-ground (CG) flashes; (3) assessment of the CG strike location system (LLP) using a combination of mobile laboratory and television video data; (4) continued development of analog-to-digital conversion techniques for processing lightning data from the U2, mobile laboratory, and NSSL sensors; (5) completion of an all-azimuth TV system for CG ground truth; (6) a preliminary analysis of both IC and CG lightning in a mesocyclone; and (7) the finding of a bimodal peak in the altitude of lightning activity in some storms in the Great Plains and on the east coast. In the storms on the Great Plains, there was a distinct class of flash that forms the upper mode of the distribution. These flashes are of smaller horizontal extent, but occur more frequently than flashes in the lower mode of the distribution.

  3. Sizing procedures for sun-tracking PV system with batteries

    NASA Astrophysics Data System (ADS)

    Nezih Gerek, Ömer; Başaran Filik, Ümmühan; Filik, Tansu

    2017-11-01

    Deciding the optimum number of PV panels, wind turbines and batteries (i.e., a complete renewable energy system) for minimum cost and complete energy balance is a challenging and interesting problem. In the literature, some rough data models or limited recorded data together with low-resolution hourly averaged meteorological values are used to test the sizing strategies. In this study, active sun-tracking and fixed PV solar power generation values of ready-to-serve commercial products were recorded throughout 2015-2016. Simultaneously, several outdoor parameters (solar radiation, temperature, humidity, wind speed/direction, pressure) were recorded with high resolution. The hourly energy consumption values of a standard 4-person household, constructed on our campus in Eskisehir, Turkey, were also recorded for the same period. During sizing, novel parametric random process models for wind speed, temperature, solar radiation, energy demand and electricity generation curves are achieved, and it is observed that these models provide sizing results with lower loss-of-load probability (LLP) through Monte Carlo experiments that consider average and minimum performance cases. Furthermore, another novel cost optimization strategy is adopted to show that solar-tracking PV panels provide lower costs by enabling a reduced number of installed batteries. Results are verified over real recorded data.
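
    The LLP criterion used in such sizing studies can be sketched as a simple hourly energy-balance simulation of a PV + battery system, reporting the fraction of demand left unserved. All numbers below (demand level, PV profile, battery size) are illustrative assumptions, not the study's recorded data.

```python
import numpy as np

def loss_of_load_probability(gen, demand, batt_capacity, soc0=0.5):
    """LLP = unserved energy / total demand over the simulated horizon."""
    soc = soc0 * batt_capacity
    unmet = 0.0
    for g, d in zip(gen, demand):
        balance = g - d
        if balance >= 0:
            soc = min(batt_capacity, soc + balance)   # store the surplus
        else:
            draw = min(soc, -balance)                 # cover deficit from battery
            soc -= draw
            unmet += (-balance) - draw
    return unmet / demand.sum()

rng = np.random.default_rng(0)
hours = np.arange(24 * 365)
demand = 1.0 + 0.5 * rng.random(hours.size)     # toy hourly demand, kWh
solar = np.clip(np.sin(2 * np.pi * (hours % 24 - 6) / 24), 0, None)
gen = 4.0 * solar                               # toy daytime PV profile, kWh
print(loss_of_load_probability(gen, demand, batt_capacity=10.0))
```

In a Monte Carlo sizing loop, many synthetic years would be drawn from the fitted random process models and the panel/battery counts increased until the worst-case LLP falls below a target.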

  4. Zero tolerance for incorrect data: Best practices in SQL transaction programming

    NASA Astrophysics Data System (ADS)

    Laiho, M.; Skourlas, C.; Dervos, D. A.

    2015-02-01

    DBMS products differ in the way they support even the basic SQL transaction services. In this paper, a framework of best practices in SQL transaction programming is given and discussed. The SQL developers are advised to experiment with and verify the services supported by the DBMS product used. The framework has been developed by DBTechNet, a European network of teachers, trainers and ICT professionals. A course module on SQL transactions, offered by the LLP "DBTech VET Teachers" programme, is also presented and discussed. Aims and objectives of the programme include the introduction of the topics and content of SQL transactions and concurrency control to HE/VET curricula and addressing the need for initial and continuous training on these topics to in-company trainers, VET teachers, and Higher Education students. An overview of the course module, its learning outcomes, the education and training (E&T) content, virtual database labs with hands-on self-practicing exercises, plus instructions for the teacher/trainer on the pedagogy and the usage of the course modules' content are briefly described. The main principle adopted is to "Learn by verifying in practice" and the transactions course motto is: "Zero Tolerance for Incorrect Data".
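
    The transaction discipline the course advocates ("zero tolerance for incorrect data") can be sketched in Python with sqlite3; the bank-transfer schema below is a hypothetical example, not part of the course material. The point is that a failed statement must leave no partial update behind.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None          # take manual control of transactions
cur = conn.cursor()
cur.execute("CREATE TABLE account("
            "id INTEGER PRIMARY KEY, balance INTEGER CHECK(balance >= 0))")
cur.executemany("INSERT INTO account VALUES (?, ?)", [(1, 100), (2, 50)])

def transfer(cur, src, dst, amount):
    cur.execute("BEGIN")
    try:
        cur.execute("UPDATE account SET balance = balance - ? WHERE id = ?",
                    (amount, src))
        cur.execute("UPDATE account SET balance = balance + ? WHERE id = ?",
                    (amount, dst))
        cur.execute("COMMIT")
    except sqlite3.Error:
        cur.execute("ROLLBACK")      # zero tolerance: no partial update survives

transfer(cur, 1, 2, 30)     # succeeds
transfer(cur, 1, 2, 999)    # violates the CHECK constraint, rolled back atomically
print(cur.execute("SELECT balance FROM account ORDER BY id").fetchall())
# [(70,), (80,)]
```

Verifying this behavior on each DBMS product, as the paper recommends, matters because autocommit defaults and implicit-commit rules differ between products.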

  5. Conjugated linoleic acid-rich soy oil triacylglycerol identification.

    PubMed

    Lall, Rahul K; Proctor, Andrew; Jain, Vishal P; Lay, Jackson O

    2009-03-11

    Conjugated linoleic acid (CLA)-rich soy oil has been produced by soy oil linoleic acid (LA) photoisomerization, but CLA-rich oil triacylglycerol (TAG) characterization was not described. Therefore, the objectives were to identify and quantify new TAG fractions in CLA-rich oil by nonaqueous reversed-phase high-performance liquid chromatography (NARP-HPLC). Analytical NARP-HPLC with an acetonitrile/dichloromethane (ACN/DCM) gradient and an evaporative light scattering detector/ultraviolet (ELSD/UV) detector was used. New TAG peaks from LA-containing TAGs were observed. The LnLL, LLL, LLO, and LLP (Ln, linolenic; L, linoleic; O, oleic; and P, palmitic) peaks reduced after isomerization with an increase in adjacent peaks that coeluted with LnLnO, LnLO, LnOO, and LnPP. The newly formed peaks were wider than those of the original oil and absorbed at 233 nm, suggesting the possibility of various CLA-containing TAGs. The HPLC profile showed five fractions of mixed TAGs, and fatty acid analysis showed that CLA isomers were found predominately in fractions 2 and 3, which originally contained most LA. The CLA isomers were 70-80% trans,trans and 20-30% cis,trans and trans,cis.

  6. Interpretation of biological and mechanical variations between the Lowry versus Bradford method for protein quantification.

    PubMed

    Lu, Tzong-Shi; Yiao, Szu-Yu; Lim, Kenneth; Jensen, Roderick V; Hsiao, Li-Li

    2010-07-01

    The identification of differences in protein expression resulting from methodical variations is an essential component of the interpretation of true, biologically significant results. We used the Lowry and Bradford methods - the two most commonly used methods for protein quantification - to assess whether differential protein expressions are a result of true biological or methodical variations. Materials & Methods: Differential protein expression patterns were assessed by western blot following protein quantification by the Lowry and Bradford methods. We observed significant variations in protein concentrations following assessment with the Lowry versus Bradford methods, using identical samples. Greater variations in protein concentration readings were observed over time and in samples with higher concentrations with the Bradford method. Identical samples quantified using both methods yielded significantly different expression patterns on western blot. We show for the first time that methodical variations observed in these protein assay techniques can potentially translate into differential protein expression patterns that can be falsely taken to be biologically significant. Our study therefore highlights the pivotal need to carefully consider methodical approaches to protein quantification in techniques that report quantitative differences.

  7. Capacitance variation measurement method with a continuously variable measuring range for a micro-capacitance sensor

    NASA Astrophysics Data System (ADS)

    Lü, Xiaozhou; Xie, Kai; Xue, Dongfeng; Zhang, Feng; Qi, Liang; Tao, Yebo; Li, Teng; Bao, Weimin; Wang, Songlin; Li, Xiaoping; Chen, Renjie

    2017-10-01

    Micro-capacitance sensors are widely applied in industrial applications for the measurement of mechanical variations. The measurement accuracy of micro-capacitance sensors is highly dependent on the capacitance measurement circuit. To overcome the inability of commonly used methods to directly measure capacitance variation and deal with the conflict between the measurement range and accuracy, this paper presents a capacitance variation measurement method which is able to measure the output capacitance variation (relative value) of the micro-capacitance sensor with a continuously variable measuring range. We present the principles and analyze the non-ideal factors affecting this method. To implement the method, we developed a capacitance variation measurement circuit and carried out experiments to test the circuit. The result shows that the circuit is able to measure a capacitance variation range of 0-700 pF linearly with a maximum relative accuracy of 0.05% and a capacitance range of 0-2 nF (with a baseline capacitance of 1 nF) with a constant resolution of 0.03%. The circuit is proposed as a new method to measure capacitance and is expected to have applications in micro-capacitance sensors for measuring capacitance variation with a continuously variable measuring range.

  8. Comparison of variational real-space representations of the kinetic energy operator

    NASA Astrophysics Data System (ADS)

    Skylaris, Chris-Kriton; Diéguez, Oswaldo; Haynes, Peter D.; Payne, Mike C.

    2002-08-01

    We present a comparison of real-space methods based on regular grids for electronic structure calculations that are designed to have basis set variational properties, using as a reference the conventional method of finite differences (a real-space method that is not variational) and the reciprocal-space plane-wave method which is fully variational. We find that a definition of the finite-difference method [P. Maragakis, J. Soler, and E. Kaxiras, Phys. Rev. B 64, 193101 (2001)] satisfies one of the two properties of variational behavior at the cost of larger errors than the conventional finite-difference method. On the other hand, a technique which represents functions in a number of plane waves which is independent of system size closely follows the plane-wave method and therefore also the criteria for variational behavior. Its application is only limited by the requirement of having functions strictly localized in regions of real space, but this is a characteristic of an increasing number of modern real-space methods, as they are designed to have a computational cost that scales linearly with system size.

  9. Variation block-based genomics method for crop plants.

    PubMed

    Kim, Yul Ho; Park, Hyang Mi; Hwang, Tae-Young; Lee, Seuk Ki; Choi, Man Soo; Jho, Sungwoong; Hwang, Seungwoo; Kim, Hak-Min; Lee, Dongwoo; Kim, Byoung-Chul; Hong, Chang Pyo; Cho, Yun Sung; Kim, Hyunmin; Jeong, Kwang Ho; Seo, Min Jung; Yun, Hong Tai; Kim, Sun Lim; Kwon, Young-Up; Kim, Wook Han; Chun, Hye Kyung; Lim, Sang Jong; Shin, Young-Ah; Choi, Ik-Young; Kim, Young Sun; Yoon, Ho-Sung; Lee, Suk-Ha; Lee, Sunghoon

    2014-06-15

    In contrast with wild species, cultivated crop genomes consist of reshuffled recombination blocks, which occurred by crossing and selection processes. Accordingly, recombination block-based genomics analysis can be an effective approach for the screening of target loci for agricultural traits. We propose the variation block method, which is a three-step process for recombination block detection and comparison. The first step is to detect variations by comparing the short-read DNA sequences of the cultivar to the reference genome of the target crop. Next, sequence blocks with variation patterns are examined and defined. The boundaries between the variation-containing sequence blocks are regarded as recombination sites. All the assumed recombination sites in the cultivar set are used to split the genomes, and the resulting sequence regions are termed variation blocks. Finally, the genomes are compared using the variation blocks. The variation block method identified recurring recombination blocks accurately and successfully represented block-level diversities in the publicly available genomes of 31 soybean and 23 rice accessions. The practicality of this approach was demonstrated by the identification of a putative locus determining soybean hilum color. We suggest that the variation block method is an efficient genomics method for the recombination block-level comparison of crop genomes. We expect that this method will facilitate the development of crop genomics by bringing genomics technologies to the field of crop breeding.
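
    The three-step block detection described above can be sketched as follows. This is a simplified illustration under the assumption that variant calls have already been made per cultivar and per position (the real method works on short-read alignments); the toy matrix is hypothetical.

```python
import numpy as np

def variation_blocks(patterns):
    """patterns: cultivars x positions matrix of variant calls
    (0 = matches reference, 1 = variant). A recombination site is assumed
    wherever any cultivar's variation pattern changes between adjacent
    positions; all such sites split the genome into variation blocks."""
    change = np.any(patterns[:, 1:] != patterns[:, :-1], axis=0)
    cuts = np.flatnonzero(change) + 1
    edges = [0, *cuts.tolist(), patterns.shape[1]]
    return [(edges[i], edges[i + 1]) for i in range(len(edges) - 1)]

# Two toy cultivars over 4 positions
calls = np.array([[0, 0, 1, 1],
                  [0, 0, 0, 1]])
print(variation_blocks(calls))   # [(0, 2), (2, 3), (3, 4)]
```

Downstream, genomes are compared block by block rather than variant by variant, which is what makes the approach efficient for reshuffled crop genomes.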

  10. Speckle reduction in optical coherence tomography by adaptive total variation method

    NASA Astrophysics Data System (ADS)

    Wu, Tong; Shi, Yaoyao; Liu, Youwen; He, Chongjun

    2015-12-01

    An adaptive total variation method based on the combination of speckle statistics and total variation restoration is proposed and developed for reducing speckle noise in optical coherence tomography (OCT) images. The statistical distribution of the speckle noise in OCT image is investigated and measured. With the measured parameters such as the mean value and variance of the speckle noise, the OCT image is restored by the adaptive total variation restoration method. The adaptive total variation restoration algorithm was applied to the OCT images of a volunteer's hand skin, which showed effective speckle noise reduction and image quality improvement. For image quality comparison, the commonly used median filtering method was also applied to the same images to reduce the speckle noise. The measured results demonstrate the superior performance of the adaptive total variation restoration method in terms of image signal-to-noise ratio, equivalent number of looks, contrast-to-noise ratio, and mean square error.
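
    A minimal numpy sketch of total variation restoration with a noise-adapted weight is given below. It uses plain gradient descent on the ROF energy with a smoothed TV term; the weight rule, iteration counts, and toy image are illustrative assumptions, not the paper's algorithm parameters.

```python
import numpy as np

def tv_denoise(img, lam, n_iter=200, eps=1e-6, tau=0.1):
    """Gradient descent on the ROF energy ||u - f||^2 / 2 + lam * TV(u),
    using a smoothed (differentiable) total variation term."""
    u = img.copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, axis=1) - u            # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= tau * ((u - img) - lam * div)         # descend the energy gradient
    return u

rng = np.random.default_rng(1)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
noisy = clean + 0.2 * rng.normal(size=clean.shape)
# "Adaptive" weight from a measured noise statistic, in the spirit of the paper:
lam = 2.0 * noisy[:4, :4].var()    # background patch as a crude noise estimate
denoised = tv_denoise(noisy, lam)
```

Estimating the speckle statistics first and deriving the regularization weight from them is what distinguishes the adaptive scheme from a fixed-weight TV restoration.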

  11. Variational Methods in Design Optimization and Sensitivity Analysis for Two-Dimensional Euler Equations

    NASA Technical Reports Server (NTRS)

    Ibrahim, A. H.; Tiwari, S. N.; Smith, R. E.

    1997-01-01

    Variational methods (VM) sensitivity analysis is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations together with the converged solution of the costate equations are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational methods yield a substantial gain in computational efficiency, i.e., computer time and memory, when compared with finite-difference sensitivity analysis.

  12. Variational iteration method — a promising technique for constructing equivalent integral equations of fractional order

    NASA Astrophysics Data System (ADS)

    Wang, Yi-Hong; Wu, Guo-Cheng; Baleanu, Dumitru

    2013-10-01

    The variational iteration method is applied in a new role: constructing equivalent integral equations of fractional order. Some iterative schemes are proposed which make full use of the method and the predictor-corrector approach. The fractional Bagley-Torvik equation is then treated as a multi-order example, and the results show the efficiency of the variational iteration method in this new role.
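    To convey the flavor of the variational iteration method in its classical (integer-order) role, here is a minimal sketch for the test problem u' = u, u(0) = 1, whose correction functional with Lagrange multiplier -1 reproduces the Taylor series of e^t term by term. The polynomial representation is an illustrative choice, not the paper's fractional-order construction:

```python
import math

# Polynomials as coefficient lists: p[k] is the coefficient of t**k.
def derive(p):
    return [c * k for k, c in enumerate(p)][1:] or [0.0]

def integrate(p):  # definite integral from 0 to t
    return [0.0] + [c / (k + 1) for k, c in enumerate(p)]

def sub(p, q):
    n = max(len(p), len(q))
    p, q = p + [0.0] * (n - len(p)), q + [0.0] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

# VIM correction functional for u' - u = 0 with Lagrange multiplier -1:
#   u_{n+1}(t) = u_n(t) - \int_0^t (u_n'(s) - u_n(s)) ds
u = [1.0]                 # u_0 = 1 (the initial condition)
for _ in range(8):
    u = sub(u, integrate(sub(derive(u), u)))
# u now holds the Taylor coefficients of e^t up to degree 8
```

    Each iteration adds one more correct series term, which is the sense in which the correction functional "corrects" the trial solution.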

  13. Using the Screened Coulomb Potential to Illustrate the Variational Method

    ERIC Educational Resources Information Center

    Zuniga, Jose; Bastida, Adolfo; Requena, Alberto

    2012-01-01

    The screened Coulomb potential, or Yukawa potential, is used to illustrate the application of the single and linear variational methods. The trial variational functions are expressed in terms of Slater-type functions, for which the integrals needed to carry out the variational calculations are easily evaluated in closed form. The variational…
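    The single-parameter variational calculation for the Yukawa potential is easy to reproduce numerically. Assuming atomic units and the standard trial function psi proportional to exp(-alpha*r), the closed-form energy expectation is E(alpha) = alpha^2/2 - 4*alpha^3/(2*alpha + lambda)^2, which reduces to the hydrogen ground state at lambda = 0. The grid minimization below is an illustrative sketch, not the article's own code:

```python
import numpy as np

def energy(alpha, lam):
    # <H> in atomic units for trial psi ~ exp(-alpha*r):
    # kinetic term alpha^2/2, Yukawa term -4*alpha^3/(2*alpha + lam)^2
    return 0.5 * alpha**2 - 4.0 * alpha**3 / (2.0 * alpha + lam) ** 2

def variational_min(lam):
    grid = np.linspace(0.01, 3.0, 10000)   # simple grid search over alpha
    E = energy(grid, lam)
    i = int(np.argmin(E))
    return float(grid[i]), float(E[i])

a0, E0 = variational_min(0.0)   # unscreened: hydrogen ground state
a1, E1 = variational_min(0.5)   # screened: binding is weaker
```

    The unscreened minimum recovers alpha = 1 and E = -0.5 hartree, and increasing the screening parameter raises the variational energy, illustrating why the Yukawa potential supports fewer bound states as screening grows.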

  14. Variational estimate method for solving autonomous ordinary differential equations

    NASA Astrophysics Data System (ADS)

    Mungkasi, Sudi

    2018-04-01

    In this paper, we propose a method for solving first-order autonomous ordinary differential equation problems using a variational estimate formulation. The variational estimate is constructed with a Lagrange multiplier which is chosen optimally, so that the formulation leads to an accurate solution to the problem. The variational estimate is an integral form, which can be computed using computer software. Because the variational estimate is an explicit formula, the solution is easy to compute, which is a great advantage of the formulation.

  15. Evaluating variation in human gut microbiota profiles due to DNA extraction method and inter-subject differences.

    PubMed

    Wagner Mackenzie, Brett; Waite, David W; Taylor, Michael W

    2015-01-01

    The human gut contains dense and diverse microbial communities which have profound influences on human health. Gaining meaningful insights into these communities requires provision of high quality microbial nucleic acids from human fecal samples, as well as an understanding of the sources of variation and their impacts on the experimental model. We present here a systematic analysis of commonly used microbial DNA extraction methods, and identify significant sources of variation. Five extraction methods (Human Microbiome Project protocol, MoBio PowerSoil DNA Isolation Kit, QIAamp DNA Stool Mini Kit, ZR Fecal DNA MiniPrep, phenol:chloroform-based DNA isolation) were evaluated based on the following criteria: DNA yield, quality and integrity, and microbial community structure based on Illumina amplicon sequencing of the V4 region of bacterial and archaeal 16S rRNA genes. Our results indicate that the largest portion of variation within the model was attributed to differences between subjects (biological variation), with a smaller proportion of variation associated with DNA extraction method (technical variation) and intra-subject variation. A comprehensive understanding of the potential impact of technical variation on the human gut microbiota will help limit preventable bias, enabling more accurate diversity estimates.

  16. The Effects of Predator Evolution and Genetic Variation on Predator-Prey Population-Level Dynamics.

    PubMed

    Cortez, Michael H; Patel, Swati

    2017-07-01

    This paper explores how predator evolution and the magnitude of predator genetic variation alter the population-level dynamics of predator-prey systems. We do this by analyzing a general eco-evolutionary predator-prey model using four methods: Method 1 identifies how eco-evolutionary feedbacks alter system stability in the fast and slow evolution limits; Method 2 identifies how the amount of standing predator genetic variation alters system stability; Method 3 identifies how the phase lags in predator-prey cycles depend on the amount of genetic variation; and Method 4 determines conditions for different cycle shapes in the fast and slow evolution limits using geometric singular perturbation theory. With these four methods, we identify the conditions under which predator evolution alters system stability and the shapes of predator-prey cycles, and how those effects depend on the amount of genetic variation in the predator population. We discuss the advantages and disadvantages of each method and the relations between the four methods. This work shows how the four methods can be used in tandem to make general predictions about eco-evolutionary dynamics and feedbacks.

  17. The existence results and Tikhonov regularization method for generalized mixed variational inequalities in Banach spaces

    NASA Astrophysics Data System (ADS)

    Wang, Min

    2017-06-01

    This paper aims to establish the Tikhonov regularization method for generalized mixed variational inequalities in Banach spaces. For this purpose, we first prove a very general existence result for generalized mixed variational inequalities, provided that the mapping involved has the so-called mixed variational inequality property and satisfies a rather weak coercivity condition. Finally, we establish the Tikhonov regularization method for generalized mixed variational inequalities. Our findings extend the results for the generalized variational inequality problem (for short, GVIP(F, K)) in R^n spaces (He in Abstr Appl Anal, 2012) to the generalized mixed variational inequality problem (for short, GMVIP(F, φ, K)) in reflexive Banach spaces. We also generalize the corresponding results for the generalized mixed variational inequality problem in R^n spaces (Fu and He in J Sichuan Norm Univ (Nat Sci) 37:12-17, 2014) to reflexive Banach spaces.
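    The abstract concerns Tikhonov regularization in an infinite-dimensional variational inequality setting; as a hedged finite-dimensional analogue, the same idea stabilizes an ill-conditioned least-squares problem by adding a small multiple of the identity. The Hilbert-matrix test problem and the epsilon value are illustrative assumptions:

```python
import numpy as np

def tikhonov(A, b, eps):
    """Regularized solution x_eps = argmin ||A x - b||^2 + eps * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + eps * np.eye(n), A.T @ b)

# Ill-conditioned Hilbert matrix as a test problem
n = 8
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
rng = np.random.default_rng(1)
b = A @ x_true + 1e-6 * rng.standard_normal(n)  # slightly noisy data

x_naive = np.linalg.solve(A, b)   # noise amplified by cond(A) ~ 1e10
x_reg = tikhonov(A, b, 1e-3)      # regularization suppresses the blow-up
```

    As eps tends to 0 along a suitable path, the regularized solutions converge to the minimum-norm solution, which is the finite-dimensional shadow of the convergence results the paper establishes for variational inequalities.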

  18. Introduction to the Special Issue on Advancing Methods for Analyzing Dialect Variation.

    PubMed

    Clopper, Cynthia G

    2017-07-01

    Documenting and analyzing dialect variation is traditionally the domain of dialectology and sociolinguistics. However, modern approaches to acoustic analysis of dialect variation have their roots in Peterson and Barney's [(1952). J. Acoust. Soc. Am. 24, 175-184] foundational work on the acoustic analysis of vowels that was published in the Journal of the Acoustical Society of America (JASA) over 6 decades ago. Although Peterson and Barney (1952) were not primarily concerned with dialect variation, their methods laid the groundwork for the acoustic methods that are still used by scholars today to analyze vowel variation within and across languages. In more recent decades, a number of methodological advances in the study of vowel variation have been published in JASA, including work on acoustic vowel overlap and vowel normalization. The goal of this special issue was to honor that tradition by bringing together a set of papers describing the application of emerging acoustic, articulatory, and computational methods to the analysis of dialect variation in vowels and beyond.

  19. Variational method based on Retinex with double-norm hybrid constraints for uneven illumination correction

    NASA Astrophysics Data System (ADS)

    Li, Shuo; Wang, Hui; Wang, Liyong; Yu, Xiangzhou; Yang, Le

    2018-01-01

    The uneven illumination phenomenon reduces the quality of remote sensing images and causes interference in subsequent processing and applications. A variational method based on Retinex with double-norm hybrid constraints for uneven illumination correction is proposed. The L1 norm and the L2 norm are adopted to constrain the textures and details of the reflectance image and the smoothness of the illumination image, respectively. The problem of separating the illumination image from the reflectance image is transformed into the optimal solution of the variational model. In order to accelerate the solution, the split Bregman method is used to decompose the variational model into three subproblems, which are calculated by alternate iteration. Two groups of experiments are implemented on two synthetic images and three real remote sensing images. Compared with the variational Retinex method with a single-norm constraint and the Mask method, the proposed method performs better in both visual evaluation and quantitative measurements. The proposed method effectively eliminates the uneven illumination while maintaining the textures and details of the remote sensing image. Moreover, the split Bregman solver is more than 10 times faster than the same model solved by the steepest descent method.

  20. Experimental studies of breaking of elastic tired wheel under variable normal load

    NASA Astrophysics Data System (ADS)

    Fedotov, A. I.; Zedgenizov, V. G.; Ovchinnikova, N. I.

    2017-10-01

    The paper analyzes the braking of a vehicle wheel subjected to disturbances of normal load variations. Experimental tests and methods for developing test modes as sinusoidal force disturbances of the normal wheel load were used. Measuring methods for digital and analogue signals were used as well. Stabilization of vehicle wheel braking subjected to disturbances of normal load variations is a topical issue. The paper suggests a method for analyzing wheel braking processes under disturbances of normal load variations. A method to control wheel baking processes subjected to disturbances of normal load variations was developed.

  1. Scaling up functional traits for ecosystem services with remote sensing: concepts and methods.

    PubMed

    Abelleira Martínez, Oscar J; Fremier, Alexander K; Günter, Sven; Ramos Bendaña, Zayra; Vierling, Lee; Galbraith, Sara M; Bosque-Pérez, Nilsa A; Ordoñez, Jenny C

    2016-07-01

    Ecosystem service-based management requires an accurate understanding of how human modification influences ecosystem processes and these relationships are most accurate when based on functional traits. Although trait variation is typically sampled at local scales, remote sensing methods can facilitate scaling up trait variation to regional scales needed for ecosystem service management. We review concepts and methods for scaling up plant and animal functional traits from local to regional spatial scales with the goal of assessing impacts of human modification on ecosystem processes and services. We focus our objectives on considerations and approaches for (1) conducting local plot-level sampling of trait variation and (2) scaling up trait variation to regional spatial scales using remotely sensed data. We show that sampling methods for scaling up traits need to account for the modification of trait variation due to land cover change and species introductions. Sampling intraspecific variation, stratification by land cover type or landscape context, or inference of traits from published sources may be necessary depending on the traits of interest. Passive and active remote sensing are useful for mapping plant phenological, chemical, and structural traits. Combining these methods can significantly improve their capacity for mapping plant trait variation. These methods can also be used to map landscape and vegetation structure in order to infer animal trait variation. Due to high context dependency, relationships between trait variation and remotely sensed data are not directly transferable across regions. We end our review with a brief synthesis of issues to consider and outlook for the development of these approaches. Research that relates typical functional trait metrics, such as the community-weighted mean, with remote sensing data and that relates variation in traits that cannot be remotely sensed to other proxies is needed. Our review narrows the gap between functional trait and remote sensing methods for ecosystem service management.
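    One trait metric named above, the community-weighted mean, is simple to compute once plot-level abundances and trait values are in hand; the sketch below uses hypothetical data:

```python
def community_weighted_mean(abundances, traits):
    """CWM = sum_i a_i * t_i / sum_i a_i  (abundance-weighted trait mean)."""
    total = sum(abundances)
    return sum(a * t for a, t in zip(abundances, traits)) / total

# Hypothetical plot: three species with relative abundances and a leaf trait
cwm = community_weighted_mean([2.0, 1.0, 1.0], [10.0, 20.0, 30.0])
```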

  2. Variational method for integrating radial gradient field

    NASA Astrophysics Data System (ADS)

    Legarda-Saenz, Ricardo; Brito-Loeza, Carlos; Rivera, Mariano; Espinosa-Romero, Arturo

    2014-12-01

    We propose a variational method for integrating information obtained from a circular fringe pattern. The proposed method is a suitable choice for objects with radial symmetry. First, we analyze the information contained in the fringe pattern captured by the experimental setup, and then formulate the problem of recovering the wavefront using techniques from the calculus of variations. The performance of the method is demonstrated by numerical experiments with both synthetic and real data.

  3. Application of the Mathar Method to Identify Internal Stress Variation in Steel as a Welding Process Result

    NASA Astrophysics Data System (ADS)

    Kowalski, Dariusz

    2017-06-01

    The paper deals with a method to identify internal stresses in two-dimensional steel members. Steel members were investigated in the delivery stage and after assembly by means of electric-arc welding. In order to perform the member assessment, two methods to identify the stress variation were applied. The first is a non-destructive measurement method employing a local external magnetic field and detecting the induced voltage, including Barkhausen noise. Analysis of the latter allows internal stresses in a surface layer of the material to be assessed. The second method, essential in the paper, is the semi-trepanation Mathar method of tensometric strain variation measurement in the course of controlled void-making in the material. The variation of the internal stress distribution in the material informed the choice of welding technology used to join the members. The assembly process altered the actual stresses and introduced new post-welding stresses in response to the excessive stress variation.

  4. Estimation and Partitioning of Heritability in Human Populations using Whole Genome Analysis Methods

    PubMed Central

    Vinkhuyzen, Anna AE; Wray, Naomi R; Yang, Jian; Goddard, Michael E; Visscher, Peter M

    2014-01-01

    Understanding genetic variation of complex traits in human populations has moved from the quantification of the resemblance between close relatives to the dissection of genetic variation into the contributions of individual genomic loci. But major questions remain unanswered: how much phenotypic variation is genetic, how much of the genetic variation is additive and what is the joint distribution of effect size and allele frequency at causal variants? We review and compare three whole-genome analysis methods that use mixed linear models (MLM) to estimate genetic variation, using the relationship between close or distant relatives based on pedigree or SNPs. We discuss theory, estimation procedures, bias and precision of each method and review recent advances in the dissection of additive genetic variation of complex traits in human populations that are based upon the application of MLM. Using genome wide data, SNPs account for far more of the genetic variation than the highly significant SNPs associated with a trait, but they do not account for all of the genetic variance estimated by pedigree based methods. We explain possible reasons for this ‘missing’ heritability. PMID:23988118

  5. Second-order variational equations for N-body simulations

    NASA Astrophysics Data System (ADS)

    Rein, Hanno; Tamayo, Daniel

    2016-07-01

    First-order variational equations are widely used in N-body simulations to study how nearby trajectories diverge from one another. These allow for efficient and reliable determinations of chaos indicators such as the Maximal Lyapunov characteristic Exponent (MLE) and the Mean Exponential Growth factor of Nearby Orbits (MEGNO). In this paper we lay out the theoretical framework to extend the idea of variational equations to higher order. We explicitly derive the differential equations that govern the evolution of second-order variations in the N-body problem. Going to second order opens the door to new applications, including optimization algorithms that require the first and second derivatives of the solution, like the classical Newton's method. Typically, these methods have faster convergence rates than derivative-free methods. Derivatives are also required for Riemann manifold Langevin and Hamiltonian Monte Carlo methods which provide significantly shorter correlation times than standard methods. Such improved optimization methods can be applied to anything from radial-velocity/transit-timing-variation fitting to spacecraft trajectory optimization to asteroid deflection. We provide an implementation of first- and second-order variational equations for the publicly available REBOUND integrator package. Our implementation allows the simultaneous integration of any number of first- and second-order variational equations with the high-accuracy IAS15 integrator. We also provide routines to generate consistent and accurate initial conditions without the need for finite differencing.
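    First-order variational equations of the kind the paper extends can be illustrated on a one-degree-of-freedom system. The sketch below integrates the pendulum x'' = -sin(x) together with its tangent dynamics and checks the result against a finite difference of two nearby trajectories; it is a toy analogue, not the REBOUND implementation:

```python
import math

# State s = (x, v, dx, dv): pendulum x'' = -sin(x) plus its first-order
# variational (tangent) equations d(dx, dv)/dt = J(x) @ (dx, dv),
# where J is the Jacobian of the flow, here [[0, 1], [-cos(x), 0]].
def rhs(s):
    x, v, dx, dv = s
    return [v, -math.sin(x), dv, -math.cos(x) * dx]

def rk4(s, dt, steps):
    for _ in range(steps):
        k1 = rhs(s)
        k2 = rhs([a + 0.5 * dt * k for a, k in zip(s, k1)])
        k3 = rhs([a + 0.5 * dt * k for a, k in zip(s, k2)])
        k4 = rhs([a + dt * k for a, k in zip(s, k3)])
        s = [a + dt / 6.0 * (p + 2 * q + 2 * r + t)
             for a, p, q, r, t in zip(s, k1, k2, k3, k4)]
    return s

dt, steps = 0.01, 500                            # integrate to T = 5
x0, h = 1.2, 1e-6
var = rk4([x0, 0.0, 1.0, 0.0], dt, steps)        # tangent started at (1, 0)
ref = rk4([x0, 0.0, 0.0, 0.0], dt, steps)        # reference trajectory
per = rk4([x0 + h, 0.0, 0.0, 0.0], dt, steps)    # nearby trajectory
fd = (per[0] - ref[0]) / h    # finite-difference estimate of dx(T)
```

    The agreement between the integrated tangent vector and the finite difference is the basic consistency check behind "no need for finite differencing" in the abstract: the variational route gives the same derivative without the cancellation error of subtracting nearby trajectories.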

  6. The Gibbs Variational Method in Thermodynamics of Equilibrium Plasma: 1. General Conditions of Equilibrium and Stability for One-Component Charged Gas

    DTIC Science & Technology

    2018-04-01

    US Army Research Laboratory report ARL-TR-8348 (April 2018): The Gibbs Variational Method in Thermodynamics of Equilibrium Plasma: 1. General Conditions of Equilibrium and Stability for One-Component Charged Gas. The available excerpt indicates that the Gibbs method is formulated in integral form, following the general Gibbs methodology based on the concept of heterogeneous systems containing ionized gases.

  7. The Schwinger Variational Method

    NASA Technical Reports Server (NTRS)

    Huo, Winifred M.

    1995-01-01

    Variational methods have proven invaluable in theoretical physics and chemistry, both for bound state problems and for the study of collision phenomena. For collisional problems they can be grouped into two types: those based on the Schroedinger equation and those based on the Lippmann-Schwinger equation. The application of the Schwinger variational (SV) method to e-molecule collisions and photoionization has been reviewed previously. The present chapter discusses the implementation of the SV method as applied to e-molecule collisions.

  8. Variation of Parameters in Differential Equations (A Variation in Making Sense of Variation of Parameters)

    ERIC Educational Resources Information Center

    Quinn, Terry; Rai, Sanjay

    2012-01-01

    The method of variation of parameters can be found in most undergraduate textbooks on differential equations. The method leads to solutions of the non-homogeneous equation of the form y = u_1 y_1 + u_2 y_2, a sum of function products using solutions to the homogeneous equation y_1 and…
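    For a concrete instance of the method, take y'' + y = g(x) with homogeneous solutions y_1 = cos(x) and y_2 = sin(x) (Wronskian 1); variation of parameters gives y_p = -y_1 ∫ y_2 g + y_2 ∫ y_1 g, which the sketch below evaluates by numerical quadrature. The choice g(t) = t is illustrative:

```python
import math

def particular_solution(g, x, n=2000):
    """y_p(x) = -cos(x) * I1 + sin(x) * I2, where (Wronskian W = 1)
    I1 = int_0^x sin(t) g(t) dt and I2 = int_0^x cos(t) g(t) dt,
    evaluated with the trapezoidal rule."""
    h = x / n
    ts = [k * h for k in range(n + 1)]
    w = [h / 2 if k in (0, n) else h for k in range(n + 1)]
    I1 = sum(wk * math.sin(t) * g(t) for wk, t in zip(w, ts))
    I2 = sum(wk * math.cos(t) * g(t) for wk, t in zip(w, ts))
    return -math.cos(x) * I1 + math.sin(x) * I2

# For g(t) = t the exact particular solution is y_p(x) = x - sin(x)
yp = particular_solution(lambda t: t, 2.0)
```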

  9. Total generalized variation-regularized variational model for single image dehazing

    NASA Astrophysics Data System (ADS)

    Shu, Qiao-Ling; Wu, Chuan-Sheng; Zhong, Qiu-Xiang; Liu, Ryan Wen

    2018-04-01

    Imaging quality is often significantly degraded under hazy weather condition. The purpose of this paper is to recover the latent sharp image from its hazy version. It is well known that the accurate estimation of depth information could assist in improving dehazing performance. In this paper, a detail-preserving variational model was proposed to simultaneously estimate haze-free image and depth map. In particular, the total variation (TV) and total generalized variation (TGV) regularizers were introduced to restrain haze-free image and depth map, respectively. The resulting nonsmooth optimization problem was efficiently solved using the alternating direction method of multipliers (ADMM). Comprehensive experiments have been conducted on realistic datasets to compare our proposed method with several state-of-the-art dehazing methods. Results have illustrated the superior performance of the proposed method in terms of visual quality evaluation.

  10. Variational formulation of high performance finite elements: Parametrized variational principles

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos A.; Militello, Carmello

    1991-01-01

    High performance elements are simple finite elements constructed to deliver engineering accuracy with coarse arbitrary grids. This is part of a series on the variational basis of high-performance elements, with emphasis on those constructed with the free formulation (FF) and assumed natural strain (ANS) methods. Parametrized variational principles that provide a foundation for the FF and ANS methods, as well as for a combination of both are presented.

  11. Characterisation of various grape seed oils by volatile compounds, triacylglycerol composition, total phenols and antioxidant capacity.

    PubMed

    Bail, Stefanie; Stuebiger, Gerald; Krist, Sabine; Unterweger, Heidrun; Buchbauer, Gerhard

    2008-06-01

    Grape seed oil (Oleum vitis viniferae), a promising plant fat mainly used for culinary and pharmaceutical purposes as well as for various technical applications, was the subject of the present investigation. HS-SPME-GC-MS was applied to study volatile compounds in several seed oil samples from different grape oils. The triacylglycerol (TAG) composition of these oils was analyzed by MALDI-TOF-MS/MS. In addition, the total phenol content and the antioxidant capacity (using TEAC) of these oils were determined. The headspace of virgin grape oils from white and red grapes was dominated by ethyl octanoate (up to 27.5% of the total level of volatiles), ethyl acetate (up to 25.0%), ethanol (up to 22.7%), acetic acid (up to 17.2%), ethyl hexanoate (up to 17.4%) and 3-methylbutanol (up to 11.0%). The triacylglycerol composition was found to be dominated by LLL (up to 41.8%), LLP (up to 24.3%), LLO (up to 16.3%) and LOO (up to 11.7%), followed by LOP (up to 9.3%) and LOS/OOO (up to 4.3%). Total phenol content ranged between 59 μg/g and 115.5 μg/g GAE. Antioxidant capacity (TEAC) was found to range between 0.09 μg/g and 1.16 μg/g. Copyright © 2007 Elsevier Ltd. All rights reserved.

  12. Office of Inspector General report on Naval Petroleum Reserve Number 1, independent accountant's report on applying agreed-upon procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-12-01

    On October 6, 1997, the Department of Energy (DOE) announced it had agreed to sell all of the Government's interest in Naval Petroleum Reserve Number 1 (NPR-1) to Occidental Petroleum Corporation for $3.65 billion. This report presents the results of the independent certified public accountants' agreed-upon procedures work on the Preliminary Settlement Statement of the Purchase and Sale Agreement between DOE and Occidental. To fulfill their responsibilities, the Office of Inspector General contracted with the independent public accounting firm of KPMG Peat Marwick LLP to conduct the work for them, subject to their review. The work was done in accordance with the Statements on Standards for Attestation Engagements issued by the American Institute of Certified Public Accountants. As such, the independent certified public accountants performed only work that was agreed upon by DOE and Occidental. This report is intended solely for the use of DOE and Occidental and should not be used by those who have not agreed to the procedures and taken responsibility for the sufficiency of the procedures for their purposes. However, this report is a matter of public record, and its distribution is not limited. The independent certified public accountants identified over 20 adjustments to the Preliminary Settlement Statement that would result in a $10.8 million increase in the sale price.

  13. Couple of the Variational Iteration Method and Fractional-Order Legendre Functions Method for Fractional Differential Equations

    PubMed Central

    Song, Junqiang; Leng, Hongze; Lu, Fengshun

    2014-01-01

    We present a new numerical method to get the approximate solutions of fractional differential equations. A new operational matrix of integration for fractional-order Legendre functions (FLFs) is first derived. Then a modified variational iteration formula which can avoid “noise terms” is constructed. Finally a numerical method based on variational iteration method (VIM) and FLFs is developed for fractional differential equations (FDEs). Block-pulse functions (BPFs) are used to calculate the FLFs coefficient matrices of the nonlinear terms. Five examples are discussed to demonstrate the validity and applicability of the technique. PMID:24511303

  14. Iterative Nonlocal Total Variation Regularization Method for Image Restoration

    PubMed Central

    Xu, Huanyu; Sun, Quansen; Luo, Nan; Cao, Guo; Xia, Deshen

    2013-01-01

    In this paper, a Bregman iteration based total variation image restoration algorithm is proposed. Based on the Bregman iteration, the algorithm splits the original total variation problem into sub-problems that are easy to solve. Moreover, non-local regularization is introduced into the proposed algorithm, and a method to choose the non-local filter parameter locally and adaptively is proposed. Experiment results show that the proposed algorithms outperform some other regularization methods. PMID:23776560

  15. Accurate sparse-projection image reconstruction via nonlocal TV regularization.

    PubMed

    Zhang, Yi; Zhang, Weihua; Zhou, Jiliu

    2014-01-01

    Sparse-projection image reconstruction is a useful approach to lower the radiation dose; however, the incompleteness of projection data will cause degeneration of imaging quality. As a typical compressive sensing method, total variation has attracted great attention on this problem. Owing to a theoretical imperfection, however, total variation produces a blocky effect on smooth regions and blurs edges. To overcome this problem, in this paper, we introduce the nonlocal total variation into sparse-projection image reconstruction and formulate the minimization problem with a new nonlocal total variation norm. The qualitative and quantitative analyses of numerical as well as clinical results demonstrate the validity of the proposed method. Compared to other existing methods, our method more efficiently suppresses artifacts caused by low-rank reconstruction and preserves structure information better.

  16. Application of Variational Methods to the Thermal Entrance Region of Ducts

    NASA Technical Reports Server (NTRS)

    Sparrow, E. M.; Siegel, R.

    1960-01-01

    A variational method is presented for solving eigenvalue problems which arise in connection with the analysis of convective heat transfer in the thermal entrance region of ducts. Consideration is given to both situations where the temperature profile depends upon one cross-sectional coordinate (e.g., circular tube) and where it depends upon two cross-sectional coordinates (e.g., rectangular duct). The variational method is illustrated and verified by application to laminar heat transfer in a circular tube and a parallel-plate channel, and good agreement with existing numerical solutions is attained. Then, application is made to laminar heat transfer in a square duct; as a check, an alternate computation for the square duct is made using a method indicated by Millsaps and Pohlhausen. The variational method can, in principle, also be applied to problems in turbulent heat transfer.

  17. Study of weak solutions for parabolic variational inequalities with nonstandard growth conditions.

    PubMed

    Dong, Yan

    2018-01-01

    In this paper, we study the degenerate parabolic variational inequality problem in a bounded domain. First, the weak solutions of the variational inequality are defined. Second, the existence and uniqueness of the solutions in the weak sense are proved by using the penalty method and the reduction method.

  18. Thickness-Independent Ultrasonic Imaging Applied to Abrasive Cut-Off Wheels: An Advanced Aerospace Materials Characterization Method for the Abrasives Industry. A NASA Lewis Research Center Technology Transfer Case History

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Farmer, Donald A.

    1998-01-01

    Abrasive cut-off wheels are at times unintentionally manufactured with nonuniformity that is difficult to identify and sufficiently characterize without time-consuming, destructive examination. One particular nonuniformity is a density variation condition occurring around the wheel circumference or along the radius, or both. This density variation, depending on its severity, can cause wheel warpage and wheel vibration resulting in unacceptable performance and perhaps premature failure of the wheel. Conventional nondestructive evaluation methods such as ultrasonic c-scan imaging and film radiography are inaccurate in their attempts at characterizing the density variation because a superimposing thickness variation exists as well in the wheel. In this article, the single transducer thickness-independent ultrasonic imaging method, developed specifically to allow more accurate characterization of aerospace components, is shown to precisely characterize the extent of the density variation in a cut-off wheel having a superimposing thickness variation. The method thereby has potential as an effective quality control tool in the abrasives industry for the wheel manufacturer.

  19. A novel iterative scheme and its application to differential equations.

    PubMed

    Khan, Yasir; Naeem, F; Šmarda, Zdeněk

    2014-01-01

    The purpose of this paper is to employ an alternative approach to reconstruct the standard variational iteration algorithm II proposed by He, including the Lagrange multiplier, and to give a simpler formulation of the Adomian decomposition and modified Adomian decomposition methods in terms of the newly proposed variational iteration method-II (VIM). Through careful investigation of the earlier variational iteration algorithm and the Adomian decomposition method, we find unnecessary calculations of the Lagrange multiplier and repeated calculations involved in each iteration, respectively. Several examples are given to verify the reliability and efficiency of the method.

  20. A Comparison of Cut Scores Using Multiple Standard Setting Methods.

    ERIC Educational Resources Information Center

    Impara, James C.; Plake, Barbara S.

    This paper reports the results of using several alternative methods of setting cut scores. The methods used were: (1) a variation of the Angoff method (1971); (2) a variation of the borderline group method; and (3) an advanced impact method (G. Dillon, 1996). The results discussed are from studies undertaken to set the cut scores for fourth grade…
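    The Angoff family of methods referenced above reduces, in its basic form, to summing the judges' mean item probabilities; the sketch below is a generic illustration with hypothetical ratings, not the specific variants used in the studies:

```python
def angoff_cut_score(ratings):
    """ratings[j][i]: judge j's estimated probability that a minimally
    competent examinee answers item i correctly. The cut score is the
    sum over items of the judges' mean probability."""
    n_judges, n_items = len(ratings), len(ratings[0])
    item_means = [sum(r[i] for r in ratings) / n_judges
                  for i in range(n_items)]
    return sum(item_means)

# Two hypothetical judges rating a two-item test
cut = angoff_cut_score([[0.8, 0.6], [0.6, 0.4]])
```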

  1. Solution of the Time-Dependent Schrödinger Equation by the Laplace Transform Method

    PubMed Central

    Lin, S. H.; Eyring, H.

    1971-01-01

    The time-dependent Schrödinger equation for two quite general types of perturbation has been solved by introducing the Laplace transforms to eliminate the time variable. The resulting time-independent differential equation can then be solved by the perturbation method, the variation method, the variation-perturbation method, and other methods. PMID:16591898

  2. Comparison of four extraction/methylation analytical methods to measure fatty acid composition by gas chromatography in meat.

    PubMed

    Juárez, M; Polvillo, O; Contò, M; Ficco, A; Ballico, S; Failla, S

    2008-05-09

    Four different extraction-derivatization methods commonly used for fatty acid analysis in meat (in situ or one-step method, saponification method, classic method and a combination of classic extraction and saponification derivatization) were tested. The in situ method had low recovery and variation. The saponification method showed the best balance between recovery, precision, repeatability and reproducibility. The classic method had high recovery and acceptable variation values, except for the polyunsaturated fatty acids, showing higher variation than the former methods. The combination of extraction and methylation steps had great recovery values, but the precision, repeatability and reproducibility were not acceptable. Therefore the saponification method would be more convenient for polyunsaturated fatty acid analysis, whereas the in situ method would be an alternative for fast analysis. However the classic method would be the method of choice for the determination of the different lipid classes.

  3. An Improved Variational Method for Hyperspectral Image Pansharpening with the Constraint of Spectral Difference Minimization

    NASA Astrophysics Data System (ADS)

    Huang, Z.; Chen, Q.; Shen, Y.; Chen, Q.; Liu, X.

    2017-09-01

Variational pansharpening can enhance the spatial resolution of a hyperspectral (HS) image using a high-resolution panchromatic (PAN) image. However, this technology may introduce spectral distortion that markedly affects the accuracy of data analysis. In this article, we propose an improved variational method for HS image pansharpening with the constraint of spectral difference minimization. We extend the energy function of the classic variational pansharpening method by adding a new spectral fidelity term. This fidelity term is designed following the definition of the spectral angle mapper, which means that for every pixel, the spectral difference value of any two bands in the HS image is in equal proportion to that of the two corresponding bands in the pansharpened image. The gradient descent method is adopted to find the optimal solution of the modified energy function, from which the pansharpened image is reconstructed. Experimental results demonstrate that the constraint of spectral difference minimization preserves the original spectral information of HS images well and reduces spectral distortion effectively. Compared to the original variational method, our method performs better in both visual and quantitative evaluations and achieves a good trade-off between spatial and spectral information.
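The optimization scheme described above can be sketched in a toy form: minimize, by gradient descent, an energy with a PAN data term plus a spectral-fidelity term that keeps band-to-band differences of the output close to those of the HS input. Everything here (array shapes, weights, the scalar data term) is an illustrative assumption, not the paper's actual energy:

```python
import numpy as np

# Toy gradient-descent sketch of a variational pansharpening energy:
# a data term pulls the band average of the sharpened image u toward
# the PAN image, while a spectral-fidelity term keeps the differences
# between adjacent bands of u close to those of the original HS image.
rng = np.random.default_rng(0)
n_pix, n_band = 100, 5
hs = rng.random((n_pix, n_band))                      # upsampled HS image
pan = hs.mean(axis=1) + 0.05 * rng.standard_normal(n_pix)

lam, step = 1.0, 0.05                                 # assumed weights
u = hs.copy()
for _ in range(500):
    grad = np.zeros_like(u)
    # data term: (mean_j u_ij - pan_i)^2
    grad += 2.0 * (u.mean(axis=1) - pan)[:, None] / n_band
    # spectral-fidelity term over adjacent band pairs
    d = (u[:, :-1] - u[:, 1:]) - (hs[:, :-1] - hs[:, 1:])
    grad[:, :-1] += 2.0 * lam * d
    grad[:, 1:] -= 2.0 * lam * d
    u -= step * grad

# After convergence the band average tracks PAN while band-to-band
# differences stay close to those of the HS input.
err_pan = np.abs(u.mean(axis=1) - pan).max()
err_spec = np.abs((u[:, :-1] - u[:, 1:]) - (hs[:, :-1] - hs[:, 1:])).max()
print(err_pan, err_spec)
```

Because the data gradient is constant across bands for each pixel, the update preserves the band differences exactly while driving the band average toward PAN, which is the intended trade-off in miniature.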

  4. [A proposal for a new definition of excess mortality associated with influenza-epidemics and its estimation].

    PubMed

    Takahashi, M; Tango, T

    2001-05-01

As methods for estimating excess mortality associated with influenza epidemics, Serfling's cyclical regression model and the Kawai-Fukutomi model with seasonal indices have been proposed. Excess mortality under the old definition (i.e., the number of deaths actually recorded in excess of the number expected on the basis of past seasonal experience) includes the portion of variation attributable to chance, and it disregards the seasonal range of random variation in mortality. In this paper, we propose a new definition of excess mortality associated with influenza epidemics and a new estimation method that address these issues within the Kawai-Fukutomi framework. Factors producing variation in mortality in months with influenza epidemics may be divided into two groups: (1) influenza itself and (2) other factors (in practice, random variation). The range of variation due to the latter (the normal range) can be estimated from months without influenza epidemics, and excess mortality is defined as deaths exceeding that normal range. Because the method accounts for the variation in mortality in months free of influenza epidemics, it provides reasonable estimates of excess mortality by separating out the random component. Moreover, the proposed estimate can serve as a criterion for a test of statistical significance.
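The definition can be illustrated with a minimal sketch: estimate the normal range from non-epidemic months and count only deaths above it as excess. All numbers are invented, and the actual method is built on the Kawai-Fukutomi seasonal model rather than the plain normal interval used here:

```python
import numpy as np

# Minimal sketch: the normal range of monthly mortality is estimated
# from months without influenza epidemics; only deaths above that
# range count as excess, separating out the random component.
rng = np.random.default_rng(1)
baseline = 1000 + 20 * rng.standard_normal(60)   # non-epidemic months
epidemic = np.array([1035, 1100, 1180, 1010])    # epidemic months

mu, sigma = baseline.mean(), baseline.std(ddof=1)
upper = mu + 1.96 * sigma                        # normal-range upper limit
excess = np.clip(epidemic - upper, 0, None).sum()
significant = epidemic > upper                   # doubles as a significance criterion
print(round(upper, 1), round(excess, 1), significant)
```

The `significant` flags show how the same threshold serves as the statistical-significance criterion mentioned in the abstract.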

  5. Parameterization of aquatic ecosystem functioning and its natural variation: Hierarchical Bayesian modelling of plankton food web dynamics

    NASA Astrophysics Data System (ADS)

    Norros, Veera; Laine, Marko; Lignell, Risto; Thingstad, Frede

    2017-10-01

    Methods for extracting empirically and theoretically sound parameter values are urgently needed in aquatic ecosystem modelling to describe key flows and their variation in the system. Here, we compare three Bayesian formulations for mechanistic model parameterization that differ in their assumptions about the variation in parameter values between various datasets: 1) global analysis - no variation, 2) separate analysis - independent variation and 3) hierarchical analysis - variation arising from a shared distribution defined by hyperparameters. We tested these methods, using computer-generated and empirical data, coupled with simplified and reasonably realistic plankton food web models, respectively. While all methods were adequate, the simulated example demonstrated that a well-designed hierarchical analysis can result in the most accurate and precise parameter estimates and predictions, due to its ability to combine information across datasets. However, our results also highlighted sensitivity to hyperparameter prior distributions as an important caveat of hierarchical analysis. In the more complex empirical example, hierarchical analysis was able to combine precise identification of parameter values with reasonably good predictive performance, although the ranking of the methods was less straightforward. We conclude that hierarchical Bayesian analysis is a promising tool for identifying key ecosystem-functioning parameters and their variation from empirical datasets.
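The three formulations can be contrasted in the simplest possible setting, estimating one parameter per dataset with a closed-form normal shrinkage rule. The hyperparameter `tau2` is fixed by hand here for illustration, whereas a full hierarchical analysis would infer it from the data:

```python
import numpy as np

# 1) global analysis: one pooled estimate (no between-dataset variation)
# 2) separate analysis: each dataset's own mean (independent variation)
# 3) hierarchical analysis: partial pooling, shrinking each mean toward
#    the grand mean by an amount set by the between-dataset spread tau2.
rng = np.random.default_rng(2)
true_means = np.array([1.0, 1.2, 0.8, 1.1])        # per-dataset parameters
data = [m + 0.3 * rng.standard_normal(20) for m in true_means]

ybar = np.array([d.mean() for d in data])
se2 = np.array([d.var(ddof=1) / len(d) for d in data])

global_est = np.average(ybar, weights=1 / se2)     # 1) no variation
separate_est = ybar                                # 2) independent variation
tau2 = 0.04                                        # 3) assumed hyperparameter
w = tau2 / (tau2 + se2)
hier_est = w * ybar + (1 - w) * global_est         # partial pooling
print(global_est.round(3), separate_est.round(3), hier_est.round(3))
```

Each hierarchical estimate lies between the pooled and the separate estimate, which is the information-combining behavior the abstract credits for the method's accuracy, and also why results are sensitive to the prior on the hyperparameter.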

  6. Identification of Vibrotactile Patterns Encoding Obstacle Distance Information.

    PubMed

    Kim, Yeongmi; Harders, Matthias; Gassert, Roger

    2015-01-01

Delivering distance information about nearby obstacles from sensors embedded in a white cane, in addition to the intrinsic mechanical feedback from the cane, can aid the visually impaired in ambulating independently. Haptics is a common modality for conveying such information to cane users, typically in the form of vibrotactile signals. In this context, we investigated the effect of tactile rendering methods, tactile feedback configurations and directions of tactile flow on the identification of obstacle distance. Three tactile rendering methods, with temporal variation only, spatio-temporal variation, and spatial/temporal/intensity variation, were investigated for two vibration feedback configurations. Results showed a significant interaction between tactile rendering method and feedback configuration. Spatio-temporal variation generally resulted in high correct identification rates for both feedback configurations. In the case of the four-finger vibration, tactile rendering with spatial/temporal/intensity variation also resulted in a high distance identification rate. Further, participants expressed a preference for the four-finger vibration over the single-finger vibration in a survey. Both preferred rendering methods, with spatio-temporal variation and spatial/temporal/intensity variation for the four-finger vibration, could convey obstacle distance information with low workload. Overall, the presented findings provide valuable insights and guidance for the design of haptic displays for electronic travel aids for the visually impaired.

  7. Survey Shows Variation in Ph.D. Methods Training.

    ERIC Educational Resources Information Center

    Steeves, Leslie; And Others

    1983-01-01

    Reports on a 1982 survey of journalism graduate studies indicating considerable variation in research methods requirements and emphases in 23 universities offering doctoral degrees in mass communication. (HOD)

  8. Plate equations for piezoelectrically actuated flexural mode ultrasound transducers.

    PubMed

    Perçin, Gökhan

    2003-01-01

This paper considers variational methods to derive two-dimensional plate equations for piezoelectrically actuated flexural mode ultrasound transducers. In the absence of analytical expressions for the equivalent circuit parameters of a flexural mode transducer, it is difficult to calculate its optimal parameters and dimensions and to choose suitable materials. The influence of the coupling between flexural and extensional deformation, and of the coupling between the structure and the acoustic volume, on the dynamic response of a piezoelectrically actuated flexural mode transducer is analyzed using variational methods. Variational methods are applied to derive two-dimensional plate equations for the transducer and to calculate the coupled electromechanical field variables. In these methods, the variations across the thickness direction are eliminated by using stress resultants. Thus, two-dimensional plate equations for a stepwise laminated circular plate are obtained.

  9. Variational finite-difference methods in linear and nonlinear problems of the deformation of metallic and composite shells (review)

    NASA Astrophysics Data System (ADS)

    Maksimyuk, V. A.; Storozhuk, E. A.; Chernyshenko, I. S.

    2012-11-01

Variational finite-difference methods of solving linear and nonlinear problems for thin and nonthin shells (plates) made of homogeneous isotropic (metallic) and orthotropic (composite) materials are analyzed, and their classification principles and structure are discussed. Scalar and vector variational finite-difference methods that implement the Kirchhoff-Love hypotheses analytically or algorithmically using Lagrange multipliers are outlined. The Timoshenko hypotheses are implemented in a traditional way, i.e., analytically. The stress-strain state of metallic and composite shells of complex geometry is analyzed numerically. The numerical results are presented in the form of graphs and tables and used to assess the efficiency of using the variational finite-difference methods to solve linear and nonlinear problems of the statics of shells (plates).

  10. A method for the fast estimation of a battery entropy-variation high-resolution curve - Application on a commercial LiFePO4/graphite cell

    NASA Astrophysics Data System (ADS)

    Damay, Nicolas; Forgez, Christophe; Bichat, Marie-Pierre; Friedrich, Guy

    2016-11-01

The entropy-variation of a battery is responsible for heat generation or consumption during operation, and its prior measurement is required for developing a thermal model. It is generally done through the potentiometric method, which is considered a reference. However, this method requires several days or weeks to produce a look-up table with a 5 or 10% SoC (state of charge) resolution. In this study, a calorimetric method based on the inversion of a thermal model is proposed for the fast estimation of a nearly continuous curve of entropy-variation. This is achieved by separating the heats produced while charging and discharging the battery; the entropy-variation is then deduced from the extracted entropic heat. The proposed method is validated by comparing the results obtained at several current rates with measurements made with the potentiometric method.
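The heat-separation idea can be sketched as follows: irreversible heat keeps its sign under current reversal, while entropic heat flips sign, so half the difference between discharge and charge heat isolates the entropic part. The sign convention and all numbers below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Charge/discharge heat separation: q_irr is identical for both current
# directions, the entropic term I*T*dE/dT flips sign, so half the
# difference of the two heat curves isolates it and yields dE/dT.
T = 298.0            # cell temperature, K
I = 2.0              # current magnitude, A
soc = np.linspace(0.05, 0.95, 19)
dEdT_true = 1e-4 * np.sin(4 * np.pi * soc)        # V/K, invented curve
q_irr = 0.5 + 0.1 * soc                           # W, same sign both ways
q_charge = q_irr - I * T * dEdT_true
q_discharge = q_irr + I * T * dEdT_true

q_entropic = 0.5 * (q_discharge - q_charge)       # isolates I*T*dE/dT
dEdT_est = q_entropic / (I * T)
print(np.abs(dEdT_est - dEdT_true).max())
```

In this idealized setting the separation is exact; in practice the heats are extracted by inverting the thermal model, which is what the paper contributes.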

  11. Quantification of the Barkhausen noise method for the evaluation of time-dependent degradation

    NASA Astrophysics Data System (ADS)

    Kim, Dong-Won; Kwon, Dongil

    2003-02-01

The Barkhausen noise (BN) method has long been applied to measure the bulk magnetic properties of magnetic materials. Recently, this important nondestructive testing (NDT) method has been applied to evaluate microstructure, stress distribution, fatigue, creep and fracture characteristics. Until now, however, the BN method has been used only qualitatively, relating the variation of BN to variations in material properties, which has limited its application in industrial plants and laboratories. The present investigation studied the coercive force and BN while varying the microstructure of ultrafine-grained steels and SA508 cl.3 steels. This variation was carried out according to the second heat-treatment condition with rolling of the ultrafine-grained steels and the simulated time-dependent degradation of the SA508 cl.3 steels. An attempt was also made to quantify BN from the relationship between the velocity of magnetic domain walls and the retarding force, using the coercive force of the domain wall movement. The microstructure variation was analyzed according to time-dependent degradation. Fracture toughness was evaluated quantitatively by measuring the BN from two intermediary parameters: grain size and the distribution of nonmagnetic particles. From these measurements, the variation of microstructure and fracture toughness can be directly evaluated by the BN method as an accurate in situ NDT method.

  12. Augmented classical least squares multivariate spectral analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2004-02-03

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.
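A minimal numerical sketch of the ACLS idea, with synthetic data and a single residual shape (the patented methods are far more general): fit the CLS model A ≈ CK, then augment the estimated pure-component spectra with the dominant spectral shape of the calibration residuals so that an unmodeled source of variation no longer biases prediction.

```python
import numpy as np

# Toy ACLS sketch: an interferent absent from the concentration matrix C
# contaminates the spectra A. Plain CLS prediction is biased; augmenting
# the estimated pure spectra with the top residual shape removes the bias.
rng = np.random.default_rng(3)
n, m, p = 30, 2, 50                    # samples, components, channels
C = rng.random((n, m))                 # known concentrations
K_true = rng.random((m, p))            # pure-component spectra
interferent = rng.random(p)            # unmodeled spectral source
A = C @ K_true + rng.random(n)[:, None] * 0.2 * interferent

K_hat = np.linalg.pinv(C) @ A          # CLS calibration
E = A - C @ K_hat                      # spectral residuals
_, _, Vt = np.linalg.svd(E, full_matrices=False)
K_aug = np.vstack([K_hat, Vt[:1]])     # augment with top residual shape

a_new = np.array([0.3, 0.7]) @ K_true + 0.15 * interferent
c_cls = a_new @ np.linalg.pinv(K_hat)          # biased by interferent
c_acls = (a_new @ np.linalg.pinv(K_aug))[:m]   # augmented prediction
print(c_cls.round(3), c_acls.round(3))
```

Here the residual matrix is rank one by construction, so a single augmenting shape suffices; real calibrations would keep several residual components and could also apply the PACLS update at prediction time.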

  13. Augmented Classical Least Squares Multivariate Spectral Analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2005-07-26

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  14. Augmented Classical Least Squares Multivariate Spectral Analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2005-01-11

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  15. Effect of culture methods on individual variation in the growth of sea cucumber Apostichopus japonicus within a cohort and family

    NASA Astrophysics Data System (ADS)

    Qiu, Tianlong; Zhang, Libin; Zhang, Tao; Bai, Yucen; Yang, Hongsheng

    2014-07-01

    There is substantial individual variation in the growth rates of sea cucumber Apostichopus japonicus individuals. This necessitates additional work to grade the seed stock and lengthens the production period. We evaluated the influence of three culture methods (free-mixed, isolated-mixed, isolated-alone) on individual variation in growth and assessed the relationship between feeding, energy conversion efficiency, and individual growth variation in individually cultured sea cucumbers. Of the different culture methods, animals grew best when reared in the isolated-mixed treatment (i.e., size classes were held separately), though there was no difference in individual variation in growth between rearing treatment groups. The individual variation in growth was primarily attributed to genetic factors. The difference in food conversion efficiency caused by genetic differences among individuals was thought to be the origin of the variance. The level of individual growth variation may be altered by interactions among individuals and environmental heterogeneity. Our results suggest that, in addition to traditional seed grading, design of a new kind of substrate that changes the spatial distribution of sea cucumbers would effectively enhance growth and reduce individual variation in growth of sea cucumbers in culture.

  16. Statistics, Uncertainty, and Transmitted Variation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendelberger, Joanne Roth

    2014-11-05

    The field of Statistics provides methods for modeling and understanding data and making decisions in the presence of uncertainty. When examining response functions, variation present in the input variables will be transmitted via the response function to the output variables. This phenomenon can potentially have significant impacts on the uncertainty associated with results from subsequent analysis. This presentation will examine the concept of transmitted variation, its impact on designed experiments, and a method for identifying and estimating sources of transmitted variation in certain settings.
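A minimal illustration of transmitted variation, comparing the first-order (delta-method) approximation Var[f(X)] ≈ f'(μ)² Var[X] against a Monte Carlo estimate; the response function and numbers are invented:

```python
import numpy as np

# Input variation propagates through a response function f to the
# output. For small input variation, the transmitted variance is well
# approximated by the first-order (delta-method) formula.
rng = np.random.default_rng(4)
mu, sigma = 2.0, 0.05
f = np.exp                      # response function
fprime = np.exp                 # its derivative

x = rng.normal(mu, sigma, 200_000)
mc_var = f(x).var()                       # Monte Carlo estimate
delta_var = fprime(mu) ** 2 * sigma ** 2  # transmitted variance, 1st order
print(mc_var, delta_var)
```

With larger input variation or a strongly curved response, the two diverge, which is exactly when identifying and estimating the transmitted component matters for designed experiments.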

  17. Reconstruction of fluorophore concentration variation in dynamic fluorescence molecular tomography.

    PubMed

    Zhang, Xuanxuan; Liu, Fei; Zuo, Simin; Shi, Junwei; Zhang, Guanglei; Bai, Jing; Luo, Jianwen

    2015-01-01

Dynamic fluorescence molecular tomography (DFMT) is a potential approach for drug delivery, tumor detection, diagnosis, and staging. The purpose of DFMT is to quantify the changes of fluorescent agents in the body, which offer important information about the underlying physiological processes. However, the conventional method requires that the fluorophore concentrations being reconstructed remain stationary during the data collection period; thus, it cannot capture the variation of fluorophore concentration within that period. In this paper, a method is proposed to reconstruct the fluorophore concentration variation, rather than the fluorophore concentration itself, through a linear approximation. The linear approximation introduces the fluorophore concentration variation rate as a new unknown term to be reconstructed, which is then used to obtain the time courses of fluorophore concentration. Simulation and phantom studies are performed to validate the proposed method. The results show that the method is able to reconstruct the fluorophore concentration variation rates and the time courses of fluorophore concentration with relative errors of less than 0.0218.
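The linear approximation can be illustrated in a toy scalar setting: model the concentration within a collection window as c(t) = c0 + r·t and recover the rate r as an extra least-squares unknown. The real method solves a tomographic inverse problem; the scalar sensitivity `a` below is purely an assumption:

```python
import numpy as np

# Toy version of the linear approximation: within one data-collection
# window, c(t) = c0 + r*t, so the variation rate r becomes an extra
# unknown recovered jointly with c0 by least squares.
t = np.linspace(0.0, 1.0, 11)            # measurement times in the window
a = 2.5                                  # assumed scalar sensitivity
c0_true, r_true = 1.0, 0.4
rng = np.random.default_rng(5)
y = a * (c0_true + r_true * t) + 0.01 * rng.standard_normal(t.size)

G = np.column_stack([a * np.ones_like(t), a * t])    # unknowns [c0, r]
c0_est, r_est = np.linalg.lstsq(G, y, rcond=None)[0]
c_t = c0_est + r_est * t                 # time course of concentration
print(round(c0_est, 3), round(r_est, 3))
```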

  18. Pre-analytical and analytical variation of drug determination in segmented hair using ultra-performance liquid chromatography-tandem mass spectrometry.

    PubMed

    Nielsen, Marie Katrine Klose; Johansen, Sys Stybe; Linnet, Kristian

    2014-01-01

Assessment of the total uncertainty of analytical methods for the measurement of drugs in human hair has mainly been derived from the analytical variation. However, in hair analysis several other sources of uncertainty contribute to the total uncertainty. Particularly in segmental hair analysis, pre-analytical variations associated with the sampling and segmentation may be significant factors in the assessment of the total uncertainty budget. The aim of this study was to develop and validate a method for the analysis of 31 common drugs in hair using ultra-performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS), with a focus on the assessment of both the analytical and pre-analytical sampling variations. The validated method was specific, accurate (80-120%), and precise (CV≤20%) across a wide linear range of 0.025-25 ng/mg for most compounds. The analytical variation was estimated to be less than 15% for almost all compounds. The method was successfully applied to 25 segmented hair specimens from deceased drug addicts showing a broad pattern of poly-drug use. The pre-analytical sampling variation was estimated from genuine duplicate measurements of two bundles of hair collected from each subject, after subtraction of the analytical component. For the most frequently detected analytes, the pre-analytical variation was estimated at 26-69%. Thus, the pre-analytical variation was 3-7-fold larger than the analytical variation (7-13%) and hence the dominant component of the total variation (29-70%). The present study demonstrates the importance of including the pre-analytical variation in the assessment of the total uncertainty budget and in the setting of the 95%-uncertainty interval (±2CV_T). Excluding the pre-analytical sampling variation could significantly affect the interpretation of results from segmental hair analysis. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
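The variance bookkeeping in the abstract amounts to adding independent components in quadrature; a sketch with numbers picked from the reported ranges:

```python
import math

# Total coefficient of variation as the quadrature sum of the
# pre-analytical (sampling) and analytical components; the 95%
# uncertainty interval is then roughly +/- 2*CV_T.
cv_pre = 0.40   # pre-analytical sampling variation (within 26-69%)
cv_ana = 0.10   # analytical variation (within 7-13%)

cv_total = math.sqrt(cv_pre**2 + cv_ana**2)
interval = 2 * cv_total
print(round(cv_total, 3), round(interval, 3))
```

The quadrature sum makes the abstract's point plainly: once the pre-analytical component dominates, shrinking the analytical CV barely changes the total uncertainty.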

  19. Comparison of Dissolution Similarity Assessment Methods for Products with Large Variations: f2 Statistics and Model-Independent Multivariate Confidence Region Procedure for Dissolution Profiles of Multiple Oral Products.

    PubMed

    Yoshida, Hiroyuki; Shibata, Hiroko; Izutsu, Ken-Ichi; Goda, Yukihiro

    2017-01-01

The current Japanese Ministry of Health, Labour and Welfare (MHLW) Guideline for Bioequivalence Studies of Generic Products uses averaged dissolution rates for the assessment of dissolution similarity between test and reference formulations. This study clarifies how the application of the model-independent multivariate confidence region procedure (Method B), described in the European Medicines Agency and U.S. Food and Drug Administration guidelines, affects similarity outcomes obtained empirically from dissolution profiles with large variations in individual dissolution rates. Sixty-one datasets of dissolution profiles for immediate-release oral generic and corresponding innovator products that showed large variation in individual dissolution rates in the generic products were assessed for similarity by using the f2 statistics defined in the MHLW guidelines (MHLW f2 method) and two different Method B procedures: a bootstrap method applied with f2 statistics (BS method) and a multivariate analysis method using the Mahalanobis distance (MV method). The MHLW f2 and BS methods provided similar dissolution similarities between reference and generic products. Although a small difference in the similarity assessment may be due to the decrease in the lower confidence interval of the expected f2 values caused by the large variation in individual dissolution rates, the MV method provided results different from those obtained with the MHLW f2 and BS methods. Analysis of actual dissolution data for products with large individual variations would provide valuable information towards an enhanced understanding of these methods and their possible incorporation into the MHLW guidelines.
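For reference, the f2 similarity factor in its standard regulatory form; the profiles below are invented mean dissolution percentages:

```python
import numpy as np

# f2 = 50 * log10(100 / sqrt(1 + mean squared difference)), computed
# over the sampling time points of the two mean dissolution profiles.
def f2(ref, test):
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    msd = np.mean((ref - test) ** 2)      # mean squared difference
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

ref = [35, 55, 75, 90]     # reference product, % dissolved
test = [30, 50, 72, 88]    # test product, % dissolved
val = f2(ref, test)
print(round(val, 1))       # f2 >= 50 is the usual similarity criterion
```

The bootstrap (BS) procedure in the abstract resamples individual profiles, recomputes this statistic on each resample, and bases the decision on the lower confidence bound of f2 rather than its point value.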

  20. Spatially adapted second-order total generalized variational image deblurring model under impulse noise

    NASA Astrophysics Data System (ADS)

    Zhong, Qiu-Xiang; Wu, Chuan-Sheng; Shu, Qiao-Ling; Liu, Ryan Wen

    2018-04-01

Image deblurring under impulse noise is a typical ill-posed problem which requires regularization methods to guarantee high-quality imaging. An L1-norm data-fidelity term and a total variation (TV) regularizer have been combined to form a popular regularization method. However, the TV-regularized variational image deblurring model often suffers from staircase-like artifacts that degrade image quality. To enhance image quality, the detail-preserving total generalized variation (TGV) regularizer was introduced in place of TV to eliminate these undesirable artifacts. The resulting nonconvex optimization problem was effectively solved using the alternating direction method of multipliers (ADMM). In addition, an automatic method for selecting spatially adapted regularization parameters was proposed to further improve deblurring performance. Our proposed image deblurring framework is able to remove blurring and impulse noise effects while maintaining image edge details. Comprehensive experiments demonstrate the superior performance of our proposed method over several state-of-the-art image deblurring methods.

  1. Single Transducer Ultrasonic Imaging Method that Eliminates the Effect of Plate Thickness Variation in the Image

    NASA Technical Reports Server (NTRS)

    Roth, Don J.

    1996-01-01

    This article describes a single transducer ultrasonic imaging method that eliminates the effect of plate thickness variation in the image. The method thus isolates ultrasonic variations due to material microstructure. The use of this method can result in significant cost savings because the ultrasonic image can be interpreted correctly without the need for machining to achieve precise thickness uniformity during nondestructive evaluations of material development. The method is based on measurement of ultrasonic velocity. Images obtained using the thickness-independent methodology are compared with conventional velocity and c-scan echo peak amplitude images for monolithic ceramic (silicon nitride), metal matrix composite and polymer matrix composite materials. It was found that the thickness-independent ultrasonic images reveal and quantify correctly areas of global microstructural (pore and fiber volume fraction) variation due to the elimination of thickness effects. The thickness-independent ultrasonic imaging method described in this article is currently being commercialized under a cooperative agreement between NASA Lewis Research Center and Sonix, Inc.

  2. Alternative to the Palatini method: A new variational principle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goenner, Hubert

    2010-06-15

A variational principle is suggested within Riemannian geometry, in which an auxiliary metric and the Levi-Civita connection are varied independently. The auxiliary metric plays the role of a Lagrange multiplier and introduces nonminimal coupling of matter to the curvature scalar. The field equations are second-order PDEs and easier to handle than those following from the so-called Palatini method. Moreover, in contrast to the latter method, no gradients of the matter variables appear. In cosmological modeling, the physics resulting from the alternative variational principle will differ from that obtained with the standard Palatini method.

  3. Hints of correlation between broad-line and radio variations for 3C 120

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, H. T.; Bai, J. M.; Li, S. K.

    2014-01-01

In this paper, we investigate the correlation between broad-line and radio variations for the broad-line radio galaxy 3C 120. By the z-transformed discrete correlation function method and the model-independent flux randomization/random subset selection (FR/RSS) Monte Carlo method, we find that broad Hβ line variations lead the 15 GHz variations. The FR/RSS method shows that the Hβ line variations lead the radio variations by τ_ob = 0.34 ± 0.01 yr. This time lag can be used to locate the position of the emitting region of radio outbursts in the jet, on the order of ∼5 lt-yr from the central engine. This distance is much larger than the size of the broad-line region. The large separation of the radio outburst emitting region from the broad-line region will observably influence the gamma-ray emission in 3C 120.

  4. Supervised Variational Relevance Learning, An Analytic Geometric Feature Selection with Applications to Omic Datasets.

    PubMed

    Boareto, Marcelo; Cesar, Jonatas; Leite, Vitor B P; Caticha, Nestor

    2015-01-01

We introduce Supervised Variational Relevance Learning (Suvrel), a variational method to determine metric tensors that define distance-based similarity in pattern classification, inspired by relevance learning. The variational method is applied to a cost function that penalizes large intraclass distances and favors small interclass distances. We find analytically the metric tensor that minimizes the cost function. Preprocessing the patterns with linear transformations using the metric tensor yields a dataset which can be more efficiently classified. We test our methods on publicly available datasets with several standard classifiers. Among these datasets, two were tested by the MAQC-II project and, even without the use of further preprocessing, our results improve on their performance.

  5. A Finite Mixture Method for Outlier Detection and Robustness in Meta-Analysis

    ERIC Educational Resources Information Center

    Beath, Ken J.

    2014-01-01

When performing a meta-analysis, unexplained variation above that predicted by within-study variation is usually modeled by a random effect. However, in some cases this is not sufficient to explain all the variation because of outlier or unusual studies. A previously described method is to define an outlier as a study requiring a higher random…

  6. A variationally coupled FE-BE method for elasticity and fracture mechanics

    NASA Technical Reports Server (NTRS)

    Lu, Y. Y.; Belytschko, T.; Liu, W. K.

    1991-01-01

    A new method for coupling finite element and boundary element subdomains in elasticity and fracture mechanics problems is described. The essential feature of this new method is that a single variational statement is obtained for the entire domain, and in this process the terms associated with tractions on the interfaces between the subdomains are eliminated. This provides the additional advantage that the ambiguities associated with the matching of discontinuous tractions are circumvented. The method leads to a direct procedure for obtaining the discrete equations for the coupled problem without any intermediate steps. In order to evaluate this method and compare it with previous methods, a patch test for coupled procedures has been devised. Evaluation of this variationally coupled method and other methods, such as stiffness coupling and constraint traction matching coupling, shows that this method is substantially superior. Solutions for a series of fracture mechanics problems are also reported to illustrate the effectiveness of this method.

  7. A semi-inverse variational method for generating the bound state energy eigenvalues in a quantum system: the Dirac Coulomb type-equation

    NASA Astrophysics Data System (ADS)

    Libarir, K.; Zerarka, A.

    2018-05-01

    Exact eigenspectra and eigenfunctions of the Dirac quantum equation are established using the semi-inverse variational method. This method considerably improves the efficiency and accuracy of the results compared with other methods commonly discussed in the literature. Applications to several state configurations are presented to illustrate the method.

  8. Multi-beam laser heterodyne measurement with ultra-precision for Young modulus based on oscillating mirror modulation

    NASA Astrophysics Data System (ADS)

    Li, Y. Chao; Ding, Q.; Gao, Y.; Ran, L. Ling; Yang, J. Ru; Liu, C. Yu; Wang, C. Hui; Sun, J. Feng

    2014-07-01

    This paper proposes a novel method of multi-beam laser heterodyne measurement for the Young modulus. Based on the Doppler effect and heterodyne technology, the information of the length variation is loaded onto the frequency difference of the multi-beam laser heterodyne signal by the frequency modulation of an oscillating mirror; after demodulation, the method simultaneously yields many values of the length variation caused by the mass variation. Processing these values by weighted averaging gives the length variation accurately, and eventually the Young modulus of the sample is obtained by calculation. The method was used in MATLAB to simulate the measurement of the Young modulus of a wire under different masses; the results show that the relative measurement error is just 0.3%.
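
    The weighted-average step can be sketched as follows, assuming inverse-variance weights (the abstract does not specify the weighting scheme, so this is only one plausible choice):

```python
import numpy as np

def weighted_length_variation(values, sigmas):
    """Combine repeated length-variation estimates from the demodulated
    heterodyne channels by inverse-variance weighting (assumed scheme)."""
    values = np.asarray(values, dtype=float)
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    mean = np.sum(w * values) / np.sum(w)
    sigma = np.sqrt(1.0 / np.sum(w))  # uncertainty of the combined value
    return mean, sigma
```

    Channels with smaller measurement uncertainty dominate the combined estimate, which is why averaging many demodulated values improves accuracy.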

  9. A parallel spatiotemporal saliency and discriminative online learning method for visual target tracking in aerial videos.

    PubMed

    Aghamohammadi, Amirhossein; Ang, Mei Choo; A Sundararajan, Elankovan; Weng, Ng Kok; Mogharrebi, Marzieh; Banihashem, Seyed Yashar

    2018-01-01

    Visual tracking in aerial videos is a challenging task in computer vision and remote sensing technologies due to appearance variation difficulties. Appearance variations are caused by camera and target motion, low-resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to deal with appearance variation difficulties in aerial videos, and amongst these methods, the spatiotemporal saliency detection approach has reported promising results in the context of moving target detection. However, it is not accurate for moving target detection when visual tracking is performed under appearance variations. In this study, a visual tracking method is proposed based on spatiotemporal saliency and discriminative online learning methods to deal with appearance variation difficulties. Temporal saliency is used to represent moving target regions, and it is extracted based on the frame difference with the Sauvola local adaptive thresholding algorithm. Spatial saliency is used to represent the target appearance details in candidate moving regions; SLIC superpixel segmentation, color, and moment features are used to compute the feature uniqueness and spatial compactness of saliency measurements. This is a time-consuming process, which prompted the development of a parallel algorithm to optimize and distribute the saliency detection processes across multiple processors. Spatiotemporal saliency is then obtained by combining the temporal and spatial saliencies to represent moving targets. Finally, a discriminative online learning algorithm is applied to generate a sample model based on the spatiotemporal saliency. This sample model is then incrementally updated to detect the target under appearance variation conditions. Experiments conducted on the VIVID dataset demonstrate that the proposed visual tracking method is effective and computationally efficient compared to state-of-the-art methods.

  10. A parallel spatiotemporal saliency and discriminative online learning method for visual target tracking in aerial videos

    PubMed Central

    2018-01-01

    Visual tracking in aerial videos is a challenging task in computer vision and remote sensing technologies due to appearance variation difficulties. Appearance variations are caused by camera and target motion, low-resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to deal with appearance variation difficulties in aerial videos, and amongst these methods, the spatiotemporal saliency detection approach has reported promising results in the context of moving target detection. However, it is not accurate for moving target detection when visual tracking is performed under appearance variations. In this study, a visual tracking method is proposed based on spatiotemporal saliency and discriminative online learning methods to deal with appearance variation difficulties. Temporal saliency is used to represent moving target regions, and it is extracted based on the frame difference with the Sauvola local adaptive thresholding algorithm. Spatial saliency is used to represent the target appearance details in candidate moving regions; SLIC superpixel segmentation, color, and moment features are used to compute the feature uniqueness and spatial compactness of saliency measurements. This is a time-consuming process, which prompted the development of a parallel algorithm to optimize and distribute the saliency detection processes across multiple processors. Spatiotemporal saliency is then obtained by combining the temporal and spatial saliencies to represent moving targets. Finally, a discriminative online learning algorithm is applied to generate a sample model based on the spatiotemporal saliency. This sample model is then incrementally updated to detect the target under appearance variation conditions. Experiments conducted on the VIVID dataset demonstrate that the proposed visual tracking method is effective and computationally efficient compared to state-of-the-art methods. PMID:29438421

  11. 27 CFR 22.22 - Alternate methods or procedures; and emergency variations from requirements.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2012-04-01 2012-04-01 false Alternate methods or procedures; and emergency variations from requirements. 22.22 Section 22.22 Alcohol, Tobacco Products and... OF TAX-FREE ALCOHOL Administrative Provisions Authorities § 22.22 Alternate methods or procedures...

  12. 27 CFR 22.22 - Alternate methods or procedures; and emergency variations from requirements.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2014-04-01 2014-04-01 false Alternate methods or procedures; and emergency variations from requirements. 22.22 Section 22.22 Alcohol, Tobacco Products and... OF TAX-FREE ALCOHOL Administrative Provisions Authorities § 22.22 Alternate methods or procedures...

  13. 27 CFR 22.22 - Alternate methods or procedures; and emergency variations from requirements.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2011-04-01 2011-04-01 false Alternate methods or procedures; and emergency variations from requirements. 22.22 Section 22.22 Alcohol, Tobacco Products and... OF TAX-FREE ALCOHOL Administrative Provisions Authorities § 22.22 Alternate methods or procedures...

  14. 27 CFR 22.22 - Alternate methods or procedures; and emergency variations from requirements.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2013-04-01 2013-04-01 false Alternate methods or procedures; and emergency variations from requirements. 22.22 Section 22.22 Alcohol, Tobacco Products and... OF TAX-FREE ALCOHOL Administrative Provisions Authorities § 22.22 Alternate methods or procedures...

  15. 27 CFR 22.22 - Alternate methods or procedures; and emergency variations from requirements.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Alternate methods or procedures; and emergency variations from requirements. 22.22 Section 22.22 Alcohol, Tobacco Products and... OF TAX-FREE ALCOHOL Administrative Provisions Authorities § 22.22 Alternate methods or procedures...

  16. Resolving Rapid Variation in Energy for Particle Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haut, Terry Scot; Ahrens, Cory Douglas; Jonko, Alexandra

    2016-08-23

    Resolving the rapid variation in energy in neutron and thermal radiation transport is needed for predictive simulation capability in high-energy-density physics applications. Energy variation is difficult to resolve due to rapid variations in cross sections and opacities caused by quantized energy levels in the nuclei and electron clouds. In recent work, we have developed a new technique to simultaneously capture slow and rapid variations in the opacities and the solution using homogenization theory, which is similar to the multiband (MB) method and to the finite-element with discontiguous support (FEDS) method, but does not require closure information. We demonstrated the accuracy and efficiency of the method for a variety of problems. We are researching how to extend the method to problems with multiple materials, and to the same material at different temperatures and densities. In this highlight, we briefly describe homogenization theory and some results.

  17. Some problems in applications of the linear variational method

    NASA Astrophysics Data System (ADS)

    Pupyshev, Vladimir I.; Montgomery, H. E.

    2015-09-01

    The linear variational method is a standard computational method in quantum mechanics and quantum chemistry. As taught in most classes, the general guidance is to include as many basis functions as practical in the variational wave function. However, if it is desired to study the patterns of energy change accompanying the change of system parameters such as the shape and strength of the potential energy, the problem becomes more complicated. We use one-dimensional systems with a particle in a rectangular or in a harmonic potential confined in an infinite rectangular box to illustrate situations where a variational calculation can give incorrect results. These situations arise when the lowest eigenvalue depends strongly on the parameters that describe the shape and strength of the potential. The numerical examples described in this work are provided as cautionary notes for practitioners of numerical variational calculations.
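
    The linear variational method itself is standard; a minimal numerical sketch for a particle confined to an infinite box with an added potential (hbar = m = 1, basis of exact box eigenfunctions; `ritz_box` is an illustrative name, not the authors' code) might look like:

```python
import numpy as np

def ritz_box(V, L=1.0, nbasis=20, npts=2001):
    """Linear variational (Ritz) method for a particle in an infinite box
    [0, L] with an added potential V(x); hbar = m = 1.  The basis is the
    set of exact box eigenfunctions sqrt(2/L) sin(n pi x / L), which are
    orthonormal, so we get an ordinary (not generalized) eigenproblem."""
    x = np.linspace(0.0, L, npts)
    dx = x[1] - x[0]
    n = np.arange(1, nbasis + 1)
    phi = np.sqrt(2.0 / L) * np.sin(np.outer(n, x) * np.pi / L)
    # Kinetic energy is diagonal in this basis: n^2 pi^2 / (2 L^2).
    H = np.diag(n**2 * np.pi**2 / (2.0 * L**2))
    # Potential matrix elements <m|V|n> by quadrature (phi vanishes at the
    # endpoints, so a plain Riemann sum equals the trapezoidal rule).
    H = H + (phi[:, None, :] * V(x) * phi[None, :, :]).sum(axis=-1) * dx
    return np.linalg.eigvalsh(H)

# Free box with L = 1: the exact ground-state energy is pi^2 / 2.
E_free = ritz_box(lambda x: np.zeros_like(x))
```

    The cautionary point of the abstract is that eigenvalues obtained this way can respond unreliably to changes in the potential's shape and strength parameters, even when each individual calculation converges.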

  18. Make no mistake—errors can be controlled*

    PubMed Central

    Hinckley, C

    2003-01-01

    

 Traditional quality control methods identify "variation" as the enemy. However, the control of variation by itself can never achieve the remarkably low non-conformance rates of world class quality leaders. Because the control of variation does not achieve the highest levels of quality, an inordinate focus on these techniques obscures key quality improvement opportunities and results in unnecessary pain and suffering for patients, and embarrassment, litigation, and loss of revenue for healthcare providers. Recent experience has shown that mistakes are the most common cause of problems in health care as well as in other industrial environments. Excessive product and process complexity contributes to both excessive variation and unnecessary mistakes. The best methods for controlling variation, mistakes, and complexity are each a form of mistake proofing. Using these mistake proofing techniques, virtually every mistake and non-conformance can be controlled at a fraction of the cost of traditional quality control methods. PMID:14532368

  19. Variational Algorithms for Test Particle Trajectories

    NASA Astrophysics Data System (ADS)

    Ellison, C. Leland; Finn, John M.; Qin, Hong; Tang, William M.

    2015-11-01

    The theory of variational integration provides a novel framework for constructing conservative numerical methods for magnetized test particle dynamics. The retention of conservation laws in the numerical time advance captures the correct qualitative behavior of the long time dynamics. For modeling the Lorentz force system, new variational integrators have been developed that are both symplectic and electromagnetically gauge invariant. For guiding center test particle dynamics, discretization of the phase-space action principle yields multistep variational algorithms, in general. Obtaining the desired long-term numerical fidelity requires mitigation of the multistep method's parasitic modes or applying a discretization scheme that possesses a discrete degeneracy to yield a one-step method. Dissipative effects may be modeled using Lagrange-D'Alembert variational principles. Numerical results will be presented using a new numerical platform that interfaces with popular equilibrium codes and utilizes parallel hardware to achieve reduced times to solution. This work was supported by DOE Contract DE-AC02-09CH11466.
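
    The simplest variational integrator is symplectic Euler; a sketch for the pendulum, illustrating the bounded energy error that the abstract attributes to retained conservation laws (this toy is not the authors' gauge-invariant Lorentz-force integrator):

```python
import numpy as np

def symplectic_euler(q0, p0, dt, steps):
    """Symplectic (variational) Euler for the pendulum H = p^2/2 - cos(q).
    Because the scheme derives from a discrete Lagrangian, the energy
    error stays bounded over long times instead of drifting."""
    q, p = q0, p0
    for _ in range(steps):
        p = p - dt * np.sin(q)  # kick: force evaluated at the current q
        q = q + dt * p          # drift: use the updated momentum
    return q, p

def energy(q, p):
    return 0.5 * p**2 - np.cos(q)

# Long integration of a 1-rad swing: the energy drift remains small.
e0 = energy(1.0, 0.0)
qf, pf = symplectic_euler(1.0, 0.0, dt=0.01, steps=100_000)
drift = abs(energy(qf, pf) - e0)
```

    A non-symplectic scheme of the same order (explicit Euler) would show secular energy growth over the same interval, which is the qualitative long-time failure the abstract warns about.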

  20. A parametric method for assessing diversification-rate variation in phylogenetic trees.

    PubMed

    Shah, Premal; Fitzpatrick, Benjamin M; Fordyce, James A

    2013-02-01

    Phylogenetic hypotheses are frequently used to examine variation in rates of diversification across the history of a group. Patterns of diversification-rate variation can be used to infer underlying ecological and evolutionary processes responsible for patterns of cladogenesis. Most existing methods examine rate variation through time. Methods for examining differences in diversification among groups are more limited. Here, we present a new method, parametric rate comparison (PRC), that explicitly compares diversification rates among lineages in a tree using a variety of standard statistical distributions. PRC can identify subclades of the tree where diversification rates are at variance with the remainder of the tree. A randomization test can be used to evaluate how often such variance would appear by chance alone. The method also allows for comparison of diversification rate among a priori defined groups. Further, the application of the PRC method is not restricted to monophyletic groups. We examined the performance of PRC using simulated data, which showed that PRC has acceptable false-positive rates and statistical power to detect rate variation. We apply the PRC method to the well-studied radiation of North American Plethodon salamanders, and support the inference that the large-bodied Plethodon glutinosus clade has a higher historical rate of diversification compared to other Plethodon salamanders.

  1. Hamilton's Principle and Approximate Solutions to Problems in Classical Mechanics

    ERIC Educational Resources Information Center

    Schlitt, D. W.

    1977-01-01

    Shows how to use the Ritz method for obtaining approximate solutions to problems expressed in variational form directly from the variational equation. Application of this method to classical mechanics is given. (MLH)

  2. Simultaneous Noncontact Precision Imaging of Microstructural and Thickness Variation in Dielectric Materials Using Terahertz Energy

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Seebo, Jeffrey P.; Winfree, William P.

    2008-01-01

    This article describes a noncontact, single-sided terahertz electromagnetic measurement and imaging method that simultaneously characterizes microstructural (e.g., spatially lateral density) and thickness variation in dielectric (insulating) materials. The method was demonstrated for two materials: Space Shuttle External Tank sprayed-on foam insulation and a silicon nitride ceramic. It is believed that this method can be used as an inspection method for current and future NASA thermal protection systems and other dielectric material inspection applications where microstructural and thickness variation require precision mapping. Scale-up to more complex shapes, such as cylindrical structures and structures with beveled regions, appears feasible.

  3. Nonperturbative calculations in the framework of variational perturbation theory in QCD

    NASA Astrophysics Data System (ADS)

    Solovtsova, O. P.

    2017-07-01

    We discuss applications of a method based on variational perturbation theory to perform calculations down to the lowest energy scales. The variational series is different from the conventional perturbative expansion and can be used to go beyond the weak-coupling regime. We apply this method to investigate the Borel representation of the light Adler function constructed from the τ data and to determine the residual condensates. It is shown that, within the suggested method, the optimal values of these lower-dimension condensates are close to zero.

  4. [Synthetic duration curve method for the design of the lowest navigable water level with inconsistent characters in dry seasons].

    PubMed

    Zhao, Jiang Yan; Xie, Ping; Sang, Yan Fang; Xui, Qiang Qiang; Wu, Zi Yi

    2018-04-01

    Under the influence of both global climate change and frequent human activities, the variability of the second moment in hydrological time series has become obvious, indicating changes in the consistency of hydrological data samples. Therefore, traditional hydrological series analysis methods, which only consider the variability of mean values, are not suitable for handling all hydrological non-consistency problems. Traditional synthetic duration curve methods for the design of the lowest navigable water level, based on the consistency of samples, would cause more risks to navigation, especially under low water levels in dry seasons. Here, we detected both mean variation and variance variation using a hydrological variation diagnosis system. Furthermore, combining the principle of decomposition and composition of time series, we proposed a synthetic duration curve method for designing the lowest navigable water level with inconsistent characters in dry seasons. Taking the Yunjinghong Station in the Lancang River Basin as an example, we analyzed its designed water levels in the present, the distant past, and the recent past, as well as the differences among three situations (i.e., considering second-moment variation, only considering mean variation, and not considering any variation). Results showed that variability of the second moment changed the trend of designed water level alteration at the Yunjinghong Station. Between considering the first two moments and only considering the mean variation, the difference in designed water levels was as large as -1.11 m; between considering the first two moments and not considering any variation, the difference was as large as -1.01 m. Our results indicated the strong effects of variance variation on the designed water levels, and highlighted the importance of second-moment variation analysis for channel planning and design.

  5. CNV-seq, a new method to detect copy number variation using high-throughput sequencing.

    PubMed

    Xie, Chao; Tammi, Martti T

    2009-03-06

    DNA copy number variation (CNV) has been recognized as an important source of genetic variation. Array comparative genomic hybridization (aCGH) is commonly used for CNV detection, but the microarray platform has a number of inherent limitations. Here, we describe a method to detect copy number variation using shotgun sequencing, CNV-seq. The method is based on a robust statistical model that describes the complete analysis procedure and allows the computation of essential confidence values for detection of CNV. Our results show that the number of reads, not the length of the reads, is the key factor determining the resolution of detection. This favors next-generation sequencing methods that rapidly produce large amounts of short reads. Simulation of various sequencing methods with coverage between 0.1× and 8× shows overall specificity between 91.7% and 99.9%, and sensitivity between 72.2% and 96.5%. We also show the results for assessment of CNV between two individual human genomes.
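
    A highly simplified sketch of the core CNV-seq idea (per-window log2 ratio of normalized read counts between two genomes; the published method wraps this in a full statistical model with confidence values, which is omitted here):

```python
import numpy as np

def cnv_log2_ratio(counts_x, counts_y, pseudocount=0.5):
    """Per-window log2 ratio of read counts between two genomes after
    normalizing by total reads; windows with large |ratio| are CNV
    candidates.  A toy sketch, not the published statistical model."""
    x = np.asarray(counts_x, dtype=float) + pseudocount
    y = np.asarray(counts_y, dtype=float) + pseudocount
    return np.log2((x / x.sum()) / (y / y.sum()))
```

    Because the ratio is built from counts, its variance shrinks as reads accumulate per window, which is the intuition behind the abstract's observation that read number, not read length, sets the detection resolution.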

  6. Removal of ring artifacts in microtomography by characterization of scintillator variations.

    PubMed

    Vågberg, William; Larsson, Jakob C; Hertz, Hans M

    2017-09-18

    Ring artifacts reduce image quality in tomography and arise from faulty detector calibration. In microtomography, we have identified that ring artifacts can arise from high-spatial-frequency variations in the scintillator thickness. Such variations are normally removed by a flat-field correction. However, as the spectrum changes, e.g. due to beam hardening, the detector response varies non-uniformly, introducing ring artifacts that persist after flat-field correction. In this paper, we present a method to correct for ring artifacts from variations in scintillator thickness, using a simple characterization of the local scintillator response. The method addresses the actual physical cause of the ring artifacts, in contrast to many other ring artifact removal methods that rely only on image post-processing. By applying the technique to an experimental phantom tomography, we show that ring artifacts are strongly reduced compared to a flat-field correction alone.
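
    For context, the standard flat-field correction that the paper goes beyond can be sketched as follows (the paper's scintillator-response characterization itself is not reproduced here):

```python
import numpy as np

def flat_field_correct(raw, flat, dark):
    """Standard flat-field correction: divide out fixed-pattern detector
    gain using a dark frame and a flat (open-beam) frame.  As the paper
    notes, rings persist when the response changes with the spectrum
    (beam hardening), which this single correction cannot remove."""
    return (raw - dark) / np.clip(flat - dark, 1e-12, None)
```

    The correction assumes each pixel's gain is constant across the scan; a spectrum-dependent scintillator response violates exactly that assumption.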

  7. Localization of a variational particle smoother

    NASA Astrophysics Data System (ADS)

    Morzfeld, M.; Hodyss, D.; Poterjoy, J.

    2017-12-01

    Given the success of 4D-variational methods (4D-Var) in numerical weather prediction, and recent efforts to merge ensemble Kalman filters with 4D-Var, we consider a method to merge particle methods and 4D-Var. This leads us to revisit variational particle smoothers (varPS). We study the collapse of varPS in high-dimensional problems and show how it can be prevented by weight localization. We test varPS on the Lorenz '96 model of dimensions n=40, n=400, and n=2000. In our numerical experiments, weight localization prevents the collapse of the varPS, and we note that the varPS yields results comparable to ensemble formulations of 4D-variational methods, while it outperforms the EnKF with tuned localization and inflation, and the localized standard particle filter. Additional numerical experiments suggest that using localized weights in varPS may not yield significant advantages over unweighted or linearized solutions in near-Gaussian problems.

  8. Numerical investigation of multi-beam laser heterodyne measurement with ultra-precision for linear expansion coefficient of metal based on oscillating mirror modulation

    NASA Astrophysics Data System (ADS)

    Li, Yan-Chao; Wang, Chun-Hui; Qu, Yang; Gao, Long; Cong, Hai-Fang; Yang, Yan-Ling; Gao, Jie; Wang, Ao-You

    2011-01-01

    This paper proposes a novel method of multi-beam laser heterodyne measurement for the metal linear expansion coefficient. Based on the Doppler effect and heterodyne technology, the information of the length variation is loaded onto the frequency difference of the multi-beam laser heterodyne signal by the frequency modulation of an oscillating mirror; after demodulation, the method simultaneously yields many values of the length variation caused by the temperature variation. Processing these values by weighted averaging gives the length variation accurately, and eventually the value of the linear expansion coefficient of the metal is obtained by calculation. The method was used in MATLAB to simulate the measurement of the linear expansion coefficient of a metal rod under different temperatures; the results show that the relative measurement error is just 0.4%.

  9. Sampling and pyrosequencing methods for characterizing bacterial communities in the human gut using 16S sequence tags.

    PubMed

    Wu, Gary D; Lewis, James D; Hoffmann, Christian; Chen, Ying-Yu; Knight, Rob; Bittinger, Kyle; Hwang, Jennifer; Chen, Jun; Berkowsky, Ronald; Nessel, Lisa; Li, Hongzhe; Bushman, Frederic D

    2010-07-30

    Intense interest centers on the role of the human gut microbiome in health and disease, but optimal methods for analysis are still under development. Here we present a study of methods for surveying bacterial communities in human feces using 454/Roche pyrosequencing of 16S rRNA gene tags. We analyzed fecal samples from 10 individuals and compared methods for storage, DNA purification, and sequence acquisition. To assess reproducibility, we compared samples one cm apart on a single stool specimen for each individual. To analyze storage methods, we compared 1) immediate freezing at -80 °C, 2) storage on ice for 24 hours, or 3) storage on ice for 48 hours. For DNA purification methods, we tested three commercial kits and bead-beating in hot phenol. Variations due to the different methodologies were compared to variation among individuals using two approaches--one based on presence-absence information for bacterial taxa (unweighted UniFrac) and the other taking into account their relative abundance (weighted UniFrac). In the unweighted analysis relatively little variation was associated with the different analytical procedures, and variation between individuals predominated. In the weighted analysis considerable variation was associated with the purification methods. Particularly notable was improved recovery of Firmicutes sequences using the hot phenol method. We also carried out surveys of the effects of different 454 sequencing methods (FLX versus Titanium) and amplification of different 16S rRNA variable gene segments. Based on our findings we present recommendations for protocols to collect, process and sequence bacterial 16S rDNA from fecal samples--some major points are 1) if feasible, bead-beating in hot phenol or use of the PSP kit improves recovery; 2) storage methods can be adjusted based on experimental convenience; 3) unweighted (presence-absence) comparisons are less affected by lysis method.

  10. An Analysis of Periodic Components in BL Lac Object S5 0716 +714 with MUSIC Method

    NASA Astrophysics Data System (ADS)

    Tang, J.

    2012-01-01

    Multiple signal classification (MUSIC) algorithms are introduced for estimating the variation periods of BL Lac objects. The principle of the MUSIC spectral analysis method and a theoretical analysis of its frequency resolution using analog signals are included. From the literature, we collected effective observation data of the BL Lac object S5 0716+714 in the V, R, and I bands from 1994 to 2008. The light variation periods of S5 0716+714 were obtained by means of the MUSIC spectral analysis method and the periodogram spectral analysis method. Two major periods exist for all bands: (3.33±0.08) years and (1.24±0.01) years. The period estimates based on the MUSIC spectral analysis method are compared with those based on the periodogram spectral analysis method. MUSIC is a super-resolution algorithm that requires only a small data length, and it could be used to detect the variation periods of weak signals.

  11. Optimal Variational Asymptotic Method for Nonlinear Fractional Partial Differential Equations.

    PubMed

    Baranwal, Vipul K; Pandey, Ram K; Singh, Om P

    2014-01-01

    We propose an optimal variational asymptotic method to solve time-fractional nonlinear partial differential equations. In the proposed method, an arbitrary number of auxiliary parameters γ0, γ1, γ2,… and auxiliary functions H0(x), H1(x), H2(x),… are introduced in the correction functional of the standard variational iteration method. The optimal values of these parameters are obtained by minimizing the square residual error. To test the method, we apply it to solve two important classes of nonlinear partial differential equations: (1) the fractional advection-diffusion equation with a nonlinear source term and (2) the fractional Swift-Hohenberg equation. Only a few iterations are required to achieve fairly accurate solutions of both problems.
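
    A one-parameter toy of the optimization step, choosing an auxiliary parameter γ by minimizing the square residual of a single variational-iteration correction for u' + u = 0, u(0) = 1 (the paper treats fractional PDEs; this linear ODE is only illustrative):

```python
import numpy as np

def optimal_gamma(t, gammas=np.linspace(-2.0, 0.0, 4001)):
    """Pick the auxiliary parameter gamma that minimizes the square
    residual of one variational-iteration correction for u' + u = 0,
    u(0) = 1, starting from u0(t) = 1: u1(t) = 1 + gamma * t, so the
    residual is R(t) = u1'(t) + u1(t) = gamma + 1 + gamma * t."""
    best_g, best_sq = None, np.inf
    for g in gammas:
        R = g + 1.0 + g * t
        sq = np.mean(R**2)  # proportional to the square residual integral
        if sq < best_sq:
            best_g, best_sq = g, sq
    return best_g

# On [0, 1] the analytic minimizer of the square residual is gamma = -9/14.
t = np.linspace(0.0, 1.0, 1001)
g_opt = optimal_gamma(t)
```

    In the paper's setting the same least-squares principle fixes many parameters and functions at once; a grid search is used here only to keep the sketch dependency-free.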

  12. Natural abundance deuterium and 18-oxygen effects on the precision of the doubly labeled water method

    NASA Technical Reports Server (NTRS)

    Horvitz, M. A.; Schoeller, D. A.

    2001-01-01

    The doubly labeled water method for measuring total energy expenditure is subject to error from natural variations in the background 2H and 18O in body water. There is disagreement as to whether the variations in background abundances of the two stable isotopes covary and what relative doses of 2H and 18O minimize the impact of variation on the precision of the method. We have performed two studies to investigate the amount and covariance of the background variations: a study of urine collected weekly from eight subjects who remained in the Madison, WI locale for 6 wk, and frequent urine samples from 14 subjects during round-trip travel to a locale ≥500 miles from Madison, WI. Background variation in excess of analytical error was detected in six of the eight nontravelers, and covariance was demonstrated in four subjects. Background variation was detected in all 14 travelers, and covariance was demonstrated in 11 subjects. The median slopes of the regression lines of δ2H vs. δ18O were 6 and 7, respectively. Modeling indicated that 2H and 18O doses yielding a 6:1 ratio of final enrichments should minimize this error introduced to the doubly labeled water method.

  13. Natural abundance deuterium and 18-oxygen effects on the precision of the doubly labeled water method.

    PubMed

    Horvitz, M A; Schoeller, D A

    2001-06-01

    The doubly labeled water method for measuring total energy expenditure is subject to error from natural variations in the background 2H and 18O in body water. There is disagreement as to whether the variations in background abundances of the two stable isotopes covary and what relative doses of 2H and 18O minimize the impact of variation on the precision of the method. We have performed two studies to investigate the amount and covariance of the background variations: a study of urine collected weekly from eight subjects who remained in the Madison, WI locale for 6 wk, and frequent urine samples from 14 subjects during round-trip travel to a locale ≥500 miles from Madison, WI. Background variation in excess of analytical error was detected in six of the eight nontravelers, and covariance was demonstrated in four subjects. Background variation was detected in all 14 travelers, and covariance was demonstrated in 11 subjects. The median slopes of the regression lines of δ2H vs. δ18O were 6 and 7, respectively. Modeling indicated that 2H and 18O doses yielding a 6:1 ratio of final enrichments should minimize this error introduced to the doubly labeled water method.

  14. An improved correlation method for determining the period of a torsion pendulum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo Jie; Wang Dianhong

    Considering the variation of the environment temperature and the inhomogeneity of the background gravitational field, an improved correlation method was proposed to determine the variational period of a torsion pendulum with high precision. Processing of experimental data shows that the uncertainty of the period determined with this method is improved about twofold compared with the traditional correlation method, which is significant for the determination of the gravitational constant with the time-of-swing method.

  15. Partitioning sources of variation in vertebrate species richness

    USGS Publications Warehouse

    Boone, R.B.; Krohn, W.B.

    2000-01-01

    Aim: To explore biogeographic patterns of terrestrial vertebrates in Maine, USA using techniques that would describe local and spatial correlations with the environment. Location: Maine, USA. Methods: We delineated the ranges within Maine (86,156 km2) of 275 species using literature and expert review. Ranges were combined into species richness maps, and compared to geomorphology, climate, and woody plant distributions. Methods were adapted that compared richness of all vertebrate classes to each environmental correlate, rather than assessing a single explanatory theory. We partitioned variation in species richness into components using tree and multiple linear regression. Methods were used that allowed for useful comparisons between tree and linear regression results. For both methods we partitioned variation into broad-scale (spatially autocorrelated) and fine-scale (spatially uncorrelated) explained and unexplained components. By partitioning variance, and using both tree and linear regression in analyses, we explored the degree of variation in species richness for each vertebrate group that could be explained by the relative contribution of each environmental variable. Results: In tree regression, climate variation explained richness better (92% of mean deviance explained for all species) than woody plant variation (87%) and geomorphology (86%). Reptiles were highly correlated with environmental variation (93%), followed by mammals, amphibians, and birds (82-84% deviance explained each). In multiple linear regression, climate was most closely associated with total vertebrate richness (78%), followed by woody plants (67%) and geomorphology (56%). Again, reptiles were closely correlated with the environment (95%), followed by mammals (73%), amphibians (63%) and birds (57%).
Main conclusions: Comparing variation explained using tree and multiple linear regression quantified the importance of nonlinear relationships and local interactions between species richness and environmental variation, identifying the importance of linear relationships between reptiles and the environment, and nonlinear relationships between birds and woody plants, for example. Conservation planners should capture climatic variation in broad-scale designs; temperatures may shift during climate change, but the underlying correlations between the environment and species richness will presumably remain.
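
    The decomposition into pure, shared, and unexplained components described above can be sketched numerically. Assuming the standard Borcard-style partitioning from three regressions (environment-only, space-only, and joint), with made-up R² values rather than the study's:

```python
def partition_variation(r2_env, r2_space, r2_joint):
    """Partition explained variation from three regression R^2 values."""
    pure_env = r2_joint - r2_space          # environment independent of space
    pure_space = r2_joint - r2_env          # space independent of environment
    shared = r2_env + r2_space - r2_joint   # spatially structured environment
    unexplained = 1.0 - r2_joint
    return pure_env, pure_space, shared, unexplained

print(partition_variation(0.78, 0.60, 0.85))  # ≈ (0.25, 0.07, 0.53, 0.15)
```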

  16. Methods of determining complete sensor requirements for autonomous mobility

    NASA Technical Reports Server (NTRS)

    Curtis, Steven A. (Inventor)

    2012-01-01

    A method of determining complete sensor requirements for autonomous mobility of an autonomous system includes computing a time variation of each behavior of a set of behaviors of the autonomous system, determining mobility sensitivity to each behavior of the autonomous system, and computing a change in mobility based upon the mobility sensitivity to each behavior and the time variation of each behavior. The method further includes determining the complete sensor requirements of the autonomous system through analysis of the relative magnitude of the change in mobility, the mobility sensitivity to each behavior, and the time variation of each behavior, wherein the relative magnitude of the change in mobility, the mobility sensitivity to each behavior, and the time variation of each behavior are characteristic of the stability of the autonomous system.
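
    Read literally, the change-in-mobility computation described above is a first-order sensitivity analysis: each behavior contributes (mobility sensitivity) × (time variation of the behavior). A minimal sketch, with all names and numbers hypothetical:

```python
def mobility_change(sensitivities, time_variations):
    """Total first-order change in mobility over one time step."""
    assert len(sensitivities) == len(time_variations)
    return sum(s * v for s, v in zip(sensitivities, time_variations))

# Two hypothetical behaviors: dM/db (sensitivity) and db/dt for each
dM_db = [2.0, 1.0]
db_dt = [0.5, 1.0]
print(mobility_change(dM_db, db_dt))  # -> 2.0
```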

  17. The Schwinger Variational Method

    NASA Technical Reports Server (NTRS)

    Huo, Winifred M.

    1995-01-01

    Variational methods have proven invaluable in theoretical physics and chemistry, both for bound state problems and for the study of collision phenomena. The application of the Schwinger variational (SV) method to e-molecule collisions and molecular photoionization has been reviewed previously. The present chapter discusses the implementation of the SV method as applied to e-molecule collisions. Since this is not a review of cross section data, cross sections are presented only to serve as illustrative examples. In the SV method, the correct boundary condition is automatically incorporated through the use of the Green's function. Thus SV calculations can employ basis functions with arbitrary boundary conditions. The iterative Schwinger method has been used extensively to study molecular photoionization. For e-molecule collisions, it is used at the static exchange level to study elastic scattering and coupled with the distorted wave approximation to study electronically inelastic scattering.
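
    For reference, the bilinear form of the Schwinger variational functional for the transition amplitude (standard notation, not reproduced from this chapter) is

```latex
[T_{fi}] = \langle \phi_f \,|\, V \,|\, \psi_i^{(+)} \rangle
         + \langle \psi_f^{(-)} \,|\, V \,|\, \phi_i \rangle
         - \langle \psi_f^{(-)} \,|\, V - V G_0^{(+)} V \,|\, \psi_i^{(+)} \rangle ,
```

    which is stationary under independent variations of the trial functions ψ_i^(+) and ψ_f^(-). Because the scattering boundary condition enters only through the outgoing-wave Green's function G_0^(+), the trial basis may have arbitrary boundary behavior, as the abstract notes.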

  18. Factor Retention in Exploratory Factor Analysis: A Comparison of Alternative Methods.

    ERIC Educational Resources Information Center

    Mumford, Karen R.; Ferron, John M.; Hines, Constance V.; Hogarty, Kristine Y.; Kromrey, Jeffery D.

    This study compared the effectiveness of 10 methods of determining the number of factors to retain in exploratory common factor analysis. The 10 methods included the Kaiser rule and a modified Kaiser criterion, 3 variations of parallel analysis, 4 regression-based variations of the scree procedure, and the minimum average partial procedure. The…

  19. Stationary variational estimates for the effective response and field fluctuations in nonlinear composites

    NASA Astrophysics Data System (ADS)

    Ponte Castañeda, Pedro

    2016-11-01

    This paper presents a variational method for estimating the effective constitutive response of composite materials with nonlinear constitutive behavior. The method is based on a stationary variational principle for the macroscopic potential in terms of the corresponding potential of a linear comparison composite (LCC) whose properties are the trial fields in the variational principle. When used in combination with estimates for the LCC that are exact to second order in the heterogeneity contrast, the resulting estimates for the nonlinear composite are also guaranteed to be exact to second order in the contrast. In addition, the new method allows full optimization with respect to the properties of the LCC, leading to estimates that are fully stationary and exhibit no duality gaps. As a result, the effective response and field statistics of the nonlinear composite can be estimated directly from the appropriately optimized linear comparison composite. By way of illustration, the method is applied to a porous, isotropic, power-law material, and the results are found to compare favorably with earlier bounds and estimates. However, the basic ideas of the method are expected to work for broad classes of composite materials, whose effective response can be given appropriate variational representations, including more general elasto-plastic and soft hyperelastic composites and polycrystals.

  20. Solvent effects in time-dependent self-consistent field methods. II. Variational formulations and analytical gradients

    DOE PAGES

    Bjorgaard, J. A.; Velizhanin, K. A.; Tretiak, S.

    2015-08-06

    This study describes variational energy expressions and analytical excited state energy gradients for time-dependent self-consistent field methods with polarizable solvent effects. Linear response, vertical excitation, and state-specific solvent models are examined. Enforcing a variational ground state energy expression in the state-specific model is found to reduce it to the vertical excitation model. Variational excited state energy expressions are then provided for the linear response and vertical excitation models, and analytical gradients are formulated. Using semiempirical model chemistry, the variational expressions are verified by numerical and analytical differentiation with respect to a static external electric field. Lastly, analytical gradients are further tested by performing microcanonical excited state molecular dynamics with p-nitroaniline.

  1. Microfluidic-Based Measurement Method of Red Blood Cell Aggregation under Hematocrit Variations

    PubMed Central

    2017-01-01

    Red blood cell (RBC) aggregation and erythrocyte sedimentation rate (ESR) are considered to be promising biomarkers for effectively monitoring blood rheology at extremely low shear rates. In this study, a microfluidic-based measurement technique is suggested to evaluate RBC aggregation under hematocrit variations due to the continuous ESR. After the pipette tip is tightly fitted into an inlet port, a disposable suction pump is connected to the outlet port through a polyethylene tube. After dropping blood (approximately 0.2 mL) into the pipette tip, the blood flow can be started and stopped by periodically operating a pinch valve. To evaluate variations in RBC aggregation due to the continuous ESR, a new index, the EAI (erythrocyte-sedimentation-rate aggregation index), based on temporal variations of image intensity, is suggested. To demonstrate the proposed method, the dynamic characteristics of the disposable suction pump are first quantitatively measured by varying the hematocrit levels and cavity volume of the suction pump. Next, variations in RBC aggregation and ESR are quantified by varying the hematocrit levels. The conventional aggregation index (AI) remains constant, independent of the hematocrit values. However, the EAI significantly decreased with respect to the hematocrit values. Thus, the EAI is more effective than the AI for monitoring variations in RBC aggregation due to the ESR. Lastly, the proposed method is employed to detect aggregated blood and thermally-induced blood. The EAI gradually increased as the concentration of a dextran solution increased. In addition, the EAI significantly decreased for thermally-induced blood. From this experimental demonstration, the proposed method is able to effectively measure variations in RBC aggregation due to continuous hematocrit variations, especially by quantifying the EAI.

  2. Using Check-All-That-Apply (CATA) method for determining product temperature-dependent sensory-attribute variations: A case study of cooked rice.

    PubMed

    Pramudya, Ragita C; Seo, Han-Seok

    2018-03-01

    Temperatures of most hot or cold meal items change over the period of consumption, possibly influencing sensory perception of those items. Unlike temporal variations in sensory attributes, product temperature-induced variations have not received much attention. Using a Check-All-That-Apply (CATA) method, this study aimed to characterize variations in sensory attributes over a wide range of temperatures at which hot or cold foods and beverages may be consumed. Cooked milled rice, typically consumed at temperatures between 70 and 30°C in many rice-eating countries, was used as a target sample in this study. Two brands of long-grain milled rice were cooked and randomly presented at 70, 60, 50, 40, and 30°C. Thirty-five CATA terms for cooked milled rice were generated. Eighty-eight untrained panelists were asked to quickly select all the CATA terms that they considered appropriate to characterize sensory attributes of cooked rice samples presented at each temperature. Proportions of selection by panelists for 13 attributes significantly differed among the five temperature conditions. "Product temperature-dependent sensory-attribute variations" differed with the two brands of milled rice grains. Such variations in sensory attributes, resulting from both product temperature and rice brand, were more pronounced among panelists who more frequently consumed rice. In conclusion, the CATA method can be useful for characterizing "product temperature-dependent sensory attribute variations" in cooked milled-rice samples. Further study is needed to examine whether the CATA method is also effective in capturing "product temperature-dependent sensory-attribute variations" in other hot or cold foods and beverages. Published by Elsevier Ltd.

  3. An historical survey of computational methods in optimal control.

    NASA Technical Reports Server (NTRS)

    Polak, E.

    1973-01-01

    Review of some of the salient theoretical developments in the specific area of optimal control algorithms. The first algorithms for optimal control were aimed at unconstrained problems and were derived by using first- and second-variation methods of the calculus of variations. These methods have subsequently been recognized as gradient, Newton-Raphson, or Gauss-Newton methods in function space. More recent additions to the arsenal of unconstrained optimal control algorithms are several variations of conjugate-gradient methods. At first, constrained optimal control problems could only be solved by exterior penalty function methods. Later, algorithms specifically designed for constrained problems appeared. Among these are methods for solving the unconstrained linear quadratic regulator problem, as well as certain constrained minimum-time and minimum-energy problems. Differential-dynamic programming was developed from dynamic programming considerations. The conditional-gradient method, the gradient-projection method, and a couple of feasible directions methods were obtained as extensions or adaptations of related algorithms for finite-dimensional problems. Finally, the so-called epsilon-methods combine the Ritz method with penalty function techniques.

  4. Techniques of orbital decay and long-term ephemeris prediction for satellites in earth orbit

    NASA Technical Reports Server (NTRS)

    Barry, B. F.; Pimm, R. S.; Rowe, C. K.

    1971-01-01

    In the special perturbation method, Cowell and variation-of-parameters formulations of the motion equations are implemented and numerically integrated. Variations in the orbital elements due to drag are computed using the 1970 Jacchia atmospheric density model, which includes the effects of semiannual variations, diurnal bulge, solar activity, and geomagnetic activity. In the general perturbation method, two-variable asymptotic series and automated manipulation capabilities are used to obtain analytical solutions to the variation-of-parameters equations. Solutions are obtained considering the effect of oblateness only and the combined effects of oblateness and drag. These solutions are then numerically evaluated by means of a FORTRAN program in which an updating scheme is used to maintain accurate epoch values of the elements. The atmospheric density function is approximated by a Fourier series in true anomaly, and the 1970 Jacchia model is used to periodically update the Fourier coefficients. The accuracy of both methods is demonstrated by comparing computed orbital elements to actual elements over time spans of up to 8 days for the special perturbation method and up to 356 days for the general perturbation method.

  5. Variation is function: Are single cell differences functionally important?: Testing the hypothesis that single cell variation is required for aggregate function.

    PubMed

    Dueck, Hannah; Eberwine, James; Kim, Junhyong

    2016-02-01

    There is a growing appreciation of the extent of transcriptome variation across individual cells of the same cell type. While expression variation may be a byproduct of, for example, dynamic or homeostatic processes, here we consider whether single-cell molecular variation per se might be crucial for population-level function. Under this hypothesis, molecular variation indicates a diversity of hidden functional capacities within an ensemble of identical cells, and this functional diversity facilitates collective behavior that would be inaccessible to a homogenous population. In reviewing this topic, we explore possible functions that might be carried by a heterogeneous ensemble of cells; however, this question has proven difficult to test, both because methods to manipulate molecular variation are limited and because it is complicated to define, and measure, population-level function. We consider several possible methods to further pursue the hypothesis that variation is function through the use of comparative analysis and novel experimental techniques. © 2015 The Authors. BioEssays published by WILEY Periodicals, Inc.

  6. [Study on the experimental application of floating-reference method to noninvasive blood glucose sensing].

    PubMed

    Yu, Hui; Qi, Dan; Li, Heng-da; Xu, Ke-xin; Yuan, Wei-jie

    2012-03-01

    Weak signals, a low instrument signal-to-noise ratio, continuous variation of the human physiological environment, and interference from other blood components make it difficult to extract blood glucose information from near-infrared spectra in noninvasive blood glucose measurement. The floating-reference method, which analyses the effect of glucose concentration variation on the absorption and scattering coefficients, acquires spectra at a reference point, where the light intensity variations from absorption and scattering cancel each other, and at a measurement point, where they are largest. By using the spectrum from the reference point as a reference, the floating-reference method can reduce interference from variations in the physiological environment and the experimental circumstances. In the present paper, the effectiveness of the floating-reference method in improving prediction precision and stability was assessed through application experiments. A comparison was made between models whose data were processed with and without the floating-reference method. The results showed that the root mean square error of prediction (RMSEP) decreased by up to 34.7%. The floating-reference method could reduce the influence of changes in sample state, instrument noise, and drift, and effectively improve the models' prediction precision and stability.
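
    The RMSEP figure quoted above is the usual root-mean-square error of prediction; a minimal sketch with invented concentration values (not data from the study):

```python
import math

def rmsep(predicted, reference):
    """Root mean square error of prediction."""
    n = len(predicted)
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(predicted, reference)) / n)

ref     = [5.0, 6.0, 7.0, 8.0]
model_a = [5.5, 6.5, 6.5, 8.5]   # without the floating-reference correction
model_b = [5.2, 6.2, 6.8, 8.2]   # with the correction (hypothetical)
improvement = 1 - rmsep(model_b, ref) / rmsep(model_a, ref)
print(round(improvement * 100, 1))  # percent decrease in RMSEP -> 60.0
```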

  7. A comparison of five methods for monitoring the precision of automated x-ray film processors.

    PubMed

    Nickoloff, E L; Leo, F; Reese, M

    1978-11-01

    Five different methods for preparing sensitometric strips used to monitor the precision of automated film processors are compared. A method for determining the sensitivity of each system to processor variations is presented; the observed statistical variability is multiplied by the system response to temperature or chemical changes. Pre-exposed sensitometric strips required the use of accurate densitometers and stringent control limits to be effective. X-ray exposed sensitometric strips demonstrated large variations in the x-ray output (2ω ≈ 8.0%) over a period of one month. Some light sensitometers were capable of detecting ±1.0°F (±0.6°C) variations in developer temperature in the processor and/or about 10.0 ml of chemical contamination in the processor. Nevertheless, even the light sensitometers were susceptible to problems, e.g. film emulsion selection, line voltage variations, and latent image fading. Advantages and disadvantages of the various sensitometric methods are discussed.

  8. The variational method in quantum mechanics: an elementary introduction

    NASA Astrophysics Data System (ADS)

    Borghi, Riccardo

    2018-05-01

    Variational methods in quantum mechanics are customarily presented as invaluable techniques to find approximate estimates of ground state energies. In the present paper a short catalogue of different celebrated potential distributions (both 1D and 3D), for which an exact and complete (energy and wavefunction) ground state determination can be achieved in an elementary way, is illustrated. No previous knowledge of calculus of variations is required. Rather, in all presented cases the exact energy functional minimization is achieved by using only a couple of simple mathematical tricks: ‘completion of square’ and integration by parts. This makes our approach particularly suitable for undergraduates. Moreover, the key role played by particle localization is emphasized through the entire analysis. This gentle introduction to the variational method could also be potentially attractive for more expert students as a possible elementary route toward a rather advanced topic on quantum mechanics: the factorization method. Such an unexpected connection is outlined in the final part of the paper.
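
    As a concrete instance of the two tricks the abstract mentions (this worked case is our own illustration, not quoted from the paper), the harmonic-oscillator ground state follows from completing the square in the energy functional of a normalized real wavefunction ψ:

```latex
E[\psi] = \int \left( \frac{\hbar^{2}}{2m}\,\psi'^{2}
        + \tfrac{1}{2}\, m\omega^{2} x^{2} \psi^{2} \right) dx
        = \int \left( \frac{\hbar}{\sqrt{2m}}\,\psi'
        + \sqrt{\tfrac{m}{2}}\,\omega x\,\psi \right)^{2} dx
        \;-\; \hbar\omega \int x\,\psi\,\psi'\, dx .
```

    Integration by parts gives ∫ xψψ′ dx = −1/2 for normalized ψ, so E[ψ] ≥ ħω/2, with equality when ψ′ = −(mω/ħ)xψ, i.e. for the Gaussian ground state, with no calculus of variations required.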

  9. Multigrid Solution of the Navier-Stokes Equations at Low Speeds with Large Temperature Variations

    NASA Technical Reports Server (NTRS)

    Sockol, Peter M.

    2002-01-01

    Multigrid methods for the Navier-Stokes equations at low speeds and large temperature variations are investigated. The compressible equations with time-derivative preconditioning and preconditioned flux-difference splitting of the inviscid terms are used. Three implicit smoothers have been incorporated into a common multigrid procedure. Both full coarsening and semi-coarsening with directional fine-grid defect correction have been studied. The resulting methods have been tested on four 2D laminar problems over a range of Reynolds numbers on both uniform and highly stretched grids. Two of the three methods show efficient and robust performance over the entire range of conditions. In addition none of the methods have any difficulty with the large temperature variations.

  10. Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction.

    PubMed

    Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen

    2016-01-01

    Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to promote a sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on the alternating minimization of an augmented Lagrangian function. All of the resulting subproblems, decoupled by variable splitting, admit explicit solutions via the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that are easy to implement and quick to calculate through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. Experiments on simulated and real data are qualitatively and quantitatively evaluated to validate the efficiency and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
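
    The generalized p-shrinkage mapping is not spelled out in the abstract; one commonly used form (Chartrand's p-shrinkage, assumed here) reduces to ordinary soft thresholding at p = 1:

```python
def p_shrink(x, lam, p):
    """Generalized p-shrinkage of a scalar (applied elementwise in practice)."""
    if x == 0.0:
        return 0.0
    # Threshold term lam**(2-p) * |x|**(p-1) equals lam when p = 1
    mag = max(abs(x) - lam ** (2.0 - p) * abs(x) ** (p - 1.0), 0.0)
    if mag == 0.0:
        return 0.0
    return mag if x > 0 else -mag

print(p_shrink(3.0, 1.0, 1.0))   # -> 2.0 (soft thresholding at p = 1)
print(p_shrink(-0.5, 1.0, 1.0))  # -> 0.0 (below threshold)
print(p_shrink(3.0, 1.0, 0.5))   # large values are shrunk less when p < 1
```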

  11. A Complete Color Normalization Approach to Histopathology Images Using Color Cues Computed From Saturation-Weighted Statistics.

    PubMed

    Li, Xingyu; Plataniotis, Konstantinos N

    2015-07-01

    In digital histopathology, tasks of segmentation and disease diagnosis are achieved by quantitative analysis of image content. However, color variation in image samples makes it challenging to produce reliable results. This paper introduces a complete normalization scheme to address the problem of color variation in histopathology images jointly caused by inconsistent biopsy staining and nonstandard imaging conditions. Method: Different from existing normalization methods that either address a partial cause of color variation or lump the causes together, our method identifies the causes of color variation based on a microscopic imaging model and addresses inconsistency in biopsy imaging and staining with an illuminant normalization module and a spectral normalization module, respectively. In evaluation, we use two public datasets that are representative of histopathology images commonly received in clinics to examine the proposed method from the aspects of robustness to system settings, performance consistency against achromatic pixels, and normalization effectiveness in terms of histological information preservation. As the saturation-weighted statistics proposed in this study generate stable and reliable color cues for stain normalization, our scheme is robust to system parameters and insensitive to image content and achromatic colors. Extensive experimentation suggests that our approach outperforms state-of-the-art normalization methods, as the proposed method is the only approach that succeeds in preserving histological information after normalization. The proposed color normalization solution would be useful to mitigate the effects of color variation in pathology images on subsequent quantitative analysis.

  12. Laplace transform homotopy perturbation method for the approximation of variational problems.

    PubMed

    Filobello-Nino, U; Vazquez-Leal, H; Rashidi, M M; Sedighi, H M; Perez-Sesma, A; Sandoval-Hernandez, M; Sarmiento-Reyes, A; Contreras-Hernandez, A D; Pereyra-Diaz, D; Hoyos-Reyes, C; Jimenez-Fernandez, V M; Huerta-Chua, J; Castro-Gonzalez, F; Laguna-Camacho, J R

    2016-01-01

    This article proposes the application of the Laplace Transform-Homotopy Perturbation Method and some of its modifications in order to find analytical approximate solutions for the linear and nonlinear differential equations which arise from some variational problems. As case studies we solve four ordinary differential equations, and we show that the proposed solutions have good accuracy; in one case we even obtain an exact solution. In the sequel, we see that the square residual error for the approximate solutions belongs to the interval [0.001918936920, 0.06334882582], which confirms the accuracy of the proposed methods, taking into account the complexity and difficulty of variational problems.

  13. Compensation of flare-induced CD changes EUVL

    DOEpatents

    Bjorkholm, John E [Pleasanton, CA; Stearns, Daniel G [Los Altos, CA; Gullikson, Eric M [Oakland, CA; Tichenor, Daniel A [Castro Valley, CA; Hector, Scott D [Oakland, CA

    2004-11-09

    A method for compensating for flare-induced critical dimension (CD) changes in photolithography. Changes in the flare level result in undesirable CD changes. When used in extreme ultraviolet (EUV) lithography, the method essentially eliminates the unwanted CD changes. The method is based on the recognition that the intrinsic level of flare for an EUV camera (the flare level for an isolated sub-resolution opaque dot in a bright field mask) is essentially constant over the image field. The method involves calculating the flare and its variation over the area of a patterned mask that will be imaged and then using mask biasing to largely eliminate the CD variations that the flare and its variations would otherwise cause. This method would be difficult to apply to optical or DUV lithography since the intrinsic flare for those lithographies is not constant over the image field.

  14. Applications of He's semi-inverse method, ITEM and GGM to the Davey-Stewartson equation

    NASA Astrophysics Data System (ADS)

    Zinati, Reza Farshbaf; Manafian, Jalil

    2017-04-01

    We investigate the Davey-Stewartson (DS) equation and find travelling wave solutions. In this paper, we demonstrate the effectiveness of three analytical methods, namely, He's semi-inverse variational principle method (SIVPM), the improved tan(φ/2)-expansion method (ITEM) and the generalized G'/G-expansion method (GGM), for seeking exact solutions of the DS equation. These methods are direct, concise and simple to implement compared to other existing methods. Exact solutions of four types have been obtained. The results demonstrate that the aforementioned methods are more efficient than the Ansatz method applied by Mirzazadeh (2015). Abundant exact travelling wave solutions, including soliton, kink, periodic and rational solutions, have been found by the improved tan(φ/2)-expansion and generalized G'/G-expansion methods. By He's semi-inverse variational principle we have obtained dark and bright soliton wave solutions; the principle also offers insight into the underlying physics. These solutions may play an important role in engineering and physics. Moreover, using Matlab, graphical simulations were performed to illustrate the behavior of these solutions.

  15. Sampling and pyrosequencing methods for characterizing bacterial communities in the human gut using 16S sequence tags

    PubMed Central

    2010-01-01

    Intense interest centers on the role of the human gut microbiome in health and disease, but optimal methods for analysis are still under development. Here we present a study of methods for surveying bacterial communities in human feces using 454/Roche pyrosequencing of 16S rRNA gene tags. We analyzed fecal samples from 10 individuals and compared methods for storage, DNA purification and sequence acquisition. To assess reproducibility, we compared samples one cm apart on a single stool specimen for each individual. To analyze storage methods, we compared 1) immediate freezing at -80°C, 2) storage on ice for 24 hours, and 3) storage on ice for 48 hours. For DNA purification methods, we tested three commercial kits and bead beating in hot phenol. Variations due to the different methodologies were compared to variation among individuals using two approaches--one based on presence-absence information for bacterial taxa (unweighted UniFrac) and the other taking into account their relative abundance (weighted UniFrac). In the unweighted analysis relatively little variation was associated with the different analytical procedures, and variation between individuals predominated. In the weighted analysis considerable variation was associated with the purification methods. Particularly notable was improved recovery of Firmicutes sequences using the hot phenol method. We also carried out surveys of the effects of different 454 sequencing methods (FLX versus Titanium) and amplification of different 16S rRNA variable gene segments. Based on our findings we present recommendations for protocols to collect, process and sequence bacterial 16S rDNA from fecal samples--some major points are 1) if feasible, bead-beating in hot phenol or use of the PSP kit improves recovery; 2) storage methods can be adjusted based on experimental convenience; 3) unweighted (presence-absence) comparisons are less affected by lysis method. PMID:20673359

  16. Numerical realization of the variational method for generating self-trapped beams

    NASA Astrophysics Data System (ADS)

    Duque, Erick I.; Lopez-Aguayo, Servando; Malomed, Boris A.

    2018-03-01

    We introduce a numerical variational method based on the Rayleigh-Ritz optimization principle for predicting two-dimensional self-trapped beams in nonlinear media. This technique overcomes the limitation of the traditional variational approximation in performing analytical Lagrangian integration and differentiation. Approximate soliton solutions of a generalized nonlinear Schrödinger equation are obtained, demonstrating robustness of the beams of various types (fundamental, vortices, multipoles, azimuthons) in the course of their propagation. The algorithm offers possibilities to produce more sophisticated soliton profiles in general nonlinear models.
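
    The idea of evaluating the Rayleigh-Ritz integrals numerically instead of analytically can be illustrated on a toy 1D problem (our own example, not the paper's 2D beam model): minimize the harmonic-oscillator energy (with hbar = m = omega = 1) over a Gaussian trial family psi_a(x) = exp(-a x^2) using grid quadrature. The exact answers are a = 0.5, E = 0.5.

```python
import math

def energy(a, xmax=8.0, dx=1e-3):
    """Rayleigh quotient E(a) = <psi_a|H|psi_a> / <psi_a|psi_a> on a grid."""
    num = den = 0.0
    steps = int(2 * xmax / dx) + 1
    for i in range(steps):
        x = -xmax + i * dx
        psi = math.exp(-a * x * x)
        dpsi = -2.0 * a * x * psi
        num += (0.5 * dpsi * dpsi + 0.5 * x * x * psi * psi) * dx
        den += psi * psi * dx
    return num / den

# Scan the single variational parameter instead of differentiating the
# Lagrangian analytically, as a numerical variational method would.
best_a = min((0.05 * k for k in range(1, 41)), key=energy)
print(round(best_a, 2), round(energy(best_a), 4))  # -> 0.5 0.5
```

    In the paper's setting the trial family has several parameters and the optimization is multidimensional, but the replacement of closed-form integrals by quadrature is the same.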

  17. GRACE Hydrological estimates for small basins: Evaluating processing approaches on the High Plains Aquifer, USA

    NASA Astrophysics Data System (ADS)

    Longuevergne, Laurent; Scanlon, Bridget R.; Wilson, Clark R.

    2010-11-01

    The Gravity Recovery and Climate Experiment (GRACE) satellites provide observations of water storage variation at regional scales. However, when focusing on a region of interest, limited spatial resolution and noise contamination can cause estimation bias and spatial leakage, problems that are exacerbated as the region of interest approaches the GRACE resolution limit of a few hundred km. Reliable estimates of water storage variations in small basins require compromises between competing needs for noise suppression and spatial resolution. The objective of this study was to quantitatively investigate processing methods and their impacts on bias, leakage, GRACE noise reduction, and estimated total error, allowing solution of the trade-offs. Among the methods tested is a recently developed concentration algorithm called spatiospectral localization, which optimizes the basin shape description, taking into account limited spatial resolution. This method is particularly suited to retrieval of basin-scale water storage variations and is effective for small basins. To increase confidence in derived methods, water storage variations were calculated for both CSR (Center for Space Research) and GRGS (Groupe de Recherche de Géodésie Spatiale) GRACE products, which employ different processing strategies. The processing techniques were tested on the intensively monitored High Plains Aquifer (450,000 km2 area), where application of the appropriate optimal processing method allowed retrieval of water storage variations over a portion of the aquifer as small as ˜200,000 km2.

  18. Virtual standardized patients: an interactive method to examine variation in depression care among primary care physicians

    PubMed Central

    Hooper, Lisa M.; Weinfurt, Kevin P.; Cooper, Lisa A.; Mensh, Julie; Harless, William; Kuhajda, Melissa C.; Epstein, Steven A.

    2009-01-01

    Background Some primary care physicians provide less than optimal care for depression (Kessler et al., Journal of the American Medical Association 291, 2581–90, 2004). However, the literature is not unanimous on the best method to use in order to investigate this variation in care. To capture variations in physician behaviour and decision making in primary care settings, 32 interactive CD-ROM vignettes were constructed and tested. Aim and method The primary aim of this methods-focused paper was to review the extent to which our study method – an interactive CD-ROM patient vignette methodology – was effective in capturing variation in physician behaviour. Specifically, we examined the following questions: (a) Did the interactive CD-ROM technology work? (b) Did we create believable virtual patients? (c) Did the research protocol enable interviews (data collection) to be completed as planned? (d) To what extent was the targeted study sample size achieved? and (e) Did the study interview protocol generate valid and reliable quantitative data and rich, credible qualitative data? Findings Among a sample of 404 randomly selected primary care physicians, our voice-activated interactive methodology appeared to be effective. Specifically, our methodology – combining interactive virtual patient vignette technology, experimental design, and expansive open-ended interview protocol – generated valid explanations for variations in primary care physician practice patterns related to depression care. PMID:20463864

  19. A tri-modality image fusion method for target delineation of brain tumors in radiotherapy.

    PubMed

    Guo, Lu; Shen, Shuming; Harris, Eleanor; Wang, Zheng; Jiang, Wei; Guo, Yu; Feng, Yuanming

    2014-01-01

    To develop a tri-modality image fusion method for better target delineation in image-guided radiotherapy for patients with brain tumors. A new method of tri-modality image fusion was developed that can fuse and display all image sets in one panel and one operation, and a feasibility study of gross tumor volume (GTV) delineation was conducted using data from three patients with brain tumors, including simulation CT, MRI, and 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) images acquired before radiotherapy. Tri-modality image fusion was implemented after image registrations of CT+PET and CT+MRI, and the transparency weight of each modality could be adjusted and set by users. Three radiation oncologists delineated GTVs for all patients using dual-modality (MRI/CT) and tri-modality (MRI/CT/PET) image fusion, respectively. Inter-observer variation was assessed by the coefficient of variation (COV), the average distance between surface and centroid (ADSC), and the local standard deviation (SDlocal). Analysis of COV was also performed to evaluate intra-observer volume variation. The inter-observer variation analysis showed that the mean COV was 0.14 (± 0.09) for dual-modality and 0.07 (± 0.01) for tri-modality; the standard deviation of ADSC was significantly reduced (p < 0.05) with tri-modality; and SDlocal averaged over the median GTV surface was reduced in patient 2 (from 0.57 cm to 0.39 cm) and patient 3 (from 0.42 cm to 0.36 cm) with the new method. The intra-observer volume variation was also significantly reduced (p = 0.00) with the tri-modality method compared with the dual-modality method. With the new tri-modality image fusion method, smaller inter- and intra-observer variation in GTV definition for brain tumors can be achieved, improving the consistency and accuracy of target delineation in individualized radiotherapy.

  20. Use of a variational moment method in calculating propagation constants for waveguides with an arbitrary index profile.

    PubMed

    Hardy, A; Itzkowitz, M; Griffel, G

    1989-05-15

    A variational moment method is used to calculate propagation constants of 1-D optical waveguides with an arbitrary index profile. The method is applicable to 2-D waveguides as well, and the index profiles need not be symmetric. Examples are given for the lowest-order and the next higher-order modes and are compared with exact numerical solutions.

  1. Some New Mathematical Methods for Variational Objective Analysis

    NASA Technical Reports Server (NTRS)

    Wahba, G.; Johnson, D. R.

    1984-01-01

    New and/or improved variational methods for simultaneously combining forecast, heterogeneous observational data, a priori climatology, and physics to obtain improved estimates of the initial state of the atmosphere for the purpose of numerical weather prediction are developed. Cross validated spline methods are applied to atmospheric data for the purpose of improved description and analysis of atmospheric phenomena such as the tropopause and frontal boundary surfaces.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altunbas, Cem, E-mail: caltunbas@gmail.com; Lai, Chao-Jen; Zhong, Yuncheng

    Purpose: In using flat panel detectors (FPD) for cone beam computed tomography (CBCT), pixel gain variations may lead to structured nonuniformities in projections and ring artifacts in CBCT images. Such gain variations can be caused by changes in detector entrance exposure levels or beam hardening, and they are not accounted for by conventional flat field correction methods. In this work, the authors presented a method to identify isolated pixel clusters that exhibit gain variations and proposed a pixel gain correction (PGC) method to suppress both beam hardening and exposure level dependent gain variations. Methods: To modulate both beam spectrum and entrance exposure, flood field FPD projections were acquired using beam filters with varying thicknesses. “Ideal” pixel values were estimated by performing polynomial fits in both raw and flat field corrected projections. Residuals were calculated by taking the difference between measured and ideal pixel values to identify clustered image and FPD artifacts in flat field corrected and raw images, respectively. To correct clustered image artifacts, the ratios of ideal to measured pixel values in filtered images were utilized as pixel-specific gain correction factors, referred to as the PGC method, and they were tabulated as a function of pixel value in a look-up table. Results: 0.035% of detector pixels led to clustered image artifacts in flat field corrected projections, and 80% of these pixels were traced back and linked to artifacts in the FPD. The performance of the PGC method was tested in a variety of imaging conditions and phantoms. The PGC method reduced clustered image artifacts and fixed pattern noise in projections, and ring artifacts in CBCT images. Conclusions: Clustered projection image artifacts that lead to ring artifacts in CBCT can be better identified with our artifact detection approach. When compared to the conventional flat field correction method, the proposed PGC method enables characterization of nonlinear pixel gain variations as a function of change in x-ray spectrum and intensity. Hence, it can better suppress image artifacts due to beam hardening as well as artifacts that arise from detector entrance exposure variation.
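The look-up-table step of this approach (a pixel-specific gain factor tabulated as a function of pixel value and applied by interpolation) can be sketched for a single pixel as follows. The calibration numbers are hypothetical, and this illustrates only the tabulation-and-lookup idea, not the authors' full PGC pipeline.

```python
def build_gain_lut(measured, ideal):
    """measured/ideal: pixel values from flood fields at several beam
    filtrations. Returns sorted (measured_value, gain) pairs with
    gain = ideal / measured."""
    return sorted((m, i / m) for m, i in zip(measured, ideal))

def correct(value, lut):
    # Linearly interpolate the gain factor stored in the look-up table,
    # clamping outside the calibrated range.
    if value <= lut[0][0]:
        return value * lut[0][1]
    if value >= lut[-1][0]:
        return value * lut[-1][1]
    for (m0, g0), (m1, g1) in zip(lut, lut[1:]):
        if m0 <= value <= m1:
            t = (value - m0) / (m1 - m0)
            return value * (g0 + t * (g1 - g0))

# Hypothetical calibration: three filter thicknesses for one pixel.
lut = build_gain_lut([100.0, 500.0, 900.0], [104.0, 515.0, 918.0])
corrected = correct(500.0, lut)   # applies the tabulated gain 515/500
```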

  3. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 1. Theory

    USGS Publications Warehouse

    Yen, Chung-Cheng; Guymon, Gary L.

    1990-01-01

    An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which only requires prior knowledge of the uncertain variables' means and coefficients of variation. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is only generally valid for linear problems where the coefficients of variation of uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method provided the number of uncertain variables is less than eight.
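The comparison described above can be sketched with Rosenblueth's two-point estimate (model evaluations at every mean ± standard deviation corner, 2^n for n variables) against a Monte Carlo reference. The response function and all numbers below are invented for illustration; only the structure of the two methods follows the record.

```python
import itertools
import random

def head(storage_coeff, conductivity):
    # Hypothetical, mildly nonlinear groundwater response (illustration only).
    return 10.0 + 2.0 / conductivity + 0.5 * storage_coeff

means, stds = [0.2, 4.0], [0.02, 0.4]   # small coefficients of variation

# Rosenblueth two-point estimate: evaluate at all mean +/- std corners;
# for independent, symmetric inputs each corner carries weight 2^-n.
pts = [head(*[m + s * sgn for m, s, sgn in zip(means, stds, signs)])
       for signs in itertools.product((-1.0, 1.0), repeat=2)]
tp_mean = sum(pts) / len(pts)
tp_var = sum(p * p for p in pts) / len(pts) - tp_mean ** 2

# Monte Carlo reference: thousands of model evaluations instead of four.
random.seed(0)
samples = [head(random.gauss(means[0], stds[0]),
                random.gauss(means[1], stds[1])) for _ in range(20000)]
mc_mean = sum(samples) / len(samples)
```

With four model runs the two-point mean lands within Monte Carlo noise of the 20 000-run estimate, which is the record's efficiency argument for weakly nonlinear problems with small coefficients of variation.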

  4. An Efficient Deterministic-Probabilistic Approach to Modeling Regional Groundwater Flow: 1. Theory

    NASA Astrophysics Data System (ADS)

    Yen, Chung-Cheng; Guymon, Gary L.

    1990-07-01

    An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which only requires prior knowledge of the uncertain variables' means and coefficients of variation. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is only generally valid for linear problems where the coefficients of variation of uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method provided the number of uncertain variables is less than eight.

  5. A First Step towards Variational Methods in Engineering

    ERIC Educational Resources Information Center

    Periago, Francisco

    2003-01-01

    In this paper, a didactical proposal is presented to introduce the variational methods for solving boundary value problems to engineering students. Starting from a couple of simple models arising in linear elasticity and heat diffusion, the concept of weak solution for these models is motivated and the existence, uniqueness and continuous…

  6. A study on Marangoni convection by the variational iteration method

    NASA Astrophysics Data System (ADS)

    Karaoǧlu, Onur; Oturanç, Galip

    2012-09-01

    In this paper, we will consider the use of the variational iteration method and Padé approximants for finding approximate solutions for a Marangoni convection induced flow over a free surface due to an imposed temperature gradient. The solutions are compared with numerical (fourth-order Runge-Kutta) solutions.

  7. WE-FG-207B-05: Iterative Reconstruction Via Prior Image Constrained Total Generalized Variation for Spectral CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niu, S; Zhang, Y; Ma, J

    Purpose: To investigate iterative reconstruction via prior image constrained total generalized variation (PICTGV) for spectral computed tomography (CT) using fewer projections while achieving greater image quality. Methods: The proposed PICTGV method is formulated as an optimization problem, which balances the data fidelity and prior image constrained total generalized variation of reconstructed images in one framework. The PICTGV method is based on structure correlations among images in the energy domain and uses high-quality images to guide the reconstruction of energy-specific images. In the PICTGV method, the high-quality image is reconstructed from all detector-collected X-ray signals and is referred to as the broad-spectrum image. Distinct from the existing reconstruction methods applied to images with a first-order derivative, the higher-order derivative of the images is incorporated into the PICTGV method. An alternating optimization algorithm is used to minimize the PICTGV objective function. We evaluate the performance of PICTGV in suppressing noise and artifacts using phantom studies and compare the method with the conventional filtered back-projection method as well as a TGV-based method without a prior image. Results: On the digital phantom, the proposed method outperforms the existing TGV method in terms of noise reduction, artifact suppression, and edge detail preservation. Compared to that obtained by the TGV-based method without a prior image, the relative root mean square error in the images reconstructed by the proposed method is reduced by over 20%. Conclusion: The authors propose an iterative reconstruction via prior image constrained total generalized variation for spectral CT. We have also developed an alternating optimization algorithm and numerically demonstrated the merits of our approach. Results show that the proposed PICTGV method outperforms the TGV method for spectral CT.

  8. Automated mask and wafer defect classification using a novel method for generalized CD variation measurements

    NASA Astrophysics Data System (ADS)

    Verechagin, V.; Kris, R.; Schwarzband, I.; Milstein, A.; Cohen, B.; Shkalim, A.; Levy, S.; Price, D.; Bal, E.

    2018-03-01

    Over the years, mask and wafer defect dispositioning has become an increasingly challenging and time-consuming task. With design rules getting smaller, OPC getting more complex, and scanner illumination taking on free-form shapes, the likelihood that a user can perform accurate and repeatable classification of defects detected by mask inspection tools into pass/fail bins is decreasing. The critical challenges of mask defect metrology for small nodes (< 30 nm) were reviewed in [1]. While Critical Dimension (CD) variation measurement is still the method of choice for determining a mask defect's future impact on wafer, the high complexity of OPCs combined with high variability in pattern shapes poses a challenge for any automated CD variation measurement method. In this study, a novel approach for measurement generalization is presented. CD variation assessment performance is evaluated on multiple different complex-shape patterns and is benchmarked against an existing qualified measurement methodology.

  9. A New Evaluation Method of Stored Heat Effect of Reinforced Concrete Wall of Cold Storage

    NASA Astrophysics Data System (ADS)

    Nomura, Tomohiro; Murakami, Yuji; Uchikawa, Motoyuki

    Operating the refrigerator of a cold storage intermittently, using the externally insulated reinforced concrete wall as a heat store, has become an important energy-saving measure. The aim of this paper is an evaluation method capable of numerically calculating the interval for which the refrigerator can remain stopped when the reinforced concrete wall is used as a source of stored heat. Experiments with concrete models were performed in order to examine the time variation of the internal temperature after the refrigerator stopped. In addition, a three-dimensional unsteady FEM simulation method for personal computers was introduced for easily analyzing the internal temperature variation. Using this method, it is possible to obtain the time variation of the internal temperature and to calculate the interval time for stopping the refrigerator.
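The underlying calculation, how long the interior stays cold after the refrigerator stops, can be sketched in 1-D with explicit finite differences. The paper uses a 3-D unsteady FEM; the material value, wall thickness, and boundary conditions below are illustrative assumptions only.

```python
# 1-D transient conduction through a pre-cooled concrete wall after the
# refrigerator stops: outside face at ambient, inside face adiabatic.
alpha = 7e-7          # thermal diffusivity of concrete, m^2/s (typical value)
L, n = 0.3, 31        # wall thickness (m) and number of grid points
dx = L / (n - 1)
dt = 0.4 * dx * dx / alpha          # satisfies the explicit stability limit
T = [-5.0] * n                      # wall initially at storage temperature, C
T[-1] = 25.0                        # outside face held at ambient temperature

hours = 0.0
while T[0] < 0.0:                   # run until the inside face reaches 0 C
    Tn = T[:]
    for i in range(1, n - 1):
        Tn[i] = T[i] + alpha * dt / (dx * dx) * (T[i-1] - 2*T[i] + T[i+1])
    Tn[0] = Tn[1]                   # adiabatic (insulated) inside face
    T = Tn
    hours += dt / 3600.0
```

For these assumed numbers the inside face takes on the order of several hours to warm to 0 °C, which is the kind of stop-interval estimate the record's method is after.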

  10. Application of the moving frame method to deformed Willmore surfaces in space forms

    NASA Astrophysics Data System (ADS)

    Paragoda, Thanuja

    2018-06-01

    The main goal of this paper is to use the theory of exterior differential forms in deriving variations of the deformed Willmore energy in space forms and study the minimizers of the deformed Willmore energy in space forms. We derive both first and second order variations of deformed Willmore energy in space forms explicitly using moving frame method. We prove that the second order variation of deformed Willmore energy depends on the intrinsic Laplace Beltrami operator, the sectional curvature and some special operators along with mean and Gauss curvatures of the surface embedded in space forms, while the first order variation depends on the extrinsic Laplace Beltrami operator.

  11. Role of regression analysis and variation of rheological data in calculation of pressure drop for sludge pipelines.

    PubMed

    Farno, E; Coventry, K; Slatter, P; Eshtiaghi, N

    2018-06-15

    Sludge pumps in wastewater treatment plants are often oversized due to uncertainty in the calculation of pressure drop. This issue costs industry millions of dollars to purchase and operate the oversized pumps. Beyond cost, the higher electricity consumption is associated with extra CO2 emissions, which creates a substantial environmental impact. Calculation of pressure drop via current pipe flow theory requires model estimation of flow curve data, which depends on regression analysis and also varies with the natural variation of rheological data. This study investigates the impact of the variation of rheological data and of regression analysis on the variation of pressure drop calculated via current pipe flow theories. Results compare the variation of calculated pressure drop between different models and regression methods and suggest the suitability of each method.

  12. CNV-TV: a robust method to discover copy number variation from short sequencing reads.

    PubMed

    Duan, Junbo; Zhang, Ji-Gang; Deng, Hong-Wen; Wang, Yu-Ping

    2013-05-02

    Copy number variation (CNV) is an important structural variation (SV) in the human genome. Various studies have shown that CNVs are associated with complex diseases. Traditional CNV detection methods such as fluorescence in situ hybridization (FISH) and array comparative genomic hybridization (aCGH) suffer from low resolution. The next generation sequencing (NGS) technique promises higher resolution detection of CNVs, and several methods were recently proposed for realizing such a promise. However, the performance of these methods is not robust under some conditions; e.g., some of them may fail to detect CNVs of short sizes. There has been a strong demand for reliable detection of CNVs from high resolution NGS data. A novel and robust method to detect CNV from short sequencing reads is proposed in this study. The detection of CNV is modeled as change-point detection from the read depth (RD) signal derived from the NGS data, which is fitted with a total variation (TV) penalized least squares model. The performance (e.g., sensitivity and specificity) of the proposed approach is evaluated by comparison with several recently published methods on both simulated and real data from the 1000 Genomes Project. The experimental results showed that both the true positive rate and the false positive rate of the proposed detection method do not change significantly for CNVs with different copy numbers and lengths, when compared with several existing methods. Therefore, our proposed approach results in a more reliable detection of CNVs than the existing methods.
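The change-point idea can be sketched on a toy read-depth signal. The paper fits a TV (L1) penalty, which needs a convex solver; as a simplified, plainly-labeled stand-in, the closely related L0 (per-breakpoint) penalized least squares is solved exactly below by dynamic programming. Signal and penalty value are invented for illustration.

```python
def segment(y, beta):
    """Exact piecewise-constant segmentation minimizing
    sum of segment SSEs + beta * (number of segments)."""
    n = len(y)
    s = [0.0] * (n + 1)       # prefix sums for O(1) segment statistics
    s2 = [0.0] * (n + 1)
    for i, v in enumerate(y):
        s[i + 1] = s[i] + v
        s2[i + 1] = s2[i] + v * v

    def sse(i, j):            # squared error of y[i:j] around its mean
        m = (s[j] - s[i]) / (j - i)
        return s2[j] - s2[i] - m * (s[j] - s[i])

    best = [0.0] * (n + 1)    # best[j]: optimal cost of y[:j]
    back = [0] * (n + 1)
    for j in range(1, n + 1):
        best[j], back[j] = min(
            (best[i] + sse(i, j) + beta, i) for i in range(j))
    cuts, j = [], n           # backtrack the optimal segment starts
    while j > 0:
        cuts.append(back[j])
        j = back[j]
    return sorted(cuts)

# Toy read-depth signal: a copy-number gain between positions 20 and 40.
rd = [2.0] * 20 + [3.0] * 20 + [2.0] * 20
starts = segment(rd, beta=1.0)
```

On this noiseless signal the exact breakpoints are recovered; the TV penalty of the paper additionally shrinks small jumps rather than counting them.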

  13. Constrained variation in Jastrow method at high density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owen, J.C.; Bishop, R.F.; Irvine, J.M.

    1976-11-01

    A method is derived for constraining the correlation function in a Jastrow variational calculation which permits the truncation of the cluster expansion after two-body terms, and which permits exact minimization of the two-body cluster by functional variation. This method is compared with one previously proposed by Pandharipande and is found to be superior both theoretically and practically. The method is tested both on liquid ³He, by using the Lennard-Jones potential, and on the model system of neutrons treated as Boltzmann particles ("homework" problem). Good agreement is found both with experiment and with other calculations involving the explicit evaluation of higher-order terms in the cluster expansion. The method is then applied to a more realistic model of a neutron gas up to a density of 4 neutrons per fm³, and is found to give ground-state energies considerably lower than those of Pandharipande. (AIP)

  14. A visual tracking method based on deep learning without online model updating

    NASA Astrophysics Data System (ADS)

    Tang, Cong; Wang, Yicheng; Feng, Yunsong; Zheng, Chao; Jin, Wei

    2018-02-01

    The paper proposes a visual tracking method based on deep learning without online model updating. In consideration of the advantages of deep learning in feature representation, the deep model SSD (Single Shot Multibox Detector) is used as the object extractor in the tracking model. Simultaneously, the color histogram feature and the HOG (Histogram of Oriented Gradients) feature are combined to select the tracking object. In the process of tracking, a multi-scale object searching map is built to improve the detection performance of the deep detection model and the tracking efficiency. In experiments on eight tracking video sequences from the baseline dataset, compared with six state-of-the-art methods, the proposed method is more robust to challenging tracking factors such as deformation, scale variation, rotation variation, illumination variation, and background clutter, and its overall performance is better than that of the other six tracking methods.

  15. Adaptive variational mode decomposition method for signal processing based on mode characteristic

    NASA Astrophysics Data System (ADS)

    Lian, Jijian; Liu, Zhuo; Wang, Haijun; Dong, Xiaofeng

    2018-07-01

    Variational mode decomposition is a completely non-recursive decomposition model in which all the modes are extracted concurrently. However, the model requires a preset mode number, which limits the adaptability of the method, since a large deviation in the preset mode number will cause modes to be discarded or mixed. Hence, a method called Adaptive Variational Mode Decomposition (AVMD) is proposed to determine the mode number automatically, based on the characteristics of the intrinsic mode functions. The method was used to analyze simulated signals and signals measured in a hydropower plant. Comparisons have also been conducted against VMD, EMD and EWT to evaluate the performance. The results indicate that the proposed method has strong adaptability and is robust to noise. It can determine the mode number appropriately, without mode mixing, even when the signal frequencies are relatively close.

  16. Selecting Magnet Lamination Recipes Using the Method of Simulated Annealing

    NASA Astrophysics Data System (ADS)

    Russell, A. D.; Baiod, R.; Brown, B. C.; Harding, D. J.; Martin, P. S.

    1997-05-01

    The Fermilab Main Injector project is building 344 dipoles using more than 7000 tons of steel. Budget and logistical constraints required that steel production, lamination stamping and magnet fabrication proceed in parallel. There were significant run-to-run variations in the magnetic properties of the steel (Martin, P.S., et al., Variations in the Steel Properties and the Excitation Characteristics of FMI Dipoles, this conference). The large lamination size (>0.5 m coil opening) resulted in variations of gap height due to differences in stress relief in the steel after stamping. To minimize magnet-to-magnet strength and field shape variations the laminations were shuffled based on the available magnetic and mechanical data and assigned to magnets using a computer program based on the method of simulated annealing. The lamination sets selected by the program have produced magnets which easily satisfy the design requirements. Variations of the average magnet gap are an order of magnitude smaller than the variations in lamination gaps. This paper discusses observed gap variations, the program structure and the strength uniformity results.
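The assignment idea can be sketched with a generic simulated-annealing loop that shuffles items with measured "gap heights" into groups so that the group means come out uniform. The data, move set, objective, and cooling schedule below are illustrative assumptions, not the Fermilab program.

```python
import math
import random

random.seed(1)
gaps = [random.gauss(50.0, 0.5) for _ in range(120)]   # hypothetical gap data
n_magnets, per = 6, 20                                 # 6 magnets of 20 each

def spread(order):
    # Objective: range of the magnet-average gaps (smaller = more uniform).
    means = [sum(order[k*per:(k+1)*per]) / per for k in range(n_magnets)]
    return max(means) - min(means)

order = gaps[:]
initial = spread(order)
T = 1.0
for step in range(20000):
    i, j = random.randrange(len(order)), random.randrange(len(order))
    cur = spread(order)
    order[i], order[j] = order[j], order[i]            # propose a swap
    new = spread(order)
    # Metropolis rule: always keep improvements, keep worse states
    # with probability exp(-(new - cur) / T).
    if new > cur and random.random() >= math.exp((cur - new) / T):
        order[i], order[j] = order[j], order[i]        # reject: swap back
    T *= 0.9995                                        # geometric cooling

final = spread(order)
```

The annealed assignment typically reduces the magnet-to-magnet spread far below that of a random stacking order, mirroring the record's observation that gap variations between magnets came out an order of magnitude smaller than those between laminations.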

  17. Improving detection of copy-number variation by simultaneous bias correction and read-depth segmentation.

    PubMed

    Szatkiewicz, Jin P; Wang, WeiBo; Sullivan, Patrick F; Wang, Wei; Sun, Wei

    2013-02-01

    Structural variation is an important class of genetic variation in mammals. High-throughput sequencing (HTS) technologies promise to revolutionize copy-number variation (CNV) detection but present substantial analytic challenges. Converging evidence suggests that multiple types of CNV-informative data (e.g. read-depth, read-pair, split-read) need be considered, and that sophisticated methods are needed for more accurate CNV detection. We observed that various sources of experimental biases in HTS confound read-depth estimation, and note that bias correction has not been adequately addressed by existing methods. We present a novel read-depth-based method, GENSENG, which uses a hidden Markov model and negative binomial regression framework to identify regions of discrete copy-number changes while simultaneously accounting for the effects of multiple confounders. Based on extensive calibration using multiple HTS data sets, we conclude that our method outperforms existing read-depth-based CNV detection algorithms. The concept of simultaneous bias correction and CNV detection can serve as a basis for combining read-depth with other types of information such as read-pair or split-read in a single analysis. A user-friendly and computationally efficient implementation of our method is freely available.

  18. Use of variational methods in the determination of wind-driven ocean circulation

    NASA Technical Reports Server (NTRS)

    Gelos, R.; Laura, P. A. A.

    1976-01-01

    Simple polynomial approximations and a variational approach were used to predict wind-induced circulation in rectangular ocean basins. Stommel's and Munk's models were solved in a unified fashion by means of the proposed method. Very good agreement with exact solutions available in the literature was shown to exist. The method was then applied to more complex situations where an exact solution seems out of the question.

  19. Variable Density Effects in Stochastic Lagrangian Models for Turbulent Combustion

    DTIC Science & Technology

    2016-07-20

    The advantages of PDF methods in dealing with chemical reaction and convection are preserved irrespective of density variation. Since the density variation in a typical...combustion process may be as large as a factor of seven, including variable-density effects in PDF methods is of significance. Conventionally, the...strategy of modelling variable-density flows in PDF methods is similar to that used for second-moment closure models (SMCM): models are developed based on

  20. Efficient and accurate causal inference with hidden confounders from genome-transcriptome variation data

    PubMed Central

    2017-01-01

    Mapping gene expression as a quantitative trait using whole-genome sequencing and transcriptome analysis allows discovery of the functional consequences of genetic variation. We developed a novel method and ultra-fast software, Findr, for highly accurate causal inference between gene expression traits using cis-regulatory DNA variations as causal anchors, which improves on current methods by taking into consideration hidden confounders and weak regulations. Findr outperformed existing methods on the DREAM5 Systems Genetics challenge and on the prediction of microRNA and transcription factor targets in human lymphoblastoid cells, while being nearly a million times faster. Findr is publicly available at https://github.com/lingfeiwang/findr. PMID:28821014

  1. Variational Methods in Sensitivity Analysis and Optimization for Aerodynamic Applications

    NASA Technical Reports Server (NTRS)

    Ibrahim, A. H.; Hou, G. J.-W.; Tiwari, S. N. (Principal Investigator)

    1996-01-01

    Variational methods (VM) sensitivity analysis, which is the continuous alternative to the discrete sensitivity analysis, is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations together with the converged solution of the costate equations are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The determination of the sensitivity derivatives of the performance index or functional entails the coupled solutions of the state and costate equations. As the stable and converged numerical solution of the costate equations with their boundary conditions is a priori unknown, numerical stability analysis is performed on both the state and costate equations. Thereafter, based on the amplification factors obtained by solving the generalized eigenvalue equations, the stability behavior of the costate equations is discussed and compared with that of the state (Euler) equations. The stability analysis of the costate equations suggests that a converged and stable solution of the costate equation is possible only if the computational domain of the costate equations is transformed to take into account the reverse flow nature of the costate equations. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. 
The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational methods show a substantial gain in computational efficiency, i.e., computer time and memory, when compared with finite difference sensitivity analysis.
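The costate (adjoint) economy described above, one extra linear solve giving the sensitivity of a functional regardless of the number of design parameters, can be sketched on a tiny discrete analogue: a 2x2 linear state equation A(p) x = b with functional J = c·x, not the Euler equations. All values are illustrative.

```python
def solve2(A, b):
    # Direct 2x2 solve via Cramer's rule.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def A_of(p):
    # Design parameter p enters only the (0,0) entry, so dA/dp is
    # zero except for a 1 in that position.
    return [[2.0 + p, 1.0], [0.5, 3.0]]

b, c, p = [1.0, 2.0], [1.0, 1.0], 0.5

A = A_of(p)
x = solve2(A, b)                                 # state solve: A x = b
At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
lam = solve2(At, c)                              # costate solve: A^T lam = c
adjoint_grad = -lam[0] * x[0]                    # dJ/dp = -lam . (dA/dp) x

eps = 1e-6                                       # finite-difference check
xp = solve2(A_of(p + eps), b)
fd_grad = ((xp[0] + xp[1]) - (x[0] + x[1])) / eps
```

One adjoint solve reproduces the finite-difference gradient; with many design parameters the finite-difference route needs one extra state solve per parameter, which is the efficiency gain the record reports.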

  2. Blazhko modulation in the infrared

    NASA Astrophysics Data System (ADS)

    Jurcsik, J.; Hajdu, G.; Dékány, I.; Nuspl, J.; Catelan, M.; Grebel, E. K.

    2018-04-01

    We present the first direct evidence of modulation in the K band of Blazhko-type RR Lyrae stars that are identified by their secular modulations in the I-band data of the Optical Gravitational Lensing Experiment-IV. A method has been developed to decompose the K-band light variation into two parts originating from the temperature and the radius changes, using synthetic data from atmosphere-model grids. The amplitudes of the temperature and radius variations derived from the method for non-Blazhko RRab stars are in very good agreement with the results of the Baade-Wesselink analysis of RRab stars in the M3 globular cluster, confirming the applicability and correctness of the method. It has been found that the Blazhko modulation is primarily driven by the change in the temperature variation. The radius variation plays a marginal part; moreover, its sign is opposite to what would be expected if the Blazhko effect were caused by radius variations. This result reinforces the previous finding, based on the Baade-Wesselink analysis of M3 (NGC 5272) RR Lyrae stars, that significant modulation of the radius variations can only be detected in radial-velocity measurements, which rely on spectral lines that form in the uppermost atmospheric layers. Our result gives the first insight into the energetics and dynamics of the Blazhko phenomenon, hence it puts strong constraints on its possible physical explanations.

  3. FROG - Fingerprinting Genomic Variation Ontology

    PubMed Central

    Bhardwaj, Anshu

    2015-01-01

    Genetic variations play a crucial role in differential phenotypic outcomes. Given the complexity in establishing this correlation and the enormous data available today, it is imperative to design machine-readable, efficient methods to store, label, search and analyze these data. A semantic approach, FROG: “FingeRprinting Ontology of Genomic variations”, is implemented to label variation data based on location, function and interactions. FROG has six levels to describe the variation annotation, namely, chromosome, DNA, RNA, protein, variations and interactions. Each level is a conceptual aggregation of logically connected attributes, each of which comprises various properties for the variant. For example, at the chromosome level, one attribute is the location of the variation, which has two properties, allosomes or autosomes. Another attribute is the variation kind, which has four properties, namely, indel, deletion, insertion and substitution. Likewise, there are 48 attributes and 278 properties to capture the variation annotation across the six levels. Each property is then assigned a bit score, which in turn leads to the generation of a binary fingerprint based on the combination of these properties (mostly taken from existing variation ontologies). FROG is a novel and unique method designed for the purpose of labeling the entire variation data generated to date for efficient storage, search and analysis. A web-based platform is designed as a test case for users to navigate sample datasets and generate fingerprints. The platform is available at http://ab-openlab.csir.res.in/frog. PMID:26244889
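The bit-score fingerprinting step can be sketched with a hypothetical mini-ontology: each attribute-property pair gets a fixed bit position, and a variant's annotations switch those bits on. This is a handful of invented pairs for illustration, not the actual 48-attribute/278-property FROG schema.

```python
# Hypothetical mini-ontology: (attribute, property) pairs, one bit each.
PROPERTIES = [
    ("chromosome", "autosome"), ("chromosome", "allosome"),
    ("kind", "substitution"), ("kind", "insertion"),
    ("kind", "deletion"), ("kind", "indel"),
]
BIT = {prop: i for i, prop in enumerate(PROPERTIES)}

def fingerprint(annotations):
    # OR together the bit for every annotated property of the variant.
    fp = 0
    for attr, value in annotations:
        fp |= 1 << BIT[(attr, value)]
    return fp

variant = [("chromosome", "autosome"), ("kind", "deletion")]
fp = fingerprint(variant)
bits = format(fp, "06b")      # the binary fingerprint, one bit per property
```

Because the fingerprint is a plain integer, variants can be compared or searched with bitwise operations, which is the storage-and-search efficiency the record is after.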

  4. Empirical correction for earth sensor horizon radiance variation

    NASA Technical Reports Server (NTRS)

    Hashmall, Joseph A.; Sedlak, Joseph; Andrews, Daniel; Luquette, Richard

    1998-01-01

    A major limitation on the use of infrared horizon sensors for attitude determination is the variability of the height of the infrared Earth horizon. This variation includes a climatological component and a stochastic component of approximately equal importance. The climatological component shows regular variation with season and latitude, and models based on historical measurements have been used to compensate for these systematic changes. The stochastic component is analogous to tropospheric weather: it can cause extreme, localized changes that, for a period of days, overwhelm the climatological variation. An algorithm has been developed to compensate partially for the climatological variation of horizon height and at least to mitigate the stochastic variation. This method uses attitude and horizon sensor data from spacecraft to update a horizon height history as a function of latitude. For spacecraft that depend on horizon sensors for their attitudes (such as the Total Ozone Mapping Spectrometer-Earth Probe, TOMS-EP), a batch least squares attitude determination system is used. It is assumed that minimizing the average sensor residual throughout a full orbit of data results in attitudes that are nearly independent of local horizon height variations. The method depends on the additional assumption that the mean horizon height over all latitudes is approximately independent of season. Using these assumptions, the method yields the latitude-dependent portion of local horizon height variations. This paper describes the algorithm used to generate an empirical horizon height. Ideally, an international horizon height database could be established that would rapidly merge data from various spacecraft to provide timely corrections usable by all.
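
    The latitude-binning step described above can be sketched as follows. This is a minimal illustration under the stated assumptions (full-orbit attitude residuals already computed; mean horizon height over all latitudes independent of season); the function name, bin count, and synthetic data are ours, not taken from the operational software.

```python
import numpy as np

def horizon_height_correction(lat_deg, residual_km, nbins=18):
    """Estimate the latitude-dependent part of horizon-height variation.

    lat_deg: sub-sensor latitudes of horizon crossings (degrees)
    residual_km: horizon-sensor residuals after a full-orbit batch
                 least-squares attitude fit (km)
    """
    edges = np.linspace(-90.0, 90.0, nbins + 1)
    idx = np.clip(np.digitize(lat_deg, edges) - 1, 0, nbins - 1)
    correction = np.zeros(nbins)
    for b in range(nbins):
        sel = idx == b
        if sel.any():
            correction[b] = residual_km[sel].mean()
    # Enforce the stated assumption that the mean horizon height over all
    # latitudes is season-independent: remove the global mean so only the
    # latitude-dependent part of the variation remains.
    correction -= correction.mean()
    return edges, correction
```

    On synthetic residuals with a known latitude dependence plus a constant offset, the returned profile recovers the latitude-dependent part with the offset removed.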

  5. Half-quadratic variational regularization methods for speckle-suppression and edge-enhancement in SAR complex image

    NASA Astrophysics Data System (ADS)

    Zhao, Xia; Wang, Guang-xin

    2008-12-01

    Synthetic aperture radar (SAR) is an active remote sensing sensor. Because it is a coherent imaging system, speckle is its inherent defect, and speckle badly affects the interpretation and recognition of SAR targets. Conventional speckle-removal methods usually operate on the real-valued SAR image and smooth away the edges of the image while suppressing the speckle. Moreover, conventional methods lose the image phase information. Removing the speckle while simultaneously enhancing targets and edges remains an open problem. To suppress the speckle and enhance the targets and the edges simultaneously, a half-quadratic variational regularization method for complex SAR images is presented, which is based on prior knowledge of the targets and the edges. Because the cost function is non-quadratic, nonconvex, and complex, half-quadratic regularization is used to construct a new cost function, which is solved by alternate optimization. In the proposed scheme, the construction of the model, the solution of the model, and the selection of the model parameters are studied carefully. Finally, we validate the method using real SAR data. Theoretical analysis and the experimental results illustrate the feasibility of the proposed method. Furthermore, the proposed method preserves the image phase information.
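
    The alternate-optimization idea behind half-quadratic regularization can be illustrated on a 1-D denoising toy problem; this is our simplification, not the paper's complex-SAR model. With an edge-preserving potential phi(t) = sqrt(eps^2 + t^2), fixing auxiliary weights w = phi'(t)/(2t) makes the cost quadratic in the signal, and the two steps are alternated:

```python
import numpy as np

def half_quadratic_denoise(f, lam=2.0, eps=0.1, iters=30):
    """1-D toy half-quadratic regularization by alternate optimization:
    minimize ||u - f||^2 + lam * sum phi(Du) with phi(t) = sqrt(eps^2 + t^2).

    Step 1 (auxiliary variable): w = phi'(t) / (2 t) with t = Du held fixed,
    which makes the regularizer quadratic in u.
    Step 2 (quadratic subproblem): solve (I + lam * D^T W D) u = f exactly.
    """
    n = len(f)
    D = np.diff(np.eye(n), axis=0)                 # forward-difference operator
    u = f.copy()
    for _ in range(iters):
        t = D @ u
        w = 1.0 / (2.0 * np.sqrt(eps**2 + t**2))   # half-quadratic weights
        A = np.eye(n) + lam * D.T @ (w[:, None] * D)
        u = np.linalg.solve(A, f)
    return u
```

    On a noisy step signal this smooths the flat regions strongly (large weights where the gradient is small) while keeping the jump (small weight across the edge), unlike a plain quadratic penalty.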

  6. Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction

    PubMed Central

    Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen

    2016-01-01

    Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to impose a sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on the alternating minimization of an augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. Simulated and real data are evaluated qualitatively and quantitatively to validate the accuracy, efficiency, and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems. PMID:26901410
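
    The generalized p-shrinkage mapping used for the split subproblems can be written in a few lines. The sketch below follows the common Chartrand-style form (our illustration, not necessarily the authors' exact operator); it reduces to ordinary soft thresholding at p = 1:

```python
import numpy as np

def p_shrinkage(x, t, p):
    """Generalized p-shrinkage: sign(x) * max(|x| - t^(2-p) |x|^(p-1), 0).

    For p = 1 this is the usual soft-thresholding operator of l1-based
    splitting methods; for p < 1 it thresholds small entries more
    aggressively while shrinking large entries less.
    """
    mag = np.abs(x)
    safe = np.where(mag > 0, mag, 1.0)          # avoid 0 ** (negative power)
    thresh = t ** (2.0 - p) * safe ** (p - 1.0)
    return np.sign(x) * np.maximum(mag - thresh, 0.0)
```

    Applied elementwise to the split gradient variables, this is the explicit solution of each decoupled subproblem; at p = 1, p_shrinkage(x, t, 1) = sign(x) * max(|x| - t, 0).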

  7. A Calibration Method for Nanowire Biosensors to Suppress Device-to-device Variation

    PubMed Central

    Ishikawa, Fumiaki N.; Curreli, Marco; Chang, Hsiao-Kang; Chen, Po-Chiang; Zhang, Rui; Cote, Richard J.; Thompson, Mark E.; Zhou, Chongwu

    2009-01-01

    Nanowire/nanotube biosensors have stimulated significant interest; however, the inevitable device-to-device variation in biosensor performance remains a great challenge. We have developed an analytical method to calibrate nanowire biosensor responses that can significantly suppress the device-to-device variation in sensing response. The method is based on our discovery of a strong correlation between the biosensor gate dependence (dIds/dVg) and the absolute response (absolute change in current, ΔI). In2O3 nanowire based biosensors for streptavidin detection were used as the model system. Studying the liquid gate effect and the ionic concentration dependence of streptavidin sensing indicates that electrostatic interaction is the dominant mechanism for the sensing response. Based on this sensing mechanism and transistor physics, a linear correlation between the absolute sensor response (ΔI) and the gate dependence (dIds/dVg) is predicted and confirmed experimentally. Using this correlation, a calibration method was developed where the absolute response is divided by dIds/dVg for each device, and the calibrated responses from different devices behaved almost identically. Compared to the common normalization method (normalization of the conductance/resistance/current by its initial value), this calibration method was proved advantageous using a conventional transistor model. The method presented here substantially suppresses device-to-device variation, allowing the use of nanosensors in large arrays. PMID:19921812
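
    The calibration itself is a one-line normalization. The sketch below uses hypothetical device numbers (illustrative only, not measured data from the paper) to show how dividing the absolute response by each device's transconductance collapses the device-to-device spread:

```python
import numpy as np

# Hypothetical measurements for three nanowire devices exposed to the same
# analyte concentration (values are illustrative only).
delta_I = np.array([1.0e-7, 2.5e-7, 4.0e-7])   # absolute response, dI (A)
gm = np.array([2.0e-6, 5.0e-6, 8.0e-6])        # gate dependence dIds/dVg (A/V)

# Raw responses differ by a factor of 4 between devices...
raw_spread = delta_I.max() / delta_I.min()

# ...but the calibrated response dI / (dIds/dVg) is device-independent,
# because under electrostatic gating dI scales linearly with gm.
calibrated = delta_I / gm
```

    Here all three calibrated responses equal 0.05 V, i.e. the effective gate-voltage shift induced by the analyte, regardless of each device's individual transconductance.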

  8. Variational Approach to Monte Carlo Renormalization Group

    NASA Astrophysics Data System (ADS)

    Wu, Yantao; Car, Roberto

    2017-12-01

    We present a Monte Carlo method for computing the renormalized coupling constants and the critical exponents within renormalization theory. The scheme, which derives from a variational principle, overcomes critical slowing down, by means of a bias potential that renders the coarse grained variables uncorrelated. The two-dimensional Ising model is used to illustrate the method.

  9. Thermal and acid tolerant beta-xylosidases, genes encoding, related organisms, and methods

    DOEpatents

    Thompson, David N [Idaho Falls, ID; Thompson, Vicki S [Idaho Falls, ID; Schaller, Kastli D [Ammon, ID; Apel, William A [Jackson, WY; Lacey, Jeffrey A [Idaho Falls, ID; Reed, David W [Idaho Falls, ID

    2011-04-12

    Isolated and/or purified polypeptides and nucleic acid sequences encoding polypeptides from Alicyclobacillus acidocaldarius and variations thereof are provided. Further provided are methods of at least partially degrading xylotriose and/or xylobiose using isolated and/or purified polypeptides and nucleic acid sequences encoding polypeptides from Alicyclobacillus acidocaldarius and variations thereof.

  10. A Report on the Methods to Associate Variations in Aquatic Biotic Condition to Variations in Stressors

    EPA Science Inventory

    In this report we present examples of methods that we have used to explore associations between aquatic biotic condition and stressors in two different aquatic systems: estuaries and lakes. We review metrics and indices of biotic condition in lakes and estuaries; discuss some ph...

  11. Mixed Gaussian-Impulse Noise Image Restoration Via Total Variation

    DTIC Science & Technology

    2012-05-01

    Several Total Variation (TV) regularization methods have recently been proposed to address denoising under mixed Gaussian and impulse noise.

  12. On the optimal use of fictitious time in variation of parameters methods with application to BG14

    NASA Technical Reports Server (NTRS)

    Gottlieb, Robert G.

    1991-01-01

    The optimal way to use fictitious time in variation of parameter methods is presented. Setting fictitious time to zero at the end of each step is shown to cure the instability associated with some types of problems. Only some parameters are reinitialized, thereby retaining redundant information.

  13. A Decision-Based Modified Total Variation Diffusion Method for Impulse Noise Removal

    PubMed Central

    Zhu, Qingxin; Song, Xiuli; Tao, Jinsong

    2017-01-01

    Impulse noise removal usually employs median filtering, switching median filtering, the total variation L1 method, and their variants. These approaches, however, often introduce excessive smoothing and can result in extensive visual feature blurring, and thus are suitable only for images with low-density noise. A new method to remove noise is proposed in this paper to overcome this limitation; it divides pixels into different categories based on different noise characteristics. If an image is corrupted by salt-and-pepper noise, the pixels are divided into corrupted and noise-free; if the image is corrupted by random-valued impulses, the pixels are divided into corrupted, noise-free, and possibly corrupted. Pixels falling into different categories are processed differently. If a pixel is corrupted, modified total variation diffusion is applied; if the pixel is possibly corrupted, weighted total variation diffusion is applied; otherwise, the pixel is left unchanged. Experimental results show that the proposed method is robust to different noise strengths and suitable for different images, with strong noise removal capability as shown by PSNR/SSIM results as well as the visual quality of restored images. PMID:28536602
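
    The decision step for the salt-and-pepper case can be sketched as follows. The classification rule (extreme gray levels are corrupted) is as described above; as a simple stand-in for the paper's modified total-variation diffusion, this sketch replaces each corrupted pixel with the median of its noise-free neighbors, while noise-free pixels are left unchanged:

```python
import numpy as np

def classify_and_restore_sp(img):
    """Salt-and-pepper branch of a decision-based filter (sketch).

    Pixels at the extreme gray levels (0 or 255) are classified as
    corrupted; all other pixels are treated as noise-free and left
    unchanged. Each corrupted pixel is replaced by the median of the
    noise-free pixels in its 3x3 neighborhood (a stand-in for the
    modified total-variation diffusion used in the paper).
    """
    corrupted = (img == 0) | (img == 255)
    out = img.astype(float).copy()
    h, w = img.shape
    for i, j in zip(*np.nonzero(corrupted)):
        ys = slice(max(i - 1, 0), min(i + 2, h))
        xs = slice(max(j - 1, 0), min(j + 2, w))
        good = out[ys, xs][~corrupted[ys, xs]]   # noise-free neighbors only
        if good.size:
            out[i, j] = np.median(good)
    return out, corrupted
```

    Processing only the flagged pixels is what lets decision-based schemes avoid the global over-smoothing of plain median or TV-L1 filtering.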

  14. Variation and Defect Tolerance for Nano Crossbars

    NASA Astrophysics Data System (ADS)

    Tunc, Cihan

    With the extreme shrinking of CMOS technology, quantum effects and manufacturing issues are becoming more crucial, and further reduction of CMOS feature size is becoming more challenging, difficult, and costly. On the other hand, emerging nanotechnology has attracted many researchers, since further scaling has been demonstrated by manufacturing nanowires, carbon nanotubes, and molecular switches using bottom-up manufacturing techniques. In addition to this progress in manufacturing, developments in architecture show that emerging nanoelectronic devices will be promising for future system designs. Using nano crossbars, which are composed of two sets of perpendicular nanowires with programmable intersections, it is possible to implement logic functions. Nano crossbars also offer important features such as regularity, reprogrammability, and interchangeability, and by combining these features researchers have presented several effective architectures. Although bottom-up nanofabrication can greatly reduce manufacturing costs, its low process controllability raises critical issues: it results in much higher variation than the conventional top-down lithography used in CMOS technology, and an increased failure rate is expected. Variation and defect tolerance methods developed for conventional CMOS technology are inadequate for emerging nanotechnology, because both the variation and the defect rate are much higher than in current CMOS technology; dedicated variation and defect tolerance methods are therefore necessary for a successful transition. In this work, to tolerate variations in crossbars, we introduce a framework based on the reprogrammability and interchangeability of nano crossbars. This framework is shown to be applicable to both FET-based and diode-based nano crossbars. We present a characterization testing method that requires a minimal number of test vectors. We formulate the variation optimization problem using simulated annealing with different optimization goals. Furthermore, we extend the framework to defect tolerance. Experimental results and comparison of the proposed framework with exhaustive methods confirm its effectiveness for both variation and defect tolerance.
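
    A minimal version of the simulated-annealing mapping step can be sketched as follows. The delay matrix, move set, and cooling schedule are illustrative assumptions, not the dissertation's actual formulation: logic functions are assigned to crossbar wires by a permutation (exploiting interchangeability), and random swap moves are accepted with the Metropolis criterion to minimize the worst-case delay under per-wire variation:

```python
import math
import random

def anneal_mapping(delay, steps=20000, t0=1.0, cooling=0.9995):
    """Assign function i to wire perm[i] so the worst-case delay is minimized.

    delay[i][j]: delay if logic function i is mapped onto nanowire j
    (capturing per-wire process variation). Moves are random swaps of two
    wire assignments; worse moves are accepted with Metropolis probability.
    """
    n = len(delay)
    perm = list(range(n))
    cost = lambda p: max(delay[i][p[i]] for i in range(n))
    cur = cost(perm)
    best_perm, best = perm[:], cur
    t = t0
    for _ in range(steps):
        a, b = random.sample(range(n), 2)
        perm[a], perm[b] = perm[b], perm[a]
        new = cost(perm)
        if new <= cur or random.random() < math.exp((cur - new) / t):
            cur = new
            if cur < best:                         # track best-so-far mapping
                best, best_perm = cur, perm[:]
        else:
            perm[a], perm[b] = perm[b], perm[a]    # reject: undo the swap
        t *= cooling
    return best_perm, best
```

    Because the search starts from the identity mapping and keeps the best configuration seen, the returned assignment is never worse than the unoptimized one; different optimization goals would simply substitute a different cost function.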

  15. An inverse method for determining the spatially resolved properties of viscoelastic–viscoplastic three-dimensional printed materials

    PubMed Central

    Chen, X.; Ashcroft, I. A.; Wildman, R. D.; Tuck, C. J.

    2015-01-01

    A method using experimental nanoindentation and inverse finite-element analysis (FEA) has been developed that enables the spatial variation of material constitutive properties to be accurately determined. The method was used to measure property variation in a three-dimensional printed (3DP) polymeric material. The accuracy of the method is dependent on the applicability of the constitutive model used in the inverse FEA, hence four potential material models: viscoelastic, viscoelastic–viscoplastic, nonlinear viscoelastic and nonlinear viscoelastic–viscoplastic were evaluated, with the latter enabling the best fit to experimental data. Significant changes in material properties were seen in the depth direction of the 3DP sample, which could be linked to the degree of cross-linking within the material, a feature inherent in a UV-cured layer-by-layer construction method. It is proposed that the method is a powerful tool in the analysis of manufacturing processes with potential spatial property variation that will also enable the accurate prediction of final manufactured part performance. PMID:26730216

  17. Variational method of determining effective moduli of polycrystals with tetragonal symmetry

    USGS Publications Warehouse

    Meister, R.; Peselnick, L.

    1966-01-01

    Variational principles have been applied to aggregates of randomly oriented pure-phase polycrystals having tetragonal symmetry. The bounds of the effective elastic moduli obtained in this way show a substantial improvement over the bounds obtained by means of the Voigt and Reuss assumptions. The Hill average is found to be a good approximation in most cases when compared to the bounds found from the variational method. The new bounds reduce in their limits to the Voigt and Reuss values. © 1966 The American Institute of Physics.
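
    For a concrete sense of the Voigt and Reuss bounds that the variational bounds improve upon, the standard averages can be computed directly from the single-crystal stiffness matrix. The tetragonal constants below are illustrative values chosen by us, not data from the paper:

```python
import numpy as np

def voigt_reuss_hill(C):
    """Voigt (upper) and Reuss (lower) bounds and the Hill average for the
    bulk modulus K and shear modulus G of a randomly oriented polycrystal,
    from the 6x6 single-crystal stiffness matrix C in Voigt notation.
    Returns (K_V, K_R, K_Hill, G_V, G_R, G_Hill)."""
    S = np.linalg.inv(C)                           # compliance matrix
    K_V = (C[0,0] + C[1,1] + C[2,2] + 2*(C[0,1] + C[0,2] + C[1,2])) / 9.0
    G_V = (C[0,0] + C[1,1] + C[2,2] - C[0,1] - C[0,2] - C[1,2]
           + 3*(C[3,3] + C[4,4] + C[5,5])) / 15.0
    K_R = 1.0 / (S[0,0] + S[1,1] + S[2,2] + 2*(S[0,1] + S[0,2] + S[1,2]))
    G_R = 15.0 / (4*(S[0,0] + S[1,1] + S[2,2]) - 4*(S[0,1] + S[0,2] + S[1,2])
                  + 3*(S[3,3] + S[4,4] + S[5,5]))
    return K_V, K_R, (K_V + K_R) / 2, G_V, G_R, (G_V + G_R) / 2

# Illustrative tetragonal stiffness matrix (GPa): c11=c22, c13=c23, c44=c55.
c11, c33, c44, c66, c12, c13 = 270.0, 480.0, 125.0, 190.0, 180.0, 150.0
C = np.array([
    [c11, c12, c13, 0,   0,   0  ],
    [c12, c11, c13, 0,   0,   0  ],
    [c13, c13, c33, 0,   0,   0  ],
    [0,   0,   0,   c44, 0,   0  ],
    [0,   0,   0,   0,   c44, 0  ],
    [0,   0,   0,   0,   0,   c66],
])
```

    The variational (Hashin-Shtrikman-type) bounds discussed in the paper lie strictly inside the Voigt-Reuss interval computed here, which is why the improvement matters.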

  18. Numerical realization of the variational method for generating self-trapped beams.

    PubMed

    Duque, Erick I; Lopez-Aguayo, Servando; Malomed, Boris A

    2018-03-19

    We introduce a numerical variational method based on the Rayleigh-Ritz optimization principle for predicting two-dimensional self-trapped beams in nonlinear media. This technique overcomes the limitation of the traditional variational approximation in performing analytical Lagrangian integration and differentiation. Approximate soliton solutions of a generalized nonlinear Schrödinger equation are obtained, demonstrating robustness of the beams of various types (fundamental, vortices, multipoles, azimuthons) in the course of their propagation. The algorithm offers possibilities to produce more sophisticated soliton profiles in general nonlinear models.
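
    The core Rayleigh-Ritz idea, evaluating the energy functional numerically over a trial family instead of integrating the Lagrangian analytically, can be illustrated in a much simpler setting than the 2D beam problem. The toy below (our example, not the paper's algorithm) finds the ground state of the 1D harmonic oscillator by minimizing the numerically evaluated Rayleigh quotient over the width of a Gaussian trial function:

```python
import numpy as np

def rayleigh_quotient(sigma, n=2001, L=20.0):
    """Numerically evaluate E[psi] = <psi|H|psi>/<psi|psi> for the 1-D
    harmonic oscillator H = -1/2 d^2/dx^2 + x^2/2 with the Gaussian trial
    function psi(x) = exp(-x^2 / (2 sigma^2))."""
    x = np.linspace(-L / 2, L / 2, n)
    dx = x[1] - x[0]
    psi = np.exp(-x**2 / (2.0 * sigma**2))
    dpsi = np.gradient(psi, dx)
    # Kinetic term via integration by parts: <psi|-1/2 d2/dx2|psi> = 1/2 int (psi')^2
    kinetic = 0.5 * np.sum(dpsi**2) * dx
    potential = 0.5 * np.sum(x**2 * psi**2) * dx
    norm = np.sum(psi**2) * dx
    return (kinetic + potential) / norm

# Rayleigh-Ritz optimization: minimize over the trial width on a grid.
sigmas = np.linspace(0.5, 2.0, 151)
energies = [rayleigh_quotient(s) for s in sigmas]
best = sigmas[int(np.argmin(energies))]
```

    The numerical minimum lands at sigma close to 1 with energy close to the exact ground-state value 1/2; the paper applies the same evaluate-and-optimize loop to 2D self-trapped beam profiles where the integrals have no closed form.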

  19. Abnormal Circulation Changes in the Winter Stratosphere, Detected Through Variations of D Region Ionospheric Absorption

    NASA Technical Reports Server (NTRS)

    Delamorena, B. A.

    1984-01-01

    A method to detect stratospheric warmings using ionospheric absorption records obtained with an absorption meter (method A3) is introduced. During the winter anomaly, the stratospheric circulation, the D region ionospheric absorption, and other atmospheric parameters undergo an abnormal variation. The onset of this abnormal variation was found to be simultaneous in all of these parameters, so the absorption records can be used to detect the initiation of a stratospheric warming. Results of this forecasting experiment at the El Arenosillo Range are presented.

  20. Estimation of biological variation and reference change value of glycated hemoglobin (HbA(1c)) when two analytical methods are used.

    PubMed

    Ucar, Fatma; Erden, Gonul; Ginis, Zeynep; Ozturk, Gulfer; Sezer, Sevilay; Gurler, Mukaddes; Guneyk, Ahmet

    2013-10-01

    Available data on the biological variation of HbA1c reveal marked heterogeneity. We therefore investigated and estimated the components of biological variation for HbA1c in a group of healthy individuals by applying a recommended and strictly designed study protocol using two different assay methods. Each month, samples were collected on the same day, for three months. Four EDTA whole blood samples were collected from each individual (20 women, 9 men; 20-45 years of age) and stored at -80°C until analysis. HbA1c values were measured by both high performance liquid chromatography (HPLC) (Shimadzu, Prominence, Japan) and boronate affinity chromatography (Trinity Biotech, Premier Hb9210, Ireland). All samples were assayed in duplicate in a single batch for each assay method. Estimates were calculated according to the formulas described by Fraser and Harris. The within-subject (CV(I)) and between-subject (CV(G)) biological variations were 1.17% and 5.58%, respectively, for HPLC. The calculated CV(I) and CV(G) were 2.15% and 4.03%, respectively, for boronate affinity chromatography. The reference change value (RCV) for HPLC and boronate affinity chromatography was 5.4% and 10.4%, respectively, and the index of individuality of HbA1c was 0.35 and 0.93, respectively. This study is the first to describe the components of biological variation for HbA1c in healthy individuals by two different assay methods. The findings showed that the difference between the CV(A) values of the methods may considerably affect the RCV. These data regarding the biological variation of HbA1c could be useful for a better evaluation of HbA1c test results in clinical interpretation. Copyright © 2013 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
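
    The Fraser-Harris quantities reported above follow from two short formulas. In the sketch below, the analytical CV value is an illustrative assumption of ours (the abstract does not state CV(A)), chosen so the computation roughly reproduces the reported HPLC figures:

```python
import math

def reference_change_value(cv_a, cv_i, z=1.96):
    """RCV = 2^(1/2) * Z * (CV_A^2 + CV_I^2)^(1/2): the smallest percentage
    difference between two serial results that is significant at the chosen
    Z (1.96 for 95%, two-sided)."""
    return math.sqrt(2.0) * z * math.hypot(cv_a, cv_i)

def index_of_individuality(cv_a, cv_i, cv_g):
    """II = (CV_A^2 + CV_I^2)^(1/2) / CV_G; when II is low (below about
    0.6), population-based reference intervals are of limited use for
    interpreting an individual's results."""
    return math.hypot(cv_a, cv_i) / cv_g

# Illustrative analytical CV for the HPLC method (assumed, not reported),
# with the within- and between-subject CVs quoted in the abstract:
cv_a, cv_i, cv_g = 1.56, 1.17, 5.58
rcv = reference_change_value(cv_a, cv_i)
ii = index_of_individuality(cv_a, cv_i, cv_g)
```

    With these inputs the sketch yields an RCV near the reported 5.4% and an index of individuality near 0.35, showing how a larger CV(A), as for the boronate affinity method, inflates the RCV.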

  1. SU-D-207A-07: The Effects of Inter-Cycle Respiratory Motion Variation On Dose Accumulation in Single Fraction MR-Guided SBRT Treatment of Renal Cell Carcinoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stemkens, B; Glitzner, M; Kontaxis, C

    Purpose: To assess the dose deposition in simulated single-fraction MR-Linac treatments of renal cell carcinoma, when inter-cycle respiratory motion variation is taken into account using online MRI. Methods: Three motion characterization methods, with increasing complexity, were compared to evaluate the effect of inter-cycle motion variation and drifts on the accumulated dose for an SBRT kidney MR-Linac treatment: 1) STATIC, in which static anatomy was assumed, 2) AVG-RESP, in which 4D-MRI phase-volumes were time-weighted, based on the respiratory phase and 3) PCA, in which 3D volumes were generated using a PCA-model, enabling the detection of inter-cycle variations and drifts. An experimental ITV-based kidney treatment was simulated in a 1.5T magnetic field on three volunteer datasets. For each volunteer a retrospectively sorted 4D-MRI (ten respiratory phases) and fast 2D cine-MR images (temporal resolution = 476ms) were acquired to simulate MR-imaging during radiation. For each method, the high spatio-temporal resolution 3D volumes were non-rigidly registered to obtain deformation vector fields (DVFs). Using the DVFs, pseudo-CTs (generated from the 4D-MRI) were deformed and the dose was accumulated for the entire treatment. The accuracies of all methods were independently determined using an additional, orthogonal 2D-MRI slice. Results: Motion was most accurately estimated using the PCA method, which correctly estimated drifts and inter-cycle variations (RMSE=3.2, 2.2, 1.1mm on average for STATIC, AVG-RESP and PCA, compared to the 2D-MRI slice). Dose-volume parameters on the ITV showed moderate changes (D99=35.2, 32.5, 33.8Gy for STATIC, AVG-RESP and PCA). AVG-RESP showed distinct hot/cold spots outside the ITV margin, which were more distributed for the PCA scenario, since inter-cycle variations were not modeled by the AVG-RESP method. Conclusion: Dose differences were observed when inter-cycle variations were taken into account. The increased inter-cycle randomness in motion as captured by the PCA model mitigates the local (erroneous) hotspots estimated by the AVG-RESP method.

  2. Characterization of surface and ground water δ18O seasonal variation and its use for estimating groundwater residence times

    USGS Publications Warehouse

    Reddy, Michael M.; Schuster, Paul; Kendall, Carol; Reddy, Micaela B.

    2006-01-01

    18O is an ideal tracer for characterizing hydrological processes because it can be reliably measured in several watershed hydrological compartments. Here, we present multiyear isotopic data, i.e. 18O variations (δ18O), for precipitation inputs, surface water and groundwater in the Shingobee River Headwaters Area (SRHA), a well-instrumented research catchment in north-central Minnesota. SRHA surface waters exhibit δ18O seasonal variations similar to those of groundwaters, and seasonal δ18O variations plotted versus time fit seasonal sine functions. These seasonal δ18O variations were interpreted to estimate surface water and groundwater mean residence times (MRTs) at sampling locations near topographically closed-basin lakes. MRT variations of about 1 to 16 years have been estimated over an area covering about 9 km2 from the basin boundary to the most downgradient well. Estimated MRT error (±0.3 to ±0.7 years) is small for short MRTs and is much larger (±10 years) for a well with an MRT (16 years) near the limit of the method. Groundwater transit time estimates based on Darcy's law, tritium content, and the seasonal δ18O amplitude approach appear to be consistent within the limits of each method. The results from this study suggest that use of the δ18O seasonal variation method to determine MRTs can help assess groundwater recharge areas in small headwaters catchments.
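
    The amplitude-damping calculation behind the seasonal δ18O method can be sketched for an exponential residence-time model; the sine fit is a linear least-squares problem, and the MRT follows from the ratio of seasonal amplitudes. The synthetic series and parameter values below are ours, not SRHA data:

```python
import numpy as np

def seasonal_amplitude(t_years, series):
    """Fit a*sin(wt) + b*cos(wt) + c with w = 2*pi/yr by linear least
    squares and return the seasonal amplitude sqrt(a^2 + b^2)."""
    w = 2.0 * np.pi
    X = np.column_stack([np.sin(w * t_years), np.cos(w * t_years),
                         np.ones_like(t_years)])
    coef, *_ = np.linalg.lstsq(X, series, rcond=None)
    return float(np.hypot(coef[0], coef[1]))

def mrt_from_amplitudes(amp_in, amp_out):
    """Exponential-model mean residence time (years) from the damping of
    the seasonal cycle: A_out / A_in = (1 + (w T)^2)^(-1/2)."""
    w = 2.0 * np.pi
    return np.sqrt((amp_in / amp_out) ** 2 - 1.0) / w

# Synthetic example: precipitation d18O with a 3 permil seasonal amplitude,
# and groundwater damped and lagged as for a 2-year mean residence time.
t = np.linspace(0.0, 4.0, 400)
T_true, w = 2.0, 2.0 * np.pi
damp = 1.0 / np.sqrt(1.0 + (w * T_true) ** 2)
precip = -10.0 + 3.0 * np.sin(w * t)
ground = -10.0 + 3.0 * damp * np.sin(w * t - np.arctan(w * T_true))
T_est = mrt_from_amplitudes(seasonal_amplitude(t, precip),
                            seasonal_amplitude(t, ground))
```

    With real data the fitted amplitudes carry noise, which is why the estimated MRT error grows rapidly as the output amplitude approaches the detection limit, as the abstract notes for the 16-year well.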

  4. Analysis of PVA/AA based photopolymers at the zero spatial frequency limit using interferometric methods.

    PubMed

    Gallego, Sergi; Márquez, Andrés; Méndez, David; Ortuño, Manuel; Neipp, Cristian; Fernández, Elena; Pascual, Inmaculada; Beléndez, Augusto

    2008-05-10

    One of the problems associated with photopolymers as optical recording media is thickness variation during the recording process. Different values of shrinkage or swelling are reported in the literature for photopolymers, and these variations depend on the spatial frequencies of the gratings stored in the materials. Thickness variations can be measured using different methods: studying the deviation from Bragg's angle for nonslanted gratings, using a MicroXAM S/N 8038 interferometer, or by thermomechanical analysis experiments. In a previous paper, we began the characterization of the properties of a polyvinyl alcohol/acrylamide based photopolymer at the lowest end of recorded spatial frequencies. In this work, we continue analyzing the thickness variations of these materials using a reflection interferometer. With this technique we are able to obtain the variations of the layer's refractive index and, therefore, a direct estimation of the polymer refractive index.

  5. Geometric constrained variational calculus. II: The second variation (Part I)

    NASA Astrophysics Data System (ADS)

    Massa, Enrico; Bruno, Danilo; Luria, Gianvittorio; Pagani, Enrico

    2016-10-01

    Within the geometrical framework developed in [Geometric constrained variational calculus. I: Piecewise smooth extremals, Int. J. Geom. Methods Mod. Phys. 12 (2015) 1550061], the problem of minimality for constrained calculus of variations is analyzed among the class of differentiable curves. A fully covariant representation of the second variation of the action functional, based on a suitable gauge transformation of the Lagrangian, is explicitly worked out. Both necessary and sufficient conditions for minimality are proved, and reinterpreted in terms of Jacobi fields.

  6. SCHOOL DROPOUTS--A COMMENTARY AND ANNOTATED BIBLIOGRAPHY.

    ERIC Educational Resources Information Center

    MILLER, S.M.; AND OTHERS

    RESEARCH ON SCHOOL DROPOUTS IS HANDICAPPED IN THE FOLLOWING AREAS--DEFINITION OF THE DROPOUT POPULATION, INCONSISTENT METHODS OF DATA COLLECTION, INADEQUATE RESEARCH DESIGNS, COMMUNITY VARIATION, VARIATION IN TYPE OF DROPOUT, AND KNOWLEDGE OF THE PROCESS OF DROPPING OUT. DROPOUT GROUPS SHOULD BE CLEARLY DEFINED, AND VARIATION IN THESE GROUPS…

  7. Germanium soup

    NASA Astrophysics Data System (ADS)

    Palmer, Troy A.; Alexay, Christopher C.

    2006-05-01

    This paper addresses the variety and impact of dispersive model variations for infrared materials and, in particular, the level to which certain optical designs are affected by this potential variation in germanium. This work offers a method for anticipating and/or minimizing the pitfalls such potential model variations may have on a candidate optical design.

  8. Method of calibrating an interferometer and reducing its systematic noise

    NASA Technical Reports Server (NTRS)

    Hammer, Philip D. (Inventor)

    1997-01-01

    Methods of operation and data analysis for an interferometer so as to eliminate the errors contributed by non-responsive or unstable pixels, interpixel gain variations that drift over time, and spurious noise that would otherwise degrade the operation of the interferometer are disclosed. The methods provide for either online or post-processing calibration. The methods apply prescribed reversible transformations that exploit the physical properties of interferograms obtained from said interferometer to derive a calibration reference signal for subsequent treatment of said interferograms for interpixel gain variations. A self-consistent approach for treating bad pixels is incorporated into the methods.

  9. Stimulatory effect of vascular endothelial growth factor on progesterone production and survivability of cultured bubaline luteal cells.

    PubMed

    Chouhan, V S; Dangi, S S; Gupta, M; Babitha, V; Khan, F A; Panda, R P; Yadav, V P; Singh, G; Sarkar, M

    2014-08-01

    The objectives of the present study were to investigate the effects of vascular endothelial growth factor (VEGF) on progesterone (P4) synthesis in cultured luteal cells from different stages of the estrous cycle and on expression of steroidogenic acute regulatory protein (STARD1), cytochrome P450 cholesterol side chain cleavage (CYP11A1) and 3β-hydroxysteroid dehydrogenase (HSD3B), the antiapoptotic gene PCNA, and the proapoptotic gene BAX in luteal cells obtained from the mid-luteal phase (MLP) of the estrous cycle in buffalo. Corpus luteum samples from the early luteal phase (ELP; days 1-4; n=4), MLP (days 5-10; n=4), and the late luteal phase (LLP; days 11-16; n=4) of the estrous cycle were obtained from a slaughterhouse. Luteal cell cultures were treated with VEGF (0, 1, 10 and 100 ng/ml) for 24, 48 and 72 h. Progesterone was assessed by RIA, while mRNA expression was determined by quantitative real-time PCR (qRT-PCR). Results indicated a dose- and time-dependent stimulatory effect of VEGF on P4 synthesis and the expression of steroidogenic enzymes. Moreover, VEGF treatment led to an increase in PCNA expression and a decrease in BAX expression. In summary, these findings suggest that VEGF acts locally in the bubaline CL to modulate steroid hormone synthesis and cell survivability, which indicates that this factor has an important role as a regulator of CL development and function in buffalo. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. The extracellular matrix locally regulates asynchronous concurrent lactation in tammar wallaby (Macropus eugenii).

    PubMed

    Wanyonyi, Stephen S; Lefevre, Christophe; Sharp, Julie A; Nicholas, Kevin R

    2013-08-08

    Asynchronous concurrent lactation (ACL) is an extreme lactation strategy in macropod marsupials, including the tammar wallaby, that may hold the key to understanding local control of mammary epithelial cell function. Marsupials have a short gestation and a long lactation consisting of three phases, P2A, P2B and P3, representing early, mid and late lactation respectively and characterised by profound changes in milk composition. A lactating tammar is able to concurrently produce phase 2A and phase 3 milk from adjacent glands in order to feed a newborn young and an older sibling at heel. The physiological effectors of ACL remain unknown; in this study the extracellular matrix (ECM) is investigated for its role in switching mammary phenotypes between phases of tammar wallaby lactation. Using the level of expression of the genes for the phase-specific markers tELP, tWAP and tLLP-B, representing phases 2A, 2B and 3 respectively, we show for the first time that tammar wallaby mammary epithelial cells (WallMECs) extracted from P2B acquire the P3 phenotype when cultured on P3 ECM. Similarly, P2A cells acquire the P2B phenotype when cultured on P2B ECM. We further demonstrate that changes in phase phenotype correlate with phase-specific changes in ECM composition. This study shows that progressive changes in ECM composition in individual mammary glands provide a local regulatory mechanism for milk protein gene expression, thereby enabling the mammary glands to lactate independently. Copyright © 2013. Published by Elsevier B.V.

  11. Quantifying surface water–groundwater interactions using time series analysis of streambed thermal records: Method development

    USGS Publications Warehouse

    Hatch, Christine E; Fisher, Andrew T.; Revenaugh, Justin S.; Constantz, Jim; Ruehl, Chris

    2006-01-01

    We present a method for determining streambed seepage rates using time series thermal data. The new method is based on quantifying changes in phase and amplitude of temperature variations between pairs of subsurface sensors. For a reasonable range of streambed thermal properties and sensor spacings the time series method should allow reliable estimation of seepage rates for a range of at least ±10 m d−1 (±1.2 × 10−2 m s−1), with amplitude variations being most sensitive at low flow rates and phase variations retaining sensitivity out to much higher rates. Compared to forward modeling, the new method requires less observational data and less setup and data handling and is faster, particularly when interpreting many long data sets. The time series method is insensitive to streambed scour and sedimentation, which allows for application under a wide range of flow conditions and allows time series estimation of variable streambed hydraulic conductivity. This new approach should facilitate wider use of thermal methods and improve understanding of the complex spatial and temporal dynamics of surface water–groundwater interactions.
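The two observables the method inverts, amplitude ratio and phase lag between a sensor pair, can be extracted with a least-squares sinusoid fit. A minimal sketch on synthetic, noise-free records (sensor spacing, streambed thermal properties, and the final seepage-rate inversion are omitted, so all numbers below are illustrative assumptions):

```python
import numpy as np

fs = 24                                   # samples per day
days = 10
t = np.arange(days * fs) / fs             # time in days
f_d = 1.0                                 # diurnal frequency, cycles per day

# Synthetic records: the deeper sensor sees a damped, delayed diurnal signal
shallow = 15 + 2.0 * np.sin(2 * np.pi * f_d * t)
deep = 15 + 0.8 * np.sin(2 * np.pi * f_d * (t - 0.1))   # 0.1-day lag

def diurnal_fit(x, t, f):
    """Least-squares sinusoid fit at frequency f; returns (amplitude, phase)."""
    A = np.column_stack([np.sin(2*np.pi*f*t), np.cos(2*np.pi*f*t), np.ones_like(t)])
    s, c, _ = np.linalg.lstsq(A, x, rcond=None)[0]
    return np.hypot(s, c), np.arctan2(c, s)

A1, p1 = diurnal_fit(shallow, t, f_d)
A2, p2 = diurnal_fit(deep, t, f_d)
amp_ratio = A2 / A1                           # most sensitive at low seepage rates
phase_lag_days = (p1 - p2) / (2*np.pi*f_d)    # keeps sensitivity at high rates
```

In the actual method these two quantities, together with thermal properties and sensor spacing, feed an analytical heat-transport solution to yield the seepage rate.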

  12. Using Confirmatory Factor Analysis to Understand Executive Control in Preschool Children: Sources of Variation in Emergent Mathematic Achievement

    ERIC Educational Resources Information Center

    Bull, Rebecca; Espy, Kimberly Andrews; Wiebe, Sandra A.; Sheffield, Tiffany D.; Nelson, Jennifer Mize

    2011-01-01

    Latent variable modeling methods have demonstrated utility for understanding the structure of executive control (EC) across development. These methods are utilized to better characterize the relation between EC and mathematics achievement in the preschool period, and to understand contributing sources of individual variation. Using the sample and…

  13. Length polymorphism scanning is an efficient approach for revealing chloroplast DNA variation.

    Treesearch

    Matthew E. Horning; Richard C. Cronn

    2006-01-01

    Phylogeographic and population genetic screens of chloroplast DNA (cpDNA) provide insights into seedbased gene flow in angiosperms, yet studies are frequently hampered by the low mutation rate of this genome. Detection methods for intraspecific variation can be either direct (DNA sequencing) or indirect (PCR-RFLP), although no single method incorporates the best...

  14. Variational method for lattice spectroscopy with ghosts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burch, Tommy; Hagen, Christian; Gattringer, Christof

    2006-01-01

    We discuss the variational method used in lattice spectroscopy calculations. In particular we address the role of ghost contributions which appear in quenched or partially quenched simulations and have a nonstandard euclidean time dependence. We show that the ghosts can be separated from the physical states. Our result is illustrated with numerical data for the scalar meson.
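The variational method referred to here reduces to a generalized eigenvalue problem for a matrix of correlators. A hedged numeric sketch with entirely synthetic correlators (the energies and operator overlaps below are made up; ghost contributions with their non-standard euclidean time dependence are not modeled):

```python
import numpy as np

E_true = np.array([0.5, 0.9, 1.4])            # assumed energies (lattice units)
V = np.array([[1.0, 0.4, 0.2],
              [0.3, 1.0, 0.5],
              [0.1, 0.6, 1.0]])               # assumed operator-state overlaps

def corr(t):
    # C_ij(t) = sum_n V[i,n] V[j,n] exp(-E_n t)
    return (V * np.exp(-E_true * t)) @ V.T

# Generalized eigenvalue problem C(t) v = lambda C(t0) v;
# effective energies follow from the eigenvalues
t0, t = 1, 4
lam = np.linalg.eigvals(np.linalg.solve(corr(t0), corr(t)))
E_eff = np.sort(-np.log(np.real(lam)) / (t - t0))
```

With exact exponential correlators the eigenvalues are exactly exp(-E_n (t - t0)), so the extracted energies match the inputs.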

  15. A FORTRAN Program for Computing Refractive Index Using the Double Variation Method.

    ERIC Educational Resources Information Center

    Blanchard, Frank N.

    1984-01-01

    Describes a computer program which calculates a best estimate of refractive index and dispersion from a large number of observations using the double variation method of measuring refractive index along with Sellmeier constants of the immersion oils. Program listing with examples will be provided on written request to the author. (Author/JM)

  16. Systematic Convergence in Applying Variational Method to Double-Well Potential

    ERIC Educational Resources Information Center

    Mei, Wai-Ning

    2016-01-01

    In this work, we demonstrate the application of the variational method by computing the ground- and first-excited state energies of a double-well potential. We start with the proper choice of the trial wave functions using optimized parameters, and notice that accurate expectation values in excellent agreement with the numerical results can be…

  17. An Evaluation Method of Words Tendency Depending on Time-Series Variation and Its Improvements.

    ERIC Educational Resources Information Center

    Atlam, El-Sayed; Okada, Makoto; Shishibori, Masami; Aoe, Jun-ichi

    2002-01-01

    Discussion of word frequency and keywords in text focuses on a method to estimate automatically the stability classes that indicate a word's popularity with time-series variations based on the frequency change in past electronic text data. Compares the evaluation of decision tree stability class results with manual classification results.…

  18. Thermal and acid tolerant beta xylosidases, arabinofuranosidases, genes encoding, related organisms, and methods

    DOEpatents

    Thompson, David N; Thompson, Vicki S; Schaller, Kastli D; Apel, William A; Reed, David W; Lacey, Jeffrey A

    2013-04-30

    Isolated and/or purified polypeptides and nucleic acid sequences encoding polypeptides from Alicyclobacillus acidocaldarius and variations thereof are provided. Further provided are methods of at least partially degrading xylotriose, xylobiose, and/or arabinofuranose-substituted xylan using isolated and/or purified polypeptides and nucleic acid sequences encoding polypeptides from Alicyclobacillus acidocaldarius and variations thereof.

  19. A Simple Demonstration of a General Rule for the Variation of Magnetic Field with Distance

    ERIC Educational Resources Information Center

    Kodama, K.

    2009-01-01

    We describe a simple experiment demonstrating the variation in magnitude of a magnetic field with distance. The method described requires only an ordinary magnetic compass and a permanent magnet. The proposed graphical analysis illustrates a unique method for deducing a general rule of magnetostatics. (Contains 1 table and 6 figures.)
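The "general rule" emerges from a log-log fit of field against distance. A minimal sketch with synthetic, noise-free values (the numbers are illustrative, not the paper's data):

```python
import numpy as np

# Illustrative measurements: on-axis field of a small bar magnet
r = np.array([0.10, 0.12, 0.15, 0.20, 0.25, 0.30])   # distance, metres
B = 7.5e-7 / r**3                                    # ideal dipole fall-off

# Graphical analysis: the slope of log B against log r gives the exponent
slope, _ = np.polyfit(np.log(r), np.log(B), 1)
n = -slope                                           # approaches 3 for a dipole
```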

  20. A Model for Engaging Students in a Research Experience Involving Variational Techniques, Mathematica, and Descent Methods.

    ERIC Educational Resources Information Center

    Mahavier, W. Ted

    2002-01-01

    Describes a two-semester numerical methods course that serves as a research experience for undergraduate students without requiring external funding or the modification of current curriculum. Uses an engineering problem to introduce students to constrained optimization via a variation of the traditional isoperimetric problem of finding the curve…

  1. Developments in variational methods for high performance plate and shell elements

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos A.; Militello, Carmelo

    1991-01-01

    High performance elements are simple finite elements constructed to deliver engineering accuracy with coarse arbitrary grids. This is part of a series on the variational foundations of high-performance elements, with emphasis on plate and shell elements constructed with the free formulation (FF) and assumed natural strain (ANS) methods. Parameterized variational principles are studied that provide a common foundation for the FF and ANS methods, as well as for a combination of both. From this unified formulation a variant of the ANS formulation, called the assumed natural deviatoric strain (ANDES) formulation, emerges as an important special case. The first ANDES element, a high-performance 9 degrees of freedom triangular Kirchhoff plate bending element, is briefly described to illustrate the use of the new formulation.

  2. Estimating nonrigid motion from inconsistent intensity with robust shape features

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Wenyang; Ruan, Dan, E-mail: druan@mednet.ucla.edu; Department of Radiation Oncology, University of California, Los Angeles, California 90095

    2013-12-15

    Purpose: To develop a nonrigid motion estimation method that is robust to heterogeneous intensity inconsistencies amongst the image pairs or image sequence. Methods: Intensity and contrast variations, as in dynamic contrast enhanced magnetic resonance imaging, present a considerable challenge to registration methods based on general discrepancy metrics. In this study, the authors propose and validate a novel method that is robust to such variations by utilizing shape features. The geometry of interest (GOI) is represented with a flexible zero level set, segmented via well-behaved regularized optimization. The optimization energy drives the zero level set to high image gradient regions, and regularizes it with area and curvature priors. The resulting shape exhibits high consistency even in the presence of intensity or contrast variations. Subsequently, a multiscale nonrigid registration is performed to seek a regular deformation field that minimizes shape discrepancy in the vicinity of GOIs. Results: To establish the working principle, realistic 2D and 3D images were subject to simulated nonrigid motion and synthetic intensity variations, so as to enable quantitative evaluation of registration performance. The proposed method was benchmarked against three alternative registration approaches, specifically, optical flow, B-spline based mutual information, and multimodality demons. When intensity consistency was satisfied, all methods had comparable registration accuracy for the GOIs. When intensities among registration pairs were inconsistent, however, the proposed method yielded pronounced improvement in registration accuracy, with an approximate fivefold reduction in mean absolute error (MAE = 2.25 mm, SD = 0.98 mm), compared to optical flow (MAE = 9.23 mm, SD = 5.36 mm), B-spline based mutual information (MAE = 9.57 mm, SD = 8.74 mm) and multimodality demons (MAE = 10.07 mm, SD = 4.03 mm). Applying the proposed method on a real MR image sequence also provided qualitatively appealing results, demonstrating good feasibility and applicability of the proposed method. Conclusions: The authors have developed a novel method to estimate the nonrigid motion of GOIs in the presence of spatial intensity and contrast variations, taking advantage of robust shape features. Quantitative analysis and qualitative evaluation demonstrated good promise of the proposed method. Further clinical assessment and validation is being performed.

  3. GEMINI: Integrative Exploration of Genetic Variation and Genome Annotations

    PubMed Central

    Paila, Umadevi; Chapman, Brad A.; Kirchner, Rory; Quinlan, Aaron R.

    2013-01-01

    Modern DNA sequencing technologies enable geneticists to rapidly identify genetic variation among many human genomes. However, isolating the minority of variants underlying disease remains an important, yet formidable challenge for medical genetics. We have developed GEMINI (GEnome MINIng), a flexible software package for exploring all forms of human genetic variation. Unlike existing tools, GEMINI integrates genetic variation with a diverse and adaptable set of genome annotations (e.g., dbSNP, ENCODE, UCSC, ClinVar, KEGG) into a unified database to facilitate interpretation and data exploration. Whereas other methods provide an inflexible set of variant filters or prioritization methods, GEMINI allows researchers to compose complex queries based on sample genotypes, inheritance patterns, and both pre-installed and custom genome annotations. GEMINI also provides methods for ad hoc queries and data exploration, a simple programming interface for custom analyses that leverage the underlying database, and both command line and graphical tools for common analyses. We demonstrate GEMINI's utility for exploring variation in personal genomes and family based genetic studies, and illustrate its ability to scale to studies involving thousands of human samples. GEMINI is designed for reproducibility and flexibility and our goal is to provide researchers with a standard framework for medical genomics. PMID:23874191

  4. Method for Reducing Pumping Damage to Blood

    NASA Technical Reports Server (NTRS)

    Bozeman, Richard J., Jr. (Inventor); Akkerman, James W. (Inventor); Aber, Gregory S. (Inventor); VanDamm, George Arthur (Inventor); Bacak, James W. (Inventor); Svejkovsky, Robert J. (Inventor); Benkowski, Robert J. (Inventor)

    1997-01-01

    Methods are provided for minimizing damage to blood in a blood pump wherein the blood pump comprises a plurality of pump components that may affect blood damage such as clearance between pump blades and housing, number of impeller blades, rounded or flat blade edges, variations in entrance angles of blades, impeller length, and the like. The process comprises selecting a plurality of pump components believed to affect blood damage such as those listed herein before. Construction variations for each of the plurality of pump components are then selected. The pump components and variations are preferably listed in a matrix for easy visual comparison of test results. Blood is circulated through a pump configuration to test each variation of each pump component. After each test, total blood damage is determined for the blood pump. Preferably each pump component variation is tested at least three times to provide statistical results and check consistency of results. The least hemolytic variation for each pump component is preferably selected as an optimized component. If no statistical difference as to blood damage is produced for a variation of a pump component, then the variation that provides preferred hydrodynamic performance is selected. To compare the variation of pump components such as impeller and stator blade geometries, the preferred embodiment of the invention uses a stereolithography technique for realizing complex shapes within a short time period.

  5. A methodology for probabilistic remaining creep life assessment of gas turbine components

    NASA Astrophysics Data System (ADS)

    Liu, Zhimin

    Certain gas turbine components operate in harsh environments and various mechanisms may lead to component failure. It is common practice to use remaining life assessments to help operators schedule maintenance and component replacements. Creep is a major failure mechanism that affects the remaining life assessment, and the resulting life consumption of a component is highly sensitive to variations in the material stresses and temperatures, which fluctuate significantly due to the changes in real operating conditions. In addition, variations in material properties and geometry will result in changes in creep life consumption rate. The traditional method used for remaining life assessment assumes a set of fixed operating conditions at all times, and it fails to capture the variations in operating conditions. This translates into a significant loss of accuracy and unnecessarily high maintenance and replacement cost. A new method is developed that captures the variations described above and improves the accuracy of remaining life prediction. First, a metamodel is built to approximate the relationship between variables (operating conditions, material properties, geometry, etc.) and a creep response. The metamodel is developed using Response Surface Method/Design of Experiments methodology. Design of Experiments is an efficient sampling method, and for each sampling point a set of finite element analyses are used to compute the corresponding response value. Next, a low order polynomial Response Surface Equation (RSE) is used to fit these values. Four techniques are suggested to dramatically reduce computational effort, and to increase the accuracy of the RSE: smart meshing technique, automatic geometry parameterization, screening test and regional RSE refinement. The RSEs, along with a probabilistic method and a life fraction model are used to compute current damage accumulation and remaining life. 
By capturing the variations mentioned above, the new method results in much better accuracy than that available using the traditional method. After further development and proper verification the method should bring significant savings by reducing the number of inspections and deferring part replacement.
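The RSE-fitting step can be sketched in a few lines: sample the design space, evaluate the response (here a stand-in function instead of finite element runs), and fit a low-order polynomial by least squares. All variable names and coefficients below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two normalized input variables standing in for operating conditions
X = rng.uniform(-1, 1, size=(40, 2))

def simulate(T, s):
    # Stand-in for an expensive finite element run; exactly quadratic here
    return 1.0 + 0.8*T + 0.5*s + 0.3*T*s + 0.2*T**2

y = simulate(X[:, 0], X[:, 1])

def basis(X):
    """Quadratic RSE basis: 1, T, s, T*s, T^2, s^2."""
    T, s = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(T), T, s, T*s, T**2, s**2])

coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)

# The fitted RSE now replaces the expensive model in probabilistic sampling
X_new = np.array([[0.5, -0.2]])
pred = basis(X_new) @ coef
```

Because the surrogate is cheap to evaluate, it can be sampled millions of times inside a Monte Carlo loop, which is what makes the probabilistic remaining-life assessment tractable.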

  6. A coupled mode formulation by reciprocity and a variational principle

    NASA Technical Reports Server (NTRS)

    Chuang, Shun-Lien

    1987-01-01

    A coupled mode formulation for parallel dielectric waveguides is presented via two methods: a reciprocity theorem and a variational principle. In the first method, a generalized reciprocity relation for two sets of field solutions satisfying Maxwell's equations and the boundary conditions in two different media, respectively, is derived. Based on the generalized reciprocity theorem, the coupled mode equations can then be formulated. The second method using a variational principle is also presented for a general waveguide system which can be lossy. The results of the variational principle can also be shown to be identical to those from the reciprocity theorem. The exact relations governing the 'conventional' and the new coupling coefficients are derived. It is shown analytically that the present formulation satisfies the reciprocity theorem and power conservation exactly, while the conventional theory violates the power conservation and reciprocity theorem by as much as 55 percent and the Hardy-Streifer (1985, 1986) theory by 0.033 percent, for example.

  7. Two levels ARIMAX and regression models for forecasting time series data with calendar variation effects

    NASA Astrophysics Data System (ADS)

    Suhartono; Lee, Muhammad Hisyam; Prastyo, Dedy Dwi

    2015-12-01

    The aim of this research is to develop a calendar variation model for forecasting retail sales data with the Eid ul-Fitr effect. The proposed model combines two methods, ARIMAX and regression, in a two-level scheme: ARIMAX at the first level and regression at the second level. Monthly men's jeans and women's trousers sales in a retail company for the period January 2002 to September 2009 are used as a case study. In general, the two-level calendar variation model yields two models: the first reconstructs the sales pattern that has already occurred, and the second forecasts the increase in sales due to Eid ul-Fitr, which affects sales in the same and the previous month. The results show that the proposed two-level calendar variation model based on ARIMAX and regression methods yields better forecasts than the seasonal ARIMA model and neural networks.
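The second-level regression idea can be sketched with dummy variables for the Eid month and the month before it. Everything below is synthetic: the Eid positions, effect sizes, and trend are assumptions, and the first-level ARIMAX component is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 96                                       # 8 years of monthly data

# Eid month drifts earlier through the calendar (illustrative positions)
eid_pos = np.array([9, 21, 32, 44, 55, 67, 78, 90])
eid = np.zeros(n); eid[eid_pos] = 1
pre = np.zeros(n); pre[eid_pos - 1] = 1      # month before Eid

base = 100 + 0.5*np.arange(n) + rng.normal(0, 2, n)
sales = base + 30*eid + 12*pre               # assumed Eid lifts of 30 and 12

# Second-level regression: trend plus calendar-variation dummies
X = np.column_stack([np.ones(n), np.arange(n), eid, pre])
beta, *_ = np.linalg.lstsq(X, sales, rcond=None)
eid_effect, pre_effect = beta[2], beta[3]
```

The fitted dummy coefficients recover the sales lift in the Eid month and in the preceding month, which is exactly the calendar effect the second level is designed to isolate.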

  8. Vertical profiles of wind and temperature by remote acoustical sounding

    NASA Technical Reports Server (NTRS)

    Fox, H. L.

    1969-01-01

    An acoustical method was investigated for obtaining meteorological soundings based on the refraction due to the vertical variation of wind and temperature. The method has the potential of yielding horizontally averaged measurements of the vertical variation of wind and temperature up to heights of a few kilometers; the averaging takes place over a radius of 10 to 15 km. An outline of the basic concepts and some of the results obtained with the method are presented.

  9. Calcification responses to diurnal variation in seawater carbonate chemistry by the coral Acropora formosa

    NASA Astrophysics Data System (ADS)

    Chan, W. Y.; Eggins, S. M.

    2017-09-01

    Significant diurnal variation in seawater carbonate chemistry occurs naturally in many coral reef environments, yet little is known of its effect on coral calcification. Laboratory studies on the response of corals to ocean acidification have manipulated the carbonate chemistry of experimental seawater to compare calcification rate changes under present-day and predicted future mean pH/Ωarag conditions. These experiments, however, have focused exclusively on differences in mean chemistry and have not considered diurnal variation. The aim of this study was to compare calcification responses of branching coral Acropora formosa under conditions with and without diurnal variation in seawater carbonate chemistry. To achieve this aim, we explored (1) a method to recreate natural diurnal variation in a laboratory experiment using the biological activities of a coral-reef mesocosm, and (2) a multi-laser 3D scanning method to accurately measure coral surface areas, essential to normalize their calcification rates. We present a cost- and time-efficient method of coral surface area estimation that is reproducible within 2% of the mean of triplicate measurements. Calcification rates were compared among corals subjected to a diurnal range in pH (total scale) from 7.8 to 8.2, relative to those at constant pH values of 7.8, 8.0 or 8.2. Mean calcification rates of the corals at the pH 7.8-8.2 (diurnal variation) treatment were not statistically different from the pH 8.2 treatment and were 34% higher than the pH 8.0 treatment despite similar mean seawater pH and Ωarag. Our results suggest that calcification of adult coral colonies may benefit from diurnal variation in seawater carbonate chemistry. Experiments that compare calcification rates at different constant pH without considering diurnal variation may have limitations.

  10. Total variation regularization for seismic waveform inversion using an adaptive primal dual hybrid gradient method

    NASA Astrophysics Data System (ADS)

    Yong, Peng; Liao, Wenyuan; Huang, Jianping; Li, Zhenchuan

    2018-04-01

    Full waveform inversion is an effective tool for recovering the properties of the Earth from seismograms. However, it suffers from local minima caused mainly by the limited accuracy of the starting model and the lack of a low-frequency component in the seismic data. Because of the high velocity contrast between salt and sediment, the relation between the waveform and velocity perturbation is strongly nonlinear. Therefore, salt inversion can easily get trapped in the local minima. Since the velocity of salt is nearly constant, we can make the most of this characteristic with total variation regularization to mitigate the local minima. In this paper, we develop an adaptive primal dual hybrid gradient method to implement total variation regularization by projecting the solution onto a total variation norm constrained convex set, through which the total variation norm constraint is satisfied at every model iteration. The smooth background velocities are first inverted and the perturbations are gradually obtained by successively relaxing the total variation norm constraints. A numerical experiment on the projection of the BP model onto the intersection of the total variation norm and box constraints has demonstrated the accuracy and efficiency of our adaptive primal dual hybrid gradient method. A workflow is designed to recover complex salt structures in the BP 2004 model and the 2D SEG/EAGE salt model, starting from a linear gradient model without using low-frequency data below 3 Hz. The salt inversion processes demonstrate that wavefield reconstruction inversion with a total variation norm and box constraints is able to overcome local minima and inverts the complex salt velocity layer by layer.
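The primal dual hybrid gradient machinery can be illustrated on a 1D toy: Chambolle-Pock-style iterations for total-variation denoising of a blocky signal, which shows why TV suits near-piecewise-constant (salt-like) models. This is a sketch of the generic algorithm, not the paper's waveform-inversion code, and the step sizes and data are assumptions.

```python
import numpy as np

def tv_denoise_pdhg(f, lam, n_iter=500, tau=0.25, sigma=0.25):
    """PDHG for min_u 0.5*||u - f||^2 + lam*||D u||_1 in 1D."""
    n = len(f)
    u = f.copy(); u_bar = f.copy(); p = np.zeros(n - 1)
    for _ in range(n_iter):
        # Dual ascent, then projection onto the [-lam, lam] box
        p = np.clip(p + sigma*np.diff(u_bar), -lam, lam)
        # D^T p, the adjoint of the forward-difference operator
        Dtp = np.zeros(n); Dtp[:-1] -= p; Dtp[1:] += p
        # Primal proximal step for the quadratic data term
        u_new = (u - tau*Dtp + tau*f) / (1 + tau)
        u_bar = 2*u_new - u
        u = u_new
    return u

rng = np.random.default_rng(2)
f = np.concatenate([np.ones(50), 3*np.ones(50)]) + 0.2*rng.normal(size=100)
u = tv_denoise_pdhg(f, lam=1.0)   # recovers a near piecewise-constant profile
```

The step sizes satisfy tau*sigma*||D||^2 <= 1 (||D||^2 <= 4 for the 1D difference operator), which is the standard PDHG convergence condition.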

  11. Comparison of hardness variation of ion irradiated borosilicate glasses with different projected ranges

    NASA Astrophysics Data System (ADS)

    Sun, M. L.; Peng, H. B.; Duan, B. H.; Liu, F. F.; Du, X.; Yuan, W.; Zhang, B. T.; Zhang, X. Y.; Wang, T. S.

    2018-03-01

    Borosilicate glass has potential application for vitrification of high-level radioactive waste, which has attracted extensive interest in studying its radiation durability. In this study, sodium borosilicate glass samples were irradiated with 4 MeV Kr17+, 5 MeV Xe26+ and 0.3 MeV P+ ions, respectively. The hardness of the irradiated samples was measured by nanoindentation in continuous stiffness mode and quasi-continuous stiffness mode, separately. The extrapolation, mean-value, squared-extrapolation and selected-point methods were used to obtain the hardness of the irradiated glass, and a comparison among these four methods was conducted; the extrapolation method is recommended for analyzing the hardness of ion-irradiated glass. With increasing irradiation dose, the hardness of samples irradiated with Kr, Xe and P ions dropped and then saturated at 0.02 dpa. Moreover, both the maximum hardness variations and the decay constants are similar for the three ion species despite their different energies, indicating a common mechanism behind the hardness variation in glasses after irradiation. Furthermore, the hardness variation of samples irradiated with low-energy P ions, whose range is much smaller than those of the high-energy Kr and Xe ions, follows the same trend as that for Kr and Xe ions. This suggests that electronic energy loss does not play a significant role in the hardness decrease under low-energy ion irradiation.

  12. Assessing response of sediment load variation to climate change and human activities with six different approaches.

    PubMed

    Zhao, Guangju; Mu, Xingmin; Jiao, Juying; Gao, Peng; Sun, Wenyi; Li, Erhui; Wei, Yanhong; Huang, Jiacong

    2018-05-23

    Understanding the relative contributions of climate change and human activities to variations in sediment load is of great importance for regional soil, and river basin management. Considerable studies have investigated spatial-temporal variation of sediment load within the Loess Plateau; however, contradictory findings exist among methods used. This study systematically reviewed six quantitative methods: simple linear regression, double mass curve, sediment identity factor analysis, dam-sedimentation based method, the Sediment Delivery Distributed (SEDD) model, and the Soil Water Assessment Tool (SWAT) model. The calculation procedures and merits for each method were systematically explained. A case study in the Huangfuchuan watershed on the northern Loess Plateau has been undertaken. The results showed that sediment load had been reduced by 70.5% during the changing period from 1990 to 2012 compared to that of the baseline period from 1955 to 1989. Human activities accounted for an average of 93.6 ± 4.1% of the total decline in sediment load, whereas climate change contributed 6.4 ± 4.1%. Five methods produced similar estimates, but the linear regression yielded relatively different results. The results of this study provide a good reference for assessing the effects of climate change and human activities on sediment load variation by using different methods. Copyright © 2018. Published by Elsevier B.V.
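The simple-linear-regression approach, one of the six methods reviewed, can be sketched with synthetic numbers: fit a rainfall-sediment relation on the baseline period, extrapolate it over the changed period, and split the observed decline into climate and human parts. All values below are invented for illustration, not Huangfuchuan data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Baseline period (cf. 1955-1989): sediment load driven by rainfall only
rain_base = rng.uniform(300, 600, 35)
sed_base = 0.05*rain_base + rng.normal(0, 1, 35)

# Changed period (cf. 1990-2012): drier climate AND reduced sediment yield
rain_chg = rng.uniform(250, 500, 23)
sed_chg = 0.02*rain_chg + rng.normal(0, 1, 23)

# Fit the baseline rainfall-sediment relation and extrapolate it
slope, intercept = np.polyfit(rain_base, sed_base, 1)
predicted = slope*rain_chg + intercept   # load expected from climate alone

total_change = sed_base.mean() - sed_chg.mean()
climate_part = sed_base.mean() - predicted.mean()   # climate-driven share
human_part = predicted.mean() - sed_chg.mean()      # residual: human activities
human_pct = 100*human_part/total_change
```

By construction the two parts sum exactly to the total change, which is the bookkeeping identity all six attribution methods share.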

  13. Scene-based nonuniformity correction using local constant statistics.

    PubMed

    Zhang, Chao; Zhao, Wenyi

    2008-06-01

    In scene-based nonuniformity correction, the statistical approach assumes all possible values of the true-scene pixel are seen at each pixel location. This global-constant-statistics assumption does not distinguish fixed-pattern noise from spatial variations in the average image, which often causes "ghosting" artifacts in the corrected images, since genuine spatial variations are treated as noise. We introduce a new statistical method to reduce the ghosting artifacts. Our method proposes local-constant statistics: the temporal signal distribution is not assumed constant across the whole image, but only locally, i.e., statistically constant in a local region around each pixel while allowed to vary on larger scales. Under the assumption that the fixed-pattern noise concentrates in a higher spatial-frequency band than the distribution variation, we apply a wavelet method to the gain and offset images of the noise and separate the pattern noise from the spatial variations in the temporal distribution of the scene. We compare the results to the global-constant-statistics method using a clean sequence with large artificial pattern noise. We also apply the method to a challenging CCD video sequence and a LWIR sequence to show how effective it is in reducing noise and ghosting artifacts.
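The global constant-statistics baseline that the paper refines can be sketched directly: each pixel's temporal mean and standard deviation estimate its offset and gain, up to the scene's global statistics (assumed known here). The data are fully synthetic, with the scene idealized as identically distributed at every pixel, which is exactly the assumption the paper relaxes.

```python
import numpy as np

rng = np.random.default_rng(4)
T, H, W = 500, 32, 32
scene = rng.uniform(0, 1, size=(T, H, W))    # idealized: same statistics everywhere

gain = 1 + 0.2*rng.normal(size=(H, W))       # fixed-pattern gain
offset = 0.5*rng.normal(size=(H, W))         # fixed-pattern offset
frames = gain*scene + offset                 # what the detector records

# Constant-statistics correction: per-pixel temporal mean/std give the
# offset/gain maps, scaled by the (assumed known) global scene mean and std
g_hat = frames.std(axis=0) / scene.std()
o_hat = frames.mean(axis=0) - g_hat*scene.mean()
corrected = (frames - o_hat) / g_hat
```

When the scene really does have spatially uniform statistics, the estimated gain and offset converge to the true fixed pattern; when it does not, the residuals become the ghosting artifacts described above.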

  14. Image denoising by a direct variational minimization

    NASA Astrophysics Data System (ADS)

    Janev, Marko; Atanacković, Teodor; Pilipović, Stevan; Obradović, Radovan

    2011-12-01

    In this article we introduce a novel method for image denoising which combines the mathematical well-posedness of variational modeling with the efficiency of a patch-based approach from the field of image processing. It is based on a direct minimization of an energy functional containing a minimal surface regularizer that uses a fractional gradient. The minimization is performed on every predefined patch of the image, independently. By doing so, we avoid the use of an artificial-time PDE model, with its inherent problems of finding the optimal stopping time as well as the optimal time step. Moreover, we control the level of image smoothing on each patch (and thus on the whole image) by adapting the Lagrange multiplier using information on the level of discontinuities on a particular patch, which we obtain by pre-processing. In order to reduce the average number of vectors in the approximation generator and still obtain minimal degradation, we combine a Ritz variational method for the actual minimization on a patch with a complementary fractional variational principle. Thus, the proposed method becomes computationally feasible and applicable for practical purposes. We confirm our claims with experimental results, comparing the proposed method with several PDE-based methods and obtaining significantly better denoising results, especially in oscillatory regions.

  15. Variational optical flow computation in real time.

    PubMed

    Bruhn, Andrés; Weickert, Joachim; Feddern, Christian; Kohlberger, Timo; Schnörr, Christoph

    2005-05-01

    This paper investigates the usefulness of bidirectional multigrid methods for variational optical flow computations. Although these numerical schemes are among the fastest methods for solving equation systems, they are rarely applied in the field of computer vision. We demonstrate how to employ these numerical methods for the treatment of variational optical flow formulations and show that the efficiency of this approach even allows for real-time performance on standard PCs. As a representative variational optic flow method, we consider the recently introduced combined local-global method, which can be regarded as a noise-robust generalization of the Horn and Schunck technique. We present a decoupled as well as a coupled version of the classical Gauss-Seidel solver, and we develop several multigrid implementations based on a discretization coarse grid approximation. In contrast with standard bidirectional multigrid algorithms, we take advantage of intergrid transfer operators that allow for nondyadic grid hierarchies. As a consequence, no restrictions concerning the image size or the number of traversed levels have to be imposed. In the experimental section, we juxtapose the developed multigrid schemes and demonstrate their superior performance when compared to unidirectional multigrid methods and nonhierarchical solvers. For the well-known 316 x 252 Yosemite sequence, we succeeded in computing the complete set of dense flow fields in three quarters of a second on a 3.06-GHz Pentium 4 PC. This corresponds to a frame rate of 18 flow fields per second, which outperforms the widely used Gauss-Seidel method by almost three orders of magnitude.
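For orientation, the underlying Horn and Schunck iteration (which the combined local-global method generalizes and the multigrid schemes accelerate) can be written as a few lines of single-grid, Jacobi-style relaxation. The image pair is synthetic and the parameters are illustrative assumptions; a real solver would use the paper's coupled Gauss-Seidel or multigrid machinery instead.

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=500):
    Iy, Ix = np.gradient(0.5*(I1 + I2))    # spatial derivatives (axis 0 = y)
    It = I2 - I1                           # temporal derivative
    u = np.zeros_like(I1); v = np.zeros_like(I1)
    def nb(f):                             # 4-neighbour average
        return 0.25*(np.roll(f, 1, 0) + np.roll(f, -1, 0)
                     + np.roll(f, 1, 1) + np.roll(f, -1, 1))
    for _ in range(n_iter):                # Jacobi-style relaxation
        ua, va = nb(u), nb(v)
        t = (Ix*ua + Iy*va + It) / (alpha**2 + Ix**2 + Iy**2)
        u, v = ua - Ix*t, va - Iy*t
    return u, v

# Synthetic pair: Gaussian blob translated by 0.5 px in +x
yy, xx = np.mgrid[0:64, 0:64]
blob = lambda cx: 100*np.exp(-((xx - cx)**2 + (yy - 32)**2)/50.0)
I1, I2 = blob(32.0), blob(32.5)
u, v = horn_schunck(I1, I2)

# Gradient-weighted flow estimate over the informative pixels
w = np.gradient(0.5*(I1 + I2))[1]**2
u_est = (u*w).sum()/w.sum()
v_est = (v*w).sum()/w.sum()
```

The slow convergence of such pointwise relaxation for the smooth (low-frequency) error modes is precisely what motivates the bidirectional multigrid solvers studied in the paper.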

  16. When things go pear shaped: contour variations of contacts

    NASA Astrophysics Data System (ADS)

    Utzny, Clemens

    2013-04-01

    Traditional control of critical dimensions (CD) on photolithographic masks considers the CD average and a measure of the CD variation such as the CD range or the standard deviation. Systematic CD deviations from the mean, such as CD signatures, are also subject to control. These measures are valid for mask quality verification as long as patterns across a mask exhibit only size variations and no shape variation. The issue of shape variations becomes especially important in the context of contact holes on EUV masks. For EUV masks the CD error budget is much smaller than for standard optical masks. This means that small deviations from the contact shape can impact EUV wafer prints in the sense that contact shape deformations induce asymmetric bridging phenomena. In this paper we present a detailed study of contact shape variations based on regular product data. Two data sets are analyzed: 1) contacts of varying target size and 2) a regularly spaced field of contacts. Here, the methods of statistical shape analysis are used to analyze CD-SEM generated contour data. We demonstrate that contacts on photolithographic masks do not only show size variations but also exhibit pronounced nontrivial shape variations. In our data sets we find pronounced shape variations which can be interpreted as asymmetrical shape squeezing and contact rounding. Thus we demonstrate the limitations of classic CD measures for describing feature variations on masks. Furthermore, we show how the methods of statistical shape analysis can be used to quantify the contour variations, thus paving the way to a new understanding of mask linearity and its specification.

  17. Single-Transducer, Ultrasonic Imaging Method for High-Temperature Structural Materials Eliminates the Effect of Thickness Variation in the Image

    NASA Technical Reports Server (NTRS)

    Roth, Don J.

    1998-01-01

    NASA Lewis Research Center's Life Prediction Branch, in partnership with Sonix, Inc., and Cleveland State University, recently advanced the development of, refined, and commercialized an advanced nondestructive evaluation (NDE) inspection method entitled the Single Transducer Thickness-Independent Ultrasonic Imaging Method. Selected by R&D Magazine as one of the 100 most technologically significant new products of 1996, the method uses a single transducer to eliminate the superimposing effects of thickness variation in the ultrasonic images of materials. As a result, any variation seen in the image is due solely to microstructural variation. This nondestructive method precisely and accurately characterizes material gradients (pore fraction, density, or chemical) that affect the uniformity of a material's physical performance (mechanical, thermal, or electrical). Advantages of the method over conventional ultrasonic imaging include (1) elimination of machining costs (for precision thickness control) during the quality control stages of material processing and development and (2) elimination of labor costs and subjectivity involved in further image processing and image interpretation. At NASA Lewis, the method has been used primarily for accurate inspections of high temperature structural materials including monolithic ceramics, metal matrix composites, and polymer matrix composites. Data were published this year for platelike samples, and current research is focusing on applying the method to tubular components. The initial publicity regarding the development of the method generated 150 requests for further information from a wide variety of institutions and individuals including the Federal Bureau of Investigation (FBI), Lockheed Martin Corporation, Rockwell International, Hewlett Packard Company, and Procter & Gamble Company. 
In addition, NASA has been solicited by the 3M Company and Allison Abrasives to use this method to inspect composite materials that are manufactured by these companies.

  18. Source Distribution Method for Unsteady One-Dimensional Flows With Small Mass, Momentum, and Heat Addition and Small Area Variation

    NASA Technical Reports Server (NTRS)

    Mirels, Harold

    1959-01-01

    A source distribution method is presented for obtaining flow perturbations due to small unsteady area variations, mass, momentum, and heat additions in a basic uniform (or piecewise uniform) one-dimensional flow. First, the perturbations due to an elemental area variation, mass, momentum, and heat addition are found. The general solution is then represented by a spatial and temporal distribution of these elemental (source) solutions. Emphasis is placed on discussing the physical nature of the flow phenomena. The method is illustrated by several examples. These include the determination of perturbations in basic flows consisting of (1) a shock propagating through a nonuniform tube, (2) a constant-velocity piston driving a shock, (3) ideal shock-tube flows, and (4) deflagrations initiated at a closed end. The method is particularly applicable for finding the perturbations due to relatively thin wall boundary layers.

  19. Analog graphic display method and apparatus

    DOEpatents

    Kronberg, J.W.

    1991-08-13

    Disclosed are an apparatus and method for using an output device such as an LED to show the approximate analog level of a variable electrical signal wherein a modulating AC waveform is superimposed either on the signal or a reference voltage, both of which are then fed to a comparator which drives the output device. Said device flashes at a constant perceptible rate with a duty cycle which varies in response to variations in the level of the input signal. The human eye perceives these variations in duty cycle as analogous to variations in the level of the input signal. 21 figures.

  20. Total variation approach for adaptive nonuniformity correction in focal-plane arrays.

    PubMed

    Vera, Esteban; Meza, Pablo; Torres, Sergio

    2011-01-15

    In this Letter we propose an adaptive scene-based nonuniformity correction method for fixed-pattern noise removal in imaging arrays. It is based on the minimization of the total variation of the estimated irradiance, and the resulting function is optimized by an isotropic total variation approach making use of an alternating minimization strategy. The proposed method provides enhanced results when applied to a diverse set of real IR imagery, accurately estimating the nonuniformity parameters of each detector in the focal-plane array at a fast convergence rate, while also producing fewer ghosting artifacts.
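
    The core idea, choosing the estimate that minimizes the total variation of the irradiance, can be sketched independently of the authors' alternating-minimization scheme. A minimal illustration, assuming a smoothed isotropic TV penalty and plain gradient descent (not the algorithm of the Letter):

```python
import numpy as np

def tv_smooth(u, eps=0.1):
    # Smoothed isotropic total variation: sum of sqrt(ux^2 + uy^2 + eps^2).
    ux = np.diff(u, axis=1, append=u[:, -1:])
    uy = np.diff(u, axis=0, append=u[-1:, :])
    return np.sum(np.sqrt(ux**2 + uy**2 + eps**2))

def tv_denoise(y, lam=0.2, step=0.05, iters=200, eps=0.1):
    # Gradient descent on 0.5*||u - y||^2 + lam * TV_eps(u).
    u = y.copy()
    for _ in range(iters):
        ux = np.diff(u, axis=1, append=u[:, -1:])   # forward differences,
        uy = np.diff(u, axis=0, append=u[-1:, :])   # zero at the far edge
        mag = np.sqrt(ux**2 + uy**2 + eps**2)
        px, py = ux / mag, uy / mag
        # The TV gradient is minus the discrete divergence of (px, py).
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - y) - lam * div)
    return u
```

    Starting from u = y, every stable descent step that lowers the objective must lower the total variation, since the data term can only grow from zero.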

  1. Analog graphic display method and apparatus

    DOEpatents

    Kronberg, James W.

    1991-01-01

    An apparatus and method for using an output device such as an LED to show the approximate analog level of a variable electrical signal wherein a modulating AC waveform is superimposed either on the signal or a reference voltage, both of which are then fed to a comparator which drives the output device. Said device flashes at a constant perceptible rate with a duty cycle which varies in response to variations in the level of the input signal. The human eye perceives these variations in duty cycle as analogous to variations in the level of the input signal.
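
    The duty-cycle mechanism described in both patent records can be simulated numerically. A small sketch with illustrative values (the patent specifies no particular waveform or amplitudes), in which a triangular AC dither is superimposed on the reference and the comparator output is averaged over one dither period:

```python
import numpy as np

def duty_cycle(signal_level, v_ref=0.5, dither_amp=0.5, n=10000):
    # Fraction of one dither period during which the signal exceeds the
    # dithered reference; this is the duty cycle at which the LED flashes.
    t = np.linspace(0.0, 1.0, n, endpoint=False)
    dither = dither_amp * (2.0 * np.abs(2.0 * t - 1.0) - 1.0)  # triangle wave
    return float(np.mean(signal_level > v_ref + dither))
```

    Because a triangle wave spends equal time at every level, the duty cycle varies linearly with the input over the dither range, which is what lets the eye read the flashing LED as an analog level.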

  2. Variational method of determining effective moduli of polycrystals: (A) hexagonal symmetry, (B) trigonal symmetry

    USGS Publications Warehouse

    Peselnick, L.; Meister, R.

    1965-01-01

    Variational principles of anisotropic elasticity have been applied to aggregates of randomly oriented pure-phase polycrystals having hexagonal symmetry and trigonal symmetry. The bounds of the effective elastic moduli obtained in this way show a considerable improvement over the bounds obtained by means of the Voigt and Reuss assumptions. The Hill average is found to be in most cases a good approximation when compared to the bounds found from the variational method. The new bounds reduce in their limits to the Voigt and Reuss values. © 1965 The American Institute of Physics.
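
    The Voigt and Reuss limits that these variational bounds improve upon are simple averages. A minimal sketch for a generic effective modulus of a multiphase aggregate (illustrative only; the paper's hexagonal and trigonal bounds involve the full anisotropic stiffness tensors):

```python
def voigt_reuss_hill(moduli, fractions):
    # Voigt bound: uniform-strain assumption, arithmetic (volume-weighted) mean.
    voigt = sum(f * m for f, m in zip(fractions, moduli))
    # Reuss bound: uniform-stress assumption, harmonic mean.
    reuss = 1.0 / sum(f / m for f, m in zip(fractions, moduli))
    # Hill average: midpoint of the two bounds.
    return voigt, reuss, 0.5 * (voigt + reuss)
```

    The true effective modulus always lies between the Reuss and Voigt values; the variational bounds of the paper tighten this bracket.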

  3. An MHD variational principle that admits reconnection

    NASA Technical Reports Server (NTRS)

    Rilee, M. L.; Sudan, R. N.; Pfirsch, D.

    1997-01-01

    The variational approach of Pfirsch and Sudan's averaged magnetohydrodynamics (MHD) to the stability of a line-tied current layer is summarized. The effect of line-tying on current sheets that might arise in line-tied magnetic flux tubes is examined by estimating the growth rates of a resistive instability using a variational method. The results show that this method provides a potentially new technique to gauge the stability of nearly ideal magnetohydrodynamic systems. The primary implication for the stability of solar coronal structures is that tearing modes are probably constantly at work removing magnetic shear from the solar corona.

  4. Quantum mechanical/molecular mechanical/continuum style solvation model: linear response theory, variational treatment, and nuclear gradients.

    PubMed

    Li, Hui

    2009-11-14

    Linear response and variational treatment are formulated for Hartree-Fock (HF) and Kohn-Sham density functional theory (DFT) methods and combined discrete-continuum solvation models that incorporate self-consistently induced dipoles and charges. Due to the variational treatment, analytic nuclear gradients can be evaluated efficiently for these discrete and continuum solvation models. The forces and torques on the induced point dipoles and point charges can be evaluated using simple electrostatic formulas as for permanent point dipoles and point charges, in accordance with the electrostatic nature of these methods. Implementation and tests using the effective fragment potential (EFP, a polarizable force field) method and the conductorlike polarizable continuum model (CPCM) show that the nuclear gradients are as accurate as those in the gas phase HF and DFT methods. Using B3LYP/EFP/CPCM and time-dependent-B3LYP/EFP/CPCM methods, acetone S(0)-->S(1) excitation in aqueous solution is studied. The results are close to those from full B3LYP/CPCM calculations.

  5. Quantum structural fluctuation in para-hydrogen clusters revealed by the variational path integral method

    NASA Astrophysics Data System (ADS)

    Miura, Shinichi

    2018-03-01

    In this paper, the ground state of para-hydrogen clusters for size regime N ≤ 40 has been studied by our variational path integral molecular dynamics method. Long molecular dynamics calculations have been performed to accurately evaluate ground state properties. The chemical potential of the hydrogen molecule is found to have a zigzag size dependence, indicating the magic number stability for the clusters of the size N = 13, 26, 29, 34, and 39. One-body density of the hydrogen molecule is demonstrated to have a structured profile, not a melted one. The observed magic number stability is examined using the inherent structure analysis. We also have developed a novel method combining our variational path integral hybrid Monte Carlo method with the replica exchange technique. We introduce replicas of the original system bridging from the structured to the melted cluster, which is realized by scaling the potential energy of the system. Using the enhanced sampling method, the clusters are demonstrated to have the structured density profile in the ground state.

  6. Quantum structural fluctuation in para-hydrogen clusters revealed by the variational path integral method.

    PubMed

    Miura, Shinichi

    2018-03-14

    In this paper, the ground state of para-hydrogen clusters for size regime N ≤ 40 has been studied by our variational path integral molecular dynamics method. Long molecular dynamics calculations have been performed to accurately evaluate ground state properties. The chemical potential of the hydrogen molecule is found to have a zigzag size dependence, indicating the magic number stability for the clusters of the size N = 13, 26, 29, 34, and 39. One-body density of the hydrogen molecule is demonstrated to have a structured profile, not a melted one. The observed magic number stability is examined using the inherent structure analysis. We also have developed a novel method combining our variational path integral hybrid Monte Carlo method with the replica exchange technique. We introduce replicas of the original system bridging from the structured to the melted cluster, which is realized by scaling the potential energy of the system. Using the enhanced sampling method, the clusters are demonstrated to have the structured density profile in the ground state.

  7. Simple Pixel Structure Using Video Data Correction Method for Nonuniform Electrical Characteristics of Polycrystalline Silicon Thin-Film Transistors and Differential Aging Phenomenon of Organic Light-Emitting Diodes

    NASA Astrophysics Data System (ADS)

    In, Hai-Jung; Kwon, Oh-Kyong

    2010-03-01

    A simple pixel structure using a video data correction method is proposed to compensate for electrical characteristic variations of driving thin-film transistors (TFTs) and the degradation of organic light-emitting diodes (OLEDs) in active-matrix OLED (AMOLED) displays. The proposed method senses the electrical characteristic variations of TFTs and OLEDs and stores them in external memory. The nonuniform emission current of TFTs and the aging of OLEDs are corrected by modulating video data using the stored data. Experimental results show that the emission current error due to electrical characteristic variation of driving TFTs is in the range from -63.1 to 61.4% without compensation, but is decreased to the range from -1.9 to 1.9% with the proposed correction method. The luminance error due to the degradation of an OLED is less than 1.8% when the proposed correction method is used for a 50% degraded OLED.

  8. Variational path integral molecular dynamics and hybrid Monte Carlo algorithms using a fourth order propagator with applications to molecular systems

    NASA Astrophysics Data System (ADS)

    Kamibayashi, Yuki; Miura, Shinichi

    2016-08-01

    In the present study, variational path integral molecular dynamics and associated hybrid Monte Carlo (HMC) methods have been developed on the basis of a fourth order approximation of a density operator. To reveal various parameter dependence of physical quantities, we analytically solve one dimensional harmonic oscillators by the variational path integral; as a byproduct, we obtain the analytical expression of the discretized density matrix using the fourth order approximation for the oscillators. Then, we apply our methods to realistic systems such as a water molecule and a para-hydrogen cluster. In the HMC, we adopt a two-level description to avoid the time-consuming Hessian evaluation. For the systems examined in this paper, the HMC method is found to be about three times more efficient than the molecular dynamics method if appropriate HMC parameters are adopted; the advantage of the HMC method is suggested to be more evident for systems described by many-body interaction.

  9. Analyzing the Magnetopause Internal Structure: New Possibilities Offered by MMS Tested in a Case Study

    NASA Astrophysics Data System (ADS)

    Rezeau, L.; Belmont, G.; Manuzzo, R.; Aunai, N.; Dargent, J.

    2018-01-01

    We explore the structure of the magnetopause using a crossing observed by the Magnetospheric Multiscale (MMS) spacecraft on 16 October 2015. Several methods (minimum variance analysis, BV method, and constant velocity analysis) are first applied to compute the normal to the magnetopause considered as a whole. The different results obtained are not identical, and we show that the whole boundary is not stationary and not planar, so that basic assumptions of these methods are not well satisfied. We then analyze more finely the internal structure for investigating the departures from planarity. Using the basic mathematical definition of what is a one-dimensional physical problem, we introduce a new single spacecraft method, called LNA (local normal analysis) for determining the varying normal, and we compare the results so obtained with those coming from the multispacecraft minimum directional derivative (MDD) tool developed by Shi et al. (2005). This last method gives the dimensionality of the magnetic variations from multipoint measurements and also allows estimating the direction of the local normal when the variations are locally 1-D. This study shows that the magnetopause does include approximate one-dimensional substructures but also two- and three-dimensional structures. It also shows that the dimensionality of the magnetic variations can differ from the variations of other fields so that, at some places, the magnetic field can have a 1-D structure although all the plasma variations do not verify the properties of a global one-dimensional problem. A generalization of the MDD tool is proposed.
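
    Of the single-spacecraft methods mentioned, minimum variance analysis is the simplest to state: the normal estimate is the eigenvector of the magnetic field covariance matrix belonging to the smallest eigenvalue. A brief sketch on synthetic data (not MMS measurements):

```python
import numpy as np

def minimum_variance_normal(B):
    # B: (N, 3) array of magnetic field samples across the boundary.
    # The boundary normal is estimated as the direction of minimum
    # field variance, i.e. the eigenvector of the covariance matrix
    # with the smallest eigenvalue.
    M = np.cov(B, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(M)   # eigenvalues in ascending order
    n = eigvecs[:, 0]
    return n / np.linalg.norm(n)
```

    For a strictly planar, one-dimensional layer the smallest eigenvalue is near zero; a smallest eigenvalue comparable to the others is a symptom of exactly the non-planarity and non-stationarity the abstract discusses.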

  10. Estimating nonrigid motion from inconsistent intensity with robust shape features.

    PubMed

    Liu, Wenyang; Ruan, Dan

    2013-12-01

    The purpose of this work was to develop a nonrigid motion estimation method that is robust to heterogeneous intensity inconsistencies amongst the image pairs or image sequence. Intensity and contrast variations, as in dynamic contrast enhanced magnetic resonance imaging, present a considerable challenge to registration methods based on general discrepancy metrics. In this study, the authors propose and validate a novel method that is robust to such variations by utilizing shape features. The geometry of interest (GOI) is represented with a flexible zero level set, segmented via well-behaved regularized optimization. The optimization energy drives the zero level set to high image gradient regions, and regularizes it with area and curvature priors. The resulting shape exhibits high consistency even in the presence of intensity or contrast variations. Subsequently, a multiscale nonrigid registration is performed to seek a regular deformation field that minimizes shape discrepancy in the vicinity of GOIs. To establish the working principle, realistic 2D and 3D images were subject to simulated nonrigid motion and synthetic intensity variations, so as to enable quantitative evaluation of registration performance. The proposed method was benchmarked against three alternative registration approaches, specifically, optical flow, B-spline based mutual information, and multimodality demons. When intensity consistency was satisfied, all methods had comparable registration accuracy for the GOIs. When intensities among registration pairs were inconsistent, however, the proposed method yielded pronounced improvement in registration accuracy, with an approximate fivefold reduction in mean absolute error (MAE = 2.25 mm, SD = 0.98 mm), compared to optical flow (MAE = 9.23 mm, SD = 5.36 mm), B-spline based mutual information (MAE = 9.57 mm, SD = 8.74 mm) and multimodality demons (MAE = 10.07 mm, SD = 4.03 mm).
Applying the proposed method on a real MR image sequence also provided qualitatively appealing results, demonstrating good feasibility and applicability of the proposed method. The authors have developed a novel method to estimate the nonrigid motion of GOIs in the presence of spatial intensity and contrast variations, taking advantage of robust shape features. Quantitative analysis and qualitative evaluation demonstrated good promise of the proposed method. Further clinical assessment and validation is being performed.

  11. Homodyne chiral polarimetry for measuring thermo-optic refractive index variations.

    PubMed

    Twu, Ruey-Ching; Wang, Jhao-Sheng

    2015-10-10

    Novel reflection-type homodyne chiral polarimetry is proposed for measuring the refractive index variations of a transparent plate under thermal impact. The experimental results show it is a simple and useful method for providing accurate measurements of refractive index variations. The measurement can reach a resolution of 7×10⁻⁵.

  12. Confidence bounds for normal and lognormal distribution coefficients of variation

    Treesearch

    Steve Verrill

    2003-01-01

    This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...
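
    One of the approximate approaches such comparisons cover can be sketched directly: a delta-method (normal-approximation) confidence interval for the coefficient of variation. This is an illustrative approximation, not the exact method evaluated in the paper:

```python
import math

def cv_confidence_interval(xs, z=1.96):
    # Delta-method approximation: for normal samples, the standard error of
    # the sample CV k is roughly k * sqrt(1/(2(n-1)) + k^2/n).
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)
    k = math.sqrt(var) / mean
    se = k * math.sqrt(1.0 / (2 * (n - 1)) + k * k / n)
    return k - z * se, k + z * se
```

    Consistent with the abstract, such approximations deteriorate precisely for large coefficients of variation and small sample sizes, which is where the exact approach is preferable.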

  13. Subtelomeric Rearrangements and Copy Number Variations in People with Intellectual Disabilities

    ERIC Educational Resources Information Center

    Christofolini, D. M.; De Paula Ramos, M. A.; Kulikowski, L. D.; Da Silva Bellucco, F. T.; Belangero, S. I. N.; Brunoni, D.; Melaragno, M. I.

    2010-01-01

    Background: The most prevalent type of structural variation in the human genome is represented by copy number variations that can affect transcription levels, sequence, structure and function of genes. Method: In the present study, we used the multiplex ligation-dependent probe amplification (MLPA) technique and quantitative PCR for the detection…

  14. JOINT AND INDIVIDUAL VARIATION EXPLAINED (JIVE) FOR INTEGRATED ANALYSIS OF MULTIPLE DATA TYPES.

    PubMed

    Lock, Eric F; Hoadley, Katherine A; Marron, J S; Nobel, Andrew B

    2013-03-01

    Research in several fields now requires the analysis of datasets in which multiple high-dimensional types of data are available for a common set of objects. In particular, The Cancer Genome Atlas (TCGA) includes data from several diverse genomic technologies on the same cancerous tumor samples. In this paper we introduce Joint and Individual Variation Explained (JIVE), a general decomposition of variation for the integrated analysis of such datasets. The decomposition consists of three terms: a low-rank approximation capturing joint variation across data types, low-rank approximations for structured variation individual to each data type, and residual noise. JIVE quantifies the amount of joint variation between data types, reduces the dimensionality of the data, and provides new directions for the visual exploration of joint and individual structure. The proposed method represents an extension of Principal Component Analysis and has clear advantages over popular two-block methods such as Canonical Correlation Analysis and Partial Least Squares. A JIVE analysis of gene expression and miRNA data on Glioblastoma Multiforme tumor samples reveals gene-miRNA associations and provides better characterization of tumor types.
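
    The three-term decomposition can be illustrated with a simplified, one-pass sketch. JIVE itself estimates the joint and individual subspaces iteratively and under orthogonality constraints; this is only a schematic with made-up ranks:

```python
import numpy as np

def low_rank(M, r):
    # Best rank-r approximation via truncated SVD.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def jive_sketch(X1, X2, r_joint, r1, r2):
    # Joint variation: low-rank structure of the row-concatenated data.
    X = np.vstack([X1, X2])
    J = low_rank(X, r_joint)
    J1, J2 = J[: X1.shape[0]], J[X1.shape[0]:]
    # Individual variation: low-rank structure of each block's residual.
    A1 = low_rank(X1 - J1, r1)
    A2 = low_rank(X2 - J2, r2)
    # Whatever remains is treated as residual noise.
    return (J1, A1, X1 - J1 - A1), (J2, A2, X2 - J2 - A2)
```

    Each data type is thus written as joint structure plus individual structure plus noise, the decomposition whose exact, constrained form the paper develops.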

  15. Short and long periodic atmospheric variations between 25 and 200 km

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; Woodrum, A.

    1973-01-01

    Previously collected data on atmospheric pressure, density, temperature and winds between 25 and 200 km from sources including Meteorological Rocket Network data, ROBIN falling sphere data, grenade release and pitot tube data, meteor winds, chemical release winds, satellite data, and others were analyzed by a daily difference method and results on the distribution statistics, magnitude, and spatial structure of gravity wave and planetary wave atmospheric variations are presented. Time structure of the gravity wave variations were determined by the analysis of residuals from harmonic analysis of time series data. Planetary wave contributions in the 25-85 km range were discovered and found to have significant height and latitudinal variation. Long period planetary waves, and seasonal variations were also computed by harmonic analysis. Revised height variations of the gravity wave contributions in the 25 to 85 km height range were computed. An engineering method and design values for gravity wave magnitudes and wave lengths are given to be used for such tasks as evaluating the effects on the dynamical heating, stability and control of spacecraft such as the space shuttle vehicle in launch or reentry trajectories.

  16. Money for health: the equivalent variation of cardiovascular diseases.

    PubMed

    Groot, Wim; Van Den Brink, Henriëtte Maassen; Plug, Erik

    2004-09-01

    This paper introduces a new method to calculate the extent to which individuals are willing to trade money for improvements in their health status. An individual welfare function of income (WFI) is applied to calculate the equivalent income variation of health impairments. We believe that this approach avoids various drawbacks of alternative willingness-to-pay methods. The WFI is used to calculate the equivalent variation of cardiovascular diseases. It is found that for a 25 year old male the equivalent variation of a heart disease ranges from 114,000 euro to 380,000 euro depending on the welfare level. This is about 10,000 euro - 30,000 euro for an additional life year. The equivalent variation declines with age and is about the same for men and women. The estimates further vary by discount rate chosen. The estimates of the equivalent variation are generally higher than the money spent on most heart-related medical interventions per QALY. The cost-benefit analysis shows that for most interventions the value of the health benefits exceeds the costs. Heart transplants seem to be too costly and only beneficial if patients are young.

  17. Identification and ranking of environmental threats with ecosystem vulnerability distributions.

    PubMed

    Zijp, Michiel C; Huijbregts, Mark A J; Schipper, Aafke M; Mulder, Christian; Posthuma, Leo

    2017-08-24

    Responses of ecosystems to human-induced stress vary in space and time, because both stressors and ecosystem vulnerabilities vary in space and time. Presently, ecosystem impact assessments mainly take into account variation in stressors, without considering variation in ecosystem vulnerability. We developed a method to address ecosystem vulnerability variation by quantifying ecosystem vulnerability distributions (EVDs) based on monitoring data of local species compositions and environmental conditions. The method incorporates spatial variation of both abiotic and biotic variables to quantify variation in responses among species and ecosystems. We show that EVDs can be derived based on a selection of locations, existing monitoring data and a selected impact boundary, and can be used in stressor identification and ranking for a region. A case study on Ohio's freshwater ecosystems, with freshwater fish as target species group, showed that physical habitat impairment and nutrient loads ranked highest as current stressors, with species losses higher than 5% for at least 6% of the locations. EVDs complement existing approaches of stressor assessment and management, which typically account only for variability in stressors, by accounting for variation in the vulnerability of the responding ecosystems.

  18. An examination of the hexokinase method for serum glucose assay using external quality assessment data.

    PubMed

    Westwood, A; Bullock, D G; Whitehead, T P

    1986-01-01

    Hexokinase methods for serum glucose assay appeared to give slightly but consistently higher inter-laboratory coefficients of variation than all methods combined in the UK External Quality Assessment Scheme; their performance over a two-year period was therefore compared with that for three groups of glucose oxidase methods. This assessment showed no intrinsic inferiority in the hexokinase method. The greater variation may be due to the more heterogeneous group of instruments, particularly discrete analysers, on which the method is used. The Beckman Glucose Analyzer and Astra group (using a glucose oxidase method) showed the least inter-laboratory variability but also the lowest mean value. No comment is offered on the absolute accuracy of any of the methods.

  19. Fast magnetic resonance imaging based on high degree total variation

    NASA Astrophysics Data System (ADS)

    Wang, Sujie; Lu, Liangliang; Zheng, Junbao; Jiang, Mingfeng

    2018-04-01

    In order to eliminate the artifacts and "staircase effect" of total variation in compressive sensing MRI, a high degree total variation model is proposed for dynamic MRI reconstruction. The high degree total variation regularization term is used as a constraint to reconstruct the magnetic resonance image, and an iterative weighted MM algorithm is proposed to solve the convex optimization problem of the reconstructed MR image model. In addition, one set of cardiac magnetic resonance data is used to verify the proposed algorithm for MRI. The results show that the high degree total variation method has a better reconstruction effect than total variation and total generalized variation, obtaining higher reconstruction SNR and better structural similarity.

  20. RESULTS FROM EPA FUNDED RESEARCH PROGRAMS ON THE IMPORTANCE OF PURGE VOLUME, SAMPLE VOLUME, SAMPLE FLOW RATE AND TEMPORAL VARIATIONS ON SOIL GAS CONCENTRATIONS

    EPA Science Inventory

    Two research studies funded and overseen by EPA have been conducted since October 2006 on soil gas sampling methods and variations in shallow soil gas concentrations with the purpose of improving our understanding of soil gas methods and data for vapor intrusion applications. Al...

  1. Assimilation of HF Radar Observations in the Chesapeake-Delaware Bay Region Using the Navy Coastal Ocean Model (NCOM) and the Four-Dimensional Variational (4DVAR) Method

    DTIC Science & Technology

    2015-01-01

    6. Zhang WG, Wilkin JL, Arango HG. Towards an integrated observation and modeling system in the New York Bight using variational methods. Part 1... 1992;7:262-72. 17. Rosmond TE, Teixeira J, Peng M, Hogan TF, Pauley R. Navy operational global

  2. Geographic variation in forest composition and precipitation predict the synchrony of forest insect outbreaks

    Treesearch

    Kyle J. Haynes; Andrew M. Liebhold; Ottar N. Bjørnstad; Andrew J. Allstadt; Randall S. Morin

    2018-01-01

    Evaluating the causes of spatial synchrony in population dynamics in nature is notoriously difficult due to a lack of data and appropriate statistical methods. Here, we use a recently developed method, a multivariate extension of the local indicators of spatial autocorrelation statistic, to map geographic variation in the synchrony of gypsy moth outbreaks. Regression...

  3. Interactively Applying the Variational Method to the Dihydrogen Molecule: Exploring Bonding and Antibonding

    ERIC Educational Resources Information Center

    Cruzeiro, Vinícius Wilian D.; Roitberg, Adrian; Polfer, Nicolas C.

    2016-01-01

    In this work we are going to present how an interactive platform can be used as a powerful tool to allow students to better explore a foundational problem in quantum chemistry: the application of the variational method to the dihydrogen molecule using simple Gaussian trial functions. The theoretical approach for the hydrogen atom is quite…

  4. Surface-induced brightness temperature variations and their effects on detecting thin cirrus clouds using IR emission channels in the 8-12 microns region

    NASA Technical Reports Server (NTRS)

    Gao, Bo-Cai; Wiscombe, W. J.

    1994-01-01

    A method for detecting cirrus clouds in terms of brightness temperature differences between narrowbands at 8, 11, and 12 microns has been proposed by Ackerman et al. In this method, the variation of emissivity with wavelength for different surface targets was not taken into consideration. Based on state-of-the-art laboratory measurements of reflectance spectra of terrestrial materials by Salisbury and D'Aria, it is found that the brightness temperature differences between the 8- and 11-microns bands for soils, rocks, and minerals, and dry vegetation can vary between approximately -8 and +8 K due solely to surface emissivity variations. The large brightness temperature differences are sufficient to cause false detection of cirrus clouds from remote sensing data acquired over certain surface targets using the 8-11-12-microns method directly. It is suggested that the 8-11-12-microns method should be improved to include the surface emissivity effects. In addition, it is recommended that in the future the variation of surface emissivity with wavelength should be taken into account in algorithms for retrieving surface temperatures and low-level atmospheric temperature and water vapor profiles.
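
    The size of such emissivity-driven brightness temperature differences can be checked with the Planck function and its inverse. A sketch considering surface emission only (atmospheric effects and the measured emissivity spectra of Salisbury and D'Aria are ignored; the 0.8 emissivity below is an illustrative value):

```python
import math

C1 = 1.191042e-16   # 2*h*c**2  [W m^2 sr^-1]
C2 = 1.438777e-2    # h*c/k_B   [m K]

def planck(lam, T):
    # Spectral radiance of a blackbody at wavelength lam [m], temperature T [K].
    return C1 / (lam**5 * (math.exp(C2 / (lam * T)) - 1.0))

def brightness_temp(lam, L):
    # Inverse Planck function: temperature of a blackbody emitting radiance L.
    return C2 / (lam * math.log(1.0 + C1 / (lam**5 * L)))

def bt_difference(T_surface, eps8, eps11, lam8=8e-6, lam11=11e-6):
    # Brightness temperature difference (8 minus 11 microns) caused purely
    # by band-dependent surface emissivity.
    L8 = eps8 * planck(lam8, T_surface)
    L11 = eps11 * planck(lam11, T_surface)
    return brightness_temp(lam8, L8) - brightness_temp(lam11, L11)
```

    A grey drop in the 8-micron emissivity alone already shifts the 8-minus-11-micron brightness temperature difference by several kelvin at 300 K, the order of the -8 to +8 K spread quoted above.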

  5. Variational approach to direct and inverse problems of atmospheric pollution studies

    NASA Astrophysics Data System (ADS)

    Penenko, Vladimir; Tsvetova, Elena; Penenko, Alexey

    2016-04-01

    We present the development of a variational approach for solving interrelated problems of atmospheric hydrodynamics and chemistry concerning air pollution transport and transformations. The proposed approach allows us to carry out complex studies of different-scale physical and chemical processes using the methods of direct and inverse modeling [1-3]. We formulate the problems of risk/vulnerability and uncertainty assessment, sensitivity studies, variational data assimilation procedures [4], etc. A computational technology of constructing consistent mathematical models and methods of their numerical implementation is based on the variational principle in the weak constraint formulation specifically designed to account for uncertainties in models and observations. Algorithms for direct and inverse modeling are designed with the use of global and local adjoint problems. Implementing the idea of adjoint integrating factors provides unconditionally monotone and stable discrete-analytic approximations for convection-diffusion-reaction problems [5,6]. The general framework is applied to the direct and inverse problems for the models of transport and transformation of pollutants in Siberian and Arctic regions. The work has been partially supported by the RFBR grant 14-01-00125 and RAS Presidium Program I.33P. References: 1. V. Penenko, A. Baklanov, E. Tsvetova and A. Mahura. Direct and inverse problems in a variational concept of environmental modeling // Pure and Applied Geophysics, 2012, v. 169, pp. 447-465. 2. V.V. Penenko, E.A. Tsvetova, and A.V. Penenko. Development of variational approach for direct and inverse problems of atmospheric hydrodynamics and chemistry // Izvestiya, Atmospheric and Oceanic Physics, 2015, v. 51, no. 3, pp. 311-319, DOI: 10.1134/S0001433815030093. 3. V.V. Penenko, E.A. Tsvetova, A.V. Penenko. Methods based on the joint use of models and observational data in the framework of variational approach to forecasting weather and atmospheric composition quality // Russian Meteorology and Hydrology, v. 40, issue 6, pp. 365-373, DOI: 10.3103/S1068373915060023. 4. A.V. Penenko and V.V. Penenko. Direct data assimilation method for convection-diffusion models based on splitting scheme // Computational Technologies, 2014, 19(4), pp. 69-83. 5. V.V. Penenko, E.A. Tsvetova, A.V. Penenko. Variational approach and Euler's integrating factors for environmental studies // Computers and Mathematics with Applications, 2014, v. 67, issue 12, pp. 2240-2256, DOI: 10.1016/j.camwa.2014.04.004. 6. V.V. Penenko, E.A. Tsvetova. Variational methods of constructing monotone approximations for atmospheric chemistry models // Numerical Analysis and Applications, 2013, v. 6, issue 3, pp. 210-220, DOI: 10.1134/S199542391303004X.

  6. Satellite and Model Analysis of the Atmospheric Moisture Budget in High Latitudes

    NASA Technical Reports Server (NTRS)

    Bromwich, David H.; Chen, Qui-Shi

    2001-01-01

    In order to understand variations of accumulation over Greenland, it is necessary to investigate precipitation and its variations. Observations of precipitation over Greenland are limited and generally inaccurate, but the analyzed wind, geopotential height, and moisture fields are available for recent years. The objective of this study is to enhance the dynamic method for retrieving high resolution precipitation over Greenland from the analyzed fields. The dynamic method enhanced in this study is referred to as the improved dynamic method.

  7. Common methods for fecal sample storage in field studies yield consistent signatures of individual identity in microbiome sequencing data.

    PubMed

    Blekhman, Ran; Tang, Karen; Archie, Elizabeth A; Barreiro, Luis B; Johnson, Zachary P; Wilson, Mark E; Kohn, Jordan; Yuan, Michael L; Gesquiere, Laurence; Grieneisen, Laura E; Tung, Jenny

    2016-08-16

    Field studies of wild vertebrates are frequently associated with extensive collections of banked fecal samples: unique resources for understanding ecological, behavioral, and phylogenetic effects on the gut microbiome. However, we do not understand whether sample storage methods confound the ability to investigate interindividual variation in gut microbiome profiles. Here, we extend previous work on storage methods for gut microbiome samples by comparing immediate freezing, the gold standard of preservation, to three methods commonly used in vertebrate field studies: lyophilization, storage in ethanol, and storage in RNAlater. We found that the signature of individual identity consistently outweighed storage effects: alpha diversity and beta diversity measures were significantly correlated across methods, and while samples often clustered by donor, they never clustered by storage method. Provided that all analyzed samples are stored the same way, banked fecal samples therefore appear highly suitable for investigating variation in gut microbiota. Our results open the door to a much-expanded perspective on variation in the gut microbiome across species and ecological contexts.
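
    The core claim, that the donor signature outweighs storage effects, rests on diversity measures staying correlated across methods. A minimal sketch with invented taxon counts (hypothetical numbers, not the study's data) computes Shannon alpha diversity for paired frozen/ethanol samples and their correlation.

```python
import math

def shannon(counts):
    """Shannon alpha diversity H = -sum(p * ln p) over taxon counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def pearson(x, y):
    """Pearson correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical taxon-count tables: five donors profiled after freezing
# and again after ethanol storage (values invented for illustration).
frozen  = [[40, 30, 20, 10], [70, 10, 10, 10], [25, 25, 25, 25],
           [55, 25, 15, 5], [90, 5, 3, 2]]
ethanol = [[38, 32, 18, 12], [65, 15, 12, 8], [27, 23, 26, 24],
           [50, 28, 14, 8], [85, 8, 4, 3]]

h_frozen = [shannon(s) for s in frozen]
h_ethanol = [shannon(s) for s in ethanol]
r = pearson(h_frozen, h_ethanol)   # high r: the donor signal survives storage
```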

  8. Thinner regions of intracranial aneurysm wall correlate with regions of higher wall shear stress: a 7.0 tesla MRI study

    PubMed Central

    Blankena, Roos; Kleinloog, Rachel; Verweij, Bon H.; van Ooij, Pim; ten Haken, Bennie; Luijten, Peter R.; Rinkel, Gabriel J.E.; Zwanenburg, Jaco J.M.

    2016-01-01

    Purpose To develop a method for semi-quantitative wall thickness assessment on in vivo 7.0 tesla (7T) MRI images of intracranial aneurysms for studying the relation between apparent aneurysm wall thickness and wall shear stress (WSS). Materials and Methods Wall thickness was analyzed in 11 unruptured aneurysms in 9 patients, who underwent 7T MRI with a TSE-based vessel wall sequence (0.8 mm isotropic resolution). A custom analysis program determined the in vivo aneurysm wall intensities, which were normalized to the signal of nearby brain tissue and used as a measure of apparent wall thickness (AWT). Spatial wall thickness variation was determined as the interquartile range in AWT (the middle 50% of the AWT range). Wall shear stress was determined using phase contrast MRI (0.5 mm isotropic resolution). We performed visual and statistical comparisons (Pearson’s correlation) to study the relation between wall thickness and wall shear stress. Results 3D colored AWT maps of the aneurysms showed spatial AWT variation, which ranged from 0.07 to 0.53, with a mean variation of 0.22 (a variation of 1.0 roughly means a wall thickness variation of one voxel, 0.8 mm). In all aneurysms, AWT was inversely related to WSS (mean correlation coefficient −0.35, P<0.05). Conclusions A method was developed to measure wall thickness semi-quantitatively using 7T MRI. An inverse correlation between wall shear stress and AWT was determined. In future studies, this non-invasive method can be used to assess spatial wall thickness variation in relation to pathophysiologic processes such as aneurysm growth and rupture. PMID:26892986
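
    The AWT measure above is just a normalized intensity, and its spatial variation an interquartile range, so the analysis pipeline is easy to sketch. All numbers below are synthetic stand-ins, including the assumed inverse AWT-WSS relation; they are not study data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical wall intensities sampled along one aneurysm wall (arbitrary
# units) and the nearby brain-tissue signal used for normalization.
wall_intensity = rng.uniform(80.0, 160.0, size=200)
brain_signal = 120.0

awt = wall_intensity / brain_signal          # apparent wall thickness proxy
q1, q3 = np.percentile(awt, [25, 75])
spatial_variation = q3 - q1                  # interquartile range of AWT

# Illustrative inverse relation: thinner wall where shear is higher,
# plus measurement noise (synthetic, not the study's measurements).
wss = 2.0 - 1.2 * awt + rng.normal(0.0, 0.1, size=awt.size)
r = np.corrcoef(awt, wss)[0, 1]              # negative, as in the study
```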

  9. Identification of structural variation in mouse genomes.

    PubMed

    Keane, Thomas M; Wong, Kim; Adams, David J; Flint, Jonathan; Reymond, Alexandre; Yalcin, Binnaz

    2014-01-01

    Structural variation is variation in the structure of DNA regions affecting DNA sequence length and/or orientation. It generally includes deletions, insertions, copy-number gains, inversions, and transposable elements. Traditionally, the identification of structural variation in genomes has been challenging. However, with recent advances in high-throughput DNA sequencing and paired-end mapping (PEM) methods, the ability to identify structural variants and their association with human diseases has improved considerably. In this review, we describe our current knowledge of structural variation in the mouse, one of the prime model systems for studying human diseases and mammalian biology. We further present the evolutionary implications of structural variation on transposable elements. We conclude with future directions for the study of structural variation in mouse genomes that will increase our understanding of the molecular architecture and functional consequences of structural variation.

  10. Using Statistical Process Control to Drive Improvement in Neonatal Care: A Practical Introduction to Control Charts.

    PubMed

    Gupta, Munish; Kaplan, Heather C

    2017-09-01

    Quality improvement (QI) is based on measuring performance over time, and variation in data measured over time must be understood to guide change and make optimal improvements. Common cause variation is natural variation owing to factors inherent to any process; special cause variation is unnatural variation owing to external factors. Statistical process control methods, and particularly control charts, are robust tools for understanding data over time and identifying common and special cause variation. This review provides a practical introduction to the use of control charts in health care QI, with a focus on neonatology. Copyright © 2017 Elsevier Inc. All rights reserved.
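
    The common/special cause distinction becomes operational through control limits. Below is a minimal individuals (XmR) chart sketch with hypothetical weekly rates; sigma is estimated from the average moving range, the usual estimator for individuals charts.

```python
def control_limits(samples):
    """Shewhart individuals-chart limits: mean +/- 3 sigma, with sigma
    estimated as (average moving range) / 1.128."""
    n = len(samples)
    mean = sum(samples) / n
    moving_ranges = [abs(samples[i] - samples[i - 1]) for i in range(1, n)]
    sigma = (sum(moving_ranges) / len(moving_ranges)) / 1.128
    return mean - 3 * sigma, mean, mean + 3 * sigma

def special_cause_points(samples):
    """Flag points outside the 3-sigma limits (the simplest run rule)."""
    lcl, _, ucl = control_limits(samples)
    return [i for i, x in enumerate(samples) if x < lcl or x > ucl]

# Hypothetical weekly infection rates: a stable process, then one outlier.
rates = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2, 4.3, 3.7, 4.1, 9.5]
flagged = special_cause_points(rates)   # only the final point is flagged
```

    In practice additional run rules (e.g., several consecutive points on one side of the center line) are layered on to catch smaller, sustained shifts.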

  11. Mass Spectrometry Method to Measure Membrane Proteins in Dried Blood Spots for the Detection of Blood Doping Practices in Sport.

    PubMed

    Cox, Holly D; Eichner, Daniel

    2017-09-19

    The dried blood spot (DBS) matrix has significant utility for applications in the field where venous blood collection and timely shipment of labile blood samples are difficult. Unfortunately, protein measurement in DBS is hindered by high-abundance proteins and matrix interference that increases with hematocrit. We developed a DBS method to enrich for membrane proteins and remove soluble proteins and matrix interference. Following a wash in a series of buffers, the membrane proteins are digested with trypsin and quantitated by parallel reaction monitoring mass spectrometry methods. The DBS method was applied to the quantification of four cell-specific cluster of differentiation (CD) proteins used to count cells by flow cytometry: band 3 (CD233), CD71, CD45, and CD41. We demonstrate that the DBS method counts low-abundance cell types such as immature reticulocytes as well as high-abundance cell types such as red blood cells, white blood cells, and platelets. When tested in 82 individuals, counts obtained by the DBS method demonstrated good agreement with flow cytometry and automated hematology analyzers. Importantly, the method allows longitudinal monitoring of CD protein concentration and calculation of interindividual variation, which is difficult by other methods. Interindividual variation of band 3 and CD45 was low, 6 and 8%, respectively, while variation of CD41 and CD71 was higher, 18 and 78%, respectively. Longitudinal measurement of CD71 concentration in DBS over an 8-week period demonstrated intraindividual variation of 17.1-38.7%. Thus, the method may allow stable longitudinal measurement of blood parameters currently monitored to detect blood doping practices.
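
    The interindividual variation figures quoted above are coefficients of variation. A small sketch with invented concentrations, chosen only to mirror the reported pattern of a stable marker (band 3) versus a highly variable one (CD71):

```python
import statistics

def cv_percent(values):
    """Coefficient of variation (%): sample standard deviation over mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical per-subject concentrations in arbitrary units (not real data).
band3 = [100, 104, 97, 102, 99, 101, 95, 103]
cd71 = [10, 35, 4, 22, 60, 8, 15, 45]

cv_band3 = cv_percent(band3)   # small: marker is stable across individuals
cv_cd71 = cv_percent(cd71)     # large: reticulocyte marker varies widely
```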

  12. Relevant Feature Set Estimation with a Knock-out Strategy and Random Forests

    PubMed Central

    Ganz, Melanie; Greve, Douglas N.; Fischl, Bruce; Konukoglu, Ender

    2015-01-01

    Group analysis of neuroimaging data is a vital tool for identifying anatomical and functional variations related to diseases as well as normal biological processes. The analyses are often performed on a large number of highly correlated measurements using a relatively smaller number of samples. Despite the correlation structure, the most widely used approach is to analyze the data using univariate methods followed by post-hoc corrections that try to account for the data’s multivariate nature. Although widely used, this approach may fail to recover from the adverse effects of the initial analysis when local effects are not strong. Multivariate pattern analysis (MVPA) is a powerful alternative to the univariate approach for identifying relevant variations. Jointly analyzing all the measures, MVPA techniques can detect global effects even when individual local effects are too weak to detect with univariate analysis. Current approaches are successful in identifying variations that yield highly predictive and compact models. However, they suffer from lessened sensitivity and instabilities in identification of relevant variations. Furthermore, current methods’ user-defined parameters are often unintuitive and difficult to determine. In this article, we propose a novel MVPA method for group analysis of high-dimensional data that overcomes the drawbacks of the current techniques. Our approach explicitly aims to identify all relevant variations using a “knock-out” strategy and the Random Forest algorithm. In evaluations with synthetic datasets the proposed method achieved substantially higher sensitivity and accuracy than the state-of-the-art MVPA methods, and outperformed the univariate approach when the effect size is low. In experiments with real datasets the proposed method identified regions beyond the univariate approach, while other MVPA methods failed to replicate the univariate results. 
More importantly, in a reproducibility study with the well-known ADNI dataset the proposed method yielded higher stability and power than the univariate approach. PMID:26272728
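
    The knock-out strategy itself can be sketched independently of Random Forests: score every feature, record the best one, "knock it out" by destroying its association with the labels, and repeat until nothing relevant remains. The toy below substitutes a simple class-mean-difference score for the paper's Random Forest relevance measure, so it illustrates only the iterative strategy, not the published method.

```python
import random

def relevance(feature, labels):
    """Crude relevance score: absolute difference of the class means."""
    a = [x for x, y in zip(feature, labels) if y == 0]
    b = [x for x, y in zip(feature, labels) if y == 1]
    return abs(sum(a) / len(a) - sum(b) / len(b))

def knock_out(features, labels, score_threshold):
    """Select the most relevant feature, knock it out (replace it with
    label-independent noise), and repeat until no feature scores above
    the threshold."""
    rng = random.Random(0)
    features = [list(f) for f in features]   # work on a copy
    selected = []
    while True:
        scores = [relevance(f, labels) for f in features]
        best = max(range(len(scores)), key=lambda i: scores[i])
        if scores[best] < score_threshold:
            break
        selected.append(best)
        features[best] = [rng.gauss(0.0, 1.0) for _ in labels]  # knock out
    return selected

# Synthetic data: features 0 and 2 carry class signal, feature 1 is noise.
rng = random.Random(1)
labels = [0] * 50 + [1] * 50
f0 = [rng.gauss(2.0 * y, 1.0) for y in labels]
f1 = [rng.gauss(0.0, 1.0) for _ in labels]
f2 = [rng.gauss(1.5 * y, 1.0) for y in labels]
picked = knock_out([f0, f1, f2], labels, score_threshold=0.8)
```

    Because the strongest feature is removed before rescoring, a weaker but genuinely relevant feature (here f2) is still identified rather than being masked, which is the point of the strategy.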

  13. Vibration-based structural health monitoring using adaptive statistical method under varying environmental condition

    NASA Astrophysics Data System (ADS)

    Jin, Seung-Seop; Jung, Hyung-Jo

    2014-03-01

    It is well known that the dynamic properties of a structure, such as its natural frequencies, depend not only on damage but also on environmental conditions (e.g., temperature). The variation in dynamic characteristics of a structure due to environmental conditions may mask damage to the structure. Without taking the change of environmental conditions into account, false-positive or false-negative damage diagnoses may occur, making structural health monitoring unreliable. To address this problem, many researchers construct a regression model that relates structural responses to environmental factors. The key to the success of this approach is formulating the relationship between the input and output variables of the regression model so as to account for the environmental variations. However, it is quite challenging to determine in advance the proper environmental variables and measurement locations that fully represent the relationship between the structural responses and the environmental variations. One alternative (i.e., novelty detection) is to remove the variations caused by environmental factors from the structural responses by using multivariate statistical analysis (e.g., principal component analysis (PCA), factor analysis, etc.). The success of this method depends strongly on the accuracy of the description of the normal condition. Generally, there is no prior information on the normal condition during data acquisition, so the normal condition is often determined subjectively, with human intervention. The proposed method is a novel adaptive multivariate statistical analysis for structural damage detection under environmental change. One advantage of this method is its generative learning ability, which captures the intrinsic characteristics of the normal condition. The proposed method is tested on numerically simulated data for a range of measurement noise levels under environmental variation.
A comparative study with conventional methods (i.e., fixed reference scheme) demonstrates the superior performance of the proposed method for structural damage detection.
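
    The novelty-detection alternative can be sketched with plain PCA: fit the principal subspace on baseline responses, so that environmental (common) variation is captured by the leading components, then score new observations by their residual outside that subspace. The sketch below uses synthetic numbers and a fixed baseline; the adaptive, generative aspects of the proposed method are not reproduced.

```python
import numpy as np

def fit_pca(baseline, n_components):
    """Mean and leading principal directions of baseline feature vectors."""
    mean = baseline.mean(axis=0)
    _, _, vt = np.linalg.svd(baseline - mean, full_matrices=False)
    return mean, vt[:n_components]

def novelty_score(x, mean, components):
    """Residual norm outside the retained subspace: environmental variation
    lives in the leading components, damage shows up in the residual."""
    centered = x - mean
    projected = components.T @ (components @ centered)
    return float(np.linalg.norm(centered - projected))

rng = np.random.default_rng(0)
# Synthetic "natural frequencies" of four modes driven by one environmental
# factor (temperature) plus noise -- hypothetical numbers.
temp = rng.uniform(-1.0, 1.0, size=100)
loading = np.array([0.5, 0.3, 0.4, 0.2])
baseline = 5.0 + np.outer(temp, loading) + rng.normal(0.0, 0.01, (100, 4))

mean, comps = fit_pca(baseline, n_components=1)
healthy = 5.0 + 0.7 * loading                        # on the normal trajectory
damaged = healthy - np.array([0.0, 0.15, 0.0, 0.0])  # local frequency drop

s_healthy = novelty_score(healthy, mean, comps)
s_damaged = novelty_score(damaged, mean, comps)      # much larger residual
```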

  14. On some variational acceleration techniques and related methods for local refinement

    NASA Astrophysics Data System (ADS)

    Teigland, Rune

    1998-10-01

    This paper shows that the well-known variational acceleration method described by Wachspress (E. Wachspress, Iterative Solution of Elliptic Systems and Applications to the Neutron Diffusion Equations of Reactor Physics, Prentice-Hall, Englewood Cliffs, NJ, 1966), later generalized to multiple levels as the additive correction multigrid method (B.R. Hutchinson and G.D. Raithby, Numer. Heat Transf., 9, 511-537 (1986)), is similar to the FAC method of McCormick and Thomas (S.F. McCormick and J.W. Thomas, Math. Comput., 46, 439-456 (1986)) and related multilevel methods. The performance of the method is demonstrated for some simple model problems using local refinement, and suggestions for improving the performance of the method are given.
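
    The additive correction idea can be sketched for a 1D Poisson model problem: restriction by block summation, piecewise-constant prolongation, a Galerkin coarse operator, and a weighted-Jacobi smoother. This is a generic two-level illustration of additive correction multigrid, not Teigland's local-refinement formulation.

```python
import numpy as np

def poisson_matrix(n):
    """1D Poisson model problem: tridiagonal (-1, 2, -1), Dirichlet ends."""
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def additive_correction_solve(A, b, block=4, cycles=50, omega=2.0 / 3.0):
    """Two-level additive-correction iteration: a weighted-Jacobi smoothing
    step followed by a coarse correction whose coarse equations are block
    sums of the fine equations (piecewise-constant prolongation)."""
    n = len(b)
    nc = n // block
    R = np.zeros((nc, n))
    for i in range(nc):
        R[i, i * block:(i + 1) * block] = 1.0   # summation restriction
    P = R.T                                      # piecewise-constant prolongation
    Ac = R @ A @ P                               # Galerkin coarse operator
    D = np.diag(A)
    x = np.zeros(n)
    for _ in range(cycles):
        x = x + omega * (b - A @ x) / D          # weighted-Jacobi smoothing
        r = b - A @ x
        x = x + P @ np.linalg.solve(Ac, R @ r)   # coarse additive correction
    return x

n = 64
A = poisson_matrix(n)
b = np.ones(n)
x = additive_correction_solve(A, b)
residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

    The coarse correction handles the smooth error that Jacobi barely touches, while the smoother handles the oscillatory error that block sums cannot see; together they contract all modes.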

  15. Thickness and resistivity variations over the upper surface of the human skull.

    PubMed

    Law, S K

    1993-01-01

    A study of skull thickness and resistivity variations over the upper surface was made for an adult human skull. Physical measurements of thickness and qualitative analysis of photographs and CT scans of the skull were performed to determine internal and external features of the skull. Resistivity measurements were made using the four-electrode method and ranged from 1360 to 21400 Ohm-cm with an overall mean of 7560 +/- 4130 Ohm-cm. The presence of sutures was found to decrease resistivity substantially. The absence of cancellous bone was found to increase resistivity, particularly for samples from the temporal bone. An inverse relationship between skull thickness and resistivity was determined for trilayer bone (n = 12, p < 0.001). The results suggest that the skull cannot be considered a uniform layer and that local resistivity variations should be incorporated into realistic geometric and resistive head models to improve resolution in EEG. Influences of these variations on head models, methods for determining these variations, and incorporation into realistic head models, are discussed.

  16. A Variational Formalism for the Radiative Transfer Equation and a Geostrophic, Hydrostatic Atmosphere: Prelude to Model 3

    NASA Technical Reports Server (NTRS)

    Achtemeier, Gary L.

    1991-01-01

    The second step in development of MODEL III is summarized. It combines the four radiative transfer equations of the first step with the equations for a geostrophic and hydrostatic atmosphere. This step is intended to bring radiance into a three dimensional balance with wind, height, and temperature. The use of the geostrophic approximation in place of the full set of primitive equations allows for an easier evaluation of how the inclusion of the radiative transfer equation increases the complexity of the variational equations. Seven different variational formulations were developed for geostrophic, hydrostatic, and radiative transfer equations. The first derivation was too complex to yield solutions that were physically meaningful. For the remaining six derivations, the variational method gave the same physical interpretation (the observed brightness temperatures could provide no meaningful input to a geostrophic, hydrostatic balance) at least through the problem solving methodology used in these studies. The variational method is presented and the Euler-Lagrange equations rederived for the geostrophic, hydrostatic, and radiative transfer equations.

  17. Biomechanical implications of intraspecific shape variation in chimpanzee crania: moving towards an integration of geometric morphometrics and finite element analysis

    PubMed Central

    Smith, Amanda L.; Benazzi, Stefano; Ledogar, Justin A.; Tamvada, Kelli; Smith, Leslie C. Pryor; Weber, Gerhard W.; Spencer, Mark A.; Dechow, Paul C.; Grosse, Ian R.; Ross, Callum F.; Richmond, Brian G.; Wright, Barth W.; Wang, Qian; Byron, Craig; Slice, Dennis E.; Strait, David S.

    2014-01-01

    In a broad range of evolutionary studies, an understanding of intraspecific variation is needed in order to contextualize and interpret the meaning of variation between species. However, mechanical analyses of primate crania using experimental or modeling methods typically encounter logistical constraints that force them to rely on data gathered from only one or a few individuals. This results in a lack of knowledge concerning the mechanical significance of intraspecific shape variation, which limits our ability to infer the significance of interspecific differences. This study uses geometric morphometric methods (GM) and finite element analysis (FEA) to examine the biomechanical implications of shape variation in chimpanzee crania, thereby providing a comparative context in which to interpret shape-related mechanical variation between hominin species. Six finite element models (FEMs) of chimpanzee crania were constructed from CT scans following shape-space Principal Component Analysis (PCA) of a matrix of 709 Procrustes coordinates (digitized onto 21 specimens) to identify the individuals at the extremes of the first three principal components. The FEMs were assigned the material properties of bone and were loaded and constrained to simulate maximal bites on the P3 and M2. Resulting strains indicate that intraspecific variation in cranial morphology is associated with quantitatively high levels of variation in strain magnitudes, but qualitatively little variation in the distribution of strain concentrations. Thus, interspecific comparisons should include considerations of the spatial patterning of strains rather than focus only on their magnitude. PMID:25529239

  18. Vertical Bridgman growth of Hg1-xMnxTe with variational withdrawal rate

    NASA Astrophysics Data System (ADS)

    Zhi, Gu; Wan-Qi, Jie; Guo-Qiang, Li; Long, Zhang

    2004-09-01

    Based on solute redistribution models, vertical Bridgman growth of Hg1-xMnxTe with a varying withdrawal rate is studied. Both theoretical analysis and experimental results show that, with an optimized variation of the withdrawal rate, the axial composition uniformity is improved and the crystal growth rate is also increased.

  19. Isogeometric Divergence-conforming B-splines for the Steady Navier-Stokes Equations

    DTIC Science & Technology

    2012-04-01

    ...discretizations produce pointwise divergence-free velocity fields and hence exactly satisfy mass conservation. Consequently, discrete variational formulations... variational formulation. Using a combination of an advective formulation, SUPG, PSPG, and grad-div stabilization, provably convergent numerical methods...

  20. Measurement and Socio-Demographic Variation of Social Capital in a Large Population-Based Survey

    ERIC Educational Resources Information Center

    Nieminen, Tarja; Martelin, Tuija; Koskinen, Seppo; Simpura, Jussi; Alanen, Erkki; Harkanen, Tommi; Aromaa, Arpo

    2008-01-01

    Objectives: The main objective of this study was to describe the variation of individual social capital according to socio-demographic factors, and to develop a suitable way to measure social capital for this purpose. The similarity of socio-demographic variation between the genders was also assessed. Data and methods: The study applied…

  1. BayesPI-BAR: a new biophysical model for characterization of regulatory sequence variations

    PubMed Central

    Wang, Junbai; Batmanov, Kirill

    2015-01-01

    Sequence variations in regulatory DNA regions are known to cause functionally important consequences for gene expression. DNA sequence variations may have an essential role in determining phenotypes and may be linked to disease; however, their identification through analysis of massive genome-wide sequencing data is a great challenge. In this work, a new computational pipeline, a Bayesian method for protein–DNA interaction with binding affinity ranking (BayesPI-BAR), is proposed for quantifying the effect of sequence variations on protein binding. BayesPI-BAR uses biophysical modeling of protein–DNA interactions to predict single nucleotide polymorphisms (SNPs) that cause significant changes in the binding affinity of a regulatory region for transcription factors (TFs). The method includes two new parameters (TF chemical potentials or protein concentrations and direct TF binding targets) that are neglected by previous methods. The new method is verified on 67 known human regulatory SNPs, of which 47 (70%) have predicted true TFs ranked in the top 10. Importantly, the performance of BayesPI-BAR, which uses principal component analysis to integrate multiple predictions from various TF chemical potentials, is found to be better than that of existing programs, such as sTRAP and is-rSNP, when evaluated on the same SNPs. BayesPI-BAR is a publicly available tool and is able to carry out parallelized computation, which helps to investigate a large number of TFs or SNPs and to detect disease-associated regulatory sequence variations in the sea of genome-wide noncoding regions. PMID:26202972
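
    The effect of a regulatory variant on TF binding is often summarized as the change in a motif score between the reference and alternate alleles. The sketch below uses an invented position weight matrix and a plain log-odds score; it is far simpler than BayesPI-BAR's biophysical model (no chemical potentials, no ranking over many TFs), and the motif and sequences are hypothetical.

```python
import math

# Toy position frequency matrix for a hypothetical TF (one dict per motif
# position) -- not a real motif, for illustration only.
pfm = [
    {'A': 0.7, 'C': 0.1, 'G': 0.1, 'T': 0.1},
    {'A': 0.1, 'C': 0.1, 'G': 0.7, 'T': 0.1},
    {'A': 0.1, 'C': 0.7, 'G': 0.1, 'T': 0.1},
    {'A': 0.1, 'C': 0.1, 'G': 0.1, 'T': 0.7},
]
background = 0.25

def pwm_score(seq):
    """Best log-odds score of the motif over all windows of seq."""
    width = len(pfm)
    best = float('-inf')
    for start in range(len(seq) - width + 1):
        s = sum(math.log2(pfm[i][seq[start + i]] / background)
                for i in range(width))
        best = max(best, s)
    return best

def delta_score(ref_seq, alt_seq):
    """Change in best binding score caused by a variant; a large negative
    value suggests the variant disrupts binding."""
    return pwm_score(alt_seq) - pwm_score(ref_seq)

ref = "TTAGCTAA"   # contains the consensus AGCT
alt = "TTAGATAA"   # SNP C>A at the third motif position
d = delta_score(ref, alt)   # negative: binding is weakened
```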

  2. Intra- and Inter-Fractional Variation Prediction of Lung Tumors Using Fuzzy Deep Learning

    PubMed Central

    Park, Seonyeong; Lee, Suk Jin; Weiss, Elisabeth

    2016-01-01

    Tumor movements should be accurately predicted to improve delivery accuracy and reduce unnecessary radiation exposure to healthy tissue during radiotherapy. The tumor movements pertaining to respiration are divided into intra-fractional variation occurring in a single treatment session and inter-fractional variation arising between different sessions. Most studies of patients’ respiration movements deal with intra-fractional variation. Previous studies of inter-fractional variation are rarely formulated mathematically and cannot predict movements well because the variation is inconsistent. Moreover, the computation time of the prediction should be reduced. To overcome these limitations, we propose a new predictor for intra- and inter-fractional data variation, called intra- and inter-fraction fuzzy deep learning (IIFDL), where FDL, equipped with breathing clustering, predicts the movement accurately and decreases the computation time. Through the experimental results, we validated that the IIFDL improved root-mean-square error (RMSE) by 29.98% and prediction overshoot by 70.93%, compared with existing methods. The results also showed that the IIFDL enhanced the average RMSE and overshoot by 59.73% and 83.27%, respectively. In addition, the average computation time of IIFDL was 1.54 ms for both intra- and inter-fractional variation, which was much smaller than that of existing methods. Therefore, the proposed IIFDL might achieve real-time estimation as well as better tracking techniques in radiotherapy. PMID:27170914

  3. [Toward exploration of morphological diversity of measurable traits of mammalian skull. 2. Scalar and vector parameters of the forms of group variation].

    PubMed

    Lisovskiĭ, A A; Pavlinov, I Ia

    2008-01-01

    Any morphospace is partitioned by the forms of group variation, and its structure is described by a set of scalar (range, overlap) and vector (direction) characteristics. These are analyzed quantitatively for sex and age variation in a sample of 200 skulls of the pine marten described by 14 measurable traits. Standard dispersion and variance-components analyses are employed, accompanied by several resampling methods (randomization and bootstrap); effects of changes in the analysis design on the results of these methods are also considered. The maximum-likelihood algorithm of variance-components analysis is shown to give adequate estimates of the portions of particular forms of group variation within the overall disparity. It is quite stable with respect to changes of the analysis design and therefore could be used in explorations of real data with variously unbalanced designs. A new algorithm is elaborated for estimating the co-directionality of particular forms of group variation within the overall disparity, based on angle measures between eigenvectors of the covariance matrices of group-variation effects calculated by dispersion analysis. A null hypothesis of a random portion of a given group variation can be tested by means of randomization of the respective grouping variable. A null hypothesis of equality of both portions and directionalities of different forms of group variation can be tested by means of the bootstrap procedure.
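
    The randomization test mentioned above, shuffling the grouping variable to ask whether a form of group variation exceeds chance, can be sketched as follows. The statistic here is a simple eta-squared share of the total sum of squares, a stand-in for the full variance-components machinery, and the data are invented.

```python
import random
import statistics

def between_group_share(values, groups):
    """Portion of total sum of squares explained by the grouping
    (eta squared): SS_between / SS_total."""
    grand = statistics.mean(values)
    ss_total = sum((v - grand) ** 2 for v in values)
    ss_between = 0.0
    for g in set(groups):
        sub = [v for v, gg in zip(values, groups) if gg == g]
        ss_between += len(sub) * (statistics.mean(sub) - grand) ** 2
    return ss_between / ss_total

def randomization_p(values, groups, n_perm=2000, seed=0):
    """Permute the grouping variable and count statistics at least as
    large as the observed one."""
    rng = random.Random(seed)
    observed = between_group_share(values, groups)
    hits = 0
    g = list(groups)
    for _ in range(n_perm):
        rng.shuffle(g)
        if between_group_share(values, g) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# Hypothetical skull measurement with a real sex difference (invented data).
rng = random.Random(42)
sex = ['m'] * 30 + ['f'] * 30
trait = [rng.gauss(52.0 if s == 'm' else 50.0, 1.0) for s in sex]
p = randomization_p(trait, sex)   # small p: the sex effect is non-random
```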

  4. A weighted variational gradient-based fusion method for high-fidelity thin cloud removal of Landsat images

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Chen, Xiu; Wang, Yueyun

    2018-03-01

    Landsat data are widely used in various earth observations, but clouds interfere with the applications of the images. This paper proposes a weighted variational gradient-based fusion method (WVGBF) for high-fidelity thin cloud removal from Landsat images, which is an improvement of the variational gradient-based fusion (VGBF) method. The VGBF method integrates the gradient information from the reference band into the visible bands of the cloudy image to enhance spatial detail and remove thin clouds. However, VGBF applies the same gradient constraints to the entire image, which causes color distortion in cloudless areas. In our method, a weight coefficient is introduced into the gradient approximation term to ensure the fidelity of the image. The distribution of the weight coefficient is related to the cloud thickness map, which is built on Independent Component Analysis (ICA) using multi-temporal Landsat images. Quantitatively, we use the R value to evaluate the fidelity in the cloudless regions and the metric Q to evaluate the clarity in the cloud areas. The experimental results indicate that the proposed method has a better ability to remove thin clouds while achieving high fidelity.

  5. 3D first-arrival traveltime tomography with modified total variation regularization

    NASA Astrophysics Data System (ADS)

    Jiang, Wenbin; Zhang, Jie

    2018-02-01

    Three-dimensional (3D) seismic surveys have become a major tool in the exploration and exploitation of hydrocarbons. 3D seismic first-arrival traveltime tomography is a robust method for near-surface velocity estimation. A common approach for stabilizing the ill-posed inverse problem is to apply Tikhonov regularization to the inversion. However, the Tikhonov regularization method recovers smooth local structures while blurring the sharp features in the model solution. We present a 3D first-arrival traveltime tomography method with modified total variation (MTV) regularization to preserve sharp velocity contrasts and improve the accuracy of velocity inversion. To solve the minimization problem of the new traveltime tomography method, we decouple the original optimization problem into the following two subproblems: a standard traveltime tomography problem with the traditional Tikhonov regularization and an L2 total variation problem. We apply the conjugate gradient method and the split-Bregman iterative method to solve these two subproblems, respectively. Our synthetic examples show that the new method produces higher-resolution models than conventional traveltime tomography with Tikhonov regularization. We apply the technique to field data. The stacked section shows significant improvements with static corrections from the MTV traveltime tomography.
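
    Total variation regularization preserves sharp contrasts because it penalizes the magnitude, not the square, of model gradients. A minimal 1D denoising sketch, using gradient descent on a smoothed TV penalty as a much simpler stand-in for the paper's split-Bregman MTV tomography:

```python
import numpy as np

def tv_denoise_1d(f, lam=0.5, eps=1e-2, step=0.05, iters=3000):
    """Minimize 0.5*||u - f||^2 + lam * sum(sqrt(du^2 + eps)) by gradient
    descent; eps smooths the non-differentiable TV term near zero."""
    u = f.copy()
    for _ in range(iters):
        du = np.diff(u)
        w = du / np.sqrt(du * du + eps)   # derivative of the smoothed |du|
        div = np.zeros_like(u)
        div[1:] += w                       # adjoint of the forward difference
        div[:-1] -= w
        u = u - step * ((u - f) + lam * div)
    return u

rng = np.random.default_rng(0)
truth = np.concatenate([np.zeros(50), np.ones(50)])    # one sharp contrast
noisy = truth + rng.normal(0.0, 0.1, size=truth.size)
denoised = tv_denoise_1d(noisy)

err_noisy = np.linalg.norm(noisy - truth)
err_denoised = np.linalg.norm(denoised - truth)   # reduced, edge retained
```

    A squared-gradient (Tikhonov) penalty on the same signal would round the step off; the TV penalty flattens the noise while keeping the discontinuity, which is the property the MTV tomography exploits for velocity contrasts.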

  6. Face landmark point tracking using LK pyramid optical flow

    NASA Astrophysics Data System (ADS)

    Zhang, Gang; Tang, Sikan; Li, Jiaquan

    2018-04-01

    LK pyramid optical flow is an effective method for object tracking in video; in this paper it is used for face landmark point tracking. The landmark points considered are the outer and inner corners of the left eye, the inner and outer corners of the right eye, the tip of the nose, and the left and right corners of the mouth. The landmark points are marked by hand in the first frame, and tracking performance is analyzed for the subsequent frames. Two kinds of conditions are considered: single factors (the normalized case; pose variation with slow movement; expression variation; illumination variation; occlusion; frontal face with rapid movement; posed face with rapid movement) and combinations of factors (pose and illumination variation; pose and expression variation; pose variation and occlusion; illumination and expression variation; expression variation and occlusion). Global and local measures are introduced to evaluate tracking performance under the different factors and their combinations. The global measures comprise the number of images aligned successfully, the average alignment error, and the number of images aligned before failure; the local measures comprise the number of images aligned successfully for each facial component and the average alignment error for each component. To test landmark tracking performance under these cases, experiments were carried out on image sequences gathered by the authors. Results show that the LK pyramid optical flow method can track face landmark points under the normalized case, expression variation, illumination variation that does not affect facial details, and pose variation, and that different factors and combinations of factors affect alignment performance differently for different landmark points.
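
    A single-level Lucas-Kanade step, the building block the pyramid is made of, can be sketched on synthetic data: solve the least-squares normal equations built from spatial and temporal image gradients over a window. This shows the principle only; the paper's tracker applies the multi-level (pyramid) version to real video.

```python
import numpy as np

def lk_flow(I1, I2):
    """One Lucas-Kanade step over the whole window: least-squares fit of a
    single displacement (u, v) to the constraint Ix*u + Iy*v + It = 0."""
    Iy, Ix = np.gradient(I1)            # np.gradient returns d/drow, d/dcol
    It = I2 - I1
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)        # (u, v) in pixels

def blob(cx, cy, n=21, s=3.0):
    """Gaussian intensity blob centered at (cx, cy) on an n x n grid."""
    y, x = np.mgrid[0:n, 0:n]
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * s * s))

I1 = blob(10.0, 10.0)
I2 = blob(10.3, 10.0)                   # sub-pixel shift of 0.3 px in x
u, v = lk_flow(I1, I2)                  # estimate close to (0.3, 0.0)
```

    The pyramid version repeats this step from coarse to fine resolution so that large motions, which violate the small-displacement assumption at full resolution, can still be recovered.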

  7. Lithium Enolates of Simple Ketones: Structure Determination Using the Method of Continuous Variation

    PubMed Central

    Liou, Lara R.; McNeil, Anne J.; Ramirez, Antonio; Toombes, Gilman E. S.; Gruver, Jocelyn M.

    2009-01-01

    The method of continuous variation in conjunction with 6Li NMR spectroscopy was used to characterize lithium enolates derived from 1-indanone, cyclohexanone, and cyclopentanone in solution. The strategy relies on forming ensembles of homo- and heteroaggregated enolates. The enolates form exclusively chelated dimers in N,N,N’,N’-tetramethylethylenediamine and cubic tetramers in tetrahydrofuran and 1,2-dimethoxyethane. PMID:18336025
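    As a rough numerical illustration of how the method of continuous variation locates stoichiometry, the sketch below assumes binding strong enough that the limiting reagent is fully consumed (our simplification; real Job-plot analysis of aggregating enolate ensembles requires a fuller equilibrium model):

```python
import numpy as np

def job_plot_max(m, n, npts=2001):
    """Mole fraction of A at which the A_m B_n complex peaks in a
    continuous-variation (Job plot) experiment, assuming the total
    concentration [A] + [B] is fixed and binding is strong enough that
    the limiting reagent is fully consumed."""
    x = np.linspace(0.0, 1.0, npts)                 # mole fraction of A
    complex_conc = np.minimum(x / m, (1 - x) / n)   # limiting-reagent amount
    return x[np.argmax(complex_conc)]

x_11 = job_plot_max(1, 1)   # 1:1 complex peaks at mole fraction 0.5
x_21 = job_plot_max(2, 1)   # A2B complex peaks at 2/3
```

    The peak position m/(m + n) is what distinguishes, e.g., dimers from tetramers in the homo/heteroaggregate ensembles the paper analyzes.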

  8. Ozone data assimilation with GEOS-Chem: a comparison between 3-D-Var, 4-D-Var, and suboptimal Kalman filter approaches

    NASA Astrophysics Data System (ADS)

    Singh, K.; Sandu, A.; Bowman, K. W.; Parrington, M.; Jones, D. B. A.; Lee, M.

    2011-08-01

    Chemistry transport models determine the evolving chemical state of the atmosphere by solving the fundamental equations that govern physical and chemical transformations, subject to initial conditions of the atmospheric state and surface boundary conditions, e.g., surface emissions. Data assimilation techniques synthesize model predictions with measurements in a rigorous mathematical framework that provides observational constraints on these conditions. Two families of data assimilation methods are currently in wide use: variational and Kalman filter (KF). The variational approach is based on control theory and formulates data assimilation as the minimization of a cost functional that measures the model-observation mismatch. The Kalman filter approach is rooted in statistical estimation theory and provides the analysis covariance together with the best state estimate. Suboptimal Kalman filters employ various approximations of the covariances to make the computations feasible with large models. Each family of methods has merits and drawbacks. This paper compares several data assimilation methods used for global chemical data assimilation. Specifically, we evaluate data assimilation approaches for improving estimates of the summertime global tropospheric ozone distribution in August 2006, based on ozone observations from the NASA Tropospheric Emission Spectrometer and the GEOS-Chem chemistry transport model. The resulting analyses are compared against independent ozonesonde measurements to assess the effectiveness of each assimilation method. All assimilation methods provide notable improvements over the free model simulations, which differ from the ozonesonde measurements by about 20 % (below 200 hPa). Four-dimensional variational data assimilation with window lengths between five days and two weeks is the most accurate method, with mean differences between analysis profiles and ozonesonde measurements of 1-5 %. Two sequential assimilation approaches (three-dimensional variational and suboptimal KF), although derived from different theoretical considerations, provide similar ozone estimates, with relative differences of 5-10 % between the analyses and ozonesonde measurements. Adjoint sensitivity analysis techniques are used to explore the role of uncertainties in ozone precursors and their emissions in the distribution of tropospheric ozone. A novel technique is introduced that projects 3-D-Var increments back to an equivalent initial condition, which facilitates comparison with 4-D-Var techniques.
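    As a generic illustration of the variational family (a toy linear-Gaussian sketch, not the GEOS-Chem system), the 3-D-Var cost and its closed-form minimizer can be written as:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3                          # state and observation dimensions (toy)
B = 0.5 * np.eye(n)                  # background-error covariance
R = 0.1 * np.eye(m)                  # observation-error covariance
H = rng.standard_normal((m, n))      # linear observation operator
xb = rng.standard_normal(n)          # background (prior) state
y = H @ xb + 0.3                     # synthetic observations, offset from H xb

def cost(x):
    """3-D-Var cost: background mismatch plus observation mismatch."""
    db, do = x - xb, H @ x - y
    return 0.5 * db @ np.linalg.solve(B, db) + 0.5 * do @ np.linalg.solve(R, do)

# analysis via the closed form  xa = xb + K (y - H xb),
# with gain  K = B H^T (H B H^T + R)^{-1}
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
xa = xb + K @ (y - H @ xb)
J_b, J_a = cost(xb), cost(xa)        # the analysis lowers the cost
```

    In 4-D-Var the same cost additionally involves the model propagated over the assimilation window, so the minimization requires iterative methods and adjoint gradients rather than this closed form.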

  9. Initialization and simulation of a landfalling typhoon using a variational bogus mapped data assimilation (BMDA)

    NASA Astrophysics Data System (ADS)

    Zhao, Y.; Wang, B.; Wang, Y.

    2007-12-01

    Recently, a new data assimilation method called “3-dimensional variational data assimilation of mapped observation (3DVM)” has been developed by the authors. We have shown that the new method is very efficient and inexpensive compared with its counterpart, 4-dimensional variational data assimilation (4DVar). The new method has been implemented in the Penn State/NCAR mesoscale model MM5V1 (MM5_3DVM). In this study, we apply the new method to the bogus data assimilation (BDA) available in the original MM5 with 4DVar. In the new approach, a specified sea-level pressure (SLP) field (bogus data) is incorporated into MM5 through the 3DVM (for convenience, we call it variational bogus mapped data assimilation, BMDA) instead of the original 4DVar data assimilation. To demonstrate the effectiveness of the new 3DVM method, the initialization and simulation of a landfalling typhoon, typhoon Dan (1999) over the western North Pacific, with the new method are compared with those with its counterpart 4DVar in MM5. Results show that the initial structure and the simulated intensity and track are improved more significantly using 3DVM than 4DVar. Sensitivity experiments also show that the simulated typhoon track and intensity are more sensitive to the size of the assimilation window in 4DVar than in 3DVM. Meanwhile, 3DVM incurs much less computing cost than its counterpart 4DVar for a given time window.

  10. Spatial Normalization of Reverse Phase Protein Array Data

    PubMed Central

    Kaushik, Poorvi; Molinelli, Evan J.; Miller, Martin L.; Wang, Weiqing; Korkut, Anil; Liu, Wenbin; Ju, Zhenlin; Lu, Yiling; Mills, Gordon; Sander, Chris

    2014-01-01

    Reverse phase protein arrays (RPPA) are an efficient, high-throughput, cost-effective method for the quantification of specific proteins in complex biological samples. The quality of RPPA data may be affected by various sources of error. One of these, spatial variation, is caused by uneven exposure of different parts of an RPPA slide to the reagents used in protein detection. We present a method for the determination and correction of systematic spatial variation in RPPA slides using positive control spots printed on each slide. The method uses a simple bi-linear interpolation technique to obtain a surface representing the spatial variation occurring across the dimensions of a slide. This surface is used to calculate correction factors that normalize the relative protein concentrations of the samples on each slide. Adoption of the method results in increased agreement between technical and biological replicates of various tumor and cell-line derived samples. Further, in data from a study of the melanoma cell-line SKMEL-133, several slides that had previously been rejected because they had a coefficient of variation (CV) greater than 15% are rescued by reduction of the CV below this threshold in each case. The method is implemented in the R statistical programming language. It is compatible with MicroVigene and SuperCurve, packages commonly used in RPPA data analysis. The method is made available, along with suggestions for implementation, at http://bitbucket.org/rppa_preprocess/rppa_preprocess/src. PMID:25501559
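    The correction idea can be sketched as follows, assuming for illustration that control spots sit only at the four corners of the slide (the actual method interpolates between many control spots across the slide, and is implemented in R rather than Python):

```python
import numpy as np

def bilinear_surface(ctrl, shape):
    """Bilinear interpolation of control-spot intensities measured at the
    four slide corners, evaluated on the full grid of spot positions.
    `ctrl` = [[top-left, top-right], [bottom-left, bottom-right]]."""
    rows, cols = shape
    u = np.linspace(0.0, 1.0, rows)[:, None]   # vertical coordinate
    v = np.linspace(0.0, 1.0, cols)[None, :]   # horizontal coordinate
    (tl, tr), (bl, br) = ctrl
    return (1 - u) * (1 - v) * tl + (1 - u) * v * tr + u * (1 - v) * bl + u * v * br

def spatial_normalize(raw, ctrl):
    """Divide each spot by a correction factor from the interpolated surface,
    so spots in weakly exposed regions are scaled up."""
    surface = bilinear_surface(ctrl, raw.shape)
    return raw / (surface / surface.mean())

# a uniform sample (true level 5.0) distorted by a bilinear exposure field
ctrl = [[1.0, 2.0], [3.0, 4.0]]
field = bilinear_surface(ctrl, (6, 8))
raw = 5.0 * field / field.mean()
corrected = spatial_normalize(raw, ctrl)      # recovers ~5.0 everywhere
```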

  11. Maintenance of genetic diversity through plant-herbivore interactions

    PubMed Central

    Gloss, Andrew D.; Dittrich, Anna C. Nelson; Goldman-Huertas, Benjamin; Whiteman, Noah K.

    2013-01-01

    Identifying the factors governing the maintenance of genetic variation is a central challenge in evolutionary biology. New genomic data, methods and conceptual advances provide increasing evidence that balancing selection, mediated by antagonistic species interactions, maintains functionally-important genetic variation within species and natural populations. Because diverse interactions between plants and herbivorous insects dominate terrestrial communities, they provide excellent systems to address this hypothesis. Population genomic studies of Arabidopsis thaliana and its relatives suggest spatial variation in herbivory maintains adaptive genetic variation controlling defense phenotypes, both within and among populations. Conversely, inter-species variation in plant defenses promotes adaptive genetic variation in herbivores. Emerging genomic model herbivores of Arabidopsis could illuminate how genetic variation in herbivores and plants interact simultaneously. PMID:23834766

  12. Newton's method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    More, J. J.; Sorensen, D. C.

    1982-02-01

    Newton's method plays a central role in the development of numerical techniques for optimization. In fact, most of the current practical methods for optimization can be viewed as variations on Newton's method. It is therefore important to understand Newton's method as an algorithm in its own right and as a key introduction to the most recent ideas in this area. One of the aims of this expository paper is to present and analyze two main approaches to Newton's method for unconstrained minimization: the line search approach and the trust region approach. The other aim is to present some of the recent developments in the optimization field which are related to Newton's method. In particular, we explore several variations on Newton's method which are appropriate for large scale problems, and we also show how quasi-Newton methods can be derived quite naturally from Newton's method.
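    The line-search approach discussed in the paper can be sketched as a Newton iteration globalized by Armijo backtracking. The Rosenbrock test function and the constants below are illustrative choices, not from the paper:

```python
import numpy as np

def newton_linesearch(f, grad, hess, x0, tol=1e-8, max_iter=100):
    """Newton's method for unconstrained minimization, globalized by a
    backtracking (Armijo) line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        p = np.linalg.solve(hess(x), -g)     # Newton direction
        if g @ p >= 0:                       # not a descent direction:
            p = -g                           # fall back to steepest descent
        t = 1.0
        # backtrack until the sufficient-decrease condition holds
        while f(x + t * p) > f(x) + 1e-4 * t * (g @ p) and t > 1e-12:
            t *= 0.5
        x = x + t * p
    return x

# Rosenbrock function, minimum at (1, 1)
f = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                           200 * (x[1] - x[0] ** 2)])
hess = lambda x: np.array([[2 - 400 * x[1] + 1200 * x[0] ** 2, -400 * x[0]],
                           [-400 * x[0], 200.0]])
xmin = newton_linesearch(f, grad, hess, [-1.2, 1.0])
```

    The trust-region approach differs in that it bounds the step norm and minimizes the quadratic model within that bound instead of damping along the Newton direction.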

  13. Variation, certainty, evidence, and change in dental education: employing evidence-based dentistry in dental education.

    PubMed

    Marinho, V C; Richards, D; Niederman, R

    2001-05-01

    Variation in health care, and more particularly in dental care, was recently chronicled in a Reader's Digest investigative report. The conclusions of this report are consistent with sound scientific studies conducted in various areas of health care, including dental care, which demonstrate substantial variation in the care provided to patients. This variation in care parallels the certainty with which clinicians and faculty members often articulate strongly held, but very different opinions. Using a case-based dental scenario, we present systematic evidence-based methods for accessing dental health care information, evaluating this information for validity and importance, and using this information to make informed curricular and clinical decisions. We also discuss barriers inhibiting these systematic approaches to evidence-based clinical decision making and methods for effectively promoting behavior change in health care professionals.

  14. [Ciliate diversity and spatiotemporal variation in surface sediments of Yangtze River estuary hypoxic zone].

    PubMed

    Feng, Zhao; Kui-Dong, Xu; Zhao-Cui, Meng

    2012-12-01

    By using denaturing gradient gel electrophoresis (DGGE) with sequencing, as well as the Ludox-QPS method, we investigated the ciliate diversity and its spatiotemporal variation in the surface sediments at three sites of the Yangtze River estuary hypoxic zone in April and August 2011. ANOSIM analysis indicated that ciliate diversity differed significantly among the sites (R = 0.896, P = 0.0001) but less so among seasons (R = 0.043, P = 0.207). Sequencing of 18S rDNA DGGE bands revealed that the most predominant groups were planktonic Choreotrichia and Oligotrichia. Detection by the Ludox-QPS method showed that the species number and abundance of active ciliates were maintained at a high level, and increased 2-5 times in summer compared with spring. Both the Ludox-QPS method and the DGGE technique detected a similar variation trend in ciliate diversity at the three sites, and the Ludox-QPS method detected a significant variation in ciliate species number and abundance between seasons. The species number detected by the Ludox-QPS method was higher than that detected from DGGE bands. Our study indicated that the ciliates in the Yangtze River estuary hypoxic zone had high diversity and abundance, with the potential to supply food for the polyps of jellyfish.

  15. Scaling up the Single Transducer Thickness-Independent Ultrasonic Imaging Method for Accurate Characterization of Microstructural Gradients in Monolithic and Composite Tubular Structures

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Carney, Dorothy V.; Baaklini, George Y.; Bodis, James R.; Rauser, Richard W.

    1998-01-01

    Ultrasonic velocity/time-of-flight imaging that uses back surface reflections to gauge volumetric material quality is highly suited for quantitative characterization of microstructural gradients including those due to pore fraction, density, fiber fraction, and chemical composition variations. However, a weakness of conventional pulse-echo ultrasonic velocity/time-of-flight imaging is that the image shows the effects of thickness as well as microstructural variations unless the part is uniformly thick. This limits the method's usefulness in practical applications. Prior studies have described a pulse-echo time-of-flight-based ultrasonic imaging method that uses a single transducer in combination with a reflector plate placed behind samples to eliminate the effect of thickness variation in the image. In those studies, this method was successful at isolating ultrasonic variations due to material microstructure in plate-like samples of silicon nitride, metal matrix composite, and polymer matrix composite. In this study, the method is engineered for inspection of more complex-shaped structures: those having (hollow) tubular/curved geometry. The experimental inspection technique and results are described as applied to (1) monolithic mullite ceramic and polymer matrix composite 'proof-of-concept' tubular structures that contain machined patches of various depths and (2) as-manufactured monolithic silicon nitride ceramic and silicon carbide/silicon carbide composite tubular structures that might be used in 'real world' applications.

  16. Conditional Variational Autoencoder for Prediction and Feature Recovery Applied to Intrusion Detection in IoT.

    PubMed

    Lopez-Martin, Manuel; Carro, Belen; Sanchez-Esguevillas, Antonio; Lloret, Jaime

    2017-08-26

    The purpose of a Network Intrusion Detection System is to detect intrusive, malicious activities or policy violations in a host or host's network. In current networks, such systems are becoming more important as the number and variety of attacks increase along with the volume and sensitivity of the information exchanged. This is of particular interest to Internet of Things networks, where an intrusion detection system will be critical as its economic importance continues to grow, making it the focus of future intrusion attacks. In this work, we propose a new network intrusion detection method that is appropriate for an Internet of Things network. The proposed method is based on a conditional variational autoencoder with a specific architecture that integrates the intrusion labels inside the decoder layers. The proposed method is less complex than other unsupervised methods based on a variational autoencoder and it provides better classification results than other familiar classifiers. More importantly, the method can perform feature reconstruction, that is, it is able to recover missing features from incomplete training datasets. We demonstrate that the reconstruction accuracy is very high, even for categorical features with a high number of distinct values. This work is unique in the network intrusion detection field, presenting the first application of a conditional variational autoencoder and providing the first algorithm to perform feature recovery.

  18. Trueness verification of actual creatinine assays in the European market demonstrates a disappointing variability that needs substantial improvement. An international study in the framework of the EC4 creatinine standardization working group.

    PubMed

    Delanghe, Joris R; Cobbaert, Christa; Galteau, Marie-Madeleine; Harmoinen, Aimo; Jansen, Rob; Kruse, Rolf; Laitinen, Päivi; Thienpont, Linda M; Wuyts, Birgitte; Weykamp, Cas; Panteghini, Mauro

    2008-01-01

    The European In Vitro Diagnostics (IVD) directive requires traceability of analytes to reference methods and materials. It is a task of the profession to verify the trueness of results and IVD compatibility. The results of a trueness verification study by the European Communities Confederation of Clinical Chemistry (EC4) working group on creatinine standardization are described, in which 189 European laboratories analyzed serum creatinine in a commutable serum-based material, using analytical systems from seven companies. Values were targeted using isotope dilution gas chromatography/mass spectrometry. Results were tested for compliance with a set of three criteria: trueness (i.e., no significant bias relative to the target value), between-laboratory variation, and within-laboratory variation relative to the maximum allowable error. At the lower and intermediate levels, values differed significantly from the target value for the Jaffe and dry chemistry methods. At the high level, dry chemistry yielded higher results. Between-laboratory coefficients of variation ranged from 4.37% to 8.74%. The total error budget was mainly consumed by the bias. Non-compensated Jaffe methods largely exceeded the total error budget. The best results were obtained with the enzymatic method. The dry chemistry method consumed a large part of its error budget through calibration bias. Despite the European IVD directive and the growing need for creatinine standardization, an unacceptable inter-laboratory variation was observed, mainly due to calibration differences. This calibration variation has major clinical consequences, in particular in pediatrics, where reference ranges for serum and plasma creatinine are low, and in the estimation of glomerular filtration rate.

  19. A parameter-free variational coupling approach for trimmed isogeometric thin shells

    NASA Astrophysics Data System (ADS)

    Guo, Yujie; Ruess, Martin; Schillinger, Dominik

    2017-04-01

    The non-symmetric variant of Nitsche's method was recently applied successfully for variationally enforcing boundary and interface conditions in non-boundary-fitted discretizations. In contrast to its symmetric variant, it does not require stabilization terms and therefore does not depend on the appropriate estimation of stabilization parameters. In this paper, we further consolidate the non-symmetric Nitsche approach by establishing its application in isogeometric thin shell analysis, where variational coupling techniques are of particular interest for enforcing interface conditions along trimming curves. To this end, we extend its variational formulation within Kirchhoff-Love shell theory, combine it with the finite cell method, and apply the resulting framework to a range of representative shell problems based on trimmed NURBS surfaces. We demonstrate that the non-symmetric variant applied in this context is stable and can lead to the same accuracy in terms of displacements and stresses as its symmetric counterpart. Based on our numerical evidence, the non-symmetric Nitsche method is a viable parameter-free alternative to the symmetric variant in elastostatic shell analysis.

  20. Single nucleotide variations: Biological impact and theoretical interpretation

    PubMed Central

    Katsonis, Panagiotis; Koire, Amanda; Wilson, Stephen Joseph; Hsu, Teng-Kuei; Lua, Rhonald C; Wilkins, Angela Dawn; Lichtarge, Olivier

    2014-01-01

    Genome-wide association studies (GWAS) and whole-exome sequencing (WES) generate massive amounts of genomic variant information, and a major challenge is to identify which variations drive disease or contribute to phenotypic traits. Because the majority of known disease-causing mutations are exonic non-synonymous single nucleotide variations (nsSNVs), most studies focus on whether these nsSNVs affect protein function. Computational studies show that the impact of nsSNVs on protein function reflects sequence homology and structural information and predict the impact through statistical methods, machine learning techniques, or models of protein evolution. Here, we review impact prediction methods and discuss their underlying principles, their advantages and limitations, and how they compare to and complement one another. Finally, we present current applications and future directions for these methods in biological research and medical genetics. PMID:25234433

  1. An automated integration-free path-integral method based on Kleinert's variational perturbation theory

    NASA Astrophysics Data System (ADS)

    Wong, Kin-Yiu; Gao, Jiali

    2007-12-01

    Based on Kleinert's variational perturbation (KP) theory [Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 3rd ed. (World Scientific, Singapore, 2004)], we present an analytic path-integral approach for computing the effective centroid potential. The approach enables the KP theory to be applied to any realistic system beyond the first-order perturbation (i.e., the original Feynman-Kleinert [Phys. Rev. A 34, 5080 (1986)] variational method). Accurate values are obtained for several systems in which exact quantum results are known. Furthermore, the computed kinetic isotope effects for a series of proton transfer reactions, in which the potential energy surfaces are evaluated by density-functional theory, are in good agreement with experiment. We hope that our method can be used by non-path-integral experts or experimentalists as a "black box" for any given system.

  2. Fat scoring: Sources of variability

    USGS Publications Warehouse

    Krementz, D.G.; Pendleton, G.W.

    1990-01-01

    Fat scoring is a widely used nondestructive method of assessing total body fat in birds. This method has not been rigorously investigated. We investigated inter- and intraobserver variability in scoring, as well as the predictive ability of fat scoring, using five species of passerines. Between-observer variation in scoring was variable and at times large. Observers did not consistently score species higher or lower relative to other observers, nor did they always score birds with more total body fat higher. We found that within-observer variation was acceptable but depended on the species being scored. The precision of fat scoring was species-specific, and for most species fat scores accounted for less than 50% of the variation in true total body fat. Overall, we would describe fat scoring as a fairly precise method of indexing total body fat, but with limited reliability among observers.

  3. Importance of parametrizing constraints in quantum-mechanical variational calculations

    NASA Technical Reports Server (NTRS)

    Chung, Kwong T.; Bhatia, A. K.

    1992-01-01

    In variational calculations of quantum mechanics, constraints are sometimes imposed explicitly on the wave function. These constraints, which are deduced by physical arguments, are often not uniquely defined. In this work, the advantage of parametrizing constraints and letting the variational principle determine the best possible constraint for the problem is pointed out. Examples are carried out to show the surprising effectiveness of the variational method when constraints are parametrized. It is also shown that misleading results may be obtained if a constraint is not parametrized.
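    A toy example of the point being made (our own, not one of the paper's systems): for the 1-D harmonic oscillator with a Gaussian trial function exp(-a x^2), freezing the width parameter a at a guessed value gives a worse energy bound than letting the variational principle choose it. With hbar = m = omega = 1, the energy expectation has the closed form E(a) = a/2 + 1/(8a):

```python
import numpy as np

def energy(a):
    """Variational energy <H> for trial function exp(-a x^2) applied to
    H = -(1/2) d^2/dx^2 + (1/2) x^2  (hbar = m = omega = 1)."""
    return a / 2.0 + 1.0 / (8.0 * a)

# let the variational principle pick the parameter...
a_grid = np.linspace(0.05, 3.0, 10000)
a_best = a_grid[np.argmin(energy(a_grid))]   # minimizer: a = 1/2
E_best = energy(a_best)                      # ~0.5, the exact ground state

# ...versus freezing the "constraint" at a plausible-looking guess
E_fixed = energy(1.0)                        # 0.625: a strictly worse bound
```

    Minimizing over the parameter recovers the exact ground-state energy here because the trial family happens to contain the true wave function; a frozen constraint only ever gives a looser upper bound.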

  4. Joint inversion for transponder localization and sound-speed profile temporal variation in high-precision acoustic surveys.

    PubMed

    Li, Zhao; Dosso, Stan E; Sun, Dajun

    2016-07-01

    This letter develops a Bayesian inversion for localizing underwater acoustic transponders using a surface ship which compensates for sound-speed profile (SSP) temporal variation during the survey. The method is based on dividing observed acoustic travel-time data into time segments and including depth-independent SSP variations for each segment as additional unknown parameters to approximate the SSP temporal variation. SSP variations are estimated jointly with transponder locations, rather than calculated separately as in existing two-step inversions. Simulation and sea-trial results show this localization/SSP joint inversion performs better than two-step inversion in terms of localization accuracy, agreement with measured SSP variations, and computational efficiency.
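    The elimination idea behind the joint inversion can be sketched on a toy 2-D problem (our simplification; the paper works in a Bayesian framework with full 3-D geometry): for any candidate transponder position, the best-fitting per-segment sound speed has a closed form, so position and sound-speed variations are effectively estimated jointly rather than in two separate steps:

```python
import numpy as np

rng = np.random.default_rng(1)
p_true = np.array([40.0, -25.0])                 # transponder position (toy 2-D)
ship = rng.uniform(-100.0, 100.0, size=(30, 2))  # survey-ship positions
seg = np.repeat(np.arange(3), 10)                # three time segments of 10 pings
c_true = np.array([1500.0, 1504.0, 1497.0])      # per-segment sound speed (m/s)
t_obs = np.linalg.norm(ship - p_true, axis=1) / c_true[seg]

def misfit(p):
    """Residual travel-time misfit at candidate position p, with the best
    per-segment sound speed eliminated analytically: minimizing
    sum (t - d/c)^2 over c gives  c = sum(d^2) / sum(t d)."""
    d = np.linalg.norm(ship - p, axis=1)
    r = 0.0
    for k in range(3):
        m = seg == k
        c_k = (d[m] ** 2).sum() / (t_obs[m] * d[m]).sum()
        r += ((t_obs[m] - d[m] / c_k) ** 2).sum()
    return r

# crude joint inversion: compass (pattern) search over position, shrinking
# the step only when no neighbouring candidate improves the misfit
best, step = np.zeros(2), 50.0
while step > 1e-4:
    cand = [best + step * np.array([i, j])
            for i in (-1, 0, 1) for j in (-1, 0, 1)]
    nxt = min(cand, key=misfit)
    if misfit(nxt) < misfit(best):
        best = nxt
    else:
        step *= 0.5
```

    The pattern search stands in for the letter's Bayesian machinery; the point of the sketch is only that the sound-speed nuisance parameters can be folded into the position misfit.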

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brizard, Alain J.; Tronci, Cesare

    The variational formulations of guiding-center Vlasov-Maxwell theory based on Lagrange, Euler, and Euler-Poincaré variational principles are presented. Each variational principle yields a different approach to introducing guiding-center polarization and magnetization effects into the guiding-center Maxwell equations. The conservation laws of energy, momentum, and angular momentum are also derived by the Noether method, where the guiding-center stress tensor is now shown to be explicitly symmetric.

  6. Variational Principles for Buckling of Microtubules Modeled as Nonlocal Orthotropic Shells

    PubMed Central

    2014-01-01

    A variational principle for microtubules subject to a buckling load is derived by the semi-inverse method. The microtubule is modeled as an orthotropic shell with the constitutive equations based on nonlocal elastic theory and the effect of the filament network taken into account as an elastic surrounding. Microtubules can carry large compressive forces by virtue of the mechanical coupling between the microtubules and the surrounding elastic filament network. The equations governing the buckling of the microtubule are given by a system of three partial differential equations. The problem studied in the present work involves the derivation of the variational formulation for microtubule buckling. The Rayleigh quotient for the buckling load, as well as the natural and geometric boundary conditions of the problem, are obtained from this variational formulation. It is observed that the boundary conditions are coupled as a result of the nonlocal formulation. It is noted that the analytic solution of the buckling problem for microtubules is usually a difficult task. The variational formulation of the problem provides the basis for a number of approximate and numerical methods of solution; furthermore, variational principles can provide physical insight into the problem. PMID:25214886
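    As a 1-D analogue of the Rayleigh quotient derived in the paper (an Euler column rather than an orthotropic shell; EI = L = 1 assumed), the quotient gives an upper bound on the buckling load for any admissible trial deflection:

```python
import numpy as np

# Rayleigh quotient for a pinned-pinned Euler column (EI = L = 1):
#     P_cr <= integral (w'')^2 dx / integral (w')^2 dx
# for any admissible deflection w; equality holds at the exact mode.
x = np.linspace(0.0, 1.0, 20001)
trapz = lambda f: float(np.sum(0.5 * (f[:-1] + f[1:]) * np.diff(x)))

def rayleigh_P(w):
    wp = np.gradient(w, x)      # w'
    wpp = np.gradient(wp, x)    # w''
    return trapz(wpp ** 2) / trapz(wp ** 2)

P_exact = np.pi ** 2                        # exact first buckling load
P_sine = rayleigh_P(np.sin(np.pi * x))      # exact mode: quotient attains P_cr
P_poly = rayleigh_P(x * (1.0 - x))          # polynomial trial: upper bound (12)
```

    Even the crude quadratic trial overestimates the buckling load by only about 20%, which is why Rayleigh-quotient bounds are a practical starting point for the shell problem's approximate solutions.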

  7. Artificial mismatch hybridization

    DOEpatents

    Guo, Zhen; Smith, Lloyd M.

    1998-01-01

    An improved nucleic acid hybridization process is provided which employs a modified oligonucleotide and improves the ability to discriminate a control nucleic acid target from a variant nucleic acid target containing a sequence variation. The modified probe contains at least one artificial mismatch relative to the control nucleic acid target in addition to any mismatch(es) arising from the sequence variation. The invention has direct and advantageous application to numerous existing hybridization methods, including applications that employ, for example, the Polymerase Chain Reaction, allele-specific nucleic acid sequencing methods, and diagnostic hybridization methods.

  8. Natural frequencies of thin rectangular plates clamped on contour using the Finite Element Method

    NASA Astrophysics Data System (ADS)

    Barboni Haţiegan, L.; Haţiegan, C.; Gillich, G. R.; Hamat, C. O.; Vasile, O.; Stroia, M. D.

    2018-01-01

    This paper presents the determination of the natural frequencies of plates, without and with damage, using the finite element method in the SolidWorks program. The first thirty natural frequencies were obtained for thin rectangular plates clamped on the contour, without and with central damage, for different dimensions. The relative variation of the natural frequencies was determined, and the results obtained by the finite element method (FEM), namely the relative variations of the natural frequencies, were represented graphically according to their natural vibration modes. Finally, the obtained results were compared.

  9. Measuring the surface tension of a liquid-gas interface by automatic stalagmometer

    NASA Astrophysics Data System (ADS)

    Molina, C.; Victoria, L.; Arenas, A.

    2000-06-01

    We present a variation of the stalagmometer method for automatically determining the surface tension of a liquid-gas interface, using a pressure sensor to measure the pressure variation per drop. The method does not require knowledge of the density of the liquid under study and yields values with a measurement error in the range of 1%-2%. Its low cost and simplicity mean that the technique can be used in the teaching and instrumentation laboratory in the same way as other methods.
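    One plausible reading of the density independence, sketched numerically under our own assumptions and not necessarily the paper's exact relations: if the weight of a detached drop equals the measured pressure change per drop times the reservoir cross-section, then combining this with Tate's law (and ignoring the Harkins-Brown correction used in careful stalagmometry) eliminates the density entirely:

```python
import math

def surface_tension(dP, reservoir_radius, tip_radius):
    """Toy surface-tension estimate from the pressure change per drop.
    Assumptions (ours): drop weight W = dP * A, with A the reservoir
    cross-section (this removes the liquid density from the formula),
    and Tate's law W = 2 * pi * r_tip * gamma, uncorrected."""
    A = math.pi * reservoir_radius ** 2
    W = dP * A                              # drop weight, N
    return W / (2.0 * math.pi * tip_radius)

# illustrative numbers chosen so gamma lands near water's ~0.072 N/m:
# dP = 2.88 Pa per drop, 1 cm reservoir radius, 2 mm tip radius
gamma = surface_tension(dP=2.88, reservoir_radius=0.01, tip_radius=0.002)
```

    Note that the pi factors cancel, leaving gamma = dP * R^2 / (2 * r_tip), with no density anywhere in the expression.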

  10. Introduction of Total Variation Regularization into Filtered Backprojection Algorithm

    NASA Astrophysics Data System (ADS)

    Raczyński, L.; Wiślicki, W.; Klimaszewski, K.; Krzemień, W.; Kowalski, P.; Shopa, R. Y.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kisielewska-Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Sharma, N. G.; Sharma, S.; Silarski, M.; Skurzok, M.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.

    In this paper we extend the state-of-the-art filtered backprojection (FBP) method by applying the concept of Total Variation regularization. We compare the performance of the new algorithm with the most common form of regularization in FBP image reconstruction, via apodizing functions. The methods are validated in terms of the cross-correlation coefficient between the reconstructed and true images of radioactive tracer distribution, using a standard Derenzo-type phantom. We demonstrate that the proposed approach results in higher cross-correlation values with respect to the standard FBP method.
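    The regularization concept can be illustrated with a 1-D smoothed-TV denoising sketch (our toy gradient-descent version; the paper applies the regularization inside FBP image reconstruction, not as a standalone denoiser):

```python
import numpy as np

def tv_denoise(y, lam=0.5, eps=1e-2, step=0.05, n_iter=2000):
    """Gradient descent on  0.5*||u - y||^2 + lam * sum_i sqrt(d_i^2 + eps),
    where d_i = u_{i+1} - u_i  (a smoothed total-variation penalty)."""
    u = y.copy()
    for _ in range(n_iter):
        d = np.diff(u)
        w = d / np.sqrt(d ** 2 + eps)   # derivative of sqrt(d^2 + eps)
        g = u - y                       # data-fidelity gradient
        g[:-1] -= lam * w               # d_i depends on u_i with weight -1
        g[1:] += lam * w                # ... and on u_{i+1} with weight +1
        u -= step * g
    return u

# noisy step edge: TV regularization suppresses the noise, keeps the jump
rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(50), np.ones(50)])
noisy = clean + 0.2 * rng.standard_normal(100)
denoised = tv_denoise(noisy)
```

    Unlike apodizing (low-pass) windows, the TV penalty preserves sharp edges while removing oscillations, which is the behaviour the paper exploits in the reconstructed phantom images.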

  11. Comparison of Traditional Design Nonlinear Programming Optimization and Stochastic Methods for Structural Design

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2010-01-01

    Structural designs generated by the traditional method, the optimization method, and the stochastic design concept are compared. In the traditional method, the constraints are manipulated to obtain the design, and the weight is back-calculated. In design optimization, the weight of a structure becomes the merit function, with constraints imposed on failure modes, and an optimization algorithm is used to generate the solution. The stochastic design concept accounts for uncertainties in loads, material properties, and other parameters, and the solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions were produced by all three methods. The variation in the weights calculated by the methods was modest. Some variation was noticed in the designs calculated by the methods, which may be attributed to structural indeterminacy. It is prudent to develop a design by all three methods prior to fabrication. The traditional design method can be improved when simplified sensitivities of the behavior constraints are used. Such sensitivities can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to the mean-valued design. A heavy design, with weight approaching infinity, could be produced for a near-zero rate of failure. Weight can be reduced to a small value for the most failure-prone design. Probabilistic modeling of loads and material properties remained a challenge.

  12. Nonlinearly preconditioned semismooth Newton methods for variational inequality solution of two-phase flow in porous media

    NASA Astrophysics Data System (ADS)

    Yang, Haijian; Sun, Shuyu; Yang, Chao

    2017-03-01

    Most existing methods for solving two-phase flow problems in porous media do not take the physically feasible saturation fractions between 0 and 1 into account, which often destroys the numerical accuracy and physical interpretability of the simulation. To calculate the solution without the loss of this basic requirement, we introduce a variational inequality formulation of the saturation equilibrium with a box inequality constraint, and use a conservative finite element method for the spatial discretization and a backward differentiation formula with adaptive time stepping for the temporal integration. The resulting variational inequality system at each time step is solved by using a semismooth Newton algorithm. To accelerate the Newton convergence and improve the robustness, we employ a family of adaptive nonlinear elimination methods as a nonlinear preconditioner. Some numerical results are presented to demonstrate the robustness and efficiency of the proposed algorithm. A comparison is also included to show the superiority of the proposed fully implicit approach over the classical IMplicit Pressure-Explicit Saturation (IMPES) method in terms of the time step size and the total execution time measured on a parallel computer.
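
    The box-constrained variational inequality and its semismooth Newton solution can be sketched on a toy affine problem F(x) = Ax - b (our own example; the paper couples this formulation with finite elements and adaptive time stepping). The solver below uses the natural-residual reformulation x = clip(x - F(x), 0, 1) and an active-set generalized Jacobian:

```python
import numpy as np

def solve_box_vi(A, b, lo=0.0, hi=1.0, tol=1e-10, max_iter=50):
    """Semismooth (active-set) Newton for the box-constrained VI with
    F(x) = A x - b: find x in [lo, hi]^n with x = clip(x - F(x), lo, hi)."""
    n = len(b)
    x = np.full(n, 0.5 * (lo + hi))
    for _ in range(max_iter):
        F = A @ x - b
        y = x - F                                  # unclipped natural step
        at_lo, at_hi = y <= lo, y >= hi
        inactive = ~(at_lo | at_hi)
        # natural-residual reformulation Phi(x) = x - clip(x - F(x), lo, hi)
        Phi = np.where(at_lo, x - lo, np.where(at_hi, x - hi, F))
        if np.linalg.norm(Phi) < tol:
            break
        # generalized Jacobian: identity rows on the active set,
        # rows of A on the inactive set
        J = np.eye(n)
        J[inactive] = A[inactive]
        x = x - np.linalg.solve(J, Phi)
    return x

# toy saturation-like problem whose unconstrained solution leaves [0, 1]
A = np.array([[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]])
b = np.array([3.0, 0.5, -2.0])
x = solve_box_vi(A, b)
# the solution satisfies the fixed-point characterization of the VI
assert np.allclose(x, np.clip(x - (A @ x - b), 0.0, 1.0))
```

    The clipping keeps the iterates physically feasible by construction, which is the point the abstract makes about saturation fractions.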

  13. On the Total Variation of High-Order Semi-Discrete Central Schemes for Conservation Laws

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Levy, Doron

    2004-01-01

    We discuss a new fifth-order, semi-discrete, central-upwind scheme for solving one-dimensional systems of conservation laws. This scheme combines a fifth-order WENO reconstruction, a semi-discrete central-upwind numerical flux, and a strong stability preserving Runge-Kutta method. We test our method with various examples, and give particular attention to the evolution of the total variation of the approximations.
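
    The total variation tracked in the paper is, for a discrete approximation, simply the sum of absolute jumps between neighboring cell values. A minimal sketch (function name and test profiles are ours):

```python
import numpy as np

def total_variation(u):
    """Discrete total variation TV(u) = sum_i |u_{i+1} - u_i|."""
    return float(np.abs(np.diff(u)).sum())

# a monotone profile has TV equal to its total rise;
# spurious oscillations increase TV
x = np.linspace(0.0, 1.0, 101)
monotone = np.tanh(10 * (x - 0.5))
oscillatory = monotone + 0.05 * np.sin(40 * np.pi * x)
assert abs(total_variation(monotone) - (monotone[-1] - monotone[0])) < 1e-12
assert total_variation(oscillatory) > total_variation(monotone)
```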

  14. Demonstration of Systematic Improvements in Application of the Variational Method to Strongly Bound Potentials

    ERIC Educational Resources Information Center

    Ninemire, B.; Mei, W. N.

    2004-01-01

    In applying the variational method, six different sets of trial wave functions are used to calculate the ground state and first excited state energies of the strongly bound potentials, i.e. V(x) = x^(2m), where m = 4, 5 and 6. It is shown that accurate results can be obtained from thorough analysis of the asymptotic behaviour of the solutions.…
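
    The variational principle can be illustrated with a single Gaussian trial function for the m = 4 member of the family, V(x) = x^8 (a toy sketch with our own quadrature; the paper uses six different sets of trial functions). The Rayleigh quotient E(a) is minimized over the trial exponent a, giving an upper bound on the ground-state energy:

```python
import numpy as np

# Variational upper bound for the ground state of H = p^2/2 + x^8,
# the m = 4 member of V(x) = x^(2m), with the Gaussian trial function
# psi_a(x) = exp(-a x^2 / 2).  With psi' = -a x psi, the kinetic-energy
# density is (a^2 x^2 / 2) psi^2, so no numerical differentiation is needed.
x = np.linspace(-6.0, 6.0, 4001)
dx = x[1] - x[0]

def energy(a):
    """Rayleigh quotient <psi|H|psi> / <psi|psi> by simple quadrature."""
    rho = np.exp(-a * x**2)                      # psi^2, unnormalized
    num = np.sum((0.5 * a**2 * x**2 + x**8) * rho) * dx
    den = np.sum(rho) * dx
    return num / den

alphas = np.arange(0.5, 5.0, 0.01)
energies = np.array([energy(a) for a in alphas])
best = alphas[int(np.argmin(energies))]
E0 = float(energies.min())
# analytically E(a) = a/4 + 105/(16 a^4), minimized at a = 105**(1/5),
# giving the upper bound E = (5/16) * 105**(1/5) ~ 0.7927
assert abs(E0 - (5.0 / 16.0) * 105**0.2) < 1e-4
assert abs(best - 105**0.2) < 0.02
```

    Improving on this single-Gaussian bound is exactly where the asymptotic analysis of the trial functions discussed in the abstract comes in.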

  15. Tracking and recognition face in videos with incremental local sparse representation model

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Wang, Yunhong; Zhang, Zhaoxiang

    2013-10-01

    This paper addresses the problem of tracking and recognizing faces via incremental local sparse representation. First, a robust face tracking algorithm is proposed, employing a local sparse appearance model and covariance pooling. In the subsequent face recognition stage, a novel template update strategy that incorporates incremental subspace learning allows our recognition algorithm to adapt the template to appearance changes and reduces the influence of occlusion and illumination variation. This leads to robust video-based face tracking and recognition with desirable performance. In the experiments, we test the quality of face recognition on real-world noisy videos from the YouTube database, which includes 47 celebrities. Our proposed method achieves a high face recognition rate of 95% across all videos. The proposed face tracking and recognition algorithms are also tested on a set of noisy videos under heavy occlusion and illumination variation. The tracking results on challenging benchmark videos demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods. On the challenging dataset in which faces undergo occlusion and illumination variation, and in tracking and recognition experiments under significant pose variation on the University of California, San Diego (Honda/UCSD) database, our proposed method also consistently demonstrates a high recognition rate.

  16. Individualized statistical learning from medical image databases: application to identification of brain lesions.

    PubMed

    Erus, Guray; Zacharaki, Evangelia I; Davatzikos, Christos

    2014-04-01

    This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a "target-specific" feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject's images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an "estimability" criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated. Copyright © 2014 Elsevier B.V. All rights reserved.
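
    The core outlier-scoring idea — project a test sample onto a PCA subspace learned from normative data and treat the residual as an abnormality measure — can be sketched as follows (a toy numpy example with synthetic data; the paper's iterative multi-subspace sampling and target-specific feature selection are omitted):

```python
import numpy as np

rng = np.random.default_rng(1)

# normative samples live near a low-dimensional subspace of a 50-D space
basis = rng.standard_normal((3, 50))           # 3 latent directions
normals = rng.standard_normal((200, 3)) @ basis \
          + 0.01 * rng.standard_normal((200, 50))

mean = normals.mean(axis=0)
# principal components via SVD of the centered normative set
_, _, Vt = np.linalg.svd(normals - mean, full_matrices=False)
components = Vt[:3]                            # keep the top 3 directions

def abnormality(sample):
    """Residual norm after projecting onto the normative PCA subspace."""
    centered = sample - mean
    residual = centered - components.T @ (components @ centered)
    return float(np.linalg.norm(residual))

inlier = rng.standard_normal(3) @ basis        # consistent with the model
outlier = inlier + 5.0 * rng.standard_normal(50)   # off-subspace deviation
assert abnormality(outlier) > abnormality(inlier)
```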

  17. Constant Group Velocity Ultrasonic Guided Wave Inspection for Corrosion and Erosion Monitoring in Pipes

    NASA Astrophysics Data System (ADS)

    Instanes, Geir; Pedersen, Audun; Toppe, Mads; Nagy, Peter B.

    2009-03-01

    This paper describes a novel ultrasonic guided wave inspection technique for the monitoring of internal corrosion and erosion in pipes, which exploits the fundamental flexural mode to measure the average wall thickness over the inspection path. The inspection frequency is chosen so that the group velocity of the fundamental flexural mode is essentially constant throughout the wall thickness range of interest, while the phase velocity is highly dispersive and changes in a systematic way with varying wall thickness in the pipe. Although this approach is somewhat less accurate than the often-used transverse resonance methods, it smoothly integrates the wall thickness over the whole propagation length and is therefore very robust, tolerating large and uneven thickness variations from point to point. The constant group velocity (CGV) method is capable of monitoring the true average of the wall thickness over the inspection length with an accuracy of 1%, even in the presence of local variations an order of magnitude larger. This method also eliminates spurious variations caused by changing temperature, which can cause fairly large velocity variations but does not significantly influence the dispersion as measured by the true phase angle in the vicinity of the CGV point. The CGV guided wave CEM method was validated in both laboratory and field tests.

  18. Individualized Statistical Learning from Medical Image Databases: Application to Identification of Brain Lesions

    PubMed Central

    Erus, Guray; Zacharaki, Evangelia I.; Davatzikos, Christos

    2014-01-01

    This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a “target-specific” feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject’s images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an “estimability” criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated. PMID:24607564

  19. Enhancement of Efficiency and Reduction of Grid Thickness Variation on Casting Process with Lean Six Sigma Method

    NASA Astrophysics Data System (ADS)

    Witantyo; Setyawan, David

    2018-03-01

    In the lead-acid battery industry, grid casting is a process with high levels of defects and thickness variation. The DMAIC (Define-Measure-Analyse-Improve-Control) method and its tools are used to improve the casting process. In the Define stage, a project charter and the SIPOC (Supplier-Input-Process-Output-Customer) method are used to map the existing problem. In the Measure stage, data are collected on the types and numbers of defects and on the grid thickness variation; the retrieved data are then processed and analyzed using the 5 Whys and FMEA methods. In the Analyse stage, grids exhibiting fragile and cracked defects are examined under a microscope, revealing Pb-oxide inclusions in the grid. Analysis of the grid casting process shows an excessively large temperature difference between the molten metal and the mold, as well as a corking process without a standard. In the Improve stage, corrective actions reduce the grid thickness variation and the defect-per-unit level from 9.184% to 0.492%. In the Control stage, a new working standard is established and the corrected process is placed under control.

  20. MEM spectral analysis for predicting influenza epidemics in Japan.

    PubMed

    Sumi, Ayako; Kamo, Ken-ichi

    2012-03-01

    The prediction of influenza epidemics has long been the focus of attention in epidemiology and mathematical biology. In this study, we tested whether time series analysis was useful for predicting the incidence of influenza in Japan. The method of time series analysis we used consists of spectral analysis based on the maximum entropy method (MEM) in the frequency domain and the nonlinear least squares method in the time domain. Using this time series analysis, we analyzed the incidence data of influenza in Japan from January 1948 to December 1998; these data are unique in that they covered the periods of pandemics in Japan in 1957, 1968, and 1977. On the basis of the MEM spectral analysis, we identified the periodic modes explaining the underlying variations of the incidence data. The optimum least squares fitting (LSF) curve calculated with the periodic modes reproduced the underlying variation of the incidence data. An extension of the LSF curve could be used to predict the incidence of influenza quantitatively. Our study suggested that MEM spectral analysis would allow us to model temporal variations of influenza epidemics with multiple periodic modes much more effectively than by using the method of conventional time series analysis, which has been used previously to investigate the behavior of temporal variations in influenza data.
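
    MEM spectra belong to the all-pole (autoregressive) family. Below is a sketch of an AR spectral estimate fitted via the Yule-Walker equations (the classical MEM estimator is Burg's recursion, which fits the coefficients differently; names and the test signal are ours). The periodic modes discussed in the abstract appear as peaks of this spectrum:

```python
import numpy as np

def ar_spectrum(signal, order, nfreq=256):
    """All-pole (autoregressive) spectral estimate, the model family
    underlying maximum-entropy spectral analysis. Yule-Walker fit."""
    x = np.asarray(signal, dtype=float) - np.mean(signal)
    n = len(x)
    # biased autocorrelation estimates r[0..order]
    r = np.array([x[:n - k] @ x[k:] / n for k in range(order + 1)])
    # solve the Yule-Walker equations R a = r[1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])
    sigma2 = r[0] - a @ r[1:]                  # prediction-error variance
    freqs = np.linspace(0.0, 0.5, nfreq)       # cycles per sample
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(1, order + 1)))
    psd = sigma2 / np.abs(1.0 - z @ a) ** 2
    return freqs, psd

# a noisy sinusoid should yield a sharp spectral peak near its frequency
rng = np.random.default_rng(2)
t = np.arange(1024)
sig = np.sin(2 * np.pi * 0.1 * t) + 0.3 * rng.standard_normal(1024)
freqs, psd = ar_spectrum(sig, order=8)
assert abs(freqs[int(np.argmax(psd))] - 0.1) < 0.01
```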

  1. Schwinger-variational-principle theory of collisions in the presence of multiple potentials

    NASA Astrophysics Data System (ADS)

    Robicheaux, F.; Giannakeas, P.; Greene, Chris H.

    2015-08-01

    A theoretical method for treating collisions in the presence of multiple potentials is developed by employing the Schwinger variational principle. The current treatment agrees with the local (regularized) frame transformation theory and extends its capabilities. Specifically, the Schwinger variational approach gives results without the divergences that need to be regularized in other methods. Furthermore, it provides a framework to identify the origin of these singularities and possibly improve the local frame transformation. We have used the method to obtain the scattering parameters for different confining potentials symmetric in x, y. The method is also used to treat photodetachment processes in the presence of various confining potentials, thereby highlighting effects of the infinitely many closed channels. Two general features predicted are the vanishing of the total photoabsorption probability at every channel threshold and the occurrence of resonances below the channel thresholds for negative scattering lengths. In addition, the case of negative-ion photodetachment in the presence of uniform magnetic fields is also considered, where unique features emerge at large scattering lengths.

  2. Effect of Temperature on Ultrasonic Signal Propagation for Extra Virgin Olive Oil Adulteration

    NASA Astrophysics Data System (ADS)

    Alias, N. A.; Hamid, S. B. Abdul; Sophian, A.

    2017-11-01

    Fraud involving the adulteration of extra virgin olive oil has become significant due to the increasing cost of supply and the well-publicized benefits of extra virgin olive oil for human consumption. This paper presents the effects of temperature variation on the spectra formed using the pulse-echo technique with an ultrasound signal. Several methods have been introduced to characterize the adulteration of extra virgin olive oil with other fluids, such as mass chromatography, standard ASTM methods (density, distillation and evaporation tests) and mass spectrometry. The pulse-echo ultrasound method is a non-destructive method used to analyse the sound-wave signal captured by an oscilloscope. In this paper, a non-destructive technique utilizing ultrasound to characterize the adulteration level of extra virgin olive oil is presented. It can be observed that the frequency spectra of samples with different ratios and temperature variations show significant percentage differences, from 30% up to 70% depending on the temperature variation, and thus can be used for sample characterization.

  3. Elastic least-squares reverse time migration with velocities and density perturbation

    NASA Astrophysics Data System (ADS)

    Qu, Yingming; Li, Jinli; Huang, Jianping; Li, Zhenchun

    2018-02-01

    Elastic least-squares reverse time migration (LSRTM) based on the non-density-perturbation assumption can generate falsely migrated interfaces caused by density variations. We perform an elastic LSRTM scheme with density variations for multicomponent seismic data to produce high-quality images in the Vp, Vs and ρ components. However, the migrated images may suffer from crosstalk artefacts caused by P- and S-wave coupling in elastic LSRTM, regardless of the model parametrization used. We therefore propose an elastic LSRTM with density variations based on wave-mode separation, which reduces these crosstalk artefacts by using P- and S-wave decoupled elastic velocity-stress equations to derive the demigration equations and the gradient formulae with respect to Vp, Vs and ρ. Numerical experiments with synthetic data demonstrate the capability and superiority of the proposed method. The imaging results suggest that our method yields images of higher quality and has a faster residual convergence rate. Sensitivity analysis of migration velocity, migration density and stochastic noise verifies the robustness of the proposed method for field data.

  4. SU-F-T-70: A High Dose Rate Total Skin Electron Irradiation Technique with A Specific Inter-Film Variation Correction Method for Very Large Electron Beam Fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, X; Rosenfield, J; Dong, X

    2016-06-15

    Purpose: Rotational total skin electron irradiation (RTSEI) is used in the treatment of cutaneous T-cell lymphoma. Due to inter-film uniformity variations, dosimetry measurement of a large electron beam at very low energy is challenging. This work provides a method to improve the accuracy of flatness and symmetry measurements for the very large, low-energy treatment field used in dual-beam RTSEI. Methods: RTSEI is delivered by dual fields at gantry angles of 270 ± 20 degrees to cover the upper and lower halves of the patient body with acceptable beam uniformity. The field size is on the order of 230 cm in vertical height and 120 cm in horizontal width, and the beam energy is a degraded 6 MeV (6 mm PMMA spoiler). We utilized parallel-plate chambers, Gafchromic films and OSLDs as measuring devices for absolute dose, B-factor, stationary and rotational percent depth dose, and beam uniformity. To reduce inter-film dosimetric variation, we introduced a new correction method for analyzing beam uniformity. This correction method uses image-processing techniques combining film values before and after irradiation to compensate for the dose-response differences among films. Results: Stationary and rotational depth-dose measurements demonstrated that Rp is 2 cm for rotational delivery and that the maximum dose is shifted toward the surface (3 mm). Phantom dosimetry showed that after correction the dose uniformity was reduced to 3.01% for vertical flatness and 2.35% for horizontal flatness, achieving better flatness and uniformity. The absolute dose readings of calibrated films after our correction matched the readings from the OSLDs. Conclusion: The proposed correction method for Gafchromic films will be a useful tool for correcting inter-film dosimetric variation in future clinical film-dosimetry verification in very large fields, allowing the optimization of other parameters.

  5. Denoising Medical Images using Calculus of Variations

    PubMed Central

    Kohan, Mahdi Nakhaie; Behnam, Hamid

    2011-01-01

    We propose a method for medical image denoising using the calculus of variations and local variance estimation with shaped windows. This method reduces additive noise while preserving small patterns and edges in the images. A pyramid structure-texture decomposition of the images is used to separate noise and texture components based on local variance measures. The experimental results show that the proposed method offers visual improvement as well as better SNR, RMSE and PSNR than common medical image denoising methods. Experimental results from denoising a sample magnetic resonance image show that SNR, PSNR and RMSE are improved by 19, 9 and 21 percent, respectively. PMID:22606674
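
    The variational denoising idea can be sketched with the simplest smoothness functional — a quadratic (Tikhonov-type) penalty rather than the paper's local-variance-adaptive formulation — minimized by gradient descent on its Euler-Lagrange equation (a 1-D toy; names and test signal are ours):

```python
import numpy as np

def variational_denoise(f, lam=2.0, tau=0.1, iters=500):
    """Minimize E(u) = 1/2 ||u - f||^2 + (lam/2) ||u'||^2 by gradient
    descent; the Euler-Lagrange gradient is (u - f) - lam * u''."""
    u = f.copy()
    for _ in range(iters):
        lap = np.zeros_like(u)
        lap[1:-1] = u[:-2] - 2 * u[1:-1] + u[2:]   # discrete Laplacian
        lap[0] = u[1] - u[0]                       # zero-flux ends
        lap[-1] = u[-2] - u[-1]
        u -= tau * ((u - f) - lam * lap)
    return u

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 200)
clean = np.sin(2 * np.pi * t)
noisy = clean + 0.3 * rng.standard_normal(200)
denoised = variational_denoise(noisy)
# smoothing should bring the signal closer to the clean reference
assert np.linalg.norm(denoised - clean) < np.linalg.norm(noisy - clean)
```

    A total-variation penalty, closer in spirit to edge-preserving methods, replaces the quadratic term with the absolute gradient and needs a smoothed absolute value to remain differentiable.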

  6. A flexible and robust approach for segmenting cell nuclei from 2D microscopy images using supervised learning and template matching

    PubMed Central

    Chen, Cheng; Wang, Wei; Ozolek, John A.; Rohde, Gustavo K.

    2013-01-01

    We describe a new supervised learning-based template matching approach for segmenting cell nuclei from microscopy images. The method uses examples selected by a user to build a statistical model which captures the texture and shape variations of the nuclear structures from a given dataset to be segmented. Segmentation of subsequent, unlabeled images is then performed by finding the model instance that best matches (in the normalized cross-correlation sense) the local neighborhood in the input image. We demonstrate the application of our method to segmenting nuclei from a variety of imaging modalities, and quantitatively compare our results to several other methods. Quantitative results using both simulated and real image data show that, while certain methods may work well for certain imaging modalities, our software is able to obtain high accuracy across the several imaging modalities studied. Results also demonstrate that, relative to several existing methods, the template-based method we propose is more robust: it better handles variations in illumination and variations in texture from different imaging modalities, provides smoother and more accurate segmentation borders, and better handles cluttered nuclei. PMID:23568787
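
    The matching criterion the method relies on, normalized cross-correlation, can be sketched with a brute-force search (our own toy example; the paper matches learned model instances rather than a fixed template):

```python
import numpy as np

def ncc_match(image, template):
    """Return the (row, col) of the best normalized-cross-correlation
    match of `template` inside `image` (brute-force search)."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.linalg.norm(t)
    best, best_pos = -np.inf, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            w = image[i:i + th, j:j + tw]
            wc = w - w.mean()
            denom = np.linalg.norm(wc) * tnorm
            score = (wc * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos

rng = np.random.default_rng(4)
img = rng.random((40, 40))
tmpl = img[12:20, 25:33].copy()          # template cut from a known spot
assert ncc_match(img, tmpl) == (12, 25)  # recovered exactly (NCC = 1 there)
```

    The mean subtraction and normalization are what make the score invariant to local brightness and contrast, which is why NCC tolerates illumination variation.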

  7. Joint image reconstruction method with correlative multi-channel prior for x-ray spectral computed tomography

    NASA Astrophysics Data System (ADS)

    Kazantsev, Daniil; Jørgensen, Jakob S.; Andersen, Martin S.; Lionheart, William R. B.; Lee, Peter D.; Withers, Philip J.

    2018-06-01

    Rapid developments in photon-counting and energy-discriminating detectors have the potential to provide an additional spectral dimension to conventional x-ray grayscale imaging. Reconstructed spectroscopic tomographic data can be used to distinguish individual materials by characteristic absorption peaks. The acquired energy-binned data, however, suffer from low signal-to-noise ratio, acquisition artifacts, and frequently angularly undersampled conditions. New regularized iterative reconstruction methods have the potential to produce higher quality images, and since energy channels are mutually correlated, it can be advantageous to exploit this additional knowledge. In this paper, we propose a novel method which jointly reconstructs all energy channels while imposing a strong structural correlation. The core of the proposed algorithm is to employ a variational framework of parallel level sets to encourage joint smoothing directions. In particular, the method selects reference channels from which to propagate structure in an adaptive and stochastic way while preferring channels with a high data signal-to-noise ratio. The method is compared with current state-of-the-art multi-channel reconstruction techniques including channel-wise total variation and correlative total nuclear variation regularization. Realistic simulation experiments demonstrate the performance improvements achievable by using correlative regularization methods.

  8. Comparing two Bayes methods based on the free energy functions in Bernoulli mixtures.

    PubMed

    Yamazaki, Keisuke; Kaji, Daisuke

    2013-08-01

    Hierarchical learning models are ubiquitously employed in information science and data engineering. Their structure makes the posterior distribution complicated in the Bayes method, so prediction, which requires constructing the posterior, is not tractable, even though the advantages of the method are empirically well known. The variational Bayes method is widely used as an approximation method in applications; it yields a tractable posterior on the basis of the variational free-energy function. The asymptotic behavior has been studied in many hierarchical models, and a phase transition is observed. The exact form of the asymptotic variational Bayes energy has been derived in Bernoulli mixture models, and the phase diagram shows that there are three types of parameter learning. However, the approximation accuracy and the interpretation of the transition point have not yet been clarified. The present paper precisely analyzes the Bayes free-energy function of Bernoulli mixtures. Comparing the free-energy functions of these two Bayes methods, we can determine the approximation accuracy and elucidate the behavior of the parameter learning. Our results show that the Bayes free energy has the same learning types while the transition points are different. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. A Continuous Variation Study of Heats of Neutralization.

    ERIC Educational Resources Information Center

    Mahoney, Dennis W.; And Others

    1981-01-01

    Suggests that students study heats of neutralization of a 1 M solution of an unknown acid by a 1 M solution of a strong base using the method of continuous variation. Reviews results using several common acids. (SK)

  10. Two worlds collide: Image analysis methods for quantifying structural variation in cluster molecular dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steenbergen, K. G., E-mail: kgsteen@gmail.com; Gaston, N.

    2014-02-14

    Inspired by methods of remote sensing image analysis, we analyze structural variation in cluster molecular dynamics (MD) simulations through a unique application of the principal component analysis (PCA) and Pearson Correlation Coefficient (PCC). The PCA analysis characterizes the geometric shape of the cluster structure at each time step, yielding a detailed and quantitative measure of structural stability and variation at finite temperature. Our PCC analysis captures bond structure variation in MD, which can be used to both supplement the PCA analysis as well as compare bond patterns between different cluster sizes. Relying only on atomic position data, without requirement for a priori structural input, PCA and PCC can be used to analyze both classical and ab initio MD simulations for any cluster composition or electronic configuration. Taken together, these statistical tools represent powerful new techniques for quantitative structural characterization and isomer identification in cluster MD.
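
    A minimal version of the per-frame PCA shape characterization is the eigendecomposition of the gyration (covariance) tensor of the atomic coordinates (a sketch on synthetic point clouds; function and variable names are ours):

```python
import numpy as np

def shape_descriptor(positions):
    """Sorted eigenvalues of the covariance (gyration) tensor of the
    atomic coordinates: a PCA-style geometry measure for one MD frame."""
    centered = positions - positions.mean(axis=0)
    cov = centered.T @ centered / len(positions)
    return np.sort(np.linalg.eigvalsh(cov))[::-1]

rng = np.random.default_rng(5)
sphere = rng.standard_normal((500, 3))              # roughly isotropic cluster
rod = sphere * np.array([5.0, 1.0, 1.0])            # elongated along x
s_sphere = shape_descriptor(sphere)
s_rod = shape_descriptor(rod)
# an isotropic cluster has nearly equal eigenvalues; an elongated one does not
assert s_sphere[0] / s_sphere[-1] < 1.5
assert s_rod[0] / s_rod[-1] > 10.0
```

    Tracking these eigenvalues over the MD trajectory gives the kind of quantitative structural-stability signal the abstract describes.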

  11. Computational Reduction of Specimen Noise to Enable Improved Thermography Characterization of Flaws in Graphite Polymer Composites

    NASA Technical Reports Server (NTRS)

    Winfree, William P.; Howell, Patricia A.; Zalameda, Joseph N.

    2014-01-01

    Flaw detection and characterization with thermographic techniques in graphite polymer composites are often limited by localized variations in the thermographic response. Variations in properties such as acceptable porosity, fiber volume content and surface polymer thickness result in variations in the thermal response that in general cause significant variations in the initial thermal response. These result in a "noise" floor that increases the difficulty of detecting and characterizing deeper flaws. A method is presented for computationally removing a significant amount of the "noise" from near surface porosity by diffusing the early time response, then subtracting it from subsequent responses. Simulations of the thermal response of a composite are utilized in defining the limitations of the technique. This method for reducing the data is shown to give considerable improvement characterizing both the size and depth of damage. Examples are shown for data acquired on specimens with fabricated delaminations and impact damage.
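
    The blur-and-subtract idea can be sketched on synthetic 1-D profiles (our own toy construction, in which the later frame's background is exactly the diffused early pattern; real data would leave a partial residue):

```python
import numpy as np

def remove_early_response(frames, kernel_width=7):
    """Diffuse (smooth) the first frame and subtract it from the later
    frames, suppressing near-surface 'noise' that is fixed in space.
    1-D profiles for brevity; the idea extends directly to 2-D images."""
    kernel = np.ones(kernel_width) / kernel_width
    early_diffused = np.convolve(frames[0], kernel, mode="same")
    return frames[1:] - early_diffused

# toy demo: by later times the surface pattern has diffused, and a deep
# flaw signal has appeared on top of it
rng = np.random.default_rng(6)
kernel = np.ones(7) / 7
surface = rng.standard_normal(128)                       # fixed specimen noise
flaw = 0.5 * np.exp(-((np.arange(128) - 64) / 5.0) ** 2)  # deep flaw at 64
late = np.convolve(surface, kernel, mode="same") + flaw
corrected = remove_early_response(np.stack([surface, late]))[0]
# after subtraction, the flaw dominates the corrected frame
assert int(np.argmax(corrected)) == 64
```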

  12. Real-time monitoring of the solution concentration variation during the crystallization process of protein-lysozyme by using digital holographic interferometry.

    PubMed

    Zhang, Yanyan; Zhao, Jianlin; Di, Jianglei; Jiang, Hongzhen; Wang, Qian; Wang, Jun; Guo, Yunzhu; Yin, Dachuan

    2012-07-30

    We report a real-time measurement method for the solution concentration variation during the growth of protein (lysozyme) crystals based on digital holographic interferometry. A series of holograms containing the information on the solution concentration variation in the whole crystallization process is recorded by a CCD. Based on the principle of double-exposure holographic interferometry and the relationship between the phase difference of the reconstructed object wave and the solution concentration, the solution concentration variation with time for an arbitrary point in the solution can be obtained, and the two-dimensional concentration distribution of the solution during the crystallization process can then also be determined, under the precondition that the refractive index is constant along the light propagation direction. The experimental results show that it is feasible to monitor the crystal growth process in situ, in full field and in real time using this method.

  13. Variational Approach to Enhanced Sampling and Free Energy Calculations

    NASA Astrophysics Data System (ADS)

    Valsson, Omar; Parrinello, Michele

    2014-08-01

    The ability of widely used sampling methods, such as molecular dynamics or Monte Carlo simulations, to explore complex free energy landscapes is severely hampered by the presence of kinetic bottlenecks. A large number of solutions have been proposed to alleviate this problem. Many are based on the introduction of a bias potential which is a function of a small number of collective variables. However, constructing such a bias is not simple. Here we introduce a functional of the bias potential and an associated variational principle. The bias that minimizes the functional relates in a simple way to the free energy surface. This variational principle can be turned into a practical, efficient, and flexible sampling method. A number of numerical examples are presented which include the determination of a three-dimensional free energy surface. We argue that, besides being numerically advantageous, our variational approach provides a convenient and novel standpoint for looking at the sampling problem.

  14. Effect of Ice-Shell Thickness Variations on the Tidal Deformation of Enceladus

    NASA Astrophysics Data System (ADS)

    Choblet, G.; Cadek, O.; Behounkova, M.; Tobie, G.; Kozubek, T.

    2015-12-01

    Recent analysis of Enceladus's gravity and topography has suggested that the thickness of the ice shell varies significantly laterally - from 30-40 km in the south polar region to 60 km elsewhere. These variations may influence the activity of the geysers and increase the tidal heat production in regions where the ice shell is thinned. Using a model including a regional or global subsurface ocean and Maxwell viscoelasticity, we investigate the impact of these variations on the tidal deformation of the moon and its heat production. For that purpose, we use three different numerical approaches - finite elements, local application of a 1-D spectral method, and a generalized spectral method. Results obtained with these three approaches for various models of ice-shell thickness variation are presented and compared. Implications of a reduced ice-shell thickness for the activity of the south polar terrain are discussed.

  15. Computational reduction of specimen noise to enable improved thermography characterization of flaws in graphite polymer composites

    NASA Astrophysics Data System (ADS)

    Winfree, William P.; Howell, Patricia A.; Zalameda, Joseph N.

    2014-05-01

    Flaw detection and characterization with thermographic techniques in graphite polymer composites are often limited by localized variations in the thermographic response. Variations in properties such as acceptable porosity, fiber volume content and surface polymer thickness result in variations in the thermal response that in general cause significant variations in the initial thermal response. These result in a "noise" floor that increases the difficulty of detecting and characterizing deeper flaws. A method is presented for computationally removing a significant amount of the "noise" from near surface porosity by diffusing the early time response, then subtracting it from subsequent responses. Simulations of the thermal response of a composite are utilized in defining the limitations of the technique. This method for reducing the data is shown to give considerable improvement characterizing both the size and depth of damage. Examples are shown for data acquired on specimens with fabricated delaminations and impact damage.

  16. Method for Constructing Composite Response Surfaces by Combining Neural Networks with Polynomial Interpolation or Estimation Techniques

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan (Inventor); Madavan, Nateri K. (Inventor)

    2007-01-01

    A method and system for data modeling that incorporates the advantages of both traditional response surface methodology (RSM) and neural networks is disclosed. The invention partitions the parameters into a first set of s simple parameters, where observable data are expressible as low order polynomials, and c complex parameters that reflect more complicated variation of the observed data. Variation of the data with the simple parameters is modeled using polynomials; and variation of the data with the complex parameters at each vertex is analyzed using a neural network. Variations with the simple parameters and with the complex parameters are expressed using a first sequence of shape functions and a second sequence of neural network functions. The first and second sequences are multiplicatively combined to form a composite response surface, dependent upon the parameter values, that can be used to identify an accurate model.
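
    A schematic of the multiplicative combination (a toy with one simple parameter s and one complex parameter c; the two "vertex" models g0 and g1 stand in for trained neural networks, which is an assumption for illustration):

```python
import numpy as np

# Linear shape functions in the simple parameter s (the low-order polynomial part)
def shape(s):
    return np.array([1.0 - s, s])

# Stand-ins for per-vertex neural networks over the complex parameter c
# (hypothetical trained models; here just fixed nonlinear functions)
g = [lambda c: 1.0 + 0.5 * np.sin(c),
     lambda c: 2.0 + 0.3 * c**2]

def composite(s, c):
    """Composite response surface: shape functions times vertex models, summed."""
    return sum(Ni * gi(c) for Ni, gi in zip(shape(s), g))
```

    At s = 0 or s = 1 the composite reduces to the corresponding vertex model, and the polynomial shape functions interpolate between them.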

  17. Longitudinal variability in Jupiter's zonal winds derived from multi-wavelength HST observations

    NASA Astrophysics Data System (ADS)

    Johnson, Perianne E.; Morales-Juberías, Raúl; Simon, Amy; Gaulme, Patrick; Wong, Michael H.; Cosentino, Richard G.

    2018-06-01

    Multi-wavelength Hubble Space Telescope (HST) images of Jupiter from the Outer Planets Atmospheres Legacy (OPAL) and Wide Field Coverage for Juno (WFCJ) programs in 2015, 2016, and 2017 are used to derive wind profiles as a function of latitude and longitude. Wind profiles are typically zonally averaged to reduce measurement uncertainties; however, doing so discards any longitudinal variation in the zonal component of the winds. Here, we present results derived using a "sliding-window" correlation method. This method adds longitudinal specificity and allows for the detection of spatial variations in the zonal winds. Spatial variations are identified in two jets: one at 17°N, the location of a prominent westward jet, and the other at 7°S, the location of the chevrons. Temporal and spatial variations at the 24°N jet and the 5-μm hot spots are also examined.
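
    The idea of a sliding-window correlation can be sketched as follows (a toy 1-D version on synthetic data; the window and search sizes are arbitrary choices, not those of the study):

```python
import numpy as np

def sliding_window_shift(row_t0, row_t1, win=16, search=5):
    """Per-longitude displacement via windowed cross-correlation: for each
    window of the first epoch, find the shift that maximizes the correlation
    against the second epoch (schematic)."""
    shifts = {}
    n = len(row_t0)
    for c in range(search + win // 2, n - search - win // 2, win):
        tmpl = row_t0[c - win // 2: c + win // 2]
        best, best_r = 0, -np.inf
        for d in range(-search, search + 1):
            seg = row_t1[c - win // 2 + d: c + win // 2 + d]
            r = np.corrcoef(tmpl, seg)[0, 1]
            if r > best_r:
                best, best_r = d, r
        shifts[c] = best   # pixels moved between the two epochs
    return shifts

# toy test: the second epoch is the first shifted by 3 pixels
rng = np.random.default_rng(1)
base = rng.normal(size=200)
est = sliding_window_shift(base, np.roll(base, 3))
```

    Each window yields a local displacement (wind) estimate instead of a single zonally averaged value.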

  18. Two worlds collide: image analysis methods for quantifying structural variation in cluster molecular dynamics.

    PubMed

    Steenbergen, K G; Gaston, N

    2014-02-14

    Inspired by methods of remote sensing image analysis, we analyze structural variation in cluster molecular dynamics (MD) simulations through a unique application of principal component analysis (PCA) and the Pearson correlation coefficient (PCC). The PCA characterizes the geometric shape of the cluster structure at each time step, yielding a detailed and quantitative measure of structural stability and variation at finite temperature. Our PCC analysis captures bond structure variation in MD, which can be used both to supplement the PCA analysis and to compare bond patterns between different cluster sizes. Relying only on atomic position data, without requirement for a priori structural input, PCA and PCC can be used to analyze both classical and ab initio MD simulations for any cluster composition or electronic configuration. Taken together, these statistical tools represent powerful new techniques for quantitative structural characterization and isomer identification in cluster MD.
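
    A sketch of the geometric PCA descriptor (eigenvalues of the per-frame gyration tensor of the atomic positions; the asphericity combination is one common shape measure and an assumption here, not necessarily the paper's exact descriptor):

```python
import numpy as np

def shape_eigenvalues(coords):
    """PCA of centred atomic positions: the sorted eigenvalues of the
    gyration tensor describe the cluster's shape at one MD time step."""
    x = coords - coords.mean(axis=0)
    cov = x.T @ x / len(x)
    return np.sort(np.linalg.eigvalsh(cov))[::-1]

def asphericity(coords):
    """Zero for a perfectly spherical cluster, large for elongated ones."""
    l1, l2, l3 = shape_eigenvalues(coords)
    return l1 - 0.5 * (l2 + l3)

# toy "clusters": an isotropic blob and a strongly elongated one
rng = np.random.default_rng(2)
sphere = rng.normal(size=(500, 3))
rod = sphere * np.array([5.0, 1.0, 1.0])
```

    Tracking asphericity over all MD frames gives a quantitative time series of structural variation at finite temperature.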

  19. Identification of hydrological model parameter variation using ensemble Kalman filter

    NASA Astrophysics Data System (ADS)

    Deng, Chao; Liu, Pan; Guo, Shenglian; Li, Zejun; Wang, Dingbao

    2016-12-01

    Hydrological model parameters play an important role in a model's predictive ability. In a stationary context, parameters of hydrological models are treated as constants; however, model parameters may vary with time under climate change and anthropogenic activities. The technique of the ensemble Kalman filter (EnKF) is proposed to identify the temporal variation of parameters for a two-parameter monthly water balance model (TWBM) by assimilating runoff observations. Through a synthetic experiment, the proposed method is evaluated with time-invariant (i.e., constant) parameters and different types of parameter variation, including trend, abrupt change and periodicity. Various levels of observation uncertainty are designed to examine the performance of the EnKF. The results show that the EnKF can successfully capture the temporal variations of the model parameters. The application to the Wudinghe basin shows that the water storage capacity (SC) of the TWBM model has an apparent increasing trend during the period from 1958 to 2000. The identified temporal variation of SC is explained by land use and land cover changes due to soil and water conservation measures. In contrast, the application to the Tongtianhe basin shows that the estimated SC has no significant variation during the simulation period of 1982-2013, corresponding to the relatively stationary catchment properties. The evapotranspiration parameter (C) has temporal variations, while no obvious change patterns exist. The proposed method provides an effective tool for quantifying the temporal variations of model parameters, thereby improving the accuracy and reliability of model simulations and forecasts.

  20. Evaluating abundance and trends in a Hawaiian avian community using state-space analysis

    USGS Publications Warehouse

    Camp, Richard J.; Brinck, Kevin W.; Gorresen, P.M.; Paxton, Eben H.

    2016-01-01

    Estimating population abundances and patterns of change over time are important in both ecology and conservation. Trend assessment typically entails fitting a regression to a time series of abundances to estimate population trajectory. However, changes in abundance estimates from year-to-year across time are due to both true variation in population size (process variation) and variation due to imperfect sampling and model fit. State-space models are a relatively new method that can be used to partition the error components and quantify trends based only on process variation. We compare a state-space modelling approach with a more traditional linear regression approach to assess trends in uncorrected raw counts and detection-corrected abundance estimates of forest birds at Hakalau Forest National Wildlife Refuge, Hawai‘i. Most species demonstrated similar trends using either method. In general, evidence for trends using state-space models was less strong than for linear regression, as measured by estimates of precision. However, while the state-space models may sacrifice precision, the expectation is that these estimates provide a better representation of the real world biological processes of interest because they are partitioning process variation (environmental and demographic variation) and observation variation (sampling and model variation). The state-space approach also provides annual estimates of abundance which can be used by managers to set conservation strategies, and can be linked to factors that vary by year, such as climate, to better understand processes that drive population trends.

  1. Short-term landfill methane emissions dependency on wind.

    PubMed

    Delkash, Madjid; Zhou, Bowen; Han, Byunghyun; Chow, Fotini K; Rella, Chris W; Imhoff, Paul T

    2016-09-01

    Short-term (2-10 h) variations of whole-landfill methane emissions have been observed in recent field studies using the tracer dilution method for emissions measurement. To investigate the cause of these variations, the tracer dilution method is applied using 1-min emissions measurements at Sandtown Landfill (Delaware, USA) for a 2-h measurement period. An atmospheric dispersion model is developed for this field test site, which is the first application of such modeling to evaluate atmospheric effects on gas plume transport from landfills. The model is used to examine three possible causes of observed temporal emissions variability: temporal variability of surface wind speed affecting whole-landfill emissions, spatial variability of emissions due to local wind speed variations, and misaligned tracer gas release and methane emissions locations. At this site, atmospheric modeling indicates that variation in tracer dilution method emissions measurements may be caused by whole-landfill emissions variation with wind speed. Field data collected over the time period of the atmospheric model simulations corroborate this result: methane emissions are correlated with wind speed on the landfill surface, with R² = 0.51 for data 2.5 m above ground, or R² = 0.55 using data 85 m above ground, with emissions increasing by up to a factor of 2 for an approximately 30% increase in wind speed. Although the atmospheric modeling and field test are conducted at a single landfill, the results suggest that wind-induced emissions may affect tracer dilution method emissions measurements at other landfills. Copyright © 2016 Elsevier Ltd. All rights reserved.
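
    The tracer dilution calculation itself is simple; a sketch (unit and molar-mass conversions are omitted for brevity, and the plume values below are synthetic):

```python
import numpy as np

def tracer_dilution_emission(ch4, tracer, q_tracer, bg_ch4=0.0, bg_tracer=0.0):
    """Tracer dilution method (schematic): the methane emission rate equals
    the known tracer release rate times the ratio of the background-corrected,
    plume-integrated concentrations."""
    ch4_excess = np.clip(np.asarray(ch4, float) - bg_ch4, 0.0, None)
    tr_excess = np.clip(np.asarray(tracer, float) - bg_tracer, 0.0, None)
    return q_tracer * ch4_excess.sum() / tr_excess.sum()

# synthetic downwind transect: the methane plume is 3x the tracer plume
x = np.linspace(-3, 3, 101)
plume = np.exp(-x**2)
emission = tracer_dilution_emission(3.0 * plume + 1.8, plume + 0.1,
                                    q_tracer=2.0, bg_ch4=1.8, bg_tracer=0.1)
```

    With a tracer release rate of 2.0 and a 3:1 plume ratio, the estimated emission is 6.0 in the same (arbitrary) units.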

  2. How to calculate H3 better.

    PubMed

    Pavanello, Michele; Tung, Wei-Cheng; Adamowicz, Ludwik

    2009-11-14

    Efficient optimization of the basis set is key to achieving a very high accuracy in variational calculations of molecular systems employing basis functions that are explicitly dependent on the interelectron distances. In this work we present a method for a systematic enlargement of basis sets of explicitly correlated functions based on the iterative-complement-interaction approach developed by Nakatsuji [Phys. Rev. Lett. 93, 030403 (2004)]. We illustrate the performance of the method in the variational calculations of H₃ where we use explicitly correlated Gaussian functions with shifted centers. The total variational energy (-1.674 547 421 hartree) and the binding energy (-15.74 cm⁻¹) obtained in the calculation with 1000 Gaussians are the most accurate results to date.

  3. Analysing the magnetopause internal structure: new possibilities offered by MMS

    NASA Astrophysics Data System (ADS)

    Belmont, G.; Rezeau, L.; Manuzzo, R.; Aunai, N.; Dargent, J.

    2017-12-01

    We explore the structure of the magnetopause using a crossing observed by the MMS spacecraft on October 16th, 2015. Several methods (MVA, BV, CVA) are first applied to compute the normal to the magnetopause considered as a whole. The different results obtained are not identical, and we show that the whole boundary is neither stationary nor planar, so that the basic assumptions of these methods are not well satisfied. We then analyse the internal structure more finely to investigate the departures from planarity. Using the basic mathematical definition of a one-dimensional physical problem, we introduce a new method, called LNA (Local Normal Analysis), for determining the varying normal, and we compare the results so obtained with those coming from the MDD tool developed by Shi et al. [2005]. The MDD method gives the dimensionality of the magnetic variations from multi-point measurements and allows estimating the direction of the local normal using the magnetic field; LNA, in contrast, is a single-spacecraft method which gives the local normal from the magnetic field and particle data. This study shows that the magnetopause does include approximately one-dimensional sub-structures, but also two- and three-dimensional intervals. It also shows that the dimensionality of the magnetic variations can differ from that of the other fields, so that, at some places, the magnetic field can have a 1D structure although the plasma variations do not satisfy the properties of a global one-dimensional problem. Finally, a generalisation and a systematic application of the MDD method to the physical quantities of interest is shown.
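
    A compact sketch of the MDD-style eigen-analysis (after Shi et al., 2005): estimate the magnetic gradient from four-point data by least squares, then eigen-analyse it. The tetrahedron geometry and the quasi-1D test field below are synthetic assumptions.

```python
import numpy as np

def mdd_normal(positions, b_fields):
    """Estimate G = grad(B) from four-point measurements (dB ~ dr.G by least
    squares), then eigen-analyse L = G G^T. A single dominant eigenvalue
    indicates a quasi-1D structure, and its eigenvector approximates the
    local normal."""
    dr = positions - positions.mean(axis=0)
    db = b_fields - b_fields.mean(axis=0)
    G, *_ = np.linalg.lstsq(dr, db, rcond=None)
    vals, vecs = np.linalg.eigh(G @ G.T)
    order = np.argsort(vals)[::-1]
    return vals[order], vecs[:, order]

# planar, current-sheet-like field: Bx varies only along x (normal = x-hat)
tetra = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
b = np.array([[0., 1., 0.], [2., 1., 0.], [0., 1., 0.], [0., 1., 0.]])
vals, vecs = mdd_normal(tetra, b)
```

    For this field the largest eigenvalue dominates and its eigenvector recovers the x direction, i.e., the structure is diagnosed as one-dimensional.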

  4. gsSKAT: Rapid gene set analysis and multiple testing correction for rare-variant association studies using weighted linear kernels.

    PubMed

    Larson, Nicholas B; McDonnell, Shannon; Cannon Albright, Lisa; Teerlink, Craig; Stanford, Janet; Ostrander, Elaine A; Isaacs, William B; Xu, Jianfeng; Cooney, Kathleen A; Lange, Ethan; Schleutker, Johanna; Carpten, John D; Powell, Isaac; Bailey-Wilson, Joan E; Cussenot, Olivier; Cancel-Tassin, Geraldine; Giles, Graham G; MacInnis, Robert J; Maier, Christiane; Whittemore, Alice S; Hsieh, Chih-Lin; Wiklund, Fredrik; Catalona, William J; Foulkes, William; Mandal, Diptasri; Eeles, Rosalind; Kote-Jarai, Zsofia; Ackerman, Michael J; Olson, Timothy M; Klein, Christopher J; Thibodeau, Stephen N; Schaid, Daniel J

    2017-05-01

    Next-generation sequencing technologies have afforded unprecedented characterization of low-frequency and rare genetic variation. Due to low power for single-variant testing, aggregative methods are commonly used to combine observed rare variation within a single gene. Causal variation may also aggregate across multiple genes within relevant biomolecular pathways. Kernel-machine regression and adaptive testing methods for aggregative rare-variant association testing have been demonstrated to be powerful approaches for pathway-level analysis, although these methods tend to be computationally intensive at high-variant dimensionality and require access to complete data. An additional analytical issue in scans of large pathway definition sets is multiple testing correction. Gene set definitions may exhibit substantial genic overlap, and the impact of the resultant correlation in test statistics on Type I error rate control for large agnostic gene set scans has not been fully explored. Herein, we first outline a statistical strategy for aggregative rare-variant analysis using component gene-level linear kernel score test summary statistics as well as derive simple estimators of the effective number of tests for family-wise error rate control. We then conduct extensive simulation studies to characterize the behavior of our approach relative to direct application of kernel and adaptive methods under a variety of conditions. We also apply our method to two case-control studies, respectively, evaluating rare variation in hereditary prostate cancer and schizophrenia. Finally, we provide open-source R code for public use to facilitate easy application of our methods to existing rare-variant analysis results. © 2017 WILEY PERIODICALS, INC.
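
    For intuition, one common eigenvalue-based estimator of the effective number of independent tests (a Galwey-style formula; the paper derives its own estimators for linear-kernel score statistics, so this is illustrative only):

```python
import numpy as np

def effective_tests(corr):
    """Effective number of independent tests from the eigenvalues of the
    correlation matrix of the gene-set test statistics:
    Meff = (sum sqrt(lambda_i))^2 / sum lambda_i."""
    lam = np.clip(np.linalg.eigvalsh(np.asarray(corr, float)), 0.0, None)
    return np.sqrt(lam).sum() ** 2 / lam.sum()

# independent gene sets: Meff equals the number of sets
m_indep = effective_tests(np.eye(5))
# fully overlapping sets: effectively a single test
m_same = effective_tests(np.ones((5, 5)))
```

    A Bonferroni-style family-wise threshold is then the nominal alpha divided by the estimated Meff rather than by the raw number of gene sets.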

  5. A variational theorem for creep with applications to plates and columns

    NASA Technical Reports Server (NTRS)

    Sanders, J Lyell, Jr; Mccomb, Harvey G , Jr; Schlechte, Floyd R

    1958-01-01

    A variational theorem is presented for a body undergoing creep. Solutions to problems of the creep behavior of plates, columns, beams, and shells can be obtained by means of the direct methods of the calculus of variations in conjunction with the stated theorem. The application of the theorem is illustrated for plates and columns by the solution of two sample problems.

  6. Development of an isocratic HPLC method for catechin quantification and its application to formulation studies.

    PubMed

    Li, Danhui; Martini, Nataly; Wu, Zimei; Wen, Jingyuan

    2012-10-01

    The aim of this study was to develop a simple, rapid and accurate isocratic HPLC analytical method to qualify and quantify five catechin derivatives, namely (+)-catechin (C), (-)-epigallocatechin (EGC), (-)-epicatechin gallate (ECG), (-)-epicatechin (EC) and (-)-epigallocatechin gallate (EGCG). To validate the analytical method, linearity, repeatability, intermediate precision, sensitivity, selectivity and recovery were investigated. The five catechin derivatives were completely separated by HPLC using a mobile phase containing 0.1% TFA in Milli-Q water (pH 2.0) mixed with methanol at a volume ratio of 75:25 at a flow rate of 0.8 ml/min. The method was shown to be linear (r² > 0.99), repeatable with instrumental precision < 2.0 and intra-assay precision < 2.5 (%CV, percent coefficient of variation), precise with intra-day variation < 1 and inter-day variation < 2.5 (%CV), and sensitive (LOD < 1 μg/mL and LOQ < 3 μg/mL) over the calibration range for all five derivatives. Derivatives could be fully recovered in the presence of the niosomal formulation (recovery rates > 91%). Selectivity of the method was proven by forced degradation studies, which showed that under acidic, basic, oxidative, temperature and photolysis stresses the parent drug can be separated from the degradation products by means of this analytical method. The described method was successfully applied in the in vitro release studies of catechin-loaded niosomes to demonstrate its utility in formulation characterization. The results indicated that drug release from the niosomal formulations was a biphasic process and that a diffusion mechanism regulated the permeation of catechin niosomes. Copyright © 2012 Elsevier B.V. All rights reserved.

  7. Quantification of histochemical stains using whole slide imaging: development of a method and demonstration of its usefulness in laboratory quality control.

    PubMed

    Gray, Allan; Wright, Alex; Jackson, Pete; Hale, Mike; Treanor, Darren

    2015-03-01

    Histochemical staining of tissue is a fundamental technique in tissue diagnosis and research, but it suffers from significant variability. Efforts to address this include laboratory quality controls and quality assurance schemes, but these rely on subjective interpretation of stain quality, are laborious and have low reproducibility. We aimed (1) to develop a method for histochemical stain quantification using whole slide imaging and image analysis and (2) to demonstrate its usefulness in measuring staining variation. A method to quantify the individual stain components of histochemical stains on virtual slides was developed. It was evaluated for repeatability and reproducibility, then applied to control sections of an appendix to quantify H&E staining (H/E intensities and H:E ratio) between automated staining machines and to measure differences between six regional diagnostic laboratories. The method was validated with <0.5% variation in H:E ratio measurement when using the same scanner for a batch of slides (i.e., it was repeatable) but was not highly reproducible between scanners or over time, where variation of 7% was found. Application of the method showed that H:E ratios between three staining machines varied from 0.69 to 0.93, and H:E ratio variation over time was observed. Interlaboratory comparison demonstrated differences in H:E ratio between regional laboratories from 0.57 to 0.89. A simple method using whole slide imaging can thus be used to quantify and compare histochemical staining, and could be deployed in routine quality assurance and quality control. Work is needed on whole slide imaging devices to improve reproducibility. Published by the BMJ Publishing Group Limited.
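
    Stain quantification of this kind typically rests on colour deconvolution; a sketch of an H:E ratio measurement (the stain optical-density vectors below are hypothetical and would be calibrated per scanner in practice; the synthetic "slide" is mixed from known concentrations so the recovered ratio can be checked):

```python
import numpy as np

# Illustrative RGB optical-density vectors for haematoxylin and eosin
H_OD = np.array([0.65, 0.70, 0.29])
E_OD = np.array([0.07, 0.99, 0.11])

def he_ratio(rgb):
    """Mean haematoxylin:eosin stain ratio via colour deconvolution:
    convert transmittances (0..1] to optical densities, then unmix with the
    3x2 stain matrix by least squares."""
    od = -np.log10(np.clip(rgb, 1e-6, None))
    M = np.stack([H_OD, E_OD]).T
    conc, *_ = np.linalg.lstsq(M, od.reshape(-1, 3).T, rcond=None)
    return conc[0].mean() / conc[1].mean()

# synthetic slide: every pixel mixed with H concentration twice the E one
M = np.stack([H_OD, E_OD]).T
conc_true = np.tile(np.array([[0.8], [0.4]]), (1, 50))
rgb = 10.0 ** (-(M @ conc_true).T)
```

    Comparing this ratio across staining machines or laboratories is the kind of objective measurement the paper describes.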

  8. An efficient multi-objective optimization method for water quality sensor placement within water distribution systems considering contamination probability variations.

    PubMed

    He, Guilin; Zhang, Tuqiao; Zheng, Feifei; Zhang, Qingzhou

    2018-06-20

    Water quality security within water distribution systems (WDSs) has been an important issue due to their inherent vulnerability associated with contamination intrusion. This motivates intensive studies to identify optimal water quality sensor placement (WQSP) strategies, aimed at the timely and effective detection of (un)intentional intrusion events. However, the available WQSP optimization methods have consistently presumed that each WDS node has an equal contamination probability. While simple to implement, this assumption may not conform to the fact that the nodal contamination probability can vary significantly between regions owing to variations in population density and user properties. Furthermore, low computational efficiency is another important factor that has seriously hampered the practical application of the currently available WQSP optimization approaches. To address these two issues, this paper proposes an efficient multi-objective WQSP optimization method that explicitly accounts for contamination probability variations. Four different contamination probability functions (CPFs) are proposed to represent the potential variations of nodal contamination probabilities within the WDS. Two real-world WDSs are used to demonstrate the utility of the proposed method. Results show that WQSP strategies can be significantly affected by the choice of the CPF. For example, when the proposed method is applied to the large case study with the CPF accounting for user properties, the event detection probabilities of the resultant solutions are approximately 65%, while these values are around 25% for the traditional approach, and such design solutions are achieved approximately 10,000 times faster than with the traditional method. This paper provides an alternative method to identify optimal WQSP solutions for the WDS, and also builds knowledge regarding the impacts of different CPFs on sensor deployments. Copyright © 2018 Elsevier Ltd. All rights reserved.
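
    The role of a contamination probability function can be sketched with a CPF-weighted detection objective and a greedy placement (a deliberately simplified, single-objective stand-in for the paper's multi-objective optimization; the 4-node detection matrix and probabilities are synthetic):

```python
import numpy as np

def expected_detection(sensors, detects, p_contam):
    """CPF-weighted objective: probability that an intrusion event (with the
    source node drawn from p_contam) is seen by at least one sensor.
    detects[i, j] = 1 if a sensor at node j detects an event at node i."""
    covered = detects[:, sensors].max(axis=1)
    return float((p_contam * covered).sum())

def greedy_placement(detects, p_contam, n_sensors):
    """Greedy sensor selection maximizing the weighted detection objective."""
    chosen = []
    for _ in range(n_sensors):
        best = max((j for j in range(detects.shape[1]) if j not in chosen),
                   key=lambda j: expected_detection(chosen + [j], detects, p_contam))
        chosen.append(best)
    return chosen

# tiny network: node 0's sensor also sees events at node 1; nodes 0 and 1
# carry most of the contamination probability (a non-uniform CPF)
detects = np.array([[1, 0, 0, 0], [1, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]], float)
p_contam = np.array([0.4, 0.4, 0.1, 0.1])
chosen = greedy_placement(detects, p_contam, 2)
```

    With the non-uniform CPF the first sensor goes to node 0, which covers the two high-probability nodes; a uniform assumption could rank candidate nodes quite differently.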

  9. Development of Multistep and Degenerate Variational Integrators for Applications in Plasma Physics

    NASA Astrophysics Data System (ADS)

    Ellison, Charles Leland

    Geometric integrators yield high-fidelity numerical results by retaining conservation laws in the time advance. A particularly powerful class of geometric integrators is symplectic integrators, which are widely used in orbital mechanics and accelerator physics. An important application presently lacking symplectic integrators is the guiding center motion of magnetized particles represented by non-canonical coordinates. Because guiding center trajectories are foundational to many simulations of magnetically confined plasmas, geometric guiding center algorithms have high potential for impact. The motivation is compounded by the need to simulate long-pulse fusion devices, including ITER, and opportunities in high performance computing, including the use of petascale resources and beyond. This dissertation uses a systematic procedure for constructing geometric integrators --- known as variational integration --- to deliver new algorithms for guiding center trajectories and other plasma-relevant dynamical systems. These variational integrators are non-trivial because the Lagrangians of interest are degenerate - the Euler-Lagrange equations are first-order differential equations and the Legendre transform is not invertible. The first contribution of this dissertation is that variational integrators for degenerate Lagrangian systems are typically multistep methods. Multistep methods admit parasitic mode instabilities that can ruin the numerical results. These instabilities motivate the second major contribution: degenerate variational integrators. By replicating the degeneracy of the continuous system, degenerate variational integrators avoid parasitic mode instabilities. The new methods are therefore robust geometric integrators for degenerate Lagrangian systems. These developments in variational integration theory culminate in one-step degenerate variational integrators for non-canonical magnetic field line flow and guiding center dynamics. 
The guiding center integrator assumes coordinates such that one component of the magnetic field is zero; it is shown how to construct such coordinates for nested magnetic surface configurations. Additionally, collisional drag effects are incorporated in the variational guiding center algorithm for the first time, allowing simulation of energetic particle thermalization. Advantages relative to existing canonical-symplectic and non-geometric algorithms are numerically demonstrated. All algorithms have been implemented as part of a modern, parallel, ODE-solving library, suitable for use in high-performance simulations.
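
    For a flavour of variational integration, a two-step integrator derived from a midpoint discrete Lagrangian for the harmonic oscillator L = q̇²/2 − q²/2 (a standard textbook construction, not the dissertation's degenerate guiding-center schemes):

```python
import numpy as np

def variational_integrator(q0, q1, h, n_steps):
    """Two-step scheme from the discrete Euler-Lagrange equations of the
    midpoint discrete Lagrangian L_d(qa, qb) = h*L((qa+qb)/2, (qb-qa)/h).
    For the harmonic oscillator this reduces to the linear recurrence
    q_{k+1} = A q_k - q_{k-1} with A = (2/h - h/2)/(1/h + h/4)."""
    qs = [q0, q1]
    A = (2.0 / h - h / 2.0) / (1.0 / h + h / 4.0)
    for _ in range(n_steps):
        qs.append(A * qs[-1] - qs[-2])
    return np.array(qs)

# start on the discrete trajectory cos(k*theta), where 2*cos(theta) = A
h = 0.1
theta = np.arccos((2.0 - h**2 / 2.0) / (2.0 + h**2 / 2.0))
traj = variational_integrator(1.0, np.cos(theta), h, 5000)
```

    Because the scheme is symplectic, the oscillation amplitude stays bounded over many periods instead of exhibiting the secular drift typical of non-geometric integrators.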

  10. Magnetic field effect on the energy levels of an exciton in a GaAs quantum dot: Application for excitonic lasers.

    PubMed

    Jahan, K Luhluh; Boda, A; Shankar, I V; Raju, Ch Narasimha; Chatterjee, Ashok

    2018-03-22

    The problem of an exciton trapped in a Gaussian quantum dot (QD) of GaAs is studied in both two and three dimensions in the presence of an external magnetic field using the Ritz variational method, the 1/N expansion method and the shifted 1/N expansion method. The ground state energy and the binding energy of the exciton are obtained as a function of the quantum dot size, confinement strength and the magnetic field and compared with those available in the literature. While the variational method gives the upper bound to the ground state energy, the 1/N expansion method gives the lower bound. The results obtained from the shifted 1/N expansion method are shown to match very well with those obtained from the exact diagonalization technique. The variation of the exciton size and the oscillator strength of the exciton are also studied as a function of the size of the quantum dot. The excited states of the exciton are computed using the shifted 1/N expansion method and it is suggested that a given number of stable excitonic bound states can be realized in a quantum dot by tuning the quantum dot parameters. This can open up the possibility of having quantum dot lasers using excitonic states.

  11. An intercomparison for NIRS and NYU passive thoron gas detectors at NYU.

    PubMed

    Sorimachi, Atsuyuki; Ishikawa, Tetsuo; Tokonami, Shinji; Chittaporn, Passaporn; Harley, Naomi H

    2012-04-01

    An intercomparison of thoron (²²⁰Rn) measurement was carried out between the National Institute of Radiological Sciences, Japan (NIRS), and New York University School of Medicine, USA (NYU). The measurements of ²²⁰Rn concentration at NIRS and NYU were performed using the scintillation cell method and the two-filter method, respectively, as the standard measurement method. Three types of alpha track detectors based on the passive radon (²²²Rn)-²²⁰Rn discriminative measurement technique were used: Raduet and Radopot detectors were used at NIRS, and four-leaf detectors were used at NYU. In this study, the authors evaluated ²²⁰Rn concentration variation with respect to exposure run, measurement method, and exposure chamber. The detectors were exposed to ²²⁰Rn gas at approximately 15 kBq m⁻³ during periods of 0.75 to 3 d. As a result, the variation of each measurement method among these exposure runs was comparable to or less than that for the two-filter method. Agreement between the standard measurement methods of NIRS and NYU was observed to be about 10%, as is the case with the passive detectors. The Raduet detector showed a large variation in detection response between the NIRS and NYU chambers, which could be related to different traceability.

  12. The Uncertainty of Long-term Linear Trend in Global SST Due to Internal Variation

    NASA Astrophysics Data System (ADS)

    Lian, Tao

    2016-04-01

    In most parts of the global ocean, the magnitude of the long-term linear trend in sea surface temperature (SST) is much smaller than the amplitude of local multi-scale internal variation. One can thus use the record of a specified period to arbitrarily determine the value and the sign of the long-term linear trend in regional SST, leading to controversial conclusions on how global SST has responded to global warming in recent history. Analyzing the linear trend coefficient estimated by the ordinary least-square method indicates that the linear trend consists of two parts: one related to the long-term change, and the other related to the multi-scale internal variation. The sign of the long-term change can be correctly reproduced only when the magnitude of the linear trend coefficient is greater than a theoretical threshold which scales the influence from the multi-scale internal variation. Otherwise, the sign of the linear trend coefficient will depend on the phase of the internal variation, or in other words, on the period being used. An improved least-square method is then proposed to reduce the theoretical threshold. When applying the new method to a global SST reconstruction from 1881 to 2013, we find that in a large part of the Pacific, the southern Indian Ocean and the North Atlantic, the influence from the multi-scale internal variation on the sign of the linear trend coefficient cannot be excluded. Therefore, the resulting warming and/or cooling linear trends in these regions cannot be fully attributed to global warming.
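
    The phase dependence of the estimated trend is easy to reproduce (synthetic series; the amplitudes, periods and trend values are arbitrary illustrations, not the paper's threshold formula):

```python
import numpy as np

def ols_slope(y):
    """Ordinary least-square linear trend coefficient."""
    t = np.arange(len(y), dtype=float)
    t -= t.mean()
    return (t * (y - y.mean())).sum() / (t * t).sum()

def make_sst(trend_per_step, n, phase):
    """Weak long-term change plus a large 'internal' oscillation."""
    t = np.arange(n)
    return trend_per_step * t + 1.0 * np.sin(2.0 * np.pi * t / 60.0 + phase)

# same record length, two phases of the internal oscillation
weak = [ols_slope(make_sst(1e-4, 90, p)) for p in (np.pi / 2, 3 * np.pi / 2)]
strong = [ols_slope(make_sst(5e-2, 90, p)) for p in (np.pi / 2, 3 * np.pi / 2)]
```

    With a weak trend the fitted slope flips sign with the phase of the internal oscillation (i.e., with the period sampled); with a trend well above the internal-variation contribution it does not.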

  13. Structural Organization and Strain Variation in the Genome of Varicella Zoster Virus

    DTIC Science & Technology

    1984-10-23

    [Record abstract consists of table-of-contents fragments; page numbers removed.] Zoster; growth of VZV in tissue culture; structure and proteins of VZV; structure of HSV DNA; classification of herpesviruses based on DNA structure; strain variation in herpesvirus DNA; VZV DNA; specific aims. II. Materials and methods: cells and viruses; isolation of virus [...]; endonuclease fragments by colony hybridization; selected methods of restriction endonuclease mapping; identification of [...].

  14. Approximate Solution of Time-Fractional Advection-Dispersion Equation via Fractional Variational Iteration Method

    PubMed Central

    İbiş, Birol

    2014-01-01

    This paper aims to obtain the approximate solution of the time-fractional advection-dispersion equation (FADE) involving Jumarie's modification of the Riemann-Liouville derivative by the fractional variational iteration method (FVIM). FVIM provides an analytical approximate solution in the form of a convergent series. Some examples are given, and the results indicate that the FVIM is highly accurate, efficient, and convenient for solving time FADEs. PMID:24578662
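
    For orientation, the variational iteration method builds successive approximations from a correction functional. Its classical integer-order form is shown below; the FVIM replaces the time integral with a fractional integral of order α and uses Jumarie's modified Riemann-Liouville derivative (this generic form is for illustration and is not copied from the paper):

```latex
u_{n+1}(x,t) = u_{n}(x,t) + \int_{0}^{t} \lambda(\tau)\,
  \bigl[ \mathcal{L}\,u_{n}(x,\tau) + \mathcal{N}\,\tilde{u}_{n}(x,\tau) - g(x,\tau) \bigr]\,\mathrm{d}\tau
```

    Here λ is a Lagrange multiplier identified via variational theory, 𝓛 and 𝓝 are the linear and nonlinear operators, g is the source term, and ũ_n denotes a restricted variation.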

  15. Unconventional Hamilton-type variational principle in phase space and symplectic algorithm

    NASA Astrophysics Data System (ADS)

    Luo, En; Huang, Weijiang; Zhang, Hexin

    2003-06-01

    By a novel approach proposed by Luo, the unconventional Hamilton-type variational principle in phase space for the elastodynamics of multi-degree-of-freedom systems is established in this paper. It not only fully characterizes the initial-value problem of these dynamics, but also has a natural symplectic structure. Based on this variational principle, a symplectic algorithm called the symplectic time-subdomain method is proposed. A non-difference scheme is constructed by applying Lagrange interpolation polynomials to the time subdomain. Furthermore, it is proved that the presented symplectic algorithm is unconditionally stable. The results of two numerical examples of different types show that the accuracy and computational efficiency of the new method clearly exceed those of the widely used Wilson-θ and Newmark-β methods. Therefore, the new algorithm is a highly efficient one with better computational performance.

  16. A Variational Method in Out-of-Equilibrium Physical Systems

    PubMed Central

    Pinheiro, Mario J.

    2013-01-01

    We propose a new variational principle for out-of-equilibrium dynamic systems that is fundamentally based on the method of Lagrange multipliers applied to the total entropy of an ensemble of particles. However, we use the fundamental equation of thermodynamics in terms of differential forms, considering U and S as 0-forms. We obtain a set of two first-order differential equations that reveal the same formal symplectic structure shared by classical mechanics, fluid mechanics and thermodynamics. From this approach, a topological torsion current emerges of the form ε_ijk A_j ω_k, where A_j and ω_k denote the components of the vector potential (gravitational and/or electromagnetic) and of the angular velocity of the accelerated frame, respectively. We derive a special form of the Umov-Poynting theorem for rotating gravito-electromagnetic systems. The variational method is then applied to clarify the working mechanism of particular devices. PMID:24316718

  17. Monte Carlo estimation of total variation distance of Markov chains on large spaces, with application to phylogenetics.

    PubMed

    Herbei, Radu; Kubatko, Laura

    2013-03-26

    Markov chains are widely used for modeling in many areas of molecular biology and genetics. As the complexity of such models advances, it becomes increasingly important to assess the rate at which a Markov chain converges to its stationary distribution in order to carry out accurate inference. A common measure of convergence to the stationary distribution is the total variation distance, but this measure can be difficult to compute when the state space of the chain is large. We propose a Monte Carlo method to estimate the total variation distance that can be applied in this situation, and we demonstrate how the method can be efficiently implemented by taking advantage of GPU computing techniques. We apply the method to two Markov chains on the space of phylogenetic trees, and discuss the implications of our findings for the development of algorithms for phylogenetic inference.
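
    The quantity being estimated is the total variation distance $\mathrm{TV}(P,Q) = \tfrac{1}{2}\sum_x |P(x) - Q(x)|$. A minimal Monte Carlo sketch (the definitional estimator from i.i.d. samples on a small discrete space, not the authors' GPU method for Markov-chain convergence) looks like this:

```python
import random
from collections import Counter

def empirical_tv(sample_p, sample_q):
    """Estimate TV(P, Q) = 1/2 * sum_x |P(x) - Q(x)| from two
    i.i.d. samples over a discrete state space, using empirical
    frequencies in place of the true probabilities."""
    n_p, n_q = len(sample_p), len(sample_q)
    cp, cq = Counter(sample_p), Counter(sample_q)
    support = set(cp) | set(cq)
    return 0.5 * sum(abs(cp[x] / n_p - cq[x] / n_q) for x in support)

random.seed(0)
p_draws = [random.choice("AAAB") for _ in range(20_000)]  # P: A w.p. 3/4
q_draws = [random.choice("AB") for _ in range(20_000)]    # Q: A w.p. 1/2
# True TV here is |3/4 - 1/2| = 0.25
print(f"estimated TV: {empirical_tv(p_draws, q_draws):.3f}")
```

    On spaces as large as the set of phylogenetic trees this naive frequency estimator breaks down, which is precisely the regime the paper's method is designed for.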

  18. Analytical and variational numerical methods for unstable miscible displacement flows in porous media

    NASA Astrophysics Data System (ADS)

    Scovazzi, Guglielmo; Wheeler, Mary F.; Mikelić, Andro; Lee, Sanghyun

    2017-04-01

    The miscible displacement of one fluid by another in a porous medium has received considerable attention in subsurface, environmental and petroleum engineering applications. When a fluid of higher mobility displaces another of lower mobility, unstable patterns - referred to as viscous fingering - may arise. Their physical and mathematical study has been the object of numerous investigations over the past century. The objective of this paper is to present a review of these contributions with particular emphasis on variational methods. These algorithms are tailored to real field applications thanks to their advanced features: handling of general complex geometries, robustness in the presence of rough tensor coefficients, low sensitivity to mesh orientation in advection dominated scenarios, and provable convergence with fully unstructured grids. This paper is dedicated to the memory of Dr. Jim Douglas Jr., for his seminal contributions to miscible displacement and variational numerical methods.

  19. Adaptive torque estimation of robot joint with harmonic drive transmission

    NASA Astrophysics Data System (ADS)

    Shi, Zhiguo; Li, Yuankai; Liu, Guangjun

    2017-11-01

    Robot joint torque estimation using input and output position measurements is a promising technique, but the result may be affected by load variation at the joint. In this paper, a torque estimation method whose robustness and optimality adjust to load variation is proposed for robot joints with harmonic drive transmission. Based on a harmonic drive model and a redundant adaptive robust Kalman filter (RARKF), the proposed approach adapts the filtering optimality and robustness of the torque estimate to the load variation by self-tuning the filtering gain and self-switching the filtering mode between optimal and robust. The redundant factor of RARKF is designed as a function of the motor current, to tolerate modeling error and to drive the load-dependent switching of the filtering mode. The proposed joint torque estimation method has been experimentally evaluated against a commercial torque sensor and two representative filtering methods, and the results demonstrate its effectiveness.
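
    The optimal/robust mode-switching idea can be sketched with a scalar Kalman step that inflates the effective measurement noise when the innovation is implausibly large. This is an illustrative stand-in for the paper's RARKF, not its actual filter; the gate and noise parameters are hypothetical:

```python
def adaptive_kf_step(x, P, z, Q=1e-4, R=1e-2, gate=3.0):
    """One scalar Kalman step for a random-walk state model, with a
    crude robustness switch: if the normalized innovation exceeds
    `gate`, the effective measurement noise is inflated (a stand-in
    for the redundant factor) so the update is down-weighted."""
    P = P + Q                    # predict: random-walk process noise
    nu = z - x                   # innovation
    S = P + R                    # nominal innovation variance
    if nu * nu > gate * gate * S:
        # robust mode: scale R by the squared normalized innovation
        S = P + R * nu * nu / (gate * gate * S)
    K = P / S                    # self-tuned filtering gain
    return x + K * nu, (1.0 - K) * P

# Track a roughly constant signal; the 10.0 reading is an outlier and
# should barely move the estimate because the filter switches modes.
x, P = 0.0, 1.0
for z in [1.0, 1.02, 0.98, 10.0, 1.01, 0.99]:
    x, P = adaptive_kf_step(x, P, z)
print(f"estimate: {x:.3f}")
```

    In the paper the switching criterion is driven by motor current rather than a fixed innovation gate, but the self-tuning of the gain follows the same pattern.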

  20. The Intracytoplasmic Domain of the Env Transmembrane Protein Is a Locus for Attenuation of Simian Immunodeficiency Virus SIVmac in Rhesus Macaques

    PubMed Central

    Shacklett, Barbara L.; Weber, Claudia Jo; Shaw, Karen E. S.; Keddie, Elise M.; Gardner, Murray B.; Sonigo, Pierre; Luciw, Paul A.

    2000-01-01

    The human and simian immunodeficiency virus (HIV-1 and SIVmac) transmembrane proteins contain unusually long intracytoplasmic domains (ICD-TM). These domains are suggested to play a role in envelope fusogenicity, interaction with the viral matrix protein during assembly, viral infectivity, binding of intracellular calmodulin, disruption of membranes, and induction of apoptosis. Here we describe a novel mutant virus, SIVmac-M4, containing multiple mutations in the coding region for the ICD-TM of pathogenic molecular clone SIVmac239. Parental SIVmac239-Nef+ produces high-level persistent viremia and simian AIDS in both juvenile and newborn rhesus macaques. The ICD-TM region of SIVmac-M4 contains three stop codons, a +1 frameshift, and mutation of three highly conserved, charged residues in the conserved C-terminal alpha-helix referred to as lentivirus lytic peptide 1 (LLP-1). Overlapping reading frames for tat, rev, and nef are not affected by these changes. In this study, four juvenile macaques received SIVmac-M4 by intravenous injection. Plasma viremia, as measured by branched-DNA (bDNA) assay, reached a peak at 2 weeks postinoculation but dropped to below detectable levels by 12 weeks. At over 1.5 years postinoculation, all four juvenile macaques remain healthy and asymptomatic. In a subsequent experiment, four neonatal rhesus macaques were given SIVmac-M4 intravenously. These animals exhibited high levels of viremia in the acute phase (2 weeks postinoculation) but are showing a relatively low viral load in the chronic phase of infection, with no clinical signs of disease for 1 year. These findings demonstrated that the intracytoplasmic domain of the transmembrane Env (Env-TM) is a locus for attenuation in rhesus macaques. PMID:10846063
