Science.gov

Sample records for double hit method

  1. Cutaneous presentation of Double Hit Lymphoma

    PubMed Central

    Khelfa, Yousef; Lebowicz, Yehuda

    2016-01-01

    Diffuse large B-cell lymphoma (DLBCL) is the most common type of non-Hodgkin lymphoma (NHL), representing approximately 25% of diagnosed NHL. DLBCL is a heterogeneous disease both clinically and genetically. The 3 most common chromosomal translocations in DLBCL involve the oncogenes BCL2, BCL6, and MYC. Double hit (DH) DLBCL is an aggressive form in which MYC rearrangement is associated with either BCL2 or BCL6 rearrangement. Patients typically present with a rapidly growing mass, often with B symptoms. Extranodal disease is often present. Though there is a paucity of prospective trials in this subtype, double hit lymphoma (DHL) has been linked to very poor outcomes when patients are treated with standard R-CHOP. There is, therefore, a lack of consensus regarding the standard treatment for DHL. Several retrospective analyses have been conducted to help guide treatment of this disease. These suggest that DA EPOCH-R may be the most promising regimen and that achievement of complete resolution predicts better long-term outcomes. PMID:27115017

  2. Current challenges and novel treatment strategies in double hit lymphomas

    PubMed Central

    Anderson, Mary Ann; Tsui, Alpha; Wall, Meaghan; Huang, David C. S.; Roberts, Andrew W.

    2016-01-01

    High-grade B-cell lymphomas with recurrent chromosomal break points have been termed ‘double hit lymphoma’ (DHL). The most commonly seen DHL is diffuse large B-cell lymphoma (DLBCL) with t(14;18) and t(8;14) or t(8;22) resulting in overexpression of BCL2 and MYC, respectively. The increased proliferation due to MYC overexpression, without the ability for an apoptotic brake as a result of BCL2 overexpression, results in ‘the perfect storm of oncogenesis’. Thus, this disease presents a number of diagnostic and therapeutic challenges for the hematologist. The first and foremost challenge is to recognize DHL. As different morphological entities can be affected, it is incumbent on pathologists and clinicians to maintain a high index of suspicion, especially in disease that appears unusually aggressive or refractory to therapy. Diagnosis by fluorescence in situ hybridization (FISH) is a sensitive and specific method for detection of the disease but is time-consuming and expensive. While detection by immunohistochemistry (IHC) is sensitive and correlates with survival, standardized methods for this are not widely agreed upon. The second and equally important challenge in DHL is optimizing clinical outcome in a group of patients for whom the prognosis is widely regarded as poor. While improvements have been achieved by dose escalating standard chemotherapeutic regimens, many patients continue to do badly. Furthermore, as DHL is largely a disease of aging, many patients are unsuitable for dose-intensive chemotherapy regimens. There are now multiple novel targeted agents in various stages of clinical development that offer hope for better outcomes without undue toxicity. Among the most exciting of these developments are specific inhibitors of both BCL2 and MYC. PMID:26834954

  3. Double-hit and double-protein-expression lymphomas: aggressive and refractory lymphomas.

    PubMed

    Sarkozy, Clémentine; Traverse-Glehen, Alexandra; Coiffier, Bertrand

    2015-11-01

    Double-hit lymphoma (DHL) is a subgroup of aggressive lymphomas with both MYC and BCL2 gene rearrangements, characterised by a rapidly progressing clinical course that is refractory to aggressive treatment, and by short survival. Over time, the definition was modified and now includes diffuse large B-cell lymphoma (DLBCL) with MYC translocation combined with an additional translocation involving BCL2 or BCL6. Some cases that have a similar clinical course with concomitant overexpression of MYC and BCL2 proteins were recently characterised as immunohistochemical double-hit lymphomas (ie, double-protein-expression lymphomas [DPLs]). The clinical course of these DPLs is worse than that of so-called standard DLBCL but suggested by some studies to be slightly better than that of DHL, although there is overlap between the two categories. Present treatment does not allow cure or long-term survival in patients with genetic or immunohistochemical double-hit lymphomas, but several new drugs are being developed. PMID:26545844

  4. Comparison of recrystallisation kinetics determined by stress relaxation, double hit, optical metallography and EBSD approaches

    SciTech Connect

    Dzubinsky, M.; Husain, Z.; Haaften, W.M. van

    2004-05-15

    A comparison of the recrystallisation kinetics determined by stress relaxation (SR), double-hit (DH), optical metallography and scanning electron microscope/electron backscattered diffraction (SEM/EBSD) mapping experimental approaches has been conducted. Two different types of steel were used as experimental material: C-Mn and interstitial-free (IF). Tests were carried out in the austenitic region for C-Mn steel and just above the Ar₁ temperature for IF steel. Both steels were investigated in static and postdynamic recrystallisation (SRx and PDRx, respectively) regions. The work indicates that some differences exist between the results given by these methods. The biggest correction to the experimental results in the SRx region has to be performed on the 'raw data' obtained by the SR method. The SR method, owing to its continually applied stress, tends to accelerate the recrystallisation kinetics. The estimation of the recrystallised fraction in the PDRx region by the DH test gives even higher error because of dynamic changes of microstructure during the second hit.

  5. Chromosome doubling method

    DOEpatents

    Kato, Akio

    2006-11-14

    The invention provides methods for chromosome doubling in plants. The technique overcomes the low yields of doubled progeny associated with the use of prior techniques for doubling chromosomes in plants such as grasses. The technique can be used in large scale applications and has been demonstrated to be highly effective in maize. Following treatment in accordance with the invention, plants remain amenable to self fertilization, thereby allowing the efficient isolation of doubled progeny plants.

  6. A double hit model for the distribution of time to AIDS onset

    NASA Astrophysics Data System (ADS)

    Chillale, Nagaraja Rao

    2013-09-01

    Incubation time is a key epidemiologic descriptor of an infectious disease. In the case of HIV infection this is a random variable and is probably the longest one. The probability distribution of incubation time is the major determinant of the relation between the incidences of HIV infection and its manifestation as AIDS. This is also one of the key factors used for accurate estimation of AIDS incidence in a region. The present article i) briefly reviews the work done, points out uncertainties in estimation of AIDS onset time and stresses the need for its precise estimation, ii) highlights some of the modelling features of onset distribution including immune failure mechanism, and iii) proposes a 'Double Hit' model for the distribution of time to AIDS onset in the cases of (a) independent and (b) dependent time variables of the two markers and examines the applicability of a few standard probability models.
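
    To make the double-hit idea concrete, the following is a minimal illustrative formulation (an assumption for exposition, not necessarily the distribution proposed in the article): if the two markers fail at random times T1 and T2 and AIDS onset requires both hits, the incubation time is T = max(T1, T2); with independent exponential hit times of rates λ1 and λ2 one obtains

    ```latex
    % Illustrative sketch only; a dependent-markers variant would replace the
    % product below by a bivariate joint CDF of (T_1, T_2).
    \[
      F_T(t) = \Pr(T_1 \le t)\,\Pr(T_2 \le t)
             = \bigl(1 - e^{-\lambda_1 t}\bigr)\bigl(1 - e^{-\lambda_2 t}\bigr),
      \qquad t \ge 0,
    \]
    \[
      f_T(t) = \lambda_1 e^{-\lambda_1 t}\bigl(1 - e^{-\lambda_2 t}\bigr)
             + \lambda_2 e^{-\lambda_2 t}\bigl(1 - e^{-\lambda_1 t}\bigr).
    \]
    ```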

  7. Double-hit and triple-hit lymphomas arising from follicular lymphoma following acquisition of MYC: report of two cases and literature review.

    PubMed

    Xu, Xiaoxiao; Zhang, Le; Wang, Yafei; Zhang, Qing; Zhang, Lianyu; Sun, Baocun; Zhang, Yizhuo

    2013-01-01

    Double-hit and triple-hit B-cell lymphomas (DHL and THL) are rare lymphoma subtypes usually associated with a poor prognosis. They are defined by two or three recurrent chromosomal translocations involving the MYC/8q24 locus, usually in combination with the t(14;18)(q32;q21) BCL2 translocation and/or a BCL6/3q27 chromosomal translocation. DHL is observed both in de novo diffuse large B-cell lymphoma (DLBCL) and in B-cell lymphoma, unclassifiable, with features intermediate between DLBCL and Burkitt lymphoma. Here, we present two follicular lymphoma patients, one whose disease transformed to THL and another whose disease transformed to DHL. Both cases showed aggressive clinical courses with poor prognosis, associated with acquisition of the c-Myc gene (MYC) and central nervous system (CNS) involvement. We reviewed the related literature and correlated the immunophenotype with clinical manifestations such as response to therapy and prognosis. Although the incidence of DHL and THL is low, cytogenetic and FISH analyses should be included when B-cell lymphoma patients experience a relapsed or refractory course of disease. We concluded that c-Myc may contribute to aggressive transformation, and more mechanism-based therapy should be explored. PMID:23573328

  8. "Hits" (Not "Discussion Posts") Predict Student Success in Online Courses: A Double Cross-Validation Study

    ERIC Educational Resources Information Center

    Ramos, Cheryl; Yudko, Errol

    2008-01-01

    The efficacy of individual components of an online course on positive course outcome was examined via stepwise multiple regression analysis. Outcome was measured as the student's total score on all exams given during the course. The predictors were page hits, discussion posts, and discussion reads. The vast majority of the variance of outcome was…
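
    A minimal sketch of the kind of forward-stepwise multiple regression described above, written with statsmodels; the column names (exam_total, page_hits, discussion_posts, discussion_reads) and the data file are hypothetical placeholders, not the study's actual variables or data.

    ```python
    # Illustrative sketch only (not the study's code): forward-stepwise multiple
    # regression of exam totals on course-activity predictors.
    import pandas as pd
    import statsmodels.api as sm

    def forward_stepwise(df, outcome, candidates, p_enter=0.05):
        """Repeatedly add the candidate predictor with the smallest p-value
        below p_enter; stop when no remaining candidate qualifies."""
        selected, remaining = [], list(candidates)
        while remaining:
            pvals = {}
            for var in remaining:
                X = sm.add_constant(df[selected + [var]])
                pvals[var] = sm.OLS(df[outcome], X).fit().pvalues[var]
            best = min(pvals, key=pvals.get)
            if pvals[best] >= p_enter:
                break
            selected.append(best)
            remaining.remove(best)
        return sm.OLS(df[outcome], sm.add_constant(df[selected])).fit()

    # Hypothetical usage:
    # df = pd.read_csv("course_logs.csv")
    # fit = forward_stepwise(df, "exam_total",
    #                        ["page_hits", "discussion_posts", "discussion_reads"])
    # print(fit.summary())
    ```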

  9. Dynamics of hepatic gene expression and serum cytokine profiles in single and double-hit burn and sepsis animal models

    PubMed Central

    Rao, Rohit; Orman, Mehmet A.; Berthiaume, Francois; Androulakis, Ioannis P.

    2015-01-01

    We simulate the pathophysiology of severe burn trauma and burn-induced sepsis, using rat models of experimental burn injury and cecal ligation and puncture (CLP) either individually (single-hit model) or in combination (double-hit model). The experimental burn injury simulates a systemic but sterile pro-inflammatory response, while the CLP simulates the effect of polymicrobial sepsis. Given the liver's central role in mediating the host immune response and onset of hypermetabolism after burn injury, elucidating the alterations in hepatic gene expression in response to injury can lead to a better understanding of the regulation of the inflammatory response, whereas circulating cytokine protein expression reflects key systemic inflammatory mediators. In this article, we present both the hepatic gene expression and circulating cytokine/chemokine protein expression data for the above-mentioned experimental model to gain insights into the temporal dynamics of the inflammatory and hypermetabolic response following burn and septic injury. This data article supports results discussed in research articles (Yang et al., 2012 [1,4]; Mattick et al. 2012, 2013 [2,3]; Nguyen et al., 2014 [5]; Orman et al., 2011, 2012 [6–8]). PMID:26217749

  10. Six Biophysical Screening Methods Miss a Large Proportion of Crystallographically Discovered Fragment Hits: A Case Study.

    PubMed

    Schiebel, Johannes; Radeva, Nedyalka; Krimmer, Stefan G; Wang, Xiaojie; Stieler, Martin; Ehrmann, Frederik R; Fu, Kan; Metz, Alexander; Huschmann, Franziska U; Weiss, Manfred S; Mueller, Uwe; Heine, Andreas; Klebe, Gerhard

    2016-06-17

    Fragment-based lead discovery (FBLD) has become a pillar in drug development. Typical applications of this method comprise at least two biophysical screens as a prefilter and a follow-up crystallographic experiment on a subset of fragments. Clearly, structural information is pivotal in FBLD, but a key question is whether such a screening cascade strategy will retrieve the majority of fragment-bound structures. We therefore set out to screen 361 fragments for binding to endothiapepsin, a representative of the challenging group of aspartic proteases, employing six screening techniques and crystallography in parallel. Crystallography resulted in the very high number of 71 structures. Yet alarmingly, 44% of these hits were not detected by any biophysical screening approach. Moreover, any screening cascade, building on the results from two or more screening methods, would have failed to predict at least 73% of these hits. We thus conclude that, at least in the present case, the frequently applied biophysical prescreening filters diminish the number of possible X-ray hits while only the immediate use of crystallography enables exhaustive retrieval of a maximum of fragment structures, which represent a rich source guiding hit-to-lead-to-drug evolution. PMID:27028906

  11. Combining Computational Methods for Hit to Lead Optimization in Mycobacterium tuberculosis Drug Discovery

    PubMed Central

    Ekins, Sean; Freundlich, Joel S.; Hobrath, Judith V.; White, E. Lucile; Reynolds, Robert C

    2013-01-01

    Purpose Tuberculosis treatments need to be shorter and overcome drug resistance. Our previous large scale phenotypic high-throughput screening against Mycobacterium tuberculosis (Mtb) has identified 737 active compounds and thousands that are inactive. We have used this data for building computational models as an approach to minimize the number of compounds tested. Methods A cheminformatics clustering approach followed by Bayesian machine learning models (based on publicly available Mtb screening data) was used to illustrate that application of these models for screening set selections can enrich the hit rate. Results In order to explore chemical diversity around active cluster scaffolds of the dose-response hits obtained from our previous Mtb screens, a set of 1924 commercially available molecules was selected and evaluated for antitubercular activity and cytotoxicity using Vero, THP-1 and HepG2 cell lines with 4.3%, 4.2% and 2.7% hit rates, respectively. We demonstrate that models incorporating antitubercular and cytotoxicity data in Vero cells can significantly enrich the selection of non-toxic actives compared to random selection. Across all cell lines, the Molecular Libraries Small Molecule Repository (MLSMR) and cytotoxicity model identified ~10% of the hits in the top 1% screened (>10 fold enrichment). We also showed that seven out of nine Mtb active compounds from different academic published studies and eight out of eleven Mtb active compounds from a pharmaceutical screen (GSK) would have been identified by these Bayesian models. Conclusion Combining clustering and Bayesian models represents a useful strategy for compound prioritization and hit-to-lead optimization of antitubercular agents. PMID:24132686
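
    The enrichment figures quoted above (roughly 10% of the hits recovered in the top 1% screened, i.e. >10-fold enrichment) follow the standard enrichment-factor arithmetic, sketched below with made-up numbers rather than the study's data.

    ```python
    # How much better a model-ranked screening list recovers actives than random
    # selection would. Toy numbers only.
    def enrichment_factor(is_active_ranked, top_fraction=0.01):
        """is_active_ranked: 0/1 activity flags sorted by model score (best first)."""
        n = len(is_active_ranked)
        k = max(1, int(round(n * top_fraction)))
        hits_top = sum(is_active_ranked[:k])
        hits_all = sum(is_active_ranked)
        return (hits_top / k) / (hits_all / n) if hits_all else 0.0

    # 1000 ranked compounds, 40 actives overall, 5 of them in the top 10 screened:
    flags = [1] * 5 + [0] * 5 + [1] * 35 + [0] * 955
    print(enrichment_factor(flags, 0.01))   # 12.5, i.e. >10-fold enrichment
    ```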

  12. Novel Double-Hit Model of Radiation and Hyperoxia-Induced Oxidative Cell Damage Relevant to Space Travel.

    PubMed

    Pietrofesa, Ralph A; Velalopoulou, Anastasia; Lehman, Stacey L; Arguiri, Evguenia; Solomides, Pantelis; Koch, Cameron J; Mishra, Om P; Koumenis, Constantinos; Goodwin, Thomas J; Christofidou-Solomidou, Melpo

    2016-01-01

    Spaceflight occasionally requires multiple extravehicular activities (EVA) that potentially subject astronauts to repeated changes in ambient oxygen superimposed on those of space radiation exposure. We thus developed a novel in vitro model system to test lung cell damage following repeated exposure to radiation and hyperoxia. Non-tumorigenic murine alveolar type II epithelial cells (C10) were exposed to >95% O₂ for 8 h only (O₂), 0.25 Gy ionizing γ-radiation (IR) only, or a double-hit combination of both challenges (O₂ + IR) followed by 16 h of normoxia (ambient air containing 21% O₂ and 5% CO₂) (1 cycle = 24 h, 2 cycles = 48 h). Cell survival, DNA damage, apoptosis, and indicators of oxidative stress were evaluated after 1 and 2 cycles of exposure. We observed a significant (p < 0.05) decrease in cell survival across all challenge conditions along with an increase in DNA damage, determined by Comet analysis and H2AX phosphorylation, and apoptosis, determined by Annexin-V staining, relative to cells unexposed to hyperoxia or radiation. DNA damage (GADD45α and cleaved-PARP), apoptotic (cleaved caspase-3 and BAX), and antioxidant (HO-1 and Nqo1) proteins were increased following radiation and hyperoxia exposure after 1 and 2 cycles of exposure. Importantly, exposure to combination challenge O₂ + IR exacerbated cell death and DNA damage compared to individual exposures O₂ or IR alone. Additionally, levels of cell cycle proteins phospho-p53 and p21 were significantly increased, while levels of CDK1 and Cyclin B1 were decreased at both time points for all exposure groups. Similarly, proteins involved in cell cycle arrest were more profoundly changed with the combination challenges as compared to each stressor alone. These results correlate with a significant 4- to 6-fold increase in the ratio of cells in G2/G1 after 2 cycles of exposure to hyperoxic conditions. We have characterized a novel in vitro model of double-hit, low-level radiation and hyperoxia

  13. Novel Double-Hit Model of Radiation and Hyperoxia-Induced Oxidative Cell Damage Relevant to Space Travel

    PubMed Central

    Pietrofesa, Ralph A.; Velalopoulou, Anastasia; Lehman, Stacey L.; Arguiri, Evguenia; Solomides, Pantelis; Koch, Cameron J.; Mishra, Om P.; Koumenis, Constantinos; Goodwin, Thomas J.; Christofidou-Solomidou, Melpo

    2016-01-01

    Spaceflight occasionally requires multiple extravehicular activities (EVA) that potentially subject astronauts to repeated changes in ambient oxygen superimposed on those of space radiation exposure. We thus developed a novel in vitro model system to test lung cell damage following repeated exposure to radiation and hyperoxia. Non-tumorigenic murine alveolar type II epithelial cells (C10) were exposed to >95% O2 for 8 h only (O2), 0.25 Gy ionizing γ-radiation (IR) only, or a double-hit combination of both challenges (O2 + IR) followed by 16 h of normoxia (ambient air containing 21% O2 and 5% CO2) (1 cycle = 24 h, 2 cycles = 48 h). Cell survival, DNA damage, apoptosis, and indicators of oxidative stress were evaluated after 1 and 2 cycles of exposure. We observed a significant (p < 0.05) decrease in cell survival across all challenge conditions along with an increase in DNA damage, determined by Comet analysis and H2AX phosphorylation, and apoptosis, determined by Annexin-V staining, relative to cells unexposed to hyperoxia or radiation. DNA damage (GADD45α and cleaved-PARP), apoptotic (cleaved caspase-3 and BAX), and antioxidant (HO-1 and Nqo1) proteins were increased following radiation and hyperoxia exposure after 1 and 2 cycles of exposure. Importantly, exposure to combination challenge O2 + IR exacerbated cell death and DNA damage compared to individual exposures O2 or IR alone. Additionally, levels of cell cycle proteins phospho-p53 and p21 were significantly increased, while levels of CDK1 and Cyclin B1 were decreased at both time points for all exposure groups. Similarly, proteins involved in cell cycle arrest were more profoundly changed with the combination challenges as compared to each stressor alone. These results correlate with a significant 4- to 6-fold increase in the ratio of cells in G2/G1 after 2 cycles of exposure to hyperoxic conditions. We have characterized a novel in vitro model of double-hit, low-level radiation and hyperoxia exposure that

  14. A novel finite element method based biomechanical model for HIT-Robot Assisted Orthopedic Surgery System.

    PubMed

    Jia, Zhiheng; Du, Zhijiang; Wang, Monan

    2006-01-01

    Building a biomechanical human model can be of great value for surgical training and surgical rehearsal. In particular, it is meaningful to develop a biomechanical model to guide the control strategy for the medical robots in the HIT-Robot Assisted Orthopedic Surgery System (HIT-RAOS). In this paper, based on the successful work of others, a novel, reliable finite element method based biomechanical model for HIT-RAOS was developed to simulate the force needed in the reposition procedure. The geometrical model was obtained by 3D reconstruction from CT images of a cadaver. Using this boundary information, the finite element model of the leg, including part of the femur, the broken upper tibia, the broken lower tibia, the talus, the calcaneus, a Kirschner nail, muscles and other soft tissues, was created in ANSYS. Furthermore, as it was too difficult to reconstruct the accurate geometry model from CT images, a new simplified muscle model was presented. The bony structures and tendons were defined as linearly elastic, while soft tissues and muscle fibers were assumed to be hyperelastic. To validate this model, the same cadaver was used to simulate the patient, and a set of data on the force needed to separate the two broken bones and the distance between them in the reposition procedure was recorded. Then, another set of data was acquired from the finite element analysis. After comparison, the two sets of data matched well, and the finite element model was judged acceptable. PMID:17945663

  15. A novel finite element method based biomechanical model for HIT-robot assisted orthopedic surgery system.

    PubMed

    Jia, Zhiheng; Du, Zhijiang; Wang, Monan

    2006-01-01

    Building a biomechanical human model can be of great value for surgical training and surgical rehearsal. In particular, it is meaningful to develop a biomechanical model to guide the control strategy for the medical robots in the HIT-Robot Assisted Orthopedic Surgery System (HIT-RAOS). In this paper, based on the successful work of others, a novel, reliable finite element method based biomechanical model for HIT-RAOS was developed to simulate the force needed in the reposition procedure. The geometrical model was obtained by 3D reconstruction from CT images of a cadaver. Using this boundary information, the finite element model of the leg, including part of the femur, the broken upper tibia, the broken lower tibia, the talus, the calcaneus, a Kirschner nail, muscles and other soft tissues, was created in ANSYS. Furthermore, as it was too difficult to reconstruct the accurate geometry model from CT images, a new simplified muscle model was presented. The bony structures and tendons were defined as linearly elastic, while soft tissues and muscle fibers were assumed to be hyperelastic. To validate this model, the same cadaver was used to simulate the patient, and a set of data on the force needed to separate the two broken bones and the distance between them in the reposition procedure was recorded. Then, another set of data was acquired from the finite element analysis. After comparison, the two sets of data matched well, and the finite element model was judged acceptable. PMID:17959437

  16. A double hit implicates DIAPH3 as an autism risk gene

    PubMed Central

    Vorstman, JAS; van Daalen, E; Jalali, GR; Schmidt, ERE; Pasterkamp, RJ; de Jonge, M; Hennekam, EAM; Janson, E; Staal, WG; van der Zwaag, B; Burbach, JPH; Kahn, RS; Emanuel, BS; van Engeland, H; Ophoff, RA

    2012-01-01

    Recent studies have shown that more than 10% of autism cases are caused by de novo structural genomic rearrangements. Given that some heritable copy number variants (CNVs) have been observed in patients as well as in healthy controls, to date little attention has been paid to the potential function of these non-de novo CNVs in causing autism. A normally intelligent patient with autism, with non-affected parents, was identified with a maternally inherited 10 Mb deletion at 13q21.2. Sequencing of the genes within the deletion identified a paternally inherited nonsynonymous amino-acid substitution at position 614 of diaphanous homolog 3 (DIAPH3) (proline to threonine; Pro614Thr). This variant, present in a highly conserved domain, was not found in 328 healthy subjects. Experiments showed a transient expression of Diaph3 in the developing murine cerebral cortex, indicating it has a function in brain development. Transfection of Pro614Thr in murine fibroblasts showed a significant reduction in the number of induced filopodia in comparison to the wild-type gene. DIAPH3 is involved in cell migration, axon guidance and neuritogenesis, and is suggested to function downstream of SHANK3. Our findings strongly suggest DIAPH3 as a novel autism susceptibility gene. Moreover, this report of a ‘double-hit’ compound heterozygote for a large, maternally inherited, genomic deletion and a paternally inherited rare missense mutation shows that not only de novo genomic variants in patients should be taken seriously in further study but that inherited CNVs may also provide valuable information. PMID:20308993

  17. B-cell lymphomas with concurrent MYC and BCL2 abnormalities other than translocations behave similarly to MYC/BCL2 double-hit lymphomas.

    PubMed

    Li, Shaoying; Seegmiller, Adam C; Lin, Pei; Wang, Xuan J; Miranda, Roberto N; Bhagavathi, Sharathkumar; Medeiros, L Jeffrey

    2015-02-01

    Large B-cell lymphomas with IGH@BCL2 and MYC rearrangement, known as double-hit lymphoma (DHL), are clinically aggressive neoplasms with a poor prognosis. Some large B-cell lymphomas have concurrent abnormalities of MYC and BCL2 other than coexistent translocations. Little is known about patients with these lymphomas, designated here as atypical DHL. We studied 40 patients with atypical DHL, including 21 men and 19 women, with a median age of 60 years. Nine (23%) patients had a history of B-cell non-Hodgkin lymphoma. There were 30 cases of diffuse large B-cell lymphoma (DLBCL); 7 of B-cell lymphoma, unclassifiable, with features intermediate between DLBCL and Burkitt lymphoma; and 3 of DLBCL with coexistent follicular lymphoma. CD10, BCL2, and MYC were expressed in 28/39 (72%), 33/35 (94%), and 14/20 (70%) cases, respectively. Patients were treated with standard (n=14) or more aggressive chemotherapy regimens (n=17). We compared the atypical DHL group with 76 patients with DHL and 35 patients with DLBCL lacking MYC and BCL2 abnormalities. The clinicopathologic features and therapies were similar between patients with atypical and typical DHL. The overall survival of patients with atypical double-hit lymphoma was similar to that of patients with double-hit lymphoma (P=0.47) and significantly worse than that of patients with DLBCL with normal MYC and BCL2 (P=0.02). There were some minor differences. Cases of atypical double-hit lymphoma more often had DLBCL morphology (P<0.01), less frequently expressed CD10 (P<0.01), and patients less often had an elevated serum lactate dehydrogenase level (P=0.01). In aggregate, these results support expanding the category of MYC/BCL2 DHL to include large B-cell lymphomas with coexistent MYC and BCL2 abnormalities other than concurrent translocations. PMID:25103070

  18. Descending nociceptive inhibition is modulated in a time-dependent manner in a double-hit model of chronic/tonic pain.

    PubMed

    Parent, A J; Tétreault, P; Roux, M; Belleville, K; Longpré, J-M; Beaudet, N; Goffaux, P; Sarret, P

    2016-02-19

    Clinical evidence suggests that an imbalance between descending inhibition and facilitation drives the development of chronic pain. However, potential mechanisms promoting the establishment of a persistent pain state and the increased pain vulnerability remain unknown. This preclinical study was designed to evaluate temporal changes in descending pain modulation at specific experimental endpoints (12, 28, 90 and 168 days) using a novel double-hit model of chronic/tonic pain (first hit: chronic constriction injury (CCI) model; second hit: tonic formalin pain in the contralateral hindpaw). Basal activity of bulbo-spinal monoaminergic systems was further assessed through liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) screening of cerebrospinal fluid (CSF). We found that CCI-operated rats exhibited a reduced nociceptive response profile, peaking on day 28, when subjected to tonic pain. This behavioral response was accompanied by a rapid increase in basal CSF serotonin and norepinephrine levels 12 days after neuropathy, followed by a return to sham levels on day 28. These molecular and behavioral adaptive changes in descending pain inhibition seemed to slowly fade over time. We therefore suggest that chronic neuropathic pain produces a transient hyperactivation of bulbo-spinal monoaminergic drive when previously primed using a tonic pain paradigm (i.e., formalin test), translating into inhibition of subsequent nociceptive behaviors. Altogether, we propose that early hyperactivation of descending pain inhibitory mechanisms, and its potential ensuing exhaustion, could be part of the temporal neurophysiological chain of events favoring chronic neuropathic pain establishment. PMID:26691963

  19. The Double Absorbing Boundary method

    NASA Astrophysics Data System (ADS)

    Hagstrom, Thomas; Givoli, Dan; Rabinovich, Daniel; Bielak, Jacobo

    2014-02-01

    A new approach is devised for solving wave problems in unbounded domains. It shares features with each of two types of existing techniques: local high-order Absorbing Boundary Conditions (ABC) and Perfectly Matched Layers (PML). However, it is different from both and enjoys relative advantages with respect to both. The new method, called the Double Absorbing Boundary (DAB) method, is based on truncating the unbounded domain to produce a finite computational domain Ω, and on applying a local high-order ABC on two parallel artificial boundaries, which are a small distance apart, and thus form a thin non-reflecting layer. Auxiliary variables are defined on the two boundaries and inside the layer bounded by them, and participate in the numerical scheme. The DAB method is first introduced in general terms, using the 2D scalar time-dependent wave equation as a model. Then it is applied to the 1D Klein-Gordon equation, using finite difference discretization in space and time, and to the 2D wave equation in a wave guide, using finite element discretization in space and dissipative time stepping. The computational aspects of the method are discussed, and numerical experiments demonstrate its performance.

  20. Two Methods for Efficient Solution of the Hitting-Set Problem

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh; Fijany, Amir

    2005-01-01

    A paper addresses much of the same subject matter as that of Fast Algorithms for Model-Based Diagnosis (NPO-30582), which appears elsewhere in this issue of NASA Tech Briefs. However, in the paper, the emphasis is more on the hitting-set problem (also known as the transversal problem), which is well known among experts in combinatorics. The authors' primary interest in the hitting-set problem lies in its connection to the diagnosis problem: it is a theorem of model-based diagnosis that in the set-theory representation of the components of a system, the minimal diagnoses of a system are the minimal hitting sets of the system. In the paper, the hitting-set problem (and, hence, the diagnosis problem) is translated from a combinatorial to a computational problem by mapping it onto the Boolean satisfiability and integer-programming problems. The paper goes on to describe developments nearly identical to those summarized in the cited companion NASA Tech Briefs article, including the utilization of Boolean-satisfiability and integer-programming techniques to reduce the computation time and/or memory needed to solve the hitting-set problem.
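
    For reference, the standard 0-1 integer-programming formulation of minimum hitting set, which is the kind of mapping the paper describes (the paper's exact encoding may differ):

    ```latex
    % Sets S_1,...,S_m over universe U; x_e = 1 iff element e is placed in the hitting set.
    \[
      \min \sum_{e \in U} x_e
      \quad\text{subject to}\quad
      \sum_{e \in S_i} x_e \ge 1 \;\; (i = 1,\dots,m),
      \qquad x_e \in \{0,1\}.
    \]
    ```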

  21. The Effect of Lipopolysaccharide on Ischemic-Reperfusion Injury of Heart: A Double Hit Model of Myocardial Ischemia and Endotoxemia

    PubMed Central

    Nader, Nader D.; Asgeri, Mehrdad; Davari-Farid, Sina; Pourafkari, Leili; Ahmadpour, Faraz; Porhomayon, Jahan; Javadzadeghan, Hassan; Negargar, Sohrab; Knight, Paul R.

    2015-01-01

    Introduction: Myocardial ischemia may coincide and interact with sepsis and inflammation. Our objective was to examine the effects of bacterial endotoxin on myocardial functions and cell injury during acute ischemia. Methods: Rabbits were pretreated with incremental doses of E. coli lipopolysaccharide (LPS) or normal saline. Myocardial ischemia was induced by 50-minute occlusion of the left anterior descending artery. S-TNFaR was additionally used to block the effects of LPS. Results: Ventricular contractility as measured by dp/dt during systole decreased from 2445 ± 1298 to 1422 ± 944 mm Hg/s, P = .019. Isovolumetric relaxation time as an index of diastolic function was prolonged from 50 ± 18 ms to 102 ± 64 ms following ischemia. Pretreatment with low concentrations of LPS (<1 μg) had no effect on dp/dt, while at higher concentrations it both suppressed contractility and prolonged IVRT. Cell injury as measured by cardiac troponin I level increased to 15.1 ± 3.2 ng/dL following ischemia and continued to rise with higher doses of LPS. While blocking TNFa did not improve the myocardial contractility after ischemia, it eliminated additional deleterious effects of LPS. Conclusion: Lower doses of LPS had no deleterious effect on myocardial function, whereas higher doses of this endotoxin caused cardiac dysfunction and increased the extent of injury. PMID:26430494

  22. A simple method for analyzing actives in random RNAi screens: introducing the “H Score” for hit nomination & gene prioritization

    PubMed Central

    Bhinder, Bhavneet; Djaballah, Hakim

    2013-01-01

    Due to the numerous challenges in hit identification from random RNAi screening, we have examined current practices and found a variety of methodologies employed and published in many reports; the majority of them, unfortunately, do not address the minimum criteria for hit nomination, which may well explain the lack of confirmation and follow-up studies currently facing the RNAi field. Overall, we find that these criteria or parameters are not well defined and in most cases arbitrary in nature, rendering it extremely difficult to judge the quality of, and confidence in, nominated hits across published studies. For this purpose, we have developed a simple method to score actives independent of assay readout, providing, for the first time, a homogenous platform enabling cross-comparison of active gene lists resulting from different RNAi screening technologies. Here, we report on our recently developed method for RNAi data output analysis, referred to as the BDA method and applicable to both arrayed and pooled RNAi technologies, wherein the concerns pertaining to inconsistent hit nomination and off-target silencing, in conjunction with minimal activity criteria to identify high-value targets, are addressed. In this report, a combined hit rate per gene, called the “H score”, is introduced and defined. The H score provides a very useful tool for stringent active gene nomination, gene list comparison across multiple studies, prioritization of hits, and evaluation of the quality of the nominated gene hits. PMID:22934950

  23. A new double views motion deblurring method

    NASA Astrophysics Data System (ADS)

    Hong, Hanyu; Hua, Xia; Zhang, Wenmo; Shi, Yu

    2015-12-01

    With the increasing intelligence and automation of modern industrial production, detection and reconstruction of the 3D surface of a product has become an important technology, but images acquired on the actual production line suffer from motion blur, and this problem affects the later reconstruction work. In order to solve this problem, a deblurring method based on double-view images of a moving target is proposed in this paper. The relationship between the point spread function (PSF) paths in the two views can be deduced from the epipolar geometry and the camera model. The experimental results show that deblurring with the PSF path solved from this geometric relationship achieves good results.

  24. Improvements to the stand and hit algorithm

    SciTech Connect

    Boneh, A.; Boneh, S.; Caron, R.; Jibrin, S.

    1994-12-31

    The stand and hit algorithm is a probabilistic algorithm for detecting necessary constraints. The algorithm stands at a point in the feasible region and hits constraints by moving towards the boundary along randomly generated directions. In this talk we discuss methods for choosing the standing point. We also present the undetected-first rule for determining the hit constraints.
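
    A rough sketch of the underlying idea (not the authors' implementation): from a strictly feasible standing point of {x : Ax <= b}, shoot random directions and record which constraint boundary is hit first; constraints hit this way are necessary, while constraints never hit are candidates for redundancy.

    ```python
    import numpy as np

    def stand_and_hit(A, b, x0, n_directions=1000, rng=None):
        """Return indices of constraints detected as necessary for {x : A x <= b}."""
        rng = np.random.default_rng(rng)
        A, b, x0 = np.asarray(A, float), np.asarray(b, float), np.asarray(x0, float)
        slack = b - A @ x0                        # must be > 0: x0 strictly feasible
        assert np.all(slack > 0), "standing point must be strictly feasible"
        necessary = set()
        for _ in range(n_directions):
            d = rng.standard_normal(A.shape[1])   # random direction
            rate = A @ d                          # how fast each slack is consumed
            with np.errstate(divide="ignore"):
                t = np.where(rate > 0, slack / rate, np.inf)  # distance to each hyperplane
            hit = int(np.argmin(t))               # first constraint hit along the ray
            if np.isfinite(t[hit]):
                necessary.add(hit)
        return necessary

    # Unit square plus the redundant constraint x + y <= 5:
    A = [[1, 0], [0, 1], [-1, 0], [0, -1], [1, 1]]
    b = [1, 1, 0, 0, 5]
    print(stand_and_hit(A, b, x0=[0.5, 0.5]))     # typically {0, 1, 2, 3}; 4 is never hit
    ```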

  25. Formulation of the Multi-Hit Model With a Non-Poisson Distribution of Hits

    SciTech Connect

    Vassiliev, Oleg N.

    2012-07-15

    Purpose: We proposed a formulation of the multi-hit single-target model in which the Poisson distribution of hits was replaced by a combination of two distributions: one for the number of particles entering the target and one for the number of hits a particle entering the target produces. Such an approach reflects the fact that radiation damage is a result of two different random processes: particle emission by a radiation source and interaction of particles with matter inside the target. Methods and Materials: Poisson distribution is well justified for the first of the two processes. The second distribution depends on how a hit is defined. To test our approach, we assumed that the second distribution was also a Poisson distribution. The two distributions combined resulted in a non-Poisson distribution. We tested the proposed model by comparing it with previously reported data for DNA single- and double-strand breaks induced by protons and electrons, for survival of a range of cell lines, and variation of the initial slopes of survival curves with radiation quality for heavy-ion beams. Results: Analysis of cell survival equations for this new model showed that they had realistic properties overall, such as the initial and high-dose slopes of survival curves, the shoulder, and relative biological effectiveness (RBE). In most cases tested, a better fit of survival curves was achieved with the new model than with the linear-quadratic model. The results also suggested that the proposed approach may extend the multi-hit model beyond its traditional role in analysis of survival curves to predicting effects of radiation quality and analysis of DNA strand breaks. Conclusions: Our model, although conceptually simple, performed well in all tests. The model was able to consistently fit data for both cell survival and DNA single- and double-strand breaks. It correctly predicted the dependence of radiation effects on parameters of radiation quality.
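
    A compact way to write the distribution described here, under the stated assumption that hits per entering particle are also Poisson: if N ~ Poisson(μ) particles enter the target and each produces an independent Poisson(ν) number of hits, the total hit count K follows a compound (Neyman type A) distribution.

    ```latex
    % Probability generating function of the total hit count K and the zero-hit probability:
    \[
      G_K(z) = \exp\!\bigl(\mu\,[\,e^{\nu(z-1)} - 1\,]\bigr),
      \qquad
      \Pr(K = 0) = \exp\!\bigl(\mu\,[\,e^{-\nu} - 1\,]\bigr),
    \]
    % and in an m-hit single-target model the surviving fraction is
    \[
      S = \Pr(K \le m-1) = \sum_{k=0}^{m-1} \Pr(K = k).
    \]
    ```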

  26. Novel method for hit-position reconstruction using voltage signals in plastic scintillators and its application to Positron Emission Tomography

    NASA Astrophysics Data System (ADS)

    Raczyński, L.; Moskal, P.; Kowalski, P.; Wiślicki, W.; Bednarski, T.; Białas, P.; Czerwiński, E.; Kapłon, Ł.; Kochanowski, A.; Korcyl, G.; Kowal, J.; Kozik, T.; Krzemień, W.; Kubicz, E.; Molenda, M.; Moskal, I.; Niedźwiecki, Sz.; Pałka, M.; Pawlik-Niedźwiecka, M.; Rudy, Z.; Salabura, P.; Sharma, N. G.; Silarski, M.; Słomski, A.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Zieliński, M.; Zoń, N.

    2014-11-01

    Currently, inorganic scintillator detectors are used in all commercial Time of Flight Positron Emission Tomograph (TOF-PET) devices. The J-PET collaboration investigates a possibility of construction of a PET scanner from plastic scintillators which would allow for single bed imaging of the whole human body. This paper describes a novel method of hit-position reconstruction based on sampled signals and an example of an application of the method for a single module with a 30 cm long plastic strip, read out on both ends by Hamamatsu R4998 photomultipliers. The sampling scheme to generate a vector with samples of a PET event waveform with respect to four user-defined amplitudes is introduced. The experimental setup provides irradiation of a chosen position in the plastic scintillator strip with annihilation gamma quanta of energy 511 keV. The statistical test for a multivariate normal (MVN) distribution of measured vectors at a given position is developed, and it is shown that signals sampled at four thresholds in a voltage domain are approximately normally distributed variables. With the presented method of a vector analysis made out of waveform samples acquired with four thresholds, we obtain a spatial resolution of about 1 cm and a timing resolution of about 80 ps (σ).

  27. Attitude Detection Method of Nano Satellite HIT-SAT by Fluctuation of Received Signal Strength

    NASA Astrophysics Data System (ADS)

    Sato, Tatsuhiro; Mitsuhashi, Ryuichi; Satori, Shin; Sasaki, Issei

    In recent years, many universities have launched small satellites into orbit. Attitude information is important for satellite operation; however, many nano-satellite communication systems use amateur radio in narrow-bandwidth frequency segments, so downlinking attitude information is expected to require substantial time. In this paper, we propose an attitude estimation method based on the received radio power, in which satellite attitude information is estimated from the fluctuation of the received signal strength, and we compare satellite data obtained from on-board sensors with ground experiments. One major problem with this approach is the effect of Earth's ionosphere: as the radio signal passes through the ionosphere, the polarization angle is rotated by the Faraday effect, so attitude detection from the radiation pattern has been assumed to be inaccurate. However, we obtained data from a spinning satellite with a linear-polarization antenna and, as a result, can estimate angular velocity with an accuracy of 0.12 rad/s. This method can be applied to the attitude detection of many small satellites.

  28. Φ-score: A cell-to-cell phenotypic scoring method for sensitive and selective hit discovery in cell-based assays

    PubMed Central

    Guyon, Laurent; Lajaunie, Christian; Fer, Frédéric; Bhajun, Ricky; Sulpice, Eric; Pinna, Guillaume; Campalans, Anna; Radicella, J. Pablo; Rouillier, Philippe; Mary, Mélissa; Combe, Stéphanie; Obeid, Patricia; Vert, Jean-Philippe; Gidrol, Xavier

    2015-01-01

    Phenotypic screening monitors phenotypic changes induced by perturbations, including those generated by drugs or RNA interference. Currently-used methods for scoring screen hits have proven to be problematic, particularly when applied to physiologically relevant conditions such as low cell numbers or inefficient transfection. Here, we describe the Φ-score, which is a novel scoring method for the identification of phenotypic modifiers or hits in cell-based screens. Φ-score performance was assessed with simulations, a validation experiment and its application to gene identification in a large-scale RNAi screen. Using robust statistics and a variance model, we demonstrated that the Φ-score showed better sensitivity, selectivity and reproducibility compared to classical approaches. The improved performance of the Φ-score paves the way for cell-based screening of primary cells, which are often difficult to obtain from patients in sufficient numbers. We also describe a dedicated merging procedure to pool scores from small interfering RNAs targeting the same gene so as to provide improved visualization and hit selection. PMID:26382112

  29. Method for double-sided processing of thin film transistors

    DOEpatents

    Yuan, Hao-Chih; Wang, Guogong; Eriksson, Mark A.; Evans, Paul G.; Lagally, Max G.; Ma, Zhenqiang

    2008-04-08

    This invention provides methods for fabricating thin film electronic devices with both front- and backside processing capabilities. Using these methods, high temperature processing steps may be carried out during both frontside and backside processing. The methods are well-suited for fabricating back-gate and double-gate field effect transistors, double-sided bipolar transistors and 3D integrated circuits.

  30. Recognition of Hits in a Target

    NASA Astrophysics Data System (ADS)

    Semerak, Vojtech; Drahansky, Martin

    This paper describes two possible ways of hit recognition in a target. The first method is based on frame differencing, with a stabilization algorithm used to eliminate movements of the target. The second method uses flood fill with random seed-point selection to find hits in the target scene.
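
    A minimal sketch of the frame-differencing idea (illustrative only, not the authors' code; stabilization is assumed to have been applied already): subtract a reference frame of the target from the current frame and report connected regions of large change as new hits.

    ```python
    import numpy as np
    from scipy import ndimage

    def detect_hits(reference, current, diff_threshold=40, min_area=4):
        """Return (row, col) centroids of newly appeared hits in two grayscale frames."""
        diff = np.abs(current.astype(np.int16) - reference.astype(np.int16))
        mask = diff > diff_threshold                 # pixels that changed significantly
        labels, n = ndimage.label(mask)              # connected components of the change mask
        hits = []
        for i in range(1, n + 1):
            ys, xs = np.nonzero(labels == i)
            if ys.size >= min_area:                  # ignore single-pixel noise
                hits.append((float(ys.mean()), float(xs.mean())))
        return hits

    # Synthetic 8-bit frames: one dark "bullet hole" appears in the current frame.
    ref = np.full((100, 100), 200, dtype=np.uint8)
    cur = ref.copy()
    cur[40:44, 60:64] = 30
    print(detect_hits(ref, cur))                     # [(41.5, 61.5)]
    ```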

  31. Method for limiting heat flux in double-wall tubes

    DOEpatents

    Hwang, Jaw-Yeu

    1982-01-01

    A method of limiting the heat flux in a portion of double-wall tubes including heat treating the tubes so that the walls separate when subjected to high heat flux and supplying an inert gas mixture to the gap at the interface of the double-wall tubes.

  32. Photoluminescence method of testing double heterostructure wafers

    SciTech Connect

    Besomi, P.R.; Wilt, D.P.

    1984-04-10

    Under photoluminescence (PL) excitation, the lateral spreading of photo-excited carriers can suppress the photoluminescence signal from double heterostructure (DH) wafers containing a p-n junction. In any DH with a p-n junction in the active layer, PL is suppressed if the power of the excitation source does not exceed a threshold value. This effect can be advantageously used for a nondestructive optical determination of the top cladding layer sheet conductance as well as p-n junction misplacement, important parameters for injection lasers and LEDs.

  33. Forward physics in CMS: Simulation of PMT hits in HF and Higgs mass reconstruction methods with a focus on forward jet tagging

    NASA Astrophysics Data System (ADS)

    Moeller, Anthony Richard

    Abnormally high energy events were seen in the Hadronic Forward (HF) calorimeter for pion and muon data during testbeam in 2004. Analysis of testbeam data suggested that such events were caused by particles traveling the entire length of HF and striking the photomultiplier (PMT) windows in the readout box behind HF. Charged particles traversing the window of the PMT emit cerenkov radiation, which creates abnormally high energy events in the data. To further study these events, a modification of the existing official CMS HF simulation was created that added the PMT windows to the simulation as sensitive detectors. In agreement with testbeam data, abnormally high energy events in the PMTs were seen in the simulation for muons and pions. The simulation was then extended to jets simulated with Pythia, and then for collision like events as well. PMT hits were seen in both of these cases. Energy sharing between PMTs for long and short fibers in HF as well as timing differences between normal HF events and PMT events were investigated as methods to tag such abnormal events. While both methods were somewhat successful, it was determined that they were not sufficient. The simulation was also modified to use thinner PMT windows. Reducing the thickness of the window reduced the number of PMT hits, and drastically reduced the energy of these hits, bringing most of them below standard jet energy thresholds. These results led to the replacement of the existing PMTs with new PMTs with a smaller, thinner window. Higgs mass reconstruction methods were applied to Monte Carlo datasets for 115 and 130 GeV Higgs produced through vector boson fusion. In these datasets, the Higgs boson decayed to two tau particles, each of which decayed leptonically. The mass reconstruction methods successfully created a peak at the proper mass for both datasets. In addition to creating a Higgs, the vector boson fusion signal also has two forward jets. These jets are not found in the signal of the

  34. The SVT Hit Buffer

    SciTech Connect

    Belforte, S.; Dell'Orso, M.; Donati, S.

    1996-06-01

    The Hit Buffer is part of the Silicon Vertex Tracker, a trigger processor dedicated to the reconstruction of particle trajectories in the Silicon Vertex Detector and the Central Tracking Chamber of the Collider Detector at Fermilab. The Hit Buffer is a high speed data-traffic node, where thousands of words are received in arbitrary order and simultaneously organized in an internal structured data base, to be later promptly retrieved and delivered in response to specific requests. The Hit Buffer is capable of processing data at a rate of 25 MHz, thanks to the use of special fast devices like Cache-Tag RAMs and high performance Erasable Programmable Logic Devices from the XILINX XC7300 family.

  35. Hitting Is Contagious in Baseball: Evidence from Long Hitting Streaks

    PubMed Central

    Bock, Joel R.; Maewal, Akhilesh; Gough, David A.

    2012-01-01

    Data analysis is used to test the hypothesis that “hitting is contagious”. A statistical model is described to study the effect of a hot hitter upon his teammates’ batting during a consecutive-game hitting streak. Box score data for entire seasons comprising long hitting streaks were compiled. Treatment and control sample groups were constructed from core lineups of players on the streaking batter’s team. The percentile-method bootstrap was used to calculate confidence intervals for statistics representing differences in the mean distributions of two batting statistics between groups. Batters in the treatment group (hot streak active) showed statistically significant improvements in hitting performance as compared against the control: both the mean batting average and the batting heat index introduced here increased during hot streaks, and for each performance statistic the null hypothesis was rejected. We conclude that the evidence suggests the potential existence of a “statistical contagion effect”. Psychological mechanisms essential to the empirical results are suggested, as several studies from the scientific literature lend credence to contagious phenomena in sports. Causal inference from these results is difficult, but we suggest and discuss several latent variables that may contribute to the observed results, and offer possible directions for future research. PMID:23251507
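
    The percentile-method bootstrap used above can be sketched as follows; the batting values in the example are invented, only the procedure is the point.

    ```python
    import numpy as np

    def bootstrap_mean_diff_ci(treatment, control, n_boot=10_000, alpha=0.05, rng=None):
        """Percentile bootstrap confidence interval for mean(treatment) - mean(control)."""
        rng = np.random.default_rng(rng)
        treatment = np.asarray(treatment, float)
        control = np.asarray(control, float)
        diffs = np.empty(n_boot)
        for i in range(n_boot):
            t = rng.choice(treatment, size=treatment.size, replace=True)
            c = rng.choice(control, size=control.size, replace=True)
            diffs[i] = t.mean() - c.mean()
        lo, hi = np.percentile(diffs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
        return lo, hi

    # Invented per-game batting averages for the two groups:
    treat = [0.310, 0.295, 0.330, 0.305, 0.290, 0.315]
    ctrl = [0.270, 0.285, 0.260, 0.275, 0.290, 0.265]
    print(bootstrap_mean_diff_ci(treat, ctrl, rng=0))  # an interval well above 0
    ```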

  36. Measuring Double Stars with the Modified Video Drift Method

    NASA Astrophysics Data System (ADS)

    Iverson, Ernest W.; Nugent, Richard L.

    2015-05-01

    The usefulness of a common CCTV video camera for measuring the position angle and separation of a double star is limited by the camera sensitivity and telescope aperture. The video drift method is enhanced by using an integrating camera but frame integrations longer than 0.132 seconds (4 frames) are impractical. This is due to the target stars elongating (streaking) and moving in incremental steps. A simple modification to the Video Drift Method and corresponding VidPro analysis program significantly increases the magnitude at which double stars can be measured. Double stars down to magnitude +16 have been measured with a 14-inch (35.6-cm) telescope using this method compared to magnitude +12 using the original video drift method under comparable seeing conditions.

  37. On 2D bisection method for double eigenvalue problems

    SciTech Connect

    Ji, X.

    1996-06-01

    The two-dimensional bisection method presented in (SIAM J. Matrix Anal. Appl. 13(4), 1085 (1992)) is efficient for solving a class of double eigenvalue problems. This paper further extends the 2D bisection method to full matrix cases and analyses its stability. As in the single parameter case, the 2D bisection method is very stable for the tridiagonal matrix triples satisfying the symmetric-definite condition. Since the double eigenvalue problems arise from two-parameter boundary value problems, an estimate of the discretization error in eigenpairs is also given. Some numerical examples are included. 42 refs., 1 tab.
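
    For background, the single-parameter bisection that the 2D method generalizes can be sketched as follows for a symmetric tridiagonal matrix: a Sturm-sequence count of eigenvalues below a trial value drives the bisection (illustrative only, not the paper's 2D algorithm).

    ```python
    import numpy as np

    def count_eigs_below(d, e, sigma):
        """Number of eigenvalues below sigma for the symmetric tridiagonal matrix
        with diagonal d and off-diagonal e (Sturm-sequence / LDL^T inertia count)."""
        count, q = 0, 1.0
        tiny = np.finfo(float).tiny
        for i in range(len(d)):
            off = e[i - 1] ** 2 if i > 0 else 0.0
            q = (d[i] - sigma) - off / q
            if q == 0.0:
                q = tiny                 # avoid division by zero on the next step
            if q < 0.0:
                count += 1
        return count

    def kth_eigenvalue(d, e, k, lo, hi, tol=1e-12):
        """Bisection for the k-th smallest eigenvalue (k = 1, 2, ...) bracketed by [lo, hi]."""
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if count_eigs_below(d, e, mid) >= k:
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)

    # Example: d = [2, 2, 2], e = [-1, -1] has eigenvalues 2 - sqrt(2), 2, 2 + sqrt(2).
    print(kth_eigenvalue([2.0, 2.0, 2.0], [-1.0, -1.0], k=1, lo=0.0, hi=4.0))  # ~0.5857864
    ```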

  38. But Can You Hit?

    ERIC Educational Resources Information Center

    Johnson, R. E.

    2009-01-01

    The author shares a story told to him by a colleague more than thirty years ago. The dean of a midsized American university was explaining the path to tenure to a roomful of newly appointed assistant professors. "We know you boys can all 'field'," he declared. "Now we want to see if you can hit." A lot has changed over the intervening decades. If…

  39. Comparison of double-ended transition state search methods

    NASA Astrophysics Data System (ADS)

    Koslover, Elena F.; Wales, David J.

    2007-10-01

    While a variety of double-ended transition state search methods have been developed, their relative performance in characterizing complex multistep pathways between structurally disparate molecular conformations remains unclear. Three such methods (doubly-nudged elastic band, a string method, and a growing string method) are compared for a series of benchmarks ranging from permutational isomerizations of the seven-atom Lennard-Jones cluster (LJ7) to highly cooperative LJ38 and LJ75 rearrangements, and the folding pathways of two peptides. A database of short paths between LJ13 local minima is used to explore the effects of parameters and suggest reasonable default values. Each double-ended method was employed within the framework of a missing connection network flow algorithm to construct more complicated multistep pathways. We find that in our implementation none of the three methods definitively outperforms the others, and that their relative effectiveness is strongly system and parameter dependent.

  40. Comparison of double-ended transition state search methods.

    PubMed

    Koslover, Elena F; Wales, David J

    2007-10-01

    While a variety of double-ended transition state search methods have been developed, their relative performance in characterizing complex multistep pathways between structurally disparate molecular conformations remains unclear. Three such methods (doubly-nudged elastic band, a string method, and a growing string method) are compared for a series of benchmarks ranging from permutational isomerizations of the seven-atom Lennard-Jones cluster (LJ(7)) to highly cooperative LJ(38) and LJ(75) rearrangements, and the folding pathways of two peptides. A database of short paths between LJ(13) local minima is used to explore the effects of parameters and suggest reasonable default values. Each double-ended method was employed within the framework of a missing connection network flow algorithm to construct more complicated multistep pathways. We find that in our implementation none of the three methods definitively outperforms the others, and that their relative effectiveness is strongly system and parameter dependent. PMID:17919006

  41. Double micropipettes configuration method of scanning ion conductance microscopy

    NASA Astrophysics Data System (ADS)

    Zhuang, Jian; Li, Zeqing; Jiao, Yangbohan

    2016-07-01

    In this paper, a new double micropipettes configuration mode of scanning ion conductance microscopy (SICM), based on a balance bridge circuit, is presented to better overcome ionic current drift and further improve the performance of SICM. The article verifies the feasibility of this new configuration mode through theoretical and experimental analyses and compares the quality of scanning images in the conventional single micropipette configuration mode and the new double micropipettes configuration mode. The experimental results show that the double micropipettes configuration mode of SICM is more effective at restraining ionic current drift and gives better imaging performance. Therefore, this article not only proposes a new direction for overcoming ionic current drift but also develops a new method for stable SICM imaging.

  42. Double micropipettes configuration method of scanning ion conductance microscopy.

    PubMed

    Zhuang, Jian; Li, Zeqing; Jiao, Yangbohan

    2016-07-01

    In this paper, a new double micropipettes configuration mode of scanning ion conductance microscopy (SICM), based on a balance bridge circuit, is presented to better overcome ionic current drift and further improve the performance of SICM. The article verifies the feasibility of this new configuration mode through theoretical and experimental analyses and compares the quality of scanning images in the conventional single micropipette configuration mode and the new double micropipettes configuration mode. The experimental results show that the double micropipettes configuration mode of SICM is more effective at restraining ionic current drift and gives better imaging performance. Therefore, this article not only proposes a new direction for overcoming ionic current drift but also develops a new method for stable SICM imaging. PMID:27475561

  43. Evaluating method for the double image phenomenon of LED lighting

    NASA Astrophysics Data System (ADS)

    Wu, Wen-Hong; Kuo, Chao-Hui; Hung, Min-Wei; Huang, Kuo-Cheng

    In recent years, the overriding advantages of long life, high efficiency, small size and short reaction time have made the LED a viable alternative to conventional light sources. LED lighting sources are usually composed of several individual LED cells mounted on a panel as a lighting module. Because they are composed of several individual cells, LED sources cause a double image phenomenon. The double image phenomenon is more obvious when the LED source is used at close range, such as in an LED table lamp, and it limits the applications of LED sources. By using a proper secondary optical lens, the double image phenomenon can be reduced. In this research, an evaluating method based on image processing is developed for the double image phenomenon of LED sources. By analyzing the gray-scale profile of an image grabbed with a rod placed under an LED source, an index of double image can be established and used as a criterion to compare different LED sources. Furthermore, a series of LED lighting simulations is presented, and several types of secondary optical lens are compared and discussed as well.

  4. Double wall vacuum tubing and method of manufacture

    DOEpatents

    Stahl, Charles R.; Gibson, Michael A.; Knudsen, Christian W.

    1989-01-01

    An evacuated double wall tubing is shown together with a method for the manufacture of such tubing which includes providing a first pipe of predetermined larger diameter and a second pipe having an O.D. substantially smaller than the I.D. of the first pipe. An evacuation opening is then formed in the first pipe. The second pipe is inserted inside the first pipe with an annular space therebetween. The pipes are welded together at one end. A stretching tool is secured to the other end of the second pipe after welding. The second pipe is then prestressed mechanically with the stretching tool by an amount sufficient to prevent substantial buckling of the second pipe under normal operating conditions of the double wall pipe. The other ends of the first pipe and the prestressed second pipe are welded together, preferably by explosion welding, without the introduction of mechanical spacers between the pipes. The annulus between the pipes is evacuated through the evacuation opening, and the evacuation opening is finally sealed. The first pipe is preferably of steel and the second pipe is preferably of titanium. The pipes may be of a size and wall thickness sufficient for the double wall pipe to be structurally load bearing or may be of a size and wall thickness insufficient for the double wall pipe to be structurally load bearing, and the double wall pipe positioned with a sliding fit inside a third pipe of a load-bearing size.

  5. Health Information Technology Knowledge and Skills Needed by HIT Employers

    PubMed Central

    Fenton, S.H.; Gongora-Ferraez, M.J.; Joost, E.

    2012-01-01

    Objective To evaluate the health information technology (HIT) workforce knowledge and skills needed by HIT employers. Methods Statewide face-to-face and online focus groups of identified HIT employer groups in Austin, Brownsville, College Station, Dallas, El Paso, Houston, Lubbock, San Antonio, and webinars for rural health and nursing informatics. Results HIT employers reported needing an HIT workforce with diverse knowledge and skills ranging from basic to advanced, while covering information technology, privacy and security, clinical practice, needs assessment, contract negotiation, and many other areas. Consistent themes were that employees needed to be able to learn on the job and must possess the ability to think critically and problem solve. Many employers wanted persons with technical skills, yet also the knowledge and understanding of healthcare operations. Conclusion The HIT employer focus groups provided valuable insight into employee skills needed in this fast-growing field. Additionally, this information will be utilized to develop a statewide HIT workforce needs assessment survey. PMID:23646090

  6. Hitting is contagious in baseball: evidence from long hitting streaks.

    PubMed

    Bock, Joel R; Maewal, Akhilesh; Gough, David A

    2012-01-01

    Data analysis is used to test the hypothesis that "hitting is contagious". A statistical model is described to study the effect of a hot hitter upon his teammates' batting during a consecutive game hitting streak. Box score data for entire seasons comprising [Formula: see text] streaks of length [Formula: see text] games, including a total [Formula: see text] observations were compiled. Treatment and control sample groups ([Formula: see text]) were constructed from core lineups of players on the streaking batter's team. The percentile method bootstrap was used to calculate [Formula: see text] confidence intervals for statistics representing differences in the mean distributions of two batting statistics between groups. Batters in the treatment group (hot streak active) showed statistically significant improvements in hitting performance, as compared against the control. Mean [Formula: see text] for the treatment group was found to be [Formula: see text] to [Formula: see text] percentage points higher during hot streaks (mean difference increased [Formula: see text] points), while the batting heat index [Formula: see text] introduced here was observed to increase by [Formula: see text] points. For each performance statistic, the null hypothesis was rejected at the [Formula: see text] significance level. We conclude that the evidence suggests the potential existence of a "statistical contagion effect". Psychological mechanisms essential to the empirical results are suggested, as several studies from the scientific literature lend credence to contagious phenomena in sports. Causal inference from these results is difficult, but we suggest and discuss several latent variables that may contribute to the observed results, and offer possible directions for future research. PMID:23251507
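
    The percentile-method bootstrap named above can be sketched as follows; the batting data and group construction are not reproduced, so the two synthetic samples below merely stand in for the treatment (streak active) and control groups.

```python
import numpy as np

rng = np.random.default_rng(0)

def percentile_bootstrap_ci(treatment, control, n_boot=10_000, alpha=0.05):
    """Percentile-method bootstrap confidence interval for the difference in group means."""
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        t = rng.choice(treatment, size=len(treatment), replace=True)
        c = rng.choice(control, size=len(control), replace=True)
        diffs[b] = t.mean() - c.mean()
    return np.percentile(diffs, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Synthetic batting averages standing in for the two sample groups.
treatment = rng.normal(0.270, 0.030, size=500)
control = rng.normal(0.260, 0.030, size=500)
print("95% CI for mean difference:", percentile_bootstrap_ci(treatment, control))
```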

  7. Concerning the Video Drift Method to Measure Double Stars

    NASA Astrophysics Data System (ADS)

    Nugent, Richard L.; Iverson, Ernest W.

    2015-05-01

    Classical methods to measure position angles and separations of double stars rely on just a few measurements either from visual observations or photographic means. Visual and photographic CCD observations are subject to errors from the following sources: misalignments of the eyepiece/camera/Barlow lens/micrometer/focal reducers, systematic errors from uncorrected optical distortions, aberrations from the telescope system, camera tilt, and magnitude and color effects. Conventional video methods rely on calibration doubles, graphical determination of the east-west direction, and careful choice of select video frames stacked for measurement. Atmospheric motion, on the order of 0.5-1.5 arcseconds, is one of the larger sources of error in any exposure/measurement method. Ideally, if a data set from a short video can be used to derive position angle and separation, with each data set self-calibrating independently of any calibration doubles or star catalogues, this would provide measurements of high systematic accuracy. These aims are achieved by the video drift method first proposed by the authors in 2011. This self-calibrating video method automatically analyzes thousands of measurements from a short video clip.

  8. Double sided circuit board and a method for its manufacture

    DOEpatents

    Lindenmeyer, C.W.

    1988-04-14

    Conductance between the sides of a large double sided printed circuit board is provided using a method which eliminates the need for chemical immersion or photographic exposure of the entire large board. A plurality of through-holes are drilled or punched in a substratum according to the desired pattern, conductive laminae are made to adhere to both sides of the substratum covering the holes and the laminae are pressed together and permanently joined within the holes, providing conductive paths. 4 figs.

  9. Double sided circuit board and a method for its manufacture

    DOEpatents

    Lindenmeyer, Carl W.

    1989-01-01

    Conductance between the sides of a large double sided printed circuit board is provided using a method which eliminates the need for chemical immersion or photographic exposure of the entire large board. A plurality of through-holes are drilled or punched in a substratum according to the desired pattern, conductive laminae are made to adhere to both sides of the substratum covering the holes and the laminae are pressed together and permanently joined within the holes, providing conductive paths.

  10. Double sided circuit board and a method for its manufacture

    DOEpatents

    Lindenmeyer, Carl W.

    1989-07-04

    Conductance between the sides of a large double sided printed circuit board is provided using a method which eliminates the need for chemical immersion or photographic exposure of the entire large board. A plurality of through-holes are drilled or punched in a substratum according to the desired pattern, conductive laminae are made to adhere to both sides of the substratum covering the holes and the laminae are pressed together and permanently joined within the holes, providing conductive paths.

  11. A method for generating double-ring-shaped vector beams

    NASA Astrophysics Data System (ADS)

    Huan, Chen; Xiao-Hui, Ling; Zhi-Hong, Chen; Qian-Guang, Li; Hao, Lv; Hua-Qing, Yu; Xu-Nong, Yi

    2016-07-01

    We propose a method for generating double-ring-shaped vector beams. A step phase introduced by a spatial light modulator (SLM) first makes the incident laser beam have a nodal cycle. This phase is dynamic in nature because it depends on the optical length. Then a Pancharatnam–Berry phase (PBP) optical element is used to manipulate the local polarization of the optical field by modulating the geometric phase. The experimental results show that this scheme can effectively create double-ring-shaped vector beams. It provides much greater flexibility to manipulate the phase and polarization by simultaneously modulating the dynamic and the geometric phases. Project supported by the National Natural Science Foundation of China (Grant No. 11547017), the Hubei Engineering University Research Foundation, China (Grant No. z2014001), and the Natural Science Foundation of Hubei Province, China (Grant No. 2014CFB578).

  12. New Methods for Improved Double Circular-Arc Helical Gears

    NASA Technical Reports Server (NTRS)

    Litvin, Faydor L.; Lu, Jian

    1997-01-01

    The authors have extended the application of double circular-arc helical gears for internal gear drives. The geometry of the pinion and gear tooth surfaces has been determined. The influence of errors of alignment on the transmission errors and the shift of the bearing contact have been investigated. Application of a predesigned parabolic function for the reduction of transmission errors was proposed. Methods of grinding of the pinion-gear tooth surfaces by a disk-shaped tool and a grinding worm were proposed.

  13. Improvements of HITS Algorithms for Spam Links

    NASA Astrophysics Data System (ADS)

    Asano, Yasuhito; Tezuka, Yu; Nishizeki, Takao

    The HITS algorithm proposed by Kleinberg is one of the representative methods of scoring Web pages by using hyperlinks. In the days when the algorithm was proposed, most of the pages given a high score by the algorithm were genuinely related to a given topic, and hence the algorithm could be used to find related pages. However, the algorithm and its variants, including Bharat's improved HITS (abbreviated to BHITS) proposed by Bharat and Henzinger, can no longer be used to find related pages on today's Web, due to an increase in spam links. In this paper, we first propose three methods to find “linkfarms,” that is, sets of spam links forming a densely connected subgraph of a Web graph. We then present an algorithm, called a trust-score algorithm, to give high scores to pages which are not spam pages with a high probability. Combining the three methods and the trust-score algorithm with BHITS, we obtain several variants of the HITS algorithm. We ascertain by experiments that one of them, named TaN+BHITS, which uses the trust-score algorithm and the method of finding linkfarms by employing name servers, is the most suitable for finding related pages on today's Web. Our algorithms require no more time and memory than the original HITS algorithm and can be executed on a PC with a small amount of main memory.
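
    For reference, a minimal implementation of the original HITS iteration that the paper builds on is sketched below; the linkfarm detection, trust-score weighting, and BHITS modifications described in the abstract are not included.

```python
import numpy as np

def hits(adj, n_iter=100):
    """Kleinberg's HITS: hub and authority scores from a directed adjacency matrix."""
    n = adj.shape[0]
    hubs = np.ones(n)
    auths = np.ones(n)
    for _ in range(n_iter):
        auths = adj.T @ hubs          # a page is a good authority if good hubs point to it
        auths /= np.linalg.norm(auths)
        hubs = adj @ auths            # a page is a good hub if it points to good authorities
        hubs /= np.linalg.norm(hubs)
    return hubs, auths

# Tiny Web graph: adj[i, j] = 1 if page i links to page j.
adj = np.array([[0, 1, 1, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 0],
                [0, 0, 1, 0]], dtype=float)
hubs, auths = hits(adj)
print("hub scores:      ", np.round(hubs, 3))
print("authority scores:", np.round(auths, 3))
```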

  14. Hurricane Iris Hits Belize

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Hurricane Iris hit the small Central American country of Belize around midnight on October 8, 2001. At the time, Iris was the strongest Atlantic hurricane of the season, with sustained winds up to 225 kilometers per hour (140 mph). The hurricane caused severe damage in coastal towns south of Belize City, destroying homes, flooding streets, and leveling trees. In addition, a boat of American recreational scuba divers docked along the coast was capsized by the storm, leaving 20 of the 28 passengers missing. Within hours the winds had subsided to only 56 kph (35 mph), a modest tropical depression, but Mexico, Guatemala, El Salvador, and Honduras were still expecting heavy rains. The above image is a combination of visible and thermal infrared data (for clouds) acquired by a NOAA Geostationary Operational Environmental Satellite (GOES-8) on October 8, 2001, at 2:45 p.m., and the Moderate-resolution Imaging Spectroradiometer (MODIS) (for the color of the ground). The three-dimensional view is from the south-southeast (north is towards the upper left). Belize is off the image to the left. Image courtesy Marit Jentoft-Nilsen, NASA GSFC Visualization Analysis Lab

  15. Heat Waves Hit Seniors Hardest

    MedlinePlus

    ... https://medlineplus.gov/news/fullstory_160425.html Heat Waves Hit Seniors Hardest Risk of high-temperature trouble ... much of the Northeast struggles with a heat wave that isn't expected to ease until the ...

  16. Upscaling of Hydraulic Conductivity using the Double Constraint Method

    NASA Astrophysics Data System (ADS)

    El-Rawy, Mustafa; Zijl, Wouter; Batelaan, Okke

    2013-04-01

    The mathematics and modeling of flow through porous media is playing an increasingly important role for the groundwater supply, subsurface contaminant remediation and petroleum reservoir engineering. In hydrogeology hydraulic conductivity data are often collected at a scale that is smaller than the grid block dimensions of a groundwater model (e.g. MODFLOW). For instance, hydraulic conductivities determined from the field using slug and packer tests are measured in the order of centimeters to meters, whereas numerical groundwater models require conductivities representative of tens to hundreds of meters of grid cell length. Therefore, there is a need for upscaling to decrease the number of grid blocks in a groundwater flow model. Moreover, models with relatively few grid blocks are simpler to apply, especially when the model has to run many times, as is the case when it is used to assimilate time-dependent data. Since the 1960s different methods have been used to transform a detailed description of the spatial variability of hydraulic conductivity to a coarser description. In this work we will investigate a relatively simple, but instructive approach: the Double Constraint Method (DCM) to identify the coarse-scale conductivities to decrease the number of grid blocks. Its main advantages are robustness and easy implementation, enabling to base computations on any standard flow code with some post processing added. The inversion step of the double constraint method is based on a first forward run with all known fluxes on the boundary and in the wells, followed by a second forward run based on the heads measured on the phreatic surface (i.e. measured in shallow observation wells) and in deeper observation wells. Upscaling, in turn is inverse modeling (DCM) to determine conductivities in coarse-scale grid blocks from conductivities in fine-scale grid blocks. In such a way that the head and flux boundary conditions applied to the fine-scale model are also honored at the

  17. Lubrication approximation in completed double layer boundary element method

    NASA Astrophysics Data System (ADS)

    Nasseri, S.; Phan-Thien, N.; Fan, X.-J.

    This paper reports on the results of the numerical simulation of the motion of solid spherical particles in shear Stokes flows. Using the completed double layer boundary element method (CDLBEM) via distributed computing under Parallel Virtual Machine (PVM), the effective viscosity of the suspension has been calculated for a finite number of spheres in a cubic array, or in a random configuration. In the simulation presented here, the short range interactions via lubrication forces are also taken into account, via the range completer in the formulation, whenever the gap between two neighbouring particles is smaller than a critical gap. The results for particles in a simple cubic array agree with the results of Nunan and Keller (1984) and the Stokesian Dynamics results of Brady et al. (1988). To evaluate the lubrication forces between particles in a random configuration, a critical gap of 0.2 of the particle radius is suggested and the results are tested against the experimental data of Thomas (1965) and the empirical equation of Krieger-Dougherty (Krieger, 1972). Finally, the quasi-steady trajectories are obtained for a time-varying configuration of 125 particles.

  18. Boolean computation of optimum hitting sets

    SciTech Connect

    Hulme, B.L.; Baca, L.S.; Shiver, A.W.; Worrell, R.B.

    1984-04-01

    This report presents the results of computational experience in solving weighted hitting set problems by Boolean algebraic methods. The feasible solutions are obtained by Boolean formula manipulations, and the optimum solutions are obtained by comparing the weight sums of the feasible solutions. Both the algebra and the optimization can be accomplished using the SETS language. One application is to physical protection problems. 8 references, 2 tables.
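
    The report solves these problems by Boolean formula manipulation in the SETS language; the sketch below instead uses a generic brute-force search simply to show what an optimum (minimum-weight) hitting set is, and is practical only for small instances.

```python
from itertools import combinations

def optimum_hitting_set(sets, weights):
    """Exhaustively find a minimum-weight hitting set (fine only for small instances)."""
    universe = sorted(set().union(*sets))
    best, best_w = None, float("inf")
    for r in range(1, len(universe) + 1):
        for cand in combinations(universe, r):
            chosen = set(cand)
            if all(chosen & s for s in sets):            # candidate hits every set
                w = sum(weights[e] for e in cand)
                if w < best_w:
                    best, best_w = chosen, w
    return best, best_w

sets = [{"a", "b"}, {"b", "c"}, {"c", "d"}]
weights = {"a": 2, "b": 1, "c": 3, "d": 1}
print(optimum_hitting_set(sets, weights))   # minimum-weight hitting set: {'b', 'd'}, weight 2
```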

  19. Probability of Brownian motion hitting an obstacle

    SciTech Connect

    Knessl, C.; Keller, J.B.

    2000-02-01

    The probability p(x) that Brownian motion with drift, starting at x, hits an obstacle is analyzed. The obstacle Ω is a compact subset of R^n. It is shown that p(x) is expressible in terms of the field U(x) scattered by Ω when it is hit by a plane wave. Therefore results for U(x), and methods for finding U(x), can be used to determine p(x). The authors illustrate this by obtaining exact and asymptotic results for p(x) when Ω is a slit in R^2, and asymptotic results when Ω is a disc in R^3.
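
    A crude Monte Carlo check of such a hitting probability can be set up as below; the obstacle here is a disc in R^2 and the time horizon is finite, so this is only an illustrative proxy for the quantities analyzed in the paper, not a reproduction of the scattering-based results.

```python
import numpy as np

rng = np.random.default_rng(1)

def hit_probability(x0, drift, radius=1.0, dt=0.01, t_max=20.0, n_paths=5000):
    """Monte Carlo estimate of the probability that Brownian motion with drift,
    started at x0, hits a disc of the given radius at the origin within t_max."""
    x = np.tile(np.asarray(x0, float), (n_paths, 1))
    alive = np.ones(n_paths, dtype=bool)            # paths that have not hit yet
    for _ in range(int(t_max / dt)):
        step = drift * dt + np.sqrt(dt) * rng.standard_normal(x.shape)
        x[alive] += step[alive]                     # frozen once a path has hit
        alive &= np.linalg.norm(x, axis=1) > radius
    return 1.0 - alive.mean()

# Start to the left of the disc, with the drift pushing the walker toward it.
print(hit_probability(x0=[-3.0, 0.0], drift=np.array([0.5, 0.0])))
```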

  20. Car Hits Boy on Bicycle

    ERIC Educational Resources Information Center

    Ruiz, Michael J.

    2005-01-01

    In this article we present the fascinating reconstruction of an accident where a car hit a boy riding his bicycle. The boy dramatically flew several metres through the air after the collision and was injured, but made a swift and complete recovery from the accident with no long-term after-effects. Students are challenged to determine the speed of…

  1. Redoublement lexical, procede intensif (Lexical Doubling, Intensive Method).

    ERIC Educational Resources Information Center

    George, Kenneth E. M.

    1983-01-01

    An often-neglected aspect of daily language is syllable doubling or repetition, as in infant language ("nounou"), onomatopoeia ("ronron"), interjections or responses ("oui oui"), names ("Mimi"), or military slang ("coco" for "commandant"). The mechanisms and semantic functions of this phenomenon are outlined, drawing on examples from French…

  2. Manual laterality and hitting performance in major league baseball.

    PubMed

    Grondin, S; Guiard, Y; Ivry, R B; Koren, S

    1999-06-01

    Asymmetrical hand function was examined in the context of expert sports performance: hitting in professional baseball. An archival study was conducted to examine the batting performance of all Major League Baseball players from 1871 to 1992, focusing on those who batted left (n = 1,059) to neutralize the game asymmetry. Among them, left-handers (n = 421) were more likely to hit with power and to strike out than right-handers (n = 638). One possible account, based on the idea of hand dominance and an analogy to tennis, is that batting left involves a double-handed forehand for left-handers and a weaker and more reliable double-handed backhand for right-handers. The results are also interpretable in the light of Y. Guiard's (1987) kinematic chain model of a between-hands asymmetrical division of labor, which provides a detailed account of why left batting is optimal for left-handers. PMID:10385985

  3. Double Cross-Validation in Multiple Regression: A Method of Estimating the Stability of Results.

    ERIC Educational Resources Information Center

    Rowell, R. Kevin

    In multiple regression analysis, where resulting predictive equation effectiveness is subject to shrinkage, it is especially important to evaluate result replicability. Double cross-validation is an empirical method by which an estimate of invariance or stability can be obtained from research data. A procedure for double cross-validation is…
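
    A minimal sketch of the double cross-validation idea for ordinary least squares, on synthetic data, is shown below: each half of the sample supplies a prediction equation that is then validated against the other half, and the two cross-validity coefficients indicate how stable the regression weights are.

```python
import numpy as np

rng = np.random.default_rng(2)

def double_cross_validation(X, y):
    """Fit an OLS equation on each half of the sample and correlate its predictions
    with the observed criterion in the other half (illustrative sketch)."""
    n = len(y)
    idx = rng.permutation(n)
    halves = [idx[: n // 2], idx[n // 2:]]
    r_cv = []
    for fit, test in (halves, halves[::-1]):
        X_fit = np.column_stack([np.ones(len(fit)), X[fit]])
        beta, *_ = np.linalg.lstsq(X_fit, y[fit], rcond=None)
        X_test = np.column_stack([np.ones(len(test)), X[test]])
        y_hat = X_test @ beta
        r_cv.append(np.corrcoef(y_hat, y[test])[0, 1])
    return r_cv        # two cross-validity coefficients; large shrinkage flags instability

X = rng.normal(size=(200, 3))
y = X @ np.array([0.5, -0.3, 0.2]) + rng.normal(scale=1.0, size=200)
print(double_cross_validation(X, y))
```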

  4. Interferometric Methods of Measuring Refractive Indices and Double-Refraction of Fibres.

    ERIC Educational Resources Information Center

    Hamza, A. A.; El-Kader, H. I. Abd

    1986-01-01

    Presents two methods used to measure the refractive indices and double-refraction of fibers. Experiments are described, with one involving the use of Pluta microscope in the double-beam interference technique, the other employing the multiple-beam technique. Immersion liquids are discussed that can be used in the experiments. (TW)

  5. Inverse scattering method and soliton double solution family for the general symplectic gravity model

    SciTech Connect

    Gao Yajun

    2008-08-15

    A previously established Hauser-Ernst-type extended double-complex linear system is slightly modified and used to develop an inverse scattering method for the stationary axisymmetric general symplectic gravity model. The reduction procedures in this inverse scattering method are found to be fairly simple, which makes the inverse scattering method applied fine and effective. As an application, a concrete family of soliton double solutions for the considered theory is obtained.

  6. MTF measurement of IRFPA based on double-knife edge scanning method

    NASA Astrophysics Data System (ADS)

    Ying, Cheng-ping; Wu, Bin; Wang, Heng-fei; Shi, Xue-shun; Liu, Hong-yuan

    2013-09-01

    Modulation transfer function (MTF) is one of the most important parameters of an infrared focal plane array (IRFPA). A double-knife edge scanning method is proposed for MTF measurement of IRFPAs. In this method, a double-knife edge was used as the target, and the IRFPA under test was positioned in the focal plane of the imaging optical system by a 3-axis translation stage. With an IRFPA data acquisition system, the image of the double-knife edge was restored. By scanning in the direction orthogonal to the double-knife edge image, the edge spread function (ESF) curve of each pixel swept across the knife-edge image was obtained. The MTF could then be calculated through the subsequent fitting, differentiation and Fourier transformation procedures. With double-knife edge scanning, two ESF curves of the double-knife edge were obtained simultaneously, and the symmetry of the two ESF curves could be used to evaluate the perpendicularity between the photosensitive surface of the IRFPA and the optical axis of the double-knife edge imaging system. In addition, this method can be used to detect external interference such as vibration, stray light and electrical noise. A measurement facility for IRFPA MTF based on the double-knife edge scanning method was also established in this study. The facility is composed of the double-knife edge imaging optical system, a 3-axis translation stage, a data acquisition system, and other components. As the core of the facility, the double-knife edge imaging optical system mainly comprises two symmetrical parabolic mirrors coated with reflective material, and the magnification of the optical system is 1 with an operating wavelength range of (1-14) μm.
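
    The core ESF-to-MTF arithmetic described above can be sketched as follows; the synthetic tanh-shaped edge stands in for a scanned knife-edge profile, and the fitting and perpendicularity checks described in the abstract are omitted.

```python
import numpy as np

def mtf_from_esf(esf, pixel_pitch=1.0):
    """Knife-edge MTF estimate: differentiate the edge spread function (ESF) to get the
    line spread function (LSF), then take the magnitude of its Fourier transform."""
    lsf = np.gradient(esf, pixel_pitch)      # ESF -> LSF
    lsf /= lsf.sum()                         # normalise so that MTF(0) = 1
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(len(lsf), d=pixel_pitch)
    return freqs, mtf

# Smooth synthetic edge standing in for a measured knife-edge profile.
x = np.linspace(-10.0, 10.0, 201)
esf = 0.5 * (1.0 + np.tanh(x / 2.0))
freqs, mtf = mtf_from_esf(esf, pixel_pitch=x[1] - x[0])
print(np.round(mtf[:5], 4))                  # MTF falls off from 1 at zero frequency
```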

  7. Direct determination of the hit locations from experimental HPGe pulses

    NASA Astrophysics Data System (ADS)

    Désesquelles, P.; Boston, A. J.; Boston, H. C.; Cresswell, J. R.; Dimmock, M. R.; Lazarus, I. H.; Ljungvall, J.; Nelson, L.; Nga, D.-T.; Nolan, P. J.; Rigby, S. V.; Simpson, J.; Van-Oanh, N.-T.

    2013-11-01

    The gamma-tracking technique optimises the determination of the energy and emission angle of gamma-rays detected by modern segmented HPGe detectors. This entails the determination, using the delivered pulse shapes, of the interaction points of the gamma-ray within the crystal. The direct method presented here allows the localisation of the hits using only a large sample of pulses detected in the actual operating conditions. No external crystal scanning system or pulse shape simulation code is needed. In order to validate this method, it is applied to sets of pulses obtained using the University of Liverpool scanning system. The hit locations are determined by the method with good precision.

  8. The double exponential sinc collocation method for singular Sturm-Liouville problems

    NASA Astrophysics Data System (ADS)

    Gaudreau, P.; Slevinsky, R.; Safouhi, H.

    2016-04-01

    Sturm-Liouville problems are abundant in the numerical treatment of scientific and engineering problems. In the present contribution, we present an efficient and highly accurate method for computing eigenvalues of singular Sturm-Liouville boundary value problems. The proposed method uses the double exponential formula coupled with sinc collocation method. This method produces a symmetric positive-definite generalized eigenvalue system and has exponential convergence rate. Numerical examples are presented and comparisons with single exponential sinc collocation method clearly illustrate the advantage of using the double exponential formula.
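
    For reference, the double exponential transformation most commonly paired with sinc methods on a finite interval is shown below; the precise transformation and weighting used by the authors for singular Sturm-Liouville problems may differ.

```latex
% Standard double exponential map of t in (-inf, inf) onto x in (-1, 1); its derivatives
% decay doubly exponentially, which is the source of the fast convergence of sinc collocation.
x(t) = \tanh\!\left(\frac{\pi}{2}\sinh t\right), \qquad
\frac{dx}{dt} = \frac{\pi}{2}\,\frac{\cosh t}{\cosh^{2}\!\left(\frac{\pi}{2}\sinh t\right)} .
```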

  9. HITS - The Navy's new DATPG system

    NASA Astrophysics Data System (ADS)

    Hosley, L.; Modi, M.

    A new digital automatic test program generation standard called HITS (Hierarchical Integrated Test Simulator), developed by the U.S. Navy as its answer to digital LSI/VLSI circuit technology, is discussed. Three major areas of the HITS program are presented: system flow and unique capabilities, modeling language structures, and management of HITS. HITS contains the following major software modules: the primary model processor, the secondary model processor, the test language processor, the simulator, and the tester output generator. The functions performed by the individual system modules are described. A circuit description language, which provides user flexibility when describing complex circuit models, and its components are considered. The major areas of HITS management include: (1) HITS accessibility, distribution, and availability; (2) user support; (3) advanced development; and (4) Navy/DOD coordination and standardization.

  10. Heparin-induced thrombocytopenia (HIT II) - a drug-associated autoimmune disease.

    PubMed

    Nowak, Götz

    2009-11-01

    Autoimmune thrombocytopenia (ITP) is an acquired autoimmune disease characterised by isolated persistent thrombocytopenia and normal megakaryopoiesis. This definition also applies to heparin-induced thrombocytopenia (HIT II), a frequent side effect of heparin treatment. In HIT II, the immunogen is a coagulation-active complex of heparin and platelet factor 4 (PF4). At present, the diagnosis of HIT II is often material- and time-consuming. Three groups of patients were investigated for HIT II antibodies (HIT II-AB): 54 hospitalised stroke patients, 87 hospitalised cardiac patients, and 71 patients on chronic haemodialysis, all treated with heparin. Furthermore, 100 healthy volunteers were investigated. For detection of HIT II-AB, the innovative whole blood test PADA-HIT (PADA: platelet adhesion assay) was used. PADA-HIT quantifies the interaction of IgG antibodies with FcgammaIIA receptors by comparing the activation state of platelets in citrated and heparinised whole blood. The occurrence of HIT II-AB in blood was very high: 44% of stroke patients, 69% of cardiac patients and 38% of haemodialysis patients, compared to only 15% of healthy volunteers. This demonstrates a high incidence and a rapid onset of HIT II-AB in patients acutely treated with heparin. HIT II is one of the most frequent and severe autoimmune diseases, bearing a great thrombosis risk. PADA-HIT represents an innovative diagnostic method for detection of autoimmune antibodies of IgG type that are directed against the platelet factor 4 (PF4)-heparin complex. Through early and fast diagnosis and appropriate treatment, severe complications of HIT II can be prevented. PMID:19888524

  11. The double-assignment method for the exponential chaotic tabu search in quadratic assignment problems

    NASA Astrophysics Data System (ADS)

    Shibata, Kazuaki; Horio, Yoshihiko; Aihara, Kazuyuki

    The quadratic assignment problem (QAP) is one of the NP-hard combinatorial optimization problems. An exponential chaotic tabu search using a 2-opt algorithm driven by chaotic neuro-dynamics has been proposed as one heuristic method for solving QAPs. In this paper we first propose a new local search, the double-assignment method, suitable for the exponential chaotic tabu search, which adopts features of the Lin-Kernighan algorithm. We then introduce chaotic neuro-dynamics into the double-assignment method to propose a novel exponential chaotic tabu search. We further improve the proposed exponential chaotic tabu search with the double-assignment method by enhancing the effect of chaotic neuro-dynamics.
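
    The 2-opt local search mentioned above, in its plainest form, exchanges the facilities assigned to two locations and keeps the swap if the cost drops; the sketch below shows that baseline move on a random instance (the double-assignment move and the chaotic neuro-dynamics of the paper are not implemented here).

```python
import numpy as np

def qap_cost(perm, flow, dist):
    """QAP objective: sum of flow[i][j] * dist[perm[i]][perm[j]] over all facility pairs."""
    n = len(perm)
    return sum(flow[i, j] * dist[perm[i], perm[j]] for i in range(n) for j in range(n))

def two_opt(perm, flow, dist):
    """Plain 2-opt (pairwise location exchange) descent, the baseline local search
    that the double-assignment move is designed to improve upon."""
    perm = list(perm)
    improved = True
    while improved:
        improved = False
        for i in range(len(perm)):
            for j in range(i + 1, len(perm)):
                cand = perm[:]
                cand[i], cand[j] = cand[j], cand[i]
                if qap_cost(cand, flow, dist) < qap_cost(perm, flow, dist):
                    perm, improved = cand, True
    return perm, qap_cost(perm, flow, dist)

rng = np.random.default_rng(3)
n = 6
flow = rng.integers(0, 10, (n, n))
dist = rng.integers(0, 10, (n, n))
print(two_opt(list(range(n)), flow, dist))
```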

  12. Double Force Compensation Method to Enhance the Performance of a Null Balance Force Sensor

    NASA Astrophysics Data System (ADS)

    Choi, In-Mook; Choi, Dong-June; Kim, Soo Hyun

    2002-06-01

    Microforce measurement is becoming more essential as precision industries such as biomedicine, precision chemistry, semiconductor manufacturing, and so forth develop. A null balance method has been introduced in order to improve on the force measurement performance achievable with a load cell. The null-balance type force sensor is analyzed and designed for improved measurement performance. The measurement range and the resolution depend on the force generation capacity and the various error sources. These characteristics are estimated and verified according to the mechanical sensitivity and the force compensation sensitivity. Two different coil systems are designed and tested experimentally. Double force compensation is proposed in order to obtain a large range and high resolution. The measurement range of the large coil system and the resolution of the small one are fully realized by the double compensation method. After manufacturing, a range of over 300 gf and a resolution under ±0.1 mgf were obtained by the double compensation method.

  13. Developing Health Information Technology (HIT) Programs and HIT Curriculum: The Southern Polytechnic State University Experience

    ERIC Educational Resources Information Center

    Zhang, Chi; Reichgelt, Han; Rutherfoord, Rebecca H.; Wang, Andy Ju An

    2014-01-01

    Health Information Technology (HIT) professionals are in increasing demand as healthcare providers need help in the adoption and meaningful use of Electronic Health Record (EHR) systems while the HIT industry needs workforce skilled in HIT and EHR development. To respond to this increasing demand, the School of Computing and Software Engineering…

  14. Hitting Is Contagious: Experience and Action Induction

    ERIC Educational Resources Information Center

    Gray, Rob; Beilock, Sian L.

    2011-01-01

    In baseball, it is believed that "hitting is contagious," that is, probability of success increases if the previous few batters get a hit. Could this effect be partially explained by action induction--that is, the tendency to perform an action related to one that has just been observed? A simulation was used to investigate the effect of inducing…

  15. Double-layered target and identification method of individual target correlated with evaporation residues

    NASA Astrophysics Data System (ADS)

    Kaji, D.; Morimoto, K.

    2015-08-01

    A double-layered target system and an identification method (target ID) for individual targets mounted on a rotating wheel using correlation with evaporation residues were newly developed for the study of superheavy elements (SHE). The target system can be used in three modes: conventional single-layered mode, double-layered mode, and energy-degrader mode. The target ID method can be utilized for masking a target, measuring an excitation function without changing the beam energy from the accelerator, and searching for SHE nuclides using multiple targets during a single irradiation.

  16. Method based on the double sideband technique for the dynamic tracking of micrometric particles

    NASA Astrophysics Data System (ADS)

    Ramirez, Claudio; Lizana, Angel; Iemmi, Claudio; Campos, Juan

    2016-06-01

    Digital holography (DH) methods are of interest in a large number of applications. Recently, the double sideband (DSB) technique was proposed, which is a DH based method that, by using double filtering, provides reconstructed images without distortions and is free of twin images by using an in-line configuration. In this work, we implement a method for the investigation of the mobility of particles based on the DSB technique. Particle holographic images obtained using the DSB method are processed with digital picture recognition methods, allowing us to accurately track the spatial position of particles. The dynamic nature of the method is achieved experimentally by using a spatial light modulator. The suitability of the proposed tracking method is validated by determining the trajectory and velocity described by glass microspheres in movement.

  17. A new hybrid double divisor ratio spectra method for the analysis of ternary mixtures

    NASA Astrophysics Data System (ADS)

    Youssef, Rasha M.; Maher, Hadir M.

    2008-10-01

    A new spectrophotometric method was developed for the simultaneous determination of ternary mixtures, without prior separation steps. This method is based on convolution of the double divisor ratio spectra, obtained by dividing the absorption spectrum of the ternary mixture by a standard spectrum of two of the three compounds in the mixture, using combined trigonometric Fourier functions. The magnitude of the Fourier function coefficients, at either maximum or minimum points, is related to the concentration of each drug in the mixture. The mathematical explanation of the procedure is illustrated. The method was applied for the assay of a model mixture consisting of isoniazid (ISN), rifampicin (RIF) and pyrazinamide (PYZ) in synthetic mixtures, commercial tablets and human urine samples. The developed method was compared with the double divisor ratio spectra derivative method (DDRD) and derivative ratio spectra-zero-crossing method (DRSZ). Linearity, validation, accuracy, precision, limits of detection, limits of quantitation, and other aspects of analytical validation are included in the text.

  18. A FORTRAN Program for Computing Refractive Index Using the Double Variation Method.

    ERIC Educational Resources Information Center

    Blanchard, Frank N.

    1984-01-01

    Describes a computer program which calculates a best estimate of refractive index and dispersion from a large number of observations using the double variation method of measuring refractive index along with Sellmeier constants of the immersion oils. Program listing with examples will be provided on written request to the author. (Author/JM)

  19. Mean centering of double divisor ratio spectra, a novel spectrophotometric method for analysis of ternary mixtures.

    PubMed

    Hassan, Said A; Elzanfaly, Eman S; Salem, Maissa Y; El-Zeany, Badr A

    2016-01-15

    A novel spectrophotometric method was developed for determination of ternary mixtures without previous separation, showing significant advantages over conventional methods. The new method is based on mean centering of double divisor ratio spectra. The mathematical explanation of the procedure is illustrated. The method was evaluated by determination of model ternary mixture and by the determination of Amlodipine (AML), Aliskiren (ALI) and Hydrochlorothiazide (HCT) in laboratory prepared mixtures and in a commercial pharmaceutical preparation. For proper presentation of the advantages and applicability of the new method, a comparative study was established between the new mean centering of double divisor ratio spectra (MCDD) and two similar methods used for analysis of ternary mixtures, namely mean centering (MC) and double divisor of ratio spectra-derivative spectrophotometry (DDRS-DS). The method was also compared with a reported one for analysis of the pharmaceutical preparation. The method was validated according to the ICH guidelines and accuracy, precision, repeatability and robustness were found to be within the acceptable limits. PMID:26298680

  20. Mean centering of double divisor ratio spectra, a novel spectrophotometric method for analysis of ternary mixtures

    NASA Astrophysics Data System (ADS)

    Hassan, Said A.; Elzanfaly, Eman S.; Salem, Maissa Y.; El-Zeany, Badr A.

    2016-01-01

    A novel spectrophotometric method was developed for determination of ternary mixtures without previous separation, showing significant advantages over conventional methods. The new method is based on mean centering of double divisor ratio spectra. The mathematical explanation of the procedure is illustrated. The method was evaluated by determination of model ternary mixture and by the determination of Amlodipine (AML), Aliskiren (ALI) and Hydrochlorothiazide (HCT) in laboratory prepared mixtures and in a commercial pharmaceutical preparation. For proper presentation of the advantages and applicability of the new method, a comparative study was established between the new mean centering of double divisor ratio spectra (MCDD) and two similar methods used for analysis of ternary mixtures, namely mean centering (MC) and double divisor of ratio spectra-derivative spectrophotometry (DDRS-DS). The method was also compared with a reported one for analysis of the pharmaceutical preparation. The method was validated according to the ICH guidelines and accuracy, precision, repeatability and robustness were found to be within the acceptable limits.
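
    The arithmetic core of the method, as described in the abstract, can be sketched on synthetic Gaussian bands as follows; the actual AML/ALI/HCT spectra, the wavelength selection, and the calibration against standards are not reproduced, so the numbers here are purely illustrative.

```python
import numpy as np

wl = np.linspace(200.0, 400.0, 401)              # wavelength grid, nm

def band(center, width):
    """Synthetic Gaussian band standing in for a measured absorption spectrum."""
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Hypothetical standard spectra of components X, Y, Z and a ternary mixture of them.
x_std, y_std, z_std = band(250.0, 30.0), band(290.0, 30.0), band(330.0, 25.0)
mixture = 0.4 * x_std + 0.7 * y_std + 0.3 * z_std

# Step 1: divide the mixture spectrum by the double divisor (sum of two standard spectra).
ratio = mixture / (x_std + y_std)

# Step 2: mean-center the resulting ratio spectrum; in the published methods the value
# read at a suitable wavelength is then related to the concentration of the third
# component through a calibration built from pure standards treated the same way.
mean_centered = ratio - ratio.mean()
print(round(float(mean_centered[np.argmin(np.abs(wl - 330.0))]), 4))
```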

  1. Double Hypernuclei Experiment with Hybrid Emulsion Method at J-PARC

    NASA Astrophysics Data System (ADS)

    Ekawa, Hiroyuki

    Double hypernuclei are important probes to study systems with strangeness S = -2. Several emulsion experiments have been performed to search for them. We are planning a new experiment to search for double hypernuclei at the K1.8 beam line in the Hadron Experimental Facility (J-PARC E07 experiment). Ξ- tracks in the emulsion plates and the SSD will be automatically connected by a hybrid method. The estimated Ξ- stopped statistics is 10 times as high as that of the KEK E373 experiment. Discoveries of 10 new double hypernuclear species are expected, which will enable us to discuss binding energies in terms of their mass number dependence. On the other hand, we will also observe X rays from Ξ- atoms with a germanium detector array installed close to the emulsion plates by tagging Ξ- stopped events. This will be the first measurement to give information on the Ξ- potential in the nuclear surface region.

  2. An analytical method for analyzing symmetry-breaking bifurcation and period-doubling bifurcation

    NASA Astrophysics Data System (ADS)

    Zou, Keguan; Nagarajaiah, Satish

    2015-05-01

    A new modification of homotopy analysis method (HAM) is proposed in this paper. The auxiliary differential operator is specifically chosen so that more than one secular term must be eliminated. The proposed method can capture asymmetric and period-2 solutions with satisfactory accuracy and hence can be used to predict symmetry-breaking and period-doubling bifurcation points. The variation of accuracy is investigated when different number of frequencies are considered.

  3. Lead generation and examples opinion regarding how to follow up hits.

    PubMed

    Orita, Masaya; Ohno, Kazuki; Warizaya, Masaichi; Amano, Yasushi; Niimi, Tatsuya

    2011-01-01

    In fragment-based drug discovery (FBDD), not only identifying the starting fragment hit to be developed but also generating a drug lead from that starting fragment hit is important. Converting fragment hits to leads is generally similar to a high-throughput screening (HTS) hits-to-leads approach in that properties associated with activity for a target protein, such as selectivity against other targets and absorption, distribution, metabolism, excretion, and toxicity (ADME/Tox), and physicochemical properties should be taken into account. However, enhancing the potency of the fragment hit is a key requirement in FBDD, unlike HTS, because initial fragment hits are generally weak. This enhancement is presently achieved by adding additional chemical groups which bind to additional parts of the target protein or by joining or combining two or more hit fragments; however, strategies for effecting greater improvements in effective activity are needed. X-ray analysis is a key technology attractive for converting fragments to drug leads. This method makes it clear whether a fragment hit can act as an anchor and provides insight regarding introduction of functional groups to improve fragment activity. Data on follow-up chemical synthesis of fragment hits has allowed for the differentiation of four different strategies: fragment optimization, fragment linking, fragment self-assembly, and fragment evolution. Here, we discuss our opinion regarding how to follow up on fragment hits, with a focus on the importance of fragment hits as an anchor moiety to so-called hot spots in the target protein using crystallographic data. PMID:21371599

  4. Rising Blood Sugar Hitting More Obese Adults

    MedlinePlus

    ... https://medlineplus.gov/news/fullstory_159853.html Rising Blood Sugar Hitting More Obese Adults To curb diabetes, researchers ... HealthDay News) -- Among obese American adults, control of blood sugar is worsening, leading to more diabetes and heart ...

  5. Investigation of an innovative method for DC flow suppression of double-inlet pulse tube coolers

    NASA Astrophysics Data System (ADS)

    Hu, J. Y.; Luo, E. C.; Wu, Z. H.; Dai, W.; Zhu, S. L.

    2007-05-01

    The use of the double-inlet mode in the pulse tube cooler opens up the possibility of a DC flow circulating around the regenerator and the pulse tube. The DC flow sometimes deteriorates the performance of the cryocooler because such a steady flow adds an unwanted thermal load to the cold heat exchanger. It seems that this problem is still not well solved although a lot of effort has been made. Here we introduce a membrane-barrier method for DC flow suppression in double-inlet pulse tube coolers. An elastic membrane is installed between the pulse tube cooler inlet and the double-inlet valve to break the closed-loop flow path of the DC flow. The membrane is acoustically transparent but blocks the DC flow completely. Thus the DC flow is thoroughly suppressed and the merit of the double-inlet mode is retained. With this method, a temperature reduction of tens of kelvin was obtained in our single-stage pulse tube cooler and the lowest temperature reached 29.8 K.

  6. Application of magnetic printing method to hard-disk media with double recording layers

    NASA Astrophysics Data System (ADS)

    Ono, Takuya; Kuboki, Yoshiyuki; Ajishi, Yoshifumi; Saito, Akira

    2003-05-01

    The magnetic printing method, which can duplicate soft magnetic patterns containing digital information such as servosignals formed on a master disk onto recording media, enables signals to be written to hard-disk media having high coercivities above 6000 Oe. We propose the application of the magnetic printing method to a hard-disk medium having double recording layers, one layer of which has high coercivity and is to be printed with digital information. This double recording layer medium is a hard-disk medium that has a magnetic read-only-memory (MROM) layer. In this study, we demonstrated a method for printing to this medium, which has MROM, and discussed the magnetic properties and recording performances of this medium.

  7. A double filtering method for measuring the translational velocity of fluorescently stained cells

    SciTech Connect

    Yasokawa, Toshiki; Ishimaru, Ichirou; Kuriyama, Shigeki; Masaki, Tsutomu; Takegawa, Kaoru; Tanaka, Naotaka

    2007-09-24

    The authors propose a double filtering method to measure translational velocity for tracking fluorescently stained cells. This method employs two diffraction gratings installed in the infinity space through which the parallel pencil beam of the fluorescence passes. With this method, the change in light intensity whose period is proportional to the translational velocity of the sample can be obtained at the imaging surface. By using a sample that has a random distribution of fluorescence intensity, the authors verified that translational velocity measurements could be achieved using the proposed method.

  8. Description of a double centrifugation tube method for concentrating canine platelets

    PubMed Central

    2013-01-01

    Background To evaluate the efficiency of platelet-rich plasma preparations by means of a double centrifugation tube method to obtain platelet-rich canine plasma at a concentration at least 4 times higher than the baseline value and a concentration of white blood cells not exceeding twice the reference range. A complete blood count was carried out for each sample and each concentrate. Whole blood samples were collected from 12 clinically healthy dogs (consenting blood donors). Blood was processed by a double centrifugation tube method to obtain platelet concentrates, which were then analyzed by a flow cytometry haematology system for haemogram. Platelet concentration and white blood cell count were determined in all samples. Results Platelet concentration at least 4 times higher than the baseline value and a white blood cell count not exceeding twice the reference range were obtained respectively in 10 cases out of 12 (83.3%) and 11 cases out of 12 (91.6%). Conclusions This double centrifugation tube method is a relatively simple and inexpensive method for obtaining platelet-rich canine plasma, potentially available for therapeutic use to improve the healing process. PMID:23876182

  9. Novel Microdilution Method to Assess Double and Triple Antibiotic Combination Therapy In Vitro

    PubMed Central

    El-Azizi, Mohamed

    2016-01-01

    An in vitro microdilution method was developed to assess double and triple combinations of antibiotics. Five antibiotics including ciprofloxacin, amikacin, ceftazidime, piperacillin, and imipenem were tested against 10 clinical isolates of Pseudomonas aeruginosa. Each isolate was tested against ten double and nine triple combinations of the antibiotics. A 96-well plate was used to test three antibiotics, each one alone and in double and triple combinations against each isolate. The minimum bacteriostatic and bactericidal concentrations in combination were determined with respect to the most potent antibiotic. An Interaction Code (IC) was generated for each combination, where a numerical value was designated based on the 2-fold increase or decrease in the MICs with respect to the most potent antibiotic. The results of the combinations were verified by time-kill assay at constant concentrations of the antibiotics and in a chemostat. Only 13% of the double combinations were synergistic, whereas 5% showed antagonism. Forty-three percent of the triple combinations were synergistic with no antagonism observed, and 100% synergism was observed in combination of ciprofloxacin, amikacin, and ceftazidime. The presented protocol is simple and fast and can help the clinicians in the early selection of the effective antibiotic therapy for treatment of severe infections. PMID:27195009

  10. Research on text encryption and hiding method with double-random phase-encoding

    NASA Astrophysics Data System (ADS)

    Xu, Hongsheng; Sang, Nong

    2013-10-01

    By using optical image processing techniques, a novel text encryption and hiding method based on the double-random phase-encoding technique is proposed in this paper. In the first step, the secret message is transformed into a 2-dimensional array. The higher bits of the elements in the array are filled with the bit stream of the secret text, while the lower bits store specific values. Then, the transformed array is encoded by the double random phase encoding technique. Finally, the encoded array is embedded in a public host image to obtain the image embedded with hidden text. The performance of the proposed technique is tested via analytical modeling and a test data stream. Experimental results show that the secret text can be recovered either accurately or almost accurately, while maintaining the quality of the host image embedded with hidden data, by properly selecting the method of transforming the secret text into an array and the superimposition coefficient.
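
    A minimal sketch of the classical Fourier-domain double random phase encoding step used inside such schemes is given below; the bit-level packing of the text into an array and the embedding into a host image described above are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)

def drpe_encrypt(img, phase1, phase2):
    """Classical double random phase encoding: random phase mask in the input plane,
    Fourier transform, second random phase mask in the Fourier plane, inverse transform."""
    field = img * np.exp(1j * 2 * np.pi * phase1)
    spectrum = np.fft.fft2(field) * np.exp(1j * 2 * np.pi * phase2)
    return np.fft.ifft2(spectrum)

def drpe_decrypt(cipher, phase1, phase2):
    """Undo the two phase masks in reverse order."""
    spectrum = np.fft.fft2(cipher) * np.exp(-1j * 2 * np.pi * phase2)
    field = np.fft.ifft2(spectrum) * np.exp(-1j * 2 * np.pi * phase1)
    return np.abs(field)

# A small binary array carrying the hidden text bits stands in for the secret data.
secret = rng.integers(0, 2, (32, 32)).astype(float)
phase1, phase2 = rng.random((32, 32)), rng.random((32, 32))
cipher = drpe_encrypt(secret, phase1, phase2)
recovered = drpe_decrypt(cipher, phase1, phase2)
print("max reconstruction error:", np.abs(recovered - secret).max())
```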

  11. Key management of the double random-phase-encoding method using public-key encryption

    NASA Astrophysics Data System (ADS)

    Saini, Nirmala; Sinha, Aloka

    2010-03-01

    Public-key encryption has been used to encode the key of the encryption process. In the proposed technique, an input image has been encrypted by using the double random-phase-encoding method with the extended fractional Fourier transform. The key of the encryption process has been encoded by using the Rivest-Shamir-Adleman (RSA) public-key encryption algorithm. The encoded key has then been transmitted to the receiver side along with the encrypted image. In the decryption process, first the encoded key has been decrypted using the secret key and then the encrypted image has been decrypted by using the retrieved key parameters. The proposed technique has an advantage over the double random-phase-encoding method because the problem associated with the transmission of the key has been eliminated by using public-key encryption. Computer simulation has been carried out to validate the proposed technique.

  12. Temperature measurement of wood flame based on the double line method of atomic emission spectra

    NASA Astrophysics Data System (ADS)

    Hao, Xiaojian; Liu, Zhenhua; Sang, Tao

    2016-01-01

    To meet the requirements for measuring transient high temperatures in explosion fields and in the bore of barrel weapons, a temperature measurement system based on the double-line method of atomic emission spectra was designed, and a flame spectrum testing system was used for the experimental analysis. An experimental study of wood burning spectra was carried out with the flame spectrum testing system. The measured spectra contained atomic emission lines of the elements K and Na, and the ease of excitation of the two kinds of atomic emission spectra was analyzed. The temperature was calculated from the two spectral lines K I 766.5 nm and 769.9 nm. The results show that, compared with Na, the excitation temperature of the K atomic emission spectra is lower. By the double-line method, the temperature of the burning wood is 1040 K, with an error of 3.7%.
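
    The Boltzmann two-line relation that underlies the double-line method is, in the usual textbook notation (line intensities I, transition probabilities A, statistical weights g, upper-level energies E, wavelengths λ):

```latex
\frac{I_1}{I_2} = \frac{A_1 g_1 \lambda_2}{A_2 g_2 \lambda_1}
                  \exp\!\left(-\frac{E_1 - E_2}{k_B T}\right)
\quad\Longrightarrow\quad
T = \frac{E_2 - E_1}{k_B \,\ln\!\left(\dfrac{I_1 A_2 g_2 \lambda_1}{I_2 A_1 g_1 \lambda_2}\right)} .
```

    The specific transition constants for the K I 766.5 nm / 769.9 nm pair are not reproduced here.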

  13. Chosen-plaintext attack on double-random-phase-encoding-based image hiding method

    NASA Astrophysics Data System (ADS)

    Xu, Hongsheng; Li, Guirong; Zhu, Xianchen

    2015-12-01

    By using optical image processing techniques, a novel text encryption and hiding method applied by double-random phase-encoding technique is proposed in the paper. The first step is that the secret message is transformed into a 2- dimension array. The higher bits of the elements in the array are used to fill with the bit stream of the secret text, while the lower bits are stored specific values. Then, the transformed array is encoded by double random phase encoding technique. Last, the encoded array is embedded on a public host image to obtain the image embedded with hidden text. The performance of the proposed technique is tested via analytical modeling and test data stream. Experimental results show that the secret text can be recovered either accurately or almost accurately, while maintaining the quality of the host image embedded with hidden data by properly selecting the method of transforming the secret text into an array and the superimposition coefficient.

  14. Double tracer autoradiographic method for sequential evaluation of regional cerebral perfusion

    SciTech Connect

    Matsuda, H.; Tsuji, S.; Oba, H.; Kinuya, K.; Terada, H.; Sumiya, H.; Shiba, K.; Mori, H.; Hisada, K.; Maeda, T. )

    1989-01-01

    A new double tracer autoradiographic method for the sequential evaluation of altered regional cerebral perfusion in the same animal is presented. This method is based on the sequential injection of two tracers, 99mTc-hexamethylpropyleneamine oxime and N-isopropyl-(125I)p-iodoamphetamine. This method is validated in the assessment of brovincamine effects on regional cerebral perfusion in an experimental model of chronic brain ischemia in the rat. The drug enhanced perfusion recovery in low-flow areas, selectively in surrounding areas of infarction. The results suggest that this technique is of potential use in the study of neuropharmacological effects applied during the experiment.

  15. A Double-difference Earthquake location algorithm: Method and application to the Northern Hayward Fault, California

    USGS Publications Warehouse

    Waldhauser, F.; Ellsworth, W.L.

    2000-01-01

    We have developed an efficient method to determine high-resolution hypocenter locations over large distances. The location method incorporates ordinary absolute travel-time measurements and/or cross-correlation P- and S-wave differential travel-time measurements. Residuals between observed and theoretical travel-time differences (or double-differences) are minimized for pairs of earthquakes at each station while linking together all observed event-station pairs. A least-squares solution is found by iteratively adjusting the vector difference between hypocentral pairs. The double-difference algorithm minimizes errors due to unmodeled velocity structure without the use of station corrections. Because catalog and cross-correlation data are combined into one system of equations, interevent distances within multiplets are determined to the accuracy of the cross-correlation data, while the relative locations between multiplets and uncorrelated events are simultaneously determined to the accuracy of the absolute travel-time data. Statistical resampling methods are used to estimate data accuracy and location errors. Uncertainties in double-difference locations are improved by more than an order of magnitude compared to catalog locations. The algorithm is tested, and its performance is demonstrated, on two clusters of earthquakes located on the northern Hayward fault, California. There it collapses the diffuse catalog locations into sharp images of seismicity and reveals horizontal lineations of hypocenters that define the narrow regions on the fault where stress is released by brittle failure.
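
    The defining residual of the double-difference method, in the usual notation (observed minus calculated differential travel times for an event pair i, j at station k), is:

```latex
dr_k^{ij} = \left(t_k^{i} - t_k^{j}\right)^{\mathrm{obs}}
          - \left(t_k^{i} - t_k^{j}\right)^{\mathrm{cal}} ,
```

    and the algorithm iteratively adjusts the relative hypocenter positions to minimize these residuals; the partial-derivative terms of the linearized system are omitted here.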

  16. High resolution image reconstruction method for a double-plane PET system with changeable spacing

    NASA Astrophysics Data System (ADS)

    Gu, Xiao-Yue; Zhou, Wei; Li, Lin; Wei, Long; Yin, Peng-Fei; Shang, Lei-Min; Yun, Ming-Kai; Lu, Zhen-Rui; Huang, Xian-Chao

    2016-05-01

    Breast-dedicated positron emission tomography (PET) imaging techniques have been developed in recent years. Their capacities to detect millimeter-sized breast tumors have been the subject of many studies. Some of them have been confirmed with good results in clinical applications. With regard to biopsy application, a double-plane detector arrangement is practicable, as it offers the convenience of breast immobilization. However, the serious blurring effect of the double-plane PET, with changeable spacing for different breast sizes, should be studied. We investigated a high resolution reconstruction method applicable for a double-plane PET. The distance between the detector planes is changeable. Geometric and blurring components were calculated in real-time for different detector distances, and accurate geometric sensitivity was obtained with a new tube area model. Resolution recovery was achieved by estimating blurring effects derived from simulated single gamma response information. The results showed that the new geometric modeling gave a more finite and smooth sensitivity weight in the double-plane PET. The blurring component yielded contrast recovery levels that could not be reached without blurring modeling, and improved visual recovery of the smallest spheres and better delineation of the structures in the reconstructed images were achieved with the blurring component. Statistical noise had lower variance at the voxel level with blurring modeling at matched resolution, compared to without blurring modeling. In distance-changeable double-plane PET, finite resolution modeling during reconstruction achieved resolution recovery, without noise amplification. Supported by Knowledge Innovation Project of The Chinese Academy of Sciences (KJCX2-EW-N06)

  17. HIT: time to end behavioral health discrimination.

    PubMed

    Rosenberg, Linda

    2012-10-01

    While the Health Information Technology for Economic and Clinical Health Act, enacted as part of the American Recovery and Reinvestment Act of 2009, provided $20.6 billion for incentive payments to support the adoption and meaningful use of health information technology (HIT), behavioral health organizations were not eligible to receive facility payments. The consequences of excluding behavioral health from HIT incentive payments are found in the results of the "HIT Adoption and Meaningful Use Readiness in Community Behavioral Health" survey. The survey found that only 2% of community behavioral health organizations are able to meet federal meaningful use (MU) requirements-compare this to the 27% of Federally Qualified Health Centers and 20% of hospitals that already meet some level of MU requirements. Behavioral health organizations, serving more than eight million adults, children, and families with mental illnesses and addiction disorders, are ready and eager to adopt HIT to meet the goals of better healthcare, better health, and lower costs. But reaching these goals may prove impossible unless behavioral health achieves "parity" within healthcare and receives resources for the adoption of HIT. PMID:22956203

  18. Single-Mach and double-Mach reflection - Its representation in Ernst Mach's historical soot method

    NASA Astrophysics Data System (ADS)

    Krehl, P.

    In 1875 Ernst Mach discovered the effect of irregular interaction of shock waves, the so-called single Mach reflection (SMR), which for symmetric geometry is characterized by two triple points. He recorded their two trajectories on a soot-covered glass plate. Appearing as two mirror-symmetric V-branches, they form the well-known Mach soot funnel. Combining this soot method with the schlieren technique facilitates the interpretation of soot-recorded interaction phenomena and allows the soot removal mechanism to be resolved in time. Increasing the dynamic recording range of the soot layer in terms of reflected shock pressures even enables visualization of double-Mach reflection (DMR) which, in the case of symmetric shock interaction, is characterized by a second concentric, external 'double-Mach funnel'. At transition of DMR to SMR it merges into the ordinary 'single-Mach funnel'.

  19. Earthquake hypocenter relocation using double difference method in East Java and surrounding areas

    SciTech Connect

    C, Aprilia Puspita; Nugraha, Andri Dian; Puspito, Nanang T

    2015-04-24

    Determination of precise hypocenter locations is very important in order to provide information about subsurface fault planes and for seismic hazard analysis. In this study, we have relocated earthquake hypocenters in the eastern part of Java and surrounding areas from the local earthquake catalog compiled by the Meteorological, Climatological, and Geophysical Agency of Indonesia (MCGA) for the period 2009-2012 using the double-difference method. The results show that after the relocation process there are significant changes in the positions and orientations of the earthquake hypocenters, which correlate with the geological setting of this region. We observed an indication of a double seismic zone at depths of 70-120 km within the subducting slab south of the eastern part of Java. Our results will provide useful information for further seismological studies and seismic hazard analysis in this region.

  20. Earthquake hypocenter relocation using double difference method in East Java and surrounding areas

    NASA Astrophysics Data System (ADS)

    C, Aprilia Puspita; Nugraha, Andri Dian; Puspito, Nanang T.

    2015-04-01

    Determination of precise hypocenter locations is very important in order to provide information about subsurface fault planes and for seismic hazard analysis. In this study, we have relocated earthquake hypocenters in the eastern part of Java and surrounding areas from the local earthquake catalog compiled by the Meteorological, Climatological, and Geophysical Agency of Indonesia (MCGA) for the period 2009-2012 using the double-difference method. The results show that after the relocation process there are significant changes in the positions and orientations of the earthquake hypocenters, which correlate with the geological setting of this region. We observed an indication of a double seismic zone at depths of 70-120 km within the subducting slab south of the eastern part of Java. Our results will provide useful information for further seismological studies and seismic hazard analysis in this region.

  1. Hypersingular meshless method using double-layer potentials for three-dimensional exterior acoustic problems.

    PubMed

    Young, D L; Chen, K H; Liu, T Y; Wu, C S

    2016-01-01

    Three-dimensional exterior acoustic problems with irregular domains are solved using a hypersingular meshless method. In particular, the method of fundamental solutions (MFS) is used to formulate and analyze such acoustic problems. It is well known that source points for MFS cannot be located on the real boundary due to the singularity of the kernel functions. Thus, the diagonal terms of the influence matrices are unobtainable when source points are located on the boundary. An efficient approach is proposed to overcome such difficulties, when the MFS is used for three-dimensional exterior acoustic problems. This work is an extension of previous research on two-dimensional problems. The solution of the problem is expressed in terms of a double-layer potential representation on the physical boundary. Three examples are presented in which the proposed method is compared to the MFS and boundary element method. Good numerical performance is demonstrated by the proposed hypersingular meshless method. PMID:26827046
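
    For orientation, the sketch below sets up the conventional MFS for a 3-D exterior Helmholtz problem on a unit sphere: source points are retracted inside the boundary to avoid the singular kernel, and the boundary data come from a known interior monopole so the exterior field can be checked against the exact solution. This is the plain single-layer MFS, not the paper's hypersingular double-layer formulation; the wavenumber, point counts, and retraction radius are illustrative choices.

      import numpy as np

      k = 2.0    # wavenumber
      N = 256    # boundary collocation points = number of source points

      def fibonacci_sphere(n):
          """Roughly uniform points on the unit sphere."""
          i = np.arange(n) + 0.5
          phi = np.arccos(1.0 - 2.0 * i / n)
          theta = np.pi * (1.0 + 5 ** 0.5) * i
          return np.column_stack([np.sin(phi) * np.cos(theta),
                                  np.sin(phi) * np.sin(theta),
                                  np.cos(phi)])

      def green(x, y):
          """Free-space Helmholtz fundamental solution exp(ikr) / (4*pi*r)."""
          r = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
          return np.exp(1j * k * r) / (4.0 * np.pi * r)

      boundary = fibonacci_sphere(N)          # collocation points on the physical boundary
      sources = 0.6 * fibonacci_sphere(N)     # sources retracted inside the boundary

      def p_exact(pts):                       # field of a unit monopole at the origin
          r = np.linalg.norm(pts, axis=-1)
          return np.exp(1j * k * r) / (4.0 * np.pi * r)

      coeff = np.linalg.solve(green(boundary, sources), p_exact(boundary))  # source strengths

      field_pt = np.array([[0.0, 0.0, 3.0]])                                # exterior test point
      print(abs(green(field_pt, sources) @ coeff - p_exact(field_pt)))      # should be small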

  2. Low-noise multiple watermarks technology based on complex double random phase encoding method

    NASA Astrophysics Data System (ADS)

    Zheng, Jihong; Lu, Rongwen; Sun, Liujie; Zhuang, Songlin

    2010-11-01

    Based on the double random phase encoding (DRPE) method, watermarking technology may provide a stable and robust way to protect the copyright of printed material. However, due to its linear character, DRPE presents a serious security risk when it is attacked. In this paper, a complex coding method, in which chaotic encryption based on the logistic map is added before the DRPE coding, is proposed and simulated. The results confirm that the complex method provides better security protection for the watermark. Furthermore, low-noise multiple watermarking is studied, in which multiple watermarks are embedded into one host printed image and decrypted individually with the corresponding phase keys. Digital simulation and mathematical analysis show that, with the same total embedding weight factor, multiple watermarking significantly improves the signal-to-noise ratio (SNR) of the output printed image. The complex multiple-watermark method may provide robust, stable, and reliable copyright protection with a higher quality printed image.
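
    The sketch below illustrates the two-layer idea in schematic form: a logistic-map driven pixel scrambling step followed by classical DRPE, with decryption reversing both layers. The map parameter, seed, and stand-in image are arbitrary, and the paper's specific chaotic cipher and watermark-embedding scheme are not reproduced here.

      import numpy as np

      rng = np.random.default_rng(1)
      img = rng.random((64, 64))                 # stand-in for the watermark image

      def logistic_sequence(x0, r, n):
          """Iterate the logistic map x_{k+1} = r * x_k * (1 - x_k)."""
          x = np.empty(n)
          x[0] = x0
          for k in range(1, n):
              x[k] = r * x[k - 1] * (1.0 - x[k - 1])
          return x

      def chaotic_permutation(shape, x0=0.3779, r=3.99):
          return np.argsort(logistic_sequence(x0, r, np.prod(shape)))

      def drpe_encrypt(f, phase1, phase2):
          return np.fft.ifft2(np.fft.fft2(f * np.exp(2j * np.pi * phase1)) * np.exp(2j * np.pi * phase2))

      def drpe_decrypt(c, phase1, phase2):
          return np.fft.ifft2(np.fft.fft2(c) * np.exp(-2j * np.pi * phase2)) * np.exp(-2j * np.pi * phase1)

      # Chaotic scrambling (first layer), then DRPE (second layer).
      perm = chaotic_permutation(img.shape)
      scrambled = img.ravel()[perm].reshape(img.shape)
      phase1, phase2 = rng.random(img.shape), rng.random(img.shape)   # the two random phase keys
      cipher = drpe_encrypt(scrambled, phase1, phase2)

      # Full decryption: undo DRPE, then undo the chaotic permutation.
      recovered = np.real(drpe_decrypt(cipher, phase1, phase2))
      unscrambled = np.empty(img.size)
      unscrambled[perm] = recovered.ravel()
      print(np.allclose(unscrambled.reshape(img.shape), img, atol=1e-10))   # True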

  3. A Quantum Algorithm for Estimating Hitting Times of Markov Chains

    NASA Astrophysics Data System (ADS)

    Narayan Chowdhury, Anirban; Somma, Rolando

    We present a quantum algorithm to estimate the hitting time of a reversible Markov chain faster than classically possible. To this end, we show that the hitting time is given by an expected value of the inverse of a Hermitian matrix. To obtain this expected value, our algorithm combines three important techniques developed in the literature. One such technique is called spectral gap amplification and we use it to amplify the gap of the Hermitian matrix or reduce its condition number. We then use a new algorithm by Childs, Kothari, and Somma to implement the inverse of a matrix, and finally use methods developed in the context of quantum metrology to reduce the complexity of expected-value estimation for a given precision. The authors acknowledge support from AFOSR Grant Number FA9550-12-1-0057 and the Google Research Award.

  4. Hitting is contagious: experience and action induction.

    PubMed

    Gray, Rob; Beilock, Sian L

    2011-03-01

    In baseball, it is believed that "hitting is contagious," that is, probability of success increases if the previous few batters get a hit. Could this effect be partially explained by action induction--that is, the tendency to perform an action related to one that has just been observed? A simulation was used to investigate the effect of inducing stimuli on batting performance for more-experienced (ME) and less-experienced (LE) baseball players. Three types of inducing stimuli were compared with a no-induction condition: action (a simulated ball traveling from home plate into left, right, or center field), outcome (a ball resting in either left, right, or center field), and verbal (the word "left", "center", or "right"). For both ME and LE players, fewer pitches were required for a successful hit in the action condition. For ME players, there was a significant relationship between the inducing stimulus direction and hit direction for both the action and outcome prompts. For LE players, the prompt only had a significant effect on batting performance in the action condition, and the magnitude of the effect was significantly smaller than for ME. The effect of the inducing stimulus decreased as the delay (i.e., no. of pitches between prompt and hit) increased, with the effect being eliminated after roughly 4 pitches for ME and 2 pitches for LE. It is proposed that the differences in the magnitude and time course of action induction as a function of experience occurred because ME have more well-developed perceptual-motor representations for directional hitting. PMID:21443380

  5. A time-resolved Langmuir double-probe method for the investigation of pulsed magnetron discharges

    SciTech Connect

    Welzel, Th.; Dunger, Th.; Kupfer, H.; Richter, F.

    2004-12-15

    Langmuir probes are important means for the characterization of plasma discharges. For measurements in plasmas used for the deposition of thin films, the Langmuir double probe is especially suited. With the increasing popularity of pulsed deposition discharges, there is also an increasing need for time-resolved characterization methods. For Langmuir probes, several single-probe approaches to time-resolved measurements are reported but very few for the double probe. We present a time-resolved Langmuir double-probe technique, which is applied to a pulsed magnetron discharge at several 100 kHz used for MgO deposition. The investigations show that a proper treatment of the current measurement is necessary to obtain reliable results. In doing so, a characteristic time dependence of the charge-carrier density during the "pulse on" time containing maximum values of almost 2×10^11 cm^-3 was found. This characteristic time dependence varies with the pulse frequency and the duty cycle. A similar time dependence of the electron temperature is only observed when the probe is placed near the magnesium target.

  6. Second-order perturbation corrections to singles and doubles coupled-cluster methods: General theory and application to the valence optimized doubles model

    NASA Astrophysics Data System (ADS)

    Gwaltney, Steven R.; Sherrill, C. David; Head-Gordon, Martin; Krylov, Anna I.

    2000-09-01

    We present a general perturbative method for correcting a singles and doubles coupled-cluster energy. The coupled-cluster wave function is used to define a similarity-transformed Hamiltonian, which is partitioned into a zeroth-order part that the reference problem solves exactly plus a first-order perturbation. Standard perturbation theory through second-order provides the leading correction. Applied to the valence optimized doubles (VOD) approximation to the full-valence complete active space self-consistent field method, the second-order correction, which we call (2), captures dynamical correlation effects through external single, double, and semi-internal triple and quadruple substitutions. A factorization approximation reduces the cost of the quadruple substitutions to only sixth order in the size of the molecule. A series of numerical tests are presented showing that VOD(2) is stable and well-behaved provided that the VOD reference is also stable. The second-order correction is also general to standard unwindowed coupled-cluster energies such as the coupled-cluster singles and doubles (CCSD) method itself, and the equations presented here fully define the corresponding CCSD(2) energy.
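
    Schematically, the construction described above can be summarized in Löwdin-partitioning form. The display below is only a generic second-order sketch; the actual CCSD(2)/VOD(2) working equations additionally involve the left-hand (Lambda-like) state of the non-Hermitian similarity-transformed Hamiltonian and the factorization approximation for the quadruple substitutions.

      \bar{H} \;=\; e^{-\hat{T}} \hat{H}\, e^{\hat{T}} \;=\; \bar{H}^{(0)} + \bar{H}^{(1)},
      \qquad
      E \;\approx\; E_{\mathrm{CC}}
        + \langle \Phi_0 |\, \bar{H}^{(1)}\, \hat{Q}\,
          \bigl( E^{(0)} - \bar{H}^{(0)} \bigr)^{-1} \hat{Q}\, \bar{H}^{(1)} \,| \Phi_0 \rangle ,

    where \hat{Q} projects onto the substitutions excluded from the reference coupled-cluster problem.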

  7. Arabidopsis HIT4, a regulator involved in heat-triggered reorganization of chromatin and release of transcriptional gene silencing, relocates from chromocenters to the nucleolus in response to heat stress.

    PubMed

    Wang, Lian-Chin; Wu, Jia-Rong; Hsu, Yi-Ju; Wu, Shaw-Jye

    2015-01-01

    Arabidopsis HIT4 is known to mediate heat-induced decondensation of chromocenters and release from transcriptional gene silencing (TGS) with no change in the level of DNA methylation. It is unclear whether HIT4 and MOM1, a well-known DNA methylation-independent transcriptional silencer, have overlapping regulatory functions. A hit4-1/mom1 double mutant strain was generated. Its nuclear morphology and TGS state were compared with those of wild-type, hit4-1, and mom1 plants. Fluorescent protein tagging was employed to track the fates of HIT4, hit4-1 and MOM1 in vivo under heat stress. HIT4- and MOM1-mediated TGS were distinguishable. Both HIT4 and MOM1 were localized normally to chromocenters. Under heat stress, HIT4 relocated to the nucleolus, whereas MOM1 dispersed with the chromocenters. hit4-1 was able to relocate to the nucleolus under heat stress, but its relocation was insufficient to trigger the decompaction of chromocenters. The hypersensitivity to heat associated with the impaired reactivation of TGS in hit4-1 was not alleviated by mom1-induced release from TGS. HIT4 delineates a novel and MOM1-independent TGS regulation pathway. The involvement of a currently unidentified component that links HIT4 relocation and the large-scale reorganization of chromatin, and which is essential for heat tolerance in plants is hypothesized. PMID:25329561

  8. Statistical properties and pre-hit dynamics of price limit hits in the Chinese stock markets.

    PubMed

    Wan, Yu-Lei; Xie, Wen-Jie; Gu, Gao-Feng; Jiang, Zhi-Qiang; Chen, Wei; Xiong, Xiong; Zhang, Wei; Zhou, Wei-Xing

    2015-01-01

    Price limit trading rules are adopted in some stock markets (especially emerging markets) trying to cool off traders' short-term trading mania on individual stocks and increase market efficiency. Under such a microstructure, stocks may hit their up-limits and down-limits from time to time. However, the behaviors of price limit hits are not well studied partially due to the fact that main stock markets such as the US markets and most European markets do not set price limits. Here, we perform detailed analyses of the high-frequency data of all A-share common stocks traded on the Shanghai Stock Exchange and the Shenzhen Stock Exchange from 2000 to 2011 to investigate the statistical properties of price limit hits and the dynamical evolution of several important financial variables before stock price hits its limits. We compare the properties of up-limit hits and down-limit hits. We also divide the whole period into three bullish periods and three bearish periods to unveil possible differences during bullish and bearish market states. To uncover the impacts of stock capitalization on price limit hits, we partition all stocks into six portfolios according to their capitalizations on different trading days. We find that the price limit trading rule has a cooling-off effect (as opposed to the magnet effect), indicating that the rule takes effect in the Chinese stock markets. We find that price continuation is much more likely to occur than price reversal on the next trading day after a limit-hitting day, especially for down-limit hits, which has potential practical values for market practitioners. PMID:25874716

  9. Statistical Properties and Pre-Hit Dynamics of Price Limit Hits in the Chinese Stock Markets

    PubMed Central

    Wan, Yu-Lei; Xie, Wen-Jie; Gu, Gao-Feng; Jiang, Zhi-Qiang; Chen, Wei; Xiong, Xiong; Zhang, Wei; Zhou, Wei-Xing

    2015-01-01

    Price limit trading rules are adopted in some stock markets (especially emerging markets) trying to cool off traders’ short-term trading mania on individual stocks and increase market efficiency. Under such a microstructure, stocks may hit their up-limits and down-limits from time to time. However, the behaviors of price limit hits are not well studied partially due to the fact that main stock markets such as the US markets and most European markets do not set price limits. Here, we perform detailed analyses of the high-frequency data of all A-share common stocks traded on the Shanghai Stock Exchange and the Shenzhen Stock Exchange from 2000 to 2011 to investigate the statistical properties of price limit hits and the dynamical evolution of several important financial variables before stock price hits its limits. We compare the properties of up-limit hits and down-limit hits. We also divide the whole period into three bullish periods and three bearish periods to unveil possible differences during bullish and bearish market states. To uncover the impacts of stock capitalization on price limit hits, we partition all stocks into six portfolios according to their capitalizations on different trading days. We find that the price limit trading rule has a cooling-off effect (as opposed to the magnet effect), indicating that the rule takes effect in the Chinese stock markets. We find that price continuation is much more likely to occur than price reversal on the next trading day after a limit-hitting day, especially for down-limit hits, which has potential practical values for market practitioners. PMID:25874716

  10. Investigation of lasing in lead vapor by the double pulse method

    SciTech Connect

    Kazakov, V.V.; Markova, S.V.; Molchanova, L.V.; Petrash, G.G.

    1983-05-01

    The double pulse method was used to study the 722.9 nm line emitted by a lead vapor laser. The parameters of the second excitation pulse were measured simultaneously with the lasing characteristics. It was found that the main factor determining a decrease in the laser output power in the second pulse was the residual population of the lower active level. Variations in the excitation pulse had a relatively weak influence on the form of this dependence and were essentially manifested when there was a relatively long delay between the excitation pulses.

  11. Double-filtering method based on two acousto-optic tunable filters for hyperspectral imaging application.

    PubMed

    Wang, Pengchong; Zhang, Zhonghua

    2016-05-01

    A hyperspectral imaging system was demonstrated based on two acousto-optic tunable filters (AOTFs). Efficient regulation of the incoherent beam was executed by means of the wide-angular regime of Bragg diffraction in the birefringent materials. A double-filtering process was achieved when these two AOTFs operated with a central wavelength difference. In comparison with the single-filtering method, the spectral bandwidth was greatly compressed, giving an increment of 42.02% in spectral resolution at the wavelength of 651.62 nm. Experimental results and theoretical calculations are basically identical. Furthermore, the sidelobe was found to be suppressed by the double-filtering process, with the first order maximum decreased from -9.25 dB to -22.35 dB. The results indicated that high spectral resolution and high spectral purity were obtained simultaneously with this method. The basic spectral resolution performance was examined with a didymium glass by this configuration. We present our experimental methods and the detailed results obtained. PMID:27137600
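
    The toy calculation below illustrates the double-filtering idea numerically: two identical sinc-squared AOTF passbands with a small centre-wavelength offset are cascaded (multiplied), which narrows the net bandwidth and pushes the sidelobes down. The passband shape, width, and offset are illustrative and are not the parameters of the reported instrument.

      import numpy as np

      lam = np.linspace(645.0, 658.0, 20001)           # wavelength axis, nm
      centre, width, offset = 651.62, 2.0, 0.8         # nm; width sets the single-filter scale

      def aotf_passband(lam, lam_c, w):
          """Idealized AOTF transmission ~ sinc^2((lam - lam_c) / w)."""
          return np.sinc((lam - lam_c) / w) ** 2

      single = aotf_passband(lam, centre, width)
      double = (aotf_passband(lam, centre - offset / 2, width)
                * aotf_passband(lam, centre + offset / 2, width))

      def fwhm(lam, t):
          above = lam[t >= 0.5 * t.max()]
          return above[-1] - above[0]

      def first_sidelobe_db(t):
          # crude estimate: largest local maximum other than the main peak
          interior = (t[1:-1] > t[:-2]) & (t[1:-1] > t[2:])
          peaks = np.sort(t[1:-1][interior])[::-1]
          return 10.0 * np.log10(peaks[1] / peaks[0])

      print("FWHM single / double (nm):", fwhm(lam, single), fwhm(lam, double))
      print("first sidelobe (dB):", first_sidelobe_db(single), first_sidelobe_db(double))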

  12. Precise timing when hitting falling balls

    PubMed Central

    Brenner, Eli; Driesen, Ben; Smeets, Jeroen B. J.

    2014-01-01

    People are extremely good at hitting falling balls with a baseball bat. Despite the ball's constant acceleration, they have been reported to time hits with a standard deviation of only about 7 ms. To examine how people achieve such precision, we compared performance when there were no added restrictions, with performance when looking with one eye, when vision was blurred, and when various parts of the ball's trajectory were hidden from view. We also examined how the size of the ball and varying the height from which it was dropped influenced temporal precision. Temporal precision did not become worse when vision was blurred, when the ball was smaller, or when balls falling from different heights were randomly interleaved. The disadvantage of closing one eye did not exceed expectations from removing one of two independent estimates. Precision was higher for slower balls, but only if the ball being slower meant that one saw it longer before the hit. It was particularly important to see the ball while swinging the bat. Together, these findings suggest that people time their hits so precisely by using the changing elevation throughout the swing to adjust the bat's movement to that of the ball. PMID:24904380

  13. Precise timing when hitting falling balls.

    PubMed

    Brenner, Eli; Driesen, Ben; Smeets, Jeroen B J

    2014-01-01

    People are extremely good at hitting falling balls with a baseball bat. Despite the ball's constant acceleration, they have been reported to time hits with a standard deviation of only about 7 ms. To examine how people achieve such precision, we compared performance when there were no added restrictions, with performance when looking with one eye, when vision was blurred, and when various parts of the ball's trajectory were hidden from view. We also examined how the size of the ball and varying the height from which it was dropped influenced temporal precision. Temporal precision did not become worse when vision was blurred, when the ball was smaller, or when balls falling from different heights were randomly interleaved. The disadvantage of closing one eye did not exceed expectations from removing one of two independent estimates. Precision was higher for slower balls, but only if the ball being slower meant that one saw it longer before the hit. It was particularly important to see the ball while swinging the bat. Together, these findings suggest that people time their hits so precisely by using the changing elevation throughout the swing to adjust the bat's movement to that of the ball. PMID:24904380

  14. Science hit by US government crisis

    NASA Astrophysics Data System (ADS)

    Gwynne, Peter

    2013-11-01

    A 16-day government shutdown last month hit the US physics community hard as research projects ranging from space missions to polar geophysics were closed down after Congress failed to vote on its budget for the financial year 2014, which started on 1 October.

  15. Cognitive orientations in marathon running and "hitting the wall"

    PubMed Central

    Stevinson, C. D.; Biddle, S. J.

    1998-01-01

    OBJECTIVES: To investigate whether runners' cognitions during a marathon are related to "hitting the wall". To test a new and more comprehensive system for classifying cognition of marathon runners. METHODS: Non-elite runners (n = 66) completed a questionnaire after finishing the 1996 London marathon. The runners were recruited through the charity SPARKS for whom they were raising money by running in the race. RESULTS: Most runners reported that during the race their thoughts were internally associative, with internally dissociative thoughts being the least prevalent. Runners who "hit the wall" used more internal dissociation than other runners, indicating that it is a hazardous strategy, probably because sensory feedback is blocked. However, internal association was related to an earlier onset of "the wall", suggesting that too much attention on physical symptoms may magnify them, thereby exaggerating any discomfort. External dissociation was related to a later onset, probably because it may provide a degree of distraction but keeps attention on the race. CONCLUSIONS: "Hitting the wall" for recreational non-elite marathon runners is associated with their thought patterns during the race. In particular, "the wall" is associated with internal dissociation. PMID:9773172

  16. A comparison of error detection rates between the reading aloud method and the double data entry method.

    PubMed

    Kawado, Miyuki; Hinotsu, Shiro; Matsuyama, Yutaka; Yamaguchi, Takuhiro; Hashimoto, Shuji; Ohashi, Yasuo

    2003-10-01

    Data entry and its verification are important steps in the process of data management in clinical studies. In Japan, a kind of visual comparison called the reading aloud (RA) method is often used as an alternative to or in addition to the double data entry (DDE) method. In a typical RA method, one operator reads previously keyed data aloud while looking at a printed sheet or computer screen, and another operator compares the voice with the corresponding data recorded on case report forms (CRFs) to confirm whether the data are the same. We compared the efficiency of the RA method with that of the DDE method in the data management system of the Japanese Registry of Renal Transplantation. Efficiency was evaluated in terms of error detection rate and expended time. Five hundred sixty CRFs were randomly allocated to two operators for single data entry. Two types of DDE and RA methods were performed. Single data entry errors were detected in 358 of 104,720 fields (per-field error rate=0.34%). Error detection rates were 88.3% for the DDE method performed by a different operator, 69.0% for the DDE method performed by the same operator, 59.5% for the RA method performed by a different operator, and 39.9% for the RA method performed by the same operator. The differences in these rates were significant (p<0.001) between the two verification methods as well as between the types of operator (same or different). The total expended times were 74.8 hours for the DDE method and 57.9 hours for the RA method. These results suggest that in detecting errors of single data entry, the RA method is inferior to the DDE method, while its time cost is lower. PMID:14500053

  17. Double hexagonal graphene ring synthesized using a growth-etching method

    NASA Astrophysics Data System (ADS)

    Liu, Jinyang; Xu, Yangyang; Cai, Hongbing; Zuo, Chuandong; Huang, Zhigao; Lin, Limei; Guo, Xiaomin; Chen, Zhendong; Lai, Fachun

    2016-07-01

    Precisely controlling the layer number, stacking order, edge configuration, shape and structure of graphene is extremely challenging but highly desirable in scientific research. In this report, a new concept named the growth-etching method has been explored to synthesize a graphene ring using the chemical vapor deposition process. The graphene ring is a hexagonal structure, which contains a hexagonal exterior edge and a hexagonal hole in the centre region. The most important concept introduced here is that the oxide nanoparticle derived from annealing is found to play a dual role. Firstly, it acts as a nucleation site to grow the hexagonal graphene domain and then it works as a defect for etching to form a hole. The evolution process of the graphene ring with the etching time was carefully studied. In addition, a double hexagonal graphene ring was successfully synthesized for the first time by repeating the growth-etching process, which not only confirms the validity and repeatability of the method developed here but may also be further extended to grow unique graphene nanostructures with three, four, or even tens of graphene rings. Finally, a schematic model was drawn to illustrate how the double hexagonal graphene ring is generated and propagated. The results shown here may provide valuable guidance for the design and growth of unique nanostructures of graphene and other two-dimensional materials.

  18. Double hexagonal graphene ring synthesized using a growth-etching method.

    PubMed

    Liu, Jinyang; Xu, Yangyang; Cai, Hongbing; Zuo, Chuandong; Huang, Zhigao; Lin, Limei; Guo, Xiaomin; Chen, Zhendong; Lai, Fachun

    2016-08-01

    Precisely controlling the layer number, stacking order, edge configuration, shape and structure of graphene is extremely challenging but highly desirable in scientific research. In this report, a new concept named the growth-etching method has been explored to synthesize a graphene ring using the chemical vapor deposition process. The graphene ring is a hexagonal structure, which contains a hexagonal exterior edge and a hexagonal hole in the centre region. The most important concept introduced here is that the oxide nanoparticle derived from annealing is found to play a dual role. Firstly, it acts as a nucleation site to grow the hexagonal graphene domain and then it works as a defect for etching to form a hole. The evolution process of the graphene ring with the etching time was carefully studied. In addition, a double hexagonal graphene ring was successfully synthesized for the first time by repeating the growth-etching process, which not only confirms the validity and repeatability of the method developed here but may also be further extended to grow unique graphene nanostructures with three, four, or even tens of graphene rings. Finally, a schematic model was drawn to illustrate how the double hexagonal graphene ring is generated and propagated. The results shown here may provide valuable guidance for the design and growth of unique nanostructures of graphene and other two-dimensional materials. PMID:27387556

  19. Evidence for a one-hit theory in the immune bactericidal reaction and demonstration of a multi-hit response for hemolysis by streptolysin O and Clostridium perfringens theta-toxin.

    PubMed Central

    Inoue, K; Akiyama, Y; Kinoshita, T; Higashi, Y; Amano, T

    1976-01-01

    An analytical method was developed for estimating the number of hits necessary to lyse or kill cells in which various concentrations of the cells are treated with a constant amount of the lytic or killing agent in a constant reaction volume. The reaction may be due to a single-component agent or occur by a sequential chain of reactions due to a multi-component agent, even including side, abortive, or counter-reactions. It was clearly shown by this method that immune bactericidal reactions followed a one-hit theory. It was shown by this method that streptolysin O required four or five hits for hemolysis and Clostridium perfringens theta-toxin required two hits. These results were confirmed by both logarithmic dose-response and survival analyses. It was also shown that streptolysin O and theta-toxin can act complementarily on accumulation of the hits for hemolysis. PMID:177364
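
    For context, the sketch below fits the classical multi-hit (target-theory) dose-response model on which this kind of analysis rests: a cell is lysed once it accumulates at least n hits, with hits per cell Poisson-distributed and the mean proportional to dose. The synthetic data, the proportionality constant, and the curve-fitting approach are purely illustrative and do not reproduce the authors' analytical method or their measurements.

      import numpy as np
      from scipy.special import gammainc
      from scipy.optimize import curve_fit

      def multi_hit_fraction(dose, scale, n_hits):
          """P(Poisson(scale*dose) >= n_hits), written via the regularized lower
          incomplete gamma function so it is smooth in n_hits for fitting."""
          return gammainc(n_hits, scale * dose)

      # Synthetic hemolysis data generated from a 2-hit mechanism (theta-toxin-like).
      rng = np.random.default_rng(3)
      dose = np.linspace(0.2, 6.0, 15)
      observed = multi_hit_fraction(dose, 1.0, 2.0) + rng.normal(0.0, 0.02, dose.size)

      (scale_fit, n_fit), _ = curve_fit(multi_hit_fraction, dose, observed,
                                        p0=[0.5, 1.5], bounds=(1e-6, 10.0))
      print("estimated hit number ~", round(n_fit))   # expect ~2 for these data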

  20. A double-observer method to estimate detection rate during aerial waterfowl surveys

    USGS Publications Warehouse

    Koneff, M.D.; Royle, J. Andrew; Otto, M.C.; Wortham, J.S.; Bidwell, J.K.

    2008-01-01

    We evaluated double-observer methods for aerial surveys as a means to adjust counts of waterfowl for incomplete detection. We conducted our study in eastern Canada and the northeast United States utilizing 3 aerial-survey crews flying 3 different types of fixed-wing aircraft. We reconciled counts of front- and rear-seat observers immediately following an observation by the rear-seat observer (i.e., on-the-fly reconciliation). We evaluated 6 a priori models containing a combination of several factors thought to influence detection probability including observer, seat position, aircraft type, and group size. We analyzed data for American black ducks (Anas rubripes) and mallards (A. platyrhynchos), which are among the most abundant duck species in this region. The best-supported model for both black ducks and mallards included observer effects. Sample sizes of black ducks were sufficient to estimate observer-specific detection rates for each crew. Estimated detection rates for black ducks were 0.62 (SE = 0.10), 0.63 (SE = 0.06), and 0.74 (SE = 0.07) for pilot-observers, 0.61 (SE = 0.08), 0.62 (SE = 0.06), and 0.81 (SE = 0.07) for other front-seat observers, and 0.43 (SE = 0.05), 0.58 (SE = 0.06), and 0.73 (SE = 0.04) for rear-seat observers. For mallards, sample sizes were adequate to generate stable maximum-likelihood estimates of observer-specific detection rates for only one aerial crew. Estimated observer-specific detection rates for that crew were 0.84 (SE = 0.04) for the pilot-observer, 0.74 (SE = 0.05) for the other front-seat observer, and 0.47 (SE = 0.03) for the rear-seat observer. Estimated observer detection rates were confounded by the position of the seat occupied by an observer, because observers did not switch seats, and by land-cover because vegetation and landform varied among crew areas. Double-observer methods with on-the-fly reconciliation, although not without challenges, offer one viable option to account for detection bias in aerial waterfowl
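
    As a minimal sketch of the underlying logic, the helper below implements the simplest *independent* double-observer (Lincoln-Petersen style) estimator of observer detection probabilities and adjusted counts. The study itself reconciled observations on the fly and fitted models with observer, seat, aircraft, and group-size effects, which this does not reproduce; the counts are invented for illustration.

      def double_observer_estimates(seen_front_only, seen_rear_only, seen_both):
          """Return detection probabilities per observer and the adjusted total."""
          p_front = seen_both / (seen_rear_only + seen_both)   # fraction of rear's detections also seen by front
          p_rear = seen_both / (seen_front_only + seen_both)
          n_detected = seen_front_only + seen_rear_only + seen_both
          p_either = 1.0 - (1.0 - p_front) * (1.0 - p_rear)    # probability at least one observer detects
          return p_front, p_rear, n_detected / p_either

      p_f, p_r, n_hat = double_observer_estimates(seen_front_only=42, seen_rear_only=18, seen_both=95)
      print(f"front p={p_f:.2f}, rear p={p_r:.2f}, estimated groups={n_hat:.0f}")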

  1. Research on absorption test methods of Yb-doped double cladding fiber

    NASA Astrophysics Data System (ADS)

    Wang, Pupu; Li, Rundong; Rong, Liang; Ji, Wei; Gao, Yankun; Jiang, Cong; Gu, Shaoyi

    2016-01-01

    The absorption coefficient is a very useful characteristic of an active fiber. In a fiber laser system, the length of the active fiber is chosen according to the absorption coefficient, and the fiber length in turn directly influences the performance of the fiber laser. Therefore, obtaining an accurate absorption coefficient is very important. Because the fiber re-emits within its absorption band when pumped, it is difficult to measure the absorption coefficient accurately. The absorption coefficients of Yb-doped double cladding fiber at 975 nm measured by several methods were compared. In conclusion, for fibers of the same length pumped by white light, the measured absorption coefficient is highest when the cutback is performed only once. Meanwhile, when fibers of different lengths were measured by the same method, the measured absorption coefficient was inversely proportional to the fiber length.
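
    As a small worked example of the cutback calculation implied above, the helper below converts transmitted powers measured through a long and a short piece of the same fiber into dB/m. The power and length values are invented for illustration.

      import math

      def cutback_absorption_db_per_m(p_long_mw, p_short_mw, length_long_m, length_short_m):
          """Absorption coefficient in dB/m from a single cutback measurement."""
          return 10.0 * math.log10(p_short_mw / p_long_mw) / (length_long_m - length_short_m)

      # e.g. 100 mW leaves a 0.5 m piece but only 25 mW leaves a 3.5 m piece at 975 nm
      print(cutback_absorption_db_per_m(25.0, 100.0, 3.5, 0.5), "dB/m")   # ~2.0 dB/m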

  2. Simulation of electric double-layer capacitors: evaluation of constant potential method

    NASA Astrophysics Data System (ADS)

    Wang, Zhenxing; Laird, Brian; Yang, Yang; Olmsted, David; Asta, Mark

    2014-03-01

    Atomistic simulations can play an important role in understanding electric double-layer capacitors (EDLCs) at a molecular level. In such simulations, the electrode surface is typically modeled using fixed surface charges, which ignores the charge fluctuation induced by local fluctuations in the electrolyte solution. In this work we evaluate an explicit treatment of the charges, namely the constant potential method (CPM)[1], in which the electrode charges are dynamically updated to maintain a constant electrode potential. We employ a model system with a graphite electrode and a LiClO4/acetonitrile electrolyte, examined as a function of electrode potential difference. Using various molecular and macroscopic properties as metrics, we compare CPM simulations of this system to results using fixed surface charges. Specifically, results for predicted capacity, electric potential gradient, and solvent density profile are identical between the two methods; however, ion density profiles and solvation structure yield significantly different results.

  3. Computing Principal Eigenvectors of Large Web Graphs: Algorithms and Accelerations Related to PageRank and HITS

    ERIC Educational Resources Information Center

    Nagasinghe, Iranga

    2010-01-01

    This thesis investigates and develops a few acceleration techniques for the search engine algorithms used in PageRank and HITS computations. PageRank and HITS methods are two highly successful applications of modern Linear Algebra in computer science and engineering. They constitute the essential technologies accounting for the immense growth and…
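
    For reference, the sketch below is a plain power-iteration implementation of the HITS (hubs-and-authorities) scoring on a small made-up link matrix; the acceleration techniques studied in the thesis, and PageRank itself, are not shown.

      import numpy as np

      A = np.array([[0, 1, 1, 0],      # A[i, j] = 1 if page i links to page j
                    [0, 0, 1, 0],
                    [1, 0, 0, 1],
                    [0, 0, 1, 0]], dtype=float)

      def hits(adj, n_iter=100):
          n = adj.shape[0]
          hubs = np.ones(n)
          auth = np.ones(n)
          for _ in range(n_iter):
              auth = adj.T @ hubs          # authorities are pointed to by good hubs
              auth /= np.linalg.norm(auth)
              hubs = adj @ auth            # hubs point to good authorities
              hubs /= np.linalg.norm(hubs)
          return hubs, auth

      hubs, auth = hits(A)
      print("hub scores:", hubs.round(3))
      print("authority scores:", auth.round(3))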

  4. Method for determining effective nonradiative lifetime and leakage losses in double-heterostructure lasers

    SciTech Connect

    van Opdorp, C.; 't Hooft, G.W.

    1981-06-01

    Carrier losses in double-heterostructure lasers are twofold: (i) nonradiative recombination through killers in the bulk of the active region and at all its boundaries (interfaces and surfaces), and (ii) leakage out of the active region. A simple theory shows the following. In the high-injection regime (p ≈ n) all processes under (i) are directly proportional to n. Consequently their contributions can be lumped together in a single effective nonradiative carrier lifetime τ_nr; this τ_nr is constant (i.e., independent of n) owing to the constant degree of occupation of all killers in the mentioned regime. On the other hand, the leakage losses (ii) are superlinear in n. This provides a well-grounded basis for disentangling the contributions of (i) and (ii) in a given sample. Further, a simple method is presented for accurately determining τ_nr from data of the external quantum efficiency η_ext measured as a function of current I in the spontaneous high-injection regime below the laser threshold. Knowledge of the light-extraction factor (i.e., the ratio of external and internal quantum efficiencies) is essentially unnecessary with this method. However, optionally it can be determined easily from a slight extension of the method. For illustration the method of determining τ_nr, which is also applicable to double-hetero LEDs, has been applied to some thirty LPE and metal-organic VPE GaAs-(Ga,Al)As lasers of widely varying qualities. The values found vary between 0.8 and 55 ns. From the measured values of τ_nr it follows that the upper limit for the interface recombination velocity in the best samples is 270 cm/s. For most samples τ_nr cannot account for all electrical losses at laser threshold. The superlinear excess losses are ascribable to leakage.

  5. A new three-dimensional shape measurement method based on double-frequency fringes

    NASA Astrophysics Data System (ADS)

    Li, Biao; Yang, Jie; Wu, Haitao; Fu, Yanjun

    2015-10-01

    Fringe projection profilometry (FPP) is a rapidly developing technique that is widely used in industrial manufacturing, heritage conservation, medicine, etc., because of its high speed, high precision, non-contact operation, full-field acquisition, and easy information processing. Among the various FPP methods, the squared binary defocused projection method (SBM) has been expanding rapidly with several advantages: (1) high projection speed because of the 1-bit grayscale fringe; (2) elimination of the nonlinear gamma of the projector through the defocusing effect. Nevertheless, the method is not trouble-free. When the fringe stripe is wide, the fringe contrast drops and the degree of defocus is difficult to control, resulting in low measurement accuracy. In order to further improve high-speed and high-precision three-dimensional shape measurement, this paper presents a new three-dimensional shape measurement method based on double-frequency fringe projection. The new method projects two sets of 1-bit grayscale fringe patterns (a low-frequency fringe and a high-frequency fringe) onto the object surface under a slightly defocused projection mode. The method has the following advantages: (1) high projection speed because of the 1-bit grayscale fringe; (2) high measurement precision through selective removal of undesired harmonics. The low-frequency fringe is produced by the error-diffusion dithering technique and the high-frequency fringe is generated by the optimal pulse-width modulation (OPWM) technique. Each of the two fringe patterns has its own strengths and flaws. The low-frequency fringe has a low measurement accuracy, but its continuous phase can be easily retrieved; the high-frequency fringe has the opposite properties. The general idea of the proposed method is as follows: because both fringes measure the same object, the height is the same. The low-frequency fringe can be used to assist the high frequency fringe to retrieve
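
    The toy 1-D example below illustrates the core idea of assisting the high-frequency fringe with the low-frequency one: a continuous low-frequency phase is used to determine the fringe order and unwrap the wrapped high-frequency phase. The fringe frequency ratio and the synthetic "object" phase are arbitrary; real FPP additionally involves phase-shifting, defocused binary patterns, noise, and calibration, none of which is modelled here.

      import numpy as np

      x = np.linspace(0.0, 1.0, 1000)
      phase_true = 40.0 * np.sin(2 * np.pi * x) + 60.0 * x    # stand-in for the object phase

      f_low, f_high = 1, 16                                   # fringe frequency ratio 16:1
      phi_low = phase_true * f_low / f_high                   # low-frequency phase (stays below 2*pi here)
      phi_high_wrapped = np.angle(np.exp(1j * phase_true))    # wrapped high-frequency phase

      # Fringe-order estimate from the low-frequency phase, then unwrap the high-frequency phase.
      k = np.round((phi_low * (f_high / f_low) - phi_high_wrapped) / (2 * np.pi))
      phi_high_unwrapped = phi_high_wrapped + 2 * np.pi * k

      print(np.allclose(phi_high_unwrapped, phase_true))      # True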

  6. Hitting and trapping times on branched structures

    NASA Astrophysics Data System (ADS)

    Agliari, Elena; Sartori, Fabio; Cattivelli, Luca; Cassi, Davide

    2015-05-01

    In this work we consider a simple random walk embedded in a generic branched structure and we find a closed-form formula to calculate the hitting time H(i,f) between two arbitrary nodes i and f. We then use this formula to obtain the set of hitting times {H(i,f)} for combs and their expectation values, namely, the mean first-passage time, where the average is performed over the initial node while the final node f is given, and the global mean first-passage time, where the average is performed over both the initial and the final node. Finally, we discuss applications in the context of reaction-diffusion problems.
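
    As a generic numerical cross-check (not the authors' closed-form comb formula), hitting times of a simple random walk on a small branched graph can be obtained by solving the standard linear system h(f) = 0 and h(i) = 1 + sum_j P(i,j) h(j) for i != f. The 3-tooth comb below is only an example.

      import numpy as np

      # adjacency of a small 3-tooth comb: backbone 0-1-2, teeth 3, 4, 5
      edges = [(0, 1), (1, 2), (0, 3), (1, 4), (2, 5)]
      n = 6
      adj = [[] for _ in range(n)]
      for a, b in edges:
          adj[a].append(b)
          adj[b].append(a)

      def hitting_times(adj, target):
          n = len(adj)
          P = np.zeros((n, n))
          for v, nbrs in enumerate(adj):
              for w in nbrs:
                  P[v, w] = 1.0 / len(nbrs)           # simple random walk transition matrix
          keep = [v for v in range(n) if v != target]  # the target node is absorbing
          h = np.linalg.solve(np.eye(n - 1) - P[np.ix_(keep, keep)], np.ones(n - 1))
          out = {target: 0.0}
          out.update({v: h[k] for k, v in enumerate(keep)})
          return out

      print(hitting_times(adj, target=2))   # H(i, f=2) for every starting node i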

  7. Hitting and trapping times on branched structures.

    PubMed

    Agliari, Elena; Sartori, Fabio; Cattivelli, Luca; Cassi, Davide

    2015-05-01

    In this work we consider a simple random walk embedded in a generic branched structure and we find a closed-form formula to calculate the hitting time H(i,f) between two arbitrary nodes i and f. We then use this formula to obtain the set of hitting times {H(i,f)} for combs and their expectation values, namely, the mean first-passage time, where the average is performed over the initial node while the final node f is given, and the global mean first-passage time, where the average is performed over both the initial and the final node. Finally, we discuss applications in the context of reaction-diffusion problems. PMID:26066144

  8. Double isotopic method using dansyl chloride for the determination of GABA in rat C6 astrocytoma cell cultures

    SciTech Connect

    Kohl, R.L.; Quay, W.B.; Perez-Polo, J.R.

    1986-01-01

    Methods are described for the quantitative measurement of GABA in culture. The method can be adapted to any amino acid or dansyl-chloride-reactive species. The sensitivity and selectivity of the procedure result from the double isotopic design, in which (14C)-labeled internal standard was added to the samples before reaction with (3H)-labeled dansyl chloride. Values obtained by ion-exchange amino acid analysis of cultures agree closely with the values obtained by the double isotopic method. This method is sensitive enough to measure GABA intracellularly and in the conditioned medium.

  9. Visual factors in hitting and catching.

    PubMed

    Regan, D

    1997-12-01

    To hit or catch an approaching ball, it is necessary to move a bat or hand to the right place at the right time. The performance of top sports players is remarkable: positional errors of less than 5 cm and temporal errors of less than 2 or 3 ms are reliably maintained. There are three schools of thought about how this is achieved. One holds that predictive visual information about where the ball will be at some future instance (when) is used to achieve the hit or catch. The second holds that the bat or hand is moved to the correct position by exploiting some relation between visual information and the required movement. The third focuses on the use of prior knowledge to supplement inadequate visual information. For a rigid spherical ball travelling at constant speed along or close to the line of sight, the retinal images contain both binocular and monocular correlates of the ball's instantaneous direction of motion in depth. Also, the retinal images contain both binocular and monocular information about time of arrival. Humans can unconfound and use this visual information, but they are unable to estimate the absolute distance of the ball or its approach speed other than crudely. In cricket, this visual inadequacy allows a slow bowler to cause the batsman to misjudge where the ball will hit the ground. Such a bowler uses a three-pronged strategy: first, to deliver the ball in such a way as to prevent the batsman from obtaining the necessary visual information until it is too late to react; secondly, to force the batsman to rely entirely on inadequate retinal image information; thirdly, to allow the batsman to learn a particular relationship between the early part of the ball's flight and the point where the ball hits the ground, and then to change the relationship with such skill that the batsman does not detect the change. PMID:9486432

  10. Hitting a baseball: a biomechanical description.

    PubMed

    Welch, C M; Banks, S A; Cook, F F; Draovitch, P

    1995-11-01

    A tremendous amount of time and energy has been dedicated to the development of conditioning programs, mechanics drills, and rehabilitation protocols for the throwing athlete. In comparison, a significantly smaller amount has been spent on the needs of the hitting athlete. Before these needs can be addressed, an understanding of mechanics and the demands placed on the body during the swing must be developed. This study uses three-dimensional kinematic and kinetic data to define and quantify biomechanics during the baseball swing. The results show that a hitter starts the swing with a weight shift toward the rear foot and the generation of trunk coil. As the hitter strides forward, force applied by the front foot equal to 123% of body weight promotes segment acceleration around the axis of the trunk. The hip segment rotates to a maximum speed of 714 degrees/sec followed by a maximum shoulder segment velocity of 937 degrees/sec. The product of this kinetic link is a maximum linear bat velocity of 31 m/sec. By quantifying the hitting motion, a more educated approach can be made in developing rehabilitation, strength, and conditioning programs for the hitting athlete. PMID:8580946

  11. Simulated likelihood methods for complex double-platform line transect surveys.

    PubMed

    Schweder, T; Skaug, H J; Langaas, M; Dimakos, X K

    1999-09-01

    The conventional line transect approach of estimating effective search width from the perpendicular distance distribution is inappropriate in certain types of surveys, e.g., when an unknown fraction of the animals on the track line is detected, the animals can be observed only at discrete points in time, there are errors in positional measurements, and covariate heterogeneity exists in detectability. For such situations a hazard probability framework for independent observer surveys is developed. The likelihood of the data, including observed positions of both initial and subsequent observations of animals, is established under the assumption of no measurement errors. To account for measurement errors and possibly other complexities, this likelihood is modified by a function estimated from extensive simulations. This general method of simulated likelihood is explained and the methodology applied to data from a double-platform survey of minke whales in the northeastern Atlantic in 1995. PMID:11314993

  12. Equation-of-motion coupled cluster method for high spin double electron attachment calculations

    SciTech Connect

    Musiał, Monika Lupa, Łukasz; Kucharski, Stanisław A.

    2014-03-21

    The new formulation of the equation-of-motion (EOM) coupled cluster (CC) approach applicable to the calculations of the double electron attachment (DEA) states for the high spin components is proposed. The new EOM equations are derived for the high spin triplet and quintet states. In both cases the new equations are easier to solve but the substantial simplification is observed in the case of quintets. Out of 21 diagrammatic terms contributing to the standard DEA-EOM-CCSDT equations for the R_2 and R_3 amplitudes only four terms survive contributing to the R_3 part. The implemented method has been applied to the calculations of the excited states (singlets, triplets, and quintets) energies of the carbon and silicon atoms and potential energy curves for selected states of the Na_2 (triplets) and B_2 (quintets) molecules.

  13. 1-D seismic velocity model and hypocenter relocation using double difference method around West Papua region

    SciTech Connect

    Sabtaji, Agung E-mail: agung.sabtaji@bmkg.go.id; Nugraha, Andri Dian

    2015-04-24

    The West Papua region has fairly high seismicity due to its tectonic setting and many inland faults. In addition, the region has unique and complex tectonic conditions, and this situation leads to a high potential for seismic hazard in the region. Precise earthquake hypocenter locations are very important, as they provide high-quality earthquake parameter information and constraints on the subsurface structure in this region for society. We determined a 1-D P-wave velocity model using the earthquake catalog from BMKG for April 2009 up to March 2014 around the West Papua region. The obtained 1-D seismic velocity model was then used as input for improving the hypocenter locations using the double-difference method. The relocated hypocenters show fairly clearly the pattern of intraslab earthquakes beneath the New Guinea Trench (NGT). The relocated hypocenters related to the inland faults are also observed to be more tightly clustered around the faults.

  14. Study of acoustic field modulation in the regenerator by double loudspeakers method.

    PubMed

    Zhou, Lihua; Xie, Xiujuan; Li, Qing

    2011-11-01

    A model to modulate acoustic field in a regenerator of a thermoacoustic system by the double loudspeakers method is presented in this paper. The equations are derived for acoustic field modulation. They represent the relations among acoustic field (complex pressure p(0), complex velocity u(0), and acoustic impedance Z(0)), driving parameters of loudspeakers (voltage amplitude and its phase difference), and operating parameters involved in a matrix H (frequency, temperature of regenerator). The range of acoustic field is adjustable and limited by the maximal driving voltages of loudspeakers according to driving parameters. The range is simulated and analyzed in the amplitude-phase and complex coordinate planes for a given or variable H. The simulated results indicate that the range has its intrinsic characteristics. The expected acoustic field in a regenerator can be obtained feasibly by the modulation. PMID:22087899

  15. A double-observer method for reducing bias in faecal pellet surveys of forest ungulates

    USGS Publications Warehouse

    Jenkins, K.J.; Manly, B.F.J.

    2008-01-01

    1. Faecal surveys are used widely to study variations in abundance and distribution of forest-dwelling mammals when direct enumeration is not feasible. The utility of faecal indices of abundance is limited, however, by observational bias and variation in faecal disappearance rates that obscure their relationship to population size. We developed methods to reduce variability in faecal surveys and improve reliability of faecal indices. 2. We used double-observer transect sampling to estimate observational bias of faecal surveys of Roosevelt elk Cervus elaphus roosevelti and Columbian black-tailed deer Odocoileus hemionus columbianus in Olympic National Park, Washington, USA. We also modelled differences in counts of faecal groups obtained from paired cleared and uncleared transect segments as a means to adjust standing crop faecal counts for a standard accumulation interval and to reduce bias resulting from variable decay rates. 3. Estimated detection probabilities of faecal groups ranged from < 0.2-1.0 depending upon the observer, whether the faecal group was from elk or deer, faecal group size, distance of the faecal group from the sampling transect, ground vegetation cover, and the interaction between faecal group size and distance from the transect. 4. Models of plot-clearing effects indicated that standing crop counts of deer faecal groups required 34% reduction on flat terrain and 53% reduction on sloping terrain to represent faeces accumulated over a standard 100-day interval, whereas counts of elk faecal groups required 0% and 46% reductions on flat and sloping terrain, respectively. 5. Synthesis and applications. Double-observer transect sampling provides a cost-effective means of reducing observational bias and variation in faecal decay rates that obscure the interpretation of faecal indices of large mammal abundance. Given the variation we observed in observational bias of faecal surveys and persistence of faeces, we emphasize the need for future

  16. ISS Double-Gimbaled CMG Subsystem Simulation Using the Agile Development Method

    NASA Technical Reports Server (NTRS)

    Inampudi, Ravi

    2016-01-01

    This paper presents an evolutionary approach in simulating a cluster of 4 Control Moment Gyros (CMG) on the International Space Station (ISS) using a common sense approach (the agile development method) for concurrent mathematical modeling and simulation of the CMG subsystem. This simulation is part of Training systems for the 21st Century simulator which will provide training for crew members, instructors, and flight controllers. The basic idea of how the CMGs on the space station are used for its non-propulsive attitude control is briefly explained to set up the context for simulating a CMG subsystem. Next different reference frames and the detailed equations of motion (EOM) for multiple double-gimbal variable-speed control moment gyroscopes (DGVs) are presented. Fixing some of the terms in the EOM becomes the special case EOM for ISS's double-gimbaled fixed speed CMGs. CMG simulation development using the agile development method is presented in which customer's requirements and solutions evolve through iterative analysis, design, coding, unit testing and acceptance testing. At the end of the iteration a set of features implemented in that iteration are demonstrated to the flight controllers thus creating a short feedback loop and helping in creating adaptive development cycles. The unified modeling language (UML) tool is used in illustrating the user stories, class designs and sequence diagrams. This incremental development approach of mathematical modeling and simulating the CMG subsystem involved the development team and the customer early on, thus improving the quality of the working CMG system in each iteration and helping the team to accurately predict the cost, schedule and delivery of the software.

  17. Efficient hit-finding approaches for histone methyltransferases: the key parameters.

    PubMed

    Ahrens, Thomas; Bergner, Andreas; Sheppard, David; Hafenbradl, Doris

    2012-01-01

    For many novel epigenetics targets the chemical ligand space and structural information were limited until recently and are still largely unknown for some targets. Hit-finding campaigns are therefore dependent on large and chemically diverse libraries. In the specific case of the histone methyltransferase G9a, the authors have been able to apply an efficient process of intelligent selection of compounds for primary screening, rather than screening the full diverse deck of 900 000 compounds to identify hit compounds. A number of different virtual screening methods have been applied for the compound selection, and the results have been analyzed in the context of their individual success rates. For the primary screening of 2112 compounds, a FlashPlate assay format and full-length histone H3.1 substrate were employed. Validation of hit compounds was performed using the orthogonal fluorescence lifetime technology. Rated by purity and IC50 value, 18 compounds (0.9% of compound screening deck) were finally considered validated primary G9a hits. The hit-finding approach has led to novel chemotypes being identified, which can facilitate hit-to-lead projects. This study demonstrates the power of virtual screening technologies for novel, therapeutically relevant epigenetics protein targets. PMID:21990582

  18. Uniscale multi-view registration using double dog-leg method

    NASA Astrophysics Data System (ADS)

    Chen, Chao-I.; Sargent, Dusty; Tsai, Chang-Ming; Wang, Yuan-Fang; Koppel, Dan

    2009-02-01

    3D computer models of body anatomy can have many uses in medical research and clinical practices. This paper describes a robust method that uses videos of body anatomy to construct multiple, partial 3D structures and then fuse them to form a larger, more complete computer model using the structure-from-motion framework. We employ the Double Dog-Leg (DDL) method, a trust-region based nonlinear optimization method, to jointly optimize the camera motion parameters (rotation and translation) and determine a global scale that all partial 3D structures should agree upon. These optimized motion parameters are used for constructing local structures, and the global scale is essential for multi-view registration after all these partial structures are built. In order to provide a good initial guess of the camera movement parameters and outlier free 2D point correspondences for DDL, we also propose a two-stage scheme where multi-RANSAC with a normalized eight-point algorithm is first performed and then a few iterations of an over-determined five-point algorithm is used to polish the results. Our experimental results using colonoscopy video show that the proposed scheme always produces more accurate outputs than the standard RANSAC scheme. Furthermore, since we have obtained many reliable point correspondences, time-consuming and error-prone registration methods like the iterative closest points (ICP) based algorithms can be replaced by a simple rigid-body transformation solver when merging partial structures into a larger model.
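
    For orientation, the sketch below implements one trust-region dog-leg step for nonlinear least squares, the family of methods to which the paper's Double Dog-Leg (DDL) optimizer belongs. For brevity this is the classic (single) dog-leg step; the double dog-leg variant additionally shortens (biases) the Gauss-Newton point before interpolating, which is not shown, and the Rosenbrock-style residuals are just a toy example.

      import numpy as np

      def dogleg_step(J, r, radius):
          """One dog-leg step for residuals r with Jacobian J inside a trust region."""
          g = J.T @ r                                            # gradient of 0.5*||r||^2
          p_sd = -(g @ g) / (g @ (J.T @ (J @ g))) * g            # Cauchy (steepest-descent) point
          p_gn = -np.linalg.lstsq(J, r, rcond=None)[0]           # Gauss-Newton point
          if np.linalg.norm(p_gn) <= radius:
              return p_gn                                        # full Gauss-Newton step fits
          if np.linalg.norm(p_sd) >= radius:
              return radius * p_sd / np.linalg.norm(p_sd)        # truncated steepest-descent step
          # otherwise walk from the Cauchy point toward the GN point until the boundary
          d = p_gn - p_sd
          a, b, c = d @ d, 2.0 * (p_sd @ d), p_sd @ p_sd - radius ** 2
          t = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
          return p_sd + t * d

      # tiny example: one step for residuals r(x) = [x0 - 1, 10*(x1 - x0^2)] at x = (0, 0)
      x = np.array([0.0, 0.0])
      r = np.array([x[0] - 1.0, 10.0 * (x[1] - x[0] ** 2)])
      J = np.array([[1.0, 0.0], [-20.0 * x[0], 10.0]])
      print(dogleg_step(J, r, radius=0.5))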

  19. Enrollment Forecasting with Double Exponential Smoothing: Two Methods for Objective Weight Factor Selection. AIR Forum 1980 Paper.

    ERIC Educational Resources Information Center

    Gardner, Don E.

    The merits of double exponential smoothing are discussed relative to other types of pattern-based enrollment forecasting methods. The difficulties associated with selecting an appropriate weight factor are discussed, and their potential effects on prediction results are illustrated. Two methods for objectively selecting the "best" weight factor…
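
    For readers unfamiliar with the forecasting model whose weight factor is at issue, the following Python sketch applies Brown's double exponential smoothing with a single weight factor alpha; the enrollment figures and alpha values are made up, and the paper's objective selection procedures are not reproduced here.

      # Brown's double exponential smoothing: two smoothing passes with the same
      # weight factor alpha, combined into a level + trend one-step forecast.
      def brown_des(series, alpha):
          s1 = s2 = series[0]
          for x in series:
              s1 = alpha * x + (1 - alpha) * s1          # first smoothing
              s2 = alpha * s1 + (1 - alpha) * s2         # second smoothing
          level = 2 * s1 - s2
          trend = alpha / (1 - alpha) * (s1 - s2)
          return level + trend                           # one-step-ahead forecast

      enrollment = [4800, 4950, 5100, 5220, 5400, 5530]  # hypothetical headcounts
      for alpha in (0.2, 0.5, 0.8):
          print(f"alpha={alpha}: next-year forecast = {brown_des(enrollment, alpha):.0f}")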

  20. A hybrid ensemble method based on double disturbance for classifying microarray data.

    PubMed

    Chen, Tao; Xue, Huifeng; Hong, Zenglin; Cui, Man; Zhao, Hui

    2015-01-01

    Microarray data have small sample sizes and high dimensionality, and they contain a significant number of irrelevant and redundant genes. This paper proposes a hybrid ensemble method based on double disturbance to improve classification performance. Firstly, the original genes are ranked with the reliefF algorithm and a subset of them is selected; a new training set is then generated from the original training set according to the selected genes. Secondly, D bootstrap training subsets are produced from the generated training set by bootstrap sampling. Thirdly, an attribute reduction method based on neighborhood mutual information with a different radius is used to reduce the genes in each bootstrap training subset, producing new training subsets. Each new training subset is used to train a base classifier. Finally, a subset of the base classifiers is selected with teaching-learning-based optimization to build an ensemble by weighted voting. Experimental results on six benchmark cancer microarray datasets showed that the proposed method decreased ensemble size and obtained higher classification performance compared with Bagging, AdaBoost, and Random Forest. PMID:26405970
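
    A minimal Python sketch of this kind of double-disturbance ensemble is given below. It uses deliberate stand-ins for the paper's components: a univariate F-score instead of reliefF, random per-subset feature selection instead of neighborhood-mutual-information reduction, and accuracy-weighted voting instead of teaching-learning-based optimization; the data are synthetic.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.feature_selection import SelectKBest, f_classif
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.utils import resample

      rng = np.random.default_rng(0)
      X, y = make_classification(n_samples=80, n_features=500, n_informative=20, random_state=0)

      # Disturbance 1: rank genes and keep a top slice (stand-in for reliefF ranking).
      X_sel = SelectKBest(f_classif, k=100).fit_transform(X, y)

      ensemble = []
      D = 15                                              # number of bootstrap subsets
      for _ in range(D):
          # Disturbance 2: bootstrap the samples, then reduce features per subset.
          Xb, yb = resample(X_sel, y, random_state=int(rng.integers(10**6)))
          feats = rng.choice(X_sel.shape[1], size=30, replace=False)
          clf = DecisionTreeClassifier().fit(Xb[:, feats], yb)
          weight = clf.score(X_sel[:, feats], y)          # crude stand-in for TLBO selection/weighting
          ensemble.append((clf, feats, weight))

      def predict(X_new):
          votes = np.zeros((X_new.shape[0], 2))           # binary problem assumed
          for clf, feats, w in ensemble:
              for i, p in enumerate(clf.predict(X_new[:, feats])):
                  votes[i, int(p)] += w                   # weighted voting
          return votes.argmax(axis=1)

      print("training accuracy:", (predict(X_sel) == y).mean())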

  1. Low-order mathematical modelling of electric double layer supercapacitors using spectral methods

    NASA Astrophysics Data System (ADS)

    Drummond, Ross; Howey, David A.; Duncan, Stephen R.

    2015-03-01

    This work investigates two physics-based models that simulate the non-linear partial differential algebraic equations describing an electric double layer supercapacitor. In one model the linear dependence between electrolyte concentration and conductivity is accounted for, while in the other model it is not. A spectral element method is used to discretise the model equations and it is found that the error convergence rate with respect to the number of elements is faster compared to a finite difference method. The increased accuracy of the spectral element approach means that, for a similar level of solution accuracy, the model simulation computing time is approximately 50% of that of the finite difference method. This suggests that the spectral element model could be used for control and state estimation purposes. For a typical supercapacitor charging profile, the numerical solutions from both models closely match experimental voltage and current data. However, when the electrolyte is dilute or where there is a long charging time, a noticeable difference between the numerical solutions of the two models is observed. Electrical impedance spectroscopy simulations show that the capacitance of the two models rapidly decreases when the frequency of the perturbation current exceeds an upper threshold.
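
    The faster convergence claimed for the spectral discretisation can be seen on a much simpler problem than the supercapacitor model. The Python sketch below compares Chebyshev spectral collocation (using the standard differentiation matrix from Trefethen's "Spectral Methods in MATLAB") with second-order finite differences on a toy 1-D Poisson problem; it is illustrative only and is not the paper's partial differential algebraic model.

      import numpy as np

      def cheb(N):
          # Chebyshev differentiation matrix and collocation points on [-1, 1]
          x = np.cos(np.pi * np.arange(N + 1) / N)
          c = np.hstack([2., np.ones(N - 1), 2.]) * (-1) ** np.arange(N + 1)
          X = np.tile(x, (N + 1, 1)).T
          dX = X - X.T
          D = np.outer(c, 1. / c) / (dX + np.eye(N + 1))
          D -= np.diag(D.sum(axis=1))
          return D, x

      u_exact = lambda x: np.sin(np.pi * x)
      f = lambda x: -np.pi ** 2 * np.sin(np.pi * x)        # so that u'' = f, u(+-1) = 0

      N = 16
      D, x = cheb(N)
      D2 = (D @ D)[1:-1, 1:-1]                             # impose homogeneous Dirichlet BCs
      u_spec = np.linalg.solve(D2, f(x[1:-1]))
      err_spec = np.max(np.abs(u_spec - u_exact(x[1:-1])))

      xf = np.linspace(-1, 1, N + 1)                       # finite differences, same resolution
      h = xf[1] - xf[0]
      A = (np.diag(-2 * np.ones(N - 1)) + np.diag(np.ones(N - 2), 1) + np.diag(np.ones(N - 2), -1)) / h ** 2
      u_fd = np.linalg.solve(A, f(xf[1:-1]))
      err_fd = np.max(np.abs(u_fd - u_exact(xf[1:-1])))

      print(f"spectral error {err_spec:.2e}  vs  finite-difference error {err_fd:.2e}")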

  2. Evaluation of the constant potential method in simulating electric double-layer capacitors.

    PubMed

    Wang, Zhenxing; Yang, Yang; Olmsted, David L; Asta, Mark; Laird, Brian B

    2014-11-14

    A major challenge in the molecular simulation of electric double layer capacitors (EDLCs) is the choice of an appropriate model for the electrode. Typically, in such simulations the electrode surface is modeled using a uniform fixed charge on each of the electrode atoms, which ignores the electrode response to local charge fluctuations in the electrolyte solution. In this work, we evaluate and compare this Fixed Charge Method (FCM) with the more realistic Constant Potential Method (CPM), [S. K. Reed et al., J. Chem. Phys. 126, 084704 (2007)], in which the electrode charges fluctuate in order to maintain constant electric potential in each electrode. For this comparison, we utilize a simplified LiClO4-acetonitrile/graphite EDLC. At low potential difference (ΔΨ ⩽ 2 V), the two methods yield essentially identical results for ion and solvent density profiles; however, significant differences appear at higher ΔΨ. At ΔΨ ⩾ 4 V, the CPM ion density profiles show significant enhancement (over FCM) of "inner-sphere adsorbed" Li(+) ions very close to the electrode surface. The ability of the CPM electrode to respond to local charge fluctuations in the electrolyte is seen to significantly lower the energy (and barrier) for the approach of Li(+) ions to the electrode surface. PMID:25399127
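
    The contrast between the two electrode models can be illustrated with a deliberately small toy, sketched below in Python. Electrode charges are smeared as Gaussians and solved from a linear system so that the potential at every electrode site equals a prescribed value, while the fixed-charge alternative simply keeps uniform charges. This is only a conceptual sketch in arbitrary (Gaussian-like) units; the production CPM of Reed et al. uses Ewald summation, periodic slabs and energy-consistent smearing, none of which is reproduced here.

      import numpy as np
      from scipy.special import erf

      kappa = 1.0                          # Gaussian smearing parameter (assumed units)

      def phi_pair(r):
          # potential at distance r from a unit Gaussian charge of width 1/kappa
          return erf(kappa * r) / r

      def solve_cpm(elec_pos, ion_pos, ion_q, psi):
          n = len(elec_pos)
          A = np.zeros((n, n))
          for i in range(n):
              for j in range(n):
                  if i == j:
                      A[i, j] = 2.0 * kappa / np.sqrt(np.pi)   # potential of a Gaussian at its own centre
                  else:
                      A[i, j] = phi_pair(np.linalg.norm(elec_pos[i] - elec_pos[j]))
          # potential at each electrode site produced by the (point) electrolyte ions
          b = psi - np.array([sum(q / np.linalg.norm(p - ion)
                                  for ion, q in zip(ion_pos, ion_q))
                              for p in elec_pos])
          return np.linalg.solve(A, b)     # electrode charges that hold the potential at psi

      # tiny example: 4 electrode sites on a line, one cation nearby
      elec = np.array([[0., 0., z] for z in (0., 1., 2., 3.)])
      ions = np.array([[0., 0.5, 1.5]])
      q_cpm = solve_cpm(elec, ions, ion_q=[+1.0], psi=0.0)
      print("CPM charges respond to the ion:", q_cpm)
      print("FCM would keep all charges fixed, e.g.:", np.full(4, -0.25))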

  3. Evaluation of the constant potential method in simulating electric double-layer capacitors

    NASA Astrophysics Data System (ADS)

    Wang, Zhenxing; Yang, Yang; Olmsted, David L.; Asta, Mark; Laird, Brian B.

    2014-11-01

    A major challenge in the molecular simulation of electric double layer capacitors (EDLCs) is the choice of an appropriate model for the electrode. Typically, in such simulations the electrode surface is modeled using a uniform fixed charge on each of the electrode atoms, which ignores the electrode response to local charge fluctuations in the electrolyte solution. In this work, we evaluate and compare this Fixed Charge Method (FCM) with the more realistic Constant Potential Method (CPM), [S. K. Reed et al., J. Chem. Phys. 126, 084704 (2007)], in which the electrode charges fluctuate in order to maintain constant electric potential in each electrode. For this comparison, we utilize a simplified LiClO4-acetonitrile/graphite EDLC. At low potential difference (ΔΨ ⩽ 2 V), the two methods yield essentially identical results for ion and solvent density profiles; however, significant differences appear at higher ΔΨ. At ΔΨ ⩾ 4 V, the CPM ion density profiles show significant enhancement (over FCM) of "inner-sphere adsorbed" Li+ ions very close to the electrode surface. The ability of the CPM electrode to respond to local charge fluctuations in the electrolyte is seen to significantly lower the energy (and barrier) for the approach of Li+ ions to the electrode surface.

  4. Evaluation of the constant potential method in simulating electric double-layer capacitors

    SciTech Connect

    Wang, Zhenxing; Laird, Brian B.; Yang, Yang; Olmsted, David L.; Asta, Mark

    2014-11-14

    A major challenge in the molecular simulation of electric double layer capacitors (EDLCs) is the choice of an appropriate model for the electrode. Typically, in such simulations the electrode surface is modeled using a uniform fixed charge on each of the electrode atoms, which ignores the electrode response to local charge fluctuations in the electrolyte solution. In this work, we evaluate and compare this Fixed Charge Method (FCM) with the more realistic Constant Potential Method (CPM), [S. K. Reed et al., J. Chem. Phys. 126, 084704 (2007)], in which the electrode charges fluctuate in order to maintain constant electric potential in each electrode. For this comparison, we utilize a simplified LiClO4-acetonitrile/graphite EDLC. At low potential difference (ΔΨ ⩽ 2 V), the two methods yield essentially identical results for ion and solvent density profiles; however, significant differences appear at higher ΔΨ. At ΔΨ ⩾ 4 V, the CPM ion density profiles show significant enhancement (over FCM) of "inner-sphere adsorbed" Li+ ions very close to the electrode surface. The ability of the CPM electrode to respond to local charge fluctuations in the electrolyte is seen to significantly lower the energy (and barrier) for the approach of Li+ ions to the electrode surface.

  5. Calibration of Hydraulic Conductivities by the Kalman Filtered Double Constraint Method

    NASA Astrophysics Data System (ADS)

    Zijl, Wouter; El-Rawy, Mustafa; Batelaan, Okke

    2014-05-01

    To assess the consequences of a changing environment for future management decisions we need quantitative techniques validated by case studies. In this context dealing with the limited data availability and inherent uncertainty is a major challenge. In this contribution we present a combination of two techniques (the Double Constraint Method and the Kalman Filter) exemplified by case studies. The techniques assist in the calibration of hydraulic grid block conductivities as well as in finding the reliability of the result. To focus on the basic principles we exemplify our approach for flow in which storage by water compressibility and pore space deformation is negligible. Only storage by water table movements plays a role. Such conditions hold for most flow problems in the relatively shallow aquifer-aquitard systems of deltaic regions. In a forward problem the conductivity is specified in all grid blocks. In addition, on each point of the boundary and in each well only one type of boundary condition has to be specified: either head, or flux. Our approach is based on the principle that calibration of the initial conductivities is meaningful only if we can specify both head and flux at a number of boundary points or wells, including no-flux monitoring wells. In general a hydrogeological model is a "flux model," i.e., the model is as much as possible based on specified ("measured") fluxes through the boundaries and in the wells. An exception is Tóth's flow systems analysis where, instead of the usual recharge fluxes, heads are specified on the water table. This suggests building a second forward model, a "head model," that is as much as possible based on specified (measured) heads on the boundaries and in the wells. The initial conductivities are then updated by applying Darcy's law K = -q/(∂h/∂x) to the fluxes q obtained by the "flux model" and the head gradients ∂h/∂x obtained by the "head model." This so-called "double constraint method" (DCM) leads to a

  6. Simple hydraulic conductivity estimation by the Kalman filtered double constraint method.

    PubMed

    El-Rawy, M A; Batelaan, O; Zijl, W

    2015-01-01

    This paper presents the Kalman Filtered Double Constraint Method (DCM-KF) as a technique to estimate the hydraulic conductivities in the grid blocks of a groundwater flow model. The DCM is based on two forward runs with the same initial grid block conductivities, but with alternating flux-head conditions specified on parts of the boundary and the wells. These two runs are defined as: (1) the flux run, with specified fluxes (recharge and well abstractions), and (2) the head run, with specified heads (measured in piezometers). Conductivities are then estimated as the initial conductivities multiplied by the fluxes obtained from the flux run and divided by the fluxes obtained from the head run. The DCM is easy to implement in combination with existing models (e.g., MODFLOW). Sufficiently accurate conductivities are obtained after a few iterations. Because of errors in the specified head-flux couples, repeated estimation under varying hydrological conditions results in different conductivities. A time-independent estimate of the conductivities and their inaccuracy can be obtained by a simple linear KF with modest computational requirements. For the Kleine Nete catchment, Belgium, the DCM-KF yields sufficiently accurate calibrated conductivities. The method also distinguishes regions where the head-flux observations influence the calibration from areas where they are not able to influence the hydraulic conductivity. PMID:24854328
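
    The multiplicative update described above is easy to see in one dimension. The Python sketch below applies it to a toy column of four grid blocks with piezometer heads known at every block face; it is illustrative only (not the MODFLOW-coupled implementation of the paper, and without the Kalman filtering step), and in this fully observed 1-D case it converges in a single iteration.

      import numpy as np

      dx = 1.0
      K_true = np.array([2.0, 5.0, 1.0, 4.0])        # "unknown" block conductivities
      q_true = 1.0                                   # specified recharge flux
      # synthetic piezometer heads consistent with K_true and q_true (heads at block faces)
      h_obs = np.concatenate([[10.0], 10.0 - np.cumsum(q_true * dx / K_true)])

      K = np.ones_like(K_true)                       # initial guess
      for it in range(3):
          q_flux_run = np.full_like(K, q_true)                 # flux run: specified fluxes
          q_head_run = K * (h_obs[:-1] - h_obs[1:]) / dx       # head run: specified heads
          K = K * q_flux_run / q_head_run                      # DCM update
          print(f"iteration {it}: K = {np.round(K, 3)}")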

  7. Double ionization of helium by fast electrons with the Generalized Sturmian Functions method

    NASA Astrophysics Data System (ADS)

    Ambrosio, M. J.; Colavecchia, F. D.; Gasaneo, G.; Mitnik, D. M.; Ancarani, L. U.

    2015-03-01

    The double ionization of helium by high energy electron impact is studied. The corresponding four-body Schrödinger equation is transformed into a set of driven equations containing successive orders in the projectile-target interaction. The first order driven equation is solved with a generalized Sturmian functions approach. The transition amplitude, extracted from the asymptotic limit of the first order solution, is equivalent to the familiar first Born approximation. Fivefold differential cross sections are calculated for (e, 3e) processes within the high incident energy and small momentum transfer regimes. The results are compared with other numerical methods, and with the only absolute experimental data available. Our cross sections agree in shape and magnitude with those of the convergent close coupling method for the (10+10) eV and (4+4) eV emission energies. To date this had not been achieved by any two different numerical schemes when solving the three-body continuum problem for the fast projectile (e, 3e) process. Though agreement with the experimental data, in particular with respect to the magnitude, is not achieved, our findings partly clarify a long standing puzzle.

  8. Determination of optical properties in dental restorative biomaterials using the inverse-adding-doubling method

    NASA Astrophysics Data System (ADS)

    Fernández-Oliveras, Alicia; Rubiño, Manuel; Pérez, María. M.

    2013-11-01

    Light propagation in biological media is characterized by the absorption coefficient, the scattering coefficient, the scattering phase function, the refractive index, and the surface conditions (roughness). By means of the inverse-adding-doubling (IAD) method, transmittance and reflectance measurements lead to the determination of the absorption coefficient and the reduced scattering coefficient. The additional measurement of the phase function performed by goniometry allows the separation of the reduced scattering coefficient into the scattering coefficient and the scattering anisotropy factor. The majority of techniques, such as the one utilized in this work, involve the use of integrating spheres to measure total transmission and reflection. We have employed an integrating sphere setup to measure the total transmittance and reflectance of dental biomaterials used in restorative dentistry. Dental biomaterials are meant to replace dental tissues, such as enamel and dentine, in irreversibly diseased teeth. In previous works we performed goniometric measurements in order to evaluate the scattering anisotropy factor for these kinds of materials. In the present work we have used the IAD method to combine the measurements performed using the integrating sphere setup with the results of the previous goniometric measurements. The aim was to optically characterize the dental biomaterials analyzed, since whole studies to assess the appropriate material properties are required in medical applications. In this context, complete optical characterizations play an important role in achieving the fulfillment of optimal quality and the final success of dental biomaterials used in restorative dentistry.
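
    The role of the goniometric measurement can be summarised in one line: the IAD output is the reduced scattering coefficient mu_s' = mu_s*(1 - g), so a measured anisotropy factor g recovers mu_s. The Python snippet below shows the arithmetic with assumed values, not the paper's measured data.

      mu_s_reduced = 1.8      # mm^-1, from inverse adding-doubling (assumed value)
      g = 0.85                # scattering anisotropy factor from goniometry (assumed value)
      mu_s = mu_s_reduced / (1.0 - g)
      print(f"scattering coefficient mu_s = {mu_s:.1f} mm^-1")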

  9. A blind image detection method for information hiding with double random-phase encoding

    NASA Astrophysics Data System (ADS)

    Sheng, Yuan; Xin, Zhou; Jian-guo, Chen; Yong-liang, Xiao; Qiang, Liu

    2009-07-01

    In this paper, a blind image detection method based on a statistical hypothesis test for information hiding with double random-phase encoding (DRPE) is proposed. This method aims to establish a quantitative criterion which is used to judge whether there is secret information embedded in the detected image. The main process can be described as follows: at the beginning, we decompose the detected gray-scale image into 8 bit planes, considering that it has 256 gray levels, and suppose that a secret image has been hidden in the detected image after it was encrypted by DRPE; the lower bit planes of the detected image therefore exhibit strong randomness. Then, we divide the bit plane to be tested into many windows and establish a statistical variable to measure the correlation between pixels in every window. Finally, we judge whether a secret image exists in the detected image by applying the t-test to all the statistical variables. Numerical simulation shows that the accuracy is quite satisfactory when distinguishing images carrying secret information from a large number of images.
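
    A compact Python sketch of this detection idea follows. The per-window statistic used here, the adjacent-pixel agreement rate, is a stand-in for the paper's correlation measure, and the test image is synthetic random data; the point is only the bit-plane split, the windowed statistic and the t-test against the value expected for random bits.

      import numpy as np
      from scipy.stats import ttest_1samp

      def window_stats(bit_plane, w=16):
          stats = []
          H, W = bit_plane.shape
          for r in range(0, H - w + 1, w):
              for c in range(0, W - w + 1, w):
                  win = bit_plane[r:r + w, c:c + w]
                  agree = (win[:, :-1] == win[:, 1:]).mean()   # horizontal neighbour agreement
                  stats.append(agree)
          return np.array(stats)

      img = np.random.default_rng(1).integers(0, 256, size=(256, 256), dtype=np.uint8)
      lowest_plane = img & 1                                   # bit plane 0 of an 8-bit image
      res = ttest_1samp(window_stats(lowest_plane), popmean=0.5)
      # a mean near 0.5 (large p) means the plane looks random, consistent with DRPE-hidden data
      print(f"t = {res.statistic:.2f}, p = {res.pvalue:.3f}")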

  10. A NUMERICAL METHOD FOR STUDYING SUPER-EDDINGTON MASS TRANSFER IN DOUBLE WHITE DWARF BINARIES

    SciTech Connect

    Marcello, Dominic C.; Tohline, Joel E. E-mail: tohline@phys.lsu.edu

    2012-04-01

    We present a numerical method for the study of double white dwarf (DWD) binary systems at the onset of super-Eddington mass transfer. We incorporate the physics of ideal inviscid hydrodynamical flow, Newtonian self-gravity, and radiation transport on a three-dimensional uniformly rotating cylindrical Eulerian grid. Care has been taken to conserve the key physical quantities such as angular momentum and energy. Our new method conserves total energy to a higher degree of accuracy than other codes that are presently being used to model mass transfer in DWD systems. We present the results of verification tests and simulate the first 20+ orbits of a binary system of mass ratio q = 0.7 at the onset of dynamically unstable direct impact mass transfer. The mass transfer rate quickly exceeds the critical Eddington limit by many orders of magnitude, and thus we are unable to model a trans-Eddington phase. It appears that radiation pressure does not significantly affect the accretion flow in the highly super-Eddington regime. An optically thick common envelope forms around the binary within a few orbits. Although this envelope quickly exceeds the spatial domain of the computational grid, the fraction of the common envelope that exceeds zero gravitational binding energy is extremely small, suggesting that radiation-driven mass loss is insignificant in this regime. It remains to be seen whether simulations that capture the trans-Eddington phase of such flows will lead to the same conclusion or show that substantial material gets expelled.

  11. Automated microaneurysm detection method based on double ring filter in retinal fundus images

    NASA Astrophysics Data System (ADS)

    Mizutani, Atsushi; Muramatsu, Chisako; Hatanaka, Yuji; Suemori, Shinsuke; Hara, Takeshi; Fujita, Hiroshi

    2009-02-01

    The presence of microaneurysms in the eye is one of the early signs of diabetic retinopathy, which is one of the leading causes of vision loss. We have been investigating a computerized method for the detection of microaneurysms on retinal fundus images, which were obtained from the Retinopathy Online Challenge (ROC) database. The ROC provides 50 training cases, in which "gold standard" locations of microaneurysms are provided, and 50 test cases without the gold standard locations. In this study, the computerized scheme was developed by using the training cases. Although the results for the test cases are also included, this paper mainly discusses the results for the training cases because the "gold standard" for the test cases is not known. After image preprocessing, candidate regions for microaneurysms were detected using a double-ring filter. Any potential false positives located in the regions corresponding to blood vessels were removed by automatic extraction of blood vessels from the images. Twelve image features were determined, and the candidate lesions were classified into microaneurysms or false positives using the rule-based method and an artificial neural network. The true positive fraction of the proposed method was 0.45 at 27 false positives per image. Forty-two percent of microaneurysms in the 50 training cases were considered invisible by the consensus of two co-investigators. When the method was evaluated for visible microaneurysms, the sensitivity for detecting microaneurysms was 65% at 27 false positives per image. Our computerized detection scheme could be improved for helping ophthalmologists in the early diagnosis of diabetic retinopathy.
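
    The core of the candidate detection stage is easy to prototype. The Python sketch below implements a toy double-ring filter: a pixel is flagged when the mean intensity inside a small inner disc is clearly darker than the mean over the surrounding ring. The radii, threshold and synthetic image are assumptions, and the vessel removal, twelve features and rule-based/ANN classification of the paper are not reproduced.

      import numpy as np
      from scipy import ndimage

      def disc(radius):
          y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
          return (x**2 + y**2 <= radius**2).astype(float)

      def double_ring_response(img, r_in=3, r_out=9):
          inner = disc(r_in)
          outer = disc(r_out)
          ring = outer - np.pad(inner, r_out - r_in)     # outer disc minus inner disc
          mean_in = ndimage.convolve(img, inner / inner.sum(), mode="reflect")
          mean_ring = ndimage.convolve(img, ring / ring.sum(), mode="reflect")
          return mean_ring - mean_in                     # large where a dark dot sits in a brighter surround

      # synthetic fundus-like patch: bright background with one small dark dot
      img = np.full((64, 64), 200.0)
      img[30:33, 40:43] = 80.0
      resp = double_ring_response(img)
      candidates = np.argwhere(resp > 20)                # illustrative threshold
      print("candidate pixels near the dark dot:", candidates[:3])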

  12. 77 FR 32639 - HIT Standards Committee and HIT Policy Committee; Call for Nominations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-01

    ... Committee was established under the American Recovery and Reinvestment Act 2009 (ARRA)(Pub. L. 111-5... participating in payment reform initiatives, accountable care organizations, pharmacists, behavioral health.... The HIT Policy Committee was established under the American Recovery and Reinvestment Act 2009...

  13. 77 FR 23250 - HIT Standards Committee; Schedule for the Assessment of HIT Policy Committee Recommendations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-18

    ... timeline, which may also account for NIST testing, where appropriate, and include dates when the HIT... timeline provided by the subcommittee, and, if necessary, revise it; and (2) Assign subcommittee(s) to... in a timely manner. (C) Advise the National Coordinator, consistent with the accepted timeline in...

  14. 76 FR 25355 - HIT Standards Committee; Schedule for the Assessment of HIT Policy Committee Recommendations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-04

    ... gaps; and (3) A timeline, which may also account for NIST testing, where appropriate, and include dates...) Upon receipt of a subcommittee report, the HIT Standards Committee will: (1) Accept the timeline... timely manner. (C) Advise the National Coordinator, consistent with the accepted timeline in (B)(1)...

  15. HitKeeper, a generic software package for hit list management

    PubMed Central

    Hau, Jörg; Muller, Michael; Pagni, Marco

    2007-01-01

    Background The automated annotation of biological sequences (protein, DNA) relies on the computation of hits (predicted features) on the sequences using various algorithms. Public databases of biological sequences provide a wealth of biological "knowledge", for example manually validated annotations (features) that are located on the sequences, but mining the sequence annotations and especially the predicted and curated features requires dedicated tools. Due to the heterogeneity and diversity of the biological information, it is difficult to handle redundancy, frequent updates, taxonomic information and "private" data together with computational algorithms in a common workflow. Results We present HitKeeper, a software package that controls the fully automatic handling of multiple biological databases and of hit list calculations on a large scale. The software implements an asynchronous update system that introduces updates and computes hits as soon as new data become available. A query interface enables the user to search sequences by specifying constraints, such as retrieving sequences that contain specific motifs, or a defined arrangement of motifs ("metamotifs"), or filtering based on the taxonomic classification of a sequence. Conclusion The software provides a generic and modular framework to handle the redundancy and incremental updates of biological databases, and an original query language. It is published under the terms and conditions of version 2 of the GNU Public License and available at . PMID:17391514

  16. Symmetrized complex amplitudes for He double photoionization from the time-dependent close coupling and exterior complex scaling methods

    SciTech Connect

    Horner, D.A.; Colgan, J.; Martin, F.; McCurdy, C.W.; Pindzola, M.S.; Rescigno, T.N.

    2004-06-01

    Symmetrized complex amplitudes for the double photoionization of helium are computed by the time-dependent close-coupling and exterior complex scaling methods, and it is demonstrated that both methods are capable of the direct calculation of these amplitudes. The results are found to be in excellent agreement with each other and in very good agreement with results of other ab initio methods and experiment.

  17. HANFORD DOUBLE SHELL TANK (DST) THERMAL & SEISMIC PROJECT BUCKLING EVALUATION METHODS & RESULTS FOR THE PRIMARY TANKS

    SciTech Connect

    MACKEY, T.C.

    2006-03-17

    This report documents a detailed buckling evaluation of the primary tanks in the Hanford double shell waste tanks. The analysis is part of a comprehensive structural review for the Double-Shell Tank Integrity Project. This work also provides information on tank integrity that specifically responds to concerns raised by the Office of Environment, Safety, and Health (ES&H) Oversight (EH-22) during a review (in April and May 2001) of work being performed on the double-shell tank farms, and the operation of the aging waste facility (AWF) primary tank ventilation system.

  18. A Facile Method to Fabricate Double Gyroid as A Polymer Template for Nanohybrids

    NASA Astrophysics Data System (ADS)

    Wang, Hsiao-Fang; Ho, Rong-Ming

    2015-03-01

    Here, we suggest a facile method to obtain the double gyroid (DG) phase from the self-assembly of chiral block copolymers (BCPs*), polystyrene-b-poly(L-lactide) (PS-PLLA). A wide region for the formation of DG can be found in the phase diagram of the BCPs*, suggesting that the helical phase (H*) from the self-assembly of BCPs* can serve as a stepping stone for the formation of the DG, owing to an easy path for the order-order transition from a two-dimensional to a three-dimensional (network) structure. Moreover, the order-order transition from metastable H* to stable DG can be expedited by blending the PS-PLLA with a compatible entity. PS-PLLA blends are prepared using styrene oligomer (S) to fine-tune the morphologies of the blends, with the molecular weight ratio (r) of the S to the compatible PS block kept below 0.1. Owing to the use of the low-molecular-weight oligomer, the increased chain mobility of the BCP in the blends significantly reduces the transformation time for the order-order transition from H* to DG. Consequently, nanoporous gyroid SiO2 can be fabricated using hydrolyzed PS-PLLA blends as a template for a sol-gel reaction followed by removal of the PS matrix.

  19. Preparation and in vitro evaluation of ethyl cellulose microspheres containing stavudine by the double emulsion method.

    PubMed

    Sahoo, S K; Mallick, A A; Barik, B B; Senapati, P C

    2007-02-01

    The aim of this study was to formulate and evaluate microspheres of stavudine prepared by the water-in-oil-in-oil (w/o/o) double emulsion solvent diffusion method using ethyl cellulose, alone and in combination with polyvinyl pyrrolidone. A mixed solvent system consisting of acetonitrile and dichloromethane in a 1:1 ratio and light liquid paraffin were chosen as the primary and secondary oil phases, respectively. Span 80 was used as a surfactant for stabilizing the secondary oil phase. The influence of formulation factors such as stirring speed and surfactant concentration on particle size, and of the polymer:drug ratio and polymer combination on the drug release characteristics of the microspheres, was investigated. The prepared microspheres were characterized by micromeritic properties, drug loading, Fourier transform infrared spectroscopy, X-ray powder diffractometry and scanning electron microscopy. The prepared microspheres were white, free flowing, spherical in shape and stable, with 41-65% drug entrapment efficiency. The best-fit release kinetics was achieved with the Higuchi model, followed by first order and zero order. The release of stavudine was influenced by the drug to polymer ratio, particle size and polymer combination. PMID:17341031

  20. Relative clock estimation method between two LEO satellites with a double-difference solution constraint

    NASA Astrophysics Data System (ADS)

    Liu, Junhong; Gu, Defeng; Ju, Bing; Lai, Yuwang; Yi, Dongyun

    2015-04-01

    A method of estimating the relative clocks between two spaceborne global positioning system (GPS) receivers based on single-difference (SD) observations is investigated in this paper. In particular, the advantages of introducing a double-difference (DD) solution constraint, including the orbits and ambiguities, are discussed with simulated data and real data from the Gravity Recovery And Climate Experiment (GRACE) satellites. The theoretical accuracy analysis shows that the accuracy of the relative clocks is improved and the edge effects are eliminated with a DD solution constraint. The simulations indicate a potential accuracy improvement of at least 30% in the relative clocks with the constraint. Furthermore, one month of real data is processed and the overlapping data arcs are used to validate the accuracy of the relative clock solutions. The average overlapping root mean square (RMS) of the relative clock solutions is approximately 99 ps and 31 ps without and with the DD solution constraint, respectively. Moreover, the jumps at the day boundaries are weakened evidently by adding the DD solution constraint. This paper demonstrates that the accuracy and stability of the relative clocks estimated between two low earth orbit (LEO) satellites from SD observations are improved considerably with the DD solution constraint.
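
    The observable bookkeeping behind this approach can be sketched with synthetic numbers (the pseudorange model below is heavily simplified and the values are not GRACE data): single differences between the two receivers retain the relative clock term, while double differences across satellites cancel the receiver clocks entirely, which is why a DD solution can serve as a constraint on the SD clock estimation.

      import numpy as np

      c = 299_792_458.0                                  # m/s
      rng = np.random.default_rng(2)
      range_A = rng.uniform(2.0e7, 2.5e7, size=4)        # geometric ranges, receiver A, 4 satellites
      range_B = range_A + rng.uniform(-2e5, 2e5, size=4) # receiver B
      dt_A, dt_B = 3.2e-6, 3.2e-6 + 95e-12               # receiver clock offsets (95 ps relative clock)

      obs_A = range_A + c * dt_A                         # simplified pseudorange model
      obs_B = range_B + c * dt_B

      sd = obs_B - obs_A                                 # single difference: relative clock survives
      dd = sd[1:] - sd[0]                                # double difference: clocks cancel

      rel_clock_est = np.mean(sd - (range_B - range_A)) / c
      print(f"relative clock from SD: {rel_clock_est * 1e12:.1f} ps")
      print("DD residuals (clock-free):",
            np.round(dd - ((range_B - range_A)[1:] - (range_B - range_A)[0]), 6))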

  1. Radiation dose determines the method for quantification of DNA double strand breaks.

    PubMed

    Bulat, Tanja; Keta, Otilija; Korićanac, Lela; Žakula, Jelena; Petrović, Ivan; Ristić-Fira, Aleksandra; Todorović, Danijela

    2016-03-01

    Ionizing radiation induces DNA double strand breaks (DSBs) that trigger phosphorylation of the histone protein H2AX (γH2AX). Immunofluorescent staining visualizes the formation of γH2AX foci, allowing their quantification. This method, as opposed to the Western blot assay and flow cytometry, provides a more accurate analysis by showing the exact position and intensity of the fluorescent signal in each single cell. In practice, however, there are problems in the quantification of γH2AX. This paper addresses two issues: which technique should be applied for a given radiation dose, and how to analyze fluorescence microscopy images obtained with different microscopes. HTB140 melanoma cells were exposed to γ-rays in the dose range from 1 to 16 Gy. Radiation effects at the DNA level were analyzed at different time intervals after irradiation by Western blot analysis and immunofluorescence microscopy. Immunochemically stained cells were visualized with two types of microscopes: an AxioVision microscope (Zeiss, Germany) equipped with ApoTome software, and an AxioImager A1 microscope (Zeiss, Germany). The results show that the level of γH2AX is time and dose dependent. Immunofluorescence microscopy provided better detection of DSBs for lower irradiation doses, while Western blot analysis was more reliable for higher irradiation doses. The AxioVision microscope with ApoTome software was more suitable for the detection of γH2AX foci. PMID:26959322

  2. Double stapling method of anastomosis after esophagectomy with endoscopic stapler to prevent postoperative stricture.

    PubMed

    Murayama, I; Sato, H; Suzuki, T; Ootsuka, Y; Song, K; Yamagata, M; Fukase, T; Iwai, S

    1998-10-01

    To prevent stricture of the anastomotic site after surgery for esophageal cancer, a new surgical technique, the "double-stapling method," was designed and applied clinically in 29 patients. In this technique, after an end-to-side anastomosis between the esophagus and the stomach tube is performed with a conventional circular anastomotic device, an endoscopic stapler is inserted from the lesser-curvature side of the stomach tube toward the esophagus to create an additional anastomosis between the anterior wall of the esophagus and the posterior wall of the stomach tube. As a result, the conventional anastomotic site, which was a two-dimensional plane, is transformed into a three-dimensional configuration. In postoperative measurements of the anastomotic site using measurement forceps, the inner diameter of the site was 8.6+/-3.1 mm in the circular group and 17.2+/-4.5 mm in the DS group, a significant difference (p < 0.0001). Minor leakage was observed in three patients as a postoperative complication, but no postoperative hemorrhage occurred. PMID:9820722

  3. Lagrange-type modeling of continuous dielectric permittivity variation in double-higher-order volume integral equation method

    NASA Astrophysics Data System (ADS)

    Chobanyan, E.; Ilić, M. M.; Notaroš, B. M.

    2015-05-01

    A novel double-higher-order entire-domain volume integral equation (VIE) technique for efficient analysis of electromagnetic structures with continuously inhomogeneous dielectric materials is presented. The technique takes advantage of large curved hexahedral discretization elements—enabled by double-higher-order modeling (higher-order modeling of both the geometry and the current)—in applications involving highly inhomogeneous dielectric bodies. Lagrange-type modeling of an arbitrary continuous variation of the equivalent complex permittivity of the dielectric throughout each VIE geometrical element is implemented, in place of piecewise homogeneous approximate models of the inhomogeneous structures. The technique combines the features of the previous double-higher-order piecewise homogeneous VIE method and continuously inhomogeneous finite element method (FEM). This appears to be the first implementation and demonstration of a VIE method with double-higher-order discretization elements and conformal modeling of inhomogeneous dielectric materials embedded within elements that are also higher (arbitrary) order (with arbitrary material-representation orders within each curved and large VIE element). The new technique is validated and evaluated by comparisons with a continuously inhomogeneous double-higher-order FEM technique, a piecewise homogeneous version of the double-higher-order VIE technique, and a commercial piecewise homogeneous FEM code. The examples include two real-world applications involving continuously inhomogeneous permittivity profiles: scattering from an egg-shaped melting hailstone and near-field analysis of a Luneburg lens, illuminated by a corrugated horn antenna. The results show that the new technique is more efficient and ensures considerable reductions in the number of unknowns and computational time when compared to the three alternative approaches.
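
    The material-modelling idea can be shown in one dimension: instead of assigning a single constant permittivity to an element, the continuous profile is represented by Lagrange interpolation through a few nodes inside the element. The Python sketch below is only a 1-D illustration with an assumed quadratic profile, not the paper's curved 3-D hexahedral elements.

      import numpy as np

      def lagrange_basis(nodes, k, u):
          # k-th Lagrange basis polynomial over the given nodes, evaluated at u
          L = np.ones_like(u)
          for j, xj in enumerate(nodes):
              if j != k:
                  L *= (u - xj) / (nodes[k] - xj)
          return L

      eps_profile = lambda u: 2.0 + 1.5 * u**2 + 0.5 * u       # assumed continuous permittivity profile

      order = 3
      nodes = np.linspace(-1.0, 1.0, order + 1)                # element-local coordinates
      u = np.linspace(-1.0, 1.0, 200)
      eps_interp = sum(eps_profile(xk) * lagrange_basis(nodes, k, u) for k, xk in enumerate(nodes))
      eps_piecewise = np.full_like(u, eps_profile(0.0))        # piecewise-homogeneous stand-in

      print("max error, Lagrange order 3 :", np.max(np.abs(eps_interp - eps_profile(u))))
      print("max error, piecewise const. :", np.max(np.abs(eps_piecewise - eps_profile(u))))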

  4. Methods of Smoothing Double-Entry Expectancy Tables Applied to the Prediction of Success in College. Research Report No. 91.

    ERIC Educational Resources Information Center

    Kolen, Michael J.; And Others

    Six methods for smoothing double-entry expectancy tables (tables that relate two predictor variables to probability of attaining a selected level of success on a criterion) were compared using data for entering students at 85 colleges and universities. ACT composite scores and self-reported high school grade averages were used to construct…

  5. Preparation of single or double-network chitosan/poly(vinyl alcohol) gel films through selective cross-linking method

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A selective cross-linking method was developed to create single or double network chitosan/poly(vinyl alcohol) gel films. The cross-linking is based on the hydrogen bonding between PVA and borate and the strong electrostatic interaction between chitosan and tripolyphosphate. The resultant gel films ...

  6. Imaging of 3-D seismic velocity structure of Southern Sumatra region using double difference tomographic method

    NASA Astrophysics Data System (ADS)

    Lestari, Titik; Nugraha, Andri Dian

    2015-04-01

    Southern Sumatra has a high level of seismicity due to the influence of the subduction system, the Sumatra fault, the Mentawai fault and stretching zone activity. The seismic activity of the Southern Sumatra region is recorded by the Meteorological, Climatological and Geophysical Agency (MCGA) seismograph network. In this study, we used an earthquake data catalog compiled by MCGA, comprising 3013 events recorded at 10 seismic stations around the Southern Sumatra region during April 2009 - April 2014, to invert for the 3-D seismic velocity structure (Vp, Vs, and Vp/Vs ratio). We applied the double-difference seismic tomography method (tomoDD) to determine Vp, Vs and the Vp/Vs ratio with hypocenter adjustment. For the inversion procedure, we started from the initial 1-D seismic velocity model AK135 and a constant Vp/Vs of 1.73. The synthetic travel times from source to receiver were calculated using the ray pseudo-bending technique, while the main tomographic inversion was performed using the LSQR method. The model resolution was evaluated using a checkerboard test and the Derivative Weight Sum (DWS). Our preliminary results show low Vp and Vs anomalies along Bukit Barisan, which may be associated with the weak zone of the Sumatran fault and the migration of partially melted material. Low velocity anomalies at 30-50 km depth in the fore-arc region may indicate hydrous material circulation due to slab dehydration. We also detected low seismicity in the fore-arc region, which may indicate a seismic gap; it coincides with the contact zone between high and low velocity anomalies, and two large earthquakes (Jambi and Mentawai) also occurred at this velocity contrast.
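
    The data actually inverted by a double-difference scheme such as tomoDD are residuals of differential travel times between nearby event pairs observed at a common station. The toy Python snippet below shows this bookkeeping with made-up numbers, not the Southern Sumatra catalog:

      # double-difference residual: (t_i_obs - t_j_obs) - (t_i_calc - t_j_calc)
      t_obs = {("ev1", "STA1"): 12.84, ("ev2", "STA1"): 13.02}    # picked arrival times (s)
      t_calc = {("ev1", "STA1"): 12.80, ("ev2", "STA1"): 13.05}   # predicted from the current model

      dd_residual = ((t_obs[("ev1", "STA1")] - t_obs[("ev2", "STA1")])
                     - (t_calc[("ev1", "STA1")] - t_calc[("ev2", "STA1")]))
      print(f"double-difference residual at STA1: {dd_residual:+.3f} s")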

  7. Imaging of 3-D seismic velocity structure of Southern Sumatra region using double difference tomographic method

    SciTech Connect

    Lestari, Titik; Nugraha, Andri Dian

    2015-04-24

    Southern Sumatra has a high level of seismicity due to the influence of the subduction system, the Sumatra fault, the Mentawai fault and stretching zone activity. The seismic activity of the Southern Sumatra region is recorded by the Meteorological, Climatological and Geophysical Agency (MCGA) seismograph network. In this study, we used an earthquake data catalog compiled by MCGA, comprising 3013 events recorded at 10 seismic stations around the Southern Sumatra region during April 2009 - April 2014, to invert for the 3-D seismic velocity structure (Vp, Vs, and Vp/Vs ratio). We applied the double-difference seismic tomography method (tomoDD) to determine Vp, Vs and the Vp/Vs ratio with hypocenter adjustment. For the inversion procedure, we started from the initial 1-D seismic velocity model AK135 and a constant Vp/Vs of 1.73. The synthetic travel times from source to receiver were calculated using the ray pseudo-bending technique, while the main tomographic inversion was performed using the LSQR method. The model resolution was evaluated using a checkerboard test and the Derivative Weight Sum (DWS). Our preliminary results show low Vp and Vs anomalies along Bukit Barisan, which may be associated with the weak zone of the Sumatran fault and the migration of partially melted material. Low velocity anomalies at 30-50 km depth in the fore-arc region may indicate hydrous material circulation due to slab dehydration. We also detected low seismicity in the fore-arc region, which may indicate a seismic gap; it coincides with the contact zone between high and low velocity anomalies, and two large earthquakes (Jambi and Mentawai) also occurred at this velocity contrast.

  8. Detecting multi-hit events in a CdZnTe coplanar grid detector using pulse shape analysis: A method for improving background rejection in the COBRA 0νββ experiment

    NASA Astrophysics Data System (ADS)

    McGrath, J.; Fulton, B. R.; Joshi, P.; Davies, P.; Muenstermann, D.; Schulz, O.; Zuber, K.; Freer, M.

    2010-03-01

    A number of experiments are underway to search for a rare form of radioactivity, neutrinoless double beta decay, as a measurement of its half-life would enable the neutrino mass to be determined. The COBRA collaboration [1,2] (Zuber, 2001; Dawson, 2009) employs CdZnTe detectors in such a search. This paper describes techniques using pulse shape analysis for identifying two-centre events in a coplanar grid CdZnTe detector. This enables Compton scatter events to be identified, thereby suppressing the background present within the COBRA detectors.

  9. Sensitivity enhancement of 29Si double-quantum dipolar recoupling spectroscopy by Carr-Purcell-Meiboom-Gill acquisition method

    NASA Astrophysics Data System (ADS)

    Goswami, M.; Madhu, P. K.; Dittmer, J.; Nielsen, N. C.; Ganapathy, S.

    2009-08-01

    An enhancement in the detection sensitivity of the dipolar-recoupled 29Si double-quantum magic-angle spinning experiment is demonstrated by applying a Carr-Purcell-Meiboom-Gill (CPMG) train of π pulses during the acquisition period. Symmetry-adapted pulse schemes, such as POST-C7 and SR26411, are used for the double-quantum excitation. Application of the POST-C7-CPMG method to framework characterisation is demonstrated for the disordered and catalytically important ZSM-5 molecular sieve. Based on the observed double-quantum dipole-dipole correlations and the known T-site Si connectivities, all of the signals are assigned for the orthorhombic phase of the as-synthesised (CN form) material.

  10. Vanadium Nitrogenase: A Two-Hit Wonder?

    PubMed Central

    Hu, Yilin; Lee, Chi Chung; Ribbe, Markus W.

    2013-01-01

    Nitrogenase catalyzes the biological conversion of atmospheric dinitrogen to bioavailable ammonia. The molybdenum (Mo)- and vanadium (V)-dependent nitrogenases are two homologous members of this metalloenzyme family. However, despite their similarities in structure and function, the characterization of V-nitrogenase has taken a much longer and more winding path than that of its Mo-counterpart. From the initial discovery of this nitrogen-fixing system, to the recent finding of its CO-reducing capacity, V-nitrogenase has proven to be a two-hit wonder in the over-a-century-long research of nitrogen fixation. This perspective provides a brief account of the catalytic function and structural basis of V-nitrogenase, as well as a short discussion of the theoretical and practical potentials of this unique metalloenzyme. PMID:22101422

  11. Vanadium nitrogenase: a two-hit wonder?

    PubMed

    Hu, Yilin; Lee, Chi Chung; Ribbe, Markus W

    2012-01-28

    Nitrogenase catalyzes the biological conversion of atmospheric dinitrogen to bioavailable ammonia. The molybdenum (Mo)- and vanadium (V)-dependent nitrogenases are two homologous members of this metalloenzyme family. However, despite their similarities in structure and function, the characterization of V-nitrogenase has taken a much longer and more winding path than that of its Mo-counterpart. From the initial discovery of this nitrogen-fixing system, to the recent finding of its CO-reducing capacity, V-nitrogenase has proven to be a two-hit wonder in the over-a-century-long research of nitrogen fixation. This perspective provides a brief account of the catalytic function and structural basis of V-nitrogenase, as well as a short discussion of the theoretical and practical potentials of this unique metalloenzyme. PMID:22101422

  12. A Two-Hit Model of Autism: Adolescence as the Second Hit

    PubMed Central

    Picci, Giorgia; Scherf, K. Suzanne

    2015-01-01

    Adolescence brings dramatic changes in behavior and neural organization. Unfortunately, for some 30% of individuals with autism, there is marked decline in adaptive functioning during adolescence. We propose a two-hit model of autism. First, early perturbations in neural development function as a “first hit” that sets up a neural system that is “built to fail” in the face of a second hit. Second, the confluence of pubertal hormones, neural reorganization, and increasing social demands during adolescence provides the “second hit” that interferes with the ability to transition into adult social roles and levels of adaptive functioning. In support of this model, we review evidence about adolescent-specific neural and behavioral development in autism. We conclude with predictions and recommendations for empirical investigation about several domains in which developmental trajectories for individuals with autism may be uniquely deterred in adolescence. PMID:26609500

  13. External validation of the HIT Expert Probability (HEP) score.

    PubMed

    Joseph, Lee; Gomes, Marcelo P V; Al Solaiman, Firas; St John, Julie; Ozaki, Asuka; Raju, Manjunath; Dhariwal, Manoj; Kim, Esther S H

    2015-03-01

    The diagnosis of heparin-induced thrombocytopenia (HIT) can be challenging. The HIT Expert Probability (HEP) Score has recently been proposed to aid in the diagnosis of HIT. We sought to externally and prospectively validate the HEP score. We prospectively assessed pre-test probability of HIT for 51 consecutive patients referred to our Consultative Service for evaluation of possible HIT between August 1, 2012 and February 1, 2013. Two Vascular Medicine fellows independently applied the 4T and HEP scores for each patient. Two independent HIT expert adjudicators rendered a diagnosis of HIT likely or unlikely. The median (interquartile range) of 4T and HEP scores were 4.5 (3.0, 6.0) and 5 (3.0, 8.5), respectively. There were no significant differences between area under receiver-operating characteristic curves of 4T and HEP scores against the gold standard, confirmed HIT [defined as positive serotonin release assay and positive anti-PF4/heparin ELISA] (0.74 vs 0.73, p = 0.97). HEP score ≥ 2 was 100 % sensitive and 16 % specific for determining the presence of confirmed HIT while a 4T score > 3 was 93 % sensitive and 35 % specific. In conclusion, the HEP and 4T scores are excellent screening pre-test probability models for HIT, however, in this prospective validation study, test characteristics for the diagnosis of HIT based on confirmatory laboratory testing and expert opinion are similar. Given the complexity of the HEP scoring model compared to that of the 4T score, further validation of the HEP score is warranted prior to widespread clinical acceptance. PMID:25588983

  14. Linear augmented cylindrical wave method for calculating the electronic structure of double-wall carbon nanotubes

    NASA Astrophysics Data System (ADS)

    D'Yachkov, P. N.; Makaev, D. V.

    2006-10-01

    The electronic structure of double-wall carbon nanotubes (DWNTs), consisting of two concentric graphene cylinders with extremely strong covalent bonding of atoms within the individual graphitic sheets but very weak van der Waals type interaction between them, is calculated in terms of the linear augmented cylindrical wave (LACW) method. A one-electron potential is used, and approximations are made only in the sense of muffin-tin (MT) potentials and local density functional theory. The atoms of the DWNT are considered to be enclosed between cylinder-shaped potential barriers. In this approach, the electronic spectrum of the DWNTs is governed by the free movement of electrons in the interatomic space of the two cylindrical layers, by electron scattering on the MT spheres, and by electron tunneling between the layers. We have calculated the complete band structures and densities of states in the Fermi level region of the purely semiconducting zigzag DWNTs (n,0)@(n',0) (10 ⩽ n ⩽ 23 and 19 ⩽ n' ⩽ 32) with interlayer distance 3.2 Å ⩽ Δd ⩽ 3.7 Å. Analogous data are obtained for metallic armchair (n,n)@(n',n') nanotubes (n = 5 or 4 and n' = 10 or 9). According to the LACW calculations, the interwall coupling results in a distinctly stronger perturbation of the band structure of the inner tube as compared to that of the outer one. In the case of semiconducting DWNTs, the minimum gap E11 between the singularities of the conduction and valence bands of the shell tubules decreases from 0.15 to 0.22 eV or increases from 0.7 to 0.15 eV, if dividing n' by three leaves a remainder of 1 or 2, respectively. In both cases, the ΔE11 shifts of the gap do not decay, but slightly oscillate as one goes to tubules with larger diameters d. For inner tubules, the ΔE11 shift depends strongly on d. For the n mod 3 = 2 series with 10 ⩽ n ⩽ 16, the shifts ΔE11 are positive, the maximum values of ΔE11 being equal to 0.39 and 0.32 eV, respectively. As one goes to the inner tubules with larger diameters

  15. Effect of intestinal lymphatic circulation blockage in two-hit rats

    PubMed Central

    Niu, Chun-Yu; Li, Ji-Cheng; Zhao, Zi-Gang; Zhang, Jing; Shao, Xue-Hui

    2006-01-01

    AIM: To study the effect of blocking intestinal lymphatic circulation in two-hit rats and to explore the significance of intestinal lymphatic circulation in two-hit injury. METHODS: Wistar rats were divided equally into three groups: a mesenteric lymph duct ligation group, a non-ligation group and a sham group. Mesenteric lymph was diverted by ligation of the mesenteric lymph duct, and the two-hit model was established by hemorrhage and lipopolysaccharide (LPS) administration. Serum was sampled from all rats before the experiment and 24 h afterwards. The kidney, liver, lung and heart were collected for pathomorphologic observation and biochemical investigation. Nitric oxide (NO), malondialdehyde (MDA) and superoxide dismutase (SOD) were determined in serum and tissue homogenates. RESULTS: Pathomorphologic examination showed that the structures of the kidney, lung, liver and heart tissues were normal in the sham group, with congestion, degeneration and necrosis in the non-ligation group but only mild lesions in the ligation group. After the two hits, the serum levels of AST, ALT, BUN, Cr and LDH-1 in the non-ligation and ligation groups were markedly higher than the pre-experiment and sham values, but the levels in the ligation group were markedly lower than those in the non-ligation group. The serum levels of NO2-/NO3-, NOS, iNOS and MDA in the non-ligation group were significantly increased compared with the pre-experiment and sham groups, whereas SOD was significantly lower. These parameters differed significantly between the ligation and sham groups, but NO2-/NO3-, iNOS and MDA in the ligation group were significantly lower than in the non-ligation group. CONCLUSION: Ligation of the mesenteric lymph duct could alleviate the organ dysfunction and morphologic damage in two-hit rats; the lymphatic mechanism in two-hit injury should be emphasized. PMID:17007046

  16. Rational Prediction with Molecular Dynamics for Hit Identification

    PubMed Central

    Nichols, Sara E; Swift, Robert V; Amaro, Rommie E

    2012-01-01

    Although the motions of proteins are fundamental for their function, for pragmatic reasons, the consideration of protein elasticity has traditionally been neglected in drug discovery and design. This review details protein motion, its relevance to biomolecular interactions and how it can be sampled using molecular dynamics simulations. Within this context, two major areas of research in structure-based prediction that can benefit from considering protein flexibility, binding site detection and molecular docking, are discussed. Basic classification metrics and statistical analysis techniques, which can facilitate performance analysis, are also reviewed. With hardware and software advances, molecular dynamics in combination with traditional structure-based prediction methods can potentially reduce the time and costs involved in the hit identification pipeline. PMID:23110535

  17. Development of a Double Glass Mounting Method Using Formaldehyde Alcohol Azocarmine Lactophenol (FAAL) and its Evaluation for Permanent Mounting of Small Nematodes

    PubMed Central

    ZAHABIUN, Farzaneh; SADJJADI, Seyed Mahmoud; ESFANDIARI, Farideh

    2015-01-01

    Background: Permanent slide preparation of nematodes, especially small ones, is time-consuming and difficult, and their margins become scarious. To address this problem, a modified double glass mounting method was developed and compared with the classic method. Methods: A total of 209 nematode samples of human and animal origin were fixed and stained with Formaldehyde Alcohol Azocarmine Lactophenol (FAAL), followed by double glass mounting or the classic dehydration method using Canada balsam as the mounting medium. The slides were evaluated at different dates and times over more than four years. Photographs were taken at different magnifications during the evaluation period. Results: The double glass mounting method was stable during this time and comparable with the classic method. There were no changes in the morphologic structures of the nematodes with the double glass mounting method, and the different organs of the nematodes remained well defined and clearly differentiated. Conclusion: This method is cost-effective and fast for mounting small nematodes compared to the classic method. PMID:26811729

  18. A New Method for Extraction of Double-Stranded RNA from Plants

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The presence of high molecular weight double-stranded RNA (dsRNA) in plants is associated with the presence of RNA viruses. DsRNA is stable and can be extracted easily from the majority of plant species and provides an excellent tool for characterization of novel viruses that are recalcitrant to pur...

  19. Improved Curveball Hitting through the Enhancement of Visual Cues.

    ERIC Educational Resources Information Center

    Osborne, Kurt; And Others

    1990-01-01

    The study investigated the effectiveness of using visual cues to highlight the seams of baseballs, to improve the hitting of curveballs by five undergraduate varsity baseball team candidates. Results indicated that subjects hit a greater percentage of marked than unmarked balls. (Author/DB)

  20. Infants' Reactions to Object Collision on Hit and Miss Trajectories

    ERIC Educational Resources Information Center

    Schmuckler, Mark A.; Collimore, Lisa M.; Dannemiller, James L.

    2007-01-01

    This experiment investigated the impact of the path of approach of an object, from head on versus from the side, and the type of imminent contact with that object, a hit versus a miss, on young infants' perceptions of object looming. Consistent with earlier studies, we found that 4- to 5-month-old infants do indeed discriminate hits versus misses.…

  1. Object Rotation Effects on the Timing of a Hitting Action

    ERIC Educational Resources Information Center

    Scott, Mark A.; van der Kamp, John; Savelsbergh, Geert J. P.; Oudejans, Raoul R. D.; Davids, Keith

    2004-01-01

    In this article, the authors investigated how perturbing optical information affects the guidance of an unfolding hitting action. Using monocular and binocular vision, six participants were required to hit a rectangular foam object, released from two different heights, under four different approach conditions, two with object rotation (to perturb…

  2. Optoelectronic hit/miss transform for screening cervical smear slides

    NASA Astrophysics Data System (ADS)

    Narayanswamy, R.; Turner, R. M.; McKnight, D. J.; Johnson, K. M.; Sharpe, J. P.

    1995-06-01

    An optoelectronic morphological processor for detecting regions of interest (abnormal cells) on a cervical smear slide using the hit/miss transform is presented. Computer simulation of the algorithm tested on 184 Pap-smear images provided 95% detection and 5% false alarm. An optoelectronic implementation of the hit/miss transform is presented, along with preliminary experimental results.
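
    The morphological operation itself is standard and can be tried digitally with SciPy; the paper's contribution is its optoelectronic implementation, which the Python sketch below does not attempt to reproduce. The structuring elements and the synthetic "cell" blob are assumptions chosen only to show how hit and miss patterns jointly locate an object of a given size.

      import numpy as np
      from scipy.ndimage import binary_hit_or_miss

      image = np.zeros((7, 7), dtype=bool)
      image[2:5, 2:5] = True                     # a 3x3 blob standing in for a cell-like object

      hit = np.ones((3, 3), dtype=bool)          # pattern that must be present (foreground)
      miss = np.zeros((5, 5), dtype=bool)        # pattern that must be absent (background ring)
      miss[0, :] = miss[-1, :] = miss[:, 0] = miss[:, -1] = True

      matches = binary_hit_or_miss(image, structure1=hit, structure2=miss)
      print("match locations:", np.argwhere(matches))   # centre of the blob only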

  3. The Effect of Grip Size on the Hitting Force During a Soft Tennis Forehand Stroke

    PubMed Central

    Ohguni, Mika; Aoki, Mitsuhiro; Sato, Hiroki; Imada, Kohdai; Funane, Sota

    2009-01-01

    Background: Grip size of a tennis racquet has been reported to influence performance, but clear evidence of a correlation has yet to be established. Hypothesis: Hitting force during a soft tennis forehand stroke correlates with grip size. Study Design: Controlled clinical study. Methods: A total of 40 healthy volunteers (20 men and 20 women) with a mean age of 21.9 years were enrolled. Of the 40 participants, 20 were experienced soft tennis players (10 men and 10 women) and 20 were nonexperienced soft tennis players (10 men and 10 women). Based on racquets with 5 different grip sizes, the hitting force during a soft tennis forehand stroke was measured with a handheld dynamometer. Correlations between 4 factors (sex, experience, grip and pinch strengths, and middle finger length) and hitting force were evaluated with each grip size. Measurements for each factor were repeated, and a 2-way analysis of variance was performed on the obtained data. Results: The hitting force was greater for male players than for female players and greater for experienced players than for nonexperienced players (P < .01). Men with large grip and pinch strengths demonstrated an increased hitting force with an increase in grip size. Men who had a long middle finger also demonstrated increased hitting force when grip size increased (P < .05). Conclusion: The hypothesis proved accurate for experienced men who had a large grip strength, a large pinch strength, and a long middle finger. Clinical Relevance: Large-grip-sized racquets may result in better forehand stroke performance when used by experienced male soft tennis players with a large grip strength, a large pinch strength, and a long middle finger. PMID:23015889

  4. Method for Increasing Sensitivity of the Distance Protection on a 330 KV Double-Circuit Transmission Line

    NASA Astrophysics Data System (ADS)

    Brinkis, K.; Vanzovich, E.; Drozd, D.; Mutule, A.

    2010-01-01

    Possibilities for increasing the distance protection (DP) sensitivity are considered for a double-circuit transmission line connected to a substation busbar with only one circuit-breaker per phase. The DP sensitivity of such a line can be increased using mutual connections of the phase wires along its whole length. The minimal (optimal) number of connections is found by the proposed calculation method.

  5. Antarctic ozone hole hits record depth

    SciTech Connect

    Not Available

    1991-10-18

    A bad year for the ozone over Antarctica looked like a good bet this year. For the past 2 years, stratospheric ozone destruction has equaled the record set in 1987. Now things look even worse, with a record-setting ozone hole. In 1987, 1989, and 1990, the minimum amount of ozone over Antarctica early each October was 120 to 125 Dobson units compared to the typical level of 220 that prevailed before man-made chlorofluorocarbons (CFCs) began eating into the ozone layer. The depletion allowed as much as twice the usual amount of biologically damaging ultraviolet light to reach the earth's surface. But researchers took some comfort in the fact that the hole seemed to have hit a barrier to further losses. Now that barrier may have been breached. On 6 October, the satellite-borne Total Ozone Mapping Spectrometer detected an ozone minimum of 110 Dobson units. The region of the lower stratosphere where icy cloud particles and the chlorine of CFCs combine to destroy ozone - between 14 and 24 kilometers - looks much the same as it did in 1987.

  6. Determination of argon resonance line emission in an ICP hitting a biological sample

    NASA Astrophysics Data System (ADS)

    Mertmann, P.; Bibinov, N.; Halfmann, H.; Awakowicz, P.

    2010-02-01

    A Monte Carlo model for the calculation of argon resonance line photon trapping in a double inductively coupled plasma is presented. Different probabilities of photon behaviour are calculated and the flux of photons hitting a target placed in the middle of the chamber is determined by simulation. Different gas admixtures or gas impurities can absorb photons or quench excited argon atoms, which is considered in the simulation. Electron energy distribution function and electron density are measured with a Langmuir probe and optical emission spectroscopy (OES). Nitrogen impurities, due to opening of the chamber, are measured using OES. These measured values and other additional input values such as gas temperature are used for simulation.

  7. 42 CFR 495.340 - As-needed HIT PAPD update and as-needed HIT IAPD update requirements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... update or a HIT IAPD no later than 60 days after the occurrence of project changes including but not... document or the HIT implementation advance planning document. (d) A change in implementation concept or a change to the scope of the project. (e) A change to the approved cost allocation methodology....

  8. Development of methods based on double Hough transform or Gabor filtering to discriminate between crop and weed in agronomic images

    NASA Astrophysics Data System (ADS)

    Bossu, Jérémie; Gée, Christelle; Guillemin, Jean-Philippe; Truchetet, Frédéric

    2006-02-01

    This paper presents two spatial methods to discriminate between crop and weeds. The application concerns agronomic images with perspective crop rows. The first method uses a double Hough transform, which permits detection of the crop rows and classification between crop and weeds. The second method is based on Gabor filtering, a band-pass filter whose parameters are derived from a fast Fourier transform of the image. For each method, a weed infestation rate is obtained. The two methods are compared, and a discussion addresses their ability to detect crop rows in agronomic images. Finally, we discuss the capability of this spatial approach for classifying weeds versus crop.
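
    For orientation, a minimal sketch of the first step, line detection in Hough space, is given below (Python/scikit-image, on a synthetic binary vegetation image). It illustrates only a standard single Hough line transform, not the paper's double Hough transform or its perspective handling.

      import numpy as np
      from skimage.transform import hough_line, hough_line_peaks

      # Synthetic binary image with two bright, slightly slanted "crop rows".
      img = np.zeros((100, 100), dtype=bool)
      rows = np.arange(100)
      img[rows, np.clip(30 + rows // 10, 0, 99)] = True
      img[rows, np.clip(60 + rows // 12, 0, 99)] = True

      # Vote in (theta, rho) space and keep the two strongest peaks.
      angles = np.linspace(-np.pi / 2, np.pi / 2, 360, endpoint=False)
      h, theta, d = hough_line(img, theta=angles)
      _, peak_angles, peak_dists = hough_line_peaks(h, theta, d, num_peaks=2)

      for ang, dist in zip(peak_angles, peak_dists):
          print(f"row candidate: theta = {np.degrees(ang):.1f} deg, rho = {dist:.1f} px")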

  9. Comparing two methods of plastination and glycerin preservation to study skeletal system after Alizarin red-Alcian blue double staining

    PubMed Central

    Mohsen, Setayesh M.; Esfandiari, Ebrahim; Rabiei, Abbas A.; Hanaei, Mahsa S.; Rashidi, Bahman

    2013-01-01

    Background: Plastination is a new method of preserving tissue samples for a long time. This study aimed to compare the new plastination technique with the conventional preservative method in glycerin for fetal skeletal tissues and young rats dyed by Alizarin red-Alcian blue double staining. Materials and Methods: In this study, 4 groups of 1-day, 3-day, 12-day and mature rats were selected and, after the animals were anesthetized and slaughtered, their skin was completely removed. In the Alizarin red-Alcian blue double staining method, the samples were first fixed in 95% ethanol and their cartilages were then dyed by 0.225% Alcian blue solution; after that, they were cleared in 1% KOH. Then, the bones were dyed in 0.003% Alizarin red solution and finally the tissue was decolorized in 95% ethanol. In each group, half of the samples were preserved by the conventional method in a glycerin container and the other half were plastinated. Results: In the present study, the samples preserved by the plastination technique were dry, odorless, indecomposable and tangible. The quality of coloring had an inverse relationship with the rats' age. The transparency of the plastinated samples also had an inverse relationship with the rats' age. Therefore, the skeletal tissue of younger rats had higher quality and transparency in both preservation methods (glycerin and plastination). Conclusion: This study showed that the plastination technique was an appropriate method, in comparison with glycerin preservation, for conserving the skeletal tissue of fetuses and young rats colored by Alizarin red-Alcian blue double staining. The final result was that the plastination technique can generate dry, odorless, indecomposable and tangible samples. PMID:23930264

  10. A method to study the history of a double oxide film defect in liquid aluminum alloys

    NASA Astrophysics Data System (ADS)

    Raiszadeh, R.; Griffiths, W. D.

    2006-12-01

    Entrained double oxide films have been held responsible for reductions in mechanical properties in aluminum casting alloys. However, their behavior in the liquid metal, once formed, has not been studied directly. It has been proposed that the atmosphere entrapped in the double oxide film defect will continue to react with the liquid metal surrounding it, perhaps leading to its elimination as a significant defect. A silicon-nitride rod with a hole in one end was plunged into liquid aluminum to hold a known volume of air in contact with the liquid metal at a constant temperature. The change in the air volume with time was recorded by real-time X-ray radiography to determine the reaction rates of the trapped atmosphere with the liquid aluminum, creating a model for the behavior of an entrained double oxide film defect. The results from this experiment showed that first oxygen, and then nitrogen, was consumed by the aluminum alloy, to form aluminum oxide and aluminum nitride, respectively. The effect of adding different elements to the liquid aluminum and the effect of different hydrogen contents were also studied.

  11. A novel method for extraction and analysis of gunpowder residues on double-side adhesive coated stubs.

    PubMed

    Zeichner, Arie; Eldar, Baruch

    2004-11-01

    A study was conducted to develop an efficient method for extraction and analysis of gunpowder (propellant) residues from double-side adhesive coated stubs, which are used for sampling suspects or their clothing for gunshot (primer) residues (GSR). Conductive and non-conductive double-side adhesives were examined, and the analysis was carried out by gas chromatography/thermal energy analyzer (GC/TEA) and ion mobility spectrometry (IMS). The optimal extraction procedure developed in the present study employs two stages: (1) extraction of the stubs with a mixture of 80% v/v aqueous 0.1% w/v sodium azide solution and 20% v/v ethanol, with sonication at 80 degrees C for 15 min, and (2) further extraction of the residues from the obtained extract with methylene chloride. The methylene chloride phase was concentrated by evaporation prior to analysis. Extraction efficiencies of 30-90% were found for nitroglycerine (NG) and for 2,4-dinitrotoluene (2,4-DNT). No significant interferences in the analysis were observed from the adhesives or skin. Interferences were observed in the GC/TEA analysis of samples collected from hair. The method enables analysis of propellant residues on a double-side adhesive coated stub after it has been examined for primer residues by scanning electron microscopy/energy dispersive X-ray spectroscopy (SEM/EDX). Thus, the probative value of the evidence may be increased. PMID:15568690

  12. Grid-based methods for diatomic quantum scattering problems III: Double photoionization of molecular hydrogen in prolate spheroidal coordinates

    SciTech Connect

    Tao, Liang; McCurdy, Bill; Rescigno, Tom

    2010-06-10

    Our previously developed finite-element/discrete-variable representation in prolate spheroidal coordinates is extended to two-electron systems with a study of double ionization of H2 with fixed nuclei. Particular attention is paid to the development of fast and accurate methods for treating the electron-electron interaction. The use of exterior complex scaling in the implementation offers a simple way of enforcing Coulomb boundary conditions for the electronic double continuum. While the angular distributions calculated in this study are found to be completely consistent with our earlier treatments that employed single-center expansions in spherical coordinates, we find that the magnitude of the integrated cross sections is sensitive to small changes in the initial-state wave function. The present formulation offers significant advantages with respect to convergence and efficiency and opens the way to calculations on more complicated diatomic targets.

  13. The Kernel Energy Method: Construction of 3 & 4 tuple Kernels from a List of Double Kernel Interactions

    PubMed Central

    Huang, Lulu; Massa, Lou

    2010-01-01

    The Kernel Energy Method (KEM) provides a way to calculate the ab initio energy of very large biological molecules. The results are accurate, and the computational time is reduced. However, by using a list of double kernel interactions, a significant additional reduction of computational effort may be achieved while still retaining ab initio accuracy. A numerical comparison of the indices that name the known double interactions in question allows one to list higher-order interactions having the property of topological continuity within the full molecule of interest. When that list of interactions is unpacked as a kernel expansion, which weights the relative importance of each kernel in an expression for the total molecular energy, high accuracy and a further significant reduction in computational effort result. A KEM molecular energy calculation based upon the HF/STO-3G chemical model is applied to the protein insulin as an illustration. PMID:21243065
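
    The bookkeeping behind a double-kernel expansion is simple. A hedged numerical sketch (Python, with invented single- and double-kernel energies standing in for real ab initio values) of the commonly quoted expression, total energy approximately equal to the sum of double-kernel energies E_ij minus (n - 2) times the sum of single-kernel energies E_i, is:

      from itertools import combinations

      # Hypothetical kernel energies in hartree; a real KEM run would obtain
      # these from separate ab initio calculations on each kernel and kernel pair.
      single = {1: -10.0, 2: -12.5, 3: -9.8}                  # E_i
      double = {(1, 2): -22.9, (1, 3): -20.1, (2, 3): -22.6}  # E_ij

      n = len(single)
      e_total = (sum(double[p] for p in combinations(sorted(single), 2))
                 - (n - 2) * sum(single.values()))
      print(f"double-kernel estimate of the total energy: {e_total:.2f} hartree")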

  14. Ozone loss hits us where we live

    SciTech Connect

    Appenzeller, T.

    1991-11-01

    The news about Earth's ozone layer just keeps getting worse. Three weeks ago, NASA researchers reported that the ozone hole over the Antarctic hit a record depth this year. Now comes the United Nations Environment Program, together with the World Meteorological Organization, with an even more distressing assessment of the state of the ozone layer. For the first time, the 80-member UN panel said, measurements show the ozone shield is eroding over temperate latitudes in summer, exposing crops and people to a larger dose of ultraviolet light just when they are most vulnerable. For a small group of atmospheric modelers, though, the bad news is bittersweet. Four months ago researchers predicted summertime ozone losses of just the magnitude the UN panel has now reported: about 3% over the past decade for northern temperate latitudes. Ozone modelers are encouraged by the agreement, particularly because other models are now yielding the same result. The modeling effort was spurred by earlier measurements showing a serious erosion of ozone at midlatitudes, mainly in winter. In 1988, an analysis of data collected from the ground showed that ozone levels at the latitude of the US were dropping by about 1% to 3% per decade; last April, an analysis of measurements from the satellite-borne Total Ozone Mapping Spectrometer boosted that figure to between 4% and 5%. Those findings raised the question: What mechanisms could be driving the midlatitude losses? The fact that the losses seemed to be concentrated in winter suggested one possibility. The winter ozone losses at the poles are driven by chemical reactions taking place on the surface of ice crystals in polar stratospheric clouds. Such clouds don't form at temperate latitudes. But some researchers suggested that masses of air already depleted in ozone or enriched in reactive chlorine by the chemistry in the polar clouds might be escaping to temperate latitudes during the winter.

  15. Novel method for a flexible double-sided microelectrode fabrication process

    NASA Astrophysics Data System (ADS)

    Doerge, Thomas; Kammer, S.; Hanauer, M.; Sossalla, Adam; Steltenkamp, S.

    2009-05-01

    Flexible devices with integrated microelectrodes are widely used for neuronal as well as myogenic stimulation and recording applications. One main intention in using microelectrodes is the ability to place an appropriate number of electrodes on the active sites. With an increasing number of single electrodes, the selectivity for signal acquisition and analysis is significantly improved. A further advantage of small and elastic structures inside the biological tissue is the close fit, which leads to lower traumatisation of the nerve and muscle fibres during and after acute and chronic surgery. Differently designed and structured flexible microelectrodes based on polyimide as the substrate material have been developed at the IBMT over recent years, including cuff, intrafascicular and shaft electrodes. All these systems are generally built up as single-sided devices, which halves the area available for electrode sites. Especially for shaft and intrafascicular applications, a double-sided electrode arrangement would increase the selectivity enormously, since areas on both sides can be monitored simultaneously. Recent developments of double-sided flexible electrode systems led to promising results, especially for varied signal recording, though they also revealed some challenges in micromachining, including low yield rates. In this work we describe a new technical approach for developing double-sided flexible microelectrode systems with a reproducibly high yield rate. Prototypes of intrafascicular and intramuscular electrode systems have been developed and investigated by means of electrochemical characterisation and mechanical testing. Additional investigations have been performed with scanning electron microscopy. We also give an outlook on future in vitro and in vivo experiments to investigate the application performance of the developed systems.

  16. A Novel Method for Calculation of Strain Energy Release Rate of Asymmetric Double Cantilever Laminated Composite Beams

    NASA Astrophysics Data System (ADS)

    Shokrieh, M. M.; Zeinedini, A.

    2014-06-01

    In this research, a novel data reduction method for calculation of the strain energy release rate (SERR) of asymmetric double cantilever beams (ADCB) is presented. For this purpose the elastic beam theory (EBT) is modified, and the new method is called the modified elastic beam theory (MEBT). Also, the ADCB specimens are modeled using ABAQUS/Standard software. Then, the initiation of delamination of the ADCB specimens is modeled using the virtual crack closure technique (VCCT). Furthermore, magnitudes of the SERR for different samples are also calculated by an available data reduction method, the modified beam theory (MBT). Using the hand lay-up method, different laminated composite samples are manufactured from E-glass/epoxy unidirectional plies. In order to measure the SERR, all samples are tested using an experimental setup. The results determined by the new data reduction method (MEBT) show good agreement with the results of the VCCT and the MBT.
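
    As a point of reference for the data reduction schemes compared here, the classic (uncorrected) elastic beam theory estimate of the mode I strain energy release rate for an asymmetric DCB can be written down directly. The short Python sketch below uses that textbook expression with invented specimen numbers; it is not the paper's modified elastic beam theory (MEBT).

      def serr_adcb_ebt(P, a, b, h1, h2, E):
          """Classic beam-theory G for an asymmetric double cantilever beam.

          P: load (N), a: delamination length (m), b: width (m),
          h1, h2: arm thicknesses (m), E: flexural modulus (Pa).
          """
          return 6.0 * P**2 * a**2 / (E * b**2) * (1.0 / h1**3 + 1.0 / h2**3)

      # Hypothetical glass/epoxy-like specimen, for illustration only.
      g = serr_adcb_ebt(P=40.0, a=0.05, b=0.02, h1=0.002, h2=0.003, E=35e9)
      print(f"G (simple beam theory) ~ {g:.0f} J/m^2")

    Corrections such as the MEBT and MBT refine this estimate, for example by accounting for root rotation and shear, which simple beam theory ignores.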

  17. A noniterative asymmetric triple excitation correction for the density-fitted coupled-cluster singles and doubles method: Preliminary applications.

    PubMed

    Bozkaya, Uğur

    2016-04-14

    An efficient implementation of the asymmetric triples correction for the coupled-cluster singles and doubles [ΛCCSD(T)] method [S. A. Kucharski and R. J. Bartlett, J. Chem. Phys. 108, 5243 (1998); T. D. Crawford and J. F. Stanton, Int. J. Quantum Chem. 70, 601 (1998)] with the density-fitting [DF-ΛCCSD(T)] approach is presented. The computational time for the DF-ΛCCSD(T) method is compared with that of ΛCCSD(T). Our results demonstrate that the DF-ΛCCSD(T) method provides substantially lower computational costs than ΛCCSD(T). Further application results show that the ΛCCSD(T) and DF-ΛCCSD(T) methods are very beneficial for the study of single bond breaking problems as well as noncovalent interactions and transition states. We conclude that ΛCCSD(T) and DF-ΛCCSD(T) are very promising for the study of challenging chemical systems, where the coupled-cluster singles and doubles with perturbative triples method fails. PMID:27083709

  18. A noniterative asymmetric triple excitation correction for the density-fitted coupled-cluster singles and doubles method: Preliminary applications

    NASA Astrophysics Data System (ADS)

    Bozkaya, Uǧur

    2016-04-01

    An efficient implementation of the asymmetric triples correction for the coupled-cluster singles and doubles [ΛCCSD(T)] method [S. A. Kucharski and R. J. Bartlett, J. Chem. Phys. 108, 5243 (1998); T. D. Crawford and J. F. Stanton, Int. J. Quantum Chem. 70, 601 (1998)] with the density-fitting [DF-ΛCCSD(T)] approach is presented. The computational time for the DF-ΛCCSD(T) method is compared with that of ΛCCSD(T). Our results demonstrate that the DF-ΛCCSD(T) method provides substantially lower computational costs than ΛCCSD(T). Further application results show that the ΛCCSD(T) and DF-ΛCCSD(T) methods are very beneficial for the study of single bond breaking problems as well as noncovalent interactions and transition states. We conclude that ΛCCSD(T) and DF-ΛCCSD(T) are very promising for the study of challenging chemical systems, where the coupled-cluster singles and doubles with perturbative triples method fails.

  19. Performances of a method for reconstructing the energy of neutrons detected by a double scattering spectrometer

    SciTech Connect

    Agnello, M.; Botta, E.; Bressani, T.; Calvo, D.; Gianotti, P.; Iazzi, F.; Lamberti, C.; Minetti, B. ); Balocco, E. )

    1992-10-01

    This paper reports on a neutron spectrometer based on the double scattering technique which has been designed and built at the Laboratorio Tecnologico of INFN - Turin (Italy) for Cold Fusion experiments. The operating principle for the reconstruction of the energy can be applied to various fields (neutron emission from sources, fission and fusion) and is described together with the performed tests: a resolution of less than 560 keV FWHM has been obtained for neutrons of 2.45 MeV, in a typical running configuration.
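
    The reconstruction principle can be illustrated with a generic, deliberately simplified double-scatter relation: the incident energy is the recoil energy deposited in the first scatterer plus the kinetic energy of the scattered neutron obtained from its time of flight to the second detector. The Python sketch below uses invented event values and is not the spectrometer's actual algorithm.

      M_N_MEV = 939.565          # neutron rest mass (MeV/c^2)
      C_M_PER_NS = 0.299792458   # speed of light (m/ns)

      def tof_energy(distance_m, tof_ns):
          """Non-relativistic kinetic energy (MeV) of the scattered neutron."""
          beta = (distance_m / tof_ns) / C_M_PER_NS
          return 0.5 * M_N_MEV * beta**2

      def reconstruct(e_recoil_mev, distance_m, tof_ns):
          # Incident energy = energy deposited in the first scatter + residual energy.
          return e_recoil_mev + tof_energy(distance_m, tof_ns)

      # Hypothetical event: 1.0 MeV deposited, 0.5 m flight path, 35 ns flight time.
      print(f"E_n ~ {reconstruct(1.0, 0.5, 35.0):.2f} MeV")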

  20. A method for double-labeling sputum cells for p53 and cytokeratin

    SciTech Connect

    Neft, R.E.; Tierney, L.A.; Belinsky, S.A.

    1995-12-01

    Molecular and immunological techniques may enhance the usefulness of sputum cytology as a screening tool for lung cancer. These techniques may also be useful in detecting and following the early progression of disease from metaplasia to dysplasia, carcinoma in situ, and finally to invasive carcinoma. Longitudinal information on the evolution of these malignant changes in the respiratory epithelium can be gained by prospective study of populations at high risk for lung cancer. This work is significant because double-labeling of cells in sputum with p53 and cytokeratin antibodies facilitates rapid screening of p53 positive neoplastic and preneoplastic lung cells by brightfield and fluorescence microscopy.

  1. Semi-monolithic cavity for external resonant frequency doubling and method of performing the same

    NASA Technical Reports Server (NTRS)

    Hemmati, Hamid (Inventor)

    1999-01-01

    The fabrication of an optical cavity for use in a laser, in a frequency doubling external cavity, or any other type of nonlinear optical device, can be simplified by providing the nonlinear crystal in combination with a surrounding glass having an index of refraction substantially equal to that of the nonlinear crystal. The closed optical path in this cavity is formed in the surrounding glass and through the nonlinear crystal, which lies in one of the optical segments of the light path. The light is transmitted through interfaces between the surrounding glass and the nonlinear crystal that are formed at the Brewster angle to minimize or eliminate reflection.
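
    The Brewster-angle condition mentioned here is the usual one for p-polarized light at a dielectric interface, theta_B = arctan(n2/n1). A one-line check with invented, nearly index-matched refractive indices:

      import math

      def brewster_angle_deg(n_glass, n_crystal):
          """Brewster angle (degrees) for light passing from the glass into the crystal."""
          return math.degrees(math.atan(n_crystal / n_glass))

      # Hypothetical, nearly index-matched glass/crystal pair for illustration.
      print(f"Brewster angle ~ {brewster_angle_deg(1.75, 1.78):.1f} deg")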

  2. Double photoionization of SO2 and fragmentation spectroscopy of SO2++ studied by a photoion-photoion coincidence method

    NASA Astrophysics Data System (ADS)

    Dujardin, Gérald; Leach, Sydney; Dutuit, Odile; Guyon, Paul-Marie; Richard-Viard, Martine

    1984-08-01

    Doubly charged sulphur dioxide cations (SO2++) are produced by photoionization with synchrotron radiation from ACO in the excitation-energy range 34-54 eV. A new photoion-photoion coincidence (PIPICO) experiment is described in which coincidences between photoion fragments originating from the dissociation of the doubly charged parent cation are counted. This PIPICO method enables us to study the fragmentation of individual electronically excited states of SO2++ and to determine the corresponding absolute double-photoionization partial cross sections as a function of the excitation energy. A tentative assignment of the three observed α, β and γ SO2++ states is given. The dissociation processes of the α and β states into the products SO+ + O+ are found to be non-statistical in nature; the γ state dissociates completely into three atomic fragments S+ + O+ + O. Three main observed features of the double-photoionization cross-section curves are discussed in the text: appearance potentials, linear threshold laws, and constant double-photoionization cross sections relative to the total ionization cross section at high energies.

  3. Finite Element Method Simulation of Double-Ended Tuning-Fork Quartz Resonator for Application to Vibratory Gyro-Sensor

    NASA Astrophysics Data System (ADS)

    Sato, Kenji; Ono, Atsushi; Tomikawa, Yoshiro

    2003-05-01

    In the present paper, we propose a double-ended tuning-fork quartz resonator for a flatly supported vibratory gyro-sensor in parallel with its rotating plane. The resonator has the advantages of ease of miniaturization and high resistance to external shock, because the height of the proposed resonator is less than that of the conventional vertical-type tuning-fork. In addition, the proposed resonator has two end-support parts. The resonator also has the following features: (1) the vibration energy of the resonator is trapped in the driving part, therefore the resonator is only slightly affected by the support parts and (2) unwanted output signals can be removed by differential connection of the output signals from two symmetric detection electrodes. The resonator was designed using the finite element method (FEM), and its characteristics were also simulated by FEM. The obtained results show that the double-ended tuning-fork quartz resonator is applicable as a vibratory gyro-sensor, and the I/O voltage ratio of the gyro-sensor was found to be proportional to the applied angular velocity. That is, we clarified that the double-ended tuning-fork quartz resonator could be used as a gyro-sensor.

  4. HITS-CLIP yields genome-wide insights into brain alternative RNA processing

    NASA Astrophysics Data System (ADS)

    Licatalosi, Donny D.; Mele, Aldo; Fak, John J.; Ule, Jernej; Kayikci, Melis; Chi, Sung Wook; Clark, Tyson A.; Schweitzer, Anthony C.; Blume, John E.; Wang, Xuning; Darnell, Jennifer C.; Darnell, Robert B.

    2008-11-01

    Protein-RNA interactions have critical roles in all aspects of gene expression. However, applying biochemical methods to understand such interactions in living tissues has been challenging. Here we develop a genome-wide means of mapping protein-RNA binding sites in vivo, by high-throughput sequencing of RNA isolated by crosslinking immunoprecipitation (HITS-CLIP). HITS-CLIP analysis of the neuron-specific splicing factor Nova revealed extremely reproducible RNA-binding maps in multiple mouse brains. These maps provide genome-wide in vivo biochemical footprints confirming the previous prediction that the position of Nova binding determines the outcome of alternative splicing; moreover, they are sufficiently powerful to predict Nova action de novo. HITS-CLIP revealed a large number of Nova-RNA interactions in 3' untranslated regions, leading to the discovery that Nova regulates alternative polyadenylation in the brain. HITS-CLIP, therefore, provides a robust, unbiased means to identify functional protein-RNA interactions in vivo.
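
    Downstream of sequencing, the core of a CLIP analysis is a pileup of mapped read intervals to locate binding footprints. The toy Python sketch below (invented coordinates, no real alignment handling) shows only that counting step, not the Nova-specific HITS-CLIP pipeline.

      from collections import Counter

      # Hypothetical mapped CLIP tags on one transcript, as (start, end) intervals.
      reads = [(100, 135), (105, 140), (110, 146), (400, 430), (404, 436)]

      coverage = Counter()
      for start, end in reads:
          for pos in range(start, end):
              coverage[pos] += 1

      # Positions covered by at least 3 tags are reported as candidate footprints.
      min_tags = 3
      peak = sorted(pos for pos, depth in coverage.items() if depth >= min_tags)
      print(f"candidate binding footprint: {peak[0]}-{peak[-1]}" if peak else "no peak")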

  5. Even Mild Football Head Hits Can Harm Vision

    MedlinePlus

    Even Mild Football Head Hits Can Harm Vision. Study of college players raises concerns about repetitive non-concussive impacts.

  6. Concussion Study Shows Player-To-Player Hits Most Damaging

    MedlinePlus

    Concussion Study Shows Player-to-Player Hits Most Damaging. Running longer before the contact happens also spells more ... "We also found that running a long distance before colliding with an opponent ..." (University of Georgia).

  7. People with learning disabilities are hit hard by funding cutbacks.

    PubMed

    Hughes, Charlie

    Thank you for publishing Ken Mack's letter (March 5) drawing attention to the suffering and distress caused by welfare reforms and cutbacks, and how people with disabilities and their families are being hit particularly hard. PMID:24617400

  8. Even Mild Football Head Hits Can Harm Vision

    MedlinePlus

    Even Mild Football Head Hits Can Harm Vision. Study of college players raises concerns about repetitive ... Repeated blows to the head can cause near vision to blur slightly, even if the individual impacts ...

  9. The Chelyabinsk Meteorite Hits an Anomalous Zone in the Urals

    NASA Astrophysics Data System (ADS)

    Kochemasov, G. G.

    2013-09-01

    The Chelyabinsk meteorite is "strange" because it hit an area in the Urals where anomalous events are observed: shining skies, light balls, UFOs, and electrophonic bolides. The area lies tectonically at the intersection of two fold belts: the Urals and the Timan.

  10. HANFORD DOUBLE SHELL TANK (DST) THERMAL & SEISMIC PROJECT BUCKLING EVALUATION METHODS & RESULTS FOR THE PRIMARY TANKS

    SciTech Connect

    MACKEY TC; JOHNSON KI; DEIBLER JE; PILLI SP; RINKER MW; KARRI NK

    2007-02-14

    This report documents a detailed buckling evaluation of the primary tanks in the Hanford double-shell waste tanks (DSTs), which is part of a comprehensive structural review for the Double-Shell Tank Integrity Project. This work also provides information on tank integrity that specifically responds to concerns raised by the Office of Environment, Safety, and Health (ES&H) Oversight (EH-22) during a review of work performed on the double-shell tank farms and the operation of the aging waste facility (AWF) primary tank ventilation system. The current buckling review focuses on the following tasks: (1) Evaluate the potential for progressive I-bolt failure and the appropriateness of the safety factors that were used for evaluating local and global buckling. The analysis will specifically answer the following questions: (a) Can the EH-22 scenario develop if the vacuum is limited to -6.6-inch water gage (w.g.) by a relief valve? (b) What is the appropriate factor of safety required to protect against buckling if the EH-22 scenario can develop? (c) What is the appropriate factor of safety required to protect against buckling if the EH-22 scenario cannot develop? (2) Develop influence functions to estimate the axial stresses in the primary tanks for all reasonable combinations of tank loads, based on detailed finite element analysis. The analysis must account for the variation in design details and operating conditions between the different DSTs. The analysis must also address the imperfection sensitivity of the primary tank to buckling. (3) Perform a detailed buckling analysis to determine the maximum allowable differential pressure for each of the DST primary tanks at the current specified limits on waste temperature, height, and specific gravity. Based on the I-bolt loads analysis and the small deformations that are predicted at the unfactored limits on vacuum and axial loads, it is very unlikely that the EH-22 scenario (i.e., progressive I-bolt failure leading to global

  11. Combined hit theory-microdosimetric explanation of cellular radiobiological action

    SciTech Connect

    Bond, V.P.; Varma, M.N.

    1983-01-01

    Hit theory is combined with microdosimetry in a stochastic approach that explains the observed responses of cell populations exposed in radiation fields of different qualities. The central thesis is that to expose a population of cells in a low-level radiation field is to subject the cells to the potential for interaction with charged particles in the vicinity of the cells, quantifiable in terms of the charged particle fluence theta. When such an interaction occurs there is a resulting stochastic transfer of energy to a critical volume (CV) of cross section sigma, within the cell(s). The severity of cell injury is dependent on the amount of energy thus imparted, or the hit size. If the severity is above some minimal level, there is a non-zero probability that the injury will result in a quantal effect (e.g., a mutational or carcinogenic initial event, cell transformation). A microdosimetric proportional counter, viewed here as a phantom cell CV that permits measurements not possible in the living cell, is used to determine the incidence of hit cells and the spectrum of hit sizes. Each hit is then weighted on the basis of an empirically-determined function that provides the fraction of cells responding quantally, as a function of hit size. The sum of the hits so weighted provides the incidence of quantally-responding cells, for any amount of exposure theta in a radiation field of any quality or mixture qualities. The hit size weighting function for pink mutations in Tradescantia is discussed, as are its implications in terms of a replacement for RBE and dose equivalent. 14 references, 9 figures.
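
    Numerically, the weighting step described here is just a sum of the measured hit-size spectrum against the empirical response function. A hedged Python sketch with invented spectrum and weighting values:

      import numpy as np

      # Hypothetical hit-size spectrum from a phantom-cell proportional counter.
      hit_size = np.array([0.5, 1.0, 2.0, 4.0, 8.0])        # e.g. hit-size bins
      hits_per_bin = np.array([500.0, 300.0, 120.0, 40.0, 5.0])

      # Hypothetical hit-size weighting function: fraction responding quantally per hit.
      weight = np.array([0.0, 0.001, 0.01, 0.05, 0.2])

      expected_quantal = float(np.sum(hits_per_bin * weight))
      print(f"expected quantally responding cells: {expected_quantal:.2f}")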

  12. Do pigeons prefer alternatives that include near-hit outcomes?

    PubMed

    Stagner, Jessica P; Case, Jacob P; Sticklen, Mary F; Duncan, Amanda K; Zentall, Thomas R

    2015-07-01

    Pigeons show suboptimal choice on a gambling-like task similar to that shown by humans. Humans also show a preference for gambles in which there are near hits (losses that come close to winning). In the present research, we asked if pigeons would show a preference for alternatives with near-hit-like trials. In Experiment 1, we included an alternative that presented a near hit, in which a stimulus associated with reinforcement (a presumed conditioned reinforcer) changed to a stimulus associated with the absence of reinforcement (a presumed conditioned inhibitor). The pigeons tended to avoid this alternative. In Experiment 2, we varied the duration of the presumed conditioned reinforcer (2 vs. 8 s) that changed to a presumed conditioned inhibitor (8 vs. 2 s) and found that the longer the conditioned reinforcer was presented, the more the pigeons avoided it. In Experiment 3, the near-hit alternative involved an ambiguous stimulus for 8 s that changed to a presumed conditioned reinforcer (or a presumed conditioned inhibitor) for 2 s, but the pigeons still avoided it. In Experiment 4, we controlled for the duration of the conditioned reinforcer by presenting it first for 2 s followed by the ambiguous stimulus for 8 s. Once again, the pigeons avoided the alternative with the near-hit trials. In all 4 experiments, the pigeons tended to avoid alternatives that provided near-hit-like trials. We concluded that humans may be attracted to near-hit trials because near-hit trials give them the illusion of control, whereas this does not appear to be a factor for pigeons. PMID:26167775

  13. Superfast Cosmic Jet "Hits the Wall"

    NASA Astrophysics Data System (ADS)

    1999-01-01

    -288. The jet travelled quickly until its advance suddenly was stopped and the endpoint of the jet became brighter than the core. "This fast-moving material obviously hit something," Hjellming said. What did it hit? "Probably a mixture of external material plus material from a previous jet ejection." Further studies of the collision could yield new information about the physics of cosmic jets. Such jets are believed to be powered by black holes into which material is being drawn. The exact mechanism by which the black hole's gravitational energy accelerates particles to nearly the speed of light is not well understood. There is even dispute about the types of particles ejected. Competing models call for either a mixture of electrons and protons or a mixture of electrons and positrons. Because protons are more than 1,800 times more massive than electrons or positrons (the positively-charged antiparticle of the electron), the electron-proton mixture would be much more massive than the electron-positron pair. Thus, an electron-proton jet is called a heavy jet and an electron-positron jet is called a light jet. A light jet would be much more easily slowed or stopped by tenuous interstellar material than a heavy jet, so the collision of XTE J1748-288's jet may indicate that it is a light jet. "There's still a lot more work to do before anyone can conclude that, but the collision offers the possibility of answering the light-heavy jet question," Hjellming said. A 1998 VLA study by John Wardle of Brandeis University and his colleagues indicated that the jet of a distant quasar is a light, electron-positron jet. Though the black holes in quasars are supermassive, usually millions of times more massive than the Sun, the physics of jet production in them is thought to be similar to the physics of jet production by smaller black holes, only a few times more massive than the Sun, such as the one possibly in XTE J1748-288. The VLA is an instrument of the National Radio Astronomy

  14. Fracture Toughness of Thin Plates by the Double-Torsion Test Method

    NASA Technical Reports Server (NTRS)

    Salem, Jonathan A.; Radovic, Miladin; Lara-Curzio, Edgar; Nelson, George

    2006-01-01

    Double torsion testing can produce fracture toughness values without crack length measurement that are comparable to those measured via standardized techniques such as the chevron-notch, surface-crack-in-flexure and precracked beam if the appropriate geometry is employed, and the material does not exhibit increasing crack growth resistance. Results to date indicate that 8 < W/d < 80 and L/W > 2 are required if crack length is not considered in stress intensity calculations. At L/W = 2, the normalized crack length should be 0.35 < a/L < 0.65; whereas for L/W = 3, 0.2 < a/L < 0.75 is acceptable. In addition, the load-points need to roll to reduce friction. For an alumina exhibiting increasing crack growth resistance, values corresponding to the plateau of the R-curve were measured. For very thin plates (W/d > 80) nonlinear effects were encountered.

  15. [Comparison of two types of double-lined simulated landfill leakage detection based on high voltage DC method].

    PubMed

    Yang, Ping; Nai, Chang-Xin; Dong, Lu; Wang, Qi; Wang, Yan-Wen

    2006-01-01

    Two types of double high-density polyethylene (HDPE) liner landfill were simulated, in which either clay or geogrid was added between the two HDPE liners. The general resistance of the second type is 15% larger than that of the first type in the primary HDPE liner detection, and 20% larger in the secondary HDPE liner detection. The high voltage DC method can accomplish leakage detection and location for both types of landfill, and the error of leakage location is less than 10 cm when the electrode spacing is 1 m. PMID:16599145

  16. Thomson Scattering Measurements on HIT-SI3

    NASA Astrophysics Data System (ADS)

    Everson, C. J.; Morgan, K. D.; Jarboe, T. R.

    2015-11-01

    A multi-point Thomson Scattering diagnostic has been implemented on HIT-SI3 (Helicity Injected Torus - Steady Inductive 3) to measure electron temperature. The HIT-SI3 experiment is a modification of the original HIT-SI apparatus that uses three injectors instead of two. This modification alters the configuration of magnetic fields and thus the plasma behavior in the device. The scientific aim of HIT-SI3 is to develop a deeper understanding of how injector behavior and interactions influence current drive and plasma performance in the spheromak. The Thomson Scattering system includes a 20 J (1 GW pulse) Ruby laser that provides the incident beam, and collection optics that are installed such that measurements can be taken at four spatial locations in HIT-SI3 plasmas. For each measurement point, a 3-channel polychromator is used to detect the relative level of scattering. These measurements allow for the presence of temperature gradients in the spheromak to be investigated. Preliminary HIT-SI3 temperature data are presented and can be compared to predictions from computational models. Work supported by the D.O.E.

  17. Evaluation of molecular dynamics simulation methods for ionic liquid electric double layers.

    PubMed

    Haskins, Justin B; Lawson, John W

    2016-05-14

    We investigate how systematically increasing the accuracy of various molecular dynamics modeling techniques influences the structure and capacitance of ionic liquid electric double layers (EDLs). The techniques probed concern long-range electrostatic interactions, electrode charging (constant charge versus constant potential conditions), and electrolyte polarizability. Our simulations are performed on a quasi-two-dimensional, or slab-like, model capacitor, which is composed of a polarizable ionic liquid electrolyte, [EMIM][BF4], interfaced between two graphite electrodes. To ensure an accurate representation of EDL differential capacitance, we derive new fluctuation formulas that resolve the differential capacitance as a function of electrode charge or electrode potential. The magnitude of differential capacitance shows sensitivity to different long-range electrostatic summation techniques, while the shape of differential capacitance is affected by charging technique and the polarizability of the electrolyte. For long-range summation techniques, errors in magnitude can be mitigated by employing two-dimensional or corrected three dimensional electrostatic summations, which led to electric fields that conform to those of a classical electrostatic parallel plate capacitor. With respect to charging, the changes in shape are a result of ions in the Stern layer (i.e., ions at the electrode surface) having a higher electrostatic affinity to constant potential electrodes than to constant charge electrodes. For electrolyte polarizability, shape changes originate from induced dipoles that soften the interaction of Stern layer ions with the electrode. The softening is traced to ion correlations vertical to the electrode surface that induce dipoles that oppose double layer formation. In general, our analysis indicates an accuracy dependent differential capacitance profile that transitions from the characteristic camel shape with coarser representations to a more diffuse
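
    For context, the standard constant-potential fluctuation estimator, differential capacitance = (mean of Q squared minus squared mean of Q) / (k_B T), is sketched below on a synthetic electrode-charge trajectory. The charge- and potential-resolved fluctuation formulas derived in the paper go beyond this textbook form.

      import numpy as np

      k_B = 1.380649e-23        # J/K
      T = 350.0                 # K, a typical ionic-liquid simulation temperature
      area = 12.0e-18           # electrode area in m^2 (hypothetical)

      # Synthetic stand-in for a constant-potential electrode charge trajectory.
      rng = np.random.default_rng(0)
      q = 1.0e-19 * (1.0 + 0.5 * rng.standard_normal(20000))   # charge samples (C)

      c_diff = q.var() / (k_B * T)                              # capacitance (F)
      print(f"differential capacitance ~ {c_diff / area * 100.0:.1f} uF/cm^2")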

  18. Evaluation of molecular dynamics simulation methods for ionic liquid electric double layers

    NASA Astrophysics Data System (ADS)

    Haskins, Justin B.; Lawson, John W.

    2016-05-01

    We investigate how systematically increasing the accuracy of various molecular dynamics modeling techniques influences the structure and capacitance of ionic liquid electric double layers (EDLs). The techniques probed concern long-range electrostatic interactions, electrode charging (constant charge versus constant potential conditions), and electrolyte polarizability. Our simulations are performed on a quasi-two-dimensional, or slab-like, model capacitor, which is composed of a polarizable ionic liquid electrolyte, [EMIM][BF4], interfaced between two graphite electrodes. To ensure an accurate representation of EDL differential capacitance, we derive new fluctuation formulas that resolve the differential capacitance as a function of electrode charge or electrode potential. The magnitude of differential capacitance shows sensitivity to different long-range electrostatic summation techniques, while the shape of differential capacitance is affected by charging technique and the polarizability of the electrolyte. For long-range summation techniques, errors in magnitude can be mitigated by employing two-dimensional or corrected three dimensional electrostatic summations, which led to electric fields that conform to those of a classical electrostatic parallel plate capacitor. With respect to charging, the changes in shape are a result of ions in the Stern layer (i.e., ions at the electrode surface) having a higher electrostatic affinity to constant potential electrodes than to constant charge electrodes. For electrolyte polarizability, shape changes originate from induced dipoles that soften the interaction of Stern layer ions with the electrode. The softening is traced to ion correlations vertical to the electrode surface that induce dipoles that oppose double layer formation. In general, our analysis indicates an accuracy dependent differential capacitance profile that transitions from the characteristic camel shape with coarser representations to a more diffuse

  19. Chemical etching method assisted double-pulse LIBS for the analysis of silicon crystals

    NASA Astrophysics Data System (ADS)

    Khalil, A. A. I.

    2015-06-01

    Two Nd:YAG lasers working in pulsed modes are combined in the same direction (collinear arrangement) and focused on silicon (Si) crystals in a reduced oxygen atmosphere (0.1 mbar) for a double-pulse laser-induced breakdown spectroscopy (DP-LIBS) system. Silicon crystals of (100) and (111) orientations were investigated, and Si samples were measured either without prior treatment ("untreated") or after fabrication of nano-pores ("treated"). Nano-pores are produced by metal coating and by chemical etching. DP-LIBS spectra were compared for different Si samples (untreated, treated, (100) and (111) orientations), for double-pulse (DP) excitation (a 266 nm pulse followed by a 1064 nm pulse) and for different delay times (times between the excitation laser pulse and the detection ICCD gate); treatment by chemical etching was studied as well. The intensity of the atomic line Si I at 288.16 nm was enhanced by a factor of about three using the DP-LIBS signals as compared to the single-pulse (SP) signal, which could increase the sensitivity of the LIBS technique. This study showed that an optimized chemical etching time for the Si and short delay times are required. Plasma parameters [the electron temperature (Te) and the electron number density (Ne)] were calculated from the measured SP- and DP-LIBS spectra. The most important result of this study is the much higher DP-LIBS intensity observed on Si (100) as compared to Si (111) for measurements under the same experimental conditions. This study could provide important reference data for the design and optimization of DP-LIBS systems involved in plasma-facing component diagnostics.

  20. Potential of a spectroscopic measurement method using adding-doubling to retrieve the bulk optical properties of dense microalgal media.

    PubMed

    Bellini, Sarah; Bendoula, Ryad; Latrille, Eric; Roger, Jean-Michel

    2014-01-01

    In the context of algal mass cultivation, current techniques used for the characterization of algal cells require time-consuming sample preparation and a large amount of costly, standard instrumentation. As the physical and chemical properties of the algal cells strongly affect their optical properties, the optical characterization is seen as a promising method to provide an early diagnosis in the context of mass cultivation monitoring. This article explores the potential of a spectroscopic measurement method coupled with the inversion of the radiative transfer theory for the retrieval of the bulk optical properties of dense algal samples. Total transmittance and total reflectance measurements were performed over the 380-1020 nm range on dense algal samples with a double integrating sphere setup. The bulk absorption and scattering coefficients were thus extracted over the 380-1020 nm range by inverting the radiative transfer theory using inverse-adding-doubling computations. The experimental results are presented and discussed; the configuration of the optical setup remains a critical point. The absorption coefficients obtained for the four samples of this study appear not to be more informative about pigment composition than would be classical methods in analytical spectroscopy; however, there is a real added value in measuring the reduced scattering coefficient, as it appears to be strongly correlated to the size distribution of the algal cells. PMID:25198389
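
    The "doubling" half of adding-doubling can be illustrated in a simplified, angle-integrated form: start from a very thin layer with known reflectance and transmittance and repeatedly combine the layer with itself. The Python sketch below uses that scalar form with invented thin-layer values; the inverse adding-doubling used in the paper works with angularly resolved quantities and inverts the calculation to recover the bulk optical properties.

      def add_identical_layers(r, t):
          """Scalar adding equations for two identical plane-parallel slabs."""
          denom = 1.0 - r * r
          return r + t * t * r / denom, t * t / denom

      # Hypothetical thin-layer reflectance and transmittance (slight absorption).
      r, t = 1.0e-4, 1.0 - 1.5e-4
      for _ in range(16):        # 16 doublings = 2**16 identical thin layers
          r, t = add_identical_layers(r, t)

      print(f"bulk slab: reflectance ~ {r:.3f}, transmittance ~ {t:.3f}")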

  1. Known plaintext attack on double random phase encoding using fingerprint as key and a method for avoiding the attack.

    PubMed

    Tashima, Hideaki; Takeda, Masafumi; Suzuki, Hiroyuki; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki

    2010-06-21

    We have shown that the application of double random phase encoding (DRPE) to biometrics enables the use of biometrics as cipher keys for binary data encryption. However, DRPE is reported to be vulnerable to known-plaintext attacks (KPAs) using a phase recovery algorithm. In this study, we investigated the vulnerability of DRPE using fingerprints as cipher keys to the KPAs. By means of computational experiments, we estimated the encryption key and restored the fingerprint image using the estimated key. Further, we propose a method for avoiding the KPA on the DRPE that employs the phase retrieval algorithm. The proposed method makes the amplitude component of the encrypted image constant in order to prevent the amplitude component of the encrypted image from being used as a clue for phase retrieval. Computational experiments showed that the proposed method not only avoids revealing the cipher key and the fingerprint but also serves as a sufficiently accurate verification system. PMID:20588510
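
    For readers unfamiliar with DRPE, a minimal numpy sketch of the classical scheme (random keys in both planes, not the fingerprint-derived keys studied here) is:

      import numpy as np

      rng = np.random.default_rng(1)
      f = rng.random((64, 64))                            # plaintext image

      phi1 = np.exp(2j * np.pi * rng.random(f.shape))     # input-plane phase mask
      phi2 = np.exp(2j * np.pi * rng.random(f.shape))     # Fourier-plane phase mask

      # Encryption: mask, Fourier transform, mask again, inverse transform.
      encrypted = np.fft.ifft2(np.fft.fft2(f * phi1) * phi2)

      # Decryption reverses the steps with the conjugate masks.
      decrypted = np.fft.ifft2(np.fft.fft2(encrypted) * np.conj(phi2)) * np.conj(phi1)
      print("max reconstruction error:", float(np.max(np.abs(decrypted.real - f))))

    The known-plaintext weakness discussed in the abstract arises because an attacker who holds matching plaintext and ciphertext pairs can run a phase retrieval algorithm against exactly this structure.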

  2. [Effects of different tillage methods on photosynthetic characteristics, dry matter production and economic benefit of double cropping soybean].

    PubMed

    Tang, Jiang-hua; Su, Li-li; Li, Ya-jie; Xu, Wen-xiu; Peng, Jiang-long

    2016-01-01

    In order to explore a suitable mode of high-yield cultivation of double cropping soybean after wheat under drip irrigation in northern Xinjiang, field trials were set up in 2013-2014 to investigate the physiological indices and agronomic traits of double cropping soybean under different tillage methods with drip irrigation. The results showed that leaf area index (LAI), chlorophyll content (SPAD), leaf net photosynthetic rate (Pn), transpiration rate (Tr) and stomatal conductance (g(s)) during the determination period under the different tillage methods were in the order of tillage plus film covering (TP) > tillage (T) > rotary tillage (RT) > no-tillage (NT), and the intercellular CO₂ concentration (Ci) showed the opposite order. LAI, SPAD, Pn, Tr, and g(s) of TP were higher than those with NT by 55.0%, 9.1%, 41.8%, 37.5% and 56.4%, respectively, and Ci was decreased by 22.1%. TP enhanced the photosynthetic efficiency of soybean and improved the capacity for CO₂ assimilation, consequently leading to an increase of soybean yield under TP compared to NT. The plant dry matter accumulation of the TP treatment was greatly improved, with the pod number and seed number per plant, 100-seed mass and yield of quadric sowing soybean being increased by 50.3%, 48.1%, 11.8% and 20.8% compared with those under NT, and the differences were significant. Therefore, plastic film mulching combined with tillage under drip irrigation was suitable for double cropping soybean after wheat in northern Xinjiang under these experimental conditions. PMID:27228608

  3. Double-salting out assisted liquid-liquid extraction (SALLE) HPLC method for estimation of temozolomide from biological samples.

    PubMed

    Jain, Darshana; Athawale, Rajani; Bajaj, Amrita; Shrikhande, Shruti

    2014-11-01

    The role of temozolomide (TMZ) in the treatment of high-grade gliomas, melanomas and other malignancies is being defined by current clinical development trials. Temozolomide belongs to the group of alkylating agents and is prescribed to patients suffering from the most aggressive forms of brain tumors. The estimation techniques for temozolomide from extracted plasma or biological samples include high-performance liquid chromatography with UV detection (HPLC-UV), micellar electrokinetic capillary chromatography (MKEC) and liquid chromatography coupled to mass spectrometry (LC-MS). These methods suffer from disadvantages such as low resolution, low sensitivity, low recovery or high cost. An analytical method capable of estimating low quantities of TMZ in plasma samples with high extraction efficiency (%), high resolution and cost effectiveness therefore needs to be developed. A cost-effective, robust HPLC method with low interference from plasma components, using the salting-out assisted liquid-liquid extraction (SALLE) technique, was developed and validated for estimation of the drug from plasma samples. The extraction efficiency (%) with the conventional LLE technique using methanol, ethyl acetate, dichloromethane and acetonitrile was found to be 5.99±2.45, 45.39±4.56, 46.04±1.14 and 46.23±3.67, respectively. The extraction efficiency (%) improved with SALLE, where sodium chloride was used as an electrolyte, and was found to be 6.80±5.56, 52.01±3.13, 62.69±2.11 and 69.20±1.18 with methanol, ethyl acetate, dichloromethane and acetonitrile as the organic solvent. Upon utilization of two salts for extraction (double salting-out liquid-liquid extraction), the extraction efficiency (%) was further improved and was twice that of LLE: the double salting-out technique yielded extraction efficiencies (%) of 11.71±5.66, 55.62±3.44, 77.28±2.89 and 87.75±0.89. Hence a method based on double SALLE was developed for quantification of TMZ demonstrating linearity in the range of

  4. 42 CFR 495.344 - Approval of the State Medicaid HIT plan, the HIT PAPD and update, the HIT IAPD and update, and...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...) STANDARDS AND CERTIFICATION STANDARDS FOR THE ELECTRONIC HEALTH RECORD TECHNOLOGY INCENTIVE PROGRAM... include all of the information required under this subpart. ...

  5. A double-index method to classify Kuroshio intrusion paths in the Luzon Strait

    NASA Astrophysics Data System (ADS)

    Huang, Zhida; Liu, Hailong; Hu, Jianyu; Lin, Pengfei

    2016-06-01

    A double index (DI), which is made up of two sub-indices, is proposed to describe the spatial patterns of the Kuroshio intrusion and mesoscale eddies west of the Luzon Strait, based on satellite altimeter data. The area-integrated negative and positive geostrophic vorticities are defined as the Kuroshio warm eddy index (KWI) and the Kuroshio cold eddy index (KCI), respectively. Three typical spatial patterns are identified by the DI: the Kuroshio warm eddy path (KWEP), the Kuroshio cold eddy path (KCEP), and the leaking path. The primary features of the DI and the three patterns are further investigated and compared with previous indices. The effects of the integration area and the algorithm of the integration are investigated in detail. In general, the DI can overcome the problem of previously used indices in which the positive and negative geostrophic vorticities cancel each other out. Thus, the proportions of missing and misjudged events are greatly reduced using the DI. The DI, as compared with previously used indices, can better distinguish the paths of the Kuroshio intrusion and can be used for further research.
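
    A toy version of the two sub-indices (area-integrated negative and positive geostrophic vorticity over a box west of the strait) is easy to write down. The Python sketch below uses a synthetic velocity field and grid, and the sign and normalization conventions are ours rather than the paper's.

      import numpy as np

      rng = np.random.default_rng(2)
      dx = dy = 25.0e3                              # grid spacing (m), hypothetical
      u = 0.2 * rng.standard_normal((40, 40))       # zonal geostrophic velocity (m/s)
      v = 0.2 * rng.standard_normal((40, 40))       # meridional geostrophic velocity (m/s)

      # Relative geostrophic vorticity zeta = dv/dx - du/dy over the box.
      zeta = np.gradient(v, dx, axis=1) - np.gradient(u, dy, axis=0)
      cell_area = dx * dy

      kwi = -np.sum(zeta[zeta < 0]) * cell_area     # warm-eddy (anticyclonic) sub-index
      kci = np.sum(zeta[zeta > 0]) * cell_area      # cold-eddy (cyclonic) sub-index
      print(f"KWI ~ {kwi:.3e} m^2/s, KCI ~ {kci:.3e} m^2/s")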

  6. An Automatic Quality Control Pipeline for High-Throughput Screening Hit Identification.

    PubMed

    Zhai, Yufeng; Chen, Kaisheng; Zhong, Yang; Zhou, Bin; Ainscow, Edward; Wu, Ying-Ta; Zhou, Yingyao

    2016-09-01

    The correction or removal of signal errors in high-throughput screening (HTS) data is critical to the identification of high-quality lead candidates. Although a number of strategies have been previously developed to correct systematic errors and to remove screening artifacts, they are not universally effective and still require a fair amount of human intervention. We introduce a fully automated quality control (QC) pipeline that can correct generic interplate systematic errors and remove intraplate random artifacts. The new pipeline was first applied to ~100 large-scale historical HTS assays; in silico analysis showed that auto-QC led to a noticeably stronger structure-activity relationship. The method was further tested in several independent HTS runs, where QC results were sampled for experimental validation. Significantly increased hit confirmation rates were obtained after the QC steps, confirming that the proposed method was effective in enriching true-positive hits. An implementation of the algorithm is available to the screening community. PMID:27313114
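
    One widely used building block for this kind of correction is a two-way median polish of each plate, the core of the B-score family. The Python sketch below applies it to a synthetic plate with an injected column artifact, purely as a generic illustration and not as the paper's auto-QC algorithm.

      import numpy as np

      rng = np.random.default_rng(3)
      plate = rng.normal(100.0, 5.0, size=(16, 24))   # synthetic 384-well readout
      plate[:, 0] += 30.0                             # injected edge-column artifact

      # Two-way median polish: alternately remove row and column medians.
      residual = plate - np.median(plate)
      for _ in range(10):
          residual -= np.median(residual, axis=1, keepdims=True)
          residual -= np.median(residual, axis=0, keepdims=True)

      score = residual / np.median(np.abs(residual))  # B-score-like scaling
      print("column-1 bias, raw vs corrected:",
            round(float(plate[:, 0].mean() - plate[:, 1:].mean()), 1),
            round(float(score[:, 0].mean()), 2))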

  7. Hitting Matrix and Domino Tiling with Diagonal Impurities

    NASA Astrophysics Data System (ADS)

    Nakano, Fumihiko; Sadahiro, Taizo

    2013-06-01

    As a continuation of our previous work (Nakano and Sadahiro in Fundam. Inform. 117:249-264, 2012; Nakano and Sadahiro in J. Stat. Phys. 139(4):565-597, 2010), we consider the domino tiling problem with impurities. (1) If we have more than two impurities on the boundary, we can compute the number of corresponding perfect matchings by using the hitting matrix method (Fomin in Trans. Am. Math. Soc. 353(9):3563-3583, 2001). (2) We give an alternative proof of the main result in Nakano and Sadahiro (Fundam. Inform. 117:249-264, 2012) and of the result in (1) above using the formula by Kenyon and Wilson (Trans. Am. Math. Soc. 363(3):1325-1364, 2011; Electron. J. Comb. 16(1):112, 2009) for counting the number of groves on circular planar graphs. (3) We study the behavior of the probability of finding the impurity at a given site when the size of the graph tends to infinity, as well as the scaling limit of those.

  8. Incorporation of a QM/MM Buffer Zone in the Variational Double Self-Consistent Field Method

    PubMed Central

    Xie, Wangshen; Song, Lingchun

    2009-01-01

    The explicit polarization (X-Pol) potential is an electronic-structure-based polarization force field, designed for molecular dynamics simulations and modeling of biopolymers. In this approach, molecular polarization and charge transfer effects are explicitly treated by a combined quantum mechanical and molecular mechanical (QM/MM) scheme, and the wave function of the entire system is variationally optimized by a double self-consistent field (DSCF) method. In the present article, we introduce a QM buffer zone for a smooth transition from a QM region to an MM region. Instead of using the Mulliken charge approximation for all QM/MM interactions, the Coulombic interactions between the adjacent fragments are determined directly by electronic structure theory. The present method is designed to accelerate the speed of convergence of the total energy and charge density of the system. PMID:18937511

  9. Multiple ion counting ICPMS double spike method for precise U isotopic analysis at ultra-trace levels

    NASA Astrophysics Data System (ADS)

    Snow, Jonathan E.; Friedrich, Jon M.

    2005-04-01

    Of the various methods for the measurement of the isotopic composition of U in solids and solutions, few offer both sensitivity and precision. In recent years, the use of ICPMS technology for this determination has become increasingly prevalent. Here we describe a method for the determination of the 235U/238U ratio in very small quantities (≤350 pg) with an accuracy of better than 3‰. We measured several terrestrial standard materials and repeated analyses of the U960 isotopic composition standard. We used a 233U/236U double spike, with multiple ion counting on an unmodified Nu Instruments multicollector ICPMS and a non-standard detector configuration that allows an approximately 20-fold sensitivity gain over the best conventional techniques. This technique shows promise for the detection of isotopic tracers in the environment (for example anthropogenic 238U) at very extreme dilutions, or in cases where the total amount of analyte is necessarily limited.
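
    A simplified flavour of the spike-based correction is sketched below under the exponential mass-fractionation law (a full double-spike deconvolution is an iterative three-equation inversion and is not shown). The atomic masses are standard values; the spike and sample ratios passed in are purely illustrative.

      import math

      M233, M235, M236, M238 = 233.0396, 235.0439, 236.0456, 238.0508

      def correct_235_238(r235_238_meas, r233_236_meas, r233_236_true):
          """Correct a measured 235U/238U ratio using a 233U/236U spike pair."""
          # Exponential law: R_meas = R_true * (m_i / m_ref) ** beta
          beta = math.log(r233_236_meas / r233_236_true) / math.log(M233 / M236)
          # Apply the same instrumental fractionation factor to the pair of interest.
          return r235_238_meas / (M235 / M238) ** beta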

  10. Comparison of validity and reliability of the Migraine disability assessment (MIDAS) versus headache impact test (HIT) in an Iranian population

    PubMed Central

    Chitsaz, Ahmad

    2011-01-01

    Background Migraine is one of the most common headaches, affecting 11% or more of the adult population. Recently, researchers have designed two questionnaires, namely the Headache Impact Test (HIT) and the Migraine Disability Assessment (MIDAS), with the aim of improving migraine care. These two tests provide a standardized measure of migraine's effect on patients' lifestyle and divide patients into 4 groups (grades) based on headache intensity. The aim of this study was to compare the validity and reliability of these two tests. Methods This study was designed as a multicenter, descriptive study to compare the validity and reliability of the Persian versions of the MIDAS and HIT questionnaires in 240 males and females with a migraine diagnosis according to the criteria for headache and facial pain of the International Headache Society (IHS). The patients were enrolled from 3 neurology clinics in Isfahan, Iran, between July 2004 and January 2005 and were evaluated at baseline (visit 1) and 4 weeks later (visit 2). Results There was a high correlation between the two tests (r = 0.94). In some patients, the MIDAS grade was lower than the corresponding HIT grade. Conclusion These findings demonstrate that the Persian version of HIT has the same validity and reliability as MIDAS. Replying to the HIT questionnaire was easier than to MIDAS for Iranian patients. Physicians can reliably use the Persian translations of both the MIDAS and HIT questionnaires, completed as self-administered reports by migraine patients, to define the severity of illness and its treatment strategy. However, we recommend HIT for its simplicity in headache clinics. PMID:24250844

  11. Ensemble density functional theory method correctly describes bond dissociation, excited state electron transfer, and double excitations

    SciTech Connect

    Filatov, Michael; Huix-Rotllant, Miquel; Burghardt, Irene

    2015-05-14

    State-averaged (SA) variants of the spin-restricted ensemble-referenced Kohn-Sham (REKS) method, SA-REKS and state-interaction (SI)-SA-REKS, implement ensemble density functional theory for variationally obtaining excitation energies of molecular systems. In this work, the existing version of the SA-REKS method, which includes only one excited state in the ensemble averaging, is extended by adding more excited states to the averaged energy functional. A general strategy for the extension of REKS-type methods to larger ensembles of ground and excited states is outlined and implemented in extended versions of the SA-REKS and SI-SA-REKS methods. The newly developed methods are tested in the calculation of several excited states of ground-state multi-reference systems, such as the dissociating hydrogen molecule, and excited states of donor–acceptor molecular systems. For the hydrogen molecule, the new method correctly reproduces the distance dependence of the lowest excited state energies and describes an avoided crossing between the doubly excited and singly excited states. For the bithiophene–perylenediimide stacked complex, the SI-SA-REKS method correctly describes the crossing between the locally excited state and the charge transfer excited state and yields vertical excitation energies in good agreement with ab initio wavefunction methods.
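
    For orientation, the generic state-averaged ensemble energy that SA-REKS-type methods minimize variationally can be sketched (the actual REKS construction in terms of microstates and fractionally occupied orbitals is more elaborate) in LaTeX notation as

      E_{\mathrm{SA}} = \sum_{I=0}^{N} w_I \, E_I , \qquad \sum_{I=0}^{N} w_I = 1 , \quad w_I \ge 0 ,

    so that the extension described above amounts to including additional excited-state energies E_I with nonzero weights w_I; in the SI-SA-REKS variant the individual state energies are subsequently obtained by diagonalizing the Hamiltonian in the basis of the interacting states.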

  12. Layered double hydroxide/polyethylene terephthalate nanocomposites. Influence of the intercalated LDH anion and the type of polymerization heating method

    SciTech Connect

    Herrero, M.; Martinez-Gallegos, S.; Labajos, F.M.; Rives, V.

    2011-11-15

    Conventional and microwave heating routes have been used to prepare PET-LDH (polyethylene terephthalate-layered double hydroxide) composites with 1-10 wt% LDH by in situ polymerization. To enhance the compatibility between PET and the LDH, terephthalate or dodecyl sulphate had been previously intercalated into the LDH. PXRD and TEM were used to assess the degree of dispersion of the filler and the type of polymeric composite obtained, and FTIR spectroscopy confirmed that the polymerization process had taken place. The thermal stability of these composites, as studied by thermogravimetric analysis, was enhanced when the microwave heating method was applied. Dodecyl sulphate was more effective than terephthalate at exfoliating the samples; for the terephthalate-containing samples, exfoliation occurred only under microwave irradiation. The microwave process improves the dispersion and the thermal stability of the nanocomposites owing to the interaction of the microwave radiation with the dipolar properties of EG and the resulting homogeneous heating. Highlights: LDH-PET compatibility is enhanced by pre-intercalation of organic anions; dodecyl sulphate performs much better than terephthalate; microwave heating improves the thermal stability of the composites as well as the dispersion of the inorganic phase.

  13. Rapid recovery of polycrystalline silicon from kerf loss slurry using double-layer organic solvent sedimentation method

    NASA Astrophysics Data System (ADS)

    Xing, Peng-fei; Guo, Jing; Zhuang, Yan-xin; Li, Feng; Tu, Gan-feng

    2013-10-01

    The rapid development of the photovoltaic (PV) industry has led to a shortage of silicon feedstock. However, more than 40% of the silicon goes into slurry waste due to kerf loss in the wafer slicing process. To effectively recycle polycrystalline silicon from the kerf loss slurry, an innovative double-layer organic solvent sedimentation process is presented in this paper. The sedimentation velocities of Si and SiC particles in several organic solvents were investigated. Considering the polarity, viscosity, and density of the solvents, chloroepoxy propane and carbon tetrachloride were selected to separate the Si and SiC particles. It is found that Si and SiC particles in the slurry waste can be successfully separated by the double-layer organic solvent sedimentation method, which greatly reduces the sedimentation time and improves the purity of the obtained Si-rich and SiC-rich powders. The obtained Si-rich powders consist of 95.04% Si, and the cast Si ingot contains 99.06% Si.
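
    The physical basis for the separation, namely the different settling rates of Si and SiC, can be illustrated with Stokes' law; the particle size and solvent properties below are rough, assumed values and do not come from the paper.

      G = 9.81  # gravitational acceleration [m/s^2]

      def stokes_velocity(d, rho_p, rho_f, mu):
          """Terminal settling velocity v = g*d^2*(rho_p - rho_f)/(18*mu).
          d: particle diameter [m]; rho_p, rho_f: particle/fluid density [kg/m^3];
          mu: dynamic viscosity [Pa*s]."""
          return G * d**2 * (rho_p - rho_f) / (18.0 * mu)

      # Assumed example: 2-micron particles in carbon tetrachloride
      # (rho ~ 1590 kg/m^3, mu ~ 0.97e-3 Pa*s); Si ~ 2330, SiC ~ 3210 kg/m^3.
      v_si = stokes_velocity(2e-6, 2330.0, 1590.0, 0.97e-3)
      v_sic = stokes_velocity(2e-6, 3210.0, 1590.0, 0.97e-3)   # roughly twice v_si

    Because SiC settles roughly twice as fast as Si under these assumptions, a suitably timed sedimentation enriches the upper fraction in Si and the lower fraction in SiC.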

  14. A PCR-Based Method to Construct Lentiviral Vector Expressing Double Tough Decoy for miRNA Inhibition.

    PubMed

    Qiu, Huiling; Zhong, Jiasheng; Luo, Lan; Liu, Nian; Kang, Kang; Qu, Junle; Peng, Wenda; Gou, Deming

    2015-01-01

    DNA vector-encoded Tough Decoy (TuD) miRNA inhibitors are attracting increased attention due to their high efficiency in miRNA suppression. The current methods used to construct TuD vectors are based on synthesizing long oligonucleotides (~90 mer), which is costly and problematic because of mutations arising during synthesis. In this study, we report a PCR-based method for the generation of a double Tough Decoy (dTuD) vector in which only two sets of shorter oligonucleotides (<60 mer) are used. Different approaches were employed to test the inhibitory potency of dTuDs. We demonstrated that dTuD is the most efficient method for miRNA inhibition in vitro and in vivo. Using this method, a mini dTuD library against 88 human miRNAs was constructed and used for a high-throughput screening (HTS) of AP-1 pathway-related miRNAs. Seven miRNAs (miR-18b-5p, -101-3p, -148b-3p, -130b-3p, -186-3p, -187-3p and -1324) were identified as candidates involved in AP-1 pathway regulation. This novel method allows for the accurate and cost-effective generation of dTuD miRNA inhibitors, providing a powerful tool for efficient miRNA suppression in vitro and in vivo. PMID:26624995

  15. Improved curveball hitting through the enhancement of visual cues.

    PubMed

    Osborne, K; Rudrud, E; Zezoney, F

    1990-01-01

    This study investigated the effectiveness of using visual cues to highlight the seams of baseballs to improve the hitting of curveballs. Five undergraduate varsity baseball team candidates served as subjects. Behavior change was assessed through an alternating treatments design involving unmarked balls and two treatment conditions that included baseballs with 1/4-in. and 1/8-in. orange stripes marking the seams of the baseballs. Results indicated that subjects hit a greater percentage of marked than unmarked balls. These results suggest that the addition of visual cues may be a significant and beneficial technique to enhance hitting performance. Further research is suggested regarding the training procedures, effect of feedback, rate of fading cues, generalization to live pitching, and generalization to other types of pitches. PMID:2249972

  16. Influence of Running on Pistol Shot Hit Patterns.

    PubMed

    Kerkhoff, Wim; Bolck, Annabel; Mattijssen, Erwin J A T

    2016-01-01

    In shooting scene reconstructions, risk assessment of the situation can be important for the legal system. Shooting accuracy and precision, and thus risk assessment, might be correlated with the shooter's physical movement and experience. The hit patterns of inexperienced and experienced shooters, while shooting stationary (10 shots) and in running motion (10 shots) with a semi-automatic pistol, were compared visually (with confidence ellipses) and statistically. The results show a significant difference in precision (circumference of the hit patterns) between stationary shots and shots fired in motion for both inexperienced and experienced shooters. The decrease in precision for all shooters was significantly larger in the y-direction than in the x-direction. The precision of the experienced shooters is overall better than that of the inexperienced shooters. No significant change in accuracy (shift in the hit pattern center) between stationary shots and shots fired in motion can be seen for all shooters. PMID:26331462
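
    A confidence ellipse of the kind used for the visual comparison can be computed from the sample mean and covariance of the hit coordinates; the sketch below is a generic illustration (the 5.991 factor is the 95% chi-square quantile for two degrees of freedom), not the authors' analysis code.

      import numpy as np

      def confidence_ellipse(xy, q=5.991):
          """xy: (n, 2) array of hit coordinates. Returns (center, semi-axes, angle in degrees)."""
          center = xy.mean(axis=0)
          cov = np.cov(xy, rowvar=False)
          eigvals, eigvecs = np.linalg.eigh(cov)                  # ascending eigenvalues
          semi_axes = np.sqrt(q * eigvals)                        # ellipse semi-axis lengths
          angle = np.degrees(np.arctan2(*eigvecs[:, 1][::-1]))    # orientation of major axis
          return center, semi_axes, angle

    A larger ellipse, and in particular a larger semi-axis in the y-direction, for shots fired in motion than for stationary shots corresponds to the precision loss reported above.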

  17. Quality Improvement Method using Double Error Correction in Burst Transmission Systems

    NASA Astrophysics Data System (ADS)

    Tsuchiya, Naosuke; Tomiyama, Shigenori; Tanaka, Kimio

    Recently, there has been a tendency to reduce error correction and flow control in order to achieve high-speed transmission in burst transmission systems such as ATM networks, IP (Internet Protocol) networks, and frame relay. As a result, network quality degrades: information is lost through buffer overflow and the average bit error rate deteriorates. For high-speed information such as high-definition television signals in particular, it is necessary to mitigate these degradations. This paper proposes a typical reconstruction method for lost information together with an improvement of the average bit error rate. To analyse the degradation phenomena, the Gilbert model is introduced for burst errors and the fluid-flow model for buffer overflow. The method is applied to an ATM network that mainly transmits video signals, and it is shown that the proposed method is useful for high-speed transmission.
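
    The Gilbert model mentioned above is a two-state Markov channel: bits pass unharmed in the "good" state and are corrupted with some probability in the "bad" state. The sketch below simulates such a channel; the transition and error probabilities are illustrative assumptions, not parameters from the paper.

      import random

      def gilbert_channel(bits, p_gb=0.01, p_bg=0.3, h=0.5, seed=0):
          """p_gb: P(good -> bad), p_bg: P(bad -> good), h: bit-error probability in the bad state."""
          rng = random.Random(seed)
          state_bad = False
          out = []
          for b in bits:
              if state_bad and rng.random() < h:
                  b ^= 1                                   # burst error
              if state_bad:
                  state_bad = rng.random() >= p_bg         # stay in the bad state unless it recovers
              else:
                  state_bad = rng.random() < p_gb          # occasionally enter a burst
              out.append(b)
          return out

    Because errors cluster while the channel sits in the bad state, such a simulation reproduces the bursty loss patterns that the proposed reconstruction method is meant to mitigate.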

  18. Novel current drive experiments on the CDX-U, HIT, and DIII-D Tokamaks

    SciTech Connect

    Ono, M.; Forest, C.B.; Hwang, Y.S.; Armstrong, R.J.; Choe, W.; Darrow, D.S.; Greene, G.; Jones, T. . Plasma Physics Lab.); Jarboe, T.R.; Martin, A.; Nelson, B.A.; Orvis, D.; Painter, C.; Zhou, L.; Rogers, J.A. ); Schaffer, M.J.; Hyatt, A.W.; Pinsker, R.I.; Staebler, G.M.; Stambaugh, R.D.; Strait, E.J.; Greene, K.L.; Leuer, J.A.; Lohr, J.

    1992-01-01

    Two types of novel, non-inductive current drive concepts for starting up and maintaining tokamak discharges have been developed on the CDX-U, HIT, and DIII-D tokamaks. On CDX-U, a new, non-inductive current drive technique utilizing fully internally generated pressure-driven currents has been demonstrated. The measured current density profile is non-hollow, which agrees with a modeling calculation in which helicity-conserving non-classical current transport provides the "seed current". Another current drive concept, dc helicity injection, has been investigated on CDX-U, HIT, and DIII-D. This method utilizes injection of magnetic helicity via low-energy electron currents, maintaining the plasma current through helicity-conserving relaxation. In these experiments, non-ohmic tokamak plasmas were formed and maintained in the tens-of-kA range.

  19. Novel current drive experiments on the CDX-U, HIT, and DIII-D Tokamaks

    SciTech Connect

    Ono, M.; Forest, C.B.; Hwang, Y.S.; Armstrong, R.J.; Choe, W.; Darrow, D.S.; Greene, G.; Jones, T.; Jarboe, T.R.; Martin, A.; Nelson, B.A.; Orvis, D.; Painter, C.; Zhou, L.; Rogers, J.A.; Schaffer, M.J.; Hyatt, A.W.; Pinsker, R.I.; Staebler, G.M.; Stambaugh, R.D.; Strait, E.J.; Greene, K.L.; Leuer, J.A.; Lohr, J.M.

    1992-10-01

    Two types of novel, non-inductive current drive concepts for starting up and maintaining tokamak discharges have been developed on the CDX-U, HIT, and DIII-D tokamaks. On CDX-U, a new, non-inductive current drive technique utilizing fully internally generated pressure-driven currents has been demonstrated. The measured current density profile is non-hollow, which agrees with a modeling calculation in which helicity-conserving non-classical current transport provides the "seed current". Another current drive concept, dc helicity injection, has been investigated on CDX-U, HIT, and DIII-D. This method utilizes injection of magnetic helicity via low-energy electron currents, maintaining the plasma current through helicity-conserving relaxation. In these experiments, non-ohmic tokamak plasmas were formed and maintained in the tens-of-kA range.

  20. Variation in number of hits for complex searches in Google Scholar

    PubMed Central

    Bramer, Wichor Matthijs

    2016-01-01

    Objective Google Scholar is often used to search for medical literature. The numbers of results reported by Google Scholar exceed the numbers reported by traditional databases. How reliable are these numbers? Why are the 1,000 available references often not all shown? Methods For several complex search strategies used in systematic review projects, the number of citations and the total number of versions were calculated. Several search strategies were followed over a two-year period, registering fluctuations in reported search results. Results Changes in the numbers of reported search results varied enormously between search strategies and dates. Theories for the calculation of the reported and displayed numbers of hits could not be confirmed. Conclusions The number of hits reported in Google Scholar is an unreliable measure. Therefore, its repeatability is problematic, at least when identical results are needed. PMID:27076802

  1. High-Throughput Crystallography: Reliable and Efficient Identification of Fragment Hits.

    PubMed

    Schiebel, Johannes; Krimmer, Stefan G; Röwer, Karine; Knörlein, Anna; Wang, Xiaojie; Park, Ah Young; Stieler, Martin; Ehrmann, Frederik R; Fu, Kan; Radeva, Nedyalka; Krug, Michael; Huschmann, Franziska U; Glöckner, Steffen; Weiss, Manfred S; Mueller, Uwe; Klebe, Gerhard; Heine, Andreas

    2016-08-01

    Today the identification of lead structures for drug development often starts from small fragment-like molecules, raising the chances of finding compounds that successfully pass clinical trials. At the heart of the screening for fragments binding to a specific target, crystallography delivers structural information essential for subsequent drug design. While it is common to search for bound ligands in electron densities calculated directly after an initial refinement cycle, we raise the important question of whether this strategy is viable for fragments characterized by low affinities. Here, we describe and provide a collection of high-quality diffraction data obtained from 364 protein crystals treated with diverse fragments. Subsequent data analysis showed that ∼25% of all hits would have been missed without further refinement of the resulting structures. To enable fast and reliable hit identification, we have designed an automated refinement pipeline that will inspire the development of optimized tools facilitating the successful application of fragment-based methods. PMID:27452405

  2. Screening and hit evaluation of a chemical library against blood-stage Plasmodium falciparum

    PubMed Central

    2014-01-01

    Background In view of the need to continuously feed the pipeline with new anti-malarial agents adapted to differentiated and more stringent target product profiles (e.g., new modes of action, transmission-blocking activity or long-duration chemo-protection), a chemical library consisting of more than 250,000 compounds has been evaluated in a blood-stage Plasmodium falciparum growth inhibition assay and further assessed for chemical diversity and novelty. Methods The selection cascade used for the triaging of hits from the chemical library started with a robust three-step in vitro assay followed by an in silico analysis of the resulting confirmed hits. Upon reaching the predefined requirements for selectivity and potency, the set of hits was subjected to computational analysis to assess chemical properties and diversity. Furthermore, known marketed anti-malarial drugs were co-clustered acting as ‘signposts’ in the chemical space defined by the hits. Then, in cerebro evaluation of the chemical structures was performed to identify scaffolds that currently are or have been the focus of anti-malarial medicinal chemistry programmes. Next, prioritization according to relaxed physicochemical parameters took place, along with the search for structural analogues. Ultimately, synthesis of novel chemotypes with desired properties was performed and the resulting compounds were subsequently retested in a P. falciparum growth inhibition assay. Results This screening campaign led to a 1.25% primary hit rate, which decreased to 0.77% upon confirmatory repeat screening. With the predefined potency (EC50 < 1 μM) and selectivity (SI > 10) criteria, 178 compounds progressed to the next steps where chemical diversity, physicochemical properties and novelty assessment were taken into account. This resulted in the selection of 15 distinct chemical series. Conclusion A selection cascade was applied to prioritize hits resulting from the screening of a medium-sized chemical
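
    The potency and selectivity triage step quoted above (EC50 < 1 μM, SI > 10) reduces to a simple filter; the sketch below is a generic illustration in which the field names and the definition of the selectivity index as a cytotoxicity-to-activity ratio are assumptions, not details from the paper.

      def triage(hits, ec50_max=1.0, si_min=10.0):
          """hits: iterable of dicts with 'id', 'ec50_uM' (antiparasitic EC50) and 'cc50_uM' (cytotoxic CC50)."""
          selected = []
          for h in hits:
              si = h["cc50_uM"] / h["ec50_uM"]             # assumed selectivity index
              if h["ec50_uM"] < ec50_max and si > si_min:
                  selected.append({**h, "si": si})
          return selected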

  3. Labeling DNA for Single-Molecule Experiments: Methods of Labeling Internal Specific Sequences on Double-Stranded DNA

    PubMed Central

    Zohar, Hagar; Muller, Susan J.

    2012-01-01

    This review is a practical guide for experimentalists interested in specifically labeling internal sequences on double-stranded (ds) DNA molecules for single-molecule experiments. We describe six labeling approaches demonstrated in a single-molecule context and discuss the merits and drawbacks of each approach with particular attention to the amount of specialized training and reagents required. By evaluating each approach according to criteria relevant to single-molecule experiments, including labeling yield and compatibility with cofactors such as Mg2+, we provide a simple reference for selecting a labeling method for given experimental constraints. Intended for non-specialists seeking accessible solutions to DNA labeling challenges, the approaches outlined emphasize simplicity, robustness, suitability for use by non-biologists, and utility in diverse single-molecule experiments. PMID:21734993

  4. Labeling DNA for single-molecule experiments: methods of labeling internal specific sequences on double-stranded DNA

    NASA Astrophysics Data System (ADS)

    Zohar, Hagar; Muller, Susan J.

    2011-08-01

    This review is a practical guide for experimentalists interested in specifically labeling internal sequences on double-stranded (ds) DNA molecules for single-molecule experiments. We describe six labeling approaches demonstrated in a single-molecule context and discuss the merits and drawbacks of each approach with particular attention to the amount of specialized training and reagents required. By evaluating each approach according to criteria relevant to single-molecule experiments, including labeling yield and compatibility with cofactors such as Mg2+, we provide a simple reference for selecting a labeling method for given experimental constraints. Intended for non-specialists seeking accessible solutions to DNA labeling challenges, the approaches outlined emphasize simplicity, robustness, suitability for use by non-biologists, and utility in diverse single-molecule experiments.

  5. [Study of analgesic efficacy of propacetamol in the postoperative period using a double blind placebo controlled method].

    PubMed

    Nikoda, V V; Maiachkin, R B

    2002-01-01

    The efficacy and safety of postoperative propacetamol were evaluated in 30 patients using a double-blind, placebo-controlled method. The first group consisted of 15 patients who received propacetamol intravenously in a single dose of 2 g along with patient-controlled analgesia with promedol. Placebo in combination with patient-controlled analgesia was used in the 15 patients of the second group. Intravenous administration of propacetamol at a dose of 2 g over 15 minutes reduces pain intensity in the postoperative period, which permits propacetamol to be considered a basic non-opioid analgesic. In the early postoperative period, the combination of propacetamol and an opioid analgesic (promedol) reduced the requirement for the latter by 44%. PMID:12462772

  6. A method for polycrystalline silicon delineation applicable to a double-diffused MOS transistor

    NASA Technical Reports Server (NTRS)

    Halsor, J. L.; Lin, H. C.

    1974-01-01

    Method is simple and eliminates requirement for unreliable special etchants. Structure is graded in resistivity to prevent punch-through and has very narrow channel length to increase frequency response. Contacts are on top to permit planar integrated circuit structure. Polycrystalline shield will prevent creation of inversion layer in isolated region.

  7. A hierarchy of local coupled cluster singles and doubles response methods for ionization potentials

    NASA Astrophysics Data System (ADS)

    Wälz, Gero; Usvyat, Denis; Korona, Tatiana; Schütz, Martin

    2016-02-01

    We present a hierarchy of local coupled cluster (CC) linear response (LR) methods to calculate ionization potentials (IPs), i.e., excited states with one electron annihilated relative to a ground state reference. The time-dependent perturbation operator V(t), as well as the operators related to the first-order (with respect to V(t)) amplitudes and multipliers, thus are not number conserving and have half-integer particle rank m. Apart from calculating IPs of neutral molecules, the method also offers the possibility of studying ground and excited states of neutral radicals as ionized states of closed-shell anions. It turns out that for comparable accuracy IPs require a higher-order treatment than excitation energies; an IP-CC LR method corresponding to CC2 LR or the algebraic diagrammatic construction scheme through second order performs rather poorly. We therefore systematically extended the order with respect to the fluctuation potential of the IP-CC2 LR Jacobian up to IP-CCSD LR, keeping the excitation space of the first-order (with respect to V(t)) cluster operator restricted to the m = 1/2 ⊕ 3/2 subspace and the accuracy of the zero-order (ground-state) amplitudes at the level of CC2 or MP2. For the more expensive diagrams beyond the IP-CC2 LR Jacobian, we employ local approximations. The implemented methods are capable of treating large molecular systems with a hundred atoms or more.
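
    As a hedged sketch of the standard ionization ansatz with the m = 1/2 ⊕ 3/2 restriction mentioned above (the local approximations of the paper are additional and not shown), the response operator can be written in LaTeX notation as

      \hat{R} = \sum_{i} r_i \, \hat{a}_i \; + \; \sum_{i<j,\,a} r_{ij}^{a} \, \hat{a}_a^{\dagger} \hat{a}_j \hat{a}_i ,

    where the one-hole (1h) term carries particle rank m = 1/2 and the two-hole-one-particle (2h1p) term carries m = 3/2; the ionization potentials are then obtained as eigenvalues of the coupled cluster Jacobian projected onto this space.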

  8. Interval Throwing and Hitting Programs in Baseball: Biomechanics and Rehabilitation.

    PubMed

    Chang, Edward S; Bishop, Meghan E; Baker, Dylan; West, Robin V

    2016-01-01

    Baseball injuries from throwing and hitting generally occur as a consequence of the repetitive and high-energy motions inherent to the sport. Biomechanical studies have contributed to understanding the pathomechanics leading to injury and to the development of rehabilitation programs. Interval-based throwing and hitting programs are designed to return an athlete to competition through a gradual progression of sport-specific exercises. Proper warm-up and strict adherence to the program allows the athlete to return as quickly and safely as possible. PMID:26991569

  9. Multi-hit time-to-amplitude CAMAC module (MTAC)

    SciTech Connect

    Kang, H.

    1980-10-01

    A Multi-Hit Time-to-Amplitude Module (MTAC) for the SLAC Mark III drift chamber system has been designed to measure drift time by converting time-proportional chamber signals into analog levels, which are then converted during slow readout via a semi-autonomous controller in a CAMAC crate. The single-width CAMAC module has 16 wire channels, each with a 4-hit capacity. An externally generated common start initiates an internal precision ramp voltage which is then sampled using a novel shift register gating scheme and CMOS sampling switches. The detailed design and performance specifications are described.
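
    The time recovery itself is simple arithmetic once the ramp is calibrated; the sketch below assumes a hypothetical linear ramp with made-up slope and offset, not the actual calibration constants of the module.

      def hits_to_times(sampled_volts, slope_v_per_ns=0.02, v_offset=0.1):
          """sampled_volts: up to 4 analog samples for one wire channel [V].
          Returns the elapsed time after the common start for each hit [ns]."""
          return [(v - v_offset) / slope_v_per_ns for v in sampled_volts]

      # Example: samples of 0.5 V, 1.3 V and 2.1 V map to 20 ns, 60 ns and 100 ns.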

  10. Design of Thomson Scattering Diagnostic for HIT-SI

    NASA Astrophysics Data System (ADS)

    Morgan, Kyle; Fryett, Taylor; Golingo, Raymond; Jarboe, Tom; Victor, Brian

    2012-10-01

    Steady Inductive Helicity Injection (SIHI) is used to create a spheromak inside the HIT-SI machine. A multi-point Thomson scattering diagnostic has been designed and is under construction for the HIT-SI experiment. The system uses a 20 J ruby laser with a 20 ns pulse length. The collection system allows for eight spatial measurement locations, with four being active at any time. Four polychromators are used to spectrally resolve the scattered light. Present Langmuir probe measurements show an electron temperature of about 12 eV, within the range the polychromators can resolve. Properties of the system and the expected measurements are given.
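
    For a rough sense of scale, the Doppler-broadened incoherent Thomson-scattered spectrum from a Maxwellian plasma has a 1/e half-width of (2*lambda0/c)*sin(theta/2)*sqrt(2*Te/me); the sketch below evaluates it for the quoted 12 eV, assuming 90-degree scattering, which is an assumption rather than a stated HIT-SI parameter.

      import math

      def thomson_width_nm(te_ev, lambda0_nm=694.3, theta_deg=90.0):
          """1/e half-width [nm] of the Thomson-scattered spectrum for temperature te_ev [eV]."""
          mec2_ev = 510998.95                               # electron rest energy [eV]
          beta_th = math.sqrt(2.0 * te_ev / mec2_ev)        # sqrt(2*Te/(me*c^2))
          return 2.0 * lambda0_nm * math.sin(math.radians(theta_deg) / 2.0) * beta_th

      # thomson_width_nm(12.0) is about 6.7 nm around the 694.3 nm ruby line,
      # comfortably within the resolving range of a filter polychromator.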