Walker, Virginia L; Lyon, Kristin J; Loman, Sheldon L; Sennott, Samuel
2018-06-01
The purpose of this meta-analysis was to summarize single-case intervention studies in which Functional Communication Training (FCT) involving augmentative and alternative communication (AAC) was implemented in school settings. Overall, the findings suggest that FCT involving AAC was effective in reducing challenging behaviour and promoting aided or unaided AAC use among participants with disability. FCT was more effective for the participants who engaged in less severe forms of challenging behaviour prior to intervention. Additionally, FCT was more effective when informed by a descriptive functional behaviour assessment and delivered within inclusive school settings. Implications for practice and directions for future research related to FCT for students who use AAC are addressed.
Huang, Yunzhi; Zhang, Junpeng; Cui, Yuan; Yang, Gang; Liu, Qi; Yin, Guangfu
2018-01-01
Sensor-level functional connectivity topography (sFCT) contributes significantly to our understanding of brain networks. sFCT can be constructed using either electroencephalography (EEG) or magnetoencephalography (MEG). Here, we compared sFCT within the EEG modality and between the EEG and MEG modalities. We first used simulations to examine how different EEG references, including the Reference Electrode Standardization Technique (REST), average reference (AR), linked mastoids (LM), and left mastoid reference (LR), affect EEG-based sFCT. The results showed that REST decreased the reference effects on scalp EEG recordings, making REST-based sFCT closer to the ground truth (sFCT based on ideal recordings). For the inter-modality simulation comparisons, we compared each type of EEG-sFCT with MEG-sFCT using three metrics to quantify the differences: Relative Error (RE), Overlap Rate (OR), and Hamming Distance (HD). When two sFCTs are similar, RE and HD are low, while OR is high. Results showed that, among all reference schemes, EEG- and MEG-sFCT were most similar when the EEG was REST-based and the EEG and MEG were recorded simultaneously. Next, we analyzed simultaneously recorded MEG and EEG data from publicly available face-recognition experiments using a procedure similar to that in the simulations. The results showed that (1) with MEG-sFCT as the standard, REST- and LM-based sFCT provided results closer to this standard in terms of HD; (2) REST-based sFCT and MEG-sFCT had the highest similarity in terms of RE; and (3) REST-based sFCT had the most overlapping edges with MEG-sFCT in terms of OR. This study thus provides new insights into the effect of different reference schemes on sFCT and the similarity between MEG and EEG in terms of sFCT. PMID:29867395
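The abstract above only names the three comparison metrics. As a rough illustration of how such metrics behave, the sketch below computes RE on weighted connectivity matrices and OR/HD on binarized topographies; the exact definitions are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def relative_error(fc_a, fc_b):
    """RE: normalized difference between two weighted FC matrices
    (lower = more similar)."""
    return np.linalg.norm(fc_a - fc_b) / np.linalg.norm(fc_b)

def _edges(adj):
    # Edge indicators from the upper triangle of a binary adjacency matrix.
    iu = np.triu_indices_from(adj, k=1)
    return adj[iu].astype(bool)

def overlap_rate(adj_a, adj_b):
    """OR: shared edges over the union of edges (higher = more similar)."""
    a, b = _edges(adj_a), _edges(adj_b)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

def hamming_distance(adj_a, adj_b):
    """HD: fraction of node pairs whose edge status differs
    (lower = more similar)."""
    a, b = _edges(adj_a), _edges(adj_b)
    return float(np.mean(a != b))
```

Identical topographies give RE = 0, OR = 1, HD = 0, consistent with the abstract's statement that similar sFCTs have low RE and HD but high OR.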
Fetterhoff, Dustin; Opris, Ioan; Simpson, Sean L.; Deadwyler, Sam A.; Hampson, Robert E.; Kraft, Robert A.
2014-01-01
Background: Multifractal analysis quantifies the time-scale-invariant properties in data by describing the structure of variability over time. By applying this analysis to hippocampal interspike interval sequences recorded during performance of a working memory task, a measure of long-range temporal correlations and multifractal dynamics can reveal single neuron correlates of information processing. New method: Wavelet leaders-based multifractal analysis (WLMA) was applied to hippocampal interspike intervals recorded during a working memory task. WLMA can be used to identify neurons likely to exhibit information processing relevant to operation of brain–computer interfaces and nonlinear neuronal models. Results: Neurons involved in memory processing ("Functional Cell Types" or FCTs) showed a greater degree of multifractal firing properties than neurons without task-relevant firing characteristics. In addition, previously unidentified FCTs were revealed because multifractal analysis suggested further functional classification. The cannabinoid-type 1 receptor partial agonist, tetrahydrocannabinol (THC), selectively reduced multifractal dynamics in FCT neurons compared to non-FCT neurons. Comparison with existing methods: WLMA is an objective tool for quantifying the memory-correlated complexity represented by FCTs that reveals additional information compared to classification of FCTs using traditional z-scores to identify neuronal correlates of behavioral events. Conclusion: z-Score-based FCT classification provides limited information about the dynamical range of neuronal activity characterized by WLMA. Increased complexity, as measured with multifractal analysis, may be a marker of functional involvement in memory processing. The level of multifractal attributes can be used to differentially emphasize neural signals to improve computational models and algorithms underlying brain–computer interfaces. PMID:25086297
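Full WLMA involves wavelet leaders (local suprema of wavelet coefficients across finer scales) and a multifractal spectrum, which is beyond a short sketch. As a deliberately simplified, mono-fractal illustration of the underlying idea (how fluctuations in an interspike-interval sequence scale across time scales), one might estimate a single Haar-based scaling exponent; the function names and this estimator are assumptions, not the paper's method.

```python
import numpy as np

def interspike_intervals(spike_times):
    """ISI sequence from (possibly unsorted) spike times."""
    return np.diff(np.sort(np.asarray(spike_times, dtype=float)))

def haar_scaling_exponent(x, n_scales=5):
    """Slope of log2(mean |Haar coefficient|) vs. log2(scale) over
    dyadic scales: a crude mono-fractal stand-in for WLMA."""
    x = np.asarray(x, dtype=float)
    log_scale, log_fluct = [], []
    for j in range(1, n_scales + 1):
        w = 2 ** j                      # window width at this dyadic scale
        n = (len(x) // w) * w           # truncate to a whole number of windows
        blocks = x[:n].reshape(-1, w)
        half = w // 2
        # Haar coefficient: difference of half-window means
        coeff = blocks[:, :half].mean(axis=1) - blocks[:, half:].mean(axis=1)
        log_scale.append(j)
        log_fluct.append(np.log2(np.mean(np.abs(coeff)) + 1e-12))
    slope, _ = np.polyfit(log_scale, log_fluct, 1)
    return slope
```

For uncorrelated (white-noise-like) ISIs the slope sits near -0.5; long-range temporal correlations of the kind WLMA quantifies pull it away from that value.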
A Systematic Review of Parent-Implemented Functional Communication Training for Children With ASD.
Gerow, Stephanie; Hagan-Burke, Shanna; Rispoli, Mandy; Gregori, Emily; Mason, Rose; Ninci, Jennifer
2018-05-01
Supporting parents in reducing challenging behavior of children with autism spectrum disorder (ASD) requires the identification of effective, feasible, and sustainable interventions. Functional communication training (FCT) is one of the most well-established interventions in the behavioral literature and is used increasingly by parents. However, there is a need for additional evaluation of the literature related to parent-implemented FCT. In the present review, we identified 26 peer-reviewed studies on parent-implemented FCT. We conducted systematic descriptive and social validity analyses to summarize the extant literature. Across studies, parent-implemented FCT was effective in reducing child challenging behavior, and in some cases, intervention outcomes maintained and generalized to novel settings and implementers. However, few studies reported fidelity data on parent implementation of FCT, and data regarding sustained use of FCT by parents were limited. Results of the social validity analysis indicate that while FCT is often implemented by natural change agents in typical settings, parent training is often provided by professionals not typically accessible to parents. These findings suggest that future research is warranted in the areas of parent training and long-term sustainability of parent-implemented FCT.
Kunnavatana, S Shanun; Wolfe, Katie; Aguilar, Alexandra N
2018-05-01
Functional communication training (FCT) is a common function-based behavioral intervention used to decrease problem behavior by teaching an alternative communication response. Therapists often arbitrarily select the topography of the alternative response, which may influence long-term effectiveness of the intervention. Assessing individual mand topography preference may increase treatment effectiveness and promote self-determination in the development of interventions. This study sought to reduce arbitrary selection of FCT mand topography by determining preference during response training and acquisition for two adults with autism who had no functional communication skills. Both participants demonstrated a clear preference for one mand topography during choice probes, and the preferred topography was then reinforced during FCT to reduce problem behavior and increase independent communication. The implications of the results for future research on mand selection during FCT are discussed.
Chen, Xiaobo; Zhang, Han; Zhang, Lichi; Shen, Celina; Lee, Seong-Whan; Shen, Dinggang
2017-10-01
Brain functional connectivity (FC) extracted from resting-state fMRI (RS-fMRI) has become a popular approach for diagnosing various neurodegenerative diseases, including Alzheimer's disease (AD) and its prodromal stage, mild cognitive impairment (MCI). Current studies mainly construct the FC networks between grey matter (GM) regions of the brain based on temporal co-variations of the blood oxygenation level-dependent (BOLD) signals, which reflect synchronized neural activity. However, it has rarely been investigated whether the FC detected within the white matter (WM) could provide useful information for diagnosis. Motivated by the recently proposed functional correlation tensors (FCT) computed from RS-fMRI and used to characterize the structured pattern of local FC in the WM, we propose in this article a novel MCI classification method based on the information conveyed by both the FC between the GM regions and that within the WM regions. Specifically, in the WM, the tensor-based metrics (e.g., fractional anisotropy [FA], similar to the metric calculated based on diffusion tensor imaging [DTI]) are first calculated based on the FCT and then summarized along each of the major WM fiber tracts connecting each pair of the brain GM regions. This could capture the functional information in the WM, in a similar network structure as the FC network constructed for the GM, based only on the same RS-fMRI data. Moreover, a sliding window approach is further used to partition the voxel-wise BOLD signal into multiple short overlapping segments. Then, both the FC and FCT between each pair of the brain regions can be calculated based on the BOLD signal segments in the GM and WM, respectively. In such a way, our method can generate dynamic FC and dynamic FCT to better capture functional information in both GM and WM and further integrate them together by using our developed feature extraction, selection, and ensemble learning algorithms.
The experimental results verify that the dynamic FCT can provide valuable functional information in the WM; by combining it with the dynamic FC in the GM, the diagnosis accuracy for MCI subjects can be significantly improved even using RS-fMRI data alone. Hum Brain Mapp 38:5019-5034, 2017. © 2017 Wiley Periodicals, Inc.
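The sliding-window construction described in this abstract can be sketched in a few lines. The window length, step size, and the plain Pearson-correlation choice below are illustrative assumptions; the paper's WM tensor (FCT) computation is not reproduced here.

```python
import numpy as np

def sliding_window_fc(bold, win_len=30, step=10):
    """Dynamic functional connectivity via overlapping windows.

    bold: (T, R) array -- T time points, R regions (or voxels).
    Returns an (n_windows, R, R) stack of Pearson correlation
    matrices, one per window.
    """
    T, _ = bold.shape
    starts = range(0, T - win_len + 1, step)
    return np.stack([np.corrcoef(bold[s:s + win_len].T) for s in starts])
```

Per-edge summaries across the window axis (e.g., mean and variance of each connection) could then serve as features for the kind of selection and ensemble-learning pipeline the abstract mentions.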
Hanley, Gregory P; Piazza, Cathleen C; Fisher, Wayne W; Maglieri, Kristen A
2005-01-01
The current study describes an assessment sequence that may be used to identify individualized, effective, and preferred interventions for severe problem behavior in lieu of relying on a restricted set of treatment options that are assumed to be in the best interest of consumers. The relative effectiveness of functional communication training (FCT) with and without a punishment component was evaluated with 2 children for whom functional analyses demonstrated behavioral maintenance via social positive reinforcement. The results showed that FCT plus punishment was more effective than FCT in reducing problem behavior. Subsequently, participants' relative preference for each treatment was evaluated in a concurrent-chains arrangement, and both participants demonstrated a clear preference for FCT with punishment. These findings suggest that the treatment-selection process may be guided by person-centered and evidence-based values.
Functional Communication Training in the Classroom: A Guide for Success
ERIC Educational Resources Information Center
Mancil, G. Richmond; Boman, Marty
2010-01-01
Researchers have consistently shown the effectiveness of functional communication training (FCT) to address both the communication and behavioral needs of children on the autism spectrum. The three steps of FCT include completing a functional behavior assessment, identifying a communication response, and developing a treatment plan. In addition,…
ERIC Educational Resources Information Center
Fuhrman, Ashley M.; Fisher, Wayne W.; Greer, Brian D.
2016-01-01
Despite the effectiveness and widespread use of functional communication training (FCT), resurgence of destructive behavior can occur if the functional communication response (FCR) contacts a challenge, such as lapses in treatment integrity. We evaluated a method to mitigate resurgence by conducting FCT using a multiple schedule of reinforcement…
ERIC Educational Resources Information Center
Durand, V. Mark; Merges, Eileen
2001-01-01
This article describes functional communication training (FCT) with students who have autism. FCT involves teaching alternative communication strategies to replace problem behaviors. The article reviews the conditions under which this intervention is successful and compares the method with other behavioral approaches. It concludes that functional…
An Evaluation of Resurgence during Functional Communication Training
ERIC Educational Resources Information Center
Wacker, David P.; Harding, Jay W.; Morgan, Theresa A.; Berg, Wendy K.; Schieltz, Kelly M.; Lee, John F.; Padilla, Yaniz C.
2013-01-01
Three children who displayed destructive behavior maintained by negative reinforcement received functional communication training (FCT). During FCT, the children were required to complete a demand and then to mand (touch a card attached to a microswitch, sign, or vocalize) to receive brief play breaks. Prior to and 1 to 3 times following the…
Single-Case Analysis to Determine Reasons for Failure of Behavioral Treatment via Telehealth
ERIC Educational Resources Information Center
Schieltz, Kelly M.; Romani, Patrick W.; Wacker, David P.; Suess, Alyssa N.; Huang, Pei; Berg, Wendy K.; Lindgren, Scott D.; Kopelman, Todd G.
2018-01-01
Functional communication training (FCT) is a widely used and effective function-based treatment for problem behavior. The purpose of this article is to present two cases in which FCT was unsuccessful in reducing the occurrence of problem behavior displayed by two young children with an autism spectrum disorder. Both children received the same…
ERIC Educational Resources Information Center
Suess, Alyssa N.; Romani, Patrick W.; Wacker, David P.; Dyson, Shannon M.; Kuhle, Jennifer L.; Lee, John F.; Lindgren, Scott D.; Kopelman, Todd G.; Pelzel, Kelly E.; Waldron, Debra B.
2014-01-01
We conducted a retrospective, descriptive evaluation of the fidelity with which parents of three children with autism spectrum disorders conducted functional communication training (FCT) in their homes. All training was provided to the parents via telehealth by a behavior consultant in a tertiary-level hospital setting. FCT trials coached by the…
Assembly mechanism of FCT region type 1 pili in serotype M6 Streptococcus pyogenes.
Nakata, Masanobu; Kimura, Keiji Richard; Sumitomo, Tomoko; Wada, Satoshi; Sugauchi, Akinari; Oiki, Eiji; Higashino, Miharu; Kreikemeyer, Bernd; Podbielski, Andreas; Okahashi, Nobuo; Hamada, Shigeyuki; Isoda, Ryutaro; Terao, Yutaka; Kawabata, Shigetada
2011-10-28
The human pathogen Streptococcus pyogenes produces diverse pili depending on the serotype. We investigated the assembly mechanism of FCT type 1 pili in a serotype M6 strain. The pili were found to be assembled from two precursor proteins, the backbone protein T6 and ancillary protein FctX, and anchored to the cell wall in a manner that requires both a housekeeping sortase enzyme (SrtA) and pilus-associated sortase enzyme (SrtB). SrtB is primarily required for efficient formation of the T6 and FctX complex and subsequent polymerization of T6, whereas proper anchoring of the pili to the cell wall is mainly mediated by SrtA. Because motifs essential for polymerization of pilus backbone proteins in other Gram-positive bacteria are not present in T6, we sought to identify the functional residues involved in this process. Our results showed that T6 encompasses the novel VAKS pilin motif conserved in streptococcal T6 homologues and that the lysine residue (Lys-175) within the motif and cell wall sorting signal of T6 are prerequisites for isopeptide linkage of T6 molecules. Because Lys-175 and the cell wall sorting signal of FctX are indispensable for substantial incorporation of FctX into the T6 pilus shaft, FctX is suggested to be located at the pilus tip, which was also implied by immunogold electron microscopy findings. Thus, the elaborate assembly of FCT type 1 pili is potentially organized by sortase-mediated cross-linking between sorting signals and the amino group of Lys-175 positioned in the VAKS motif of T6, thereby displaying T6 and FctX in a temporospatial manner.
Utility of Extinction-Induced Response Variability for the Selection of Mands
ERIC Educational Resources Information Center
Grow, Laura L.; Kelley, Michael E.; Roane, Henry S.; Shillingsburg, M. Alice
2008-01-01
Functional communication training (FCT; Carr & Durand, 1985) is a commonly used differential reinforcement procedure for replacing problem behavior with socially acceptable alternative responses. Most studies in the FCT literature consist of demonstrations of the maintenance of responding when various treatment components (e.g., extinction,…
Involvement of T6 pili in biofilm formation by serotype M6 Streptococcus pyogenes.
Kimura, Keiji Richard; Nakata, Masanobu; Sumitomo, Tomoko; Kreikemeyer, Bernd; Podbielski, Andreas; Terao, Yutaka; Kawabata, Shigetada
2012-02-01
The group A streptococcus (GAS) Streptococcus pyogenes is known to cause self-limiting purulent infections in humans. The role of GAS pili in host cell adhesion and biofilm formation is likely fundamental in early colonization. Pilus genes are found in the FCT (fibronectin-binding protein, collagen-binding protein, and trypsin-resistant antigen) genomic region, which has been classified into nine subtypes based on the diversity of gene content and nucleotide sequence. Several epidemiological studies have indicated that FCT type 1 strains, including serotype M6, produce large amounts of monospecies biofilm in vitro. We examined the direct involvement of pili in biofilm formation by serotype M6 clinical isolates. In the majority of tested strains, deletion of the tee6 gene encoding pilus shaft protein T6 compromised the ability to form biofilm on an abiotic surface. Deletion of the fctX and srtB genes, which encode pilus ancillary protein and class C pilus-associated sortase, respectively, also decreased biofilm formation by a representative strain. Unexpectedly, these mutant strains showed increased bacterial aggregation compared with that of the wild-type strain. When the entire FCT type 1 pilus region was ectopically expressed in serotype M1 strain SF370, biofilm formation was promoted and autoaggregation was inhibited. These findings indicate that assembled FCT type 1 pili contribute to biofilm formation and also function as attenuators of bacterial aggregation. Taken together, our results show the potential role of FCT type 1 pili in the pathogenesis of GAS infections. PMID:22155780
Interactive full channel teletext system for cable television nets
NASA Astrophysics Data System (ADS)
Vandenboom, H. P. A.
1984-08-01
A demonstration set-up of an interactive full channel teletext (FCT) system for cable TV networks with two-way data communication was designed and realized. In FCT, all image lines are used as teletext data lines. The FCT encoder, which generates the FCT signal, was placed in the local center, and the FCT decoder in the mini-star. An additional FCT decoder selects a number of data lines from the FCT signal and places them on the image lines reserved for teletext, so that a normal TV receiver equipped with a teletext decoder can process the selected data lines. For texts not present in the FCT signal, a command can be sent to the local center via the data communication path. The result is a cheap and simple system in which the number of pages or books that can be requested is in principle unlimited, while the required waiting time and channel capacity remain limited.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Appel, Gordon John
Sandia National Laboratories (SNL) Fuel Cycle Technologies (FCT) program activities are conducted in accordance with FCT Quality Assurance Program Document (FCT-QAPD) requirements. The FCT-QAPD interfaces with the SNL-approved Quality Assurance Program Description (SNL-QAPD) as explained in the Sandia National Laboratories QA Program Interface Document for FCT Activities (Interface Document). This plan describes SNL's FY16 assessment of the compliance of SNL's FY15 FCT M2 milestone deliverables with program QA requirements, including SNL R&A requirements. The assessment is intended to confirm that SNL's FY15 milestone deliverables contain the appropriate authenticated review documentation and that a copy is marked with SNL R&A numbers.
Xie, Wei-Jie; Zhang, Yong-Ping; Xu, Jian; Sun, Xiao-Bo; Yang, Fang-Fang
2017-03-27
In this paper, a new type of physical penetration technology for transdermal administration with traditional Chinese medicine (TCM) characteristics is presented. Fu's cupping therapy (FCT) was established and studied using in vitro and in vivo experiments, and the penetration effect and mechanism of FCT physical penetration technology were preliminarily discussed. With 1-(4-chlorobenzoyl)-5-methoxy-2-methylindole-3-ylacetic acid (indomethacin, IM) as a model drug, high, medium, and low references were established for the chemical permeation system via in vitro transdermal tests. Furthermore, using chemical penetration enhancers (CPEs) and iontophoresis as references, the percutaneous penetration effect of FCT for IM patches was evaluated using seven in vitro diffusion kinetics models and in vitro drug distribution. The in vivo IM quantitative analysis method was established using ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS), and the following pharmacokinetic parameters were used as indicators to evaluate the percutaneous penetration effect of FCT in vivo: area under the zero and first moment curves from 0 to the last time point t (AUC0-t, AUMC0-t), area under the zero and first moment curves from 0 to infinity (AUC0-∞, AUMC0-∞), maximum plasma concentration (Cmax), and mean residence time (MRT). Additionally, we used a 3k factorial design to study the joint synergistic penetration effect of FCT and chemical penetration enhancers. Through scanning electron microscopy (SEM) and transmission electron microscopy (TEM) imaging, micro- and ultrastructural changes on the surface of the stratum corneum (SC) were observed to explore the FCT penetration mechanism. In vitro and in vivo skin permeation experiments revealed that both the total cumulative percutaneous amount and the in vivo percutaneous absorption amount of IM using FCT were greater than those using CPEs and iontophoresis.
Firstly, compared with the control group, the indomethacin skin percutaneous rate of the FCT low-intensity group (FCTL) was 35.52%, and the enhancement ratio (ER) at 9 h was 1.76X, roughly equivalent to the penetration-enhancing effect of the CPEs and iontophoresis. Secondly, the indomethacin percutaneous rates of the FCT middle-intensity (FCTM) and high-intensity (FCTH) groups were 47.36% and 54.58%, respectively, while the ERs at 9 h were 3.58X and 8.39X, respectively. Thirdly, pharmacokinetic data showed that in vivo indomethacin percutaneous absorption with FCT was much higher than that of the control; absorption in the FCTM group was slightly higher than that of the CPE group, and that of the FCTH group was significantly higher than all others. Meanwhile, variance analysis indicated that the combination of the FCT penetration enhancement method and the CPE method had beneficial effects in enhancing skin penetration: the significance level of the CPE method was 0.0004, below 0.001, meaning the difference was markedly significant; the significance level of FCT was below 0.0001, and its difference was also markedly significant. The significance level of the factor interaction A × B was below 0.0001, indicating that the difference in synergism was markedly significant. Moreover, SEM and TEM images showed that the SC surfaces of Sprague-Dawley rats treated with FCT were damaged: it was difficult to observe an intact surface structure, SC pores grew larger, and the special "brick structure" of the SC became looser. This indicated that the barrier function of the skin was broken, revealing a potentially major route of skin penetration. FCT, as a new form of transdermal penetration technology, has significant penetration-enhancing effects with TCM characteristics, is of high clinical value, and is worth further development.
Evaluation of interventions to reduce multiply controlled vocal stereotypy.
Scalzo, Rachel; Henry, Kelsey; Davis, Tonya N; Amos, Kally; Zoch, Tamara; Turchan, Sarah; Wagner, Tara
2015-07-01
This study examined four interventions targeted at decreasing multiply controlled vocal stereotypy for a 12-year-old boy diagnosed with autism spectrum disorder and a severe intellectual disability. These interventions included Noncontingent Music, Differential Reinforcement of Other Behaviors, Self-Recording, and Functional Communication Training (FCT). In addition to measuring vocal stereotypy during each condition, task engagement and challenging behavior were also monitored. Across conditions, vocal stereotypy did not vary significantly from baseline except in FCT, when it decreased significantly. Task engagement was higher in this condition as well. It is hypothesized that FCT provided an enriched environment by increasing social interaction and access to desired items as well as removal of less preferred activities. For these reasons, there was a decrease in the need for the participant to engage in vocal stereotypy and challenging behavior and an increase in his ability to engage in a task. © The Author(s) 2015.
Arumugam, Balamurugan; Tamaki, Takanori; Yamaguchi, Takeo
2015-08-05
Design of Pt alloy catalysts with enhanced activity and durability is a key challenge for polymer electrolyte membrane fuel cells. In the present work, we compare the durability of the ordered intermetallic face-centered tetragonal (fct) PtFeCu catalyst for the oxygen reduction reaction (ORR) relative to its counterpart catalysts, i.e., the ordered intermetallic fct-PtFe catalyst and the commercial catalyst from Tanaka Kikinzoku Kogyo, TKK-Pt/C. Although both fct catalysts initially exhibited an ordered structure and mass activity approximately 2.5 times higher than that of TKK-Pt/C, the presence of Cu at the ordered intermetallic fct-PtFeCu catalyst led to a significant enhancement in durability compared to that of the ordered intermetallic fct-PtFe catalyst. The ordered intermetallic fct-PtFeCu catalyst retained more than 70% of its mass activity and electrochemically active surface area (ECSA) over 10 000 durability cycles carried out at 60 °C. In contrast, the ordered intermetallic fct-PtFe catalyst maintained only about 40% of its activity. The temperature of the durability experiment is also shown to be important: the catalyst was more severely degraded at 60 °C than at room temperature. To obtain insight into the observed enhancement in durability of the fct-PtFeCu catalyst, a postmortem analysis of the ordered intermetallic fct-PtFeCu catalyst was carried out using scanning transmission electron microscopy-energy dispersive X-ray spectroscopy (STEM-EDX) line scans. The STEM-EDX line scans of the ordered intermetallic fct-PtFeCu catalyst over 10 000 durability cycles showed a smaller degree of Fe and Cu dissolution from the catalyst. Conversely, large dissolution of Fe was identified in the ordered intermetallic fct-PtFe catalyst, indicating a lesser retention of Fe that causes the destruction of the ordered structure and gives rise to poor durability.
The enhancement in the durability of the ordered intermetallic fct-PtFeCu catalyst is ascribed to the synergistic effects of Cu presence and the ordered structure of catalyst.
Cocci, Andrea; Capece, Marco; Cito, Gianmartin; Russo, Giorgio Ivan; Falcone, Marco; Timpano, Massimiliano; Rizzo, Michele; Della Camera, Pier Andrea; Morselli, Simone; Campi, Riccardo; Sessa, Francesco; Cacciamani, Giovanni; Minervini, Andrea; Gacci, Mauro; Mirone, Vincenzo; Morelli, Girolamo; Mondaini, Nicola; Polloni, Gaia; Serni, Sergio; Natali, Alessandro
2017-12-01
A new oro-dispersible film (ODF) formulation of sildenafil has been developed for the treatment of erectile dysfunction (ED) to overcome the drawbacks that some patients experience when taking the conventional film-coated tablet (FCT). To assess the effectiveness and safety of the sildenafil ODF formulation in patients with ED who were using the conventional FCT. From May 2017 through July 2017, 139 patients with ED were enrolled. Data from penile color-duplex ultrasound, medical history, hormonal evaluation, and patient self-administered questionnaires were collected. All patients were administered sildenafil 100-mg FCT for 4 weeks. Thereafter, they underwent a 2-week washout period and subsequently took sildenafil 75-mg ODF for 4 weeks. The International Index of Erectile Function (IIEF-15), Hospital Anxiety and Depression Scale (HADS), Patient Global Impressions of Improvement (PGI-I), and Clinician Global Impressions of Improvement (CGI-I) questionnaires were administered, and severity of ED was classified as severe (IIEF-15 score ≤ 10), moderate (IIEF-15 score 11-16), or mild (IIEF-15 score 17-25). All patients completed the final protocol. Differences in mean IIEF scores for erectile function, orgasmic function, sexual desire, and intercourse satisfaction were significantly in favor of sildenafil 100-mg FCT, whereas the mean score for overall satisfaction was in favor of sildenafil 75-mg ODF. A significant difference in changes in HADS score was found from washout to final follow-up (mean difference = -0.19; P < .01). For the ODF formulation, the median CGI-I score was 3.5 (interquartile range [IQR] = 2.5-4.5) and the median PGI-I score was 3.0 (IQR = 2.0-4.0). The median action time was 20.0 minutes (IQR = 15.0-30.0) and the median mouth time was 60.0 seconds (IQR = 30.0-120.0). The ODF formulation of a widely known drug, with the same safety and effectiveness as the FCT, was better appreciated by patients in overall satisfaction.
This is the first clinical trial to assess the efficacy of a new formulation of sildenafil in patients with ED. The limitations of the study are related to the methodology used: it was not a case-control study and the patients were not drug-naïve for ED treatment. Therefore, only the "additional" side effects of the ODF formulation compared with FCT are reported. The new ODF formulation is as efficient and safe as the FCT formulation and offers a new choice of treatment to specialists for more precisely tailored therapy. Cocci A, Capece M, Cito G, et al. Effectiveness and Safety of Oro-Dispersible Sildenafil in a New Film Formulation for the Treatment of Erectile Dysfunction: Comparison Between Sildenafil 100-mg Film-Coated Tablet and 75-mg Oro-Dispersible Film. J Sex Med 2017;14:1606-1611. Copyright © 2017 International Society for Sexual Medicine. Published by Elsevier Inc. All rights reserved.
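The IIEF-15 severity banding used in the protocol above (severe ≤ 10, moderate 11-16, mild 17-25) reduces to a small threshold function. A minimal sketch assuming a 0-25 input range; the function name and labels are illustrative, not from the paper:

```python
def classify_ed_severity(iief15_score: int) -> str:
    """Map an IIEF-15 erectile-function score to the study's severity bands:
    severe (<= 10), moderate (11-16), mild (17-25).
    Illustrative helper; not code from the trial."""
    if not 0 <= iief15_score <= 25:
        raise ValueError("score outside the expected 0-25 range")
    if iief15_score <= 10:
        return "severe"
    if iief15_score <= 16:
        return "moderate"
    return "mild"
```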
Trial-Based Functional Analysis and Functional Communication Training in an Early Childhood Setting
ERIC Educational Resources Information Center
Lambert, Joseph M.; Bloom, Sarah E.; Irvin, Jennifer
2012-01-01
Problem behavior is common in early childhood special education classrooms. Functional communication training (FCT; Carr & Durand, 1985) may reduce problem behavior but requires identification of its function. The trial-based functional analysis (FA) is a method that can be used to identify problem behavior function in schools. We conducted…
Becherelli, Marco; Manetti, Andrea G O; Buccato, Scilla; Viciani, Elisa; Ciucchi, Laura; Mollica, Giulia; Grandi, Guido; Margarit, Imma
2012-01-01
Gram-positive pili are known to play a role in bacterial adhesion to epithelial cells and in the formation of biofilm microbial communities. In the present study we undertook the functional characterization of the pilus ancillary protein 1 (AP1_M6) from Streptococcus pyogenes isolates expressing the FCT-1 pilus variant, known to be strong biofilm formers. Cell binding and biofilm formation assays using S. pyogenes in-frame deletion mutants, Lactococcus expressing heterologous FCT-1 pili and purified recombinant AP1_M6, indicated that this pilin is a strong cell adhesin that is also involved in bacterial biofilm formation. Moreover, we show that AP1_M6 establishes homophilic interactions that mediate inter-bacterial contact, possibly promoting bacterial colonization of target epithelial cells in the form of three-dimensional microcolonies. Finally, AP1_M6 knockout mutants were less virulent in mice, indicating that this protein is also implicated in GAS systemic infection. PMID:22320452
ERIC Educational Resources Information Center
Falcomata, Terry S.; Muething, Colin S.; Gainey, Summer; Hoffman, Katherine; Fragale, Christina
2013-01-01
We evaluated functional communication training (FCT) combined with a chained schedule of reinforcement procedure for the treatment of challenging behavior exhibited by two individuals diagnosed with Asperger syndrome and autism. Following functional analyses that suggested that challenging behavior served multiple functions for both participants,…
ERIC Educational Resources Information Center
Schmidt, Jonathan D.; Drasgow, Erik; Halle, James W.; Martin, Christian A.; Bliss, Sacha A.
2014-01-01
Discrete-trial functional analysis (DTFA) is an experimental method for determining the variables maintaining problem behavior in the context of natural routines. Functional communication training (FCT) is an effective method for replacing problem behavior, once identified, with a functionally equivalent response. We implemented these procedures…
A Component Analysis of Schedule Thinning during Functional Communication Training
ERIC Educational Resources Information Center
Betz, Alison M.; Fisher, Wayne W.; Roane, Henry S.; Mintz, Joslyn C.; Owen, Todd M.
2013-01-01
One limitation of functional communication training (FCT) is that individuals may request reinforcement via the functional communication response (FCR) at exceedingly high rates. Multiple schedules with alternating periods of reinforcement and extinction of the FCR combined with gradually lengthening the extinction-component interval can…
Functional Communication Training
ERIC Educational Resources Information Center
Durand, V. Mark; Moskowitz, Lauren
2015-01-01
Thirty years ago, the first experimental demonstration was published showing that educators could improve significant challenging behavior in children with disabilities by replacing these behaviors with forms of communication that served the same purpose, a procedure called functional communication training (FCT). Since the publication of that…
Fisher, Wayne W.; Greer, Brian D.; Fuhrman, Ashley M.; Querim, Angie C.
2016-01-01
Multiple schedules with signaled periods of reinforcement and extinction have been used to thin reinforcement schedules during functional communication training (FCT) to make the intervention more practical for parents and teachers. We evaluated whether these signals would also facilitate rapid transfer of treatment effects from one setting to the next and from one therapist to the next. With two children, we conducted FCT in the context of mixed (baseline) and multiple (treatment) schedules introduced across settings or therapists using a multiple baseline design. Results indicated that when the multiple schedules were introduced, the functional communication response came under rapid discriminative control, and problem behavior remained at near-zero rates. We extended these findings with another individual by using a more traditional baseline in which problem behavior produced reinforcement. Results replicated those of the previous participants and showed rapid reductions in problem behavior when multiple schedules were implemented across settings. PMID:26384141
Fisher, Wayne W; Greer, Brian D; Fuhrman, Ashley M; Querim, Angie C
2015-12-01
Multiple schedules with signaled periods of reinforcement and extinction have been used to thin reinforcement schedules during functional communication training (FCT) to make the intervention more practical for parents and teachers. We evaluated whether these signals would also facilitate rapid transfer of treatment effects across settings and therapists. With 2 children, we conducted FCT in the context of mixed (baseline) and multiple (treatment) schedules introduced across settings or therapists using a multiple baseline design. Results indicated that when the multiple schedules were introduced, the functional communication response came under rapid discriminative control, and problem behavior remained at near-zero rates. We extended these findings with another individual by using a more traditional baseline in which problem behavior produced reinforcement. Results replicated those of the previous participants and showed rapid reductions in problem behavior when multiple schedules were implemented across settings. © Society for the Experimental Analysis of Behavior.
ERIC Educational Resources Information Center
Austin, Jillian E.; Tiger, Jeffrey H.
2015-01-01
The earliest stages of functional communication training (FCT) involve providing immediate and continuous reinforcement for a communicative response (FCR) that is functionally equivalent to the targeted problem behavior. However, maintaining immediate reinforcement is not practical, and the introduction of delays is associated with increased…
Analysis of Multiple Manding Topographies during Functional Communication Training
ERIC Educational Resources Information Center
Harding, Jay W.; Wacker, David P.; Berg, Wendy K.; Winborn-Kemmerer, Lisa; Lee, John F.; Ibrahimovic, Muska
2009-01-01
We evaluated the effects of reinforcing multiple manding topographies during functional communication training (FCT) to decrease problem behavior for three preschool-age children. During Phase 1, a functional analysis identified conditions that maintained problem behavior for each child. During Phase 2, the children's parents taught them to…
An Evaluation of Generalization of Mands during Functional Communication Training
ERIC Educational Resources Information Center
Falcomata, Terry S.; Wacker, David P.; Ringdahl, Joel E.; Vinquist, Kelly; Dutt, Anuradha
2013-01-01
The primary purpose of this study was to evaluate the generalization of mands during functional communication training (FCT) and sign language training across functional contexts (i.e., positive reinforcement, negative reinforcement). A secondary purpose was to evaluate a training procedure based on stimulus control to teach manual signs. During…
Taher, Ali T; Origa, Raffaella; Perrotta, Silverio; Kourakli, Alexandra; Ruffo, Giovan Battista; Kattamis, Antonis; Goh, Ai-Sim; Cortoos, Annelore; Huang, Vicky; Weill, Marine; Merino Herranz, Raquel; Porter, John B
2017-05-01
Once-daily deferasirox dispersible tablets (DT) have a well-defined safety and efficacy profile and, compared with parenteral deferoxamine, provide greater patient adherence, satisfaction, and quality of life. However, barriers still exist to optimal adherence, including gastrointestinal tolerability and palatability, leading to development of a new film-coated tablet (FCT) formulation that can be swallowed with a light meal, without the need to disperse into a suspension prior to consumption. The randomized, open-label, phase II ECLIPSE study evaluated the safety of deferasirox DT and FCT formulations over 24 weeks in chelation-naïve or pre-treated patients aged ≥10 years, with transfusion-dependent thalassemia or IPSS-R very-low-, low-, or intermediate-risk myelodysplastic syndromes. One hundred seventy-three patients were randomized 1:1 to DT (n = 86) or FCT (n = 87). Adverse events (overall), consistent with the known deferasirox safety profile, were reported in similar proportions of patients for each formulation (DT 89.5%; FCT 89.7%), with a lower frequency of severe events observed in patients receiving FCT (19.5% vs. 25.6% DT). Laboratory parameters (serum creatinine, creatinine clearance, alanine aminotransferase, aspartate aminotransferase and urine protein/creatinine ratio) generally remained stable throughout the study. Patient-reported outcomes showed greater adherence and satisfaction, better palatability and fewer concerns with FCT than DT. Treatment compliance by pill count was higher with FCT (92.9%) than with DT (85.3%). This analysis suggests deferasirox FCT offers an improved formulation with enhanced patient satisfaction, which may improve adherence, thereby reducing frequency and severity of iron overload-related complications. © 2017 Wiley Periodicals, Inc.
ERIC Educational Resources Information Center
Rispoli, Mandy; Camargo, Síglia; Machalicek, Wendy; Lang, Russell; Sigafoos, Jeff
2014-01-01
This study evaluated the assessment and treatment of problem behaviors related to rituals for children with autism. After functional analyses, we used a multiple-probe design to examine the effects of functional communication training (FCT) plus extinction and schedule thinning as a treatment package for problem behavior and appropriate…
ERIC Educational Resources Information Center
Falcomata, Terry S.; Roane, Henry S.; Muething, Colin S.; Stephenson, Kasey M.; Ing, Anna D.
2012-01-01
In this article, the authors evaluated functional communication training (FCT) and a chained schedule of reinforcement for the treatment of challenging behavior exhibited by two individuals diagnosed with Asperger syndrome and autism, respectively. Following a functional analysis with undifferentiated results, the authors demonstrated that…
Danger, Jessica L.; Cao, Tram N.; Cao, Tran H.; Sarkar, Poulomee; Treviño, Jeanette; Pflughoeft, Kathryn J.; Sumby, Paul
2015-01-01
Bacterial pathogens commonly show intra-species variation in virulence factor expression and often this correlates with pathogenic potential. The group A Streptococcus (GAS) produces a small regulatory RNA (sRNA), FasX, which regulates the expression of pili and the thrombolytic agent streptokinase. As GAS serotypes are polymorphic regarding (a) FasX abundance, (b) the fibronectin, collagen, T-antigen (FCT) region of the genome, which contains the pilus genes (nine different FCT-types), and (c) the streptokinase-encoding gene (ska) sequence (two different alleles), we sought to test whether FasX regulates pilus and streptokinase expression in a serotype-specific manner. Parental, fasX mutant, and complemented derivatives of serotype M1 (ska-2, FCT-2), M2 (ska-1, FCT-6), M6 (ska-2, FCT-1), and M28 (ska-1, FCT-4) isolates were compared. While FasX reduced pilus expression in each serotype, the molecular basis differed, as FasX bound, and inhibited the translation of, different FCT-region mRNAs. FasX enhanced streptokinase expression in each serotype, although the degree of regulation varied. Finally, we established that the regulation afforded by FasX enhances GAS virulence, assessed by a model of bacteremia using human plasminogen-expressing mice. Our data are the first to identify and characterize serotype-specific regulation by an sRNA in GAS, and to show an sRNA directly contributes to GAS virulence. PMID:25586884
NASA Astrophysics Data System (ADS)
Ezer, Tal; Atkinson, Larry P.
2017-06-01
Recent studies show that in addition to wind and air pressure effects, a significant portion of the variability of coastal sea level (CSL) along the US East Coast can be attributed to non-local factors such as variations in the Gulf Stream and the North Atlantic circulation; these variations can cause unpredictable coastal flooding. The Florida Current transport (FCT) measurement across the Florida Straits monitors those variations, and thus, the study evaluated the potential of using the FCT as an indicator for anomalously high water level along the coast. Hourly water level data from 12 tide gauge stations over 12 years are used to construct records of maximum daily water levels (MDWL) that are compared with the daily FCT data. An empirical mode decomposition (EMD) approach is used to divide the data into high-frequency modes (periods T < ~30 days), middle-frequency modes (~30 days < T < ~90 days), and low-frequency modes (~90 days < T < ~1 year). Two predictive measures are tested: FCT and FCT change (FCC). FCT is anti-correlated with MDWL in high-frequency modes but positively correlated with MDWL in low-frequency modes. FCC on the other hand is always anti-correlated with MDWL for all frequency bands, and the high water signal lags behind FCC for almost all stations, thus providing a potential predictive skill (i.e., whenever a weakening trend is detected in the FCT, anomalously high water is expected along the coast over the next few days). The MDWL-FCT correlation in the high-frequency modes is maximum in the lower Mid-Atlantic Bight, suggesting influence from the meandering Gulf Stream after it separates from the coast. However, the correlation in low-frequency modes is maximum in the South Atlantic Bight, suggesting impact from variations in the wind pattern over subtropical regions.
The middle-frequency and low-frequency modes of the FCT seem to provide the best predictor for medium to large flooding events; it is estimated that ~10-25% of the sea level variability in those modes can be attributed to variations in the FCT. An example from Hurricane Joaquin (September-October, 2015) demonstrates how an offshore storm that never made landfall can cause a weakening of the FCT and unexpected high water level and flooding along the US East Coast. A regression-prediction model based on the MDWL-FCT correlation shows some skill in estimating high water levels during past storms; the water level prediction is more accurate for slow-moving and offshore storms than it is for fast-moving storms. The study can help to improve water level prediction since current storm surge models rely on local wind but may ignore remote forcing.
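The FCC-based predictor described above amounts to correlating the daily change in Florida Current transport with maximum daily water level at a range of lags and looking for the most negative (anti-correlated) lag. A minimal NumPy sketch of that lagged-correlation step on synthetic series; the lag window and variable names are assumptions, not the paper's code:

```python
import numpy as np

def lagged_correlation(fct, mdwl, max_lag=10):
    """Correlate daily FCT change (FCC) with maximum daily water level
    at lags 0..max_lag, with the water level lagging FCC, as in the
    abstract's finding that high water follows a weakening FCT.
    Returns {lag: Pearson correlation}.  Illustrative sketch only."""
    fcc = np.diff(np.asarray(fct, dtype=float))   # daily transport change
    level = np.asarray(mdwl, dtype=float)[1:]     # align with the diff
    out = {}
    for lag in range(max_lag + 1):
        if lag == 0:
            a, b = fcc, level
        else:
            a, b = fcc[:-lag], level[lag:]        # water level lags FCC
        out[lag] = float(np.corrcoef(a, b)[0, 1])
    return out
```

On real data one would first band-split the series (e.g. by EMD, as in the study) and apply this per frequency band.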
ERIC Educational Resources Information Center
Ringdahl, Joel E.; Falcomata, Terry S.; Christensen, Tory J.; Bass-Ringdahl, Sandie M.; Lentz, Alison; Dutt, Anuradha; Schuh-Claus, Jessica
2009-01-01
Recent research has suggested that variables related to specific mand topographies targeted during functional communication training (FCT) can affect treatment outcomes. These include effort, novelty of mands, previous relationships with problem behavior, and preference. However, there is little extant research on procedures for identifying which…
Indirect Effects of Functional Communication Training on Non-Targeted Disruptive Behavior
ERIC Educational Resources Information Center
Schieltz, Kelly M.; Wacker, David P.; Harding, Jay W.; Berg, Wendy K.; Lee, John F.; Padilla Dalmau, Yaniz C.; Mews, Jayme; Ibrahimovic, Muska
2011-01-01
The purpose of this study was to evaluate the effects of functional communication training (FCT) on the occurrence of non-targeted disruptive behavior. The 10 participants were preschool-aged children with developmental disabilities who engaged in both destructive (property destruction, aggression, self-injury) and disruptive (hand flapping,…
Functional Communication Training in Children with Autism Spectrum Disorder
ERIC Educational Resources Information Center
Battaglia, Dana
2017-01-01
This article explicitly addresses the correlation between communication and behavior, and describes how to provide intervention addressing these two overlapping domains using an intervention called functional communication training (FCT; E. G. Carr & Durand, 1985) in individuals with ASD. A step-by-step process is outlined with supporting…
ERIC Educational Resources Information Center
Davis, Dawn H.; Fredrick, Laura D.; Alberto, Paul A.; Gama, Roberto
2012-01-01
This study investigated the effects of functional communication training (FCT) implemented with concurrent schedules of differing magnitudes of reinforcement in lieu of extinction to reduce inappropriate behaviors and increase alternative mands. Participants were four adolescent students diagnosed with severe emotional and behavior disorders…
Campos, Claudia; Leon, Yanerys; Sleiman, Andressa; Urcuyo, Beatriz
2017-03-01
One potential limitation of functional communication training (FCT) is that after the functional communication response (FCR) is taught, the response may be emitted at high rates or inappropriate times. Thus, schedule thinning is often necessary. Previous research has demonstrated that multiple schedules can facilitate schedule thinning by establishing discriminative control of the communication response while maintaining low rates of problem behavior. To date, most applied research evaluating the clinical utility of multiple schedules has done so in the context of behavior maintained by positive reinforcement (e.g., attention or tangible items). This study examined the use of a multiple schedule with alternating Fixed Ratio (FR 1)/extinction (EXT) components for two individuals with developmental disabilities who emitted escape-maintained problem behavior. Although problem behavior remained low during all FCT and multiple schedule phases, the use of the multiple schedule alone did not result in discriminated manding.
ERIC Educational Resources Information Center
Fisher, Wayne W.; Greer, Brian D.; Fuhrman, Ashley M.; Querim, Angie C.
2015-01-01
Multiple schedules with signaled periods of reinforcement and extinction have been used to thin reinforcement schedules during functional communication training (FCT) to make the intervention more practical for parents and teachers. We evaluated whether these signals would also facilitate rapid transfer of treatment effects across settings and…
Camera calibration for multidirectional flame chemiluminescence tomography
NASA Astrophysics Data System (ADS)
Wang, Jia; Zhang, Weiguang; Zhang, Yuhong; Yu, Xun
2017-04-01
Flame chemiluminescence tomography (FCT), which combines computerized tomography theory and multidirectional chemiluminescence emission measurements, can realize instantaneous three-dimensional (3-D) diagnostics for flames with high spatial and temporal resolutions. One critical step of FCT is to record the projections by multiple cameras from different view angles. For high-accuracy reconstructions, the extrinsic parameters (positions and orientations) and intrinsic parameters (especially the image distances) of the cameras must be accurately calibrated first. Taking the focus effect of the camera into account, a modified camera calibration method was presented for FCT, and a 3-D calibration pattern was designed to solve for the parameters. The precision of the method was evaluated by reprojecting feature points to the cameras with the calibration results. The maximum root-mean-square error was 1.42 pixels for the feature points' positions and 0.0064 mm for the image distance. An FCT system with 12 cameras was calibrated by the proposed method and the 3-D CH* intensity of a propane flame was measured. The results showed that the FCT system provides reasonable reconstruction accuracy using the cameras' calibration results.
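The pixel-level accuracy figure quoted above (a maximum RMS error of 1.42 pixels) is a standard reprojection-error metric: project the known 3-D feature points through the calibrated camera model and compare the result with their observed image positions. A generic sketch of that metric; the paper's actual calibration model is not reproduced here:

```python
import numpy as np

def reprojection_rmse(observed_px, reprojected_px):
    """Root-mean-square reprojection error in pixels between observed
    feature points and points reprojected with a calibrated camera model.
    Both inputs are (N, 2) arrays of pixel coordinates.
    Generic sketch, not the paper's implementation."""
    diff = np.asarray(observed_px, float) - np.asarray(reprojected_px, float)
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))
```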
ERIC Educational Resources Information Center
Kim, Jinnie
2012-01-01
Far less is known about the effects of functional communication-based toileting interventions for students with developmental disabilities in a school setting. Furthermore, the currently available toileting interventions for students with disabilities include some undesirable procedures such as the use of punishment, unnatural clinic/university…
ERIC Educational Resources Information Center
Mancil, G. Richmond; Conroy, Maureen A.; Haydon, Todd F.
2009-01-01
The purpose of the current study was to evaluate the effectiveness of combining milieu therapy and functional communication training (FCT) to replace aberrant behavior with functional communicative skills in 3 male preschool or elementary aged children with Autism Spectrum Disorders (ASD). Study activities were conducted in the natural…
ERIC Educational Resources Information Center
Wacker, David P.; Schieltz, Kelly M.; Berg, Wendy K.; Harding, Jay W.; Padilla Dalmau, Yaniz C.; Lee, John F.
2017-01-01
This article describes the results of a series of studies that involved functional communication training (FCT) conducted in children's homes by their parents. The 103 children who participated were six years old or younger, had developmental delays, and engaged in destructive behaviors such as self-injury. The core procedures used in each study…
ERIC Educational Resources Information Center
Olive, Melissa L.; Lang, Russell B.; Davis, Tonya N.
2008-01-01
The purpose of this study was to examine the effects of Functional Communication Training (FCT) and a Voice Output Communication Aid (VOCA) on the challenging behavior and language development of a 4-year-old girl with autism spectrum disorder. The participant's mother implemented modified functional analysis (FA) and intervention procedures in…
Structure of tetragonal martensite in the In95.42Cd4.58 cast alloy
NASA Astrophysics Data System (ADS)
Khlebnikova, Yu. V.; Egorova, L. Yu.; Rodionov, D. P.; Kazantsev, V. A.
2017-11-01
The structure of martensite in the In95.42Cd4.58 alloy has been studied by metallography, X-ray diffraction, dilatometry, and transmission electron microscopy. It has been shown that a massive structure built of colonies of tetragonal lamellar plates divided by a twin boundary {101}FCT is formed in the alloy under cooling below the martensitic FCC → FCT transition temperature. After a cycle of FCT → FCC → FCT transitions, the alloy recrystallizes with a several-fold decrease in grain size compared with the initial structure, in such a fashion that the size of the massifs and of the individual martensite lamellae within a massif correlates with the change in the grain size of the alloy. Using thermal cycling, it has been revealed that the alloy tends to stabilize the high-temperature phase.
ERIC Educational Resources Information Center
Kurtz, Patricia F.; Boelter, Eric W.; Jarmolowicz, David P.; Chin, Michelle D.; Hagopian, Louis P.
2011-01-01
This paper examines the literature on the use of functional communication training (FCT) as a treatment for problem behavior displayed by individuals with intellectual disabilities (ID). Criteria for empirically supported treatments developed by Divisions 12 and 16 of the American Psychological Association (Kratochwill & Stoiber, 2002; Task Force,…
Cheng, Wendy Y; Said, Qayyim; Hao, Yanni; Xiao, Yongling; Vekeman, Francis; Bobbili, Priyanka; Duh, Mei Sheng; Nandal, Savita; Blinder, Morey
2018-06-04
To compare real-world adherence to and persistence with deferasirox film-coated tablets (DFX-FCT) and deferasirox dispersible tablets (DFX-DT) among patients who switched from DFX-DT to DFX-FCT, overall and by disease type (sickle cell disease [SCD], thalassemia, and myelodysplastic syndrome [MDS]). Patients were ≥2 years old and had ≥2 DFX-FCT claims over the study period and ≥2 DFX-DT claims before the index date (first DFX-FCT claim). The DFX-DT period was defined from the first DFX-DT claim to the index date; the DFX-FCT period was defined from the index date to the end of the study period. Adherence was measured as medication possession ratio (MPR) and proportion of days covered (PDC). Persistence was defined as continuous medication use without a gap ≥30 or 60 days between refills. Comparisons were conducted using paired-sample Wilcoxon signed-rank and McNemar's tests. In total, 606 patients were selected (SCD: 348; thalassemia: 107; MDS: 106; other: 45). Adherence and persistence in the DFX-FCT vs DFX-DT period were significantly higher across all measures: mean MPR was 0.80 vs 0.76 (p < .001); 60.9% vs 54.3% of patients had MPR ≥ 0.8 (p = .009); mean 3-month PDC was 0.83 vs 0.71 (p < .001); 64.2% vs 45.4% of patients had 3-month PDC ≥ 0.8 (p < .001); 87.2% vs 63.4% of patients had 3-month persistence with no gap ≥30 days and 96.1% vs 79.9% with no gap ≥60 days (p < .001). Adherence and persistence improved after switching across all diseases, particularly MDS. Adherence and persistence improved significantly after switching from DFX-DT to DFX-FCT for all diseases, but especially MDS.
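The two adherence measures above have standard claims-based definitions: MPR divides the total days' supply dispensed by the period length (so early refills can push it above 1), while PDC counts distinct covered calendar days, so overlapping refills are not double-counted. A simplified sketch of both; the study's exact operational rules (e.g. handling of the switch date) are not given in the abstract:

```python
from datetime import date, timedelta

def mpr(fills, period_start, period_end):
    """Medication possession ratio: total days' supply dispensed within
    the period divided by the period length in days.
    fills = [(fill_date, days_supply), ...].  Simplified definition."""
    period_days = (period_end - period_start).days + 1
    supply = sum(d for f, d in fills if period_start <= f <= period_end)
    return supply / period_days

def pdc(fills, period_start, period_end):
    """Proportion of days covered: distinct calendar days with medication
    on hand divided by the period length, so refill overlaps count once."""
    covered = set()
    for f, d in fills:
        for i in range(d):
            day = f + timedelta(days=i)
            if period_start <= day <= period_end:
                covered.add(day)
    period_days = (period_end - period_start).days + 1
    return len(covered) / period_days
```

With an early refill that overlaps the previous supply, MPR exceeds PDC, which is the usual motivation for reporting both.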
Kang, Eunae; Jung, Hyunok; Park, Je-Geun; Kwon, Seungchul; Shim, Jongmin; Sai, Hiroaki; Wiesner, Ulrich; Kim, Jin Kon; Lee, Jinwoo
2011-02-22
A "one-pot" synthetic method was developed to produce L1(0)-phase FePt nanoparticles in ordered mesostructured aluminosilicate/carbon composites using polyisoprene-block-poly(ethylene oxide) (PI-b-PEO) as a structure-directing agent. PI-b-PEO block copolymers with aluminosilicate sols are self-assembled with a hydrophobic iron precursor (dimethylaminomethyl-ferrocene) and a hydrophobic platinum precursor (dimethyl(1,5-cyclooctadiene)platinum(II)) to obtain mesostructured composites. The as-synthesized material was heat-treated to 800 °C under an Ar/H(2) mixture (5% v/v), resulting in the formation of fct FePt nanocrystals encapsulated in ordered mesopores. By changing the quantities of the Fe and Pt precursors in the composite materials, the average particle size of the resulting fct FePt, estimated using the Debye-Scherrer equation with X-ray diffraction patterns, can be easily controlled to be 2.6-10.4 nm. Using this simple synthetic method, we can extend the size of directly synthesized fct FePt up to ∼10 nm, which cannot be achieved directly in the colloidal synthetic method. All fct FePt nanoparticles show hysteresis behavior at room temperature, which indicates that ferromagnetic particles are obtained inside mesostructured channels. Well-isolated, ∼10 nm fct FePt nanoparticles have a coercivity of 1100 Oe at 300 K. This coercivity value is higher than values of fct FePt nanoparticles synthesized through the tedious hard-template method employing SBA-15 as a host material. The coercivity value for FePt-1 (2.6 nm) at 5 K is as high as 11 900 Oe, which is one of the largest values reported for FePt nanoparticles, or any other magnetic nanoparticles. The fct FePt nanoparticles also showed exchange-bias behavior.
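The particle-size estimate above comes from X-ray line broadening via the Debye-Scherrer equation, D = Kλ/(β cos θ), where β is the peak's full width at half maximum in radians and θ is half the diffraction angle 2θ. A minimal sketch with a conventional shape factor K = 0.9; the paper's actual peak parameters are not given in the abstract:

```python
import math

def scherrer_size_nm(wavelength_nm, fwhm_deg, two_theta_deg, k=0.9):
    """Crystallite size from XRD peak broadening via the Debye-Scherrer
    equation D = K * lambda / (beta * cos(theta)).  beta is the FWHM
    converted to radians; theta is half the diffraction angle.
    K = 0.9 is a common shape-factor assumption."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta * math.cos(theta))
```

For example, a 1° FWHM peak at 2θ = 41° with a 0.154 nm source gives a size of roughly 8-9 nm, comparable to the upper end of the range reported above.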
Hospital-Based Mortality in Federal Capital Territory Hospitals-Nigeria, 2005 - 2008
Preacely, Nykiconia; Biya, Oladayo; Gidado, Saheed; Ayanleke, Halima; Kida, Mohammed; Akhimien, Moses; Abubakar, Aisha; Kurmi, Ibrahim; Ajayi, Ikeoluwapo; Nguku, Patrick; Akpan, Henry
2012-01-01
Background Cause-specific mortality data are important to monitor trends in mortality over time. Medical records provide reliable documentation of the causes of deaths occurring in hospitals. This study describes all causes of mortality reported at hospitals in the Federal Capital Territory (FCT) of Nigeria. Methods Deaths reported in 15 secondary and tertiary FCT hospitals occurring between January 1, 2005 and December 31, 2008 were identified by a retrospective review of hospital records conducted by the Nigeria Field Epidemiology and Laboratory Training Program (NFELTP). Data extracted from the records included sociodemographics, geographic area of residence and underlying cause-of-death information. Results A total of 4,623 deaths occurred in the hospitals. Overall, the top five causes of death reported were: HIV 951 (21%), road traffic accidents 422 (9%), malaria 264 (6%), septicemia 206 (5%), and hypertension 194 (4%). The median age at death was 30 years (range: 0-100); 888 (20%) of deaths were among those less than one year of age. Among children < 1 year, low birth weight and infections were responsible for the highest proportion, 131 (15%), of reported mortality. Conclusion Many of the leading causes of mortality identified in this study are preventable. Infant mortality is a large public health problem in FCT hospitals. Although these findings are not representative of all FCT deaths, they may be used to quantify mortality that occurs in FCT hospitals. These data combined with other mortality surveillance data can provide evidence to inform policy on public health strategies and interventions for the FCT. PMID:22655100
Transitioning Patients With Iron Overload From Exjade to Jadenu.
Tinsley, Sara M; Hoehner-Cooper, Christine M
Iron overload is a concern for patients who require chronic transfusions as a result of inherited or acquired anemias, including sickle cell disease, thalassemia, and myelodysplastic syndromes. Iron chelation therapy (ICT) is the primary treatment for iron overload in these patients. The ICT deferasirox, which has been available as an oral dispersible tablet for liquid suspension, is now also available as a once-daily, film-coated tablet (FCT). Deferasirox FCT allows greater convenience and may be associated with fewer gastrointestinal side effects versus the original formulation. Dose adjustment increments, determined by titration monitoring, are lower for the FCT because of greater bioavailability.
García-Hernández, Cesar; Arece-García, Javier; Rojo-Rubio, Rolando; Mendoza-Martínez, German David; Albarrán-Portillo, Benito; Vázquez-Armijo, José Fernando; Avendaño-Reyes, Leonel; Olmedo-Juárez, Agustín; Marie-Magdeleine, Carine; López-Leyva, Yoel
2017-01-01
Forty-five Pelibuey sheep were experimentally infested with nematodes to evaluate the effect of three free condensed tannin (FCT) levels of Lysiloma acapulcensis on fecal egg counts (FEC), packed cell volume (PCV), ocular mucosa color (OMC), average daily gain (ADG), and adult nematode counts. Five treatments were used: 12.5, 25.0, and 37.5 mg of FCT kg⁻¹ of body weight (BW); sterile water (control); and ivermectin (0.22 mg kg⁻¹ of BW) as the chemical group. The data were processed through repeated-measures analysis. Even though all three FCT doses decreased (P < 0.05) the FEC, the highest reduction was obtained with 37.5 mg kg⁻¹ of BW. No differences were observed in PCV and OMC. Higher ADG (P < 0.05) was observed with 37.5 mg kg⁻¹ of BW of FCT. The count of adult nematodes (females and males) at the highest FCT dose was similar to that of the chemical treatment. The dose of 37.5 mg kg⁻¹ of BW decreased the parasite infection and improved lamb performance. Therefore, this dose could be used as a nutraceutical product in sheep production.
Boguhn, Jeannette; Neumann, Dominik; Helm, André; Strobel, Egbert; Tebbe, Christoph C; Dänicke, Sven; Rodehutscorda, Markus
2010-12-01
The objective of this study was to investigate the effects of the concentrate proportion and of Fusarium toxin-contaminated triticale (FCT) in the diet on nutrient degradation, microbial protein synthesis, and the structure of the microbial community, utilising a rumen simulation technique and single-strand conformation polymorphism (SSCP) profiles based on PCR-amplified small-subunit ribosomal RNA genes. Four diets containing 60% or 30% concentrate on a dry matter basis, with or without FCT, were incubated. The fermentation of nutrients and microbial protein synthesis were measured. On the last day of incubation, microbial mass was obtained from the vessel liquid, DNA was extracted, and PCR primers targeting archaea, Fibrobacter, clostridia, bifidobacteria, bacilli, fungi, and bacteria were applied to study each taxonomic group separately with SSCP. The concentrate proportion affected the fermentation and the microbial community, but not the efficiency of microbial protein synthesis. Neither the fermentation of organic matter nor the synthesis and composition of microbial protein was affected by FCT. The fermentation of detergent fibre fractions was lower in diets containing FCT than in diets with uncontaminated triticale. Except for the clostridia group, none of the microbial groups was affected by the presence of FCT. In conclusion, our results give no indication that supplementation with FCT, up to a dietary deoxynivalenol concentration of 5 mg per kg dry matter, affects the fermentation of organic matter or microbial protein synthesis. These findings are independent of the concentrate level in the diets. A change in the composition of the clostridial community may be the reason for the reduction in cellulolytic activity.
NASA Astrophysics Data System (ADS)
Watanabe, Norihiro; Kolditz, Olaf
2015-07-01
This work reports numerical stability conditions in two-dimensional solute transport simulations including discrete fractures surrounded by an impermeable rock matrix. We use an advective-dispersive problem described in Tang et al. (1981) and examine the stability of the Crank-Nicolson Galerkin finite element method (CN-GFEM). The stability conditions are analyzed in terms of the spatial discretization length perpendicular to the fracture, the flow velocity, the diffusion coefficient, the matrix porosity, the fracture aperture, and the fracture longitudinal dispersivity. In addition, we verify the applicability of the recently developed finite element method-flux corrected transport (FEM-FCT) method of Kuzmin to suppress oscillations in the hybrid system, with a comparison to the commonly utilized Streamline Upwind/Petrov-Galerkin (SUPG) method. The major findings of this study are (1) the mesh von Neumann number (Fo) ≥ 0.373 must be satisfied to avoid undershooting in the matrix, (2) in addition to an upper bound, the Courant number also has a lower bound in the fracture in cases of low dispersivity, and (3) the FEM-FCT method can effectively suppress the oscillations in both the fracture and the matrix. The results imply that, in cases of low dispersivity, prerefinement of a numerical mesh is not sufficient to avoid the instability in the hybrid system if a problem involves evolutionary flow fields and dynamic material parameters. Applying the FEM-FCT method to such problems is recommended if negative concentrations cannot be tolerated and computing time is not a strong issue.
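The two stability indicators discussed above can be sketched as simple dimensionless checks: the mesh von Neumann (Fourier) number in the matrix and the Courant number along the fracture. The bound Fo ≥ 0.373 is the value reported here; all input values below are illustrative, not from the paper.

```python
# Dimensionless stability indicators for a fracture-matrix transport mesh.
# Input values are invented for illustration.

def mesh_fourier(diffusion, dt, dy):
    """Fo = D * dt / dy**2 for a matrix cell of size dy normal to the fracture."""
    return diffusion * dt / dy ** 2

def courant(velocity, dt, dx):
    """Cr = v * dt / dx for a fracture element of length dx."""
    return velocity * dt / dx

# Example: matrix pore diffusion 1e-10 m^2/s, 1-day time step, 4 mm cell.
Fo = mesh_fourier(1e-10, 86400.0, 4e-3)   # 0.54
matrix_ok = Fo >= 0.373                   # criterion (1): avoids matrix undershooting
Cr = courant(1e-6, 86400.0, 0.1)          # 0.864 in a 10 cm fracture element
```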
On the numerical calculation of hydrodynamic shock waves in atmospheres by an FCT method
NASA Astrophysics Data System (ADS)
Schmitz, F.; Fleck, B.
1993-11-01
The numerical calculation of vertically propagating hydrodynamic shock waves in a plane atmosphere by the ETBFCT version of the Flux-Corrected Transport (FCT) method of Boris and Book is discussed. The results are compared with those obtained by a characteristic method with shock fitting. We show that using the internal energy density as a dependent variable instead of the total energy density can give very inaccurate results. Consistent discretization rules for the gravitational source terms are derived. The improvement of the results by an additional iteration step is discussed. The FCT method appears to be an excellent method for the accurate calculation of shock waves in an atmosphere.
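The caution above concerns the choice of dependent variable: a conservative scheme should advance the total energy density E = ρe + ρv²/2, not the internal energy density ρe alone. A minimal sketch of the relation, with invented values:

```python
# Total energy density from density rho, specific internal energy e, and
# velocity v.  The example values (SI units) are invented for illustration.

def total_energy_density(rho, e_internal, v):
    """E = rho*e + 0.5*rho*v**2 (the conserved variable)."""
    return rho * e_internal + 0.5 * rho * v * v

E = total_energy_density(rho=1.2, e_internal=2.0e5, v=10.0)   # 240060.0 J/m^3
```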
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weakley, Steven A.; Brown, Scott A.
The purpose of the project described in this report is to identify and document the commercial and emerging (projected to be commercialized within the next 3 years) hydrogen and fuel cell technologies and products that resulted from Department of Energy support through the Fuel Cell Technologies (FCT) Program in the Office of Energy Efficiency and Renewable Energy (EERE). To accomplish this, Pacific Northwest National Laboratory (PNNL) undertook two simultaneous efforts. The first was a patent search and analysis to identify hydrogen- and fuel-cell-related patents associated with FCT-funded projects (or projects conducted by DOE-EERE predecessor programs) and to ascertain each patent's current status, as well as any commercial products that may have used the technology documented in the patent. The second was a series of interviews with current and past FCT personnel, a review of relevant program annual reports, and an examination of hydrogen- and fuel-cell-related grants made under the Small Business Innovation Research and Small Business Technology Transfer Programs and within the FCT portfolio.
Electron Microscopic Study of the Structure of Tetragonal Martensite in In-4.5% Cd Alloy
NASA Astrophysics Data System (ADS)
Khlebnikova, Yu. V.; Egorova, L. Yu.; Rodionov, D. P.
2018-04-01
In this work, the formation of a packet structure composed of colonies of lamellar plates separated by {101}fct twin boundaries in an In-4.5 wt % Cd alloy upon cooling below the fcc → fct martensitic transition temperature has been demonstrated using metallography, X-ray diffraction, transmission electron microscopy, and EBSD analysis. Two neighboring lamellae differ from each other in the direction of their tetragonality axes. Using EBSD analysis, it has been established that neighboring packets always contain three types of tetragonal martensite lamellae, which are in twin positions and differ from each other in the direction of their tetragonality axes. In turn, each martensite lamella consists of a set of smaller lamellae, which are also in twin positions. After a cycle of fct → fcc → fct transitions, the alloy recrystallizes with a severalfold decrease in grain size compared with the initial structure, such that the size of the packets and the length and width of the martensitic lamellae in a packet correlate with the change in the grain size of the alloy.
NASA Astrophysics Data System (ADS)
Yang, Chao; Wu, Wei; Wu, Shu-Cheng; Liu, Hong-Bin; Peng, Qing
2014-02-01
Aroma types of flue-cured tobacco (FCT) are classified into light, medium, and heavy in China. However, the spatial distribution of FCT aroma types and the relationships among aroma types, chemical parameters, and climatic variables remained unknown at the national scale. In the current study, multi-year averaged chemical parameters (total sugars, reducing sugars, nicotine, total nitrogen, chloride, and K2O) of FCT samples of grade C3F and climatic variables (mean, minimum, and maximum temperatures, rainfall, relative humidity, and sunshine hours) during the growth periods were collected from the main planting areas across China. Significant relationships were found between chemical parameters and climatic variables (p < 0.05). A spatial distribution map of FCT aroma types was produced using support vector machine algorithms and chemical parameters. Significant differences in chemical parameters and climatic variables were observed among the three aroma types based on one-way analysis of variance (p < 0.05). Areas with the light aroma type had significantly lower mean, maximum, and minimum temperatures than regions with the medium and heavy aroma types (p < 0.05). Areas with the heavy aroma type had significantly lower rainfall and relative humidity and more sunshine hours than regions with the light and medium aroma types (p < 0.05). The output of classification and regression trees showed that sunshine hours, rainfall, and maximum temperature were the most important factors affecting FCT aroma types at the national scale.
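The study maps aroma types with a support vector machine; as a dependency-free stand-in for that classifier, the sketch below assigns an FCT sample to the nearest class centroid in chemical-parameter space. The centroids and the query point (total sugars %, nicotine %, chloride %) are invented for illustration, not the study's values.

```python
# Nearest-centroid classification as a minimal stand-in for the SVM used in
# the study.  All numbers are invented for illustration.
import math

centroids = {
    "light":  (28.0, 2.0, 0.3),
    "medium": (24.0, 2.6, 0.5),
    "heavy":  (20.0, 3.2, 0.8),
}

def classify(sample):
    """Return the aroma type whose centroid is closest in Euclidean distance."""
    return min(centroids, key=lambda name: math.dist(sample, centroids[name]))

label = classify((21.0, 3.0, 0.7))   # "heavy"
```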
ERIC Educational Resources Information Center
Ogunshola, Roseline Folashade; Adeniyi, Abiodun
2017-01-01
The study investigated principals' personal variables and information and communication technology utilization in Federal Capital Territory (FCT) senior secondary schools, Abuja, Nigeria. The study adopted the correlational research design. The study used a sample of 94 senior secondary schools (including public and private) in FCT. Stratified…
Using the Euclid RTP11.13 Repository in the SEC Environment
2006-03-01
of a wrong user/passwd combination. We found out that the user and password are hard-coded in the FCT software. It uses defaultEditor@rtp1113.INETI... The FCT will start, but when connecting to the Repository it fails because of a wrong user/passwd combination: it uses defaultEditor@rtp1113.INETI
Importance of adhesins in the recurrence of pharyngeal infections caused by Streptococcus pyogenes.
Wozniak, Aniela; Scioscia, Natalia; Geoffroy, Enrique; Ponce, Iván; García, Patricia
2017-04-01
Pharyngo-amygdalitis is the most common infection caused by Streptococcus pyogenes (S. pyogenes). Reinfection with strains of different M types commonly occurs. However, a second infection with a strain of the same M type can still occur and is referred to as recurrence. We aimed to assess whether recurrence of S. pyogenes could be associated with erythromycin resistance, biofilm formation, or surface adhesins such as fibronectin-binding proteins and pilus proteins, both located in the fibronectin-binding, collagen-binding, T-antigen (FCT) region. We analysed clinical isolates of S. pyogenes obtained from children with multiple positive throat-swab cultures. We analysed potential associations of M types, clonal patterns, biofilm production, and FCT types with the capacity to produce a recurrent infection. We genetically defined recurrence as an infection with the same M type (same strain) and reinfection as an infection with a different M type. No differences were observed between recurrent and reinfection isolates in erythromycin resistance, the presence and number of domains of the prtF1 gene, or biofilm formation capacity; the only significant difference was the higher frequency of the FCT-4 type among recurrent isolates. However, when all the factors that could contribute to recurrence (erythromycin resistance, biofilm production, presence of the prtF1 gene, and FCT-4 type) were analysed together, we observed that recurrent isolates carry a higher number of these factors than reinfection isolates. Recurrence does not appear to be associated with biofilm formation. However, pili and fibronectin-binding proteins could be associated with recurrence, because FCT-4 isolates, which harbour two fibronectin-binding proteins, are more frequent among recurrent isolates.
Seltenhammer, Monika H; Marchart, Katharina; Paula, Pia; Kordina, Nicole; Klupp, Nikolaus; Schneider, Barbara; Fitzl, Christine; Risser, Daniele U
2013-01-01
Aims The main intention of this retrospective study was to investigate whether chronic illicit drug abuse, especially the intravenous use of opioids (heroin), could potentially trigger the development of myocardial fibrosis in drug addicts. Design A retrospective case–control study was performed using myocardial tissue samples from both drug-related deaths (DRD) with verifiable opioid abuse and non-drug-related deaths in the same age group. Setting Department of Forensic Medicine, Medical University of Vienna, Austria (1993–94). Participants Myocardial specimens were retrieved from 76 deceased intravenous opioid users and compared to those of 23 deceased non-drug users. Measurements Drug quantification was carried out using the enzyme-multiplied immunoassay technique (EMIT), followed by gas chromatography–mass spectrometry (GC–MS; MAT 112®), and analysed using the Integrator 3390A by Hewlett Packard® and a LABCOM.1 computer (MSS-G.G.). The amount of fibrous connective tissue (FCT) in the myocardium was determined using the morphometric software LUCIA Net version 1.16.2©, Laboratory Imaging, with NIS Elements 3.0®. Findings Drug analysis revealed that 67.11% were polydrug users and the same proportion was classified as heroin addicts (6-monoacetylmorphine, 6-MAM); 32.89% were users of pure heroin. In 76.32% of DRD cases, codeine was detected. Only 2.63% consumed cocaine. The mean morphine concentrations were 389.03 ng/g in the cerebellum and 275.52 ng/g in the medulla oblongata, respectively. Morphometric analysis exhibited a strong correlation between DRD and myocardial fibrosis. The mean proportion of FCT content in the drug group was 7.6 ± 2.9% (females: 6.30 ± 2.19%; males: 7.91 ± 3.01%) in contrast to 5.2 ± 1.7% (females: 4.45 ± 1.23%; males: 5.50 ± 1.78%) in the control group, indicating a significant difference (P = 0.0012), and a significant difference in the amount of FCT between females and males (P = 0.0383).
There was no significant interaction of age and FCT (P = 0.8472). Conclusions There is a long-term risk of cardiac dysfunction following chronic illicit drug abuse with opioids as a principal component. Regular cardiological examination of patients receiving substitution treatment with morphine is strongly recommended. PMID:23297783
Rice, Karen; Price, Jason R.
2014-01-01
To quantify chemical weathering and biological uptake, mass-balance calculations were performed on two small forested watersheds located in the Blue Ridge Physiographic Province in north-central Maryland, USA. Both watersheds, Bear Branch (BB) and Fishing Creek Tributary (FCT), are underlain by relatively unreactive quartzite bedrock. Such unreactive bedrock and associated low chemical-weathering rates offer the opportunity to quantify biological processes operating within the watershed. Hydrologic and stream-water chemistry data were collected from the two watersheds for the 9-year period from June 1, 1990 to May 31, 1999. Of the two watersheds, FCT exhibited both higher chemical-weathering rates and biomass nutrient uptake rates, suggesting that forest biomass aggradation was limited by the rate of chemical weathering of the bedrock. Although the chemical-weathering rate in the FCT watershed was low relative to the global average, it masked the influence of biomass base-cation uptake on stream-water chemistry. Any differences in bedrock mineralogy between the two watersheds did not exert a significant influence on the overall weathering stoichiometry. The difference in chemical-weathering rates between the two watersheds is best explained by a larger proportion of reactive phyllitic layers within the bedrock of the FCT watershed. Although the stream gradient of BB is about two-times greater than that of FCT, its influence on chemical weathering appears to be negligible. The findings of this study support the biomass nutrient uptake stoichiometry of K1.0Mg1.1Ca0.97 previously determined for the study site. Investigations of the chemical weathering of relatively unreactive quartzite bedrock may provide insight into critical zone processes.
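The mass-balance bookkeeping described above can be sketched for a single base cation: the chemical-weathering release is inferred by difference as stream export plus net biomass uptake minus atmospheric input. All fluxes below (mol ha⁻¹ yr⁻¹) are invented for illustration, not the watershed data.

```python
# Watershed mass balance for one base cation, solved for the weathering flux.
# Flux values are invented for illustration.

def weathering_flux(stream_export, biomass_uptake, atmos_input):
    """Weathering release inferred by difference from the measured fluxes."""
    return stream_export + biomass_uptake - atmos_input

w_fct = weathering_flux(stream_export=220.0, biomass_uptake=150.0, atmos_input=80.0)
w_bb = weathering_flux(stream_export=160.0, biomass_uptake=90.0, atmos_input=80.0)
higher_in_fct = w_fct > w_bb   # consistent with FCT's higher reported rates
```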
Effects of development on indigenous dietary pattern: A Nigerian case study.
Ezeomah, Bookie; Farag, Karim
2016-12-01
The traditional foods of indigenous people in Nigeria are known for their cultural symbolism and agricultural biodiversity, which contribute to a healthy and rich daily diet. In the early 1990s, rapid development of the Federal Capital Territory (FCT) was noted, and the resettlement of indigenes to other parts of the region was reported. These changes have facilitated the modification of indigenous diets, as indigenous groups rapidly embraced modern foods and also adopted the food culture of migrant ethnic groups. This has led to a gradual erosion of indigenous diets and traditional food systems in the FCT. This study explored the impact of development on traditional food systems and determined indigenes' perception of the modification of their food culture as a result of the development of their land within the FCT. A field survey was carried out in four indigenous communities in the FCT (30 indigenes from each of the four areas) using structured questionnaires, Focus Group Discussions (FGDs), and key informant interviews. Pearson chi-square analysis of indigenes' socio-economic characteristics revealed significant relationships between gender and farm size, age and farm size, and educational level and farm/herd size. Qualitative analysis of the FGDs revealed indigenes' opinions on the socio-cultural changes in behaviour and food systems as a result of development. The study also identified indigenous youths as being most influenced by development, especially through education, white-collar jobs, and social interactions with migrant ethnic groups in the FCT. The study recommended that indigenes be provided with more secure land tenure and that "back-to-farm" initiatives be put in place by the Nigerian government to encourage indigenous youths to engage more in agriculture.
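The Pearson chi-square test used above can be sketched on a hypothetical gender-by-farm-size contingency table; the counts below are invented, not the study's data.

```python
# Pearson chi-square statistic for an r x c table of observed counts.
# The 2 x 2 example counts are invented for illustration.

def chi_square(table):
    """Sum over cells of (observed - expected)^2 / expected."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    grand = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_tot[i] * col_tot[j] / grand
            stat += (obs - expected) ** 2 / expected
    return stat

# Rows: gender; columns: small vs large farm size.
x2 = chi_square([[30, 10], [20, 20]])   # ~5.33; compare to a chi-square critical value
```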
ERIC Educational Resources Information Center
Mancil, G. Richmond; Lorah, Elizabeth R.; Whitby, Peggy Schaefer
2016-01-01
The purpose of the study was to evaluate the use of the iPod Touch™ as a Speech Generated Device (SGD) for Functional Communication Training (FCT). The evaluation of the effects on problem behavior, the effects on generalization and maintenance of the acquired communication repertoire, and the social initiations of peers between the new SGD (iPod…
ERIC Educational Resources Information Center
Buckley, Scott D.; Newchok, Debra K.
2005-01-01
We investigated the effects of response effort on the use of mands during functional communication training (FCT) in a participant with autism. The number of links in a picture exchange response chain determined two levels of response effort. Each level was paired with a fixed ratio (FR3) schedule of reinforcement for aggression in a reversal…
Fuel Cycle Technologies 2014 Achievement Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Bonnie C.
2015-01-01
The Fuel Cycle Technologies (FCT) program supports the Department of Energy's (DOE's) mission to "enhance U.S. security and economic growth through transformative science, technology innovation, and market solutions to meet our energy, nuclear security, and environmental challenges." Goal 1 of DOE's Strategic Plan is to innovate energy technologies that enhance U.S. economic growth and job creation, energy security, and environmental quality. FCT does this by investing in advanced technologies that could transform the nuclear fuel cycle in the decades to come. Goal 2 of DOE's Strategic Plan is to strengthen national security by strengthening key science, technology, and engineering capabilities. FCT does this by working closely with the National Nuclear Security Administration and the U.S. Department of State to develop advanced technologies that support the Nation's nuclear nonproliferation goals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Huiyuan; Jiang, Guangming; Zhang, Xu
We report the synthesis of core/shell face-centered tetragonal (fct)-FePd/Pd nanoparticles (NPs) via reductive annealing of core/shell Pd/Fe3O4 NPs followed by temperature-controlled Fe etching in acetic acid. Among the three kinds of core/shell FePd/Pd NPs studied (FePd core at ~8 nm and Pd shell at 0.27, 0.65, or 0.81 nm), the fct-FePd/Pd-0.65 NPs are the most efficient catalyst for the oxygen reduction reaction (ORR) in 0.1 M HClO4, with Pt-like activity and durability. This enhanced ORR catalysis arises from the desired Pd lattice compression in the 0.65 nm Pd shell induced by the fct-FePd core. Lastly, our study offers a general approach to enhancing Pd catalysis in acid for the ORR.
2015-10-04
Longer rewarming time in finger cooling test in association with HbA1c level in diabetics.
Zeng, Shan; Chen, Qi; Wang, Xiang-Wen; Hong, Kui; Li, Ju-Xiang; Li, Ping; Cheng, Xiao-Shu; Su, Hai
2016-09-01
To assess whether rewarming time in the finger cooling test (FCT), an indicator of microvascular dysfunction, is abnormal in patients with type 2 diabetes mellitus (T2DM). Forty-three T2DM patients and 48 healthy controls with similarly distributed baseline demographic, clinical, and laboratory parameters were subjected to the FCT, involving 60-second index finger immersion in water at 4°C. Finger temperature was measured before the FCT (baseline-T), immediately after the cooling stimulus (T0), and at one-minute intervals until baseline-T recovery. Temperature decline amplitude was calculated as the difference between T0 and baseline-T, and rewarming time as the time elapsed from T0 to baseline-T recovery. T2DM patients, compared with healthy controls, had statistically similar baseline-T, significantly larger temperature decline amplitude, significantly lower T0, and significantly longer rewarming time. In T2DM patients, rewarming time positively correlated with T2DM duration (r=0.513, p<0.001) and glycated hemoglobin (HbA1c) level (r=0.446, p=0.003), which were also its independent predictors in multivariate regression analysis. Patients with T2DM display abnormal FCT results suggestive of microvascular dysfunction, with T2DM duration and HbA1c level independently predicting rewarming time.
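The two FCT readouts defined above can be computed directly from a per-minute temperature series: the decline amplitude (baseline-T minus T0) and the rewarming time (minutes from T0 until the finger recovers baseline-T). The sample series below is invented for demonstration.

```python
# FCT readouts from a finger-temperature series.  The series is invented.

def fct_metrics(baseline_t, temps):
    """temps[0] is T0 (just after cooling); one reading per minute thereafter."""
    amplitude = baseline_t - temps[0]
    for minute, t in enumerate(temps):
        if t >= baseline_t:
            return amplitude, minute     # rewarming time in minutes
    return amplitude, None               # baseline not recovered in observation

amp, rewarm = fct_metrics(32.0, [18.0, 22.5, 26.0, 29.0, 31.0, 32.1])
# amp = 14.0 (degrees C of decline), rewarm = 5 (minutes)
```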
Yao, Lawrence; Yip, Adrienne L.; Shrader, Joseph A.; Mesdaghinia, Sepehr; Volochayev, Rita; Jansen, Anna V.; Miller, Frederick W.
2016-01-01
Objective. This study examines the utility of MRI, including T2 maps and T2 maps corrected for muscle fat content, in evaluating patients with idiopathic inflammatory myopathy. Methods. A total of 44 patients with idiopathic inflammatory myopathy, 18 of whom were evaluated after treatment with rituximab, underwent MRI of the thighs and detailed clinical assessment. T2, fat fraction (FF) and fat corrected T2 (fc-T2) maps were generated from standardized MRI scans, and compared with semi-quantitative scoring of short tau inversion recovery (STIR) and T1-weighted sequences, as well as various myositis disease metrics, including the Physician Global Activity, the modified Childhood Myositis Assessment Scale and the muscle domain of the Myositis Disease Activity Assessment Tool-muscle (MDAAT-muscle). Results. Mean T2 and mean fc-T2 correlated similarly with STIR scores (Spearman rs = 0.64 and 0.64, P < 0.01), while mean FF correlated with T1 damage scores (rs = 0.69, P < 0.001). Baseline T2, fc-T2 and STIR scores correlated significantly with the Physician Global Activity, modified Childhood Myositis Assessment Scale and MDAAT-muscle (rs range = 0.41–0.74, P < 0.01). The response of MRI measures to rituximab was variable, and did not significantly agree with a standardized clinical definition of improvement. Standardized response means for the MRI measures were similar. Conclusion. Muscle T2, fc-T2 and FF measurements exhibit content validity with reference to semi-quantitative scoring of STIR and T1 MRI, and also exhibit construct validity with reference to several myositis activity and damage measures. T2 was as responsive as fc-T2 and STIR scoring, although progression of muscle damage was negligible during the study. PMID:26412808
Multimetallic nanoparticle catalysts with enhanced electrooxidation
Sun, Shouheng; Zhang, Sen; Zhu, Huiyuan; Guo, Shaojun
2015-07-28
A new structure-control strategy to optimize nanoparticle catalysis is provided. The presence of Au in FePtAu facilitates FePt structure transformation from chemically disordered face centered cubic (fcc) structure to chemically ordered face centered tetragonal (fct) structure, and further promotes formic acid oxidation reaction (FAOR). The fct-FePtAu nanoparticles show high CO poisoning resistance, achieve mass activity as high as about 2810 mA/mg Pt, and retain greater than 90% activity after a 13 hour stability test.
2012-11-01
microwave plasma-enhanced CVD (MPE-CVD) with a presputtered metal catalyst, and floating catalyst thermal CVD (FCT-CVD) with a xylene and ferrocene liquid...processes with nickel and iron catalysts, respectively. For the FCT-CVD approach, ferrocene is used as an iron source to promote CNT growth. Based on...furnace is ramped up to the growth temperature of 750 °C. Ferrocene was dissolved into a xylene solvent in a 0.008:1 molar volume ratio. The xylene
2014-01-01
Background This paper describes the isolation and characterization of pregnancy-associated glycoproteins (PAG) from fetal cotyledonary tissue (FCT) and maternal caruncular tissue (MCT) collected from pregnant fallow deer (Dama dama) females. Proteins from FCT and MCT were subjected to affinity chromatography using Vicia villosa agarose (VVA) or anti-bovine PAG-2 (R#438) coupled to Sepharose 4B gel. Finally, they were characterized by SDS-PAGE and N-terminal microsequencing. Results Four distinct fallow deer PAG (fdPAG) sequences were identified and submitted to the Swiss-Prot database. Comparison of fdPAG with PAG sequences identified in other ruminant species revealed 64 to 83% identity. Additionally, alpha-fetoprotein was identified in fetal and maternal tissues. Conclusion Our results demonstrate the efficacy of VVA and bovine PAG-2 affinity chromatographies for the isolation of PAG molecules expressed in deer placenta. This is the first report giving four specific amino acid sequences of PAG isolated from the feto-maternal junction (FCT and MCT) in the Cervidae family. PMID:24410890
General fuel cell hybrid synergies and hybrid system testing status
NASA Astrophysics Data System (ADS)
Winkler, Wolfgang; Nehter, Pedro; Williams, Mark C.; Tucker, David; Gemmen, Randy
Fuel cell-turbine (FCT) hybrid power systems offer the highest efficiency and the cleanest emissions of all fossil-fuelled power systems. Engineering for the highest possible efficiency at the lowest cost and weight depends on general system-architecture issues and on the performance of the components. Presented in this paper are system studies which provide direction for the most efficient path toward achieving the most beneficial result for this technology. Ultimately, FCT hybrid systems applicable to integrated gasification combined cycle power systems will form the basis for reaching the goals of advanced coal-based power generation. The FCT hybrid power island will also be important for the FutureGen plant and will provide new options for carbon dioxide capture and sequestration as well as power and hydrogen generation. The system studies presented in this paper provide insight into current technology 'benchmarks' versus the expected benefits of hybrid applications. Discussion is also presented on the effects of different balance-of-plant arrangements and approaches. Finally, we discuss the status of US DOE-sponsored projects that aim to help understand the unique requirements of these systems. One of these projects, Hyper, will provide information on FCT dynamics and will help identify technical needs and opportunities for cycle advancement. The methods studied show promise for effective control of a hybrid system without the direct intervention of isolation valves or check valves in the main pressure loop of the system, which introduce substantial pressure losses, allowing for realization of the full potential efficiency of the hybrid system.
NASA Technical Reports Server (NTRS)
Mills, Ryan D.; Simon, Justin I.; Depaolo, Donald J.; Bachmann, Olivier
2013-01-01
Over time, high-K/Ca continental crust produces a unique Ca isotopic reservoir, with measurable 40Ca excesses compared to Earth's mantle (εCa = 0). Thus, values of εCai > 1 indicate a significant crustal contribution to a magma. Values of εCai (<1) indistinguishable from mantle Ca indicate that the Ca in those magmas is either directly from the mantle or from partial melting of newly formed crust. So, whereas 40Ca excesses clearly define crustal contributions, mantle-like 40Ca/44Ca ratios are not as definitive. Here we present Ca isotopic measurements of intermediate to felsic igneous rocks from the western United States, and of two crustal xenoliths found within the Fish Canyon Tuff (FCT). The two crustal xenoliths found within the 28.2 Ma FCT of the southern Rocky Mountain volcanic field (SRMVF) yield εCa values of 4 and 7.5, respectively. The 40Ca excesses of these possible source rocks are due to long-term in situ 40K decay and suggest that they are Precambrian in age. However, the FCT (εCai ≈ 0.3) is within uncertainty of the mantle 40Ca/44Ca. Together, these data indicate that little Precambrian crust was involved in the petrogenesis of the FCT. Nd isotopic analyses of the FCT imply that it was generated from 10-75% of an enriched component, and the Ca isotopic data appear to restrict that component to newly formed lower crust or enriched mantle. However, the Ca isotopic data do permit assimilation of some crust with low Ca/Nd, decreasing the 143Nd/144Nd without adding much excess 40Ca to the FCT. Several other large tuffs from the SRMVF and from Yellowstone have εCai indistinguishable from the mantle. However, a few large tuffs from the SRMVF show significant 40Ca excesses. These tuffs (Wall Mountain, Blue Mesa, and Grizzly Peak) are likely sourced from near, or within, the Colorado Mineral Belt. New isotopic measurements of Mesozoic and Tertiary granites from across the northern Great Basin show a range of εCai from 0 to 3.
In these samples εCai is generally correlated with εSri and is broadly negatively correlated with εNdi. However, for granites with similar εNdi at a given general location, εCai can vary significantly (1 to 2 epsilon units). In rocks where low εNdi could also be due to melting of enriched reservoirs in the mantle lithosphere, the combination of high εCai with low εNdi clearly identifies crustal melts.
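The epsilon notation used in this abstract can be sketched under the common convention that epsilon values are deviations from a standard ratio in parts per 10,000. The standard 40Ca/44Ca value below is a placeholder, not from the paper.

```python
# Epsilon notation for radiogenic Ca: deviation from a standard ratio in
# parts per 10,000.  The standard value is a placeholder for illustration.

def epsilon_ca(ratio_sample, ratio_standard):
    """epsilon-Ca = (R_sample / R_standard - 1) * 1e4."""
    return (ratio_sample / ratio_standard - 1.0) * 1.0e4

# A sample whose 40Ca/44Ca is enriched ~0.04% over the standard:
eps = epsilon_ca(47.1720, 47.1531)   # roughly 4 epsilon units
```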
Functional Genomics of a Non-Toxic Alexandrium Lusitanicum Culture
2007-02-01
OCE-0402707. This work was also supported by a graduate fellowship from the Fundação para a Ciência e Tecnologia (FCT), Portugal. Reproduction in whole or...that lead to this point. This work was supported by a graduate fellowship from the Fundação para a Ciência e Tecnologia, Portugal....from the Fundação para a Ciência e Tecnologia (FCT), Portugal. Funding provided in part
A Generalized Approach to Equational Unification.
1985-08-01
Interpreter Working with Infinite Terms," Technical Report FCT/UNL-20/82, Faculdade de Ciências e Tecnologia, November 1982. Quinta da Torre, 2825...x, y, z. For readability, we will use the symbols +, ·, and * as binary infix operators. Examples of terms are f(x, a), f(x · 0), and y + 1. Given a...z 2. x · y = y · x. The integer operations of plus and times are only two of the many examples of associative and commutative functions about which we
Operation of the Joy Flexible Conveyor Train at the Marissa Mine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hill, J.C.
1994-12-31
We have had both successes and difficulties with the Joy FCT at the Marissa mine. The successes are obvious, as are the difficulties, and both will require considerable effort to maintain and cure, respectively. We are committed to seeing the FCT succeed at Marissa, as we believe that continuous haulage is the new paradigm in underground section haulage, and we intend to be on the leading edge of that change. Also, as a wise man once said, "Joy can fix anything, if you have enough money."
Proton NMR studies of the electronic structure of ZrH_x
NASA Technical Reports Server (NTRS)
Attalla, A.; Bowman, R. C., Jr.; Craft, B. D.; Venturini, E. L.; Rhim, W. K.
1982-01-01
The proton spin-lattice relaxation times and Knight shifts were measured in f.c.c. (delta-phase) and f.c.t. (epsilon-phase) ZrH_x for 1.5 ≤ x ≤ 2.0. Both parameters indicate that N(E_F) is strongly dependent upon hydrogen content, with a maximum occurring at ZrH1.83. This behavior is ascribed to modifications of N(E_F) through an fcc/fct distortion in ZrH_x associated with a Jahn-Teller effect.
2013-01-01
FCT-CVD) with a xylene and ferrocene liquid mixture without any prior catalyst deposition. T-CVD is a low-cost system that can easily be set up to grow...iron catalysts, respectively. For the FCT-CVD approach, ferrocene is used as an iron source to promote CNT growth. Based on these repeatable results...kept at 250 °C while the high-temperature furnace is ramped up to the growth temperature of 750 °C. Ferrocene was dissolved into xylene solvent in
NASA Astrophysics Data System (ADS)
Lohmann, Christoph; Kuzmin, Dmitri; Shadid, John N.; Mabuza, Sibusiso
2017-09-01
This work extends the flux-corrected transport (FCT) methodology to arbitrary order continuous finite element discretizations of scalar conservation laws on simplex meshes. Using Bernstein polynomials as local basis functions, we constrain the total variation of the numerical solution by imposing local discrete maximum principles on the Bézier net. The design of accuracy-preserving FCT schemes for high order Bernstein-Bézier finite elements requires the development of new algorithms and/or generalization of limiting techniques tailored for linear and multilinear Lagrange elements. In this paper, we propose (i) a new discrete upwinding strategy leading to local extremum bounded low order approximations with compact stencils, (ii) high order variational stabilization based on the difference between two gradient approximations, and (iii) new localized limiting techniques for antidiffusive element contributions. The optional use of a smoothness indicator, based on a second derivative test, makes it possible to potentially avoid unnecessary limiting at smooth extrema and achieve optimal convergence rates for problems with smooth solutions. The accuracy of the proposed schemes is assessed in numerical studies for the linear transport equation in 1D and 2D.
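The FCT recipe described in this abstract — a bounded low-order flux, a high-order flux, and a limiter applied to their difference — can be illustrated in one dimension. This is a generic sketch (upwind as the low-order method, Lax-Wendroff as the high-order one, and a Zalesak-style limiter), not the authors' Bernstein-Bézier finite element scheme:

```python
def fct_advect_1d(u, c, steps):
    """1D flux-corrected transport for u_t + a*u_x = 0 on a periodic grid,
    with Courant number c = a*dt/dx (0 < c <= 1, a > 0).
    Low order: upwind.  High order: Lax-Wendroff.  Limiter: Zalesak-style."""
    n = len(u)
    u = list(u)
    for _ in range(steps):
        # fluxes through face i+1/2, already scaled by dt/dx
        fl = [c * u[i] for i in range(n)]                          # upwind
        fh = [0.5 * c * (u[i] + u[(i + 1) % n])
              - 0.5 * c * c * (u[(i + 1) % n] - u[i])
              for i in range(n)]                                   # Lax-Wendroff
        ad = [fh[i] - fl[i] for i in range(n)]                     # antidiffusive part
        # transported-diffused (low-order) solution: bounded but smeared
        utd = [u[i] - (fl[i] - fl[(i - 1) % n]) for i in range(n)]
        # Zalesak limiter: cap antidiffusion so no cell leaves its local bounds
        rp, rm = [0.0] * n, [0.0] * n
        for i in range(n):
            nb = (u[(i - 1) % n], u[i], u[(i + 1) % n],
                  utd[(i - 1) % n], utd[i], utd[(i + 1) % n])
            pp = max(ad[(i - 1) % n], 0.0) - min(ad[i], 0.0)       # max possible gain
            pm = max(ad[i], 0.0) - min(ad[(i - 1) % n], 0.0)       # max possible loss
            rp[i] = min(1.0, (max(nb) - utd[i]) / pp) if pp > 0.0 else 0.0
            rm[i] = min(1.0, (utd[i] - min(nb)) / pm) if pm > 0.0 else 0.0
        cc = [min(rp[(i + 1) % n], rm[i]) if ad[i] >= 0.0
              else min(rp[i], rm[(i + 1) % n]) for i in range(n)]
        u = [utd[i] - (cc[i] * ad[i] - cc[(i - 1) % n] * ad[(i - 1) % n])
             for i in range(n)]
    return u
```

Advecting a step profile with this scheme keeps the solution within its initial bounds while conserving the discrete total — the two properties the limiter is designed to enforce.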
Biya, Oladayo; Gidado, Saheed; Abraham, Ajibola; Waziri, Ndadilnasiya; Nguku, Patrick; Nsubuga, Peter; Suleman, Idris; Oyemakinde, Akin; Nasidi, Abdulsalami; Sabitu, Kabir
2014-01-01
Early treatment of tuberculosis (TB) cases is important for reducing the transmission, morbidity and mortality associated with TB. In 2007, the Federal Capital Territory (FCT), Nigeria recorded a low TB case detection rate (CDR) of 9%, which implied that many TB cases were going undetected. We assessed the knowledge, care-seeking behavior, and factors associated with patient delay among pulmonary TB patients in the FCT. We enrolled 160 newly diagnosed pulmonary TB patients in six directly observed treatment, short-course (DOTS) hospitals in the FCT in a cross-sectional study. We used a structured questionnaire to collect data on socio-demographic variables, knowledge of TB, and care-seeking behavior. Patient delay was defined as > 4 weeks between onset of cough and first hospital contact. Mean age was 32.8 years (± 9 years). Sixty-two percent were males. Forty-seven percent first sought care in a government hospital, 26% with a patent medicine vendor and 22% in a private hospital. Forty-one percent had unsatisfactory knowledge of TB. Forty-two percent had patient delay. Having unsatisfactory knowledge of TB (p = 0.046) and multiple care-seeking (p = 0.02) were significantly associated with patient delay. After controlling for travel time and age, multiple care-seeking was independently associated with patient delay (adjusted odds ratio = 2.18, 95% CI = 1.09-4.35). Failure to immediately seek care in DOTS centers and unsatisfactory knowledge of TB are factors contributing to patient delay. Strategies that promote early care-seeking in DOTS centers and sustained awareness of TB should be implemented in the FCT.
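The adjusted odds ratio above comes from a multivariable model, but the underlying quantity is easy to illustrate. A minimal sketch with hypothetical counts (not the study's data), using the Woolf log-normal method for the confidence interval:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio for a 2x2 table
                 delayed  not delayed
      exposed       a         b
      unexposed     c         d
    with a Woolf (log-normal) 95% confidence interval."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

For example, a table of (10, 20, 5, 40) gives a crude odds ratio of 4.0 with a confidence interval spanning roughly 1.2 to 13.3.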
Li, Qizhai; Hu, Jiyuan; Ding, Juan; Zheng, Gang
2014-04-01
A classical approach to combining independent test statistics is Fisher's combination of $p$-values, which follows the $\chi^2$ distribution. When the test statistics are dependent, the gamma distribution (GD) is commonly used for the Fisher's combination test (FCT). We propose to use two generalizations of the GD: the generalized and the exponentiated GDs. We study the properties of mis-using the GD for the FCT to combine dependent statistics when one of the two proposed distributions is true. Our results show that both generalizations give better control of type I error rates than the GD, which tends to have inflated type I error rates at the more extreme tails. In practice, common model selection criteria (e.g. the Akaike information criterion/Bayesian information criterion) can be used to help select the better distribution to use for the FCT. A simple strategy for applying the two generalizations of the GD in genome-wide association studies is discussed. Applications of the results to genetic pleiotropic associations are described, where multiple traits are tested for association with a single marker.
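Fisher's combination statistic itself is simple to compute. A minimal sketch of the independent-statistics case only (the paper's gamma-distribution generalizations for dependent statistics are not reproduced here); it uses the closed-form chi-square survival function available for even degrees of freedom 2k:

```python
import math

def fisher_combination(pvals):
    """Fisher's combination test: T = -2 * sum(ln p_i). Under independence,
    T ~ chi-square with 2k degrees of freedom, whose survival function has
    the closed form exp(-T/2) * sum_{j<k} (T/2)^j / j!."""
    k = len(pvals)
    t = -2.0 * sum(math.log(p) for p in pvals)
    x = t / 2.0
    term, total = 1.0, 1.0
    for j in range(1, k):           # accumulate the Poisson-like tail sum
        term *= x / j
        total += term
    return t, math.exp(-x) * total  # (statistic, combined p-value)
```

A single p-value of 0.5 combines to 0.5, as it should; two p-values of 0.05 combine to about 0.0175.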
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weakley, Steven A.
The purpose of the project described in this report is to identify and document the commercial and emerging (projected to be commercialized within the next 3 years) hydrogen and fuel cell technologies and products that resulted from Department of Energy support through the Fuel Cell Technologies (FCT) Program in the Office of Energy Efficiency and Renewable Energy (EERE). Pacific Northwest National Laboratory (PNNL) undertook two efforts simultaneously to accomplish this project. The first effort was a patent search and analysis to identify patents related to hydrogen and fuel cells that are associated with FCT-funded projects (or projects conducted by DOE-EERE predecessor programs) and to ascertain the patents' current status, as well as any commercial products that may have used the technology documented in the patent. The second effort was a series of interviews with current and past FCT personnel, a review of relevant program annual reports, and an examination of grants made under the Small Business Innovation Research and Small Business Technology Transfer Programs that are related to hydrogen and fuel cells.
Direct Measurement of Recoil Effects on Ar-Ar Standards
NASA Astrophysics Data System (ADS)
Hall, C. M.
2011-12-01
Advances in the precision possible with the Ar-Ar method using new techniques and equipment have led to considerable effort to improve the accuracy of the calibration of interlaboratory standards. However, ultimately the accuracy of the method relies on the measurement of 40Ar*/39ArK ratios on primary standards that have been calibrated with the K-Ar method and, in turn, on secondary standards that are calibrated against primary standards. It is usually assumed that an Ar-Ar total gas age is equivalent to a K-Ar age, but this assumes that there is zero loss of Ar due to recoil. Instead, traditional Ar-Ar total gas ages are in fact Ar retention ages [1] and not, strictly speaking, comparable to K-Ar ages. There have been efforts to estimate the importance of this effect on standards along with prescriptions for minimizing recoil effects [2,3], but these studies have relied on indirect evidence for 39Ar recoil. We report direct measurements of 39Ar recoil for a set of primary and secondary standards using the vacuum encapsulation techniques of [1] and show that significant adjustments to ages assigned to some standards may be needed. The fraction f of 39Ar lost due to recoil for primary standards MMhb-1 hornblende and GA-1550 biotite are 0.00367 and 0.00314 respectively. It is possible to modify the assumed K-Ar ages of these standards so that when using their measured Ar retention 40Ar*/39ArK ratios, one obtains a correct K-Ar age for an unknown, assuming that the unknown sample has zero loss of 39Ar due to recoil. Assuming a primary K-Ar age for MMhb-1 of 520.4 Ma, the modified age would be 522.1 Ma and assuming a primary K-Ar age for GA-1550 of 98.79 Ma [4] yields a modified effective age of 99.09 Ma. Measured f values for secondary standards FCT-3 biotite, FCT-2 sanidine and TCR-2 sanidine are 0.00932, 0.00182 and 0.00039 respectively. 
Using an R value for FCT-3 biotite relative to MMhb-1 [5], the K-Ar age for this standard would be 27.83 Ma, and using R values for FCT and TC sanidines [4] against GA-1550, their K-Ar ages would be 28.06 Ma and 28.41 Ma respectively. For retrospective recalculation purposes, the effective Ar-Ar ages of these samples that should yield correct K-Ar ages for unknowns with zero recoil loss would be 28.09 Ma, 28.11 Ma and 28.42 Ma for FCT-3 biotite, FCT-2 sanidine and TCR-2 sanidine respectively. The measured f for FCT-3 appears to explain its R value of 1.0086 relative to FCT sanidine found by [8]. From the low-T portion of the Ar release spectra of the biotite and amphibole standards, it is clear that the dominant recoil artifact affecting Ar release is the re-implantation mechanism seen in clay samples [1,6,7] and not the loss of 39Ar at the surface of the grain. The geometry of neighboring grains during irradiation and internal defects may predominate in controlling recoil loss. [1] Dong et al., 1995, Science, 267, 355-359. [2] Paine et al., 2006, Geochim. Cosmochim. Acta, 70, 1507-1517. [3] Jourdan et al., 2007, Geochim. Cosmochim. Acta, 71, 2791-2808. [4] Renne et al., 1998, Chem. Geol., 145, 117-152. [5] Hall & Farrell, 1995, Earth Planet. Sci. Lett., 133, 327-338. [6] Hall et al., 1997, Earth Planet. Sci. Lett., 148, 287-298. [7] Hall et al., 2000, Econ. Geol., 95, 1739-1752. [8] Di Vincenzo & Skála, 2009, Geochim. Cosmochim. Acta, 73, 493-513.
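A simple first-order model of the recoil adjustment described above: if a fraction f of the 39Ar is lost to recoil, the measured 40Ar*/39ArK ratio is inflated by 1/(1 - f), so the effective (retention) age assigned to the standard shifts upward accordingly. The sketch below assumes the Min et al. (2000) total 40K decay constant — a choice of ours, not stated in the abstract — which reproduces the 522.1 Ma and 99.09 Ma values quoted for MMhb-1 and GA-1550:

```python
import math

LAMBDA_40K = 5.463e-10  # assumed total 40K decay constant (1/yr), Min et al. (2000)

def retention_age_ma(t_kar_ma, f):
    """Effective Ar retention age (Ma) of a standard with true K-Ar age
    t_kar_ma (Ma) after losing fraction f of its 39Ar to recoil: the
    measured 40Ar*/39ArK ratio is inflated by a factor 1/(1 - f)."""
    ratio = (math.exp(LAMBDA_40K * t_kar_ma * 1e6) - 1.0) / (1.0 - f)
    return math.log(1.0 + ratio) / LAMBDA_40K / 1e6
```

With f = 0.00367 a 520.4 Ma K-Ar age shifts to about 522.1 Ma, and with f = 0.00314 a 98.79 Ma age shifts to about 99.09 Ma, matching the values in the abstract.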
Wilmink, Teun; Powers, Sarah; Hollingworth, Lee; Stevenson, Tamasin
2018-05-01
To study the effect of cannulation time on arteriovenous fistula (AVF) survival, we analysed two prospective databases of access operations and dialysis sessions from 12 January 2002 through 4 January 2015, with follow-up until 4 January 2016. First cannulation time (FCT), defined as the time from operation to first cannulation, was categorized as <2 weeks, 2-4 weeks, 4-8 weeks, 8-16 weeks and ≥16 weeks. Early cannulation was defined as FCT within 4 weeks. AVF survival was defined as the time until the AVF was abandoned. Maximum machine blood flow rate (BFR) for the first 29 dialysis sessions on the AVF was analysed. Altogether, 1167 AVF with functional dialysis use were analysed: 667 (57%) radiocephalic AVF, 383 (33%) brachiocephalic AVF and 117 (10%) brachiobasilic AVF. The 631 (54%) AVF created in on-dialysis patients were analysed separately from the 536 (46%) AVF created in pre-dialysis patients. AVF survival was similar between cannulation categories for both pre-dialysis patients (P = 0.19) and on-dialysis patients (P = 0.83). Early cannulation was associated with similar AVF survival in both pre-dialysis patients (P = 0.82) and on-dialysis patients (P = 0.17). Six consecutive successful cannulations from the start were associated with improved AVF survival (P = 0.0002). A below-median BFR at the start of dialysis was associated with better AVF survival (P < 0.0001). A below-median increase in BFR in the first 2 months was associated with worse AVF survival (P = 0.007). The type of AVF, diabetes, pre-dialysis state at operation and six successful cannulations from the start were independent predictors of AVF survival. FCT is not associated with AVF survival. Failure to achieve six successful cannulations from the start of dialysis and higher machine BFR in the first week of dialysis are associated with decreased AVF survival.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yang; Institute of Chemistry and Chemical Engineering, Jiangsu University, Zhenjiang 212013; Jiang, Yuhong
2014-07-01
(FePt)_{100-x}Au_x (x = 0, 5, 10, and 20) nanoparticles were synthesized by the sol-gel method, and the effects of Au content on the structural and magnetic properties of the samples were investigated. Au doping reduced the phase transition temperature from the face-centered cubic (FCC) to the face-centered tetragonal (FCT) structure. In addition, additive Au promotes the chemical ordering of L1_0 FePt NPs and increases their grain size. When the Au content increased from 0 to 10 at%, the coercivity (H_c) increased due to the increase in the degree of ordering S and the grain size of the L1_0 FePt NPs. On increasing the Au content to 20 at%, H_c decreased. - Graphical abstract: (FePt)_{100}Au_0 NPs show coexisting FCT and FCC phases. However, no hints of the FCC phase were found for the (FePt)_{100-x}Au_x NPs (x = 5, 10 and 20), which indicates that the addition of gold greatly promotes the FCC-to-FCT phase transition. - Highlights: • (FePt)_{100-x}Au_x (x = 0, 5, 10 and 20) nanoparticles (NPs) were synthesized. • Au addition promotes the chemical ordering of L1_0 FePt NPs. • Au addition reduces the FCC-to-FCT ordering temperature of L1_0 FePt NPs. • (FePt)_{90}Au_{10} NPs show a high coercivity of 9585 Oe at room temperature.
Titanite petrochronology in the Fish Canyon Tuff
NASA Astrophysics Data System (ADS)
Schmitz, M. D.; Crowley, J. L.
2014-12-01
The petrologic complexity of the archetypal 'monotonous intermediate' Fish Canyon Tuff (FCT) has been previously established by a variety of mineralogical and geochemical proxies [1-2], and the unusual storage and eruptive dynamics of the FCT have been delineated by several geochronological studies [3-5]. Titanite is an apparent equilibrium phase in the penultimate FCT magma and can be linked petrographically to hornblende crystals that preserve up-temperature core-to-rim zoning profiles. Because titanite is a reactive, trace-element-rich phase, we hypothesized that it may preserve an intracrystalline record of magma chamber dynamics. Titanite crystals from the same separate analyzed in [4] were oriented and doubly polished to yield characteristic wedge-shaped cross-sectional wafers approximately 300 µm in thickness. BSE imaging guided LA-ICPMS analyses of a full suite of trace elements, using a 25 µm beam diameter and crater depth, at multiple locations across both sides of each wafer. Most titanite crystals are characterized by large variations in trace elements, including at least two generations of REE-enriched, actinide-poor, low-Sr, large-Eu-anomaly cores overgrown by REE-depleted, actinide-rich, high-Sr domains with small Eu anomalies and distinctive concave-up middle to heavy REE patterns. Trace element contents and patterns correlate strongly with the Eu anomaly; intermediate compositions are abundant and spatially correlated with reaction zones between core and rim domains. Within the context of the batholithic rejuvenation model for the FCT magma [1-2], these trace element variations are interpreted to record the partial melting of a differentiated crystalline FCT precursor and its hybridization with a more 'mafic' flux. ID-TIMS dating of end-member titanites confirms older ages (ca. 28.4 to 29.0 Ma) for cores and defines a younger age for rejuvenation of ca. 28.2 Ma, consistent with recent U-Pb zircon and 40Ar/39Ar studies [5-7].
[1] Bachmann & Dungan (2002) Am Mineral 87, 1062-1076. [2] Bachmann et al (2002) J Petrology 43, 1469-1503. [3] Bachmann et al (2007) Chem Geol 236, 134-166. [4] Schmitz & Bowring (2001) GCA 65, 2571-2587. [5] Wotzlaw et al (2013) Geology 41, 867-870. [6] Rivera et al. (2011) EPSL 311, 420-426. [7] Kuiper et al (2008) Science 320, 500-504.
Integrated testing system FiTest for diagnosis of PCBA
NASA Astrophysics Data System (ADS)
Bogdan, Arkadiusz; Lesniak, Adam
2016-12-01
This article presents FiTest, an innovative integrated testing system for automatic, rapid inspection of printed circuit board assemblies (PCBA) manufactured with Surface Mount Technology (SMT). Integrating Automatic Optical Inspection (AOI), In-Circuit Tests (ICT) and Functional Circuit Tests (FCT) results in a universal hardware platform for testing a variety of electronic circuits. The platform provides increased test coverage, a decreased level of false calls and optimization of test duration. It is equipped with powerful algorithms that perform tests in a stable and repeatable way and provide effective management of diagnosis.
The association between preceding drought occurrence and heat waves in the Mediterranean
NASA Astrophysics Data System (ADS)
Russo, Ana; Gouveia, Célia M.; Ramos, Alexandre M.; Páscoa, Patricia; Trigo, Ricardo M.
2017-04-01
A large number of weather-driven extreme events have occurred worldwide in the last decade, notably in Europe, which has been struck by record-breaking extremes with unprecedented socio-economic impacts, including the mega-heatwaves of 2003 in Europe and 2010 in Russia, and the large droughts in southwestern Europe in 2005 and 2012. The last IPCC report on extreme events notes that a changing climate can lead to changes in the frequency, intensity, spatial extent, duration, and timing of weather and climate extremes. These, combined with larger exposure, can result in unprecedented risk to humans and ecosystems. In this context it is becoming increasingly relevant to improve the early identification and predictability of such events, as they negatively affect several socio-economic activities. Moreover, recent diagnostic and modelling experiments have confirmed that hot extremes are often preceded by surface moisture deficits in some regions throughout the world. In this study we analyze whether the occurrence of hot extreme months is enhanced by preceding drought events throughout the Mediterranean area. To this end, the number of hot days in each region's hottest month is associated with a drought indicator. The evolution and characterization of drought were analyzed using both the Standardized Precipitation Evapotranspiration Index (SPEI) and the Standardized Precipitation Index (SPI), as obtained from the CRU TS3.23 database for the period 1950-2014. We used both SPI and SPEI for time scales between 3 and 9 months, with a spatial resolution of 0.5°. The number of hot days and nights per month (NHD and NHN) was determined using the ECAD-EOBS daily dataset for the same period and spatial resolution (dataset v14). The NHD and NHN were computed, respectively, as the number of days with a maximum or minimum temperature exceeding the 90th percentile.
Results show that the most frequent hottest months for the Mediterranean region occur in July and August. Moreover, the correlations between detrended NHD/NHN and the preceding 6- and 9-month SPEI/SPI are usually weaker than for the 3-month time scale. Most regions exhibit significantly negative correlations, i.e. high (low) NHD/NHN following negative (positive) SPEI/SPI values, and thus a potential for NHD/NHN early warning. Finally, the correlations of NHD/NHN with SPI and with SPEI differ, with SPEI showing slightly higher values, mainly for the 3-month time scale. Acknowledgments: This work was partially supported by national funds through FCT (Fundação para a Ciência e a Tecnologia, Portugal) under project IMDROFLOOD (WaterJPI/0004/2014). Ana Russo thanks FCT for granted support (SFRH/BPD/99757/2014). A. M. Ramos was also supported by an FCT postdoctoral grant (FCT/DFRH/SFRH/BPD/84328/2012).
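The NHD/NHN definition above (days exceeding the 90th percentile) can be sketched directly. This simplified version computes the threshold from the input series itself, whereas the study derives percentiles from the E-OBS daily climatology:

```python
def percentile(xs, q):
    """Percentile with linear interpolation between order statistics
    (matches numpy's default 'linear' method)."""
    s = sorted(xs)
    pos = (len(s) - 1) * q / 100.0
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (pos - lo)

def hot_days(tmax_series):
    """NHD-style count: days whose maximum temperature exceeds the
    series' own 90th percentile (a simplification of the study's
    climatological threshold)."""
    thr = percentile(tmax_series, 90.0)
    return sum(1 for t in tmax_series if t > thr)
```

For a series of 100 distinct daily maxima, roughly the top 10 days exceed the 90th-percentile threshold.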
An evaluation of generalization of mands during functional communication training.
Falcomata, Terry S; Wacker, David P; Ringdahl, Joel E; Vinquist, Kelly; Dutt, Anuradha
2013-01-01
The primary purpose of this study was to evaluate the generalization of mands during functional communication training (FCT) and sign language training across functional contexts (i.e., positive reinforcement, negative reinforcement). A secondary purpose was to evaluate a training procedure based on stimulus control to teach manual signs. During the treatment evaluation, we implemented sign language training in 1 functional context (e.g., positive reinforcement by attention) while continuing the functional analysis conditions in 2 other contexts (e.g., positive reinforcement by tangible item; negative reinforcement by escape). During the generalization evaluation, we tested for the generalization of trained mands across functional contexts (i.e., positive reinforcement; negative reinforcement) by implementing extinction in the 2 nontarget contexts. The results suggested that the stimulus control training procedure effectively taught manual signs and treated destructive behavior. Specific patterns of generalization of trained mands and destructive behavior also were observed. © Society for the Experimental Analysis of Behavior.
Norberto, Alarcón-Herrera; Saúl, Flores-Maya; Belén, Bellido; García-Bores Ana, M; Ernesto, Mendoza; Guillermo, Ávila-Acevedo; Elizabeth, Hernández-Echeagaray
2017-10-01
The raw data shown in this article come from the published research article entitled "Protective effects of Chlorogenic acid in 3-Nitropropionic acid induced toxicity and genotoxicity", Food Chem Toxicol. 2017 May 3. pii: S0278-6915(17)30226-0. DOI:10.1016/j.fct.2017.04.048 [1]. The data illustrate the antitoxic and antigenotoxic effects of chlorogenic acid (CGA) on the toxicity and genotoxicity produced by in vivo treatment with the mitochondrial toxin 3-nitropropionic acid (3-NP) in mice. Toxicity and genotoxicity were evaluated in erythrocytes of peripheral blood by the micronucleus assay. The data were shared at the Elsevier repository under the reference number FCT9033.
The influence of Atmospheric Rivers over the South Atlantic on rainfall in South Africa
NASA Astrophysics Data System (ADS)
Ramos, A. M.; Trigo, R. M.; Blamey, R. C.; Tome, R.; Reason, C. J. C.
2017-12-01
An automated atmospheric river (AR) detection algorithm is used for the South Atlantic Ocean basin, allowing the identification of the major ARs impinging on the west coast of South Africa during the austral winter months (April-September) for the period 1979-2014, using two reanalysis products (NCEP-NCAR and ERA-Interim). The two products show relatively good agreement, with 10-15 persistent ARs (lasting 18 h or longer) occurring on average per winter and nearly two thirds of these systems occurring poleward of 35°S. The relationship between persistent AR activity and winter rainfall is demonstrated using South African Weather Service rainfall data. Most stations positioned in areas of high topography recorded the highest percentage of rainfall contributed by persistent ARs, whereas stations downwind, to the east of the major topographic barriers, had the lowest contributions. Extreme rainfall days in the region are also ranked by their magnitude and spatial extent. It is found that around 70% of the top 50 daily winter rainfall extremes in South Africa were in some way linked to ARs (both persistent and non-persistent). The results suggest that although persistent ARs are important contributors to heavy rainfall events, they are not necessarily a prerequisite. Overall, the findings of this study support similar assessments made over the last decade of ARs in the Northern Hemisphere bound for the western coasts of the USA and Europe. Acknowledgements: The financial support for attending this workshop was possible through FCT project UID/GEO/50019/2013 - Instituto Dom Luiz. The authors also wish to acknowledge the contribution of project IMDROFLOOD - Improving Drought and Flood Early Warning, Forecasting and Mitigation using real-time hydroclimatic indicators (WaterJPI/0004/2014, funded by Fundação para a Ciência e a Tecnologia, Portugal (FCT)), which provided the data used in this work. A. M. Ramos was also supported by an FCT postdoctoral grant (FCT/DFRH/SFRH/BPD/84328/2012).
Extreme precipitation and floods in the Iberian Peninsula and its socio-economic impacts
NASA Astrophysics Data System (ADS)
Ramos, A. M.; Pereira, S.; Trigo, R. M.; Zêzere, J. L.
2017-12-01
Extreme precipitation events in the Iberian Peninsula can induce floods and landslides that often have major socio-economic impacts. The DISASTER database gathers basic information on past floods and landslides that caused social consequences in Portugal during the period 1865-2015. This database was built under the assumption that the social consequences of floods and landslides are sufficiently relevant to be reported by newspapers, which provide the data source. Three extreme historical events were analysed in detail, taking into account their wide-ranging socio-economic impacts. The December 1876 record precipitation and flood event led to all-time record flows in two large international rivers (Tagus and Guadiana); as a direct consequence, several Portuguese and Spanish towns and villages located on the banks of both rivers suffered serious flood damage on 7 December 1876. The 20-28 December 1909 event recorded the highest number of flood and landslide cases that occurred in Portugal in the period 1865-2015, having triggered the highest floods in 200 years at the Douro river's mouth and causing 89 fatalities in the northern regions of both Portugal and Spain. More recently, the deadliest flash-flooding event affecting Portugal since at least the early 19th century took place on 25 and 26 November 1967, causing more than 500 fatalities in the Lisbon region. We provide a detailed analysis of each of these events, including their human impacts, precipitation analyses based on historical datasets and the associated atmospheric circulation conditions from reanalysis datasets. Acknowledgements: This work was supported by the project FORLAND - Hydrogeomorphologic risk in Portugal: driving forces and application for land use planning [PTDC/ATPGEO/1660/2014] funded by the Portuguese Foundation for Science and Technology (FCT), Portugal. A. M. Ramos was also supported by an FCT postdoctoral grant (FCT/DFRH/SFRH/BPD/84328/2012).
The financial support for attending this workshop was also possible through FCT project UID/GEO/50019/2013 - Instituto Dom Luiz.
NASA Astrophysics Data System (ADS)
Zuzek Rozman, K.; Pecko, D.; Trafela, S.; Samardzija, Z.; Spreitzer, M.; Jaglicic, Z.; Nadrah, P.; Zorko, M.; Bele, M.; Tisler, T.; Pintar, A.; Sturm, S.; Kostevsek, N.
2018-03-01
Fe69±3Pd31±3 nanowires (NWs) with lengths of a few microns and diameters of 200 nm were synthesized via template-assisted pulsed electrodeposition into alumina-based templates. The as-deposited Fe69±3Pd31±3 NWs exhibited an α-Fe (bcc solid solution of Fe, Pd) nanocrystalline structure as seen from x-ray diffraction (XRD), which was confirmed by transmission electron microscopy (TEM), with some larger grains up to 50 nm observed. Annealing of the as-deposited Fe69±3Pd31±3 NWs at 1173 K for 45 min, followed by quenching in ice water, resulted in a transformation to the fcc crystal structure (XRD) with grain sizes up to 200 nm (TEM). To induce the austenite-to-martensite, i.e., fcc-to-fct, phase transformation, the fcc Fe69±3Pd31±3 NWs were cooled to 73 K. The XRD showed the disappearance of the (200) fcc reflection (at room temperature) and the appearance of the (200) fct reflection (at 73 K), confirming that the fcc-to-fct transformation took place. The magnetic measurements revealed that the fcc Fe69±3Pd31±3 NWs measured at low temperature (50 K) had a larger coercivity than at room temperature, which suggests the fct phase was present in the undercooled state, exhibiting a larger magnetocrystalline anisotropy than the fcc phase present at room temperature. As part of our interest in magnetic-shape-memory actuators, the as-deposited Fe69±3Pd31±3 NWs were tested for toxicity on zebrafish. In vivo tests showed no acute lethal or sub-lethal effects, which implies that the Fe69±3Pd31±3 NWs have the potential to be used as nano-actuators in biomedical applications.
Memory Device and Nanofabrication Techniques Using Electrically Configurable Materials
NASA Astrophysics Data System (ADS)
Ascenso Simões, Bruno
The development of novel nanofabrication techniques and of single-walled carbon nanotube field-configurable transistor (SWCNT-FCT) memory devices using electrically configurable materials is presented. A novel lithographic technique, electric lithography (EL), that uses an electric field for pattern generation has been demonstrated. It can be used both for patterning biomolecules on a polymer surface and for patterning resist. Using an electrical resist composed of a polymer bearing Boc-protected amine groups and an iodonium salt, the Boc groups on the polymer surface were converted to free amines by applying an electric field. On the modified polymer surface, a streptavidin pattern was fabricated at sub-micron scale. Patterning of a polymer resin composed of epoxy monomers and a diaryliodonium salt by EL has also been demonstrated. The reaction mechanism for electric resist configuration is believed to involve acid generation via electrochemical reduction in the resist. We show a novel field-configurable transistor (FCT) based on single-walled carbon nanotube network field-effect transistors in which poly(ethylene glycol) crosslinked by electron beam is incorporated into the gate. The device conductance can be configured to arbitrary states reversibly and repeatedly by applying external gate voltages. Raman spectroscopy revealed that the ratio of D- to G-band intensity in the SWCNTs of the FCT progressively increases as the device is configured to lower conductance states. Electron transport studies at low temperatures showed a strong temperature dependence of the resistance. Band gap widening of CNTs up to ~4 eV has been observed by examining the differential conductance-gate voltage-bias voltage relationship. The switching mechanism of the FCT is attributed to a structural transformation of the CNTs via reversible hydrogenation and dehydrogenation induced by gate voltages, which tunes the CNT band gap continuously and reversibly to non-volatile analog values.
The CNT transistors with field tunable band gaps would facilitate field programmable circuits based on the self-organized CNTs, and might also lead to novel analog memory, neuromorphic, and photonic devices.
Séralini, Gilles-Eric; Mesnage, Robin; Defarge, Nicolas; Spiroux de Vendômois, Joël
2014-01-01
We have studied the long-term toxicity of a Roundup-tolerant GM maize (NK603) and of a whole Roundup pesticide formulation at environmentally relevant levels from 0.1 ppb. Our study was first published in Food and Chemical Toxicology (FCT) on 19 September 2012. The first wave of criticisms arrived within a week, mostly from plant biologists without experience in toxicology. We answered all these criticisms. The debate then went beyond scientific arguments, and a wave of ad hominem and potentially libellous comments appeared in different journals by authors having serious yet undisclosed conflicts of interest. At the same time, FCT acquired as its new assistant editor for biotechnology a former employee of Monsanto, after he sent a letter to FCT to complain about our study. This is in particular why FCT asked for a post-hoc analysis of our raw data. On 19 November 2013, the editor-in-chief requested the retraction of our study while recognizing that the data were not incorrect and that there was no misconduct and no fraud or intentional misinterpretation in our complete raw data - an unusual or even unprecedented action in scientific publishing. The editor argued that no conclusions could be drawn because we studied 10 rats per group over 2 years, because they were Sprague Dawley rats, and because the data were inconclusive on cancer. Yet this was known at the time of submission of our study. Our study was, however, never intended to be a carcinogenicity study; we never used the word 'cancer' in our paper. The present opinion is a summary of the debate resulting in this retraction, which is a historic example of conflicts of interest in the scientific assessment of products commercialized worldwide. We also show that the decision to retract cannot be rationalized on any discernible scientific or ethical grounds. Censorship of research into health risks undermines the value and the credibility of science; thus, we republish our paper.
Computer Software Configuration Item-Specific Flight Software Image Transfer Script Generator
NASA Technical Reports Server (NTRS)
Bolen, Kenny; Greenlaw, Ronald
2010-01-01
A Korn shell (ksh) UNIX script enables the International Space Station (ISS) Flight Control Team (FCT) operators in NASA's Mission Control Center (MCC) in Houston to transfer an entire or partial computer software configuration item (CSCI) from a flight software compact disk (CD) to the onboard Portable Computer System (PCS). The tool is designed to read the content stored on a flight software CD and generate individual CSCI transfer scripts that are capable of transferring the flight software content in a given subdirectory on the CD to the scratch directory on the PCS. The flight control team can then transfer the flight software from the PCS scratch directory to the Electronically Erasable Programmable Read Only Memory (EEPROM) of an ISS Multiplexer/Demultiplexer (MDM) via the Indirect File Transfer capability. The individual CSCI scripts and the CSCI-Specific Flight Software Image Transfer Script Generator (CFITSG), when executed a second time, will remove all components from their original execution. The tool will identify errors in the transfer process and create logs of the transferred software for the purposes of configuration management.
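The generator's core idea, reading a CD's per-CSCI directory layout and emitting one transfer script per CSCI, can be sketched as follows. This is a hypothetical Python illustration only; the real CFITSG tool's paths (`/cdrom`, `/tmp/scratch` below are invented) and exact script contents are not described at this level of detail in the source.

```python
def generate_transfer_scripts(csci_files, scratch_dir="/tmp/scratch"):
    """Given a mapping {csci_name: [filenames]} describing the flight
    software CD contents, emit one ksh transfer script per CSCI that
    copies that CSCI's files to the PCS scratch directory.
    Directory layout and paths are illustrative, not the actual tool's."""
    scripts = {}
    for csci, files in sorted(csci_files.items()):
        lines = ["#!/bin/ksh", f"mkdir -p {scratch_dir}/{csci}"]
        lines += [
            f"cp /cdrom/{csci}/{name} {scratch_dir}/{csci}/{name}"
            for name in sorted(files)
        ]
        scripts[csci] = "\n".join(lines) + "\n"
    return scripts
```

Each generated script is self-contained, so an operator can run only the CSCI transfers actually needed.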
Simacek, Jessica; Dimian, Adele F; McComas, Jennifer J
2017-03-01
Young children with neurodevelopmental disorders such as autism spectrum disorders (ASD) and Rett syndrome often experience severe communication impairments. This study examined the efficacy of parent-implemented communication assessment and intervention with remote coaching via telehealth on the acquisition of early communication skills of three young children with ASD (2) and Rett syndrome (1). Efficacy of the intervention was evaluated using single-case experimental designs. First, functional assessment was used to identify idiosyncratic/potentially communicative responses and contexts for each child. Next, parents implemented functional communication training (FCT). All of the children acquired the targeted communication responses. The findings support the efficacy of telehealth as a service delivery model to coach parents on intervention strategies for their children's early communication skills.
Aires-de-Sousa, João; Aires-de-Sousa, Luisa
2003-01-01
We propose representing individual positions in DNA sequences by virtual potentials generated by the other bases of the same sequence. This is a compact representation of the neighbourhood of a base. The distribution of the virtual potentials over the whole sequence can be used as a representation of the entire sequence (SEQREP code). The code is flexible: its length is independent of the sequence size, it does not require prior alignment, and it is convenient for processing by neural networks or statistical techniques. To evaluate its biological significance, the SEQREP code was used for training Kohonen self-organizing maps (SOMs) in two applications: (a) detection of Alu sequences, and (b) classification of sequences encoding the HIV-1 envelope glycoprotein (env) into subtypes A-G. It was demonstrated that SOMs clustered sequences belonging to different classes into distinct regions. For independent test sets, very high rates of correct predictions were obtained (97% in the first application, 91% in the second). Possible areas of application of SEQREP codes include functional genomics, phylogenetic analysis, detection of repetitions, database retrieval, and automatic alignment. Software for representing sequences by SEQREP codes and for training Kohonen SOMs is freely available from http://www.dq.fct.unl.pt/qoa/jas/seqrep. Supplementary material is available at http://www.dq.fct.unl.pt/qoa/jas/seqrep/bioinf2002
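As an illustration of the general idea (not the published SEQREP formulation, whose potential function and binning are not given here), each position can be assigned a Coulomb-like potential summed over all other bases, and the distribution of those potentials binned into a fixed-length code. The base weights and the `vmax` cutoff below are invented for this sketch.

```python
def virtual_potentials(seq, weights=None):
    """Toy 'virtual potential' of each position: a 1/distance sum over
    all other bases, with hypothetical per-base weights."""
    if weights is None:
        weights = {"A": 1.0, "C": 2.0, "G": 3.0, "T": 4.0}
    n = len(seq)
    return [
        sum(weights[seq[j]] / abs(i - j) for j in range(n) if j != i)
        for i in range(n)
    ]

def seqrep_code(seq, bins=10, vmax=20.0):
    """Fixed-length code: normalized histogram of the potential
    distribution, so the code length never depends on sequence size."""
    pots = virtual_potentials(seq)
    hist = [0] * bins
    for p in pots:
        hist[min(int(p / vmax * bins), bins - 1)] += 1
    total = len(pots)
    return [h / total for h in hist]
```

The histogram step is what makes the representation alignment-free: two sequences of very different lengths still map to vectors of identical dimension.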
Deployment of the OSIRIS EM-PIC code on the Intel Knights Landing architecture
NASA Astrophysics Data System (ADS)
Fonseca, Ricardo
2017-10-01
Electromagnetic particle-in-cell (EM-PIC) codes such as OSIRIS have found widespread use in modelling the highly nonlinear and kinetic processes that occur in several relevant plasma physics scenarios, ranging from astrophysical settings to high-intensity laser plasma interaction. Being computationally intensive, these codes require large scale HPC systems, and a continuous effort in adapting the algorithm to new hardware and computing paradigms. In this work, we report on our efforts on deploying the OSIRIS code on the new Intel Knights Landing (KNL) architecture. Unlike the previous generation (Knights Corner), these boards are standalone systems and introduce several new features, including the new AVX-512 instructions and on-package MCDRAM. We will focus on the parallelization and vectorization strategies followed, as well as memory management, and present a detailed evaluation of code performance in comparison with the CPU code. This work was partially supported by Fundação para a Ciência e a Tecnologia (FCT), Portugal, through Grant No. PTDC/FIS-PLA/2940/2014.
NASA Astrophysics Data System (ADS)
Ito, Hisatoshi
2015-04-01
Guillong et al. (2015) mentioned that corrections for abundance sensitivity for 232Th and molecular zirconium sesquioxide ions (Zr2O3+) are critical for reliable determination of 230Th abundances in zircon in LA-ICP-MS analyses. There is no denying that more rigorous treatments are necessary to obtain more reliable ages than those in Ito (2014). However, as shown in Fig. 2 of Guillong et al. (2015), the uncorrected (230Th)/(238U) ratios for reference zircons other than Mud Tank are only 5-20% higher than unity. Since the U abundance of Toya Tephra zircons with U-Pb ages < 1 Ma is in between that of FCT and Plesovice, the overestimation of 230Th from both abundance sensitivity and molecular interferences is expected to be 5-20% for the Toya Tephra. Moreover, Ito (2014) obtained U-Th ages of the Toya Tephra by comparison with Fish Canyon Tuff (FCT) data. Because both the FCT and the Toya Tephra show similar trends of 230Th overestimation, the effect of 230Th overestimation on the U-Th ages should cancel out or be negligible. Therefore, the pivotal conclusion of Ito (2014), that simultaneous U-Pb and U-Th dating using LA-ICP-MS is possible and useful for Quaternary zircons, holds true.
Role of "the frame cycle time" in portal dose imaging using an aS500-II EPID.
Al Kattar Elbalaa, Zeina; Foulquier, Jean Noel; Orthuon, Alexandre; Elbalaa, Hanna; Touboul, Emmanuel
2009-09-01
This paper evaluates the role of an acquisition parameter, the frame cycle time (FCT), in the performance of an aS500-II EPID. The work rests on a study of the Varian aS500-II EPID and the Image Acquisition System 3 (IAS3). We are interested in integrated acquisition using the asynchronous mode. To better understand the image acquisition process, we investigated the influence of the FCT on the speed of acquisition, the pixel value of the averaged gray-scale frame, and the noise, using 6 and 15 MV X-ray beams and dose rates of 1-6 Gy/min on 2100 C/D linacs. In the integrated mode not synchronized to beam pulses, only one parameter, the FCT, influences the pixel value. The pixel value of the averaged gray-scale frame is proportional to this parameter. When FCT < 55 ms (acquisition speed > 18 frames/s), the speed of acquisition becomes unstable and leads to a fluctuation of the portal dose response. A timing instability and saturation are detected when the dose per frame exceeds 1.53 MU/frame. Rules were deduced to avoid saturation and to optimize this dosimetric mode. The choice of the acquisition parameter is essential for accurate portal dose imaging.
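The reported operating rules (acquisition speed is 1000/FCT frames per second, timing instability below FCT = 55 ms, saturation above 1.53 MU per frame) can be wrapped in a small check. The conversion from a machine dose rate in MU/min to dose per frame is an assumption of this sketch, not a formula quoted by the paper.

```python
def check_acquisition(fct_ms, dose_rate_mu_per_min):
    """Hedged sketch of the aS500-II timing rules reported above, for
    the asynchronous integrated mode. Thresholds (55 ms, 1.53 MU/frame)
    come from the abstract; the MU/min conversion is an assumption."""
    frames_per_s = 1000.0 / fct_ms
    dose_per_frame = dose_rate_mu_per_min / 60.0 / frames_per_s  # MU/frame
    return {
        "frames_per_s": frames_per_s,
        "dose_per_frame": dose_per_frame,
        "timing_unstable": fct_ms < 55.0,       # i.e. > 18 frames/s
        "saturated": dose_per_frame > 1.53,
    }
```

For example, at 600 MU/min and FCT = 100 ms the sketch gives 10 frames/s and 1.0 MU/frame, inside both limits.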
Crystal Structural Effect of AuCu Alloy Nanoparticles on Catalytic CO Oxidation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhan, Wangcheng; Wang, Jinglin; Wang, Haifeng
2017-06-07
Controlling the physical and chemical properties of alloy nanoparticles (NPs) is an important approach to optimizing NP catalysis. Unlike other tuning knobs, such as size, shape, and composition, crystal structure has received limited attention, and its role in catalysis is not well understood. This deficiency is mainly due to the difficulty of synthesizing and fine-tuning the NPs' crystal structure. Here, taking AuCu alloy NPs with face-centered cubic (fcc) and face-centered tetragonal (fct) structures as examples, we demonstrate a remarkable difference in phase segregation and catalytic performance depending on the crystal structure. During thermal treatment in air, the Cu component in fcc-AuCu alloy NPs segregates onto the alloy surface more easily than that in fct-AuCu alloy NPs. As a result, after annealing at 250 °C in air for 1 h, the fcc- and fct-AuCu alloy NPs are transformed into Au/CuO and AuCu/CuO core/shell structures, respectively. More importantly, this variation in heterostructures introduces a significant difference in CO adsorption on the two catalysts, leading to a largely enhanced catalytic activity of the AuCu/CuO NP catalyst for CO oxidation. Furthermore, the same concept can be extended to other alloy NPs, making it possible to fine-tune NP catalysis for many different chemical reactions.
Jung, Won Suk; Popov, Branko N
2017-07-19
In the bottom-up synthesis strategy performed in this study, the Co-catalyzed pyrolysis of a chelate complex and activated carbon black at high temperatures triggers a graphitization reaction which introduces Co particles into the N-doped graphitic carbon matrix and immobilizes N-modified active sites for the oxygen reduction reaction (ORR) on the carbon surface. The Co particles encapsulated within the N-doped graphitic carbon shell diffuse up to the Pt surface under the polymer protective layer and form a chemically ordered face-centered tetragonal (fct) PtCo/CCCS catalyst, as evidenced by structural and compositional studies. The fct-structured PtCo/CCCS at low Pt loading (0.1 mg Pt cm-2) shows 6% higher power density than the state-of-the-art commercial Pt/C catalyst. After an MEA durability test of 30 000 potential cycles, the performance loss of the catalyst is negligible. The electrochemical surface area loss is less than 40%, while that of commercial Pt/C is nearly 80%. After the accelerated stress test, the uniform catalyst distribution is retained and the mean particle size increases by approximately 1 nm. The results obtained in this study indicate that the highly stable compositional and structural properties of the chemically ordered PtCo/CCCS catalyst contribute to its exceptional durability.
Direct observation of void evolution during cement hydration
Moradian, Masoud; Hu, Qinang; Aboustait, Mohammed; ...
2017-09-28
This study follows the hydration of both portland cement and tricalcium silicate pastes between 30 min and 16 h of hydration. In-situ fast X-ray computed tomography (fCT) was used to make direct observations of air-filled void formation at w/s ratios of 0.40 to 0.70 with micron resolution. The results show that the volume of air-filled voids reaches a maximum over the first hour of the acceleration period, then decreases and stays constant. The void distribution changes from a few coarse voids to a large number of smaller and more uniformly distributed voids. This behavior is suggested to be controlled by changes in the ionic strength that cause exsolution of dissolved air from the pore solution.
Fe/Rh (100) multilayer magnetism probed by x-ray magnetic circular dichroism
NASA Astrophysics Data System (ADS)
Tomaz, M. A.; Ingram, D. C.; Harp, G. R.; Lederman, D.; Mayo, E.; O'brien, W. L.
1997-09-01
We report the layer-averaged magnetic moments of both Fe and Rh in sputtered Fe/Rh (100) multilayer thin films as measured by x-ray magnetic circular dichroism. We observe two distinct regimes in these films. The first is characterized by Rh moments of at least 1 μB, Fe moments enhanced as much as 30% above bulk, and a bct crystal structure. The second regime is distinguished by sharp declines of both Fe and Rh moments accompanied by a transition to an fct crystal lattice. The demarcation between the two regimes is identified as the layer thickness at which the bct and fct phases first coexist, which we term the critical thickness tcrit. We attribute the change in magnetic behavior to the structural transformation.
Operational Solution to the Nonlinear Klein-Gordon Equation
NASA Astrophysics Data System (ADS)
Bengochea, G.; Verde-Star, L.; Ortigueira, M.
2018-05-01
We obtain solutions of the nonlinear Klein-Gordon equation using a novel operational method combined with the Adomian polynomial expansion of nonlinear functions. Our operational method does not use any integral transforms or integration processes. We illustrate the application of our method by solving several examples and present numerical results that show the accuracy of the truncated series approximations to the solutions. This work was supported by Grant SEP-CONACYT 220603; the first author was supported by SEP-PRODEP through project UAM-PTC-630, and the third author by Portuguese national funds through the FCT Foundation for Science and Technology under project PEst-UID/EEA/00066/2013.
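For context, the Adomian polynomials of a nonlinearity F(u) are defined by A_n = (1/n!) d^n/dλ^n F(Σ_k u_k λ^k) at λ = 0, which for a polynomial F reduces to extracting the coefficient of λ^n. For the quadratic nonlinearity F(u) = u², that coefficient is a truncated convolution. This is a generic sketch of that standard expansion, not a reproduction of the paper's operational method.

```python
def adomian_u_squared(u):
    """Adomian polynomials A_0..A_{n-1} for F(u) = u^2, given the
    decomposition coefficients u = [u_0, ..., u_{n-1}].  Since
    A_n is the coefficient of lam^n in (sum_k u_k lam^k)^2, each A_n
    is a finite convolution of the u_k."""
    n = len(u)
    return [sum(u[j] * u[k - j] for j in range(k + 1)) for k in range(n)]
```

For u = [u0, u1, u2] this reproduces the textbook values A0 = u0², A1 = 2 u0 u1, A2 = u1² + 2 u0 u2.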
NASA Astrophysics Data System (ADS)
Santos, A. M. P. A.; Nieblas, A. E.; Verley, P.; Teles-Machado, A.; Bonhommeau, S.; Lett, C.; Garrido, S.; Peliz, A.
2017-12-01
The European sardine (Sardina pilchardus) is the most important small pelagic fishery of the Western Iberia Upwelling Ecosystem (WIUE). Recently, recruitment of this species has declined due to changing environmental conditions. Furthermore, controversies exist regarding its population structure, with barriers thought to exist between the Atlantic-Iberian Peninsula, Northern Africa, and the Mediterranean. Few studies have investigated the transport and dispersal of sardine eggs and larvae off Iberia and the subsequent impact on larval recruitment variability. Here, we examine these issues using a Regional Ocean Modeling System climatology (1989-2008) coupled to the Lagrangian transport model Ichthyop. Using biological parameters from the literature, we conduct simulations that investigate the effects of spawning patchiness, diel vertical migration behaviors, and egg buoyancy on the transport and recruitment of virtual sardine ichthyoplankton on the continental shelf. We find that release area, release depth, and month of release all significantly affect recruitment. Patchiness has no effect, and diel vertical migration causes slightly lower recruitment. Egg buoyancy effects are significant and act similarly to depth of release. As in other studies, we find that recruitment peaks vary by latitude, explained here by the seasonal variability of offshore transport. We find weak, continuous alongshore transport between release areas, though a large proportion of simulated ichthyoplankton are transported north to the Cantabrian coast (up to 27%). We also show low-level transport into Morocco (up to 1%) and the Mediterranean (up to 8%). The high proportion of local retention and the low but consistent alongshore transport support the idea of a series of metapopulations along this coast. This study was supported by the Portuguese Science and Technology Foundation (FCT) through the research projects MODELA (PTDC/MAR/098643/2008) and MedEx (MARIN-ERA/MAR/0002/2008).
MedEx is also a project of the EC FP6 ERA-NET Program. This study also contributes to the FCT-funded Strategic Project Pest-OE/MAR/UI0199/2011 and UID/Multi/04326/2013. SG was supported by FCT through research contract IF/01546/2015. ATM was supported by FCT through PhD grant SFRH/BD/40142/2007.
Atmospheric Rivers in Europe: impacts, predictability, and future climate scenarios
NASA Astrophysics Data System (ADS)
Ramos, A. M.; Tome, R.; Sousa, P. M.; Liberato, M. L. R.; Lavers, D.; Trigo, R. M.
2017-12-01
In recent years a strong relationship has been found between Atmospheric Rivers (ARs) and extreme precipitation and floods across western Europe, with some regions having 8 of their top 10 annual maximum precipitation events related to ARs. In the particular case of the Iberian Peninsula, the association between ARs and extreme precipitation days in the western river basins is noteworthy, while for the eastern and southern basins the impact of ARs is reduced. An automated AR detection algorithm is used for the North Atlantic Ocean basin, allowing the identification of the major ARs affecting western European coasts in the present climate and under different climate change scenarios. We have used both reanalyses and six General Circulation Models under three climate scenarios (the control simulation and the RCP4.5 and RCP8.5 scenarios). The western coast of Europe was divided into five domains, namely the Iberian Peninsula, France, UK, Southern Scandinavia and the Netherlands, and Northern Scandinavia. It was found that there is an increase in the vertically integrated horizontal water transport which leads to an increase in AR frequency, a result more visible in the high-emission scenario (RCP8.5) for the 2074-2099 period. Since ARs are associated with high-impact weather, it is important to study their predictability. This assessment was performed with the ECMWF ensemble forecasts up to 10 days ahead for the winters of 2013/14, 2014/15 and 2015/16, for events that made landfall in the Iberian Peninsula. We show the model's potential added value in detecting upcoming AR events, which is particularly useful for predicting potential hydrometeorological extremes. Acknowledgements: This work was supported by the project FORLAND - Hydrogeomorphologic risk in Portugal: driving forces and application for land use planning (PTDC/ATPGEO/1660/2014), funded by the Portuguese Foundation for Science and Technology (FCT), Portugal. A. M.
Ramos was also supported by an FCT postdoctoral grant (FCT/DFRH/SFRH/BPD/84328/2012). Financial support for attending this workshop was also provided through FCT project UID/GEO/50019/2013 - Instituto Dom Luiz.
Mbodi, Felix E; Nguku, P; Okolocha, E; Kabir, J
2014-01-01
The use of antibiotics in poultry can result in residues in eggs. The joint FAO/WHO committee recommended banning the use of chloramphenicol (CAP) in food animals due to its public health hazards of aplastic anaemia, leukaemia, allergy, antibacterial resistance and carcinogenicity. This paper determines the prevalence of CAP residues in chicken eggs and assesses the usage and awareness of its ban amongst poultry farmers in the Federal Capital Territory (FCT), Abuja, Nigeria. A cross-sectional survey of registered poultry farmers in FCT was conducted using questionnaires to determine CAP administration in poultry and awareness of its ban. Pooled egg samples were collected from each poultry farm surveyed and from randomly sampled government-owned markets in FCT. The source state of the eggs was identified by the marketer at the time of collection. Samples were analysed using an enzyme-linked immunosorbent assay (ELISA) technique for the presence of CAP, and prevalence was determined. Of 288 total pooled samples collected, 257 (89.2%) were from markets and 31 (10.8%) were from poultry farms. A total of 20 (7%) pooled egg samples tested CAP-positive; market eggs originated from 15 (41%) states of the country. Of the market eggs, 16 (6.2%) pooled samples tested positive; of the eggs from poultry farms, four (12.9%) tested positive. Mean CAP concentrations in the positive samples ranged from 0.49 to 1.17 µg kg(-1) (parts per billion). CAP use amongst poultry farmers in FCT was 75.5%; awareness of the CAP ban was 26.3%. Though 66% of veterinarians were unaware of the CAP ban, they were more likely to be aware than other poultry farmers (odds ratio (OR) = 1.4). Farm managers who used CAP were more likely to be aware of the CAP ban than farm managers not using CAP (OR = 5.5; p = 0.04). Establishing a drug residue surveillance and control programme and enforcing CAP legislation/regulation are needed to educate Nigerian poultry farmers and curb the widespread use of CAP.
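The prevalence percentages quoted above follow directly from the reported counts, and can be checked with a line of arithmetic. The odds-ratio helper is generic, since the underlying 2x2 tables behind OR = 1.4 and OR = 5.5 are not given in the abstract.

```python
def prevalence(positives, total):
    """Prevalence as a percentage: positives / total * 100."""
    return 100.0 * positives / total

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table [[a, b], [c, d]]
    (rows: exposed yes/no; columns: outcome yes/no)."""
    return (a * d) / (b * c)
```

Checking against the abstract: 20/288 gives about 6.9% (reported as 7%), 16/257 gives 6.2%, and 4/31 gives 12.9%.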
Projection model for flame chemiluminescence tomography based on lens imaging
NASA Astrophysics Data System (ADS)
Wan, Minggang; Zhuang, Jihui
2018-04-01
For flame chemiluminescence tomography (FCT) based on lens imaging, the projection model is essential because it formulates the mathematical relation between the flame projections captured by cameras and the chemiluminescence field, and, through this relation, the field is reconstructed. This work proposed the blurry-spot (BS) model, which takes more universal assumptions and has higher accuracy than the widely applied line-of-sight model. By combining the geometrical camera model and the thin-lens equation, the BS model takes into account perspective effect of the camera lens; by combining ray-tracing technique and Monte Carlo simulation, it also considers inhomogeneous distribution of captured radiance on the image plane. Performance of these two models in FCT was numerically compared, and results showed that using the BS model could lead to better reconstruction quality in wider application ranges.
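For intuition, the line-of-sight model that the BS model improves upon can be reduced to a toy case: each camera pixel integrates the field along one row or column of a 2D grid, and the field is recovered iteratively. This sketch deliberately ignores lens blur and perspective (exactly what the BS model adds) and uses a simple multiplicative ART update as the reconstruction step, which is one standard choice rather than the paper's.

```python
def project(field, axis):
    """Line-of-sight projection of a square 2D field: each pixel
    integrates the field along one row (axis=0) or column (axis=1)."""
    n = len(field)
    if axis == 0:
        return [sum(field[i][j] for j in range(n)) for i in range(n)]
    return [sum(field[i][j] for i in range(n)) for j in range(n)]

def mart_reconstruct(p_rows, p_cols, n, iters=50):
    """Toy multiplicative-ART reconstruction from two orthogonal
    views: rescale rows then columns so their sums match the data."""
    f = [[1.0] * n for _ in range(n)]
    for _ in range(iters):
        for i in range(n):
            s = sum(f[i])
            if s > 0:
                r = p_rows[i] / s
                f[i] = [v * r for v in f[i]]
        for j in range(n):
            s = sum(f[i][j] for i in range(n))
            if s > 0:
                r = p_cols[j] / s
                for i in range(n):
                    f[i][j] *= r
    return f
```

With only two views the field is not uniquely determined, but the reconstruction reproduces both projections, which is all this toy claims.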
Extreme Windstorms and Related Impacts on Iberia
NASA Astrophysics Data System (ADS)
Liberato, Margarida L. R.; Ordóñez, Paulina; Pinto, Joaquim G.; Ramos, Alexandre M.; Karremann, Melanie K.; Trigo, Isabel F.
2014-05-01
Extreme windstorms are among the major natural catastrophes in the midlatitudes and one of the most costly natural hazards in Europe, responsible for substantial economic damage and even fatalities. During recent winters, the Iberian Peninsula was hit by severe windstorms such as Klaus (January 2009), Xynthia (February 2010) and Gong (January 2013), which exhibited uncommon characteristics. They were all explosive extratropical cyclones formed over the mid-Atlantic that then travelled eastwards at lower latitudes than usual, along the edge of the dominant North Atlantic storm track. In this work we present a windstorm catalogue for the Iberian Peninsula, in which the characteristics of the potentially most destructive windstorms for the 1979-2012 period are identified. For this purpose, the potential impact of high winds over the Iberian Peninsula is assessed using a daily damage index based on maximum wind speeds that exceed the local 98th percentile threshold. Then, the characteristics of the extratropical cyclones associated with these events are analyzed. Results indicate that these are fast-moving, intense cyclones, typically located near the northwestern tip of the Iberian Peninsula. This work was partially supported by FEDER (Fundo Europeu de Desenvolvimento Regional) funds through COMPETE (Programa Operacional Factores de Competitividade) and by national funds through FCT (Fundação para a Ciência e a Tecnologia, Portugal) under project STORMEx FCOMP-01-0124-FEDER-019524 (PTDC/AAC-CLI/121339/2010). A. M. Ramos was also supported by an FCT postdoctoral grant (FCT/DFRH/SFRH/BPD/84328/2012).
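A common concrete form of such an exceedance-based index is the cubed relative exceedance of the local 98th percentile, in the spirit of Klawa and Ulbrich's storm loss index. Whether the paper uses exactly this form is not stated, so treat the sketch as illustrative.

```python
def daily_damage_index(wind_max, v98):
    """Exceedance-based damage index: sum over grid points of the
    cubed relative exceedance of the local 98th-percentile wind speed.
    Points below their local threshold contribute nothing.  The cubic
    form is a conventional choice, assumed here for illustration."""
    return sum(max(0.0, v / t - 1.0) ** 3 for v, t in zip(wind_max, v98))
```

The cube reflects that wind damage grows much faster than linearly with speed, so rare large exceedances dominate the index.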
NASA Technical Reports Server (NTRS)
Murakawa, M. (Editor); Miyoshi, K. (Editor); Koga, Y. (Editor); Schaefer, L. (Editor); Tzeng, Y. (Editor)
2003-01-01
These are the Proceedings of the Seventh Applied Diamond Conference/Third Frontier Carbon Technology Joint Conference held at Epochal Tsukuba International Conference Center from August 18 to 21, 2003. The diamond CVD process was first reported by Dr. Spitsyn in 1981 and Prof. S. Iijima reported his discovery of carbon nanotubes in 1991. In the past years, both diamond-related materials and novel carbon materials have attracted considerable interest by the scientific, technological, and industrial community. Many practical and commercial products of diamond materials are reported in these proceedings. A broad variety of applications of carbon nanotubes and novel carbons have also been explored and demonstrated. Having more than 175 invited and contributing papers by authors from over 18 countries for presentations at ADC/FCT 2003 clearly demonstrates that these materials, due to the combination of their superior properties, are both scientifically amazing and economically significant.
NASA Astrophysics Data System (ADS)
Durandurdu, Murat
2007-07-01
The behavior of a gold crystal under uniaxial, tensile, and three different triaxial stresses is studied using an ab initio constant-pressure technique within a generalized gradient approximation. Gold undergoes a phase transformation from the face-centered cubic (fcc) structure to a body-centered tetragonal (bct) structure with the space group I4/mmm under applied uniaxial stress, while it transforms to a face-centered tetragonal (fct) phase within I4/mmm symmetry under uniaxial tensile loading. Further uniaxial compression of the bct phase results in a symmetry change from I4/mmm to P1 at high stresses and ultimately structural failure around 200.0 GPa. For the case of triaxial stresses, gold also converts into a bct state. The critical stress for the fcc-to-bct transformation increases as the ratio of the triaxial stress increases. Both fct and bct phases are elastically unstable.
NASA Technical Reports Server (NTRS)
Tzeng, Y. (Editor); Miyoshi, K. (Editor); Yoshikawa, M. (Editor); Murakawa, M. (Editor); Koga, Y. (Editor); Kobashi, K. (Editor); Amaratunga, G. A. J. (Editor)
2001-01-01
These are the Proceedings of the Sixth Applied Diamond Conference/Second Frontier Carbon Technology Joint Conference hosted by Auburn University from August 6 to 10, 2001. The diamond CVD process was first reported by Dr. Spitsyn in 1981 and Prof. S. Iijima reported his discovery of carbon nanotubes in 1991. In the past years, both diamond-related materials and novel carbon materials have attracted considerable interest by the scientific, technological, and industrial community. Many practical and commercial products of diamond materials are reported in these proceedings. A broad variety of applications of carbon nanotubes and novel carbons have also been explored and demonstrated. Having more than 200 invited and contributing papers by authors from over 20 countries for presentations at ADC/FCT 2001 clearly demonstrates that these materials, due to the combination of their superior properties, are both scientifically amazing and economically significant.
NASA Technical Reports Server (NTRS)
Shields, Joel F.; Metz, Brandon C.
2010-01-01
The optical pointing sensor provides a means of directly measuring the relative positions of JPL's Formation Control Testbed (FCT) vehicles without communication. This innovation is a steerable infrared (IR) rangefinder that gives measurements in terms of range and bearing to a passive retroreflector.
NASA Astrophysics Data System (ADS)
Gotovac, Hrvoje; Srzic, Veljko
2014-05-01
Contaminant transport in natural aquifers is a complex, multiscale process that is frequently studied using different Eulerian, Lagrangian and hybrid numerical methods. Conservative solute transport is typically modeled using the advection-dispersion equation (ADE). Despite the large number of numerical methods that have been developed to solve it, the accurate numerical solution of the ADE still presents formidable challenges. In particular, current numerical solutions of multidimensional advection-dominated transport in non-uniform velocity fields are affected by one or all of the following problems: numerical dispersion that introduces artificial mixing and dilution, grid orientation effects, unresolved spatial and temporal scales, and unphysical numerical oscillations (e.g., Herrera et al., 2009; Bosso et al., 2012). In this work we present the Eulerian-Lagrangian Adaptive Fup Collocation Method (ELAFCM), based on Fup basis functions and a collocation approach for spatial approximation, combined with explicit stabilized Runge-Kutta-Chebyshev temporal integration (the public-domain routine SERK2), which is especially well suited for stiff parabolic problems. The spatial adaptive strategy is based on Fup basis functions, which are closely related to wavelets and splines in that they are also compactly supported basis functions; they exactly describe algebraic polynomials and enable a multiresolution adaptive analysis (MRA). MRA is performed here via the Fup Collocation Transform (FCT), so that at each time step the concentration solution is decomposed using only a few significant Fup basis functions on an adaptive collocation grid with appropriate scales (frequencies) and locations, a desired level of accuracy and a near-minimum computational cost. FCT adds more collocation points and higher resolution levels only in sensitive zones with sharp concentration gradients, fronts and/or narrow transition zones.
Our recent results show that there is no need to solve a large linear system on the adaptive grid, because each Fup coefficient is obtained by predefined formulas equating the Fup expansion around the corresponding collocation point with a particular collocation operator based on a few surrounding solution values. Furthermore, each Fup coefficient can be obtained independently, which is perfectly suited for parallel processing. The adaptive grid in each time step is obtained from the solution of the previous time step (or the initial conditions) and an advective Lagrangian step in the current time step according to the velocity field and continuous streamlines. On the other hand, we implement the explicit stabilized routine SERK2 for the dispersive Eulerian part of the solution in the current time step on the obtained spatial adaptive grid. The overall adaptive concept does not require solving large linear systems for the spatial and temporal approximation of conservative transport. This new Eulerian-Lagrangian collocation scheme also resolves all the numerical problems mentioned above, owing to its adaptive nature and its ability to control numerical errors in space and time. The proposed method solves advection in a Lagrangian way, eliminating the problems of Eulerian methods, while the optimal collocation grid efficiently describes the solution and boundary conditions, eliminating the use of a large number of particles and other problems of Lagrangian methods. Finally, numerical tests show that this approach enables not only an accurate velocity field, but also conservative transport, even in highly heterogeneous porous media, resolving all spatial and temporal scales of the concentration field.
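The adaptive principle described above, adding collocation points only where the current representation predicts the solution poorly, can be illustrated with a toy 1D refinement loop. Linear interpolation stands in for the Fup bases, and the test function and tolerance are invented; this is an illustration of the refinement idea, not of ELAFCM itself.

```python
def adaptive_grid(f, a, b, tol, max_depth=12):
    """Toy adaptive collocation: recursively insert a midpoint only
    where linear interpolation of the two neighbours misses f(x) by
    more than tol, so points cluster around sharp fronts."""
    pts = {a: f(a), b: f(b)}

    def refine(x0, x1, depth):
        xm = 0.5 * (x0 + x1)
        fm = f(xm)
        # keep the midpoint only if the interpolant is inaccurate there
        if depth < max_depth and abs(fm - 0.5 * (pts[x0] + pts[x1])) > tol:
            pts[xm] = fm
            refine(x0, xm, depth + 1)
            refine(xm, x1, depth + 1)

    refine(a, b, 0)
    return sorted(pts)
```

Run on a steep tanh front, the grid concentrates points near the front and leaves the smooth regions coarse, which is the behaviour the abstract attributes to FCT.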
Facilitating tolerance of delayed reinforcement during functional communication training.
Fisher, W W; Thompson, R H; Hagopian, L P; Bowman, L G; Krug, A
2000-01-01
Few clinical investigations have addressed the problem of delayed reinforcement. In this investigation, three individuals whose destructive behavior was maintained by positive reinforcement were treated using functional communication training (FCT) with extinction (EXT). Next, procedures used in the basic literature on delayed reinforcement and self-control (reinforcer delay fading, punishment of impulsive responding, and provision of an alternative activity during the reinforcer delay) were used to teach participants to tolerate delayed reinforcement. In the first case, reinforcer delay fading alone was effective at maintaining low rates of destructive behavior while delayed reinforcement was introduced. In the second case, the addition of a punishment component reduced destructive behavior to near-zero levels and facilitated reinforcer delay fading. In the third case, reinforcer delay fading was associated with increases in masturbation and head rolling, but prompting and praising the individual for completing work during the delay interval reduced all problem behaviors and facilitated reinforcer delay fading.
On The Computation Of The Best-fit Okada-type Tsunami Source
NASA Astrophysics Data System (ADS)
Miranda, J. M. A.; Luis, J. M. F.; Baptista, M. A.
2017-12-01
The forward simulation of earthquake-induced tsunamis usually assumes that the initial sea surface elevation mimics the co-seismic deformation of the ocean bottom described by a simple "Okada-type" source (a rectangular fault with constant slip in a homogeneous elastic half-space). This approach is highly effective, in particular in far-field conditions. With this assumption, and given a set of tsunami waveforms recorded by deep-sea pressure sensors and/or coastal tide stations, it is possible to deduce the set of parameters of the Okada-type solution that best fits the sea level observations. To do this, we build a "space of possible tsunami sources" (solution space). Each solution consists of a combination of parameters: earthquake magnitude, length, width, slip, depth and angles (strike, rake, and dip). To constrain the number of possible solutions, we use the earthquake parameters defined by seismology and establish a range of possible values for each parameter. We select the "best Okada source" by comparing the results of direct tsunami modeling over the solution space of tsunami sources. However, direct tsunami modeling is a time-consuming process for the whole solution space. To overcome this problem, we use a precomputed database of Empirical Green Functions to compute the tsunami waveforms resulting from unit water sources and search for the one that best matches the observations. In this study, we use as a test case the Solomon Islands tsunami of 6 February 2013, caused by a magnitude 8.0 earthquake. The "best Okada" source is the solution that best matches the tsunami recorded at six DART stations in the area. We discuss the differences between the initial seismic solution and the final one obtained from tsunami data. This publication received funding from the FCT project UID/GEO/50019/2013 - Instituto Dom Luiz.
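The waveform search described above can be sketched as follows. This is a minimal illustration, not the authors' code: candidate sources are represented as weight vectors over precomputed unit-source waveforms (hypothetical arrays), and the best-fit source is the candidate minimizing the root-mean-square misfit against the observed record.

```python
import numpy as np

def best_fit_source(candidate_weights, green_functions, observed):
    """Select the candidate source whose superposition of precomputed
    unit-source waveforms (Green's functions) best matches the record.

    candidate_weights: (n_candidates, n_unit_sources) initial elevations
    green_functions:   (n_unit_sources, n_times) unit-source waveforms
    observed:          (n_times,) recorded waveform
    Returns (index of best candidate, its RMS misfit)."""
    # Linear superposition: one synthetic waveform per candidate source.
    synthetics = candidate_weights @ green_functions
    # Root-mean-square misfit of each synthetic against the observation.
    misfits = np.sqrt(np.mean((synthetics - observed) ** 2, axis=1))
    best = int(np.argmin(misfits))
    return best, misfits[best]
```

In practice one misfit per station is computed and combined; here a single record keeps the sketch short.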
NASA Astrophysics Data System (ADS)
Liberato, M. L. R.; Pinto, J. G.; Gil, V.; Ramos, A. M.; Trigo, R. M.
2017-12-01
Extratropical cyclones dominate autumn and winter weather over Western Europe and particularly over the Iberian Peninsula. Intense, high-impact storms are one of the major weather risks in the region, mostly due to the simultaneous occurrence of high winds and extreme precipitation events. These intense extratropical cyclones may result in windstorm damage, flooding and coastal storm surges, with large societal impacts. In Portugal, due to the extensive human use of coastal areas, the natural and built coastal environments have been amongst the most affected. In this work, several historical winter storms that adversely affected the Western Iberian Peninsula are studied in detail in order to contribute to an improved assessment of the characteristics of these events. The diagnosis has been performed based on instrumental daily precipitation and wind records, on satellite images, on reanalysis data and through model simulations. For several examples, the synoptic evolution and an upper-level dynamics analysis of the physical processes controlling the life cycle of the extratropical storms associated with the triggering of the considered extreme events have also been accomplished. Furthermore, the space-time variability of the exceptionally severe storms affecting Western Iberia over the last century and under three climate scenarios (the historical simulation and the RCP4.5 and RCP8.5 scenarios) is presented. These studies contribute to improving the knowledge of the atmospheric dynamics controlling the life cycle of midlatitude storms associated with severe weather (precipitation and wind) in the Iberian Peninsula. Acknowledgements: This work is supported by the Portuguese Foundation for Science and Technology (FCT), Portugal, through project UID/GEO/50019/2013 - Instituto Dom Luiz. A. M. Ramos is also supported by an FCT postdoctoral grant (FCT/DFRH/SFRH/BPD/84328/2012).
An Elementary Proof of a Converse Mean-Value Theorem
ERIC Educational Resources Information Center
Almeida, Ricardo
2008-01-01
We present a new converse mean value theorem, with a rather elementary proof. [The work was supported by the Centre for Research on Optimization and Control (CEOC) of the "Fundação para a Ciência e a Tecnologia" (FCT), co-financed by the European Community Fund FEDER/POCTI.]
NASA Astrophysics Data System (ADS)
Cornillon, L.; Devilliers, C.; Behar-Lafenetre, S.; Ait-Zaid, S.; Berroth, K.; Bravo, A. C.
2017-11-01
Dealing with ceramic materials for more than two decades, Thales Alenia Space - France has identified silicon nitride (Si3N4) as a high-potential material for manufacturing stiff, stable and lightweight truss structures for future large telescopes. Indeed, for Earth observation and astronomical observation, space missions increasingly require telescopes with high spatial resolution, which leads to the use of large primary mirrors and a long distance between the primary and secondary mirrors. Current and future large space telescopes therefore require a large truss structure to hold and precisely locate the mirrors. Such a large structure requires very strong materials with high specific stiffness and a low coefficient of thermal expansion (CTE). Based on the performance of silicon nitride and on the know-how of FCT Ingenieurkeramik in manufacturing complex parts, Thales Alenia Space (TAS) has undertaken, in cooperation with FCT, activities to develop and qualify silicon nitride parts for other space-project applications.
Correlative cryogenic tomography of cells using light and soft x-rays
Smith, Elizabeth A.; Cinquin, Bertrand P.; Do, Myan; McDermott, Gerry; Le Gros, Mark A.; Larabell, Carolyn A.
2013-01-01
Correlated imaging is the process of imaging a specimen with two complementary modalities, and then combining the two data sets to create a highly informative, composite view. A recent implementation of this concept has been the combination of soft x-ray tomography (SXT) with fluorescence cryogenic microscopy (FCM). SXT-FCM is used to visualize cells that are held in a near-native, cryo-preserved state. The resultant images are, therefore, highly representative of both the cellular architecture and molecular organization in vivo. SXT quantitatively visualizes the cell and sub-cellular structures; FCM images the spatial distribution of fluorescently labeled molecules. Here, we review the characteristics of SXT-FCM, and briefly discuss how this method compares with existing correlative imaging techniques. We also describe how the incorporation of a cryo-rotation stage into a cryogenic fluorescence microscope allows acquisition of fluorescence cryogenic tomography (FCT) data. FCT is optimally suited to correlation with SXT, since both techniques image the specimen in 3-D, potentially with similar, isotropic spatial resolution. PMID:24355261
NASA Astrophysics Data System (ADS)
Reis, C.; Clain, S.; Figueiredo, J.; Baptista, M. A.; Miranda, J. M. A.
2015-12-01
Numerical tools are very important for scenario evaluations of hazardous phenomena such as tsunamis. Nevertheless, the predictions depend strongly on the quality of the numerical tool, and the design of efficient numerical schemes still receives considerable attention in order to provide robust and accurate solutions. In this study we propose a comparative study of the efficiency of two finite volume numerical codes with second-order discretization, implemented with different methods to solve the non-conservative shallow water equations: the MUSCL (Monotonic Upstream-Centered Scheme for Conservation Laws) and MOOD (Multi-dimensional Optimal Order Detection) methods, the latter of which optimizes the accuracy of the approximation as a function of the local smoothness of the solution. MUSCL is based on a priori criteria, where the limiting procedure is performed before updating the solution to the next time step, which can lead to unnecessary accuracy reduction. By contrast, the new MOOD technique uses a posteriori detectors to prevent the solution from oscillating in the vicinity of discontinuities. A candidate solution is computed, and corrections are performed only for the cells where non-physical oscillations are detected. Using a simple one-dimensional analytical benchmark, 'Single wave on a sloping beach', we show that the classical 1D shallow-water system can be accurately solved with the finite volume method equipped with the MOOD technique, which provides a better approximation with sharper shocks and less numerical diffusion. For code validation, we also use the Tohoku-Oki 2011 tsunami and reproduce two DART records, demonstrating that the quality of the solution may deeply affect the scenario one can assess. This work is funded by the Portugal-France research agreement, through the research project GEONUM FCT-ANR/MAT-NAN/0122/2012.
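The a posteriori MOOD strategy (compute a high-order candidate, detect troubled cells, recompute only those with a robust scheme) can be sketched for 1D linear advection. This is a toy illustration under assumptions not in the abstract: Lax-Wendroff as the high-order scheme, first-order upwind as the fallback, and a local discrete-maximum-principle detector. A per-cell fallback like this sacrifices the strict flux-level conservation of a production MOOD implementation.

```python
import numpy as np

def mood_step(u, c):
    """One MOOD-style update for 1D linear advection u_t + a u_x = 0
    on a periodic grid, with Courant number c = a*dt/dx in (0, 1]."""
    up, um = np.roll(u, -1), np.roll(u, 1)   # neighbors u[i+1], u[i-1]
    # High-order candidate: unlimited Lax-Wendroff (oscillates at jumps).
    candidate = u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2 * u + um)
    # Local bounds from the previous time step (discrete maximum principle).
    lo = np.minimum(np.minimum(u, up), um)
    hi = np.maximum(np.maximum(u, up), um)
    # A posteriori detector: flag cells where the candidate is non-physical.
    bad = (candidate < lo - 1e-12) | (candidate > hi + 1e-12)
    # Robust fallback: first-order upwind, a convex combination of u, u[i-1].
    fallback = u - c * (u - um)
    return np.where(bad, fallback, candidate)
```

Only the flagged cells lose accuracy; smooth regions keep the second-order update.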
Software Design Description for the HYbrid Coordinate Ocean Model (HYCOM), Version 2.2
2009-02-12
Recoverable fragments of the report's reference and acronym lists: Carnes, M. (2002), Database description for the Generalized Digital Environmental Model (GDEM-V); acronyms: FCT, Flux-Corrected Transport scheme; GDEM, Generalized Digital Environmental Model; GISS, NASA Goddard Institute for Space Studies.
NASA Technical Reports Server (NTRS)
Murakawa, M. (Editor); Miyoshi, K. (Editor); Koga, Y. (Editor); Schaefer, L. (Editor); Tzeng, Y. (Editor)
2003-01-01
This document contains 2 reports which were presented at the Seventh Applied Diamond Conference/Third Frontier Carbon Technology Joint Conference. The topics discuss the formation of C-N nanofibers as well as the characterization of diamond thin films.
NASA Astrophysics Data System (ADS)
Anderson, R.; Dobrev, V.; Kolev, Tz.; Kuzmin, D.; Quezada de Luna, M.; Rieben, R.; Tomov, V.
2017-04-01
In this work we present an FCT-like Maximum-Principle-Preserving (MPP) method to solve the transport equation. We use high-order polynomial spaces; in particular, we consider spaces up to 5th order in two and three dimensions and up to 23rd order in one dimension. The method combines the concepts of positive basis functions for discontinuous Galerkin finite element spatial discretization, locally defined solution bounds, element-based flux correction, and non-linear local mass redistribution. We consider a simple 1D problem with non-smooth initial data to explain and understand the behavior of the different parts of the method. Convergence tests in space indicate that high-order accuracy is achieved. Numerical results from several benchmarks in two and three dimensions are also reported.
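One ingredient named above, enforcing locally defined solution bounds together with mass redistribution, can be sketched in a simplified, global form. This is a toy version under assumptions not in the abstract: the paper's method works element-by-element with flux correction, whereas here clipped mass is redistributed over the whole array, assuming enough room remains to absorb it.

```python
import numpy as np

def clip_and_redistribute(u, lo, hi):
    """Enforce lo <= u <= hi cell-wise, then redistribute the mass removed
    by clipping among cells with remaining room, so the total is conserved."""
    clipped = np.clip(u, lo, hi)
    excess = (u - clipped).sum()          # net mass removed by clipping
    # Capacity of each cell to absorb the excess without leaving the bounds.
    room = (hi - clipped) if excess > 0 else (clipped - lo)
    total = room.sum()
    if total > 0:
        # Proportional redistribution; valid while |excess| <= total.
        clipped = clipped + excess * room / total
    return clipped
```

The redistribution weight is proportional to each cell's slack, so no cell is pushed back outside its bounds as long as the excess fits.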
Buckley, Scott D; Newchok, Debra K
2005-01-01
We investigated the effects of response effort on the use of mands during functional communication training (FCT) in a participant with autism. The number of links in a picture exchange response chain determined two levels of response effort. Each level was paired with a fixed ratio (FR3) schedule of reinforcement for aggression in a reversal design. Responding to either schedule produced access to a preferred item. The participant opted for the low effort mand while aggression decreased significantly. However, the high effort mand did not compete with the FR3 schedule for aggression. Results are discussed in terms of response effort within a response chain of a picture exchange system and competing ratio schedules for problem behavior during mand training.
NASA Technical Reports Server (NTRS)
Parikh, Paresh; Pirzadeh, Shahyar; Loehner, Rainald
1990-01-01
A set of computer programs for 3-D unstructured grid generation, fluid flow calculations, and flow field visualization was developed. The grid generation program, called VGRID3D, generates grids over complex configurations using the advancing front method, in which point and element generation are accomplished simultaneously. VPLOT3D is an interactive, menu-driven pre- and post-processor graphics program for interpolation and display of unstructured grid data. The flow solver, VFLOW3D, is an Euler equation solver based on an explicit, two-step Taylor-Galerkin algorithm which uses the Flux-Corrected Transport (FCT) concept for a wiggle-free solution. Using these programs, increasingly complex 3-D configurations of interest to the aerospace community were gridded, including a complete Space Transportation System comprised of the Space Shuttle orbiter, the solid rocket boosters, and the external tank. Flow solutions were obtained for various configurations in subsonic, transonic, and supersonic flow regimes.
Pesticides in Ground Water Database A Compilation of ...
CoPt/TiN films nanopatterned by RF plasma etching towards dot-patterned magnetic media
NASA Astrophysics Data System (ADS)
Szívós, János; Pothorszky, Szilárd; Soltys, Jan; Serényi, Miklós; An, Hongyu; Gao, Tenghua; Deák, András; Shi, Ji; Sáfrán, György
2018-03-01
CoPt thin films, possible candidates for Bit Patterned magnetic Media (BPM), were prepared and investigated by electron microscopy techniques and magnetic measurements. The structure and morphology of the Direct Current (DC) sputtered films with N incorporation were revealed in both the as-prepared and the annealed state. Nanopatterning of the samples was carried out by Radio Frequency (RF) plasma etching through a Langmuir-Blodgett film of silica nanospheres, which is a fast, high-throughput technique. As a result, samples with hexagonally arranged, separated dots of fct-phase CoPt about 100 nm in size were obtained. The influence of the order of nanopatterning and annealing on the nanostructure formation was revealed. The magnetic properties of the nanopatterned fct CoPt films were investigated by Vibrating Sample Magnetometry (VSM) and Magnetic Force Microscopy (MFM). The results show that CoPt thin film nanopatterned by the RF plasma etching technique is a promising candidate for a possible realization of BPM. Furthermore, this technique is versatile and suitable for scaling up to technological and industrial applications.
Correlative cryogenic tomography of cells using light and soft x-rays.
Smith, Elizabeth A; Cinquin, Bertrand P; Do, Myan; McDermott, Gerry; Le Gros, Mark A; Larabell, Carolyn A
2014-08-01
Correlated imaging is the process of imaging a specimen with two complementary modalities, and then combining the two data sets to create a highly informative, composite view. A recent implementation of this concept has been the combination of soft x-ray tomography (SXT) with fluorescence cryogenic microscopy (FCM). SXT-FCM is used to visualize cells that are held in a near-native, cryopreserved state. The resultant images are, therefore, highly representative of both the cellular architecture and molecular organization in vivo. SXT quantitatively visualizes the cell and sub-cellular structures; FCM images the spatial distribution of fluorescently labeled molecules. Here, we review the characteristics of SXT-FCM, and briefly discuss how this method compares with existing correlative imaging techniques. We also describe how the incorporation of a cryo-rotation stage into a cryogenic fluorescence microscope allows acquisition of fluorescence cryogenic tomography (FCT) data. FCT is optimally suited for correlation with SXT, since both techniques image the specimen in 3-D, potentially with similar, isotropic spatial resolution. © 2013 Elsevier B.V. All rights reserved.
Kreikemeyer, Bernd; Nakata, Masanobu; Köller, Thomas; Hildisch, Hendrikje; Kourakos, Vassilios; Standar, Kerstin; Kawabata, Shigetada; Glocker, Michael O; Podbielski, Andreas
2007-12-01
Many Streptococcus pyogenes (group A streptococcus [GAS]) virulence factor- and transcriptional regulator-encoding genes cluster together in discrete genomic regions. Nra is a central regulator of the FCT region. Previous studies exclusively described Nra as a transcriptional repressor of adhesin and toxin genes. Here transcriptome and proteome analysis of a serotype M49 GAS strain and an isogenic Nra mutant of this strain revealed the complete Nra regulon profile. Nra is active in all growth phases tested, with the largest regulon in the transition phase. Almost exclusively, virulence factor-encoding genes are repressed by Nra; these genes include the GAS pilus operon, the capsule synthesis operon, the cytolysin-mediated translocation system genes, all Mga region core virulence genes, and genes encoding other regulators, like the Ihk/Irr system, Rgg, and two additional RofA-like protein family regulators. Surprisingly, our experiments revealed that Nra additionally acts as a positive regulator, mostly for genes encoding proteins and enzymes with metabolic functions. Epidemiological investigations revealed strong genetic linkage of one particular Nra-repressed regulator, Ralp3 (SPy0735), with a gene encoding Epf (extracellular protein factor from Streptococcus suis). In a serotype-specific fashion, this ralp3 epf gene block is integrated, most likely via transposition, into the eno sagA virulence gene block, which is present in all GAS serotypes. In GAS serotypes M1, M4, M12, M28, and M49 this novel discrete genetic region is therefore designated the eno ralp3 epf sagA (ERES) pathogenicity region. Functional experiments showed that Epf is a novel GAS plasminogen-binding protein and revealed that Ralp3 activity counteracts Nra and MsmR regulatory activity. In addition to the Mga and FCT regions, the ERES region is the third discrete chromosomal pathogenicity region. 
All of these regions are transcriptionally linked, adding another level of complexity to the known GAS growth phase-dependent regulatory network.
Kreikemeyer, Bernd; Nakata, Masanobu; Köller, Thomas; Hildisch, Hendrikje; Kourakos, Vassilios; Standar, Kerstin; Kawabata, Shigetada; Glocker, Michael O.; Podbielski, Andreas
2007-01-01
Many Streptococcus pyogenes (group A streptococcus [GAS]) virulence factor- and transcriptional regulator-encoding genes cluster together in discrete genomic regions. Nra is a central regulator of the FCT region. Previous studies exclusively described Nra as a transcriptional repressor of adhesin and toxin genes. Here transcriptome and proteome analysis of a serotype M49 GAS strain and an isogenic Nra mutant of this strain revealed the complete Nra regulon profile. Nra is active in all growth phases tested, with the largest regulon in the transition phase. Almost exclusively, virulence factor-encoding genes are repressed by Nra; these genes include the GAS pilus operon, the capsule synthesis operon, the cytolysin-mediated translocation system genes, all Mga region core virulence genes, and genes encoding other regulators, like the Ihk/Irr system, Rgg, and two additional RofA-like protein family regulators. Surprisingly, our experiments revealed that Nra additionally acts as a positive regulator, mostly for genes encoding proteins and enzymes with metabolic functions. Epidemiological investigations revealed strong genetic linkage of one particular Nra-repressed regulator, Ralp3 (SPy0735), with a gene encoding Epf (extracellular protein factor from Streptococcus suis). In a serotype-specific fashion, this ralp3 epf gene block is integrated, most likely via transposition, into the eno sagA virulence gene block, which is present in all GAS serotypes. In GAS serotypes M1, M4, M12, M28, and M49 this novel discrete genetic region is therefore designated the eno ralp3 epf sagA (ERES) pathogenicity region. Functional experiments showed that Epf is a novel GAS plasminogen-binding protein and revealed that Ralp3 activity counteracts Nra and MsmR regulatory activity. In addition to the Mga and FCT regions, the ERES region is the third discrete chromosomal pathogenicity region. 
All of these regions are transcriptionally linked, adding another level of complexity to the known GAS growth phase-dependent regulatory network. PMID:17893125
A Comparative Test of Metallicity Calibrations for M dwarfs
NASA Astrophysics Data System (ADS)
Neves, Vasco; Bonfils, X.; Santos, N. C.
2011-09-01
The determination of the stellar parameters of M dwarfs is of prime importance in the fields of galactic, stellar and planetary astronomy. M stars are the least studied galactic component with regard to their fundamental parameters, yet they are the most numerous stars in the galaxy and contribute more than half of its total (baryonic) mass. In particular, we are interested in their metallicity in order to study the star-planet connection and to refine planetary parameters. Here we present a comparative test of five metallicity calibrations of M dwarfs proposed in the literature. Our test sample is made of 22 M dwarfs, companions of widely separated (> 5 arcsec) F, G or K dwarfs with known or newly measured metallicity. We included only M dwarfs with reliable V photometry, restricting our sample to stars with V uncertainty lower than ˜0.02 mag. Among all calibrations, we find that Schlaufman & Laughlin (2010) provides the lowest offset and residuals against our sample and, ultimately, we used that larger sample to update and marginally improve their calibration. Despite better V photometry than that used in previous studies, the dispersion remains largely in excess of the [Fe/H] and photometric uncertainties, suggesting it has physical roots. Finally, we also present preliminary work on a new, high-precision spectroscopic calibration involving the direct measurement of high-resolution spectra of M dwarfs. This work is supported by the European Research Council/European Community under the FP7 through Starting Grant agreement number 239953. NCS also acknowledges support from Fundação para a Ciência e a Tecnologia (FCT) through program Ciência 2007 funded by FCT/MCTES (Portugal) and POPH/FSE (EC), and in the form of grant reference PTDC/CTE-AST/098528/2008. VN would also like to acknowledge support from FCT in the form of fellowship SFRH/BD/60688/2009.
Characterization of Viscoelastic Materials Through an Active Mixer by Direct-Ink Writing
NASA Astrophysics Data System (ADS)
Drake, Eric
The goal of this thesis is two-fold: first, to determine the mixing effectiveness of an active mixer attachment to a three-dimensional (3D) printer by characterizing actively mixed, three-dimensionally printed silicone elastomers; second, to understand the mechanical properties of a printed lattice structure with varying geometry and composition. Ober et al. define mixing effectiveness as a measurable quantity characterized by two key variables: (i) a dimensionless impeller parameter (O) that depends on mixer geometry as well as the Peclet number (Pe) and (ii) a coefficient of variation (COV) that describes mixer effectiveness based upon image intensity. The first objective utilizes tungsten tracer particles distributed throughout a batch of Dow Corning SE1700 (a two-part silicone), ink "A"; ink "B" is made from pure SE1700. Using the in-situ active mixer, inks "A" and "B" coalesce to form a hybrid ink just before extrusion. Two samples with varying mixer speeds and composition ratios are printed and analyzed by micro-computed tomography (MicroCT). A continuous stirred tank reactor (CSTR) model is applied to better understand mixing behavior. Results are then compared with computer models to verify the hypothesis. The data suggest good mixing for the sample with the higher impeller speed. A Radial Distribution Function (RDF) macro is used to provide further qualitative analysis of mixing efficiency. The second objective of this thesis utilized three-dimensionally printed samples of varying geometry and composition to ascertain mechanical properties. Samples were printed using SE1700 provided by Lawrence Livermore National Laboratory with a face-centered tetragonal (FCT) structure. Hardness testing is conducted using a Shore OO durometer guided by a computer-controlled, three-axis translation stage to provide precise movements. Data are collected across an 'x-y' plane of the specimen.
To explain the data, a simply supported beam model is applied to a single unit cell, which yields basic structural behavior per cell. Characterizing the sample as a whole requires a more rigorous approach, and non-trivial complexities due to the varying geometries and compositions exist. The data demonstrate a uniform change in hardness as a function of position. Additionally, the data indicate periodicities in the lattice structure.
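The intensity-based COV metric mentioned above can be sketched as the ratio of the standard deviation to the mean of the image (or MicroCT voxel) intensities. This is a generic formulation, not the thesis's exact definition:

```python
import numpy as np

def mixing_cov(intensities):
    """Coefficient of variation of intensity values: std / mean.
    Lower values indicate a more homogeneous, better-mixed sample;
    0 means perfectly uniform intensity."""
    arr = np.asarray(intensities, dtype=float)
    return arr.std() / arr.mean()
```

Applied to tracer-particle images, a well-mixed print should drive this ratio toward zero as the tracer distribution homogenizes.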
Design Management, Learning and Innovation: Results from a Portuguese Online Questionnaire
ERIC Educational Resources Information Center
Monteiro Barata, José M.
2013-01-01
This paper is an output of a Portuguese public research project: "DeSid"--"design as a company's strategic resource: a study of the impacts of design" (FCT). The "DeSid" research project was created with the main purpose to make a diagnosis of the use of design inside the Portuguese manufacturing industry. This paper…
A comparative study of advanced shock-capturing schemes applied to Burgers' equation
NASA Technical Reports Server (NTRS)
Yang, H. Q.; Przekwas, A. J.
1990-01-01
Several variations of the TVD scheme, ENO scheme, FCT scheme, and geometrical schemes, such as MUSCL and PPM, are considered. A comparative study of these schemes as applied to the Burgers' equation is presented. The objective is to assess their performance for problems involving formation and propagation of shocks, shock collisions, and expansion of discontinuities.
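A minimal example of one of the compared ingredients, a MUSCL-type scheme with a minmod limiter applied to inviscid Burgers' equation, might look like this. It is a sketch under stated assumptions (non-negative data and a periodic grid, so the upwind flux simplifies), not the paper's code:

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: zero at extrema, the smaller slope elsewhere."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def burgers_step(u, dt, dx):
    """One MUSCL-type finite-volume step for inviscid Burgers' equation
    u_t + (u^2/2)_x = 0 on a periodic grid, assuming u >= 0 so the
    upwind interface state is the left cell's reconstructed face value."""
    # Limited slopes from backward and forward differences.
    s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    left = u + 0.5 * s          # reconstructed state at each cell's right face
    flux = 0.5 * left ** 2      # upwind flux at face i+1/2 (valid for u >= 0)
    # Conservative update: flux difference across each cell.
    return u - dt / dx * (flux - np.roll(flux, 1))
```

Swapping `minmod` for another limiter, or replacing the reconstruction entirely, is exactly the kind of variation such comparative studies measure against shock sharpness and diffusion.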
ERIC Educational Resources Information Center
Ali, F. A. Farah; Aliyu, Umar Yanda
2015-01-01
The present study examined the use of social networking among senior secondary school students in Abuja Municipal Area Council of FCT. The study employed quantitative method for data collection involving questionnaire administration. Fifteen questions with Likert model and ten yes/no responses in a questionnaire were personally administered to 400…
Genome-wide association study of the four-constitution medicine.
Yin, Chang Shik; Park, Hi Joon; Chung, Joo-Ho; Lee, Hye-Jung; Lee, Byung-Cheol
2009-12-01
Four-constitution medicine (FCM), also known as Sasang constitutional medicine and heir to a long tradition of individualized acupuncture medicine, is a holistic, traditional system of constitution that appraises and categorizes individual differences into four major types. This study reports the first genome-wide association study on FCM, undertaken to explore the genetic basis of FCM and to facilitate its integration with conventional research on individual differences. Healthy individuals of the Korean population were classified into the four constitutional types (FCTs). A total of 353,202 single nucleotide polymorphisms (SNPs) were typed using whole-genome-amplified samples, and six-way comparisons of FCM types provided lists of significantly differential SNPs. In one-to-one FCT comparisons, 15,944 SNPs were significantly differential, and 5 SNPs were commonly significant in all three comparisons. In one-to-two FCT comparisons, 22,616 SNPs were significantly differential, and 20 SNPs were commonly significant in all three comparison groups. This study presents the association between genome-wide SNP profiles and the categorization of FCM, and it could provide a starting point for genome-based identification and research of the constitutions of FCM.
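In their simplest allele-count form, per-SNP group comparisons like those described above reduce to a chi-square test on a 2x2 contingency table. A generic sketch, not the study's actual pipeline (which used dedicated GWAS software):

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table of allele counts
    [[a, b], [c, d]], e.g. minor/major allele counts in two constitution
    groups.  Large values indicate differential allele frequencies."""
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    if denom == 0:
        return 0.0  # degenerate table: a whole row or column is empty
    # Closed-form Pearson statistic for the 2x2 case.
    return n * (a * d - b * c) ** 2 / denom
```

The statistic is compared against the chi-square distribution with one degree of freedom; in a genome-wide scan the threshold must also be corrected for the number of SNPs tested.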
The US Army Foreign Comparative Test fuel cell program
NASA Astrophysics Data System (ADS)
Bostic, Elizabeth; Sifer, Nicholas; Bolton, Christopher; Ritter, Uli; Dubois, Terry
The US Army RDECOM initiated a Foreign Comparative Test (FCT) Program to acquire lightweight, energy-dense fuel cell systems from across the globe for evaluation as portable power sources in military applications. Five foreign companies, including NovArs, Smart Fuel Cell, Intelligent Energy, Ballard Power Systems, and Hydrogenics, Inc., were awarded competitive contracts under the RDECOM effort. This paper reports on the status of the program as well as the experimental results obtained from one of the units. The US Army has an interest in evaluating and deploying a variety of fuel cell systems where these systems show added value compared to the power sources currently in use. For low-power applications, fuel cells utilizing energy-dense fuels offer significant weight savings over current battery technologies, helping to reduce the load a soldier must carry on longer missions. For high-power applications, the low operating signatures (acoustic and thermal) of fuel cell systems make them ideal power generators in stealth operations. Recent testing has been completed on the Smart Fuel Cell A25 system procured through the FCT program. The A25 is a direct methanol fuel cell hybrid and was evaluated as a potential candidate for soldier and sensor power applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Norman, Matthew R
2014-01-01
The novel ADER-DT time discretization is applied to two-dimensional transport in a quadrature-free, WENO- and FCT-limited, Finite-Volume context. Emphasis is placed on (1) the serial and parallel computational properties of ADER-DT and this framework and (2) the flexibility of ADER-DT and this framework in efficiently balancing accuracy with other constraints important to transport applications. This study demonstrates a range of choices for the user when approaching their specific application while maintaining good parallel properties. In this method, genuine multi-dimensionality, single-step and single-stage time stepping, strict positivity, and a flexible range of limiting are all achieved with only one parallel synchronization and data exchange per time step. In terms of parallel data transfers per simulated time interval, this improves upon multi-stage time stepping and post-hoc filtering techniques such as hyperdiffusion. This method is evaluated with standard transport test cases over a range of limiting options to demonstrate quantitatively and qualitatively what a user should expect when employing this method in their application.
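For readers unfamiliar with FCT limiting, a minimal one-dimensional sketch can show the basic idea: a monotone low-order update plus a limited antidiffusive correction that may not create new extrema. This is the classic Boris-Book form for periodic constant-velocity advection, not the paper's quadrature-free, WENO-limited ADER-DT scheme; all names here are illustrative.

```python
import numpy as np

def fct_advect_step(q, nu):
    """One flux-corrected transport (FCT) step for 1-D periodic advection
    with positive velocity and Courant number nu in (0, 1]."""
    qp1 = np.roll(q, -1)                       # q[i+1]
    # Low-order (upwind) flux through face i+1/2, and the transported solution
    f_low = nu * q
    q_td = q - (f_low - np.roll(f_low, 1))
    # Antidiffusive flux: high-order (Lax-Wendroff) minus upwind
    f_high = 0.5 * nu * (q + qp1) - 0.5 * nu ** 2 * (qp1 - q)
    a = f_high - f_low
    # Boris-Book limiter: corrected flux may not create new extrema in q_td
    d = np.roll(q_td, -1) - q_td               # difference across face i+1/2
    s = np.sign(a)
    a_lim = s * np.maximum(
        0.0,
        np.minimum(np.minimum(np.abs(a), s * np.roll(d, -1)), s * np.roll(d, 1)),
    )
    return q_td - (a_lim - np.roll(a_lim, 1))
```

Because the update is written entirely in flux-difference form, the scheme is exactly conservative, and the limiter keeps a transported step function within its initial bounds, which is the property the abstract refers to as strict positivity-style limiting.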
Music and Drama in Primary Schools in the Madeira Island--Narratives of Ownership and Leadership
ERIC Educational Resources Information Center
Mota, Graça; Araújo, Maria Jose
2013-01-01
A three-year case study funded by the Foundation for Science and Technology (FCT) of the Portuguese Ministry of Science, Technology and Higher Education was designed to examine a 30-year project of music and drama in primary schools in Madeira. This article reports on the narratives of the three main figures in the project as they elaborate on its…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Appel, Gordon John
This report is the milestone deliverable M4FT-17SN111102091 “Summary of Assessments Performed FY17 by SNL QA POC” for work package FT-17SN11110209 titled “Quality Assurance – SNL”. This report summarizes the FY17 assessment performed on Fuel Cycle Technologies / Spent Fuel and Waste Disposition efforts.
Moisture sources of the Atmospheric Rivers making landfall in western Europe
NASA Astrophysics Data System (ADS)
Trigo, Ricardo M.; Ramos, Alexandre M.; Nieto, Raquel; Tomé, Ricardo; Gimeno, Luis; Liberato, Margarida L. R.; Lavers, David A.
2017-04-01
An automated atmospheric river (AR) detection algorithm is used for the North Atlantic Ocean basin, allowing the identification of the major ARs affecting western European coasts between 1979 and 2012. The entire western coast of Europe was divided into five domains, namely the Iberian Peninsula (9.75W, 36-43.75N), France (4.5W, 43.75-50N), UK (4.5W, 50-59N), southern Scandinavia and the Netherlands (5.25E, 50-59N), and northern Scandinavia (5.25E, 59-70N). Following the identification of the main ARs that made landfall in western Europe, a Lagrangian analysis was then applied in order to identify the main areas where the moisture uptake was anomalous and contributed to the ARs reaching each domain. The Lagrangian data set used was obtained from the FLEXPART model global simulation from 1979 to 2012. The results show that, in general, for all regions considered, the major climatological areas for the anomalous moisture uptake extend along the subtropical North Atlantic, from the Florida Peninsula (northward of 20N) to each sink region, with the nearest coast to each sink region always appearing as a local maximum. In addition, during AR events the Atlantic subtropical source is reinforced and displaced, with a slight northward movement of the sources found when the sink region is positioned at higher latitudes. In conclusion, the results confirm not only the anomalous advection of moisture linked to ARs from subtropical ocean areas but also the existence of a tropical source, together with midlatitude anomaly sources at some locations closer to AR landfalls (Ramos et al., 2016). References: Ramos et al., (2016) Atmospheric rivers moisture sources from a Lagrangian perspective, Earth Syst. Dynam., 7, 371-384. 
Acknowledgements This work was supported by the project IMDROFLOOD - Improving Drought and Flood Early Warning, Forecasting and Mitigation using real-time hydroclimatic indicators (WaterJPI/0004/2014) funded by Fundação para a Ciência e a Tecnologia, Portugal (FCT). A. M. Ramos was supported by a FCT postdoctoral grant (FCT/DFRH/SFRH/BPD/84328/2012). Raquel Nieto acknowledges the support of the Xunta de Galicia, Spain, through the THIS (EM2014/043) project.
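Automated AR detection algorithms of the kind described above typically threshold the vertically integrated water vapour transport (IVT). As a hedged illustration only, and not the authors' actual implementation, the IVT magnitude at one grid column could be computed from humidity and wind profiles on pressure levels like this (function name and threshold are illustrative):

```python
import numpy as np

G = 9.81  # gravitational acceleration, m s^-2

def ivt_magnitude(q, u, v, p):
    """Magnitude of vertically integrated water vapour transport
    (kg m^-1 s^-1), the quantity AR detection commonly thresholds.
    q: specific humidity (kg/kg); u, v: winds (m/s); p: pressure levels (Pa),
    ordered from the surface upward (decreasing p). Trapezoidal integration."""
    q, u, v, p = map(np.asarray, (q, u, v, p))
    dp = -np.diff(p)                    # positive layer thicknesses
    fu, fv = q * u, q * v
    qu = np.sum(0.5 * (fu[:-1] + fu[1:]) * dp) / G
    qv = np.sum(0.5 * (fv[:-1] + fv[1:]) * dp) / G
    return float(np.hypot(qu, qv))
```

A detection scheme would then flag long, narrow corridors of grid columns whose IVT exceeds a chosen threshold (values around 250 kg m⁻¹ s⁻¹ are common in the literature, though the threshold used by this particular algorithm is not stated in the abstract).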
Scenarios of atmospheric rivers affecting Western Europe during the XXI Century
NASA Astrophysics Data System (ADS)
Trigo, Ricardo M.; Ramos, Alexandre M.; Tomé, Ricardo; Liberato, Margarida L. R.; Pinto, Joaquim G.
2017-04-01
Extreme precipitation events in Europe during the winter half of the year have major socio-economic impacts associated with floods, landslides, extensive property damage and life losses. In recent years, a number of works have shed new light on the role played by Atmospheric Rivers (ARs) in the occurrence of extreme precipitation events in Europe, as was the case in major historical floods of the Duero (Pereira et al., 2016) and Tagus (Trigo et al., 2015) rivers in Iberia. We analyse ARs reaching Europe during the extended winter months (October to March) in simulations from six CMIP5 global climate models (GCMs) to quantify possible changes during the current century, with emphasis on five AR-prone western European coastal areas. ARs are represented reasonably well in GCMs for recent climate conditions (1980-2005). Increased vertically integrated horizontal water transport is found for 2074-2099 (RCP4.5 and RCP8.5) compared to 1980-2005, while the number of ARs is projected to double on average for the same period. These changes are robust across models and are associated with higher air temperatures, and thus enhanced atmospheric moisture content, together with higher precipitation associated with extra-tropical cyclones. This suggests an increased risk of intense precipitation and floods along the Atlantic European coasts from the Iberian Peninsula to Scandinavia (Ramos et al., 2016). References: Pereira et al. (2016) Spatial impact and triggering conditions of the exceptional hydro-geomorphological event of December 1909 in Iberia, Nat. Hazards Earth Syst. Sci., 16, 371-390. Ramos et al. (2016) Projected changes in atmospheric rivers affecting Europe in CMIP5 models, Geophys. Res. Lett., 43, 9315-9323. Trigo et al. (2015) The record precipitation and flood event in Iberia in December 1876: description and synoptic analysis, Front. Earth Sci. 2:3.
Acknowledgements This work was supported by the project FORLAND - Hydrogeomorphologic risk in Portugal: driving forces and application for land use planning (PTDC/ATPGEO/1660/2014) funded by Fundação para a Ciência e a Tecnologia, Portugal (FCT). A. M. Ramos was also supported by a FCT postdoctoral grant (FCT/DFRH/SFRH/BPD/84328/2012).
El Ghannudi, Soraya; Ohlmann, Patrick; Meyer, Nicolas; Wiesel, Marie-Louise; Radulescu, Bogdan; Chauvin, Michel; Bareiss, Pierre; Gachet, Christian; Morel, Olivier
2010-06-01
The aim of this study was to determine whether low platelet response to the P2Y(12) receptor antagonist clopidogrel, as assessed by the vasodilator-stimulated phosphoprotein flow cytometry test (VASP-FCT), predicts cardiovascular events in a high-risk population undergoing percutaneous coronary intervention (PCI). Impaired platelet responsiveness to clopidogrel is thought to be a determinant of cardiovascular events after PCI. The platelet VASP-FCT is a new assay specific to the P2Y(12) adenosine diphosphate receptor pathway. In this test, platelet activation is expressed as the platelet reactivity index (PRI). Four hundred sixty-one unselected patients undergoing urgent (n = 346) or planned (n = 115) PCI were prospectively enrolled. Patients were classified as low responders (LR) or responders (R) to clopidogrel, depending on their PRI. The optimal PRI cutoff was determined by receiver-operator characteristic curve analysis to be 61% (LR: PRI ≥61%; R: PRI <61%). Follow-up was obtained at a mean of 9 ± 2 months in 453 patients (98.3%). At follow-up, total cardiac mortality rates and possible and total stent thrombosis rates were higher in LR patients. Multivariate analysis identified creatinine clearance (hazard ratio [HR]: 0.95; 95% confidence interval [CI]: 0.93 to 0.98, p < 0.001), drug-eluting stent (HR: 5.73; 95% CI: 1.40 to 23.43, p = 0.015), C-reactive protein (HR: 1.01; 95% CI: 1.001 to 1.019, p = 0.024), and LR to clopidogrel (HR: 4.00; 95% CI: 1.08 to 14.80, p = 0.037) as independent predictors of cardiac death. The deleterious impact of LR to clopidogrel on cardiovascular death was significantly higher in patients implanted with a drug-eluting stent. In patients undergoing PCI, LR to clopidogrel assessed by VASP-FCT is an independent predictor of cardiovascular death at the PRI cutoff value of ≥61%. The clinical impact of LR appears to depend on the type of stent implanted. Copyright 2010 American College of Cardiology Foundation. Published by Elsevier Inc.
All rights reserved.
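The abstract does not state which receiver-operator characteristic criterion fixed the 61% cutoff; one common choice is Youden's J statistic (sensitivity + specificity - 1). Purely as an illustrative sketch, with invented data and function names:

```python
def youden_cutoff(values, outcomes):
    """Pick the cutoff maximizing Youden's J = sensitivity + specificity - 1,
    one common ROC criterion. values: a marker such as PRI (%);
    outcomes: 1 if the adverse event occurred. Patients at or above the
    cutoff are classified as test-positive (here, low responders)."""
    best_j, best_cut = -1.0, None
    for cut in sorted(set(values)):
        tp = sum(1 for v, o in zip(values, outcomes) if v >= cut and o)
        fn = sum(1 for v, o in zip(values, outcomes) if v < cut and o)
        tn = sum(1 for v, o in zip(values, outcomes) if v < cut and not o)
        fp = sum(1 for v, o in zip(values, outcomes) if v >= cut and not o)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j
```

On real follow-up data the cutoff would of course be far less clean than in a toy example, and survival-analysis methods (as in the paper's multivariate Cox models) would be layered on top.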
Ambient Noise Tomography of the East African Rift System in Mozambique
NASA Astrophysics Data System (ADS)
Domingues, A.; Chamussa, J.; Silveira, G. M.; Custodio, S.; Lebedev, S.; Chang, S.; Ferreira, A. M.; Fonseca, J. F.
2013-12-01
A wide range of studies has shown that the cross-correlation of ambient noise can provide an estimate of the Green's functions between pairs of stations. Project MOZART (funded by FCT, Lisbon, PI J. Fonseca) deployed 30 broadband (120s) seismic stations from the SEIS-UK Pool in Central Mozambique and NE South Africa, with the purpose of studying the East African Rift System (EARS) in Mozambique. We applied the Ambient Noise Tomography (ANT) method to broadband seismic data recorded from March 2011 until July 2012. Cross-correlations were computed between all pairs of stations, and from these we obtained Rayleigh wave group velocity dispersion curves for all interstation paths, in the period range from 3 to 50 seconds. We tested various approaches for pre-processing the ambient noise data regarding time-domain and spectral normalisation, as well as the use of phase cross-correlations. Moreover, we examined the robustness of our dispersion maps by splitting our dataset into various sub-sets of Green's functions with similar paths and by quantifying the differences between the dispersion maps obtained from the various sub-sets of data. We find that while the geographical distribution of the group velocity anomalies is well constrained, the amplitudes of the anomalies are slightly less robust. We performed a three-dimensional inversion to obtain the S-wave velocity of the crust and upper mantle. In addition, our preliminary results show a good correlation between the Rayleigh wave group velocity and the geology of Mozambique. In order to extend the investigation to longer periods and, thus, to be able to look into the lithosphere-asthenosphere depth range in the upper mantle, we apply a recent implementation of the surface-wave two-station method (teleseismic interferometry) and augment our dataset with Rayleigh wave phase velocity curves in broad period ranges.
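The core operation behind ambient noise tomography, cross-correlating long noise records between two stations, can be sketched in a few lines. This is a bare illustration of the principle, without the time-domain and spectral normalisation steps the study actually tests; the function name is invented.

```python
import numpy as np

def noise_cross_correlation(a, b):
    """Linear cross-correlation of two equal-length noise traces via FFT.
    Returns (lags, cc). A coherent signal arriving at station b with a delay
    of d samples relative to station a produces a peak near lag +d, which
    estimates the interstation (Green's function) traveltime."""
    n = len(a)
    nfft = 2 * n                               # zero-pad: linear, not circular
    fa = np.fft.rfft(a, nfft)
    fb = np.fft.rfft(b, nfft)
    cc = np.fft.irfft(fb * np.conj(fa), nfft)
    cc = np.concatenate([cc[nfft - (n - 1):], cc[:n]])  # lags -(n-1)..n-1
    lags = np.arange(-(n - 1), n)
    return lags, cc
```

Stacking many such daily correlations strengthens the coherent interstation arrival relative to incoherent noise, which is what makes the dispersion measurements at 3-50 s possible.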
NASA Astrophysics Data System (ADS)
Fritscher, Klaus; Braue, Wolfgang; Schulz, Uwe
2013-05-01
The chemical composition of the alumina-zirconia mixed zone (MZ) of an electron beam physical vapor deposited thermal barrier coating (EB-PVD TBC) system is affected by service conditions and by the interdiffusion of elements from the substrate alloy below and the zirconia top coat. Three NiCoCrAlY bond-coated Ni-base substrates with YPSZ or CeSZ EB-PVD TBCs were subjected to a cyclic furnace oxidation test (FCT) at 1373 K (1100 °C) in order to provide experimental evidence of a link between chemistry of the MZ, the substrate alloy, the ceramic top coat, and the time in the FCT. Energy dispersive spectroscopy of the MZ revealed preferred accumulation of Cr, Zr, Y, and Ce. The concentration of the reactive elements (RE = Ce + Y + Zr) was related to the respective average lifetimes of the TBC systems at 1373 K (1100 °C). The RE content in the MZ turned out to be a life-limiting parameter for YPSZ and CeSZ TBC systems which can be utilized to predict their relative lifetimes on the individual substrates. Conversely, the TBC failure mechanisms of YPSZ and CeSZ TBC systems are dissimilar.
Raymond, Yves; Champagne, Claude P
2015-04-01
The goals of this study were to evaluate the precision and accuracy of flow cytometry (FC) methodologies in the evaluation of populations of probiotic bacteria (Lactobacillus rhamnosus R0011) in two commercial dried forms, and to ascertain the challenges in enumerating them in a chocolate matrix. FC analyses of total (FC(T)) and viable (FC(V)) counts in liquid or dried cultures were almost two times more precise (reproducible) than traditional direct microscopic counts (DMC) or colony forming units (CFU). With FC, it was possible to ascertain low levels of dead cells (FC(D)) in fresh cultures, which is not possible with traditional CFU and DMC methodologies. There was no interference of chocolate solids on FC counts of probiotics when inoculation was above 10(7) bacteria per g. Addition of probiotics to chocolate at 40 °C resulted in a 37% loss in viable cells. Blending of the probiotic powder into chocolate was not uniform, which raised a concern that the precision of viable counts could suffer. FC(T) data can serve to identify the correct inoculation level of a sample, and viable counts (FC(V) or CFU) can then be better interpreted. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
BAWADI, Hiba A.; AL-SHWAIYAT, Naseem M.; TAYYEM, Reema F.; MEKARY, Rania; TUURI, Georgianna
2011-01-01
Aim: This study was conducted to develop a meal-planning exchange list for Middle Eastern foods commonly included in the Jordanian cuisine. Forty types of appetizers and another 40 types of desserts were selected, with five different recipes for each item. Recipes were collected from different housewives and Arabic cookbooks. Ingredient weights and dish net weights were recorded based on an average recipe, and dishes were prepared accordingly. Dishes were proximately analyzed following the AOAC procedures. Proximate analysis was compared to the WHO food composition tables (FCT) for use in the Middle East, and with food analysis software (ESHA). Results: Significant correlations (P < 0.001) were found between macronutrient content obtained from proximate analysis and that obtained from ESHA. The correlation coefficients (r) were 0.92 for carbohydrate, 0.86 for protein, and 0.86 for fat. Strong correlations were also detected between proximate analysis and the FCT for carbohydrate (r=0.91, P<0.001) and protein (r=0.81, P<0.001) contents. The correlation for fat, while still significant, was weaker (r=0.62, P<0.001). Conclusion: A valid exchange system for traditional desserts and appetizers is now available and ready to be used by dietitians and health care providers in Jordan and the Arab world. PMID:21841913
Ma, Jui-Shan; Chen, Sin-Yu; Lo, Hsueh-Hsia
2017-11-01
Biofilm formation is well known as a determinant of bacterial virulence. Group G Streptococcus dysgalactiae subspecies equisimilis (SDSE), a relevant pathogen of increasing medical importance, was evaluated for its biofilm-forming potential. The microtiter plate assay was used to assess the most suitable medium for group G SDSE biofilm formation. Among 246 SDSE isolates examined, 46.7%, 43.5%, 33.3%, and 26.4% of isolates showed moderate or strong biofilm-forming abilities using tryptic soy broth (TSB), brain heart infusion broth (BHI), Todd-Hewitt broth (THB), and C medium with 30 mM glucose (CMG), respectively. The addition of glucose significantly increased the biofilm-forming ability of group G SDSE. FCT (fibronectin-collagen-T-antigen) typing of SDSE was undertaken for the first time, and 11 FCT types were found. A positive association of stG10.0, and negative associations of stG245.0, stG840.0, and stG6.1, with the biofilm-forming ability of SDSE were found. This was the first investigation demonstrating biofilm-forming potential in clinical group G SDSE isolates; in addition, significant associations of biofilm-forming ability with certain emm types were presented. © 2017 APMIS. Published by John Wiley & Sons Ltd.
Using the Generic Mapping Tools From Within the MATLAB, Octave and Julia Computing Environments
NASA Astrophysics Data System (ADS)
Luis, J. M. F.; Wessel, P.
2016-12-01
The Generic Mapping Tools (GMT) is a widely used software infrastructure tool set for analyzing and displaying geoscience data. Its power to analyze and process data and produce publication-quality graphics has made it one of several standard processing toolsets used by a large segment of the Earth and Ocean Sciences. GMT's strengths lie in superior publication-quality vector graphics, geodetic-quality map projections, robust data processing algorithms scalable to enormous data sets, and ability to run under all common operating systems. The GMT tool chest offers over 120 modules sharing a common set of command options, file structures, and documentation. GMT modules are command line tools that accept input and write output, and this design allows users to write scripts in which one module's output becomes another module's input, creating highly customized GMT workflows. With the release of GMT 5, these modules are high-level functions with a C API, potentially allowing users access to high-level GMT capabilities from any programmable environment. Many scientists who use GMT also use other computational tools, such as MATLAB® and its clone Octave. We have built a MATLAB/Octave interface on top of the GMT 5 C API. Thus, MATLAB or Octave now has full access to all GMT modules as well as fundamental input/output of GMT data objects via a MEX function. Internally, the GMT/MATLAB C API defines six high-level composite data objects that handle input and output of data via individual GMT modules. These are data tables, grids, text tables (text/data mixed records), color palette tables, raster images (1-4 color bands), and PostScript. The API is responsible for translating between the six GMT objects and the corresponding native MATLAB objects. References to data arrays are passed if transposing of matrices is not required. 
The GMT and MATLAB/Octave combination is extremely flexible, letting the user harvest the general numerical and graphical capabilities of both systems, and represents a giant step forward in interoperability between GMT and other software packages. We will present examples of the symbiotic benefits of combining these platforms. Two other extensions are also in the works: a nearly finished Julia wrapper and an embryonic Python module. Publication supported by FCT project UID/GEO/50019/2013 - Instituto D. Luiz
Exploring the CIGALA/CALIBRA network data base for supporting space weather service over Brazil
NASA Astrophysics Data System (ADS)
Galera Monico, Joao Francisco; Shimabukuro, Milton; Vani, Bruno; Stuani, Vinicius
Most of the Brazilian territory is surrounded by the equatorial anomaly to the north and south. Investigations related to space weather are therefore quite important and very demanding there. For example, GNSS applications are widely affected by ionospheric disturbances, a significant field within space weather. A network for continuous monitoring of the ionosphere was deployed over Brazilian territory starting in February 2011. This network was named CIGALA/CALIBRA after the two projects that originated it. Through CIGALA (Concept for Ionospheric Scintillation Mitigation for Professional GNSS in Latin America), funded by the European Commission (EC) in the framework of FP7-GALILEO-2009-GSA (European GNSS Agency), the first stations were deployed at Presidente Prudente, São Paulo state, in February 2011. The CIGALA project concluded in February 2012 with eight stations distributed over the Brazilian territory. Through CALIBRA (Countering GNSS high Accuracy applications Limitations due to Ionospheric disturbances in BRAzil), also funded by the European Commission, now in the framework of FP7-GALILEO-2011-GSA, new stations were deployed. All monitoring stations were placed at locations chosen for their geomagnetic arrangement to support the development of ionospheric models. The CALIBRA project started in November 2012 and will run for two years, focusing on the development of new algorithms that can be applied to high-accuracy GNSS techniques (RTK, PPP) in order to tackle the effects of ionospheric disturbances. All the stations have PolaRxS-PRO receivers, manufactured by Septentrio®. This multi-GNSS receiver can collect data at rates up to 100 Hz, providing ionospheric indices like TEC, scintillation parameters like S4 and Sigma-Phi, and other signal metrics like lock time for all satellites and frequencies tracked.
All collected data are sent to a central facility located at the Faculdade de Ciências e Tecnologia da Universidade Estadual Paulista (FCT/UNESP) in Presidente Prudente. To deal with the large amount of data, an analysis infrastructure has also been established and is in constant development: the web software named ISMR Query Tool, which provides querying and visualization of the scintillation parameters, with capabilities for identifying specific behaviors of ionospheric activity through data visualization and data mining. Its web availability and user-specified features allow users to interact with the data through a simple internet connection, enlarging insights about the ionosphere according to their own previous knowledge. Information about the network, the projects and the tool can be found at the FCT/UNESP Ionosphere web portal, available at http://is-cigala-calibra.fct.unesp.br/. In this contribution we will provide an overview of results extracted from the monitoring and analysis infrastructures, explaining the possibilities provided by the ISMR Query Tool in supporting analysis of the ionosphere and the development of models or mitigation techniques for GNSS. At this moment, and at least until the end of the CALIBRA project, this service is freely available to users who request access to FCT/UNESP. We would also like to discuss means of financing and keeping the service available at a minimum cost after the end of the project.
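Of the scintillation parameters mentioned above, the amplitude scintillation index S4 is simply the normalized standard deviation of detrended signal intensity over an averaging interval (commonly 60 s). A minimal sketch, with an invented function name:

```python
import numpy as np

def s4_index(intensity):
    """Amplitude scintillation index S4: normalized standard deviation of
    signal intensity samples over the averaging interval.
    Assumes the intensity has already been detrended."""
    i = np.asarray(intensity, dtype=float)
    m = i.mean()
    variance = max(float((i ** 2).mean() - m ** 2), 0.0)
    return float(np.sqrt(variance) / m)
```

A quiet signal gives S4 near 0, while strong equatorial scintillation drives S4 toward (and occasionally above) 1, which is why the index is a convenient single number to archive per satellite and frequency.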
Forest fires caused by lightning activity in Portugal
NASA Astrophysics Data System (ADS)
Russo, Ana; Ramos, Alexandre M.; Benali, Akli; Trigo, Ricardo M.
2017-04-01
Wildfires in southern Europe have caused extensive economic and ecological losses, and even human casualties, in recent decades (e.g. Pereira et al., 2011). According to statistics provided by the EC-JRC European Forest Fire Information System (EFFIS) for Europe, the years 2003 and 2007 represent the most dramatic fire seasons since the beginning of the millennium, followed by 2005 and 2012. These extreme years registered total annual burned areas for Europe of over 600,000 ha, reaching 800,000 ha in 2003. Over Iberia and France, the exceptional fire seasons registered in 2003 and 2005 were coincident, respectively, with one of the most severe heatwaves (Bastos et al., 2014) and droughts (Gouveia et al., 2009) of the 20th century. The year 2007 was very peculiar: the Peloponnese was struck by a severe winter drought followed by a subsequent wet spring, and then by three heat waves during summer, which played a major role in increasing the susceptibility of the region to wildfires (Gouveia et al., 2016). Some countries have a relatively large fraction of fires caused by natural factors such as lightning, e.g. the northwestern USA, Canada, and Russia. In contrast, Mediterranean countries such as Portugal have only a small percentage of fire records attributed to lightning, although significant uncertainty remains about the triggering mechanism for the majority of fires, since they were catalogued without a likely cause.
In this work we used mainly two databases: 1) the Portuguese Rural Fire Database (PRFD), which is representative of rural fires that occurred in Continental Portugal in 2002-2009, with the original data provided by the National Forestry Authority; 2) lightning discharge locations extracted from the Portuguese Lightning Location System, which has been in service since June 2002 and is operated by the national weather service, the Portuguese Institute for Sea and Atmosphere (IPMA). The main objective of this work was to evaluate and quantify the relations between wildfire occurrence and lightning activity. In particular, we verified whether wildfires identified as "ignited by lightning" were indeed so, by comparing their locations to the lightning discharge location database. Furthermore, we also investigated possible fire ignitions by lightning among fires not yet assigned a likely cause in the PRFD, by comparing daily data from both datasets. - Bastos A., Gouveia C.M., Trigo R.M., Running S.W., 2014. Biogeosciences, 11, 3421-3435. - Pereira M.G., B.D. Malamud, R.M. Trigo, P.I. Alves, 2011. Nat. Hazards Earth Syst. Sci., 11, 3343-3358. - Gouveia C., Trigo R.M., DaCamara C.C., 2009. Nat. Hazards Earth Syst. Sci., 9, 185-195. - Gouveia C.M., Bistinas I., Liberato M.L.R., Bastos A., Koutsias N., Trigo R., 2016. Agricultural and Forest Meteorology, 218-219, 135-145. Acknowledgements: Research performed was supported by the FAPESP/FCT Project Brazilian Fire-Land-Atmosphere System (BrFLAS) (1389/2014 and 2015/01389-4). Ana Russo thanks FCT for granted support (SFRH/BPD/99757/2014). A. M. Ramos was also supported by a FCT postdoctoral grant (FCT/DFRH/SFRH/BPD/84328/2012).
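Matching fire records to candidate lightning ignitions comes down to a spatiotemporal proximity search between the two databases. As an illustrative sketch only: the 10 km / 3 day window below is an invented choice, not the study's criterion, and the function names are hypothetical.

```python
from datetime import datetime, timedelta
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    p1, p2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlmb / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def candidate_ignitions(fires, strikes, max_km=10.0, max_days=3):
    """Flag fires with at least one lightning discharge within max_km and
    within the max_days preceding ignition. fires and strikes are
    (time, lat, lon) tuples."""
    window = timedelta(days=max_days)
    flagged = []
    for ft, flat, flon in fires:
        for st, slat, slon in strikes:
            if (ft - window <= st <= ft
                    and haversine_km(flat, flon, slat, slon) <= max_km):
                flagged.append((ft, flat, flon))
                break  # one matching strike is enough to flag this fire
    return flagged
```

Allowing a multi-day window matters because lightning-ignited fires can smoulder ("holdover fires") before detection; the study's daily-resolution comparison reflects the same idea.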
Hydro-geomorphologic events in Portugal and its association with Circulation weather types
NASA Astrophysics Data System (ADS)
Pereira, Susana; Ramos, Alexandre M.; Rebelo, Luís; Trigo, Ricardo M.; Zêzere, José L.
2017-04-01
Floods and landslides correspond to the most hazardous weather-driven natural disasters in Portugal. A recent improvement in their characterization has been achieved by gathering basic information on past floods and landslides that caused social consequences in Portugal for the period 1865-2015 through the DISASTER database (Zêzere et al., 2014). This database was built under the assumption that strong social impacts of floods and landslides are sufficiently relevant to be reported consistently by national and regional newspapers. The DISASTER database contains detailed information on the location, date of occurrence and social impacts (fatalities, injuries, missing people, evacuated and homeless people) of each individual hydro-geomorphologic case (1677 flood cases and 292 landslide cases). These hydro-geomorphologic disaster cases are grouped into a restricted number of DISASTER events selected according to the following criteria: a set of at least 3 DISASTER cases sharing the same trigger in time (with no more than 3 days without cases), which have a widespread spatial extension related to the triggering mechanism and a certain magnitude. In total, the DISASTER database includes 134 events (3.7 days average duration) that generated high social impacts in Portugal (962 fatalities and 40,878 homeless people). Each DISASTER event was characterized with the following attributes: hydro-geomorphologic event type (e.g. landslides, floods, flash floods, urban floods); date of occurrence (year, month and days); duration in days; spatial location in GIS; number of fatalities, injured, evacuated and homeless people; and weather type responsible for triggering the event. Atmospheric forcing at different time scales is the main trigger for the hydro-meteorological DISASTER events occurring in Portugal.
In this regard, there is an urgent need for a more systematic assessment of the weather types associated with flood and landslide damaging events in order to correctly characterize the climatic forcing of hydro-geomorphologic risk in Portugal. The weather type classification used herein is an automated version of the Lamb weather type procedure, initially developed for the United Kingdom, often named circulation weather types (CWT), and later adapted for Portugal. We computed the daily CWT for the 1865-2015 period by means of daily SLP retrieved from the 20th Century Reanalysis dataset. The relationship between the CWTs and the hydro-meteorological events in Portugal shows that the cyclonic, westerly and southwesterly CWTs are frequently associated with major socio-economic impacts of DISASTER events. In addition, the CWT basic variables (flow strength, vorticity and direction) were used to better understand the impacts of the meteorological conditions in the hydro-meteorological events in Portugal. Reference: Zêzere, J. L., Pereira, S., Tavares, A. O., Bateira, C., Trigo, R. M., Quaresma, I., Santos, P. P., Santos, M. and Verde, J.: DISASTER: a GIS database on hydro-geomorphologic disasters in Portugal, Nat. Hazards, 72(2), 503-532, doi:10.1007/s11069-013-1018-y, 2014. This work was supported by the project FORLAND - Hydrogeomorphologic risk in Portugal: driving forces and application for land use planning [grant number PTDC/ATPGEO/1660/2014] funded by the Portuguese Foundation for Science and Technology (FCT), Portugal. A. M. Ramos was also supported by a FCT postdoctoral grant (FCT/DFRH/SFRH/BPD/84328/2012).
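The classification step from the three basic variables named above (flow strength, vorticity, direction) can be sketched in a heavily simplified form. This toy version keeps only the pure directional and pure cyclonic/anticyclonic rules of the Jenkinson-Collison automation of Lamb's scheme; the hybrid types, the unclassified case, and the grid-point formulas for deriving w, s and z from SLP are all omitted, and the function name is invented.

```python
from math import atan2, degrees, hypot

def circulation_weather_type(w, s, z):
    """Toy Lamb / Jenkinson-Collison classification.
    w: westerly flow component, s: southerly flow component,
    z: total vorticity (all in the same geostrophic units)."""
    f = hypot(w, s)                       # flow strength
    if abs(z) > 2.0 * f:                  # rotation dominates the flow
        return "C" if z > 0 else "A"      # pure cyclonic / anticyclonic
    # Direction the flow comes from (meteorological convention, deg from N)
    direction = (degrees(atan2(w, s)) + 180.0) % 360.0
    sectors = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]
    return sectors[int(((direction + 22.5) % 360.0) // 45.0)]
```

Applied daily to reanalysis SLP over 1865-2015, output like "C", "W" or "SW" is exactly the kind of label the text reports as most frequently associated with damaging DISASTER events.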
Extreme precipitation events in the Iberian Peninsula and its association with Atmospheric Rivers
NASA Astrophysics Data System (ADS)
Ramos, Alexandre M.; Liberato, Margarida L. R.; Trigo, Ricardo M.
2015-04-01
Extreme precipitation events in the Iberian Peninsula during the winter half of the year have major socio-economic impacts associated with floods, landslides, extensive property damage and life losses. In recent years, a number of works have shed new light on the role played by Atmospheric Rivers (ARs) in the occurrence of extreme precipitation events in both Europe and the USA. ARs are relatively narrow regions of concentrated water vapour (WV) responsible for horizontal transport in the lower atmosphere, corresponding to the core section of the broader warm conveyor belt occurring over the oceans along the warm sector of extra-tropical cyclones. Over the North Atlantic, ARs are usually W-E oriented, steered by pre-frontal low-level jets along the trailing cold front, and subsequently feed the precipitation in extra-tropical cyclones. It has been shown that more than 90% of the meridional WV transport in the mid-latitudes occurs in ARs, although they cover less than 10% of the area of the globe. The large amount of WV that is transported can lead to heavy precipitation and floods. An automated AR detection algorithm is used for the North Atlantic Ocean basin, allowing the identification and a comprehensive characterization of the major AR events that affected the Iberian Peninsula over the 1948-2012 period. The extreme precipitation days in the Iberian Peninsula were assessed recently by us (Ramos et al., 2014) and their association (or not) with the occurrence of ARs is analyzed in detail here. The extreme precipitation days are ranked by their magnitude, obtained by considering 1) the area affected and 2) the precipitation intensity. Different rankings are presented for the entire Iberian Peninsula, Portugal, and the six largest Iberian river basins (Minho, Duero, Tagus, Guadiana, Guadalquivir and Ebro), covering the 1950-2008 period (Ramos et al., 2014).
Results show that the association between ARs and extreme precipitation days in the western domains (Portugal, Minho, Tagus and Duero) is noteworthy, while for the eastern and southern basins (Ebro, Guadiana and Guadalquivir) the impact of ARs is reduced. In addition, the large-scale meteorological conditions associated with ARs were also analysed. The anomalies between the extended winter (ONDJFM) long-term mean and the composite for the persistent AR time steps were computed for the integrated water vapour transport (IVT) and sea level pressure (SLP) fields. Negative SLP anomalies are found centred over Ireland, with slight positive SLP anomalies located over northern Africa. It was found that the ARs hitting the Iberian Peninsula are strongly correlated with the East Atlantic (EA) pattern, while the influence of other patterns, such as the North Atlantic Oscillation (NAO) or the Scandinavian pattern (SCAND), is weak. The main results presented here are in press (Ramos et al., 2015). References: Ramos et al. (2014), A ranking of high-resolution daily precipitation extreme events for the Iberian Peninsula, Atmospheric Science Letters, doi:10.1002/asl2.507. Ramos et al. (2015), Daily precipitation extreme events in the Iberian Peninsula and its association with Atmospheric Rivers, Journal of Hydrometeorology, in press. This work was partially supported by FEDER (Fundo Europeu de Desenvolvimento Regional) funds through COMPETE (Programa Operacional Factores de Competitividade) and by national funds through FCT (Fundação para a Ciência e a Tecnologia, Portugal) under project STORMEx FCOMP-01-0124-FEDER-019524 (PTDC/AAC-CLI/121339/2010). A. M. Ramos was also supported by a FCT postdoctoral grant (FCT/DFRH/SFRH/BPD/84328/2012).
On the latitudinal distribution of Titan's haze at the Voyager epoch
NASA Astrophysics Data System (ADS)
Negrao, A.; Roos-Serote, M.; Rannou, P.; Rages, K.; McKay, C.
2002-09-01
In this work, we re-analyse a total of 10 high-phase-angle images of Titan (2 from Voyager 1 and 8 from Voyager 2). The images were acquired in different filters of the Voyager Imaging Subsystem in 1980-1981. We apply a model, developed and used by Rannou et al. (1997) and Cabane et al. (1992), that calculates the vertical (1-D) distribution of haze particles and the I/F radial profiles as a function of a series of parameters. Two of these parameters, the haze particle production rate (P) and the imaginary refractive index (xk), are used to obtain fits to the observed I/F profiles at different latitudes. Different from previous studies, we consider all filters simultaneously, in an attempt to better constrain the parameter values. We also include the filter response functions, not considered previously. The results show that P does not change significantly as a function of latitude, even though somewhat lower values are found at high northern latitudes; xk seems to increase towards southern latitudes. We will compare our results with GCM runs, which can give the haze distribution at the epoch of the observations. Work financed by the Portuguese Foundation for Science and Technology (FCT), contract ESO/PRO/40157/2000.
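The simultaneous multi-filter fitting strategy described above can be illustrated with a toy example. The forward model, parameter ranges, and data below are purely illustrative assumptions (not the Rannou et al. microphysical model); the point is that a single (P, xk) pair is fitted against the profiles of all filters at once.

```python
import numpy as np

# Toy forward model: I/F profile at radius r for filter wavelength lam,
# given production rate P and imaginary index xk (illustrative only).
def model_if(r, lam, P, xk):
    return P * np.exp(-xk * r / lam)

r = np.linspace(1.0, 5.0, 40)
lams = [0.41, 0.48, 0.56]                 # toy filter wavelengths, um
P_true, xk_true = 2.0, 0.3
data = {lam: model_if(r, lam, P_true, xk_true) for lam in lams}

# Joint chi-square over ALL filters simultaneously, minimised by grid search
def chi2(P, xk):
    return sum(np.sum((data[lam] - model_if(r, lam, P, xk)) ** 2)
               for lam in lams)

Ps = np.linspace(1.0, 3.0, 81)
xks = np.linspace(0.1, 0.5, 81)
best = min((chi2(P, xk), P, xk) for P in Ps for xk in xks)
```

Because the joint chi-square sums residuals over every filter, a parameter pair that fits one filter but not the others is penalised, which is the rationale for the simultaneous treatment.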
Foreign Comparative Testing (FCT) Program Procedures Manual
1994-01-01
Public Law 101-189, "National Defense Authorization Act for Fiscal Years 1990 and 1991," November 29, 1989 (e) Federal Acquisition Regulation (FAR...reference (b)) shall be met for any subsequent acquisition (assuming the T&E is successful). f. Factors, if any, that would mandate subsequent production of...States on an urgent basis. The Commander, U.S. Army Chemical Defense Research Institute, Fort Pinkerton, AL, acting in his capacity as the User
Modelling crop yield in Iberia under drought conditions
NASA Astrophysics Data System (ADS)
Ribeiro, Andreia; Páscoa, Patrícia; Russo, Ana; Gouveia, Célia
2017-04-01
Improved assessment of cereal yield and crop loss under drought conditions is essential to meet increasing economic demands. The growing frequency and severity of extreme drought conditions in the Iberian Peninsula (IP) have likely been responsible for negative impacts on agriculture, namely crop yield losses. Therefore, continuous monitoring of vegetation activity and a reliable estimation of drought impacts are crucial contributions to agricultural drought management and the development of suitable information tools. This work aims to assess the influence of drought conditions on agricultural yields over the IP, considering cereal yields from mainly rainfed agriculture for the provinces with the highest productivity. The main target is to develop a strategy to model drought risk on agriculture for wheat yield at the province level. To achieve this goal, a combined assessment was made using a drought indicator (the Standardized Precipitation Evapotranspiration Index, SPEI) to evaluate drought conditions, together with a widely used vegetation index (the Normalized Difference Vegetation Index, NDVI) to monitor vegetation activity. A correlation analysis between detrended wheat yield and SPEI was performed to assess the vegetation response to each time scale of drought occurrence and to identify the moment of the vegetative cycle when crop yields are most vulnerable to drought conditions. The SPEI time scales and months, together with the NDVI months, best related to wheat yield were chosen for a multivariate regression analysis to simulate crop yield. Model results are satisfactory and highlight the usefulness of such an analysis in the framework of developing a drought risk model for crop yields.
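The two-step procedure (correlation screening over SPEI time scales and months, then a multivariate regression) can be sketched as follows. The variable names and the synthetic data are assumptions for illustration, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
years = 30
# SPEI series keyed by (time scale in months, calendar month) - synthetic
spei = {(scale, month): rng.standard_normal(years)
        for scale in (3, 6, 12) for month in ("Mar", "Apr", "May")}
ndvi_apr = rng.standard_normal(years)
# synthetic detrended yield driven by one SPEI series plus NDVI
yield_detr = (0.8 * spei[(6, "Apr")] + 0.3 * ndvi_apr
              + 0.1 * rng.standard_normal(years))

# 1) correlation screening: pick the SPEI scale/month best related to yield
best_key = max(spei,
               key=lambda k: abs(np.corrcoef(spei[k], yield_detr)[0, 1]))

# 2) multivariate least-squares regression on the selected predictors
X = np.column_stack([np.ones(years), spei[best_key], ndvi_apr])
coef, *_ = np.linalg.lstsq(X, yield_detr, rcond=None)
```

The fitted coefficients then give a simple yield model that can be evaluated against observed yields, as the abstract describes.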
From an operational point of view, the results aim to contribute to an improved understanding of crop yield management under dry conditions, in particular by adding substantial information on the advantages of combining vegetation and hydro-meteorological drought indices for the assessment of cereal yield. Moreover, the present study provides some guidance for users' decision-making in agricultural practices in the IP, assisting farmers in deciding whether to purchase crop insurance. Acknowledgements: This work was partially supported by national funds through FCT (Fundação para a Ciência e a Tecnologia, Portugal) under project IMDROFLOOD (WaterJPI/0004/2014). Ana Russo thanks FCT for granted support (SFRH/BPD/99757/2014). Andreia Ribeiro also thanks FCT for grant PD/BD/114481/2016.
Characteristics of storms that contribute to extreme precipitation events over the Iberian Peninsula
NASA Astrophysics Data System (ADS)
Trigo, Ricardo; Ramos, Alexandre M.; Ordoñez, Paulina; Liberato, Margarida L. R.; Trigo, Isabel F.
2014-05-01
Floods correspond to one of the most deadly natural disasters in the Iberian Peninsula during the last century. Quite often these floods are associated with intense low-pressure systems of Atlantic origin. In recent years a number of episodes have been evaluated on a case-by-case basis, with a clear focus on extreme events, so a systematic assessment has been lacking. In this study we focus on the characteristics of the storms for the extended winter season (October to March) that are responsible for the most extreme rainfall events over large areas of the Iberian Peninsula. An objective method for ranking daily precipitation events during the extended winter is used, based on the most comprehensive high-resolution (0.2º latitude by 0.2º longitude) gridded daily precipitation dataset available for the Iberian Peninsula. The magnitude of an event is obtained after considering the total area affected as well as its intensity at every grid point (taking into account the daily normalised departure from climatology). Different precipitation rankings are studied considering the entire Iberian Peninsula, Portugal, and the six largest river basins in the Iberian Peninsula (Duero, Ebro, Tagus, Minho, Guadiana and Guadalquivir). Using an objective cyclone detection and tracking scheme [Trigo, 2006], the storm tracks and characteristics of the cyclones were obtained from the ERA-Interim reanalyses for the 1979-2008 period. The spatial distribution of extratropical cyclone positions when the precipitation extremes occur will be analysed over the considered sub-domains (Iberia, Portugal, major river basins). In addition, we distinguish the different cyclone characteristics (lifetime, direction, minimum pressure, position, velocity, vorticity and radius) with significant impacts on precipitation over the different domains of the Iberian Peninsula.
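The event-magnitude idea (area affected combined with the normalised departure from climatology at each grid point) can be sketched as below. The specific weighting is a simplified assumption for illustration, not the paper's exact formula.

```python
import numpy as np

def event_magnitude(precip, clim_mean, clim_std, thresh=1.0):
    """precip, clim_mean, clim_std: 2-D arrays on the same grid.

    Combines the fraction of the domain exceeding a normalised-anomaly
    threshold with the mean anomaly over those cells (illustrative metric).
    """
    z = (precip - clim_mean) / clim_std      # normalised daily anomaly
    wet = z > thresh                         # cells counted as affected
    area_frac = wet.mean()                   # fraction of domain affected
    intensity = z[wet].mean() if wet.any() else 0.0
    return area_frac * intensity

# toy 2x2 grid: half the domain has an anomaly of 3 standard deviations
m = event_magnitude(np.array([[3.0, 3.0], [0.0, 0.0]]),
                    np.zeros((2, 2)), np.ones((2, 2)))
```

Ranking days by such a scalar magnitude is what allows a single ordered list of extreme precipitation events per domain.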
This work was partially supported by FEDER (Fundo Europeu de Desenvolvimento Regional) funds through COMPETE (Programa Operacional Factores de Competitividade) and by national funds through FCT (Fundação para a Ciência e a Tecnologia, Portugal) under project STORMEx FCOMP-01-0124-FEDER-019524 (PTDC/AAC-CLI/121339/2010). A. M. Ramos was also supported by a FCT postdoctoral grant (FCT/DFRH/SFRH/BPD/84328/2012). Trigo I. F. (2006) Climatology and interannual variability of storm-tracks in the Euro-Atlantic sector: A comparison between ERA-40 and NCEP/NCAR reanalyses. Clim. Dyn., 26, 127-143.
Laser ablation U-Th-Sm/He dating of detrital apatite
NASA Astrophysics Data System (ADS)
Guest, B.; Pickering, J. E.; Matthews, W.; Hamilton, B.; Sykes, C.
2016-12-01
Detrital apatite U-Th-Sm/He thermochronology has the potential to be a powerful tool for conducting basin thermal history analyses as well as complementing the well-established detrital zircon U-Pb approach in source-to-sink studies. A critical roadblock that prevents the routine application of detrital apatite U-Th-Sm/He thermochronology to solving geological problems is the costly and difficult whole-grain approach that is generally used to obtain apatite U-Th-Sm/He data. We present a new analytical method for laser ablation thermochronology on apatite. Samples are ablated using a Resonetics™ 193 nm excimer laser, and the liberated 4He is measured using an ASI (Australian Scientific Instruments) Alphachron™ quadrupole mass spectrometer system; collectively, this is known as the Resochron™. The ablated sites are imaged using a Zygo Zescope™ optical profilometer, and the ablated pit volume is measured using PitVol, a custom MATLAB™ algorithm. The accuracy and precision of the method presented here were confirmed using well-characterized Durango apatite and Fish Canyon Tuff (FCT) apatite reference materials, with Durango apatite used as a primary reference and FCT apatite as a secondary reference. The weighted average of our laser ablation Durango ages (30.5±0.35 Ma) compares well with ages obtained using conventional whole-grain degassing and dissolution U-Th-Sm/He methods (32.56±0.43 Ma) (Jonckheere et al., 1993; Farley, 2000; McDowell et al., 2005) for chips of the same Durango crystal. These Durango ages were used to produce a K-value to correct the secondary references and unknown samples. After correction, FCT apatite has a weighted average age of 28.37 ± 0.96 Ma, which agrees well with published ages. As a further test of this new method we have conducted a case study on a set of samples from the British Mountains of the Yukon Territory in NW Canada.
Sandstone samples collected across the British Mountains were analyzed using conventional U-Th-Sm/He whole grain methods and then reanalyzed using our new Laser ablation approach. The laser ablation results are consistent with those obtained using conventional methods, confirming that apatite laser ablation U-Th-Sm/He thermochronology is a viable alternative for collecting large low temperature thermochronology data sets from detrital samples.
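The primary-standard correction can be sketched numerically. The ages below are those quoted in the abstract; treating the "K-value" as a simple age ratio scaled from the Durango primary reference is my assumption for illustration, not the authors' stated formula.

```python
# Sketch of a primary-standard correction factor ("K-value"): scale measured
# laser-ablation ages so that the Durango primary standard matches its
# accepted whole-grain age. (Linear age scaling is an assumption here.)
DURANGO_ACCEPTED = 32.56    # Ma, conventional whole-grain reference age
durango_measured = 30.5     # Ma, laser-ablation weighted average

k = DURANGO_ACCEPTED / durango_measured   # correction factor, ~1.068

def correct_age(measured_ma):
    """Apply the Durango-derived correction to a measured age (Ma)."""
    return k * measured_ma
```

Applied to the uncorrected FCT laser-ablation ages, such a factor is what yields the corrected weighted average of 28.37 ± 0.96 Ma reported above.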
Reduced 3d modeling on injection schemes for laser wakefield acceleration at plasma scale lengths
NASA Astrophysics Data System (ADS)
Helm, Anton; Vieira, Jorge; Silva, Luis; Fonseca, Ricardo
2017-10-01
Current modelling techniques for laser wakefield acceleration (LWFA) are based on particle-in-cell (PIC) codes, which are computationally demanding. In PIC simulations the laser wavelength λ0, in the μm range, has to be resolved over acceleration lengths in the metre range. A promising approach is the ponderomotive guiding center (PGC) solver, which considers only the laser envelope for laser pulse propagation. Then only the plasma skin depth λp has to be resolved, leading to speedups of (λp/λ0)². This allows a wide range of parameter studies to be performed and the solver to be used for λ0 << λp studies. We present the 3D version of a PGC solver in the massively parallel, fully relativistic PIC code OSIRIS. Further, we discuss and characterize the validity of the PGC solver for injection schemes at plasma scale lengths, such as down-ramp injection, magnetic injection and ionization injection, through parametric studies, full PIC simulations and theoretical scalings. This work was partially supported by Fundação para a Ciência e a Tecnologia (FCT), Portugal, through Grant No. PTDC/FIS-PLA/2940/2014 and PD/BD/105882/2014.
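The quoted (λp/λ0)² speedup is easy to make concrete. The wavelength and density below are typical illustrative values I am assuming, not parameters from this work.

```python
# Back-of-the-envelope speedup of the ponderomotive guiding center (PGC)
# approach: resolving the plasma skin depth instead of the laser wavelength
# gives a gain of roughly (lambda_p / lambda_0)**2.
lambda_0 = 0.8e-6   # laser wavelength, m (typical Ti:sapphire; an assumption)
lambda_p = 5.3e-6   # plasma skin depth c/omega_p, m, for n_e ~ 1e18 cm^-3
                    # (an assumed density, not from the abstract)

speedup = (lambda_p / lambda_0) ** 2
print(f"estimated PGC speedup ~ {speedup:.0f}x")
```

Even at this moderate density the envelope model is tens of times cheaper per step, and the gain grows quadratically as the plasma becomes more underdense.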
The vorticity of Solar photospheric flows on the scale of granulation
NASA Astrophysics Data System (ADS)
Pevtsov, A. A.
2016-12-01
We employ time sequences of images observed with a G-band filter (λ4305 Å) by the Solar Optical Telescope (SOT) on board the Hinode spacecraft, at different latitudes along the solar central meridian, to study the vorticity of granular flows in quiet-Sun areas during the deep minimum of solar activity. Using a feature correlation tracking (FCT) technique, we calculate the vorticity of granular-scale flows. Assuming the known pattern of vertical flows (upward in granules and downward in intergranular lanes), we infer the sign of the kinetic helicity of these flows. We show that the kinetic helicity of granular flows and intergranular vortices exhibits a weak hemispheric preference, in agreement with the action of the Coriolis force. This slight hemispheric sign asymmetry, however, is not statistically significant given the large scatter in the average vorticity. The sign of the current helicity density of network magnetic fields, computed using full-disk vector magnetograms from the Synoptic Optical Long-term Investigations of the Sun (SOLIS), does not show any hemispheric preference. The combination of these two findings suggests that the photospheric dynamo operating on the scale of granular flows is non-helical in nature.
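Once horizontal velocity maps have been derived by correlation tracking, the vertical vorticity follows from finite differences. The sketch below uses an analytic solid-body-rotation field as stand-in data; the grid spacing and field are illustrative assumptions.

```python
import numpy as np

def vorticity(u, v, dx, dy):
    """Vertical vorticity omega_z = dv/dx - du/dy for 2-D velocity
    components u(y, x), v(y, x) on a regular grid."""
    dvdx = np.gradient(v, dx, axis=1)
    dudy = np.gradient(u, dy, axis=0)
    return dvdx - dudy

# Sanity check: solid-body rotation u = -omega*y, v = omega*x has
# uniform vertical vorticity 2*omega everywhere.
y, x = np.mgrid[-5:5:21j, -5:5:21j]   # grid spacing 0.5 in both directions
omega = 0.1
w = vorticity(-omega * y, omega * x, dx=0.5, dy=0.5)
```

Averaging the sign of such vorticity maps over many granules, separately per hemisphere, is what reveals (or fails to reveal) a hemispheric preference.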
Report on FY16 Low-dose Metal Fuel Irradiation and PIE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edmondson, Philip D.
2016-09-01
This report gives an overview of the efforts on low-dose metal fuel irradiation and post-irradiation examination (PIE) as part of the Fuel Cycle Research & Development (FCRD) Advanced Fuels Campaign (AFC) milestone M3FT-16OR020303031. The current status of the FCT and FCRP irradiation campaigns is given, including a description of the materials that have been irradiated, analysis of the passive temperature monitors, and the initial PIE efforts on the fuel samples.
Lanphere, M.A.; Baadsgaard, H.
2001-01-01
The accuracy of ages measured using the 40Ar/39Ar technique is affected by uncertainties in the age of radiation fluence-monitor minerals. At present, there is a lack of agreement about the ages of certain minerals used as fluence monitors. The accuracy of the age of a standard may be improved if the age can be measured using different decay schemes. This has been done by measuring ages on minerals from the Oligocene Fish Canyon Tuff (FCT) using the K-Ar, 40Ar/39Ar, Rb-Sr and U-Pb methods. K-Ar and 40Ar/39Ar total fusion ages of sanidine, biotite and hornblende yielded a mean age of 27.57 ± 0.36 Ma. The weighted mean 40Ar/39Ar plateau age of sanidine and biotite is 27.57 ± 0.18 Ma. A biotite-feldspar Rb-Sr isochron yielded an age of 27.44 ± 0.16 Ma. The U-Pb data for zircon are complex because of the presence of Precambrian zircons and inheritance of radiogenic Pb. Zircons with 207Pb/235U < 0.4 yielded a discordia line with a lower concordia intercept of 27.52 ± 0.09 Ma. Evaluation of the combined data suggests that the best age for FCT is 27.51 Ma. Published by Elsevier Science B.V.
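The combination of ages from independent decay schemes can be sketched with an inverse-variance weighted mean. The ages and uncertainties are those quoted in the abstract; the specific weighting scheme is my assumption, since the abstract does not state how the combined evaluation was done.

```python
# Combining independent-decay-scheme ages by inverse-variance weighting
# (an assumed combination rule; ages/errors are those quoted above).
ages   = [27.57, 27.44, 27.52]   # Ma: Ar/Ar plateau, Rb-Sr isochron, U-Pb intercept
errors = [0.18, 0.16, 0.09]      # Ma, quoted 1-sigma-style uncertainties

weights = [1.0 / e ** 2 for e in errors]
best = sum(w * a for w, a in zip(weights, ages)) / sum(weights)
err  = (1.0 / sum(weights)) ** 0.5
print(f"weighted mean age = {best:.2f} +/- {err:.2f} Ma")
```

With these numbers the weighted mean reproduces the 27.51 Ma "best age" quoted in the abstract, which suggests a weighting of this kind is at least consistent with the authors' evaluation.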
NASA Technical Reports Server (NTRS)
Frank, Jeremy; Spirkovska, Lilijana; McCann, Rob; Wang, Lui; Pohlkamp, Kara; Morin, Lee
2012-01-01
NASA's Advanced Exploration Systems Autonomous Mission Operations (AMO) project conducted an empirical investigation of the impact of time delay on today's mission operations, and of the effect of processes and mission support tools designed to mitigate time-delay-related impacts. Mission operation scenarios were designed for NASA's Deep Space Habitat (DSH), an analog spacecraft habitat, covering a range of activities including nominal objectives, DSH system failures, and crew medical emergencies. The scenarios were simulated at time-delay values representative of Lunar (1.2-5 s), Near Earth Object (NEO) (50 s) and Mars (300 s) missions. Each combination of operational scenario and time delay was tested in a Baseline configuration, designed to reflect present-day operations of the International Space Station, and a Mitigation configuration in which a variety of software tools, information displays, and crew-ground communications protocols were employed to assist both crews and Flight Control Team (FCT) members with the long-delay conditions. Preliminary findings indicate: 1) the workload of both crew members and FCT members generally increased with increasing time delay; 2) advanced procedure execution viewers, caution and warning tools, and communications protocols such as text messaging decreased the workload of both flight controllers and crew, and decreased the difficulty of coordinating activities; 3) whereas crew workload ratings increased between 50 s and 300 s of time delay in the Baseline configuration, workload ratings decreased (or remained flat) in the Mitigation configuration.
1980-01-01
1979. Principal Investigator: An Archaeological Investigation of the Proposed Lagoon Site, Dam Site Recreation Area, Coralville Lake, Iowa River, Iowa ...Proposed Lagoon Site, Coralville Lake, Iowa. Winter, 1979. Analysis of Material from the Site Survey of Blue Earth City Park, Faribault County...Site Recreation Area, Coralville Lake, Iowa. With Richard A. Strachan. For the Rock Island District, U.S. Army Corps of Engineers. With Richard A
HCP to FCT + precipitate transformations in lamellar gamma-titanium aluminide alloys
NASA Astrophysics Data System (ADS)
Karadge, Mallikarjun Baburao
Fully lamellar gamma-TiAl [alpha2 (HCP) + gamma (FCT)] based alloys are potential structural materials for aerospace engine applications. Lamellar structure stabilization and additional strengthening mechanisms are major issues in the ongoing development of titanium aluminides, owing to the microstructural instability resulting from decomposition of the strengthening alpha2 phase. This work addresses the characterization of multi-component TiAl systems to identify the mechanism of lamellar structure refinement and to assess the effects of light-element additions (C and Si) on creep deformation behavior. Transmission electron microscopy studies directly confirmed for the first time that the fine lamellar structure is formed by the nucleation and growth of a large number of basal stacking faults on 1/6<11-20> dislocations cross-slipping repeatedly into and out of basal planes. This lamellar structure can be tailored by modifying jog heights through chemistry and thermal processing. The alpha2 → gamma transformation during heating (investigated by differential scanning calorimetry and X-ray diffraction) is a two-step process involving the formation of a novel disordered FCC gamma' TiAl phase [with a(gamma') = c(gamma)] as an intermediate, followed by ordering. The addition of carbon and silicon induced Ti2AlC H-type carbide precipitation inside the alpha2 laths and Ti5(Al,Si)3 zeta-type silicide precipitation at the alpha2/gamma interface. The H-carbides preserve alpha2/gamma-type interfaces, while zeta-silicide precipitates restrict ledge growth and interfacial sliding, enabling strong resistance to creep deformation.
High magnetic coercivity of FePt-Ag/MgO granular nanolayers
NASA Astrophysics Data System (ADS)
Roghani, R.; Sebt, S. A.; Khajehnezhad, A.
2018-06-01
L10-FePt ferromagnetic nanoparticles have a high coercivity of the order of one tesla. Thus these nanoparticles, with sizes of 10 to 15 nm and a uniform surface distribution, are suitable for magnetic data storage technology with densities of more than 1 GB. In order to improve the structural and magnetic properties of FePt nanoparticles, various elements and compounds have been added to the alloy. In this research, we show that in the presence of Ag the fcc to L10-fct phase transition temperature of FePt decreases. The presence of Ag as an additive in the FePt-Ag nanocomposite increases the magnetic coercivity. This nanocomposite, with 10% Ag, was deposited by magnetron sputtering on the MgO heat layer. VSM results for 10 nm nanoparticles show that the coercivity increased up to 1.4 T. XRD and FESEM results confirm that the L10-FePt nanoparticles are 10 nm in size and that their surface distribution is uniform. Ag gradually forms nanoscale clusters with a separate lattice, and the FePt-Ag nanocomposite appears. The result of this process is the vacancy of Ag positions in the FePt fcc lattice, so the mobility of Fe and Pt atoms in this lattice increases, making it possible for them to move at lower temperature. This mechanism explains the effect of Ag in lowering the transition temperature to the fct-L10 phase and the high coercivity of the FePt nanoparticles.
NASA Astrophysics Data System (ADS)
Oliveira, Sérgio C.; Zêzere, José L.; Catalão, João; Nico, Giovanni
2015-04-01
In the Grande da Pipa river basin (north of Lisbon, Portugal), 64% of the inventoried landslides occur on a particular weak-rock lithological unit composed of clay with sandstone intercalations, which is present in 58% of the study area (Oliveira et al., 2014). Deep-seated, slow-moving rotational slides occur essentially on this lithological unit and are responsible for the major damage verified along roads and buildings in the study area. Within this context, landslide hazard assessment is limited by two major constraints: (i) the slope instability signs may not be sufficiently clear and observable, and consequently may not be correctly identifiable through traditional geomorphological survey techniques; and (ii) the non-timely recognition of precursor signs of instability, both in landslides activated for the first time and in previously landslide-affected areas (landslide reactivations). To overcome these limitations, the Persistent Scatterer synthetic aperture radar interferometry technique is applied to a data set of 16 TerraSAR-X SAR images, from April 2010 to March 2011, available for a small test site of 12.5 square kilometres (Laje-Salema) located in the south-central part of the study area. This work's specific objectives are: (i) to evaluate the capacity of Persistent Scatterer displacement maps in assessing landslide susceptibility at the regional scale; and (ii) to assess the capacity of landslide susceptibility maps based on historical landslide inventories to predict the location of actual terrain displacement measured by the Persistent Scatterer technique. Landslide susceptibility was assessed for the test site using the Information Value bivariate statistical method, and the susceptibility scores were exported to the Grande da Pipa river basin. The independent validation of the landslide susceptibility maps was made using the historical landslide inventory and the Persistent Scatterer displacement map.
Results are compared by computing the respective Receiver Operating Characteristic (ROC) curves and calculating the corresponding Area Under the Curve (AUC). Reference: Oliveira, S.C., Zêzere, J.L., Catalão, J., Nico, G. (2014) The contribution of PSInSAR interferometry to landslide hazard in weak rock-dominated areas. Landslides, doi:10.1007/s10346-014-0522-9. This work was supported by FCT - the Portuguese Foundation for Science and Technology, within the framework of the project Pan-European and nation-wide landslide susceptibility assessment, European and Mediterranean Major Hazards Agreement (EUR-OPA). The first author was funded by a postdoctoral grant (SFRH/BPD/85827/2012) from the Portuguese Foundation for Science and Technology (FCT).
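The Information Value method named above has a compact core: each predictor class is scored by the log-ratio of its share of landslides to its share of area. The sketch below is a minimal illustration with made-up data, not the study's dataset.

```python
import numpy as np

def information_value(classes, landslide):
    """Information Value per class:
    IV(c) = ln( (landslide cells in c / all landslide cells)
                / (cells in c / all cells) ).
    classes: integer class-id array; landslide: boolean array, same shape."""
    iv = {}
    n, n_ls = classes.size, landslide.sum()
    for c in np.unique(classes):
        in_c = classes == c
        p_ls = landslide[in_c].sum() / n_ls   # share of landslides in class c
        p_c = in_c.sum() / n                  # share of area in class c
        iv[c] = np.log(p_ls / p_c) if p_ls > 0 else float("-inf")
    return iv

# toy example: class 1 (think of the weak clay unit) concentrates slides
classes = np.array([1, 1, 1, 1, 2, 2, 2, 2])
slides  = np.array([1, 1, 1, 0, 0, 0, 0, 1], dtype=bool)
iv = information_value(classes, slides)
```

Summing the class scores of each cell over all predictor layers gives the susceptibility score that is then validated against independent inventories or, as here, Persistent Scatterer displacement maps.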
NASA Astrophysics Data System (ADS)
Ivanov, M.; Zeitoun, D.; Vuillon, J.; Gimelshein, S.; Markelov, G.
1996-05-01
The problem of the transition of planar shock waves over straight wedges in steady flows, from regular to Mach reflection and back, was studied numerically with the DSMC method for solving the Boltzmann equation and with a finite-difference method using an FCT (flux-corrected transport) algorithm for solving the Euler equations. It is shown that the transition from regular to Mach reflection takes place in accordance with the detachment criterion, while the opposite transition occurs at smaller angles. A hysteresis effect was observed with increasing and decreasing shock wave angle.
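The FCT algorithm referred to here can be sketched for the simplest case, 1-D linear advection with a Boris-Book-style limiter. This is a generic illustration of the flux-corrected transport idea (low-order monotone step plus limited antidiffusion), not the authors' Euler solver.

```python
import numpy as np

def fct_advect(u, c, steps):
    """Flux-corrected transport for 1-D linear advection on a periodic grid.
    c = a*dt/dx is the Courant number, 0 < c <= 1, velocity a > 0."""
    u = u.astype(float).copy()
    for _ in range(steps):
        up1 = np.roll(u, -1)                      # u[i+1]
        # 1) low-order (upwind) step: transported-diffused solution
        utd = u - c * (u - np.roll(u, 1))
        # 2) antidiffusive face flux = Lax-Wendroff flux minus upwind flux
        A = 0.5 * c * (1.0 - c) * (up1 - u)       # stored at face i+1/2
        # 3) Boris-Book limiter: antidiffusion must not create new extrema
        s = np.sign(A)
        d_up = np.roll(utd, -2) - np.roll(utd, -1)   # utd[i+2] - utd[i+1]
        d_dn = utd - np.roll(utd, 1)                 # utd[i]   - utd[i-1]
        Ac = s * np.maximum(0.0,
                            np.minimum.reduce([np.abs(A), s * d_up, s * d_dn]))
        # 4) apply limited antidiffusive fluxes in conservation form
        u = utd - (Ac - np.roll(Ac, 1))
    return u
```

Advecting a top-hat profile with this scheme keeps the solution bounded and conserves the total "mass" to round-off, which is exactly the property that makes FCT attractive for shock-capturing Euler solvers.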
Sarmento, Sandra; Costa, Filipa; Pereira, Alexandre; Lencart, Joana; Dias, Anabela; Cunha, Luís; Sousa, Olga; Silva, José Pedro; Santos, Lúcio
2015-12-15
After publication of this study [1], the authors noticed that the funding was incorrectly acknowledged. The correct Acknowledgements section can be found below: “This work was partly funded by Fundação para a Ciência e Tecnologia (FCT), in the framework of the project PTDC/SAU-ENB/117631/2010, which is cofinanced by FEDER, through Programa Operacional Fatores de Competitividade - COMPETE of QREN (reference FCOMP-01-0124-FEDER-021141).”
Wan, Haiying; Shi, Shifan; Bai, Litao; Shamsuzzoha, Mohammad; Harrell, J W; Street, Shane C
2010-08-01
We describe an approach to synthesizing monodisperse CoPt nanoparticles, using a dendrimer as template, by a simple chemical reduction method in aqueous solution with NaBH4 as the reducing agent at room temperature. The as-made CoPt nanoparticles buried in the dendrimer matrix have the chemically disordered fcc structure and can be transformed to the fct phase after annealing at 700 °C. This is the first report of dendrimer-mediated room-temperature synthesis of monodisperse magnetic nanoparticles in aqueous solution.
1988-09-01
Unclassified ... and selection of test waves. 30. Measured prototype wave data, on which a comprehensive statistical analysis of wave conditions could be based, were... Tests, existing conditions. 32. Prior to testing of the various improvement plans, comprehensive tests were conducted for existing conditions (Plate 1
2013-01-01
catalyst thermal CVD (FCT-CVD) with a xylene and ferrocene liquid mixture without any prior catalyst deposition. T-CVD is a low-cost system that can... ferrocene is used as an iron source to promote CNT growth. Based on these repeatable results, the CNT growth parameters were used to grow CNTs on the...temperature furnace is ramped up to the growth temperature of 750 °C. Ferrocene was dissolved into a xylene solvent in a 0.008:1 molar volume ratio. The xylene
Regional rainfall thresholds for landslide occurrence using a centenary database
NASA Astrophysics Data System (ADS)
Vaz, Teresa; Luís Zêzere, José; Pereira, Susana; Cruz Oliveira, Sérgio; Quaresma, Ivânia
2017-04-01
Rainfall is one of the most important triggering factors for landslide occurrence worldwide. The relation between rainfall and landslide occurrence is complex, and some approaches have focused on the identification of rainfall thresholds, i.e., critical rainfall values which, when exceeded, can initiate landslide activity. In line with these approaches, this work proposes and validates rainfall thresholds for the Lisbon region (Portugal), using a centenary landslide database associated with a centenary daily rainfall database. The main objectives of the work are: i) to compute antecedent rainfall thresholds using linear and potential regression; ii) to define lower-limit and upper-limit rainfall thresholds; iii) to estimate the probability of critical rainfall conditions associated with landslide events; and iv) to assess threshold performance using receiver operating characteristic (ROC) metrics. In this study we consider the DISASTER database, which lists landslides occurring in Portugal from 1865 to 2010 that caused fatalities, injuries, missing people, or evacuated and homeless people. The DISASTER database was compiled by exploring several Portuguese daily and weekly newspapers. Using the same newspaper sources, the database was recently updated to also include landslides that did not cause any human damage, and these were also considered for this study. The daily rainfall data were collected at the Lisboa-Geofísico meteorological station. This station was selected considering the quality and completeness of its rainfall data, with records starting in 1864. The methodology adopted included the computation, for each landslide event, of the cumulative antecedent rainfall for different durations (1 to 90 consecutive days). In a second step, for each rainfall quantity-duration combination, the return period was estimated using the Gumbel probability distribution.
The pair (quantity-duration) with the highest return period was considered the critical rainfall combination responsible for triggering the landslide event. Only events whose critical rainfall combinations have a return period above 3 years were included; this criterion reduces the likelihood of including events whose triggering factor was other than rainfall. The rainfall quantity-duration threshold for the Lisbon region was first defined using linear and potential regression. Considering that this threshold allows the existence of false negatives (i.e., events below the threshold), lower-limit and upper-limit rainfall thresholds were also identified. These limits were defined empirically by establishing the quantity-duration combinations below which no landslides were recorded (lower limit) and the quantity-duration combinations above which only landslides were recorded, without any false positives (upper limit). The zone between the lower-limit and upper-limit thresholds was analysed using a probabilistic approach, defining the uncertainty of each critical rainfall condition in the triggering of landslides. Finally, the performance of the thresholds obtained in this study was assessed using ROC metrics. This work was supported by the project FORLAND - Hydrogeomorphologic risk in Portugal: driving forces and application for land use planning [grant number PTDC/ATPGEO/1660/2014], funded by the Portuguese Foundation for Science and Technology (FCT), Portugal. Sérgio Cruz Oliveira is a post-doc fellow of the FCT [grant number SFRH/BPD/85827/2012].
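The Gumbel return-period step can be sketched as follows. Fitting by the method of moments is my assumption (the abstract does not state the fitting method), and the sample values are illustrative, not the Lisboa-Geofísico record.

```python
import math

def gumbel_fit(sample):
    """Method-of-moments Gumbel fit: location mu and scale beta."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi
    mu = mean - 0.5772 * beta        # Euler-Mascheroni correction
    return mu, beta

def return_period(x, mu, beta):
    """Return period T = 1 / (1 - F(x)) under the fitted Gumbel CDF."""
    F = math.exp(-math.exp(-(x - mu) / beta))
    return 1.0 / (1.0 - F)

# illustrative annual-maximum cumulative rainfall sample (mm)
mu, beta = gumbel_fit([80, 95, 60, 120, 75, 100, 85, 90, 70, 110])
```

For each landslide event, evaluating `return_period` over every quantity-duration pair and keeping the maximum reproduces the "critical rainfall combination" selection described above.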
NASA Astrophysics Data System (ADS)
Lima, R. S.; Nagy, D.; Marple, B. R.
2015-01-01
Different types of thermal spray systems, including HVOF (JP5000 and DJ2600-hybrid), APS (F4-MB and Axial III), and LPPS (Oerlikon Metco system), were employed to spray CoNiCrAlY bond coats (BCs) onto Inconel 625 substrates. The chemical composition of the BC powder was the same in all cases; however, the particle size distribution of the powder employed with each torch was that specifically recommended for the torch. For optimization purposes, these BCs were screened based on initial evaluations of roughness, porosity, residual stress, relative oxidation, and isothermal TGO growth. A single type of standard YSZ top coat was deposited via APS (F4-MB) on all the optimized BCs. The TBCs were thermally cycled in a furnace cycle test (FCT) (1080 °C for 1 h, followed by forced-air cooling). Samples were submitted to 10, 100, 400, and 1400 cycles, as well as being cycled to failure. The evolution of the microstructures, bond strength values (ASTM C633), and the TGO of these TBCs was investigated for the as-sprayed and thermally cycled samples. During FCT, the TBCs found to be both the best and the poorest performing had their BCs deposited via HVOF. The results showed that engineering low-oxidation BCs does not necessarily lead to optimal TBC performance. Moreover, the bond strength values decrease significantly only when the TBC is about to fail (top coat spall-off), and the as-sprayed bond strength values cannot be used as an indicator of TBC performance.
Energy Return on Investment - Fuel Recycle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halsey, W; Simon, A J; Fratoni, M
2012-06-06
This report provides a methodology and requisite data to assess the potential Energy Return On Investment (EROI) for nuclear fuel cycle alternatives, and applies that methodology to a limited set of used fuel recycle scenarios. This paper is based on a study by Lawrence Livermore National Laboratory and a parallel evaluation by AREVA Federal Services LLC, both of which were sponsored by the DOE Fuel Cycle Technologies (FCT) Program. The focus of the LLNL effort was to develop a methodology that can be used by the FCT program for such analysis that is consistent with the broader energy modeling community, and the focus of the AREVA effort was to bring industrial experience and operational data into the analysis. This cooperative effort successfully combined expertise from the energy modeling community with expertise from the nuclear industry. Energy Return on Investment is one of many figures of merit on which investment in a new energy facility or process may be judged. EROI is the ratio of the energy delivered by a facility divided by the energy used to construct, operate and decommission that facility. While EROI is not the only criterion used to make an investment decision, it has been shown that, in technologically advanced societies, energy supplies must exceed a minimum EROI. Furthermore, technological history shows a trend towards higher EROI energy supplies. EROI calculations have been performed for many components of energy technology: oil wells, wind turbines, photovoltaic modules, biofuels, and nuclear reactors. This report represents the first standalone EROI analysis of nuclear fuel reprocessing (or recycling) facilities.
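The EROI figure of merit defined above can be written down directly; the energy values below are illustrative placeholders, not numbers from the report.

```python
def eroi(energy_delivered, energy_construct, energy_operate, energy_decommission):
    """Energy Return On Investment: energy delivered divided by the total
    energy used to construct, operate and decommission the facility."""
    invested = energy_construct + energy_operate + energy_decommission
    return energy_delivered / invested

# e.g. a facility delivering 600 PJ over its life for 20 PJ of total inputs
print(eroi(600.0, 8.0, 10.0, 2.0))  # 30.0
```

Comparing such ratios across fuel cycle alternatives, with consistent system boundaries for what counts as "invested" energy, is the kind of analysis the report's methodology standardizes.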
NUVEM - New methods to Use gnss water Vapor Estimates for Meteorology of Portugal
NASA Astrophysics Data System (ADS)
Fernandes, R. M. S.; Viterbo, P.; Bos, M. S.; Martins, J. P.; Sá, A. G.; Valentim, H.; Jones, J.
2014-12-01
NUVEM (New methods to Use gnss water Vapor Estimates for Meteorology of Portugal) is a collaborative project funded by the Portuguese National Science Foundation (FCT) that aims to implement a multi-disciplinary approach to operationalize the inclusion of GNSS-PWV estimates in nowcasting in Portugal, namely in the preparation of severe weather warnings. To achieve this goal, the NUVEM project is divided into two major components: a) development and implementation of methods to compute accurate estimates of PWV (Precipitable Water Vapor) in NRT (Near Real-Time); b) integration of these estimates into nowcasting procedures in use at IPMA (Portuguese Meteorological Service). Methodologies will be optimized at SEGAL to passively and actively access the data; the PWV estimates will be computed using PPP (Precise Point Positioning), which permits estimating each station individually; solutions will be validated using internal and external values; and computed solutions will be transferred in a timely manner to the IPMA Operational Center. Validation of the derived estimates using robust statistics is an important component of the project. The need to send computed values to IPMA as soon as possible requires fast but reliable internal (e.g., noise estimation) and external (e.g., feedback from IPMA using other sensors like radiosondes) assessment of the quality of the PWV estimates. At IPMA, the goal is to implement the operational use of GNSS-PWV to assist weather nowcasting in Portugal. This will be done with the assistance of the Meteo group of IDL. Maps of GNSS-PWV will be automatically created and compared with solutions provided by other operational systems in order to help IPMA detect suspicious patterns in near real time. This will be the first step towards the assimilation of GNSS-PWV estimates in IPMA nowcasting models.
The NUVEM (EXPL/GEO-MET/0413/2013) project will also contribute to the active participation of Portugal at the COST Action ES1206 - Advanced Global Navigation Satellite Systems tropospheric products for monitoring severe weather events and climate (GNSS4SWEC). This work is also carried out in the framework of the Portuguese Project SMOG (PTDC/CTE-ATM/119922/2010).
Transaction Design Specification Medical Exam Databases System (MED) update Transaction
1986-12-01
[OCR-garbled field listing from the original document; recoverable fragments include coronary spasm site and coronary plaque site fields, an FCT diameter field, and serum/urine laboratory values such as ketosteroids, hydroxycorticosteroids, osmolality, copper, rheumatoid factor, antinuclear antibody, and free fatty acids.]
Relating precipitation to fronts on a sub-daily basis
NASA Astrophysics Data System (ADS)
Hénin, Riccardo; Ramos, Alexandre M.; Liberato, Margarida L. R.; Gouveia, Célia
2017-04-01
High impact events over Western Iberia include precipitation extremes that are cause for concern as they lead to flooding, landslides, extensive property damage and human casualties. These events are usually associated with low pressure systems over the North Atlantic moving eastward towards the European western coasts (Liberato and Trigo, 2014). A method to detect fronts and to associate amounts of precipitation with each front is tested, distinguishing between warm and cold fronts. The 6-hourly ERA-Interim 1979-2012 reanalysis with 1°x1° horizontal resolution is used for this purpose. An objective front identification method (the Thermal Method described in Schemm et al., 2014) is applied to locate fronts over the whole Northern Hemisphere, using equivalent potential temperature as the thermal parameter in the model. We then defined a square search box of tunable size (from 2 to 10 degrees across) to look for a front in the neighbourhood of each grid point affected by precipitation. A sensitivity analysis is performed and the optimal dimension of the box is assessed in order to avoid over- or underestimation of attributed precipitation. This is done in the light of the variability and typical dynamics of warm/cold frontal systems in the Western Europe region. Afterwards, using the extreme event ranking over Iberia proposed by Ramos et al. (2014), the top-ranked extreme events are selected in order to validate the method with specific case studies. Finally, climatological and trend maps of frontal activity are produced on both annual and seasonal scales. Trend maps show a decrease of frontal precipitation over north-western Europe and a slight increase over south-western Europe, mainly due to warm fronts. REFERENCES Liberato M.L.R. and R.M. Trigo (2014) Extreme precipitation events and related impacts in Western Iberia. Hydrology in a Changing World: Environmental and Human Dimensions. IAHS Red Book No 363, 171-176. ISSN: 0144-7815. Ramos A.M., R.M.
Trigo and M.L.R. Liberato (2014) A ranking of high-resolution daily precipitation extreme events for the Iberian Peninsula, Atmospheric Science Letters 15, 328-334. doi: 10.1002/asl2.507. Schemm S., I. Rudeva and I. Simmonds (2014) Extratropical fronts in the lower troposphere - global perspectives obtained from two automated methods. Quarterly Journal of the Royal Meteorological Society, 141: 1686-1698, doi: 10.1002/qj.2471. ACKNOWLEDGEMENTS This work is supported by FCT - project UID/GEO/50019/2013 - Instituto Dom Luiz. Fundação para a Ciência e a Tecnologia, Portugal (FCT) is also providing R. Hénin's doctoral grant (PD/BD/114479/2016) and A.M. Ramos' postdoctoral grant (FCT/DFRH/SFRH/BPD/84328/2012).
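The search-box attribution step described above can be sketched as follows. The grids, front masks, box half-width, and the even split of ambiguous points between warm and cold fronts are all illustrative assumptions of this sketch, not the study's actual implementation.

```python
def attribute_precip(precip, warm_front, cold_front, half_width):
    """Attribute gridded precipitation to warm or cold fronts found within a
    square search box of `half_width` grid cells around each wet grid point.

    precip: 2D list of precipitation amounts (mm); warm_front / cold_front:
    2D lists of booleans marking front locations on the same grid.
    Returns (warm_total, cold_total, unattributed_total).
    """
    ny, nx = len(precip), len(precip[0])
    warm = cold = unattributed = 0.0
    for j in range(ny):
        for i in range(nx):
            p = precip[j][i]
            if p <= 0:
                continue
            # All grid cells inside the search box, clipped at the domain edge.
            box = [(jj, ii)
                   for jj in range(max(0, j - half_width), min(ny, j + half_width + 1))
                   for ii in range(max(0, i - half_width), min(nx, i + half_width + 1))]
            has_warm = any(warm_front[jj][ii] for jj, ii in box)
            has_cold = any(cold_front[jj][ii] for jj, ii in box)
            if has_warm and not has_cold:
                warm += p
            elif has_cold and not has_warm:
                cold += p
            elif has_warm and has_cold:
                warm += p / 2   # both front types in the box: split evenly
                cold += p / 2   # (an assumption, not the paper's rule)
            else:
                unattributed += p
    return warm, cold, unattributed

# Toy 4x4 grid: rain at (0,0) near a warm front, rain at (3,3) far from any front.
precip = [[5.0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 3.0]]
warm_mask = [[False] * 4 for _ in range(4)]
warm_mask[0][1] = True
cold_mask = [[False] * 4 for _ in range(4)]
print(attribute_precip(precip, warm_mask, cold_mask, half_width=1))  # (5.0, 0.0, 3.0)
```

Enlarging `half_width` in such a scheme attributes more precipitation to fronts (risking overestimation), while shrinking it leaves more precipitation unattributed, which is the trade-off the paper's sensitivity analysis addresses.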
Exploring the Cigala/calibra Network Data Base for Ionosphere Monitoring Over Brazil
NASA Astrophysics Data System (ADS)
Vani, B. C.; Galera Monico, J. F.; Shimabukuro, M. H.; Pereira, V. A.; Aquino, M. H.
2013-12-01
The ionosphere in Brazil is strongly influenced by the equatorial anomaly; therefore, GNSS-based applications are widely affected by ionospheric disturbances. A network for continuous monitoring of the ionosphere has been deployed over its territory since February 2011, as part of the CIGALA and CALIBRA projects. Through CIGALA (Concept for Ionospheric Scintillation Mitigation for Professional GNSS in Latin America), which was funded by the European Commission (EC) in the framework of FP7-GALILEO-2009-GSA (European GNSS Agency), the first stations were deployed at Presidente Prudente, São Paulo state, in February 2011. CIGALA concluded in February 2012 with eight stations distributed over the Brazilian territory. Through CALIBRA (Countering GNSS high Accuracy applications Limitations due to Ionospheric disturbances in BRAzil), also funded by the European Commission, now in the framework of FP7-GALILEO-2011-GSA, new stations are being deployed. Some of the stations are being placed specifically according to geomagnetic considerations, aiming to support the development of a local scintillation and TEC model. CALIBRA started in November 2012 and will run for two years, focusing on the development of improved and new algorithms that can be applied to high-accuracy GNSS techniques in order to tackle the effects of ionospheric disturbances. PolaRxS PRO receivers, manufactured by Septentrio, have been deployed at all stations. This multi-GNSS receiver can collect data at rates of up to 100 Hz, providing ionospheric TEC, scintillation parameters like S4 and Sigma-Phi, and other signal metrics like locktime for all satellites and frequencies tracked. All collected data (raw and ionosphere monitoring records) are stored at a central facility located at the Faculdade de Ciências e Tecnologia da Universidade Estadual Paulista (FCT/UNESP) in Presidente Prudente.
To deal with the large amount of data, an analysis infrastructure has also been established in the form of a web-based software application named ISMR Query Tool, which provides the capability to identify specific behaviors of ionospheric activity through data visualization and data mining. Its web availability and user-specified features allow users to interact with the data through a simple internet connection, enabling them to gain insight into the ionosphere according to their own prior knowledge. Information about the network, the projects and the tool can be found at the FCT/UNESP Ionosphere web portal available at http://is-cigala-calibra.fct.unesp.br/. This contribution will provide an overview of results extracted using the monitoring and analysis infrastructure, explaining the possibilities offered by the ISMR Query Tool to support analysis of the ionosphere as well as the development of models and mitigation techniques to counter the effects of ionospheric disturbances on GNSS.
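The S4 amplitude scintillation index mentioned above is conventionally computed from signal-intensity samples within an observation window (often 60 s) as S4 = sqrt((<I^2> - <I>^2) / <I>^2). A minimal sketch of that standard formula; the sample window is synthetic, and any detrending the receivers apply is omitted here:

```python
import math

def s4_index(intensity):
    """S4 = sqrt((<I^2> - <I>^2) / <I>^2) over one observation window,
    i.e. the normalised standard deviation of signal intensity."""
    n = len(intensity)
    mean_i = sum(intensity) / n
    mean_i2 = sum(x * x for x in intensity) / n
    return math.sqrt((mean_i2 - mean_i ** 2) / mean_i ** 2)

print(s4_index([1.0, 1.0, 1.0, 1.0]))  # constant signal, no scintillation: 0.0
```

Under strong equatorial scintillation the intensity fluctuates heavily and S4 approaches (or exceeds) 1, which is why networks such as this one monitor it continuously.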
Abdelfattah, Nizar S; Al-Sheikh, Mayss; Pitetta, Sean; Mousa, Ahmed; Sadda, SriniVas R; Wykoff, Charles C
2017-02-01
To compare the enlargement rate of macular atrophy (ERMA) in eyes treated with ranibizumab monthly or using a treat-and-extend (TREX) regimen for neovascular age-related macular degeneration (AMD) or fellow control eyes, as well as to analyze risk factors for macular atrophy (MA) development and progression. Eighteen-month, multicenter, randomized, controlled clinical trial. Sixty patients with treatment-naïve neovascular AMD in 1 eye randomized 1:2 to monthly or TREX ranibizumab. Patients' study and fellow eyes were followed for 18 months using spectral-domain optical coherence tomography (SD OCT) and fundus autofluorescence (FAF) imaging. The MA was quantified on FAF images using Heidelberg Region Finder software (Heidelberg Engineering, Heidelberg, Germany), with suspected areas of atrophy confirmed by SD OCT and infrared reflectance imaging. For eyes without baseline MA that developed MA by 18 months, intervening visits were assessed to determine the first visit at which MA appeared, in order to define progression rates. Foveal choroidal thickness (FCT), subretinal hyperreflective material (SHRM), and pigment epithelial detachment (PED) were assessed at baseline to determine whether they influenced MA progression. Mean ERMA at 18 months. The relationship between visual acuity and MA, and the baseline risk factors for ERMA, were also assessed. The final analysis cohort included 88 eyes in 3 groups: monthly (n = 19), TREX (n = 30), and control fellow eyes (n = 39). Mean ERMA over 18 months was 0.39±0.67 (monthly), 1.1±1.9 (TREX), and 0.49±1 mm² (control; P = 0.12). Mean ERMA per group among the 40.9% (n = 36) of baseline patients with MA was 0.9±1, 1.9±2.2, and 1±1.3 mm², respectively (P = 0.31). The incidence rate of MA in the 3 groups was 40%, 0%, and 8.3%, respectively. The Mann-Whitney U test revealed a statistically significant effect of baseline FCT (127±46 vs. 155±55 μm, P = 0.01) and SHRM thickness (106±131 vs. 50±85 μm, P = 0.02) on MA.
In eyes with no baseline MA, presence of SHRM, SHRM thickness, presence of PED, PED thickness, and presence of baseline hemorrhage were all significant predictors of new MA development (P = 0.04, 0.01, 0.04, 0.004, and 0.01, respectively). Ranibizumab did not show a statistically significant influence on new MA development in eyes with neovascular AMD, whether dosed monthly or per TREX regimen. The FCT, SHRM thickness, and hemorrhage at baseline were all significant predictors of new MA. Copyright © 2016 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
Zangrillo, Amanda N; Fisher, Wayne W; Greer, Brian D; Owen, Todd M; DeSouza, Andresa A
2016-01-01
Previous research has supported functional communication training (FCT) as an effective intervention for reducing challenging behavior. Clinicians often program schedule-thinning procedures to increase the portability of the treatment (i.e., reinforcement is provided less frequently). For individuals with escape-maintained problem behavior, chained schedules have proven effective in increasing task completion and supplemental procedures may ameliorate reemergence of challenging behavior as access to reinforcement is decreased. The present study compared the use of a chained schedule-thinning procedure with and without alternative reinforcement (e.g., toys and activities) embedded in an intervention in which escape from the task is provided contingent on a request for a break. Two individuals with escape-maintained challenging behavior participated. We compared two treatment conditions, escape-only and escape-to-tangibles, using a single-subject, alternating treatments design with each treatment implemented in a distinct academic context. With the escape-to-tangibles treatment, we reached the final schedule in both contexts with both participants (4 successes out of 4 applications). We did not reach the final schedule with either participant with the escape-only intervention (0 successes out of 2 applications). The current results provided preliminary confirmation that providing positive plus negative reinforcement would decrease destructive behavior, increase compliance, and facilitate reinforcer-schedule thinning.
van Nieuwamerongen, S E; Mendl, M; Held, S; Soede, N M; Bolhuis, J E
2017-09-01
We studied the social and cognitive performance of piglets raised pre-weaning either in a conventional system with a sow in a farrowing crate (FC) or in a multi-suckling (MS) system in which 5 sows and their piglets could interact in a more physically enriched and spacious environment. After weaning at 4 weeks of age, 8 groups of 4 litter-mates per pre-weaning housing treatment were studied under equal and enriched post-weaning housing conditions. From each pen, one pair consisting of a dominant and a submissive pig was selected, based on a feed competition test (FCT) 2 weeks post-weaning. This pair was used in an informed forager test (IFT), which measured aspects of spatial learning and foraging strategies in a competitive context. During individual training, submissive (informed) pigs learned to remember a bait location in a testing arena with 8 buckets (the same bucket was baited in a search visit and a subsequent relocation visit), whereas dominant (non-informed) pigs always found the bait in a random bucket (search visits only). After learning their task, the informed pigs' individual search visit was followed by a pairwise relocation visit in which they were accompanied by the non-informed pig. Pre-weaning housing treatment had no distinct effect on the occurrence of aggression in the FCT or on learning performance during individual training in the IFT. During paired visits, informed and non-informed pigs changed their behaviour in response to being tested pairwise instead of individually, but MS and FC pigs showed few distinct behavioural differences.
Nakatsuka, Haruo; Chiba, Keiko; Watanabe, Takao; Sawatari, Hideyuki; Seki, Takako
2016-11-01
Iodine intake by adults in farming districts in Northeastern Japan was evaluated by two methods: (1) calculation based on government-approved food composition tables and (2) instrumental measurement. The correlation between these two values and a regression model for the calibration of calculated values are presented. Iodine intake was calculated, using the values in the Japan Standard Tables of Food Composition (FCT), through the analysis of duplicate samples of complete 24-h food consumption for 90 adult subjects. In cases where the value for iodine content was not available in the FCT, it was assumed to be zero for that food item (calculated values). Iodine content was also measured by ICP-MS (measured values). Calculated and measured values yielded geometric means (GM) of 336 and 279 μg/day, respectively. There was no statistically significant (p > 0.05) difference between calculated and measured values. The correlation coefficient was 0.646 (p < 0.05). With this high correlation coefficient, a simple regression line can be applied to estimate the measured value from the calculated value. A survey of the literature suggests that the values in this study were similar to values that have been reported to date for Japan, and higher than those for other countries in Asia. Iodine intake of Japanese adults was 336 μg/day (GM, calculated) and 279 μg/day (GM, measured). Both values correlated so well, with a correlation coefficient of 0.646, that a regression model (Y = 130.8 + 1.9479X, where X and Y are measured and calculated values, respectively) could be used to calibrate calculated values.
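Using the regression coefficients exactly as reported above (Y = 130.8 + 1.9479X, with X the measured and Y the calculated intake in μg/day), calibration is a matter of evaluating the line, and estimating a measured value from a calculated one amounts to inverting it. The inversion and the sample value are our illustration, not part of the study.

```python
def calculated_from_measured(x_measured):
    """The reported regression line: calculated = 130.8 + 1.9479 * measured."""
    return 130.8 + 1.9479 * x_measured

def measured_from_calculated(y_calculated):
    """Inverse of the reported line (our illustration): estimate a measured
    value from an FCT-based calculated value."""
    return (y_calculated - 130.8) / 1.9479

y = calculated_from_measured(279.0)           # hypothetical measured intake
print(round(y, 1))                            # 674.3
print(round(measured_from_calculated(y), 1))  # round-trips to 279.0
```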
Zapata Moya, Angel R; Navarro Yáñez, Clemente J
2017-03-01
Urban regeneration policies are area-based interventions addressing multidimensional problems. In this study, we analyse the impact of urban regeneration processes on the evolution of inequalities in mortality from certain causes. On the basis of Fundamental Cause Theory (FCT), our main hypothesis is that the impact of urban regeneration programmes will be observed more clearly for preventable causes of death, as these programmes imply a direct or indirect improvement to a whole range of 'flexible resources' that residents in the relevant areas have access to, and which ultimately may influence the inverse relationship between socioeconomic status and health. Using a quasi-experimental design and data from the Longitudinal Statistics on Survival and Longevity of Andalusia (Spain), we analyse differences in the evolution of standardised mortality ratios for preventable and less-preventable causes of premature death. This encompasses 59 neighbourhoods in 37 municipalities where urban regeneration projects were implemented in the last decade within the framework of three different programmes, and 59 counterparts where these policies were not implemented. As expected in line with FCT, there are no significant patterns in the evolution of internal differences in terms of less-preventable mortality. However, excess preventable mortality decreases strongly in the neighbourhoods with intervention programmes, specifically in those where two or more projects were in force. This is even more apparent for women. The urban regeneration policies studied seem to contribute to reducing health inequity when the interventions are more integral in nature. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Tampering with the turbulent energy cascade with polymer additives
NASA Astrophysics Data System (ADS)
Valente, Pedro; da Silva, Carlos; Pinho, Fernando
2014-11-01
We show that the strong depletion of the viscous dissipation in homogeneous viscoelastic turbulence reported by previous authors does not necessarily imply a depletion of the turbulent energy cascade. However, for large polymer relaxation times there is an onset of a polymer-induced kinetic energy cascade which competes with the non-linear energy cascade leading to its depletion. Remarkably, the total energy cascade flux from both cascade mechanisms remains approximately the same fraction of the kinetic energy over the turnover time as the non-linear energy cascade flux in Newtonian turbulence. The authors acknowledge the funding from COMPETE, FEDER and FCT (Grant PTDC/EME-MFE/113589/2009).
Structures and Algorithms in Stochastic Realization Theory and the Smoothing Problem
1980-01-01
w satisfying (2.2) and x_i in H(w) for all i) having y as its output is called a realization of y. Clearly, the components of x, y and w belong to H... x'(t) = F(t)x(t) + B(t)w(t); x(0) = 0; y(t) = H(t)x(t) + R(t)w(t), which clearly belongs to S. It can be immediately seen that the covariance... it is seen that the realization (2.37): x̄'(t) = F'(t)x̄(t) + B̄(t)w̄(t); x̄(T) = 0; y(t) = G'(t)x̄(t) + R̄(t)w̄(t) belongs to S. By Lemma 2.7
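A linear stochastic realization of the general form x'(t) = F(t)x(t) + B(t)w(t), y(t) = H(t)x(t) + R(t)w(t) can be simulated with a simple Euler discretisation. The scalar, time-invariant coefficients and the noise sequence below are synthetic illustrations, not taken from the report.

```python
import random

def simulate(f, b, h, r, w, dt, x0=0.0):
    """Drive the scalar state x'(t) = f*x + b*w with the noise sequence w
    and return the outputs y(t) = h*x + r*w at each step."""
    x = x0
    ys = []
    for wk in w:
        ys.append(h * x + r * wk)
        x += dt * (f * x + b * wk)  # Euler step for x' = f*x + b*w
    return ys

random.seed(0)
w = [random.gauss(0.0, 1.0) for _ in range(100)]  # synthetic white noise
y = simulate(f=-1.0, b=1.0, h=2.0, r=0.1, w=w, dt=0.01)
print(len(y))  # 100
```

With x0 = 0 the first output is purely the feedthrough term r*w[0], and the stable choice f < 0 keeps the state bounded, mirroring the stationarity assumptions such realizations typically carry.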
High-coercivity FePt nanoparticle assemblies embedded in silica thin films.
Yan, Q; Purkayastha, A; Singh, A P; Li, H; Li, A; Ramanujan, R V; Ramanath, G
2009-01-14
The ability to process assemblies using thin film techniques in a scalable fashion would be key to transmuting the assemblies into manufacturable devices. Here, we embed FePt nanoparticle assemblies into a silica thin film by sol-gel processing. Annealing the thin film composite at 650 °C transforms the chemically disordered fcc FePt phase into the fct phase, yielding magnetic coercivity values Hc > 630 mT. The positional order of the particles is retained due to the protection offered by the silica host. Such films with assemblies of high-coercivity magnetic particles are attractive for realizing new types of ultra-high-density data storage devices and magneto-composites.
Visual exploration and analysis of ionospheric scintillation monitoring data: The ISMR Query Tool
NASA Astrophysics Data System (ADS)
Vani, Bruno César; Shimabukuro, Milton Hirokazu; Galera Monico, João Francisco
2017-07-01
Ionospheric scintillations are rapid variations in the phase and/or amplitude of a radio signal as it passes through ionospheric plasma irregularities. The ionosphere is a specific layer of the Earth's atmosphere located approximately between 50 km and 1000 km above the Earth's surface. As Global Navigation Satellite Systems (GNSS) - such as GPS, Galileo, BDS and GLONASS - use radio signals, these variations degrade their positioning service quality. Due to its location, Brazil is one of the places most affected by scintillation in the world. For that reason, ionosphere monitoring stations have been deployed over Brazilian territory since 2011 through cooperative projects between several institutions in Europe and Brazil. Such monitoring stations compose a network that generates a large amount of monitoring data every day. GNSS receivers deployed at these stations - named Ionospheric Scintillation Monitor Receivers (ISMR) - provide scintillation indices and related signal metrics for available satellites dedicated to satellite-based navigation and positioning services. With this monitoring infrastructure, more than ten million observation values are generated and stored every day. Extracting the relevant information from this huge amount of data was a hard process and required the expertise of computer scientists and geoscientists. This paper describes the concepts, design and implementation aspects of the software that has been supporting research on ISMR data - the so-called ISMR Query Tool. Usability and other aspects are also presented via examples of application. This web-based software has been designed and developed to provide insight into the huge amount of ISMR data that is fetched every day on an integrated platform. The software applies and adapts time series mining and information visualization techniques to extend the possibilities of exploring and analyzing ISMR data.
The software is available to the scientific community through the World Wide Web, therefore constituting an analysis infrastructure that complements the monitoring one, providing support for researching ionospheric scintillation in the GNSS context. Interested researchers can access the functionalities without cost at http://is-cigala-calibra.fct.unesp.br/, upon online request to the Space Geodesy Study Group from UNESP - Univ Estadual Paulista at Presidente Prudente.
NASA Astrophysics Data System (ADS)
Soares, P. M. M.; Cardoso, R. M.
2017-12-01
Regional climate models (RCMs) are used at increasingly fine resolutions in pursuit of an improved representation of regional- to local-scale atmospheric phenomena. The EURO-CORDEX simulations at 0.11° and simulations exploiting finer grid spacing approaching the convection-permitting regime are representative examples. The climate runs are computationally very demanding and do not always show improvements; whether they do depends on the region, variable and object of study. The gain or loss associated with the use of higher resolution in relation to the forcing model (global climate model or reanalysis), or to coarser-resolution RCM simulations, is known as added value. Its characterization is a long-standing issue, and many different added-value measures have been proposed. In the current paper, a new method is proposed to assess the added value of finer-resolution simulations in comparison to their forcing data or coarser-resolution counterparts. This approach builds on a probability density function (PDF) matching score, giving a normalised measure of the difference between the PDFs at different resolutions, mediated by the observational ones. The distribution added value (DAV) is an objective added-value measure that can be applied to any variable, region or temporal scale, from hindcast or historical (non-synchronous) simulations. The DAV metric and an application to the EURO-CORDEX simulations, for daily temperatures and precipitation, are presented here. The EURO-CORDEX simulations at both resolutions (0.44°, 0.11°) display a clear added value in relation to ERA-Interim for precipitation, with values around 30% in summer and 20% in the intermediate seasons. When both RCM resolutions are directly compared, the added value is limited. The regions with the larger precipitation DAVs are areas where convection is relevant, e.g. the Alps and Iberia.
When looking at the tail of the extreme precipitation PDF, the improvement from the higher resolution is generally greater than that from the lower resolution across seasons and regions. For temperature, the added value is smaller. Acknowledgments: The authors wish to acknowledge the SOLAR (PTDC/GEOMET/7078/2014) and FCT UID/GEO/50019/2013 (Instituto Dom Luiz) projects.
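A PDF matching score of the kind described can be formed by summing, bin by bin, the overlap of two normalised histograms, and a distribution added value by comparing the high- and low-resolution scores against observations. The binning, the exact normalisation, and the histograms below are assumptions of this sketch rather than the paper's definitions.

```python
def matching_score(pdf_model, pdf_obs):
    """Overlap of two normalised histograms over common bins (1 = identical)."""
    return sum(min(m, o) for m, o in zip(pdf_model, pdf_obs))

def dav(pdf_hr, pdf_lr, pdf_obs):
    """Relative gain (%) of the high-resolution PDF over the low-resolution
    one, both measured against the observed PDF."""
    s_hr = matching_score(pdf_hr, pdf_obs)
    s_lr = matching_score(pdf_lr, pdf_obs)
    return 100.0 * (s_hr - s_lr) / s_lr

# Toy 3-bin histograms, each normalised to sum to 1.
obs = [0.20, 0.50, 0.30]
hr = [0.25, 0.45, 0.30]   # close to observations
lr = [0.50, 0.30, 0.20]   # further away
print(dav(hr, lr, obs))   # positive: the finer resolution adds value
```

A positive DAV indicates the finer-resolution PDF is closer to the observed one; a value near zero, as the paper reports when comparing the two RCM resolutions directly, means little distributional gain.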
Benchmarking a geostatistical procedure for the homogenisation of annual precipitation series
NASA Astrophysics Data System (ADS)
Caineta, Júlio; Ribeiro, Sara; Henriques, Roberto; Soares, Amílcar; Costa, Ana Cristina
2014-05-01
The European project COST Action ES0601, Advances in homogenisation methods of climate series: an integrated approach (HOME), has brought to attention the importance of establishing reliable homogenisation methods for climate data. In order to achieve that, a benchmark data set, containing monthly and daily temperature and precipitation data, was created to be used as a comparison basis for the effectiveness of those methods. Several contributions were submitted and evaluated by a number of performance metrics, validating the results against realistic inhomogeneous data. HOME also led to the development of new homogenisation software packages, which included feedback and lessons learned during the project. Preliminary studies have suggested a geostatistical stochastic approach, which uses Direct Sequential Simulation (DSS), as a promising methodology for the homogenisation of precipitation data series. Based on the spatial and temporal correlation between the neighbouring stations, DSS calculates local probability density functions at a candidate station to detect inhomogeneities. The purpose of the current study is to test and compare this geostatistical approach with the methods previously presented in the HOME project, using surrogate precipitation series from the HOME benchmark data set. The benchmark data set contains monthly precipitation surrogate series, from which annual precipitation data series were derived. These annual precipitation series were subject to exploratory analysis and to a thorough variography study. The geostatistical approach was then applied to the data set, based on different scenarios for the spatial continuity. Implementing this procedure also prompted the development of a computer program that aims to assist in the homogenisation of climate data while minimising user interaction.
Finally, in order to compare the effectiveness of this methodology with the homogenisation methods submitted during the HOME project, the obtained results were evaluated using the same performance metrics. This comparison opens new perspectives for the development of an innovative procedure based on the geostatistical stochastic approach. Acknowledgements: The authors gratefully acknowledge the financial support of "Fundação para a Ciência e Tecnologia" (FCT), Portugal, through the research project PTDC/GEO-MET/4026/2012 ("GSIMCLI - Geostatistical simulation with local distributions for the homogenization and interpolation of climate data").
Context change explains resurgence after the extinction of operant behavior
Trask, Sydney; Schepers, Scott T.; Bouton, Mark E.
2016-01-01
Extinguished operant behavior can return or “resurge” when a response that has replaced it is also extinguished. Typically studied in nonhuman animals, the resurgence effect may provide insight into relapse that is seen when reinforcement is discontinued following human contingency management (CM) and functional communication training (FCT) treatments, which both involve reinforcing alternative behaviors to reduce behavioral excess. Although the variables that affect resurgence have been studied for some time, the mechanisms through which they promote relapse are still debated. We discuss three explanations of resurgence (response prevention, an extension of behavioral momentum theory, and an account emphasizing context change) as well as studies that evaluate them. Several new findings from our laboratory concerning the effects of different temporal distributions of the reinforcer during response elimination and the effects of manipulating qualitative features of the reinforcer pose a particular challenge to the momentum-based model. Overall, the results are consistent with a contextual account of resurgence, which emphasizes that reinforcers presented during response elimination have a discriminative role controlling behavioral inhibition. Changing the “reinforcer context” at the start of testing produces relapse if the organism has not learned to suppress its responding under conditions similar to the ones that prevail during testing. PMID:27429503
Yildirim, Oktay; Gang, Tian; Kinge, Sachin; Reinhoudt, David N.; Blank, Dave H.A.; van der Wiel, Wilfred G.; Rijnders, Guus; Huskens, Jurriaan
2010-01-01
FePt nanoparticles (NPs) were assembled on aluminum oxide substrates, and their ferromagnetic properties were studied before and after thermal annealing. For the first time, phosph(on)ates were used as an adsorbate to form self-assembled monolayers (SAMs) on alumina to direct the assembly of NPs onto the surface. The Al2O3 substrates were functionalized with aminobutylphosphonic acid (ABP) or phosphonoundecanoic acid (PNDA) SAMs or with poly(ethyleneimine) (PEI) as a reference. FePt NPs assembled on all of these monolayers, but much less on unmodified Al2O3, which shows that ligand exchange at the NPs is the most likely mechanism of attachment. Proper modification of the Al2O3 surface and controlling the immersion time of the modified Al2O3 substrates into the FePt NP solution resulted in FePt NP assembly with controlled NP density. Alumina substrates were patterned by microcontact printing using aminobutylphosphonic acid as the ink, allowing local NP assembly. Thermal annealing under reducing conditions (96% N2/4% H2) led to a phase change of the FePt NPs from the disordered FCC phase to the ordered FCT phase. This resulted in ferromagnetic behavior at room temperature. Such a process can potentially be applied in the fabrication of spintronic devices. PMID:20480007
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dias, M F; Department of Radiation Oncology, Francis H. Burr Proton Therapy Center Massachusetts General Hospital; Seco, J
Purpose: Research in carbon imaging has been growing over the past years as a way to increase treatment accuracy and improve patient positioning in carbon therapy. The purpose of this tool is to allow a fast and flexible way to generate CDRR data without the need for Monte Carlo (MC) simulations. It can also be used to predict future clinically measured data. Methods: A python interface has been developed, which uses information from CT or 4DCT and the treatment calibration curve to compute the Water Equivalent Path Length (WEPL) of carbon ions. A GPU-based ray tracing algorithm computes the WEPL of each individual carbon ion traveling through the CT voxels. A multiple peak detection method to estimate high-contrast margin positioning has been implemented (described elsewhere). MC simulations have been used to simulate carbon depth-dose curves in order to simulate the response of a range detector. Results: The tool allows the upload of CT or 4DCT images. The user has the possibility to select the phase/slice of interest as well as the position, angle, etc. The WEPL is represented as a range detector, which can be used to assess range dilution and multiple peak detection effects. The tool also provides knowledge of the minimum energy that should be considered for imaging purposes. The multiple peak detection method has been used in a lung tumor case, showing an accuracy of 1 mm in determining the exact interface position. Conclusion: The tool offers an easy and fast way to simulate carbon imaging data. It can be used for educational and for clinical purposes, allowing the user to test beam energies and angles before real acquisition. An analysis add-on is being developed, where the user will have the opportunity to select different reconstruction methods and detector types (range or energy). Fundacao para a Ciencia e a Tecnologia (FCT), PhD Grant number SFRH/BD/85749/2012.
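The WEPL computation described in the Methods can be illustrated with a minimal sketch for a ray crossing a row of CT voxels at normal incidence. The HU-to-relative-stopping-power calibration used here is a hypothetical piecewise-linear curve, not the clinical calibration curve referenced in the abstract, and the actual tool traces oblique rays through 3-D voxel grids on the GPU.

```python
import numpy as np

def hu_to_rsp(hu):
    # hypothetical piecewise-linear calibration (HU -> relative stopping
    # power); clinical calibration curves are measured, not assumed like this
    return np.interp(hu, [-1000.0, 0.0, 1000.0], [0.0, 1.0, 1.5])

def wepl_along_row(hu_row, voxel_size_mm, calibration=hu_to_rsp):
    """Water-equivalent path length for a ray crossing a row of CT voxels
    at normal incidence: sum of (path length in voxel) x (relative
    stopping power of voxel)."""
    rsp = calibration(np.asarray(hu_row, dtype=float))
    return float(np.sum(rsp) * voxel_size_mm)
```

For water-like voxels (HU = 0) the WEPL equals the geometric path length, which is the sanity check usually applied to such tracers.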
NASA Astrophysics Data System (ADS)
Hwang, Byoungchul; Lee, Chang Gil; Lee, Tae-Ho
2010-01-01
The correlation of the microstructure and mechanical properties of thermomechanically processed low-carbon steels containing B and Cu was investigated in this study. Eighteen kinds of steel specimens were fabricated by varying B and Cu contents and finish cooling temperatures (FCTs) after controlled rolling, and then tensile and Charpy impact tests were conducted on them. Continuous cooling transformation (CCT) diagrams of the B-free and B-added steel specimens under nondeformed and deformed conditions were constructed by a combination of deformation dilatometry and metallographic methods. The addition of a very small amount of B remarkably decreased the transformation start temperatures near a bainite start temperature (Bs) and thus expanded the formation region of low-temperature transformation phases such as degenerate upper bainite (DUB) and lower bainite (LB) to slower cooling rates. On the other hand, a deformation in the austenite region promoted the formation of quasipolygonal ferrite (QPF) and granular bainite (GB) with an increase in transformation start temperatures. The tensile test results indicated that tensile strength primarily increased with decreasing FCT, while the yield strength did not vary much, except in some specimens. The addition of B and Cu, however, increased the tensile and yield strengths simultaneously because of the significant microstructural change occasionally affected by the FCT. The Charpy impact test results indicated that the steel specimens predominantly composed of LB and lath martensite (LM) had lower upper-shelf energy (USE) than those consisting of GB or DUB, but had nearly equivalent or rather lower ductile-to-brittle transition temperature (DBTT) in spite of the increased strength. 
According to the electron backscatter diffraction (EBSD) analysis data, it was confirmed that LB and LM microstructures had a relatively smaller effective grain size than GB or DUB microstructures, which enhanced the tortuosity of cleavage crack propagation, thereby resulting in a decrease in DBTT.
Structure and magnetism of epitaxially strained Pd(001) films on Fe(001): Experiment and theory
NASA Astrophysics Data System (ADS)
Fullerton, Eric E.; Stoeffler, D.; Ounadjela, K.; Heinrich, B.; Celinski, Z.; Bland, J. A. C.
1995-03-01
We present an experimental and theoretical description of the structure and magnetism of epitaxially strained Pd(001) films on Fe(001) and in Fe/Pd/Fe(001) trilayers. The structure is determined by combining reflection high-energy electron diffraction and x-ray diffraction. For Fe/Au(001) bilayers and Fe/Pd/Au(001) trilayers grown by molecular-beam epitaxy on Ag(001), the Fe and Au layers are well represented by their bulk structure, whereas thin Pd layers have a face-centered tetragonal structure with an in-plane expansion of 4.2% and an out-of-plane contraction of 7.2% (c/a=0.89). Theoretical ab initio studies of the interfacial structure indicate that the structural ground state of the epitaxially strained Pd layer is well described by a fct structure which maintains the bulk Pd atomic volume with small deviations at the interface. For Fe/Pd/Fe trilayers, the interlayer coupling oscillates with a period of 4 monolayers (ML) on a ferromagnetic background that crosses to weak antiferromagnetic coupling for thicknesses >12 ML of Pd. Strong ferromagnetic coupling observed below 5 ML of Pd indicates that 2 ML of Pd at each interface are ferromagnetically ordered. Theoretical studies of Fe3Pdn superlattices (where n is the number of Pd atomic layers) determine the polarization of the Pd layer and the interlayer magnetic coupling to depend strongly on the c/a ratio of the Pd layers. Modeling of a Pd layer with a constant-volume fct structure and one monolayer interfacial roughness finds that the first 2 ML of the Pd are polarized, in close agreement with the experimental results. Polarized neutron reflectivity results on an Fe(5.6 ML)/Pd(7 ML)/Au(20 ML) sample determine the average moment per Fe atom of 2.66+/-0.05μB. Calculations for the same structure show that this value is consistent with the induced Pd polarization.
Abreu, Ana; Oliveira, Mário; Silva Cunha, Pedro; Santa Clara, Helena; Portugal, Guilherme; Gonçalves Rodrigues, Inês; Santos, Vanessa; Morais, Luís; Selas, Mafalda; Soares, Rui; Branco, Luísa; Ferreira, Rui; Mota Carmo, Miguel
2017-10-01
The benefits of cardiac resynchronization therapy (CRT) documented in heart failure (HF) may be influenced by atrial fibrillation (AF). We aimed to compare CRT response in patients in AF and in sinus rhythm (SR). We prospectively studied 101 HF patients treated by CRT. Rates of clinical, echocardiographic and functional response, baseline NYHA class and variation, left ventricular ejection fraction, volumes and mass, atrial volumes, cardiopulmonary exercise test (CPET) duration (CPET dur), peak oxygen consumption (VO2max) and ventilatory efficiency (VE/VCO2 slope) were compared between AF and SR patients, before and at three and six months after implantation of a CRT device. All patients achieved ≥95% biventricular pacing, and 5.7% underwent atrioventricular junction ablation. Patients were divided into AF (n=35) and SR (n=66) groups; AF patients were older, with larger atrial volumes and lower CPET dur and VO2max before CRT. The percentages of clinical and echocardiographic responders were similar in the two groups, but there were more functional responders in the AF group (71% vs. 39% in SR patients; p=0.012). In SR patients, left atrial volume and left ventricular mass were significantly reduced (p=0.015 and p=0.021, respectively), whereas in AF patients, CPET dur (p=0.003) and VO2max (p=0.001; 0.083 age-adjusted) showed larger increases. Clinical and echocardiographic response rates were similar in SR and AF patients, with a better functional response in AF. Improvement in left ventricular function and volumes occurred in both groups, but left ventricular mass reduction and left atrial reverse remodeling were seen exclusively in SR patients (ClinicalTrials.gov identifier: NCT02413151; FCT code: PTDC/DES/120249/2010). Copyright © 2017 Sociedade Portuguesa de Cardiologia. Publicado por Elsevier España, S.L.U. All rights reserved.
A Coordinated Initialization Process for the Distributed Space Exploration Simulation (DSES)
NASA Technical Reports Server (NTRS)
Phillips, Robert; Dexter, Dan; Hasan, David; Crues, Edwin Z.
2007-01-01
This document describes the federate initialization process that was developed at the NASA Johnson Space Center with the H-II Transfer Vehicle Flight Controller Trainer (HTV FCT) simulations and refined in the Distributed Space Exploration Simulation (DSES). These simulations use the High Level Architecture (HLA) IEEE 1516 standard to provide the communication and coordination between the distributed parts of the simulation. The purpose of the paper is to describe a generic initialization sequence that can be used to create a federate that can: (1) properly initialize all HLA objects, object instances, interactions, and time management; (2) check for the presence of all federates; (3) coordinate startup with other federates; and (4) robustly initialize and share initial object instance data with other federates.
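The four-step start-up sequence can be illustrated with a toy coordination object. All names here are hypothetical; a real federate would perform these steps through the IEEE 1516 RTI ambassador services (declaration management, synchronization points, object instance registration) rather than through this in-memory stand-in.

```python
class Federation:
    """Toy stand-in for an HLA federation, illustrating the coordinated
    start-up pattern: wait until all required federates are present, then
    share initial object instance data."""

    def __init__(self, required):
        self.required = set(required)   # federate names that must join
        self.joined = set()
        self.initial_data = {}

    def join(self, name):
        # step 2: each federate announces its presence
        self.joined.add(name)

    def all_present(self):
        # step 3: start-up is coordinated on all required federates joining
        return self.required <= self.joined

    def publish_initial(self, name, data):
        # step 4: initial object instance data is shared only after the
        # whole federation has assembled
        assert self.all_present(), "start-up must wait for all federates"
        self.initial_data[name] = data
```

In a real HLA implementation the "all present" gate is typically realised with a registered synchronization point that every federate must achieve before the run advances.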
Texture formation in FePt thin films via thermal stress management
NASA Astrophysics Data System (ADS)
Rasmussen, P.; Rui, X.; Shield, J. E.
2005-05-01
The transformation variant of the fcc to fct transformation in FePt thin films was tailored by controlling the stresses in the thin films, thereby allowing selection of in- or out-of-plane c-axis orientation. FePt thin films were deposited at ambient temperature on several substrates with differing coefficients of thermal expansion relative to the FePt, which generated thermal stresses during the ordering heat treatment. X-ray diffraction analysis revealed preferential out-of-plane c-axis orientation for FePt films deposited on substrates with similar coefficients of thermal expansion, and random orientation for FePt films deposited on substrates with a very low coefficient of thermal expansion, which is consistent with theoretical analysis when considering residual stresses.
Integration of data-driven and physically-based methods to assess shallow landslides susceptibility
NASA Astrophysics Data System (ADS)
Lajas, Sara; Oliveira, Sérgio C.; Zêzere, José Luis
2016-04-01
Approaches used to assess shallow landslide susceptibility at the basin scale are conceptually different depending on whether statistical or deterministic methods are used. The data-driven methods rest on the assumption that the same causes are likely to produce the same effects; for that reason, a present/past landslide inventory and a dataset of assumed predisposing factors are crucial for the landslide susceptibility assessment. The physically-based methods describe a system controlled by physical laws and soil mechanics, in which the forces that tend to promote movement are compared with the forces that tend to resist it. In this case, the evaluation of susceptibility is supported by the calculation of the Factor of Safety (FoS) and depends on the availability of detailed data on the slope geometry and the hydrological and geotechnical properties of the soils and rocks. Within this framework, this work aims to test two hypotheses: (i) although conceptually distinct and based on contrasting procedures, statistical and deterministic methods generate similar shallow landslide susceptibility results in terms of predictive capacity and spatial agreement; and (ii) the integration of the shallow landslide susceptibility maps obtained with data-driven and physically-based methods for the same study area generates a more reliable susceptibility model for shallow landslide occurrence. To evaluate these two hypotheses, we selected the data-driven Information Value method and the physically-based Infinite Slope model to evaluate shallow landslides in the study area of the Monfalim and Louriceira basins (13.9 km2), located north of Lisbon (Portugal).
The landslide inventory is composed of 111 shallow landslides and was divided into two independent groups based on a temporal criterion (age ≤ 1983 and age > 1983): (i) the modelling group (51 cases) was used to define the weights of each predisposing factor (lithology, land use, slope, aspect, curvature, topographic position index and the slope over area ratio) for the Information Value method, and also to calibrate the strength parameters (cohesion and friction angle) of the different lithological units considered in the Infinite Slope model; and (ii) the validation group (60 cases) was used to independently validate and assess the predictive capacity of the shallow landslide susceptibility maps produced with the Information Value and Infinite Slope methods. The comparison of both landslide susceptibility maps was supported by: (i) the computation of Receiver Operating Characteristic (ROC) curves; (ii) the calculation of the Area Under the Curve (AUC); and (iii) the evaluation of the spatial agreement between the landslide susceptibility classes. Finally, the susceptibility maps produced with the Information Value and Infinite Slope methods are integrated into a single landslide susceptibility map based on a set of integration rules defined by cross-validation of the susceptibility classes of both maps and analysis of the corresponding contingency table. This work was supported by the FCT - Portuguese Foundation for Science and Technology within the framework of the FORLAND Project. Sérgio Oliveira was funded by a postdoctoral grant (SFRH/BPD/85827/2012) from the Portuguese Foundation for Science and Technology (FCT).
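The FoS calculation at the core of the Infinite Slope model can be sketched with the standard infinite-slope formulation for a planar soil column; the exact parameterization used in the study may differ, and the saturation term `m` here is the usual saturated fraction of the soil column.

```python
import math

def infinite_slope_fos(c, phi_deg, gamma, z, beta_deg, m=0.0, gamma_w=9.81):
    """Factor of Safety of an infinite slope (standard formulation).

    c        : effective cohesion (kPa)
    phi_deg  : effective friction angle (degrees)
    gamma    : soil unit weight (kN/m3)
    z        : soil depth measured vertically (m)
    beta_deg : slope angle (degrees)
    m        : saturated fraction of the soil column (0..1)
    """
    phi = math.radians(phi_deg)
    beta = math.radians(beta_deg)
    # resisting: cohesion + effective normal stress times tan(phi)
    num = c + (gamma - m * gamma_w) * z * math.cos(beta) ** 2 * math.tan(phi)
    # driving: shear stress along the slip surface
    den = gamma * z * math.sin(beta) * math.cos(beta)
    return num / den
```

For a dry, cohesionless soil this reduces to FoS = tan(phi)/tan(beta), the classical check: the slope is at limit equilibrium when the slope angle equals the friction angle.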
Multidimensional FEM-FCT schemes for arbitrary time stepping
NASA Astrophysics Data System (ADS)
Kuzmin, D.; Möller, M.; Turek, S.
2003-05-01
The flux-corrected-transport paradigm is generalized to finite-element schemes based on arbitrary time stepping. A conservative flux decomposition procedure is proposed for both convective and diffusive terms. Mathematical properties of positivity-preserving schemes are reviewed. A nonoscillatory low-order method is constructed by elimination of negative off-diagonal entries of the discrete transport operator. The linearization of source terms and extension to hyperbolic systems are discussed. Zalesak's multidimensional limiter is employed to switch between linear discretizations of high and low order. A rigorous proof of positivity is provided. The treatment of non-linearities and iterative solution of linear systems are addressed. The performance of the new algorithm is illustrated by numerical examples for the shock tube problem in one dimension and scalar transport equations in two dimensions.
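The switch between the high- and low-order discretizations is governed by Zalesak's limiter. A minimal 1-D sketch of the limiting step follows; the cell-bound and flux conventions are illustrative, and the FEM-FCT scheme of the paper applies the same idea to edge contributions of the finite-element transport operator.

```python
import numpy as np

def fct_limit(u_low, flux, u_min, u_max):
    """Zalesak's limiter in 1-D: scale antidiffusive face fluxes so the
    corrected solution stays within [u_min, u_max] in every cell.
    flux[i] is the antidiffusive flux through the face between cells i
    and i+1, positive meaning transport from cell i to cell i+1."""
    n = u_low.size
    Pp = np.zeros(n)                       # total flux into each cell
    Pm = np.zeros(n)                       # total flux out of each cell
    for i in range(n - 1):
        if flux[i] > 0:
            Pm[i] += flux[i]; Pp[i + 1] += flux[i]
        else:
            Pp[i] -= flux[i]; Pm[i + 1] -= flux[i]
    Rp = np.ones(n)                        # admissible fractions
    Rm = np.ones(n)
    for i in range(n):
        if Pp[i] > 0:
            Rp[i] = min(1.0, (u_max - u_low[i]) / Pp[i])
        if Pm[i] > 0:
            Rm[i] = min(1.0, (u_low[i] - u_min) / Pm[i])
    limited = np.empty_like(flux)
    for i in range(n - 1):
        c = min(Rp[i + 1], Rm[i]) if flux[i] > 0 else min(Rp[i], Rm[i + 1])
        limited[i] = c * flux[i]
    return limited
```

A flux that would push a cell below its lower bound or above its upper bound is cancelled entirely, while fluxes compatible with the bounds pass through unchanged; this is the mechanism behind the positivity proof mentioned in the abstract.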
NASA Astrophysics Data System (ADS)
Ramos, Alexandre M.; Trigo, Ricardo M.; Liberato, Margarida LR
2014-05-01
Extreme precipitation events in the Iberian Peninsula during the extended winter months have major socio-economic impacts such as floods, landslides, extensive property damage and life losses. These events are usually associated with low pressure systems with Atlantic origin, although some extreme events in summer/autumn months can be linked to Mediterranean low pressure systems. Quite often these events are evaluated on a casuistic base and making use of data from relatively few stations. An objective method for ranking daily precipitation events is presented here based on the extensive use of the most comprehensive database of daily gridded precipitation available for the Iberian Peninsula (IB02) and spanning from 1950 to 2008, with a resolution of 0.2° (approximately 16 x 22 km at latitude 40°N), for a total of 1673 pixels. This database is based on a dense network of rain gauges, combining two national data sets, 'Spain02' for peninsular Spain and Balearic islands, and 'PT02' for mainland Portugal, with a total of more than two thousand stations over Spain and four hundred stations over Portugal, all quality-controlled and homogenized. Through this objective method for ranking daily precipitation events the magnitude of an event is obtained after considering the area affected as well as its intensity in every grid point and taking into account the daily precipitation normalised departure from climatology. Different precipitation rankings are presented considering the entire Iberian Peninsula, Portugal and also the six largest river basins in the Iberian Peninsula. Atmospheric Rivers (AR) are the water vapour (WV) core section of the broader warm conveyor belt occurring over the oceans along the warm sector of extra-tropical cyclones. They are usually W-E oriented steered by pre-frontal low level jets along the trailing cold front and subsequently feed the precipitation in the extra-tropical cyclones. 
They are relatively narrow regions of concentrated WV responsible for horizontal transport in the lower atmosphere. It has been shown that more than 90% of the meridional WV transport in the mid-latitudes occurs in ARs, although they cover less than 10% of the area of the globe. The large amount of WV that is transported can lead to heavy precipitation and floods. In this work we use an automated AR detection algorithm for the North Atlantic Ocean Basin to identify the major AR events that affected the Iberian Peninsula, based on the NCEP/NCAR reanalysis. The two databases (extreme precipitation events and ARs) are analysed together in order to study ARs in detail in the North Atlantic Basin and, additionally, their relationship with precipitation-related events in the Iberian Peninsula. Results confirm the significant link between these phenomena, as the top 20 days of the ranking of precipitation anomalies for the Iberian Peninsula include 19 days that are clearly related to AR events. This work was partially supported by FEDER (Fundo Europeu de Desenvolvimento Regional) funds through COMPETE (Programa Operacional Factores de Competitividade) and by national funds through FCT (Fundação para a Ciência e a Tecnologia, Portugal) under project STORMEx FCOMP-01-0124-FEDER-019524 (PTDC/AAC-CLI/121339/2010). A. M. Ramos was also supported by an FCT postdoctoral grant (FCT/DFRH/SFRH/BPD/84328/2012).
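The objective ranking idea, an event magnitude built from the area affected and the normalised departure from climatology at each grid point, can be sketched as follows. This is one simple realization for illustration; the exact weighting of the published method may differ.

```python
import numpy as np

def event_magnitude(daily_precip, clim_mean, clim_std):
    """Magnitude of one day's precipitation event over a grid:
    (fraction of grid points with above-normal precipitation) times
    (mean normalised departure over those points).

    daily_precip, clim_mean, clim_std : arrays over the grid
    """
    z = (daily_precip - clim_mean) / clim_std      # normalised departure
    affected = z > 0
    if not affected.any():
        return 0.0
    return affected.mean() * z[affected].mean()
```

Computing this magnitude for every day of the record and sorting descending yields the kind of ranking the abstract describes, which can equally be restricted to the grid points of a single river basin.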
Land degradation and improvement trends over Iberia in the last three decades
NASA Astrophysics Data System (ADS)
Gouveia, Célia M.; Páscoa, Patrícia; Russo, Ana; Trigo, Ricardo
2017-04-01
Land degradation and desertification are recognised as an important environmental and social problem in arid and semiarid regions, particularly within a climate change context. In the last three decades the entire Mediterranean basin has been affected by more frequent droughts, covering large sectors and often lasting several months. Simultaneously, the stress imposed by land management practices, such as land abandonment and intensification, highlights the need for continuous monitoring and early detection of degradation. The Normalized Difference Vegetation Index (NDVI) from the GIMMS dataset was used as an indicator of land degradation or improvement over Iberia between 1982 and 2012. The precipitation influence on NDVI was first removed, and negative/positive trends of the resulting residuals were taken to indicate land degradation/improvement. Overall, the Iberian Peninsula is dominated by widespread land improvement, with only a few hot spots of land degradation located in the central and southern sectors and also on the east Mediterranean and Atlantic coasts. Less than 20% of the area presenting land degradation is located within regions where land cover changes were observed, the new land cover types being associated with transitional woodland-shrub, permanent and annual crops and permanently irrigated areas. Although they represent a very small fraction, the land degradation pixels are mainly located in semi-arid regions. The monotonic changes and trend shifts present in the NDVI dataset were also assessed. The major shifts in vegetation trends and the corresponding years of occurrence were associated with the main disturbances observed in Iberia, namely the major wildfire seasons in Portugal, as well as land abandonment and the new agricultural practices that resulted from the construction of new dams. The results obtained provide a new outlook on the real nature of degradation or improvement of vegetation in Iberia in the last three decades.
Acknowledgements: This work was partially supported by national funds through FCT (Fundação para a Ciência e a Tecnologia, Portugal) under project IMDROFLOOD (WaterJPI/0004/2014). Ana Russo thanks FCT for granted support (SFRH/BPD/99757/2014).
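The residual-trend logic, removing the linear precipitation influence from NDVI and reading the sign of the remaining trend, can be sketched for a single pixel as follows. This is a minimal RESTREND-style stand-in under assumed linear relationships, not the study's exact procedure.

```python
import numpy as np

def residual_trend(ndvi, precip):
    """Slope of NDVI residuals after removing the linear precipitation
    influence. A negative slope is read as land degradation, a positive
    slope as improvement.

    ndvi, precip : 1-D arrays, one value per year, for one pixel
    """
    # remove the precipitation influence by linear regression
    a, b = np.polyfit(precip, ndvi, 1)
    resid = ndvi - (a * precip + b)
    # trend of the residuals over time
    years = np.arange(ndvi.size, dtype=float)
    slope, _ = np.polyfit(years, resid, 1)
    return slope
```

Applying this pixel by pixel, together with a significance test on the slope, produces the degradation/improvement maps described in the abstract.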
Vegetation fire proneness in Europe
NASA Astrophysics Data System (ADS)
Pereira, Mário; Aranha, José; Amraoui, Malik
2015-04-01
Fire selectivity has been studied for vegetation classes in terms of fire frequency and fire size in a few European regions. This analysis is often performed along with other landscape variables such as topography and distance to roads and towns. These studies aim to assess landscape sensitivity to forest fires in peri-urban areas and land cover changes, and to define landscape management guidelines and policies based on the relationships between landscape and fires in the Mediterranean region. Therefore, the objectives of this study include: (i) the analysis of the spatial and temporal variability statistics within Europe; (ii) the identification and characterization of the vegetated land cover classes affected by fires; and (iii) the proposal of a fire proneness index. The datasets used in the present study comprise: Corine Land Cover (CLC) maps for 2000 and 2006 (CLC2000, CLC2006) and burned area (BA) perimeters, from 2000 to 2013 in Europe, provided by the European Forest Fire Information System (EFFIS). The CLC is part of the European Commission programme to COoRdinate INformation on the Environment (Corine) and provides consistent, reliable and comparable information on land cover across Europe. Both the CLC and EFFIS datasets were combined using geostatistics and Geographical Information System (GIS) techniques to assess the spatial and temporal evolution of the types of shrubs and forest affected by fires. The results obtained confirm the usefulness and efficiency of the land cover classification scheme and of the fire proneness index, which allows the propensity of vegetation classes and countries to fire to be quantified and compared. As expected, differences between northern and southern Europe are marked with respect to land cover distribution, fire incidence and the fire proneness of vegetation cover classes.
This work was supported by national funds by FCT - Portuguese Foundation for Science and Technology, under the project PEst-OE/AGR/UI4033/2014 and by the project SUSTAINSYS: Environmental Sustainable Agro-Forestry Systems (NORTE-07-0124-FEDER-000044), financed by the North Portugal Regional Operational Programme (ON.2 - O Novo Norte), under the National Strategic Reference Framework (QREN), through the European Regional Development Fund (FEDER), as well as by National Funds (PIDDAC) through the Portuguese Foundation for Science and Technology (FCT/MEC).
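A fire proneness index of the general kind proposed here can be sketched as the ratio of a class's share of total burned area to its share of total vegetated area, so that values above 1 indicate preferential burning. This definition is an assumption made for illustration and is not necessarily the index defined in the study.

```python
def fire_proneness(burned_by_class, area_by_class):
    """Fire proneness per land-cover class: share of total burned area
    divided by share of total vegetated area. Values > 1 mean the class
    burns more than expected from its availability alone.

    burned_by_class, area_by_class : dicts keyed by land-cover class
    """
    total_burned = sum(burned_by_class.values())
    total_area = sum(area_by_class.values())
    return {cls: (burned_by_class[cls] / total_burned)
                 / (area_by_class[cls] / total_area)
            for cls in burned_by_class}
```

Computed per country, such an index allows the cross-country comparisons of vegetation-class fire propensity that the abstract mentions.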
Water Quality Monitoring of Inland Waters using MERIS data
NASA Astrophysics Data System (ADS)
Potes, M.; Costa, M. J.; Salgado, R.; Le Moigne, P.
2012-04-01
The successful launch of ENVISAT in March 2002 has given a great opportunity to understand the optical changes of water surfaces, including inland waters such as lakes and reservoirs, through the use of the Medium Resolution Imaging Spectrometer (MERIS). The potential of this instrument to describe variations of optically active substances has been examined in the Alqueva reservoir, located in the south of Portugal, where satellite spectral radiances are corrected for atmospheric effects to obtain the surface spectral reflectance. In order to validate this spectral reflectance, several field campaigns were carried out with a portable spectroradiometer during the satellite overpass. The retrieved lake surface spectral reflectance was combined with limnological laboratory data and, with the resulting algorithms, spatial maps of biological quantities and turbidity were obtained, allowing for the monitoring of these water quality indicators. In the framework of the recent THAUMEX 2011 field campaign performed in the Thau lagoon (southeast of France), in-water radiation, surface irradiation and reflectance measurements were taken with a portable spectrometer in order to test the methodology described above. At the same time, water samples were collected for laboratory analysis. The two cases present different results related to geographic position, water composition, surrounding environment, resource exploitation, etc. Acknowledgements: This work is financed through FCT grant SFRH/BD/45577/2008 and through FEDER (Programa Operacional Factores de Competitividade - COMPETE) and national funding through FCT - Fundação para a Ciência e a Tecnologia in the framework of projects FCOMP-01-0124-FEDER-007122 (PTDC / CTE-ATM / 65307 / 2006) and FCOMP-01-0124-FEDER-009303 (PTDC/CTE-ATM/102142/2008). Image data has been provided by ESA in the frame of ENVISAT projects AOPT-2423 and AOPT-2357. We thank AERONET investigators for their effort in establishing and maintaining the Évora AERONET site.
We also thank the Water Laboratory of the University of Évora for support in the field campaigns, EDIA and IFREMER (Institut Français de Recherche pour le Exploitation de la Mer) for providing the water quality data used in this work.
Bobryshev, Y V; Killingsworth, M C; Lord, R S A; Grabs, A J
2008-10-01
Plaque rupture is the most common type of plaque complication and leads to acute ischaemic events such as myocardial infarction and stroke. Calcification has been suggested as a possible indicator of plaque instability. Although the role of matrix vesicles in the initial stages of arterial calcification has been recognized, no studies have yet been carried out to examine a possible role of matrix vesicles in plaque destabilization. Tissue specimens selected for the present study represented carotid specimens obtained from patients undergoing carotid endarterectomy. Serial frozen cross-sections of the tissue specimens were cut and mounted on glass slides. The thickness of the fibrous cap (FCT) in each advanced atherosclerotic lesion, containing a well developed lipid/necrotic core, was measured at its narrowest sites in sets of serial sections. According to established criteria, atherosclerotic plaque specimens were histologically subdivided into two groups: vulnerable plaques with thin fibrous caps (FCT <100 microm) and presumably stable plaques, in which fibrous caps were thicker than 100 microm. Twenty-four carotid plaques (12 vulnerable and 12 presumably stable plaques) were collected for the present analysis of matrix vesicles in fibrous caps. In order to provide a sufficient number of representative areas from each plaque, laser capture microdissection (LCM) was carried out. The quantification of matrix vesicles in ultrathin sections of vulnerable and stable plaques revealed that the numbers of matrix vesicles were significantly higher in fibrous caps of vulnerable plaques than those in stable plaques (8.908±0.544 versus 6.208±0.467 matrix vesicles per 1.92 microm2 standard area; P=0.0002). Electron microscopy combined with X-ray elemental microanalysis showed that some matrix vesicles in atherosclerotic plaques were undergoing calcification and were characterized by a high content of calcium and phosphorus.
The percentage of calcified matrix vesicles/microcalcifications was significantly higher in fibrous caps in vulnerable plaques compared with that in stable plaques (6.705±0.436 versus 5.322±0.494; P=0.0474). The findings reinforce the view that the texture of the extracellular matrix in the thinning fibrous cap of atherosclerotic plaque is altered and this might contribute to plaque destabilization.
A 2D-3D strategy for resolving tsunami-generated debris flow in urban environments
NASA Astrophysics Data System (ADS)
Birjukovs Canelas, Ricardo; Conde, Daniel; Garcia-Feal, Orlando; João Telhado, Maria; Ferreira, Rui M. L.
2017-04-01
The incorporation of solids, either sediment from the natural environment or remains from buildings or infrastructures, is a relevant feature of tsunami run-up in urban environments, greatly increasing the destructive potential of tsunami propagation. Two-dimensional (2D) models have been used to assess the propagation of the bore, even in dense urban fronts. Computational advances are introduced in this work, namely a fully Lagrangian, 3D description of the fluid-solid flow, coupled with a high performance meshless implementation capable of dealing with large domains and fine discretizations. A Smoothed Particle Hydrodynamics (SPH) Navier-Stokes discretization and a Distributed Contact Discrete Element Method (DCDEM) description of solid-solid interactions provide a state-of-the-art fluid-solid flow description. Together with support for arbitrary geometries, centimetre-scale resolution simulations of a city section in Lisbon downtown are presented. 2D results are used as boundary conditions for the 3D model, characterizing the incoming wave as it approaches the coast. It is shown that the incoming bore is able to mobilize and incorporate standing vehicles and other urban hardware. Such a fully featured simulation provides an explicit description of the interactions among the fluid, floating debris (vehicles and urban furniture), the buildings and the pavement. The proposed model presents both an innovative research tool for the study of these flows and a powerful and robust approach to study, design and test mitigation solutions at the local scale. At the same time, due to the high time and space resolution of these methodologies, new questions are raised: scenario-building and initial configurations play a crucial role but they do not univocally determine the final configuration of the simulation, as the solution of the Navier-Stokes equations for high Reynolds numbers possesses a high number of degrees of freedom.
This calls for conducting the simulations in a statistical framework, involving both initial-condition generation and interpretation of results, which is only attainable under very high standards of computational efficiency. This research was partially supported by Portuguese and European funds, within programs COMPETE2020 and PORL-FEDER, through project PTDC/ECM-HID/6387/2014 granted by the National Foundation for Science and Technology (FCT).
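The SPH side of the fluid description rests on kernel-weighted sums over neighbouring particles. A minimal, hypothetical sketch of the SPH density summation (1D, cubic-spline kernel; the kernel choice and parameters are illustrative assumptions, not the authors' implementation):

```python
def cubic_spline_kernel(r, h):
    """Standard 1D cubic-spline SPH kernel with smoothing length h."""
    q = r / h
    sigma = 2.0 / (3.0 * h)  # 1D normalization constant
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    elif q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0  # compact support: particles farther than 2h do not contribute

def sph_density(positions, masses, h):
    """Density at each particle as a kernel-weighted sum over all particles."""
    return [sum(m * cubic_spline_kernel(abs(xi - xj), h)
                for xj, m in zip(positions, masses))
            for xi in positions]

# Uniformly spaced particles: recovered interior density should equal m / dx = 1.0
positions = [0.1 * i for i in range(50)]
masses = [0.1] * 50
rho = sph_density(positions, masses, h=0.1)
```

In a production code the all-pairs sum would be replaced by a neighbour search, which is where the "large domains and fine discretizations" performance claim comes in.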
Tavares, Renata S.; Mansell, Steven; Barratt, Christopher L.R.; Wilson, Stuart M.; Publicover, Stephen J.; Ramalho-Santos, João
2013-01-01
STUDY QUESTION Is the environmental endocrine disruptor p,p′-dichlorodiphenyldichloroethylene (p,p′-DDE) able to induce non-genomic changes in human sperm and consequently affect functional sperm parameters? SUMMARY ANSWER p,p′-DDE promoted Ca2+ flux into human sperm by activating CatSper channels even at doses found in human reproductive fluids, ultimately compromising sperm parameters important for fertilization. WHAT IS KNOWN ALREADY p,p′-DDE may promote non-genomic actions and interact directly with pre-existing signaling pathways, as already observed in other cell types. However, although often found in both male and female reproductive fluids, its effects on human spermatozoa function are not known. STUDY DESIGN, SIZE, DURATION Normozoospermic sperm samples from healthy individuals were included in this study. Samples were exposed to several p,p′-DDE concentrations for 3 days at 37°C and 5% CO2 in vitro to mimic the putative continuous exposure to this toxicant in the female reproductive tract in vivo. Shorter p,p′-DDE incubation periods were also performed in order to monitor sperm rapid Ca2+ responses. All experiments were repeated on a minimum of five sperm samples from different individuals. PARTICIPANTS/MATERIALS, SETTING, METHODS All healthy individuals were recruited at the Biosciences School, University of Birmingham, the Medical Research Institute, University of Dundee and in the Human Reproduction Service at University Hospitals of Coimbra. Intracellular Ca2+ concentration ([Ca2+]i) was monitored by imaging single spermatozoa loaded with Oregon Green BAPTA-1AM and further whole-cell patch-clamp recordings were performed to validate our results. Sperm viability and acrosomal integrity were assessed using the LIVE/DEAD sperm vitality kit and the acrosomal content marker PSA-FITC, respectively. 
MAIN RESULTS AND THE ROLE OF CHANCE p,p′-DDE rapidly increased [Ca2+]i (P < 0.05) even at extremely low doses (1 pM and 1 nM), with magnitudes of response up to 200%, without affecting sperm viability, except after 3 days of continuous exposure to the highest concentration tested (P < 0.05). Furthermore, experiments performed in a low-Ca2+ medium demonstrated that extracellular Ca2+ influx was responsible for this Ca2+ increase (P < 0.01). Mibefradil and NNC 55-0396, both inhibitors of the sperm-specific CatSper channel, reversed the p,p′-DDE-induced [Ca2+]i rise, suggesting the participation of CatSper in this process (P < 0.05). In fact, whole-cell patch-clamp recordings confirmed CatSper as a target of p,p′-DDE action, showing an increase in CatSper currents of >100% (P < 0.01). Finally, acrosomal integrity was adversely affected after 2 days of exposure to p,p′-DDE, suggesting that the [Ca2+]i rise may cause a premature acrosome reaction (P < 0.05). LIMITATIONS, REASONS FOR CAUTION This is an in vitro study, and caution must be taken when extrapolating the results. WIDER IMPLICATIONS OF THE FINDINGS A novel non-genomic p,p′-DDE mechanism specific to sperm is shown in this study. p,p′-DDE was able to induce a [Ca2+]i rise in human sperm through the opening of CatSper, consequently compromising male fertility. The promiscuous nature of CatSper activation may predispose human sperm to the action of some persistent endocrine disruptors. STUDY FUNDING/COMPETING INTEREST(S) The study was supported by both the Portuguese National Science Foundation (FCT; PEst-C/SAU/LA0001/2011) and the UK Wellcome Trust (Grant #86470). SM was supported by the Infertility Research Trust. RST is a recipient of a PhD fellowship from FCT (SFRH/BD/46002/2008). None of the authors has any conflict of interest to declare. PMID:24067601
Material Recovery and Waste Form Development--2016 Accomplishments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Todd, Terry A.; Vienna, John; Paviet, Patricia
The Material Recovery and Waste Form Development (MRWFD) Campaign under the U.S. Department of Energy (DOE) Fuel Cycle Technologies (FCT) Program is responsible for developing advanced separation and waste form technologies to support the various fuel cycle options defined in the DOE Nuclear Energy Research and Development Roadmap, Report to Congress (April 2010). This MRWFD accomplishments report summarizes the results of the research and development (R&D) efforts performed within MRWFD in Fiscal Year (FY) 2016. Each section of the report contains an overview of the activities, results, technical point of contact, applicable references, and documents produced during the FY. This report briefly outlines campaign management and integration activities but primarily focuses on the many technical accomplishments of FY 2016. The campaign continued to use an engineering-driven, science-based approach to maintain relevance and focus.
International Collaboration Activities on Engineered Barrier Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jove-Colon, Carlos F.
The Used Fuel Disposition Campaign (UFDC) within the DOE Fuel Cycle Technologies (FCT) program has been engaging in international collaborations between repository R&D programs for high-level waste (HLW) disposal to leverage the knowledge and laboratory/field data on near- and far-field processes gathered from experiments at underground research laboratories (URLs). Heater test experiments at URLs provide a unique opportunity to mimic and study the thermal effects of heat-generating nuclear waste in subsurface repository environments. Various configurations of these experiments have been carried out at various URLs according to the disposal design concepts of the hosting country's repository program. The FEBEX (Full-scale Engineered Barrier Experiment in Crystalline Host Rock) project is a large-scale heater test experiment originated by the Spanish radioactive waste management agency (Empresa Nacional de Residuos Radiactivos S.A. – ENRESA) at the Grimsel Test Site (GTS) URL in Switzerland. The project was subsequently managed by CIEMAT. FEBEX-DP is a concerted effort of various international partners working on the evaluation of sensor data and characterization of samples obtained during the course of this field test and its subsequent dismantling. The main purpose of these field-scale experiments is to evaluate the feasibility of creating an engineered barrier system (EBS) with a horizontal configuration according to the Spanish concept of deep geological disposal of high-level radioactive waste in crystalline rock. Another key aspect of this project is to improve the knowledge of coupled processes such as thermal-hydro-mechanical (THM) and thermal-hydro-chemical (THC) processes operating in the near-field environment. The focus of these efforts is on model development and validation of predictions through model implementation in computational tools to simulate coupled THM and THC processes.
Phase investigation in Pt supported off-stoichiometric iron-platinum thin films
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, Rekha; Medwal, Rohit; Annapoorni, S., E-mail: annapoornis@yahoo.co.in
2013-10-15
Graphical abstract: - Highlights: • Low temperature FePt L1{sub 0} phase transformation using a Pt/Fe{sub 3}Pt/Pt structure. • Temperature-dependent FCC to FCT phase investigation using Rietveld refinement. • Estimation of soft and hard ferromagnetic contributions from the demagnetization curve. • Interlayer diffusion and stoichiometry confirmation of the L1{sub 0} phase using RBS. • Structural, magnetic and RBS studies were successfully correlated. - Abstract: The structural and magnetic phase transformation of Pt/Fe{sub 3}Pt/Pt films on Si <1 0 0> substrates prepared by DC magnetron sputtering is investigated as a function of annealing temperature. A Pt-diffusion-driven low temperature phase transformation from A1 to the L1{sub 0} phase is achieved at 300 °C, attaining a very high coercivity of 9 kOe. At 300 °C, 85% L1{sub 0} phase transformation is observed by X-ray diffraction profile fitting. The estimated phase content is further verified by fitting the demagnetization curve. The underlayer promotes the ordering at lower temperature while the overlayer induces growth along the (0 0 1) preferred orientation. Rutherford backscattering reveals interlayer diffusion and confirms the desired stoichiometry for the L1{sub 0} phase. The presence of Pt under- and overlayers provides the Pt source and further facilitates Pt diffusion, which makes it effective in promoting the phase ordering at a lower temperature.
Identification and Characterization of Cronobacter Iron Acquisition Systems
Grim, C. J.; Kothary, M. H.; Gopinath, G.; Jarvis, K. G.; Beaubrun, J. Jean-Gilles; McClelland, M.; Tall, B. D.
2012-01-01
Cronobacter spp. are emerging pathogens that cause severe infantile meningitis, septicemia, or necrotizing enterocolitis. Contaminated powdered infant formula has been implicated as the source of Cronobacter spp. in most cases, but questions still remain regarding the natural habitat and virulence potential for each strain. The iron acquisition systems in 231 Cronobacter strains isolated from different sources were identified and characterized. All Cronobacter spp. have both the Feo and Efe systems for acquisition of ferrous iron, and all plasmid-harboring strains (98%) have the aerobactin-like siderophore, cronobactin, for transport of ferric iron. All Cronobacter spp. have the genes encoding an enterobactin-like siderophore, although it was not functional under the conditions tested. Furthermore, all Cronobacter spp. have genes encoding five receptors for heterologous siderophores. A ferric dicitrate transport system (fec system) is encoded specifically by a subset of Cronobacter sakazakii and C. malonaticus strains, of which a high percentage were isolated from clinical samples. Phylogenetic analysis confirmed that the fec system is most closely related to orthologous genes present in human-pathogenic bacterial strains. Moreover, all strains of C. dublinensis and C. muytjensii encode two receptors, FcuA and Fct, for heterologous siderophores produced by plant pathogens. Identification of putative Fur boxes and expression of the genes under iron-depleted conditions revealed which genes and operons are components of the Fur regulon. Taken together, these results support the proposition that C. sakazakii and C. malonaticus may be more associated with the human host and C. dublinensis and C. muytjensii with plants. PMID:22706064
NASA Astrophysics Data System (ADS)
Santos, Regina; Fernandes, Luís; Varandas, Simone; Pereira, Mário; Sousa, Ronaldo; Teixeira, Amilcar; Lopes-Lima, Manuel; Cortes, Rui; Pacheco, Fernando
2015-04-01
Climate change is one of the most important causes of biodiversity loss in freshwater ecosystems and is expected to cause extinctions of many species in the future. Freshwater ecosystems are also highly affected by anthropogenic pressures such as land use/land cover changes, water abstractions and impoundments. The aim of this study is to assess the impacts of future climate and land use in the Beça River (northern Portugal), namely on the conservation status of the endangered pearl mussel Margaritifera margaritifera (Linnaeus, 1758). This is an environmental indicator and endangered species currently present in several stretches of the Beça River that still hold adequate ecological conditions. However, the species is threatened by the precipitation decrease projected for the 21st century and the deviation of a significant portion of the river water to an adjacent watershed (since 1998). This decrease in river water can be especially acute during the summer months, forming small pools dispersed along the water course where M. margaritifera, and its host (Salmo trutta), barely find biological conditions for survival. The materials and methods used in this study include: (i) the assessment of water quality based on minimum, maximum and average values of relevant physicochemical parameters within the period 2000-2009; (ii) assessment of future climate change settings based on air temperature and precipitation projected by Regional and Global Circulation Models for the recent past (1961-1990) and future climate scenarios (2071-2099); (iii) data processing to remove the model biases; and (iv) integrated watershed modelling with river-planning (Mike Basin) and broad GIS (ArcMap) computer packages.
Our findings comprise: (i) a good relationship between current wildfire incidence and river water quality; (ii) an increase in the future air temperature throughout the year; (iii) increases in future precipitation during winter and decreases during the other seasons; (iv) a major runoff decrease more likely to occur between April and June and in October (<-30% in both future scenarios), which may reach -50%; (v) a decrease in the simulated average water depth in most river sections, leading to habitat fragmentation by loss of connectivity during the summer season (water depth < 10 cm), with reverberating effects on the mobility of Salmo trutta, which may impair the reproduction and recruitment of pearl mussels. In addition, human-related threats mostly associated with the presence of dams and wildfires are expected to increase in the future. The presence of dams contributes to an additional decrease in connectivity and river flow, while forest fires are a major threat related to the wash-out of burned areas during storms, eventually causing the disappearance of the mussels, especially the juveniles. In view of future climate and land-use change scenarios, conservation strategies are proposed to maintain good status and enable recovery, including the negotiation of ecological flows with the river board authorities, the replanting of riparian vegetation along the water course and the reintroduction of native tree species throughout the catchment.
This work was supported by national funds by FCT - Portuguese Foundation for Science and Technology, under the project PEst-OE/AGR/UI4033/2014 and by the project SUSTAINSYS: Environmental Sustainable Agro-Forestry Systems (NORTE-07-0124-FEDER-000044), financed by the North Portugal Regional Operational Programme (ON.2 - O Novo Norte), under the National Strategic Reference Framework (QREN), through the European Regional Development Fund (FEDER), as well as by National Funds (PIDDAC) through the Portuguese Foundation for Science and Technology (FCT/MEC).
21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow parameter...
21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow parameter...
21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow parameter...
21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow parameter...
Compressed normalized block difference for object tracking
NASA Astrophysics Data System (ADS)
Gao, Yun; Zhang, Dengzhuo; Cai, Donglan; Zhou, Hao; Lan, Ge
2018-04-01
Feature extraction is very important for robust and real-time tracking, and compressive sensing has provided technical support for real-time feature extraction. However, all existing compressive trackers have been based on compressed Haar-like features, and how to compress other, more expressive high-dimensional features is worth researching. In this paper, a novel compressed normalized block difference (CNBD) feature is proposed. To resist noise effectively in the high-dimensional normalized pixel difference (NPD) feature, the normalized block difference feature extends the two pixels in the original NPD formula to two blocks. A CNBD feature is obtained by compressing a normalized block difference feature based on compressive sensing theory, with a sparse random Gaussian matrix as the measurement matrix. Comparative experiments with 7 trackers on 20 challenging sequences showed that the tracker based on the CNBD feature performs better than the other trackers, especially the FCT tracker based on compressed Haar-like features, in terms of AUC, SR and Precision.
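The compression step described here amounts to projecting a high-dimensional feature vector through a sparse random Gaussian measurement matrix. A schematic sketch (dimensions, sparsity level, and seed are illustrative assumptions, not the paper's settings):

```python
import random

def sparse_gaussian_matrix(m, n, zero_prob=0.9, seed=42):
    """m x n measurement matrix: most entries zero, the rest drawn from N(0, 1)."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0) if rng.random() >= zero_prob else 0.0
             for _ in range(n)] for _ in range(m)]

def compress(measurement, feature):
    """Project a high-dimensional feature vector down to len(measurement) dims."""
    return [sum(a * x for a, x in zip(row, feature)) for row in measurement]

# Compress a 1000-dimensional block-difference feature vector to 50 dimensions
A = sparse_gaussian_matrix(50, 1000)
low_dim = compress(A, [0.5] * 1000)
```

Because the matrix is sparse, each compressed coordinate touches only a small fraction of the original feature entries, which is what makes the projection cheap enough for real-time tracking.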
Quantitative recurrence for free semigroup actions
NASA Astrophysics Data System (ADS)
Carvalho, Maria; Rodrigues, Fagner B.; Varandas, Paulo
2018-03-01
We consider finitely generated free semigroup actions on a compact metric space and obtain quantitative information on Poincaré recurrence, the average first return time and the hitting frequency for the random orbits induced by the semigroup action. In addition, we relate the recurrence to balls with the rates of expansion of the semigroup generators and the topological entropy of the semigroup action. Finally, we establish a partial variational principle and prove an ergodic optimization result for this kind of dynamical action. MC has been financially supported by CMUP (UID/MAT/00144/2013), which is funded by FCT (Portugal) with national (MEC) and European structural funds (FEDER) under the partnership agreement PT2020. FR and PV were partially supported by BREUDS. PV has also benefited from a fellowship awarded by CNPq-Brazil and is grateful to the Faculty of Sciences of the University of Porto for the excellent research conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohanan, Senthilnathan; Diebolder, Rolf; Hibst, Raimund
2008-04-01
We report on the influence of pulsed laser irradiation on the structural and magnetic properties of NiMn/Co thin films. Rocking curve measurements showed a significant improvement of the (111) texture of NiMn after laser irradiation, which was accompanied by grain growth. We have studied the ordering transition in as-prepared and irradiated (laser fluence of 0.15 J/cm{sup 2}) samples during subsequent annealing. The onset of the fcc to fct phase transformation occurs at 325 deg. C irrespective of laser irradiation. Exchange bias fields for the laser-irradiated samples are higher than those of the as-prepared samples. The observed increase in the exchange bias field for laser-irradiated samples has been attributed to the increased grain size and the improved (111) texture of the NiMn layer after laser irradiation.
Potential External (non-DOE) Constraints on U.S. Fuel Cycle Options
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steven J. Piet
2012-07-01
The DOE Fuel Cycle Technologies (FCT) Program will be conducting a screening of fuel cycle options in FY2013 to help focus fuel cycle R&D activities. As part of this screening, performance criteria and go/no-go criteria are being identified. To help ensure that these criteria are consistent with current policy, an effort was initiated to identify the status and basis of potentially relevant regulations, laws, and policies that have been established external to DOE. As such regulations, laws, and policies may be beyond DOE’s control to change, they may constrain the screening criteria and internally developed policy. This report contains a historical survey and analysis of publicly available domestic documents that could pertain to external constraints on advanced nuclear fuel cycles. “External” is defined as public documents outside DOE. This effort did not include survey and analysis of constraints established internal to DOE.
NASA Astrophysics Data System (ADS)
Jalal, T.; Hossein Nedjad, S.; Khalili Molan, S.
2013-05-01
A nearly equiatomic MnNi alloy was fabricated from elemental powders by means of mechanical alloying in a planetary ball milling apparatus. X-ray diffraction (XRD), scanning electron microscopy (SEM), differential scanning calorimetry (DSC), and magnetization measurements were conducted to identify the structural states and properties of the prepared alloys. After ball milling for 20 h, a disordered face-centered cubic (f.c.c.) solid solution was formed, whose lattice parameter increased with further milling up to 50 h. An exothermic reaction took place at around 300-400°C during continuous heating of the disordered f.c.c. solid solution. This reaction is attributed to a structural ordering leading to the formation of a face-centered tetragonal (f.c.t.) phase with L10-type ordering. Examination of the magnetic properties indicated that the structural ordering increases remnant magnetization and decreases coercivity.
Bobryshev, Y V; Killingsworth, M C; Lord, R S A; Grabs, A J
2008-01-01
Plaque rupture is the most common type of plaque complication and leads to acute ischaemic events such as myocardial infarction and stroke. Calcification has been suggested as a possible indicator of plaque instability. Although the role of matrix vesicles in the initial stages of arterial calcification has been recognized, no studies have yet examined a possible role of matrix vesicles in plaque destabilization. Tissue specimens for the present study were carotid specimens obtained from patients undergoing carotid endarterectomy. Serial frozen cross-sections of the tissue specimens were cut and mounted on glass slides. The thickness of the fibrous cap (FCT) of each advanced atherosclerotic lesion containing a well-developed lipid/necrotic core was measured at its narrowest sites in sets of serial sections. According to established criteria, the atherosclerotic plaque specimens were histologically subdivided into two groups: vulnerable plaques with thin fibrous caps (FCT <100 μm) and presumably stable plaques, in which fibrous caps were thicker than 100 μm. Twenty-four carotid plaques (12 vulnerable and 12 presumably stable plaques) were collected for the present analysis of matrix vesicles in fibrous caps. In order to provide a sufficient number of representative areas from each plaque, laser capture microdissection (LCM) was carried out. The quantification of matrix vesicles in ultrathin sections of vulnerable and stable plaques revealed that the numbers of matrix vesicles were significantly higher in fibrous caps of vulnerable plaques than in stable plaques (8.908±0.544 versus 6.208±0.467 matrix vesicles per 1.92 μm2 standard area; P= 0.0002). Electron microscopy combined with X-ray elemental microanalysis showed that some matrix vesicles in atherosclerotic plaques were undergoing calcification and were characterized by a high content of calcium and phosphorus.
The percentage of calcified matrix vesicles/microcalcifications was significantly higher in fibrous caps of vulnerable plaques compared with that in stable plaques (6.705±0.436 versus 5.322±0.494; P= 0.0474). The findings reinforce the view that the texture of the extracellular matrix in the thinning fibrous cap of an atherosclerotic plaque is altered, and this might contribute to plaque destabilization. PMID:18194456
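The group comparisons quoted above (mean ± value, 12 plaques per group) can be reproduced in outline with a two-sample t statistic from summary data. A sketch, assuming the quoted ± values are standard errors of the mean; this is an illustration, not the paper's stated statistical procedure:

```python
import math

def t_from_summary(mean1, sem1, mean2, sem2):
    """Two-sample t statistic from group means and standard errors of the mean."""
    return (mean1 - mean2) / math.sqrt(sem1 ** 2 + sem2 ** 2)

# Matrix vesicles per 1.92 um^2 standard area: vulnerable vs stable fibrous caps
t_counts = t_from_summary(8.908, 0.544, 6.208, 0.467)
```

The resulting t is large (around 3.8), consistent with the highly significant P value reported, though the paper's exact test and degrees of freedom are not given here.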
NASA Astrophysics Data System (ADS)
Almeida, Pedro; Tomas, Ricardo; Rosas, Filipe; Duarte, Joao; Terrinha, Pedro
2015-04-01
Different modes of strain accommodation affecting a deformable hanging-wall in a flat-ramp-flat thrust system were previously addressed through several (sandbox) analog modeling studies, focusing on the influence of different variables, such as: a) thrust ramp dip angle and friction (Bonini et al., 2000); b) prescribed thickness of the hanging-wall (Koyi and Maillot, 2007); and c) syn-thrust erosion (compensating for topographic thrust edification, e.g. Persson and Sokoutis, 2002). In the present work we reproduce the same experimental procedure to investigate the influence of two different parameters on hanging-wall deformation: 1) the geometry of the thrusting surface; and 2) the absence of a velocity discontinuity (VD) that is always present in previous similar analogue modeling studies. For the first variable we use two end-member ramp geometries, flat-ramp-flat and convex-concave, to understand the control exerted by the abrupt ramp edges on the hanging-wall stress-strain distribution, comparing the obtained results with the situation in which such edge singularities are absent (convex-concave thrust ramp). Regarding the second investigated parameter, our motivation was the recognition that the VD found in the different analogue modeling settings simply does not exist in nature, despite the fact that it has a major influence on strain accommodation in the deformable hanging-wall. We thus eliminate this apparatus artifact from our models and compare the obtained results with the previous ones. Our preliminary results suggest that both investigated variables play a non-negligible role in the structural style characterizing the hanging-wall deformation of convergent tectonic settings where such thrust-ramp systems were recognized. Acknowledgments This work was sponsored by the Fundação para a Ciência e a Tecnologia (FCT) through project MODELINK EXPL/GEO-GEO/0714/2013. Pedro Almeida thanks FCT for the Ph.D.
grant (SFRH/BD/52556/2014) under the Doctoral Program Earth Systems at IDL/UL. References Bonini, M., Sokoutis, D., Mulugeta, G., Katrivanos, E. (2000) - Modelling hanging wall accommodation above rigid thrust ramps. Journal of Structural Geology, 22, pp. 1165-1179. Persson, K. & Sokoutis, D. (2002) - Analogue models of orogenic wedges controlled by erosion. Tectonophysics, 356, pp. 323-336. Koyi, H. & Maillot, B. (2007) - Tectonic thickening of hanging-wall units over a ramp. Journal of Structural Geology, 29, pp. 924-932.
Novel approaches to helicopter obstacle warning
NASA Astrophysics Data System (ADS)
Seidel, Christian; Samuelis, Christian; Wegner, Matthias; Münsterer, Thomas; Rumpf, Thomas; Schwartz, Ingo
2006-05-01
EADS Germany is the world market leader in commercial Helicopter Laser Radar (HELLAS) obstacle warning systems. The HELLAS warning system was introduced into the market in 2000, is in service with the German Border Control (Bundespolizei) and the Royal Thai Air Force, and has been successfully evaluated by the Foreign Comparative Test (FCT) Program of USSOCOM. Currently the successor system, HELLAS-Awareness, is in development. It will have extended sensor performance, enhanced real-time data processing capabilities and advanced HMI features. We will give an outline of the new sensor unit concerning detection technology and helicopter integration aspects. The system provides a widespread field of view with additional dynamic line-of-sight steering and a large detection range in combination with a high frame rate of 3 Hz. The workflow of the data processing will be presented with a focus on novel filter techniques and obstacle classification methods. As commonly known, the former are indispensable due to unavoidable statistical measuring errors and solarisation. The amount of information in the filtered raw data is further reduced by ground segmentation. The remaining raised objects are extracted and classified in several stages into different obstacle classes. We will show the prioritization function, which ranks the obstacles according to their threat potential to the helicopter, taking into account the actual flight dynamics. The priority of an object determines the display and provision of warnings to the pilot. Possible HMI representations include video or FLIR overlay on multifunction displays, audio warnings and visualization of information on helmet-mounted displays and digital maps. Different concepts will be presented.
Computation of Standard Errors
Dowd, Bryan E; Greene, William H; Norton, Edward C
2014-01-01
Objectives We discuss the problem of computing the standard errors of functions involving estimated parameters and provide the relevant computer code for three different computational approaches using two popular computer packages. Study Design We show how to compute the standard errors of several functions of interest: the predicted value of the dependent variable for a particular subject, and the effect of a change in an explanatory variable on the predicted value of the dependent variable for an individual subject and average effect for a sample of subjects. Empirical Application Using a publicly available dataset, we explain three different methods of computing standard errors: the delta method, Krinsky–Robb, and bootstrapping. We provide computer code for Stata 12 and LIMDEP 10/NLOGIT 5. Conclusions In most applications, choice of the computational method for standard errors of functions of estimated parameters is a matter of convenience. However, when computing standard errors of the sample average of functions that involve both estimated parameters and nonstochastic explanatory variables, it is important to consider the sources of variation in the function's values. PMID:24800304
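Of the three approaches compared, bootstrapping is the most mechanical: re-estimate the function on resampled data and take the standard deviation of the replicates. A minimal sketch in Python rather than the Stata/LIMDEP code the paper provides; the data and replicate count here are made up for illustration:

```python
import math
import random

def bootstrap_se(data, statistic, n_boot=2000, seed=0):
    """Bootstrap standard error: std. dev. of the statistic over resamples."""
    rng = random.Random(seed)
    n = len(data)
    reps = []
    for _ in range(n_boot):
        # Resample with replacement, same size as the original sample
        resample = [data[rng.randrange(n)] for _ in range(n)]
        reps.append(statistic(resample))
    mean_rep = sum(reps) / n_boot
    return math.sqrt(sum((r - mean_rep) ** 2 for r in reps) / (n_boot - 1))

data = [1.2, 0.8, 1.5, 2.1, 0.9, 1.7, 1.1, 1.4]   # hypothetical sample
se_mean = bootstrap_se(data, lambda d: sum(d) / len(d))
```

The same function works unchanged for more complex statistics (e.g. a predicted value from a fitted model), which is the case the paper emphasizes: the resampling must capture all sources of variation in the function's value.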
Adjoint-Based, Three-Dimensional Error Prediction and Grid Adaptation
NASA Technical Reports Server (NTRS)
Park, Michael A.
2002-01-01
Engineering computational fluid dynamics (CFD) analysis and design applications focus on output functions (e.g., lift, drag). Errors in these output functions are generally unknown and conservatively accurate solutions may be computed. Computable error estimates can offer the possibility to minimize computational work for a prescribed error tolerance. Such an estimate can be computed by solving the flow equations and the linear adjoint problem for the functional of interest. The computational mesh can be modified to minimize the uncertainty of a computed error estimate. This robust mesh-adaptation procedure automatically terminates when the simulation is within a user specified error tolerance. This procedure for estimating and adapting to error in a functional is demonstrated for three-dimensional Euler problems. An adaptive mesh procedure that links to a Computer Aided Design (CAD) surface representation is demonstrated for wing, wing-body, and extruded high lift airfoil configurations. The error estimation and adaptation procedure yielded corrected functions that are as accurate as functions calculated on uniformly refined grids with ten times as many grid points.
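For a linear model problem the adjoint-weighted residual reproduces the functional error exactly, which is the idea the adaptation procedure targets. A toy sketch; the 2x2 system and functional are invented for illustration, whereas the real method applies the discrete flow and adjoint solvers:

```python
def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

def adjoint_error_estimate(A, f, g, u_h):
    """Estimate J(u) - J(u_h) for the output J = g . u via the adjoint psi."""
    At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]   # transpose of A
    psi = solve2(At, g)                              # adjoint problem: A^T psi = g
    r = [f[i] - sum(A[i][j] * u_h[j] for j in range(2)) for i in range(2)]
    return sum(p * ri for p, ri in zip(psi, r))      # psi . (f - A u_h)

A, f, g = [[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0], [1.0, 0.0]
u_h = [0.1, 0.6]                                     # inexact (coarse) solution
estimate = adjoint_error_estimate(A, f, g, u_h)      # equals the true output error
```

For nonlinear flow equations the estimate is only approximate, and the mesh is adapted where the local contributions to this adjoint-weighted residual are largest.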
Advanced information processing system: Local system services
NASA Technical Reports Server (NTRS)
Burkhardt, Laura; Alger, Linda; Whittredge, Roy; Stasiowski, Peter
1989-01-01
The Advanced Information Processing System (AIPS) is a multi-computer architecture composed of hardware and software building blocks that can be configured to meet a broad range of application requirements. The hardware building blocks are fault-tolerant, general-purpose computers, fault-and damage-tolerant networks (both computer and input/output), and interfaces between the networks and the computers. The software building blocks are the major software functions: local system services, input/output, system services, inter-computer system services, and the system manager. The foundation of the local system services is an operating system with the functions required for a traditional real-time multi-tasking computer, such as task scheduling, inter-task communication, memory management, interrupt handling, and time maintenance. Resting on this foundation are the redundancy management functions necessary in a redundant computer and the status reporting functions required for an operator interface. The functional requirements, functional design and detailed specifications for all the local system services are documented.
Computing Functions by Approximating the Input
ERIC Educational Resources Information Center
Goldberg, Mayer
2012-01-01
In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their…
Khan, Asaduzzaman; Western, Mark
The purpose of this study was to explore factors that facilitate or hinder effective use of computers in Australian general medical practice. This study is based on data extracted from a national telephone survey of 480 general practitioners (GPs) across Australia. Clinical functions performed by GPs using computers were examined using a zero-inflated Poisson (ZIP) regression modelling. About 17% of GPs were not using computer for any clinical function, while 18% reported using computers for all clinical functions. The ZIP model showed that computer anxiety was negatively associated with effective computer use, while practitioners' belief about usefulness of computers was positively associated with effective computer use. Being a female GP or working in partnership or group practice increased the odds of effectively using computers for clinical functions. To fully capitalise on the benefits of computer technology, GPs need to be convinced that this technology is useful and can make a difference.
NASA Technical Reports Server (NTRS)
Przekwas, A. J.; Yang, H. Q.
1989-01-01
The capability of accurate nonlinear flow analysis of resonance systems is essential in many problems, including combustion instability. Classical numerical schemes are either too diffusive or too dispersive, especially for transient problems. In the last few years, significant progress has been made in numerical methods for flows with shocks. The objective was to assess advanced shock-capturing schemes on transient flows. Several numerical schemes were tested, including TVD, MUSCL, ENO, FCT, and Riemann-solver Godunov-type schemes. A systematic assessment was performed on scalar transport, Burgers', and gas dynamic problems. Several shock-capturing schemes are compared on fast transient resonant pipe flow problems. A system of 1-D nonlinear hyperbolic gas dynamics equations is solved to predict the propagation of finite amplitude waves and the steepening, formation, propagation, and reflection of shocks over several hundred wave cycles. It is shown that high-accuracy schemes can be used for direct, exact nonlinear analysis of combustion instability problems, preserving high harmonic energy content for long periods of time.
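One member of the scheme family assessed above can be shown in miniature: a first-order Godunov method (exact Riemann flux) for the inviscid Burgers equation, capturing a moving shock at the correct speed. Grid, data, and CFL number are illustrative choices, not the paper's setup.

```python
import numpy as np

# First-order Godunov scheme for u_t + (u^2/2)_x = 0 with a right-moving shock.
nx, L, T = 400, 4.0, 1.0
dx = L / nx
x = (np.arange(nx) + 0.5) * dx
u = np.where(x < 1.0, 1.0, 0.0)     # Riemann data: shock speed (uL+uR)/2 = 0.5

def godunov_flux(ul, ur):
    f = lambda w: 0.5 * w * w
    # Rarefaction (ul <= ur): min of f over [ul, ur]; 0 if the interval spans 0.
    raref = np.where((ul < 0.0) & (ur > 0.0), 0.0, np.minimum(f(ul), f(ur)))
    # Shock (ul > ur): max of f at the endpoints (f is convex).
    return np.where(ul > ur, np.maximum(f(ul), f(ur)), raref)

t, cfl = 0.0, 0.45
while t < T - 1e-12:
    dt = min(cfl * dx / max(np.max(np.abs(u)), 1e-12), T - t)
    flux = godunov_flux(u[:-1], u[1:])            # fluxes at interior interfaces
    u[1:-1] -= dt / dx * (flux[1:] - flux[:-1])   # conservative update
    t += dt

# The shock starts at x = 1 and moves at speed 0.5, so at T = 1 it sits near 1.5.
shock_pos = x[np.argmin(np.abs(u - 0.5))]
print(shock_pos)
```

Because the update is conservative, the shock location is captured correctly even though the scheme is only first-order; the higher-order TVD/MUSCL/ENO/FCT schemes the abstract compares add sharper resolution and less numerical diffusion.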
Gravitational Waves from Rotating Neutron Stars and Evaluation of fast Chirp Transform Techniques
NASA Technical Reports Server (NTRS)
Strohmayer, Tod E.; White, Nicholas E. (Technical Monitor)
2000-01-01
X-ray observations suggest that neutron stars in low-mass X-ray binaries (LMXBs) are rotating with frequencies from 300-600 Hz. These spin rates are significantly less than the break-up rates for essentially all realistic neutron star equations of state, suggesting that some process may limit the spin frequencies of accreting neutron stars to this range. If the accretion-induced spin-up torque is in equilibrium with gravitational radiation losses, these objects could be interesting sources of gravitational waves. I present a brief summary of current measurements of neutron star spins in LMXBs based on observations of high-Q oscillations during thermonuclear bursts (so-called 'burst oscillations'). Further measurements of neutron star spins will be important in exploring the gravitational radiation hypothesis in more detail. To this end I also present a study of fast chirp transform (FCT) techniques, as described by Jenet and Prince, in the context of searching for the chirping signals observed during X-ray bursts.
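The idea behind a chirp search can be sketched with a brute-force drift-rate grid (this illustrates the concept, not the FCT algorithm of Jenet and Prince itself; signal parameters and noise level are invented): for each trial drift rate, removing the quadratic phase collapses a matching chirp into a single Fourier tone.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic chirp buried in noise (illustrative parameters).
fs, T = 1024.0, 8.0
t = np.arange(0.0, T, 1.0 / fs)
f0, fdot = 300.0, 2.0                        # start frequency (Hz), drift (Hz/s)
data = np.sin(2 * np.pi * (f0 * t + 0.5 * fdot * t**2)) \
       + rng.normal(0.0, 2.0, t.size)

freqs = np.fft.fftfreq(t.size, 1.0 / fs)
pos = freqs > 0
best = (0.0, None, None)
for trial in np.linspace(0.0, 4.0, 41):      # trial drift rates, step 0.1 Hz/s
    # Demodulate the trial quadratic phase exp(-2*pi*i*(trial/2)*t^2).
    demod = data * np.exp(-1j * np.pi * trial * t**2)
    power = np.abs(np.fft.fft(demod)[pos]) ** 2
    if power.max() > best[0]:
        best = (power.max(), trial, freqs[pos][np.argmax(power)])

_, fdot_hat, f0_hat = best
print(fdot_hat, f0_hat)   # should recover ~2.0 Hz/s and ~300 Hz
```

The fast chirp transform accelerates essentially this search by reorganizing it as a multidimensional FFT rather than looping over trial rates.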
1976-05-01
[OCR table residue: a percent-frequency table of air temperature (deg F) versus the occurrence of fog, apparently for station 0012 San Diego; the table body, coordinates, and totals are unrecoverable (only a partial "TOT FCT" row survives).]
Shin, Hye-Jeong; Kim, Min-Jung; Kim, Hyung-Il; Kwon, Yong Hoon; Seol, Hyo-Joung
2017-03-31
This study examined the effect of ice-quenching after degassing on the change in hardness of a Pd-Au-Zn alloy during porcelain firing simulations. By ice-quenching after degassing, the specimens were softened due to homogenization without the need for an additional softening heat treatment. The hardness reduction caused by ice-quenching after degassing was largely recovered from the first stage of the porcelain firing process onward by controlling the cooling rate. The increase in hardness during cooling after porcelain firing was attributed to the precipitation of the f.c.t. PdZn phase containing Au, which caused severe lattice strain at the interphase boundary between the precipitates and the f.c.c. matrix. The final hardness was slightly higher in the ice-quenched specimen than in the specimen cooled at stage 0 (the most effective cooling rate for alloy hardening) after degassing. This was attributed to more active grain-interior precipitation during cooling in the ice-quenched specimen after degassing.
Material Recovery and Waste Form Development FY 2015 Accomplishments Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Todd, Terry Allen; Braase, Lori Ann
The Material Recovery and Waste Form Development (MRWFD) Campaign under the U.S. Department of Energy (DOE) Fuel Cycle Technologies (FCT) Program is responsible for developing advanced separation and waste form technologies to support the various fuel cycle options defined in the DOE Nuclear Energy Research and Development Roadmap, Report to Congress, April 2010. The FY 2015 Accomplishments Report highlights the results of the research and development (R&D) efforts performed within the MRWFD Campaign in FY-15. Each section contains a high-level overview of the activities, results, technical point of contact, applicable references, and documents produced during the fiscal year. This report briefly outlines campaign management and integration activities, but primarily focuses on the many technical accomplishments made during FY-15. The campaign continued to utilize an engineering-driven, science-based approach to maintain relevance and focus. There was increased emphasis on development of technologies that support near-term applications relevant to the current once-through fuel cycle.
Effect of pressure on the tetragonal distortion in TiH2: a first-principles study
NASA Astrophysics Data System (ADS)
de Coss, R.; Quijano, R.; Singh, D. J.
2009-03-01
The transition metal dihydride TiH2 presents the fluorite structure (CaF2) at high temperature but undergoes a tetragonal distortion with c/a<1 at low temperature. Early electronic band structure calculations showed that TiH2 in the cubic phase displays a nearly flat, doubly degenerate band at the Fermi level. Thus the low-temperature tetragonal distortion has been attributed to a Jahn-Teller effect. Nevertheless, we have recently shown that the instability of fcc-TiH2 is more likely related to a van Hove singularity. In the present work, we have performed ab-initio calculations of the electronic structure and the tetragonal distortion for TiH2 under pressure (0-30 GPa). We found that the fcc-fct energy barrier and the tetragonal distortion increase with pressure. The evolution of the tetragonal distortion is analyzed in terms of the electronic band structure. This research was supported by Consejo Nacional de Ciencia y Tecnología (Conacyt) under Grant No. 49985.
Tomography of the East African Rift System in Mozambique
NASA Astrophysics Data System (ADS)
Domingues, A.; Silveira, G. M.; Custodio, S.; Chamussa, J.; Lebedev, S.; Chang, S. J.; Ferreira, A. M. G.; Fonseca, J. F. B. D.
2014-12-01
Unlike the majority of the East African Rift, the Mozambique region has not been studied in depth, not only due to political instability but also because of the difficult access to its interior. An earthquake of M7 occurred in Machaze in 2006, which triggered the investigation of this particular region. The MOZART project (funded by FCT, Lisbon) installed a temporary seismic network, with a total of 30 broadband stations from the SEIS-UK pool, from April 2011 to July 2013. Preliminary locations of the seismicity were estimated with the data recorded from April 2011 to July 2012. A total of 307 earthquakes were located, with ML magnitudes ranging from 0.9 to 3.9. We observe a linear northeast-southwest distribution of the seismicity that appears to be associated with the Inhaminga fault. The seismicity extends over ~300 km, reaching the Machaze earthquake area. The northeastern sector of the seismicity correlates well with the topography, tracing the Urema rift valley. In order to obtain an initial velocity model of the region, the ambient noise method is used. This method is applied to the entire available dataset plus two additional stations of the AfricaARRAY project. Ambient noise surface wave tomography is performed by computing cross-correlations between all pairs of stations and measuring the group velocities for all interstation paths. With this approach we obtain Rayleigh wave group velocity dispersion curves in the period range from 3 to 50 seconds. Group velocity maps are calculated for several periods, allowing a geological and tectonic interpretation. In order to extend the investigation to longer wave periods and thus probe both the crust and upper mantle, we apply a recent implementation of the surface-wave two-station method (teleseismic interferometry; Meier et al. 2004) to augment our dataset with Rayleigh wave phase velocity curves in a broad period range.
Using this method we expect to be able to explore the lithosphere-asthenosphere depth range beneath Mozambique.
Ambient Noise Tomography of the East African Rift System in Mozambique
NASA Astrophysics Data System (ADS)
Domingues, Ana; Custódio, Susana; Chamussa, José; Silveira, Graça; Chang, Sung-Joon; Lebedev, Sergei; Ferreira, Ana; Fonseca, João
2014-05-01
Project MOZART - MOZAmbique Rift Tomography (funded by FCT, Lisbon) deployed a total of 30 temporary broadband seismic stations from the SEIS-UK Pool in central and south Mozambique and in NE South Africa. The purpose of this project is the study of the East African Rift System (EARS) in Mozambique. We estimated preliminary locations with the data recorded from April 2011 to July 2012. A total of 307 earthquakes were located, with ML magnitudes ranging from 0.9 to 3.9. We observe a linear northeast-southwest distribution of the seismicity that appears to be associated with the Inhaminga fault. The seismicity in the northeastern sector correlates well with the topography, tracing the Urema rift valley. The seismicity extends over ~300 km, reaching the area of the M7 2006 Machaze earthquake. In order to obtain an initial velocity model of the region, we applied the ambient noise method to the MOZART data and two additional stations from AfricaARRAY. Cross-correlations were computed between all pairs of stations, and we obtained Rayleigh wave group velocity dispersion curves for all interstation paths in the period range from 3 to 50 seconds. The geographical distribution of the group velocity anomalies is in good agreement with the geological map of Mozambique, with lower group velocities in sedimentary basin areas and higher velocities in cratonic regions. We also observe two main regions with different velocities that may indicate a structure not proposed in previous studies. We perform a three-dimensional inversion to obtain the S-wave velocity of the crust and upper mantle, and in order to extend the investigation to longer periods we apply a recent implementation of the surface-wave two-station method (teleseismic interferometry), augmenting our dataset with Rayleigh wave phase velocity curves over a broad period range. In this way we expect to be able to look into the lithosphere-asthenosphere depth range.
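The cross-correlation step at the heart of ambient-noise tomography can be sketched on synthetic data (all numbers here are invented for illustration): a common random wavefield reaching station B some seconds after station A makes the cross-correlation of the two noise records peak at the interstation travel time, from which a velocity follows.

```python
import numpy as np

rng = np.random.default_rng(4)

fs = 20.0                                   # samples per second
n = int(2 * 3600 * fs)                      # two hours of synthetic "noise"
distance_km, velocity_kms = 90.0, 3.0
shift = int(round(distance_km / velocity_kms * fs))   # 30 s -> 600 samples

# A common random wavefield arrives at station B `shift` samples after A.
source = rng.normal(size=n + shift)
rec_a = source[shift:] + 0.5 * rng.normal(size=n)
rec_b = source[:n] + 0.5 * rng.normal(size=n)

# Frequency-domain cross-correlation (zero-padded to avoid circular wrap).
nfft = 2 * n
spec = np.fft.rfft(rec_a, nfft) * np.conj(np.fft.rfft(rec_b, nfft))
xcorr = np.fft.irfft(spec, nfft)
peak = int(np.argmax(xcorr))
lag = peak if peak <= nfft // 2 else peak - nfft

measured_delay = abs(lag) / fs
v_est = distance_km / measured_delay
print(measured_delay, v_est)    # ~30 s and ~3 km/s
```

Real applications measure frequency-dependent (dispersive) group arrivals from such correlations, which is what yields the period-dependent velocity maps described above.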
Efficient and Flexible Computation of Many-Electron Wave Function Overlaps.
Plasser, Felix; Ruckenbauer, Matthias; Mai, Sebastian; Oppel, Markus; Marquetand, Philipp; González, Leticia
2016-03-08
A new algorithm for the computation of the overlap between many-electron wave functions is described. This algorithm allows for the extensive use of recurring intermediates and thus provides high computational efficiency. Because of the general formalism employed, overlaps can be computed for varying wave function types, molecular orbitals, basis sets, and molecular geometries. This paves the way for efficiently computing nonadiabatic interaction terms for dynamics simulations. In addition, other application areas can be envisaged, such as the comparison of wave functions constructed at different levels of theory. Aside from explaining the algorithm and evaluating the performance, a detailed analysis of the numerical stability of wave function overlaps is carried out, and strategies for overcoming potentially severe pitfalls due to displaced atoms and truncated wave functions are presented.
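The identity at the heart of such determinant-overlap codes can be sketched in a few lines (sizes and matrices below are synthetic; the paper's contribution is organizing many such determinants via shared intermediates): for two single determinants with occupied MO coefficients C_a and C_b over one AO basis with overlap matrix S, the many-electron overlap is det(C_aᵀ S C_b).

```python
import numpy as np

rng = np.random.default_rng(5)

nao, nocc = 10, 4
A = rng.normal(size=(nao, nao))
S = np.eye(nao) + 0.01 * (A + A.T)          # synthetic SPD AO overlap matrix

def orthonormal_mos(S, nocc):
    # Random coefficients, Loewdin-orthonormalized within the occupied space.
    C = rng.normal(size=(S.shape[0], nocc))
    w, V = np.linalg.eigh(C.T @ S @ C)
    return C @ V @ np.diag(w ** -0.5) @ V.T

C_a, C_b = orthonormal_mos(S, nocc), orthonormal_mos(S, nocc)

def det_overlap(C1, C2, S):
    return np.linalg.det(C1.T @ S @ C2)

print(det_overlap(C_a, C_a, S))        # 1: a normalized determinant
print(abs(det_overlap(C_a, C_b, S)))   # <= 1 for two different determinants
```

For CI-type wave functions the overlap becomes a weighted sum of many such determinants, which is where reusing recurring intermediates pays off.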
Recent decadal trends in Iberian water vapour: GPS analysis and WRF process study
NASA Astrophysics Data System (ADS)
Miranda, Pedro M. A.; Nogueira, Miguel; Semedo, Alvaro; Benevides, Pedro; Catalao, Joao; Costa, Vera
2016-04-01
A 24-year simulation of the recent Iberian climate, using the WRF model at 9 km resolution forced by the ERA-Interim reanalysis (1989-2012), is analysed for the decadal evolution of the upwelling-forcing coastal wind and of column-integrated precipitable water vapour (PWV). Results indicate that, unlike what was found by Bakun et al. (2009) for the Peruvian region, a statistically significant trend in the upwelling-favourable (northerly) wind has been accompanied by a corresponding decrease in PWV, not only inland but also over the coastal waters. This wind increase is consistent with a reinforced northerly coastal jet in the maritime boundary layer contributing to atmospheric Ekman pumping of dry continental air into the coastal region. Diagnostics of the prevalence of the Iberian thermal low, following Hoinka and Castro (2003), also show a positive trend in its frequency during an extended summer period (April to September). These results are consistent with recent studies indicating an upward trend in the frequency of upwelling in SW Iberia (Alves and Miranda 2013), and may be relevant for climate change applications, as an increase in coastal upwelling (Miranda et al. 2013) may lead to substantial regional impacts in the subtropics. The same analysis with ERA-Interim reanalysis data, which were used to force the WRF simulations, does not reveal the same signal in PWV, and indeed correlates poorly with the GPS observations, indicating that the data assimilation process makes the water vapour data in reanalyses unusable for climate change purposes. The good correlation between the WRF-simulated data and GPS observations allows for a detailed analysis of the processes involved in the evolution of the PWV field.
Acknowledgements: study done within FCT Grant RECI/GEO-MET/0380/2012, financially supported by FCT Grant UID/GEO/50019/2013-IDL. References: Alves JMR, Miranda PMA (2013) Variability of Iberian upwelling implied by ERA-40 and ERA-Interim reanalyses. Tellus A, http://dx.doi.org/10.3402/tellusa.v65i0.19245. Bakun et al (2010) Greenhouse gas, upwelling-favorable winds, and the future of coastal ocean upwelling ecosystems. Global Change Biology, doi: 10.1111/j.1365-2486.2009.02094.x. Hoinka KP, Castro M (2003) The Iberian Peninsula thermal low. QJRMS, 129, 1491-1511, doi: 10.1256/qj.01.189. Miranda et al (2013) Climate change and upwelling: response of Iberian upwelling to atmospheric forcing in a regional climate scenario. Climate Dynamics, doi: 10.1007/s00382-012-1442-9.
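The core statistical step of such a trend study, fitting a linear trend and testing the slope's significance, can be sketched on a synthetic annual-mean PWV-like series (the numbers below are made up; the project analyses WRF and GPS fields):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Synthetic 24-year annual-mean series with a weak negative trend plus noise.
years = np.arange(1989, 2013)
pwv = 20.0 - 0.05 * (years - years[0]) + rng.normal(0.0, 0.3, years.size)

res = stats.linregress(years, pwv)
print(res.slope, res.pvalue)   # negative trend with a small p-value
```

With decadal series of this length, the detectability of a trend depends strongly on the noise level, which is one reason the abstract stresses the quality of the WRF-GPS correlation over the reanalysis PWV.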
Social inequality in morbidity, framed within the current economic crisis in Spain.
Zapata Moya, A R; Buffel, V; Navarro Yáñez, C J; Bracke, P
2015-11-14
Inspired by the 'Fundamental Cause Theory (FCT)' we explore social inequalities in preventable versus relatively less-preventable illnesses in Spain. The focus is on the education-health gradient, as education is one of the most important components of an individual's socioeconomic status (SES). Framed in the context of the recent economic crisis, we investigate the education gradient in depression, diabetes, and myocardial infarction (relatively highly preventable illnesses) and malignant tumors (less preventable), and whether this educational gradient varies across the regional-economic context and changes therein. We use data from three waves of the Spanish National Health Survey (2003-2004, 2006-2007, and 2011-2012), and from the 2009-2010 wave of the European Health Survey in Spain, which results in a repeated cross-sectional design. Logistic multilevel regressions are performed with depression, diabetes, myocardial infarction, and malignant tumors as dependent variables. The multilevel design has three levels (the individual, period-regional, and regional level), which allows us to estimate both longitudinal and cross-sectional macro effects. The regional-economic context and changes therein are assessed using the real GDP growth rate and the low work intensity indicator. Education gradients in more-preventable illness are observed, while this is far less the case in our less-preventable disease group. Regional economic conditions seem to have a direct impact on depression among Spanish men (y-stand. OR = 1.04 [95 % CI: 1.01-1.07]). Diabetes is associated with cross-regional differences in low work intensity among men (y-stand. OR = 1.02 [95 % CI: 1.00-1.05]) and women (y-stand. OR = 1.04 [95 % CI: 1.01-1.06]). Economic contraction increases the likelihood of having diabetes among men (y-stand. OR = 1.04 [95 % CI: 1.01-1.06]), and smaller decreases in the real GDP growth rate are associated with lower likelihood of myocardial infarction among women (y-stand. 
OR = 0.83 [95 % CI: 0.69-1.00]). Finally, there are interesting associations between the macroeconomic changes across the crisis period and the likelihood of suffering from myocardial infarction among lower educated groups, and the likelihood of having depression and diabetes among less-educated women. Our findings partially support the predictions of the FCT for Spain. The crisis effects on health emerge especially in the case of our more-preventable illnesses and among lower educated groups. Health inequalities in Spain could increase rapidly in the coming years due to the differential effects of recession on socioeconomic groups.
Size distribution of PM at Cape Verde - Santiago Island
NASA Astrophysics Data System (ADS)
Pio, C.; Nunes, T.; Cardoso, J.; Caseiro, A.; Cerqueira, M.; Custodio, D.; Freitas, M. C.; Almeida, S. M.
2012-04-01
The archipelago of Cape Verde is located in the eastern North Atlantic, about 500 km west of the African coast. Due to its geographical location, inside the main area of dust transport over the tropical Atlantic and near the coast of Africa, it is strongly affected by mineral dust from the Sahara and Sahel regions. In the scope of the CVDust project, a surface field station was implemented in the surroundings of Praia City, Santiago Island (14° 55' N, 23° 29' W, 98 m above sea level), where aerosol sampling with different samplers was performed over one year. To study the size distribution of the aerosol, an optical dust monitor (Grimm 180), covering 0.25 to 32 μm in 31 size channels, ran almost continuously from January 2011 to December 2011. The performance of the Grimm 180 in quantifying PM mass concentration in an area affected by the transport of Saharan dust particles was evaluated throughout the sampling period by comparison with PM10 mass concentrations obtained with the gravimetric reference method (PM10 TSI High-Volume, PM10 Partisol and PM10 TCR-Tecora). PM10 mass concentration estimated with the Grimm 180 dust monitor, an optical counter, showed a good correlation with the reference gravimetric method, with R2 = 0.94 and a linear regression equation of PM10Grimm = 0.81 PM10TCR - 5.34. The number and mass size distributions of PM at ground level, together with meteorological data and back trajectories, were analyzed and compared for different conditions, aiming at identifying different signatures related to sources and dust transport. January and February, the months when most Saharan dust events occurred, showed the highest concentrations, with PM10 daily averages of 66.6±60.2 μg m-3 and 91.6±97.4 μg m-3, respectively. During these months PM1 and PM2.5 accounted for less than 11% and 47% of PM10, respectively, and the contribution of the fine fractions (PM1 and PM2.5) to PM mass concentrations tended to increase in the other months.
During Saharan dust events, the PM2.5 hourly average could reach mass concentrations higher than 200 μg m-3, whereas PM10 exceeded 600 μg m-3. Acknowledgement: This work was funded by the Portuguese Science Foundation (FCT) through the project PTDD/AAC-CLI/100331/2008 and FCOMP-01-0124-FEDER-008646 (CV-Dust). J. Cardoso acknowledges the PhD grant SFRH-BD-6105-2009 from FCT.
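The optical-vs-gravimetric calibration quoted above can be sketched on synthetic data, using the reported slope and intercept as the generating "truth" (the scatter level is an assumption): regress counter PM10 on reference PM10 and compute R².

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic paired measurements built around PM10Grimm = 0.81*PM10TCR - 5.34.
pm10_ref = rng.uniform(10.0, 300.0, 150)                    # gravimetric values
pm10_opt = 0.81 * pm10_ref - 5.34 + rng.normal(0.0, 12.0, 150)

slope, intercept = np.polyfit(pm10_ref, pm10_opt, 1)
pred = slope * pm10_ref + intercept
ss_res = np.sum((pm10_opt - pred) ** 2)
ss_tot = np.sum((pm10_opt - pm10_opt.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(slope, intercept, r2)   # slope near 0.81, R^2 above 0.9
```

A slope below 1 of this kind is then used to correct optical-counter mass concentrations toward the gravimetric reference.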
Contextuality and Wigner-function negativity in qubit quantum computation
NASA Astrophysics Data System (ADS)
Raussendorf, Robert; Browne, Dan E.; Delfosse, Nicolas; Okay, Cihan; Bermejo-Vega, Juan
2017-05-01
We describe schemes of quantum computation with magic states on qubits for which contextuality and negativity of the Wigner function are necessary resources possessed by the magic states. These schemes satisfy the constraint that the non-negativity of Wigner functions is preserved under all available measurement operations. Furthermore, we identify stringent consistency conditions on such computational schemes, revealing the general structure by which negativity of Wigner functions, hardness of classical simulation of the computation, and contextuality are connected.
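A minimal single-qubit example shows what "Wigner negativity of the magic state" means concretely. The sketch below uses one standard discrete Wigner construction for a qubit (Wootters-style phase-point operators; the paper's multi-qubit setting is richer and its exact convention may differ): W(a,b) = (1/4)(1 + (-1)^a z + (-1)^b x + (-1)^(a+b) y) for a state with Bloch vector (x, y, z).

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def wigner(rho):
    # Bloch components, then the four discrete phase-space values.
    x, y, z = (np.trace(rho @ P).real for P in (X, Y, Z))
    return np.array([[(1 + (-1) ** a * z + (-1) ** b * x + (-1) ** (a + b) * y) / 4
                      for b in range(2)] for a in range(2)])

ket0 = np.array([1.0, 0.0], dtype=complex)                                # |0>
magic = np.array([np.cos(np.pi / 8), np.sin(np.pi / 8)], dtype=complex)   # |H>

W0 = wigner(np.outer(ket0, ket0.conj()))
WH = wigner(np.outer(magic, magic.conj()))
print(W0.min())   # 0: the stabilizer state is non-negative
print(WH.min())   # (1 - sqrt(2))/4 ~ -0.104: negativity of the magic state
```

States with non-negative Wigner functions admit an efficient classical simulation in such frameworks, which is why negativity (and, via the paper's analysis, contextuality) is the resource injected by the magic states.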
Precipitation extremes in the Iberian Peninsula: an overview of the CLIPE project
NASA Astrophysics Data System (ADS)
Santos, João A.; Gonçalves, Paulo M.; Rodrigues, Tiago; Carvalho, Maria J.; Rocha, Alfredo
2014-05-01
The main aims of the project "Climate change of precipitation extreme episodes in the Iberian Peninsula and its forcing mechanisms - CLIPE" are 1) to diagnose the climate change signal in the precipitation extremes over the Iberian Peninsula (IP) and 2) to identify the underlying physical mechanisms. For the first purpose, a multi-model ensemble of 25 Regional Climate Model (RCM) simulations, from the ENSEMBLES project, is used. These experiments were generated by 15 RCMs, driven by five General Circulation Models (GCMs) under both historic conditions (1951-2000) and SRES A1B scenario (2001-2100). In this project, daily precipitation and mean sea level pressure, for the periods 1961-1990 (recent past) and 2021-2100 (future), are used. Using the Standardised Precipitation Index (SPI) on a daily basis, a precipitation extreme is defined by the pair of threshold values (Dmin, Imin), where Dmin is the minimum number of consecutive days with daily SPI above the Imin value. For both past and future climates, a precipitation extreme of a specific type is then characterised by two variables: the number of episodes with a specific duration in days and the number of episodes with a specific mean intensity (SPI/duration). Climate change is also assessed by changes in their Probability Density Functions (PDFs), estimated at sectors representative of different precipitation regimes. Lastly, for the second objective of this project, links between precipitation and Circulation Weather Regimes (CWRs) are explored for both past and future climates. Acknowledgments: this work is supported by European Union Funds (FEDER/COMPETE - Operational Competitiveness Programme) and by national funds (FCT - Portuguese Foundation for Science and Technology) under the project CLIPE (PTDC/AAC-CLI/111733/2009).
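The episode definition above (at least Dmin consecutive days with daily SPI above Imin, characterised by duration and mean intensity) is a simple run-length extraction; the sketch below uses a made-up SPI series for illustration.

```python
import numpy as np

def extract_episodes(spi, dmin, imin):
    """Return (duration, mean intensity) for runs of > imin lasting >= dmin days."""
    episodes, run = [], []
    for value in np.append(spi, -np.inf):     # sentinel flushes the final run
        if value > imin:
            run.append(float(value))
        else:
            if len(run) >= dmin:
                episodes.append((len(run), sum(run) / len(run)))
            run = []
    return episodes

spi = np.array([0.1, 1.5, 1.7, 2.0, 0.3, 1.2, 1.4, 1.6, 1.1, 0.0])
print(extract_episodes(spi, dmin=3, imin=1.0))   # two episodes: 3-day and 4-day
```

Counting such episodes separately by duration and by mean intensity, for past and future model output, yields the PDFs whose changes the project uses to quantify the climate change signal.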
DOE Office of Scientific and Technical Information (OSTI.GOV)
Linger, Richard C; Pleszkoch, Mark G; Prowell, Stacy J
Organizations maintaining mainframe legacy software can benefit from code modernization and incorporation of security capabilities to address the current threat environment. Oak Ridge National Laboratory is developing the Hyperion system to compute the behavior of software as a means to gain understanding of software functionality and security properties. Computation of functionality is critical to revealing security attributes, which are in fact specialized functional behaviors of software. Oak Ridge is collaborating with the MITRE Corporation to conduct a demonstration project to compute the behavior of legacy IBM Assembly Language code for a federal agency. The ultimate goal is to understand functionality and security vulnerabilities as a basis for code modernization. This paper reports on the first phase: defining functional semantics for IBM Assembly instructions and conducting behavior computation experiments.
Computing quantum hashing in the model of quantum branching programs
NASA Astrophysics Data System (ADS)
Ablayev, Farid; Ablayev, Marat; Vasiliev, Alexander
2018-02-01
We investigate the branching program complexity of quantum hashing. We consider a quantum hash function that maps elements of a finite field into quantum states. We require that this function be preimage-resistant and collision-resistant. We consider two complexity measures for Quantum Branching Programs (QBPs): the number of qubits and the number of computational steps. We show that the quantum hash function can be computed efficiently. Moreover, we prove that this QBP construction is optimal; that is, we prove lower bounds that match the complexity of the constructed quantum hash function.
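A classical simulation of an amplitude-form quantum hash in the spirit of this line of work can be sketched as follows (the field size, number of key multipliers, and key choice are illustrative assumptions, not the paper's parameters): x in Z_q is hashed to |psi(x)> = (1/sqrt(d)) sum_i |i>(cos(2πk_i x/q)|0> + sin(2πk_i x/q)|1>), so equal inputs give overlap 1 while distinct inputs give small overlap for a good key set.

```python
import numpy as np

rng = np.random.default_rng(9)

q, d = 2**13 - 1, 32              # field size and number of key multipliers
keys = rng.integers(1, q, size=d)

def hash_state(x):
    # Real amplitude vector of length 2d encoding the quantum hash of x.
    phases = 2 * np.pi * keys * x / q
    return (np.stack([np.cos(phases), np.sin(phases)], axis=1) / np.sqrt(d)).ravel()

def fidelity(x, y):
    # Overlap |<psi(x)|psi(y)>| = |(1/d) * sum_i cos(2 pi k_i (x - y) / q)|.
    return abs(hash_state(x) @ hash_state(y))

print(fidelity(123, 123))                                   # equal inputs: 1
print(max(fidelity(123, y) for y in (7, 999, 4321, 8000)))  # distinct: small
```

Collision resistance corresponds to all distinct-input overlaps being uniformly small, while preimage resistance comes from the state revealing only log-many qubits of information about x.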
The Computer and Its Functions; How to Communicate with the Computer.
ERIC Educational Resources Information Center
Ward, Peggy M.
A brief discussion of why it is important for students to be familiar with computers and their functions and a list of some practical applications introduce this two-part paper. Focusing on how the computer works, the first part explains the various components of the computer, different kinds of memory storage devices, disk operating systems, and…
Hathaway, R.M.; McNellis, J.M.
1989-01-01
Investigating the occurrence, quantity, quality, distribution, and movement of the Nation's water resources is the principal mission of the U.S. Geological Survey's Water Resources Division. Reports of these investigations are published and available to the public. To accomplish this mission, the Division requires substantial computer technology to process, store, and analyze data from more than 57,000 hydrologic sites. The Division's computer resources are organized through the Distributed Information System Program Office, which manages the nationwide network of computers. The contract that provides the major computer components for the Water Resources Division's Distributed Information System expires in 1991. Five work groups were organized to collect the information needed to procure a new generation of computer systems for the U.S. Geological Survey, Water Resources Division. Each group was assigned a major Division activity and asked to describe its functional requirements of computer systems for the next decade. The work groups and major activities are: (1) hydrologic information; (2) hydrologic applications; (3) geographic information systems; (4) reports and electronic publishing; and (5) administrative. The work groups identified 42 functions and described their functional requirements for 1988, 1992, and 1997. A few new functions, such as Decision Support Systems and Executive Information Systems, were identified, but most are the same as those performed today. Although the number of functions will remain about the same, steady growth in the size, complexity, and frequency of many functions is predicted for the next decade. No compensating increase in the Division's staff is anticipated during this period. To handle the increased workload and perform these functions, new approaches will be developed that use advanced computer technology. The advanced technology is required in a unified, tightly coupled system that will support all functions simultaneously.
The new approaches and expanded use of computers will require substantial increases in the quantity and sophistication of the Division's computer resources. The requirements presented in this report will be used to develop technical specifications that describe the computer resources needed during the 1990's. (USGS)
RATGRAPH: Computer Graphing of Rational Functions.
ERIC Educational Resources Information Center
Minch, Bradley A.
1987-01-01
Presents an easy-to-use Applesoft BASIC program that graphs rational functions and any asymptotes that the functions might have. Discusses the nature of rational functions, graphing them manually, employing a computer to graph rational functions, and describes how the program works. (TW)
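The asymptote rules such a program automates can be sketched for a rational function R(x) = P(x)/Q(x) given by coefficient lists (highest degree first). The function name and interface below are illustrative, and oblique asymptotes are deliberately not handled.

```python
import numpy as np

def asymptotes(p_coeffs, q_coeffs):
    p, q = np.poly1d(p_coeffs), np.poly1d(q_coeffs)
    # Vertical asymptotes: real roots of Q that are not also roots of P.
    vertical = sorted(r.real for r in q.roots
                      if abs(r.imag) < 1e-12 and abs(p(r)) > 1e-12)
    if p.order < q.order:
        horizontal = 0.0                        # the x-axis, y = 0
    elif p.order == q.order:
        horizontal = p.coeffs[0] / q.coeffs[0]  # ratio of leading coefficients
    else:
        horizontal = None                       # no horizontal asymptote
    return vertical, horizontal

# R(x) = (2x^2 + 1)/(x^2 - 4): vertical asymptotes x = -2, 2; horizontal y = 2.
print(asymptotes([2, 0, 1], [1, 0, -4]))
```

A graphing program like the one described would plot these lines alongside samples of R(x), taking care to break the curve at each vertical asymptote.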
1986-07-01
[Table-of-contents extraction residue. Recoverable headings: Computer-Aided Operation Management System; Functions of an Off-Line Computer-Aided Operation Management System; Applications of the System; System Comparisons; figures on Hardware Components and Basic Functions of a Computer-Aided Operation Management System; Plant Visits; Computer-Aided Operation Management Systems Reviewed for Analysis of Basic Functions; Progress of Software System Installation. Page numbers and the figure list are otherwise unrecoverable.]
ERIC Educational Resources Information Center
Sarfo, Frederick Kwaku; Amankwah, Francis; Konin, Daniel
2017-01-01
The study is aimed at investigating 1) the level of computer self-efficacy among public senior high school (SHS) teachers in Ghana and 2) how teachers' age, gender, and computer experience relate to their computer self-efficacy. Four hundred and seven (407) SHS teachers were used for the study. The "Computer Self-Efficacy"…
Compressed Continuous Computation v. 12/20/2016
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorodetsky, Alex
2017-02-17
A library for performing numerical computation with low-rank functions. The C3 library enables continuous linear and multilinear algebra with multidimensional functions. Common tasks include taking "matrix" decompositions of vector- or matrix-valued functions, approximating multidimensional functions in low-rank format, adding or multiplying functions together, and integrating multidimensional functions.
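The simplest discrete analogue of low-rank function approximation can be sketched as follows (this illustrates the idea only and is not the C3 API): sample a smooth bivariate function on a grid, truncate its SVD, and observe that a handful of rank-1 products already reproduce it to high accuracy.

```python
import numpy as np

# A smooth, numerically low-rank kernel sampled on a 200 x 200 grid.
x = np.linspace(0.0, 1.0, 200)
F = np.exp(-np.subtract.outer(x, x) ** 2)

U, s, Vt = np.linalg.svd(F)
rank = 10
F_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # sum of `rank` rank-1 terms

rel_err = np.linalg.norm(F - F_low) / np.linalg.norm(F)
print(rel_err)   # far below 1e-3: the singular values decay rapidly
```

Continuous low-rank libraries work with function-valued analogues of the factors U and V (e.g. functional tensor-train cores) instead of grids, so storage scales with rank rather than with a full tensor-product discretization.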
NASA Astrophysics Data System (ADS)
Bornyakov, V. G.; Boyda, D. L.; Goy, V. A.; Molochkov, A. V.; Nakamura, Atsushi; Nikolaev, A. A.; Zakharov, V. I.
2017-05-01
We propose and test a new approach to the computation of canonical partition functions in lattice QCD at finite density. The procedure has a few steps. We first compute numerically the quark number density for imaginary chemical potential iμ_q^I. We then reconstruct the grand canonical partition function for imaginary chemical potential by fitting the quark number density. Finally, we compute the canonical partition functions using a high-precision numerical Fourier transformation. Additionally, we compute the canonical partition functions using the known method of the hopping-parameter expansion and compare the results obtained by the two methods in the deconfining as well as in the confining phase. The agreement between the two methods indicates the validity of the new approach. Our numerical results are obtained in two-flavor lattice QCD with clover-improved Wilson fermions.
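The final step of this procedure, recovering canonical partition functions from the grand canonical partition function at imaginary chemical potential, can be illustrated on a toy model. Here Z(iθ) is built from known fugacity-expansion coefficients and those coefficients are recovered by a numerical Fourier transform (a uniform quadrature, which is exact for trigonometric polynomials). The Gaussian weights are an arbitrary stand-in for real lattice data, not values from the paper.

```python
import math, cmath

# toy canonical weights Z_n (illustrative; symmetric in n)
true_Zn = {n: math.exp(-n * n / 2) for n in range(-3, 4)}

def Z_gc(theta):
    # grand canonical partition function at imaginary chemical potential:
    # Z(i*theta) = sum_n Z_n exp(i n theta)  (fugacity expansion)
    return sum(z * cmath.exp(1j * n * theta) for n, z in true_Zn.items())

def canonical(n, N=256):
    # Z_n = (1/2pi) \int_0^{2pi} Z(i theta) e^{-i n theta} d theta,
    # approximated by a uniform Riemann sum (exact for trig polynomials)
    s = sum(Z_gc(2 * math.pi * k / N) * cmath.exp(-1j * n * 2 * math.pi * k / N)
            for k in range(N))
    return (s / N).real
```

In the paper this Fourier step is applied to a fitted Z(iμ_q^I) rather than a known expansion, and precision of the quadrature becomes the limiting factor.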
Cortical Neural Computation by Discrete Results Hypothesis
Castejon, Carlos; Nuñez, Angel
2016-01-01
One of the most challenging problems we face in neuroscience is to understand how the cortex performs computations. There is increasing evidence that the power of cortical processing is produced by populations of neurons forming dynamic neuronal ensembles. Theoretical proposals and multineuronal experimental studies have revealed that ensembles of neurons can form emergent functional units. However, how these ensembles are implicated in cortical computations is still a mystery. Although cell ensembles have been associated with brain rhythms, the functional interaction remains largely unclear. It is still unknown how spatially distributed neuronal activity can be temporally integrated to contribute to cortical computations. A theoretical explanation integrating spatial and temporal aspects of cortical processing is still lacking. In this Hypothesis and Theory article, we propose a new functional theoretical framework to explain the computational roles of these ensembles in cortical processing. We suggest that complex neural computations underlying cortical processing could be temporally discrete and that sensory information would need to be quantized to be computed by the cerebral cortex. Accordingly, we propose that cortical processing is produced by the computation of discrete spatio-temporal functional units that we have called “Discrete Results” (Discrete Results Hypothesis). This hypothesis represents a novel functional mechanism by which information processing is computed in the cortex. Furthermore, we propose that precise dynamic sequences of “Discrete Results” are the mechanism used by the cortex to extract, code, memorize and transmit neural information. The novel “Discrete Results” concept has the ability to match the spatial and temporal aspects of cortical processing. We discuss the possible neural underpinnings of these functional computational units and describe the empirical evidence supporting our hypothesis. We propose that fast-spiking (FS) interneurons may be a key element in our hypothesis, providing the basis for this computation. PMID:27807408
Computational work and time on finite machines.
NASA Technical Reports Server (NTRS)
Savage, J. E.
1972-01-01
Measures of the computational work and computational delay required by machines to compute functions are given. Exchange inequalities are developed for random access, tape, and drum machines to show that product inequalities between storage and time, number of drum tracks and time, number of bits in an address and time, etc., must be satisfied to compute finite functions on bounded machines.
NASA Technical Reports Server (NTRS)
Rediess, Herman A.; Hewett, M. D.
1991-01-01
The requirements for the use of remote computation to support HRV flight testing are assessed. First, remote computational requirements were developed to support functions that will eventually be performed onboard operational vehicles of this type: functions that either cannot be performed onboard in the time frame of initial HRV flight-test programs, because airborne computer technology will not be sufficiently advanced to support the required computational loads, or that for other reasons are not desirable to perform onboard during the flight-test program. Second, remote computational support that is either required or highly desirable for conducting the flight testing itself was addressed. The use of an Automated Flight Management System, which is described in conceptual detail, is proposed. Third, autonomous operations are discussed, and finally, unmanned operations.
NASA Astrophysics Data System (ADS)
Dunster, T. M.; Gil, A.; Segura, J.; Temme, N. M.
2017-08-01
Conical functions appear in a large number of applications in physics and engineering. In this paper we describe an extension of our module Conical (Gil et al., 2012) for the computation of conical functions. Specifically, the module now includes a routine for computing the function R^m_{-1/2+iτ}(x), a real-valued, numerically satisfactory companion of the function P^m_{-1/2+iτ}(x) for x > 1. In this way, a natural basis for solving Dirichlet problems bounded by conical domains is provided. The module also improves the performance of our previous algorithm for the conical function P^m_{-1/2+iτ}(x) and now includes the computation of the first-order derivative of the function; this is also provided for R^m_{-1/2+iτ}(x) in the extended algorithm.
Network Coding for Function Computation
ERIC Educational Resources Information Center
Appuswamy, Rathinakumar
2011-01-01
In this dissertation, the following "network computing problem" is considered. Source nodes in a directed acyclic network generate independent messages and a single receiver node computes a target function f of the messages. The objective is to maximize the average number of times f can be computed per network usage, i.e., the "computing…
Simulation study of entropy production in the one-dimensional Vlasov system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Zongliang, E-mail: liangliang1223@gmail.com; Wang, Shaojie
2016-07-15
The coarse-grain averaged distribution function of the one-dimensional Vlasov system is obtained by numerical simulation. The entropy productions in the cases of a random field, linear Landau damping, and the bump-on-tail instability are computed with the coarse-grain averaged distribution function. The computed entropy production converges with increasing length of the coarse-grain average. When the distribution function differs only slightly from a Maxwellian distribution, the converged value agrees with the result computed using the definition of thermodynamic entropy. The choice of the coarse-grain averaging length used to compute the coarse-grain averaged distribution function is discussed.
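The coarse-grain average and the resulting entropy can be sketched in a few lines. The example below assumes a filamented 1-D distribution (the oscillation frequency and window size are illustrative, not the paper's); since -f ln f is concave, averaging over windows can only increase the Gibbs entropy, which is the effect the simulations measure.

```python
import math

def entropy(f, dv):
    # Gibbs entropy S = -\int f ln f dv on a uniform grid (f > 0 assumed)
    return -sum(fi * math.log(fi) for fi in f) * dv

N, L = 4000, 10.0                    # fine grid on v in [-5, 5]
dv = L / N
v = [-5.0 + (i + 0.5) * dv for i in range(N)]
# filamented distribution: Maxwellian modulated by fine-scale oscillations
f = [math.exp(-vi * vi / 2) * (1 + 0.9 * math.cos(50 * vi)) for vi in v]
norm = sum(f) * dv
f = [fi / norm for fi in f]

M = 40                               # coarse-grain window: average M fine cells
f_bar = [sum(f[i:i + M]) / M for i in range(0, N, M)]

S_fine = entropy(f, dv)
S_cg = entropy(f_bar, M * dv)
# S_cg >= S_fine: coarse-graining smooths filaments and raises the entropy
```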
Computing Evans functions numerically via boundary-value problems
NASA Astrophysics Data System (ADS)
Barker, Blake; Nguyen, Rose; Sandstede, Björn; Ventura, Nathaniel; Wahl, Colin
2018-03-01
The Evans function has been used extensively to study spectral stability of travelling-wave solutions in spatially extended partial differential equations. To compute Evans functions numerically, several shooting methods have been developed. In this paper, an alternative scheme for the numerical computation of Evans functions is presented that relies on an appropriate boundary-value problem formulation. Convergence of the algorithm is proved, and several examples, including the computation of eigenvalues for a multi-dimensional problem, are given. The main advantage of the scheme proposed here compared with earlier methods is that the scheme is linear and scalable to large problems.
Efficient Server-Aided Secure Two-Party Function Evaluation with Applications to Genomic Computation
2016-07-14
…one of the important properties of secure computation. In particular, it is known that full fairness cannot be achieved in the case of two-party com… [citation fragment: Jakobsen, J. Nielsen, and C. Orlandi. A framework for outsourcing of secure computation. In ACM Workshop on Cloud Computing Security (CCSW), pages…] …Function Evaluation with Applications to Genomic Computation. Abstract: Computation based on genomic data is becoming increasingly popular today, be it…
PERFORMANCE OF A COMPUTER-BASED ASSESSMENT OF COGNITIVE FUNCTION MEASURES IN TWO COHORTS OF SENIORS
Espeland, Mark A.; Katula, Jeffrey A.; Rushing, Julia; Kramer, Arthur F.; Jennings, Janine M.; Sink, Kaycee M.; Nadkarni, Neelesh K.; Reid, Kieran F.; Castro, Cynthia M.; Church, Timothy; Kerwin, Diana R.; Williamson, Jeff D.; Marottoli, Richard A.; Rushing, Scott; Marsiske, Michael; Rapp, Stephen R.
2013-01-01
Background: Computer-administered assessment of cognitive function is being increasingly incorporated in clinical trials; however, its performance in these settings has not been systematically evaluated. Design: The Seniors Health and Activity Research Program (SHARP) pilot trial (N=73) developed a computer-based tool for assessing memory performance and executive functioning. The Lifestyle Interventions and Independence for Seniors (LIFE) investigators incorporated this battery in a full-scale multicenter clinical trial (N=1635). We describe the relationships of the test scores with interviewer-administered cognitive function tests and with risk factors for cognitive deficits, and we describe performance measures (completeness, intra-class correlations). Results: Computer-based assessments of cognitive function had consistent relationships across the pilot and full-scale trial cohorts with interviewer-administered assessments of cognitive function, age, and a measure of physical function. In the LIFE cohort, their external validity was further demonstrated by associations with other risk factors for cognitive dysfunction: education, hypertension, diabetes, and physical function. Acceptable levels of data completeness (>83%) were achieved on all computer-based measures; however, rates of missing data were higher among older participants (odds ratio=1.06 for each additional year; p<0.001) and those who reported no current computer use (odds ratio=2.71; p<0.001). Intra-class correlations among clinics were at least as low (ICC≤0.013) as for interviewer measures (ICC≤0.023), reflecting good standardization. All cognitive measures loaded onto the first principal component (global cognitive function), which accounted for 40% of the overall variance. Conclusion: Our results support the use of computer-based tools for assessing cognitive function in multicenter clinical trials of older individuals. PMID:23589390
Computation of turbulent boundary layers employing the defect wall-function method. M.S. Thesis
NASA Technical Reports Server (NTRS)
Brown, Douglas L.
1994-01-01
In order to decrease the overall computational time requirements of a spatially-marching parabolized Navier-Stokes finite-difference computer code applied to turbulent fluid flow, a wall-function methodology, originally proposed by R. Barnwell, was implemented. This numerical effort increases computational speed and calculates reasonably accurate wall shear stress spatial distributions and boundary-layer profiles. Since the wall shear stress is determined analytically from the wall-function model, the computational grid near the wall is not required to spatially resolve the laminar viscous sublayer. Consequently, a substantially larger computational integration step size is achieved, resulting in a considerable decrease in net computational time. This wall-function technique is demonstrated for adiabatic flat-plate test cases from Mach 2 to Mach 8. These test cases are verified analytically employing: (1) Eckert reference method solutions, (2) the experimental turbulent boundary-layer data of Mabey, and (3) finite-difference computational code solutions with fully resolved laminar viscous sublayers. Additionally, results have been obtained for two pressure-gradient cases: (1) an adiabatic expansion corner and (2) an adiabatic compression corner.
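The essential wall-function step, obtaining the wall shear stress analytically instead of resolving the sublayer on the grid, can be sketched with the generic incompressible log law. Barnwell's defect wall-function formulation for compressible flow differs in detail; the constants κ = 0.41, B = 5.0 and the flow numbers below are illustrative, not taken from the thesis.

```python
import math

def friction_velocity(U, y, nu, kappa=0.41, B=5.0):
    """Solve the log law  U/u_tau = (1/kappa) ln(y u_tau / nu) + B
    for the friction velocity u_tau by Newton iteration. This is the
    generic incompressible log law, a stand-in for the defect
    wall-function model described in the thesis."""
    u = 0.05 * U                              # initial guess
    for _ in range(50):
        g = U / u - (math.log(y * u / nu) / kappa + B)
        dg = -U / u**2 - 1.0 / (kappa * u)    # d g / d u_tau
        u = max(u - g / dg, 1e-12)
    return u

# example: U = 10 m/s at y = 1 mm from the wall, nu = 1.5e-5 m^2/s (air-like)
u_tau = friction_velocity(10.0, 1e-3, 1.5e-5)
tau_w = 1.2 * u_tau**2                        # wall shear stress with rho = 1.2 kg/m^3
```

Because u_tau comes from this algebraic solve, the first grid point can sit in the log layer rather than at y+ of order one, which is where the step-size savings come from.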
Evans function computation for the stability of travelling waves
NASA Astrophysics Data System (ADS)
Barker, B.; Humpherys, J.; Lyng, G.; Lytle, J.
2018-04-01
In recent years, the Evans function has become an important tool for the determination of stability of travelling waves. This function, a Wronskian of decaying solutions of the eigenvalue equation, is useful both analytically and computationally for the spectral analysis of the linearized operator about the wave. In particular, Evans-function computation allows one to locate any unstable eigenvalues of the linear operator (if they exist); this allows one to establish spectral stability of a given wave and identify bifurcation points (loss of stability) as model parameters vary. In this paper, we review computational aspects of the Evans function and apply it to multidimensional detonation waves. This article is part of the theme issue `Stability of nonlinear waves and patterns and related topics'.
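A scalar textbook example shows the machinery: for the Schrödinger operator -d²/dx² - 2 sech²(x), whose single eigenvalue is exactly -1 (eigenfunction sech x), the Evans function is the Wronskian at x = 0 of the two solutions decaying at ±∞, and its zero can be located by bisection. This is a shooting-type sketch, not the multidimensional detonation-wave computations reviewed in the paper; the domain half-length, step counts, and bracketing interval are arbitrary choices.

```python
import math

def rk4_step(f, x, y, h):
    # one classical Runge-Kutta step for y' = f(x, y), y a list
    k1 = f(x, y)
    k2 = f(x + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
    k3 = f(x + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
    k4 = f(x + h, [yi + h*ki for yi, ki in zip(y, k3)])
    return [yi + h/6*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def evans(lam, L=10.0, n=1000):
    """Evans function D(lam) for -u'' - 2 sech^2(x) u = lam u, lam < 0:
    the Wronskian at x = 0 of the solutions decaying as x -> -inf and
    x -> +inf. Zeros of D are eigenvalues (here exactly lam = -1)."""
    mu = math.sqrt(-lam)                      # asymptotic decay rate
    def rhs(x, y):                            # y = (u, u')
        return [y[1], -(lam + 2.0 / math.cosh(x)**2) * y[0]]
    h = L / n
    ul, x = [1.0, mu], -L                     # left solution ~ e^{mu x}
    for _ in range(n):
        ul, x = rk4_step(rhs, x, ul, h), x + h
    ur, x = [1.0, -mu], L                     # right solution ~ e^{-mu x}
    for _ in range(n):
        ur, x = rk4_step(rhs, x, ur, -h), x - h
    return ul[0]*ur[1] - ul[1]*ur[0]          # Wronskian at x = 0

# locate the zero of D on (-1.5, -0.5) by bisection
a, b = -1.5, -0.5
fa = evans(a)
for _ in range(40):
    m = 0.5 * (a + b)
    fm = evans(m)
    if fa * fm <= 0:
        b = m
    else:
        a, fa = m, fm
lam0 = 0.5 * (a + b)    # converges to the exact eigenvalue -1
```

The boundary-value formulations discussed in the paper replace this shooting integration, which becomes stiff for larger systems, with a linear and scalable solve.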
Using gravimetry to probe small to large scale lithospheric structure at Fogo Island (Cape Verde)
NASA Astrophysics Data System (ADS)
Fernandes, R.; Bos, M. S.; Dumont, S.; Oliveira, C. S.; Miranda, E. H.; Almeida, P. G.
2017-12-01
The Fogo volcano is located on one of the Cape Verde islands in the Atlantic Ocean. Its last eruption lasted from November 2014 to February 2015 and destroyed two villages. To better understand its volcanic plumbing system, and also its eruptive dynamics, which would contribute to hazard assessment and risk mitigation, the project "FIRE - Fogo Island volcano: multi-disciplinary Research on 2014/15 Eruption" was rapidly conceived in collaboration with local institutions. Despite regular eruptive activity in the last centuries (recurrence of roughly 20 years), no magmatic chamber has been evidenced yet, and this is what we are investigating using gravimetry in the FIRE project. Approximately 70 new relative gravity observations were made by the C4G mission during the 2014 eruption, using a SCINTREX CG3M gravimeter. The spacing between observation points was around 5 km for most of the island and around 2 km in Chã das Caldeiras, close to the 2014 eruption vent. In January 2017, 48 additional observations were made, densifying the post-eruption observations in Chã das Caldeiras. The exact location of each observation point was determined with an accuracy of <10 cm from co-located GNSS receivers. The gravity residuals are fitted to the predicted gravity effect of modelled magma chambers with various diameters, depths, and density contrasts in order to investigate whether such a chamber can explain the measurements. A digital terrain model of the island, refined with a high-resolution (10 m) terrain model of the Chã das Caldeiras produced as part of the project, will be used to remove the gravitational effect of the topography. In addition, the new gravity observations can be used to improve global geopotential models such as EGM2008/EIGEN6C4 over the island and to compute a new elastic thickness of the crust underneath the island.
Pim (2006) estimated that the elastic thickness Te is around 30 km in this region, which is normal for the age of the lithosphere, suggesting that it has not experienced thermal reheating due to the underlying mantle plume. Our measurements can verify these results. This is a contribution to Project FIRE (PTDC/GEOGEO/1123/2014) and scholarship SFRH/BPD/89923/2012 funded by FCT (Portugal). The GNSS solutions were computed using resources provided by C4G - Collaboratory for Geosciences (POCI-01-0145-FEDER-022151).
Some computational techniques for estimating human operator describing functions
NASA Technical Reports Server (NTRS)
Levison, W. H.
1986-01-01
Computational procedures for improving the reliability of human operator describing functions are described. Special attention is given to the estimation of standard errors associated with mean operator gain and phase shift as computed from an ensemble of experimental trials. This analysis pertains to experiments using sum-of-sines forcing functions. Both open-loop and closed-loop measurement environments are considered.
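The core estimation step with sum-of-sines forcing can be sketched directly: correlate the input and the operator's output with each forcing frequency and take the ratio of Fourier coefficients. The simulated "operator" below is a pure gain plus time delay with illustrative values (not from the report), so the recovered gain and phase are known in advance; with experimental data, repeating this over an ensemble of trials gives the mean and standard-error statistics the report analyzes.

```python
import math, cmath

fs, T = 100.0, 10.0          # sample rate (Hz) and record length (s)
N = int(fs * T)
freqs = [0.5, 1.1, 2.3]      # forcing frequencies: whole numbers of cycles in T
K, tau = 2.0, 0.05           # "operator": a pure gain plus time delay (illustrative)

t = [k / fs for k in range(N)]
u = [sum(math.sin(2*math.pi*f*tk) for f in freqs) for tk in t]              # forcing
y = [K * sum(math.sin(2*math.pi*f*(tk - tau)) for f in freqs) for tk in t]  # response

def coeff(x, f):
    # Fourier coefficient of x at frequency f (single-bin DFT)
    return sum(xk * cmath.exp(-2j*math.pi*f*tk) for xk, tk in zip(x, t)) / len(x)

describing = {}
for f in freqs:
    H = coeff(y, f) / coeff(u, f)              # describing-function estimate
    describing[f] = (abs(H), cmath.phase(H))   # gain and phase shift (rad)
```

Choosing frequencies that complete whole numbers of cycles in the record avoids spectral leakage between the component sines, which is why sum-of-sines forcing pairs naturally with this estimator.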
Green's function methods in heavy ion shielding
NASA Technical Reports Server (NTRS)
Wilson, John W.; Costen, Robert C.; Shinn, Judy L.; Badavi, Francis F.
1993-01-01
An analytic solution to the heavy ion transport in terms of Green's function is used to generate a highly efficient computer code for space applications. The efficiency of the computer code is accomplished by a nonperturbative technique extending Green's function over the solution domain. The computer code can also be applied to accelerator boundary conditions to allow code validation in laboratory experiments.
NASA Technical Reports Server (NTRS)
Kennedy, J. R.; Fitzpatrick, W. S.
1971-01-01
The computer executive functional system design concepts derived from study of the Space Station/Base are presented. Information Management System hardware configuration as directly influencing the executive design is reviewed. The hardware configuration and generic executive design requirements are considered in detail in a previous report (System Configuration and Executive Requirements Specifications for Reusable Shuttle and Space Station/Base, 9/25/70). This report defines basic system primitives and delineates processes and process control. Supervisor states are considered for describing basic multiprogramming and multiprocessing systems. A high-level computer executive including control of scheduling, allocation of resources, system interactions, and real-time supervisory functions is defined. The description is oriented to provide a baseline for a functional simulation of the computer executive system.
Apollo experience report: Real-time auxiliary computing facility development
NASA Technical Reports Server (NTRS)
Allday, C. E.
1972-01-01
The Apollo real time auxiliary computing function and facility were an extension of the facility used during the Gemini Program. The facility was expanded to include support of all areas of flight control, and computer programs were developed for mission and mission-simulation support. The scope of the function was expanded to include prime mission support functions in addition to engineering evaluations, and the facility became a mandatory mission support facility. The facility functioned as a full scale mission support activity until after the first manned lunar landing mission. After the Apollo 11 mission, the function and facility gradually reverted to a nonmandatory, offline, on-call operation because the real time program flexibility was increased and verified sufficiently to eliminate the need for redundant computations. The evaluation of the facility and function and recommendations for future programs are discussed in this report.
Spectroscopy of Kerr black holes with Earth- and space-based interferometers
NASA Astrophysics Data System (ADS)
Berti, Emanuele; Sesana, Alberto; Barausse, Enrico; Cardoso, Vitor; Belczynski, Krzysztof
2017-01-01
We estimate the potential of present and future interferometric gravitational-wave detectors to test the Kerr nature of black holes through "gravitational spectroscopy," i.e. the measurement of multiple quasinormal mode frequencies from the remnant of a black hole merger. Using population synthesis models of the formation and evolution of stellar-mass black hole binaries, we find that Voyager-class interferometers will be necessary to perform these tests. Gravitational spectroscopy in the local Universe may become routine with the Einstein Telescope, but a 40-km facility like Cosmic Explorer is necessary to go beyond z ≈ 3. In contrast, eLISA-like detectors should carry out a few, or even hundreds, of these tests every year, depending on uncertainties in massive black hole formation models. Many space-based spectroscopic measurements will occur at high redshift, testing the strong-gravity dynamics of Kerr black holes in domains where cosmological corrections to general relativity (if they occur in nature) must be significant. NSF CAREER Grant No. PHY-1055103, NSF Grant No. PHY-1607130, FCT contract IF/00797/2014/CP1214/CT0012.
Cooperative inversion of magnetotelluric and seismic data sets
NASA Astrophysics Data System (ADS)
Markovic, M.; Santos, F.
2012-04-01
Cooperative inversion of magnetotelluric and seismic data sets. Milenko Markovic, Fernando Monteiro Santos, IDL, Faculdade de Ciências da Universidade de Lisboa, 1749-016 Lisboa. Inversion of a single geophysical data set has well-known limitations due to the non-linearity of the fields and the non-uniqueness of the model. There is a growing need, both in academia and in industry, to use two or more different data sets and thus obtain the subsurface property distribution; in our case, we are dealing with magnetotelluric and seismic data sets. Our approach develops an algorithm based on the fuzzy c-means clustering technique for pattern recognition of geophysical data. Separate inversions are performed at every step, with information exchanged for model integration; interrelationships between parameters from different models are not required in analytical form. We are investigating how different numbers of clusters affect zonation and the spatial distribution of parameters. In our study, optimization in fuzzy c-means clustering (for magnetotelluric and seismic data) is compared for two cases: first alternating optimization, then a hybrid method (alternating optimization + quasi-Newton method). Acknowledgment: This work is supported by FCT Portugal
Environmental magnetic signature of the Deccan Phase 2 at the Gambsach section
NASA Astrophysics Data System (ADS)
Font, Eric; Adatte, Thierry; Florindo, Fabio
2015-04-01
The age and paleoenvironmental effects of the Deccan Traps volcanism are still poorly constrained. Recently, we discovered an akaganeite-bearing interval of low magnetic susceptibility in the stratigraphic interval located just below the Cretaceous-Tertiary Boundary (KTB) at Bidart and Gubbio, correlated in age to the Deccan Phase 2. Here we aim to test our hypotheses in another complete and well-calibrated KT section, the Gambsach section in Austria. We applied magnetic susceptibility, isothermal remanent magnetization curves, and FORC diagrams in order to check for changes in magnetic properties and their link with paleoenvironmental changes. Our results show that an interval of low magnetic susceptibility is located just below the KTB, as at Bidart and Gubbio. The low values of magnetic susceptibility correspond to a lower content of magnetite and hematite. We interpret the loss of iron oxides as the result of reductive dissolution due to ocean acidification. The newly found evidence is consistent with major paleoenvironmental changes linked to Deccan volcanism at the dawn of the KT mass extinction. Keywords: Deccan, KT mass extinction, magnetism, environment, Gambsach. Funded by FCT (PTDC/CTE-GIX/117298/2010)
Constraining stellar binary black hole formation scenarios with LISA eccentricity measurements
NASA Astrophysics Data System (ADS)
Berti, Emanuele; Nishizawa, Atsushi; Sesana, Alberto; Klein, Antoine
2017-01-01
A space-based interferometer such as LISA could observe a few to a few thousand progenitors of black hole binaries (BHBs) similar to those recently detected by Advanced LIGO. Gravitational radiation circularizes the orbit during inspiral, but some BHBs retain a measurable eccentricity at the low frequencies where LISA is most sensitive. The eccentricity of a BHB carries precious information about its formation channel: BHBs formed in the field, in globular clusters, or close to a massive black hole (MBH) have distinct eccentricity distributions in the LISA band. We generate mock LISA observations, folding in measurement errors, and using Bayesian model selection we study whether LISA measurements can identify the BHB formation channel. We find that a handful of observations would suffice to tell whether BHBs were formed in the gravitational field of a MBH. Conversely, several tens of observations are needed to tell apart field formation from globular cluster formation. A five-year LISA mission with the longest possible armlength is desirable to shed light on BHB formation scenarios. NSF CAREER Grant No. PHY-1055103, NSF Grant No. PHY-1607130, FCT contract IF/00797/2014/CP1214/CT0012.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Small, Ward; Pearson, Mark A.; Metz, Tom R.
Dow Corning SE 1700 (reinforced polydimethylsiloxane) porous structures were made by direct ink writing (DIW) in a face-centered tetragonal (FCT) configuration. The filament diameter was 250 μm. Structures consisting of 4, 8, or 12 layers were fabricated with center-to-center filament spacing (“road width” (RW)) of 475, 500, 525, 550, or 575 μm. Three compressive load-unload cycles to 2000 kPa were performed on four separate areas of each sample; three samples of each thickness and filament spacing were tested. At a given strain during the third loading phase, stress varied inversely with porosity. At 10% strain, the stress was nearly independent of the number of layers (i.e., thickness). At higher strains (20-40%), the stress was highest for the 4-layer structure; the 8- and 12-layer structures were nearly equivalent, suggesting that the load deflection is independent of the number of layers above 8 layers. Intra- and inter-sample variability of the load-deflection response was higher for thinner and less porous structures.
BiDiBlast: comparative genomics pipeline for the PC.
de Almeida, João M G C F
2010-06-01
Bi-directional BLAST is a simple approach to detect, annotate, and analyze candidate orthologous or paralogous sequences in a single go. This procedure is usually confined to the realm of customized Perl scripts, typically tuned for UNIX-like environments. Porting those scripts to other operating systems involves refactoring them, as well as installing the Perl programming environment with the required libraries. To overcome these limitations, a data pipeline was implemented in Java. This application submits two batches of sequences to local versions of the NCBI BLAST tool, manages result lists, and refines both bi-directional and simple hits. GO Slim terms are attached to hits, several statistics are derived, and molecular evolution rates are estimated through PAML. The results are written to a set of delimited text tables intended for further analysis. The provided graphical user interface allows friendly interaction with this application, which is documented and available to download at http://moodle.fct.unl.pt/course/view.php?id=2079 or https://sourceforge.net/projects/bidiblast/ under the GNU GPL license. Copyright 2010 Beijing Genomics Institute. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Jin, Yan; Ye, Chen; Luo, Xiao; Yuan, Hui; Cheng, Changgui
2017-05-01
In order to improve the inclusion-removal behaviour of the tundish, a mathematical model was developed to simulate the flow field produced by an inner-swirl-type turbulence controller (ISTTC), in which six blades are arranged counterclockwise at an eccentric angle (θ). Based on the mathematical and water models, the effect of inclusion removal in the swirling flow field formed by the ISTTC was analyzed. The ISTTC was found to inhibit turbulence in the tundish more effectively than a traditional turbulence inhibitor (TI). As the blade eccentric angle (θ) of the ISTTC increased, the intensity of the swirling flow above it increased; the maximum rotation speed of the fluid in the swirling-flow band driven by the ISTTC (θ=45°) was 25 rpm. Force analysis of inclusions in the swirling flow sourced from the ISTTC showed that the removal of medium-size inclusions is attributable to the centripetal force (Fct) of the swirling flow, whereas the removal of small inclusions depends more on the ISTTC's better turbulence-suppression behaviour.
ERIC Educational Resources Information Center
Coster, Wendy J.; Kramer, Jessica M.; Tian, Feng; Dooley, Meghan; Liljenquist, Kendra; Kao, Ying-Chia; Ni, Pengsheng
2016-01-01
The Pediatric Evaluation of Disability Inventory-Computer Adaptive Test is an alternative method for describing the adaptive function of children and youth with disabilities using a computer-administered assessment. This study evaluated the performance of the Pediatric Evaluation of Disability Inventory-Computer Adaptive Test with a national…
Passive dendrites enable single neurons to compute linearly non-separable functions.
Cazé, Romain Daniel; Humphries, Mark; Gutkin, Boris
2013-01-01
Local supra-linear summation of excitatory inputs occurring in pyramidal cell dendrites, the so-called dendritic spikes, results in independent spiking dendritic sub-units, which turn pyramidal neurons into two-layer neural networks capable of computing linearly non-separable functions, such as the exclusive OR. Other neuron classes, such as interneurons, may possess only a few independent dendritic sub-units, or only passive dendrites where input summation is purely sub-linear, and where dendritic sub-units are only saturating. To determine if such neurons can also compute linearly non-separable functions, we enumerate, for a given parameter range, the Boolean functions implementable by a binary neuron model with a linear sub-unit and either a single spiking or a saturating dendritic sub-unit. We then analytically generalize these numerical results to an arbitrary number of non-linear sub-units. First, we show that a single non-linear dendritic sub-unit, in addition to the somatic non-linearity, is sufficient to compute linearly non-separable functions. Second, we analytically prove that, with a sufficient number of saturating dendritic sub-units, a neuron can compute all functions computable with purely excitatory inputs. Third, we show that these linearly non-separable functions can be implemented with at least two strategies: one where a dendritic sub-unit is sufficient to trigger a somatic spike; another where somatic spiking requires the cooperation of multiple dendritic sub-units. We formally prove that implementing the latter architecture is possible with both types of dendritic sub-units whereas the former is only possible with spiking dendrites. Finally, we show how linearly non-separable functions can be computed by a generic two-compartment biophysical model and a realistic neuron model of the cerebellar stellate cell interneuron. 
Taken together, our results demonstrate that passive dendrites are sufficient to enable neurons to compute linearly non-separable functions. PMID:23468600
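The saturating sub-unit strategy described in this abstract can be sketched with a toy binary model. The sub-unit grouping, saturation ceiling, and somatic threshold below are illustrative assumptions, not the paper's fitted parameters; the point is that purely excitatory inputs plus sub-linear dendritic summation already yield a linearly non-separable input-output function.

```python
# Toy sketch (not the paper's actual model): a binary neuron with two
# saturating dendritic sub-units and purely excitatory inputs computes a
# linearly non-separable Boolean function.

def saturating_subunit(inputs, ceiling=1.0):
    """Sub-linear summation: the sub-unit output saturates at `ceiling`."""
    return min(sum(inputs), ceiling)

def two_subunit_neuron(x1, x2, x3, x4, theta=1.5):
    """Soma fires iff the summed sub-unit outputs reach threshold theta."""
    d_a = saturating_subunit([x1, x2])  # sub-unit A pools x1, x2
    d_b = saturating_subunit([x3, x4])  # sub-unit B pools x3, x4
    return int(d_a + d_b >= theta)

# The neuron fires only when input is "spread" across both sub-units,
# distinguishing patterns with identical total drive:
assert two_subunit_neuron(1, 0, 1, 0) == 1   # spread across sub-units
assert two_subunit_neuron(0, 1, 0, 1) == 1
assert two_subunit_neuron(1, 1, 0, 0) == 0   # clustered in one sub-unit
assert two_subunit_neuron(0, 0, 1, 1) == 0
```

No single linear threshold unit w·x ≥ t can realize these four rows: the two firing rows require (w1+w3) + (w2+w4) ≥ 2t while the two silent rows require (w1+w2) + (w3+w4) < 2t, and both sums are the same quantity.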
NASA Technical Reports Server (NTRS)
Curran, R. T.; Hornfeck, W. A.
1972-01-01
The functional requirements for the design of an interpretive simulator for the space ultrareliable modular computer (SUMC) are presented. A review of applicable existing computer simulations is included along with constraints on the SUMC simulator functional design. Input requirements, output requirements, and language requirements for the simulator are discussed in terms of a SUMC configuration which may vary according to the application.
1976-03-01
Research in Functionally Distributed Computer Systems Development (P. S. Fisher, F. Maryanski; report CS-76-08; contract DAA629-76-6-0108). Kansas State University; Virgil Wallentine, Principal Investigator. Approved for public release; distribution unlimited. U.S. Army Computer Systems Command, Ft. Belvoir, VA.
Crosstalk Cancellation for a Simultaneous Phase Shifting Interferometer
NASA Technical Reports Server (NTRS)
Olczak, Eugene (Inventor)
2014-01-01
A method of minimizing fringe print-through in a phase-shifting interferometer, includes the steps of: (a) determining multiple transfer functions of pixels in the phase-shifting interferometer; (b) computing a crosstalk term for each transfer function; and (c) displaying, to a user, a phase-difference map using the crosstalk terms computed in step (b). Determining a transfer function in step (a) includes measuring intensities of a reference beam and a test beam at the pixels, and measuring an optical path difference between the reference beam and the test beam at the pixels. Computing crosstalk terms in step (b) includes computing an N-dimensional vector, where N corresponds to the number of transfer functions, and the N-dimensional vector is obtained by minimizing a variance of a modulation function in phase shifted images.
Computer method for identification of boiler transfer functions
NASA Technical Reports Server (NTRS)
Miles, J. H.
1972-01-01
An iterative, computer-aided procedure was developed for identifying boiler transfer functions from frequency response data. The method yields satisfactory transfer functions for both high and low vapor exit quality data.
Massively parallel sparse matrix function calculations with NTPoly
NASA Astrophysics Data System (ADS)
Dawson, William; Nakajima, Takahito
2018-04-01
We present NTPoly, a massively parallel library for computing the functions of sparse, symmetric matrices. The theory of matrix functions is a well developed framework with a wide range of applications including differential equations, graph theory, and electronic structure calculations. One particularly important application area is diagonalization free methods in quantum chemistry. When the input and output of the matrix function are sparse, methods based on polynomial expansions can be used to compute matrix functions in linear time. We present a library based on these methods that can compute a variety of matrix functions. Distributed memory parallelization is based on a communication avoiding sparse matrix multiplication algorithm. OpenMP task parallelization is utilized to implement hybrid parallelization. We describe NTPoly's interface and show how it can be integrated with programs written in many different programming languages. We demonstrate the merits of NTPoly by performing large-scale calculations on the K computer.
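The polynomial-expansion idea can be sketched in dense linear algebra (illustrative only; NTPoly's actual algorithms, sparsity filtering, and parallel layout are more involved): a matrix function such as exp(A) is built from repeated matrix-matrix products, the same operation that communication-avoiding sparse multiplication makes scalable.

```python
import numpy as np

# Sketch: evaluate a matrix function f(A) of a symmetric matrix using only
# matrix-matrix products -- here a truncated Taylor series for exp(A).
# The matrix below is an invented small example.

def expm_taylor(a, terms=30):
    """exp(A) via sum_k A^k / k!, using repeated multiplication."""
    result = np.eye(a.shape[0])
    term = np.eye(a.shape[0])
    for k in range(1, terms):
        term = term @ a / k          # term now holds A^k / k!
        result = result + term
    return result

def expm_eig(a):
    """Reference: exp(A) through the eigendecomposition of symmetric A."""
    w, v = np.linalg.eigh(a)
    return (v * np.exp(w)) @ v.T

a = np.array([[0.5, 0.2, 0.0],
              [0.2, -0.3, 0.1],
              [0.0, 0.1, 0.8]])
assert np.allclose(expm_taylor(a), expm_eig(a), atol=1e-10)
```

When A and f(A) are both sparse, each product in the loop stays sparse, which is what enables linear-scaling implementations.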
An efficient method for hybrid density functional calculation with spin-orbit coupling
NASA Astrophysics Data System (ADS)
Wang, Maoyuan; Liu, Gui-Bin; Guo, Hong; Yao, Yugui
2018-03-01
In first-principles calculations, hybrid functionals are often used to improve accuracy over local exchange-correlation functionals. A drawback is that evaluating a hybrid functional requires significantly more computing effort. When spin-orbit coupling (SOC) is taken into account, the non-collinear spin structure increases the computing effort by at least eight times. As a result, hybrid functional calculations with SOC are intractable in most cases. In this paper, we present an approximate solution to this problem by developing an efficient method based on a mixed linear combination of atomic orbitals (LCAO) scheme. We demonstrate the power of this method using several examples, and we show that the results compare very well with those of direct hybrid functional calculations with SOC, yet the method only requires a computing effort similar to that without SOC. The presented technique provides a good balance between computing efficiency and accuracy, and it can be extended to magnetic materials.
Partitioning in Avionics Architectures: Requirements, Mechanisms, and Assurance
NASA Technical Reports Server (NTRS)
Rushby, John
1999-01-01
Automated aircraft control has traditionally been divided into distinct "functions" that are implemented separately (e.g., autopilot, autothrottle, flight management); each function has its own fault-tolerant computer system, and dependencies among different functions are generally limited to the exchange of sensor and control data. A by-product of this "federated" architecture is that faults are strongly contained within the computer system of the function where they occur and cannot readily propagate to affect the operation of other functions. More modern avionics architectures contemplate supporting multiple functions on a single, shared, fault-tolerant computer system where natural fault containment boundaries are less sharply defined. Partitioning uses appropriate hardware and software mechanisms to restore strong fault containment to such integrated architectures. This report examines the requirements for partitioning, mechanisms for their realization, and issues in providing assurance for partitioning. Because partitioning shares some concerns with computer security, security models are reviewed and compared with the concerns of partitioning.
Effects of Computer-Based Training on Procedural Modifications to Standard Functional Analyses
ERIC Educational Resources Information Center
Schnell, Lauren K.; Sidener, Tina M.; DeBar, Ruth M.; Vladescu, Jason C.; Kahng, SungWoo
2018-01-01
Few studies have evaluated methods for training decision-making when functional analysis data are undifferentiated. The current study evaluated computer-based training to teach 20 graduate students to arrange functional analysis conditions, analyze functional analysis data, and implement procedural modifications. Participants were exposed to…
Mirror neurons and imitation: a computationally guided review.
Oztop, Erhan; Kawato, Mitsuo; Arbib, Michael
2006-04-01
Neurophysiology reveals the properties of individual mirror neurons in the macaque while brain imaging reveals the presence of 'mirror systems' (not individual neurons) in the human. Current conceptual models attribute high level functions such as action understanding, imitation, and language to mirror neurons. However, only the first of these three functions is well-developed in monkeys. We thus distinguish current opinions (conceptual models) on mirror neuron function from more detailed computational models. We assess the strengths and weaknesses of current computational models in addressing the data and speculations on mirror neurons (macaque) and mirror systems (human). In particular, our mirror neuron system (MNS), mental state inference (MSI) and modular selection and identification for control (MOSAIC) models are analyzed in more detail. Conceptual models often overlook the computational requirements for posited functions, while too many computational models adopt the erroneous hypothesis that mirror neurons are interchangeable with imitation ability. Our meta-analysis underlines the gap between conceptual and computational models and points out the research effort required from both sides to reduce this gap.
Computational Study of a Functionally Graded Ceramic-Metallic Armor
2006-12-15
UNCLAS - Dist. A - Approved for public release. Computational Study of a Functionally Graded Ceramic-Metallic Armor. Douglas W. Templeton, Tara J. ... efficiency of a postulated FGM ceramic-metallic armor system composed of aluminum nitride (AlN) and aluminum. The study had two primary ...
Bischoff, Florian A; Harrison, Robert J; Valeev, Edward F
2012-09-14
We present an approach to compute accurate correlation energies for atoms and molecules using an adaptive discontinuous spectral-element multiresolution representation for the two-electron wave function. Because of the exponential storage complexity of the spectral-element representation with the number of dimensions, a brute-force computation of two-electron (six-dimensional) wave functions with high precision was not practical. To overcome the key storage bottlenecks we utilized (1) a low-rank tensor approximation (specifically, the singular value decomposition) to compress the wave function, and (2) explicitly correlated R12-type terms in the wave function to regularize the Coulomb electron-electron singularities of the Hamiltonian. All operations necessary to solve the Schrödinger equation were expressed so that the reconstruction of the full-rank form of the wave function is never necessary. Numerical performance of the method was highlighted by computing the first-order Møller-Plesset wave function of a helium atom. The computed second-order Møller-Plesset energy is precise to ~2 microhartrees, which is at the precision limit of the existing general atomic-orbital-based approaches. Our approach does not assume special geometric symmetries, hence application to molecules is straightforward.
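The low-rank compression step can be illustrated in miniature. The grid, kernel, and thresholds below are invented for illustration (the actual work uses an adaptive multiresolution basis in six dimensions, not a dense 2-D grid); the sketch shows how a truncated SVD stores a smooth two-variable function in far fewer numbers than the full grid.

```python
import numpy as np

# Sketch of SVD-based storage compression: a smooth "pair function"
# tabulated on an n-by-n grid is replaced by low-rank factors.

n = 200
r1 = np.linspace(0.1, 5.0, n)
f = np.exp(-np.add.outer(r1, r1))                        # separable part, rank 1
f += 0.1 * np.exp(-np.subtract.outer(r1, r1) ** 2)       # smooth correlation-like part

u, s, vt = np.linalg.svd(f)
rank = int(np.sum(s > 1e-8 * s[0]))     # numerical rank is tiny compared to n
f_lowrank = u[:, :rank] @ np.diag(s[:rank]) @ vt[:rank]

# Operations can now act on the factors; the full grid never needs rebuilding
# in practice -- it is reconstructed here only to check the error.
assert rank < n // 2
assert np.allclose(f, f_lowrank, atol=1e-5)
```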
Computer/gaming station use in youth: Correlations among use, addiction and functional impairment
Baer, Susan; Saran, Kelly; Green, David A
2012-01-01
OBJECTIVE: Computer/gaming station use is ubiquitous in the lives of youth today. Overuse is a concern, but it remains unclear whether problems arise from addictive patterns of use or simply excessive time spent on use. The goal of the present study was to evaluate computer/gaming station use in youth and to examine the relationship between amounts of use, addictive features of use and functional impairment. METHOD: A total of 110 subjects (11 to 17 years of age) from local schools participated. Time spent on television, video gaming and non-gaming recreational computer activities was measured. Addictive features of computer/gaming station use were ascertained, along with emotional/behavioural functioning. Multiple linear regressions were used to understand how youth functioning varied with time of use and addictive features of use. RESULTS: Mean (± SD) total screen time was 4.5±2.4 h/day. Addictive features of use were consistently correlated with functional impairment across multiple measures and informants, whereas time of use, after controlling for addiction, was not. CONCLUSIONS: Youth are spending many hours each day in front of screens. In the absence of addictive features of computer/gaming station use, time spent is not correlated with problems; however, youth with addictive features of use show evidence of poor emotional/behavioural functioning. PMID:24082802
NASA Astrophysics Data System (ADS)
Ribeiro, Mónica; Taborda, Rui; Lira, Cristina; Bizarro, Aurora; Oliveira, Anabela
2014-05-01
Headland sediment bypassing plays a major role in the definition of the coastal sedimentary budget and, consequently, in coastal management. This process is particularly important at headland-bay beaches on rocky coasts. However, headland-bay beach research usually focuses on beach rotation, since these beaches are generally regarded as closed systems. The sediment bypassing mechanisms have been extensively studied in the context of artificial structures (e.g. groins and jetties), but studies of natural headland sediment bypassing are scarce and usually applied to decadal time scales. This work aims to contribute to the understanding of headland sediment bypassing processes in non-artificial environments, taking as a case study a natural coastal stretch on the Portuguese west coast. The study is supported by the analysis of planform beach changes using Landsat satellite images (with an acquisition frequency of 16 days), complemented with field surveys with DGPS-RTK and ground-based photographic monitoring. The study area can be described as a cliffed rocky coast that accommodates a series of headland-bay beaches with different geometries: some are encased in the dependence of fluvial streams, while others correspond to a narrow and elongated thin sand strip that covers a rocky shore platform. This coast is generally characterized by a weak, but active, sediment supply and high levels of wave energy due to the exposure to swells generated in the North Atlantic. The long-term stability of the beaches, in conjunction with active sediment supply along the study area (from streams and cliff erosion) and a sink at the downdrift end of this coastal stretch (an active dune system), supports the existence of headland sediment bypassing. The analysis of planform beach changes shows a coherent signal in time, but with a range that depends on the orientation of the stretch in which each beach is included.
In general, beaches display a clockwise rotation during summer, related to the NW (less energetic) incident wave conditions. The persistence of these conditions induces an enlargement of the beach downdrift (southward) and eventually sediment bypassing. This process can result in a continuous inner bar along the headland coast, which migrates downdrift in the surf zone and welds to the downdrift beach. The counter-clockwise rotation observed in winter is more variable, in agreement with the less persistent W and SW incident wave conditions, suggesting that sediment bypassing occurs only southwards. The work was funded by FEDER funds through the Operational Programme for Competitiveness Factors (COMPETE) and FCT National Funds (Portuguese Foundation for Science and Technology) under the project Beach to Canyon Head Sedimentary Processes (PTDC/MAR/114674/2009). The first author benefits from a PhD grant funded by FCT (SFRH/BD/79126/2011).
The Association Between Computer Use and Cognition Across Adulthood: Use it so You Won't Lose it?
Tun, Patricia A.; Lachman, Margie E.
2012-01-01
Understanding the association between computer use and adult cognition has been limited until now by self-selected samples with restricted ranges of age and education. Here we studied effects of computer use in a large national sample (N=2671) of adults aged 32 to 84, assessing cognition with the Brief Test of Adult Cognition by Telephone (Tun & Lachman, 2005), and executive function with the Stop and Go Switch Task (Tun & Lachman, 2008). Frequency of computer activity was associated with cognitive performance after controlling for age, sex, education, and health status: that is, individuals who used the computer frequently scored significantly higher than those who seldom used the computer. Greater computer use was also associated with better executive function on a task-switching test, even after controlling for basic cognitive ability as well as demographic variables. These findings suggest that frequent computer activity is associated with good cognitive function, particularly executive control, across adulthood into old age, especially for those with lower intellectual ability. PMID:20677884
Computational models of basal-ganglia pathway functions: focus on functional neuroanatomy
Schroll, Henning; Hamker, Fred H.
2013-01-01
Over the past 15 years, computational models have had a considerable impact on basal-ganglia research. Most of these models implement multiple distinct basal-ganglia pathways and assume them to fulfill different functions. As there is now a multitude of different models, it has become complex to keep track of their various, sometimes just marginally different assumptions on pathway functions. Moreover, it has become a challenge to oversee to what extent individual assumptions are corroborated or challenged by empirical data. Focusing on computational, but also considering non-computational models, we review influential concepts of pathway functions and show to what extent they are compatible with or contradict each other. Moreover, we outline how empirical evidence favors or challenges specific model assumptions and propose experiments that allow testing assumptions against each other. PMID:24416002
NASA Technical Reports Server (NTRS)
Curran, R. T.
1971-01-01
A flight computer functional executive design for the reusable shuttle is presented. The design is given in the form of functional flowcharts and prose description. Techniques utilized in the regulation of process flow to accomplish activation, resource allocation, suspension, termination, and error masking based on process primitives are considered. Preliminary estimates of main storage utilization by the Executive are furnished. Conclusions and recommendations for timely, effective software-hardware integration in the reusable shuttle avionics system are proposed.
Laminar fMRI and computational theories of brain function.
Stephan, K E; Petzschner, F H; Kasper, L; Bayer, J; Wellstein, K V; Stefanics, G; Pruessmann, K P; Heinzle, J
2017-11-02
Recently developed methods for functional MRI at the resolution of cortical layers (laminar fMRI) offer a novel window into neurophysiological mechanisms of cortical activity. Beyond physiology, laminar fMRI also offers an unprecedented opportunity to test influential theories of brain function. Specifically, hierarchical Bayesian theories of brain function, such as predictive coding, assign specific computational roles to different cortical layers. Combined with computational models, laminar fMRI offers a unique opportunity to test these proposals noninvasively in humans. This review provides a brief overview of predictive coding and related hierarchical Bayesian theories, summarises their predictions with regard to layered cortical computations, examines how these predictions could be tested by laminar fMRI, and considers methodological challenges. We conclude by discussing the potential of laminar fMRI for clinically useful computational assays of layer-specific information processing. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Hepner, T. E.; Meyers, J. F. (Inventor)
1985-01-01
A laser velocimeter covariance processor which calculates the auto covariance and cross covariance functions for a turbulent flow field based on Poisson sampled measurements in time from a laser velocimeter is described. The device will process a block of data that is up to 4096 data points in length and return a 512 point covariance function with 48-bit resolution along with a 512 point histogram of the interarrival times which is used to normalize the covariance function. The device is designed to interface and be controlled by a minicomputer from which the data is received and the results returned. A typical 4096 point computation takes approximately 1.5 seconds to receive the data, compute the covariance function, and return the results to the computer.
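The normalization role of the interarrival-time histogram can be mimicked in software with the classic "slotting" estimator for randomly sampled signals. The sampling rate, slot width, and test signal below are invented for illustration; only the idea (bin lag products into fixed time slots, then divide by each slot's pair count) reflects the processor described above.

```python
import numpy as np

# Slotted autocovariance sketch for Poisson-sampled data: lag products are
# accumulated per time slot and normalized by the slot's pair count, the
# role played by the interarrival-time histogram in the hardware.

rng = np.random.default_rng(0)
t = np.cumsum(rng.exponential(scale=0.01, size=3000))   # Poisson sample times
u = np.sin(2 * np.pi * 1.0 * t) + 0.1 * rng.standard_normal(t.size)
u = u - u.mean()

def slotted_autocovariance(t, u, slot_width, n_slots):
    cov = np.zeros(n_slots)
    counts = np.zeros(n_slots)
    for i in range(len(t)):
        lags = t[i:] - t[i]                    # non-negative lags from sample i
        slots = (lags / slot_width).astype(int)
        ok = slots < n_slots
        np.add.at(cov, slots[ok], u[i] * u[i:][ok])   # accumulate lag products
        np.add.at(counts, slots[ok], 1)               # pair count per slot
    return cov / np.maximum(counts, 1)

acf = slotted_autocovariance(t, u, slot_width=0.05, n_slots=20)
assert acf[0] > 0        # zero-lag slot carries the (positive) variance
assert acf[10] < 0       # a 1 Hz signal anticorrelates at ~0.5 s lag
```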
Rybacka, Anna; Goździk-Spychalska, Joanna; Rybacki, Adam; Piorunek, Tomasz; Batura-Gabryel, Halina; Karmelita-Katulska, Katarzyna
2018-05-04
In cystic fibrosis, pulmonary function tests (PFTs) and computed tomography are used to assess lung function and structure, respectively. Although both techniques of assessment are congruent, there are lingering doubts about which PFT variables show the best congruence with computed tomography scoring. In this study we addressed the issue by reinvestigating the association between PFT variables and the score of changes seen in computed tomography scans in patients with cystic fibrosis with and without pulmonary exacerbation. This retrospective study comprised 40 patients in whom PFTs and computed tomography were performed no longer than 3 weeks apart. Images (inspiratory: 0.625 mm slice thickness, 0.625 mm interval; expiratory: 1.250 mm slice thickness, 10 mm interval) were evaluated with the Bhalla scoring system. The most frequent structural abnormalities found in scans were bronchiectases and peribronchial thickening. The strongest relationship was found between the Bhalla score and forced expiratory volume in 1 s (FEV1). The Bhalla score was also related to forced vital capacity (FVC), FEV1/FVC ratio, residual volume (RV), and RV/total lung capacity (TLC) ratio. We conclude that lung structural data obtained from the computed tomography examination are highly congruent with lung function data. Thus, computed tomography imaging may supersede functional assessment in cases of poor compliance with spirometry procedures in the elderly or children. Computed tomography also seems more sensitive than PFTs in the assessment of cystic fibrosis progression. Moreover, in early phases of cystic fibrosis, computed tomography, due to its excellent resolution, may be irreplaceable in monitoring pulmonary damage.
Performance of a computer-based assessment of cognitive function measures in two cohorts of seniors
USDA-ARS?s Scientific Manuscript database
Computer-administered assessment of cognitive function is being increasingly incorporated in clinical trials; however, its performance in these settings has not been systematically evaluated. The Seniors Health and Activity Research Program (SHARP) pilot trial (N=73) developed a computer-based tool f...
ERIC Educational Resources Information Center
Dikli, Semire
2006-01-01
The impacts of computers on writing have been widely studied for three decades. Even basic computer functions, e.g., word processing, have been of great assistance to writers in modifying their essays. The research on Automated Essay Scoring (AES) has revealed that computers have the capacity to function as a more effective cognitive tool (Attali,…
Song, Tianqi; Garg, Sudhanshu; Mokhtar, Reem; Bui, Hieu; Reif, John
2018-01-19
A main goal in DNA computing is to build DNA circuits to compute designated functions using a minimal number of DNA strands. Here, we propose a novel architecture to build compact DNA strand displacement circuits to compute a broad scope of functions in an analog fashion. A circuit by this architecture is composed of three autocatalytic amplifiers, and the amplifiers interact to perform computation. We show DNA circuits to compute functions sqrt(x), ln(x) and exp(x) for x in tunable ranges with simulation results. A key innovation in our architecture, inspired by Napier's use of logarithm transforms to compute square roots on a slide rule, is to make use of autocatalytic amplifiers to do logarithmic and exponential transforms in concentration and time. In particular, we convert from the input that is encoded by the initial concentration of the input DNA strand, to time, and then back again to the output encoded by the concentration of the output DNA strand at equilibrium. This combined use of strand-concentration and time encoding of computational values may have impact on other forms of molecular computation.
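The slide-rule analogy can be made concrete with a closed-form toy model (my own parametrization, not the paper's reaction network): an amplifier growing as y' = k·y from initial concentration x reaches a threshold C at time T = ln(C/x)/k, so the time encodes a logarithm of the input; letting a second species grow at rate k/2 for that duration re-exponentiates with the exponent halved, which is exactly the log, halve, exp recipe for a square root.

```python
import math

# Toy closed-form sketch of concentration -> time -> concentration encoding.
# All rates and thresholds are illustrative choices.

def time_to_threshold(x, k=1.0, threshold=1.0):
    """Concentration -> time: T such that x * exp(k*T) = threshold."""
    return math.log(threshold / x) / k

def gain_after(duration, rate):
    """Time -> concentration gain of a second amplifier: exp(rate * T)."""
    return math.exp(rate * duration)

def sqrt_via_time_encoding(x, k=1.0, threshold=1.0):
    """For 0 < x < threshold: growth at half rate for time T yields
    gain (threshold/x)**0.5, so threshold/gain = sqrt(threshold * x)."""
    t = time_to_threshold(x, k, threshold)
    return threshold / gain_after(t, k / 2.0)

# With threshold = 1 this computes sqrt(x):
assert abs(sqrt_via_time_encoding(0.25) - 0.5) < 1e-12
```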
Tempel, David G; Aspuru-Guzik, Alán
2012-01-01
We prove that the theorems of TDDFT can be extended to a class of qubit Hamiltonians that are universal for quantum computation. The theorems of TDDFT applied to universal Hamiltonians imply that single-qubit expectation values can be used as the basic variables in quantum computation and information theory, rather than wavefunctions. From a practical standpoint this opens the possibility of approximating observables of interest in quantum computations directly in terms of single-qubit quantities (i.e. as density functionals). Additionally, we also demonstrate that TDDFT provides an exact prescription for simulating universal Hamiltonians with other universal Hamiltonians that have different, and possibly easier-to-realize two-qubit interactions. This establishes the foundations of TDDFT for quantum computation and opens the possibility of developing density functionals for use in quantum algorithms.
f1: a code to compute Appell's F1 hypergeometric function
NASA Astrophysics Data System (ADS)
Colavecchia, F. D.; Gasaneo, G.
2004-02-01
In this work we present the FORTRAN code to compute the hypergeometric function F1(α, β1, β2, γ; x, y) of Appell. The program can compute the F1 function for real values of the variables {x, y} and complex values of the parameters {α, β1, β2, γ}. The code uses different strategies to calculate the function according to the ideas outlined in [F.D. Colavecchia et al., Comput. Phys. Comm. 138 (1) (2001) 29].
Program summary
Title of the program: f1
Catalogue identifier: ADSJ
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSJ
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing provisions: none
Computers: PC compatibles, SGI Origin2∗
Operating systems under which the program has been tested: Linux, IRIX
Programming language used: Fortran 90
Memory required to execute with typical data: 4 kbytes
No. of bits in a word: 32
No. of bytes in distributed program, including test data, etc.: 52 325
Distribution format: tar gzip file
External subprograms used: Numerical Recipes hypgeo [W.H. Press et al., Numerical Recipes in Fortran 77, Cambridge Univ. Press, 1996] or the chyp routine of R.C. Forrey [J. Comput. Phys. 137 (1997) 79]; rkf45 [L.F. Shampine and H.H. Watts, Rep. SAND76-0585, 1976]
Keywords: numerical methods, special functions, hypergeometric functions, Appell functions, Gauss function
Nature of the physical problem: Computing the Appell F1 function is relevant in atomic collisions and elementary-particle physics. It is usually the result of multidimensional integrals involving Coulomb continuum states.
Method of solution: The F1 function has a convergent-series definition for |x| < 1 and |y| < 1, and several analytic continuations for other regions of the variable space. The code tests the values of the variables and selects one of the preceding cases. In the convergence region the program uses the series definition near the origin of coordinates and a numerical integration of the third-order differential parametric equation for the F1 function. It also detects several special cases according to the values of the parameters.
Restrictions on the complexity of the problem: The code is restricted to real values of the variables {x, y}. Some parameter domains are also not covered; these usually involve differences between integer parameters that lead to negative integer arguments of Gamma functions.
Typical running time: Depends basically on the variables. The computation of Table 4 of [F.D. Colavecchia et al., Comput. Phys. Comm. 138 (1) (2001) 29] (64 functions) requires approximately 0.33 s on an Athlon 900 MHz processor.
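The convergent double series used inside |x| < 1, |y| < 1 can be sketched directly from the definition of F1. This is a naive summation for illustration only (the published f1 code adds analytic continuations and a differential-equation method); the parameter values are arbitrary, and the reduction F1(α, β1, β2, γ; x, x) = 2F1(α, β1+β2; γ; x) serves as a consistency check.

```python
import math

# Naive double-series evaluation of Appell's F1 in its convergence region.

def pochhammer(a, n):
    """Rising factorial (a)_n = a (a+1) ... (a+n-1)."""
    out = 1.0
    for i in range(n):
        out *= a + i
    return out

def appell_f1(a, b1, b2, c, x, y, terms=40):
    total = 0.0
    for m in range(terms):
        for n in range(terms):
            ratio = pochhammer(a, m + n) / pochhammer(c, m + n)
            total += (ratio * pochhammer(b1, m) / math.factorial(m)
                            * pochhammer(b2, n) / math.factorial(n)
                      * x ** m * y ** n)
    return total

def gauss_2f1(a, b, c, x, terms=40):
    """Gauss hypergeometric series, used only as a cross-check."""
    return sum(pochhammer(a, n) * pochhammer(b, n)
               / (pochhammer(c, n) * math.factorial(n)) * x ** n
               for n in range(terms))

# Reduction formula F1(a, b1, b2, c; x, x) = 2F1(a, b1 + b2; c; x):
lhs = appell_f1(0.5, 0.3, 0.7, 1.5, 0.2, 0.2)
rhs = gauss_2f1(0.5, 1.0, 1.5, 0.2)
assert abs(lhs - rhs) < 1e-10
```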
Properties of Neurons in External Globus Pallidus Can Support Optimal Action Selection
Bogacz, Rafal; Martin Moraud, Eduardo; Abdi, Azzedine; Magill, Peter J.; Baufreton, Jérôme
2016-01-01
The external globus pallidus (GPe) is a key nucleus within basal ganglia circuits that are thought to be involved in action selection. A class of computational models assumes that, during action selection, the basal ganglia compute for all actions available in a given context the probabilities that they should be selected. These models suggest that a network of GPe and subthalamic nucleus (STN) neurons computes the normalization term in Bayes’ equation. In order to perform such computation, the GPe needs to send feedback to the STN equal to a particular function of the activity of STN neurons. However, the complex form of this function makes it unlikely that individual GPe neurons, or even a single GPe cell type, could compute it. Here, we demonstrate how this function could be computed within a network containing two types of GABAergic GPe projection neuron, so-called ‘prototypic’ and ‘arkypallidal’ neurons, that have different response properties in vivo and distinct connections. We compare our model predictions with the experimentally-reported connectivity and input-output functions (f-I curves) of the two populations of GPe neurons. We show that, together, these dichotomous cell types fulfil the requirements necessary to compute the function needed for optimal action selection. We conclude that, by virtue of their distinct response properties and connectivities, a network of arkypallidal and prototypic GPe neurons comprises a neural substrate capable of supporting the computation of the posterior probabilities of actions. PMID:27389780
AN ULTRAVIOLET-VISIBLE SPECTROPHOTOMETER AUTOMATION SYSTEM. PART I: FUNCTIONAL SPECIFICATIONS
This document contains the project definition, the functional requirements, and the functional design for a proposed computer automation system for scanning spectrophotometers. The system will be implemented on a Data General computer using the BASIC language. The system is a rea...
[Series: Medical Applications of the PHITS Code (2): Acceleration by Parallel Computing].
Furuta, Takuya; Sato, Tatsuhiko
2015-01-01
Time-consuming Monte Carlo dose calculation has become feasible owing to the development of computer technology. However, this recent development is largely due to the emergence of multi-core high-performance computers, so parallel computing has become key to achieving good software performance. The Monte Carlo simulation code PHITS contains two parallel computing functions: distributed-memory parallelization using the message passing interface (MPI) protocol and shared-memory parallelization using open multi-processing (OpenMP) directives. Users can choose between the two functions according to their needs. This paper explains the two functions, with their advantages and disadvantages. Some test applications are also provided to show their performance on a typical multi-core high-performance workstation.
Extending compile-time reverse mode and exploiting partial separability in ADIFOR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bischof, C.H.; El-Khadiri, M.
1992-10-01
The numerical methods employed in the solution of many scientific computing problems require the computation of the gradient of a function f: R^n → R. ADIFOR is a source translator that, given a collection of subroutines to compute f, generates Fortran 77 code for computing the derivative of this function. Using the so-called torsion problem from the MINPACK-2 test collection as an example, this paper explores two issues in automatic differentiation: the efficient computation of derivatives for partially separable functions and the use of the compile-time reverse mode for the generation of derivatives. We show that orders of magnitude of improvement are possible when exploiting partial separability and maximizing use of the reverse mode.
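ADIFOR operates on Fortran source; as a toy illustration of why partial separability helps, here is a Python sketch for a function of the form f(x) = Σᵢ φ(xᵢ, xᵢ₊₁). Each element touches only two variables, so a hand-written "reverse sweep" per element is cheap, and the full gradient is assembled by accumulation. The element function φ is invented for the example.

```python
import math

def element(u, v):
    """One element function phi(u, v) of the partially separable sum."""
    return (u - v) ** 2 + math.sin(u * v)

def element_grad(u, v):
    """Hand-written reverse sweep for a single element: cheap because
    each element depends on only two variables."""
    c = math.cos(u * v)
    du = 2.0 * (u - v) + v * c
    dv = -2.0 * (u - v) + u * c
    return du, dv

def f_and_grad(x):
    """f(x) = sum_i phi(x_i, x_{i+1}) and its gradient, assembled by
    accumulating the small element gradients."""
    n = len(x)
    f, g = 0.0, [0.0] * n
    for i in range(n - 1):
        f += element(x[i], x[i + 1])
        du, dv = element_grad(x[i], x[i + 1])
        g[i] += du
        g[i + 1] += dv
    return f, g
```

The point mirrored from the abstract: the cost of the gradient stays proportional to the cost of f itself, independent of n, because the reverse mode is applied element by element rather than to the monolithic function.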
Synthetic analog computation in living cells.
Daniel, Ramiz; Rubens, Jacob R; Sarpeshkar, Rahul; Lu, Timothy K
2013-05-30
A central goal of synthetic biology is to achieve multi-signal integration and processing in living cells for diagnostic, therapeutic and biotechnology applications. Digital logic has been used to build small-scale circuits, but other frameworks may be needed for efficient computation in the resource-limited environments of cells. Here we demonstrate that synthetic analog gene circuits can be engineered to execute sophisticated computational functions in living cells using just three transcription factors. Such synthetic analog gene circuits exploit feedback to implement logarithmically linear sensing, addition, ratiometric and power-law computations. The circuits exhibit Weber's law behaviour as in natural biological systems, operate over a wide dynamic range of up to four orders of magnitude and can be designed to have tunable transfer functions. Our circuits can be composed to implement higher-order functions that are well described by both intricate biochemical models and simple mathematical functions. By exploiting analog building-block functions that are already naturally present in cells, this approach efficiently implements arithmetic operations and complex functions in the logarithmic domain. Such circuits may lead to new applications for synthetic biology and biotechnology that require complex computations with limited parts, need wide-dynamic-range biosensing or would benefit from the fine control of gene expression.
The multifacet graphically contracted function method. I. Formulation and implementation
NASA Astrophysics Data System (ADS)
Shepard, Ron; Gidofalvi, Gergely; Brozell, Scott R.
2014-08-01
The basic formulation for the multifacet generalization of the graphically contracted function (MFGCF) electronic structure method is presented. The analysis includes the discussion of linear dependency and redundancy of the arc factor parameters, the computation of reduced density matrices, Hamiltonian matrix construction, spin-density matrix construction, the computation of optimization gradients for single-state and state-averaged calculations, graphical wave function analysis, and the efficient computation of configuration state function and Slater determinant expansion coefficients. Timings are given for Hamiltonian matrix element and analytic optimization gradient computations for a range of model problems for full-CI Shavitt graphs, and it is observed that both the energy and the gradient computation scale as O(N^2n^4) for N electrons and n orbitals. The important arithmetic operations are within dense matrix-matrix product computational kernels, resulting in a computationally efficient procedure. An initial implementation of the method is used to present applications to several challenging chemical systems, including N2 dissociation, cubic H8 dissociation, the symmetric dissociation of H2O, and the insertion of Be into H2. The results are compared to the exact full-CI values and also to those of the previous single-facet GCF expansion form.
NASA Technical Reports Server (NTRS)
Ustinov, E. A.
1999-01-01
Evaluation of weighting functions in atmospheric remote sensing is usually the most computer-intensive part of inversion algorithms. We present an analytic approach to computing temperature and mixing ratio weighting functions that is based on our previous results, but the resulting expressions reuse intermediate variables that are generated in the computation of the observable radiances themselves. Upwelling radiances at a given level in the atmosphere and atmospheric transmittances from space to that level are combined with local values of the total absorption coefficient and its components due to absorption by the atmospheric constituents under study. This makes it possible to evaluate the temperature and mixing ratio weighting functions in parallel with the evaluation of radiances, which substantially decreases the computer time required for evaluation of weighting functions. Implications for the nadir and limb viewing geometries are discussed.
Holmes, Sean T; Iuliucci, Robbie J; Mueller, Karl T; Dybowski, Cecil
2015-11-10
Calculations of the principal components of magnetic-shielding tensors in crystalline solids require the inclusion of the effects of lattice structure on the local electronic environment to obtain significant agreement with experimental NMR measurements. We assess periodic (GIPAW) and GIAO/symmetry-adapted cluster (SAC) models for computing magnetic-shielding tensors by calculations on a test set containing 72 insulating molecular solids, with a total of 393 principal components of chemical-shift tensors from 13C, 15N, 19F, and 31P sites. When clusters are carefully designed to represent the local solid-state environment and when periodic calculations include sufficient variability, both methods predict magnetic-shielding tensors that agree well with experimental chemical-shift values, demonstrating the correspondence of the two computational techniques. At the basis-set limit, we find that the small differences in the computed values have no statistical significance for three of the four nuclides considered. Subsequently, we explore the effects of additional DFT methods available only with the GIAO/cluster approach, particularly the use of hybrid-GGA functionals, meta-GGA functionals, and hybrid meta-GGA functionals that demonstrate improved agreement in calculations on symmetry-adapted clusters. We demonstrate that meta-GGA functionals improve computed NMR parameters over those obtained by GGA functionals in all cases, and that hybrid functionals improve computed results over the respective pure DFT functional for all nuclides except 15N.
A Model for Minimizing Numeric Function Generator Complexity and Delay
2007-12-01
allow computation of difficult mathematical functions in less time and with less hardware than commonly employed methods. They compute piecewise... Programmable Gate Arrays (FPGAs). The algorithms and estimation techniques apply to various NFG architectures and mathematical functions. This thesis compares hardware utilization and propagation delay for various NFG architectures, mathematical functions, word widths, and segmentation methods
Interactive algebraic grid-generation technique
NASA Technical Reports Server (NTRS)
Smith, R. E.; Wiese, M. R.
1986-01-01
An algebraic grid generation technique and use of an associated interactive computer program are described. The technique, called the two boundary technique, is based on Hermite cubic interpolation between two fixed, nonintersecting boundaries. The boundaries are referred to as the bottom and top, and they are defined by two ordered sets of points. Left and right side boundaries which intersect the bottom and top boundaries may also be specified by two ordered sets of points. When side boundaries are specified, linear blending functions are used to conform interior interpolation to the side boundaries. Spacing between physical grid coordinates is determined as a function of boundary data and uniformly spaced computational coordinates. Control functions relating computational coordinates to parametric intermediate variables that affect the distance between grid points are embedded in the interpolation formulas. A versatile control function technique with smooth-cubic-spline functions is presented. The technique works best in an interactive graphics environment where computational displays and user responses are quickly exchanged. An interactive computer program based on the technique and called TBGG (two boundary grid generation) is also described.
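A minimal sketch of the two-boundary idea (not TBGG itself, and omitting its control functions): interior grid lines are generated from the Hermite cubic basis, blending between a bottom and a top boundary. As a simplifying assumption, the tangent at both ends is taken as the straight-line offset between the boundaries, which reduces the Hermite curve to linear blending; real two-boundary generators choose boundary tangents (e.g. normals) to control grid-line angles.

```python
def hermite_blend(t):
    """Cubic Hermite basis: position weights h00, h01 and tangent weights h10, h11."""
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00, h10, h01, h11

def two_boundary_grid(bottom, top, n_eta):
    """Interpolate interior grid lines between two fixed boundaries.
    bottom, top: equal-length lists of (x, y) points defining the boundaries.
    Simplifying assumption: the tangent at both ends is the boundary-to-boundary
    offset, so the interpolant degenerates to a straight line between
    corresponding boundary points."""
    grid = []
    for j in range(n_eta):
        t = j / (n_eta - 1)
        h00, h10, h01, h11 = hermite_blend(t)
        row = []
        for (xb, yb), (xt, yt) in zip(bottom, top):
            tx, ty = xt - xb, yt - yb  # shared tangent at both ends
            x = h00 * xb + h01 * xt + (h10 + h11) * tx
            y = h00 * yb + h01 * yt + (h10 + h11) * ty
            row.append((x, y))
        grid.append(row)
    return grid
```

With nontrivial tangents, the same basis bends the interior grid lines toward a desired departure angle at each boundary, which is the purpose of the embedded control functions described in the abstract.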
NASA Astrophysics Data System (ADS)
Chibani, Wael; Ren, Xinguo; Scheffler, Matthias; Rinke, Patrick
2016-04-01
We present an embedding scheme for periodic systems that facilitates the treatment of the physically important part (here a unit cell or a supercell) with advanced electronic structure methods that are computationally too expensive for periodic systems. The rest of the periodic system is treated with computationally less demanding approaches, e.g., Kohn-Sham density-functional theory, in a self-consistent manner. Our scheme is based on the concept of dynamical mean-field theory formulated in terms of Green's functions. Our real-space dynamical mean-field embedding scheme features two nested Dyson equations, one for the embedded cluster and another for the periodic surrounding. The total energy is computed from the resulting Green's functions. The performance of our scheme is demonstrated by treating the embedded region with hybrid functionals and many-body perturbation theory in the GW approach for simple bulk systems. The total energy and the density of states converge rapidly with respect to the computational parameters and approach their bulk limit with increasing cluster (i.e., computational supercell) size.
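As a scalar caricature of the Dyson equations mentioned above (not the authors' scheme, which involves matrix-valued Green's functions), one can solve G = G₀ + G₀ΣG by fixed-point iteration and compare against the closed form G = (G₀⁻¹ − Σ)⁻¹:

```python
def dyson_fixed_point(g0, sigma, tol=1e-12, max_iter=1000):
    """Solve the scalar Dyson equation G = G0 + G0*Sigma*G iteratively.
    Converges when |G0*Sigma| < 1; the closed-form solution is
    1 / (1/G0 - Sigma)."""
    g = g0
    for _ in range(max_iter):
        g_new = g0 + g0 * sigma * g
        if abs(g_new - g) < tol:
            return g_new
        g = g_new
    return g
```

In the embedding scheme the same self-consistency runs over two nested equations, with the cluster's self-energy feeding the periodic problem and vice versa; the scalar version just shows the fixed-point structure.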
Analysis of moisture advection during explosive cyclogenesis over North Atlantic Ocean
NASA Astrophysics Data System (ADS)
Ordóñez, Paulina; Liberato, Margarida L. R.; Pinto, Joaquim G.; Trigo, Ricardo M.
2013-04-01
The development of a mid-latitude cyclone may be strongly amplified by the presence of a very warm and moist air mass within its warm sector, through enhanced latent heat release. In this work, a Lagrangian approach is applied to examine the contribution of moisture advection to the deepening of cyclones over the North Atlantic Ocean. The warm sector is represented by a 5°x5° longitude/latitude moving box, comprising the centre of the cyclone and its south-eastern area, defined along the tracks of different cyclones computed at 6-hourly intervals. Using the Lagrangian particle model FLEXPART, we evaluated the freshwater flux (E - P) along 2-day back-trajectories of the particles residing in the total column over the defined boxes, for case studies occurring during winter months from 1980 to 2000. FLEXPART simulations were performed using one degree resolution and the 60 model vertical levels available in the ERA40 Reanalyses at 00, 06, 12 and 18 UTC for each case. Sensitivity studies were performed on the dimensions of the target area (the boxes representing the warm sector) and on its position relative to the cyclone centre. We have applied this methodology to several case studies of independent North Atlantic cyclones with notable characteristics (e.g. deepening rate, wind speed, surface damage). Results indicate that moisture transport is particularly relevant during the fast/explosive development stage of these extratropical cyclones. In particular, the advection of moist air from the subtropics towards the cyclone core is clearly associated with the warm conveyor belt of the cyclone. This methodology can be generalized to a much larger number of mid-latitude cyclones, providing a unique opportunity to analyze the moisture behavior associated with explosive development.
Acknowledgments: This work was partially supported by FEDER (Fundo Europeu de Desenvolvimento Regional) funds through the COMPETE (Programa Operacional Factores de Competitividade) Programme and by national funds through FCT (Fundação para a Ciência e a Tecnologia, Portugal) through project STORMEx FCOMP-01-0124-FEDER-019524 (PTDC/AAC-CLI/121339/2010).
On-line computer system for use with low- energy nuclear physics experiments is reported
NASA Technical Reports Server (NTRS)
Gemmell, D. S.
1969-01-01
Computer program handles data from low-energy nuclear physics experiments which utilize the ND-160 pulse-height analyzer and the PHYLIS computing system. The program allows experimenters to choose from about 50 different basic data-handling functions and to prescribe the order in which these functions will be performed.
A Systematic Approach for Understanding Slater-Gaussian Functions in Computational Chemistry
ERIC Educational Resources Information Center
Stewart, Brianna; Hylton, Derrick J.; Ravi, Natarajan
2013-01-01
A systematic way to understand the intricacies of quantum mechanical computations done by a software package known as "Gaussian" is undertaken via an undergraduate research project. These computations involve the evaluation of key parameters in a fitting procedure to express a Slater-type orbital (STO) function in terms of the linear…
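The fitting procedure alluded to above can be illustrated with a toy version (not what the Gaussian package does internally): solve a linear least-squares problem for the coefficients of two Gaussians, with fixed exponents, approximating the 1s Slater function e^(−r) on a radial grid. The exponents and grid are invented for the sketch; real STO-nG fits also optimize the exponents and use 3D-normalized primitives.

```python
import math

def fit_sto_2g(alphas, r_max=8.0, n=400):
    """Least-squares coefficients c_j so that sum_j c_j*exp(-a_j*r^2)
    approximates exp(-r) on a radial grid. Exponents are held fixed and
    the 2x2 normal equations A^T A c = A^T b are solved by Cramer's rule."""
    rs = [r_max * (i + 0.5) / n for i in range(n)]
    a1, a2 = alphas
    m11 = m12 = m22 = b1 = b2 = 0.0
    for r in rs:
        g1, g2 = math.exp(-a1 * r * r), math.exp(-a2 * r * r)
        y = math.exp(-r)
        m11 += g1 * g1; m12 += g1 * g2; m22 += g2 * g2
        b1 += g1 * y;  b2 += g2 * y
    det = m11 * m22 - m12 * m12
    c1 = (b1 * m22 - b2 * m12) / det
    c2 = (m11 * b2 - m12 * b1) / det
    return c1, c2
```

No linear combination of Gaussians reproduces the cusp of the Slater function at r = 0, which is why the quality of such fits, and the choice of exponents, is a standard teaching point.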
A Functional Specification for a Programming Language for Computer Aided Learning Applications.
ERIC Educational Resources Information Center
National Research Council of Canada, Ottawa (Ontario).
In 1972 there were at least six different course authoring languages in use in Canada with little exchange of course materials between Computer Assisted Learning (CAL) centers. In order to improve facilities for producing "transportable" computer based course materials, a working panel undertook the definition of functional requirements of a user…
Fair and Square Computation of Inverse "Z"-Transforms of Rational Functions
ERIC Educational Resources Information Center
Moreira, M. V.; Basilio, J. C.
2012-01-01
All methods presented in textbooks for computing inverse "Z"-transforms of rational functions have some limitation: 1) the direct division method does not, in general, provide enough information to derive an analytical expression for the time-domain sequence "x"("k") whose "Z"-transform is "X"("z"); 2) computation using the inversion integral…
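The direct division method mentioned as limitation 1) is easy to sketch: long division of the numerator by the denominator, both polynomials in z⁻¹, produces the sequence values x(0), x(1), ... one at a time. Exactly as the abstract notes, this yields numbers rather than an analytical expression for x(k). A minimal Python version for a causal rational X(z):

```python
def inverse_z_direct(b, a, n_terms):
    """Direct (long) division of X(z) = (b0 + b1*z^-1 + ...)/(a0 + a1*z^-1 + ...)
    into a power series in z^-1; returns x(0..n_terms-1) for a causal sequence."""
    b = list(b) + [0.0] * n_terms   # working numerator remainder
    x = []
    for k in range(n_terms):
        xk = b[k] / a[0]
        x.append(xk)
        # subtract xk * a(z^-1), shifted by k, from the remainder
        for j, aj in enumerate(a):
            if k + j < len(b):
                b[k + j] -= xk * aj
    return x
```

For X(z) = 1/(1 − 0.5z⁻¹) the division reproduces x(k) = (0.5)^k term by term, but the closed form itself has to come from another method, such as partial fractions or the inversion integral.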
Computational protein design-the next generation tool to expand synthetic biology applications.
Gainza-Cirauqui, Pablo; Correia, Bruno Emanuel
2018-05-02
One powerful approach to engineer synthetic biology pathways is the assembly of proteins sourced from one or more natural organisms. However, synthetic pathways often require custom functions or biophysical properties not displayed by natural proteins, limitations that could be overcome through modern protein engineering techniques. Structure-based computational protein design is a powerful tool to engineer new functional capabilities in proteins, and it is beginning to have a profound impact in synthetic biology. Here, we review efforts to increase the capabilities of synthetic biology using computational protein design. We focus primarily on computationally designed proteins not only validated in vitro, but also shown to modulate different activities in living cells. Efforts made to validate computational designs in cells can illustrate both the challenges and opportunities in the intersection of protein design and synthetic biology. We also highlight protein design approaches, which although not validated as conveyors of new cellular function in situ, may have rapid and innovative applications in synthetic biology. We foresee that in the near-future, computational protein design will vastly expand the functional capabilities of synthetic cells. Copyright © 2018. Published by Elsevier Ltd.
Aqababa, Heydar; Tabandeh, Mehrdad; Tabatabaei, Meisam; Hasheminejad, Meisam; Emadi, Masoomeh
2013-01-01
A computational approach was applied to screen functional monomers and polymerization solvents for the rational design of molecularly imprinted polymers (MIPs) as smart adsorbents for solid-phase extraction of clonazepam (CLO) from human serum. The computed binding energies of the complexes formed between the template and functional monomers were compared. The primary computational results were corrected by taking into account both the basis set superposition error (BSSE) and the effect of the polymerization solvent, using the counterpoise (CP) correction and the polarizable continuum model, respectively. Based on the theoretical calculations, trifluoromethyl acrylic acid (TFMAA) and acrylonitrile (ACN) were found to be the best and the worst functional monomers, respectively. To test the accuracy of the computational results, three MIPs were synthesized with different functional monomers and their Langmuir-Freundlich (LF) isotherms were studied. The experimental results confirmed the computational results and indicated that the MIP synthesized using TFMAA had the highest affinity for CLO in human serum despite the presence of a vast spectrum of ions. Copyright © 2012 Elsevier B.V. All rights reserved.
Influence of direct computer experience on older adults' attitudes toward computers.
Jay, G M; Willis, S L
1992-07-01
This research examined whether older adults' attitudes toward computers became more positive as a function of computer experience. The sample comprised 101 community-dwelling older adults aged 57 to 87. The intervention involved a 2-week computer training program in which subjects learned to use a desktop publishing software program. A multidimensional computer attitude measure was used to assess differential attitude change and maintenance of change following training. The results indicated that older adults' computer attitudes are modifiable and that direct computer experience is an effective means of change. Attitude change as a function of training was found for the attitude dimensions targeted by the intervention program: computer comfort and efficacy. In addition, maintenance of attitude change was established for at least two weeks following training.
Some system considerations in configuring a digital flight control - navigation system
NASA Technical Reports Server (NTRS)
Boone, J. H.; Flynn, G. R.
1976-01-01
A trade study was conducted with the objective of providing a technical guideline for selection of the most appropriate computer technology for the automatic flight control system of a civil subsonic jet transport. The trade study considers aspects of using either an analog, incremental type special purpose computer or a general purpose computer to perform critical autopilot computation functions. It also considers aspects of integrating noncritical autopilot and autothrottle modes into the computer performing the critical autoland functions, as compared to federating the noncritical modes into either a separate computer or an R-Nav computer. The study is accomplished by establishing the relative advantages and/or risks associated with each of the computer configurations.
NASA Technical Reports Server (NTRS)
Mielke, Roland V. (Inventor); Stoughton, John W. (Inventor)
1990-01-01
Computationally complex primitive operations of an algorithm are executed concurrently in a plurality of functional units under the control of an assignment manager. The algorithm is preferably defined as a computational marked graph containing data status edges (paths) corresponding to each of the data flow edges. The assignment manager assigns primitive operations to the functional units and monitors completion of the primitive operations to determine data availability using the computational marked graph of the algorithm. All data accessing of the primitive operations is performed by the functional units independently of the assignment manager.
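A toy sketch of the assignment-manager role described above (all names invented, and the hardware functional units replaced by direct function calls): a scheduler repeatedly dispatches any primitive operation whose input data are all available, which is the data-availability bookkeeping the marked graph encodes.

```python
def run_dataflow(ops, initial):
    """ops: name -> (input_names, output_name, fn).
    Repeatedly dispatch every op whose inputs are all available (the
    assignment manager's job); 'functional units' are simulated by calls."""
    data = dict(initial)
    pending = dict(ops)
    while pending:
        ready = [n for n, (ins, _, _) in pending.items()
                 if all(i in data for i in ins)]
        if not ready:
            raise RuntimeError("deadlock: no op has all inputs available")
        for name in ready:   # these could run concurrently on real units
            ins, out, fn = pending.pop(name)
            data[out] = fn(*(data[i] for i in ins))
    return data

# (a*b) + (c*d): the two multiplies are independent and become 'ready'
# together, so real functional units could execute them concurrently.
ops = {
    "m1": (("a", "b"), "p", lambda a, b: a * b),
    "m2": (("c", "d"), "q", lambda c, d: c * d),
    "add": (("p", "q"), "s", lambda p, q: p + q),
}
```

In the patented arrangement the manager only tracks token status on the data-status edges; the operand values themselves move between memory and the functional units without passing through the manager.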
Hasbrouck, W.P.
1983-01-01
Processing of data taken with the U.S. Geological Survey's coal-seismic system is done with a desktop, stand-alone computer. Programs for this computer are written in the extended BASIC language utilized by the Tektronix 4051 Graphic System. This report presents computer programs used to develop rms velocity functions and apply mute and normal moveout to a 12-trace seismogram.
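The report's programs are in Tektronix 4051 extended BASIC; purely to illustrate the normal-moveout step they apply, here is the underlying hyperbolic travel-time formula in Python (variable names invented):

```python
import math

def nmo_time(t0, offset, v_rms):
    """Hyperbolic moveout: reflection arrival time at a given
    source-receiver offset, for zero-offset time t0 and rms velocity v_rms."""
    return math.sqrt(t0 * t0 + (offset / v_rms) ** 2)

def nmo_correction(t0, offset, v_rms):
    """Time shift subtracted from the recorded trace to flatten the
    reflection hyperbola back to t0."""
    return nmo_time(t0, offset, v_rms) - t0
```

Applying this correction trace by trace across the 12-trace seismogram, using the stacking (rms) velocity function developed from the data, is what aligns reflections prior to stacking.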
A Modeling and Data Analysis of Laser Beam Propagation in the Maritime Domain
2015-05-18
approach to computing pdfs is the Kernel Density Method (Reference [9] has an introduction to the method), which we will apply to compute the pdf of our... The project has two parts: 1) we present a computational analysis of different probability density function approximation techniques; and 2) we introduce preliminary steps towards developing a
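A minimal Gaussian kernel density estimator of the kind referenced (a sketch, not the report's implementation; the bandwidth is simply passed in rather than selected by rule):

```python
import math

def gaussian_kde(samples, h):
    """Return a pdf estimate: the average of Gaussian bumps of
    bandwidth h centred on the samples."""
    n = len(samples)
    norm = 1.0 / (n * h * math.sqrt(2.0 * math.pi))
    def pdf(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples)
    return pdf
```

The estimate integrates to one by construction, and the bandwidth h controls the bias-variance trade-off that any comparison of pdf approximation techniques has to confront.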
Effects of computer-based training on procedural modifications to standard functional analyses.
Schnell, Lauren K; Sidener, Tina M; DeBar, Ruth M; Vladescu, Jason C; Kahng, SungWoo
2018-01-01
Few studies have evaluated methods for training decision-making when functional analysis data are undifferentiated. The current study evaluated computer-based training to teach 20 graduate students to arrange functional analysis conditions, analyze functional analysis data, and implement procedural modifications. Participants were exposed to training materials using interactive software during a 1-day session. Following the training, mean scores on the posttest, novel cases probe, and maintenance probe increased for all participants. These results replicate previous findings during a 1-day session and include a measure of participant acceptability of the training. Recommendations for future research on computer-based training and functional analysis are discussed. © 2017 Society for the Experimental Analysis of Behavior.
Dynamics and computation in functional shifts
NASA Astrophysics Data System (ADS)
Namikawa, Jun; Hashimoto, Takashi
2004-07-01
We introduce a new type of shift dynamics as an extended model of symbolic dynamics, and investigate the characteristics of shift spaces from the viewpoints of both dynamics and computation. This shift dynamics is called a functional shift, which is defined by a set of bi-infinite sequences of some functions on a set of symbols. To analyse the complexity of functional shifts, we measure them in terms of topological entropy, and locate their languages in the Chomsky hierarchy. Through this study, we argue that considering functional shifts from the viewpoints of both dynamics and computation gives us opposite results about the complexity of systems. We also describe a new class of shift spaces whose languages are not recursively enumerable.
A computer program for analyzing channel geometry
Regan, R.S.; Schaffranek, R.W.
1985-01-01
The Channel Geometry Analysis Program (CGAP) provides the capability to process, analyze, and format cross-sectional data for input to flow/transport simulation models or other computational programs. CGAP allows for a variety of cross-sectional data input formats through use of variable format specification. The program accepts data from various computer media and provides for modification of machine-stored parameter values. CGAP has been devised to provide a rapid and efficient means of computing and analyzing the physical properties of an open-channel reach defined by a sequence of cross sections. CGAP's 16 options provide a wide range of methods by which to analyze and depict a channel reach and its individual cross-sectional properties. The primary function of the program is to compute the area, width, wetted perimeter, and hydraulic radius of cross sections at successive increments of water surface elevation (stage) from data that consist of coordinate pairs of cross-channel distances and land surface or channel bottom elevations. Longitudinal rates-of-change of cross-sectional properties are also computed, as are the mean properties of a channel reach. Output products include tabular lists of cross-sectional area, channel width, wetted perimeter, hydraulic radius, average depth, and cross-sectional symmetry computed as functions of stage; plots of cross sections; plots of cross-sectional area and (or) channel width as functions of stage; tabular lists of cross-sectional area and channel width computed as functions of stage for subdivisions of a cross section; plots of cross sections in isometric projection; and plots of cross-sectional area at a fixed stage as a function of longitudinal distance along an open-channel reach. A Command Procedure Language program and Job Control Language procedure exist to facilitate program execution on the U.S. Geological Survey Prime and Amdahl computer systems, respectively. (Lantz-PTT)
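CGAP's primary computation is simple to sketch (this is an illustration, not CGAP itself): given a cross section as (distance, elevation) pairs and a stage, clip each segment to the submerged region and accumulate area, wetted perimeter, and top width, from which the hydraulic radius follows.

```python
import math

def hydraulic_properties(points, stage):
    """Area, wetted perimeter, top width, and hydraulic radius of the part
    of a cross section (list of (distance, elevation) pairs, ordered left
    to right) lying below a given water-surface elevation (stage)."""
    area = wp = width = 0.0
    for (x1, z1), (x2, z2) in zip(points, points[1:]):
        if z1 > stage and z2 > stage:
            continue                       # segment entirely dry
        if z1 > stage:                     # segment enters the water
            x1 = x1 + (x2 - x1) * (stage - z1) / (z2 - z1)
            z1 = stage
        elif z2 > stage:                   # segment leaves the water
            x2 = x1 + (x2 - x1) * (stage - z1) / (z2 - z1)
            z2 = stage
        area += (stage - 0.5 * (z1 + z2)) * (x2 - x1)   # trapezoid slice
        wp += math.hypot(x2 - x1, z2 - z1)
        width += x2 - x1
    return area, wp, width, (area / wp if wp > 0 else 0.0)
```

Repeating this at successive stage increments produces exactly the stage tables the abstract describes; the reach-averaged properties follow by combining sections along the channel.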
Statistical fingerprinting for malware detection and classification
Prowell, Stacy J.; Rathgeb, Christopher T.
2015-09-15
A system detects malware in a computing architecture with an unknown pedigree. The system includes a first computing device having a known pedigree and operating free of malware. The first computing device executes a series of instrumented functions that, when executed, provide a statistical baseline that is representative of the time it takes a known software application to run on a computing device having a known pedigree. A second computing device executes a second series of instrumented functions that, when executed, provides an actual time that is representative of the time the known software application runs on the second computing device. The system detects malware when there is a difference in execution times between the first and the second computing devices.
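The comparison at the heart of this scheme can be sketched as a threshold test against the clean baseline's statistics (an illustration with invented numbers and an invented k-sigma rule; the patent does not specify the decision criterion):

```python
def baseline_stats(samples):
    """Mean and standard deviation of baseline (known-clean) run times."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return mean, var ** 0.5

def looks_anomalous(baseline, observed, k=3.0):
    """Flag an execution time more than k standard deviations above the
    clean baseline: the cross-device comparison the system makes."""
    mean, std = baseline_stats(baseline)
    return observed > mean + k * std
```

The pure decision function keeps the statistics separate from measurement, so the same test applies whether the timings come from instrumented functions or from a log.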
Deep hierarchies in the primate visual cortex: what can we learn for computer vision?
Krüger, Norbert; Janssen, Peter; Kalkan, Sinan; Lappe, Markus; Leonardis, Ales; Piater, Justus; Rodríguez-Sánchez, Antonio J; Wiskott, Laurenz
2013-08-01
Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This paper reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer vision research. Organized for a computer vision audience, we present functional principles of the processing hierarchies present in the primate visual system considering recent discoveries in neurophysiology. The hierarchical processing in the primate visual system is characterized by a sequence of different levels of processing (on the order of 10) that constitute a deep hierarchy in contrast to the flat vision architectures predominantly used in today's mainstream computer vision. We hope that the functional description of the deep hierarchies realized in the primate visual system provides valuable insights for the design of computer vision algorithms, fostering increasingly productive interaction between biological and computer vision research.
Logical synchronization: how evidence and hypotheses steer atomic clocks
NASA Astrophysics Data System (ADS)
Myers, John M.; Madjid, F. Hadi
2014-05-01
A clock steps a computer through a cycle of phases. For the propagation of logical symbols from one computer to another, each computer must mesh its phases with arrivals of symbols from other computers. Even the best atomic clocks drift unforeseeably in frequency and phase; feedback steers them toward aiming points that depend on a chosen wave function and on hypotheses about signal propagation. A wave function, always under-determined by evidence, requires a guess. Guessed wave functions are coded into computers that steer atomic clocks in frequency and position—clocks that step computers through their phases of computations, as well as clocks, some on space vehicles, that supply evidence of the propagation of signals. Recognizing the dependence of the phasing of symbol arrivals on guesses about signal propagation elevates 'logical synchronization' from its practice in computer engineering to a discipline essential to physics. Within this discipline we begin to explore questions invisible under any concept of time that fails to acknowledge the unforeseeable. In particular, variation of spacetime curvature is shown to limit the bit rate of logical communication.
Automatic computation of transfer functions
Atcitty, Stanley; Watson, Luke Dale
2015-04-14
Technologies pertaining to the automatic computation of transfer functions for a physical system are described herein. The physical system is one of an electrical system, a mechanical system, an electromechanical system, an electrochemical system, or an electromagnetic system. A netlist in the form of a matrix comprises data that is indicative of elements in the physical system, values for the elements in the physical system, and structure of the physical system. Transfer functions for the physical system are computed based upon the netlist.
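As a toy illustration of netlist-driven transfer-function computation (not the patented system, whose netlist format and algorithm are not reproduced here), one can stamp R and C elements into a nodal admittance matrix, inject a unit current at the input node, and read off Vout/Vin at a complex frequency s. The netlist tuple format and the two-node limit are assumptions of this sketch.

```python
def solve2(a, b):
    """Solve a 2x2 complex linear system a @ v = b by Cramer's rule."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    v1 = (b[0] * a[1][1] - b[1] * a[0][1]) / det
    v2 = (a[0][0] * b[1] - a[1][0] * b[0]) / det
    return v1, v2

def transfer(netlist, s, in_node=1, out_node=2):
    """Assemble the 2-node nodal admittance matrix from a tiny netlist of
    ('R'|'C', value, node_a, node_b) entries (node 0 = ground), inject a
    unit current at the input node, and return V_out/V_in at frequency s."""
    y = [[0j, 0j], [0j, 0j]]
    for kind, val, na, nb in netlist:
        adm = 1.0 / val if kind == "R" else s * val
        for n in (na, nb):          # diagonal (self-admittance) stamps
            if n:
                y[n - 1][n - 1] += adm
        if na and nb:               # off-diagonal (mutual) stamps
            y[na - 1][nb - 1] -= adm
            y[nb - 1][na - 1] -= adm
    i = [0j, 0j]
    i[in_node - 1] = 1.0 + 0j
    v = solve2(y, i)
    return v[out_node - 1] / v[in_node - 1]

# RC low-pass: R from node 1 to 2, C from node 2 to ground
rc_net = [("R", 1000.0, 1, 2), ("C", 1e-6, 2, 0)]
```

For this RC divider the ratio reduces to 1/(1 + sRC), so the magnitude at the corner frequency ω = 1/RC is 1/√2, a handy sanity check for any such tool.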
Jensen-Bregman LogDet Divergence for Efficient Similarity Computations on Positive Definite Tensors
2012-05-02
function of Legendre-type on int(domS) [29]. From (7) the following properties of dφ(x, y) are apparent: strict convexity in x; asymmetry; non... tensor imaging. An important task in all of these applications is to compute the distance between covariance matrices using a (dis)similarity function, for which the natural
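The divergence named in the title has a compact closed form for symmetric positive-definite matrices X and Y, namely log det((X+Y)/2) − ½ log det(XY); it is symmetric in its arguments and vanishes iff X = Y. A minimal sketch for the 2x2 case, kept dependency-free (a production version would use a stable log-determinant such as a Cholesky factorization):

```python
import math

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul2(a, b):
    return [[a[0][0] * b[0][0] + a[0][1] * b[1][0],
             a[0][0] * b[0][1] + a[0][1] * b[1][1]],
            [a[1][0] * b[0][0] + a[1][1] * b[1][0],
             a[1][0] * b[0][1] + a[1][1] * b[1][1]]]

def jbld(x, y):
    """Jensen-Bregman LogDet divergence between two SPD matrices:
    log det((X+Y)/2) - (1/2) log det(X @ Y)."""
    mid = [[(x[i][j] + y[i][j]) / 2.0 for j in range(2)] for i in range(2)]
    return math.log(det2(mid)) - 0.5 * math.log(det2(matmul2(x, y)))
```

Its appeal for covariance comparison is exactly what the abstract emphasizes: it needs only determinants, avoiding the eigendecompositions or matrix logarithms that the affine-invariant Riemannian metric requires.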
Linear Scaling Density Functional Calculations with Gaussian Orbitals
NASA Technical Reports Server (NTRS)
Scuseria, Gustavo E.
1999-01-01
Recent advances in linear scaling algorithms that circumvent the computational bottlenecks of large-scale electronic structure simulations make it possible to carry out density functional calculations with Gaussian orbitals on molecules containing more than 1000 atoms and 15000 basis functions using current workstations and personal computers. This paper discusses the recent theoretical developments that have led to these advances and demonstrates in a series of benchmark calculations the present capabilities of state-of-the-art computational quantum chemistry programs for the prediction of molecular structure and properties.
Failure detection in high-performance clusters and computers using chaotic map computations
Rao, Nageswara S.
2015-09-01
A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10.sup.18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bischof, C.H.; El-Khadiri, M.
1992-10-01
The numerical methods employed in the solution of many scientific computing problems require the computation of the gradient of a function f: R^n → R. ADIFOR is a source translator that, given a collection of subroutines to compute f, generates Fortran 77 code for computing the derivative of this function. Using the so-called torsion problem from the MINPACK-2 test collection as an example, this paper explores two issues in automatic differentiation: the efficient computation of derivatives for partially separable functions and the use of the compile-time reverse mode for the generation of derivatives. We show that orders of magnitude of improvement are possible when exploiting partial separability and maximizing use of the reverse mode.
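ADIFOR itself is a Fortran source translator and emphasizes the reverse mode; as a language-agnostic illustration of what automatic differentiation does, the forward mode can be sketched with dual numbers in Python (the `Dual` class here is an assumption for illustration, not ADIFOR's mechanism):

```python
from dataclasses import dataclass

@dataclass
class Dual:
    """A value and its derivative, propagated together (forward mode)."""
    val: float
    dot: float  # derivative with respect to the chosen input

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0.0)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0.0)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__

def f(x):
    # f(x) = x**3 + 2x, so f'(x) = 3x**2 + 2
    return x * x * x + 2 * x

x = Dual(2.0, 1.0)  # seed dx/dx = 1
y = f(x)            # y.val = f(2), y.dot = f'(2)
```

Unlike a source translator, this operator-overloading sketch computes derivatives at run time, but the chain-rule bookkeeping is the same idea.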
Optical transfer function of NTS-1 retroreflector array
NASA Technical Reports Server (NTRS)
Arnold, D. A.
1974-01-01
An optical transfer function was computed for the retroreflector array carried by the NTS-1 satellite. Range corrections are presented for extrapolating laser range measurements to the center of mass of the satellite. The gain function of the array was computed for use in estimating laser-echo signal strengths.
Propagation of Computational Uncertainty Using the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
DeLoach, Richard
2007-01-01
This paper describes the use of formally designed experiments to aid in the error analysis of a computational experiment. A method is described by which the underlying code is approximated with relatively low-order polynomial graduating functions represented by truncated Taylor series approximations to the true underlying response function. A resource-minimal approach is outlined by which such graduating functions can be estimated from a minimum number of case runs of the underlying computational code. Certain practical considerations are discussed, including ways and means of coping with high-order response functions. The distributional properties of prediction residuals are presented and discussed. A practical method is presented for quantifying that component of the prediction uncertainty of a computational code that can be attributed to imperfect knowledge of independent variable levels. This method is illustrated with a recent assessment of uncertainty in computational estimates of Space Shuttle thermal and structural reentry loads attributable to ice and foam debris impact on ascent.
Complete Insecurity of Quantum Protocols for Classical Two-Party Computation
NASA Astrophysics Data System (ADS)
Buhrman, Harry; Christandl, Matthias; Schaffner, Christian
2012-10-01
A fundamental task in modern cryptography is the joint computation of a function which has two inputs, one from Alice and one from Bob, such that neither of the two can learn more about the other’s input than what is implied by the value of the function. In this Letter, we show that any quantum protocol for the computation of a classical deterministic function that outputs the result to both parties (two-sided computation) and that is secure against a cheating Bob can be completely broken by a cheating Alice. Whereas it is known that quantum protocols for this task cannot be completely secure, our result implies that security for one party implies complete insecurity for the other. Our findings stand in stark contrast to recent protocols for weak coin tossing and highlight the limits of cryptography within quantum mechanics. We remark that our conclusions remain valid, even if security is only required to be approximate and if the function that is computed for Bob is different from that of Alice.
ERIC Educational Resources Information Center
Gillespie-Lynch, Kristen; Kapp, Steven K.; Shane-Simpson, Christina; Smith, David Shane; Hutman, Ted
2014-01-01
An online survey compared the perceived benefits and preferred functions of computer-mediated communication of participants with (N = 291) and without ASD (N = 311). Participants with autism spectrum disorder (ASD) perceived benefits of computer-mediated communication in terms of increased comprehension and control over communication, access to…
Electro-Optic Computing Architectures. Volume I
1998-02-01
The objective of the Electro-Optic Computing Architecture (EOCA) program was to develop multi-function electro-optic interfaces and optical ... interconnect units to enhance the performance of parallel processor systems and form the building blocks for future electro-optic computing architectures ... Specifically, three multi-function interface modules were targeted for development - an Electro-Optic Interface (EOI), an Optical Interconnection Unit (OIU
Computational strategies for the Riemann zeta function
NASA Astrophysics Data System (ADS)
Borwein, Jonathan M.; Bradley, David M.; Crandall, Richard E.
2000-09-01
We provide a compendium of evaluation methods for the Riemann zeta function, presenting formulae ranging from historical attempts to recently found convergent series to curious oddities old and new. We concentrate primarily on practical computational issues, such issues depending on the domain of the argument, the desired speed of computation, and the incidence of what we call "value recycling".
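Among the practical evaluation methods such a compendium covers, one of the simplest is the globally convergent alternating (Dirichlet eta) series; the sketch below is a naive, slowly converging version for real s > 0, s ≠ 1, not one of the accelerated schemes the paper develops:

```python
import math

def zeta(s, terms=100000):
    """Riemann zeta via the alternating (Dirichlet eta) series:
    zeta(s) = eta(s) / (1 - 2**(1-s)).  Simple but slowly convergent;
    the truncation error is bounded by the first omitted term."""
    eta = sum((-1) ** (n + 1) / n ** s for n in range(1, terms + 1))
    return eta / (1 - 2 ** (1 - s))

z2 = zeta(2)  # should approach pi**2 / 6
```

Practical codes of the kind surveyed here would add series acceleration (e.g. Euler transformation) and complex-argument support, trading formula complexity for far fewer terms.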
Computing the Power-Density Spectrum for an Engineering Model
NASA Technical Reports Server (NTRS)
Dunn, H. J.
1982-01-01
Computer program for calculating of power-density spectrum (PDS) from data base generated by Advanced Continuous Simulation Language (ACSL) uses algorithm that employs fast Fourier transform (FFT) to calculate PDS of variable. Accomplished by first estimating autocovariance function of variable and then taking FFT of smoothed autocovariance function to obtain PDS. Fast-Fourier-transform technique conserves computer resources.
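The procedure described (estimate the autocovariance, then take the FFT of the smoothed autocovariance) can be sketched in a few lines of NumPy; this is an illustrative Blackman-Tukey-style estimate, not the ACSL/FORTRAN program itself, and the Hann taper is an assumption:

```python
import numpy as np

def psd_via_autocovariance(x, fs):
    """Estimate a power-density spectrum as the FFT of the windowed
    sample autocovariance, mirroring the two-step procedure above."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    # biased sample autocovariance at lags 0..n-1
    acov = np.correlate(x, x, mode="full")[n - 1:] / n
    acov *= np.hanning(2 * n)[n:]          # smooth (taper) the lag window
    spec = np.abs(np.fft.rfft(acov))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spec

fs = 100.0
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 5.0 * t)            # a 5 Hz tone
freqs, spec = psd_via_autocovariance(x, fs)
peak = freqs[np.argmax(spec)]              # should sit near 5 Hz
```

The FFT keeps the cost at O(n log n), which is the resource saving the abstract refers to.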
Toward high-resolution computational design of helical membrane protein structure and function
Barth, Patrick; Senes, Alessandro
2016-01-01
The computational design of α-helical membrane proteins is still in its infancy but has made important progress. De novo design has produced stable, specific and active minimalistic oligomeric systems. Computational re-engineering can improve stability and modulate the function of natural membrane proteins. Currently, the major hurdle for the field is not computational, but the experimental characterization of the designs. The emergence of new structural methods for membrane proteins will accelerate progress. PMID:27273630
Research on OpenStack of open source cloud computing in colleges and universities’ computer room
NASA Astrophysics Data System (ADS)
Wang, Lei; Zhang, Dandan
2017-06-01
In recent years, cloud computing technology has developed rapidly, especially open source cloud computing. Open source cloud computing has attracted a large number of users through the advantages of openness and low cost, and has now reached large-scale promotion and application. In this paper, we first briefly introduce the main functions and architecture of the open source cloud computing tool OpenStack, and then discuss in depth the core problems of computer labs in colleges and universities. Based on this research, we describe the specific application and deployment of OpenStack in university computer rooms. The experimental results show that OpenStack can efficiently and conveniently deploy a cloud for a university computer room, with stable performance and good functional value.
Precollege Computer Literacy: A Personal Computing Approach. Second Edition.
ERIC Educational Resources Information Center
Moursund, David
Intended for elementary and secondary teachers and curriculum specialists, this booklet discusses and defines computer literacy as a functional knowledge of computers and their effects on students and the rest of society. It analyzes personal computing and the aspects of computers that have direct impact on students. Outlining computer-assisted…
The multifacet graphically contracted function method. I. Formulation and implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shepard, Ron; Brozell, Scott R.; Gidofalvi, Gergely
2014-08-14
The basic formulation for the multifacet generalization of the graphically contracted function (MFGCF) electronic structure method is presented. The analysis includes the discussion of linear dependency and redundancy of the arc factor parameters, the computation of reduced density matrices, Hamiltonian matrix construction, spin-density matrix construction, the computation of optimization gradients for single-state and state-averaged calculations, graphical wave function analysis, and the efficient computation of configuration state function and Slater determinant expansion coefficients. Timings are given for Hamiltonian matrix element and analytic optimization gradient computations for a range of model problems for full-CI Shavitt graphs, and it is observed that both the energy and the gradient computation scale as O(N^2 n^4) for N electrons and n orbitals. The important arithmetic operations are within dense matrix-matrix product computational kernels, resulting in a computationally efficient procedure. An initial implementation of the method is used to present applications to several challenging chemical systems, including N2 dissociation, cubic H8 dissociation, the symmetric dissociation of H2O, and the insertion of Be into H2. The results are compared to the exact full-CI values and also to those of the previous single-facet GCF expansion form.
Kiper, Pawel; Szczudlik, Andrzej; Venneri, Annalena; Stozek, Joanna; Luque-Moreno, Carlos; Opara, Jozef; Baba, Alfonc; Agostini, Michela; Turolla, Andrea
2016-10-15
Computational approaches for modelling the central nervous system (CNS) aim to develop theories on processes occurring in the brain that allow the transformation of all information needed for the execution of motor acts. Computational models have been proposed in several fields, to interpret not only the CNS functioning, but also its efferent behaviour. Computational model theories can provide insights into neuromuscular and brain function allowing us to reach a deeper understanding of neuroplasticity. Neuroplasticity is the process occurring in the CNS that is able to permanently change both structure and function due to interaction with the external environment. To understand such a complex process several paradigms related to motor learning and computational modeling have been put forward. These paradigms have been explained through several internal model concepts, and supported by neurophysiological and neuroimaging studies. Therefore, it has been possible to make theories about the basis of different learning paradigms according to known computational models. Here we review the computational models and motor learning paradigms used to describe the CNS and neuromuscular functions, as well as their role in the recovery process. These theories have the potential to provide a way to rigorously explain all the potential of CNS learning, providing a basis for future clinical studies. Copyright © 2016 Elsevier B.V. All rights reserved.
Applications of large-scale density functional theory in biology
NASA Astrophysics Data System (ADS)
Cole, Daniel J.; Hine, Nicholas D. M.
2016-10-01
Density functional theory (DFT) has become a routine tool for the computation of electronic structure in the physics, materials and chemistry fields. Yet the application of traditional DFT to problems in the biological sciences is hindered, to a large extent, by the unfavourable scaling of the computational effort with system size. Here, we review some of the major software and functionality advances that enable insightful electronic structure calculations to be performed on systems comprising many thousands of atoms. We describe some of the early applications of large-scale DFT to the computation of the electronic properties and structure of biomolecules, as well as to paradigmatic problems in enzymology, metalloproteins, photosynthesis and computer-aided drug design. With this review, we hope to demonstrate that first principles modelling of biological structure-function relationships are approaching a reality.
NASA Technical Reports Server (NTRS)
Huang, K.-N.
1977-01-01
A computational procedure for calculating correlated wave functions is proposed for three-particle systems interacting through Coulomb forces. Calculations are carried out for the muonic helium atom. Variational wave functions which explicitly contain interparticle coordinates are presented for the ground and excited states. General Hylleraas-type trial functions are used as the basis for the correlated wave functions. Excited-state energies of the muonic helium atom computed from 1- and 35-term wave functions are listed for four states.
cosmoabc: Likelihood-free inference for cosmology
NASA Astrophysics Data System (ADS)
Ishida, Emille E. O.; Vitenti, Sandro D. P.; Penna-Lima, Mariana; Trindade, Arlindo M.; Cisewski, Jessi M.; de Souza, Rafael; Cameron, Ewan; Busti, Vinicius C.
2015-05-01
Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogs. cosmoabc is a Python Approximate Bayesian Computation (ABC) sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code can be coupled to an external simulator to allow incorporation of arbitrary distance and prior functions. When coupled with the numcosmo library, it has been used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy clusters number counts without computing the likelihood function.
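The core ABC loop (draw from the prior, forward-simulate, accept if the synthetic summary is close to the observed one) can be sketched without any cosmology; this toy rejection sampler is illustrative only, not the Population Monte Carlo variant that cosmoabc implements:

```python
import random

def abc_rejection(observed, simulate, prior_draw, eps=0.05, n_draws=5000):
    """Basic rejection ABC: keep the parameter draws whose simulated
    summary statistic lands within eps of the observed summary."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_draw()
        if abs(simulate(theta) - observed) < eps:
            accepted.append(theta)
    return accepted

random.seed(0)

def simulate(mu, n=100):
    """Toy forward model: the mean of n Gaussian draws centred on mu."""
    return sum(random.gauss(mu, 1.0) for _ in range(n)) / n

observed_mean = 1.5  # pretend this summary came from real data
posterior = abc_rejection(observed_mean, simulate,
                          lambda: random.uniform(0.0, 3.0))
est = sum(posterior) / len(posterior)  # posterior mean estimate
```

The accepted draws approximate the posterior without ever evaluating a likelihood; the distance function and tolerance eps play the role of cosmoabc's user-supplied distance.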
Reliable computation from contextual correlations
NASA Astrophysics Data System (ADS)
Oestereich, André L.; Galvão, Ernesto F.
2017-12-01
An operational approach to the study of computation based on correlations considers black boxes with one-bit inputs and outputs, controlled by a limited classical computer capable only of performing sums modulo-two. In this setting, it was shown that noncontextual correlations do not provide any extra computational power, while contextual correlations were found to be necessary for the deterministic evaluation of nonlinear Boolean functions. Here we investigate the requirements for reliable computation in this setting; that is, the evaluation of any Boolean function with success probability bounded away from 1 /2 . We show that bipartite CHSH quantum correlations suffice for reliable computation. We also prove that an arbitrarily small violation of a multipartite Greenberger-Horne-Zeilinger noncontextuality inequality also suffices for reliable computation.
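Why a control computer limited to sums modulo two gains nothing from noncontextual correlations can be made concrete: an XOR-only computer evaluates exactly the GF(2)-affine functions of its inputs, and a short enumeration confirms that a nonlinear function such as AND is not among them (an illustrative check, not the paper's construction):

```python
from itertools import product

def affine_functions(n):
    """Truth tables of all GF(2)-affine functions
    a0 + a1*x1 + ... + an*xn (mod 2) on n bits -- everything a
    computer restricted to sums modulo two can evaluate."""
    funcs = set()
    for coeffs in product((0, 1), repeat=n + 1):
        a0, rest = coeffs[0], coeffs[1:]
        table = tuple(
            (a0 + sum(a * x for a, x in zip(rest, xs))) % 2
            for xs in product((0, 1), repeat=n)
        )
        funcs.add(table)
    return funcs

AND = (0, 0, 0, 1)  # truth table over inputs 00, 01, 10, 11
XOR = (0, 1, 1, 0)
lin2 = affine_functions(2)
```

Since AND lies outside this set, deterministic evaluation of nonlinear Boolean functions requires extra resources from the black boxes, which is where contextual correlations enter.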
Data Reduction Functions for the Langley 14- by 22-Foot Subsonic Tunnel
NASA Technical Reports Server (NTRS)
Boney, Andy D.
2014-01-01
The Langley 14- by 22-Foot Subsonic Tunnel's data reduction software utilizes six major functions to process the acquired data. These functions calculate engineering units, tunnel parameters, flowmeters, jet exhaust measurements, balance loads/model attitudes, and model/wall pressures. The input (required) variables, the output (computed) variables, and the equations and/or subfunction(s) associated with each major function are discussed.
Analysis and selection of optimal function implementations in massively parallel computer
Archer, Charles Jens [Rochester, MN; Peters, Amanda [Rochester, MN; Ratterman, Joseph D [Rochester, MN
2011-05-31
An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.
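A toy version of the idea (benchmark a set of implementations on representative inputs, then dispatch to the fastest observed) might look as follows; the function names are hypothetical and this ignores the patent's multi-dimensional input-parameter analysis and generated selection code:

```python
import timeit

def select_best(implementations, sample_inputs, repeats=3):
    """Benchmark each implementation of a function over representative
    inputs and return the one with the best observed running time."""
    def cost(fn):
        return min(
            timeit.timeit(lambda: [fn(x) for x in sample_inputs], number=1)
            for _ in range(repeats)
        )
    return min(implementations, key=cost)

# Two functionally equivalent implementations of the same polynomial.
def poly_naive(x):
    return 3 * x**3 + 2 * x**2 + x + 5

def poly_horner(x):
    return ((3 * x + 2) * x + 1) * x + 5

best = select_best([poly_naive, poly_horner], range(1000))
```

Whichever implementation wins, the dispatcher is functionally identical to the originals; a fuller version would record performance per input dimension and select per call, as the abstract describes.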
BLUES function method in computational physics
NASA Astrophysics Data System (ADS)
Indekeu, Joseph O.; Müller-Nedebock, Kristian K.
2018-04-01
We introduce a computational method in physics that goes ‘beyond linear use of equation superposition’ (BLUES). A BLUES function is defined as a solution of a nonlinear differential equation (DE) with a delta source that is at the same time a Green’s function for a related linear DE. For an arbitrary source, the BLUES function can be used to construct an exact solution to the nonlinear DE with a different, but related source. Alternatively, the BLUES function can be used to construct an approximate piecewise analytical solution to the nonlinear DE with an arbitrary source. For this alternative use the related linear DE need not be known. The method is illustrated in a few examples using analytical calculations and numerical computations. Areas for further applications are suggested.
Neural computation of arithmetic functions
NASA Technical Reports Server (NTRS)
Siu, Kai-Yeung; Bruck, Jehoshua
1990-01-01
An area of application of neural networks is considered. A neuron is modeled as a linear threshold gate, and the network architecture considered is the layered feedforward network. It is shown how common arithmetic functions such as multiplication and sorting can be efficiently computed in a shallow neural network. Some known results are improved by showing that the product of two n-bit numbers and sorting of n n-bit numbers can be computed by a polynomial-size neural network using only four and five unit delays, respectively. Moreover, the weights of each threshold element in the neural networks require O(log n)-bit (instead of n-bit) accuracy. These results can be extended to more complicated functions such as multiple products, division, rational functions, and approximation of analytic functions.
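The linear threshold gate model is easy to make concrete; the sketch below shows a single gate computing 3-input majority and a depth-2 threshold circuit for 3-bit parity (a standard textbook construction illustrating shallow threshold circuits, not the specific networks of the paper):

```python
def threshold_gate(weights, bias, inputs):
    """Linear threshold gate: outputs 1 iff the weighted sum of the
    inputs reaches the bias (threshold)."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= bias else 0

def majority3(x):
    # fires when at least 2 of the 3 inputs are 1
    return threshold_gate([1, 1, 1], 2, x)

def xor3(x):
    """3-bit parity via a depth-2 threshold circuit.  The hidden gates
    test sum(x) >= 1, 2, 3; the output gate recombines them so the
    result is 1 exactly when sum(x) is odd."""
    hidden = [threshold_gate([1, 1, 1], k, x) for k in (1, 2, 3)]
    return threshold_gate([1, -2, 2], 1, hidden)
```

No single threshold gate computes parity, so depth (unit delay) is the interesting resource, which is what the paper's four- and five-delay bounds for multiplication and sorting refer to.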
Software on diffractive optics and computer-generated holograms
NASA Astrophysics Data System (ADS)
Doskolovich, Leonid L.; Golub, Michael A.; Kazanskiy, Nikolay L.; Khramov, Alexander G.; Pavelyev, Vladimir S.; Seraphimovich, P. G.; Soifer, Victor A.; Volotovskiy, S. G.
1995-01-01
The `Quick-DOE' software for an IBM PC-compatible computer is aimed at calculating the masks of diffractive optical elements (DOEs) and computer generated holograms, computer simulation of DOEs, and for executing a number of auxiliary functions. In particular, among the auxiliary functions are the file format conversions, mask visualization on display from a file, implementation of fast Fourier transforms, and arranging and preparation of composite images for the output on a photoplotter. The software is aimed for use by opticians, DOE designers, and the programmers dealing with the development of the program for DOE computation.
ERIC Educational Resources Information Center
Tumthong, Suwut; Piriyasurawong, Pullop; Jeerangsuwan, Namon
2016-01-01
This research proposes a functional competency development model for academic personnel based on international professional qualification standards in computing field and examines the appropriateness of the model. Specifically, the model consists of three key components which are: 1) functional competency development model, 2) blended training…
Algebraic Functions, Computer Programming, and the Challenge of Transfer
ERIC Educational Resources Information Center
Schanzer, Emmanuel Tanenbaum
2015-01-01
Students' struggles with algebra are well documented. Prior to the introduction of functions, mathematics is typically focused on applying a set of arithmetic operations to compute an answer. The introduction of functions, however, marks the point at which mathematics begins to focus on building up abstractions as a way to solve complex problems.…
Optical transfer function of Starlette retroreflector array
NASA Technical Reports Server (NTRS)
Arnold, D. A.
1975-01-01
An optical transfer function was computed for the retroreflector array carried by the Starlette satellite (1975 10A). The range correction is given for extrapolating laser range measurements to the center of mass of the satellite. The gain function and active reflecting area of the array are computed for estimating laser-echo signal strengths.
Two-body Schrödinger wave functions in a plane-wave basis via separation of dimensions
NASA Astrophysics Data System (ADS)
Jerke, Jonathan; Poirier, Bill
2018-03-01
Using a combination of ideas, the ground and several excited electronic states of the helium atom and the hydrogen molecule are computed to chemical accuracy—i.e., to within 1-2 mhartree or better. The basic strategy is very different from the standard electronic structure approach in that the full two-electron six-dimensional (6D) problem is tackled directly, rather than starting from a single-electron Hartree-Fock approximation. Electron correlation is thus treated exactly, even though computational requirements remain modest. The method also allows for exact wave functions to be computed, as well as energy levels. From the full-dimensional 6D wave functions computed here, radial distribution functions and radial correlation functions are extracted—as well as a 2D probability density function exhibiting antisymmetry for a single Cartesian component. These calculations support a more recent interpretation of Hund's rule, which states that the lower energy of the higher spin-multiplicity states is actually due to reduced screening, rather than reduced electron-electron repulsion. Prospects for larger systems and/or electron dynamics applications appear promising.
1992-02-01
develops and maintains computer programs for the Department of the Navy. It provides life cycle support for over 50 computer programs installed at over ... the computer programs. Table 4 presents a list of possible product or output measures of functionality for ACDS Block 0 programs. Examples of output ... were identified as important "causes" of process performance. Functionality of the computer programs was the result or "effect" of the combination of
Software For Computer-Aided Design Of Control Systems
NASA Technical Reports Server (NTRS)
Wette, Matthew
1994-01-01
Computer Aided Engineering System (CAESY) software developed to provide means to evaluate methods for dealing with users' needs in computer-aided design of control systems. Interpreter program for performing engineering calculations. Incorporates features of both Ada and MATLAB. Designed to be flexible and powerful. Includes internally defined functions, procedures and provides for definition of functions and procedures by user. Written in C language.
NASA Technical Reports Server (NTRS)
Kuo, B. C.; Singh, G.
1974-01-01
The dynamics of the Large Space Telescope (LST) control system were studied in order to arrive at a simplified model for computer simulation without loss of accuracy. The frictional nonlinearity of the Control Moment Gyroscope (CMG) Control Loop was analyzed in a model to obtain data for the following: (1) a continuous describing function for the gimbal friction nonlinearity; (2) a describing function of the CMG nonlinearity using an analytical torque equation; and (3) the discrete describing function and function plots for CMG functional linearity. Preliminary computer simulations are shown for the simplified LST system, first without, and then with analytical torque expressions. Transfer functions of the sampled-data LST system are also described. A final computer simulation is presented which uses elements of the simplified sampled-data LST system with analytical CMG frictional torque expressions.
Pérès, Sabine; Felicori, Liza; Rialle, Stéphanie; Jobard, Elodie; Molina, Franck
2010-01-01
Motivation: In the available databases, biological processes are described from molecular and cellular points of view, but these descriptions are represented with text annotations that make it difficult to handle them for computation. Consequently, there is an obvious need for formal descriptions of biological processes. Results: We present a formalism that uses the BioΨ concepts to model biological processes from molecular details to networks. This computational approach, based on elementary bricks of actions, allows us to calculate on biological functions (e.g. process comparison, mapping structure–function relationships, etc.). We illustrate its application with two examples: the functional comparison of proteases and the functional description of the glycolysis network. This computational approach is compatible with detailed biological knowledge and can be applied to different kinds of systems of simulation. Availability: www.sysdiag.cnrs.fr/publications/supplementary-materials/BioPsi_Manager/ Contact: sabine.peres@sysdiag.cnrs.fr; franck.molina@sysdiag.cnrs.fr Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20448138
Positive Wigner functions render classical simulation of quantum computation efficient.
Mari, A; Eisert, J
2012-12-07
We show that quantum circuits where the initial state and all the following quantum operations can be represented by positive Wigner functions can be classically efficiently simulated. This is true both for continuous-variable as well as discrete variable systems in odd prime dimensions, two cases which will be treated on entirely the same footing. Noting the fact that Clifford and Gaussian operations preserve the positivity of the Wigner function, our result generalizes the Gottesman-Knill theorem. Our algorithm provides a way of sampling from the output distribution of a computation or a simulation, including the efficient sampling from an approximate output distribution in the case of sampling imperfections for initial states, gates, or measurements. In this sense, this work highlights the role of the positive Wigner function as separating classically efficiently simulable systems from those that are potentially universal for quantum computing and simulation, and it emphasizes the role of negativity of the Wigner function as a computational resource.
... tissues are working. Other imaging tests, such as magnetic resonance imaging (MRI) and computed tomography (CT) scans only reveal ... M, Hellwig S, Kloppel S, Weiller C. Functional neuroimaging: functional magnetic resonance imaging, positron emission tomography, and single-photon emission computed ...
Computing correct truncated excited state wavefunctions
NASA Astrophysics Data System (ADS)
Bacalis, N. C.; Xiong, Z.; Zang, J.; Karaoulanis, D.
2016-12-01
We demonstrate that, if a wave function's truncated expansion is small, then the standard excited-state computational method of optimizing one "root" of a secular equation may lead to an incorrect wave function (despite the correct energy, according to the theorem of Hylleraas, Undheim and McDonald), whereas our proposed method [J. Comput. Meth. Sci. Eng. 8, 277 (2008)] (independent of orthogonality to lower-lying approximants) leads to correct, reliable small truncated wave functions. The demonstration is done in He excited states, using truncated series expansions in Hylleraas coordinates, as well as standard configuration-interaction truncated expansions.
GPU accelerated dynamic functional connectivity analysis for functional MRI data.
Akgün, Devrim; Sakoğlu, Ünal; Esquivel, Johnny; Adinoff, Bryon; Mete, Mutlu
2015-07-01
Recent advances in multi-core processors and graphics card based computational technologies have paved the way for an improved and dynamic utilization of parallel computing techniques. Numerous applications have been implemented for the acceleration of computationally-intensive problems in various computational science fields including bioinformatics, in which big data problems are prevalent. In neuroimaging, dynamic functional connectivity (DFC) analysis is a computationally demanding method used to investigate dynamic functional interactions among different brain regions or networks identified with functional magnetic resonance imaging (fMRI) data. In this study, we implemented and analyzed a parallel DFC algorithm based on thread-based and block-based approaches. The thread-based approach was designed to parallelize DFC computations and was implemented in both Open Multi-Processing (OpenMP) and Compute Unified Device Architecture (CUDA) programming platforms. Another approach developed in this study to better utilize the CUDA architecture is the block-based approach, where parallelization involves smaller parts of fMRI time-courses obtained by sliding windows. Experimental results showed that the proposed parallel design solutions enabled by the GPUs significantly reduce the computation time for DFC analysis. The multicore implementation using OpenMP on an 8-core processor provides up to a 7.7× speed-up. The GPU implementation using CUDA yielded substantial accelerations ranging from 18.5× to 157× once the thread-based and block-based approaches were combined in the analysis. The proposed parallel programming solutions showed that multi-core processor and CUDA-supported GPU implementations accelerate DFC analyses significantly, making them more practical for multi-subject studies and more dynamic analyses. Copyright © 2015 Elsevier Ltd. All rights reserved.
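The sliding-window correlation at the heart of DFC analysis is straightforward to state in serial NumPy; the sketch below is the unaccelerated baseline the parallel implementations speed up, not the study's OpenMP/CUDA code, and the toy two-region signal is an assumption:

```python
import numpy as np

def sliding_window_fc(ts, win, step=1):
    """Dynamic functional connectivity: the correlation matrix of the
    region time-courses inside each sliding window.
    ts: (timepoints, regions) array -> (windows, regions, regions)."""
    t, _ = ts.shape
    mats = [np.corrcoef(ts[start:start + win].T)
            for start in range(0, t - win + 1, step)]
    return np.stack(mats)

# Toy data: two "regions" that are correlated early and
# anti-correlated late, so the connectivity is genuinely dynamic.
rng = np.random.default_rng(0)
t = np.arange(300)
a = np.sin(t / 10.0)
b = np.where(t < 150, a, -a) + 0.01 * rng.standard_normal(300)
fc = sliding_window_fc(np.column_stack([a, b]), win=50)
```

Each window's correlation is independent of the others, which is exactly why the computation parallelizes so well across threads or GPU blocks.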
Classical multiparty computation using quantum resources
NASA Astrophysics Data System (ADS)
Clementi, Marco; Pappa, Anna; Eckstein, Andreas; Walmsley, Ian A.; Kashefi, Elham; Barz, Stefanie
2017-12-01
In this work, we demonstrate a way to perform classical multiparty computation among parties with limited computational resources. Our method harnesses quantum resources to increase the computational power of the individual parties. We show how a set of clients restricted to linear classical processing are able to jointly compute a nonlinear multivariable function that lies beyond their individual capabilities. The clients are only allowed to perform classical XOR gates and single-qubit gates on quantum states. We also examine the type of security that can be achieved in this limited setting. Finally, we provide a proof-of-concept implementation using photonic qubits that allows four clients to compute a specific example of a multiparty function, the pairwise AND.
Experimental assessment of indoor radon and soil gas variability: the RADON project
NASA Astrophysics Data System (ADS)
Barbosa, S. M.; Pereira, A. J. S. C.; Neves, L. J. P. F.; Steinitz, G.; Zafrir, H.; Donner, R.; Woith, H.
2012-04-01
Radon is a radioactive noble gas naturally present in the environment, particularly in soils derived from rocks with high uranium content. Radon is formed by alpha decay from radium within solid mineral grains, but can migrate via diffusion and/or advection into the air space of soils, as well as into groundwater and the atmosphere. The exhalation of radon from the pore space of porous materials into the atmosphere of indoor environments is well known to cause adverse health effects due to the inhalation of radon's short-lived decay products. The danger to human health is particularly acute in the case of poorly ventilated dwellings located in geographical areas of high radon potential. The RADON project, funded by the Portuguese Science Foundation (FCT), aims to evaluate the temporal variability of radon in the soil and atmosphere and to examine the influence of meteorological conditions on radon concentration. For that purpose, an experimental monitoring station is being installed in an undisturbed dwelling located in a region of high radon potential near the old uranium mine of Urgeiriça (central Portugal). The rationale of the project, the set-up of the experimental radon monitoring station, and preliminary monitoring results will be presented.
Synthesis of N-graphene using microwave plasma-based methods
NASA Astrophysics Data System (ADS)
Dias, Ana; Tatarova, Elena; Henriques, Julio; Dias, Francisco; Felizardo, Edgar; Abrashev, Miroslav; Bundaleski, Nenad; Cvelbar, Uros
2016-09-01
In this work, a microwave atmospheric plasma driven by surface waves is used to produce free-standing graphene sheets (FSG). Carbonaceous precursors are injected into a microwave plasma environment, where decomposition processes take place. The transport of plasma-generated gas-phase carbon atoms and molecules into colder zones of the plasma reactor results in carbon nuclei formation. The main part of the solid carbon is gradually carried from the ``hot'' plasma zone into the outlet plasma stream, where carbon nanostructures assemble and grow. Subsequently, the graphene sheets were N-doped using a N2-Ar large-scale remote plasma treatment, which consists of placing the FSG on a substrate in a remote zone of the N2-Ar plasma. The samples were treated with different compositions of N2-Ar gas mixtures, while maintaining a chamber pressure of 1 mbar and an applied power of 600 W. The N-doped graphene sheets were characterized by scanning electron microscopy, high-resolution transmission electron microscopy, X-ray photoelectron spectroscopy, and Raman spectroscopy. Plasma characterization was also performed by optical emission spectroscopy. Work partially funded by Portuguese FCT - Fundacao para a Ciencia e a Tecnologia, under grant SFRH/BD/52413/2013 (PD-F APPLAuSE).
NASA Astrophysics Data System (ADS)
Shih, T. I.; Lin, Y. C.; Duh, J. G.; Hsu, Tom
2006-10-01
Lead-free solder bumps have been widely used in current flip-chip technology (FCT) due to environmental issues. Solder joints after temperature cycling tests were employed to investigate the interfacial reaction between the Ti/Ni/Cu under-bump metallization and Sn-Ag-Cu solders. The interfacial morphology and quantitative analysis of the intermetallic compounds (IMCs) were obtained by electron probe microanalysis (EPMA) and field emission electron probe microanalysis (FE-EPMA). Various types of IMCs such as (Cu1-x,Agx)6Sn5, (Cu1-y,Agy)3Sn, and (Ag1-z,Cuz)3Sn were observed. In addition to conventional I-V measurements by a special sample preparation technique, a scanning electron microscope (SEM) internal probing system was introduced to evaluate the electrical characteristics in the IMCs after various test conditions. The electrical data were correlated with the microstructural evolution due to the interfacial reaction between the solder and under-bump metallurgy (UBM). This study demonstrated the successful employment of an internal nanoprobing approach, which would help further understanding of the electrical behavior within an IMC layer in the solder/UBM assembly.
Kutlunina, N A; Polezhaeva, M A; Permiakova, M V
2013-04-01
In populations of four tulip species (Tulipa biebersteiniana, T. patens, T. scytica, and T. riparia) from the Volgograd, Kurgansk, Orenburg, and Chelyabinsk regions and the Republic of Bashkortostan, genetic diversity was studied by means of morphological and AFLP analysis. A morphological analysis of seven quantitative and two qualitative criteria was carried out. Three selective EcoRI/MseI primer pairs allowed 81 individuals from 13 tulip populations to be genotyped at 87 loci. A low level of variability at AFLP loci was revealed in all species: T. biebersteiniana (P = 20.41%, UH(e) = 0.075), T. patens (26.97%, 0.082), T. scytica (27.53%, 0.086), and T. riparia (27.72%, 0.096). According to the AMOVA results, the proportion of variability that characterizes the differences between the four tulip species was lower (F(CT) = 0.235) than that between populations within species (F(ST) = 0.439). Tulipa patens is well differentiated by means of Nei's distances, ordination, and analysis in the STRUCTURE program. Analysis in STRUCTURE revealed four genetic groups of tulips that do not fully correspond to the analyzed species. This confirms the presence of complicated genetic processes in the tulip populations.
Mercury anomaly, Deccan volcanism and the end-Cretaceous mass extinction
NASA Astrophysics Data System (ADS)
Font, Eric; Adatte, Thierry; Nobrega Sial, Alcides; Drude de Lacerda, Luiz; Keller, Gerta; Punekar, Jahnavi
2016-04-01
The contribution of Deccan Traps volcanism to the Cretaceous-Palaeogene (KPg) crisis is still a matter of debate. In particular, the global geochemical effects of Deccan volcanism on the marine sedimentary record are still poorly resolved. Here, we investigate the mercury (Hg) content of the Bidart (France) section, where an interval of low magnetic susceptibility (MS) located just below the KPg boundary was hypothesized to result from paleoenvironmental perturbations linked to paroxysmal Deccan phase-2. Results show mercury concentrations over two orders of magnitude higher from ~80 cm below to ~50 cm above the KPg boundary (max. 46.6 ppb), coincident with the low MS interval. The increase in Hg content shows no correlation with clay or total organic carbon content, suggesting that the mercury anomalies resulted from a higher input of atmospheric Hg species into the marine realm, rather than from organic matter scavenging and/or increased run-off. The Hg anomalies correlate with high shell fragmentation and dissolution effects in planktic foraminifera, suggesting correlative changes in marine biodiversity. This discovery represents an unprecedented piece of evidence of the nature and importance of the Deccan-related environmental changes at the onset of the KPg mass extinction. Funded by IDL (FCT UID/GEO/50019/2013)
NASA Astrophysics Data System (ADS)
Moreira, Joao; Zeng, Xiaohan; Amaral, Luis
2013-03-01
Assessing the career performance of scientists has become essential to modern science. Bibliometric indicators, like the h-index, are becoming more and more decisive in evaluating grants and approving publication of articles. However, many of the most-used indicators can be manipulated or inflated, for instance by publishing with very prolific researchers or by self-citing papers with a certain number of citations. Accounting for these factors is possible, but it introduces unwanted complexity that drives us further from the purpose of the indicator: to represent in a clear way the prestige and importance of a given scientist. Here we try to overcome this challenge. We used Thomson Reuters' Web of Science database and analyzed all the papers published until 2000 by ~1500 researchers in the top 30 departments of seven scientific fields. We find that over 97% of them have a citation distribution that is consistent with a discrete lognormal model. This suggests that our model can be used to accurately predict the performance of a researcher. Furthermore, this predictor does not depend on the individual number of publications and is not easily ``gamed''. The authors acknowledge support from FCT Portugal, and NSF grants
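For reference, the h-index criticized above is easy to state and compute: the largest h such that the researcher has h papers with at least h citations each. A minimal sketch:

```python
# h-index: largest h such that h papers each have at least h citations.
def h_index(citations):
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:       # the rank-th most cited paper still has >= rank cites
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
print(h_index([25, 8, 5, 3, 3]))  # 3
```

The second example illustrates one of the abstract's complaints: a single heavily cited paper (25 citations) barely moves the index, while a few strategic self-citations on mid-ranked papers would.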
Gallion, Jonathan; Koire, Amanda; Katsonis, Panagiotis; Schoenegge, Anne‐Marie; Bouvier, Michel
2017-01-01
Computational prediction yields efficient and scalable initial assessments of how variants of unknown significance may affect human health. However, when discrepancies between these predictions and direct experimental measurements of functional impact arise, inaccurate computational predictions are frequently assumed as the source. Here, we present a methodological analysis indicating that shortcomings in both computational and biological data can contribute to these disagreements. We demonstrate that incomplete assaying of multifunctional proteins can affect the strength of correlations between prediction and experiments; a variant's full impact on function is better quantified by considering multiple assays that probe an ensemble of protein functions. Additionally, many variant predictions are sensitive to protein alignment construction, and alignments can be customized to maximize the relevance of predictions to a specific experimental question. We conclude that inconsistencies between computation and experiment can often be attributed to the fact that they do not test identical hypotheses. Aligning the design of the computational input with the design of the experimental output will require cooperation between computational and biological scientists, but will also lead to improved estimations of computational prediction accuracy and a better understanding of the genotype-phenotype relationship. PMID:28230923
Gallion, Jonathan; Koire, Amanda; Katsonis, Panagiotis; Schoenegge, Anne-Marie; Bouvier, Michel; Lichtarge, Olivier
2017-05-01
Computational prediction yields efficient and scalable initial assessments of how variants of unknown significance may affect human health. However, when discrepancies between these predictions and direct experimental measurements of functional impact arise, inaccurate computational predictions are frequently assumed as the source. Here, we present a methodological analysis indicating that shortcomings in both computational and biological data can contribute to these disagreements. We demonstrate that incomplete assaying of multifunctional proteins can affect the strength of correlations between prediction and experiments; a variant's full impact on function is better quantified by considering multiple assays that probe an ensemble of protein functions. Additionally, many variant predictions are sensitive to protein alignment construction, and alignments can be customized to maximize the relevance of predictions to a specific experimental question. We conclude that inconsistencies between computation and experiment can often be attributed to the fact that they do not test identical hypotheses. Aligning the design of the computational input with the design of the experimental output will require cooperation between computational and biological scientists, but will also lead to improved estimations of computational prediction accuracy and a better understanding of the genotype-phenotype relationship. © 2017 The Authors. Human Mutation published by Wiley Periodicals, Inc.
Administering truncated receive functions in a parallel messaging interface
Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E
2014-12-09
Administering truncated receive functions in a parallel messaging interface (`PMI`) of a parallel computer comprising a plurality of compute nodes coupled for data communications through the PMI and through a data communications network, including: sending, through the PMI on a source compute node, a quantity of data from the source compute node to a destination compute node; specifying, by an application on the destination compute node, a portion of the quantity of data to be received by the application on the destination compute node and a portion of the quantity of data to be discarded; receiving, by the PMI on the destination compute node, all of the quantity of data; providing, by the PMI on the destination compute node to the application on the destination compute node, only the portion of the quantity of data to be received by the application; and discarding, by the PMI on the destination compute node, the portion of the quantity of data to be discarded.
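The truncated-receive semantics described in the patent abstract can be sketched in a few lines; this is a loose Python analogy, not the PMI implementation, and the `FakeChannel` class and function names are invented for illustration:

```python
# Hedged analogy for truncated receive: the messaging layer always drains
# the entire message, but hands the application only the prefix it asked
# for; the remainder is discarded by the layer, not left in the network.
def truncated_receive(channel, wanted):
    data = channel.recv_all()     # receive ALL of the quantity of data
    delivered = data[:wanted]     # portion the application asked to receive
    # data[wanted:] is the portion to be discarded by the messaging layer
    return delivered

class FakeChannel:
    """Stand-in for a compute-node communication endpoint."""
    def __init__(self, payload):
        self._payload = payload
    def recv_all(self):
        return self._payload

ch = FakeChannel(b"0123456789")
print(truncated_receive(ch, 4))   # b'0123'
```

Draining the full message before truncating matters: leaving unread bytes in the transport would corrupt subsequent message boundaries.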
Heterogeneous concurrent computing with exportable services
NASA Technical Reports Server (NTRS)
Sunderam, Vaidy
1995-01-01
Heterogeneous concurrent computing, based on the traditional process-oriented model, is approaching its functionality and performance limits. An alternative paradigm, based on the concept of services, supporting data-driven computation, and built on a lightweight process infrastructure, is proposed to enhance the functional capabilities and the operational efficiency of heterogeneous network-based concurrent computing. TPVM is an experimental prototype system supporting exportable services, thread-based computation, and remote memory operations that is built as an extension of and an enhancement to the PVM concurrent computing system. TPVM offers a significantly different computing paradigm for network-based computing, while maintaining a close resemblance to the conventional PVM model in the interest of compatibility and ease of transition. Preliminary experience has demonstrated that the TPVM framework presents a natural yet powerful concurrent programming interface, while being capable of delivering performance improvements of up to thirty percent.
Alberta Education's Clearinghouse: Functions and Findings.
ERIC Educational Resources Information Center
Wighton, David
1984-01-01
Discusses functions of the Alberta (Canada) Computer Technology Project's courseware clearinghouse, reviews findings on instructional software quality, identifies software development trends, and discusses need for support systems to facilitate the incorporation of computer assisted instruction in Canadian schools. (MBR)
Theoretical computer science and the natural sciences
NASA Astrophysics Data System (ADS)
Marchal, Bruno
2005-12-01
I present some fundamental theorems in computer science and illustrate their relevance in biology and physics. I do not assume prerequisites in mathematics or computer science beyond the set N of natural numbers, functions from N to N, the use of some notational conveniences to describe functions, and, at some point, a minimal amount of linear algebra and logic. I start with Cantor's transcendental proof, by diagonalization, of the non-enumerability of the collection of functions from the natural numbers to the natural numbers. I explain why this proof is not entirely convincing and show how, by restricting the notion of function in terms of discrete, well-defined processes, we are led to the non-algorithmic enumerability of the computable functions, but also, through Church's thesis, to the algorithmic enumerability of the partial computable functions. Such a notion of function constitutes, with respect to our purpose, a crucial generalization of that concept. This will make it easy to justify deep and astonishing (counter-intuitive) incompleteness results about computers and similar machines. The modified Cantor diagonalization will provide a theory of concrete self-reference, and I illustrate it by pointing toward an elementary theory of self-reproduction (in the amoeba's way) and cellular self-regeneration (in the flatworm Planaria's way). To make this easier, I introduce a very simple and powerful formal system known as the Schoenfinkel-Curry combinators. I will use the combinators to illustrate in a more concrete way the notions introduced above. The combinators, thanks to their low-level, fine-grained design, will also make it possible to give a rough but hopefully illuminating description of the main lessons gained by the careful observation of nature, and to describe some new relations which should exist between computer science, the science of life, and the science of inert matter, once some philosophical, if not theological, hypotheses are made in the cognitive sciences.
In the last section, I come back to self-reference and I give an exposition of its modal logics. This is used to show that theoretical computer science makes those “philosophical hypotheses” in theoretical cognitive science experimentally and mathematically testable.
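The Schoenfinkel-Curry combinators the abstract leans on can be sketched in a few lines, with Python lambdas standing in for curried combinator application (a minimal illustration, not the author's formal system):

```python
# The two basic combinators; every computable function can be built from
# S and K alone by application.
K = lambda x: lambda y: x                     # K x y = x
S = lambda f: lambda g: lambda x: f(x)(g(x))  # S f g x = (f x)(g x)

# The identity combinator is not primitive but derivable: I = S K K,
# since S K K x = (K x)(K x) = x.
I = S(K)(K)
print(I(42))        # 42
print(K("a")("b"))  # a
```

The fact that self-application is expressible in this tiny system is what powers the diagonalization and self-reference arguments the abstract describes.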
Electro-Optic Computing Architectures: Volume II. Components and System Design and Analysis
1998-02-01
The objective of the Electro-Optic Computing Architecture (EOCA) program was to develop multi-function electro-optic interfaces and optical interconnect units to enhance the performance of parallel processor systems and form the building blocks for future electro-optic computing architectures. Specifically, three multi-function interface modules were targeted for development: an Electro-Optic Interface (EOI), an Optical Interconnection Unit
ERIC Educational Resources Information Center
Harris, Michelle A.; Peck, Ronald F.; Colton, Shannon; Morris, Jennifer; Neto, Elias Chaibub; Kallio, Julie
2009-01-01
We conducted a controlled investigation to examine whether a combination of computer imagery and tactile tools helps introductory cell biology laboratory undergraduate students better learn about protein structure/function relationships as compared with computer imagery alone. In all five laboratory sections, students used the molecular imaging…
1980-01-01
(Fragmentary report-documentation record.) Keywords: on-the-job training; task proficiency; mission-oriented training; training management; aircraft armament systems. The report examines the feasibility of applying state-of-the-art computer technology to problems of training management, including measures used in rank-ordering functions, computer-supportable functions, and instructional management.
On introduction of artificial intelligence elements to heat power engineering
NASA Astrophysics Data System (ADS)
Dregalin, A. F.; Nazyrova, R. R.
1993-10-01
The basic problems of the 'thermodynamic intelligence' of personal computers are outlined. The concept of a thermodynamic intellect of personal computers is introduced for heat processes occurring in the engines of flying vehicles. In particular, the thermodynamic intellect of computers is determined by the possibility of deriving formal relationships between thermodynamic functions. In chemical thermodynamics, this possibility rests on the concept of a characteristic function.
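The "characteristic function" idea, deriving the other thermodynamic functions formally from one potential, is exactly the kind of symbolic manipulation a computer can do. A hedged illustration using a toy Helmholtz energy (temperature-only terms omitted; not the authors' system), with sympy doing the formal differentiation:

```python
# From a characteristic function F(T, V), the other thermodynamic
# functions follow by formal differentiation:
#   P = -dF/dV,  S = -dF/dT,  U = F + T*S.
import sympy as sp

T, V, R = sp.symbols("T V R", positive=True)
F = -R * T * sp.log(V)          # toy Helmholtz free energy (ideal gas, per mole)

P = -sp.diff(F, V)              # pressure
S = -sp.diff(F, T)              # entropy
U = F + T * S                   # internal energy

print(P)   # R*T/V -- the ideal-gas equation of state is recovered
print(S)   # R*log(V)
print(U)   # 0, since this toy F omits the T-only terms
```

Once the potential is represented symbolically, the whole web of thermodynamic relations becomes mechanical, which is the sense in which the abstract speaks of a "thermodynamic intellect".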
The evolvability of programmable hardware.
Raman, Karthik; Wagner, Andreas
2011-02-06
In biological systems, individual phenotypes are typically adopted by multiple genotypes. Examples include protein structure phenotypes, where each structure can be adopted by a myriad individual amino acid sequence genotypes. These genotypes form vast connected 'neutral networks' in genotype space. The size of such neutral networks endows biological systems not only with robustness to genetic change, but also with the ability to evolve a vast number of novel phenotypes that occur near any one neutral network. Whether technological systems can be designed to have similar properties is poorly understood. Here we ask this question for a class of programmable electronic circuits that compute digital logic functions. The functional flexibility of such circuits is important in many applications, including applications of evolutionary principles to circuit design. The functions they compute are at the heart of all digital computation. We explore a vast space of 10^45 logic circuits ('genotypes') and 10^19 logic functions ('phenotypes'). We demonstrate that circuits that compute the same logic function are connected in large neutral networks that span circuit space. Their robustness or fault-tolerance varies very widely. The vicinity of each neutral network contains circuits with a broad range of novel functions. Two circuits computing different functions can usually be converted into one another via few changes in their architecture. These observations show that properties important for the evolvability of biological systems exist in a commercially important class of electronic circuitry. They also point to generic ways to generate fault-tolerant, adaptable and evolvable electronic circuitry.
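The genotype-to-phenotype mapping for logic circuits can be made concrete at toy scale (this is an invented miniature model, not the authors' 10^45-circuit space): a "genotype" is a wiring of gates, a "phenotype" is the truth table it computes, and many genotypes collapse onto each phenotype.

```python
# Toy genotype/phenotype mapping for tiny logic circuits: enumerate all
# 2-gate circuits over 2 inputs and bin them by the truth table computed.
from itertools import product

GATES = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "NAND": lambda a, b: 1 - (a & b),
    "XOR":  lambda a, b: a ^ b,
}

def phenotype(circuit, n_inputs=2):
    """circuit: list of (gate_name, i, j) gates wired to earlier wires.
    Returns the truth table of the final wire."""
    table = []
    for inputs in product((0, 1), repeat=n_inputs):
        wires = list(inputs)
        for name, i, j in circuit:
            wires.append(GATES[name](wires[i], wires[j]))
        table.append(wires[-1])
    return tuple(table)

genotypes = {}
for g1 in product(GATES, [0, 1], [0, 1]):          # gate 1 reads the 2 inputs
    for g2 in product(GATES, [0, 1, 2], [0, 1, 2]): # gate 2 may read gate 1
        pheno = phenotype([g1, g2])
        genotypes.setdefault(pheno, []).append((g1, g2))

sizes = sorted(len(v) for v in genotypes.values())
print(len(genotypes), "phenotypes from", sum(sizes), "genotypes")
```

Even at this scale, the 576 wirings collapse onto far fewer truth tables, so each phenotype is realized by many genotypes; the paper's point is that at realistic scale these sets form connected neutral networks.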
The role of under-determined approximations in engineering and science application
NASA Technical Reports Server (NTRS)
Carpenter, William C.
1992-01-01
There is currently a great deal of interest in using response surfaces in the optimization of aircraft performance. The objective function and/or constraint equations involved in these optimization problems may come from numerous disciplines such as structures, aerodynamics, environmental engineering, etc. In each of these disciplines, the mathematical complexity of the governing equations usually dictates that numerical results be obtained from large computer programs such as a finite element method program. Thus, when performing optimization studies, response surfaces are a convenient way of transferring information from the various disciplines to the optimization algorithm, as opposed to bringing all the sundry computer programs together in a massive computer code. Response surfaces offer another advantage in the optimization of aircraft structures. A characteristic of these types of optimization problems is that evaluation of the objective function and response equations (referred to as a functional evaluation) can be very expensive in a computational sense. Because of the computational expense in obtaining functional evaluations, the present study was undertaken to investigate under-determined approximations. An under-determined approximation is one in which there are fewer training pairs (pieces of information about a function) than there are undetermined parameters (coefficients or weights) associated with the approximation. Both polynomial approximations and neural net approximations were examined. Three main example problems were investigated: (1) a function of one design variable was considered; (2) a function of two design variables was considered; and (3) a 35 bar truss with 4 design variables was considered.
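An under-determined polynomial approximation is easy to demonstrate (a generic sketch, not the study's actual test problems): with fewer training pairs than coefficients, infinitely many polynomials interpolate the data, and a least-squares solver picks the minimum-norm one.

```python
# Under-determined approximation: 6 polynomial coefficients fit to only
# 3 training pairs. lstsq returns the minimum-norm coefficient vector
# among the infinitely many exact interpolants.
import numpy as np

x = np.array([0.0, 0.5, 1.0])          # 3 training points
y = np.sin(np.pi * x)                  # expensive "functional evaluations"
A = np.vander(x, 6)                    # 3x6 design matrix: under-determined
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

print(np.allclose(A @ coef, y))        # True: training pairs reproduced exactly
```

Between training points the surrogate is only as good as the minimum-norm bias, which is why the study compares polynomial and neural-net forms of such approximations.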
The evolvability of programmable hardware
Raman, Karthik; Wagner, Andreas
2011-01-01
In biological systems, individual phenotypes are typically adopted by multiple genotypes. Examples include protein structure phenotypes, where each structure can be adopted by a myriad individual amino acid sequence genotypes. These genotypes form vast connected ‘neutral networks’ in genotype space. The size of such neutral networks endows biological systems not only with robustness to genetic change, but also with the ability to evolve a vast number of novel phenotypes that occur near any one neutral network. Whether technological systems can be designed to have similar properties is poorly understood. Here we ask this question for a class of programmable electronic circuits that compute digital logic functions. The functional flexibility of such circuits is important in many applications, including applications of evolutionary principles to circuit design. The functions they compute are at the heart of all digital computation. We explore a vast space of 10^45 logic circuits (‘genotypes’) and 10^19 logic functions (‘phenotypes’). We demonstrate that circuits that compute the same logic function are connected in large neutral networks that span circuit space. Their robustness or fault-tolerance varies very widely. The vicinity of each neutral network contains circuits with a broad range of novel functions. Two circuits computing different functions can usually be converted into one another via few changes in their architecture. These observations show that properties important for the evolvability of biological systems exist in a commercially important class of electronic circuitry. They also point to generic ways to generate fault-tolerant, adaptable and evolvable electronic circuitry. PMID:20534598
Methods for evaluating and ranking transportation energy conservation programs
NASA Astrophysics Data System (ADS)
Santone, L. C.
1981-04-01
The energy conservation programs are assessed in terms of petroleum savings, incremental costs to consumers, probability of technical and market success, and external impacts due to environmental, economic, and social factors. Three ranking functions and a policy matrix are used to evaluate the programs. The net present value measure, which computes the present worth of petroleum savings less the present worth of costs, is modified by dividing by the present value of DOE funding to obtain a net present value per program dollar. The comprehensive ranking function takes external impacts into account. Procedures are described for making computations of the ranking functions and the attributes that require computation. Computations are made for the electric vehicle, Stirling engine, gas turbine, and MPG mileage guide programs.
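The modified net present value measure described above can be sketched directly; the discount rate and cash flows below are invented for illustration, not taken from the report:

```python
# Net present value per program dollar:
#   (PV(savings) - PV(costs)) / PV(DOE funding)
def pv(cashflows, rate):
    """Present value of a list of yearly cashflows (year 0 first)."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(cashflows))

def npv_per_program_dollar(savings, costs, funding, rate=0.1):
    return (pv(savings, rate) - pv(costs, rate)) / pv(funding, rate)

# Illustrative numbers only (millions of dollars per year):
score = npv_per_program_dollar(
    savings=[0, 50, 80, 120], costs=[10, 20, 20, 20], funding=[30, 30, 0, 0])
print(round(score, 2))   # 2.48
```

Dividing by the present value of DOE funding converts an absolute NPV into a return per program dollar, which makes programs of very different sizes comparable in a ranking.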
Math and numeracy in young adults with spina bifida and hydrocephalus.
Dennis, Maureen; Barnes, Marcia
2002-01-01
The developmental stability of poor math skill was studied in 31 young adults with spina bifida and hydrocephalus (SBH), a neurodevelopmental disorder involving malformations of the brain and spinal cord. Longitudinally, individuals with poor math problem solving as children grew into adults with poor problem solving and limited functional numeracy. As a group, young adults with SBH had poor computation accuracy, computation speed, problem solving, and functional numeracy. Computation accuracy was related to a supporting cognitive system (working memory for numbers), and functional numeracy was related to one medical history variable (number of lifetime shunt revisions). Adult functional numeracy, but not functional literacy, was predictive of higher levels of social, personal, and community independence.
Theoretical study of the electric dipole moment function of the ClO molecule
NASA Technical Reports Server (NTRS)
Pettersson, L. G. M.; Langhoff, S. R.; Chong, D. P.
1986-01-01
The potential energy function and electric dipole moment function (EDMF) are computed for ClO X 2Pi using several different techniques to include electron correlation. The EDMF is used to compute Einstein coefficients, vibrational lifetimes, and dipole moments in higher vibrational levels. The band strength of the 1-0 fundamental transition is computed to be 12 ± 2 cm⁻² atm⁻¹, consistent with the value determined from infrared heterodyne spectroscopy. The theoretical methods used include SCF, CASSCF, multireference singles plus doubles configuration interaction (MRCI) and contracted CI, coupled pair functional (CPF), and a modified version of the CPF method. The results obtained using the different methods are critically compared.
A computer system for processing data from routine pulmonary function tests.
Pack, A I; McCusker, R; Moran, F
1977-01-01
In larger pulmonary function laboratories there is a need for computerised techniques of data processing. A flexible computer system, which is used routinely, is described. The system processes data from a relatively large range of tests. Two types of output are produced--one for laboratory purposes, and one for return to the referring physician. The system adds an automatic interpretative report for each set of results. In developing the interpretative system it has been necessary to utilise a number of arbitrary definitions. The present terminology for reporting pulmonary function tests has limitations. The computer interpretation system affords the opportunity to take account of known interaction between measurements of function and different pathological states. PMID:329462
Microelectromechanical reprogrammable logic device.
Hafiz, M A A; Kosuru, L; Younis, M I
2016-03-29
In modern computing, the Boolean logic operations are set by interconnect schemes between the transistors. As the miniaturization in the component level to enhance the computational power is rapidly approaching physical limits, alternative computing methods are vigorously pursued. One of the desired aspects in the future computing approaches is the provision for hardware reconfigurability at run time to allow enhanced functionality. Here we demonstrate a reprogrammable logic device based on the electrothermal frequency modulation scheme of a single microelectromechanical resonator, capable of performing all the fundamental 2-bit logic functions as well as n-bit logic operations. Logic functions are performed by actively tuning the linear resonance frequency of the resonator operated at room temperature and under modest vacuum conditions, reprogrammable by the a.c.-driving frequency. The device is fabricated using complementary metal oxide semiconductor compatible mass fabrication process, suitable for on-chip integration, and promises an alternative electromechanical computing scheme.
Microelectromechanical reprogrammable logic device
Hafiz, M. A. A.; Kosuru, L.; Younis, M. I.
2016-01-01
In modern computing, the Boolean logic operations are set by interconnect schemes between the transistors. As the miniaturization in the component level to enhance the computational power is rapidly approaching physical limits, alternative computing methods are vigorously pursued. One of the desired aspects in the future computing approaches is the provision for hardware reconfigurability at run time to allow enhanced functionality. Here we demonstrate a reprogrammable logic device based on the electrothermal frequency modulation scheme of a single microelectromechanical resonator, capable of performing all the fundamental 2-bit logic functions as well as n-bit logic operations. Logic functions are performed by actively tuning the linear resonance frequency of the resonator operated at room temperature and under modest vacuum conditions, reprogrammable by the a.c.-driving frequency. The device is fabricated using complementary metal oxide semiconductor compatible mass fabrication process, suitable for on-chip integration, and promises an alternative electromechanical computing scheme. PMID:27021295
Localized overlap algorithm for unexpanded dispersion energies
NASA Astrophysics Data System (ADS)
Rob, Fazle; Misquitta, Alston J.; Podeszwa, Rafał; Szalewicz, Krzysztof
2014-03-01
A first-principles-based, linearly scaling algorithm has been developed for calculations of dispersion energies from frequency-dependent density susceptibility (FDDS) functions, with charge-overlap effects taken into account. The transition densities in FDDSs are fitted by a set of auxiliary atom-centered functions. The terms in the dispersion energy expression involving products of such functions are computed using either the unexpanded (exact) formula or inexpensive asymptotic expansions, depending on the location of these functions relative to the dimer configuration. This approach leads to significant savings of computational resources. In particular, for a dimer consisting of two elongated monomers with 81 atoms each in a head-to-head configuration, the most favorable case for our algorithm, a 43-fold speedup has been achieved while the approximate dispersion energy differs by less than 1% from that computed using the standard unexpanded approach. In contrast, the dispersion energy computed from the distributed asymptotic expansion differs by dozens of percent in the van der Waals minimum region. A further increase in the size of each monomer would incur only a small additional cost, since all the additional terms would be computed from the asymptotic expansion.
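The exact-versus-asymptotic switching strategy can be sketched generically (this is not the paper's FDDS machinery; both functional forms below are invented stand-ins): each pairwise term uses the expensive exact formula only when the two centers are close, and a cheap long-range expansion otherwise.

```python
# Generic sketch of distance-based switching between an exact pairwise
# term and its asymptotic expansion; the functional forms are stand-ins.
import math

def exact_term(r):
    # stand-in for the expensive unexpanded (overlap-corrected) formula;
    # stays finite at short range
    return 1.0 / (r**6 + 1.0)

def asymptotic_term(r):
    # leading 1/R^6 dispersion term, valid only at long range
    return 1.0 / r**6

def dispersion(centers, cutoff=5.0):
    total = 0.0
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            r = math.dist(centers[i], centers[j])
            total += exact_term(r) if r < cutoff else asymptotic_term(r)
    return total

pts = [(0, 0, 0), (2, 0, 0), (9, 0, 0)]
print(round(dispersion(pts), 6))   # 0.015395
```

Because the number of close pairs grows only linearly with system size while distant pairs dominate the count, routing distant pairs to the cheap branch is what yields the near-linear scaling the abstract reports.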
Quantum computing: a prime modality in neurosurgery's future.
Lee, Brian; Liu, Charles Y; Apuzzo, Michael L J
2012-11-01
With each significant development in the field of neurosurgery, our dependence on computers, small and large, has continuously increased. From something as mundane as bipolar cautery to sophisticated intraoperative navigation with real-time magnetic resonance imaging-assisted surgical guidance, both technologies, however simple or complex, require computational processing power to function. The next frontier for neurosurgery involves developing a greater understanding of the brain and furthering our capabilities as surgeons to directly affect brain circuitry and function. This has come in the form of implantable devices that can electronically and nondestructively influence the cortex and nuclei with the purpose of restoring neuronal function and improving quality of life. We are now transitioning from devices that are turned on and left alone, such as vagus nerve stimulators and deep brain stimulators, to "smart" devices that can listen and react to the body as the situation may dictate. The development of quantum computers and their potential to be thousands, if not millions, of times faster than current "classical" computers, will significantly affect the neurosciences, especially the field of neurorehabilitation and neuromodulation. Quantum computers may advance our understanding of the neural code and, in turn, better develop and program implantable neural devices. When quantum computers reach the point where we can actually implant such devices in patients, the possibilities of what can be done to interface and restore neural function will be limitless. Copyright © 2012 Elsevier Inc. All rights reserved.
Simulation of synthetic discriminant function optical implementation
NASA Astrophysics Data System (ADS)
Riggins, J.; Butler, S.
1984-12-01
The optical implementation of geometrical shape and synthetic discriminant function matched filters is computer modeled. The filter implementation utilizes the Allebach-Keegan computer-generated hologram algorithm. Signal-to-noise and efficiency measurements were made on the resultant correlation planes.
Roper, Ian P E; Besley, Nicholas A
2016-03-21
The simulation of X-ray emission spectra of transition metal complexes with time-dependent density functional theory (TDDFT) is investigated. X-ray emission spectra can be computed within TDDFT in conjunction with the Tamm-Dancoff approximation by using a reference determinant with a vacancy in the relevant core orbital, and these calculations can be performed using the frozen orbital approximation or with the relaxation of the orbitals of the intermediate core-ionised state included. Both standard exchange-correlation functionals and functionals specifically designed for X-ray emission spectroscopy are studied, and it is shown that the computed spectral band profiles are sensitive to the exchange-correlation functional used. The computed intensities of the spectral bands can be rationalised by considering the metal p orbital character of the valence molecular orbitals. To compute X-ray emission spectra with the correct energy scale allowing a direct comparison with experiment requires the relaxation of the core-ionised state to be included and the use of specifically designed functionals with increased amounts of Hartree-Fock exchange in conjunction with high quality basis sets. A range-corrected functional with increased Hartree-Fock exchange in the short range provides transition energies close to experiment and spectral band profiles that have a similar accuracy to those from standard functionals.
ERIC Educational Resources Information Center
Classroom Computer Learning, 1984
1984-01-01
Presents activities that focus on computer memories, accuracy of computers, making music, and computer functions. Instructional strategies for the activities and program listings (when applicable) are included. (JN)
A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)
NASA Technical Reports Server (NTRS)
Straeter, T. A.; Markos, A. T.
1975-01-01
A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions ensuring the convergence of its iterates and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.
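As a minimal illustration of the vector-streaming idea (not the Jacobson-Oksman algorithm itself; the quadratic test function, step size, and iteration count are assumptions), many descent trajectories can be updated simultaneously so that each step is a single vectorized operation:

```python
import numpy as np

def parallel_gradient_descent(grad, X0, lr=0.1, iters=500):
    """Run many gradient-descent trajectories at once: the iterate array holds
    one starting point per row, so each update is a single vectorized step
    (the numpy analogue of vector-streaming hardware)."""
    X = np.array(X0, dtype=float)
    for _ in range(iters):
        X -= lr * grad(X)
    return X

# Toy problem: minimize f(x) = sum((x - 3)^2) from 64 starting points at once.
grad = lambda X: 2.0 * (X - 3.0)
starts = np.random.default_rng(0).uniform(-10, 10, size=(64, 5))
mins = parallel_gradient_descent(grad, starts)
```

All 64 trajectories converge to the same minimizer here; the point of the sketch is that the iteration count is independent of how many trajectories are streamed through each vector operation.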
NASA Technical Reports Server (NTRS)
King, H. F.; Komornicki, A.
1986-01-01
Formulas are presented relating the Taylor series expansion coefficients of three functions of several variables: the energy of the trial wave function (W), the energy computed using the optimized variational wave function (E), and the response function (lambda), under certain conditions. Partial derivatives of lambda are obtained through the solution of a recursive system of linear equations, and solution through order n yields derivatives of E through order 2n + 1, extending Pulay's application of Wigner's 2n + 1 rule to partial derivatives in coupled perturbation theory. An examination of numerical accuracy shows that the usual two-term second-derivative formula is less stable than an alternative four-term formula, and that previous claims that energy derivatives are stationary properties of the wave function are fallacious. The results have application to quantum theoretical methods for the computation of derivative properties such as infrared frequencies and intensities.
Classification Models for Pulmonary Function using Motion Analysis from Phone Sensors.
Cheng, Qian; Juen, Joshua; Bellam, Shashi; Fulara, Nicholas; Close, Deanna; Silverstein, Jonathan C; Schatz, Bruce
2016-01-01
Smartphones are ubiquitous, but it is unknown which physiological functions can be monitored at clinical quality. Pulmonary function is a standard measure of health status for cardiopulmonary patients. We have shown that phone sensors can accurately measure walking patterns. Here we show that improved classification models can accurately measure pulmonary function, with the sole inputs being sensor data from carried phones. Twenty-four cardiopulmonary patients performed six-minute walk tests during pulmonary rehabilitation at a regional hospital. They carried smartphones running custom software that recorded phone motion. For every patient, every ten-second interval was correctly classified: the trained model perfectly computed the GOLD level 1/2/3, a standard categorization of pulmonary function as measured by spirometry. These results are encouraging towards field trials with passive monitors always running in the background. We expect patients can simply carry their phones during daily living, while supporting automatic computation of pulmonary function for health monitoring.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Druskin, V.; Lee, Ping; Knizhnerman, L.
There is now growing interest in using Krylov subspace approximations to compute the actions of matrix functions. The main application of this approach is the solution of ODE systems obtained after discretization of partial differential equations by the method of lines. When computing the matrix inverse is relatively inexpensive, it is sometimes attractive to solve the ODE using extended Krylov subspaces, generated by the actions of both positive and negative matrix powers. Examples of such problems can be found frequently in computational electromagnetics.
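The standard (non-extended) Krylov approach can be sketched as follows: build an Arnoldi basis for span{b, Ab, A²b, ...} and evaluate the matrix function on the small projected matrix. This is a generic sketch for exp(A)b, not the authors' extended-subspace method, which would also include actions of negative powers A⁻¹b, A⁻²b, ...:

```python
import numpy as np
from scipy.linalg import expm

def krylov_expm_action(A, b, m=20):
    """Approximate exp(A) @ b from an m-dimensional Krylov subspace (Arnoldi)."""
    n = len(b)
    beta = np.linalg.norm(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:         # happy breakdown: subspace is invariant
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m)
    e1[0] = 1.0
    # exp of the small projected matrix replaces exp of the large one
    return beta * V[:, :m] @ (expm(H[:m, :m]) @ e1)
```

The expensive n-by-n matrix only ever appears through matrix-vector products; the dense `expm` is applied to the small m-by-m projected matrix.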
Cloud computing task scheduling strategy based on improved differential evolution algorithm
NASA Astrophysics Data System (ADS)
Ge, Junwei; He, Qian; Fang, Yiqiu
2017-04-01
In order to optimize cloud computing task scheduling, an improved differential evolution algorithm for cloud computing task scheduling is proposed. First, a cloud computing task scheduling model is established and a fitness function is derived from it. The improved differential evolution algorithm then optimizes this fitness function, using a generation-dependent dynamic selection strategy together with a dynamic mutation strategy to balance global and local search ability. Performance tests were carried out on the CloudSim simulation platform, and the experimental results show that the improved differential evolution algorithm can reduce task execution time and save user cost, achieving effective optimal scheduling of cloud computing tasks.
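A minimal sketch of the underlying idea, differential evolution applied to a toy task-to-VM assignment, might look as follows (the makespan fitness, the real-valued encoding with rounding, and all parameter values are assumptions; the paper's dynamic selection and mutation strategies are not reproduced):

```python
import numpy as np

def makespan(assign, task_len, vm_speed):
    """Fitness: completion time of the busiest VM (lower is better)."""
    loads = np.zeros(len(vm_speed))
    for t, v in enumerate(assign):
        loads[v] += task_len[t] / vm_speed[v]
    return loads.max()

def de_schedule(task_len, vm_speed, pop=30, gens=200, F=0.5, CR=0.9, seed=0):
    """Classic DE/rand/1/bin over a real-valued encoding of the schedule."""
    rng = np.random.default_rng(seed)
    n_tasks, n_vms = len(task_len), len(vm_speed)
    X = rng.uniform(0, n_vms, size=(pop, n_tasks))
    decode = lambda x: np.clip(x.astype(int), 0, n_vms - 1)
    fit = np.array([makespan(decode(x), task_len, vm_speed) for x in X])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
            mutant = X[a] + F * (X[b] - X[c])          # differential mutation
            cross = rng.random(n_tasks) < CR            # binomial crossover
            trial = np.clip(np.where(cross, mutant, X[i]), 0, n_vms - 1e-9)
            f = makespan(decode(trial), task_len, vm_speed)
            if f <= fit[i]:                             # greedy selection
                X[i], fit[i] = trial, f
    best = np.argmin(fit)
    return decode(X[best]), fit[best]
```

On four equal tasks and two equal-speed VMs this converges to the balanced two-and-two assignment with makespan 8.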
Plate-shaped transformation products in zirconium-base alloys
NASA Astrophysics Data System (ADS)
Banerjee, S.; Dey, G. K.; Srivastava, D.; Ranganathan, S.
1997-11-01
Plate-shaped products resulting from martensitic, diffusional, and mixed-mode transformations in zirconium-base alloys are compared in the present study. These alloys are particularly suitable for the comparison in view of the fact that the lattice correspondences between the parent β (bcc) and the product α (hcp) or γ-hydride (fct) phases are remarkably similar for the different types of transformations. Crystallographic features such as orientation relations, habit planes, and interface structures associated with these transformations have been compared, with a view toward examining whether the transformation mechanisms have characteristic imprints on these experimental observables. Martensites exhibiting dislocated lath, internally twinned plate, and self-accommodating three-plate cluster morphologies have been encountered in Zr-2.5Nb alloy. Habit planes corresponding to all these morphologies have been found to be consistent with the predictions based on the invariant plane strain (IPS) criterion. The different morphologies have been found to reflect the manner in which the neighboring martensite variants are assembled. Lattice-invariant shears (LISs) for all these cases have been identified to be either {101̄1}α⟨1̄123⟩α slip or twinning on {101̄1}α planes. Widmanstätten α precipitates, forming in a step-quenching treatment, have been shown to have a lath morphology, the α/β interface being decorated with a periodic array of ⟨c + a⟩ dislocations at a spacing of 8 to 10 nm. The line vectors of these dislocations are nearly parallel to the invariant lines. The α precipitates, forming in the retained β phase on aging, exhibit an internally twinned structure with a zigzag habit plane. Average habit planes for these morphologies have been found to lie near the {103}β-{113}β poles, which are close to the specific variant of the {112}β plane that transforms into a prismatic plane of the type {11̄00}α.
The crystallography of the formation of the γ-hydride phase (fct) from both the α and β phases is seen to match the IPS predictions. While the β-γ transformation can be treated approximately as a simple shear on the basal plane involving a change in the stacking sequence, the α-γ transformation can be conceptually broken into an α → β transformation following the Burgers correspondence and the simple β-γ shear process. The active eutectoid decomposition in the Zr-Cu system, β → α + β', has been described in terms of cooperative growth of the α phase from the β phase through the Burgers correspondence and of the partially ordered β' phase (structurally similar to the equilibrium Zr2Cu phase) through an ordering process. Similarities and differences in the crystallographic features of these transformations are discussed, and the importance of the invariant line vector in determining the geometry of the corresponding habit planes is pointed out.
Examining Functions in Mathematics and Science Using Computer Interfacing.
ERIC Educational Resources Information Center
Walton, Karen Doyle
1988-01-01
Introduces microcomputer interfacing as a method for explaining and demonstrating various aspects of the concept of function. Provides three experiments with illustrations and typical computer graphic displays: pendulum motion, pendulum study using two pendulums, and heat absorption and radiation. (YP)
GADRAS Detector Response Function.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Dean J.; Harding, Lee; Thoreson, Gregory G
2014-11-01
The Gamma Detector Response and Analysis Software (GADRAS) applies a Detector Response Function (DRF) to compute the output of gamma-ray and neutron detectors when they are exposed to radiation sources. The DRF is fundamental to the ability to perform forward calculations (i.e., computation of the response of a detector to a known source), as well as the ability to analyze spectra to deduce the types and quantities of radioactive material to which the detectors are exposed. This document describes how gamma-ray spectra are computed and the significance of response function parameters that define characteristics of particular detectors.
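The role of a DRF in forward calculations can be sketched as a response matrix applied to an incident spectrum. The sketch below models only Gaussian photopeak broadening (no Compton continuum, escape peaks, or detection efficiency), and every parameter value is assumed; it is not GADRAS's actual parameterization:

```python
import numpy as np

def response_matrix(energies, fwhm_frac=0.08):
    """Toy DRF: each incident line is smeared by a Gaussian whose FWHM is a
    fixed fraction of the energy (detector resolution); photopeak only."""
    E = np.asarray(energies, dtype=float)
    sigma = fwhm_frac * E / 2.355            # FWHM -> standard deviation
    R = np.exp(-0.5 * ((E[:, None] - E[None, :]) / sigma[None, :]) ** 2)
    return R / R.sum(axis=0, keepdims=True)  # each column conserves counts

energies = np.linspace(10.0, 3000.0, 600)    # keV bin centers (assumed grid)
source = np.zeros_like(energies)
source[np.argmin(np.abs(energies - 662.0))] = 1e4   # Cs-137-like 662 keV line
measured = response_matrix(energies) @ source        # forward calculation
```

Spectral analysis (deducing source quantities from a measured spectrum) is then the corresponding inverse problem, typically solved by fitting rather than by inverting the matrix directly.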
Methods for Functional Connectivity Analyses
2012-12-13
motor, or hand motor function (green, red, or blue shading, respectively). Thus, this work produced the first comprehensive analysis of ECoG...
Sensing and perception: Connectionist approaches to subcognitive computing
NASA Technical Reports Server (NTRS)
Skrrypek, J.
1987-01-01
New approaches to machine sensing and perception are presented. The motivation for cross-disciplinary studies of perception in terms of AI and the neurosciences is suggested. The question of computing-architecture granularity as related to the global/local computation underlying perceptual function is considered, and examples of two environments are given. Finally, examples of using one of the environments, UCLA PUNNS, to study neural architectures for visual function are presented.
Computer Applications in the Design Process.
ERIC Educational Resources Information Center
Winchip, Susan
Computer Assisted Design (CAD) and Computer Assisted Manufacturing (CAM) are emerging technologies now being used in home economics and interior design applications. A microcomputer in a computer network system is capable of executing computer graphic functions such as three-dimensional modeling, as well as utilizing office automation packages to…
The Student-Teacher-Computer Team: Focus on the Computer.
ERIC Educational Resources Information Center
Ontario Inst. for Studies in Education, Toronto.
Descriptions of essential computer elements, logic and programming techniques, and computer applications are provided in an introductory handbook for use by educators and students. Following a brief historical perspective, the organization of a computer system is schematically illustrated, and the functions of components are explained in non-technical…
A Survey of Computational Intelligence Techniques in Protein Function Prediction
Tiwari, Arvind Kumar; Srivastava, Rajeev
2014-01-01
In the recent past, there has been massive growth in the number of proteins of unknown function with the advancement of high-throughput microarray technologies. Protein function prediction is among the most challenging problems in bioinformatics. Homology-based approaches were traditionally used to predict protein function, but they fail when a new protein is dissimilar to previously characterized ones. Therefore, to alleviate the problems associated with traditional homology-based approaches, numerous computational intelligence techniques have been proposed in the recent past. This paper presents a state-of-the-art comprehensive review of various computational intelligence techniques for protein function prediction using sequence, structure, protein-protein interaction network, and gene expression data, applied in wide-ranging areas such as prediction of DNA and RNA binding sites, subcellular localization, enzyme functions, signal peptides, catalytic residues, nuclear/G-protein coupled receptors, membrane proteins, and pathway analysis from gene expression datasets. This paper also summarizes the results obtained by many researchers on these problems using computational intelligence techniques with appropriate datasets to improve prediction performance. The summary shows that ensemble classifiers and the integration of multiple heterogeneous data sources are useful for protein function prediction. PMID:25574395
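The usefulness of ensemble classifiers noted above can be sketched with a toy majority vote over base learners trained on different feature subsets, standing in for heterogeneous sequence-, structure-, and interaction-derived features (the synthetic data and nearest-centroid learners are illustrative assumptions, not the surveyed methods):

```python
import numpy as np

def train_centroid(X, y):
    """Nearest-centroid base learner: one mean vector per class."""
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def predict_centroid(model, X):
    classes, cents = model
    d = ((X[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
    return classes[d.argmin(axis=1)]

def ensemble_predict(models, subsets, X):
    """Majority vote over base learners, each seeing one feature subset."""
    votes = np.stack([predict_centroid(m, X[:, s]) for m, s in zip(models, subsets)])
    return np.array([np.bincount(col).argmax() for col in votes.T])

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 6))
y = (rng.random(400) < 0.5).astype(int)
X[y == 1] += 2.0                               # separate the two synthetic classes
subsets = [slice(0, 2), slice(2, 4), slice(4, 6)]  # stand-ins for data sources
models = [train_centroid(X[:, s], y) for s in subsets]
acc = (ensemble_predict(models, subsets, X) == y).mean()
```

Because each base learner makes partly independent errors, the voted prediction is more accurate than any single subset-trained learner, which is the qualitative point the survey's summary makes.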
A strategy for improved computational efficiency of the method of anchored distributions
NASA Astrophysics Data System (ADS)
Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram
2013-06-01
This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability of a set of similar model parametrizations "bundle" replicating field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.
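The bundling idea can be caricatured in a few lines: draw parameter samples, group similar parametrizations into bundles, and run the forward model once per bundle representative instead of once per sample. Everything below (the cheap stand-in forward model, Gaussian likelihood, uniform prior, and bin-based bundles) is an illustrative assumption, far simpler than the actual MAD machinery:

```python
import numpy as np

rng = np.random.default_rng(1)

def forward_model(theta):
    """Cheap stand-in for an expensive flow-and-transport simulation."""
    return np.sin(theta) + 0.5 * theta

obs = forward_model(1.0) + 0.05          # one noisy field measurement (assumed)
sigma = 0.1                              # measurement-error scale (assumed)

# Draw prior samples, then bundle similar parametrizations into bins so the
# forward model runs once per bundle representative, not once per sample.
samples = rng.uniform(0, 2, size=2000)
edges = np.linspace(0, 2, 21)            # 20 bundles
idx = np.digitize(samples, edges) - 1
reps = 0.5 * (edges[:-1] + edges[1:])    # one FM run per bundle
like = np.exp(-0.5 * ((obs - forward_model(reps)) / sigma) ** 2)
post_weight = np.bincount(idx, minlength=20) * like   # prior mass x likelihood
best = reps[np.argmax(post_weight)]
```

Here 2000 samples require only 20 forward-model runs; the price, as in the paper, is that the likelihood is resolved only to the width of a bundle.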
Computational Modeling in Liver Surgery
Christ, Bruno; Dahmen, Uta; Herrmann, Karl-Heinz; König, Matthias; Reichenbach, Jürgen R.; Ricken, Tim; Schleicher, Jana; Ole Schwen, Lars; Vlaic, Sebastian; Waschinsky, Navina
2017-01-01
The need for extended liver resection is increasing due to the growing incidence of liver tumors in aging societies. Individualized surgical planning is the key for identifying the optimal resection strategy and to minimize the risk of postoperative liver failure and tumor recurrence. Current computational tools provide virtual planning of liver resection by taking into account the spatial relationship between the tumor and the hepatic vascular trees, as well as the size of the future liver remnant. However, size and function of the liver are not necessarily equivalent. Hence, determining the future liver volume might misestimate the future liver function, especially in cases of hepatic comorbidities such as hepatic steatosis. A systems medicine approach could be applied, including biological, medical, and surgical aspects, by integrating all available anatomical and functional information of the individual patient. Such an approach holds promise for better prediction of postoperative liver function and hence improved risk assessment. This review provides an overview of mathematical models related to the liver and its function and explores their potential relevance for computational liver surgery. We first summarize key facts of hepatic anatomy, physiology, and pathology relevant for hepatic surgery, followed by a description of the computational tools currently used in liver surgical planning. Then we present selected state-of-the-art computational liver models potentially useful to support liver surgery. Finally, we discuss the main challenges that will need to be addressed when developing advanced computational planning tools in the context of liver surgery. PMID:29249974
Links, Jonathan M; Schwartz, Brian S; Lin, Sen; Kanarek, Norma; Mitrani-Reiser, Judith; Sell, Tara Kirk; Watson, Crystal R; Ward, Doug; Slemp, Cathy; Burhans, Robert; Gill, Kimberly; Igusa, Tak; Zhao, Xilei; Aguirre, Benigno; Trainor, Joseph; Nigg, Joanne; Inglesby, Thomas; Carbone, Eric; Kendra, James M
2018-02-01
Policy-makers and practitioners have a need to assess community resilience in disasters. Prior efforts conflated resilience with community functioning, combined resistance and recovery (the components of resilience), and relied on a static model for what is inherently a dynamic process. We sought to develop linked conceptual and computational models of community functioning and resilience after a disaster. We developed a system dynamics computational model that predicts community functioning after a disaster. The computational model outputted the time course of community functioning before, during, and after a disaster, which was used to calculate resistance, recovery, and resilience for all US counties. The conceptual model explicitly separated resilience from community functioning and identified all key components for each, which were translated into a system dynamics computational model with connections and feedbacks. The components were represented by publicly available measures at the county level. Baseline community functioning, resistance, recovery, and resilience evidenced a range of values and geographic clustering, consistent with hypotheses based on the disaster literature. The work is transparent, motivates ongoing refinements, and identifies areas for improved measurements. After validation, such a model can be used to identify effective investments to enhance community resilience. (Disaster Med Public Health Preparedness. 2018;12:127-137).
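The separation of resistance, recovery, and resilience from the time course of community functioning can be illustrated with a toy simulation in the system-dynamics spirit (the shock size, recovery time constant, and 95% recovery threshold are all assumed values, not the authors' model):

```python
import numpy as np

def simulate_functioning(t_max=200.0, dt=0.1, shock_t=50.0, shock=0.4, tau=20.0):
    """Community functioning F(t) drops instantaneously at the disaster and
    relaxes back toward its baseline of 1.0 with time constant tau."""
    ts = np.arange(0.0, t_max, dt)
    F = np.ones_like(ts)
    for i in range(1, len(ts)):
        F[i] = F[i - 1] + (1.0 - F[i - 1]) / tau * dt   # recovery toward baseline
        if abs(ts[i] - shock_t) < dt / 2:
            F[i] -= shock                                # loss = 1 - resistance
    resistance = 1.0 - shock
    recovery_time = next(t for t, f in zip(ts, F) if t > shock_t and f > 0.95)
    resilience = F.mean()        # area under the functioning curve, normalized
    return resistance, recovery_time - shock_t, resilience
```

The three outputs correspond to the components the conceptual model separates: how far functioning falls, how long it takes to return, and the integrated loss of functioning over the study window.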
Using special functions to model the propagation of airborne diseases
NASA Astrophysics Data System (ADS)
Bolaños, Daniela
2014-06-01
Some special functions of mathematical physics are used to obtain a mathematical model of the propagation of airborne diseases. In particular, we study the propagation of tuberculosis in closed rooms and model it using the error function and the Bessel function. In the model, infected individuals emit pathogens into the environment, infecting other individuals who absorb them. The evolution in time of the concentration of pathogens in the environment is computed in terms of error functions. The evolution in time of the number of susceptible individuals is expressed by a differential equation that contains the error function and is solved numerically for different parametric simulations. The evolution in time of the number of infected individuals is plotted for each numerical simulation. On the other hand, the spatial distribution of the pathogen around the source of infection is represented by the Bessel function K0. The spatial and temporal distribution of the number of infected individuals is computed and plotted for some numerical simulations. All computations were made using computer algebra software, specifically Maple. It is expected that the analytical results we obtained will allow the design of treatment rooms and ventilation systems that reduce the risk of tuberculosis transmission.
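A sketch of the two ingredients, assuming illustrative closed forms and parameter values rather than the paper's exact equations, can be written with SciPy's special functions:

```python
import numpy as np
from scipy.special import erf, k0

# Illustrative parameter values (assumed, not taken from the paper).
D = 0.1      # effective diffusion coefficient, m^2/s
q = 1.0      # pathogen emission rate
V = 50.0     # room volume, m^3
k = 1e-3     # removal rate, 1/s
L = 1.0      # spatial decay length, m

def concentration_time(t):
    """Toy temporal build-up of pathogen concentration toward equilibrium,
    expressed through the error function (assumed illustrative form)."""
    return (q / (V * k)) * erf(np.sqrt(k * t))

def concentration_space(r):
    """Toy steady-state radial decay around the source: C(r) proportional
    to the modified Bessel function K0(r / L)."""
    return (q / (2 * np.pi * D)) * k0(r / L)
```

The qualitative behavior matches the abstract: concentration rises monotonically in time toward saturation, and falls off with distance from the emitting individual.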
Functional Programming in Computer Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Loren James; Davis, Marion Kei
We explore functional programming through a 16-week internship at Los Alamos National Laboratory. Functional programming is a branch of computer science that has exploded in popularity over the past decade due to its high-level syntax, ease of parallelization, and abundant applications. First, we summarize functional programming by listing the advantages of functional programming languages over the usual imperative languages, and we introduce the concept of parsing. Second, we discuss the importance of lambda calculus in the theory of functional programming. Lambda calculus was invented by Alonzo Church in the 1930s to formalize the concept of effective computability, and every functional language is essentially some implementation of lambda calculus. Finally, we display the lasting products of the internship: additions to a compiler and runtime system for the pure functional language STG, including both a set of tests that indicate the validity of updates to the compiler and a compiler pass that checks for illegal instances of duplicate names.
Computational Methods to Work as First-Pass Filter in Deleterious SNP Analysis of Alkaptonuria
Magesh, R.; George Priya Doss, C.
2012-01-01
A major challenge in the analysis of human genetic variation is to distinguish functional from nonfunctional SNPs. Discovering these functional SNPs is one of the main goals of modern genetics and genomics studies. There is a need to effectively and efficiently identify functionally important nsSNPs which may be deleterious or disease causing and to identify their molecular effects. The prediction of phenotype of nsSNPs by computational analysis may provide a good way to explore the function of nsSNPs and its relationship with susceptibility to disease. In this context, we surveyed and compared variation databases along with in silico prediction programs to assess the effects of deleterious functional variants on protein functions. In other respects, we attempted these methods to work as first-pass filter to identify the deleterious substitutions worth pursuing for further experimental research. In this analysis, we used the existing computational methods to explore the mutation-structure-function relationship in HGD gene causing alkaptonuria. PMID:22606059
NASA Astrophysics Data System (ADS)
Yang, Yu-Guang; Xu, Peng; Yang, Rui; Zhou, Yi-Hua; Shi, Wei-Min
2016-01-01
Quantum information and quantum computation have achieved huge success in recent years. In this paper, we investigate the capability of the quantum Hash function, which can be constructed by subtly modifying quantum walks, a well-known quantum computation model. It is found that the quantum Hash function can act as a hash function for the privacy amplification process of quantum key distribution systems with higher security. As a byproduct, the quantum Hash function can also be used for pseudo-random number generation due to its inherent chaotic dynamics. Further, we discuss the application of the quantum Hash function to image encryption and propose a novel image encryption algorithm. Numerical simulations and performance comparisons show that the quantum Hash function is suitable for privacy amplification in quantum key distribution, pseudo-random number generation, and image encryption in terms of various hash tests and randomness tests. It extends the scope of application of quantum computation and quantum information. PMID:26823196
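A toy sketch of the construction, a discrete-time quantum walk on a cycle whose coin rotation is selected by each message bit, with the final position distribution discretized into a digest, might look as follows (the number of positions, the coin angles, and the discretization scale are assumptions, not the paper's parameters):

```python
import numpy as np

def qwalk_hash(bits, n_pos=33):
    """Hash sketch: each message bit selects one of two unitary coin rotations
    driving a discrete-time quantum walk on a cycle; the final position
    probability distribution, discretized, serves as the digest."""
    # state[pos, coin]: coin component 0 steps left, component 1 steps right
    state = np.zeros((n_pos, 2), dtype=complex)
    state[0, 0] = 1.0
    thetas = (np.pi / 4, np.pi / 3)          # assumed bit-dependent coin angles
    for b in bits:
        c, s = np.cos(thetas[b]), np.sin(thetas[b])
        coin = np.array([[c, s], [s, -c]])   # unitary (reflection) coin flip
        state = state @ coin.T
        shifted = np.empty_like(state)
        shifted[:, 0] = np.roll(state[:, 0], -1)   # coin 0 steps left
        shifted[:, 1] = np.roll(state[:, 1], +1)   # coin 1 steps right
        state = shifted
    probs = (np.abs(state) ** 2).sum(axis=1)       # sums to 1 (unitarity)
    return tuple((probs * 1e6).astype(np.int64))   # discretized digest
```

Because every step is unitary, total probability is conserved, and flipping a single message bit changes the sequence of unitaries and hence the digest, which is the avalanche-like behavior the scheme relies on.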
Fast Computation of the Two-Point Correlation Function in the Age of Big Data
NASA Astrophysics Data System (ADS)
Pellegrino, Andrew; Timlin, John
2018-01-01
We present a new code which quickly computes the two-point correlation function for large sets of astronomical data. This code combines the ease of use of Python with the speed of parallel shared libraries written in C. We include the capability to compute auto- and cross-correlation statistics, and allow the user to calculate both three-dimensional and angular correlation functions. Additionally, the code automatically divides user-provided sky masks into contiguous subsamples of similar size, using the HEALPix pixelization scheme, for the purpose of resampling. Errors are computed using jackknife and bootstrap resampling in a way that adds negligible extra runtime, even with many subsamples. We demonstrate speed comparable to that of other clustering codes, and verify the code's accuracy against known analytic results.
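The estimator at the heart of such codes can be sketched in a 1D toy (the Landy-Szalay estimator with normalized pair counts; real codes count pairs in 3D or on the sky and push the counting into parallel C):

```python
import numpy as np

def landy_szalay(data, rand, edges):
    """Two-point correlation of points on the unit interval via the
    Landy-Szalay estimator xi = (DD - 2 DR + RR) / RR, where DD, DR, RR
    are normalized pair counts in separation bins."""
    def counts(a, b, same):
        d = np.abs(a[:, None] - b[None, :]).ravel()
        if same:
            d = d[d > 0]                  # drop self-pairs
        return np.histogram(d, bins=edges)[0].astype(float)
    nd, nr = len(data), len(rand)
    DD = counts(data, data, True) / (nd * (nd - 1))
    RR = counts(rand, rand, True) / (nr * (nr - 1))
    DR = counts(data, rand, False) / (nd * nr)
    return (DD - 2.0 * DR + RR) / RR

rng = np.random.default_rng(2)
edges = np.linspace(0.05, 0.5, 6)
xi = landy_szalay(rng.random(1500), rng.random(1500), edges)   # unclustered data
```

For an unclustered (uniform) sample the estimator should hover around zero in every bin, which makes a convenient correctness check; the O(N²) brute-force pair count here is exactly the part production codes accelerate.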
NASA Technical Reports Server (NTRS)
Goltz, G.; Kaiser, L. M.; Weiner, H.
1977-01-01
A computer program has been developed for designing and analyzing the performance of solar array/battery power systems for the U.S. Coast Guard Navigational Aids. This program is called the Design Synthesis/Performance Analysis (DSPA) Computer Program. The basic function of the Design Synthesis portion of the DSPA program is to evaluate functional and economic criteria to provide specifications for viable solar array/battery power systems. The basic function of the Performance Analysis portion of the DSPA program is to simulate the operation of solar array/battery power systems under specific loads and environmental conditions. This document establishes the software requirements for the DSPA computer program, discusses the processing that occurs within the program, and defines the necessary interfaces for operation.
On the distribution of a product of N Gaussian random variables
NASA Astrophysics Data System (ADS)
Stojanac, Željka; Suess, Daniel; Kliesch, Martin
2017-08-01
The product of Gaussian random variables appears naturally in many applications in probability theory and statistics. It is known that the distribution of a product of N such variables can be expressed in terms of a Meijer G-function. Here, we compute a similar representation for the corresponding cumulative distribution function (CDF) and provide a power-log series expansion of the CDF based on the theory of the more general Fox H-functions. Numerical computations show that for small values of the argument the CDF of products of Gaussians is well approximated by the lowest orders of this expansion. Analogous results are also shown for the absolute value as well as the square of such products of N Gaussian random variables. For the latter two settings, we also compute the moment generating functions in terms of Meijer G-functions.
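For N = 2 the density is known to reduce to a modified Bessel function, f(z) = K0(|z|)/π, which gives a quick numerical cross-check of the special-function representation:

```python
import numpy as np
from scipy.special import k0
from scipy.integrate import quad

# Z = X * Y with X, Y independent standard normals has density K0(|z|) / pi
# (the N = 2 case of the Meijer G-function representation).
rng = np.random.default_rng(0)
z = rng.standard_normal(1_000_000) * rng.standard_normal(1_000_000)

# Compare the Monte Carlo estimate of P(|Z| <= 1) with the analytic integral.
p_mc = np.mean(np.abs(z) <= 1.0)
p_exact, _ = quad(lambda t: 2.0 * k0(t) / np.pi, 0.0, 1.0)
```

The density has an integrable logarithmic singularity at z = 0, so the CDF is smooth there even though the density diverges, which is the regime the paper's power-log expansion targets.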
NASA Technical Reports Server (NTRS)
Schwenke, David W.; Truhlar, Donald G.
1990-01-01
The Generalized Newton Variational Principle for 3D quantum mechanical reactive scattering is briefly reviewed. Then three techniques are described which improve the efficiency of the computations. First, the fact that the Hamiltonian is Hermitian is used to reduce the number of integrals computed, and then the properties of localized basis functions are exploited in order to eliminate redundant work in the integral evaluation. A new type of localized basis function with desirable properties is suggested. It is shown how partitioned matrices can be used with localized basis functions to reduce the amount of work required to handle the complex boundary conditions. The new techniques do not introduce any approximations into the calculations, so they may be used to obtain converged solutions of the Schroedinger equation.
The flight telerobotic servicer: From functional architecture to computer architecture
NASA Technical Reports Server (NTRS)
Lumia, Ronald; Fiala, John
1989-01-01
After a brief tutorial on the NASA/National Bureau of Standards Standard Reference Model for Telerobot Control System Architecture (NASREM) functional architecture, the approach to its implementation is shown. First, interfaces must be defined which are capable of supporting the known algorithms. This is illustrated by considering the interfaces required for the SERVO level of the NASREM functional architecture. After interface definition, the specific computer architecture for the implementation must be determined. This choice is obviously technology dependent. An example illustrating one possible mapping of the NASREM functional architecture to a particular set of computers which implements it is shown. The result of choosing the NASREM functional architecture is that it provides a technology independent paradigm which can be mapped into a technology dependent implementation capable of evolving with technology in the laboratory and in space.
A large-scale evaluation of computational protein function prediction
Radivojac, Predrag; Clark, Wyatt T; Ronnen Oron, Tal; Schnoes, Alexandra M; Wittkop, Tobias; Sokolov, Artem; Graim, Kiley; Funk, Christopher; Verspoor, Karin; Ben-Hur, Asa; Pandey, Gaurav; Yunes, Jeffrey M; Talwalkar, Ameet S; Repo, Susanna; Souza, Michael L; Piovesan, Damiano; Casadio, Rita; Wang, Zheng; Cheng, Jianlin; Fang, Hai; Gough, Julian; Koskinen, Patrik; Törönen, Petri; Nokso-Koivisto, Jussi; Holm, Liisa; Cozzetto, Domenico; Buchan, Daniel W A; Bryson, Kevin; Jones, David T; Limaye, Bhakti; Inamdar, Harshal; Datta, Avik; Manjari, Sunitha K; Joshi, Rajendra; Chitale, Meghana; Kihara, Daisuke; Lisewski, Andreas M; Erdin, Serkan; Venner, Eric; Lichtarge, Olivier; Rentzsch, Robert; Yang, Haixuan; Romero, Alfonso E; Bhat, Prajwal; Paccanaro, Alberto; Hamp, Tobias; Kassner, Rebecca; Seemayer, Stefan; Vicedo, Esmeralda; Schaefer, Christian; Achten, Dominik; Auer, Florian; Böhm, Ariane; Braun, Tatjana; Hecht, Maximilian; Heron, Mark; Hönigschmid, Peter; Hopf, Thomas; Kaufmann, Stefanie; Kiening, Michael; Krompass, Denis; Landerer, Cedric; Mahlich, Yannick; Roos, Manfred; Björne, Jari; Salakoski, Tapio; Wong, Andrew; Shatkay, Hagit; Gatzmann, Fanny; Sommer, Ingolf; Wass, Mark N; Sternberg, Michael J E; Škunca, Nives; Supek, Fran; Bošnjak, Matko; Panov, Panče; Džeroski, Sašo; Šmuc, Tomislav; Kourmpetis, Yiannis A I; van Dijk, Aalt D J; ter Braak, Cajo J F; Zhou, Yuanpeng; Gong, Qingtian; Dong, Xinran; Tian, Weidong; Falda, Marco; Fontana, Paolo; Lavezzo, Enrico; Di Camillo, Barbara; Toppo, Stefano; Lan, Liang; Djuric, Nemanja; Guo, Yuhong; Vucetic, Slobodan; Bairoch, Amos; Linial, Michal; Babbitt, Patricia C; Brenner, Steven E; Orengo, Christine; Rost, Burkhard; Mooney, Sean D; Friedberg, Iddo
2013-01-01
Automated annotation of protein function is challenging. As the number of sequenced genomes rapidly grows, the overwhelming majority of protein products can only be annotated computationally. If computational predictions are to be relied upon, it is crucial that the accuracy of these methods be high. Here we report the results from the first large-scale community-based Critical Assessment of protein Function Annotation (CAFA) experiment. Fifty-four methods representing the state-of-the-art for protein function prediction were evaluated on a target set of 866 proteins from eleven organisms. Two findings stand out: (i) today’s best protein function prediction algorithms significantly outperformed widely-used first-generation methods, with large gains on all types of targets; and (ii) although the top methods perform well enough to guide experiments, there is significant need for improvement of currently available tools. PMID:23353650
Moncho, Salvador; Autschbach, Jochen
2010-01-12
A benchmark study for relativistic density functional calculations of NMR spin-spin coupling constants has been performed. The test set contained 47 complexes with heavy metal atoms (W, Pt, Hg, Tl, Pb) with a total of 88 coupling constants involving one or two heavy metal atoms. One-, two-, three-, and four-bond spin-spin couplings have been computed at different levels of theory (nonhybrid vs hybrid DFT, scalar vs two-component relativistic). The computational model was based on geometries fully optimized at the BP/TZP scalar relativistic zeroth-order regular approximation (ZORA) and the conductor-like screening model (COSMO) to include solvent effects. The NMR computations also employed the continuum solvent model. Computations in the gas phase were performed in order to assess the importance of the solvation model. The relative median deviations between various computational models and experiment were found to range between 13% and 21%, with the highest-level computational model (hybrid density functional computations including scalar plus spin-orbit relativistic effects, the COSMO solvent model, and a Gaussian finite-nucleus model) performing best.
The Enhancement of Concurrent Processing through Functional Programming Languages.
1984-06-01
Functional programming languages allow us to harness the processing power of computers with hundreds or even thousands of processors. Functional programming may also be the best way to turn imperative "library" programs into functional ones that are well suited to concurrent processing.
Function Package for Computing Quantum Resource Measures
NASA Astrophysics Data System (ADS)
Huang, Zhiming
2018-05-01
In this paper, we present a function package for calculating quantum resource measures and the dynamics of open systems. Our package includes common operators and operator lists, and frequently used functions for computing quantum entanglement, quantum correlation, quantum coherence, quantum Fisher information, and dynamics in noisy environments. We briefly explain the functions of the package and illustrate how to use it with several typical examples. We expect this package to be a useful tool for future research and education.
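The paper's package interface is not reproduced here; as an illustration of one such measure, the l1-norm of coherence of a density matrix can be computed as follows (the function name is my own):

```python
import numpy as np

def l1_coherence(rho):
    """l1-norm of coherence: the sum of the absolute values of the
    off-diagonal elements of a density matrix (basis-dependent)."""
    rho = np.asarray(rho, dtype=complex)
    return np.abs(rho).sum() - np.abs(np.diag(rho)).sum()

# Maximally coherent single-qubit state |+><+| has coherence 1;
# the maximally mixed state has coherence 0.
plus = np.array([1.0, 1.0]) / np.sqrt(2.0)
rho_plus = np.outer(plus, plus.conj())
c = l1_coherence(rho_plus)
```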
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hounkonnou, Mahouton Norbert; Nkouankam, Elvis Benzo Ngompe
2010-10-15
From the realization of the q-oscillator algebra in terms of a generalized derivative, we compute the matrix elements from deformed exponential functions and deduce generating functions associated with Rogers-Szego polynomials, as well as their relevant properties. We also compute the matrix elements associated with the (p,q)-oscillator algebra (a generalization of the q-oscillator algebra) and perform the Fourier-Gauss transform of a generalization of the deformed exponential functions.
Compact VLSI neural computer integrated with active pixel sensor for real-time ATR applications
NASA Astrophysics Data System (ADS)
Fang, Wai-Chi; Udomkesmalee, Gabriel; Alkalai, Leon
1997-04-01
A compact VLSI neural computer integrated with an active pixel sensor has been under development to mimic what is inherent in biological vision systems. This electronic eye-brain computer is targeted for real-time machine vision applications which require both high-bandwidth communication and high-performance computing for data sensing, synergy of multiple types of sensory information, feature extraction, target detection, target recognition, and control functions. The neural computer is based on a composite structure which combines the Annealing Cellular Neural Network (ACNN) and the Hierarchical Self-Organization Neural Network (HSONN). The ACNN architecture is a programmable and scalable multi-dimensional array of annealing neurons which are locally connected to their neighboring neurons. Meanwhile, the HSONN adopts a hierarchical structure with nonlinear basis functions. The ACNN+HSONN neural computer is effectively designed to perform programmable functions for machine vision processing at all levels with its embedded host processor. It provides a two order-of-magnitude increase in computation power over state-of-the-art microcomputer and DSP microelectronics. The feasibility of a compact current-mode VLSI design of the ACNN+HSONN neural computer is demonstrated by a 3D 16x8x9-cube neural processor chip designed in a 2-micrometer CMOS technology. Integration of this neural computer as one slice of a 4'x4' multichip module into the 3D MCM-based avionics architecture for NASA's New Millennium Program is also described.
Dung, Van Than; Tjahjowidodo, Tegoeh
2017-01-01
B-spline functions are widely used in many industrial applications such as computer graphic representations, computer aided design, computer aided manufacturing, computer numerical control, etc. Recently, demands have arisen, e.g. in the reverse engineering (RE) area, to employ B-spline curves for non-trivial cases that include curves with discontinuous points, cusps or turning points in the sampled data. The most challenging task in these cases is the identification of the number of knots and their respective locations in non-uniform space at the lowest computational cost. This paper presents a new strategy for fitting any form of curve by B-spline functions via a local algorithm. A new two-step method for fast knot calculation is proposed. In the first step, the data is split using a bisecting method with a predetermined allowable error to obtain coarse knots. Secondly, the knots are optimized, for both location and continuity level, by employing a non-linear least squares technique. The B-spline function is therefore obtained by solving an ordinary least squares problem. The performance of the proposed method is validated using various numerical experimental data, with and without simulated noise, generated by a B-spline function and deterministic parametric functions. This paper also discusses the benchmarking of the proposed method against existing methods in the literature. The proposed method is shown to be able to reconstruct B-spline functions from sampled data within acceptable tolerance. It is also shown that the proposed method can be applied to fitting any type of curve, ranging from smooth curves to discontinuous ones. In addition, the method does not require excessive computational cost, which allows it to be used in automatic reverse engineering applications.
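A minimal sketch of least-squares B-spline fitting with fixed interior knots, using SciPy; it deliberately omits the paper's bisection-based knot search and continuity-level optimization, and the uniform knot placement here is just an assumption:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# Sample a smooth curve
x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(x)

# Interior knots chosen uniformly for illustration; the paper instead
# optimizes knot number, location, and continuity level.
knots = np.linspace(0.5, 2.0 * np.pi - 0.5, 8)

# Cubic least-squares B-spline: solves the ordinary least squares
# problem for the control points given the fixed knot vector.
spline = LSQUnivariateSpline(x, y, knots, k=3)

rms = np.sqrt(np.mean((spline(x) - y) ** 2))
```

With the knots fixed, the fit reduces to the linear least-squares step the abstract mentions; all of the difficulty lies in choosing the knots, which is the paper's contribution.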
Bayesian extraction of the parton distribution amplitude from the Bethe-Salpeter wave function
NASA Astrophysics Data System (ADS)
Gao, Fei; Chang, Lei; Liu, Yu-xin
2017-07-01
We propose a new numerical method to compute the parton distribution amplitude (PDA) from the Euclidean Bethe-Salpeter wave function. The essential step is to extract the weight function in the Nakanishi representation of the Bethe-Salpeter wave function in Euclidean space, which is an ill-posed inversion problem, via the maximum entropy method (MEM). The Nakanishi weight function as well as the corresponding light-front PDA can be well determined. We confirm prior work on PDA computations that was based on different methods.
NASA Astrophysics Data System (ADS)
Reuter, Matthew; Tschudi, Stephen
When investigating the electrical response properties of molecules, experiments often measure conductance whereas computation predicts transmission probabilities. Although the Landauer-Büttiker theory relates the two in the limit of coherent scattering through the molecule, a direct comparison between experiment and computation can still be difficult: experimental data (specifically from break junctions) are statistical, while computational results are deterministic. Many studies compare the most probable experimental conductance with computation, but such an analysis discards almost all of the experimental statistics. In this work we develop tools to decipher the Landauer-Büttiker transmission function directly from experimental statistics and then apply them to enable a fairer comparison between experimental and computational results.
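For reference, the Landauer relation that links computed transmission to measured conductance can be written in a few lines (a textbook sketch, not the authors' statistical tooling):

```python
# Zero-temperature, zero-bias Landauer conductance G = G0 * sum_n T_n,
# with G0 = 2 e^2 / h the conductance quantum (exact 2019 SI constants).
E_CHARGE = 1.602176634e-19   # C
PLANCK_H = 6.62607015e-34    # J s
G0 = 2.0 * E_CHARGE**2 / PLANCK_H   # ~7.748e-5 S

def conductance(transmissions):
    """Conductance in siemens from channel transmission probabilities."""
    return G0 * sum(transmissions)

g = conductance([1.0])   # one fully open channel gives exactly G0
```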
The role of the host in a cooperating mainframe and workstation environment, volumes 1 and 2
NASA Technical Reports Server (NTRS)
Kusmanoff, Antone; Martin, Nancy L.
1989-01-01
In recent years, advancements made in computer systems have prompted a move from centralized computing based on timesharing a large mainframe computer to distributed computing based on a connected set of engineering workstations. A major factor in this advancement is the increased performance and lower cost of engineering workstations. The shift from centralized to distributed computing has led to challenges associated with the residency of application programs within the system. In a combined system of multiple engineering workstations attached to a mainframe host, the question arises of how a system designer should assign applications between the larger mainframe host and the smaller, yet powerful, workstation. The concepts related to real-time data processing are analyzed, and systems are described which use a host mainframe and a number of engineering workstations interconnected by a local area network. In most cases, distributed systems can be classified as having a single function or multiple functions and as executing programs in real time or non-real time. In a system of multiple computers, the degree of autonomy of the computers is important; a system with one master control computer generally differs in reliability, performance, and complexity from a system in which all computers share the control. This research is concerned with generating general criteria and principles for software residency decisions (host or workstation) for a diverse yet coupled group of users (the clustered workstations) which may need the use of a shared resource (the mainframe) to perform their functions.
Highly fault-tolerant parallel computation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spielman, D.A.
We re-introduce the coded model of fault-tolerant computation in which the input and output of a computational device are treated as words in an error-correcting code. A computational device correctly computes a function in the coded model if its input and output, once decoded, are a valid input and output of the function. In the coded model, it is reasonable to hope to simulate all computational devices by devices whose size is greater by a constant factor but which are exponentially reliable even if each of their components can fail with some constant probability. We consider fine-grained parallel computations in which each processor has a constant probability of producing the wrong output at each time step. We show that any parallel computation that runs for time t on w processors can be performed reliably on a faulty machine in the coded model using w log^{O(1)} w processors and time t log^{O(1)} w. The failure probability of the computation will be at most t · exp(-w^{1/4}). The codes used to communicate with our fault-tolerant machines are generalized Reed-Solomon codes and can thus be encoded and decoded in O(n log^{O(1)} n) sequential time and are independent of the machine they are used to communicate with. We also show how coded computation can be used to self-correct many linear functions in parallel with arbitrarily small overhead.
How to Compute Labile Metal-Ligand Equilibria
ERIC Educational Resources Information Center
de Levie, Robert
2007-01-01
The different methods used for computing labile metal-ligand complexes, which are suitable for an iterative computer solution, are illustrated. The ligand function has allowed students to relegate otherwise tedious iterations to a computer, while retaining complete control over what is calculated.
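As an illustration of the kind of iteration being relegated to the computer, here is a sketch for a single labile 1:1 complex M + L ⇌ ML; the function names and numbers are hypothetical, and this is not the article's ligand-function formulation:

```python
from scipy.optimize import brentq

def free_ligand(K, M_total, L_total):
    """Solve the 1:1 labile equilibrium M + L <=> ML with stability
    constant K = [ML] / ([M][L]) for the free ligand concentration.

    Mass balances:  M_total = [M] + [ML],  L_total = [L] + [ML],
    with [ML] = K * M_total * [L] / (1 + K * [L]).
    """
    f = lambda L: L + K * M_total * L / (1.0 + K * L) - L_total
    # f(0) < 0 and f(L_total) > 0, so the root is bracketed.
    return brentq(f, 0.0, L_total)

# Hypothetical numbers: K = 1e6 M^-1, 1 mM metal, 2 mM ligand
L = free_ligand(1e6, 1e-3, 2e-3)
M = 1e-3 / (1.0 + 1e6 * L)
ML = 1e6 * M * L
```

Both mass balances are satisfied at the solution, which is exactly the self-consistency the iterative approach converges to.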
Computer task performance by subjects with Duchenne muscular dystrophy.
Malheiros, Silvia Regina Pinheiro; da Silva, Talita Dias; Favero, Francis Meire; de Abreu, Luiz Carlos; Fregni, Felipe; Ribeiro, Denise Cardoso; de Mello Monteiro, Carlos Bandeira
2016-01-01
Two specific objectives were established to quantify computer task performance among people with Duchenne muscular dystrophy (DMD). First, we compared simple computational task performance between subjects with DMD and age-matched typically developing (TD) subjects. Second, we examined correlations between the ability of subjects with DMD to learn the computational task and their motor functionality, age, and initial task performance. The study included 84 individuals (42 with DMD, mean age of 18±5.5 years, and 42 age-matched controls). They executed a computer maze task; all participants performed the acquisition (20 attempts) and retention (five attempts) phases, repeating the same maze. A different maze was used to verify transfer performance (five attempts). The Motor Function Measure Scale was applied, and the results were compared with maze task performance. In the acquisition phase, a significant decrease was found in movement time (MT) between the first and last acquisition block, but only for the DMD group. For the DMD group, MT during transfer was shorter than during the first acquisition block, indicating improvement from the first acquisition block to transfer. In addition, the TD group showed shorter MT than the DMD group across the study. DMD participants improved their performance after practicing a computational task; however, the difference in MT was present in all attempts among DMD and control subjects. Computational task improvement was positively influenced by the initial performance of individuals with DMD. In turn, the initial performance was influenced by their distal functionality but not their age or overall functionality.
LOKI WIND CORRECTION COMPUTER AND WIND STUDIES FOR LOKI
which relates burnout deviation of flight path with the distributed wind along the boost trajectory. The wind influence function was applied to ... electrical outputs. A complete wind correction computer system based on the influence function and the results of wind studies was designed.
DOT National Transportation Integrated Search
1976-08-01
This report contains a functional design for the simulation of a future automation concept in support of the ATC Systems Command Center. The simulation subsystem performs airport airborne arrival delay predictions and computes flow control tables for...
Towards constructing multi-bit binary adder based on Belousov-Zhabotinsky reaction
NASA Astrophysics Data System (ADS)
Zhang, Guo-Mao; Wong, Ieong; Chou, Meng-Ta; Zhao, Xin
2012-04-01
It has been proposed that spatial excitable media can perform a wide range of computational operations, from image processing, to path planning, to logical and arithmetic computations. Experimental realizations of chemical logic and arithmetic have so far mainly been limited to single, simple logical functions. In this study, based on the Belousov-Zhabotinsky reaction, we performed simulations toward the realization of a more complex operation, the binary adder. Combining some of the existing functional structures that have been verified experimentally, we designed a planar geometrical binary adder chemical device. Through numerical simulations, we first demonstrated that the device can implement the function of a single-bit full binary adder. Then we show that the binary adder units can be further extended in the plane and coupled together to realize a two-bit, or even multi-bit, binary adder. The realization of chemical adders can guide the construction of other sophisticated arithmetic functions, ultimately leading to the implementation of a chemical computer and other intelligent systems.
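The Boolean function being realized chemically is the ordinary full adder; a plain-software sketch of the single-bit unit and its ripple-carry extension (illustrative only, not a model of the reaction-diffusion device) is:

```python
def full_adder(a, b, cin):
    """One-bit full adder: the Boolean function the BZ-reaction
    device realizes chemically. Returns (sum, carry_out)."""
    s = a ^ b ^ cin
    carry = (a & b) | (a & cin) | (b & cin)
    return s, carry

def ripple_add(x_bits, y_bits):
    """Multi-bit ripple-carry adder built by chaining full adders,
    mirroring how the adder units are coupled in the plane.
    Bit lists are least-significant bit first."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)
    return out
```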
Near-wall k-epsilon turbulence modeling
NASA Technical Reports Server (NTRS)
Mansour, N. N.; Kim, J.; Moin, P.
1987-01-01
The flow fields from a turbulent channel simulation are used to compute the budgets for the turbulent kinetic energy (k) and its dissipation rate (epsilon). Data from boundary layer simulations are used to analyze the dependence of the eddy-viscosity damping function on the Reynolds number and the distance from the wall. The computed budgets are used to test existing near-wall turbulence models of the k-epsilon type. It was found that the turbulent transport models should be modified in the vicinity of the wall. It was also found that existing models for the different terms in the epsilon-budget are adequate in the region away from the wall, but need modification near the wall. The channel flow is computed using a k-epsilon model with an eddy-viscosity damping function obtained from the data and no damping functions in the epsilon-equation. These computations show that the k-profile can be adequately predicted, but to correctly predict the epsilon-profile, damping functions in the epsilon-equation are needed.
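For context, one widely used low-Reynolds-number damping function of the k-epsilon family is the Launder-Sharma form; this is a common textbook choice, not necessarily the function fitted to the simulation data in this work:

```python
import math

C_MU = 0.09  # standard k-epsilon model constant

def f_mu_launder_sharma(re_t):
    """Launder-Sharma low-Reynolds-number damping function
    f_mu = exp(-3.4 / (1 + Re_t/50)^2); tends to 1 far from the wall
    (large Re_t) and damps the eddy viscosity near the wall."""
    return math.exp(-3.4 / (1.0 + re_t / 50.0) ** 2)

def eddy_viscosity(k, eps, nu):
    """nu_t = C_mu * f_mu * k^2 / eps, with turbulence Reynolds
    number Re_t = k^2 / (nu * eps)."""
    re_t = k * k / (nu * eps)
    return C_MU * f_mu_launder_sharma(re_t) * k * k / eps
```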
NASA Technical Reports Server (NTRS)
Lehtinen, B.; Geyser, L. C.
1984-01-01
AESOP is a computer program for use in designing feedback controls and state estimators for linear multivariable systems. AESOP is meant to be used in an interactive manner. Each design task that the program performs is assigned a "function" number. The user accesses these functions either (1) by inputting a list of desired function numbers or (2) by inputting a single function number. In the latter case the choice of the function will in general depend on the results obtained by the previously executed function. The most important of the AESOP functions are those that design linear quadratic regulators and Kalman filters. The user interacts with the program when using these design functions by inputting design weighting parameters and by viewing graphic displays of designed system responses. Supporting functions are provided that obtain system transient and frequency responses, transfer functions, and covariance matrices. The program can also compute open-loop system information such as stability (eigenvalues), eigenvectors, controllability, and observability. The program is written in ANSI-66 FORTRAN for use on an IBM 3033 using TSS 370. Descriptions of all subroutines and results of two test cases are included in the appendixes.
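The core of an LQR design function like AESOP's can be sketched with SciPy's Riccati solver; this is a generic double-integrator example, not one of AESOP's test cases:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double integrator: x' = A x + B u, cost = integral of x'Qx + u'Ru
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Solve the continuous-time algebraic Riccati equation and form the
# optimal state-feedback gain K = R^{-1} B' P
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# The closed-loop system A - B K must be stable
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
```

For this example the analytic gain is K = [1, sqrt(3)], which makes it a convenient sanity check.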
Róg, T; Murzyn, K; Hinsen, K; Kneller, G R
2003-04-15
We present a new implementation of the program nMoldyn, which has been developed for the computation and decomposition of neutron scattering intensities from Molecular Dynamics trajectories (Comput. Phys. Commun. 1995, 91, 191-214). The new implementation extends the functionality of the original version, provides a much more convenient user interface (both graphical/interactive and batch), and can be used as a tool set for implementing new analysis modules. This was made possible by the use of a high-level language, Python, and of modern object-oriented programming techniques. The quantities that can be calculated by nMoldyn are the mean-square displacement, the velocity autocorrelation function as well as its Fourier transform (the density of states) and its memory function, the angular velocity autocorrelation function and its Fourier transform, the reorientational correlation function, and several functions specific to neutron scattering: the coherent and incoherent intermediate scattering functions with their Fourier transforms, the memory function of the coherent scattering function, and the elastic incoherent structure factor. The ability to compute memory functions is a new and powerful feature that allows simulation results to be related to theoretical studies. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 657-667, 2003
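A bare-bones version of one quantity nMoldyn computes, the normalized velocity autocorrelation function, can be written directly in NumPy; this sketch averages over atoms and time origins and is not nMoldyn's implementation:

```python
import numpy as np

def vacf(v):
    """Normalized velocity autocorrelation function
    C(t) = <v(0).v(t)> / <v(0).v(0)> from an (n_steps, n_atoms, 3)
    velocity trajectory, averaged over atoms and time origins."""
    n = v.shape[0]
    c = np.empty(n)
    for lag in range(n):
        # dot product per atom, averaged over origins and atoms
        c[lag] = np.mean(np.sum(v[: n - lag] * v[lag:], axis=-1))
    return c / c[0]

# Synthetic trajectory: identical harmonic oscillations for 5 atoms
t = np.arange(200)
v = np.cos(0.1 * t)[:, None, None] * np.ones((1, 5, 3))
c = vacf(v)
```

nMoldyn evaluates such correlations via FFTs for speed; the direct loop above is only meant to make the definition explicit.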
A real-time moment-tensor inversion system (GRiD-MT-3D) using 3-D Green's functions
NASA Astrophysics Data System (ADS)
Nagao, A.; Furumura, T.; Tsuruoka, H.
2016-12-01
We developed a real-time moment-tensor inversion system using 3-D Green's functions (GRiD-MT-3D) by improving the current system (GRiD-MT; Tsuruoka et al., 2009), which uses 1-D Green's functions at periods longer than 20 s. Our moment-tensor inversion is applied to the real-time monitoring of earthquakes occurring beneath the Kanto basin area. The basin, which is constituted of thick sediment layers, lies on the complex subduction of the Philippine Sea Plate and the Pacific Plate, which can significantly affect seismic wave propagation. We compute 3-D Green's functions using finite-difference-method (FDM) simulations considering a 3-D velocity model, which is based on the Japan Integrated Velocity Structure Model (Koketsu et al., 2012), that includes crust, mantle, and subducting plates. The 3-D FDM simulations are computed over a volume of 468 km by 432 km by 120 km in the EW, NS, and depth directions, respectively, that is discretized into 0.25 km grids. Considering that the minimum S wave velocity of the sedimentary layer is 0.5 km/s, the simulations can compute seismograms up to 0.5 Hz. We calculate Green's functions between 24,700 sources, which are distributed every 0.1° in the horizontal direction and every 9 km in the depth direction, and 13 F-net stations. To compute this large number of Green's functions, we used the EIC parallel computer of ERI. Reciprocity, which switches the source and station positions, is used to reduce the total computation cost. It took 156 hours to compute all the Green's functions. Results show that at long periods (T>15 s), only small differences are observed between the 3-D and 1-D Green's functions, as indicated by high correlation coefficients of 0.9 between the waveforms. However, at shorter periods (T<10 s), the differences become larger and the correlation coefficients drop to 0.5.
The 3-D heterogeneity especially affects the Green's functions for ray paths that cross complex geological structures, such as the sedimentary basin or the subducting plates. After incorporating the 3-D Green's functions into the GRiD-MT-3D system, we compare the results to the former GRiD-MT system to demonstrate the effectiveness of the new system in terms of variance reduction and accuracy of the moment-tensor estimation for much smaller events than the current system.
Computation of the Complex Probability Function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trainer, Amelia Jo; Ledwith, Patrick John
The complex probability function is important in many areas of physics, and many techniques have been developed in an attempt to compute it for some z quickly and efficiently. Most prominent are the methods that use Gauss-Hermite quadrature, which uses the roots of the nth-degree Hermite polynomial and corresponding weights to approximate the complex probability function. This document serves as an overview and discussion of the use, shortcomings, and potential improvements of the Gauss-Hermite quadrature for the complex probability function.
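The Gauss-Hermite approximation discussed here is easy to state: with nodes x_i and weights w_i of the nth-degree Hermite rule, w(z) ≈ (i/π) Σ w_i / (z − x_i) for Im(z) > 0. A sketch, checked against SciPy's Faddeeva routine:

```python
import numpy as np
from scipy.special import wofz

def w_gauss_hermite(z, n=40):
    """Gauss-Hermite approximation to the complex probability
    (Faddeeva) function w(z) = (i/pi) * Int exp(-t^2)/(z - t) dt,
    valid in the upper half plane Im(z) > 0."""
    x, wts = np.polynomial.hermite.hermgauss(n)
    return 1j / np.pi * np.sum(wts / (z - x))

z = 2.0 + 1.0j
approx = w_gauss_hermite(z)
exact = wofz(z)          # reference Faddeeva implementation
```

As the document notes, this simple rule degrades as z approaches the real axis, where the pole of the integrand sits close to the quadrature nodes.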
Multi-tasking computer control of video related equipment
NASA Technical Reports Server (NTRS)
Molina, Rod; Gilbert, Bob
1989-01-01
The flexibility, cost-effectiveness and widespread availability of personal computers now make it possible to completely integrate the previously separate elements of video post-production into a single device. Specifically, a personal computer, such as the Commodore Amiga, can perform multiple and simultaneous tasks from an individual unit. Relatively low cost, minimal space requirements and user-friendliness provide the most favorable environment for the many phases of video post-production. Computers are well known for their basic abilities to process numbers, text and graphics and to reliably perform repetitive and tedious functions efficiently. These capabilities can now apply as either additions or alternatives to existing video post-production methods. A present example of computer-based video post-production technology is the RGB CVC (Computer and Video Creations) WorkSystem. A wide variety of integrated functions are made possible with an Amiga computer existing at the heart of the system.
Few Fractional Order Derivatives and Their Computations
ERIC Educational Resources Information Center
Bhatta, D. D.
2007-01-01
This work presents an introductory development of fractional order derivatives and their computations. Historical development of fractional calculus is discussed. This paper presents how to obtain computational results of fractional order derivatives for some elementary functions. Computational results are illustrated in tabular and graphical…
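One of the elementary computational results such an introduction typically tabulates is the Riemann-Liouville derivative of a monomial, D^α x^k = Γ(k+1)/Γ(k+1−α) · x^(k−α); a sketch (the function name is my own):

```python
from math import gamma

def rl_fractional_derivative_monomial(k, alpha, x):
    """Riemann-Liouville fractional derivative of f(x) = x^k:
    D^alpha x^k = Gamma(k+1) / Gamma(k+1-alpha) * x^(k-alpha)."""
    return gamma(k + 1) / gamma(k + 1 - alpha) * x ** (k - alpha)

# Half-derivative of f(x) = x at x = 1 is 2/sqrt(pi) ~ 1.128
d_half = rl_fractional_derivative_monomial(1, 0.5, 1.0)
```

Applying the half-derivative twice to f(x) = x recovers the ordinary first derivative, which is the semigroup property these introductions usually highlight.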
ERIC Educational Resources Information Center
Uthe, Elaine F.
1982-01-01
Describes the growing use of computers in our world and how their use will affect vocational education. Discusses recordkeeping and database functions, computer graphics, problem-solving simulations, satellite communications, home computers, and how they will affect office education, home economics education, marketing and distributive education,…
Biswas, Amitava; Liu, Chen; Monga, Inder; ...
2016-01-01
For the last few years, there has been tremendous growth in data traffic due to the high adoption rate of mobile devices and cloud computing. The Internet of things (IoT) will stimulate even further growth. This is increasing the scale and complexity of telecom/internet service provider (SP) and enterprise data centre (DC) compute and network infrastructures. As a result, managing these large network-compute converged infrastructures is becoming complex and cumbersome. To cope, network and DC operators are trying to automate network and system operations, administration and management (OAM) functions. OAM includes all non-functional mechanisms which keep the network running.
Lattice dynamics calculations based on density-functional perturbation theory in real space
NASA Astrophysics Data System (ADS)
Shang, Honghui; Carbogno, Christian; Rinke, Patrick; Scheffler, Matthias
2017-06-01
A real-space formalism for density-functional perturbation theory (DFPT) is derived and applied to the computation of harmonic vibrational properties in molecules and solids. The practical implementation using numeric atom-centered orbitals as basis functions is demonstrated exemplarily for the all-electron Fritz Haber Institute ab initio molecular simulations (FHI-aims) package. The convergence of the calculations with respect to numerical parameters is carefully investigated, and a systematic comparison with finite-difference approaches is performed both for finite (molecules) and extended (periodic) systems. Finally, scalability tests on massively parallel computer systems demonstrate the computational efficiency of the implementation.
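The finite-difference reference against which DFPT results are compared can be illustrated on a toy 1-D potential; this is only a sketch of the idea, unrelated to the FHI-aims implementation:

```python
import math

def fd_second_derivative(V, x0, h=1e-4):
    """Central finite-difference second derivative of a potential,
    the kind of reference DFPT force constants are checked against."""
    return (V(x0 + h) - 2.0 * V(x0) + V(x0 - h)) / (h * h)

# Toy 1-D harmonic potential V(x) = 0.5 * k * x^2 (arbitrary units)
k = 4.0
V = lambda x: 0.5 * k * x * x

hessian = fd_second_derivative(V, 0.0)      # should recover k
omega = math.sqrt(hessian / 1.0)            # frequency for unit mass
```

DFPT obtains the same second derivatives analytically from the perturbed density, avoiding the displaced-geometry calculations the finite-difference route requires.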
Elliptical orbit performance computer program
NASA Technical Reports Server (NTRS)
Myler, T. R.
1981-01-01
A FORTRAN coded computer program which generates and plots elliptical orbit performance capability of space boosters for presentation purposes is described. Orbital performance capability of space boosters is typically presented as payload weight as a function of perigee and apogee altitudes. The parameters are derived from a parametric computer simulation of the booster flight which yields the payload weight as a function of velocity and altitude at insertion. The process of converting from velocity and altitude to apogee and perigee altitude and plotting the results as a function of payload weight is mechanized with the ELOPE program. The program theory, user instruction, input/output definitions, subroutine descriptions and detailed FORTRAN coding information are included.
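The velocity/altitude-to-apogee/perigee conversion that ELOPE mechanizes follows from the vis-viva equation and the angular momentum; a sketch under the assumption of a known flight-path angle at insertion (function and constant names are mine, not the program's):

```python
import math

MU_EARTH = 398600.4418  # km^3/s^2, Earth's gravitational parameter
R_EARTH = 6378.137      # km, equatorial radius

def apsis_altitudes(r, v, gamma=0.0):
    """Convert insertion radius r [km], inertial speed v [km/s], and
    flight-path angle gamma [rad] to (perigee, apogee) altitudes [km].

    Vis-viva gives the semi-major axis; the angular momentum
    h = r v cos(gamma) then fixes the eccentricity.
    """
    a = 1.0 / (2.0 / r - v * v / MU_EARTH)   # semi-major axis
    h = r * v * math.cos(gamma)              # specific angular momentum
    e = math.sqrt(max(0.0, 1.0 - h * h / (MU_EARTH * a)))
    rp, ra = a * (1.0 - e), a * (1.0 + e)
    return rp - R_EARTH, ra - R_EARTH

# Circular insertion at 300 km altitude: perigee == apogee == 300 km
r0 = R_EARTH + 300.0
v_circ = math.sqrt(MU_EARTH / r0)
hp, ha = apsis_altitudes(r0, v_circ)
```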
Besnier, Francois; Glover, Kevin A.
2013-01-01
This software package provides an R-based framework to make use of multi-core computers when running analyses in the population genetics program STRUCTURE. It is especially addressed to those users of STRUCTURE dealing with numerous and repeated data analyses, who could take advantage of an efficient script to automatically distribute STRUCTURE jobs among multiple processors. It also includes additional functions to divide analyses among combinations of populations within a single data set without the need to manually produce multiple projects, as is currently the case in STRUCTURE. The package consists of two main functions, MPI_structure() and parallel_structure(), as well as an example data file. We compared the performance in computing time for this example data on two computer architectures and showed that the use of the present functions can result in several-fold improvements in terms of computation time. ParallelStructure is freely available at https://r-forge.r-project.org/projects/parallstructure/. PMID:23923012
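The job-distribution idea behind the package can be sketched in Python (a hypothetical stand-in, not the R package's actual MPI_structure()/parallel_structure() API): every (K, replicate) combination becomes an independent job farmed out to a pool of worker processes:

```python
from multiprocessing import Pool

def run_structure_job(job):
    """Stand-in for one STRUCTURE run; the real package would invoke the
    STRUCTURE binary here for a single (K, replicate) combination."""
    k, replicate = job
    return (k, replicate, f"K={k} rep={replicate} done")

def parallel_structure(k_values, n_replicates, n_workers=4):
    # Build the full job list (every K x replicate combination), then
    # distribute the jobs across a pool of worker processes.
    jobs = [(k, r) for k in k_values for r in range(n_replicates)]
    with Pool(n_workers) as pool:
        return pool.map(run_structure_job, jobs)

if __name__ == "__main__":
    results = parallel_structure(k_values=[1, 2, 3], n_replicates=2)
```

Because each STRUCTURE run is independent, this embarrassingly parallel fan-out is what yields the several-fold wall-clock improvements the abstract reports.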
Sturm, Alexandra; Rozenman, Michelle; Piacentini, John C; McGough, James J; Loo, Sandra K; McCracken, James T
2018-03-20
Predictors of math achievement in attention-deficit/hyperactivity disorder (ADHD) are not well-known. To address this gap in the literature, we examined individual differences in neurocognitive functioning domains on math computation in a cross-sectional sample of youth with ADHD. Gender and anxiety symptoms were explored as potential moderators. The sample consisted of 281 youth (aged 8-15 years) diagnosed with ADHD. Neurocognitive tasks assessed auditory-verbal working memory, visuospatial working memory, and processing speed. Auditory-verbal working memory speed significantly predicted math computation. A three-way interaction revealed that at low levels of anxious perfectionism, slower processing speed predicted poorer math computation for boys compared to girls. These findings indicate the uniquely predictive values of auditory-verbal working memory and processing speed on math computation, and their differential moderation. These findings provide preliminary support that gender and anxious perfectionism may influence the relationship between neurocognitive functioning and academic achievement.
Parent's Guide to Computers in Education.
ERIC Educational Resources Information Center
Moursund, David
Addressed to the parents of children taking computer courses in school, this booklet outlines the rationales for computer use in schools and explains for a lay audience the features and functions of computers. A look at the school of the future shows computers aiding the study of reading, writing, arithmetic, geography, and history. The features…
Methodical Approaches to Teaching of Computer Modeling in Computer Science Course
ERIC Educational Resources Information Center
Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina
2015-01-01
The purpose of this study was to justify a technique for forming representations of modeling methodology in computer science lessons. The necessity of studying computer modeling is that current trends toward strengthening the general-education and worldview functions of computer science define the necessity of additional research of the…
Computer-Based Techniques for Collection of Pulmonary Function Variables during Rest and Exercise.
1991-03-01
routinely included in experimental protocols involving hyper- and hypobaric excursions. Unfortunately, the full potential of those tests is often not...for a Pulmonary Function data acquisition system that has proven useful in the hyperbaric research laboratory. It illustrates how computers can
Synthesis of Efficient Structures for Concurrent Computation.
1983-10-01
formal presentation of these techniques, called virtualisation and aggregation, can be found in [King-83]. ... Census Functions ... User-Assisted Aggregation ... Figure 6. Simple Parallel Structure for Broadcasting ... Figure 7. Internal Structure of a Prefix Computation Network
21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.
Code of Federal Regulations, 2010 CFR
2010-04-01
21 Food and Drugs 8 2010-04-01 false Single-function, preprogrammed diagnostic computer. 870.1435 Section 870.1435 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES CARDIOVASCULAR DEVICES Cardiovascular Diagnostic Devices § 870.1435...
Short-range density functional correlation within the restricted active space CI method
NASA Astrophysics Data System (ADS)
Casanova, David
2018-03-01
In the present work, I introduce a hybrid wave function-density functional theory electronic structure method based on the range separation of the electron-electron Coulomb operator in order to recover dynamic electron correlations missed in the restricted active space configuration interaction (RASCI) methodology. The working equations and the computational algorithm for the implementation of the new approach, i.e., RAS-srDFT, are presented, and the method is tested in the calculation of excitation energies of organic molecules. The good performance of the RASCI wave function in combination with different short-range exchange-correlation functionals in the computation of relative energies represents a quantitative improvement with respect to the RASCI results and paves the way for the development of RAS-srDFT as a promising scheme in the computation of ground and excited states where nondynamic and dynamic electron correlations are important.
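The range separation underlying such srDFT schemes splits the Coulomb operator with the error function; a minimal numerical check of that identity (the separation parameter mu is an illustrative assumption, not a value from the paper):

```python
import math

def coulomb_split(r, mu):
    """Range separation of the electron-electron Coulomb operator:
    1/r = erf(mu*r)/r (long range) + erfc(mu*r)/r (short range)."""
    return math.erf(mu * r) / r, math.erfc(mu * r) / r

lr, sr = coulomb_split(r=2.0, mu=0.5)
# The two pieces sum back to the bare 1/r interaction for any mu.
```

The long-range piece is treated by the wave-function method (here RASCI) and the short-range piece by a density functional, which is how double counting of correlation is avoided.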
Computational approaches for rational design of proteins with novel functionalities
Tiwari, Manish Kumar; Singh, Ranjitha; Singh, Raushan Kumar; Kim, In-Won; Lee, Jung-Kul
2012-01-01
Proteins are the most multifaceted macromolecules in living systems and have various important functions, including structural, catalytic, sensory, and regulatory functions. Rational design of enzymes is a great challenge to our understanding of protein structure and physical chemistry and has numerous potential applications. Protein design algorithms have been applied to design or engineer proteins that fold, fold faster, catalyze, catalyze faster, signal, and adopt preferred conformational states. The field of de novo protein design, although only a few decades old, is beginning to produce exciting results. Developments in this field are already having a significant impact on biotechnology and chemical biology. The application of powerful computational methods for functional protein design has recently succeeded at engineering target activities. Here, we review recently reported de novo functional proteins that were developed using various protein design approaches, including rational design, computational optimization, and selection from combinatorial libraries, highlighting recent advances and successes. PMID:24688643
New Treatment of Strongly Anisotropic Scattering Phase Functions: The Delta-M+ Method
NASA Astrophysics Data System (ADS)
Stamnes, K. H.; Lin, Z.; Chen, N.; Fan, Y.; Li, W.; Stamnes, S.
2017-12-01
The treatment of strongly anisotropic scattering phase functions is still a challenge for accurate radiance computations. The new Delta-M+ method resolves this problem by introducing a reliable, fast, accurate, and easy-to-use Legendre expansion of the scattering phase function with modified moments. Delta-M+ is an upgrade of the widely-used Delta-M method that truncates the forward scattering cone into a Dirac-delta-function (a direct beam), where the + symbol indicates that it essentially matches moments above the first 2M terms. Compared with the original Delta-M method, Delta-M+ has the same computational efficiency, but the accuracy has been increased dramatically. Tests show that the errors for strongly forward-peaked scattering phase functions are greatly reduced. Furthermore, the accuracy and stability of radiance computations are also significantly improved by applying the new Delta-M+ method.
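The original Delta-M scaling that Delta-M+ upgrades can be written down compactly: the forward peak is replaced by a delta function of weight f = chi_M, and the retained Legendre moments are rescaled. A hedged Python sketch (illustrating the classic Delta-M moment scaling only, not the additional moment matching of Delta-M+; the Henyey-Greenstein test case is an assumption):

```python
import numpy as np

def delta_m_moments(chi, M):
    """Classic Delta-M truncation: the forward-scattering peak becomes a
    delta function of weight f = chi[M], and the first M Legendre moments
    are rescaled as chi'_l = (chi_l - f) / (1 - f)."""
    f = chi[M]
    return f, (chi[:M] - f) / (1.0 - f)

# Henyey-Greenstein phase function: exact Legendre moments are chi_l = g**l,
# a standard strongly forward-peaked test case.
g, M = 0.9, 8
chi = g ** np.arange(M + 1)
f, chi_scaled = delta_m_moments(chi, M)
# chi'_0 remains 1, so the normalization of the phase function is preserved.
```

The rescaled zeroth moment staying at 1 is what guarantees energy conservation under the truncation; Delta-M+ improves on this by also constraining moments beyond the first 2M terms.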
Hexagonalization of correlation functions II: two-particle contributions
NASA Astrophysics Data System (ADS)
Fleury, Thiago; Komatsu, Shota
2018-02-01
In this work, we compute one-loop planar five-point functions in N=4 super-Yang-Mills using integrability. As in the previous work, we decompose the correlation functions into hexagon form factors and glue them using the weight factors which depend on the cross-ratios. The main new ingredient in the computation, as compared to the four-point functions studied in the previous paper, is the two-particle mirror contribution. We develop techniques to evaluate it and find agreement with the perturbative results in all the cases we analyzed. In addition, we consider next-to-extremal four-point functions, which are known to be protected, and show that the sum of one-particle and two-particle contributions at one loop adds up to zero as expected. The tools developed in this work would be useful for computing higher-particle contributions which would be relevant for more complicated quantities such as higher-loop corrections and non-planar correlators.
SPECT/CT in imaging foot and ankle pathology-the demise of other coregistration techniques.
Mohan, Hosahalli K; Gnanasegaran, Gopinath; Vijayanathan, Sanjay; Fogelman, Ignac
2010-01-01
Disorders of the ankle and foot are common and, given the complex anatomy and function of the foot, they present a significant clinical challenge. Imaging plays a crucial role in the management of these patients, with multiple imaging options available to the clinician. The American College of Radiology has set appropriateness criteria for the use of the available imaging modalities in the management of foot and ankle pathologies. These are broadly classified into anatomical and functional imaging modalities. Recently, single-photon emission computed tomography and/or computed tomography scanners, which can elegantly combine functional and anatomical images, have been introduced, promising an exciting and important development. This review describes our clinical experience with single-photon emission computed tomography and/or computed tomography and discusses potential applications of these techniques.
TBGG- INTERACTIVE ALGEBRAIC GRID GENERATION
NASA Technical Reports Server (NTRS)
Smith, R. E.
1994-01-01
TBGG, Two-Boundary Grid Generation, applies an interactive algebraic grid generation technique in two dimensions. The program incorporates mathematical equations that relate the computational domain to the physical domain. TBGG has application to a variety of problems using finite difference techniques, such as computational fluid dynamics. Examples include the creation of a C-type grid about an airfoil and a nozzle configuration in which no left or right boundaries are specified. The underlying two-boundary technique of grid generation is based on Hermite cubic interpolation between two fixed, nonintersecting boundaries. The boundaries are defined by two ordered sets of points, referred to as the top and bottom. Left and right side boundaries may also be specified, and call upon linear blending functions to conform interior interpolation to the side boundaries. Spacing between physical grid coordinates is determined as a function of boundary data and uniformly spaced computational coordinates. Control functions relating computational coordinates to parametric intermediate variables that affect the distance between grid points are embedded in the interpolation formulas. A versatile control function technique with smooth cubic spline functions is also presented. The TBGG program is written in FORTRAN 77. It works best in an interactive graphics environment where computational displays and user responses are quickly exchanged. The program has been implemented on a CDC Cyber 170 series computer using NOS 2.4 operating system, with a central memory requirement of 151,700 (octal) 60 bit words. TBGG requires a Tektronix 4015 terminal and the DI-3000 Graphics Library of Precision Visuals, Inc. TBGG was developed in 1986.
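The core two-boundary idea can be sketched with a simplified blend between the bottom and top boundary polylines (a hedged illustration only: TBGG's full Hermite cubic interpolation also carries derivative terms for grid orthogonality and control functions for spacing, which this value-only blend omits):

```python
import numpy as np

def hermite_blend(t):
    """Cubic Hermite value-blending basis: h0 selects the bottom boundary
    at t=0 and h1 selects the top boundary at t=1, with smooth transition."""
    h0 = 2 * t**3 - 3 * t**2 + 1
    h1 = -2 * t**3 + 3 * t**2
    return h0, h1

def two_boundary_grid(bottom, top, n_eta):
    """Fill the interior between two fixed, nonintersecting boundary
    polylines (arrays of shape (n_xi, 2)) with blended grid points."""
    eta = np.linspace(0.0, 1.0, n_eta)
    h0, h1 = hermite_blend(eta)
    # grid[i, j] = h0(eta_j) * bottom[i] + h1(eta_j) * top[i]
    return (h0[None, :, None] * bottom[:, None, :]
            + h1[None, :, None] * top[:, None, :])

bottom = np.stack([np.linspace(0, 1, 5), np.zeros(5)], axis=1)
top = np.stack([np.linspace(0, 1, 5), np.ones(5)], axis=1)
grid = two_boundary_grid(bottom, top, 4)
```

Because h0 and h1 reduce to 1 and 0 (and vice versa) at the ends, the generated grid matches the two prescribed boundaries exactly, which is the defining property of the two-boundary technique.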
Barth, Patrick; Senes, Alessandro
2016-06-07
The computational design of α-helical membrane proteins is still in its infancy but has already made great progress. De novo design allows stable, specific and active minimal oligomeric systems to be obtained. Computational reengineering can improve the stability and function of naturally occurring membrane proteins. Currently, the major hurdle for the field is the experimental characterization of the designs. The emergence of new structural methods for membrane proteins will accelerate progress.
On the Circulation Manifold for Two Adjacent Lifting Sections
NASA Technical Reports Server (NTRS)
Zannetti, Luca; Iollo, Angelo
1998-01-01
The circulation functional relative to two adjacent lifting sections is studied for two cases. In the first case we consider two adjacent circles. The circulation is computed as a function of the displacement of the secondary circle along the axis joining the two centers and of the angle of attack of the secondary circle. The gradient of this functional is computed by differentiating a set of elliptic functions with respect both to their argument and to their period. In the second case, we consider a wing-flap configuration. The circulation is computed by some implicit mappings, whose differentials with respect to the variation of the geometrical configuration in the physical space are found by divided differences. Configurations giving rise to local maxima and minima in the circulation manifold are presented.
Tawhai, M. H.; Clark, A. R.; Donovan, G. M.; Burrowes, K. S.
2011-01-01
Computational models of lung structure and function necessarily span multiple spatial and temporal scales, i.e., dynamic molecular interactions give rise to whole organ function, and the link between these scales cannot be fully understood if only molecular or organ-level function is considered. Here, we review progress in constructing multiscale finite element models of lung structure and function that are aimed at providing a computational framework for bridging the spatial scales from molecular to whole organ. These include structural models of the intact lung, embedded models of the pulmonary airways that couple to model lung tissue, and models of the pulmonary vasculature that account for distinct structural differences at the extra- and intra-acinar levels. Biophysically based functional models for tissue deformation, pulmonary blood flow, and airway bronchoconstriction are also described. The development of these advanced multiscale models has led to a better understanding of complex physiological mechanisms that govern regional lung perfusion and emergent heterogeneity during bronchoconstriction. PMID:22011236
NASA Astrophysics Data System (ADS)
Roy, M.; Maksym, P. A.; Bruls, D.; Offermans, P.; Koenraad, P. M.
2010-11-01
An effective-mass theory of subsurface scanning tunneling microscopy (STM) is developed. Subsurface structures such as quantum dots embedded into a semiconductor slab are considered. States localized around subsurface structures match on to a tail that decays into the vacuum above the surface. It is shown that the lateral variation in this tail may be found from a surface envelope function provided that the effects of the slab surfaces and the subsurface structure decouple approximately. The surface envelope function is given by a weighted integral of a bulk envelope function that satisfies boundary conditions appropriate to the slab. The weight function decays into the slab inversely with distance and this slow decay explains the subsurface sensitivity of STM. These results enable STM images to be computed simply and economically from the bulk envelope function. The method is used to compute wave-function images of cleaved quantum dots and the computed images agree very well with experiment.
2012-01-01
computerized stimulation paradigms for use during functional neuroimaging (i.e., MSIT). Accomplishments: • The following computer tasks were ... and Stability Test. • Programming of all computerized functional MRI stimulation paradigms and assessment tasks using E-prime software was completed ... • Computer stimulation paradigms were tested in the scanner environment to ensure that they could be presented and seen by subjects in the scanner
Teaching CAD on the Apple Computer.
ERIC Educational Resources Information Center
Norton, Robert L.
1984-01-01
Describes a course designed to teach engineers how to accomplish computer graphics techniques on a limited scale with the Apple computer. The same mathematics and program code will also function for larger and more complex computers. Course content, instructional strategies, student evaluation, and recommendations are considered. (JN)
Wang, Degeng
2008-01-01
Discrepancy between the abundance of cognate protein and RNA molecules is frequently observed. A theoretical understanding of this discrepancy remains elusive, and it is frequently described as a surprise and/or a technical difficulty in the literature. Protein and RNA represent different steps of the multi-stepped cellular genetic information flow process, in which they are dynamically produced and degraded. This paper explores a comparison with a similar process in computers - multi-step information flow from the storage level to the execution level. Functional similarities can be found in almost every facet of the retrieval process. Firstly, common architecture is shared, as the ribonome (RNA space) and the proteome (protein space) are functionally similar to the computer primary memory and the computer cache memory, respectively. Secondly, the retrieval process functions, in both systems, to support the operation of dynamic networks - biochemical regulatory networks in cells and, in computers, the virtual networks (of CPU instructions) that the CPU travels through while executing computer programs. Moreover, many regulatory techniques are implemented in computers at each step of the information retrieval process, with a goal of optimizing system performance. Cellular counterparts can be easily identified for these regulatory techniques. In other words, this comparative study attempted to utilize theoretical insight from computer system design principles as catalysis to sketch an integrative view of the gene expression process, that is, how it functions to ensure efficient operation of the overall cellular regulatory network. In the context of this bird's-eye view, the discrepancy between protein and RNA abundance becomes a logical observation one would expect. It was suggested that this discrepancy, when interpreted in the context of system operation, serves as a potential source of information for deciphering the regulatory logic underlying biochemical network operation.
PMID:18757239
Research Area 3: Mathematical Sciences: 3.4, Discrete Mathematics and Computer Science
2015-06-10
013-0043-1 Charles Chui, Hrushikesh Mhaskar. MRA contextual-recovery extension of smooth functions on manifolds, Applied and Computational Harmonic Analysis ... 753507. International Society for Optics and Photonics, 2010. [5] C. K. Chui and H. N. Mhaskar. MRA contextual-recovery extension of smooth functions on
Integration of CAI into a Freshmen Liberal Arts Math Course in the Community College.
ERIC Educational Resources Information Center
McCall, Michael B.; Holton, Jean L.
1982-01-01
Discusses four computer-assisted-instruction programs used in a college-level mathematics course to introduce computer literacy and improve mathematical skills. The BASIC programs include polynomial functions, trigonometric functions, matrix algebra, and differential calculus. Each program discusses mathematics theory and introduces programming…
Supporting Executive Functions during Children's Preliteracy Learning with the Computer
ERIC Educational Resources Information Center
Van de Sande, E.; Segers, E.; Verhoeven, L.
2016-01-01
The present study examined how embedded activities to support executive functions helped children to benefit from a computer intervention that targeted preliteracy skills. Three intervention groups were compared on their preliteracy gains in a randomized controlled trial design: an experimental group that worked with software to stimulate early…
Computation of non-monotonic Lyapunov functions for continuous-time systems
NASA Astrophysics Data System (ADS)
Li, Huijuan; Liu, AnPing
2017-09-01
In this paper, we propose two methods to compute non-monotonic Lyapunov functions for continuous-time systems which are asymptotically stable. The first method is to solve a linear optimization problem on a compact and bounded set. The proposed linear programming based algorithm delivers a CPA1