To master or perform? Exploring relations between achievement goals and conceptual change learning.
Ranellucci, John; Muis, Krista R; Duffy, Melissa; Wang, Xihui; Sampasivam, Lavanya; Franco, Gina M
2013-09-01
Research is needed to explore conceptual change in relation to achievement goal orientations and depth of processing. To address this need, we examined relations between achievement goals, use of deep versus shallow processing strategies, and conceptual change learning using a think-aloud protocol. Seventy-three undergraduate students were assessed on their prior knowledge and misconceptions about Newtonian mechanics, and then reported their achievement goals and participated in think-aloud protocols while reading Newtonian physics texts. A mastery-approach goal orientation positively predicted deep processing strategies, shallow processing strategies, and conceptual change. In contrast, a performance-approach goal orientation did not predict either of the processing strategies, but negatively predicted conceptual change. A performance-avoidance goal orientation negatively predicted deep processing strategies and conceptual change. Moreover, deep and shallow processing strategies positively predicted conceptual change as well as recall. Finally, both deep and shallow processing strategies mediated relations between mastery-approach goals and conceptual change. Results provide some support for Dole and Sinatra's (1998) Cognitive Reconstruction of Knowledge Model of conceptual change but also challenge specific facets with regard to the role of depth of processing in conceptual change. © 2012 The British Psychological Society.
Herbert, Cornelia; Kissler, Johanna
2010-05-01
Valence-driven modulation of the startle reflex, that is, larger eyeblinks during viewing of unpleasant pictures and inhibited blinks while viewing pleasant pictures, is well documented. The current study investigated whether this motivational priming pattern also occurs during processing of unpleasant and pleasant words, and to what extent it is influenced by shallow vs. deep encoding of verbal stimuli. Emotional and neutral adjectives were presented for 5 s, and the acoustically elicited startle eyeblink response was measured while subjects memorized the words by means of shallow or deep processing strategies. Results showed blink potentiation to unpleasant and blink inhibition to pleasant adjectives in subjects using shallow encoding strategies. In subjects using deep-encoding strategies, blinks were larger for pleasant than unpleasant or neutral adjectives. In line with this, free recall of pleasant words was also better in subjects who engaged in deep processing. The results suggest that motivational priming holds as long as processing is perceptual. However, during deep processing the startle reflex appears to represent a measure of "processing interrupt", facilitating blinks to those stimuli that are more deeply encoded. Copyright 2010 Elsevier B.V. All rights reserved.
The Effects of Test Anxiety on Learning at Superficial and Deep Levels of Processing.
ERIC Educational Resources Information Center
Weinstein, Claire E.; And Others
1982-01-01
Using a deep-level processing strategy, low test-anxious college students performed significantly better than high test-anxious students in learning a paired-associate word list. Using a superficial-level processing strategy resulted in no significant difference in performance. A cognitive-attentional theory and test anxiety mechanisms are…
ERIC Educational Resources Information Center
Anderman, Eric M.
Middle school students (N=712) were surveyed about their achievement goals and cognitive processing strategies. Results suggest that academically at-risk students use deep strategies less and are less learning focused than not at-risk and special education students. Special education and at-risk students tended to be more ability-focused than not…
Levels-Of-Processing Effect on Word Recognition in Schizophrenia
Ragland, J. Daniel; Moelter, Stephen T.; McGrath, Claire; Hill, S. Kristian; Gur, Raquel E.; Bilker, Warren B.; Siegel, Steven J.; Gur, Ruben C.
2015-01-01
Background: Individuals with schizophrenia have difficulty organizing words semantically to facilitate encoding. This is commonly attributed to organizational rather than semantic processing limitations. By requiring participants to classify and encode words on either a shallow (e.g., uppercase/lowercase) or deep level (e.g., concrete/abstract), the levels-of-processing paradigm eliminates the need to generate organizational strategies. Methods: This paradigm was administered to 30 patients with schizophrenia and 30 healthy comparison subjects to test whether providing a strategy would improve patient performance. Results: Word classification during shallow and deep encoding was slower and less accurate in patients. Patients also responded slowly during recognition testing and maintained a more conservative response bias following deep encoding; however, both groups showed a robust levels-of-processing effect on recognition accuracy, with unimpaired patient performance following both shallow and deep encoding. Conclusions: This normal levels-of-processing effect in the patient sample suggests that semantic processing is sufficiently intact for patients to benefit from organizational cues. Memory remediation efforts may therefore be most successful if they focus on teaching patients to form organizational strategies during initial encoding. PMID:14643082
Levels-of-processing effect on word recognition in schizophrenia.
Ragland, J Daniel; Moelter, Stephen T; McGrath, Claire; Hill, S Kristian; Gur, Raquel E; Bilker, Warren B; Siegel, Steven J; Gur, Ruben C
2003-12-01
Individuals with schizophrenia have difficulty organizing words semantically to facilitate encoding. This is commonly attributed to organizational rather than semantic processing limitations. By requiring participants to classify and encode words on either a shallow (e.g., uppercase/lowercase) or deep level (e.g., concrete/abstract), the levels-of-processing paradigm eliminates the need to generate organizational strategies. This paradigm was administered to 30 patients with schizophrenia and 30 healthy comparison subjects to test whether providing a strategy would improve patient performance. Word classification during shallow and deep encoding was slower and less accurate in patients. Patients also responded slowly during recognition testing and maintained a more conservative response bias following deep encoding; however, both groups showed a robust levels-of-processing effect on recognition accuracy, with unimpaired patient performance following both shallow and deep encoding. This normal levels-of-processing effect in the patient sample suggests that semantic processing is sufficiently intact for patients to benefit from organizational cues. Memory remediation efforts may therefore be most successful if they focus on teaching patients to form organizational strategies during initial encoding.
An Action Research on Deep Word Processing Strategy Instruction
ERIC Educational Resources Information Center
Zhang, Limei
2010-01-01
For too long a time, how to memorize more words and keep them longer in mind has been a primary and everlasting problem for vocabulary teaching and learning. This study focused on deep processing as a word memorizing strategy in contextualizing, de- and re- contextualizing learning stages. It also examined possible effects of such pedagogy on…
Levels-of-processing effect on internal source monitoring in schizophrenia.
Ragland, J Daniel; McCarthy, Erin; Bilker, Warren B; Brensinger, Colleen M; Valdez, Jeffrey; Kohler, Christian; Gur, Raquel E; Gur, Ruben C
2006-05-01
Recognition can be normalized in schizophrenia by providing patients with semantic organizational strategies through a levels-of-processing (LOP) framework. However, patients may rely primarily on familiarity effects, making recognition less sensitive than source monitoring to the strength of the episodic memory trace. The current study investigates whether providing semantic organizational strategies can also normalize patients' internal source-monitoring performance. Sixteen clinically stable medicated patients with schizophrenia and 15 demographically matched healthy controls were asked to identify the source of remembered words following an LOP-encoding paradigm in which they alternated between processing words on a 'shallow' perceptual versus a 'deep' semantic level. A multinomial analysis provided orthogonal measures of item recognition and source discrimination, and bootstrapping generated variance to allow for parametric analyses. LOP and group effects were tested by contrasting recognition and source-monitoring parameters for words that had been encoded during deep versus shallow processing conditions. As in a previous study there were no group differences in LOP effects on recognition performance, with patients and controls benefiting equally from deep versus shallow processing. Although there were no group differences in internal source monitoring, only controls had significantly better performance for words processed during the deep encoding condition. Patient performance did not correlate with clinical symptoms or medication dose. Providing a deep processing semantic encoding strategy significantly improved patients' recognition performance only. The lack of a significant LOP effect on internal source monitoring in patients may reflect subtle problems in the relational binding of semantic information that are independent of strategic memory processes.
The influence of encoding strategy on episodic memory and cortical activity in schizophrenia.
Bonner-Jackson, Aaron; Haut, Kristen; Csernansky, John G; Barch, Deanna M
2005-07-01
Recent work suggests that episodic memory deficits in schizophrenia may be related to disturbances of encoding or retrieval. Schizophrenia patients appear to benefit from instruction in episodic memory strategies. We tested the hypothesis that providing effective encoding strategies to schizophrenia patients enhances encoding-related brain activity and recognition performance. Seventeen schizophrenia patients and 26 healthy comparison subjects underwent functional magnetic resonance imaging scans while performing incidental encoding tasks of words and faces. Subjects were required to make either deep (abstract/concrete) or shallow (alphabetization) judgments for words and deep (gender) judgments for faces, followed by subsequent recognition tests. Schizophrenia and comparison subjects recognized significantly more words encoded deeply than shallowly, activated regions in inferior frontal cortex (Brodmann area 45/47) typically associated with deep and successful encoding of words, and showed greater left frontal activation for the processing of words compared with faces. However, during deep encoding and material-specific processing (words vs. faces), participants with schizophrenia activated regions not activated by control subjects, including several in prefrontal cortex. Our findings suggest that a deficit in use of effective strategies influences episodic memory performance in schizophrenia and that abnormalities in functional brain activation persist even when such strategies are applied.
ERIC Educational Resources Information Center
Manuel, Carlos J.
2009-01-01
This study assesses reading processes and/or strategies needed to deploy deep processing that could push learners towards syntactic-based constructions in L2 classrooms. Research has found L2 acquisition to present varying degrees of success and/or fossilization (Bley-Vroman 1989, Birdsong 1992 and Sharwood Smith 1994). For example, learners have…
The Influence of Encoding Strategy on Episodic Memory and Cortical Activity in Schizophrenia
Bonner-Jackson, Aaron; Haut, Kristen; Csernansky, John G.; Barch, Deanna M.
2005-01-01
Background: Recent work suggests that episodic memory deficits in schizophrenia may be related to disturbances of encoding or retrieval. Schizophrenia patients appear to benefit from instruction in episodic memory strategies. We tested the hypothesis that providing effective encoding strategies to schizophrenia patients enhances encoding-related brain activity and recognition performance. Methods: Seventeen schizophrenia patients and 26 healthy comparison subjects underwent functional magnetic resonance imaging scans while performing incidental encoding tasks of words and faces. Subjects were required to make either deep (abstract/concrete) or shallow (alphabetization) judgments for words and deep (gender) judgments for faces, followed by subsequent recognition tests. Results: Schizophrenia and comparison subjects recognized significantly more words encoded deeply than shallowly, activated regions in inferior frontal cortex (Brodmann area 45/47) typically associated with deep and successful encoding of words, and showed greater left frontal activation for the processing of words compared with faces. However, during deep encoding and material-specific processing (words vs. faces), participants with schizophrenia activated regions not activated by control subjects, including several in prefrontal cortex. Conclusions: Our findings suggest that a deficit in use of effective strategies influences episodic memory performance in schizophrenia and that abnormalities in functional brain activation persist even when such strategies are applied. PMID:15992522
Miki, Kaori; Yamauchi, Hirotsugu
2005-08-01
We examined the relations among students' perceptions of classroom goal structures (mastery and performance goal structures), students' achievement goal orientations (mastery, performance, and work-avoidance goals), and learning strategies (deep processing, surface processing and self-handicapping strategies). Participants were 323 5th and 6th grade students in elementary schools. The results from structural equation modeling indicated that perceptions of classroom mastery goal structures were associated with students' mastery goal orientations, which were in turn related positively to deep processing strategies and academic achievement. Perceptions of classroom performance goal structures were associated with work-avoidance goal orientations, which were positively related to surface processing and self-handicapping strategies. Both types of goal structures had a positive relation with students' performance goal orientations, which had significant positive effects on academic achievement. The results of this study suggest that elementary school students' perceptions of mastery goal structures are related to adaptive patterns of learning more than perceptions of performance goal structures are. The role of perceptions of classroom goal structure in promoting students' goal orientations and learning strategies is discussed.
ERIC Educational Resources Information Center
Phan, Huy Phuong
2009-01-01
Research exploring students' academic learning has recently amalgamated different motivational theories within one conceptual framework. The inclusion of achievement goals, self-efficacy, deep processing and critical thinking has been cited in a number of studies. This article discusses two empirical studies that examined these four theoretical…
Levels-of-processing effect on internal source monitoring in schizophrenia
Ragland, J. Daniel; McCarthy, Erin; Bilker, Warren B.; Brensinger, Colleen M.; Valdez, Jeffrey; Kohler, Christian; Gur, Raquel E.; Gur, Ruben C.
2015-01-01
Background: Recognition can be normalized in schizophrenia by providing patients with semantic organizational strategies through a levels-of-processing (LOP) framework. However, patients may rely primarily on familiarity effects, making recognition less sensitive than source monitoring to the strength of the episodic memory trace. The current study investigates whether providing semantic organizational strategies can also normalize patients' internal source-monitoring performance. Method: Sixteen clinically stable medicated patients with schizophrenia and 15 demographically matched healthy controls were asked to identify the source of remembered words following an LOP-encoding paradigm in which they alternated between processing words on a 'shallow' perceptual versus a 'deep' semantic level. A multinomial analysis provided orthogonal measures of item recognition and source discrimination, and bootstrapping generated variance to allow for parametric analyses. LOP and group effects were tested by contrasting recognition and source-monitoring parameters for words that had been encoded during deep versus shallow processing conditions. Results: As in a previous study there were no group differences in LOP effects on recognition performance, with patients and controls benefiting equally from deep versus shallow processing. Although there were no group differences in internal source monitoring, only controls had significantly better performance for words processed during the deep encoding condition. Patient performance did not correlate with clinical symptoms or medication dose. Conclusions: Providing a deep processing semantic encoding strategy significantly improved patients' recognition performance only. The lack of a significant LOP effect on internal source monitoring in patients may reflect subtle problems in the relational binding of semantic information that are independent of strategic memory processes. PMID:16608558
ERIC Educational Resources Information Center
Aharony, Noa
2006-01-01
Background: The learning context is learning English in an Internet environment. The examination of this learning process was based on the Biggs and Moore's teaching-learning model (Biggs & Moore, 1993). Aim: The research aims to explore the use of the deep and surface strategies in an Internet environment among EFL students who come from…
Studying Activity Series of Metals.
ERIC Educational Resources Information Center
Hoon, Tien-Ghun; And Others
1995-01-01
Presents teaching strategies that illustrate the linking together of numerous chemical concepts involving the activity of metals (quantitative analysis, corrosion, and electrolysis) through the use of deep-level processing strategies. Concludes that making explicit links in the process of teaching chemistry can lead effectively to meaningful…
ERIC Educational Resources Information Center
Phan, Huy P.
2011-01-01
The author explored the developmental courses of deep learning approach and critical thinking over a 2-year period. Latent growth curve modeling (LGM) procedures were used to test and trace the trajectories of both theoretical frameworks over time. Participants were 264 (119 women, 145 men) university undergraduates. The Deep Learning subscale of…
Flegal, Kristin E.; Lustig, Cindy
2016-01-01
Cognitive training programs that instruct specific strategies frequently show limited transfer. Open-ended approaches can achieve greater transfer, but may fail to benefit many older adults due to age deficits in self-initiated processing. We examined whether a compromise that encourages effort at encoding without an experimenter-prescribed strategy might yield better results. Older adults completed memory training under conditions that either 1) mandated a specific strategy to increase deep, associative encoding, 2) attempted to suppress such encoding by mandating rote rehearsal, or 3) encouraged time and effort towards encoding but allowed for strategy choice. The experimenter-enforced associative encoding strategy succeeded in creating integrated representations of studied items, but training-task progress was related to pre-existing ability. Independent of condition assignment, self-reported deep encoding was associated with positive training and transfer effects, suggesting that the most beneficial outcomes occur when environmental support guiding effort is provided but participants generate their own strategies. PMID:26549616
Flegal, Kristin E; Lustig, Cindy
2016-07-01
Cognitive training programs that instruct specific strategies frequently show limited transfer. Open-ended approaches can achieve greater transfer, but may fail to benefit many older adults due to age deficits in self-initiated processing. We examined whether a compromise that encourages effort at encoding without an experimenter-prescribed strategy might yield better results. Older adults completed memory training under conditions that either (1) mandated a specific strategy to increase deep, associative encoding, (2) attempted to suppress such encoding by mandating rote rehearsal, or (3) encouraged time and effort toward encoding but allowed for strategy choice. The experimenter-enforced associative encoding strategy succeeded in creating integrated representations of studied items, but training-task progress was related to pre-existing ability. Independent of condition assignment, self-reported deep encoding was associated with positive training and transfer effects, suggesting that the most beneficial outcomes occur when environmental support guiding effort is provided but participants generate their own strategies.
ERIC Educational Resources Information Center
Wijnen, Marit; Loyens, Sofie M. M.; Smeets, Guus; Kroeze, Maarten; van der Molen, Henk
2017-01-01
In educational theory, deep processing (i.e., connecting different study topics together) and self-regulation (i.e., taking control over one's own learning process) are considered effective learning strategies. These learning strategies can be influenced by the learning environment. Problem-based learning (PBL), a student-centered educational…
ERIC Educational Resources Information Center
Phan, Huy Phuong
2009-01-01
Recent research indicates that study processing strategies, effort, reflective thinking practice, and achievement goals are important factors contributing to the prediction of students' academic success. Very few studies have combined these theoretical orientations within one conceptual model. This study tested a conceptual model that included, in…
Preliminary evidence for validity of the Bahasa Indonesian version of Study Process Questionnaire.
Liem, Arief Darmanegara; Prasetya, Paulus Hidajat
2007-02-01
This study provides preliminary evidence for the validity of the Bahasa Indonesian version of the Study Process Questionnaire (BI-SPQ) from a sample of 147 psychology students (22 men and 125 women; M age = 21.8 yr., SD = 1.3). The internal consistency alphas of the BI-SPQ subscales ranged from .46 (Surface Strategy) to .77 (Deep Strategy), with a median of .67. Principal component analysis indicated a two-factor solution, in which the Deep and Achieving subscales loaded onto Factor 1 and the Surface subscales loaded onto Factor 2. Students' GPAs were associated negatively with Surface Motive (r = -.24) and positively with Deep and Achieving Motives (rs = .20). Further studies with larger samples involving students majoring in other disciplines are needed to provide further evidence of the validity of the BI-SPQ.
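The psychometric steps described in this record (internal-consistency alphas per subscale, a two-component principal component analysis, and correlations with GPA) can be illustrated with a short analysis script. The sketch below is not the authors' code; the data, item counts, and column names are hypothetical placeholders.

```python
# Illustrative sketch (not the authors' analysis): computing internal-consistency
# alpha and a two-component PCA for questionnaire subscale data.
# All column names and responses are hypothetical placeholders.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of item columns (one row per respondent)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
# Hypothetical responses: 147 students x 7 items of one subscale (1-5 Likert)
deep_strategy_items = pd.DataFrame(rng.integers(1, 6, size=(147, 7)),
                                   columns=[f"ds{i}" for i in range(1, 8)])
print("alpha (Deep Strategy):", round(cronbach_alpha(deep_strategy_items), 2))

# Two-component PCA on subscale scores (placeholder matrix of 6 subscales)
subscale_scores = pd.DataFrame(rng.normal(size=(147, 6)),
                               columns=["SurfM", "SurfS", "DeepM", "DeepS", "AchM", "AchS"])
pca = PCA(n_components=2).fit(subscale_scores)
print("explained variance ratios:", pca.explained_variance_ratio_)
```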
Monitoring and Depth of Strategy Use in Computer-Based Learning Environments for Science and History
ERIC Educational Resources Information Center
Deekens, Victor M.; Greene, Jeffrey A.; Lobczowski, Nikki G.
2018-01-01
Background: Self-regulated learning (SRL) models position metacognitive monitoring as central to SRL processing and predictive of student learning outcomes (Winne & Hadwin, 2008; Zimmerman, 2000). A body of research evidence also indicates that depth of strategy use, ranging from surface to deep processing, is predictive of learning…
Personality, Cognitive Style and Students' Learning Strategies.
ERIC Educational Resources Information Center
Entwistle, Noel; Hanley, Maureen
1977-01-01
Research into learning strategies students adopt in tackling academic work is reviewed. A report is presented of a research program underway at the Institute for Post-Compulsory Education at the University of Lancaster which is examining characteristics of students who adopt deep-level or surface processing strategies. (JMD)
On the Shallow Processing (Dis)Advantage: Grammar and Economy.
Koornneef, Arnout; Reuland, Eric
2016-01-01
In the psycholinguistic literature it has been proposed that readers and listeners often adopt a "good-enough" processing strategy in which a "shallow" representation of an utterance driven by (top-down) extra-grammatical processes has a processing advantage over a "deep" (bottom-up) grammatically driven representation of that same utterance. In the current contribution we claim, both on theoretical and experimental grounds, that this proposal is overly simplistic. Most importantly, in the domain of anaphora there is now an accumulating body of evidence showing that the anaphoric dependencies between (reflexive) pronominals and their antecedents are subject to an economy hierarchy. In this economy hierarchy, deriving anaphoric dependencies by deep, grammatical operations requires lower processing costs than doing so by shallow, extra-grammatical operations. In addition, in case of ambiguity, when both a shallow and a deep derivation are available to the parser, the latter is actually preferred. This, we argue, contradicts the basic assumptions of the shallow-deep dichotomy and, hence, a rethinking of the good-enough processing framework is warranted.
Deep--deeper--deepest? Encoding strategies and the recognition of human faces.
Sporer, S L
1991-03-01
Various encoding strategies that supposedly promote deeper processing of human faces (e.g., character judgments) have led to better recognition than more shallow processing tasks (judging the width of the nose). However, does deeper processing actually lead to an improvement in recognition, or, conversely, does shallow processing lead to a deterioration in performance when compared with naturally employed encoding strategies? Three experiments systematically compared a total of 8 different encoding strategies manipulating depth of processing, amount of elaboration, and self-generation of judgmental categories. All strategies that required a scanning of the whole face were basically equivalent but no better than natural strategy controls. The consistently worst groups were the ones that rated faces along preselected physical dimensions. This can be explained by subjects' lesser task involvement as revealed by manipulation checks.
The effect of encoding strategy on the neural correlates of memory for faces.
Bernstein, Lori J; Beig, Sania; Siegenthaler, Amy L; Grady, Cheryl L
2002-01-01
Encoding and recognition of unfamiliar faces in young adults were examined using positron emission tomography to determine whether different encoding strategies would lead to encoding/retrieval differences in brain activity. Three types of encoding were compared: a 'deep' task (judging pleasantness/unpleasantness), a 'shallow' task (judging right/left orientation), and an intentional learning task in which subjects were instructed to learn the faces for a subsequent memory test but were not provided with a specific strategy. Memory for all faces was tested with an old/new recognition test. A modest behavioral effect was obtained, with deeply-encoded faces being recognized more accurately than shallowly-encoded or intentionally-learned faces. Regardless of encoding strategy, encoding activated a primarily ventral system including bilateral temporal and fusiform regions and left prefrontal cortices, whereas recognition activated a primarily dorsal set of regions including right prefrontal and parietal areas. Within encoding, the type of strategy produced different brain activity patterns, with deep encoding being characterized by left amygdala and left anterior cingulate activation. There was no effect of encoding strategy on brain activity during the recognition conditions. Posterior fusiform gyrus activation was related to better recognition accuracy in those conditions encouraging perceptual strategies, whereas activity in left frontal and temporal areas correlated with better performance during the 'deep' condition. Results highlight three important aspects of face memory: (1) the effect of encoding strategy was seen only at encoding and not at recognition; (2) left inferior prefrontal cortex was engaged during encoding of faces regardless of strategy; and (3) differential activity in fusiform gyrus was found, suggesting that activity in this area is not only a result of automatic face processing but is modulated by controlled processes.
Worst error performance of continuous Kalman filters. [for deep space navigation and maneuvers
NASA Technical Reports Server (NTRS)
Nishimura, T.
1975-01-01
The worst error performance of estimation filters for continuous systems is investigated in this paper. This pathological performance study, which assumes no dynamical model (such as a Markov process) for the perturbations beyond a bound on their amplitude, provides practical and dependable criteria for establishing navigation and maneuver strategy in deep space missions.
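To make the idea of worst error performance under a bounded, model-free perturbation concrete, the sketch below simulates a scalar Kalman-type filter whose error dynamics are driven by a disturbance that always takes the worst admissible sign. It is an illustrative toy, not the paper's analysis; the system matrices, noise intensities, and disturbance bound are all made up.

```python
# Illustrative sketch only: estimation error of a scalar Kalman-Bucy-style filter
# when the true process is driven by a bounded disturbance of the worst sign
# (always pushing against the filter's correction). Parameters are invented.
import numpy as np

dt, T = 0.01, 20.0
a, h = -0.1, 1.0           # scalar system and measurement coefficients
q, r = 0.0, 0.5            # assumed process/measurement noise intensities
d_max = 0.2                # bound on the unmodeled disturbance amplitude

p = 1.0                    # error covariance according to the filter's own model
e = 0.0                    # true estimation error
errors = []
for _ in range(int(T / dt)):
    k = p * h / r                           # Kalman gain
    p += (2 * a * p + q - k * h * p) * dt   # Riccati update (filter's belief)
    # Worst-case bounded disturbance: sign chosen to grow the error
    d = d_max * np.sign(e) if e != 0 else d_max
    # Error dynamics: de/dt = (a - k*h) * e + d   (measurement noise omitted)
    e += ((a - k * h) * e + d) * dt
    errors.append(e)

print("worst-case error after", T, "s ~", round(errors[-1], 3))
```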
ERIC Educational Resources Information Center
De Clercq, Mikael; Galand, Benoit; Frenay, Mariane
2013-01-01
The aim of this study was to investigate the direction of the effect between goal orientation, self-regulation and deep processing strategies in order to understand the impact of these three constructs on students' achievement. The participants were 110 freshmen from the engineering faculty at the Universite catholique de Louvain in Belgium, who…
[Efficacy of the program "Testas's (mis)adventures" to promote the deep approach to learning].
Rosário, Pedro; González-Pienda, Julio Antonio; Cerezo, Rebeca; Pinto, Ricardo; Ferreira, Pedro; Abilio, Lourenço; Paiva, Olimpia
2010-11-01
This paper provides information about the efficacy of a tutorial training program intended to enhance elementary fifth graders' study processes and foster a deep approach to learning. The program "Testas's (mis)adventures" consists of a set of books in which Testas, a typical student, reveals and reflects upon his life experiences during the school years. These life stories provide an opportunity to present and train a wide range of learning strategies and self-regulatory processes, designed to ensure students' deeper preparation for present and future learning challenges. The program was delivered over a school year in one-hour weekly tutorial sessions. The study had a semi-experimental design, including an experimental group (n=50) and a control group (n=50), and used pre- and posttest measures (declarative knowledge of learning strategies, learning approaches and academic achievement). Data suggest that the students enrolled in the training program, compared with students in the control group, showed a significant improvement in their declarative knowledge of learning strategies and in their deep approach to learning, consequently lowering their use of a surface approach. With regard to academic achievement, however, no statistically significant differences were found.
Process Simulation of Aluminium Sheet Metal Deep Drawing at Elevated Temperatures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winklhofer, Johannes; Trattnig, Gernot; Lind, Christoph
Lightweight design is essential for an economic and environmentally friendly vehicle. Aluminium sheet metal is well known for its ability to improve the strength to weight ratio of lightweight structures. One disadvantage of aluminium is that it is less formable than steel. Therefore complex part geometries can only be realized by expensive multi-step production processes. One method for overcoming this disadvantage is deep drawing at elevated temperatures. In this way the formability of aluminium sheet metal can be improved significantly, and the number of necessary production steps can thereby be reduced. This paper introduces deep drawing of aluminium sheet metal at elevated temperatures, a corresponding simulation method, a characteristic process and its optimization. The temperature and strain rate dependent material properties of a 5xxx series alloy and their modelling are discussed. A three dimensional thermomechanically coupled finite element deep drawing simulation model and its validation are presented. Based on the validated simulation model an optimised process strategy regarding formability, time and cost is introduced.
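A flavor of the temperature- and strain-rate-dependent material modelling mentioned above can be given with a small flow-stress function of Johnson-Cook type, one common choice in hot-forming simulations. The functional form and every constant below are illustrative assumptions, not the calibrated 5xxx-series model from the paper.

```python
# Minimal sketch of a temperature- and strain-rate-dependent flow stress of the
# kind used in hot-forming simulations (Johnson-Cook-like form). All constants
# are placeholders, not calibrated values from the paper.
import numpy as np

def flow_stress(strain, strain_rate, temp_c,
                A=120.0, B=250.0, n=0.3, C=0.015, m=1.2,
                eps_rate_ref=1.0, t_room=25.0, t_melt=600.0):
    """Flow stress in MPa: strain hardening * rate sensitivity * thermal softening."""
    t_star = np.clip((temp_c - t_room) / (t_melt - t_room), 0.0, 1.0)
    hardening = A + B * strain ** n
    rate_term = 1.0 + C * np.log(max(strain_rate / eps_rate_ref, 1e-6))
    softening = 1.0 - t_star ** m
    return hardening * rate_term * softening

# Softening of the flow stress with increasing forming temperature
for t in (25, 200, 350):
    print(t, "degC ->", round(flow_stress(0.2, 0.1, t), 1), "MPa")
```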
Personal and Environmental Influences on Students' Beliefs about Effective Study Strategies.
ERIC Educational Resources Information Center
Nolen, Susan Bobbitt; Haladyna, Thomas M.
1990-01-01
A model of personal and environmental influences on students' valuing of two deep-processing strategies for studying expository texts is described. Questionnaire data from 281 high school science students indicated that students' task orientation and perceptions about teacher expectations were central to students' attitudes. (TJH)
ERIC Educational Resources Information Center
Schmeck, Ronald R; Spofford, Mark
1982-01-01
An investigation was undertaken to determine whether highly aroused (e.g. highly anxious) students are handicapped with regard to their ability to learn through deep processing and elaboration. The hypothesis that well-developed deep and elaborative habits of thought might counteract the disruptive effects that excessive arousal has upon students…
The Effect of Peer Feedback for Blogging on College Students' Reflective Learning Processes
ERIC Educational Resources Information Center
Xie, Ying; Ke, Fengfeng; Sharma, Priya
2008-01-01
Reflection is an important prerequisite to making meaning of new information, and to advance from surface to deep learning. Strategies such as journal writing and peer feedback have been found to promote reflection as well as deep thinking and learning. This study used an empirical design to investigate the interaction effects of peer feedback and…
Golfenshtein, Nadya; Drach-Zahavy, Anat
2015-05-01
To understand the role of patients' attributions under the attribution theory framework (locus, controllability, stability) in nurses' performance of surface or deep acting, as they unfold in interactions with different patients. Regulation of emotions at work, or emotional labour, has been conceptualized in terms of two main strategies: surface acting and deep acting. Most prior research tested for between-subject variation in the search for the factors evoking these strategies in nurses, assuming them to be trait-like characteristics. Only scant research has examined how nurses modify their emotional labour strategies in different patient-nurse encounters. A nested cross-sectional design (patients within nurses). Data were collected during 2011-2012 through validated questionnaires from the nursing staff (N = 41) of two paediatric hospital wards and their randomly selected patients (N = 239). Questionnaires were administered to nurses multiple times after encounters with different patients. Analyses were conducted using mixed effects models. In accordance with attribution theory, different combinations of locus, controllability and stability attributions were related to the choice of surface or deep acting. Nurses' perceptions of patients' controllability were associated positively with surface acting and negatively with deep acting. Interaction terms of stability and locus and of controllability and stability, were distinctively associated with deep and surface acting. Findings innovatively introduce the attribution process as an explanatory perspective to nurses' emotional labour and highlight its situational nature, providing a potential tool for emotional labour strategy prediction. Raising nurses' awareness of how they perceive patients may increase control of the strategies employed. © 2015 John Wiley & Sons Ltd.
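The nested design described here (patient encounters within nurses, analyzed with mixed effects models) can be sketched with a random-intercept model in Python. The variable names, simulated data, and specific fixed-effect terms below are hypothetical; they only illustrate the kind of model the abstract refers to.

```python
# Illustrative sketch (not the authors' model): a linear mixed-effects model for
# patient encounters nested within nurses, predicting surface acting from
# attribution ratings. All variables and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_nurses, enc_per_nurse = 41, 6
df = pd.DataFrame({
    "nurse_id": np.repeat(np.arange(n_nurses), enc_per_nurse),
    "controllability": rng.normal(size=n_nurses * enc_per_nurse),
    "stability": rng.normal(size=n_nurses * enc_per_nurse),
})
df["surface_acting"] = (0.4 * df["controllability"]
                        + 0.1 * df["controllability"] * df["stability"]
                        + rng.normal(scale=0.5, size=len(df)))

# Random intercept per nurse captures the nesting of encounters within nurses
model = smf.mixedlm("surface_acting ~ controllability * stability",
                    df, groups=df["nurse_id"])
print(model.fit().summary())
```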
Aharony, Noa
2006-12-01
The learning context is learning English in an Internet environment. The examination of this learning process was based on Biggs and Moore's teaching-learning model (Biggs & Moore, 1993). The research aims to explore the use of deep and surface strategies in an Internet environment among EFL students who come from different socio-economic backgrounds. The results of the research may add an additional level to the understanding of students' functioning in the Internet environment. One hundred forty-eight Israeli junior and high school students participated in this research. The methodology was based on special computer software, Screen Cam, which recorded the students' learning process. In addition, expert judges completed a questionnaire which examined and categorized the students' learning strategies. The research findings show a clear preference among participants from all socio-economic backgrounds for the surface learning strategy. The findings also showed that students from medium to high socio-economic backgrounds used both learning strategies more frequently than low socio-economic students. The results reflect the habits that students acquire during their adjustment process throughout their educational careers. A brief encounter with the Internet learning environment apparently cannot change norms or habits which were acquired in the non-Internet learning environment.
Organizational Leadership Process for University Education
ERIC Educational Resources Information Center
Llamosa-Villalba, Ricardo; Delgado, Dario J.; Camacho, Heidi P.; Paéz, Ana M.; Valdivieso, Raúl F.
2014-01-01
This paper presents the "Agile School", an emerging archetype of the enterprise architecture "Processes of Organizational Leadership" for leading and managing strategies, tactics and operations of training in Higher Education Institutions. Agile School is a system for innovation and deep transformation of University Institutions…
Deeper processing is beneficial during episodic memory encoding for adults with Williams syndrome.
Greer, Joanna; Hamilton, Colin; Riby, Deborah M; Riby, Leigh M
2014-07-01
Previous research exploring declarative memory in Williams syndrome (WS) has revealed impairment in the processing of episodic information accompanied by a relative strength in semantic ability. The aim of the current study was to extend this literature by examining how relatively spared semantic memory may support episodic remembering. Using a level of processing paradigm, older adults with WS (aged 35-61 years) were compared to typical adults of the same chronological age and typically developing children matched for verbal ability. In the study phase, pictures were encoded using either a deep (decide if a picture belongs to a particular category) or shallow (perceptual based processing) memory strategy. Behavioural indices (reaction time and accuracy) at retrieval were suggestive of an overall difficulty in episodic memory for WS adults. Interestingly, however, semantic support was evident with a greater recall of items encoded with deep compared to shallow processing, indicative of an ability to employ semantic encoding strategies to maximise the strength of the memory trace created. Unlike individuals with autism who find semantic elaboration strategies problematic, the pattern of findings reported here suggests in those domains that are relatively impaired in WS, support can be recruited from relatively spared cognitive processes. Copyright © 2014 Elsevier Ltd. All rights reserved.
The effects of age on the neural correlates of episodic encoding.
Grady, C L; McIntosh, A R; Rajah, M N; Beig, S; Craik, F I
1999-12-01
Young and old adults underwent positron emission tomographic scans while encoding pictures of objects and words using three encoding strategies: deep processing (a semantic living/nonliving judgement), shallow processing (size judgement) and intentional learning. Picture memory exceeded word memory in both young and old groups, and there was an age-related decrement only in word recognition. During the encoding tasks three brain activity patterns were found that differentiated stimulus type and the different encoding strategies. The stimulus-specific pattern was characterized by greater activity in extrastriate and medial temporal cortices during picture encoding, and greater activity in left prefrontal and temporal cortices during encoding of words. The older adults showed this pattern to a significantly lesser degree. A pattern distinguishing deep processing from intentional learning of words and pictures was identified, characterized mainly by differences in prefrontal cortex, and this pattern also was of significantly lesser magnitude in the old group. A final pattern identified areas with increased activity during deep processing and intentional learning of pictures, including left prefrontal and bilateral medial temporal regions. There was no group difference in this pattern. These results indicate age-related dysfunction in several encoding networks, with sparing of one specifically involved in more elaborate encoding of pictures. These age-related changes appear to affect verbal memory more than picture memory.
Fryer, Luke K; Vermunt, Jan D
2018-03-01
Contemporary models of student learning within higher education are often inclusive of processing and regulation strategies. Considerable research has examined their use over time and their (person-centred) convergence. The longitudinal stability/variability of learning strategy use, however, is poorly understood, but essential to supporting student learning across university experiences. Develop and test a person-centred longitudinal model of learning strategies across the first-year university experience. Japanese university students (n = 933) completed surveys (deep and surface approaches to learning; self, external, and lack of regulation) at the beginning and end of their first year. Following invariance and cross-sectional tests, latent profile transition analysis (LPTA) was undertaken. Initial difference testing supported small but significant differences for self-/external regulation. Fit indices supported a four-group model, consistent across both measurement points. These subgroups were labelled Low Quality (low deep approaches and self-regulation), Low Quantity (low strategy use generally), Average (moderate strategy use), and High Quantity (intense use of all strategies) strategies. The stability of these groups ranged from stable to variable: Average (93% stayers), Low Quality (90% stayers), High Quantity (72% stayers), and Low Quantity (40% stayers). The three largest transitions presented joint shifts in processing/regulation strategy preference across the year, from adaptive to maladaptive and vice versa. Person-centred longitudinal findings presented patterns of learning transitions that different students experience during their first year at university. Stability/variability of students' strategy use was linked to the nature of initial subgroup membership. Findings also indicated strong connections between processing and regulation strategy changes across first-year university experiences. Implications for theory and practice are discussed. © 2017 The British Psychological Society.
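Latent profile transition analysis (LPTA), as used in the study above, is normally run in dedicated latent-variable software, but the core idea, profiles at each wave plus a transition matrix between them, can be approximated roughly with Gaussian mixtures. The sketch below is only such an approximation on simulated data, not the authors' LPTA.

```python
# Rough illustration only: true LPTA is usually estimated in dedicated software.
# Here profiles at each wave are approximated with Gaussian mixtures and a
# transition matrix is tabulated. Data are simulated placeholders.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
n, k = 933, 4
wave1 = rng.normal(size=(n, 5))                       # 5 strategy scales, start of year
wave2 = wave1 + rng.normal(scale=0.7, size=(n, 5))    # end of year

gm1 = GaussianMixture(n_components=k, random_state=0).fit(wave1)
gm2 = GaussianMixture(n_components=k, random_state=0).fit(wave2)
labels1, labels2 = gm1.predict(wave1), gm2.predict(wave2)

# Row-normalised transition matrix: probability of moving from profile i to j
trans = np.zeros((k, k))
for i, j in zip(labels1, labels2):
    trans[i, j] += 1
trans /= trans.sum(axis=1, keepdims=True)
print(np.round(trans, 2))
```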
Levels-of-processing effects in first-degree relatives of individuals with schizophrenia.
Bonner-Jackson, Aaron; Csernansky, John G; Barch, Deanna M
2007-05-15
First-degree relatives of individuals with schizophrenia show cognitive impairments that are similar to but less severe than their ill relatives. We have shown that memory impairments can be improved and prefrontal cortical (PFC) activity increased in individuals with schizophrenia by providing beneficial encoding strategies. The current study used a similar paradigm to determine whether siblings of individuals with schizophrenia (SIBs) also show increases in brain activity when presented with beneficial encoding strategies. Twenty-one SIBs and 38 siblings of healthy comparison subjects underwent functional magnetic resonance imaging scans while engaged in deep (abstract/concrete judgments) and shallow (orthographic judgments) encoding. Subjects were then given a recognition memory test. The groups did not differ on encoding or recognition accuracy, and the SIBs benefited from deep encoding to a similar degree as control subjects. The SIBs showed deep encoding-related activity in a number of PFC regions typically activated during semantic processing. However, SIBs showed more activity than control subjects in three subregions of PFC (left BA 44 & BA 47 bilaterally). Siblings of individuals with schizophrenia benefit from supportive verbal encoding conditions. Like individuals with schizophrenia, SIBs also show increased task-related activity in a larger number of PFC subregions than control subjects during deep verbal encoding.
Psyching Out the Science Teacher: Student Motivation, Perceived Teacher Goals and Study Strategies.
ERIC Educational Resources Information Center
Nolen, Susan Bobbitt; Haladyna, Thomas M.
This paper describes a model of the influence of personal and environmental factors on students' valuing of two deep-processing strategies for studying expository texts. In the model, task orientation (a form of intrinsic motivation in which learning and understanding are the major goals) interacts with perceptions of the teacher's goals to…
Cohesive Features of Deep Text Comprehension Processes
ERIC Educational Resources Information Center
Allen, Laura K.; Jacovina, Matthew E.; McNamara, Danielle S.
2016-01-01
This study investigates how cohesion manifests in readers' thought processes while reading texts when they are instructed to engage in self-explanation, a strategy associated with deeper, more successful comprehension. In Study 1, college students (n = 21) were instructed to either paraphrase or self-explain science texts. Paraphrasing was…
ERIC Educational Resources Information Center
Mayer, Peter; Crowley, Kevin; Kaminska, Zofia
2007-01-01
Theories of literacy acquisition, developed mostly with reference to English, have characterised this process as passing through a series of stages. The culmination of this process is a strategy which takes account of the complex relationship between graphemes and phonemes within a deep orthography (Frith (1985). In K. Patterson, & M. Coltheart,…
To Master or Perform? Exploring Relations between Achievement Goals and Conceptual Change Learning
ERIC Educational Resources Information Center
Ranellucci, John; Muis, Krista R.; Duffy, Melissa; Wang, Xihui; Sampasivam, Lavanya; Franco, Gina M.
2013-01-01
Background: Research is needed to explore conceptual change in relation to achievement goal orientations and depth of processing. Aims: To address this need, we examined relations between achievement goals, use of deep versus shallow processing strategies, and conceptual change learning using a think-aloud protocol. Sample and Method:…
Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing
Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin
2016-01-01
With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially GPU based methods. In the classical GPU based imaging algorithm, the GPU is employed to accelerate image processing by massive parallel computing, and the CPU is only used to perform auxiliary work such as data input/output (IO). However, the computing capability of the CPU is ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated with deep collaborative multi-CPU/GPU computing. For the CPU parallel imaging part, the advanced vector extension (AVX) method is first introduced into the multi-core CPU parallel method for higher efficiency. For the GPU parallel imaging part, not only are the bottlenecks of memory limitation and frequent data transfer overcome, but several optimization strategies, such as streaming and parallel pipelining, are also applied. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method improves the efficiency of SAR imaging by a factor of 270 over a single-core CPU and achieves real-time imaging, in that the imaging rate exceeds the raw data generation rate. PMID:27070606
Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing.
Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin
2016-04-07
With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially GPU based methods. In the classical GPU based imaging algorithm, the GPU is employed to accelerate image processing by massive parallel computing, and the CPU is only used to perform auxiliary work such as data input/output (IO). However, the computing capability of the CPU is ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated with deep collaborative multi-CPU/GPU computing. For the CPU parallel imaging part, the advanced vector extension (AVX) method is first introduced into the multi-core CPU parallel method for higher efficiency. For the GPU parallel imaging part, not only are the bottlenecks of memory limitation and frequent data transfer overcome, but several optimization strategies, such as streaming and parallel pipelining, are also applied. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method improves the efficiency of SAR imaging by a factor of 270 over a single-core CPU and achieves real-time imaging, in that the imaging rate exceeds the raw data generation rate.
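The collaborative CPU/GPU idea described in the two records above, splitting work between a vectorised CPU path and a GPU path instead of leaving the CPU idle, can be illustrated on a single per-line FFT step. The sketch below is a conceptual toy, not the paper's imaging chain; it assumes CuPy is available for the GPU path and falls back to NumPy otherwise.

```python
# Conceptual sketch of CPU/GPU work sharing for one per-line FFT step of an
# imaging chain. Not the paper's implementation; assumes CuPy for the GPU path.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

try:
    import cupy as cp
    GPU = True
except ImportError:
    GPU = False

def cpu_fft(block):
    # NumPy dispatches to vectorised (SIMD/AVX) kernels where available
    return np.fft.fft(block, axis=1)

def gpu_fft(block):
    return cp.asnumpy(cp.fft.fft(cp.asarray(block), axis=1))

def process(raw, gpu_share=0.7):
    """Split range lines between GPU and CPU and process them concurrently."""
    split = int(raw.shape[0] * gpu_share) if GPU else 0
    gpu_block, cpu_block = raw[:split], raw[split:]
    with ThreadPoolExecutor(max_workers=2) as pool:
        cpu_future = pool.submit(cpu_fft, cpu_block)
        gpu_part = gpu_fft(gpu_block) if GPU else np.empty((0, raw.shape[1]), complex)
        return np.vstack([gpu_part, cpu_future.result()])

echo = np.random.randn(1024, 2048) + 1j * np.random.randn(1024, 2048)
image_lines = process(echo)
print(image_lines.shape)
```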
Cognitive Styles and Virtual Environments.
ERIC Educational Resources Information Center
Ford, Nigel
2000-01-01
Discussion of navigation through virtual information environments focuses on the need for robust user models that take into account individual differences. Considers Pask's information processing styles and strategies; deep (transformational) and surface (reproductive) learning; field dependence/independence; divergent/convergent thinking;…
Using deep learning in image hyper spectral segmentation, classification, and detection
NASA Astrophysics Data System (ADS)
Zhao, Xiuying; Su, Zhenyu
2018-02-01
Recent years have shown that deep learning neural networks are a valuable tool in the field of computer vision. Deep learning methods can be used in remote sensing applications such as land cover classification, vehicle detection in satellite images, and hyperspectral image classification. This paper addresses the use of deep learning artificial neural networks in satellite image segmentation. Image segmentation plays an important role in image processing. Remote sensing images often show large hue differences, which results in poor display of the images in a VR environment. Image segmentation is a preprocessing technique applied to the original images that splits an image into many parts of different hue so that the color can be unified. Several computational models based on supervised, unsupervised, parametric, and probabilistic region-based image segmentation techniques have been proposed. Recently, a machine learning technique known as deep learning with convolutional neural networks has been widely used to develop efficient and automatic image segmentation models. In this paper, we focus on the study of deep convolutional neural networks and their variants for automatic image segmentation rather than traditional image segmentation strategies.
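As a minimal illustration of the CNN-based segmentation approach discussed above, the sketch below classifies hyperspectral pixels from small spectral-spatial patches around each pixel. The band count, patch size, class count, and architecture are placeholder assumptions, not the network from the paper.

```python
# Minimal sketch of a patch-based CNN classifier for hyperspectral pixels
# (each pixel labelled from the patch around it). Band count, patch size,
# and class count are placeholders, not values from the paper.
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    def __init__(self, bands=103, n_classes=9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(bands, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):          # x: (batch, bands, patch, patch)
        f = self.features(x).flatten(1)
        return self.classifier(f)

model = PatchCNN()
patches = torch.randn(8, 103, 7, 7)   # 8 patches of 7x7 pixels, 103 bands
logits = model(patches)
print(logits.shape)                   # torch.Size([8, 9])
```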
ERIC Educational Resources Information Center
Prieto, Daniel; Aparicio, Gonzalo; Sotelo-Silveira, Jose R.
2017-01-01
Cell and developmental processes are complex, and profoundly dependent on spatial relationships that change over time. Innovative educational or teaching strategies are always needed to foster deep comprehension of these processes and their dynamic features. However, laboratory exercises in cell and developmental biology at the undergraduate level…
Núñez, José Carlos; Cerezo, Rebeca; Bernardo, Ana; Rosário, Pedro; Valle, Antonio; Fernández, Estrella; Suárez, Natalia
2011-04-01
This paper tests the efficacy of an intervention program in virtual format intended to train studying and self-regulation strategies in university students. The aim of this intervention is to promote a series of strategies which allow students to manage their learning processes in a more proficient and autonomous way. The program has been developed in Moodle format and hosted on the Virtual Campus of the University of Oviedo. The present study had a semi-experimental design, including an experimental group (n=167) and a control group (n=206), and used pretest and posttest measures (declarative knowledge of self-regulated learning strategies, the self-regulated learning macro-strategy of planning-execution-assessment, self-regulated learning strategies on text, surface and deep learning approaches, and academic achievement). Data suggest that the students enrolled in the training program, compared with students in the control group, showed a significant improvement in their declarative knowledge and in their general and text-based use of learning strategies, increased their deep approach to learning, and decreased their use of a surface approach; with regard to academic achievement, statistically significant differences were found in favour of the experimental group.
Prakash, Amol; Peterman, Scott; Ahmad, Shadab; Sarracino, David; Frewen, Barbara; Vogelsang, Maryann; Byram, Gregory; Krastins, Bryan; Vadali, Gouri; Lopez, Mary
2014-12-05
Data-dependent acquisition (DDA) and data-independent acquisition (DIA) strategies have both resulted in improved understanding of proteomics samples. Both strategies have well-documented advantages and disadvantages: DDA is typically applied for deep discovery, while DIA may be used to create sample records. In this paper, we present a hybrid data acquisition and processing strategy (pSMART) that combines the strengths of both techniques and provides significant benefits for qualitative and quantitative peptide analysis. The performance of pSMART is compared to published DIA strategies in an experiment that allows the objective assessment of DIA performance with respect to interrogation of previously acquired MS data. The results of this experiment demonstrate that pSMART creates fewer decoy hits than a standard DIA strategy. Moreover, we show that pSMART is more selective, sensitive, and reproducible than either standard DIA or DDA strategies alone.
Deep learning for tumor classification in imaging mass spectrometry.
Behrmann, Jens; Etmann, Christian; Boskamp, Tobias; Casadonte, Rita; Kriegsmann, Jörg; Maaß, Peter
2018-04-01
Tumor classification using imaging mass spectrometry (IMS) data has a high potential for future applications in pathology. Due to the complexity and size of the data, automated feature extraction and classification steps are required to fully process the data. Since mass spectra exhibit certain structural similarities to image data, deep learning may offer a promising strategy for classification of IMS data as it has been successfully applied to image classification. Methodologically, we propose an adapted architecture based on deep convolutional networks to handle the characteristics of mass spectrometry data, as well as a strategy to interpret the learned model in the spectral domain based on a sensitivity analysis. The proposed methods are evaluated on two algorithmically challenging tumor classification tasks and compared to a baseline approach. Competitiveness of the proposed methods is shown on both tasks by studying the performance via cross-validation. Moreover, the learned models are analyzed by the proposed sensitivity analysis revealing biologically plausible effects as well as confounding factors of the considered tasks. Thus, this study may serve as a starting point for further development of deep learning approaches in IMS classification tasks. Availability and implementation: https://gitlab.informatik.uni-bremen.de/digipath/Deep_Learning_for_Tumor_Classification_in_IMS. Contact: jbehrmann@uni-bremen.de or christianetmann@uni-bremen.de. Supplementary information: Supplementary data are available at Bioinformatics online.
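A rough sketch of the kind of pipeline described above, a 1D convolutional classifier over mass spectra plus a gradient-based sensitivity map over m/z bins, is given below. The architecture, spectrum length, and class count are assumptions for illustration, not the authors' model.

```python
# Sketch only: a 1D convolutional classifier over mass spectra with a simple
# gradient-based sensitivity map. Spectrum length and class count are placeholders.
import torch
import torch.nn as nn

class SpectrumCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):                    # x: (batch, 1, n_mz_bins)
        return self.fc(self.net(x).flatten(1))

model = SpectrumCNN()
spectrum = torch.randn(1, 1, 4096, requires_grad=True)
score = model(spectrum)[0, 1]                # score for the "tumor" class
score.backward()
sensitivity = spectrum.grad.abs().squeeze()  # which m/z bins drive the decision
print(sensitivity.argmax().item())
```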
Simulation technology used for risk assessment in a deep exploration project in China
NASA Astrophysics Data System (ADS)
jiao, J.; Huang, D.; Liu, J.
2013-12-01
Deep exploration has been carried out in China for five years, employing heavy-duty instruments and equipment for gravity, magnetic, seismic and electromagnetic prospecting, as well as ultra-deep drilling rigs for obtaining deep samples. Deep exploration is a large and complex system-engineering effort that crosses multiple disciplines and requires great investment, so advanced technical means are needed for the verification, appraisal and optimization of geophysical prospecting equipment under development. To reduce the risk of application and exploration, efficient management concepts and skills must be strengthened, and management measures and workflows consolidated, to benefit this ambitious project; evidence, prediction, evaluation and the related decision strategies therefore have to be considered together to meet practical scientific requirements, technical limits and future extensions. Simulation is proposed as a tool for carrying out dynamic tests on actual or imagined systems, and in practice it must be combined with the instruments and equipment to accomplish the R&D tasks. In this paper, simulation techniques are introduced into the R&D process of heavy-duty equipment and high-end engineering technology. Based on information recently provided by a drilling group, a digital model is constructed by combining geophysical data, 3D visualization, database management and virtual-reality technologies. This supports an R&D strategy in which data processing, instrument application, expected results and their uncertainty, and even the operating workflow and working environment are simulated systematically or simultaneously, in order to obtain an optimal outcome and an equipment-upgrading strategy. The simulation technology can adjust, verify, appraise and optimize the initial plan as the real process changes, providing new insight into how the equipment meets the requirements of application and construction, and offering direct perception and understanding of the installation, debugging and experimental testing of key deep-exploration equipment; in this way the objectives of project cost savings and risk reduction can be reasonably approached. Risk assessment is used to quantitatively evaluate the possible degree of impact. During the research and development stage, information from the installation, debugging and simulation demonstration of the key instruments and equipment is used to evaluate the fatigue and safety of the devices; this requires a full understanding of the controllable and uncontrollable risk factors in the process, followed by adjustment and improvement of the unsafe factors identified in risk assessment and prediction. When combined with professional geoscience software to process and interpret the environment and obtain evaluation parameters, the simulation model comes closer to the exploration target, which requires more detailed evaluation. Safety and risk assessment can thus be carried out from both micro and macro perspectives, reducing the risk of equipment development and avoiding unnecessary losses along the way.
Explaining Achievement in Higher Education
ERIC Educational Resources Information Center
Jansen, Ellen P. W. A.; Bruinsma, Marjon
2005-01-01
This research project investigated the relationship between students' pre-entry characteristics, perceptions of the learning environment, reported work discipline, the use of deep information processing strategies, and academic achievement. Ability measured by grade-point average in pre-university education was the most important predictor of…
NASA Astrophysics Data System (ADS)
Cheek, Kim A.
2013-07-01
Many geologic processes occur in the context of geologic or deep time. Students of all ages demonstrate difficulty grasping this fundamental concept which impacts their ability to acquire other geoscience concepts. A concept of deep time requires the ability to sequence events on an immense temporal scale (succession) and to judge the durations of geologic processes based on the rates at which they occur. The twin concepts of succession and duration are the same ideas that underlie a concept of conventional time. If deep time is an extension of conventional time and not qualitatively different from it, students should display similar reasoning patterns when dealing with analogous tasks over disparate temporal periods. Thirty-five US students aged 13-24 years participated in individual task-based interviews to ascertain how they thought about succession and duration in conventional and deep time. This is the first attempt to explore this relationship in the same study in over 30 years. Most students successfully completed temporal succession tasks, but there was greater variability in responses on duration tasks. Conventional time concepts appear to impact how students reason about deep time. The application of spatial reasoning to temporal tasks sometimes leads to correct responses but in other instances does not. Implications for future research and teaching strategies are discussed.
NASA Astrophysics Data System (ADS)
Vertenten, Kristin
2002-01-01
Finding a way to encourage first year students to use deep processing strategies was the aim of this research. The need for an adequate method became clear after using the Inventory of Learning Styles (ILS) of Vermunt: almost half of the first year students turned out to have an undirected or a reproduction-directed learning style. A possible intervention is process-oriented instruction. In this type of instruction, learning strategies are taught in coherence with domain-specific knowledge. The emphasis is on a gradual transfer from a strongly instruction-guided regulation of the learning process towards student regulation. By promoting congruence and constructive frictions between instruction and learning strategies, students are challenged to improve their learning strategies. These general features of process-oriented instruction were refined by Vermunt (1992) into twelve general and specific principles. Literature was studied in which researchers reported on their experiences with interventions aimed at teaching physics knowledge, physics strategies and/or learning and thinking strategies. It became obvious that several successful interventions stressed four principles: (1) the student must experience (constructive) Frictions, including cognitive conflicts; (2) he must be encouraged to Reflect on his experiences (thinking about them and analysing them); (3) the instruction must Explicate and demonstrate the necessary knowledge and strategies; and (4) the student must be given the opportunity to practice (Doing) with the learned knowledge and strategies. These four FRED principles are useful for teaching both general and domain-specific knowledge and strategies. They show similarities with the four stages in the learning cycle of Kolb (1984). Moreover, other elements of process-oriented instruction are also depicted by the learning cycle, which, when used in process-oriented instruction, has to start with experiencing (constructive) frictions. The gradual shift of the regulation of the learning process can also be translated to the learning cycle. This can be accomplished by giving a new meaning to the radius of the circle, which must represent the growing self-regulation of the learning process. This transforms the learning cycle into a learning spiral. The four FRED principles were used to develop a learning environment for the first year physics problem-solving classes. After working in this learning environment during the first semester, students began using deep processing strategies in a self-regulated manner. After the second semester, the reproduction-directed and undirected learning styles had vanished or were strongly diminished. These effects were not found in a traditional learning environment. The experimental group also obtained better study results. Working in the developed learning environment did not increase the study load. (Abstract shortened by UMI.)
DeepPicker: A deep learning approach for fully automated particle picking in cryo-EM.
Wang, Feng; Gong, Huichao; Liu, Gaochao; Li, Meijing; Yan, Chuangye; Xia, Tian; Li, Xueming; Zeng, Jianyang
2016-09-01
Particle picking is a time-consuming step in single-particle analysis and often requires significant interventions from users, which has become a bottleneck for future automated electron cryo-microscopy (cryo-EM). Here we report a deep learning framework, called DeepPicker, to address this problem and fill the current gaps toward a fully automated cryo-EM pipeline. DeepPicker employs a novel cross-molecule training strategy to capture common features of particles from previously-analyzed micrographs, and thus does not require any human intervention during particle picking. Tests on the recently-published cryo-EM data of three complexes have demonstrated that our deep learning based scheme can successfully accomplish the human-level particle picking process and identify a sufficient number of particles that are comparable to those picked manually by human experts. These results indicate that DeepPicker can provide a practically useful tool to significantly reduce the time and manual effort spent in single-particle analysis and thus greatly facilitate high-resolution cryo-EM structure determination. DeepPicker is released as an open-source program, which can be downloaded from https://github.com/nejyeah/DeepPicker-python. Copyright © 2016 Elsevier Inc. All rights reserved.
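As a much-simplified stand-in for the sliding-window idea behind CNN-based particle picking (not DeepPicker itself; the released code at the URL above is the authoritative implementation), the Python/PyTorch sketch below scores candidate windows of a micrograph with a small binary classifier; box size, stride, score threshold and network depth are illustrative assumptions, and the untrained model here only exercises the code path.

import torch
import torch.nn as nn

class ParticleCNN(nn.Module):
    """Tiny binary classifier: particle vs. background (illustrative)."""
    def __init__(self, box=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(16 * (box // 4) ** 2, 2),
        )

    def forward(self, x):
        return self.net(x)

def pick_particles(model, micrograph, box=64, stride=32, threshold=0.9):
    """Slide a box over the micrograph and keep window centers whose
    particle probability exceeds the threshold (no non-maximum suppression)."""
    model.eval()
    picks = []
    h, w = micrograph.shape
    with torch.no_grad():
        for y in range(0, h - box + 1, stride):
            for x in range(0, w - box + 1, stride):
                patch = micrograph[y:y + box, x:x + box].unsqueeze(0).unsqueeze(0)
                prob = torch.softmax(model(patch), dim=1)[0, 1].item()
                if prob > threshold:
                    picks.append((y + box // 2, x + box // 2, prob))
    return picks

model = ParticleCNN()                          # untrained; for demonstration only
micrograph = torch.rand(512, 512)              # stand-in for a normalized micrograph
print(len(pick_particles(model, micrograph)))

A real pipeline would train the classifier on particles extracted from previously analyzed micrographs of other molecules (the cross-molecule strategy named in the abstract) and merge overlapping picks with non-maximum suppression.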
Mirghani, Hisham M; Ezimokhai, Mutairu; Shaban, Sami; van Berkel, Henk J M
2014-01-01
Students' learning approaches have a significant impact on the success of the educational experience, and a mismatch between instructional methods and the learning approach is very likely to create an obstacle to learning. Educational institutes' understanding of students' learning approaches allows those institutes to introduce changes in their curriculum content, instructional format, and assessment methods that will allow students to adopt deep learning techniques and critical thinking. The objective of this study was to determine and compare learning approaches among medical students following an interdisciplinary integrated curriculum. This was a cross-sectional study in which an electronic questionnaire using the Biggs two-factor Study Process Questionnaire (SPQ) with 20 questions was administered. Of a total of 402 students at the medical school, 214 (53.2%) completed the questionnaire. There was a significant difference in the mean score of superficial approach, motive and strategy between students in the six medical school years. However, no significant difference was observed in the mean score of deep approach, motive and strategy. The mean score for years 1 and 2 showed a significantly higher surface approach, surface motive and surface strategy when compared with students in years 4-6 in medical school. The superficial approach to learning was mostly preferred among first and second year medical students, and the least preferred among students in the final clinical years. These results may be useful in creating future teaching, learning and assessment strategies aiming to enhance a deep learning approach among medical students. Future studies are needed to investigate the reason for the preferred superficial approach among medical students in their early years of study.
An Imagination Effect in Learning from Scientific Text
ERIC Educational Resources Information Center
Leopold, Claudia; Mayer, Richard E.
2015-01-01
Asking students to imagine the spatial arrangement of the elements in a scientific text constitutes a learning strategy intended to foster deep processing of the instructional material. Two experiments investigated the effects of mental imagery prompts on learning from scientific text. Students read a computer-based text on the human respiratory…
Psychological Determinants of University Students' Academic Performance: An Empirical Study
ERIC Educational Resources Information Center
Gebka, Bartosz
2014-01-01
This study utilises an integrated conceptual model of academic performance which captures a series of psychological factors: cognitive style; self-theories such as self-esteem and self-efficacy; achievement goals such as mastery, performance, performance avoidance and work avoidance; study-processing strategies such as deep and surface learning;…
Daugherty, Ana M; Ofen, Noa
2015-08-01
The development of associative memory during childhood may be influenced by metacognitive factors. Here, one aspect of metamemory function--belief in strategy efficacy--was tested for a role in the effective use of encoding strategies. A sample of 61 children and adults (8-25 years of age) completed an associative recognition memory test and were assessed on belief in the efficacy of encoding strategies. Independent of age, belief ratings identified two factors: "deep" and "shallow" encoding strategies. Although the strategy factor structure was stable across age, adolescents and adults were more likely to prefer using a deep encoding strategy, whereas children were equally likely to prefer a shallow strategy. Belief ratings of deep encoding strategies increased with age and, critically, accounted for better associative recognition. Copyright © 2015 Elsevier Inc. All rights reserved.
Surface, Deep, and Transfer? Considering the Role of Content Literacy Instructional Strategies
ERIC Educational Resources Information Center
Frey, Nancy; Fisher, Douglas; Hattie, John
2017-01-01
This article provides an organizational review of content literacy instructional strategies to forward a claim that some strategies work better for surface learning, whereas others are more effective for deep learning and still others for transfer learning. The authors argue that the failure to adopt content literacy strategies by disciplinary…
An adaptive deep-coupled GNSS/INS navigation system with hybrid pre-filter processing
NASA Astrophysics Data System (ADS)
Wu, Mouyan; Ding, Jicheng; Zhao, Lin; Kang, Yingyao; Luo, Zhibin
2018-02-01
The deep coupling of a global navigation satellite system (GNSS) with an inertial navigation system (INS) can provide accurate and reliable navigation information. There are several kinds of deeply-coupled structures, which can be divided mainly into coherent and non-coherent pre-filter based structures; each has its own advantages and disadvantages, especially in accuracy and robustness. In this paper, the existing pre-filters of the deeply-coupled structures are first analyzed and modified to improve them. Then, an adaptive GNSS/INS deeply-coupled algorithm with hybrid pre-filter processing is proposed to combine the advantages of the coherent and non-coherent structures. An adaptive hysteresis controller is designed to implement the hybrid pre-filter processing strategy. The simulation and vehicle test results show that the adaptive deeply-coupled algorithm with hybrid pre-filter processing can effectively improve navigation accuracy and robustness, especially in a GNSS-challenged environment.
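The role of the adaptive hysteresis controller in such a scheme is to avoid rapid toggling between the coherent and non-coherent pre-filters when signal conditions hover near a single cut-off. A minimal Python sketch of a hysteresis switch is given below; the quality metric (estimated carrier-to-noise density, C/N0) and the two thresholds are assumptions for illustration, not values from the paper.

class HysteresisPrefilterSwitch:
    """Toy hysteresis controller selecting a coherent or non-coherent pre-filter
    from a signal-quality indicator (e.g., estimated C/N0 in dB-Hz).
    The thresholds are illustrative, not values from the paper."""

    def __init__(self, low=28.0, high=35.0):
        self.low = low              # switch to the non-coherent pre-filter below this
        self.high = high            # switch back to the coherent pre-filter above this
        self.mode = "coherent"

    def update(self, quality):
        if self.mode == "coherent" and quality < self.low:
            self.mode = "non-coherent"
        elif self.mode == "non-coherent" and quality > self.high:
            self.mode = "coherent"
        return self.mode

switch = HysteresisPrefilterSwitch()
for cn0 in [40, 33, 27, 30, 34, 37]:  # simulated C/N0 trace through a challenged area
    print(cn0, switch.update(cn0))

The gap between the two thresholds prevents chattering: once the algorithm falls back to the non-coherent pre-filter, it only returns to the coherent one after the signal quality has clearly recovered.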
Automatic Classification of volcano-seismic events based on Deep Neural Networks.
NASA Astrophysics Data System (ADS)
Titos Luzón, M.; Bueno Rodriguez, A.; Garcia Martinez, L.; Benitez, C.; Ibáñez, J. M.
2017-12-01
Seismic monitoring of active volcanoes is a popular remote sensing technique to detect seismic activity, often associated with energy exchanges between the volcano and the environment. As a result, seismographs register a wide range of volcano-seismic signals that reflect the nature and underlying physics of volcanic processes. Machine learning and signal processing techniques provide an appropriate framework to analyze such data. In this research, we propose a new classification framework for seismic events based on deep neural networks. Deep neural networks are composed of multiple processing layers and can discover intrinsic patterns from the data itself. Internal parameters can be initialized using a greedy unsupervised pre-training stage, leading to an efficient training of fully connected architectures. We aim to determine the robustness of these architectures as classifiers of seven different types of seismic events recorded at "Volcán de Fuego" (Colima, Mexico). Two deep neural networks with different pre-training strategies are studied: stacked denoising autoencoders and deep belief networks. Results are compared to existing machine learning algorithms (SVM, Random Forest, Multilayer Perceptron). We used 5 LPC coefficients over three non-overlapping segments as training features in order to characterize temporal evolution, avoid redundancy and encode the signal regardless of its duration. Experimental results show that deep architectures can classify seismic events with higher accuracy than classical algorithms, attaining up to 92% recognition accuracy. Pre-training initialization helps these models to detect events that occur simultaneously in time (such as explosions and rockfalls), increases robustness against noisy inputs, and provides better generalization. These results demonstrate that deep neural networks are robust classifiers and can be deployed in real environments to monitor the seismicity of restless volcanoes.
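The fixed-length feature scheme mentioned above (5 LPC coefficients on each of three non-overlapping segments, giving 15 features per event regardless of its duration) can be sketched in Python as follows; the autocorrelation-method LPC solution and the synthetic test signal are illustrative choices, not the authors' exact preprocessing.

import numpy as np

def lpc_coefficients(signal, order=5):
    """Linear prediction coefficients via the autocorrelation method:
    solve R a = r, where R is the Toeplitz autocorrelation matrix."""
    x = signal - np.mean(signal)
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

def event_features(signal, n_segments=3, order=5):
    """Fixed-length feature vector: LPC coefficients of each non-overlapping
    segment, concatenated (3 segments x 5 coefficients = 15 features)."""
    segments = np.array_split(signal, n_segments)
    return np.concatenate([lpc_coefficients(seg, order) for seg in segments])

rng = np.random.default_rng(0)
event = rng.standard_normal(6000)   # synthetic stand-in for a recorded seismic event
print(event_features(event).shape)  # (15,)

Because every event is reduced to the same 15-dimensional vector, records of very different durations can be fed to the same classifier.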
Deep and shallow encoding effects on face recognition: an ERP study.
Marzi, Tessa; Viggiano, Maria Pia
2010-12-01
Event related potentials (ERPs) were employed to investigate whether and when brain activity related to face recognition varies according to the processing level undertaken at encoding. Recognition was assessed when preceded by a "shallow" (orientation judgement) or by a "deep" study task (occupation judgement). Moreover, we included a further manipulation by presenting at encoding faces either in the upright or inverted orientation. As expected, deeply encoded faces were recognized more accurately and more quickly with respect to shallowly encoded faces. The ERP showed three main findings: i) as witnessed by more positive-going potentials for deeply encoded faces, at early and later processing stage, face recognition was influenced by the processing strategy adopted during encoding; ii) structural encoding, indexed by the N170, turned out to be "cognitively penetrable" showing repetition priming effects for deeply encoded faces; iii) face inversion, by disrupting configural processing during encoding, influenced memory related processes for deeply encoded faces and impaired the recognition of faces shallowly processed. The present study adds weight to the concept that the depth of processing during memory encoding affects retrieval. We found that successful retrieval following deep encoding involved both familiarity- and recollection-related processes showing from 500 ms a fronto-parietal distribution, whereas shallow encoding affected only earlier processing stages reflecting perceptual priming. Copyright © 2010 Elsevier B.V. All rights reserved.
Chen, Zhen; Cao, Shansong; Wang, Haorong; Li, Yanqiu; Kishen, Anil; Deng, Xuliang; Yang, Xiaoping; Wang, Yinghui; Cong, Changhong; Wang, Huajun; Zhang, Xu
2015-01-01
Currently, it is still a tough task for dentists to remineralize dentine in deep caries. The aim of this study was to remineralize demineralized dentine in a tooth model of deep caries using nanocomplexes of carboxymethyl chitosan/amorphous calcium phosphate (CMC/ACP), based on mimicking the stabilizing effect of dentine matrix protein 1 (DMP1) on ACP in the biomineralization of dentine. The experimental results indicate that CMC can stabilize ACP to form nanocomplexes of CMC/ACP, which can be processed into scaffolds by lyophilization. In the single-layer collagen model, ACP nanoparticles are released as the scaffolds of CMC/ACP nanocomplexes dissolve and then infiltrate into the collagen fibrils via the gap zones (40 nm) to accomplish intrafibrillar mineralization of collagen. With this method, the completely demineralized dentine was partially remineralized in the tooth model. This is a bottom-up remineralizing strategy based on non-classical crystallization theory. Since nanocomplexes of CMC/ACP show a promising remineralizing effect on demineralized dentine via this biomimetic strategy, thereby preserving dentinal tissue to the maximum extent possible, they would be a potential indirect pulp capping (IPC) material for the management of deep caries during vital pulp therapy, in line with the concept of minimally invasive dentistry (MID).
NASA Astrophysics Data System (ADS)
Schmid, H.; Suttner, S.; Merklein, M.
2017-09-01
Nowadays, lightweight design in metal forming leads to complex deep-drawing geometries, which can cause various forms of damage. Drawbeads are therefore one way to regulate and control material flow during the forming process. Not only in research but also in industrial practice it has been established that material is work-hardened when passing drawbead geometries, in particular because the material is pre-deformed by tensile and alternating bending loads. If examined properly, this effect also offers the opportunity to be exploited in a reasonable way. To investigate these findings, a process-oriented and comprehensive analysis of the material behaviour during these forming operations is needed. In this paper, sheet metal strips are drawn linearly through a drawbead and stopped after passing it. Within this forming operation the material undergoes non-linear straining before reaching the in-plane position again. The process is stopped at this point to investigate the permanent local strengthening through the sheet thickness; microhardness measurements are therefore taken before and after passing the drawbead. Because of its common use and widely known material data, a DC deep-drawing steel is used for these studies. Additionally, the strategy is applied to an advanced high-strength steel.
NASA Astrophysics Data System (ADS)
Coppola, L.; Prieur, L.; Taupier-Letage, I.; Estournel, C.; Testor, P.; Lefevre, D.; Belamari, S.; LeReste, S.; Taillandier, V.
2017-08-01
During winter 2013, an intensive observation and monitoring campaign was performed in the north-western Mediterranean Sea to study the deep water formation process that drives thermohaline circulation and biogeochemical processes (HYMEX SOP2 and DEWEX projects). To observe intensively and continuously the impact of deep convection on oxygen (O2) ventilation, the observation strategy was based on an enhanced deployment of Argo-O2 floats to monitor the offshore dense water formation (DWF) area in the Gulf of Lion prior to and at the end of the convective period (December 2012 to April 2013). The intensive O2 measurements performed through shipborne CTD casts and Argo-O2 float deployments revealed an O2 inventory rapidly impacted by mixed layer (ML) deepening on the monthly scale. The open-sea convection in winter 2013 ventilated the deep waters from mid-February to the end of May 2013. The newly ventilated dense water volume, based on an Apparent Oxygen Utilization (AOU) threshold, was estimated to be about 1.5 × 10¹³ m³ during the DWF episode, increasing the deep O2 concentrations from 196 to 205 µmol kg⁻¹ in the north-western basin.
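The volume estimate quoted above rests on flagging grid cells whose Apparent Oxygen Utilization (AOU = oxygen concentration at saturation minus measured oxygen) falls below a threshold and summing their volumes. A toy Python version on a regular grid is sketched below; the saturation field, oxygen values, threshold and cell sizes are placeholders, not HYMEX/DEWEX data.

import numpy as np

def ventilated_volume(o2, o2_sat, cell_volume, aou_threshold):
    """Sum the volume of grid cells considered newly ventilated, i.e. where
    AOU = O2_saturation - O2 is below the threshold (concentrations in µmol kg⁻¹)."""
    aou = o2_sat - o2
    return np.sum(cell_volume[aou < aou_threshold])

# Toy 3-D grid (lon x lat x depth); all numbers are placeholders.
shape = (20, 20, 50)
o2_sat = np.full(shape, 220.0)                                # saturation concentration
o2 = 196.0 + 9.0 * np.random.default_rng(1).random(shape)     # "measured" O2 field
cell_volume = np.full(shape, 10e3 * 10e3 * 50.0)              # 10 km x 10 km x 50 m cells, m³

print(f"{ventilated_volume(o2, o2_sat, cell_volume, aou_threshold=18.0):.2e} m³")

In practice the oxygen and saturation fields would come from mapped float and CTD profiles, and the threshold would be chosen to separate newly ventilated water from older deep water.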
NASA Astrophysics Data System (ADS)
Zhironkin, S. A.; Khoreshok, A. A.; Tyulenev, M. A.; Barysheva, G. A.; Hellmer, M. C.
2016-08-01
This article describes the problems and prospects of the development of coal mining in Kuzbass, the centre of coal production in Siberia and Russia, within the framework of the major initiatives of the National Energy Strategy for the period until 2035. It shows the structural character of the regional coal industry's problems, which are caused by a decline in investment activity, a high level of fixed-asset depreciation, the slow development of deep coal processing and the technological decline of coal mining.
Learning Approaches of Undergraduate Computer Technology Students: Strategies for Improvement
ERIC Educational Resources Information Center
Malakolunthu, Suseela; Joshua, Alice
2012-01-01
Purpose: In recent times, the quality of graduates and their performance have been questioned. Students' performance is an indicator of the kind of approach (deep or surface) that they take. This study investigates the kind of approach undergraduates take in their learning processes. Methodology: This quantitative survey used the Revised Two-Factor Study Process…
Deep Knowledge: A Strategy for University Budgetary Cuts
ERIC Educational Resources Information Center
Reynolds, Douglas B.
2016-01-01
During and after the Financial Crisis of 2008, many institutions of higher learning have had revenue and budgetary reductions, forcing them to make severe university budget cuts and university reductions in force. Often the university cuts are preceded by a process of evaluation of academic programs where institutions determine what they stand for…
ERIC Educational Resources Information Center
Bradshaw, Vicki
2012-01-01
This action research study is the culmination of several action cycles investigating cognitive information processing and learning strategies based on students' approaches to learning theory, and assessing students' metacognitive learning, motivation, and reflective development suggestive of deep learning. The study introduces a reading…
Gray, Stephen J; Gallo, David A
2015-01-01
People can use a content-specific recapitulation strategy to trigger memories (i.e., mentally reinstating encoding conditions), but how people deploy this strategy is unclear. Is recapitulation naturally used to guide all recollection attempts, or is it only used selectively, after retrieving incomplete information that requires additional monitoring? According to a retrieval orientation model, people use recapitulation whenever they search memory for specific information, regardless of what information might come to mind. In contrast, according to a postretrieval monitoring model, people selectively engage recapitulation only after retrieving ambiguous information in order to evaluate this information and guide additional retrieval attempts. We tested between these models using a criterial recollection task, and by manipulating the strength of ambiguous information associated with to-be-rejected foils (i.e., familiarity or noncriterial information). Replicating prior work, foil rejections were greater when people attempted to recollect targets studied at a semantic level (deep test) compared to an orthographic level (shallow test), implicating more accurate retrieval monitoring. To investigate the role of a recapitulation strategy in this monitoring process, a final test assessed memory for the foils that were earlier processed on these recollection tests. Performance on this foil recognition test suggested that people had engaged in more elaborative content-specific recapitulation when initially tested for deep compared to shallow recollections, and critically, this elaboration effect did not interact with the experimental manipulation of foil strength. These results support the retrieval orientation model, whereby a recapitulation strategy was used to orient retrieval toward specific information during every recollection attempt. PsycINFO Database Record (c) 2015 APA, all rights reserved.
Deep-sea geohazards in the South China Sea
NASA Astrophysics Data System (ADS)
Wu, Shiguo; Wang, Dawei; Völker, David
2018-02-01
Various geological processes and features that might inflict hazards have been identified in the South China Sea using new technologies and methods. These features include submarine landslides, pockmark fields, shallow free gas, gas hydrates, mud diapirs and earthquake tsunamis, which are widely distributed on the continental slope and reefal islands of the South China Sea. Although the study and assessment of geohazards in the South China Sea began only recently, advances in various aspects are evolving at full speed to comply with the National Marine Strategy and 'the Belt and Road' policy. The characteristics of geohazards on the deep-water seafloor of the South China Sea are summarized based on new scientific advances. This progress is aimed at aiding ongoing deep-water drilling activities and decreasing geological risks in ocean development.
An adaptive deep Q-learning strategy for handwritten digit recognition.
Qiao, Junfei; Wang, Gongming; Li, Wenjing; Chen, Min
2018-02-22
Handwritten digit recognition has been a challenging problem in recent years. Although many deep learning-based classification algorithms have been studied for handwritten digit recognition, recognition accuracy and running time still need to be improved further. In this paper, an adaptive deep Q-learning strategy is proposed to improve accuracy and shorten running time for handwritten digit recognition. The adaptive deep Q-learning strategy combines the feature-extracting capability of deep learning and the decision-making of reinforcement learning to form an adaptive Q-learning deep belief network (Q-ADBN). First, Q-ADBN extracts the features of the original images using an adaptive deep auto-encoder (ADAE), and the extracted features are considered as the current states of the Q-learning algorithm. Second, Q-ADBN receives the Q-function (reward signal) during recognition of the current states, and the final handwritten digit recognition is implemented by maximizing the Q-function using the Q-learning algorithm. Finally, experimental results on the well-known MNIST dataset show that the proposed Q-ADBN is superior to other similar methods in terms of accuracy and running time. Copyright © 2018 Elsevier Ltd. All rights reserved.
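A much-simplified sketch of the idea, with features standing in for states and a Q-function over the ten "classify as digit d" actions updated from a reward signal, is given below. The random-projection feature map is only a stand-in for the adaptive deep auto-encoder, the scikit-learn 8x8 digits replace MNIST to keep the example self-contained, and the learning rate, feature dimension and reward scheme are illustrative assumptions.

import numpy as np
from sklearn.datasets import load_digits

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)
X = X / 16.0                                    # scale pixel values to [0, 1]

n_features, n_actions, alpha = 32, 10, 0.01
W_feat = rng.standard_normal((X.shape[1], n_features)) / np.sqrt(X.shape[1])
Q = np.zeros((n_features, n_actions))           # linear Q-function weights

def features(x):
    """Fixed random projection standing in for the learned auto-encoder features."""
    return np.tanh(x @ W_feat)

for epoch in range(5):
    correct = 0
    for x, label in zip(X, y):
        s = features(x)                         # current "state"
        action = int(np.argmax(s @ Q))          # greedy classification action
        reward = 1.0 if action == label else -1.0
        # One-step update; there is no next state, so the target is the reward itself.
        Q[:, action] += alpha * (reward - s @ Q[:, action]) * s
        correct += action == label
    print(f"epoch {epoch}: accuracy {correct / len(y):.2f}")

The point of the sketch is the division of labour: a feature extractor supplies the states, and Q-learning turns a reward signal into a classification policy; the paper's contribution lies in making the feature extractor a deep, adaptively sized auto-encoder.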
ERIC Educational Resources Information Center
Servizzi, Kelli M.
2013-01-01
The study examined preschool students' use of thinking strategies when responding to deep structure questions during interactive book readings. The children were enrolled in two different inclusive preschool classrooms in a large Midwestern city. The study explored which thinking strategies the preschool children used when answering deep structure…
Chiu, Yen-Lin; Liang, Jyh-Chong; Hou, Cheng-Yen; Tsai, Chin-Chung
2016-07-18
Students' epistemic beliefs may vary in different domains; therefore, it may be beneficial for medical educators to better understand medical students' epistemic beliefs regarding medicine. Understanding how medical students are aware of medical knowledge and how they learn medicine is a critical issue of medical education. The main purposes of this study were to investigate medical students' epistemic beliefs relating to medical knowledge, and to examine their relationships with students' approaches to learning medicine. A total of 340 undergraduate medical students from 9 medical colleges in Taiwan were surveyed with the Medical-Specific Epistemic Beliefs (MSEB) questionnaire (i.e., multi-source, uncertainty, development, justification) and the Approach to Learning Medicine (ALM) questionnaire (i.e., surface motive, surface strategy, deep motive, and deep strategy). By employing the structural equation modeling technique, the confirmatory factor analysis and path analysis were conducted to validate the questionnaires and explore the structural relations between these two constructs. It was indicated that medical students with multi-source beliefs who were suspicious of medical knowledge transmitted from authorities were less likely to possess a surface motive and deep strategies. Students with beliefs regarding uncertain medical knowledge tended to utilize flexible approaches, that is, they were inclined to possess a surface motive but adopt deep strategies. Students with beliefs relating to justifying medical knowledge were more likely to have mixed motives (both surface and deep motives) and mixed strategies (both surface and deep strategies). However, epistemic beliefs regarding development did not have significant relations with approaches to learning. Unexpectedly, it was found that medical students with sophisticated epistemic beliefs (e.g., suspecting knowledge from medical experts) did not necessarily engage in deep approaches to learning medicine. Instead of a deep approach, medical students with sophisticated epistemic beliefs in uncertain and justifying medical knowledge intended to employ a flexible approach and a mixed approach, respectively.
Cerebellar Deep Nuclei Involvement in Cognitive Adaptation and Automaticity
ERIC Educational Resources Information Center
Callu, Delphine; Lopez, Joelle; El Massioui, Nicole
2013-01-01
To determine the role of the interpositus nuclei of cerebellum in rule-based learning and optimization processes, we studied (1) successive transfers of an initially acquired response rule in a cross maze and (2) behavioral strategies in learning a simple response rule in a T maze in interpositus lesioned rats (neurotoxic or electrolytic lesions).…
Strategies for Solving Fraction Tasks and Their Link to Algebraic Thinking
ERIC Educational Resources Information Center
Pearn, Catherine; Stephens, Max
2015-01-01
Many researchers argue that a deep understanding of fractions is important for a successful transition to algebra. Teaching, especially in the middle years, needs to focus specifically on those areas of fraction knowledge and operations that support subsequent solution processes for algebraic equations. This paper focuses on the results of Year 6…
Story Innovation: An Instructional Strategy for Developing Vocabulary and Fluency
ERIC Educational Resources Information Center
Griffith, Priscilla L.; Ruan, Jiening
2007-01-01
Story innovation is a form of scaffold writing in which the sentence and text patterns remain intact but the content is altered through the substitution of vocabulary to change the setting, characters, or action in a story. Story innovation is presented as a way to develop vocabulary knowledge through deep processing and to provide fluency…
Evolution in the deep sea: biological traits, ecology and phylogenetics of pelagic copepods.
Laakmann, Silke; Auel, Holger; Kochzius, Marc
2012-11-01
Deep-sea biodiversity has received increasing interest in the last decade, mainly focusing on benthic communities. In contrast, studies of zooplankton in the meso- to bathypelagic zones are relatively scarce. In order to explore evolutionary processes in the pelagic deep sea, the present study focuses on copepods of two clausocalanoid families, Euchaetidae and Aetideidae, which are abundant and species-rich in the deep-sea pelagic realm. Molecular phylogenies based on concatenated-portioned data on 18S, 28S and internal transcribed spacer 2 (ITS2), as well as mitochondrial cytochrome c oxidase subunit I (COI), were examined on 13 species, mainly from Arctic and Antarctic regions, together with species-specific biological traits (i.e. vertical occurrence, feeding behaviour, dietary preferences, energy storage, and reproductive strategy). Relationships were resolved on genus, species and even sub-species levels, the latter two established by COI with maximum average genetic distances ranging from ≤5.3% at the intra-specific, and 20.6% at the inter-specific level. There is no resolution at a family level, emphasising the state of Euchaetidae and Aetideidae as sister families and suggesting a fast radiation of these lineages, a hypothesis which is further supported by biological parameters. Euchaetidae were similar in lipid-specific energy storage, reproductive strategy, as well as feeding behaviour and dietary preference. In contrast, Aetideidae were more diverse, comprising a variety of characteristics ranging from similar adaptations within Paraeuchaeta, to genera consisting of species with completely different reproductive and feeding ecologies. Reproductive strategies were generally similar within each aetideid genus, but differed between genera. Closely related species (congeners), which were similar in the aforementioned biological and ecological traits, generally occurred in different depth layers, suggesting that vertical partitioning of the water column represents an important mechanism in the speciation processes for these deep-sea copepods. High COI divergence between Arctic and Antarctic specimens of the mesopelagic cosmopolitan Gaetanus tenuispinus and the bipolar Aetideopsis minor suggest different geographic forms, potentially cryptic species or sibling species. On the contrary, Arctic and Antarctic individuals of the bathypelagic cosmopolitans Gaetanus brevispinus and Paraeuchaeta barbata were very similar in COI sequence, suggesting more gene flow at depth and/or that driving forces for speciation were less pronounced in bathypelagic than at mesopelagic depths. Copyright © 2012 Elsevier Inc. All rights reserved.
On the Shallow Processing (Dis)Advantage: Grammar and Economy
Koornneef, Arnout; Reuland, Eric
2016-01-01
In the psycholinguistic literature it has been proposed that readers and listeners often adopt a “good-enough” processing strategy in which a “shallow” representation of an utterance driven by (top-down) extra-grammatical processes has a processing advantage over a “deep” (bottom-up) grammatically-driven representation of that same utterance. In the current contribution we claim, both on theoretical and experimental grounds, that this proposal is overly simplistic. Most importantly, in the domain of anaphora there is now an accumulating body of evidence showing that the anaphoric dependencies between (reflexive) pronominals and their antecedents are subject to an economy hierarchy. In this economy hierarchy, deriving anaphoric dependencies by deep—grammatical—operations requires less processing costs than doing so by shallow—extra-grammatical—operations. In addition, in case of ambiguity when both a shallow and a deep derivation are available to the parser, the latter is actually preferred. This, we argue, contradicts the basic assumptions of the shallow–deep dichotomy and, hence, a rethinking of the good-enough processing framework is warranted. PMID:26903897
Evaluation of Deep Discount Fare Strategies
DOT National Transportation Integrated Search
1995-08-01
This report evaluates the success of a fare pricing strategy known as deep discounting, that entails the bulk sale of transit tickets or tokens to customers at a significant discount compared to the full fare single ticket price. This market-driven s...
Catrysse, Leen; Gijbels, David; Donche, Vincent; De Maeyer, Sven; Lesterhuis, Marije; Van den Bossche, Piet
2018-03-01
Up until now, empirical studies in the Student Approaches to Learning field have mainly focused on the use of self-report instruments, such as interviews and questionnaires, to uncover differences in students' general preferences for learning strategies, and less on the use of task-specific and online measures. This study aimed at extending current research on students' learning strategies by combining general and task-specific measurements of students' learning strategies, using both offline and online measures. We want to clarify how students process learning content and to what extent this is related to their self-reported learning strategies. Twenty students with different generic learning profiles (according to self-report questionnaires) read an expository text while their eye movements were registered, and answered questions on the content afterwards. Eye-tracking data were analysed with generalized linear mixed-effects models. The results indicate that students with an all-high profile, combining both deep and surface learning strategies, spend more time on rereading the text than students with an all-low profile, who score low on both learning strategies. This study showed that we can use eye-tracking to distinguish very strategic students, characterized by the use of cognitive processing and regulation strategies, from low strategic students, characterized by a lack of cognitive and regulation strategies. These students processed the expository text in accordance with their self-reports. © 2017 The British Psychological Society.
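To make the analysis step concrete, the sketch below fits a simplified linear mixed-effects model (random intercept per participant, fixed effect of learning profile) to simulated rereading times using statsmodels; the variable names, effect sizes and the use of a plain linear rather than generalized mixed model are assumptions for illustration only.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for eye-tracking output: rereading time per participant
# and paragraph, with a two-level learning-profile factor.
rng = np.random.default_rng(2)
rows = []
for pid, profile in enumerate(["all_high"] * 10 + ["all_low"] * 10):
    subject_offset = rng.normal(0, 150)                    # random intercept per person
    for paragraph in range(12):
        base = 900 if profile == "all_high" else 700       # mean rereading time (ms)
        rows.append({"participant": pid, "profile": profile,
                     "reread_ms": base + subject_offset + rng.normal(0, 200)})
frame = pd.DataFrame(rows)

# Fixed effect of profile, random intercept per participant.
model = smf.mixedlm("reread_ms ~ profile", frame, groups=frame["participant"])
print(model.fit().summary())

The fixed-effect estimate for the profile factor corresponds to the kind of group difference in rereading time reported in the abstract.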
Diverging receptive and expressive word processing mechanisms in a deep dyslexic reader.
Ablinger, Irene; Radach, Ralph
2016-01-29
We report on KJ, a patient with acquired dyslexia due to cerebral artery infarction. He represents an unusually clear case of an "output" deep dyslexic reader, with a distinct pattern of pure semantic reading. According to current neuropsychological models of reading, the severity of this condition is directly related to the degree of impairment in semantic and phonological representations and the resulting imbalance in the interaction between the two word processing pathways. The present work sought to examine whether an innovative eye-movement-supported intervention combining lexical and segmental therapy would strengthen phonological processing and lead to an attenuation of the extreme semantic over-involvement in KJ's word identification process. Reading performance was assessed before (T1), between (T2), and after (T3) therapy using both analyses of linguistic errors and word viewing patterns. Therapy resulted in improved reading-aloud accuracy along with a change in error distribution that suggested a return to more sequential reading. Interestingly, this was in contrast to the dynamics of moment-to-moment word processing, as eye movement analyses still suggested a predominantly holistic strategy, even at T3. So, in addition to documenting the success of the therapeutic intervention, our results call for a theoretically important conclusion: real-time letter and word recognition routines should be considered separately from properties of the verbal output. Combining both perspectives may provide a promising strategy for future assessment and therapy evaluation. Copyright © 2015. Published by Elsevier Ltd.
Blackboxing: social learning strategies and cultural evolution.
Heyes, Cecilia
2016-05-05
Social learning strategies (SLSs) enable humans, non-human animals, and artificial agents to make adaptive decisions about when they should copy other agents, and who they should copy. Behavioural ecologists and economists have discovered an impressive range of SLSs, and explored their likely impact on behavioural efficiency and reproductive fitness while using the 'phenotypic gambit': ignoring, or remaining deliberately agnostic about, the nature and origins of the cognitive processes that implement SLSs. Here I argue that this 'blackboxing' of SLSs is no longer a viable scientific strategy. It has contributed, through the 'social learning strategies tournament', to the premature conclusion that social learning is generally better than asocial learning, and to a deep puzzle about the relationship between SLSs and cultural evolution. The puzzle can be solved by recognizing that whereas most SLSs are 'planetary'--they depend on domain-general cognitive processes--some SLSs, found only in humans, are 'cook-like'--they depend on explicit, metacognitive rules, such as 'copy digital natives'. These metacognitive SLSs contribute to cultural evolution by fostering the development of processes that enhance the exclusivity, specificity, and accuracy of social learning. © 2016 The Author(s).
Márquez U, Carolina; Fasce H, Eduardo; Pérez V, Cristhian; Ortega B, Javiera; Parra P, Paula; Ortiz M, Liliana; Matus B, Olga; Ibáñez G, Pilar
2014-11-01
Self-directed learning (SDL) skills are particularly important in medical education, considering that physicians should be able to regulate their own learning experiences. The aim of this study was to evaluate the relationship between learning styles and strategies and self-directed learning in medical students. One hundred ninety-nine first year medical students (120 males) participated in the study. The Preparation for Independent Learning (EPAI) scale was used to assess self-direction. The Schmeck learning strategies scale and the Honey and Alonso (CHAEA) scale were used to evaluate learning styles and strategies. A theoretical learning style and a deep processing learning strategy had positive correlations with self-directed learning. Medical students with theoretical styles and low retention of facts are those with a greater ability to self-direct their learning. Further studies are required to determine the relationship between learning styles and strategies and SDL in medical students. The acquired knowledge will allow the adjustment of teaching strategies to encourage SDL.
[Teaching practices and learning strategies in health careers].
Carrasco Z, Constanza; Pérez V, Cristhian; Torres A, Graciela; Fasce H, Eduardo
2016-09-01
Medical education, according to the constructivist paradigm, makes students the protagonists of the teaching and learning process, and this demands changes in teaching practice. However, it is unclear whether this new model is coherent with teachers' own ways of learning. The aim of this study was to analyze the relationship between teaching practices and learning strategies among teachers of health careers in Chilean universities. The Teaching Practices Questionnaire and the Learning Strategies Inventory of Schmeck were applied to 200 teachers aged 24 to 72 years (64% females). Teachers use different types of teaching practices and commonly use deep and elaborative learning strategies. A multiple regression analysis showed that learning strategies had a 13% predictive value for identifying student-centered teaching, but failed to predict teacher-centered teaching. Thus, teaching practices and learning strategies of teachers are related; teachers frequently select strategies of the constructivist model, while using different teaching practices in their work.
Bernstein, Jacob G.; Allen, Brian D.; Guerra, Alexander A.; Boyden, Edward S.
2016-01-01
Optogenetics enables light to be used to control the activity of genetically targeted cells in the living brain. Optical fibers can be used to deliver light to deep targets, and LEDs can be spatially arranged to enable patterned light delivery. In combination, arrays of LED-coupled optical fibers can enable patterned light delivery to deep targets in the brain. Here we describe the process flow for making LED arrays and LED-coupled optical fiber arrays, explaining key optical, electrical, thermal, and mechanical design principles to enable the manufacturing, assembly, and testing of such multi-site targetable optical devices. We also explore accessory strategies such as surgical automation approaches as well as innovations to enable low-noise concurrent electrophysiology. PMID:26798482
Assessment of methods for methyl iodide emission reduction and pest control using a simulation model
NASA Astrophysics Data System (ADS)
Luo, Lifang; Ashworth, Daniel J.; Šimunek, Jirka; Xuan, Richeng; Yates, Scott R.
2013-02-01
The increasing registration of the fumigant methyl iodide within the USA has led to more concerns about its toxicity to workers and bystanders. Emission mitigation strategies are needed to protect public and environmental health while providing effective pest control. In this study, the effectiveness of various methods for emission reduction and pest control was assessed using a process-based mathematical model. First, comparisons between simulated and laboratory-measured emission fluxes and cumulative emissions were made for methyl iodide (MeI) under four emission reduction treatments: 1) control, 2) using soil with high organic matter content (HOM), 3) covering with virtually impermeable film (VIF), and 4) irrigating the soil surface following fumigation (Irrigation). The model was then extended to simulate a broader range of emission reduction strategies for MeI, including 5) covering with high density polyethylene (HDPE), 6) increasing the injection depth from 30 cm to 46 cm (Deep), 7) Deep + HDPE, 8) adding a reagent at the soil surface (Reagent), 9) Reagent + Irrigation, and 10) Reagent + HDPE. Furthermore, the survivability of three types of soil-borne pests (citrus nematodes [Tylenchulus semipenetrans], barnyard grass seeds [Echinochloa crus-galli], and fungi [Fusarium oxysporum]) was estimated for each scenario. Overall, the trends of the measured emission fluxes, as well as the total emissions, were reasonably reproduced by the model for treatments 1 through 4. Based on the numerical simulation, the ranking of effectiveness in total emission reduction was VIF (82.4%) > Reagent + HDPE (73.2%) > Reagent + Irrigation (43.0%) > Reagent (23.5%) > Deep + HDPE (19.3%) > HOM (17.6%) > Deep (13.0%) > Irrigation (11.9%) > HDPE (5.8%). The order of pest control efficacy suggests that VIF had the highest efficacy, followed by Deep + HDPE, Irrigation, Reagent + Irrigation, HDPE, Deep, Reagent + HDPE, Reagent, and HOM. Therefore, VIF is the optimal method, disregarding the cost of the film, since it maximizes efficacy while minimizing volatility losses. Otherwise, integrated methods such as Deep + HDPE and Reagent + Irrigation are recommended.
ERIC Educational Resources Information Center
De Backer, Liesje; Van Keer, Hilde; Valcke, Martin
2017-01-01
The present study investigates collaborative learners' adoption of key regulation activities (i.e., orienting, planning, monitoring, and evaluating) and a deep-level regulation approach in relation to characteristics of their collaboration on the cognitive and communicative level. More specifically, the correlation of collaborative learners'…
ERIC Educational Resources Information Center
Watkins, David; McInerney, Dennis; Akande, Adebowale; Lee, Clement
2003-01-01
Compared school motivation and use of deep processing (an indicator of learning quality) among black and white South African students from two recently integrated secondary schools. Student surveys found no significant ethnic group differences. Both groups considered working hard and having interest in school tasks to be more important than…
Towards automatic pulmonary nodule management in lung cancer screening with deep learning
NASA Astrophysics Data System (ADS)
Ciompi, Francesco; Chung, Kaman; van Riel, Sarah J.; Setio, Arnaud Arindra Adiyoso; Gerke, Paul K.; Jacobs, Colin; Th. Scholten, Ernst; Schaefer-Prokop, Cornelia; Wille, Mathilde M. W.; Marchianò, Alfonso; Pastorino, Ugo; Prokop, Mathias; van Ginneken, Bram
2017-04-01
The introduction of lung cancer screening programs will produce an unprecedented amount of chest CT scans in the near future, which radiologists will have to read in order to decide on a patient follow-up strategy. According to the current guidelines, the workup of screen-detected nodules strongly relies on nodule size and nodule type. In this paper, we present a deep learning system based on multi-stream multi-scale convolutional networks, which automatically classifies all nodule types relevant for nodule workup. The system processes raw CT data containing a nodule without the need for any additional information such as nodule segmentation or nodule size and learns a representation of 3D data by analyzing an arbitrary number of 2D views of a given nodule. The deep learning system was trained with data from the Italian MILD screening trial and validated on an independent set of data from the Danish DLCST screening trial. We analyze the advantage of processing nodules at multiple scales with a multi-stream convolutional network architecture, and we show that the proposed deep learning system achieves performance at classifying nodule type that surpasses the one of classical machine learning approaches and is within the inter-observer variability among four experienced human observers.
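The multi-stream idea, one shared 2-D convolutional stream applied to an arbitrary number of 2-D views of a nodule with the per-view outputs fused into a single nodule-type prediction, can be sketched as below; patch size, layer sizes, number of nodule types and fusion by averaging are assumptions for the example, not the authors' published configuration.

import torch
import torch.nn as nn

class MultiViewNoduleNet(nn.Module):
    """Illustrative multi-stream network: one shared 2-D CNN encodes any number of
    2-D views of a nodule (resampled to a common patch size, possibly from several
    scales), and the per-view logits are averaged into one prediction."""

    def __init__(self, patch=64, n_types=4):
        super().__init__()
        self.stream = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * (patch // 4) ** 2, n_types),
        )

    def forward(self, views):                      # views: (batch, n_views, 1, patch, patch)
        b, v = views.shape[:2]
        logits = self.stream(views.flatten(0, 1))  # run every view through the shared stream
        return logits.view(b, v, -1).mean(dim=1)   # fuse by averaging over the views

model = MultiViewNoduleNet()
views = torch.rand(2, 9, 1, 64, 64)                # 2 nodules, 9 extracted 2-D views each
print(model(views).shape)                          # torch.Size([2, 4]) -> nodule-type scores

Because the stream is shared and the fusion is an average, a forward pass accepts a different number of views per call without any architectural change, which matches the "arbitrary number of 2D views" property described in the abstract.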
Li, Meng; Baker, Brett J; Anantharaman, Karthik; Jain, Sunit; Breier, John A; Dick, Gregory J
2015-11-17
Microbial activity is one of the most important processes to mediate the flux of organic carbon from the ocean surface to the seafloor. However, little is known about the microorganisms that underpin this key step of the global carbon cycle in the deep oceans. Here we present genomic and transcriptomic evidence that five ubiquitous archaeal groups actively use proteins, carbohydrates, fatty acids and lipids as sources of carbon and energy at depths ranging from 800 to 4,950 m in hydrothermal vent plumes and pelagic background seawater across three different ocean basins. Genome-enabled metabolic reconstructions and gene expression patterns show that these marine archaea are motile heterotrophs with extensive mechanisms for scavenging organic matter. Our results shed light on the ecological and physiological properties of ubiquitous marine archaea and highlight their versatile metabolic strategies in deep oceans that might play a critical role in global carbon cycling.
Current Status of The Romanian National Deep Geological Repository Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Radu, M.; Nicolae, R.; Nicolae, D.
2008-07-01
Construction of a deep geological repository is a very demanding and costly task. To date, countries operating Candu reactors have not reprocessed their spent fuel, instead placing it in interim storage as a preliminary step toward final disposal within the nuclear fuel cycle back-end. Romania, in comparison to other nations, is a rather small territory with a high population density, in which geological formations with radioactive waste storage potential are limited and restricted not only by the selection criteria arising from the rocks' natural characteristics, but also by their involvement in social and economic activities. In the framework of the national R&D programs, a series of 'map investigations' has been carried out regarding the selection and preliminary characterization of the host geological formation for the nation's spent fuel deep geological repository. The fact that Romania has many deposits of natural gas, oil, ore and geothermal water, uses its soil intensively and is heavily forested causes some of the apparently acceptable sites to be rejected in the subsequent analysis. Currently, according to the Law on spent fuel and radioactive waste management, including disposal, the National Agency of Radioactive Waste is responsible for and coordinates the national strategy in this field, and further actions will be decided subsequently. The Romanian National Strategy, approved in 2004, projects the operation of a deep geological repository to begin in 2055. (authors)
Dental students' perception of their approaches to learning in a PBL programme.
Haghparast, H; Ghorbani, A; Rohlin, M
2017-08-01
To compare dental students' perceptions of their learning approaches between different years of a problem-based learning (PBL) programme. The hypothesis was that in a comparison between senior and junior students, the senior students would perceive themselves as having a higher level of deep learning approach and a lower level of surface learning approach than junior students would. This hypothesis was based on the fact that senior students have longer experience of a student-centred educational context, which is supposed to underpin student learning. Students of three cohorts (first year, third year and fifth year) of a PBL-based dental programme were asked to respond to a questionnaire (R-SPQ-2F) developed to analyse students' learning approaches, that is, deep approach and surface approach, using four subscales: deep strategy, surface strategy, deep motive and surface motive. The results of the three cohorts were compared using a one-way analysis of variance (ANOVA). Statistical significance was set at P < 0.05. The fifth-year students demonstrated a lower surface approach than the first-year students (P = 0.020). There was a significant decrease in surface strategy from the first to the fifth year (P = 0.003). No differences were found concerning deep approach or its subscales (deep strategy and deep motive) between the mean scores of the three cohorts. The results did not show the expected increased depth in learning approaches over the programme years. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Baas, Diana; Castelijns, Jos; Vermeulen, Marjan; Martens, Rob; Segers, Mien
2015-03-01
Assessment for Learning (AfL) is believed to create a rich learning environment in which students develop their cognitive and metacognitive strategies. Monitoring student growth and providing scaffolds that shed light on the next step in the learning process are hypothesized to be essential elements of AfL that enhance cognitive and metacognitive strategies. However, empirical evidence for the relation between AfL and students' strategy use is scarce. This study investigates the relation between AfL and elementary school students' use of cognitive and metacognitive strategies. The sample comprised 528 grade four to six students (9- to 12-year-olds) from seven Dutch elementary schools. Students' perceptions of AfL and their cognitive and metacognitive strategy use were measured by means of questionnaires. Structural equation modelling was used to investigate the relations among the variables. The results reveal that monitoring activities that provide students with an understanding of where they are in their learning process predict students' task orientation and planning. Scaffolding activities that support students in taking the next step in their learning are positively related to the use of both surface and deep-level learning strategies and the extent to which they evaluate their learning process after performing tasks. The results underline the importance of assessment practices in ceding responsibility to students in taking control of their own learning. © 2014 The British Psychological Society.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Joon H.; Arnold, Bill W.; Swift, Peter N.
2012-07-01
A deep borehole repository is one of the four geologic disposal system options currently under study by the U.S. DOE to support the development of a long-term strategy for geologic disposal of commercial used nuclear fuel (UNF) and high-level radioactive waste (HLW). The immediate goal of the generic deep borehole repository study is to develop the necessary modeling tools to evaluate and improve the understanding of the repository system response and processes relevant to long-term disposal of UNF and HLW in a deep borehole. A prototype performance assessment model for a generic deep borehole repository has been developed using the approach for a mined geological repository. The preliminary results from the simplified deep borehole generic repository performance assessment indicate that soluble, non-sorbing (or weakly sorbing) fission product radionuclides, such as I-129, Se-79 and Cl-36, are the likely major dose contributors, and that the annual radiation doses to hypothetical future humans associated with those releases may be extremely small. While much work needs to be done to validate the model assumptions and parameters, these preliminary results highlight the importance of a robust seal design in assuring long-term isolation, and suggest that deep boreholes may be a viable alternative to mined repositories for disposal of both HLW and UNF. (authors)
Anthropogenic impacts on deep submarine canyons of the western Mediterranean Sea
NASA Astrophysics Data System (ADS)
Sanchez-Vidal, A.; Tubau, X.; Llorca, M.; Woodall, L.; Canals, M.; Farré, M.; Barceló, D.; Thompson, R.
2016-02-01
Submarine canyons are seafloor geomorphic features connecting the shallow coastal ocean to the deep continental margin and basin. Often considered biodiversity hotspots, submarine canyons have been identified as preferential pathways for water, sediment, pollutant and litter transfers from the coastal to the deep ocean. Here we provide insights on the presence of some of the most insidious man-made debris and substances in submarine canyons of the western Mediterranean Sea, which are relevant to achieve a "Good Environmental Status" by 2020 as outlined in the European Union's ambitious Marine Strategy Framework Directive. Ranked by size on a decreasing basis, we review the origin, distribution and transport mechanisms of i) marine litter, including plastic, lost fishing gear and metallic objects; ii) microplastics in the form of fibers of rayon, polyester, polyamide and acetates; and iii) persistent organic pollutants including the toxic and persistent perfluoroalkyl substances. This integrated analysis allows us to understand the pivotal role of atmospheric driven oceanographic processes occurring in Mediterranean deep canyons (dense shelf water cascading, coastal storms) in spreading any type of man-made compound to the deep sea, where they sink and accumulate before getting buried.
Becheler, Ronan; Cassone, Anne-Laure; Noel, Philippe; Mouchel, Olivier; Morrison, Cheryl L.; Arnaud-Haond, Sophie
2017-01-01
Sampling in the deep sea is a technical challenge, which has hindered the acquisition of robust datasets that are necessary to determine the fine-grained biological patterns and processes that may shape genetic diversity. Estimates of the extent of clonality in deep-sea species, despite the importance of clonality in shaping the local dynamics and evolutionary trajectories, have been largely obscured by such limitations. Cold-water coral reefs along European margins are formed mainly by two reef-building species, Lophelia pertusa and Madrepora oculata. Here we present a fine-grained analysis of the genotypic and genetic composition of reefs occurring in the Bay of Biscay, based on an innovative deep-sea sampling protocol. This strategy was designed to be standardized, random, and allowed the georeferencing of all sampled colonies. Clonal lineages discriminated through their Multi-Locus Genotypes (MLG) at 6–7 microsatellite markers could thus be mapped to assess the level of clonality and the spatial spread of clonal lineages. High values of clonal richness were observed for both species across all sites suggesting a limited occurrence of clonality, which likely originated through fragmentation. Additionally, spatial autocorrelation analysis underlined the possible occurrence of fine-grained genetic structure in several populations of both L. pertusa and M. oculata. The two cold-water coral species examined had contrasting patterns of connectivity among canyons, with among-canyon genetic structuring detected in M. oculata, whereas L. pertusa was panmictic at the canyon scale. This study exemplifies that a standardized, random and georeferenced sampling strategy, while challenging, can be applied in the deep sea, and associated benefits outlined here include improved estimates of fine grained patterns of clonality and dispersal that are comparable across sites and among species.
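To make the clonality measure mentioned above more tangible, the short sketch below computes a commonly used clonal richness index from multi-locus genotypes, R = (G - 1) / (N - 1), where G is the number of distinct genotypes and N the number of sampled colonies. The index choice and the example genotypes are illustrative assumptions; they are not taken from this study's data or its exact statistics.

```python
# Minimal sketch of summarising clonality from multi-locus genotypes (MLGs).
from collections import Counter

def clonal_richness(mlgs):
    """mlgs: list of hashable multi-locus genotypes, one per sampled colony."""
    n = len(mlgs)              # number of sampled colonies (ramets)
    g = len(set(mlgs))         # number of distinct genotypes (genets)
    return (g - 1) / (n - 1) if n > 1 else float("nan")

# Hypothetical genotypes at two microsatellite loci, encoded as allele tuples.
samples = [((101, 105), (220, 224)), ((101, 105), (220, 224)),
           ((103, 105), (218, 224)), ((101, 107), (220, 226))]
print(clonal_richness(samples))            # 0.67, i.e. limited clonality
print(Counter(samples).most_common(1))     # the most repeated clonal lineage
```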
Measuring Deep, Reflective Comprehension and Learning Strategies: Challenges and Successes
ERIC Educational Resources Information Center
McNamara, Danielle S.
2011-01-01
There is a heightened understanding that metacognition and strategy use are crucial to deep, long-lasting comprehension and learning, but their assessment is challenging. First, students' judgments of what their abilities and habits and measurements of their performance often do not match. Second, students tend to learn and comprehend differently…
Want a Tip? Service Performance as a Function of Emotion Regulation and Extraversion
ERIC Educational Resources Information Center
Chi, Nai-Wen; Grandey, Alicia A.; Diamond, Jennifer A.; Krimmel, Kathleen Royer
2011-01-01
Surface acting and deep acting with customers are strategies for service performance, but evidence for their effectiveness is limited and mixed. We propose that deep acting is an effective strategy for most employees, whereas surface acting's effect on performance effectiveness depends on employee extraversion. In Study 1, restaurant servers who…
Implementation Experience with Deep Discount Fares
DOT National Transportation Integrated Search
1994-09-01
This report reviews the experiences of transit agencies across the country with Deep Discount fares, a new public transit pricing strategy, between 1988 and 1993. Based on new market research findings, Deep Discounting has shown that it is possible t...
A Two-Stream Deep Fusion Framework for High-Resolution Aerial Scene Classification.
Yu, Yunlong; Liu, Fuxian
2018-01-01
One of the challenging problems in understanding high-resolution remote sensing images is aerial scene classification. A well-designed feature representation method and classifier can improve classification accuracy. In this paper, we construct a new two-stream deep architecture for aerial scene classification. First, we use two pretrained convolutional neural networks (CNNs) as feature extractor to learn deep features from the original aerial image and the processed aerial image through saliency detection, respectively. Second, two feature fusion strategies are adopted to fuse the two different types of deep convolutional features extracted by the original RGB stream and the saliency stream. Finally, we use the extreme learning machine (ELM) classifier for final classification with the fused features. The effectiveness of the proposed architecture is tested on four challenging datasets: UC-Merced dataset with 21 scene categories, WHU-RS dataset with 19 scene categories, AID dataset with 30 scene categories, and NWPU-RESISC45 dataset with 45 challenging scene categories. The experimental results demonstrate that our architecture gets a significant classification accuracy improvement over all state-of-the-art references.
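The fusion-plus-ELM pipeline described above can be illustrated with a small sketch. Feature extraction by the two pretrained CNNs is abstracted away here, and the feature dimensions, hidden-layer size and class count are assumptions for illustration rather than the paper's settings.

```python
# Hedged sketch: concatenate deep features from an RGB stream and a saliency
# stream, then classify with a single-hidden-layer extreme learning machine.
import numpy as np

rng = np.random.default_rng(0)

def elm_train(features, labels, n_hidden=512, n_classes=30):
    """ELM: random input weights, least-squares (pseudoinverse) output weights."""
    w = rng.normal(size=(features.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    h = np.tanh(features @ w + b)              # random hidden layer
    targets = np.eye(n_classes)[labels]        # one-hot targets
    beta = np.linalg.pinv(h) @ targets         # closed-form output weights
    return w, b, beta

def elm_predict(features, w, b, beta):
    return np.argmax(np.tanh(features @ w + b) @ beta, axis=1)

# Assume 4096-d deep features already extracted by the two pretrained CNNs.
rgb_feats = rng.normal(size=(100, 4096))       # stream 1: original aerial image
sal_feats = rng.normal(size=(100, 4096))       # stream 2: saliency-processed image
fused = np.concatenate([rgb_feats, sal_feats], axis=1)   # simple fusion strategy
labels = rng.integers(0, 30, size=100)

w, b, beta = elm_train(fused, labels)
print((elm_predict(fused, w, b, beta) == labels).mean())  # training accuracy
```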
Kuriyama, Motone; Yoshida, Yukitaka; Ninomiya, Hitoshi; Yamamoto, Shin; Sasaguri, Shiro; Akita, Shinsuke; Mitsukawa, Nobuyuki
2018-05-01
Poststernotomy deep sternal wound infections are persistent and occasionally fatal, especially in cases involving prosthetic grafts, because of their complicated structure and virtual impossibility of removal. We aimed to verify the influence of cooperation with plastic surgeons and our novel strategy for treating deep sternal wound infection after aortic replacement on cardiovascular surgery outcomes. Nine hundred eighty-three consecutive patients were divided into two groups: an early group (2012-2013) and a late group (2014-2015). The late group received cooperatively improved perioperative wound management: our novel strategy for deep sternal wound infection, based on radical debridement and immediate reconstruction, decided with reference to the severity of the patient's general condition and the extent of infection, with early intervention by plastic surgeons. The groups were analysed retrospectively. Binary variables were analysed statistically with the Fisher exact test and continuous variables with the Mann-Whitney U test. Inter-group differences were assessed with the chi-square test. Twenty of 390 cases in the early group and 13 of 593 cases in the late group were associated with deep sternal infection. Morbidity rates of deep sternal wound infection and associated mortality rates 1 year after reconstruction surgery were significantly lower (p < 0.05 for both) in the late group. Intervention by plastic surgeons improved perioperative wound management outcomes. Our treatment strategy for deep sternal wound infection also reduced associated mortality rates. Facilities should consider the early inclusion of plastic surgeons in the treatment of patients undergoing aortic replacement to facilitate better outcomes. Copyright © 2018 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
Shah, Dev Kumar; Yadav, Ram Lochan; Sharma, Deepak; Yadav, Prakash Kumar; Sapkota, Niraj Khatri; Jha, Rajesh Kumar; Islam, Md Nazrul
2016-01-01
Many factors shape the quality of learning. Intrinsically motivated students adopt a deep approach to learning, while students who fear failure in assessments adopt a surface approach to learning. In the area of health science education in Nepal, there is still a lack of studies on learning approaches that could be used to help students become better learners and improve the effectiveness of teaching. Therefore, we aimed to explore the learning approaches among medical, dental, and nursing students of Chitwan Medical College, Nepal using Biggs's Revised Two-Factor Study Process Questionnaire (R-SPQ-2F) after testing its reliability. The R-SPQ-2F contains 20 items representing two main scales of learning approaches, deep and surface, with four subscales: deep motive, deep strategy, surface motive, and surface strategy. Each subscale had five items and each item was rated on a 5-point Likert scale. The data were analyzed using Student's t-test and analysis of variance. Reliability of the administered questionnaire was checked using Cronbach's alpha. The Cronbach's alpha value (0.6) for the 20 items of the R-SPQ-2F was found to be acceptable for its use. The participants predominantly had a deep approach to learning regardless of their age and sex (deep: 32.62±6.33 versus surface: 25.14±6.81, P<0.001). The level of deep approach among medical students (33.26±6.40) was significantly higher than among dental (31.71±6.51) and nursing (31.36±4.72) students. In comparison to first-year students, deep approach among second-year medical (34.63±6.51 to 31.73±5.93; P<0.001) and dental (33.47±6.73 to 29.09±5.62; P=0.002) students was found to be significantly decreased. On the other hand, surface approach significantly increased (25.55±8.19 to 29.34±6.25; P=0.023) among second-year dental students compared to first-year dental students. Medical students were found to adopt a deeper approach to learning than dental and nursing students. However, irrespective of disciplines and personal characteristics of participants, the primarily deep learning approach was found to be shifting progressively toward a surface approach after completion of an academic year, which should be avoided.
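For readers unfamiliar with how the questionnaire described above is turned into scale scores, the sketch below sums each five-item subscale and combines subscales into the deep and surface approach scores. The item-to-subscale mapping follows the commonly cited Biggs et al. scoring key and should be verified against the instrument itself before use.

```python
# Sketch of typical R-SPQ-2F scoring; mapping assumed from the standard key.
SUBSCALES = {
    "deep_motive":      [1, 5, 9, 13, 17],
    "deep_strategy":    [2, 6, 10, 14, 18],
    "surface_motive":   [3, 7, 11, 15, 19],
    "surface_strategy": [4, 8, 12, 16, 20],
}

def score_rspq2f(responses):
    """responses: dict mapping item number (1-20) to a 1-5 Likert rating."""
    scores = {name: sum(responses[i] for i in items)
              for name, items in SUBSCALES.items()}
    scores["deep_approach"] = scores["deep_motive"] + scores["deep_strategy"]
    scores["surface_approach"] = scores["surface_motive"] + scores["surface_strategy"]
    return scores

# Hypothetical respondent: 4 on every deep item, 2 on every surface item.
deep_items = SUBSCALES["deep_motive"] + SUBSCALES["deep_strategy"]
answers = {i: 4 if i in deep_items else 2 for i in range(1, 21)}
print(score_rspq2f(answers))   # deep_approach = 40, surface_approach = 20
```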
NASA Astrophysics Data System (ADS)
Lee, Silvia Wen-Yu; Liang, Jyh-Chong; Tsai, Chin-Chung
2016-10-01
This study investigated the relationships among college students' epistemic beliefs in biology (EBB), conceptions of learning biology (COLB), and strategies of learning biology (SLB). EBB includes four dimensions, namely 'multiple-source,' 'uncertainty,' 'development,' and 'justification.' COLB is further divided into 'constructivist' and 'reproductive' conceptions, while SLB represents deep strategies and surface learning strategies. Questionnaire responses were gathered from 303 college students. The results of the confirmatory factor analysis and structural equation modelling showed acceptable model fits. Mediation testing further revealed two paths with complete mediation. In sum, students' epistemic beliefs of 'uncertainty' and 'justification' in biology were statistically significant in explaining the constructivist and reproductive COLB, respectively; and 'uncertainty' was statistically significant in explaining the deep SLB as well. The results of mediation testing further revealed that 'uncertainty' predicted surface strategies through the mediation of 'reproductive' conceptions; and the relationship between 'justification' and deep strategies was mediated by 'constructivist' COLB. This study provides evidence for the essential roles some epistemic beliefs play in predicting students' learning.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2013-03-01
This fact sheet summarizes actions in the areas of light-duty vehicle, non-light-duty vehicle, fuel, and transportation demand that show promise for deep reductions in energy use. Energy efficient transportation strategies have the potential to simultaneously reduce oil consumption and greenhouse gas (GHG) emissions. The Transportation Energy Futures (TEF) project examined how the combination of multiple strategies could achieve deep reductions in GHG emissions and petroleum use on the order of 80%. Led by NREL, in collaboration with Argonne National Laboratory, the project's primary goal was to help inform domestic decisions about transportation energy strategies, priorities, and investments, with an emphasis on underexplored opportunities. TEF findings reveal three strategies with the potential to displace most transportation-related petroleum use and GHG emissions: 1) Stabilizing energy use in the transportation sector through efficiency and demand-side approaches. 2) Using additional advanced biofuels. 3) Expanding electric drivetrain technologies.
NASA Astrophysics Data System (ADS)
Kopf, A.
2009-04-01
The Deep-Sea and Sub-Seafloor Frontiers project, DS3F, represents the continuation of the DSF roadmap towards the sustainable management of oceanic resources on a European scale. It will develop strategies for sub-seafloor sampling to contribute to a better understanding of deep-sea and sub-seafloor processes by connecting marine research in life and geosciences, climate and environmental change, as well as socio-economic issues and policy building. We propose to establish a long-lived research approach that considers (i) the need for a sustainable management of the ocean, and particularly the deep sea with enhanced activity (fishery, hydrocarbon exploration), (ii) the necessity to unravel deep-seated geological processes that drive seafloor ecosystems, and (iii) the value of seabed archives for the reconstruction of paleo-environmental conditions and the improved prediction of future climate change. Sub-seafloor drilling and sampling can provide two key components in understanding how deep-sea ecosystems function at present, and how they will respond to global change: (a) an inventory of present subsurface processes and biospheres, and their links to surface ecosystems, including seafloor observation and baseline studies, and (b) a high resolution archive of past variations in environmental conditions and biodiversity. For both components, an international effort is needed to share knowledge, methods and technologies, including mission-specific platforms to increase the efficiency, coverage and accuracy of sub-seafloor sampling and exploration. The deep biosphere has been discovered only within the past two decades and comprises the last major frontier for biological exploration. We lack fundamental knowledge of composition, diversity, distribution and metabolism in sub-seafloor biological communities at Earth's extremes, and their repercussions on seafloor ecosystems and life in the deep sea. There is equally an emerging need to shed light on geodynamic processes fuelling biological activity, and how such processes tie into the emission of geofuels and the formation of hydrocarbons and other resources. In addition, geodynamic processes may cause natural hazards such as earthquake slip, submarine landslides, or tsunamis with profound effects for humans and ecosystems. Their governing principles and potential triggers are poorly understood and often related to the sub-seafloor environment. In summary, the three main research areas in the Integrated Ocean Drilling Program (IODP; see Initial Science Plan www.iodp.org/isp/), i.e. geodynamics, climate and deep biosphere, as well as the goals of DS3F show a strong overlap and suggest an emerging need to join forces. This will result in the most efficient use of sub-seafloor sampling techniques and existing marine infrastructure to study the geosystem and its effects on biosphere and marine ecosystems. The DS3F initiative aims at providing a comprehensive "white paper" for a sustainable use of the oceans, a European Maritime Policy, and a strong link between European mission-specific drilling projects including IODP, IMAGES, ESF-EuroMARC and EC campaigns.
NASA Astrophysics Data System (ADS)
Procaccini, Gabriele; Ruocco, Miriam; Marín-Guirao, Lázaro; Dattolo, Emanuela; Brunet, Christophe; D'Esposito, Daniela; Lauritano, Chiara; Mazzuca, Silvia; Serra, Ilia Anna; Bernardo, Letizia; Piro, Amalia; Beer, Sven; Björk, Mats; Gullström, Martin; Buapet, Pimchanok; Rasmusson, Lina M.; Felisberto, Paulo; Gobert, Sylvie; Runcie, John W.; Silva, João; Olivé, Irene; Costa, Monya M.; Barrote, Isabel; Santos, Rui
2017-02-01
Here we present the results of a multiple organizational level analysis conceived to identify acclimative/adaptive strategies exhibited by the seagrass Posidonia oceanica to the daily fluctuations in the light environment, at contrasting depths. We assessed changes in photophysiological parameters, leaf respiration, pigments, and protein and mRNA expression levels. The results show that the diel oscillations of P. oceanica photophysiological and respiratory responses were related to transcripts and proteins expression of the genes involved in those processes and that there was a response asynchrony between shallow and deep plants probably caused by the strong differences in the light environment. The photochemical pathway of energy use was more effective in shallow plants due to higher light availability, but these plants needed more investment in photoprotection and photorepair, requiring higher translation and protein synthesis than deep plants. The genetic differentiation between deep and shallow stands suggests the existence of locally adapted genotypes to contrasting light environments. The depth-specific diel rhythms of photosynthetic and respiratory processes, from molecular to physiological levels, must be considered in the management and conservation of these key coastal ecosystems.
Onion, C W; Bartzokas, C A
1998-04-01
When attempting to implement evidence-based medicine, such as through clinical guidelines, we often rely on passive educational tactics, for example didactic lectures and bulletins. These methods involve the recipient in relatively superficial processing of information, and any consequent attitude changes can be expected to be short-lived. However, active methods, such as practice-based discussion, should involve recipients in deep processing, with more enduring attitude changes. In this experiment, the aim was to assess the efficacy of an active strategy at promoting deep processing and its effectiveness, relative to a typical passive method, at changing attitudes between groups of GPs over 12 months across an English Health District. All 191 GPs operating from 69 practices in the Wirral Health District of Northwest England were assigned, with minimization of known confounding variables, to three experimental groups: active, passive and control. The groups were shown to have similar learning styles. The objective of the study was to impart knowledge of best management of infections as captured in a series of locally developed clinical guidelines. The passive group GPs were given a copy of the guidelines and were invited to an hour-long lecture event. The GPs in the active group were given a copy of the guidelines and were invited to engage in an hour-long discussion about the guideline content at their own premises. The control group received neither the guidelines nor any educational contact regarding them. Three months before and 12 months after the interventions, all GPs were sent a postal questionnaire on their preferred empirical antibiotic for 10 common bacterial infections. The responses were compared in order to ascertain whether increased knowledge of best clinical practice was evident in each group. Seventy-five per cent (144/191) of GPs responded to the pre-intervention questionnaire, 62% (119/191) post-intervention. Thirty-four per cent (22/64) of GPs in the passive group attended the lecture; 91% (60/66) of the active group engaged in discussion at meetings with the authors. A significantly higher proportion of the active group participants' speaking time, during a sample of four visits, was devoted to verbal indicators of active processing than among the passive group lecture attenders (difference = 55%, Fisher's exact test P = 0.002, OR = 11.5, 95% CI 2.1-113.4). Inter-observer agreement on the classification of the verbal evidence was highly statistically significant for all classes (Pearson's product moment correlation, P < 0.0005, r = +0.893 to +0.999) except repetition (P > 0.05, r = +0.407). Median compliance of responses with the guidelines improved by 2.5% within the control group and 4% within the passive, but by 23% within the active. The difference between the changes in the active and control groups was highly statistically significant at 17.5% (Mann-Whitney test, P = 0.004, 95% CI 6-29%). However, for the 10 infections, the median difference between the changes in the passive and control groups was not significant at 3% (P = 0.75, 95% CI -8 to +12%). The median difference between changes in the active and passive groups was significant at 17% (P = 0.015, 95% CI 7-24%) in favour of the active. An active educational strategy attracted more participation and was more effective at generating deep cognitive processing than a passive strategy.
A large improvement, lasting for at least 12 months, in attitude-compliance with guidelines on the optimal treatment of infections was imparted by the active processing method. A typical passive method was much less popular and had an insignificant impact on attitudes. The findings suggest that initiatives aiming to implement evidence-based guidelines must employ active educational strategies if enduring changes in attitude are to result.
An active learning approach to Bloom's Taxonomy.
Weigel, Fred K; Bonica, Mark
2014-01-01
As educators strive toward improving student learning outcomes, many find it difficult to instill in their students a deep understanding of the material the instructors share. One challenge lies in how to present the material in a meaningful and engaging way that maximizes student understanding and synthesis. By following a simple strategy involving Active Learning across the 3 primary domains of Bloom's Taxonomy (cognitive, affective, and psychomotor), instructors can dramatically improve the quality of the lesson and help students retain and understand the information. By applying our strategy, instructors can engage their students at a deeper level and may even find themselves enjoying the process more.
Tactical Synthesis Of Efficient Global Search Algorithms
NASA Technical Reports Server (NTRS)
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2009-01-01
Algorithm synthesis transforms a formal specification into an efficient algorithm to solve a problem. Algorithm synthesis in Specware combines the formal specification of a problem with a high-level algorithm strategy. To derive an efficient algorithm, a developer must define operators that refine the algorithm by combining the generic operators in the algorithm with the details of the problem specification. This derivation requires skill and a deep understanding of the problem and the algorithmic strategy. In this paper we introduce two tactics to ease this process. The tactics serve a similar purpose to tactics used for determining indefinite integrals in calculus, that is, suggesting possible ways to attack the problem.
High resolution beamforming on large aperture vertical line arrays: Processing synthetic data
NASA Astrophysics Data System (ADS)
Tran, Jean-Marie Q.; Hodgkiss, William S.
1990-09-01
This technical memorandum studies the beamforming of large aperture line arrays deployed vertically in the water column. The work concentrates on the use of high resolution techniques. Two processing strategies are envisioned: (1) full aperture coherent processing, which offers in theory the best processing gain; and (2) subaperture processing, which consists of extracting subapertures from the array and recombining the angular spectra estimated from these subarrays. The conventional beamformer, the minimum variance distortionless response (MVDR) processor, the multiple signal classification (MUSIC) algorithm and the minimum norm method are used in this study. To validate the various processing techniques, the ATLAS normal mode program is used to generate synthetic data which constitute a realistic signal environment. A deep-water, range-independent sound velocity profile environment, characteristic of the North-East Pacific, is studied for two different 128-sensor arrays: a very long one cut for 30 Hz and operating at 20 Hz, and a shorter one cut for 107 Hz and operating at 100 Hz. The simulated sound source is 5 m deep. The full aperture and subaperture processing are implemented with curved and plane wavefront replica vectors. The beamforming results are examined and compared to the ray-theory results produced by the generic sonar model.
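To make the contrast between conventional and adaptive processing concrete, the numpy sketch below forms conventional and MVDR angular spectra for a uniform line array with plane-wave replicas. The array geometry, frequency, source angle and diagonal-loading level are illustrative assumptions, not the memorandum's simulation parameters.

```python
# Minimal sketch of conventional vs. MVDR angular spectra for a 128-element
# uniform line array with plane-wave steering vectors; values are illustrative.
import numpy as np

c, f = 1500.0, 100.0                  # sound speed (m/s), frequency (Hz)
n_sensors, d = 128, c / f / 2         # half-wavelength element spacing
k = 2 * np.pi * f / c

def steering(theta_deg):
    n = np.arange(n_sensors)
    return np.exp(1j * k * d * n * np.sin(np.deg2rad(theta_deg)))

# Simulated snapshots: one source at -10 degrees plus sensor noise.
rng = np.random.default_rng(1)
a_src = steering(-10.0)
amps = rng.normal(size=200) + 1j * rng.normal(size=200)
noise = 0.1 * (rng.normal(size=(n_sensors, 200)) + 1j * rng.normal(size=(n_sensors, 200)))
snaps = a_src[:, None] * amps + noise
R = snaps @ snaps.conj().T / snaps.shape[1]
R_inv = np.linalg.inv(R + 1e-3 * np.trace(R).real / n_sensors * np.eye(n_sensors))

angles = np.linspace(-90, 90, 361)
p_conv = [np.abs(steering(t).conj() @ R @ steering(t)) for t in angles]
p_mvdr = [1.0 / np.abs(steering(t).conj() @ R_inv @ steering(t)) for t in angles]
print(angles[int(np.argmax(p_conv))], angles[int(np.argmax(p_mvdr))])  # near -10
```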
ERIC Educational Resources Information Center
Vos, Nienke; van der Meijden, Henny; Denessen, Eddie
2011-01-01
In this study, the effects of two different interactive learning tasks, in which simple games were included, were described with respect to student motivation and deep strategy use. The research involved 235 students from four elementary schools in The Netherlands. One group of students (N = 128) constructed their own memory "drag and…
Emotional labor actors: a latent profile analysis of emotional labor strategies.
Gabriel, Allison S; Daniels, Michael A; Diefendorff, James M; Greguras, Gary J
2015-05-01
Research on emotional labor focuses on how employees utilize 2 main regulation strategies-surface acting (i.e., faking one's felt emotions) and deep acting (i.e., attempting to feel required emotions)-to adhere to emotional expectations of their jobs. To date, researchers largely have considered how each strategy functions to predict outcomes in isolation. However, this variable-centered perspective ignores the possibility that there are subpopulations of employees who may differ in their combined use of surface and deep acting. To address this issue, we conducted 2 studies that examined surface acting and deep acting from a person-centered perspective. Using latent profile analysis, we identified 5 emotional labor profiles-non-actors, low actors, surface actors, deep actors, and regulators-and found that these actor profiles were distinguished by several emotional labor antecedents (positive affectivity, negative affectivity, display rules, customer orientation, and emotion demands-abilities fit) and differentially predicted employee outcomes (emotional exhaustion, job satisfaction, and felt inauthenticity). Our results reveal new insights into the nature of emotion regulation in emotional labor contexts and how different employees may characteristically use distinct combinations of emotion regulation strategies to manage their emotional expressions at work. (c) 2015 APA, all rights reserved.
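The person-centered analysis described above can be illustrated with a small sketch. Here a Gaussian mixture with BIC-based model selection stands in for dedicated latent profile analysis software, and the surface- and deep-acting scores are simulated; none of this reproduces the study's data or exact estimation procedure.

```python
# Illustrative latent-profile-style analysis of surface- and deep-acting scores.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Hypothetical employees: columns are mean surface-acting and deep-acting scores.
X = np.vstack([
    rng.normal([1.5, 1.5], 0.3, size=(80, 2)),   # e.g. "non-actors"
    rng.normal([4.0, 2.0], 0.3, size=(80, 2)),   # e.g. "surface actors"
    rng.normal([2.0, 4.0], 0.3, size=(80, 2)),   # e.g. "deep actors"
])

# Fit mixtures with 1-6 profiles and pick the number of profiles by BIC.
models = {k: GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(1, 7)}
best_k = min(models, key=lambda k: models[k].bic(X))
profiles = models[best_k].predict(X)
print(best_k, np.bincount(profiles))   # chosen number of profiles and their sizes
```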
Navigation Strategies for Primitive Solar System Body Rendezvous and Proximity Operations
NASA Technical Reports Server (NTRS)
Getzandanner, Kenneth M.
2011-01-01
A wealth of scientific knowledge regarding the composition and evolution of the solar system can be gained through reconnaissance missions to primitive solar system bodies. This paper presents analysis of a baseline navigation strategy designed to address the unique challenges of primitive body navigation. Linear covariance and Monte Carlo error analysis was performed on a baseline navigation strategy using simulated data from a design reference mission (DRM). The objective of the DRM is to approach, rendezvous, and maintain a stable orbit about the near-Earth asteroid 4660 Nereus. The outlined navigation strategy and resulting analyses, however, are not necessarily limited to this specific target asteroid as they may be applicable to a diverse range of mission scenarios. The baseline navigation strategy included simulated data from Deep Space Network (DSN) radiometric tracking and optical image processing (OpNav). Results from the linear covariance and Monte Carlo analyses suggest the DRM navigation strategy is sufficient to approach and perform proximity operations in the vicinity of the target asteroid with meter-level accuracy.
Li, Zhiyong; Wang, Yuezhu; Li, Jinlong; Liu, Fang; He, Liming; He, Ying; Wang, Shenyue
2016-12-01
Sponges host complex symbiotic communities, but to date, the whole picture of the metabolic potential of sponge microbiota remains unclear, particularly the difference between the shallow-water and deep-sea sponge holobionts. In this study, two completely different sponges, the shallow-water sponge Theonella swinhoei from the South China Sea and the deep-sea sponge Neamphius huxleyi from the Indian Ocean, were selected to compare their whole symbiotic communities and metabolic potential, particularly in element transformation. Phylogenetically diverse bacteria, archaea, fungi, and algae were detected in both the shallow-water sponge T. swinhoei and the deep-sea sponge N. huxleyi, and different microbial community structures were indicated between these two sponges. Metagenome-based gene abundance analysis indicated that, though the two sponge microbiota have similar core functions, they showed different potential strategies in detailed metabolic processes, e.g., in the transformation and utilization of carbon, nitrogen, phosphorus, and sulfur by corresponding microbial symbionts. This study provides insight into the putative metabolic potentials of the microbiota associated with the shallow-water and deep-sea sponges at the whole community level, extending our knowledge of the sponge microbiota's functions, sponge-microbe associations, and the adaptation of sponge microbiota to the marine environment.
An improved multi-domain convolution tracking algorithm
NASA Astrophysics Data System (ADS)
Sun, Xin; Wang, Haiying; Zeng, Yingsen
2018-04-01
Along with the wide application of deep learning in the field of computer vision, deep learning has become a mainstream direction in the field of object tracking. The tracking algorithm in this paper is based on an improved multi-domain convolutional neural network, which is pre-trained on the VOT video set using a multi-domain training strategy. During online tracking, the network evaluates candidate targets sampled with a Gaussian distribution from the vicinity of the target predicted in the previous frame, and the candidate with the highest score is taken as the prediction for the current frame. A bounding box regression model is introduced to bring the predicted target closer to the ground-truth box of the test set. A grouping-update strategy is used to extract and select useful update samples in each frame, which effectively prevents overfitting and adapts to changes in both the target and the environment. To improve the speed of the algorithm while maintaining performance, the number of candidate targets is adjusted dynamically with a self-adaptive parameter strategy. Finally, the algorithm is tested on the OTB set and compared with other high-performance tracking algorithms; success-rate and precision plots illustrate the outstanding performance of the proposed tracker.
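The Gaussian candidate-sampling step at the heart of this kind of tracker can be sketched in a few lines. The network scoring is stubbed out, the noise parameters and top-candidate averaging are illustrative choices, and the bounding box regression step is only noted in a comment; this is not the paper's implementation.

```python
# Hedged sketch of per-frame candidate sampling and selection for tracking.
import numpy as np

rng = np.random.default_rng(3)

def sample_candidates(prev_box, n=256, trans_sigma=0.1, scale_sigma=0.05):
    """prev_box: (cx, cy, w, h). Translation/scale noise values are illustrative."""
    cx, cy, w, h = prev_box
    dx = rng.normal(0, trans_sigma * w, n)
    dy = rng.normal(0, trans_sigma * h, n)
    ds = np.exp(rng.normal(0, scale_sigma, n))        # log-normal scale jitter
    return np.stack([cx + dx, cy + dy, w * ds, h * ds], axis=1)

def score_candidates(frame, boxes):
    # Placeholder for the CNN's positive-class score of each cropped candidate.
    return rng.normal(size=len(boxes))

def track_step(frame, prev_box):
    boxes = sample_candidates(prev_box)
    scores = score_candidates(frame, boxes)
    top = np.argsort(scores)[-5:]                     # average the best candidates
    best = boxes[top].mean(axis=0)
    return best                                       # bbox regression would refine this

print(track_step(frame=None, prev_box=(320.0, 240.0, 80.0, 60.0)))
```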
Adult stem cell lineage tracing and deep tissue imaging
Fink, Juergen; Andersson-Rolf, Amanda; Koo, Bon-Kyoung
2015-01-01
Lineage tracing is a widely used method for understanding cellular dynamics in multicellular organisms during processes such as development, adult tissue maintenance, injury repair and tumorigenesis. Advances in tracing or tracking methods, from light microscopy-based live cell tracking to fluorescent label-tracing with two-photon microscopy, together with emerging tissue clearing strategies and intravital imaging approaches have enabled scientists to decipher adult stem and progenitor cell properties in various tissues and in a wide variety of biological processes. Although technical advances have enabled time-controlled genetic labeling and simultaneous live imaging, a number of obstacles still need to be overcome. In this review, we aim to provide an in-depth description of the traditional use of lineage tracing as well as current strategies and upcoming new methods of labeling and imaging. [BMB Reports 2015; 48(12): 655-667] PMID:26634741
How to improve healthcare? Identify, nurture and embed individuals and teams with "deep smarts".
Eljiz, Kathy; Greenfield, David; Molineux, John; Sloan, Terry
2018-03-19
Purpose: Unlocking and transferring skills and capabilities in individuals to the teams they work within, and across, is the key to positive organisational development and improved patient care. Using the "deep smarts" model, the purpose of this paper is to examine these issues. Design/methodology/approach: The "deep smarts" model is described, reviewed and proposed as a way of transferring knowledge and capabilities within healthcare organisations. Findings: Effective healthcare delivery is achieved through, and continues to require, integrative care involving numerous, dispersed service providers. In the space of overlapping organisational boundaries, there is a need for "deep smarts" people who act as "boundary spanners". These are critical integrative, networking roles employing clinical, organisational and people skills across multiple settings. Research limitations/implications: Studies evaluating the barriers and enablers to the application of the deep smarts model and the 13 knowledge development strategies proposed are required. Such future research will empirically and contemporaneously ground our understanding of organisational development in modern complex healthcare settings. Practical implications: An organisation with "deep smarts" people - in managerial, auxiliary and clinical positions - has a greater capacity for integration and achieving improved patient-centred care. Originality/value: In total, 13 developmental strategies, to transfer individual capabilities into organisational capability, are proposed. These strategies are applicable to different contexts and challenges faced by individuals and teams in complex healthcare organisations.
NASA Astrophysics Data System (ADS)
Hadzidaki, Pandora
2008-01-01
Empirical studies persistently indicate that the usual explanatory strategies used in quantum mechanics (QM) instruction fail, in general, to yield understanding. In this study, we propose an instructional intervention, which: (a) incorporates into its subject matter a critical comparison of QM scientific content with the fundamental epistemological and ontological commitments of the prominent philosophical theories of explanation, a weak form of which we meet in QM teaching; (b) illuminates the reasons of their failure in the quantum domain; and (c) implements an explanatory strategy highly inspired by the epistemological pathways through which, during the birth-process of QM, science has gradually reached understanding. This strategy, an inherent element of which is the meta-cognitive and meta-scientific thinking, aims at leading learners not only to an essential understanding of QM worldview, but to a deep insight into the ‘Nature of Science’ as well.
de la Fuente, Jesús; Fernández-Cabezas, María; Cambil, Matilde; Vera, Manuel M.; González-Torres, Maria Carmen; Artuch-Garde, Raquel
2017-01-01
The aim of the present research was to analyze the linear relationship between resilience (a meta-motivational variable), learning approaches (meta-cognitive variables), strategies for coping with academic stress (a meta-emotional variable) and academic achievement in the context of university academic stress. A total of 656 students from a southern university in Spain completed different questionnaires: a resiliency scale, a coping strategies scale, and a study process questionnaire. Correlations and structural modeling were used for data analyses. There was a positive and significant linear association in which resilience was related to, and predicted, the deep learning approach and problem-centered coping strategies. In a complementary way, these variables positively and significantly predicted the academic achievement of university students. These results establish a consistent and differential pattern of association and prediction among the variables studied. Implications for future research are set out. PMID:28713298
Liu, Dan; Yan, Xu; Zhuo, Shengnan; Si, Mengying; Liu, Mingren; Wang, Sheng; Ren, Lili; Chai, Liyuan; Shi, Yan
2018-06-01
Lignin depolymerization is a challenging process in biorefinery due to the recalcitrant and complex structure of lignin. This challenge was herein addressed via elaborating a new strategy of combining the bacterial strain Pandoraea sp. B-6 (hereafter B-6) with a deep eutectic solvent (DES) to pretreat rice straw (RS). In this approach, DES effectively depolymerized lignin yet easily caused sugar loss under severe conditions. B-6 not only overcame the obstacle of lignin droplets, but also significantly improved enzymatic digestibility. After B-6 assisted DES pretreatment, the reducing sugar yield increases by 0.3-1.5 times over DES pretreatment and 0.9-3.1 times over the untreated RS. Furthermore, a "cornhusking" mechanism explaining the improvement of the enzymatic digestibility by B-6 was suggested based on physicochemical characterizations of the untreated and pretreated RS. The findings provided a comprehensive perspective to establish a DES-microbial process for lignocellulose pretreatment. Copyright © 2018 Elsevier Ltd. All rights reserved.
A Spatiotemporal Prediction Framework for Air Pollution Based on Deep RNN
NASA Astrophysics Data System (ADS)
Fan, J.; Li, Q.; Hou, J.; Feng, X.; Karimian, H.; Lin, S.
2017-10-01
Time series data in practical applications always contain missing values due to sensor malfunction, network failure, outliers, etc. In order to handle missing values in time series, as well as the failure of conventional machine learning models to consider temporal properties, we propose a spatiotemporal prediction framework based on missing value processing algorithms and a deep recurrent neural network (DRNN). By using a missing tag and missing interval to represent time series patterns, we implement three different missing value fixing algorithms, which are further incorporated into a deep neural network that consists of LSTM (Long Short-term Memory) layers and fully connected layers. Real-world air quality and meteorological datasets (Jingjinji area, China) are used for model training and testing. Deep feed forward neural networks (DFNN) and gradient boosting decision trees (GBDT) are trained as baseline models against the proposed DRNN. The performance of the three missing value fixing algorithms, as well as of the different machine learning models, is evaluated and analysed. Experiments show that the proposed DRNN framework outperforms both DFNN and GBDT, therefore validating the capacity of the proposed framework. Our results also provide useful insights for better understanding of different strategies that handle missing values.
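A minimal sketch of the missing-tag/missing-interval idea is given below: each time step carries the (already imputed) values, a binary observation mask and a count of consecutive missing steps, and an LSTM followed by fully connected layers makes the prediction. The sizes, the imputation rule and the interval definition are illustrative assumptions, not the paper's exact algorithms.

```python
# Hedged PyTorch sketch of a missing-value-aware deep recurrent predictor.
import torch
import torch.nn as nn

class MissingAwareDRNN(nn.Module):
    def __init__(self, n_features=6, hidden=64):
        super().__init__()
        # input = value + missing tag + missing interval for every feature
        self.lstm = nn.LSTM(input_size=3 * n_features, hidden_size=hidden,
                            num_layers=2, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, values, mask):
        # values: (B, T, F) with gaps already filled (e.g. last observation carried
        # forward); mask: (B, T, F) with 1 where the original value was observed.
        interval = torch.zeros_like(mask)
        for t in range(1, mask.shape[1]):
            # consecutive missing steps, reset to 0 at each observed value
            interval[:, t] = (interval[:, t - 1] + 1) * (1 - mask[:, t])
        x = torch.cat([values, mask, interval], dim=-1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])          # predict the next pollutant value

model = MissingAwareDRNN()
values = torch.randn(8, 24, 6)
mask = (torch.rand(8, 24, 6) > 0.2).float()
print(model(values, mask).shape)              # torch.Size([8, 1])
```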
fMRI differences in encoding and retrieval of pictures due to encoding strategy in the elderly.
Mandzia, Jennifer L; Black, Sandra E; McAndrews, Mary Pat; Grady, Cheryl; Graham, Simon
2004-01-01
Functional MRI (fMRI) was used to examine the neural correlates of depth of processing during encoding and retrieval of photographs in older normal volunteers (n = 12). Separate scans were run during deep (natural vs. man-made decision) and shallow (color vs. black-and-white decision) encoding and during old/new recognition of pictures initially presented in one of the two encoding conditions. A baseline condition consisting of a scrambled, color photograph was used as a contrast in each scan. Recognition accuracy was greater for the pictures on which semantic decisions were made at encoding, consistent with the expected levels of processing effect. A mixed-effects model was used to compare fMRI differences between conditions (deep-baseline vs. shallow-baseline) in both encoding and retrieval. For encoding, this contrast revealed greater activation associated with deep encoding in several areas, including the left parahippocampal gyrus (PHG), left middle temporal gyrus, and left anterior thalamus. Increased left hippocampal, right dorsolateral, and inferior frontal activations were found for recognition of items that had been presented in the deep relative to the shallow encoding condition. We speculate that the modulation of activity in these regions by the depth of processing manipulation shows that these regions support effective encoding and successful retrieval. A direct comparison between encoding and retrieval revealed greater activation during retrieval in the medial temporal (right hippocampus and bilateral PHG), anterior cingulate, and bilateral prefrontal (inferior and dorsolateral). Most notably, greater right posterior PHG was found during encoding compared to recognition. Focusing on the medial temporal lobe (MTL) region, our results suggest a greater involvement of both anterior MTL and prefrontal regions in retrieval compared to encoding. Copyright 2003 Wiley-Liss, Inc.
Exploring geo-tagged photos for land cover validation with deep learning
NASA Astrophysics Data System (ADS)
Xing, Hanfa; Meng, Yuan; Wang, Zixuan; Fan, Kaixuan; Hou, Dongyang
2018-07-01
Land cover validation plays an important role in the process of generating and distributing land cover thematic maps, which is usually implemented through costly sample interpretation of remotely sensed images or field surveys. With an increasing availability of geo-tagged landscape photos, automatic photo recognition methodologies, e.g., deep learning, can be effectively utilised for land cover applications. However, they have hardly been utilised in validation processes, as challenges remain in sample selection and classification for highly heterogeneous photos. This study proposed an approach to employ geo-tagged photos for land cover validation by using deep learning technology. The approach first identified photos automatically based on the VGG-16 network. Then, samples for validation were selected and further classified by considering photo distribution and classification probabilities. The implementations were conducted for the validation of the GlobeLand30 land cover product in a heterogeneous area, western California. Experimental results showed promise for land cover validation, given that GlobeLand30 achieved an overall accuracy of 83.80% with classified samples, which was close to the validation result of 80.45% based on visual interpretation. Additionally, the performances of deep learning based on ResNet-50 and AlexNet were also quantified, revealing no substantial differences in final validation results. The proposed approach ensures geo-tagged photo quality, and supports the sample classification strategy by considering photo distribution, with an accuracy improvement from 72.07% to 79.33% compared with solely considering the single nearest photo. Consequently, the presented approach demonstrates the feasibility of deep learning technology for identifying land cover information in geo-tagged photos, and has great potential to support and improve the efficiency of land cover validation.
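The photo-recognition step can be illustrated with a short sketch in which an ImageNet-pretrained VGG-16 backbone is given a new classification head for land-cover classes and applied to a single geo-tagged photo. The class list and image path are hypothetical, the weights argument depends on the installed torchvision version, and fine-tuning of the new head is assumed to happen elsewhere; this is not the study's pipeline.

```python
# Hedged sketch: pretrained VGG-16 backbone with a land-cover classification head.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

LAND_COVER = ["cultivated land", "forest", "grassland", "shrubland", "wetland",
              "water", "artificial surface", "bare land"]   # hypothetical classes

model = models.vgg16(weights="IMAGENET1K_V1")            # ImageNet-pretrained backbone
model.classifier[6] = nn.Linear(4096, len(LAND_COVER))   # new head (to be fine-tuned)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("photo_34.05N_118.24W.jpg")).unsqueeze(0)  # hypothetical file
with torch.no_grad():
    probs = torch.softmax(model(img), dim=1)[0]
print(LAND_COVER[int(probs.argmax())], float(probs.max()))
```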
ERIC Educational Resources Information Center
Fenwick, Lisl; Humphrey, Sally; Quinn, Marie; Endicott, Michele
2014-01-01
The development of deep understanding of theoretical knowledge is an essential element of successful tertiary programs that prepare individuals to enter professions. This study investigates the extent to which an emphasis on the application of knowledge within curriculum design, teaching strategies and assessment methods developed deep knowledge…
DeepID-Net: Deformable Deep Convolutional Neural Networks for Object Detection.
Ouyang, Wanli; Zeng, Xingyu; Wang, Xiaogang; Qiu, Shi; Luo, Ping; Tian, Yonglong; Li, Hongsheng; Yang, Shuo; Wang, Zhe; Li, Hongyang; Loy, Chen Change; Wang, Kun; Yan, Junjie; Tang, Xiaoou
2016-07-07
In this paper, we propose deformable deep convolutional neural networks for generic object detection. This new deep learning object detection framework has innovations in multiple aspects. In the proposed new deep architecture, a new deformation constrained pooling (def-pooling) layer models the deformation of object parts with geometric constraints and penalties. A new pre-training strategy is proposed to learn feature representations that are more suitable for the object detection task and have good generalization capability. By changing the net structures and training strategies, and by adding and removing key components in the detection pipeline, a set of models with large diversity is obtained, which significantly improves the effectiveness of model averaging. The proposed approach improves the mean average precision obtained by RCNN [16], which was the state of the art, from 31% to 50.3% on the ILSVRC2014 detection test set. It also outperforms the winner of ILSVRC2014, GoogLeNet, by 6.1%. Detailed component-wise analysis is also provided through extensive experimental evaluation, providing a global view of the deep learning object detection pipeline.
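The def-pooling idea can be illustrated with a minimal sketch: the pooled response of an object part is the best score it can achieve in a neighbourhood of its anchor position, after subtracting a penalty that grows with the displacement. The quadratic penalty and window size below are illustrative assumptions, not the paper's learned parameters.

```python
# Minimal NumPy illustration of deformation-constrained pooling (not the
# authors' layer): take the maximum, over candidate placements in a window,
# of the part score minus a quadratic deformation penalty.
import numpy as np

def def_pool(score_map, anchor, window=3, penalty_weight=0.1):
    """score_map: 2-D array of part scores; anchor: (row, col) default position."""
    r0, c0 = anchor
    best = -np.inf
    for dr in range(-window, window + 1):
        for dc in range(-window, window + 1):
            r, c = r0 + dr, c0 + dc
            if 0 <= r < score_map.shape[0] and 0 <= c < score_map.shape[1]:
                penalty = penalty_weight * (dr * dr + dc * dc)
                best = max(best, score_map[r, c] - penalty)
    return best
```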
Sun, Jinyang; Wang, Junsheng; Pan, Xinxiang; Yuan, Haichao
2015-01-01
Ships’ ballast water can carry aquatic organisms into foreign ecosystems. In our previous studies, a concept using ion exchange membrane electrolysis to treat ballast water was proven. Building on the developed ballast water treatment system, a new strategy for inactivating algae is proposed in which small doses of electrolytic products are injected over multiple trials. To demonstrate the performance of the new strategy, contrast experiments between the new strategy and routine processes were conducted. Four algae species, including Chlorella vulgaris, Platymonas subcordiformis, Prorocentrum micans and Karenia mikimotoi, were chosen as samples. Different experimental parameters were studied, including the number of injections and the doses of electrolytic products. Compared with the conventional single-injection method, the mortality rate time (MRT) and the available chlorine concentration can be reduced by up to about 84% and 40%, respectively, under the new strategy. The proposed approach has great potential in practical ballast water treatment. Furthermore, the strategy also provides deeper insight into the mechanisms of algal tolerance. PMID:26068239
DEEP ATTRACTOR NETWORK FOR SINGLE-MICROPHONE SPEAKER SEPARATION.
Chen, Zhuo; Luo, Yi; Mesgarani, Nima
2017-03-01
Despite the overwhelming success of deep learning in various speech processing tasks, the problem of separating simultaneous speakers in a mixture remains challenging. Two major difficulties in such systems are the arbitrary source permutation and the unknown number of sources in the mixture. We propose a novel deep learning framework for single-channel speech separation by creating attractor points in the high-dimensional embedding space of the acoustic signals which pull together the time-frequency bins corresponding to each source. Attractor points in this study are created by finding the centroids of the sources in the embedding space, which are subsequently used to determine the similarity of each bin in the mixture to each source. The network is then trained to minimize the reconstruction error of each source by optimizing the embeddings. The proposed model differs from prior works in that it implements end-to-end training and does not depend on the number of sources in the mixture. Two strategies are explored at test time, K-means and fixed attractor points, where the latter requires no post-processing and can be implemented in real time. We evaluated our system on the Wall Street Journal dataset and show a 5.49% improvement over the previous state-of-the-art methods.
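A minimal NumPy sketch of the attractor mechanism (an illustration, not the authors' implementation): attractors are formed as per-source centroids of the embedded time-frequency bins, and soft masks follow from a softmax over each bin's similarity to the attractors. The bin-to-source assignments would come from the ideal binary mask at training time or from K-means at test time.

```python
# Minimal sketch of attractor formation and mask estimation (assumed shapes).
import numpy as np

def attractor_masks(embeddings, assignments, n_sources):
    """embeddings: (n_bins, emb_dim); assignments: (n_bins,) source indices."""
    attractors = np.stack([embeddings[assignments == s].mean(axis=0)
                           for s in range(n_sources)])          # (n_sources, emb_dim)
    similarity = embeddings @ attractors.T                       # (n_bins, n_sources)
    similarity -= similarity.max(axis=1, keepdims=True)          # numerical stability
    masks = np.exp(similarity)
    return masks / masks.sum(axis=1, keepdims=True)              # softmax over sources
```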
Metaproteomics of aquatic microbial communities in a deep and stratified estuary.
Colatriano, David; Ramachandran, Arthi; Yergeau, Etienne; Maranger, Roxane; Gélinas, Yves; Walsh, David A
2015-10-01
Here we harnessed the power of metaproteomics to assess the metabolic diversity and function of stratified aquatic microbial communities in the deep and expansive Lower St. Lawrence Estuary, located in eastern Canada. Vertical profiling of the microbial communities through the stratified water column revealed differences in metabolic lifestyles and in carbon and nitrogen processing pathways. In productive surface waters, we identified heterotrophic populations involved in the processing of high and low molecular weight organic matter from both terrestrial (e.g. cellulose and xylose) and marine (e.g. organic compatible osmolytes) sources. In the less productive deep waters, chemosynthetic production coupled to nitrification by MG-I Thaumarchaeota and Nitrospina appeared to be a dominant metabolic strategy. Similar to other studies of the coastal ocean, we identified methanol oxidation proteins originating from the common OM43 marine clade. However, we also identified a novel lineage of methanol-oxidizers specifically in the particle-rich bottom (i.e. nepheloid) layer. Membrane transport proteins assigned to the uncultivated MG-II Euryarchaeota were also specifically detected in the nepheloid layer. In total, these results revealed strong vertical structure of microbial taxa and metabolic activities, as well as the presence of specific "nepheloid" taxa that may contribute significantly to coastal ocean nutrient cycling. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Wanke, Stefan; Granados Mendoza, Carolina; Müller, Sebastian; Paizanni Guillén, Anna; Neinhuis, Christoph; Lemmon, Alan R; Lemmon, Emily Moriarty; Samain, Marie-Stéphanie
2017-12-01
Recalcitrant relationships are characterized by very short internodes that can be found among shallow and deep phylogenetic scales all over the tree of life. Adding large amounts of presumably informative sequences, while decreasing systematic error, has been suggested as a possible approach to increase phylogenetic resolution. The development of enrichment strategies, coupled with next generation sequencing, resulted in a cost-effective way to facilitate the reconstruction of recalcitrant relationships. By applying the anchored hybrid enrichment (AHE) genome partitioning strategy to Aristolochia using a universal angiosperm probe set, we obtained 231-233 out of 517 single or low copy nuclear loci originally contained in the enrichment kit, resulting in a total alignment length of 154,756 bp to 160,150 bp. Since Aristolochia (Piperales; magnoliids) is distantly related to any angiosperm species whose genome has been used for the plant AHE probe design (Amborella trichopoda being the closest), it serves as a proof of universality for this probe set. Aristolochia comprises approximately 500 species grouped in several clades (OTUs), whose relationships to each other are partially unknown. Previous phylogenetic studies have shown that these lineages branched deep in time and in quick succession, seen as short-deep internodes. Short-shallow internodes are also characteristic of some Aristolochia lineages such as Aristolochia subsection Pentandrae, a clade of presumably recent diversification. This subsection is here included to test the performance of AHE at species level. Filtering and subsampling loci using the phylogenetic informativeness method resolves several recalcitrant phylogenetic relationships within Aristolochia. By assuming different ploidy levels during bioinformatics processing of raw data, first hints are obtained that polyploidization contributed to the evolution of Aristolochia. Phylogenetic results are discussed in the light of current systematics and morphology. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Referred pain and cutaneous responses from deep tissue electrical pain stimulation in the groin.
Aasvang, E K; Werner, M U; Kehlet, H
2015-08-01
Persistent postherniotomy pain is located around the scar and external inguinal ring and is often described as deep rather than cutaneous, with frequent complaints of pain in adjacent areas. Whether this pain is due to local pathology or referred/projected pain is unknown, hindering mechanism-based treatment. Deep tissue electrical pain stimulation by needle electrodes in the right groin (rectus muscle, ilioinguinal/iliohypogastric nerve and perispermatic cord) was combined with assessment of referred/projected pain and the cutaneous heat pain threshold (HPT) at three prespecified areas (both groins and the lower right arm) in 19 healthy subjects. The assessment was repeated 10 days later to assess the reproducibility of individual responses. Deep electrical stimulation elicited pain at the stimulation site in all subjects, and in 15 subjects, pain from areas outside the stimulation area was reported, with 90-100% having the same response on both days, depending on the location. Deep pain stimulation significantly increased the cutaneous HPT (P<0.014). Individual HPT responses before and during deep electrical pain stimulation were significantly correlated (ρ>0.474, P≤0.040) at the two test days for the majority of test areas. Our results corroborate a systematic relationship between deep pain and changes in cutaneous nociception. The individual referred/projected pain patterns and cutaneous responses are variable, but reproducible, supporting individual differences in anatomy and sensory processing. Future studies investigating the responses to deep tissue electrical stimulation in persistent postherniotomy pain patients may advance our understanding of underlying pathophysiological mechanisms and strategies for treatment and prevention. ClinicalTrials.gov (NCT01701427). © The Author 2015. Published by Oxford University Press on behalf of the British Journal of Anaesthesia. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Effect of lexical proficiency on reading strategies used for shallow and deep orthographies.
Jeon, Hyeon-Ae
2012-12-05
The aim of the present study was to explore how different levels of proficiency in deep orthography (DO) influence the reading strategies used for sentences containing both shallow orthographies and DO, and to examine the neural correlates involved. High-proficiency participants, who depend on rapid and direct semantic retrieval by the lexical route, activated the anterior cingulate cortex, middle frontal, and fusiform gyri. Low-proficiency participants, who rely on the sublexical route, activated inferior parietal lobule and inferior frontal gyrus. These findings suggest that level of proficiency in DO modulates the selection of specific reading strategies, and that the neural pathways underlying these strategies are separately laid out in the cortical areas.
Understanding coping with cancer: how can qualitative research help?
Chittem, Mahati
2014-01-01
Research in psycho-oncology investigates the psycho-social and emotional aspects of cancer and how this is related to health, well-being and overall patient care. Coping with cancer is a prime focus for researchers owing to its impact on patients' psychological processing and life in general. Research so far has focused mainly on quantitative study designs such as questionnaires to examine the coping strategies used by cancer patients. However, in order to gain a rich and deep understanding of the reasons, processes and types of strategies that patients use to deal with cancer, qualitative study designs are necessary. Few studies have used qualitative designs such as semi-structured interviews to explore coping with cancer. The current paper aims to review the suitability and benefits of using qualitative research designs to understand coping with cancer with the help of some key literature in psycho-oncology research.
Deep Sleep and Parietal Cortex Gene Expression Changes Are Related to Cognitive Deficits with Age
Buechel, Heather M.; Popovic, Jelena; Searcy, James L.; Porter, Nada M.; Thibault, Olivier; Blalock, Eric M.
2011-01-01
Background Age-related cognitive deficits negatively affect quality of life and can presage serious neurodegenerative disorders. Despite sleep disruption's well-recognized negative influence on cognition, and its prevalence with age, surprisingly few studies have tested sleep's relationship to cognitive aging. Methodology We measured sleep stages in young adult and aged F344 rats during inactive (enhanced sleep) and active (enhanced wake) periods. Animals were behaviorally characterized on the Morris water maze and gene expression profiles of their parietal cortices were taken. Principal Findings Water maze performance was impaired, and inactive period deep sleep was decreased with age. However, increased deep sleep during the active period was most strongly correlated to maze performance. Transcriptional profiles were strongly associated with behavior and age, and were validated against prior studies. Bioinformatic analysis revealed increased translation and decreased myelin/neuronal pathways. Conclusions The F344 rat appears to serve as a reasonable model for some common sleep architecture and cognitive changes seen with age in humans, including the cognitively disrupting influence of active period deep sleep. Microarray analysis suggests that the processes engaged by this sleep are consistent with its function. Thus, active period deep sleep appears temporally misaligned but mechanistically intact, leading to the following: first, aged brain tissue appears capable of generating the slow waves necessary for deep sleep, albeit at a weaker intensity than in young. Second, this activity, presented during the active period, seems disruptive rather than beneficial to cognition. Third, this active period deep sleep may be a cognitively pathologic attempt to recover age-related loss of inactive period deep sleep. Finally, therapeutic strategies aimed at reducing active period deep sleep (e.g., by promoting active period wakefulness and/or inactive period deep sleep) may be highly relevant to cognitive function in the aging community. PMID:21483696
Blackboxing: social learning strategies and cultural evolution
Heyes, Cecilia
2016-01-01
Social learning strategies (SLSs) enable humans, non-human animals, and artificial agents to make adaptive decisions about when they should copy other agents, and who they should copy. Behavioural ecologists and economists have discovered an impressive range of SLSs, and explored their likely impact on behavioural efficiency and reproductive fitness while using the ‘phenotypic gambit’; ignoring, or remaining deliberately agnostic about, the nature and origins of the cognitive processes that implement SLSs. Here I argue that this ‘blackboxing' of SLSs is no longer a viable scientific strategy. It has contributed, through the ‘social learning strategies tournament', to the premature conclusion that social learning is generally better than asocial learning, and to a deep puzzle about the relationship between SLSs and cultural evolution. The puzzle can be solved by recognizing that whereas most SLSs are ‘planetary'—they depend on domain-general cognitive processes—some SLSs, found only in humans, are ‘cook-like'—they depend on explicit, metacognitive rules, such as copy digital natives. These metacognitive SLSs contribute to cultural evolution by fostering the development of processes that enhance the exclusivity, specificity, and accuracy of social learning. PMID:27069046
ERIC Educational Resources Information Center
Abdul Razzak, Nina
2016-01-01
Highly-traditional education systems that mainly offer what is known as "direct instruction" usually result in graduates with a surface approach to learning rather than a deep one. What is meant by deep-learning is learning that involves critical analysis, the linking of ideas and concepts, creative problem solving, and application…
ERIC Educational Resources Information Center
Xie, Ying; Ke, Fengfeng; Sharma, Priya
2010-01-01
Deep cognitive thinking refers to a learner's purposeful and conscious manipulation of ideas toward meaningful learning. Strategies such as journaling/blogging and peer feedback have been found to promote deep thinking. This article reports a research study about the effects of two different blog leader styles on students' deep thinking as…
Analysis of hyper-baric biofilms on engineering surfaces formed in the Deep Sea
NASA Astrophysics Data System (ADS)
Meier, A.; Tsaloglou, N. M.; Connelly, D.; Keevil, B.; Mowlem, M.
2012-04-01
Long-term monitoring of the environment is essential to our understanding of global processes, such as global warming, and their impact. As biofilm formation occurs after only short deployment periods in the marine environment, it is a major problem in long-term operation of environmental sensors. This makes the development of anti-fouling strategies for in situ sensors critical to their function. The effects on sensors can range from measurement drift, which can be compensated, to blockage of channels and material degradation, rendering them inoperative. In general, the longer the deployment period the more severe the effects of the biofouling become. Until now, biofilm research has focused mainly on the eutrophic and euphotic zones of the oceans. Hyper-baric biofilms are poorly understood due to difficulties in experimental setup and the assumption that biofouling in these oligotrophic regions could be regarded as insignificant. Our study shows significant biofilm formation occurs in the deep sea. We deployed a variety of materials, typically used in engineering structures, on a 4500 metre deep mooring during a cruise to the Cayman Trough, for 10 days. The materials were clear plain glass, poly-methyl methacrylate (PMMA), Delrin™, and copper, a known antifouling agent. The biofilms were studied by fluorescence microscopy and molecular analysis. For microscopy the nucleic acid stain SYTO 9 was used, and surface coverage was quantified by using a custom MATLAB™ program. Further molecular analyses, including UV-Vis spectrometric quantification of DNA, nucleic acid amplification using Polymerase Chain Reaction (PCR), and Denaturing Gradient Gel Electrophoresis (DGGE), were utilised for the analysis of the microbial community composition of these biofilms. Six 16S/18S universal primer sets representative of the three kingdoms, Archaea, Bacteria, and Eukarya, were used for the PCR and DGGE. Preliminary results from fluorescence microscopy showed that the biofilm coverage on copper and PMMA was a third of that on Delrin™ and less than half the amount seen on glass surfaces. PCR showed that the microorganisms in these biofilms were predominantly Archaea. DGGE conditions were optimised for the separation of PCR products from the three kingdoms. Sequencing data are still being processed. These results show that mitigation strategies are essential for any kind of long-term deployment of remote sensors, even in the deep sea. Such strategies could include, for example, chlorine production through the electrolysis of seawater, back-flushing sensor channels with various chemicals, thin films of nickel/copper/zinc alloys in various ratios as surface treatments, quorum-sensing and furanone treatments, and micro-structuring of surfaces.
The relation between cognitive and metacognitive strategic processing during a science simulation.
Dinsmore, Daniel L; Zoellner, Brian P
2018-03-01
This investigation was designed to uncover the relations between students' cognitive and metacognitive strategies used during a complex climate simulation. While cognitive strategy use during science inquiry has been studied, the factors related to this strategy use, such as concurrent metacognition, prior knowledge, and prior interest, have not been investigated in a multidimensional fashion. This study addressed current issues in strategy research by examining not only how metacognitive, surface-level, and deep-level strategies influence performance, but also how these strategies related to each other during a contextually relevant science simulation. The sample for this study consisted of 70 undergraduates from a mid-sized Southeastern university in the United States. These participants were recruited from both physical and life science (e.g., biology) and education majors to obtain a sample with variance in terms of their prior knowledge, interest, and strategy use. Participants completed measures of prior knowledge and interest about global climate change. Then, they were asked to engage in an online climate simulator for up to 30 min while thinking aloud. Finally, participants were asked to answer three outcome questions about global climate change. Results indicated a poor fit for the statistical model of the frequency and level of processing predicting performance. However, a statistical model that independently examined the influence of metacognitive monitoring and control of cognitive strategies showed a very strong relation between the metacognitive and cognitive strategies. Finally, smallest space analysis results provided evidence that strategy use may be better captured in a multidimensional fashion, particularly with attention paid towards the combination of strategies employed. Conclusions drawn from the evidence point to the need for more dynamic, multidimensional models of strategic processing that account for the patterns of optimal and non-optimal strategy use. Additionally, analyses that can capture these complex patterns need to be further explored. © 2017 The British Psychological Society.
A concept for non-invasive temperature measurement during injection moulding processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hopmann, Christian; Spekowius, Marcel, E-mail: spekowius@ikv.rwth-aachen.de; Wipperfürth, Jens
2016-03-09
Current models of the injection moulding process insufficiently consider the thermal interactions between melt, solidified material and the mould. A detailed description requires a deep understanding of the underlying processes and a precise observation of the temperature. Because today's measurement concepts do not allow a non-invasive analysis, it is necessary to find new measurement techniques for temperature measurements during the manufacturing process. In this work we present the idea of a set-up for a tomographic ultrasound measurement of the temperature field inside a plastics melt. The goal is to identify a concept that can be installed on a specialized mould for the injection moulding process. The challenges are discussed and the design of a prototype is shown. Special attention is given to the spatial arrangement of the sensors. Besides the design of a measurement set-up, a reconstruction strategy for the ultrasound signals is required. We present an approach in which an image processing algorithm can be used to calculate a temperature distribution from the ultrasound scans. We discuss a reconstruction strategy in which the ultrasound signals are converted into a spatial temperature distribution by using pvT curves that are obtained by dilatometer measurements.
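As a simplified illustration of the reconstruction idea (assumptions only: a single straight ray path and a monotonic calibration curve; the actual set-up is tomographic), a measured ultrasound time-of-flight can be converted into a mean sound speed and then into a temperature estimate by interpolating a calibration curve such as one derived from dilatometer pvT measurements.

```python
# Minimal, illustrative sketch: time-of-flight -> mean sound speed -> temperature,
# via interpolation of an assumed calibration curve (not the authors' algorithm).
import numpy as np

def temperature_from_tof(tof_s, path_length_m, calib_speed_mps, calib_temp_c):
    """calib_speed_mps / calib_temp_c: matched arrays forming the calibration curve."""
    speed = path_length_m / tof_s                        # mean sound speed along the path
    order = np.argsort(calib_speed_mps)                  # np.interp needs ascending x values
    return float(np.interp(speed,
                           np.asarray(calib_speed_mps)[order],
                           np.asarray(calib_temp_c)[order]))
```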
Standing on the shoulders of giants: improving medical image segmentation via bias correction.
Wang, Hongzhi; Das, Sandhitsu; Pluta, John; Craige, Caryne; Altinay, Murat; Avants, Brian; Weiner, Michael; Mueller, Susanne; Yushkevich, Paul
2010-01-01
We propose a simple strategy to improve automatic medical image segmentation. The key idea is that without deep understanding of a segmentation method, we can still improve its performance by directly calibrating its results with respect to manual segmentation. We formulate the calibration process as a bias correction problem, which is addressed by machine learning using training data. We apply this methodology to three segmentation problems/methods and show significant improvements for all of them.
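A minimal sketch of the calibration idea under stated assumptions (the random-forest learner and the voxel-wise feature layout are illustrative choices, not necessarily the authors' setup): a classifier is trained to map local features plus the host method's label to the manual label, and is then applied to correct new automatic segmentations.

```python
# Minimal sketch of segmentation bias correction via supervised learning.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fit_bias_corrector(features, auto_labels, manual_labels):
    """features: (n_voxels, n_feats); auto/manual_labels: (n_voxels,)."""
    X = np.column_stack([features, auto_labels])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, manual_labels)          # learn the systematic errors of the host method
    return clf

def correct_segmentation(clf, features, auto_labels):
    X = np.column_stack([features, auto_labels])
    return clf.predict(X)              # bias-corrected voxel labels
```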
Deep pockets or blueprint for change: traumatic brain injury (TBI) proactive strategy.
Wood, D W; Pohl, S; Lawler, S; Okamoto, G
1998-09-01
The Pacific Conference scheduled for October 1-3, 1988, is a critical event in the development of an integrated community-based plan for a comprehensive continuum of services to address the "silent epidemic," traumatic brain injury (TBI). This paper provides insights into the complex nature of TBI and the special problems faced by TBI survivors, their families, natural supports and caregivers, as well as the health, social and educational care providers in Hawaii. The process for developing the community plan is presented.
Restrepo, Paula; Jameson, Deborah L; Carroll, Diane L
2015-01-01
Deep vein thrombosis remains a source of adverse outcomes in surgical patients. Deep vein thrombosis is preventable with prophylactic intervention. The success of noninvasive mechanical modalities for prophylaxis relies on compliance with correct application. The goals of this project were to create a guideline that reflected current evidence and expert thinking about mechanical modalities use, assess compliance with mechanical modalities, and develop strategies to disseminate an evidence-based guideline for deep vein thrombosis prophylaxis.
A fully automatic microcalcification detection approach based on deep convolution neural network
NASA Astrophysics Data System (ADS)
Cai, Guanxiong; Guo, Yanhui; Zhang, Yaqin; Qin, Genggeng; Zhou, Yuanpin; Lu, Yao
2018-02-01
Breast cancer is one of the most common cancers and has high morbidity and mortality worldwide, posing a serious threat to human health. The emergence of microcalcifications (MCs) is an important signal of early breast cancer. However, it is still challenging and time consuming for radiologists to identify some tiny and subtle individual MCs in mammograms. This study proposed a novel computer-aided MC detection algorithm for full field digital mammograms (FFDMs) using a deep convolution neural network (DCNN). Firstly, an MC candidate detection system was used to obtain potential MC candidates. Then a DCNN was trained using a novel adaptive learning strategy, neutrosophic reinforcement sample learning (NRSL), to speed up the learning process. The trained DCNN served to recognize true MCs. After classification by the DCNN, a density-based regional clustering method was applied to form MC clusters. The accuracy of the DCNN with the proposed NRSL strategy converges faster and reaches higher values than the traditional DCNN at the same epochs, attaining an accuracy of 99.87% on the training set, 95.12% on the validation set, and 93.68% on the testing set at epoch 40. For cluster-based MC cluster detection evaluation, a sensitivity of 90% was achieved at 0.13 false positives (FPs) per image. The obtained results demonstrate that the designed DCNN plays a significant role in MC detection after prior training.
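The final clustering step can be sketched with a generic density-based method; the DBSCAN call, the 5 mm neighbourhood, and the minimum of three detections per cluster below are illustrative assumptions, not the paper's parameters.

```python
# Minimal sketch: group DCNN-confirmed microcalcification detections into
# clusters with a density-based method, keeping only sufficiently dense groups.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_mc_detections(coords_mm, eps_mm=5.0, min_mcs=3):
    """coords_mm: (n_detections, 2) x/y positions of confirmed MCs in millimetres."""
    labels = DBSCAN(eps=eps_mm, min_samples=min_mcs).fit_predict(coords_mm)
    clusters = [coords_mm[labels == k] for k in set(labels) if k != -1]
    return clusters   # each entry is one candidate MC cluster
```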
Tan, Peng; Xie, Xiao-Yan; Liu, Xiao-Qin; Pan, Ting; Gu, Chen; Chen, Peng-Fei; Zhou, Jia-Yu; Pan, Yichang; Sun, Lin-Bing
2017-01-05
Selective adsorption by use of metal-organic frameworks (MOFs) is an effective method for purification of hydrocarbon fuels. In consideration that the adsorption processes proceed in liquid phases, separation and recycling of adsorbents should be greatly facilitated if MOFs were endowed with magnetism. In the present study, we reported for the first time a dry gel conversion (DGC) strategy to fabricate magnetically responsive MOFs as adsorbents for deep desulfurization and denitrogenation. The solvent is separated from the solid materials in the DGC strategy, and vapor is generated at elevated temperatures to induce the growth of MOFs around magnetic Fe3O4 nanoparticles. This strategy can greatly simplify the complicated procedures of the well-known layer-by-layer method and avoid the blockage of pores confronted by introducing magnetic Fe3O4 nanoparticles to the pores of MOFs. Our results show that the adsorbents are capable of efficiently removing aromatic sulfur and nitrogen compounds from model fuels, for example removing 0.62 mmol g-1 of S and 0.89 mmol g-1 of N as thiophene and indole, respectively. In addition, the adsorbents are easy to separate from liquid phases by use of an external field. After 6 cycles, the adsorbents still show a good adsorption capacity that is comparable to the fresh one. Copyright © 2016 Elsevier B.V. All rights reserved.
Cosmopolitanism and Biogeography of the Genus Manganonema (Nematoda: Monhysterida) in the Deep Sea
Zeppilli, Daniela; Vanreusel, Ann; Danovaro, Roberto
2011-01-01
Simple Summary The deep sea comprises more than 60% of the Earth's surface, and likely represents the largest reservoir of as yet undiscovered biodiversity. Nematodes are the most abundant taxon on Earth and are particularly abundant and diverse in the deep sea. Nevertheless, knowledge of their biogeography, especially in the deep sea, is still in its infancy. This article explores the distribution of the genus Manganonema in the deep Atlantic Ocean and Mediterranean Sea, providing new insights about this apparently rare deep-sea genus. Abstract Spatial patterns of species diversity provide information about the mechanisms that regulate biodiversity and are important for setting conservation priorities. Present knowledge of the biogeography of meiofauna in the deep sea is scarce. This investigation focuses on the distribution of the deep-sea nematode genus Manganonema, which is typically extremely rare in deep-sea sediment samples. Forty-four specimens of eight different species of this genus were recorded from different Atlantic and Mediterranean regions. Four out of the eight species encountered are new to science. We report here that this genus is widespread both in the Atlantic and in the Mediterranean Sea. These new findings, together with literature information, indicate that Manganonema is a cosmopolitan genus, inhabiting a variety of deep-sea habitats and oceans. Manganonema shows the highest diversity at water depths >4,000 m. Our data, therefore, indicate that this is preferentially an abyssal genus that is able, at the same time, to colonize specific habitats at depths shallower than 1,000 m. The analysis of the distribution of the genus Manganonema indicates the presence of large differences in dispersal strategies among different species, ranging from locally endemic to cosmopolitan. Because nematodes lack meroplanktonic larvae and are limited in dispersal ability by their small size, it has been hypothesized that they have limited dispersal potential. However, the investigated deep-sea nematodes were present across different oceans, covering macro-scale distances. Among the possible explanations (hydrological conditions, geographical and geological pathways, long-term processes, specific historical events), their apparent preference for colonizing highly hydrodynamic systems could suggest that these infaunal organisms are transported by deep-sea benthic storms and turbidity currents over long distances. PMID:26486501
The Use of Deep Learning Strategies in Online Business Courses to Impact Student Retention
ERIC Educational Resources Information Center
DeLotell, Pam Jones; Millam, Loretta A.; Reinhardt, Michelle M.
2010-01-01
Interest, application and understanding--these are key elements in successful online classroom experiences and all part of what is commonly referred to as deep learning. Deep learning occurs when students are able to connect with course topics, find value in them and see how to apply them to real-world situations. Asynchronous discussion forums in…
Poelchau, Monica F; Reynolds, Julie A; Elsik, Christine G; Denlinger, David L; Armbruster, Peter A
2013-05-22
Seasonal environments present fundamental physiological challenges to a wide range of insects. Many temperate insects surmount the exigencies of winter by undergoing photoperiodic diapause, in which photoperiod provides a token cue that initiates an alternative developmental programme leading to dormancy. Pre-diapause is a crucial preparatory phase of this process, preceding developmental arrest. However, the regulatory and physiological mechanisms of diapause preparation are largely unknown. Using high-throughput gene expression profiling in the Asian tiger mosquito, Aedes albopictus, we reveal major shifts in endocrine signalling, cell proliferation, metabolism, energy production and cellular structure across pre-diapause development. While some hallmarks of diapause, such as insulin signalling and stress response, were not important at the transcriptional level, two genes, Pepck and PCNA, appear to show diapause-induced transcriptional changes across insect taxa. These processes demonstrate physiological commonalities between Ae. albopictus pre-diapause and diapause strategies across insects, and support the idea of a genetic 'toolkit' for diapause. Observations of gene expression trends from a comparative developmental perspective suggest that individual physiological processes are delayed against a background of a fixed morphological ontogeny. Our results demonstrate how deep sequencing can provide new insights into elusive molecular bases of complex ecological adaptations.
Applying a punch with microridges in multistage deep drawing processes.
Lin, Bor-Tsuen; Yang, Cheng-Yu
2016-01-01
The developers of high aspect ratio components aim to minimize the processing stages in deep drawing processes. This study elucidates the application of microridge punches in multistage deep drawing processes. A microridge punch improves drawing performance, thereby reducing the number of stages required in deep forming processes. As an example, the original eight-stage deep forming process for a copper cylindrical cup with a high aspect ratio was analyzed by finite element simulation. Microridge punch designs were introduced in Stages 4 and 7 to replace the original punches. In addition, Stages 3 and 6 were eliminated. Finally, these changes were verified through experiments. The results showed that the microridge punches reduced the number of deep drawing stages while yielding similar thickness difference percentages. Further, the numerical and experimental results demonstrated good consistency in the thickness distribution.
Robust visual tracking via multiscale deep sparse networks
NASA Astrophysics Data System (ADS)
Wang, Xin; Hou, Zhiqiang; Yu, Wangsheng; Xue, Yang; Jin, Zefenfen; Dai, Bo
2017-04-01
In visual tracking, deep learning with offline pretraining can extract more intrinsic and robust features. It has achieved significant success in solving tracking drift in complicated environments. However, offline pretraining requires numerous auxiliary training datasets and is considerably time-consuming for tracking tasks. To solve these problems, a multiscale sparse networks-based tracker (MSNT) under the particle filter framework is proposed. Based on stacked sparse autoencoders and rectified linear units, the tracker has a flexible and adjustable architecture without an offline pretraining process and exploits robust and powerful features effectively through online training on limited labeled data only. Meanwhile, the tracker builds four deep sparse networks of different scales, according to the target's profile type. During tracking, the tracker adaptively selects the matched tracking network in accordance with the initial target's profile type. It preserves the inherent structural information more efficiently than single-scale networks. Additionally, a corresponding update strategy is proposed to improve the robustness of the tracker. Extensive experimental results on a large-scale benchmark dataset show that the proposed method performs favorably against state-of-the-art methods in challenging environments.
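One building block of such a tracker can be sketched as a single sparse autoencoder layer; the layer sizes, the L1 activity penalty, and the use of tensorflow.keras below are illustrative assumptions, and several such layers trained greedily on target patches at one scale would correspond to one of the tracker's scale-specific networks.

```python
# Minimal sketch (assumed architecture, not the authors' network): one layer of
# a sparse autoencoder with a ReLU code and an L1 activity penalty that
# encourages sparse activations.
from tensorflow.keras import layers, models, regularizers

def sparse_autoencoder(input_dim=1024, code_dim=256, sparsity=1e-4):
    inp = layers.Input(shape=(input_dim,))
    code = layers.Dense(code_dim, activation="relu",
                        activity_regularizer=regularizers.l1(sparsity))(inp)
    recon = layers.Dense(input_dim, activation="sigmoid")(code)
    model = models.Model(inp, recon)
    model.compile(optimizer="adam", loss="mse")   # reconstruct the input patch
    return model
```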
Deep-Learning-Enabled On-Demand Design of Chiral Metamaterials.
Ma, Wei; Cheng, Feng; Liu, Yongmin
2018-06-11
The deep-learning framework has significantly propelled the development of modern machine learning technology by continuously pushing the limits of traditional recognition and processing of images, speech, and videos. In the meantime, it has started to penetrate other disciplines, such as biology, genetics, materials science, and physics. Here, we report a deep-learning-based model, comprising two bidirectional neural networks assembled by a partial stacking strategy, to automatically design and optimize three-dimensional chiral metamaterials with strong chiroptical responses at predesignated wavelengths. The model can help to discover the intricate, nonintuitive relationship between a metamaterial structure and its optical responses from a number of training examples, which circumvents the time-consuming, case-by-case numerical simulations in conventional metamaterial design. This approach not only realizes the forward prediction of optical performance much more accurately and efficiently but also enables one to inversely retrieve designs from given requirements. Our results demonstrate that such a data-driven model can be applied as a very powerful tool for studying complicated light-matter interactions and accelerating the on-demand design of nanophotonic devices, systems, and architectures for real-world applications.
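In spirit, the model pairs a forward network (geometry to optical response) with an inverse network (response to geometry); the minimal sketch below uses small fully connected networks with assumed input and output sizes and omits the partial-stacking coupling described in the paper.

```python
# Minimal sketch (assumed shapes, not the paper's model): a forward network that
# predicts an optical spectrum from geometric parameters, and an inverse network
# that retrieves geometry from a target spectrum.
from tensorflow.keras import layers, models

def forward_net(n_geom=5, n_spectrum=200):
    inp = layers.Input(shape=(n_geom,))
    x = layers.Dense(256, activation="relu")(inp)
    x = layers.Dense(256, activation="relu")(x)
    return models.Model(inp, layers.Dense(n_spectrum)(x))   # geometry -> spectrum

def inverse_net(n_geom=5, n_spectrum=200):
    inp = layers.Input(shape=(n_spectrum,))
    x = layers.Dense(256, activation="relu")(inp)
    x = layers.Dense(256, activation="relu")(x)
    return models.Model(inp, layers.Dense(n_geom)(x))        # spectrum -> geometry
```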
Deep Learning as an Individual, Conditional, and Contextual Influence on First-Year Student Outcomes
ERIC Educational Resources Information Center
Reason, Robert D.; Cox, Bradley E.; McIntosh, Kadian; Terenzini, Patrick T.
2010-01-01
For years, educators have drawn a distinction between deep cognitive processing and surface-level cognitive processing, with the former resulting in greater learning. In recent years, researchers at NSSE have created DEEP Learning scales, which consist of items related to students' experiences which are believed to encourage deep processing. In…
Orozco-Solano, M I; Priego-Capote, F; Luque de Castro, M D
2011-09-28
The stability of the antioxidant fraction in edible vegetable oils has been evaluated during a simulated deep frying process at 180 °C. Four edible oils (i.e., extra-virgin olive oil with a 400 μg/mL overall content of naturally occurring phenols; high-oleic sunflower oil without natural content of these compounds but enriched either with hydrophilic antioxidants isolated from olive pomace or with an oxidation inhibitor, dimethylsiloxane; and sunflower oil without enrichment) were subjected to deep heating consisting of 20 cycles at 180 °C for 5 min each. An oil aliquot was sampled after each heating cycle to study the influence of heating on the antioxidant fraction composed of hydrophilic and lipophilic antioxidants such as phenols and tocopherols, respectively. The decomposition curves for each group of compounds caused by the influence of deep heating were studied to compare their resistance to oxidation. Thus, the suitability of olive pomace as a raw material for obtaining these compounds offers an excellent alternative to the use of olive-tree materials other than leaves. The enrichment of refined edible oils with natural antioxidants from olive pomace is a sustainable strategy to derive benefit from this residue.
Weng, Yejing; Sui, Zhigang; Jiang, Hao; Shan, Yichu; Chen, Lingfan; Zhang, Shen; Zhang, Lihua; Zhang, Yukui
2015-04-22
Due to the important roles of N-glycoproteins in various biological processes, global N-glycoproteome analysis has received much attention. However, with current strategies for N-glycoproteome profiling, peptides with glycosylated Asn at the N-terminus (PGANs), generated by protease digestion, can hardly be identified, due to the poor deglycosylation capacity of enzymes. Yet, theoretically, PGANs occupy 10% of N-glycopeptides in typical tryptic digests. Therefore, in this study, we developed a novel strategy to identify PGANs by releasing N-glycans through N-terminal site-selective succinylation assisted enzymatic deglycosylation. The obtained PGAN information is beneficial not only for achieving deep-coverage analysis of glycoproteomes, but also for discovering new biological functions of such modification.
ERIC Educational Resources Information Center
Gallo, David A.; Meadow, Nathaniel G.; Johnson, Elizabeth L.; Foster, Katherine T.
2008-01-01
Thinking about the meaning of studied words (deep processing) enhances memory on typical recognition tests, relative to focusing on perceptual features (shallow processing). One explanation for this levels-of-processing effect is that deep processing leads to the encoding of more distinctive representations (i.e., more unique semantic or…
Ragland, J Daniel; Gur, Ruben C; Valdez, Jeffrey N; Loughead, James; Elliott, Mark; Kohler, Christian; Kanes, Stephen; Siegel, Steven J; Moelter, Stephen T; Gur, Raquel E
2005-10-01
Patients with schizophrenia improve episodic memory accuracy when given organizational strategies through levels-of-processing paradigms. This study tested if improvement is accompanied by normalized frontotemporal function. Event-related blood-oxygen-level-dependent functional magnetic resonance imaging (fMRI) was used to measure activation during shallow (perceptual) and deep (semantic) word encoding and recognition in 14 patients with schizophrenia and 14 healthy comparison subjects. Despite slower and less accurate overall word classification, the patients showed normal levels-of-processing effects, with faster and more accurate recognition of deeply processed words. These effects were accompanied by left ventrolateral prefrontal activation during encoding in both groups, although the thalamus, hippocampus, and lingual gyrus were overactivated in the patients. During word recognition, the patients showed overactivation in the left frontal pole and had a less robust right prefrontal response. Evidence of normal levels-of-processing effects and left prefrontal activation suggests that patients with schizophrenia can form and maintain semantic representations when they are provided with organizational cues and can improve their word encoding and retrieval. Areas of overactivation suggest residual inefficiencies. Nevertheless, the effect of teaching organizational strategies on episodic memory and brain function is a worthwhile topic for future interventional studies.
Differential impact of thalamic versus subthalamic deep brain stimulation on lexical processing.
Krugel, Lea K; Ehlen, Felicitas; Tiedt, Hannes O; Kühn, Andrea A; Klostermann, Fabian
2014-10-01
Roles of subcortical structures in language processing are vague, but, interestingly, basal ganglia and thalamic Deep Brain Stimulation can go along with reduced lexical capacities. To deepen the understanding of this impact, we assessed word processing as a function of thalamic versus subthalamic Deep Brain Stimulation. Ten essential tremor patients treated with thalamic and 14 Parkinson's disease patients with subthalamic Deep Brain Stimulation performed an acoustic Lexical Decision Task ON and OFF stimulation. Combined analysis of task performance and event-related potentials allowed the determination of processing speed, priming effects, and N400 as a neurophysiological correlate of lexical stimulus processing. Twelve age-matched healthy participants acted as control subjects. Thalamic Deep Brain Stimulation prolonged word decisions and reduced N400 potentials. No comparable ON-OFF effects were present in patients with subthalamic Deep Brain Stimulation. In the latter group of patients with Parkinson's disease, N400 amplitudes were, however, abnormally low, whether under active or inactive Deep Brain Stimulation. In conclusion, performance speed and N400 appear to be influenced by state functions, modulated by thalamic, but not subthalamic, Deep Brain Stimulation, compatible with concepts of thalamo-cortical engagement in word processing. Clinically, these findings specify cognitive sequels of Deep Brain Stimulation in a target-specific way. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Pisapia, C.
2015-12-01
Among all elements, carbon plays one of the major roles for the sustainability of life on Earth. Past considerations of the carbon cycle have mainly focused on surface processes occurring in the atmosphere, oceans and shallow crustal environments. By contrast, little is known about the deep carbon cycle, even though both geochemical and biological processes may induce organic carbon production and/or consumption at depth. Indeed, the now-recognized capability of geochemical processes such as serpentinization to generate abiotic organic compounds, as well as the existence of potentially important intraterrestrial life, raises questions about the limit of biotic/abiotic carbon in Earth's deep interior and how it impacts global biogeochemical cycles. It is therefore mandatory to increase our knowledge of the nature and extent of carbon reservoirs, along with their sources, sinks and fluxes in the subsurface. This implies being able to finely characterize organomineral associations within crustal rocks, although such work is hampered by the scarcity and heterogeneous micrometric spatial distribution of organic molecules in natural rocks. We therefore developed an in situ analytical strategy based on the combination of high-resolution techniques to track organic molecules at the pore level in natural rocks and to determine their biological or abiotic origin. We associated classical high-resolution techniques and synchrotron-based imaging techniques in order to characterize their nature and localization (SEM/TEM, coupled CLSM/Raman spectroscopy, ToF-SIMS) along with their 3D distribution relative to mineral phases (S-FTIR, S-DeepUV, XANES, biphoton microscopy). The effectiveness of this approach in shedding light on the speciation and nature of carbon in subsurface environments will be illustrated through the study of (i) subsurface ecosystems and abiotic organic carbon within ultramafic rocks of the oceanic lithosphere, as putative analogs for the nature and functioning of primitive ecosystems on Earth, and (ii) ecosystems inhabiting Archean cratons and potentially playing a role in punk-rock karstification processes and rock weathering.
[Deep vein thrombosis prophylaxis].
Sandoval-Chagoya, Gloria Alejandra; Laniado-Laborín, Rafael
2013-01-01
Background: despite the proven effectiveness of preventive therapy for deep vein thrombosis, a significant proportion of patients at risk for thromboembolism do not receive prophylaxis during hospitalization. Our objective was to determine the adherence to thrombosis prophylaxis guidelines in a general hospital as a quality control strategy. Methods: a random audit of clinical charts was conducted at the Tijuana General Hospital, Baja California, Mexico, to determine the degree of adherence to deep vein thrombosis prophylaxis guidelines. The instrument used was the Caprini's checklist for thrombosis risk assessment in adult patients. Results: the sample included 300 patient charts; 182 (60.7 %) were surgical patients and 118 were medical patients. Forty six patients (15.3 %) received deep vein thrombosis pharmacologic prophylaxis; 27.1 % of medical patients received deep vein thrombosis prophylaxis versus 8.3 % of surgical patients (p < 0.0001). Conclusions: our results show that adherence to DVT prophylaxis at our hospital is extremely low. Only 15.3 % of our patients at risk received treatment, and even patients with very high risk received treatment in less than 25 % of the cases. We have implemented strategies to increase compliance with clinical guidelines.
Learning strategies during clerkships and their effects on clinical performance.
van Lohuizen, M T; Kuks, J B M; van Hell, E A; Raat, A N; Cohen-Schotanus, J
2009-11-01
Previous research revealed relationships between learning strategies and knowledge acquisition. During clerkships, however, students' focus widens beyond mere knowledge acquisition as they further develop overall competence. This shift in focus can influence learning strategy use. We explored which learning strategies were used during clerkships and their relationship to clinical performance. Participants were 113 (78%) clerks at the university hospital or one of six affiliated hospitals. Learning strategies were assessed using the 'Approaches to Learning at Work Questionnaire' (deep, surface-rational and surface-disorganised learning). Clinical performance was calculated by taking the mean of clinical assessment marks. The relationship between learning strategies and clinical performance was explored using regression analysis. Most students (89%) did not clearly prefer a single learning strategy. No relationship was found between learning strategies and clinical performance. Since overall competence comprises integration of knowledge, skills and professional behaviour, we assume that students without a clear preference use more than one learning strategy. Finding no relationship between learning strategies and clinical performance reflects the complexity of clinical learning. Depending on circumstances it may be important to obtain relevant information quickly (surface-rational) or understand material thoroughly (deep). In future research we will examine when and why students use different learning strategies.
Extreme Longevity in Proteinaceous Deep-Sea Corals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roark, E B; Guilderson, T P; Dunbar, R B
2009-02-09
Deep-sea corals are found on hard substrates on seamounts and continental margins world-wide at depths of 300 to approximately 3000 meters. Deep-sea coral communities are hotspots of deep ocean biomass and biodiversity, providing critical habitat for fish and invertebrates. Newly applied radiocarbon age data from the deep water proteinaceous corals Gerardia sp. and Leiopathes glaberrima show that radial growth rates are as low as 4 to 35 μm per year and that individual colony longevities are on the order of thousands of years. The management and conservation of deep-sea coral communities is challenged by their commercial harvest for the jewelry trade and damage caused by deep water fishing practices. In light of their unusual longevity, a better understanding of deep-sea coral ecology and their interrelationships with associated benthic communities is needed to inform coherent international conservation strategies for these important deep-sea ecosystems.
NASA Technical Reports Server (NTRS)
Johnson, Daniel E.; Tao, W.-K.; Simpson, J.; Sui, C.-H.; Einaudi, Franco (Technical Monitor)
2001-01-01
Interactions between deep tropical clouds over the western Pacific warm pool and the larger-scale environment are key to understanding climate change. Cloud models are an extremely useful tool for simulating and providing statistical information on heat and moisture transfer processes between cloud systems and the environment, and can therefore be utilized to substantially improve cloud parameterizations in climate models. In this paper, the Goddard Cumulus Ensemble (GCE) cloud-resolving model is used in multi-day simulations of deep tropical convective activity over the Tropical Ocean-Global Atmosphere Coupled Ocean-Atmosphere Response Experiment (TOGA COARE). Large-scale temperature and moisture advective tendencies, and horizontal momentum from the TOGA-COARE Intensive Flux Array (IFA) region, are applied to the GCE version which incorporates cyclical boundary conditions. Sensitivity experiments show that grid domain size produces the largest response in domain-mean temperature and moisture deviations, as well as in cloudiness, when compared to grid horizontal or vertical resolution and advection scheme. It is found that a minimum grid-domain size of 500 km is needed to adequately resolve the convective cloud features. The control experiment shows that the atmospheric heating and moistening are primarily a response to the cloud latent processes of condensation/evaporation and deposition/sublimation and, to a lesser extent, melting of ice particles. Air-sea exchange of heat and moisture is found to be significant, but of secondary importance, while the radiational response is small. The simulated rainfall and atmospheric heating and moistening agree well with observations and compare favorably with other models simulating this case.
Williams, Isobel Anne; Wilkinson, Leonora; Limousin, Patricia; Jahanshahi, Marjan
2015-01-01
Deep brain stimulation of the subthalamic nucleus (STN DBS) ameliorates the motor symptoms of Parkinson's disease (PD). However, some aspects of executive control are impaired with STN DBS. We tested the prediction that (i) STN DBS interferes with switching from automatic to controlled processing during fast-paced random number generation (RNG) (ii) STN DBS-induced cognitive control changes are load-dependent. Fifteen PD patients with bilateral STN DBS performed paced-RNG, under three levels of cognitive load synchronised with a pacing stimulus presented at 1, 0.5 and 0.33 Hz (faster rates require greater cognitive control), with DBS on or off. Measures of output randomness were calculated. Countscore 1 (CS1) indicates habitual counting in steps of one (CS1). Countscore 2 (CS2) indicates a more controlled strategy of counting in twos. The fastest rate was associated with an increased CS1 score with STN DBS on compared to off. At the slowest rate, patients had higher CS2 scores with DBS off than on, such that the differences between CS1 and CS2 scores disappeared. We provide evidence for a load-dependent effect of STN DBS on paced RNG in PD. Patients could switch to more controlled RNG strategies during conditions of low cognitive load at slower rates only when the STN stimulators were off, but when STN stimulation was on, they engaged in more automatic habitual counting under increased cognitive load. These findings are consistent with the proposal that the STN implements a switch signal from the medial frontal cortex which enables a shift from automatic to controlled processing.
Application of Deep Learning in GLOBELAND30-2010 Product Refinement
NASA Astrophysics Data System (ADS)
Liu, T.; Chen, X.
2018-04-01
GlobeLand30, one of the best Global Land Cover (GLC) products at 30-m resolution, has been widely used in many research fields. Due to significant spectral confusion among different land cover types and the limited textural information of Landsat data, the overall accuracy of GlobeLand30 is about 80%. Although this accuracy is much higher than that of most other global land cover products, it cannot satisfy various applications, so an effective method to improve the quality of GlobeLand30 is still needed. The explosive growth of high-resolution satellite imagery and the remarkable performance of Deep Learning on image classification provide a new opportunity to refine GlobeLand30. However, the performance of deep learning depends on the quality and quantity of training samples as well as the model training strategy. Therefore, this paper 1) proposed an automatic training sample generation method via Google Earth to build a large training sample set; and 2) explored the best training strategy for land cover classification using GoogleNet (Inception V3), one of the most widely used deep learning networks. The results show that fine-tuning Inception V3 from the first layer using the rough large sample set is the best strategy. The retrained network was then applied to one selected area of Xi'an city as a case study of GlobeLand30 refinement. The experimental results indicate that the proposed approach, combining Deep Learning and Google Earth imagery, is a promising solution for further improving the accuracy of GlobeLand30.
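The fine-tuning strategy described above can be illustrated with a short, hedged sketch. Only the idea of retraining Inception V3 from the first layer on patches generated from Google Earth imagery comes from the abstract; the class count, directory layout, and hyperparameters below are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: fine-tuning Inception V3 end-to-end ("from the first layer")
# on land-cover patches. GlobeLand30 distinguishes 10 land-cover types, hence NUM_CLASSES.
import tensorflow as tf
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models

NUM_CLASSES = 10

base = InceptionV3(weights="imagenet", include_top=False, pooling="avg",
                   input_shape=(299, 299, 3))
base.trainable = True  # fine-tune all layers rather than freezing the backbone

model = models.Sequential([
    base,
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# A training set would be built from Google-Earth-derived patches, e.g. with
# tf.keras.utils.image_dataset_from_directory("samples/", image_size=(299, 299)),
# and the model trained with model.fit(train_ds, epochs=10).
```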
Zooplankton Distribution in Four Western Norwegian Fjords
NASA Astrophysics Data System (ADS)
Gorsky, G.; Flood, P. R.; Youngbluth, M.; Picheral, M.; Grisoni, J.-M.
2000-01-01
A multi-instrumental array constructed in the Laboratoire d'Ecologie du Plancton Marin in Villefranche sur mer, France, named the Underwater Video Profiler (UVP), was used to investigate the vertical distribution of zooplankton in four western Norwegian fjords in the summer of 1996. Six distinct zoological groups were monitored. The fauna included: (a) small crustaceans (mainly copepods), (b) ctenophores (mainly lobates), (c) siphonophores (mainly physonects), (d) a scyphomedusa Periphylla periphylla, (e) chaetognaths and (f) appendicularians. The use of the non-disturbing video technique demonstrated that the distribution of large zooplankton is heterogeneous vertically and geographically. Furthermore, the abundance of non-migrating filter feeders in the deep basins of the fjords indicates that there is enough food (living and non-living particulate organic matter) to support their dietary needs. This adaptation may be considered a strategy for survival in fjords. Specifically, living in dark, deep water reduces visual predation and population loss encountered in the upper layer due to advective processes.
Vapor transport deposition of antimony selenide thin film solar cells with 7.6% efficiency.
Wen, Xixing; Chen, Chao; Lu, Shuaicheng; Li, Kanghua; Kondrotas, Rokas; Zhao, Yang; Chen, Wenhao; Gao, Liang; Wang, Chong; Zhang, Jun; Niu, Guangda; Tang, Jiang
2018-06-05
Antimony selenide is an emerging promising thin film photovoltaic material thanks to its binary composition, suitable bandgap, high absorption coefficient, inert grain boundaries and earth-abundant constituents. However, current devices produced from rapid thermal evaporation strategy suffer from low-quality film and unsatisfactory performance. Herein, we develop a vapor transport deposition technique to fabricate antimony selenide films, a technique that enables continuous and low-cost manufacturing of cadmium telluride solar cells. We improve the crystallinity of antimony selenide films and then successfully produce superstrate cadmium sulfide/antimony selenide solar cells with a certified power conversion efficiency of 7.6%, a net 2% improvement over previous 5.6% record of the same device configuration. We analyze the deep defects in antimony selenide solar cells, and find that the density of the dominant deep defects is reduced by one order of magnitude using vapor transport deposition process.
Guo, Yang; Liu, Shuhui; Li, Zhanhuai; Shang, Xuequn
2018-04-11
The classification of cancer subtypes is of great importance to cancer diagnosis and therapy. Many supervised learning approaches have been applied to cancer subtype classification in the past few years, especially deep learning based approaches. Recently, the deep forest model has been proposed as an alternative to deep neural networks, learning hyper-representations by using cascades of ensemble decision trees. It has been shown that the deep forest model has competitive or even better performance than deep neural networks to some extent. However, the standard deep forest model may face overfitting and ensemble diversity challenges when dealing with small sample sizes and high-dimensional biology data. In this paper, we propose a deep learning model, called BCDForest, to address cancer subtype classification on small-scale biology datasets; it can be viewed as a modification of the standard deep forest model. BCDForest differs from the standard deep forest model in two main contributions: First, a multi-class-grained scanning method is proposed to train multiple binary classifiers to encourage diversity of the ensemble, while the fitting quality of each classifier is considered in representation learning. Second, we propose a boosting strategy to emphasize more important features in the cascade forests, thus propagating the benefits of discriminative features among cascade layers to improve the classification performance. Systematic comparison experiments on both microarray and RNA-Seq gene expression datasets demonstrate that our method consistently outperforms the state-of-the-art methods for cancer subtype classification. The multi-class-grained scanning and boosting strategies in our model provide an effective solution to ease the overfitting challenge and improve the robustness of the deep forest model on small-scale data. Our model thus provides a useful approach to classifying cancer subtypes by using deep learning on high-dimensional, small-scale biology data.
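For readers unfamiliar with deep forests, the following minimal sketch shows the cascade idea that BCDForest builds on: each layer appends class-probability vectors from tree ensembles to the input features for the next layer. The multi-class-grained scanning and boosting contributions of the paper are not reproduced; the forest types, layer count, and fold settings are illustrative assumptions.

```python
# Minimal cascade-forest sketch (gcForest-style), assuming numeric X, y arrays.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.model_selection import cross_val_predict

def cascade_forest(X, y, X_test, n_layers=3):
    feats, feats_test = X, X_test
    for _ in range(n_layers):
        layer_train, layer_test = [], []
        for Forest in (RandomForestClassifier, ExtraTreesClassifier):
            clf = Forest(n_estimators=200, n_jobs=-1, random_state=0)
            # out-of-fold probabilities avoid leaking labels into the next layer
            layer_train.append(cross_val_predict(clf, feats, y, cv=3,
                                                 method="predict_proba"))
            clf.fit(feats, y)
            layer_test.append(clf.predict_proba(feats_test))
        feats = np.hstack([X] + layer_train)
        feats_test = np.hstack([X_test] + layer_test)
    return feats_test  # augmented test features; a final forest votes on these
```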
The JPL roadmap for Deep Space navigation
NASA Technical Reports Server (NTRS)
Martin-Mur, Tomas J.; Abraham, Douglas S.; Berry, David; Bhaskaran, Shyam; Cesarone, Robert J.; Wood, Lincoln
2006-01-01
This paper reviews the tentative set of deep space missions that will be supported by NASA's Deep Space Mission System in the next twenty-five years, and extracts the driving set of navigation capabilities that these missions will require. There will be many challenges including the support of new mission navigation approaches such as formation flying and rendezvous in deep space, low-energy and low-thrust orbit transfers, precise landing and ascent vehicles, and autonomous navigation. Innovative strategies and approaches will be needed to develop and field advanced navigation capabilities.
Evolutionary process of deep-sea bathymodiolus mussels.
Miyazaki, Jun-Ichi; de Oliveira Martins, Leonardo; Fujita, Yuko; Matsumoto, Hiroto; Fujiwara, Yoshihiro
2010-04-27
Since the discovery of deep-sea chemosynthesis-based communities, much work has been done to clarify their organismal and environmental aspects. However, major topics remain to be resolved, including when and how organisms invade and adapt to deep-sea environments; whether strategies for invasion and adaptation are shared by different taxa or unique to each taxon; how organisms extend their distribution and diversity; and how they become isolated to speciate in continuous waters. Deep-sea mussels are one of the dominant organisms in chemosynthesis-based communities, thus investigations of their origin and evolution contribute to resolving questions about life in those communities. We investigated worldwide phylogenetic relationships of deep-sea Bathymodiolus mussels and their mytilid relatives by analyzing nucleotide sequences of the mitochondrial cytochrome c oxidase subunit I (COI) and NADH dehydrogenase subunit 4 (ND4) genes. Phylogenetic analysis of the concatenated sequence data showed that mussels of the subfamily Bathymodiolinae from vents and seeps were divided into four groups, and that mussels of the subfamily Modiolinae from sunken wood and whale carcasses assumed the outgroup position and shallow-water modioline mussels were positioned more distantly to the bathymodioline mussels. We provisionally hypothesized the evolutionary history of Bathymodiolus mussels by estimating evolutionary time under a relaxed molecular clock model. Diversification of bathymodioline mussels was initiated in the early Miocene, and subsequently diversification of the groups occurred in the early to middle Miocene. The phylogenetic relationships support the "Evolutionary stepping stone hypothesis," in which mytilid ancestors exploited sunken wood and whale carcasses in their progressive adaptation to deep-sea environments. This hypothesis is also supported by the evolutionary transition of symbiosis in that nutritional adaptation to the deep sea proceeded from extracellular to intracellular symbiotic states in whale carcasses. The estimated evolutionary time suggests that the mytilid ancestors were able to exploit whales during adaptation to the deep sea.
NASA Astrophysics Data System (ADS)
Danovaro, R.; Corinaldesi, C.; dell'Anno, A.
2002-12-01
The deep-sea bed, acting as the ultimate sink for organic material derived from the upper ocean's primary production, is now assumed to play a key role in the biogeochemical cycling of organic matter on a global scale. Early diagenesis of organic matter in marine sediments depends upon biological processes (largely mediated by bacterial activity) and molecular diffusion. Organic matter reaching the sea floor by sedimentation is subjected to complex biogeochemical transformations that make it largely unsuitable for direct utilization by benthic heterotrophs. Extracellular enzymatic activity in the sediment is generally recognized as the key step in the degradation and utilization of organic polymers by bacteria, and a key role in biopolymeric carbon mobilization is played by aminopeptidase, alkaline phosphatase and glucosidase activities. In the present study we investigated bacterial density, bacterial C production and exo-enzymatic activities (aminopeptidase, glucosidase and phosphatase activity) in deep-sea sediments of the Pacific Ocean in relation to the biochemical composition of sediment organic matter (proteins, carbohydrates and lipids), in order to gather information on organic matter cycling and diagenesis. Benthic viral abundance was also measured to investigate the potential role of viruses in microbial loop functioning. Sediment samples were collected at eight stations (depths ranging from 2070 to 3100 m) along two transects located on opposite sides (north and south) of the Juan Fernandez oceanic seismic ridge (along latitudes 33° 20' - 33° 40'), formed by submerged volcanoes, which connects the Chilean coast to Rapa Nui Island. Since the northern and southern sides of this ridge apparently display small but significant differences in deep-sea temperature (related to the general ocean circulation), this sampling strategy also allowed us to investigate the role of different temperature constraints on bacterial activity and biogeochemical processes and to define possible scenarios for climate-induced changes in deep-sea conditions.
An, Xiaoping; Fan, Hang; Ma, Maijuan; Anderson, Benjamin D.; Jiang, Jiafu; Liu, Wei; Cao, Wuchun; Tong, Yigang
2014-01-01
This paper explored our hypothesis that the sRNA (18-30 bp) deep sequencing technique can be used as an efficient strategy to identify microorganisms other than viruses, such as prokaryotic and eukaryotic pathogens. In the study, the clean reads derived from the sRNA deep sequencing data of wild-caught ticks and mosquitoes were compared against the NCBI nucleotide collection (non-redundant nt database) using Blastn. The blast results were then analyzed with in-house Python scripts, and an empirical formula was proposed to identify the putative pathogens. Results showed that not only viruses but also prokaryotic and eukaryotic species of interest could be screened out and subsequently confirmed with experiments. Specifically, a novel Rickettsia sp. was indicated to exist in Haemaphysalis longicornis ticks collected in Beijing. Our study demonstrated that the reuse of sRNA deep sequencing data has the potential to trace the origin of pathogens or discover novel agents of emerging/re-emerging infectious diseases. PMID:24618575
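The screening step, comparing clean sRNA reads against the nt database and tallying hits per species, can be sketched as below. This is a hedged stand-in for the in-house Python scripts mentioned in the abstract; the tabular BLAST column layout, identity cutoff, and read-count threshold are assumptions, and the paper's empirical formula is not reproduced.

```python
# Hypothetical tally of BLASTN hits per subject species from tabular output
# (e.g. -outfmt "6 std sscinames"); thresholds are illustrative only.
from collections import Counter

def tally_blast_hits(blast_tsv, min_identity=95.0, min_reads=100):
    """Count reads whose reported hit maps to each subject species."""
    counts = Counter()
    with open(blast_tsv) as fh:
        for line in fh:
            fields = line.rstrip("\n").split("\t")
            # assumed columns: qseqid sseqid pident length ... sscinames (last)
            pident, species = float(fields[2]), fields[-1]
            if pident >= min_identity:
                counts[species] += 1
    return {sp: n for sp, n in counts.items() if n >= min_reads}

# candidates = tally_blast_hits("ticks_vs_nt.blastn.tsv")
```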
Graves, Nicholas; Wloch, Catherine; Wilson, Jennie; Barnett, Adrian; Sutton, Alex; Cooper, Nicola; Merollini, Katharina; McCreanor, Victoria; Cheng, Qinglu; Burn, Edward; Lamagni, Theresa; Charlett, Andre
2016-07-01
A deep infection of the surgical site is reported in 0.7% of all cases of total hip arthroplasty (THA). This often leads to revision surgery that is invasive, painful and costly. A range of strategies is employed in NHS hospitals to reduce risk, yet no economic analysis has been undertaken to compare the value for money of competing prevention strategies. To compare the costs and health benefits of strategies that reduce the risk of deep infection following THA in NHS hospitals. To make recommendations to decision-makers about the cost-effectiveness of the alternatives. The study comprised a systematic review and cost-effectiveness decision analysis. 77,321 patients who had a primary hip arthroplasty in NHS hospitals in 2012. Nine different treatment strategies including antibiotic prophylaxis, antibiotic-impregnated cement and ventilation systems used in the operating theatre. Change in the number of deep infections, change in the total costs and change in the total health benefits in quality-adjusted life-years (QALYs). Literature searches using MEDLINE, EMBASE, Cumulative Index to Nursing and Allied Health Literature and the Cochrane Central Register of Controlled Trials were undertaken to cover the period 1966-2012 to identify infection prevention strategies. Relevant journals, conference proceedings and bibliographies of retrieved papers were hand-searched. Orthopaedic surgeons and infection prevention experts were also consulted. English-language papers only. The selection of evidence was by two independent reviewers. Studies were included if they were interventions that reported THA-related deep surgical site infection (SSI) as an outcome. Mixed-treatment comparisons were made to produce estimates of the relative effects of competing infection control strategies. Twelve studies, six randomised controlled trials and six observational studies, involving 123,788 total hip replacements (THRs) and nine infection control strategies, were identified. The quality of the evidence was judged against four categories developed by the National Institute for Health and Care Excellence Methods for Development of NICE Public Health Guidance ( http://publications.nice.org.uk/methods-for-the-development-of-nice-public-health-guidance-third-edition-pmg4 ), accessed March 2012. All evidence was found to fit the two highest categories of 1 and 2. Nine competing infection control interventions [treatments (Ts) 1-9] were used in a cohort simulation model of 77,321 patients who had a primary THR in 2012. Predictions were made for cases of deep infection and total costs, and QALY outcomes. Compared with a baseline of T1 (no systemic antibiotics, plain cement and conventional ventilation) all other treatment strategies reduced risk. T6 was the most effective (systemic antibiotics, antibiotic-impregnated cement and conventional ventilation) and prevented a further 1481 cases of deep infection, and led to the largest annual cost savings and the greatest gains to QALYs. The additional uses of laminar airflow and body exhaust suits indicate higher costs and worse health outcomes. T6 is an optimal strategy for reducing the risk of SSI following THA. The other strategies that are commonly used among NHS hospitals lead to higher cost and worse QALY outcomes. Policy-makers, therefore, have an opportunity to save resources and improve health outcomes. The effects of laminar air flow and body exhaust suits might be further studied if policy-makers are to consider disinvesting in these technologies. 
A wide range of evidence sources was synthesised and there is large uncertainty in the conclusions. The National Institute for Health Research Health Technology Assessment programme and the Queensland Health Quality Improvement and Enhancement Programme (grant number 2008001769).
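As a rough illustration of the cohort-simulation logic behind such a cost-effectiveness comparison, the sketch below computes expected deep infections, total cost, and QALY losses for a strategy from a baseline risk and a relative risk. Only the cohort size (77,321 patients) and baseline deep-infection rate (0.7%) come from the abstract; the function name and all other numbers are placeholders, not the study's model.

```python
# Illustrative cohort calculation, not the published decision model.
def evaluate_strategy(relative_risk, cost_per_patient, cohort=77_321,
                      baseline_risk=0.007, infection_cost=30_000.0,
                      qaly_loss_per_infection=0.5):
    infections = cohort * baseline_risk * relative_risk
    total_cost = cohort * cost_per_patient + infections * infection_cost
    qalys_lost = infections * qaly_loss_per_infection
    return infections, total_cost, qalys_lost

# baseline = evaluate_strategy(relative_risk=1.0, cost_per_patient=0.0)
# t6 = evaluate_strategy(relative_risk=0.3, cost_per_patient=25.0)  # assumed values
```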
Sikandar, Shafaq; West, Steven J; McMahon, Stephen B; Bennett, David L; Dickenson, Anthony H
2017-07-01
Sensory processing of deep somatic tissue constitutes an important component of the nociceptive system, yet associated central processing pathways remain poorly understood. Here, we provide a novel electrophysiological characterization and immunohistochemical analysis of neural activation in the lateral spinal nucleus (LSN). These neurons show evoked activity to deep, but not cutaneous, stimulation. The evoked responses of neurons in the LSN can be sensitized to somatosensory stimulation following intramuscular hypertonic saline, an acute model of muscle pain, suggesting this is an important spinal relay site for the processing of deep tissue nociceptive inputs. Neurons of the thalamic ventrobasal complex (VBC) mediate both cutaneous and deep tissue sensory processing, but in contrast to the lateral spinal nucleus our electrophysiological studies do not suggest the existence of a subgroup of cells that selectively process deep tissue inputs. The sensitization of polymodal and thermospecific VBC neurons to mechanical somatosensory stimulation following acute muscle stimulation with hypertonic saline suggests differential roles of thalamic subpopulations in mediating cutaneous and deep tissue nociception in pathological states. Overall, our studies at both the spinal (lateral spinal nucleus) and supraspinal (thalamic ventrobasal complex) levels suggest a convergence of cutaneous and deep somatosensory inputs onto spinothalamic pathways, which are unmasked by activation of muscle nociceptive afferents to produce consequent phenotypic alterations in spinal and thalamic neural coding of somatosensory stimulation. A better understanding of the sensory pathways involved in deep tissue nociception, as well as the degree of labeled line and convergent pathways for cutaneous and deep somatosensory inputs, is fundamental to developing targeted analgesic therapies for deep pain syndromes. © 2017 University College London. Physiological Reports published by Wiley Periodicals, Inc. on behalf of The Physiological Society and the American Physiological Society.
Inverse Analysis to Formability Design in a Deep Drawing Process
NASA Astrophysics Data System (ADS)
Buranathiti, Thaweepat; Cao, Jian
The deep drawing process is an important process that adds value to flat sheet metal in many industries. An important concern in the design of a deep drawing process is generally formability. This paper presents the connection between formability and inverse analysis (IA), a systematic means for determining an optimal blank configuration for a deep drawing process. IA is presented and explored using a commercial finite element software package. A number of numerical studies on the effect of blank configurations on the quality of a part produced by deep drawing were conducted and analyzed. The quality of the drawing processes is analyzed numerically using an explicit incremental nonlinear finite element code. The minimum distance between elemental principal strains and the strain-based forming limit curve (FLC) is defined as the tearing margin and used as the key performance index (KPI) indicating the quality of the part. The initial blank configuration is shown to play a highly important role in the quality of the product of the deep drawing process. In addition, it is observed that if a blank configuration does not deviate greatly from the one obtained from IA, the blank can still result in a good product. The strain history around the bottom fillet of the part is also examined. The paper concludes that IA is an important part of the design methodology for deep drawing processes.
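The tearing-margin KPI, the minimum distance between elemental principal strains and the strain-based FLC, reduces to a few lines of array arithmetic. The sketch below is illustrative only: the FLC points and element strains are made-up values, whereas a real analysis would take them from the finite element results.

```python
# Hedged sketch of the tearing-margin KPI: smallest distance from any element's
# (minor, major) principal-strain point to the forming limit curve.
import numpy as np

def tearing_margin(element_strains, flc_points):
    """element_strains, flc_points: arrays of (minor_strain, major_strain) pairs."""
    d = np.linalg.norm(element_strains[:, None, :] - flc_points[None, :, :], axis=2)
    return d.min()  # margin over the whole blank; larger means safer

flc = np.array([(-0.2, 0.45), (0.0, 0.30), (0.2, 0.38)])        # illustrative FLC
strains = np.array([(-0.05, 0.12), (0.02, 0.18), (0.10, 0.22)])  # illustrative elements
print(tearing_margin(strains, flc))
```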
The Project for the Extension of the Continental Shelf - the Portuguese experience
NASA Astrophysics Data System (ADS)
Madureira, Pedro; Ribeiro, Luísa P.; Roque, Cristina; Henriques, Guida; Brandão, Filipe; Dias, Frederico; Simões, Maria; Neves, Mariana; Conceição, Patricia; Botelho Leal, Isabel; Emepc, Equipa
2017-04-01
Under the United Nations Convention on the Law of the Sea (UNCLOS), the continental shelf is a juridical term used to define a submarine area that extends throughout the natural prolongation of a land territory, where the coastal State exercises sovereign rights for the purpose of exploring it and exploiting its natural resources. Article 76 provides a methodology for determining the outer edge of the continental margin and for delineating the outer limits of the continental shelf. The task of preparing the Portuguese submission to the Commission on the Limits of the Continental Shelf was committed to the Task Group for the Extension of the Continental Shelf (EMEPC), which formally began its activity in January 2005. At that time, the existing national capacity to conduct such a task was very limited in its hydrographic, geological and geophysical components. Portugal made a great effort to overcome these weaknesses and develop a strategy, and submitted the proposal for the extension of the continental shelf beyond 200 nautical miles on 11 May 2009. The execution of the project involved the implementation of several complementary strategies, including: 1) intensive bathymetric, geophysical and, locally, geological data acquisition; 2) acquisition/development of new stand-alone and ship-mounted equipment; 3) interactions with universities and research institutes, with emphasis on R&D initiatives; 4) creation of critical mass in deep-sea research by promoting advanced studies on International Law, Geophysics, Geology, Hydrography and Biology, amongst others; 5) promotion of the sea as a major national goal, coupled with an outreach strategy. Until now, more than 1050 days of surveying have resulted in large-scale seafloor mapping using two EM120 and one EM710 multibeam echosounders from Kongsberg mounted on two hydrographic vessels. The surveys follow IHO Order 2 Standard (SP44, 5th Edition) and cover an area of over 2.6 million km2. A multichannel reflection and wide-angle refraction seismic survey provided 2600 km of high quality MCS data, allowing accurate imaging of the sediment cover. The data collected under the project have also been used to foster collaboration with universities and research institutes and to support research projects and postgraduate studies on the deep sea. An educational strategy has been put in place in order to promote Ocean Literacy among children and youngsters. Since 2008, EMEPC has been responsible for the operation and maintenance of Luso, a work-class ROV rated to 6,000 metres depth. More than 170 ROV dives allowed direct observation of the deep sea over almost 800 hours of video footage, which also provided key information on biodiversity and deep-sea ecosystems; this serves as the basis for a database of biological data and for a strategy to protect the marine environment. Portugal now has the capacity to access all of its maritime areas, reinforcing knowledge of the natural processes that shape the deep sea. Some views on the Portuguese interpretation and application of article 76 will be discussed based on the data gathered within the scope of the project, which is still ongoing.
Semantic Typicality Effects in Acquired Dyslexia: Evidence for Semantic Impairment in Deep Dyslexia.
Riley, Ellyn A; Thompson, Cynthia K
2010-06-01
BACKGROUND: Acquired deep dyslexia is characterized by impairment in grapheme-phoneme conversion and production of semantic errors in oral reading. Several theories have attempted to explain the production of semantic errors in deep dyslexia, some proposing that they arise from impairments in both grapheme-phoneme and lexical-semantic processing, and others proposing that such errors stem from a deficit in phonological production. Whereas both views have gained some acceptance, the limited evidence available does not clearly eliminate the possibility that semantic errors arise from a lexical-semantic input processing deficit. AIMS: To investigate semantic processing in deep dyslexia, this study examined the typicality effect in deep dyslexic individuals, phonological dyslexic individuals, and controls using an online category verification paradigm. This task requires explicit semantic access without speech production, focusing observation on semantic processing from written or spoken input. METHODS & PROCEDURES: To examine the locus of semantic impairment, the task was administered in visual and auditory modalities with reaction time as the primary dependent measure. Nine controls, six phonological dyslexic participants, and five deep dyslexic participants completed the study. OUTCOMES & RESULTS: Controls and phonological dyslexic participants demonstrated a typicality effect in both modalities, while deep dyslexic participants did not demonstrate a typicality effect in either modality. CONCLUSIONS: These findings suggest that deep dyslexia is associated with a semantic processing deficit. Although this does not rule out the possibility of concomitant deficits in other modules of lexical-semantic processing, this finding suggests a direction for treatment of deep dyslexia focused on semantic processing.
Cancer Precision Medicine: Why More Is More and DNA Is Not Enough.
Schütte, Moritz; Ogilvie, Lesley A; Rieke, Damian T; Lange, Bodo M H; Yaspo, Marie-Laure; Lehrach, Hans
2017-01-01
Every tumour is different. They arise in patients with different genomes, from cells with different epigenetic modifications, and by random processes affecting the genome and/or epigenome of a somatic cell, allowing it to escape the usual controls on its growth. Tumours and patients therefore often respond very differently to the drugs they receive. Cancer precision medicine aims to characterise the tumour (and often also the patient) to be able to predict, with high accuracy, its response to different treatments, with options ranging from the selective characterisation of a few genomic variants considered particularly important to predict the response of the tumour to specific drugs, to deep genome analysis of both tumour and patient, combined with deep transcriptome analysis of the tumour. Here, we compare the expected results of carrying out such analyses at different levels, from different size panels to a comprehensive analysis incorporating both patient and tumour at the DNA and RNA levels. In doing so, we illustrate the additional power gained by this unusually deep analysis strategy, a potential basis for a future precision medicine first strategy in cancer drug therapy. However, this is only a step along the way of increasingly detailed molecular characterisation, which in our view will, in the future, introduce additional molecular characterisation techniques, including systematic analysis of proteins and protein modification states and different types of metabolites in the tumour, systematic analysis of circulating tumour cells and nucleic acids, the use of spatially resolved analysis techniques to address the problem of tumour heterogeneity as well as the deep analyses of the immune system of the patient to, e.g., predict the response of the patient to different types of immunotherapy. Such analyses will generate data sets of even greater complexity, requiring mechanistic modelling approaches to capture enough of the complex situation in the real patient to be able to accurately predict his/her responses to all available therapies. © 2017 S. Karger AG, Basel.
Scanning the Horizon: Coast Guard Strategy in a Hot, Flat, Crowded World
2010-03-12
Mexico. From 1992 to 2007, offshore rigs drilling in deep water in the Gulf of Mexico increased from three to 30, and deepwater oil production...discusses the Coast Guard’s Integrated Deepwater System program, which includes recapitalization of its deep-water vessels and aircraft. At the...water and ultra-deep-water drilling. Discussion of increased outer continental shelf activity in higher level strategic planning indicates that
Fine-grained leukocyte classification with deep residual learning for microscopic images.
Qin, Feiwei; Gao, Nannan; Peng, Yong; Wu, Zizhao; Shen, Shuying; Grudtsin, Artur
2018-08-01
Leukocyte classification and cytometry have wide applications in the medical domain, and previous research has usually exploited machine learning techniques to classify leukocytes automatically. However, these methods are constrained by the limitations of earlier machine learning techniques: extracting distinctive features from raw microscopic images is difficult, and the widely used SVM classifier has relatively few parameters to tune, so such methods cannot efficiently handle fine-grained classification when the white blood cells span up to 40 categories. Based on deep learning theory, this paper conducts a systematic study of finer leukocyte classification. A deep residual neural network based leukocyte classifier is constructed first, which can imitate the domain expert's cell recognition process and extract salient features robustly and automatically. The deep neural network classifier's topology is then adjusted according to prior knowledge of the white blood cell test. After that, a microscopic image dataset with almost one hundred thousand labeled leukocytes belonging to 40 categories is built, and combined training strategies are adopted to give the designed classifier good generalization ability. The proposed deep residual neural network based classifier was tested on a microscopic image dataset with 40 leukocyte categories. It achieves top-1 accuracy of 77.80% and top-5 accuracy of 98.75% during the training procedure. The average accuracy on the test set is nearly 76.84%. This paper presents a fine-grained leukocyte classification method for microscopic images, based on deep residual learning theory and medical domain knowledge. Experimental results validate the feasibility and effectiveness of our approach. Extended experiments indicate that the fine-grained leukocyte classifier could be used in real medical applications, assisting doctors in diagnosing diseases and significantly reducing manual workload. Copyright © 2018 Elsevier B.V. All rights reserved.
Levels-of-processing effects on a task of olfactory naming.
Royet, Jean-Pierre; Koenig, Olivier; Paugam-Moisy, Helene; Puzenat, Didier; Chasse, Jean-Luc
2004-02-01
The effects of odor processing were investigated at various analytical levels, from simple sensory analysis to deep or semantic analysis, on a subsequent task of odor naming. Students (106 women, 23.6 +/- 5.5 yr. old; 65 men, 25.1 +/- 7.1 yr. old) were tested. The experimental procedure included two successive sessions, a first session to characterize a set of 30 odors with criteria that used various depths of processing and a second session to name the odors as quickly as possible. Four processing conditions rated the odors using descriptors before naming the odor. The control condition did not rate the odors before naming. The processing conditions were based on lower-level olfactory judgments (superficial processing), higher-level olfactory-gustatory-somesthetic judgments (deep processing), and higher-level nonolfactory judgments (deep-control processing, with subjects rating odors with auditory and visual descriptors). One experimental condition successively grouped lower- and higher-level olfactory judgments (superficial-deep processing). A naming index, which depended on response accuracy, and the subjects' response times were calculated. Odor naming was modified for 18 out of 30 odorants as a function of the level of processing required. For 94.5% of significant variations, the scores for odor naming were higher following those tasks for which it was hypothesized that the necessary olfactory processing was carried out at a deeper level. Performance in the naming task was progressively improved as follows: no rating of odors, then superficial, deep-control, deep, and superficial-deep processing. These data show that the deepest olfactory encoding was later associated with progressively higher performance in naming.
Contemporary crustal movement of southeastern Tibet: Constraints from dense GPS measurements
Pan, Yuanjin; Shen, Wen-Bin
2017-01-01
The ongoing collision between the Indian and Eurasian plates drives N-S crustal shortening and thickening of the Tibetan Plateau, but its dynamic mechanisms remain controversial. As one of the most tectonically active regions of the world, South-Eastern Tibet (SET) has received much attention from geoscientists. Here we present the latest three-dimensional GPS velocity field to constrain the present-day tectonic processes of SET, which highlights the complex vertical crustal deformation. Improved data processing strategies are adopted to resolve the strain patterns throughout SET. Crustal uplift and subsidence are dominated by regional deep tectonic dynamic processes. Results show that the Gongga Shan is uplifting at 1–1.5 mm/yr. Nevertheless, an anomalous crustal uplift of ~8.7 mm/yr and negative horizontal dilation rates of 40–50 nstrain/yr throughout the Longmenshan structure reveal that this structure is caused by intracontinental subduction of the Yangtze Craton. The Xianshuihe-Xiaojiang fault is a major active sinistral strike-slip fault whose strike is essentially consistent with the orientation of the maximum shear strain rates. These observations suggest that upper crustal deformation is closely related to the regulation and coupling of deep material. PMID:28349926
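For context, dilation and maximum shear strain rates of the kind quoted above can be derived from a gridded horizontal GPS velocity field using standard infinitesimal-strain formulas. The sketch below is a simplified, assumed workflow (regular grid, finite-difference gradients), not the improved processing strategy used in the study; variable names are placeholders.

```python
# Hedged sketch: strain rates from gridded east (ve) and north (vn) velocities,
# indexed [y, x]. With ve, vn in mm/yr and spacings in km, the results are in
# units of 1e-6 per year (microstrain/yr).
import numpy as np

def strain_rates(ve, vn, dx_km, dy_km):
    dve_dy, dve_dx = np.gradient(ve, dy_km, dx_km)  # d/dy (axis 0), then d/dx (axis 1)
    dvn_dy, dvn_dx = np.gradient(vn, dy_km, dx_km)
    dilation = dve_dx + dvn_dy                                      # areal strain rate
    max_shear = 0.5 * np.sqrt((dve_dx - dvn_dy) ** 2 + (dve_dy + dvn_dx) ** 2)
    return dilation, max_shear
```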
A deep 3D residual CNN for false-positive reduction in pulmonary nodule detection.
Jin, Hongsheng; Li, Zongyao; Tong, Ruofeng; Lin, Lanfen
2018-05-01
The automatic detection of pulmonary nodules using CT scans improves the efficiency of lung cancer diagnosis, and false-positive reduction plays a significant role in the detection. In this paper, we focus on the false-positive reduction task and propose an effective method for this task. We construct a deep 3D residual CNN (convolution neural network) to reduce false-positive nodules from candidate nodules. The proposed network is much deeper than the traditional 3D CNNs used in medical image processing. Specifically, in the network, we design a spatial pooling and cropping (SPC) layer to extract multilevel contextual information of CT data. Moreover, we employ an online hard sample selection strategy in the training process to make the network better fit hard samples (e.g., nodules with irregular shapes). Our method is evaluated on 888 CT scans from the dataset of the LUNA16 Challenge. The free-response receiver operating characteristic (FROC) curve shows that the proposed method achieves a high detection performance. Our experiments confirm that our method is robust and that the SPC layer helps increase the prediction accuracy. Additionally, the proposed method can easily be extended to other 3D object detection tasks in medical image processing. © 2018 American Association of Physicists in Medicine.
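The online hard sample selection used during training can be approximated by the runnable PyTorch-style sketch below: within each batch of candidate nodules, only the highest-loss samples contribute to the gradient. The keep ratio and the model/optimizer/loader names in the commented loop are assumptions; the 3D residual network and SPC layer themselves are not shown.

```python
# Hedged sketch of online hard sample selection (hard example mining).
import torch
import torch.nn.functional as F

def hard_sample_loss(logits, labels, keep_ratio=0.5):
    """Average the loss over only the hardest fraction of samples in the batch."""
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    k = max(1, int(keep_ratio * per_sample.numel()))
    hard_losses, _ = torch.topk(per_sample, k)
    return hard_losses.mean()

# In a training loop (model, optimizer, loader assumed to exist):
# for volumes, labels in loader:
#     loss = hard_sample_loss(model(volumes), labels)
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
```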
Transferring and generalizing deep-learning-based neural encoding models across subjects.
Wen, Haiguang; Shi, Junxing; Chen, Wei; Liu, Zhongming
2018-08-01
Recent studies have shown the value of using deep learning models for mapping and characterizing how the brain represents and organizes information for natural vision. However, modeling the relationship between deep learning models and the brain (or encoding models), requires measuring cortical responses to large and diverse sets of natural visual stimuli from single subjects. This requirement limits prior studies to few subjects, making it difficult to generalize findings across subjects or for a population. In this study, we developed new methods to transfer and generalize encoding models across subjects. To train encoding models specific to a target subject, the models trained for other subjects were used as the prior models and were refined efficiently using Bayesian inference with a limited amount of data from the target subject. To train encoding models for a population, the models were progressively trained and updated with incremental data from different subjects. For the proof of principle, we applied these methods to functional magnetic resonance imaging (fMRI) data from three subjects watching tens of hours of naturalistic videos, while a deep residual neural network driven by image recognition was used to model visual cortical processing. Results demonstrate that the methods developed herein provide an efficient and effective strategy to establish both subject-specific and population-wide predictive models of cortical representations of high-dimensional and hierarchical visual features. Copyright © 2018 Elsevier Inc. All rights reserved.
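A simplified version of the subject-transfer idea is a ridge regression whose prior mean is the weight vector learned from other subjects, so only a small amount of target-subject data is needed to refine it. This is a hedged simplification of the Bayesian inference described in the abstract; the function name, regularization weight, and data shapes are assumptions.

```python
# MAP ridge regression shrinking toward a prior weight vector from other subjects.
import numpy as np

def transfer_encoding_weights(X, y, w_prior, alpha=1.0):
    """X: (time, features) deep-net features; y: (time,) voxel response; w_prior: prior weights."""
    n_feat = X.shape[1]
    A = X.T @ X + alpha * np.eye(n_feat)
    b = X.T @ y + alpha * w_prior   # shrink toward the other-subject model
    return np.linalg.solve(A, b)

# w_target = transfer_encoding_weights(X_new_subject, y_new_subject, w_from_others)
```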
Simulating North American mesoscale convective systems with a convection-permitting climate model
NASA Astrophysics Data System (ADS)
Prein, Andreas F.; Liu, Changhai; Ikeda, Kyoko; Bullock, Randy; Rasmussen, Roy M.; Holland, Greg J.; Clark, Martyn
2017-10-01
Deep convection is a key process in the climate system and the main source of precipitation in the tropics, subtropics, and mid-latitudes during summer. Furthermore, it is related to high impact weather causing floods, hail, tornadoes, landslides, and other hazards. State-of-the-art climate models have to parameterize deep convection due to their coarse grid spacing. These parameterizations are a major source of uncertainty and long-standing model biases. We present a North American scale convection-permitting climate simulation that is able to explicitly simulate deep convection due to its 4-km grid spacing. We apply a feature-tracking algorithm to detect hourly precipitation from Mesoscale Convective Systems (MCSs) in the model and compare it with radar-based precipitation estimates east of the US Continental Divide. The simulation is able to capture the main characteristics of the observed MCSs such as their size, precipitation rate, propagation speed, and lifetime within observational uncertainties. In particular, the model is able to produce realistically propagating MCSs, which was a long-standing challenge in climate modeling. However, the MCS frequency is significantly underestimated in the central US during late summer. We discuss the origin of this frequency bias and suggest strategies for model improvements.
Deep Learning for Image-Based Cassava Disease Detection.
Ramcharan, Amanda; Baranowski, Kelsee; McCloskey, Peter; Ahmed, Babuali; Legg, James; Hughes, David P
2017-01-01
Cassava is the third largest source of carbohydrates for human food in the world but is vulnerable to virus diseases, which threaten to destabilize food security in sub-Saharan Africa. Novel methods of cassava disease detection are needed to support improved control which will prevent this crisis. Image recognition offers both a cost effective and scalable technology for disease detection. New deep learning models offer an avenue for this technology to be easily deployed on mobile devices. Using a dataset of cassava disease images taken in the field in Tanzania, we applied transfer learning to train a deep convolutional neural network to identify three diseases and two types of pest damage (or lack thereof). The best trained model accuracies were 98% for brown leaf spot (BLS), 96% for red mite damage (RMD), 95% for green mite damage (GMD), 98% for cassava brown streak disease (CBSD), and 96% for cassava mosaic disease (CMD). The best model achieved an overall accuracy of 93% for data not used in the training process. Our results show that the transfer learning approach for image recognition of field images offers a fast, affordable, and easily deployable strategy for digital plant disease detection.
This SMMP is intended to provide management and monitoring strategies for disposal in the Mouth of Columbia River- Deep and Shallow Ocean Dredged Material Disposal Sites on the border of Oregon and Washington.
Teaching Real-World Applications of Business Statistics Using Communication to Scaffold Learning
ERIC Educational Resources Information Center
Green, Gareth P.; Jones, Stacey; Bean, John C.
2015-01-01
Our assessment research suggests that quantitative business courses that rely primarily on algorithmic problem solving may not produce the deep learning required for addressing real-world business problems. This article illustrates a strategy, supported by recent learning theory, for promoting deep learning by moving students gradually from…
Relationships between Emotional Labor, Job Performance, and Turnover
ERIC Educational Resources Information Center
Goodwin, Robyn E.; Groth, Markus; Frenkel, Stephen J.
2011-01-01
The present study investigates the relationship between the emotional labor strategies surface acting and deep acting and organizational outcomes, specifically, employees' overall job performance and turnover. Call center employees from two large financial service organizations completed an online survey about their use of surface and deep acting.…
The drawing effect: Evidence for reliable and robust memory benefits in free recall.
Wammes, Jeffrey D; Meade, Melissa E; Fernandes, Myra A
2016-01-01
In 7 free-recall experiments, the benefit of creating drawings of to-be-remembered information relative to writing was examined as a mnemonic strategy. In Experiments 1 and 2, participants were presented with a list of words and were asked to either draw or write out each. Drawn words were better recalled than written. Experiments 3-5 showed that the memory boost provided by drawing could not be explained by elaborative encoding (deep level of processing, LoP), visual imagery, or picture superiority, respectively. In Experiment 6, we explored potential limitations of the drawing effect, by reducing encoding time and increasing list length. Drawing, relative to writing, still benefited memory despite these constraints. In Experiment 7, the drawing effect was significant even when encoding trial types were compared in pure lists between participants, inconsistent with a distinctiveness account. Together these experiments indicate that drawing enhances memory relative to writing, across settings, instructions, and alternate encoding strategies, both within- and between-participants, and that a deep LoP, visual imagery, or picture superiority, alone or collectively, are not sufficient to explain the observed effect. We propose that drawing improves memory by encouraging a seamless integration of semantic, visual, and motor aspects of a memory trace.
Motivation, learning strategies, participation and medical school performance.
Stegers-Jager, Karen M; Cohen-Schotanus, Janke; Themmen, Axel P N
2012-07-01
Medical schools wish to better understand why some students excel academically and others have difficulty in passing medical courses. Components of self-regulated learning (SRL), such as motivational beliefs and learning strategies, as well as participation in scheduled learning activities, have been found to relate to student performance. Although participation may be a form of SRL, little is known about the relationships among motivational beliefs, learning strategies, participation and medical school performance. This study aimed to test and cross-validate a hypothesised model of relationships among motivational beliefs (value and self-efficacy), learning strategies (deep learning and resource management), participation (lecture attendance, skills training attendance and completion of optional study assignments) and Year 1 performance at medical school. Year 1 medical students in the cohorts of 2008 (n = 303) and 2009 (n = 369) completed a questionnaire on motivational beliefs and learning strategies (sourced from the Motivated Strategies for Learning Questionnaire) and participation. Year 1 performance was operationalised as students' average Year 1 course examination grades. Structural equation modelling was used to analyse the data. Participation and self-efficacy beliefs were positively associated with Year 1 performance (β = 0.78 and β = 0.19, respectively). Deep learning strategies were negatively associated with Year 1 performance (β = -0.31), but positively related to resource management strategies (β = 0.77), which, in turn, were positively related to participation (β = 0.79). Value beliefs were positively related to deep learning strategies only (β = 0.71). The overall structural model for the 2008 cohort accounted for 47% of the variance in Year 1 grade point average and was cross-validated in the 2009 cohort. This study suggests that participation mediates the relationships between motivation and learning strategies, and medical school performance. However, participation and self-efficacy beliefs also made unique contributions towards performance. Encouraging participation and strengthening self-efficacy may help to enhance medical student performance. © Blackwell Publishing Ltd 2012.
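A regression-based check of the mediation idea (learning strategies acting on performance through participation) might look like the sketch below. It is not the structural equation model used in the study; the file name and column names are hypothetical.

```python
# Illustrative product-of-coefficients mediation check with OLS (not SEM).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("year1_cohort.csv")  # hypothetical columns: gpa, participation, resource_mgmt

total  = smf.ols("gpa ~ resource_mgmt", data=df).fit()                  # total effect (c path)
a_path = smf.ols("participation ~ resource_mgmt", data=df).fit()        # a path
b_path = smf.ols("gpa ~ participation + resource_mgmt", data=df).fit()  # b and c' paths

indirect = a_path.params["resource_mgmt"] * b_path.params["participation"]
print("indirect (mediated) effect:", indirect)
```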
Ragland, J. Daniel; Gur, Ruben C.; Valdez, Jeffrey N.; Loughead, James; Elliott, Mark; Kohler, Christian; Kanes, Stephen; Siegel, Steven J.; Moelter, Stephen T.; Gur, Raquel E.
2015-01-01
Objective: Patients with schizophrenia improve episodic memory accuracy when given organizational strategies through levels-of-processing paradigms. This study tested if improvement is accompanied by normalized frontotemporal function. Method: Event-related blood-oxygen-level-dependent functional magnetic resonance imaging (fMRI) was used to measure activation during shallow (perceptual) and deep (semantic) word encoding and recognition in 14 patients with schizophrenia and 14 healthy comparison subjects. Results: Despite slower and less accurate overall word classification, the patients showed normal levels-of-processing effects, with faster and more accurate recognition of deeply processed words. These effects were accompanied by left ventrolateral prefrontal activation during encoding in both groups, although the thalamus, hippocampus, and lingual gyrus were overactivated in the patients. During word recognition, the patients showed overactivation in the left frontal pole and had a less robust right prefrontal response. Conclusions: Evidence of normal levels-of-processing effects and left prefrontal activation suggests that patients with schizophrenia can form and maintain semantic representations when they are provided with organizational cues and can improve their word encoding and retrieval. Areas of overactivation suggest residual inefficiencies. Nevertheless, the effect of teaching organizational strategies on episodic memory and brain function is a worthwhile topic for future interventional studies. PMID:16199830
Suppression of Defects and Deep Levels Using Isoelectronic Tungsten Substitution in Monolayer MoSe 2
Li, Xufan; Puretzky, Alexander A.; Sang, Xiahan; ...
2017-05-18
Chemical vapor deposition (CVD) is one of the most promising, scalable synthetic techniques to enable large-area synthesis of two-dimensional (2D) transition metal dichalcogenides (TMDs) for the realization of next generation optoelectronic devices. However, defects formed during the CVD growth process currently limit the quality and electronic properties of 2D TMDs. Effective synthesis and processing strategies to suppress defects and enhance the quality of 2D TMDs are urgently needed. In this work, isoelectronic doping to produce a stable alloy is presented as a new strategy to suppress defects and enhance photoluminescence (PL) in CVD-grown TMD monolayers. The random, isoelectronic substitution of W atoms for Mo atoms in CVD-grown monolayers of Mo1-xWxSe2 (0 […] 2 monolayers. The resultant decrease in defect-mediated non-radiative recombination in the Mo0.82W0.18Se2 monolayers yielded ~10 times more intense PL and extended the carrier lifetime by a factor of 3 compared to pristine CVD-grown MoSe2 monolayers grown under similar conditions. Low-temperature (4–125 K) PL from defect-related localized states confirms theoretical predictions that isoelectronic W alloying should suppress deep levels in MoSe2, showing that the defect levels in Mo1-xWxSe2 monolayers are higher in energy and quenched more quickly than in MoSe2. Isoelectronic substitution therefore appears to be a promising synthetic method to control the heterogeneity of 2D TMDs and to realize the scalable production of high-performance optoelectronic and electronic devices.
Experimental Studies on Pressure and Temperature Effects on Deep Sea Organisms.
1980-02-28
Supplementary notes: Research published in two papers: (a) George, R.Y. 1979. What Adaptive Strategies Promote Immigration and Speciation in Deep Sea...Environment. Sarsia 64(1-2):61-65. (b) George, R.Y. 1979. Behavioral and Metabolic Adaptations of Polar and Deep Sea Crustaceans. Bull. Biol. Soc. Wash. No. 3...pages 283-296. Keywords: pressure adaptation, temperature-pressure interaction
Brébion, Gildas; Bressan, Rodrigo A; Pilowsky, Lyn S; David, Anthony S
2011-05-01
Previous work has suggested that decrements in both processing speed and working memory span play a role in the memory impairment observed in patients with schizophrenia. We undertook a study to examine simultaneously the effects of these two factors. A sample of 49 patients with schizophrenia and 43 healthy controls underwent a battery of verbal and visual memory tasks. Superficial and deep encoding memory measures were tallied. We conducted regression analyses on the various memory measures, using processing speed and working memory span as independent variables. In the patient group, processing speed was a significant predictor of superficial and deep memory measures in both verbal and visual memory. Working memory span was an additional significant predictor of the deep memory measures only. Regression analyses involving all participants revealed that the effect of diagnosis on all the deep encoding memory measures was reduced to non-significance when processing speed was entered in the regression. Decreased processing speed is involved in the verbal and visual memory deficits of patients, whether the task requires superficial or deep encoding. Working memory is involved only insofar as the task requires a certain amount of effort.
Teaching Decoding Strategies without Destroying Story.
ERIC Educational Resources Information Center
Kane, Sharon
1999-01-01
Argues that decoding skills must and can be introduced, taught, practiced, and reinforced within contexts meaningful to students. Shows how teachers can provide these meaningful educational contexts within which decoding strategies make sense to emerging readers. (SR)
Xue, Chuang; Zhao, Jingbo; Chen, Lijie; Yang, Shang-Tian; Bai, Fengwu
Butanol as an advanced biofuel has gained great attention due to its environmental benefits and superior properties compared to ethanol. However, the cost of biobutanol production via conventional acetone-butanol-ethanol (ABE) fermentation by Clostridium acetobutylicum is not economically competitive, which has hampered its industrial application. The strain performance and downstream process greatly impact the economics of biobutanol production. Although various engineered strains with carefully orchestrated metabolic and sporulation-specific pathways have been developed, none of them is ideal for industrial biobutanol production. For further strain improvement, it is necessary to develop advanced genome editing tools and a deep understanding of cellular functioning of genes in metabolic and regulatory pathways. Processes with integrated product recovery can increase fermentation productivity by continuously removing inhibitory products while generating butanol (ABE) in a concentrated solution. In this review, we provide an overview of recent advances in C. acetobutylicum strain engineering and process development focusing on in situ product recovery. With deep understanding of systematic cellular bioinformatics, the exploration of state-of-the-art genome editing tools such as CRISPR-Cas for targeted gene knock-out and knock-in would play a vital role in Clostridium cell engineering for biobutanol production. Developing advanced hybrid separation processes for in situ butanol recovery, which will be discussed with a detailed comparison of advantages and disadvantages of various recovery techniques, is also imperative to the economical development of biobutanol. Copyright © 2017 Elsevier Inc. All rights reserved.
Anarchist, Neoliberal, & Democratic Decision-Making: Deepening the Joy in Learning and Teaching
ERIC Educational Resources Information Center
Briscoe, Felecia M.
2012-01-01
Using a critical postmodern framework, this article analyzes the relationship of the decision-making processes of anarchism and neoliberalism to that of deep democracy. Anarchist processes are found to share common core principles with deep democracy, but neoliberal processes are found to be antithetical to deep democracy. To increase the joy in…
ERIC Educational Resources Information Center
McClintic-Gilbert, Megan S.; Corpus, Jennifer Henderlong; Wormington, Stephanie V.; Haimovitz, Kyla
2013-01-01
The present study examined the extent to which middle school students' (N = 90) learning strategies mediated the relationship between their motivational orientations and academic achievement. Survey data revealed that higher degrees of intrinsic motivation predicted the use of both deep and surface learning strategies, whereas higher degrees of…
ERIC Educational Resources Information Center
Murayama, Kou; Pekrun, Reinhard; Lichtenfeld, Stephanie; vom Hofe, Rudolf
2013-01-01
This research examined how motivation (perceived control, intrinsic motivation, and extrinsic motivation), cognitive learning strategies (deep and surface strategies), and intelligence jointly predict long-term growth in students' mathematics achievement over 5 years. Using longitudinal data from six annual waves (Grades 5 through 10;…
NASA Astrophysics Data System (ADS)
Bleacher, J. E.; Gendreau, K.; Arzoumanian, Z.; Young, K. E.; McAdam, A.
2018-02-01
Science instruments to be used during human exploration should be designed to serve as multipurpose tools that are of use throughout a mission. Here we discuss a multipurpose tool approach to using contact XRD/XRF onboard the Deep Space Gateway.
Measurement needs guided by synthetic radar scans in high-resolution model output
NASA Astrophysics Data System (ADS)
Varble, A.; Nesbitt, S. W.; Borque, P.
2017-12-01
Microphysical and dynamical process interactions within deep convective clouds are not well understood, partly because measurement strategies often focus on statistics of cloud state rather than cloud processes. While processes cannot be directly measured, they can be inferred with sufficiently frequent and detailed scanning radar measurements focused on the life cycle of individual cloud regions. This is a primary goal of the 2018-19 DOE ARM Cloud, Aerosol, and Complex Terrain Interactions (CACTI) and NSF Remote sensing of Electrification, Lightning, And Mesoscale/microscale Processes with Adaptive Ground Observations (RELAMPAGO) field campaigns in central Argentina, where orographic deep convective initiation is frequent with some high-impact systems growing into the tallest and largest in the world. An array of fixed and mobile scanning multi-wavelength dual-polarization radars will be coupled with surface observations, sounding systems, multi-wavelength vertical profilers, and aircraft in situ measurements to characterize convective cloud life cycles and their relationship with environmental conditions. While detailed cloud processes are an observational target, the radar scan patterns that are most ideal for observing them are unclear. They depend on the locations and scales of key microphysical and dynamical processes operating within the cloud. High-resolution simulations of clouds, while imperfect, can provide information on these locations and scales that guide radar measurement needs. Radar locations are set in the model domain based on planned experiment locations, and simulated orographic deep convective initiation and upscale growth are sampled using a number of different scans involving RHIs or PPIs with predefined elevation and azimuthal angles that approximately conform with radar range and beam width specifications. Each full scan pattern is applied to output at single model time steps with time step intervals that depend on the length of time required to complete each scan in the real world. The ability of different scans to detect key processes within the convective cloud life cycle is examined in connection with previous and subsequent dynamical and microphysical transitions. This work will guide strategic scan patterns that will be used during CACTI and RELAMPAGO.
Zhang, Wei; Li, Jiannian; Zhang, Jie; Sheng, Jinzhi; He, Ting; Tian, Meiyue; Zhao, Yufeng; Xie, Changjun; Mai, Liqiang; Mu, Shichun
2017-04-12
To overcome the inferior rate capability and cycle stability of MnO-based materials as lithium-ion battery anodes, associated with pulverization and gradual aggregation during the conversion process, we constructed robust mesoporous N-doped carbon (N-C) protected MnO nanoparticles on reduced graphene oxide (rGO) (MnO@N-C/rGO) by a simple top-down incorporation strategy. Such dual carbon protection endows MnO@N-C/rGO with excellent structural stability and enhanced charge transfer kinetics. The composite exhibits superior rate capability, delivering a capacity as high as 864.7 mAh g⁻¹ at 100 mA g⁻¹ over 70 deep charge/discharge cycles, and outstanding cycling stability (425.0 mAh g⁻¹ retained after 1300 cycles at 2000 mA g⁻¹, corresponding to merely 0.004% capacity decay per cycle). This facile method provides a novel strategy for the synthesis of porous electrodes that makes use of highly insulating materials.
Succession in the petroleum reservoir microbiome through an oil field production lifecycle.
Vigneron, Adrien; Alsop, Eric B; Lomans, Bartholomeus P; Kyrpides, Nikos C; Head, Ian M; Tsesmetzis, Nicolas
2017-09-01
Subsurface petroleum reservoirs are an important component of the deep biosphere where indigenous microorganisms live under extreme conditions and in isolation from the Earth's surface for millions of years. However, unlike the bulk of the deep biosphere, the petroleum reservoir deep biosphere is subject to extreme anthropogenic perturbation, with the introduction of new electron acceptors, donors and exogenous microbes during oil exploration and production. Despite the fundamental and practical significance of this perturbation, there has never been a systematic evaluation of the ecological changes that occur over the production lifetime of an active offshore petroleum production system. Analysis of the entire Halfdan oil field in the North Sea (32 producing wells in production for 1-15 years) using quantitative PCR, multigenic sequencing, comparative metagenomic and genomic bins reconstruction revealed systematic shifts in microbial community composition and metabolic potential, as well as changing ecological strategies in response to anthropogenic perturbation of the oil field ecosystem, related to length of time in production. The microbial communities were initially dominated by slow growing anaerobes such as members of the Thermotogales and Clostridiales adapted to living on hydrocarbons and complex refractory organic matter. However, as seawater and nitrate injection (used for secondary oil production) delivered oxidants, the microbial community composition progressively changed to fast growing opportunists such as members of the Deferribacteres, Delta-, Epsilon- and Gammaproteobacteria, with energetically more favorable metabolism (for example, nitrate reduction, H 2 S, sulfide and sulfur oxidation). This perturbation has profound consequences for understanding the microbial ecology of the system and is of considerable practical importance as it promotes detrimental processes such as reservoir souring and metal corrosion. These findings provide a new conceptual framework for understanding the petroleum reservoir biosphere and have consequences for developing strategies to manage microbiological problems in the oil industry.
Does the acceptance of hybrid learning affect learning approaches in France?
Marco, Lionel Di; Venot, Alain; Gillois, Pierre
2017-01-01
Acceptance of a learning technology affects students' intention to use that technology, but the influence of the acceptance of a learning technology on learning approaches has not been investigated in the literature. A deep learning approach is important in the field of health, where links must be created between skills, knowledge, and habits. Our hypothesis was that acceptance of a hybrid learning model would affect students' way of learning. We analysed these concepts, and their correlations, in the context of a flipped classroom method using a local learning management system. In a sample of all students within a single year of study in the midwifery program (n= 38), we used 3 validated scales to evaluate these concepts (the Study Process Questionnaire, My Intellectual Work Tools, and the Hybrid E-Learning Acceptance Model: Learner Perceptions). Our sample had a positive acceptance of the learning model, but a neutral intention to use it. Students reported that they were distractible during distance learning. They presented a better mean score for the deep approach than for the superficial approach (P< 0.001), which is consistent with their declared learning strategies (personal reorganization of information; search and use of examples). There was no correlation between poor acceptance of the learning model and inadequate learning approaches. The strategy of using deep learning techniques was moderately correlated with acceptance of the learning model (r s = 0.42, P= 0.03). Learning approaches were not affected by acceptance of a hybrid learning model, due to the flexibility of the tool. However, we identified problems in the students' time utilization, which explains their neutral intention to use the system.
The study of deep-sea cephalopods.
Hoving, Henk-Jan T; Perez, Jose Angel A; Bolstad, Kathrin S R; Braid, Heather E; Evans, Aaron B; Fuchs, Dirk; Judkins, Heather; Kelly, Jesse T; Marian, José E A R; Nakajima, Ryuta; Piatkowski, Uwe; Reid, Amanda; Vecchione, Michael; Xavier, José C C
2014-01-01
"Deep-sea" cephalopods are here defined as cephalopods that spend a significant part of their life cycles outside the euphotic zone. In this chapter, the state of knowledge in several aspects of deep-sea cephalopod research are summarized, including information sources for these animals, diversity and general biogeography and life cycles, including reproduction. Recommendations are made for addressing some of the remaining knowledge deficiencies using a variety of traditional and more recently developed methods. The types of oceanic gear that are suitable for collecting cephalopod specimens and images are reviewed. Many groups of deep-sea cephalopods require taxonomic reviews, ideally based on both morphological and molecular characters. Museum collections play a vital role in these revisions, and novel (molecular) techniques may facilitate new use of old museum specimens. Fundamental life-cycle parameters remain unknown for many species; techniques developed for neritic species that could potentially be applied to deep-sea cephalopods are discussed. Reproductive tactics and strategies in deep-sea cephalopods are very diverse and call for comparative evolutionary and experimental studies, but even in the twenty-first century, mature individuals are still unknown for many species. New insights into diet and trophic position have begun to reveal a more diverse range of feeding strategies than the typically voracious predatory lifestyle known for many cephalopods. Regular standardized deep-sea cephalopod surveys are necessary to provide insight into temporal changes in oceanic cephalopod populations and to forecast, verify and monitor the impacts of global marine changes and human impacts on these populations. © 2014 Elsevier Ltd All rights reserved.
Statistical process control in Deep Space Network operation
NASA Technical Reports Server (NTRS)
Hodder, J. A.
2002-01-01
This report describes how the Deep Space Mission System (DSMS) Operations Program Office at the Jet Propulsion Laboratory (JPL) uses Statistical Process Control (SPC) to monitor performance and evaluate initiatives for improving processes on the National Aeronautics and Space Administration's (NASA) Deep Space Network (DSN).
Formability analysis of aluminum alloys through deep drawing process
NASA Astrophysics Data System (ADS)
Pranavi, U.; Janaki Ramulu, Perumalla; Chandramouli, Ch; Govardhan, Dasari; Prasad, PVS. Ram
2016-09-01
Deep drawing is a significant metal forming process used in sheet metal forming operations; it can produce complex shapes with few defects. The process involves several influential parameters, and an optimum combination of them must be identified to obtain an efficient final product with the required mechanical properties. The present work evaluates the formability of aluminum alloy sheets in deep drawing, examining the effects of punch radius, die radius, lubricating conditions, and blank holding force for AA 6061 aluminum alloy sheet of 2 mm thickness. Numerical simulations of square-cup deep drawing are performed using three levels of lubricating conditions and blank holding forces and two levels of punch and die radii. The simulations use a commercial FEM code in which Hollomon's power law and Hill's 1948 yield criterion are implemented. The deep drawing setup in the FEM code is modeled with a CAD tool following modeling requirements from the literature. Two different strain paths (150x150 mm and 200x200 mm) are simulated, and punch forces, thickness distributions, and dome heights are evaluated for all conditions; failure initiation and propagation are also observed. The results show how increasing the coefficient of friction and the blank holding force changes the punch force, thickness distribution, and dome height. The configurations are compared and the most favorable parameters are suggested. With this approach, formability can be predicted for different strain paths without experimentation.
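The material model named above rests on two standard closed-form relations. As a rough illustration (not the paper's calibration), here is a minimal Python sketch of Hollomon's hardening law and the planar-isotropic plane-stress form of Hill's 1948 criterion; the values of K, n, and the anisotropy ratio R are assumed, illustrative numbers for an AA 6061-type sheet rather than fitted constants from the study.

```python
import numpy as np

# Hollomon's power law: true flow stress vs. true plastic strain, sigma = K * eps_p**n.
# K (MPa) and n below are assumed illustrative values, not the paper's fitted constants.
K, n = 205.0, 0.20

def hollomon_flow_stress(eps_p):
    return K * np.power(eps_p, n)

# Hill's 1948 yield criterion, plane-stress form with normal anisotropy R
# (planar isotropy assumed). R = 0.7 is again only an illustrative value.
R = 0.7

def hill48_equivalent_stress(s_xx, s_yy, s_xy):
    return np.sqrt(s_xx**2 + s_yy**2
                   - (2.0 * R / (1.0 + R)) * s_xx * s_yy
                   + (2.0 * (1.0 + 2.0 * R) / (1.0 + R)) * s_xy**2)

# Yielding is predicted when the Hill equivalent stress reaches the current
# flow stress given by Hollomon's law.
eps_p = 0.05
print("flow stress  :", round(hollomon_flow_stress(eps_p), 1), "MPa")
print("Hill48 stress:", round(hill48_equivalent_stress(120.0, 60.0, 10.0), 1), "MPa")
```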
Readiness Assessment Towards Smart Manufacturing System for Tuna Processing Industry in Indonesia
NASA Astrophysics Data System (ADS)
Anggrahini, D.; Kurniati, N.; Karningsih, P. D.; Parenreng, S. M.; Syahroni, N.
2018-04-01
Marine product processing is one of the top priority clusters in Indonesia's national development. Tuna, a deep-ocean fish, has the highest production volume, which has increased significantly over the years. The Indonesian government encourages the tuna processing industry, which is mostly dominated by small to medium enterprises, to grow continuously. Manufacturers now face substantial challenges in adopting the modern systems and technologies that can bring significant improvement through the Internet of Things (IoT). A smart factory transforms the integrated manufacturing process into high-speed processing that responds to customer needs, with positive impacts such as increased productivity, reduced setup time, and shortened marketing and other support activities, making the process more flexible and efficient. To implement a smart manufacturing system, factories should know their readiness level, technology capability, and strategy appropriateness. This exploratory study aims to identify the relevant criteria and develop an assessment tool to measure readiness towards a smart factory.
ERIC Educational Resources Information Center
Zhou, Ji; Urhahne, Detlef
2017-01-01
Self-regulated learning (SRL) in the museum was explored by 2 investigations. The first one investigated 233 visitors on their goals and intended learning strategies by questionnaire before they visited the science museum. Results indicated visitors' learning goals can predict their intended deep-learning strategy. Moreover, visitors can be…
Stable architectures for deep neural networks
NASA Astrophysics Data System (ADS)
Haber, Eldad; Ruthotto, Lars
2018-01-01
Deep neural networks have become invaluable tools for supervised machine learning, e.g. classification of text or images. While often offering superior results over traditional techniques and successfully expressing complicated patterns in data, deep architectures are known to be challenging to design and train such that they generalize well to new data. Critical issues with deep architectures are numerical instabilities in derivative-based learning algorithms commonly called exploding or vanishing gradients. In this paper, we propose new forward propagation techniques inspired by systems of ordinary differential equations (ODE) that overcome this challenge and lead to well-posed learning problems for arbitrarily deep networks. The backbone of our approach is our interpretation of deep learning as a parameter estimation problem of nonlinear dynamical systems. Given this formulation, we analyze stability and well-posedness of deep learning and use this new understanding to develop new network architectures. We relate the exploding and vanishing gradient phenomenon to the stability of the discrete ODE and present several strategies for stabilizing deep learning for very deep networks. While our new architectures restrict the solution space, several numerical experiments show their competitiveness with state-of-the-art networks.
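To make the ODE viewpoint concrete, the following NumPy sketch (an illustration of the general idea, not the authors' exact architecture) treats a residual network as forward-Euler steps of dy/dt = tanh(K(t)y + b(t)) and uses an antisymmetric weight parameterization as one example of a stabilizing choice discussed in this line of work.

```python
import numpy as np

def forward_euler_net(y0, weights, biases, h=0.1):
    """Propagate features through a residual network read as forward-Euler steps
    of the ODE dy/dt = tanh(K(t) y + b(t)); one Euler step == one residual layer."""
    y = y0
    for K, b in zip(weights, biases):
        y = y + h * np.tanh(y @ K + b)
    return y

rng = np.random.default_rng(0)
depth, width = 20, 8
# Antisymmetric weights (K - K^T) keep the layer Jacobian's eigenvalues close to
# the imaginary axis, one way of damping exploding/vanishing gradients.
raw = [rng.normal(scale=0.5, size=(width, width)) for _ in range(depth)]
weights = [W - W.T for W in raw]
biases = [np.zeros(width) for _ in range(depth)]

features = forward_euler_net(rng.normal(size=(4, width)), weights, biases)
print(features.shape)  # (4, 8): a small batch after 20 layers, without blow-up
```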
Hong, Ming; Guo, Quan-Shu; Nie, Bi-Hong; Kang, Yi; Pei, Shun-Xiang; Jin, Jiang-Qun; Wang, Xiang-Fu
2011-11-01
This paper studied the population density, morphological characteristics, and biomass and its allocation of Cynodon dactylon at different altitudinal sections of the hydro-fluctuation belt in Three Gorges Reservoir area, based on located observations. At the three altitudinal sections, the population density of C. dactylon was in the order of shallow water section (165-170 m elevation) > non-flooded section (above 172 m elevation) > deep water section (145-150 m elevation), the root diameter and root length were in the order of deep water section > shallow water section > non-flooded section, the total biomass, root biomass, stem biomass, leaf biomass, and stem biomass allocation ratio were in the order of the shallow water section > non-flooded section > deep water section, and the root biomass allocation ratio, leaf biomass allocation ratio, and underground biomass/aboveground biomass were in the order of deep water section > shallow water section > non-flooded section. The unique adaption strategies of C. dactylon to the flooding-drying habitat change in the shallow water section were the accelerated elongation growth and the increased stem biomass allocation, those in the deep water section were the increased node number of primary and secondary branches, increased number of the branches, and increased leaf biomass allocation, whereas the common strategies in the shallow and deep water sections were the accelerated root growth and the increased tillering and underground biomass allocation for preparing nutrition and energy for the rapid growth in terrestrial environment.
NASA Astrophysics Data System (ADS)
Holschuh, Jodi Lynn
This study had two main purposes: to address measurement concerns about assessing students' epistemological beliefs and to explore the relationship between epistemological beliefs and deep and surface strategy use in an introductory biology classroom. The following research questions guided the study: (a) Are epistemological beliefs multidimensional? (b) Are the measures of epistemological beliefs correlated? (c) Are the measures of strategy use correlated? (d) Are epistemological beliefs correlated with deep and surface strategy use? (e) How much of the unique variance in Scholastic Aptitude Test (SAT) scores, grade point average (GPA), and course grade is accounted for by epistemological beliefs and strategy use? (f) To what extent does the content analysis of the open-ended questionnaire data support or refute the role of mature epistemological beliefs? and (g) To what extent does the content analysis of the open-ended questionnaire data support or refute the role of deep strategies? Participants (N = 518) were recruited from two sections of an introductory biology course. All participants completed five assessments including the Epistemological Questionnaire, the Epistemological Scenario, the Self-Regulated Learning Inventory, two strategy checklists, and an open-ended questionnaire. The factor analysis, which was used to answer the first question, indicated no clear loading of the hypothesized dimensions underlying epistemological beliefs as measured by the Epistemological Questionnaire. However, the factor analysis of the Epistemological Scenario indicated four factors underlying epistemological beliefs (i.e., certain knowledge, innate ability, quick learning, and simple knowledge). In addition, the correlation analyses, which were used to answer the second, third, and fourth questions, indicated a significant relationship between epistemological beliefs and strategy use. The multiple regression commonality analysis, which was used to answer the fifth question, indicated that epistemological beliefs and strategy use contributed a statistically significant amount of unique variance in SAT Verbal score, college GPA, and course grade. The findings indicate that students' epistemological beliefs and strategy use affect their academic performance. Educators need to develop instructional strategies to incorporate tasks that encourage mature epistemological beliefs into the classroom, especially when teaching complex science concepts.
NASA Astrophysics Data System (ADS)
Christina, M.; Laclau, J.; Nouvellon, Y.; Duursma, R. A.; Stape, J. L.; Lambais, G. R.; Le Maire, G.
2013-12-01
Little is known about the role of very deep roots to supply the water requirements of tropical forests. Clonal Eucalyptus plantations managed in short rotation on very deep Ferralsols are simple forest ecosystems (only 1 plant genotype growing on a relatively homogeneous soil) likely to provide an insight into tree water use strategies in tropical forests. Fine roots have been observed down to a depth of 6 m at age 1 year in Brazilian eucalypt plantations. However, the contribution of water stored in very deep soil layers to stand evapotranspiration over tree growth has been poorly quantified. An eco-physiological model, MAESPA, has been used to simulate half-hourly stand water balance over the first three years of growth in a clonal Eucalyptus grandis plantation in southern Brazil (Eucflux project, State of São Paulo). The water balance model in MAESPA is an equilibrium-type model between soil and leaf water potentials for individual trees aboveground, and at the stand scale belowground. The dynamics of the vertical fine root distribution have been taken into account empirically from linear interpolations between successive measurements. The simulations were compared to time series of soil water contents measured every meter down to 10m deep and to daily latent heat fluxes measured by eddy covariance. Simulations of volumetric soil water contents matched satisfactorily with measurements (RMSE = 0.01) over the three-year period. Good agreement was also observed between simulated and measured latent heat fluxes. In the rainy season, more than 75 % of tree transpiration was supplied by water withdrawn in the upper 1 m of soil, but water uptake progressed to deeper soil layers during dry periods, down to a depth of 6 m, 12 m and 15 m the first, second and third year after planting, respectively. During the second growing season, 15% of water was withdrawn below a depth of 6 m, and 5% below 10m. Most of the soil down to 12m deep was dried out the second year after planting and deep drainage was negligible after 2 years. As a consequence, during the third year after planting only 4% of water was taken up below 6m. However, during the dry season, this deep water still supplied 50% of water requirements. Our results show that deep fine roots of E. grandis play a major role in supplying tree water requirements during extended dry periods. Large amounts of water are stored in the whole soil profile after clear cutting and the fast exploration of deep soil layers by roots make it available for tree growth. After canopy closure, precipitation becomes the key limitation for the productivity of these plantations grown in deep sandy soils. Our results suggest that a territorial strategy leading to a fast exploration of very deep soil layers might provide a strong competitive advantage in regions prone to drought.
Validation of the FEA of a deep drawing process with additional force transmission
NASA Astrophysics Data System (ADS)
Behrens, B.-A.; Bouguecha, A.; Bonk, C.; Grbic, N.; Vucetic, M.
2017-10-01
Meeting automotive industry requirements such as lower CO2 emissions, which translates into reducing vehicle mass in the car body, chassis, and powertrain, demands continuous innovation and further development of existing production processes. In sheet metal forming, the process limits and component characteristics are defined by the process-specific loads. When the load limits are exceeded, the material fails; this can be avoided by an additional force transmission activated in the deep drawing process before the process limit is reached. This contribution deals with experimental investigations of a forming process with additional force transmission aimed at extending the process limits. Based on FEA, a tool system is designed and developed by IFUM, and the steel material HCT600 is analyzed numerically. In the experimental investigations, deep drawing processes with and without the additional force transmission are carried out and the produced rectangular cups are compared. Subsequently, the same deep drawing processes are investigated numerically, and the punch reaction force and displacement are estimated and compared with the experimental results; the material model is thus successfully validated at the process scale. For further quantitative verification of the FEA results, the experimentally determined geometry of the rectangular cup is measured optically with the ATOS system from GOM mbH and digitally compared using the external software Geomagic® Qualify™. The goal of this paper is to verify the transferability of the FEA model from a conventional deep drawing process to a deep drawing process with additional force transmission via a counter punch.
A review on the application of deep learning in system health management
NASA Astrophysics Data System (ADS)
Khan, Samir; Yairi, Takehisa
2018-07-01
Given the advancements in modern technological capabilities, having an integrated health management and diagnostic strategy becomes an important part of a system's operational life-cycle. This is because it can be used to detect anomalies, analyse failures and predict the future state based on up-to-date information. By utilising condition data and on-site feedback, data models can be trained using machine learning and statistical concepts. Once trained, the logic for data processing can be embedded on on-board controllers whilst enabling real-time health assessment and analysis. However, this integration inevitably faces several difficulties and challenges for the community, indicating the need for novel approaches to address this vexing issue. Deep learning has gained increasing attention due to its potential advantages with data classification and feature extraction problems. It is an evolving research area with diverse application domains, and hence its use for system health management applications must be investigated to determine whether it can increase overall system resilience or deliver cost benefits for maintenance, repair, and overhaul activities. This article presents a systematic review of artificial intelligence based system health management with an emphasis on recent trends of deep learning within the field. Various architectures and related theories are discussed to clarify its potential. Based on the reviewed work, deep learning demonstrates plausible benefits for fault diagnosis and prognostics. However, there are a number of limitations that hinder its widespread adoption and require further development. Attention is paid to overcoming these challenges, with future opportunities being enumerated.
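As a toy illustration of the reconstruction-based fault-detection idea surveyed in reviews of this kind (not a method taken from the article itself), the sketch below trains a deliberately small, single-hidden-layer autoencoder on synthetic "healthy" condition-monitoring vectors and flags samples whose reconstruction error exceeds a threshold; deeper architectures follow the same recipe.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "healthy" condition-monitoring vectors: 12 correlated channels driven
# by 3 latent operating factors (an illustrative stand-in for real sensor data).
d, latent, hidden = 12, 3, 4
Z = rng.normal(size=(500, latent))
X = Z @ rng.normal(size=(latent, d)) + 0.1 * rng.normal(size=(500, d))
X = (X - X.mean(axis=0)) / X.std(axis=0)          # standardize channels

# A small single-hidden-layer autoencoder trained by gradient descent on the MSE.
W1 = rng.normal(scale=0.1, size=(d, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(scale=0.1, size=(hidden, d)); b2 = np.zeros(d)
lr, n = 0.05, X.shape[0]
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)                      # encoder
    E = (H @ W2 + b2) - X                         # reconstruction error
    dW2 = H.T @ E / n; db2 = E.mean(axis=0)
    dZh = (E @ W2.T) * (1.0 - H ** 2)             # back-prop through tanh
    dW1 = X.T @ dZh / n; db1 = dZh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

def recon_error(x):
    h = np.tanh(x @ W1 + b1)
    return np.sum((h @ W2 + b2 - x) ** 2, axis=-1)

# Flag samples whose error exceeds a threshold set on healthy data; a simulated
# fault (one sensor channel drifting) should land above it.
threshold = np.percentile(recon_error(X), 99)
faulty = X[:5].copy(); faulty[:, 0] += 5.0
print(recon_error(faulty) > threshold)
```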
Haahr, Anita; Kirkevold, Marit; Hall, Elisabeth O C; Ostergaard, Karen
2010-10-01
Deep Brain Stimulation for Parkinson's disease is a promising treatment for patients who can no longer be treated satisfactorily with L-dopa. Deep Brain Stimulation is known to relieve motor symptoms of Parkinson's disease and improve quality of life. Focusing on how patients experience life when treated with Deep Brain Stimulation can provide essential information on the process patients go through when receiving a treatment that alters the body and changes the illness trajectory. The aim of this study was to explore and describe the experience of living with Parkinson's disease when treated with Deep Brain Stimulation. The study was designed as a longitudinal study and data were gathered through qualitative in-depth interviews three times during the first year of treatment. Nine patients participated in the study. They were included when they had accepted treatment with Deep Brain Stimulation for Parkinson's disease. Data collection and data analysis were inspired by the hermeneutic phenomenological methodology of Van Manen. The treatment had a major impact on the body. Participants experienced great bodily changes and went through a process of adjustment in three phases during the first year of treatment with Deep Brain Stimulation. These phases were: being liberated (a kind of miracle); changes as a challenge (decline or opportunity); and reconciliation (re-defining life with Parkinson's disease). The course of the process was unique for each participant, but a dominant finding was that difficulties during the adjustment of stimulation and medication affected the re-defining process. Patients go through a dramatic process of change following Deep Brain Stimulation. A changing body affects their entire lifeworld. Some adjust smoothly to changes while others are affected by loss of control, uncertainty and loss of everyday life as they knew it. These experiences affect the process of adjusting to life with Deep Brain Stimulation and re-define life with Parkinson's disease. It is of significant importance that health care professionals are aware of these dramatic changes in the patients' lives and offer support during the adjustment process following Deep Brain Stimulation. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
Yin, X-X; Zhang, Y; Cao, J; Wu, J-L; Hadjiloucas, S
2016-12-01
We provide a comprehensive account of recent advances in biomedical image analysis and classification from two complementary imaging modalities: terahertz (THz) pulse imaging and dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The work aims to highlight underlying commonalities in both data structures so that a common multi-channel data fusion framework can be developed. Signal pre-processing in both datasets is discussed briefly, taking into consideration advances in multi-resolution analysis and model-based fractional order calculus system identification. Developments in statistical signal processing using principal component and independent component analysis are also considered. These algorithms have been developed independently by the THz-pulse imaging and DCE-MRI communities, and there is scope to place them in a common multi-channel framework to provide better software standardization at the pre-processing de-noising stage. A comprehensive discussion of feature selection strategies is also provided and the importance of preserving textural information is highlighted. Feature extraction and classification methods taking into consideration recent advances in support vector machine (SVM) and extreme learning machine (ELM) classifiers and their complex extensions are presented. An outlook on Clifford algebra classifiers and deep learning techniques suitable to both types of datasets is also provided. The work points toward the direction of developing a new unified multi-channel signal processing framework for biomedical image analysis that will explore synergies from both sensing modalities for inferring disease proliferation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
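The feature-reduction and classification chain discussed here (principal components feeding an SVM) can be prototyped in a few lines with scikit-learn; the snippet below uses synthetic data as a stand-in for THz or DCE-MRI feature vectors, so the dimensions and parameters are purely illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for per-pixel time or frequency profiles with a binary tissue label.
X, y = make_classification(n_samples=300, n_features=120, n_informative=15,
                           random_state=0)

# Standardize, compress to a handful of principal components, then classify with an
# RBF-kernel SVM; cross-validation gives an accuracy estimate for the whole chain.
clf = make_pipeline(StandardScaler(),
                    PCA(n_components=10),
                    SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=5)
print("mean CV accuracy: %.3f" % scores.mean())
```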
Formation of metal and dielectric liners using a solution process for deep trench capacitors.
Ham, Yong-Hyun; Kim, Dong-Pyo; Baek, Kyu-Ha; Park, Kun-Sik; Kim, Moonkeun; Kwon, Kwang-Ho; Shin, Hong-Sik; Lee, Kijun; Do, Lee-Mi
2012-07-01
We demonstrated the feasibility of metal and dielectric liners formed by a solution process for deep trench capacitor applications. Deep Si trench vias with a size of 10.3 μm and a depth of 71 μm (aspect ratio about 7) were fabricated by the Bosch process in a deep reactive ion etch (DRIE) system. Nano-Ag ink and poly(4-vinylphenol) (PVPh) were then used to form the metal and dielectric liners, respectively, with thicknesses of about 144 and 830 nm. When the curing temperature of the Ag film increased from 120 to 150 °C, the sheet resistance decreased rapidly from 2.47 to 0.72 Ω/sq, and it decreased slightly further to 0.6 Ω/sq as the curing temperature was raised beyond 150 °C. The proposed solution-based liner formation method is a simple and cost-effective process for high-capacity deep trench capacitors.
Thermal quenching effect of an infrared deep level in Mg-doped p-type GaN films
NASA Astrophysics Data System (ADS)
Kim, Keunjoo; Chung, Sang Jo
2002-03-01
The thermal quenching of an infrared deep level at 1.2-1.5 eV has been investigated in Mg-doped p-type GaN films, using one- and two-step annealing processes and photocurrent measurements. The deep level appeared in the one-step annealing process at a relatively high temperature of 900 °C, but disappeared in the two-step annealing process with a low-temperature step and a subsequent high-temperature step. A persistent photocurrent remained in the sample containing the deep level, whereas it was absent in the sample without the deep level. This indicates that the deep level is a neutral hole center located above the quasi-Fermi level, estimated at EpF = 0.1-0.15 eV above the valence band for a hole carrier concentration of 2.0-2.5×10¹⁷ cm⁻³.
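The quoted quasi-Fermi level position follows directly from Boltzmann statistics for non-degenerate holes, p = Nv exp[-(EF - EV)/kT]. A quick numerical check in Python, using an assumed room-temperature effective valence-band density of states for GaN (Nv here is a textbook-order value, not taken from the paper):

```python
import numpy as np

kT = 0.02585   # eV at 300 K
Nv = 4.6e19    # cm^-3, assumed room-temperature value for GaN (not from the paper)

# Non-degenerate hole statistics: p = Nv * exp(-(EF - EV)/kT)
# => EF - EV = kT * ln(Nv / p)
for p in (2.0e17, 2.5e17):          # hole concentrations quoted in the abstract
    print(f"p = {p:.1e} cm^-3  ->  EF - EV = {kT * np.log(Nv / p):.3f} eV")
# Both come out near 0.13-0.14 eV, consistent with the 0.1-0.15 eV quoted above.
```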
Towards deep learning with segregated dendrites.
Guerguiev, Jordan; Lillicrap, Timothy P; Richards, Blake A
2017-12-05
Deep learning has led to significant advances in artificial intelligence, in part, by adopting strategies motivated by neurophysiology. However, it is unclear whether deep learning could occur in the real brain. Here, we show that a deep learning algorithm that utilizes multi-compartment neurons might help us to understand how the neocortex optimizes cost functions. Like neocortical pyramidal neurons, neurons in our model receive sensory information and higher-order feedback in electrotonically segregated compartments. Thanks to this segregation, neurons in different layers of the network can coordinate synaptic weight updates. As a result, the network learns to categorize images better than a single layer network. Furthermore, we show that our algorithm takes advantage of multilayer architectures to identify useful higher-order representations-the hallmark of deep learning. This work demonstrates that deep learning can be achieved using segregated dendritic compartments, which may help to explain the morphology of neocortical pyramidal neurons.
Introduction: From pathogenesis to therapy, deep endometriosis remains a source of controversy.
Donnez, Jacques
2017-12-01
Deep endometriosis remains a source of controversy. A number of theories may explain its pathogenesis and many arguments support the hypothesis that genetic or epigenetic changes are a prerequisite for development of lesions into deep endometriosis. Deep endometriosis is frequently responsible for pelvic pain, dysmenorrhea, and/or deep dyspareunia, but can also cause obstetrical complications. Diagnosis may be improved by high-quality imaging. Therapeutic approaches are a source of contention as well. In this issue's Views and Reviews, medical and surgical strategies are discussed, and it is emphasized that treatment should be designed according to a patient's symptoms and individual needs. It is also vital that referral centers have the knowledge and experience to treat deep endometriosis medically and/or surgically. The debate must continue because emerging trends in therapy need to be followed and investigated for optimal management. Copyright © 2017 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
Advances in deep-UV processing using cluster tools
NASA Astrophysics Data System (ADS)
Escher, Gary C.; Tepolt, Gary; Mohondro, Robert D.
1993-09-01
Deep-UV laser lithography has shown the capability of supporting the manufacture of multiple generations of integrated circuits (ICs) due to its wide process latitude and depth of focus (DOF) for 0.2 micrometers to 0.5 micrometers feature sizes. This capability has been attained through improvements in deep-UV wide field lens technology, excimer lasers, steppers and chemically amplified, positive deep-UV resists. Chemically amplified deep-UV resists are required for 248 nm lithography due to the poor absorption and sensitivity of conventional novolac resists. The acid-catalyzed processes of the new resists require control of the thermal history and environmental conditions of the lithographic process. Work is currently underway at several resist vendors to reduce the need for these controls, but practical manufacturing solutions exist today. One of these solutions is the integration of steppers and resist tracks into a `cluster tool' or `Lithocell' to ensure a consistent thermal profile for the resist process and reduce the time the resist is exposed to atmospheric contamination. The work here reports processing and system integration results with a Machine Technology, Inc (MTI) post-exposure bake (PEB) track interfaced with an advanced GCA XLS 7800 deep-UV stepper [31 mm diameter, variable NA (0.35 - 0.53) and variable sigma (0.3 - 0.74)].
Approaches to Learning and Kolb's Learning Styles of Undergraduates with Better Grades
NASA Astrophysics Data System (ADS)
Almeida, Patrícia; Teixeira-Dias, José Joaquim; Martinho, Mariana; Balasooriya, Chinthaka
The purpose of this study is to investigate whether the teaching, learning and assessment strategies conceived and implemented in a higher education chemistry course promote the development of conceptual understanding, as intended. Thus, our aim is to analyse the learning styles and the approaches to learning of chemistry undergraduates with better grades. The overall results show that the students with better grades possess the assimilator learning style, which is usually associated with the archetypal chemist. Moreover, the students with the highest grades revealed a conception of learning emphasising understanding. However, these students diverged both in their learning approaches and in their preferences for teaching strategies. The majority of students adopted a deep approach or a combination of a deep and a strategic approach, but half of them revealed their preference for teaching-centred strategies.
NASA Astrophysics Data System (ADS)
Mereminskiy, I. A.; Filippova, E. V.; Burenin, R. A.; Sazonov, S. Yu.; Pavlinsky, M. N.; Tkachenko, A. Yu.; Lapshov, I. Yu.; Shtykovskiy, A. E.; Krivonos, R. A.
2018-02-01
To choose the best strategy for conducting a deep extragalactic survey with the ART-XC X-ray telescope onboard the Spectrum-Röntgen-Gamma (SRG) observatory and to estimate the expected results, we have simulated the observations of a 1.1° × 1.1° field in the 5-11 and 8-24 keV energy bands. For this purpose, we have constructed a model of the active galactic nuclei (AGN) population that reflects the properties of the X-ray emission from such objects. The photons that "arrived" from these sources were passed through a numerical model of the telescope, while the resulting data were processed with the standard ART-XC data processing pipeline. We show that several hundred AGNs at redshifts up to z ≈ 3 will be detected in such a survey over 1.2 Ms of observations with the expected charged particle background levels. Among them there will be heavily obscured AGNs, which will allow a more accurate estimate of the fraction of such objects in the total population to be made. Source confusion is expected at fluxes below 2 × 10⁻¹⁴ erg s⁻¹ cm⁻² (5-11 keV). Since this value can exceed the source detection threshold in a deep survey at low particle background levels, it may turn out to be more interesting to conduct a survey of larger area (several square degrees) but smaller depth, obtaining a sample of approximately four hundred bright AGNs as a result.
Yu, Anthony; Prentice, Heather A; Burfeind, William E; Funahashi, Tadashi; Maletis, Gregory B
2018-03-01
Allograft tissue is frequently used in anterior cruciate ligament reconstruction (ACLR). It is often irradiated and/or chemically processed to decrease the risk of disease transmission, but some tissue is aseptically harvested without further processing. Irradiated and chemically processed allograft tissue appears to have a higher risk of revision, but whether this processing decreases the risk of infection is not clear. To determine the incidence of deep surgical site infection after ACLR with allograft in a large community-based sample and to evaluate the association of allograft processing and the risk of deep infection. Cohort study; Level of evidence, 3. The authors conducted a cohort study using the Kaiser Permanente Anterior Cruciate Ligament Reconstruction Registry. Primary isolated unilateral ACLR with allograft were identified from February 1, 2005 to September 30, 2015. Ninety-day postoperative deep infections were identified via an electronic screening algorithm and then validated through chart review. Logistic regression was used to evaluate the likelihood of 90-day postoperative deep infection per allograft processing method: processed (graft treated chemically and/or irradiated) or nonprocessed (graft not irradiated or chemically processed). Of 10,190 allograft cases, 8425 (82.7%) received a processed allograft, and 1765 (17.3%) received a nonprocessed allograft. There were 15 (0.15%) deep infections during the study period: 4 (26.7%) coagulase-negative Staphylococcus, 4 (26.7%) methicillin-sensitive Staphylococcus aureus, 1 (6.7%) Peptostreptococcus micros, and 6 (40.0%) with no growth. There was no difference in the likelihood for 90-day deep infection for processed versus nonprocessed allografts (odds ratio = 1.36, 95% CI = 0.31-6.04). The overall incidence of deep infection after ACLR with allograft tissue was very low (0.15%), suggesting that the methods currently employed by tissue banks to minimize the risk of infection are effective. In this cohort, no difference in the likelihood of infection between processed and nonprocessed allografts could be identified.
Pradel, Nathalie; Ji, Boyang; Gimenez, Grégory; Talla, Emmanuel; Lenoble, Patricia; Garel, Marc; Tamburini, Christian; Fourquet, Patrick; Lebrun, Régine; Bertin, Philippe; Denis, Yann; Pophillat, Matthieu; Barbe, Valérie; Ollivier, Bernard; Dolla, Alain
2013-01-01
Desulfovibrio piezophilus strain C1TLV30T is a piezophilic anaerobe that was isolated from wood falls in the Mediterranean deep-sea. D. piezophilus represents a unique model for studying the adaptation of sulfate-reducing bacteria to hydrostatic pressure. Here, we report the 3.6 Mbp genome sequence of this piezophilic bacterium. An analysis of the genome revealed the presence of seven genomic islands as well as gene clusters that are most likely linked to life at a high hydrostatic pressure. Comparative genomics and differential proteomics identified the transport of solutes and amino acids as well as amino acid metabolism as major cellular processes for the adaptation of this bacterium to hydrostatic pressure. In addition, the proteome profiles showed that the abundance of key enzymes that are involved in sulfate reduction was dependent on hydrostatic pressure. A comparative analysis of orthologs from the non-piezophilic marine bacterium D. salexigens and D. piezophilus identified aspartic acid, glutamic acid, lysine, asparagine, serine and tyrosine as the amino acids preferentially replaced by arginine, histidine, alanine and threonine in the piezophilic strain. This work reveals the adaptation strategies developed by a sulfate reducer to a deep-sea lifestyle. PMID:23383081
Estimation of effective temperatures in a quantum annealer: Towards deep learning applications
NASA Astrophysics Data System (ADS)
Realpe-Gómez, John; Benedetti, Marcello; Perdomo-Ortiz, Alejandro
Sampling is at the core of deep learning and more general machine learning applications; an increase in its efficiency would have a significant impact across several domains. Recently, quantum annealers have been proposed as a potential candidate to speed up these tasks, but several limitations still bar them from being used effectively. One of the main limitations, and the focus of this work, is that using the device's experimentally accessible temperature as a reference for sampling purposes leads to very poor correlation with the Boltzmann distribution it is programmed to sample from. Based on quantum dynamical arguments, one can expect that if the device indeed happens to be sampling from a Boltzmann-like distribution, it will correspond to one with an instance-dependent effective temperature. Unless this unknown temperature can be unveiled, it might not be possible to effectively use a quantum annealer for Boltzmann sampling processes. In this work, we propose a strategy to overcome this challenge with a simple effective-temperature estimation algorithm. We provide a systematic study assessing the impact of the effective temperatures in the quantum-assisted training of Boltzmann machines, which can serve as a building block for deep learning architectures. This work was supported by NASA Ames Research Center.
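To see why an instance-dependent effective temperature can be recovered from samples at all, note that for a Boltzmann distribution log p(s) is linear in the energy E(s) with slope -1/T_eff. The toy estimator below (illustrative only, not the authors' algorithm) regresses the log of empirical state frequencies against energy for a small, exhaustively enumerable Ising instance.

```python
import numpy as np

rng = np.random.default_rng(0)
n_spins = 8                                        # 2**8 = 256 states, easy to enumerate
J = np.triu(rng.normal(scale=0.5, size=(n_spins, n_spins)), 1)
h = rng.normal(scale=0.3, size=n_spins)

states = np.array([[1 if (i >> k) & 1 else -1 for k in range(n_spins)]
                   for i in range(2 ** n_spins)])
energies = -np.einsum('si,ij,sj->s', states, J, states) - states @ h

beta_true = 0.8                                    # "hidden" inverse temperature of the sampler
p = np.exp(-beta_true * energies)
p /= p.sum()

# Draw samples as an ideal Boltzmann sampler would, then recover beta from the
# slope of log(frequency) versus energy over the well-sampled states.
counts = np.bincount(rng.choice(2 ** n_spins, size=20000, p=p), minlength=2 ** n_spins)
seen = counts >= 5
slope, _ = np.polyfit(energies[seen], np.log(counts[seen]), 1)
print("true beta:", beta_true, "  estimated beta:", round(float(-slope), 3))
```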
Miller, Karen J; Gunasekera, Rasanthi M
2017-04-10
Ecological processes in the deep sea are poorly understood due to the logistical constraints of sampling thousands of metres below the ocean's surface and remote from most land masses. Under such circumstances, genetic data provides unparalleled insight into biological and ecological relationships. We use microsatellite DNA to compare the population structure, reproductive mode and dispersal capacity in two deep sea corals from seamounts in the Southern Ocean. The solitary coral Desmophyllum dianthus has widespread dispersal consistent with its global distribution and resilience to disturbance. In contrast, for the matrix-forming colonial coral Solenosmilia variabilis asexual reproduction is important and the dispersal of sexually produced larvae is negligible, resulting in isolated populations. Interestingly, despite the recognised impacts of fishing on seamount communities, genetic diversity on fished and unfished seamounts was similar for both species, suggesting that evolutionary resilience remains despite reductions in biomass. Our results provide empirical evidence that a group of seamounts can function either as isolated islands or stepping stones for dispersal for different taxa. Furthermore different strategies will be required to protect the two sympatric corals and consequently the recently declared marine reserves in this region may function as a network for D. dianthus, but not for S. variabilis.
Merk, Bruno; Litskevich, Dzianis
2015-01-01
The German government has decided on a nuclear phase-out, but a strategy for the management of the highly radioactive waste has not yet been defined. Partitioning and Transmutation (P&T) could be considered a technological option for managing highly radioactive waste, and a broad study has therefore been conducted. Within the study group, objectives for P&T and the boundary conditions of the phase-out were discussed. The fulfillment of the given objectives is analyzed from a neutronics point of view using simulations of a molten salt reactor with a fast neutron spectrum. It is shown that the efficient transmutation of all existing transuranium isotopes would be possible, from a neutronics point of view, in a time frame of about 60 years. For this task, three reactors of a largely new technology would have to be developed, and a twofold life cycle consisting of a transmuter operation phase and a deep burn phase would be required. A basic insight into optimizing the duration of the deep burn phase is given. Furthermore, a detailed balance of the different isotopic inventories is given to allow a deeper understanding of the processes during transmutation in the molten salt fast reactor. The effect of modeling and simulation is investigated based on three different modeling strategies and two different code versions. PMID:26717509
ERIC Educational Resources Information Center
Ahmed, Ambreen; Ahmed, Nawaz
2017-01-01
A survey was conducted to study the preferred learning strategies; that is, surface learning or deep learning of undergraduate and graduate male and female students and the impact of the preferred strategy on their academic performance. Both learning strategies help university students to get good scores in their examinations to meet the demands…
ERIC Educational Resources Information Center
Qingquan, Ni; Chatupote, Monta; Teo, Adisa
2008-01-01
This article focused on the investigation of the differences in the frequency of language learning strategy use by successful and unsuccessful first-year students of a Chinese university. The study found that successful students used a wider range of learning strategies for EFL learning significantly more frequently than unsuccessful students. It…
Optimal social-networking strategy is a function of socioeconomic conditions.
Oishi, Shigehiro; Kesebir, Selin
2012-12-01
In the two studies reported here, we examined the relation among residential mobility, economic conditions, and optimal social-networking strategy. In study 1, a computer simulation showed that regardless of economic conditions, having a broad social network with weak friendship ties is advantageous when friends are likely to move away. By contrast, having a small social network with deep friendship ties is advantageous when the economy is unstable but friends are not likely to move away. In study 2, we examined the validity of the computer simulation using a sample of American adults. Results were consistent with the simulation: American adults living in a zip code where people are residentially stable but economically challenged were happier if they had a narrow but deep social network, whereas in other socioeconomic conditions, people were generally happier if they had a broad but shallow networking strategy. Together, our studies demonstrate that the optimal social-networking strategy varies as a function of socioeconomic conditions.
Colangelo, Annette; Buchanan, Lori
2006-12-01
The failure of inhibition hypothesis posits a theoretical distinction between implicit and explicit access in deep dyslexia. Specifically, the effects of failure of inhibition are assumed only in conditions that have an explicit selection requirement in the context of production (i.e., aloud reading). In contrast, the failure of inhibition hypothesis proposes that implicit processing and explicit access to semantic information without production demands are intact in deep dyslexia. Evidence for intact implicit and explicit access requires that performance in deep dyslexia parallels that observed in neurologically intact participants on tasks based on implicit and explicit processes. In other words, deep dyslexics should produce normal effects in conditions with implicit task demands (i.e., lexical decision) and on tasks based on explicit access without production (i.e., forced choice semantic decisions) because failure of inhibition does not impact the availability of lexical information, only explicit retrieval in the context of production. This research examined the distinction between implicit and explicit processes in deep dyslexia using semantic blocking in lexical decision and forced choice semantic decisions as a test for the failure of inhibition hypothesis. The results of the semantic blocking paradigm support the distinction between implicit and explicit processing and provide evidence for failure of inhibition as an explanation for semantic errors in deep dyslexia.
Parallel Distributed Processing Theory in the Age of Deep Networks.
Bowers, Jeffrey S
2017-12-01
Parallel distributed processing (PDP) models in psychology are the precursors of deep networks used in computer science. However, only PDP models are associated with two core psychological claims, namely that all knowledge is coded in a distributed format and cognition is mediated by non-symbolic computations. These claims have long been debated in cognitive science, and recent work with deep networks speaks to this debate. Specifically, single-unit recordings show that deep networks learn units that respond selectively to meaningful categories, and researchers are finding that deep networks need to be supplemented with symbolic systems to perform some tasks. Given the close links between PDP and deep networks, it is surprising that research with deep networks is challenging PDP theory. Copyright © 2017. Published by Elsevier Ltd.
Self-imagining enhances recognition memory in memory-impaired individuals with neurological damage.
Grilli, Matthew D; Glisky, Elizabeth L
2010-11-01
The ability to imagine an elaborative event from a personal perspective relies on several cognitive processes that may potentially enhance subsequent memory for the event, including visual imagery, semantic elaboration, emotional processing, and self-referential processing. In an effort to find a novel strategy for enhancing memory in memory-impaired individuals with neurological damage, we investigated the mnemonic benefit of a method we refer to as self-imagining-the imagining of an event from a realistic, personal perspective. Fourteen individuals with neurologically based memory deficits and 14 healthy control participants intentionally encoded neutral and emotional sentences under three instructions: structural-baseline processing, semantic processing, and self-imagining. Findings revealed a robust "self-imagination effect (SIE)," as self-imagination enhanced recognition memory relative to deep semantic elaboration in both memory-impaired individuals, F(1, 13) = 32.11, p < .001, η2 = .71; and healthy controls, F(1, 13) = 5.57, p < .05, η2 = .30. In addition, results indicated that mnemonic benefits of self-imagination were not limited by severity of the memory disorder nor were they related to self-reported vividness of visual imagery, semantic processing, or emotional content of the materials. The findings suggest that the SIE may depend on unique mnemonic mechanisms possibly related to self-referential processing and that imagining an event from a personal perspective makes that event particularly memorable even for those individuals with severe memory deficits. Self-imagining may thus provide an effective rehabilitation strategy for individuals with memory impairment.
A universal deep learning approach for modeling the flow of patients under different severities.
Jiang, Shancheng; Chin, Kwai-Sang; Tsui, Kwok L
2018-02-01
The Accident and Emergency Department (A&ED) is the frontline for providing emergency care in hospitals. Unfortunately, A&ED resources have failed to keep up with continuously increasing demand in recent years, which leads to overcrowding in A&ED. Knowing the fluctuation of patient arrival volume in advance is a significant premise for relieving this pressure. Based on this motivation, the objective of this study is to explore an integrated framework with high accuracy for predicting A&ED patient flow under different triage levels, by combining a novel feature selection process with deep neural networks. Administrative data are collected from an actual A&ED and categorized into five groups based on triage level. A genetic algorithm (GA)-based feature selection algorithm is improved and implemented as a pre-processing step for this time-series prediction problem, in order to explore key features affecting patient flow. In our improved GA, a fitness-based crossover is proposed to maintain the joint information of multiple features during the iterative process, instead of the traditional point-based crossover. Deep neural networks (DNNs) are employed as the prediction model to exploit their universal adaptability and high flexibility. In the model-training process, the learning algorithm is configured around a parallel stochastic gradient descent algorithm. Two effective regularization strategies are integrated in one DNN framework to avoid overfitting. All introduced hyper-parameters are optimized efficiently by grid search in one pass. As for feature selection, our improved GA-based feature selection algorithm outperformed a typical GA and four state-of-the-art feature selection algorithms (mRMR, SAFS, VIFR, and CFR). As for the prediction accuracy of the proposed integrated framework, compared with other frequently used statistical models (GLM, seasonal-ARIMA, ARIMAX, and ANN) and modern machine learning models (SVM-RBF, SVM-linear, RF, and R-LASSO), the proposed integrated "DNN-I-GA" framework achieves higher prediction accuracy on both MAPE and RMSE metrics in pairwise comparisons. The contribution of our study is two-fold. Theoretically, the traditional GA-based feature selection process is improved to have fewer hyper-parameters and higher efficiency, and the joint information of multiple features is maintained by the fitness-based crossover operator. The universal property of DNNs is further enhanced by merging different regularization strategies. Practically, features selected by our improved GA can be used to uncover an underlying relationship between patient flows and input features. Predicted values are significant indicators of patients' demand and can be used by A&ED managers for resource planning and allocation. The high accuracy achieved by the present framework in different cases enhances the reliability of downstream decision making. Copyright © 2017 Elsevier B.V. All rights reserved.
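As an illustration of the feature-selection idea described above, the following minimal sketch shows a GA over binary feature masks in which each gene is inherited from the fitter parent with higher probability, so that jointly informative features tend to be passed on together. The toy regression dataset, the Ridge model used as the fitness evaluator, and the softmax-weighted reading of "fitness-based" crossover are all assumptions for illustration, not the authors' implementation.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))                                    # placeholder features
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=300)        # only 5 features matter

def fitness(mask):
    # Cross-validated score of a simple model restricted to the selected features.
    if mask.sum() == 0:
        return -np.inf
    return cross_val_score(Ridge(), X[:, mask.astype(bool)], y, cv=3).mean()

def crossover(parent_a, fit_a, parent_b, fit_b):
    # Take each gene from the fitter parent with higher probability,
    # so informative feature subsets are inherited jointly.
    p_a = np.exp(fit_a) / (np.exp(fit_a) + np.exp(fit_b))
    take_a = rng.random(parent_a.size) < p_a
    return np.where(take_a, parent_a, parent_b)

pop = rng.integers(0, 2, size=(30, X.shape[1]))
for generation in range(20):
    fits = np.array([fitness(m) for m in pop])
    order = np.argsort(fits)[::-1]
    elite = pop[order[:10]]                                       # keep the best subsets
    children = []
    while len(children) < len(pop) - len(elite):
        i, j = rng.choice(len(elite), size=2, replace=False)
        child = crossover(elite[i], fits[order[i]], elite[j], fits[order[j]])
        flip = rng.random(child.size) < 0.05                      # mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))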
ERIC Educational Resources Information Center
Lingvay, Mónika; Timofte, Roxana S.; Ciascai, Liliana; Predescu, Constantin
2015-01-01
Development of pupils' deep learning approach is an important goal of education nowadays, considering that a deep learning approach is mediating conceptual understanding and transfer. Different performance at PISA tests of Romanian and Hungarian pupils cause us to commence a study for the analysis of learning approaches employed by these pupils.…
Deep-learning networks and the functional architecture of executive control.
Cooper, Richard P
2017-01-01
Lake et al. underrate both the promise and the limitations of contemporary deep learning techniques. The promise lies in combining those techniques with broad multisensory training as experienced by infants and children. The limitations lie in the need for such systems to possess functional subsystems that generate, monitor, and switch goals and strategies in the absence of human intervention.
Initial public perceptions of deep geological and oceanic disposal of carbon dioxide.
Palmgren, Claire R; Morgan, M Granger; Bruine de Bruin, Wändi; Keith, David W
2004-12-15
Two studies were conducted to gauge likely public perceptions of proposals to avoid releasing carbon dioxide from power plants to the atmosphere by injecting it into deep geological formations or the deep ocean. Following a modified version of the mental model interview method, Study 1 involved face-to-face interviews with 18 nontechnical respondents. Respondents shared their beliefs after receiving basic information about the technologies and again after getting specific details. Many interviewees wanted to frame the issue in the broader context of alternative strategies for carbon management, but public understanding of mitigation strategies is limited. The second study, administered to a sample of 126 individuals, involved a closed-form survey that measured the prevalence of general beliefs revealed in study 1 and also assessed the respondent's views of these technologies. Study results suggest that the public may develop misgivings about deep injection of carbon dioxide because it can be seen as temporizing and perhaps creating future problems. Ocean injection was seen as more problematic than geological injection. An approach to public communication and regulation that is open and respectful of public concerns is likely to be a prerequisite to the successful adoption of this technology.
Deep Motif Dashboard: Visualizing and Understanding Genomic Sequences Using Deep Neural Networks.
Lanchantin, Jack; Singh, Ritambhara; Wang, Beilun; Qi, Yanjun
2017-01-01
Deep neural network (DNN) models have recently obtained state-of-the-art prediction accuracy for the transcription factor binding (TFBS) site classification task. However, it remains unclear how these approaches identify meaningful DNA sequence signals and give insights as to why TFs bind to certain locations. In this paper, we propose a toolkit called the Deep Motif Dashboard (DeMo Dashboard) which provides a suite of visualization strategies to extract motifs, or sequence patterns from deep neural network models for TFBS classification. We demonstrate how to visualize and understand three important DNN models: convolutional, recurrent, and convolutional-recurrent networks. Our first visualization method is finding a test sequence's saliency map which uses first-order derivatives to describe the importance of each nucleotide in making the final prediction. Second, considering recurrent models make predictions in a temporal manner (from one end of a TFBS sequence to the other), we introduce temporal output scores, indicating the prediction score of a model over time for a sequential input. Lastly, a class-specific visualization strategy finds the optimal input sequence for a given TFBS positive class via stochastic gradient optimization. Our experimental results indicate that a convolutional-recurrent architecture performs the best among the three architectures. The visualization techniques indicate that CNN-RNN makes predictions by modeling both motifs as well as dependencies among them.
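The first visualization described above, the saliency map, is easy to illustrate generically. The sketch below uses a toy convolutional classifier over one-hot DNA with randomly initialized weights, assumed purely for illustration and unrelated to the Dashboard's actual models; it computes the gradient of the predicted binding score with respect to the input and reads off a per-position importance score.

import torch
import torch.nn as nn

class ToyTFBSNet(nn.Module):
    def __init__(self, seq_len=100):
        super().__init__()
        self.conv = nn.Conv1d(4, 16, kernel_size=9, padding=4)
        self.head = nn.Linear(16 * seq_len, 1)

    def forward(self, x):                      # x: (batch, 4, seq_len) one-hot DNA
        h = torch.relu(self.conv(x))
        return self.head(h.flatten(1)).squeeze(-1)   # binding score

model = ToyTFBSNet()
seq = torch.zeros(1, 4, 100)
seq[0, torch.randint(0, 4, (100,)), torch.arange(100)] = 1.0   # random one-hot sequence
seq.requires_grad_(True)

score = model(seq).sum()
score.backward()                               # d(score)/d(input), first-order derivatives
# Per-position importance: gradient magnitude at the observed base only.
saliency = (seq.grad * seq).sum(dim=1).abs().squeeze(0)
print("most influential positions:", saliency.topk(5).indices.tolist())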
NASA Astrophysics Data System (ADS)
Humborg, C.
2017-12-01
The Baltic Sea is especially susceptible to multiple human impacts due to its estuarine mixing patterns and long water residence times. Temporally and spatially, it is one of the best investigated marginal seas worldwide, allowing for a deep knowledge of the natural and human processes forming this unique brackish ecosystem. In this presentation, we briefly summarize the physical, biogeochemical and ecological settings of the Baltic Sea and address the major human drivers and pressures threatening its ecosystem. We then summarize the scientific and political efforts that led to the formulation of the Baltic Sea Action Plan, a milestone for eutrophication management and European environmental governance. Finally, we summarize the efforts and societal pitfalls on the way towards Ecosystem Based Fisheries Management, strategies to decrease loads of environmental pollutants, and the management of marine biodiversity/habitat issues in the unique Baltic Sea context.
The effect of some heat treatment parameters on the dimensional stability of AISI D2
NASA Astrophysics Data System (ADS)
Surberg, Cord Henrik; Stratton, Paul; Lingenhöle, Klaus
2008-01-01
The tool steel AISI D2 is usually processed by vacuum hardening followed by multiple tempering cycles. It has been suggested that a deep cold treatment between the hardening and tempering processes could reduce processing time and improve the final properties and dimensional stability. In this study, hardened blocks were subjected to various combinations of single and multiple tempering steps (520 and 540 °C) and deep cold treatments (-90, -120 and -150 °C). The greatest dimensional stability was achieved by deep cold treatments at the lowest temperature used and was independent of the deep cold treatment time.
Deep subsurface microbial processes
Lovley, D.R.; Chapelle, F.H.
1995-01-01
Information on the microbiology of the deep subsurface is necessary in order to understand the factors controlling the rate and extent of the microbially catalyzed redox reactions that influence the geophysical properties of these environments. Furthermore, there is an increasing threat that deep aquifers, an important drinking water resource, may be contaminated by man's activities, and there is a need to predict the extent to which microbial activity may remediate such contamination. Metabolically active microorganisms can be recovered from a diversity of deep subsurface environments. The available evidence suggests that these microorganisms are responsible for catalyzing the oxidation of organic matter coupled to a variety of electron acceptors just as microorganisms do in surface sediments, but at much slower rates. The technical difficulties in aseptically sampling deep subsurface sediments and the fact that microbial processes in laboratory incubations of deep subsurface material often do not mimic in situ processes frequently necessitate that microbial activity in the deep subsurface be inferred through nonmicrobiological analyses of ground water. These approaches include measurements of dissolved H2, which can predict the predominant microbially catalyzed redox reactions in aquifers, as well as geochemical and groundwater flow modeling, which can be used to estimate the rates of microbial processes. Microorganisms recovered from the deep subsurface have the potential to affect the fate of toxic organics and inorganic contaminants in groundwater. Microbial activity also greatly influences the chemistry of many pristine groundwaters and contributes to such phenomena as porosity development in carbonate aquifers, accumulation of undesirably high concentrations of dissolved iron, and production of methane and hydrogen sulfide. Although the last decade has seen a dramatic increase in interest in deep subsurface microbiology, in comparison with the study of other habitats, the study of deep subsurface microbiology is still in its infancy.
Schott, Björn H; Wüstenberg, Torsten; Wimber, Maria; Fenker, Daniela B; Zierhut, Kathrin C; Seidenbecher, Constanze I; Heinze, Hans-Jochen; Walter, Henrik; Düzel, Emrah; Richardson-Klavehn, Alan
2013-02-01
New episodic memory traces represent a record of the ongoing neocortical processing engaged during memory formation (encoding). Thus, during encoding, deep (semantic) processing typically establishes more distinctive and retrievable memory traces than does shallow (perceptual) processing, as assessed by later episodic memory tests. By contrast, the hippocampus appears to play a processing-independent role in encoding, because hippocampal lesions impair encoding regardless of level of processing. Here, we clarified the neural relationship between processing and encoding by examining hippocampal-cortical connectivity during deep and shallow encoding. Participants studied words during functional magnetic resonance imaging and freely recalled these words after distraction. Deep study processing led to better recall than shallow study processing. For both levels of processing, successful encoding elicited activations of bilateral hippocampus and left prefrontal cortex, and increased functional connectivity between left hippocampus and bilateral medial prefrontal, cingulate and extrastriate cortices. Successful encoding during deep processing was additionally associated with increased functional connectivity between left hippocampus and bilateral ventrolateral prefrontal cortex and right temporoparietal junction. In the shallow encoding condition, on the other hand, pronounced functional connectivity increases were observed between the right hippocampus and the frontoparietal attention network activated during shallow study processing. Our results further specify how the hippocampus coordinates recording of ongoing neocortical activity into long-term memory, and begin to provide a neural explanation for the typical advantage of deep over shallow study processing for later episodic memory. Copyright © 2011 Wiley Periodicals, Inc.
Scaffolding reflective journal writing - negotiating power, play and position.
Harris, M
2008-04-01
A three-year qualitative study based on an action-research design, framed within the critical genre and using a multi-method approach, was used to establish how a model of critical reflective practice [Van Aswegen, E.J., Brink, H.I., Steyn, P.J., 2000. A model for facilitation of critical reflective practice: Part I- Introductory discussion and explanation of the phases followed to construct the model. Part ll - Conceptual analysis within the context of constructing the model. Part III - Description of the model. Curationis 23 (4), 117-135.] could be implemented. Reflective journals were introduced as one of the educational strategies within the model to support and sustain 'deep' transformatory learning. A component of this larger study focused on how scaffolding deep learning through reflective writing is enhanced by supportive structures. These include critiquing (feedback), a mutually developed self-evaluation strategy, as well as an awareness of and sensitivity to the need for student/writer-responder negotiation. Three student groups of part-time post-basic, practicing South African nurses engaged in reflective writing over the period of an academic year. This article is based on their perceptions, mid-way through their writing, of these strategies. It reflects the story of assumptions made by educators, and challenges for change. Students find reflective writing difficult, and although they are willing to accept its value and engage in the process, they require a regular, specific and sensitive critical response from their writer-responder and follow-up supportive contact. Self-evaluation for the purposes of 'owning' their own ideas is difficult, and requires constant support and validation. Transformatory learning comes at a cost, and a revisiting of the balance of power between student and educator is in order.
NASA Technical Reports Server (NTRS)
Ambur, Manjula Y.; Yagle, Jeremy J.; Reith, William; McLarney, Edward
2016-01-01
In 2014, a team of researchers, engineers and information technology specialists at NASA Langley Research Center developed a Big Data Analytics and Machine Intelligence Strategy and Roadmap as part of Langley's Comprehensive Digital Transformation Initiative, with the goal of identifying the goals, objectives, initiatives, and recommendations needed to develop near-, mid- and long-term capabilities for data analytics and machine intelligence in aerospace domains. Since that time, significant progress has been made in developing pilots and projects in several research, engineering, and scientific domains by following the original strategy of collaboration between mission support organizations, mission organizations, and external partners from universities and industry. This report summarizes the work to date in Data Intensive Scientific Discovery, Deep Content Analytics, and Deep Q&A projects, as well as the progress made in collaboration, outreach, and education. Recommendations for continuing this success into future phases of the initiative are also made.
The Framework for Mental Health Services in Scotland--a progress report one year on.
Loudon, J B; Samuel, R
1999-01-01
Progress in implementing the Framework for Mental Health Services in Scotland since its launch in September 1997 is described. Reasons for action being behind the expected timetable are discussed. The success of the joint strategies, due for completion by the end of December 1998, hinges on overcoming the deep systemic factors which militate against change. The Framework as a template has the potential to be a powerful change agent to counter these. The development of better services requires a balance of "top down" and "bottom up" approaches, and front line staff must be actively engaged in the process.
Linnemann, Birgit; Lindhoff-Last, Edelgard
2012-09-01
Adequate vascular access is important for the treatment of patients with cancer and complex illnesses in the intensive, perioperative or palliative care setting. Deep vein thrombosis and thrombotic occlusion are the most common complications attributed to central venous catheters in short-term and, especially, in long-term use. In this review we focus on the risk factors, management and prevention strategies for catheter-related thrombosis and occlusion. Due to the lack of randomised controlled trials, there is still controversy about the optimal treatment of catheter-related thrombotic complications, and therapy has largely been adapted from the evidence concerning lower-extremity deep vein thrombosis. Given the increasing use of central venous catheters in patients who require long-term intravenous therapy, the problem of upper-extremity deep venous thrombosis can be expected to increase in the future. We provide data for establishing a more uniform strategy for preventing, diagnosing and treating catheter-related thrombotic complications.
Problem Based Learning in Science
ERIC Educational Resources Information Center
Pepper, Coral
2009-01-01
Problem based learning (PBL) is a recognised teaching and learning strategy used to engage students in deep rather than surface learning. It is also viewed as a successful strategy to align university courses with the real life professional work students are expected to undertake on graduation (Biggs, 2003). Problem based learning is practised…
ERIC Educational Resources Information Center
Majeski, Robin; Stover, Merrily
2007-01-01
Online learning has enjoyed increasing popularity in gerontology. This paper presents instructional strategies grounded in Fink's (2003) theory of significant learning designed for the completely asynchronous online gerontology classroom. It links these components with the development of mastery learning goals and provides specific guidelines for…
What-Where-When Memory and Encoding Strategies in Healthy Aging
ERIC Educational Resources Information Center
Cheke, Lucy G.
2016-01-01
Older adults exhibit disproportionate impairments in memory for item-associations. These impairments may stem from an inability to self-initiate deep encoding strategies. The present study investigates this using the "treasure-hunt task"; a what-where-when style episodic memory test that requires individuals to "hide" items…
Reproducing American Sign Language sentences: cognitive scaffolding in working memory
Supalla, Ted; Hauser, Peter C.; Bavelier, Daphne
2014-01-01
The American Sign Language Sentence Reproduction Test (ASL-SRT) requires the precise reproduction of a series of ASL sentences increasing in complexity and length. Error analyses of such tasks provide insight into working memory and scaffolding processes. Data were collected from three groups expected to differ in fluency: deaf children, deaf adults and hearing adults, all users of ASL. Quantitative (correct/incorrect recall) and qualitative error analyses were performed. Percent correct on the reproduction task supports its sensitivity to fluency, as test performance clearly differed across the three groups studied. A linguistic analysis of errors further documented differing strategies and biases across groups. Subjects' recall reflected the affordances and constraints of deep linguistic representations to differing degrees, with subjects resorting to alternate processing strategies when they failed to recall a sentence correctly. A qualitative error analysis allows us to capture generalizations about the relationship between error patterns and the cognitive scaffolding that governs the sentence reproduction process. Highly fluent signers and less-fluent signers share common chokepoints on particular words in sentences. However, they diverge in heuristic strategy. Fluent signers, when they make an error, tend to preserve semantic details while altering morpho-syntactic domains. They produce syntactically correct sentences with equivalent meaning to the to-be-reproduced one, but these are not verbatim reproductions of the original sentence. In contrast, less-fluent signers tend to use a more linear strategy, preserving lexical status and word ordering while omitting local inflections, and occasionally resorting to visuo-motoric imitation. Thus, whereas fluent signers readily use top-down scaffolding in their working memory, less-fluent signers fail to do so. Implications for current models of working memory across spoken and signed modalities are considered. PMID:25152744
Abel, Gillian M
2011-04-01
This paper uses Arlie Hochschild's (1983) concept of emotion management and "surface" and "deep acting" to explore how sex workers separate and distance themselves from their public role. Experiences of stigmatisation prevail among sex workers and how stigma is resisted or managed has an impact on their health. In-depth interviews were carried out between August 2006 and April 2007 with 58 sex workers in five cities in New Zealand following decriminalisation of the sex industry. Most participants drew on ideas of professionalism in sustaining a psychological distance between their private and public lives. They utilised "deep acting", transmuting private experiences for use in the work environment, to accredit themselves as professional in their business practices. They also constructed different meanings for sex between public and private relationships with the condom providing an important symbol in separating the two. A few (mostly female street-based) participants were less adept at "deep acting" and relied on drugs to maintain a separation of roles. This paper argues that in an occupation which is highly stigmatised and in which depersonalisation as an aspect of burn-out has been reported as a common occurrence, the ability to draw on strategies which require "deep acting" provides a healthy estrangement between self and role and can be seen as protective. The separation of self from work identity is not damaging as many radical feminists would claim, but an effective strategy to manage emotions. Hochschild, A. (1983). The managed heart: Commercialization of human feeling. Berkeley: University of California Press. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Evans, Laura J.; Beheim, Glenn M.
2006-01-01
High aspect ratio silicon carbide (SiC) microstructures are needed for microengines and other harsh environment micro-electro-mechanical systems (MEMS). Previously, deep reactive ion etching (DRIE) of low aspect ratio (AR ≤ 1) deep (>100 μm) trenches in SiC has been reported. However, existing DRIE processes for SiC are not well-suited for definition of high aspect ratio features because such simple etch-only processes provide insufficient control over sidewall roughness and slope. Therefore, we have investigated the use of a time-multiplexed etch-passivate (TMEP) process, which alternates etching with polymer passivation of the etch sidewalls. An optimized TMEP process was used to etch high aspect ratio (AR > 5) deep (<100 μm) trenches in 6H-SiC. Power MEMS structures (micro turbine blades) in 6H-SiC were also fabricated.
Exponential decline of deep-sea ecosystem functioning linked to benthic biodiversity loss.
Danovaro, Roberto; Gambi, Cristina; Dell'Anno, Antonio; Corinaldesi, Cinzia; Fraschetti, Simonetta; Vanreusel, Ann; Vincx, Magda; Gooday, Andrew J
2008-01-08
Recent investigations suggest that biodiversity loss might impair the functioning and sustainability of ecosystems. Although deep-sea ecosystems are the most extensive on Earth, represent the largest reservoir of biomass, and host a large proportion of undiscovered biodiversity, the data needed to evaluate the consequences of biodiversity loss on the ocean floor are completely lacking. Here, we present a global-scale study based on 116 deep-sea sites that relates benthic biodiversity to several independent indicators of ecosystem functioning and efficiency. We show that deep-sea ecosystem functioning is exponentially related to deep-sea biodiversity and that ecosystem efficiency is also exponentially linked to functional biodiversity. These results suggest that a higher biodiversity supports higher rates of ecosystem processes and an increased efficiency with which these processes are performed. The exponential relationships presented here, being consistent across a wide range of deep-sea ecosystems, suggest that mutually positive functional interactions (ecological facilitation) can be common in the largest biome of our biosphere. Our results suggest that a biodiversity loss in deep-sea ecosystems might be associated with exponential reductions of their functions. Because the deep sea plays a key role in ecological and biogeochemical processes at a global scale, this study provides scientific evidence that the conservation of deep-sea biodiversity is a priority for a sustainable functioning of the world's oceans.
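The exponential relationships reported here are of the form commonly fitted as y = a·exp(b·x). A hedged sketch of such a fit follows; the data are synthetic placeholders rather than the study's 116-site dataset, and the variable names and parameter values are illustrative assumptions.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
richness = rng.uniform(5, 60, size=116)                        # e.g., benthic species richness
functioning = 0.8 * np.exp(0.04 * richness) * rng.lognormal(0, 0.2, size=116)

def exponential(x, a, b):
    return a * np.exp(b * x)

(a_hat, b_hat), _ = curve_fit(exponential, richness, functioning, p0=(1.0, 0.01))
print(f"fitted: functioning ≈ {a_hat:.2f} * exp({b_hat:.3f} * richness)")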
Computer Aided Process Planning for Non-Axisymmetric Deep Drawing Products
NASA Astrophysics Data System (ADS)
Park, Dong Hwan; Yarlagadda, Prasad K. D. V.
2004-06-01
In general, deep drawing products have various cross-section shapes, such as cylindrical, rectangular and non-axisymmetric shapes. The application of surface area calculation to the non-axisymmetric deep drawing process has not been published yet. In this research, a surface area calculation for non-axisymmetric deep drawing products with elliptical shape was developed for blank design, using an AutoLISP function in AutoCAD. A computer-aided process planning (CAPP) system for rotationally symmetric deep drawing products has been developed previously. However, the application of such a system to non-axisymmetric components has not been reported. Thus, a CAPP system for non-axisymmetric deep drawing products with elliptical shape was constructed using process sequence design. The system developed in this work consists of four modules. The first is a shape recognition module to recognize non-axisymmetric products. The second is a three-dimensional (3-D) modeling module to calculate the surface area of non-axisymmetric products. The third is a blank design module to create an oval-shaped blank with an identical surface area. The fourth is a process planning module based on production rules, which play the most important role in an expert system for manufacturing. The production rules are generated and updated by interviewing field engineers. In particular, the drawing coefficient and the punch and die radii for elliptical shape products are considered the main design parameters. The suitability of this system was verified by applying it to a real deep drawing product. The CAPP system constructed here should be very useful for reducing manufacturing lead time and improving product accuracy.
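The blank design step rests on equating the surface area of the drawn cup with that of the oval blank. The sketch below is a simplified stand-in for the paper's AutoLISP/3-D modeling modules: it idealizes the cup as a straight-walled elliptical cylinder with a flat bottom and no corner radii, and scales the cross-section ellipse to obtain a blank of equal area; the dimensions used are illustrative only.

import math

def ellipse_perimeter(a, b):
    # Ramanujan's approximation for the perimeter of an ellipse with semi-axes a, b
    h = ((a - b) / (a + b)) ** 2
    return math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))

def oval_blank_axes(a, b, height):
    """Semi-axes of an elliptical blank whose area equals the cup's surface area."""
    cup_area = math.pi * a * b + ellipse_perimeter(a, b) * height   # bottom + wall
    scale = math.sqrt(cup_area / (math.pi * a * b))                 # keep the a:b ratio
    return scale * a, scale * b

A, B = oval_blank_axes(a=30.0, b=20.0, height=25.0)   # dimensions in mm (illustrative)
print(f"blank semi-axes: {A:.1f} mm x {B:.1f} mm")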
7.3 Communications and Navigation
NASA Technical Reports Server (NTRS)
Manning, Rob
2005-01-01
This presentation gives an overview of the networks NASA currently uses to support space communications and navigation, and the requirements for supporting future deep space missions, including manned lunar and Mars missions. The presentation addresses the Space Network, Deep Space Network, and Ground Network, why new support systems are needed, and the potential for catastrophic failure of aging antennas. During Aerocapture, Entry, Descent and Landing (AEDL), space communications and navigation are considered only for precisely positioning, tracking and interacting with the spacecraft on arrival at its destination (the Moon, Mars, or Earth return). The presentation recommends a combined optical/radio frequency strategy for deep space communications.
NASA Astrophysics Data System (ADS)
Giorli, Giacomo
Deep diving odontocetes, like sperm whales, beaked whales, Risso's dolphins, and pilot whales, are known to forage at great depths in the ocean on squid and fish. These marine mammal species are top predators and for this reason are very important for the ecosystems they live in, since they can affect prey populations and control food web dynamics through top-down effects. The studies presented in this thesis investigate deep diving odontocetes' foraging strategies, and the density and size of their potential prey in the deep ocean, using passive and active acoustic techniques. Ecological Acoustic Recorders (EARs) were used to monitor the foraging activity of deep diving odontocetes at three locations around the world: the Josephine Seamount High Sea Marine Protected Area (JHSMPA), the Ligurian Sea, and the Kona coast of the island of Hawaii. In the JHSMPA, sperm whales' and beaked whales' foraging rates do not differ between night-time and day-time. However, in the Ligurian Sea, sperm whales switch to night-time foraging as the winter approaches, while beaked whales alternate between hunting mainly at night, and hunting both at night and during the day. Spatial differences were found in deep diving odontocetes' foraging activity in Hawaii, where they forage most in areas with higher chlorophyll concentrations. Pilot whales (and false killer whales, clustered together in the category "blackfishes") and Risso's dolphins forage mainly at night at all locations. These two species adjust their foraging activity with the length of the night. The density and size of animals living in deep-sea scattering layers were studied using a DIDSON imaging sonar at multiple stations along the Kona coast of Hawaii. The density of animals was affected by location, depth, month, and time of day. The size of animals was influenced by station and month. The DIDSON proved to be a successful, non-invasive technique for studying the density and size of animals in the deep sea. Densities were found to be an order of magnitude higher than previously found with trawls, and animals were found to be 3-4 times larger than in trawl data.
Effect of structural changes of lignocelluloses material upon pre-treatment using green solvents
NASA Astrophysics Data System (ADS)
Gunny, Ahmad Anas Nagoor; Arbain, Dachyar; Jamal, Parveen
2017-04-01
The Malaysia Biomass Strategy 2020 states that the key step of biofuel production from biomass lies in the pretreatment process. Conventional pre-treatment methods are "non-green" and costly. A recent green and cost-effective biomass pretreatment uses a new generation of ionic liquids known as Deep Eutectic Solvents (DESs). DESs are made of renewable components, are cheaper and greener, and are easier to synthesize. Thus, the present paper concerns the preparation of various combinations of DES and the effect of the DES pretreatment process on microcrystalline cellulose (MCC), a model substrate. The crystalline structural changes were studied using X-ray diffraction, Fourier transform infrared spectroscopy (FTIR), and surface area and pore size analysis. Results showed a reduction in the crystalline structure of MCC treated with the DESs and an increase in the surface area and pore size of MCC after the pre-treatment process. These results indicate that the DESs successfully converted the lignocellulosic material into a form suitable for hydrolysis and conversion to simple sugars.
NASA Astrophysics Data System (ADS)
He, Fei; Han, Ye; Wang, Han; Ji, Jinchao; Liu, Yuanning; Ma, Zhiqiang
2017-03-01
Gabor filters are widely utilized to detect iris texture information in several state-of-the-art iris recognition systems. However, the proper Gabor kernels and the generative pattern of iris Gabor features need to be predetermined in application. Traditional empirical Gabor filters and shallow iris encoding schemes are incapable of dealing with complex variations in iris imaging, including illumination, aging, deformation, and device variations. Therefore, an adaptive Gabor filter selection strategy and a deep learning architecture are presented. We first employ the particle swarm optimization approach and its binary version to define a set of data-driven Gabor kernels fitting the most informative filtering bands, and then capture complex patterns from the optimal Gabor filtered coefficients using a trained deep belief network. A succession of comparative experiments validates that our optimal Gabor filters produce more distinctive Gabor coefficients, and that our deep iris representations are more robust and stable than traditional iris Gabor codes. Furthermore, the depth and scales of the deep learning architecture are also discussed.
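The filter-selection stage can be sketched as a binary particle swarm optimization over a bank of candidate Gabor kernels. In the illustration below, the Gabor parameters, the synthetic "iris" patches, and the variance-based fitness score are all placeholder assumptions, and the deep belief network stage is omitted; the point is only the mechanics of selecting a data-driven kernel subset.

import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(2)
patches = rng.normal(size=(4, 32, 32))          # placeholder iris texture patches

def gabor_kernel(freq, theta, sigma=3.0, size=15):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

bank = [gabor_kernel(f, t) for f in (0.1, 0.2, 0.3) for t in np.linspace(0, np.pi, 4, endpoint=False)]

def fitness(mask):
    # Stand-in informativeness score: variance of the selected filters' responses.
    if mask.sum() == 0:
        return 0.0
    responses = [convolve2d(p, k, mode="valid") for p in patches for k, m in zip(bank, mask) if m]
    return float(np.mean([r.var() for r in responses]))

n_particles, n_filters = 10, len(bank)
pos = rng.integers(0, 2, size=(n_particles, n_filters))
vel = rng.normal(scale=0.1, size=(n_particles, n_filters))
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmax(pbest_fit)].copy()

for _ in range(10):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = (rng.random(pos.shape) < 1 / (1 + np.exp(-vel))).astype(int)   # sigmoid position update
    fits = np.array([fitness(p) for p in pos])
    improved = fits > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fits[improved]
    gbest = pbest[np.argmax(pbest_fit)].copy()

print("selected Gabor kernels:", np.flatnonzero(gbest))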
The Effects of Test Trial and Processing Level on Immediate and Delayed Retention.
Chang, Sau Hou
2017-03-01
The purpose of the present study was to investigate the effects of test trial and processing level on immediate and delayed retention. A 2 × 2 × 2 mixed ANOVA was used, with two between-subject factors of test trial (single test, repeated test) and processing level (shallow, deep), and one within-subject factor of final recall (immediate, delayed). Seventy-six college students were randomly assigned first to the single test trial (studied the stimulus words three times and took one free-recall test) or the repeated test trial (studied the stimulus words once and took three consecutive free-recall tests), and then to the shallow processing level (asked whether each stimulus word was presented in capital or small letters) or the deep processing level (asked whether each stimulus word belonged to a particular category), to study forty stimulus words. The immediate test was administered five minutes after the trials, whereas the delayed test was administered one week later. Results showed that the single test trial led to more words recalled than the repeated test trial in the immediate final free-recall test, and participants in the deep processing condition performed better than those in the shallow processing condition in both immediate and delayed retention. However, the dominance of the single test trial and deep processing did not hold in delayed retention. Additional study trials did not further enhance the delayed retention of words encoded with deep processing, but did enhance the delayed retention of words encoded with shallow processing.
Processes governing transient responses of the deep ocean buoyancy budget to a doubling of CO2
NASA Astrophysics Data System (ADS)
Palter, J. B.; Griffies, S. M.; Hunter Samuels, B. L.; Galbraith, E. D.; Gnanadesikan, A.
2012-12-01
Recent observational analyses suggest there is a temporal trend and high-frequency variability in deep ocean buoyancy over the last twenty years, a phenomenon reproduced even in low-mixing models. Here we use an earth system model (GFDL's ESM2M) to evaluate the physical processes that influence the buoyancy (and thus steric sea level) budget of the deep ocean in quasi-steady state and under a doubling of CO2. A new suite of model diagnostics allows us to quantitatively assess every process that influences the buoyancy budget and its temporal evolution, revealing surprising dynamics governing both the equilibrium budget and its transient response to climate change. The results suggest that the temporal evolution of the deep ocean contribution to sea level rise is due to a diversity of processes at high latitudes, whose net effect is then advected in the Eulerian mean flow to mid and low latitudes. In the Southern Ocean, a slowdown in convection and a spin-up of the residual mean advection are approximately equal contributors to the deep steric sea level rise. In the North Atlantic, the region of greatest deep steric sea level variability in our simulations, a decrease in the mixing of cold, dense waters from the marginal seas and a reduction in open ocean convection cause an accumulation of buoyancy in the deep subpolar gyre, which is then advected equatorward.
Lattanzi, Nicola; Menicagli, Dario; Dal Maso, Lorenzo
2016-04-01
Globalization phenomena and Information Communication Technology (ICT) are producing deep changes worldwide. The economic environment and society in which firms both cooperate and compete with each other are rapidly changing, leading firms to recognize the role of intangible resources as a source of fresh competitive advantage. Experience, innovation and the ability to create new knowledge arise entirely from the actions of human resources, inviting firms to focus on how to generate and shape knowledge. Therefore, the future of firms depends greatly on how managers are able to explore and exploit human resources. However, without a clear understanding of the nature of human beings and the complexity behind human interactions, we cannot understand the theory of organizational knowledge creation. Thus, how can firms discover, manage and valorize this "human advantage"? Neuroscience can increase our understanding of how cognitive and emotional processes work; in doing so, we may be able to better understand how individuals involved in a business organization make decisions and how external factors influence their behavior, especially in terms of commitment activation and engagement level. In this respect, a neuroscientific approach to business can support managers in decision-making processes. In a scenario where economic humanism plays a central role in the process of fostering firms' competitiveness and emerging strategies, we believe that a neuroscience approach in a business organization could be a valid source of value and inspiration for managers' decision-making processes.
Southern Ocean bottom water characteristics in CMIP5 models
NASA Astrophysics Data System (ADS)
Heuzé, CéLine; Heywood, Karen J.; Stevens, David P.; Ridley, Jeff K.
2013-04-01
Southern Ocean deep water properties and formation processes in climate models are indicative of their capability to simulate future climate, heat and carbon uptake, and sea level rise. Southern Ocean temperature and density averaged over 1986-2005 from 15 CMIP5 (Coupled Model Intercomparison Project Phase 5) climate models are compared with an observed climatology, focusing on bottom water. Bottom properties are reasonably accurate for half the models. Ten models create dense water on the Antarctic shelf, but it mixes with lighter water and is not exported as bottom water as in reality. Instead, most models create deep water by open ocean deep convection, a process occurring rarely in reality. Models with extensive deep convection are those with strong seasonality in sea ice. Optimum bottom properties occur in models with deep convection in the Weddell and Ross Gyres. Bottom Water formation processes are poorly represented in ocean models and are a key challenge for improving climate predictions.
NASA Astrophysics Data System (ADS)
Cuccu, Danila; Mereu, Marco; Agus, Blondine; Cau, Angelo; Culurgioni, Jacopo; Sabatini, Andrea; Jereb, Patrizia
2014-09-01
Coleoid cephalopods go through a single breeding period in their life cycle, i.e., they are semelparous, although great flexibility has been observed in their reproductive strategies, which range from simultaneous terminal spawning over a short period at the end of the animal's life to continuous spawning over a long period of the animal's life. So far, the information available on deep-sea species' reproductive strategies is still poor, and most of our knowledge about squid reproduction relates to females. In particular, not much is known about what strategy male squid have evolved to store sperm in spermatophores and adapt to semelparity. In this study an investigation of the male reproductive strategy of the deep-sea umbrella squid Histioteuthis bonnellii (Férussac, 1835) is presented. The reproductive system was examined in 119 males caught in Sardinian waters (Central Western Mediterranean) and is described for the first time. Results indicate that this species produces and stores spermatophores over a considerable period of time. The total number of spermatophores found in the reproductive system ranged between 12 and 3097, and the size of spermatophores stored by a single individual varied greatly, by up to over 300%. Spermatophore length (SpL) gradually decreased towards the distal end of the reproductive system, so that spermatophores found in the proximal part of Needham's Sac were larger than those found in the terminal organ. Body size and the SpL of spermatophores from the proximal part of Needham's Sac were positively correlated. Both the indices of the sperm mass and of the ejaculatory apparatus decreased with increasing SpL, while the cement body index increased, indicating that larger spermatophores contain less sperm and are equipped with larger cement bodies. Up to 64 spermatangia were found, exclusively in the terminal organ. The large size range of mature males (ML: 60.0-198.0 mm; TW: 113.50-2409.00 g) and the variation in spermatophore number and size indicate that in H. bonnellii males the allocation and storage of sperm start early in the individual's life and extend over time, while animals continue to grow and produce spermatophores presumably more successful in attaching to female tissues. This pattern enlarges the time window available for reproduction and likely maximizes mating success as the animals grow older and chances of mating events become comparatively lower, due to the inherently low density of specimens in the deep-sea environment. Both aspects are potentially indicative of adaptation to the deep sea.
Developing a Comprehensive Model of Intensive Care Unit Processes: Concept of Operations.
Romig, Mark; Tropello, Steven P; Dwyer, Cindy; Wyskiel, Rhonda M; Ravitz, Alan; Benson, John; Gropper, Michael A; Pronovost, Peter J; Sapirstein, Adam
2015-04-23
This study aimed to use a systems engineering approach to improve performance and stakeholder engagement in the intensive care unit to reduce several different patient harms. We developed a conceptual framework or concept of operations (ConOps) to analyze different types of harm that included 4 steps as follows: risk assessment, appropriate therapies, monitoring and feedback, as well as patient and family communications. This framework used a transdisciplinary approach to inventory the tasks and work flows required to eliminate 7 common types of harm experienced by patients in the intensive care unit. The inventory gathered both implicit and explicit information about how the system works or should work and converted the information into a detailed specification that clinicians could understand and use. Using the ConOps document, we created highly detailed work flow models to reduce harm and offer an example of its application to deep venous thrombosis. In the deep venous thrombosis model, we identified tasks that were synergistic across different types of harm. We will use a system of systems approach to integrate the variety of subsystems and coordinate processes across multiple types of harm to reduce the duplication of tasks. Through this process, we expect to improve efficiency and demonstrate synergistic interactions that ultimately can be applied across the spectrum of potential patient harms and patient locations. Engineering health care to be highly reliable will first require an understanding of the processes and work flows that comprise patient care. The ConOps strategy provided a framework for building complex systems to reduce patient harm.
Automatic detection of the inner ears in head CT images using deep convolutional neural networks
NASA Astrophysics Data System (ADS)
Zhang, Dongqing; Noble, Jack H.; Dawant, Benoit M.
2018-03-01
Cochlear implants (CIs) use electrode arrays that are surgically inserted into the cochlea to stimulate nerve endings, replacing the natural electro-mechanical transduction mechanism and restoring hearing for patients with profound hearing loss. Post-operatively, the CI needs to be programmed. Traditionally, this is done by an audiologist who is blind to the positions of the electrodes relative to the cochlea and relies on the patient's subjective response to stimuli. This is a trial-and-error process that can be frustratingly long (dozens of programming sessions are not unusual). To assist audiologists, we have proposed image-guided cochlear implant programming (IGCIP). In IGCIP, we use image processing algorithms to segment the intra-cochlear anatomy in pre-operative CT images and to localize the electrode arrays in post-operative CTs. We have shown that programming strategies informed by image-derived information significantly improve hearing outcomes for both adult and pediatric populations. We are now aiming to deploy these techniques clinically, which requires full automation. One challenge we face is the lack of standard image acquisition protocols. The content of the image volumes we need to process thus varies greatly, and visual inspection and labelling are currently required to initialize processing pipelines. In this work we propose a deep learning-based approach to automatically detect whether a head CT volume contains two ears, one ear, or no ear. Our approach has been tested on a data set that contains over 2,000 CT volumes from 153 patients, and we achieve an overall 95.97% classification accuracy.
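A minimal sketch of the detection idea follows: a small 3-D convolutional classifier that labels a down-sampled head CT volume as containing no ear, one ear, or two ears. The architecture, input size, and labels are assumptions for illustration; the paper's actual network and training protocol are not reproduced here.

import torch
import torch.nn as nn

class EarPresenceNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, 3)      # classes: no ear / one ear / two ears

    def forward(self, volume):                  # volume: (batch, 1, D, H, W)
        return self.classifier(self.features(volume).flatten(1))

model = EarPresenceNet()
volume = torch.randn(2, 1, 64, 64, 64)          # placeholder down-sampled CT volumes
labels = torch.tensor([2, 0])                   # placeholder ground-truth labels
loss = nn.CrossEntropyLoss()(model(volume), labels)
loss.backward()                                 # one illustrative training step
print("predicted classes:", model(volume).argmax(dim=1).tolist())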
Han, Guanghui; Liu, Xiabi; Zheng, Guangyuan; Wang, Murong; Huang, Shan
2018-06-06
Ground-glass opacity (GGO) is a common CT imaging sign on high-resolution CT, and it indicates that a lesion is more likely to be malignant than a common solid lung nodule. The automatic recognition of GGO CT imaging signs is of great importance for the early diagnosis and possible cure of lung cancers. Existing GGO recognition methods employ traditional low-level features, and system performance has improved slowly. Considering the high performance of CNN models in the computer vision field, we propose an automatic recognition method for 3D GGO CT imaging signs through the fusion of hybrid resampling and layer-wise fine-tuning CNN models. Our hybrid resampling is performed on multiple views and multiple receptive fields, which reduces the risk of missing small or large GGOs by adopting representative sampling panels and processing GGOs at multiple scales simultaneously. The layer-wise fine-tuning strategy makes it possible to obtain the optimal fine-tuned model. The multi-CNN fusion strategy obtains better performance than any single trained model. We evaluated our method on the GGO nodule samples in the publicly available LIDC-IDRI dataset of chest CT scans. The experimental results show that our method yields excellent results with 96.64% sensitivity, 71.43% specificity, and a 0.83 F1 score. Our method is a promising approach for applying deep learning to the computer-aided analysis of specific CT imaging signs with insufficient labeled images.
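The layer-wise fine-tuning idea can be sketched as follows: starting from a pretrained backbone, unfreeze one block at a time from the top down and keep the depth that validates best. The backbone, the binary GGO/non-GGO head, and the placeholder scoring function below are assumptions for illustration, not the paper's configuration.

import torch
import torch.nn as nn
from torchvision.models import resnet18

def make_model(n_trainable_blocks):
    # weights=None keeps the sketch self-contained; in practice a pretrained
    # backbone (e.g., weights="IMAGENET1K_V1") would be loaded and fine-tuned.
    model = resnet18(weights=None)
    model.fc = nn.Linear(model.fc.in_features, 2)            # GGO vs. non-GGO (assumed)
    blocks = [model.layer4, model.layer3, model.layer2, model.layer1]
    for p in model.parameters():
        p.requires_grad = False
    for block in blocks[:n_trainable_blocks]:                 # unfreeze top-most blocks first
        for p in block.parameters():
            p.requires_grad = True
    for p in model.fc.parameters():
        p.requires_grad = True                                # classifier head always trains
    return model

def validation_score(model):
    # Placeholder: in practice, train briefly and evaluate on held-out GGO patches.
    x = torch.randn(4, 3, 224, 224)
    with torch.no_grad():
        return model(x).softmax(dim=1)[:, 1].mean().item()

scores = {k: validation_score(make_model(k)) for k in range(0, 5)}
best_depth = max(scores, key=scores.get)
print("best number of fine-tuned blocks:", best_depth)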
Lee, Taehee; Kim, Uhnoh
2012-04-01
In the mammalian somatic system, peripheral inputs from cutaneous and deep receptors ascend via different subcortical channels and terminate in largely separate regions of the primary somatosensory cortex (SI). How these inputs are processed in SI and then projected back to the subcortical relay centers is critical for understanding how SI may regulate somatic information processing in the subcortex. Although it is now relatively well understood how SI cutaneous areas project to the subcortical structures, little is known about the descending projections from SI areas processing deep somatic input. We examined this issue by using the rodent somatic system as a model. In rat SI, deep somatic input is processed mainly in the dysgranular zone (DSZ) enclosed by the cutaneous barrel subfields. By using biotinylated dextran amine (BDA) as anterograde tracer, we characterized the topography of corticostriatal and corticofugal projections arising in the DSZ. The DSZ projections terminate mainly in the lateral subregions of the striatum that are also known as the target of certain SI cutaneous areas. This suggests that SI processing of deep and cutaneous information may be integrated, to a certain degree, in this striatal region. By contrast, at both thalamic and prethalamic levels as far as the spinal cord, descending projections from DSZ terminate in areas largely distinguishable from those that receive input from SI cutaneous areas. These subcortical targets of DSZ include not only the sensory but also motor-related structures, suggesting that SI processing of deep input may engage in regulating somatic and motor information flow between the cortex and periphery. Copyright © 2011 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Prasad, Paras N.
2017-02-01
This talk will focus on the design and applications of nanomaterials exhibiting strong multiphoton upconversion for multiphoton microscopy as well as for image-guided and light-activated therapy. Such processes can occur by truly nonlinear optical interactions proceeding through virtual intermediate states or by stepwise coupled linear excitations through real intermediate states. Multiphoton processes in biocompatible multifunctional nanoparticles allow for 3D deep tissue imaging. In addition, they can produce in-situ photon conversion of deep-tissue-penetrating near-IR light into the shorter wavelength light needed for photo-activated therapy at a targeted site, thus overcoming the limited penetration of UV or visible light into biological media. We are using near-IR emitters such as silicon quantum dots, which also exhibit strong multiphoton excitation, for multiphoton microscopy. Another approach involves nonlinear nanocrystals such as ZnO, which can produce four-wave mixing, sum frequency generation as well as second harmonic generation to convert deep-tissue-penetrating near-IR light at the targeted biological site into a desired shorter wavelength light suitable for bioimaging or activation of a therapy. We have utilized this approach to activate a photosensitizer for photodynamic therapy. Yet another type of upconversion material is rare-earth-ion-doped optical nanotransformers, which transform near-IR light from an external source by sequential single-photon absorption, in situ and on demand, to a needed wavelength. Applications of these nanotransformers in multiphoton photoacoustic imaging will also be presented. An exciting direction we are pursuing with these multiphoton nanoparticles is functional imaging of the brain. Simultaneously, they can effect optogenetic, regioselective stimulation of neurons, providing an intervention/augmentation strategy to enhance the cognitive state and laying a foundation for a futuristic vision of superhuman capabilities. Challenges and opportunities will be discussed.
Excavationless Exterior Foundation Insulation Field Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schirber, T.; Mosiman, G.; Ojczyk, C.
Building science research supports installing exterior (soil side) foundation insulation as the optimal method to enhance the hygrothermal performance of new homes. With exterior foundation insulation, water management strategies are maximized while insulating the basement space and ensuring a more even temperature at the foundation wall. However, such an approach can be very costly and disruptive when applied to an existing home, requiring deep excavation around the entire house. The NorthernSTAR Building America Partnership team implemented an innovative, minimally invasive foundation insulation upgrade technique on an existing home. The approach consisted of using hydrovac excavation technology combined with a liquid insulating foam. The team was able to excavate a continuous 4" wide by 4' to 5' deep trench around the entire house, 128 linear feet, except for one small part under the stoop that was obstructed with concrete debris. The combination pressure washer and vacuum extraction technology also enabled the elimination of large trenches and soil stockpiles normally produced by backhoe excavation. The resulting trench was filled with liquid insulating foam, which also served as a water-control layer of the assembly. The insulation was brought above grade using a liquid foam/rigid foam hybrid system and terminated at the top of the rim joist. Cost savings over the traditional excavation process ranged from 23% to 50%. The excavationless process could result in even greater savings since replacement of building structures, exterior features, utility meters, and landscaping would be minimal or non-existent in an excavationless process.
ERIC Educational Resources Information Center
Betoret, Fernando Domenech; Artiga, Amparo Gomez
2011-01-01
Introduction: This study examines the relationship between student basic need satisfaction (autonomy, competence, relatedness and belonging), their reporting of approaches to learning (deep and surface), their reporting of avoidance strategies (avoidance of effort and challenge, avoidance of help seeking and preference to avoid novelty) and…
ERIC Educational Resources Information Center
Wegner, Gregory
This paper describes the solution developed by Michigan State University to increase the institution's capacity for strategic innovation while respecting the University's limited financial means. One element of Michigan State's strategy has been to send cross-institutional teams to participate in the Knight Collaborative's Wharton-IRHE (Institute for…
ERIC Educational Resources Information Center
Hill, K. Dara
2017-01-01
The current climate of reading instruction calls for fluency strategies that stress automaticity, accuracy, and prosody, within the scope of prescribed reading programs that compromise teacher autonomy, with texts that are often irrelevant to the students' experiences. Consequently, accuracy and speed are developed, but deep comprehension is…
ERIC Educational Resources Information Center
Peterson, Deborah S.
2014-01-01
This article describes two approaches to improving literacy in a high poverty, diverse urban high school. One curriculum program, "Striving Readers," included a prescribed course of study for students reading below grade level along with schoolwide strategies. This approach did not improve targeted students' reading scores or motivation…
Automation for deep space vehicle monitoring
NASA Technical Reports Server (NTRS)
Schwuttke, Ursula M.
1991-01-01
Information on automation for deep space vehicle monitoring is given in viewgraph form. Information is given on automation goals and strategy; the Monitor Analyzer of Real-time Voyager Engineering Link (MARVEL); intelligent input data management; decision theory for making tradeoffs; dynamic tradeoff evaluation; evaluation of anomaly detection results; evaluation of data management methods; system level analysis with cooperating expert systems; the distributed architecture of multiple expert systems; and event driven response.
NASA Astrophysics Data System (ADS)
Cordes, E. E.; Jones, D.; Levin, L. A.
2016-02-01
The oil and gas industry is one of the most active agents of the global industrialization of the deep sea. The wide array of impacts following the Deepwater Horizon oil spill highlighted the need for a systematic review of existing regulations both in US waters and internationally. Within different exclusive economic zones, there are a wide variety of regulations regarding the survey of deep-water areas prior to leasing and the acceptable set-back distances from vulnerable marine ecosystems once they are discovered. There are also varying mitigation strategies for accidental release of oil and gas, including active monitoring systems, temporary closings of oil and gas production, and marine protected areas. The majority of these regulations are based on previous studies of typical impacts from oil and gas drilling, rather than accidental releases. However, the probability of an accident from standard operations increases significantly with depth. The Oil & Gas working group of the Deep Ocean Stewardship Initiative is an international partnership of scientists, managers, non-governmental organizations, and industry professionals whose goal is to review existing regulations for the oil & gas industry and produce a best practices document to advise both developed and developing nations on their regulatory structure as energy development moves into deeper waters.
Human hippocampus associates information in memory
Henke, Katharina; Weber, Bruno; Kneifel, Stefan; Wieser, Heinz Gregor; Buck, Alfred
1999-01-01
The hippocampal formation, one of the most complex and vulnerable brain structures, is recognized as a crucial brain area subserving human long-term memory. Yet, its specific functions in memory are controversial. Recent experimental results suggest that the hippocampal contribution to human memory is limited to episodic memory, novelty detection, semantic (deep) processing of information, and spatial memory. We measured the regional cerebral blood flow by positron-emission tomography while healthy volunteers learned pairs of words with different learning strategies. These led to different forms of learning, allowing us to test the degree to which they challenge hippocampal function. Neither novelty detection nor depth of processing activated the hippocampal formation as much as semantically associating the primarily unrelated words in memory. This is compelling evidence for another function of the human hippocampal formation in memory: establishing semantic associations. PMID:10318979
Ethnographic process evaluation in primary care: explaining the complexity of implementation.
Bunce, Arwen E; Gold, Rachel; Davis, James V; McMullen, Carmit K; Jaworski, Victoria; Mercer, MaryBeth; Nelson, Christine
2014-12-05
The recent growth of implementation research in care delivery systems has led to a renewed interest in methodological approaches that deliver not only intervention outcome data but also deep understanding of the complex dynamics underlying the implementation process. We suggest that an ethnographic approach to process evaluation, when informed by and integrated with quantitative data, can provide this nuanced insight into intervention outcomes. The specific methods used in such ethnographic process evaluations are rarely presented in detail; our objective is to stimulate a conversation around the successes and challenges of specific data collection methods in health care settings. We use the example of a translational clinical trial among 11 community clinics in Portland, OR that are implementing an evidence-based, health-information technology (HIT)-based intervention focused on patients with diabetes. Our ethnographic process evaluation employed weekly diaries by clinic-based study employees, observation, informal and formal interviews, document review, surveys, and group discussions to identify barriers and facilitators to implementation success, provide insight into the quantitative study outcomes, and uncover lessons potentially transferable to other implementation projects. These methods captured the depth and breadth of factors contributing to intervention uptake, while minimizing disruption to clinic work and supporting mid-stream shifts in implementation strategies. A major challenge is the amount of dedicated researcher time required. The deep understanding of the 'how' and 'why' behind intervention outcomes that can be gained through an ethnographic approach improves the credibility and transferability of study findings. We encourage others to share their own experiences with ethnography in implementation evaluation and health services research, and to consider adapting the methods and tools described here for their own research.
Microelectromechanical systems (MEMS) sensors based on lead zirconate titanate (PZT) films
NASA Astrophysics Data System (ADS)
Wang, Li-Peng
2001-12-01
In this thesis, modeling, fabrication and testing of microelectromechanical systems (MEMS) accelerometers based on piezoelectric lead zirconate titanate (PZT) films are investigated. Three different types of structures, cantilever beam, trampoline, and annular diaphragm, are studied. The work demonstrates high-performance, miniaturized, mass-production-compatible, and potentially circuitry-integratable piezoelectric-type PZT MEMS devices. Theoretical models of the cantilever-beam and trampoline accelerometers are derived via structural dynamics and the constitutive equations of piezoelectricity. The time-dependent transverse vibration equations, mode shapes, resonant frequencies, and sensitivities of the accelerometers are calculated from the models. Optimization of the silicon and PZT thickness is achieved by considering the effects of the structural dynamics, the material properties, and manufacturability for different accelerometer specifications. This work is the first demonstration of the fabrication of bulk-micromachined accelerometers combining a deep-trench reactive ion etching (DRIE) release strategy and thick piezoelectric PZT films deposited using a sol-gel method. Processing challenges that were overcome include materials compatibility, metallization, processing of thick layers, double-side processing, deep-trench silicon etching, post-etch cleaning and process integration. In addition, the processed PZT films are characterized by dielectric, ferroelectric (polarization electric-field hysteresis), and piezoelectric measurements, and no adverse effects are found. Dynamic frequency response and impedance resonance measurements are performed to ascertain the performance of the MEMS accelerometers. The results show high sensitivities and broad frequency ranges for the piezoelectric-type PZT MEMS accelerometers; the sensitivities range from 0.1 to 7.6 pC/g for resonant frequencies ranging from 44.3 kHz to 3.7 kHz. The sensitivities were compared to theoretical values and reasonable agreement (~36% difference) is obtained.
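As a rough illustration of the kind of structural-dynamics estimate that feeds such a design, the sketch below computes the fundamental bending resonance of a uniform silicon cantilever. This is not the thesis model (which treats the composite Si/PZT stack); the dimensions and material values are assumed for illustration only.

```python
import math

# Illustrative estimate only: first bending resonance of a uniform silicon cantilever,
# f1 = (lambda1^2 / (2*pi)) * sqrt(E*I / (rho*A*L^4)), with lambda1 ~ 1.875 for the
# fundamental clamped-free mode. All numbers are hypothetical.
E = 169e9                         # Young's modulus of silicon, Pa (assumed)
rho = 2330.0                      # density of silicon, kg/m^3
L, w, t = 800e-6, 200e-6, 10e-6   # length, width, thickness in m (assumed)

I = w * t**3 / 12.0               # second moment of area of the rectangular cross-section
A = w * t                         # cross-sectional area
lam1 = 1.875                      # first-mode eigenvalue of a clamped-free beam

f1 = (lam1**2 / (2.0 * math.pi)) * math.sqrt(E * I / (rho * A * L**4))
print(f"Estimated fundamental resonance: {f1 / 1e3:.1f} kHz")  # ~21 kHz for these dimensions
```

With these assumed dimensions the estimate lands in the tens-of-kilohertz range reported above, which is the point of the sketch: thickness and length set the resonance, and the resonance trades off against sensitivity.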
Managing Uncertainty in Water Infrastructure Design Using Info-gap Robustness
NASA Astrophysics Data System (ADS)
Irias, X.; Cicala, D.
2013-12-01
Info-gap theory, a tool for managing deep uncertainty, can be of tremendous value for design of water systems in areas of high seismic risk. Maintaining reliable water service in those areas is subject to significant uncertainties including uncertainty of seismic loading, unknown seismic performance of infrastructure, uncertain costs of innovative seismic-resistant construction, unknown costs to repair seismic damage, unknown societal impacts from downtime, and more. Practically every major earthquake that strikes a population center reveals additional knowledge gaps. In situations of such deep uncertainty, info-gap can offer advantages over traditional approaches, whether deterministic approaches that use empirical safety factors to address the uncertainties involved, or probabilistic methods that attempt to characterize various stochastic properties and target a compromise between cost and reliability. The reason is that in situations of deep uncertainty, it may not be clear what safety factor would be reasonable, or even if any safety factor is sufficient to address the uncertainties, and we may lack data to characterize the situation probabilistically. Info-gap is a tool that recognizes up front that our best projection of the future may be wrong. Thus, rather than seeking a solution that is optimal for that projection, info-gap seeks a solution that works reasonably well for all plausible conditions. In other words, info-gap seeks solutions that are robust in the face of uncertainty. Info-gap has been used successfully across a wide range of disciplines including climate change science, project management, and structural design. EBMUD is currently using info-gap to help it gain insight into possible solutions for providing reliable water service to an island community within its service area. The island, containing about 75,000 customers, is particularly vulnerable to water supply disruption from earthquakes, since it has negligible water storage and is entirely dependent on four potentially fragile water transmission mains for its day-to-day water supply. Using info-gap analysis, EBMUD is evaluating competing strategies for providing water supply to the island, for example submarine pipelines versus tunnels. The analysis considers not only the likely or 'average' results for each strategy, but also the worst-case performance of each strategy under varying levels of uncertainty. This analysis is improving the quality of the planning process, since it can identify strategies that ensure minimal disruption of water supply following a major earthquake, even if the earthquake and resulting damage fail to conform to our expectations. Results to date are presented, including a discussion of how info-gap analysis complements existing tools for comparing alternative strategies, and how info-gap improves our ability to quantify our tolerance for uncertainty.
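For readers unfamiliar with the method, here is a minimal sketch of an info-gap robustness calculation: for each candidate strategy, find the largest horizon of uncertainty in the seismic demand for which the worst-case performance still satisfies a critical requirement. The strategies, numbers, and performance model below are invented for illustration and do not represent EBMUD's analysis.

```python
# Minimal info-gap robustness sketch; the performance model and all numbers are hypothetical.

def worst_case_shortfall(capacity, nominal_demand, h):
    """Worst-case supply shortfall if the true seismic demand may exceed the
    nominal estimate by up to a fraction h (fractional-error info-gap model)."""
    worst_demand = nominal_demand * (1.0 + h)
    return max(0.0, worst_demand - capacity)

def robustness(capacity, nominal_demand, max_shortfall, h_step=0.01, h_max=5.0):
    """Largest uncertainty horizon h at which the worst case still meets the requirement."""
    h = 0.0
    while h + h_step <= h_max and \
            worst_case_shortfall(capacity, nominal_demand, h + h_step) <= max_shortfall:
        h += h_step
    return h

# Two hypothetical strategies for supplying the island after an earthquake.
for name, capacity in [("tunnel", 30.0), ("submarine pipelines", 24.0)]:
    h_hat = robustness(capacity, nominal_demand=20.0, max_shortfall=2.0)
    print(f"{name}: robustness h_hat ~ {h_hat:.2f}")
```

The strategy with the larger robustness tolerates a greater deviation of reality from the nominal projection before the requirement is violated, which is exactly the comparison the info-gap analysis supports.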
Self-Imagining Enhances Recognition Memory in Memory-Impaired Individuals with Neurological Damage
Grilli, Matthew D.; Glisky, Elizabeth L.
2010-01-01
Objective The ability to imagine an elaborative event from a personal perspective relies on a number of cognitive processes that may potentially enhance subsequent memory for the event, including visual imagery, semantic elaboration, emotional processing, and self-referential processing. In an effort to find a novel strategy for enhancing memory in memory-impaired individuals with neurological damage, the present study investigated the mnemonic benefit of a method we refer to as “self-imagining” – or the imagining of an event from a realistic, personal perspective. Method Fourteen individuals with neurologically-based memory deficits and fourteen healthy control participants intentionally encoded neutral and emotional sentences under three instructions: structural-baseline processing, semantic processing, and self-imagining. Results Findings revealed a robust “self-imagination effect” as self-imagination enhanced recognition memory relative to deep semantic elaboration in both memory-impaired individuals, F (1, 13) = 32.11, p < .001, η2 = .71, and healthy controls, F (1, 13) = 5.57, p < .05, η2 = .30. In addition, results indicated that mnemonic benefits of self-imagination were not limited by severity of the memory disorder nor were they related to self-reported vividness of visual imagery, semantic processing, or emotional content of the materials. Conclusions The findings suggest that the self-imagination effect may depend on unique mnemonic mechanisms possibly related to self-referential processing, and that imagining an event from a personal perspective makes that event particularly memorable even for those individuals with severe memory deficits. Self-imagining may thus provide an effective rehabilitation strategy for individuals with memory impairment. PMID:20873930
The NASA SETI sky survey - Recent developments
NASA Technical Reports Server (NTRS)
Klein, Michael J.; Gulkis, Samuel; Olsen, Edward T.; Renzetti, Nicholas A.
1988-01-01
NASA's Search for Extraterrestrial Intelligence (SETI) project utilizes two complementary search strategies: a sky survey and a targeted search. The SETI team at the Jet Propulsion Laboratory has primary responsibility for developing and carrying out the sky survey part of the Microwave Observing Project. The paper describes progress that has been made to develop the major elements of the survey, including a two-million-channel wideband spectrum analyzer system that is being developed and constructed by JPL for the Deep Space Network. The new system will be a multiuser instrument that will serve as a prototype for the SETI Sky Survey processor. This system will be used to test the signal detection and observational strategies on Deep Space Network antennas in the near future.
NASA Astrophysics Data System (ADS)
QingJie, Wei; WenBin, Wang
2017-06-01
In this paper, image retrieval using a deep convolutional neural network combined with regularization and the PReLU activation function is studied to improve image retrieval accuracy. A deep convolutional neural network can not only simulate the way the human brain receives and transmits information, but also contains convolution operations, which are well suited to processing images. Using a deep convolutional neural network for image retrieval is better than directly extracting visual image features. However, the structure of a deep convolutional neural network is complex, which makes it prone to over-fitting and reduces the accuracy of image retrieval. In this paper, we combine L1 regularization and the PReLU activation function to construct a deep convolutional neural network that prevents over-fitting of the network and improves the accuracy of image retrieval.
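As a concrete illustration of the two ingredients named above, the sketch below (PyTorch, hypothetical layer sizes, not the authors' network) builds a small convolutional feature extractor with PReLU activations and adds an explicit L1 penalty on the weights to the training loss.

```python
# Hedged sketch only: a small CNN with PReLU activations and an L1 weight penalty,
# in the spirit of the approach described above. Sizes and the task loss are placeholders.
import torch
import torch.nn as nn

class RetrievalCNN(nn.Module):
    def __init__(self, embedding_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.PReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.PReLU(),
            nn.MaxPool2d(2),
        )
        self.embed = nn.Linear(64 * 8 * 8, embedding_dim)  # assumes 32x32 inputs

    def forward(self, x):
        x = self.features(x)
        return self.embed(x.flatten(1))

def l1_penalty(model, lam=1e-5):
    # L1 regularization term added to the task loss to discourage over-fitting.
    return lam * sum(p.abs().sum() for p in model.parameters())

model = RetrievalCNN()
images = torch.randn(4, 3, 32, 32)              # dummy batch of images
embeddings = model(images)                      # descriptors used for retrieval
loss = embeddings.pow(2).mean() + l1_penalty(model)  # placeholder task loss + L1 term
loss.backward()
```

In a retrieval setting the learned embeddings would be compared by nearest-neighbour search; the L1 term and PReLU units are the two choices the paper highlights for controlling over-fitting.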
Very Deep Convolutional Neural Networks for Morphologic Classification of Erythrocytes.
Durant, Thomas J S; Olson, Eben M; Schulz, Wade L; Torres, Richard
2017-12-01
Morphologic profiling of the erythrocyte population is a widely used and clinically valuable diagnostic modality, but one that relies on a slow manual process associated with significant labor cost and limited reproducibility. Automated profiling of erythrocytes from digital images by capable machine learning approaches would augment the throughput and value of morphologic analysis. To this end, we sought to evaluate the performance of leading implementation strategies for convolutional neural networks (CNNs) when applied to classification of erythrocytes based on morphology. Erythrocytes were manually classified into 1 of 10 classes using a custom-developed Web application. Using recent literature to guide architectural considerations for neural network design, we implemented a "very deep" CNN, consisting of >150 layers, with dense shortcut connections. The final database comprised 3737 labeled cells. Ensemble model predictions on unseen data demonstrated a harmonic mean of recall and precision metrics of 92.70% and 89.39%, respectively. Of the 748 cells in the test set, 23 misclassification errors were made, with a correct classification frequency of 90.60%, represented as a harmonic mean across the 10 morphologic classes. These findings indicate that erythrocyte morphology profiles could be measured with a high degree of accuracy with "very deep" CNNs. Further, these data support future efforts to expand classes and optimize practical performance in a clinical environment as a prelude to full implementation as a clinical tool. © 2017 American Association for Clinical Chemistry.
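The sketch below is a generic illustration, not the published architecture: a convolutional block with an identity shortcut connection, the pattern that makes networks of more than 150 layers trainable. Channel counts, block depth, and input sizes are placeholders.

```python
# Minimal sketch (not the published model) of a convolutional block with a shortcut
# (skip) connection and a 10-class head matching the 10 morphologic classes.
import torch
import torch.nn as nn

class ShortcutBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(self.body(x) + x)   # identity shortcut around the block

model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),
    *[ShortcutBlock(64) for _ in range(8)],   # depth would be far greater in practice
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 10),                        # 10 erythrocyte morphology classes
)
logits = model(torch.randn(2, 3, 64, 64))     # dummy cell images
```

The shortcut lets gradients bypass each block, which is what allows stacking enough blocks to reach the "very deep" regime the study describes.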
Super-nonlinear fluorescence microscopy for high-contrast deep tissue imaging
NASA Astrophysics Data System (ADS)
Wei, Lu; Zhu, Xinxin; Chen, Zhixing; Min, Wei
2014-02-01
Two-photon excited fluorescence microscopy (TPFM) offers the highest penetration depth with subcellular resolution in light microscopy, due to its unique advantage of nonlinear excitation. However, a fundamental imaging-depth limit, accompanied by a vanishing signal-to-background contrast, still exists for TPFM when imaging deep into scattering samples. Formally, the focusing depth, at which the in-focus signal and the out-of-focus background are equal to each other, is defined as the fundamental imaging-depth limit. To go beyond this imaging-depth limit of TPFM, we report a new class of super-nonlinear fluorescence microscopy for high-contrast deep tissue imaging, including multiphoton activation and imaging (MPAI) harnessing novel photo-activatable fluorophores, stimulated emission reduced fluorescence (SERF) microscopy by adding a weak laser beam for stimulated emission, and two-photon induced focal saturation imaging with preferential depletion of ground-state fluorophores at focus. The resulting image contrasts all exhibit a higher-order (third- or fourth- order) nonlinear signal dependence on laser intensity than that in the standard TPFM. Both the physical principles and the imaging demonstrations will be provided for each super-nonlinear microscopy. In all these techniques, the created super-nonlinearity significantly enhances the imaging contrast and concurrently extends the imaging depth-limit of TPFM. Conceptually different from conventional multiphoton processes mediated by virtual states, our strategy constitutes a new class of fluorescence microscopy where high-order nonlinearity is mediated by real population transfer.
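Stated schematically (a paraphrase of the abstract, not an equation taken from the paper), the benefit of a higher-order intensity dependence can be summarized as follows, where n is the order of the nonlinearity and z_m is the depth at which the in-focus signal equals the out-of-focus background:

```latex
\[
  S_{\text{focus}} \propto I_{\text{focus}}^{\,n}, \qquad
  S_{\text{bg}} \propto \int I_{\text{out-of-focus}}^{\,n}\,\mathrm{d}V, \qquad
  z_m \;:\; S_{\text{focus}}(z_m) = S_{\text{bg}}(z_m).
\]
```

With n = 2 for standard TPFM and n = 3 or 4 for the super-nonlinear schemes, the out-of-focus contribution falls off faster relative to the focal signal, so the crossover depth z_m moves deeper.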
Liang, Liang; Liu, Minliang; Martin, Caitlin; Sun, Wei
2018-01-01
Structural finite-element analysis (FEA) has been widely used to study the biomechanics of human tissues and organs, as well as tissue-medical device interactions, and treatment strategies. However, patient-specific FEA models usually require complex procedures to set up and long computing times to obtain final simulation results, preventing prompt feedback to clinicians in time-sensitive clinical applications. In this study, by using machine learning techniques, we developed a deep learning (DL) model to directly estimate the stress distributions of the aorta. The DL model was designed and trained to take the input of FEA and directly output the aortic wall stress distributions, bypassing the FEA calculation process. The trained DL model is capable of predicting the stress distributions with average errors of 0.492% and 0.891% in the Von Mises stress distribution and peak Von Mises stress, respectively. This study marks, to our knowledge, the first study that demonstrates the feasibility and great potential of using the DL technique as a fast and accurate surrogate of FEA for stress analysis. © 2018 The Author(s).
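A minimal sketch of the surrogate idea follows; the published model's architecture, input encoding, and mesh size are not reproduced here, and all sizes below are assumptions chosen only to show the input-to-stress-field mapping.

```python
# Illustrative sketch of an FEA surrogate: a network mapping a flat vector of
# hypothetical geometry/load features directly to a nodal von Mises stress field,
# trained on FEA-generated pairs so inference bypasses the finite-element solve.
import torch
import torch.nn as nn

n_features, n_nodes = 64, 5000            # assumed sizes, not from the paper

surrogate = nn.Sequential(
    nn.Linear(n_features, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, n_nodes),              # predicted von Mises stress per node
)

inputs = torch.randn(8, n_features)       # dummy batch of encoded aorta geometries
fea_stress = torch.rand(8, n_nodes)       # dummy FEA-computed stress fields (training targets)
loss = nn.functional.mse_loss(surrogate(inputs), fea_stress)
loss.backward()
```

Once trained, evaluating the surrogate takes milliseconds rather than the hours a patient-specific FEA run can require, which is the clinical motivation the authors give.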
The circadian clock stops ticking during deep hibernation in the European hamster
Revel, Florent G.; Herwig, Annika; Garidou, Marie-Laure; Dardente, Hugues; Menet, Jérôme S.; Masson-Pévet, Mireille; Simonneaux, Valérie; Saboureau, Michel; Pévet, Paul
2007-01-01
Hibernation is a fascinating, yet enigmatic, physiological phenomenon during which body temperature and metabolism are reduced to save energy. During the harsh season, this strategy allows substantial energy saving by reducing body temperature and metabolism. Accordingly, biological processes are considerably slowed down and reduced to a minimum. However, the persistence of a temperature-compensated, functional biological clock in hibernating mammals has long been debated. Here, we show that the master circadian clock no longer displays 24-h molecular oscillations in hibernating European hamsters. The clock genes Per1, Per2, and Bmal1 and the clock-controlled gene arginine vasopressin were constantly expressed in the suprachiasmatic nucleus during deep torpor, as assessed by radioactive in situ hybridization. Finally, the melatonin rhythm-generating enzyme, arylalkylamine N-acetyltransferase, whose rhythmic expression in the pineal gland is controlled by the master circadian clock, no longer exhibits day/night changes of expression but constantly elevated mRNA levels over 24 h. Overall, these data provide strong evidence that in the European hamster the molecular circadian clock is arrested during hibernation and stops delivering rhythmic output signals. PMID:17715068
The Hyper Suprime-Cam SSP Survey: Overview and survey design
NASA Astrophysics Data System (ADS)
Aihara, Hiroaki; Arimoto, Nobuo; Armstrong, Robert; Arnouts, Stéphane; Bahcall, Neta A.; Bickerton, Steven; Bosch, James; Bundy, Kevin; Capak, Peter L.; Chan, James H. H.; Chiba, Masashi; Coupon, Jean; Egami, Eiichi; Enoki, Motohiro; Finet, Francois; Fujimori, Hiroki; Fujimoto, Seiji; Furusawa, Hisanori; Furusawa, Junko; Goto, Tomotsugu; Goulding, Andy; Greco, Johnny P.; Greene, Jenny E.; Gunn, James E.; Hamana, Takashi; Harikane, Yuichi; Hashimoto, Yasuhiro; Hattori, Takashi; Hayashi, Masao; Hayashi, Yusuke; Hełminiak, Krzysztof G.; Higuchi, Ryo; Hikage, Chiaki; Ho, Paul T. P.; Hsieh, Bau-Ching; Huang, Kuiyun; Huang, Song; Ikeda, Hiroyuki; Imanishi, Masatoshi; Inoue, Akio K.; Iwasawa, Kazushi; Iwata, Ikuru; Jaelani, Anton T.; Jian, Hung-Yu; Kamata, Yukiko; Karoji, Hiroshi; Kashikawa, Nobunari; Katayama, Nobuhiko; Kawanomoto, Satoshi; Kayo, Issha; Koda, Jin; Koike, Michitaro; Kojima, Takashi; Komiyama, Yutaka; Konno, Akira; Koshida, Shintaro; Koyama, Yusei; Kusakabe, Haruka; Leauthaud, Alexie; Lee, Chien-Hsiu; Lin, Lihwai; Lin, Yen-Ting; Lupton, Robert H.; Mandelbaum, Rachel; Matsuoka, Yoshiki; Medezinski, Elinor; Mineo, Sogo; Miyama, Shoken; Miyatake, Hironao; Miyazaki, Satoshi; Momose, Rieko; More, Anupreeta; More, Surhud; Moritani, Yuki; Moriya, Takashi J.; Morokuma, Tomoki; Mukae, Shiro; Murata, Ryoma; Murayama, Hitoshi; Nagao, Tohru; Nakata, Fumiaki; Niida, Mana; Niikura, Hiroko; Nishizawa, Atsushi J.; Obuchi, Yoshiyuki; Oguri, Masamune; Oishi, Yukie; Okabe, Nobuhiro; Okamoto, Sakurako; Okura, Yuki; Ono, Yoshiaki; Onodera, Masato; Onoue, Masafusa; Osato, Ken; Ouchi, Masami; Price, Paul A.; Pyo, Tae-Soo; Sako, Masao; Sawicki, Marcin; Shibuya, Takatoshi; Shimasaku, Kazuhiro; Shimono, Atsushi; Shirasaki, Masato; Silverman, John D.; Simet, Melanie; Speagle, Joshua; Spergel, David N.; Strauss, Michael A.; Sugahara, Yuma; Sugiyama, Naoshi; Suto, Yasushi; Suyu, Sherry H.; Suzuki, Nao; Tait, Philip J.; Takada, Masahiro; Takata, Tadafumi; Tamura, Naoyuki; Tanaka, Manobu M.; Tanaka, Masaomi; Tanaka, Masayuki; Tanaka, Yoko; Terai, Tsuyoshi; Terashima, Yuichi; Toba, Yoshiki; Tominaga, Nozomu; Toshikawa, Jun; Turner, Edwin L.; Uchida, Tomohisa; Uchiyama, Hisakazu; Umetsu, Keiichi; Uraguchi, Fumihiro; Urata, Yuji; Usuda, Tomonori; Utsumi, Yousuke; Wang, Shiang-Yu; Wang, Wei-Hao; Wong, Kenneth C.; Yabe, Kiyoto; Yamada, Yoshihiko; Yamanoi, Hitomi; Yasuda, Naoki; Yeh, Sherry; Yonehara, Atsunori; Yuma, Suraphong
2018-01-01
Hyper Suprime-Cam (HSC) is a wide-field imaging camera on the prime focus of the 8.2-m Subaru telescope on the summit of Mauna Kea in Hawaii. A team of scientists from Japan, Taiwan, and Princeton University is using HSC to carry out a 300-night multi-band imaging survey of the high-latitude sky. The survey includes three layers: the Wide layer will cover 1400 deg2 in five broad bands (grizy), with a 5 σ point-source depth of r ≈ 26. The Deep layer covers a total of 26 deg2 in four fields, going roughly a magnitude fainter, while the UltraDeep layer goes almost a magnitude fainter still in two pointings of HSC (a total of 3.5 deg2). Here we describe the instrument, the science goals of the survey, and the survey strategy and data processing. This paper serves as an introduction to a special issue of the Publications of the Astronomical Society of Japan, which includes a large number of technical and scientific papers describing results from the early phases of this survey.
Low-power low-latency MAC protocol for aeronautical applications
NASA Astrophysics Data System (ADS)
Sabater, Jordi; Kluge, Martin; Bovelli, Sergio; Schalk, Josef
2007-05-01
This paper describes asynchronous MAC (Medium Access Control) strategies based on the IEEE 802.15.4 physical layer for wireless aeronautical applications where low power and low latency are important requirements as well as security and data integrity. Sensor data is acquired and collected on request, by means of a mobile device, and later stored in a centralized database. In order to have the smallest power consumption the wireless sensor has to remain in deep sleep mode as long as possible and wake up and listen periodically for RF activity. If its unique ID is mentioned in the destination address field, the complete frame is received, processed and replied if necessary. If the detected packet is addressed to another sensor the reception will stop immediately and the wireless sensor will go into deep sleep mode again. Listening instead of sending actively does not 'pollute' the already crowded 2.45GHz spectrum, reduces collisions and increases security. The mobile data concentrator can not be synchronized with all the sensors installed in a distributed environment, therefore smart asynchronous data transmission strategies are needed to reduce latencies and increase throughput. For the considered application, sensors are independent of each other, simply share the medium and together with the data concentrator are organized in a star network topology. The centre of the star is the concentrator which is rarely in range. It coordinates and activates the wireless sensor nodes to collect the measured data.
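The node-side behaviour described above can be sketched as a simple duty-cycled loop. The following Python-style sketch is hypothetical: the radio functions are stubs and the timing constants are invented; it is meant only to show the listen-check-sleep logic, not the paper's firmware.

```python
# Hypothetical duty-cycling sketch of the asynchronous node behaviour described above.
import time

MY_ID = 0x1A2B            # this node's unique address (made-up value)
WAKE_PERIOD_S = 1.0       # assumed sleep interval between listen windows
LISTEN_WINDOW_S = 0.01    # brief check for RF activity

def radio_detect_preamble(timeout_s):
    """Stub: return a frame header (dict with a 'dest' field) if activity is heard, else None."""
    return None

def radio_receive_and_reply(header):
    """Stub: receive the full frame addressed to us, process it, and reply if required."""
    pass

for _ in range(3):  # a real node would loop forever; a few cycles shown for demonstration
    header = radio_detect_preamble(LISTEN_WINDOW_S)
    if header is not None and header.get("dest") == MY_ID:
        radio_receive_and_reply(header)    # frame is for us: stay awake and answer
    # a frame addressed to another node is dropped immediately
    time.sleep(WAKE_PERIOD_S)              # back to deep sleep until the next window
```

Because the node only listens briefly and never transmits unsolicited traffic, average power stays low and the shared 2.45 GHz band is not burdened, which is the design trade-off the paper emphasizes.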
Creativity of Biology Students in Online Learning: Case Study of Universitas Terbuka, Indonesia
NASA Astrophysics Data System (ADS)
Diki, Diki
This is a study of the effect of students' attitudes toward creativity on their learning achievement and persistence in an online learning program. The study also investigated whether there was an indirect effect of attitudes toward creativity on learning achievement and persistence through learning strategies. Three learning strategies were considered: deep learning, strategic learning, and surface learning. The participants were students of the department of biology and the department of biology teacher training at Universitas Terbuka (UT -- Indonesia Open University), a distance learning university in Indonesia. The researcher sent the questionnaire by email to students living throughout Indonesia; 102 students participated in the survey. The instruments were the rCAB test for values and attitudes toward creativity (Runco, 2012) and the Approaches and Study Skills Inventory for Students (ASSIST) test (Speth, 2013). There were four research questions (RQs) in this study. The first was whether there was a relationship between attitudes toward creativity and persistence; the researcher used an independent-samples t test for RQ1. The second was whether there was a relationship between attitudes toward creativity and learning outcomes; the researcher used multiple regression for RQ2. The third was whether there was an indirect relationship between attitudes toward creativity and persistence through learning strategy. The fourth was whether there was an indirect relationship between attitudes toward creativity and learning outcomes through learning strategy. The researcher used multiple regression for RQ3 and path analysis for RQ4. Control variables were age, income, department, gender, high school GPA, and daily online activities. The results showed that fun and being unconventional negatively predicted learning outcomes, while high school GPA positively predicted learning outcomes. Age and high school GPA negatively predicted persistence, while being unconventional positively predicted persistence. Two variables of the deep-learning strategy predicted learning outcomes. There were indirect relationships between attitudes toward creativity and learning outcomes through the deep-learning strategy.
Interventional Therapy for Upper Extremity Deep Vein Thrombosis
Carlon, Timothy A.; Sudheendra, Deepak
2017-01-01
Approximately 10% of all deep vein thromboses occur in the upper extremity, and that number is increasing due to the use of peripherally inserted central catheters. Sequelae of upper extremity deep vein thrombosis (UEDVT) are similar to those for lower extremity deep vein thrombosis (LEDVT) and include postthrombotic syndrome and pulmonary embolism. In addition to systemic anticoagulation, there are multiple interventional treatment options for UEDVT with the potential to reduce the incidence of these sequelae. To date, there have been no randomized trials to define the optimal management strategy for patients presenting with UEDVT, so many conclusions are drawn from smaller, single-center studies or from LEDVT research. In this article, the authors describe the evidence for the currently available treatment options and an approach to a patient with acute UEDVT. PMID:28265130
Succession in the petroleum reservoir microbiome through an oil field production lifecycle
Vigneron, Adrien; Alsop, Eric B.; Lomans, Bartholomeus P.; ...
2017-05-19
Subsurface petroleum reservoirs are an important component of the deep biosphere where indigenous microorganisms live under extreme conditions and in isolation from the Earth's surface for millions of years. However, unlike the bulk of the deep biosphere, the petroleum reservoir deep biosphere is subject to extreme anthropogenic perturbation, with the introduction of new electron acceptors, donors and exogenous microbes during oil exploration and production. Despite the fundamental and practical significance of this perturbation, there has never been a systematic evaluation of the ecological changes that occur over the production lifetime of an active offshore petroleum production system. Analysis of the entire Halfdan oil field in the North Sea (32 producing wells in production for 1-15 years) using quantitative PCR, multigenic sequencing, comparative metagenomic and genomic bins reconstruction revealed systematic shifts in microbial community composition and metabolic potential, as well as changing ecological strategies in response to anthropogenic perturbation of the oil field ecosystem, related to length of time in production. The microbial communities were initially dominated by slow growing anaerobes such as members of the Thermotogales and Clostridiales adapted to living on hydrocarbons and complex refractory organic matter. However, as seawater and nitrate injection (used for secondary oil production) delivered oxidants, the microbial community composition progressively changed to fast growing opportunists such as members of the Deferribacteres, Delta-, Epsilon- and Gammaproteobacteria, with energetically more favorable metabolism (for example, nitrate reduction, H2S, sulfide and sulfur oxidation). This perturbation has profound consequences for understanding the microbial ecology of the system and is of considerable practical importance as it promotes detrimental processes such as reservoir souring and metal corrosion. These findings provide a new conceptual framework for understanding the petroleum reservoir biosphere and have consequences for developing strategies to manage microbiological problems in the oil industry.
Software Graphics Processing Unit (sGPU) for Deep Space Applications
NASA Technical Reports Server (NTRS)
McCabe, Mary; Salazar, George; Steele, Glen
2015-01-01
A graphics processing capability will be required for deep space missions and must include a range of applications, from safety-critical vehicle health status to telemedicine for crew health. However, preliminary radiation testing of commercial graphics processing cards suggests they cannot operate in the deep space radiation environment. Investigation into a Software Graphics Processing Unit (sGPU) comprised of commercial-equivalent radiation hardened/tolerant single board computers, field programmable gate arrays, and safety-critical display software shows promising results. Preliminary performance of approximately 30 frames per second (FPS) has been achieved. Use of multi-core processors may provide a significant increase in performance.
NASA Astrophysics Data System (ADS)
Valls, Maria; Rueda, Lucía; Quetglas, Antoni
2017-10-01
Cephalopods and elasmobranchs are important components of marine ecosystems, whereby knowing the ecological role they play in the structure and dynamics of trophic networks is paramount. With this aim, stomach contents and stable isotopes of the most abundant elasmobranch and cephalopod species (5 and 18 species, respectively) inhabiting deep-sea ecosystems from the western Mediterranean were analyzed. The predators investigated encompassed different taxonomic groups, such as rays and sharks within elasmobranchs, and squids, octopuses and cuttlefishes within cephalopods. Specifically, we investigated ontogenetic shifts in diet, feeding strategies and prey consumption, trophic structure and potential dietary overlap between and within both taxonomical groups. Stable isotope analysis revealed ontogenetic shifts in diet in three elasmobranch (rays and sharks) and two cephalopod (octopuses and squids) species. Isotopic data showed a contrasting food source gradient (δ13C), from pelagic (squids and cuttlefishes) to benthic (octopuses and elasmobranchs). Stomach data highlighted a great variety of trophic guilds which could be further aggregated into three broad categories: benthic, benthopelagic and pelagic feeders. The combination of both stomach content and stable isotope analyses revealed a clear food partitioning among species. Mesopelagic prey were found to be an important food resource for deep-sea elasmobranchs and cephalopods, which could be related to the strong oligotrophic conditions in the area. The observed differences in feeding strategies within cephalopods and elasmobranchs should be taken into account when defining functional groups in trophodynamic models from the western Mediterranean. Our results also revealed that cephalopods play a key role for the benthopelagic coupling, whereas demersal elasmobranchs contribute primarily to a one-way flux accumulating energy resources into deep-sea ecosystems.
Blind CT image quality assessment via deep learning strategy: initial study
NASA Astrophysics Data System (ADS)
Li, Sui; He, Ji; Wang, Yongbo; Liao, Yuting; Zeng, Dong; Bian, Zhaoying; Ma, Jianhua
2018-03-01
Computed Tomography (CT) is one of the most important medical imaging modalities. CT images can be used to assist in the detection and diagnosis of lesions and to facilitate follow-up treatment. However, CT images are vulnerable to noise. There are two major sources intrinsically causing noise in CT data: the X-ray photon statistics and the electronic noise background. Therefore, it is necessary to perform image quality assessment (IQA) in CT imaging before diagnosis and treatment. Most existing CT image IQA methods are based on human observer studies; however, these methods are impractical in clinical practice because they are complex and time-consuming. In this paper, we present a blind CT image quality assessment method based on a deep learning strategy. A database of 1500 CT images was constructed, containing 300 high-quality images and 1200 corresponding noisy images. Specifically, the high-quality images were used to simulate the corresponding noisy images at four different doses. The images were then scored by experienced radiologists on a five-point scale for the following attributes: image noise, artifacts, edge and structure, overall image quality, and tumor size and boundary estimation. We trained a network to learn the non-linear map from CT images to subjective evaluation scores, and then loaded the pre-trained model to yield predicted scores for test images. To demonstrate the performance of the deep learning network in IQA, two correlation coefficients are utilized: the Pearson Linear Correlation Coefficient (PLCC) and the Spearman Rank Order Correlation Coefficient (SROCC). The experimental results demonstrate that the presented deep-learning-based IQA strategy can be used for CT image quality assessment.
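For reference, the two agreement metrics named above can be computed directly with SciPy; the score arrays below are made-up examples, not data from the study.

```python
# Pearson linear correlation (PLCC) and Spearman rank-order correlation (SROCC)
# between predicted quality scores and subjective radiologist scores (made-up values).
from scipy.stats import pearsonr, spearmanr

subjective = [4.5, 3.0, 1.5, 2.0, 5.0, 3.5]   # mean radiologist scores on a 1-5 scale
predicted  = [4.2, 3.1, 1.8, 2.4, 4.8, 3.3]   # network outputs for the same images

plcc, _ = pearsonr(subjective, predicted)
srocc, _ = spearmanr(subjective, predicted)
print(f"PLCC = {plcc:.3f}, SROCC = {srocc:.3f}")
```

PLCC measures how linearly the predicted scores track the subjective ones, while SROCC only checks whether the rank ordering agrees, which is why IQA studies typically report both.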
Springback optimization in automotive Shock Absorber Cup with Genetic Algorithm
NASA Astrophysics Data System (ADS)
Kakandikar, Ganesh; Nandedkar, Vilas
2018-02-01
Drawing or forming is a process normally used to achieve a required component form from a metal blank by applying a punch that radially draws the blank into the die by mechanical or hydraulic action, or a combination of both. When the component is drawn to a depth greater than its diameter, the process is usually termed deep drawing, which involves complicated states of material deformation. Due to the radial drawing of the material as it enters the die, radial drawing stress occurs in the flange together with tangential compressive stress. This compression generates wrinkles in the flange. Wrinkling is an unwanted phenomenon and can be controlled by applying a blank-holding force. Tensile stresses cause thinning in the wall region of the cup. The three main types of defects that occur in such a process are wrinkling, fracturing and springback. This paper reports work focused on springback and its control. Due to the complexity of the process, tool try-outs and experimentation can be costly and time-consuming. Numerical simulation proves to be a good option for studying the process and developing a control strategy for reducing springback; finite-element-based simulations are widely used for such purposes. In this study, the springback in deep drawing of an automotive Shock Absorber Cup is simulated with the finite element method. Taguchi design of experiments and analysis of variance are used to analyze the influence of the process parameters on the springback. Mathematical relations are developed to relate the process parameters to the resulting springback. The optimization problem is formulated for the springback, referring to the displacement magnitude in the selected sections. A Genetic Algorithm is then applied for process optimization with the objective of minimizing the springback. The results indicate that better prediction of the springback and process optimization can be achieved with a combined use of these methods and tools.
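To make the optimization step concrete, here is a toy genetic-algorithm sketch. The two process parameters, their bounds, and the fitted springback relation are hypothetical stand-ins for the paper's regression model; the FEA and Taguchi analysis are not reproduced.

```python
# Toy genetic algorithm minimizing a hypothetical fitted springback relation.
import random

random.seed(0)

def springback(bhf, speed):
    """Hypothetical fitted relation (stand-in for the paper's regression model):
    springback falls with blank-holder force (kN) and grows mildly with punch speed (mm/s)."""
    return 2.0 - 0.015 * bhf + 0.002 * speed + 0.00005 * (bhf - 60.0) ** 2

BOUNDS = [(20.0, 100.0), (5.0, 50.0)]   # (blank-holder force, punch speed) -- assumed ranges

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(ind, scale=0.1):
    return [min(hi, max(lo, g + random.gauss(0.0, scale * (hi - lo))))
            for g, (lo, hi) in zip(ind, BOUNDS)]

population = [random_individual() for _ in range(30)]
for _ in range(50):                                    # generations
    population.sort(key=lambda ind: springback(*ind))  # lower springback = fitter
    parents = population[:10]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    population = parents + children

best = min(population, key=lambda ind: springback(*ind))
print("best (BHF kN, speed mm/s):", [round(g, 1) for g in best],
      "-> predicted springback:", round(springback(*best), 3))
```

In the actual study the "fitness" would come from the FEA-derived or regression-derived springback response rather than a made-up formula, but the selection-crossover-mutation loop is the same.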
Inert gas narcosis and the encoding and retrieval of long-term memory.
Kneller, Wendy; Hobbs, Malcolm
2013-12-01
Prior research has indicated that inert gas narcosis (IGN) causes decrements in free recall memory performance and that these result from disruption of either encoding or self-guided search in the retrieval process. In a recent study we provided evidence, using a Levels of Processing approach, for the hypothesis that IGN affects the encoding of new information. The current study sought to replicate these results with an improved methodology. The effect of ambient pressure (111.5-212.8 kPa/1-11 msw vs. 456-516.8 kPa/35-41 msw) and level of processing (shallow vs. deep) on free recall memory performance was measured in 34 divers in the context of an underwater field experiment. Free recall was significantly worse at high ambient pressure compared to low ambient pressure in the deep processing condition (low pressure: M = 5.6; SD = 2.7; high pressure: M = 3.3; SD = 1.4), but not in the shallow processing condition (low pressure: M = 3.9; SD = 1.7; high pressure: M = 3.1; SD = 1.8), indicating IGN impaired memory ability in the deep processing condition. In the shallow water, deep processing improved recall over shallow processing but, significantly, this effect was eliminated in the deep water. In contrast to our earlier study this supported the hypothesis that IGN affects the self-guided search of information and not encoding. It is suggested that IGN may affect both encoding and self-guided search and further research is recommended.
MITLL Silicon Integrated Photonics Process: Design Guide
2015-07-31
Fragmentary excerpts from the comprehensive design guide: Deep Etch for Fiber Coupling (DEEP_ETCH) ... facets for fiber coupling. Standard design layers for each process are defined in Section 3, but other options can be made available. Notes on ... a silicon thinning process that can create very low loss waveguides (and which better suppresses back scatter and, therefore, resonance splitting) ...
NASA Astrophysics Data System (ADS)
Colwell, F. S.; Ntarlagiannis, D.
2007-05-01
The new subdiscipline of biogeophysics has focused mostly on the geophysical signatures of microbial processes in contaminated subsurface environments usually undergoing remediation. However, the use of biogeophysics to examine the biogeochemistry of marine sediments has not yet been well-integrated into conceptual models that describe subseafloor processes. Current examples of geophysical measurements that have been used to detect geomicrobiological processes or infer their location in the seafloor include sound surveillance system (SOSUS)-derived data that detect seafloor eruptive events, deep and shallow cross-sectional seismic surveys that determine the presence of hydraulically conductive zones or gas-bearing sediments (e.g., bottom-simulating reflectors or bubble-rich strata), and thermal profiles. One possible area for innovative biogeophysical characterization of the seafloor involves determining the depth of the sulfate-methane interface (SMI) in locations where sulfate diffuses from the seawater and methane emanates from subsurface strata. The SMI demarcates a stratum where microbially-driven anaerobic methane oxidation (AMO) is dependent upon methane as an electron donor and sulfate as an electron acceptor. AMO is carried out by a recently defined, unique consortium of microbes that metabolically temper the flux of methane into the overlying seawater. The depth of the SMI is, respectively, shallow or deep according to whether a high or low rate of methane flux occurs from the deep sediments. Presently, the SMI can only be determined by direct measurements of methane and sulfate concentrations in the interstitial waters or by molecular biological techniques that target the microbes responsible for creating the SMI. Both methods require collection and considerable analysis of sediment samples. Therefore, detection of the SMI by non-destructive methods would be advantageous. As a key biogeochemical threshold in marine sediments, the depth of the SMI defines methane charge in marine sediments, whether it is from dissolved methane or from methane hydrates. As such, a biogeophysical strategy for determining SMI depth would represent an important contribution to assessing methane charge with respect to climate change, sediment stability, or potential energy resources.
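The inverse relation between methane flux and SMI depth can be illustrated with a back-of-the-envelope, steady-state estimate. The sketch below assumes idealized linear porewater gradients and 1:1 AMO stoichiometry (sulfate consumed mole-for-mole with methane at the interface); all values are assumed, not measurements.

```python
# Idealized flux-balance estimate of the sulfate-methane interface (SMI) depth:
# at the SMI, the downward diffusive sulfate flux must match the upward methane flux.
D_SO4 = 5e-10        # effective sulfate diffusivity in sediment, m^2/s (assumed)
C_SO4 = 28.0         # bottom-water sulfate concentration, mol/m^3 (roughly seawater)

def smi_depth(methane_flux):
    """Linear-gradient estimate: D_SO4 * C_SO4 / z_SMI = methane flux (mol m^-2 s^-1)."""
    return D_SO4 * C_SO4 / methane_flux

for flux in (1e-9, 1e-8, 1e-7):          # low -> high upward methane flux
    print(f"CH4 flux {flux:.0e} mol/m2/s  ->  SMI depth ~ {smi_depth(flux):.2f} m")
```

Under these assumptions a tenfold increase in methane flux shoals the SMI tenfold, which is the qualitative relationship a non-destructive geophysical estimate of SMI depth would exploit to infer methane charge.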
NASA Astrophysics Data System (ADS)
Han, Fengshan; Wu, Xinli; Li, Xia; Zhu, Dekang
2018-02-01
The zonal disintegration phenomenon has been observed in the surrounding rock of deep mining roadways. It seriously affects the safety of mining and underground engineering and may lead to disasters. In the rock mass surrounding deep mining roadways, tectonic stress dominates and the horizontal stress is much greater than the vertical stress; when the direction of the maximum principal stress is parallel to the roadway axis, this is the main cause of the zonal disintegration phenomenon. Using ABAQUS software, the three-dimensional formation process of roadway rupture was numerically simulated in a systematic way. The study shows that when the maximum principal stress in deep underground mining is oriented along the roadway axis, the zonal disintegration phenomenon is successfully reproduced by the numerical simulation. The simulation thus reproduces both the zonal disintegration phenomenon and the formation process of damage in the surrounding rock, which has important practical engineering significance.
Measure Guideline. Deep Energy Enclosure Retrofit for Zero Energy Ready House Flat Roofs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loomis, H.; Pettit, B.
2015-05-29
This Measure Guideline provides design and construction information for a deep energy enclosure retrofit solution of a flat roof assembly. It describes the strategies and procedures for an exterior retrofit of a flat wood-framed roof with brick masonry exterior walls using exterior and interior (framing cavity) insulation. The approach supported in this guide could also be adapted for use with flat wood-framed roofs with wood-framed exterior walls.
Bergstad, O A
2013-12-01
This paper summarizes knowledge and knowledge gaps on benthic and benthopelagic deep-water fishes of the North Atlantic Ocean, i.e. species inhabiting deep continental shelf areas, continental and island slopes, seamounts and the Mid-Atlantic Ridge. While several studies demonstrate that distribution patterns are species specific, several also show that assemblages of species can be defined and such assemblages are associated with circulatory features and water mass distributions. In many subareas, sampling has, however, been scattered, restricted to shallow areas or soft substrata, and results from different studies tend to be difficult to compare quantitatively because of sampler differences. Particularly, few studies have been conducted on isolated deep oceanic seamounts and in Arctic deep-water areas. Time series of data are very few and most series are short. Recent studies of population structure of widely distributed demersal species show less than expected present connectivity and considerable spatial genetic heterogeneity and complexity for some species. In other species, genetic homogeneity across wide ranges was discovered. Mechanisms underlying the observed patterns have been proposed, but to test emerging hypotheses more species should be investigated across their entire distribution ranges. Studies of population biology reveal greater diversity in life-history strategies than often assumed, even between co-occurring species of the same family. Some slope and ridge-associated species are rather short-lived, others very long-lived, and growth patterns also show considerable variation. Recent comparative studies suggest variation in life-history strategies along a continuum correlated with depth, ranging from shelf waters to the deep sea where comparatively more species have extended lifetimes, and slow rates of growth and reproduction. Reproductive biology remains too poorly known for most deep-water species, and temporal variation in recruitment has only been studied for few deep-water species. A time series of roundnose grenadier Coryphaenoides rupestris recruitment spanning three decades of fisheries-independent data suggests that abundant year classes occur rarely and may influence size structure and abundance even for this long-lived species. © 2013 The Fisheries Society of the British Isles.
ERIC Educational Resources Information Center
Morrill, Richard L.
2007-01-01
"Strategic Leadership" addresses deep and continuing issues relating to strategy, governance, management, and leadership in higher education during a period of rapid change. Each of these themes is at the heart of current debates about the capacity of universities to respond to new expectations, market realities, reduced state funding,…
Critical Thinking: Strategies for Improving Student Learning, Part II
ERIC Educational Resources Information Center
Paul, Richard; Elder, Linda
2008-01-01
In the last column we focused (as a primary goal of instruction) on the importance of teaching so that students learn to think their way into and through content. We stressed the need for well-designed daily structures and tactics for fostering deep learning, offering three strategies as examples. In this column, we provide four additional…
Academic Self-Concept and Learning Strategies: Direction of Effect on Student Academic Achievement
ERIC Educational Resources Information Center
McInerney, Dennis M.; Cheng, Rebecca Wing-yi; Mok, Magdalena Mo Ching; Lam, Amy Kwok Hap
2012-01-01
This study examined the prediction of academic self-concept (English and Mathematics) and learning strategies (deep and surface), and their direction of effect, on academic achievement (English and Mathematics) of 8,354 students from 16 secondary schools in Hong Kong. Two competing models were tested to ascertain the direction of effect: Model A…
Deep learning with convolutional neural network in radiology.
Yasaka, Koichiro; Akai, Hiroyuki; Kunimatsu, Akira; Kiryu, Shigeru; Abe, Osamu
2018-04-01
Deep learning with a convolutional neural network (CNN) has recently been gaining attention for its high performance in image recognition. Images themselves can be utilized in a learning process with this technique, and feature extraction in advance of the learning process is not required; important features can be learned automatically. Thanks to developments in hardware and software, in addition to deep learning techniques themselves, application of this technique to radiological images for predicting clinically useful information, such as the detection and evaluation of lesions, is beginning to be investigated. This article illustrates basic technical knowledge regarding deep learning with CNNs, following the actual workflow (collecting data, implementing CNNs, and the training and testing phases). Pitfalls regarding this technique and how to manage them are also illustrated. We also describe some advanced topics of deep learning, results of recent clinical studies, and future directions for the clinical application of deep learning techniques.
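To make the collect-implement-train-test workflow outlined above concrete, the following is a minimal sketch of a small binary-classification CNN in Python with TensorFlow/Keras. It is purely illustrative: the synthetic 64x64 single-channel "scans", the layer sizes, and the training settings are assumptions, not the architecture used in the article.

```python
import numpy as np
import tensorflow as tf

# Step 1: "collect data" -- here replaced by synthetic grayscale images,
# standing in for real, labelled radiological scans.
rng = np.random.default_rng(0)
x = rng.random((200, 64, 64, 1)).astype("float32")
y = rng.integers(0, 2, size=200)          # 0 = no lesion, 1 = lesion (toy labels)

# Step 2: implement a small CNN (layer sizes are illustrative only).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Steps 3-4: training and testing phases (a validation split stands in for a test set).
model.fit(x, y, epochs=3, batch_size=32, validation_split=0.2, verbose=2)
```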
NASA Astrophysics Data System (ADS)
Kellerer-Pirklbauer, Andreas; Bartsch, Annett; Gitschthaler, Christoph; Reisenhofer, Stefan; Weyss, Gernot; Riedl, Claudia; Avian, Michael
2016-04-01
About 2.5% (~2000 km²) of the national territory of Austria is influenced by permafrost conditions. A slightly smaller area of Austria is additionally affected by deep seasonal frost, which is, however, similarly exposed to intensive physical weathering and related geomorphic processes. Currently, 23 skiing resorts, 31 water reservoirs and 42 mountain huts in Austria are either directly or indirectly influenced by permafrost and associated processes, as determined from regional permafrost models. Ground thermal changes most likely affect ground stability and infrastructure in those areas. Therefore, changes in the distribution and characteristics of permafrost and seasonal frost are of high economic and ecological importance. A range of Austrian institutions are interested in systematic permafrost monitoring (several universities, geological surveys, the Austrian torrent and avalanche control agency, and several alpine clubs). However, to date no coordinated monitoring network has been established on a national scale, and until recently no strategy for long-term permafrost/periglacial observation existed. Such a national strategy was developed in 2015 within the permAT project, funded through the StartClim2014 program. During permAT, an extensive literature review and data search as well as a workshop with 40 participants (scientists, stakeholders and policy makers) were carried out. The workshop allowed national as well as international colleagues to be integrated into the strategy development. Results of permAT clearly demonstrate that the number of existing permafrost/periglacial monitoring sites in Austria is far too small. Only a few alpine areas of Austria are well represented by the existing monitoring activities, while large areas lack such instrumentation. Furthermore, permafrost boreholes exist at only three sites in central Austria (all contributing to the GTN-P network), and there is a lack of knowledge about thermal conditions and recent changes of permafrost temperatures in western Austria. A central recommendation of the permAT strategy is to increase the number of monitoring sites, based on our analyses of the current situation and exchanges with different stakeholders. This should include temperature measurements in deep boreholes and in shallow boreholes close to the surface, geophysical surveys and ground movement measurements (rock glaciers, unstable rock faces). In addition to the terrestrial measurements, spatially continuous observation of surface movements with remote sensing methods is required. Demand is highest for the entire federal province of Tyrol, the district of Zell am See (province of Salzburg) and the south-eastern part of the province of Vorarlberg. In order to achieve a spatial coverage and technical set-up similar to Switzerland's, a minimum investment of about €1.5 million is required, taking advantage of synergies with, for example, existing automatic weather stations (e.g. from the Central Institute for Meteorology and Geodynamics), alpine huts and skiing infrastructure. Financial support could, as in Switzerland, come from a combination of partners from public institutions, industry and research institutes.
Computational Studies for Underground Coal Gasification (UCG) Process
NASA Astrophysics Data System (ADS)
Chatterjee, Dipankar
2017-07-01
Underground coal gasification (UCG) is a well-proven technology for accessing coal that lies too deep underground or is otherwise too costly to extract using conventional mining methods. UCG product gas is commonly used as a chemical feedstock or as fuel for power generation. During the UCG process, a cavity forms in the coal seam as the coal is converted to gaseous products. The cavity grows in a three-dimensional fashion as the gasification proceeds. The UCG process results from complex interactions of various geo-thermo-mechanical processes such as fluid flow, heat and mass transfer, chemical reactions, water influx, thermo-mechanical failure, and other geological factors. The growth rate of this cavity and its shape have a significant impact on gas flow patterns, chemical kinetics, temperature distributions, and ultimately the quality of the product gas. The literature provides insufficient information to give clear insight into these issues, which leaves a substantial opportunity to investigate the UCG process from both experimental and theoretical perspectives. Experiments are undoubtedly important in developing new research, but because of their excessive cost they are not always practical for a process as complicated as UCG. With the advent of high-performance computational facilities, it is now possible to study many such physically involved problems numerically using computational tools such as CFD (computational fluid dynamics). In order to gain a comprehensive understanding of the underlying physical phenomena, modeling strategies have frequently been utilized for the UCG process. In view of the above, the modeling strategies commonly deployed for mathematical modeling of the UCG process are described here in a concise manner. The available strategies are categorized into several groups and their salient features are discussed to build a good understanding of the underlying physical phenomena. This is likely to be valuable documentation for understanding the physical process of UCG and should pave the way for formulating new modeling and simulation techniques for computationally modeling the UCG process.
Graphene oxide as a sulfur immobilizer in high performance lithium/sulfur cells
Zhang, Yuegang; Cairns, Elton J.; Ji, Liwen; Rao, Mumin
2017-06-06
The loss of sulfur cathode material as a result of polysulfide dissolution causes significant capacity fading in rechargeable lithium/sulfur cells. Embodiments of the invention use a chemical approach to immobilize sulfur and lithium polysulfides via the reactive functional groups on graphene oxide. This approach obtains a uniform and thin (~tens of nanometers) sulfur coating on graphene oxide sheets by a chemical reaction-deposition strategy and a subsequent low-temperature thermal treatment process. Owing to the strong interaction between graphene oxide and sulfur or polysulfides, the resulting lithium/sulfur cells demonstrate a high reversible capacity of 950-1400 mAh g⁻¹ and stable cycling for more than 50 deep cycles at 0.1 C.
Recognition of digital characteristics based new improved genetic algorithm
NASA Astrophysics Data System (ADS)
Wang, Meng; Xu, Guoqiang; Lin, Zihao
2017-08-01
In the field of digital signal processing, estimating the characteristics of signal modulation parameters is a significant research direction. Based on a detailed study of a new improved genetic algorithm, this paper determines a set of eigenvalues that can distinguish different digital signal modulations. These eigenvalues are first taken as the initial gene pool; the gene pool is then modified over the course of genetic evolution through selection, crossover and elimination; finally, a strategy of strengthened competition and punishment is adopted to further optimize the gene pool and ensure that each generation consists of high-quality genes. Simulation results show that the method achieves global convergence, stability and a faster convergence speed.
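The selection / crossover / elimination loop with an added competition-and-punishment step described in the abstract can be illustrated with a generic genetic algorithm in Python. Everything below is an assumption for illustration: the binary "gene" encoding of a candidate eigenvalue set, the toy fitness function, the population size and the penalty rule are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
N_FEATURES, POP, GENERATIONS = 12, 30, 40

def fitness(mask: np.ndarray) -> float:
    # Toy objective: reward selecting the first half of the features and
    # penalize large feature sets (a stand-in for real separability of
    # modulation eigenvalues).
    return mask[: N_FEATURES // 2].sum() - 0.3 * mask.sum()

pop = rng.integers(0, 2, size=(POP, N_FEATURES))

for gen in range(GENERATIONS):
    scores = np.array([fitness(ind) for ind in pop])

    # Selection: keep the better half of the population (the "best gene pool").
    order = np.argsort(scores)[::-1]
    parents = pop[order[: POP // 2]]

    # Crossover: single-point crossover between randomly chosen parents.
    children = []
    while len(children) < POP - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, N_FEATURES)
        children.append(np.concatenate([a[:cut], b[cut:]]))
    pop = np.vstack([parents, children])

    # Mutation plus a simple "competition and punishment" step:
    # individuals scoring below the population mean are mutated more heavily.
    scores = np.array([fitness(ind) for ind in pop])
    weak = scores < scores.mean()
    flip = rng.random(pop.shape) < np.where(weak[:, None], 0.10, 0.02)
    pop = np.where(flip, 1 - pop, pop)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best feature mask:", best)
```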
Metasurface with interfering Fano resonance: manipulating transmission wave with high efficiency.
Su, Zhaoxian; Song, Kun; Yin, Jianbo; Zhao, Xiaopeng
2017-06-15
We proposed a novel strategy to design a deep subwavelength metasurface with full 2π transmission phase modulation and high transmission efficiency by applying resonators with interfering Fano resonance. Theoretical investigation demonstrates that the transmission efficiency of the resonators depends on the direct transmission coefficient, direct reflection coefficient, and Q factor. When an impedance layer is added in the resonators, the direct transmission and direct reflection coefficients can be facilely manipulated so that the span of the transmission phase around the resonance frequency can be extended to 2π. As a result, we can continuously adjust the transmission phase from 0 to 2π through changing the geometric parameters of the resonators and construct a deep subwavelength metasurface with the resonators to manipulate the transmission wave with high efficiency. We also find that a layer of grating can be used as the impedance layer to change direct transmission and direct reflection in the actual design of the metasurface. The proposed strategy may provide effective guidance to design a deep subwavelength metasurface for controlling a transmitted wave with high efficiency.
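As a rough numerical illustration of how a resonant pathway interfering with a direct (background) pathway produces a Fano-shaped transmission amplitude and a rapid phase sweep near resonance, the Python snippet below evaluates a generic two-pathway transmission model. The functional form, the direct transmission values and the Q factor are illustrative assumptions, not the specific resonator model of the paper; the comparison of two direct-transmission values only hints at why tuning the direct coefficients changes the attainable phase span.

```python
import numpy as np

# Generic two-pathway (direct + resonant) transmission model often used to
# describe Fano interference; all parameter values are illustrative only.
f0 = 10.0        # resonance frequency (arbitrary units)
Q = 50.0         # quality factor of the resonant mode
a = 0.8          # coupling amplitude of the resonant pathway
f = np.linspace(9.0, 11.0, 4001)

for t_d in (0.6, -0.5):   # two illustrative direct-transmission coefficients
    t = t_d + a / (1.0 + 2j * Q * (f - f0) / f0)   # direct + Lorentzian pathway
    phase = np.unwrap(np.angle(t))
    print(f"t_d = {t_d:+.1f}: phase span = {phase.max() - phase.min():.2f} rad, "
          f"min |t|^2 = {np.min(np.abs(t) ** 2):.3f}")
```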
Auditory processing during deep propofol sedation and recovery from unconsciousness.
Koelsch, Stefan; Heinke, Wolfgang; Sammler, Daniela; Olthoff, Derk
2006-08-01
Using evoked potentials, this study investigated effects of deep propofol sedation, and effects of recovery from unconsciousness, on the processing of auditory information with stimuli suited to elicit a physical MMN and a (music-syntactic) ERAN. Levels of sedation were assessed using the Bispectral Index (BIS) and the Modified Observer's Assessment of Alertness and Sedation Scale (MOAAS). EEG measurements were performed during wakefulness, deep propofol sedation (MOAAS 2-3, mean BIS=68), and a recovery period. Between deep sedation and the recovery period, the infusion rate of propofol was increased to achieve unconsciousness (MOAAS 0-1, mean BIS=35); EEG measurements of the recovery period were performed after subjects regained consciousness. During deep sedation, the physical MMN was markedly reduced, but still significant. No ERAN was observed at this level. A clear P3a was elicited during deep sedation by those deviants which were task-relevant during the awake state. As soon as subjects regained consciousness during the recovery period, a normal MMN was elicited. By contrast, the P3a was absent in the recovery period, and the P3b was markedly reduced. Results indicate that the auditory sensory memory (as indexed by the physical MMN) is still active, although strongly reduced, during deep sedation (MOAAS 2-3). The presence of the P3a indicates that attention-related processes are still operating at this level. Processes of syntactic analysis appear to be abolished during deep sedation. After propofol-induced anesthesia, the auditory sensory memory appears to operate normally as soon as subjects regain consciousness, whereas the attention-related processes indexed by P3a and P3b are markedly impaired. These results provide information about the effects of sedative drugs on auditory and attention-related mechanisms. The findings are important because these mechanisms are prerequisites for auditory awareness, auditory learning and memory, as well as language perception during anesthesia.
Fish protein hydrolysates: application in deep-fried food and food safety analysis.
He, Shan; Franco, Christopher; Zhang, Wei
2015-01-01
Four different processes (enzymatic, microwave-intensified enzymatic, chemical, and microwave-intensified chemical) were used to produce fish protein hydrolysates (FPH) from Yellowtail Kingfish for food applications. In this study, the production yield and oil-binding capacity of FPH produced from different processes were evaluated. Microwave intensification significantly increased the production yields of enzymatic process from 42% to 63%. It also increased the production yields of chemical process from 87% to 98%. The chemical process and microwave-intensified chemical process produced the FPH with low oil-binding capacity (8.66 g oil/g FPH and 6.25 g oil/g FPH), whereas the microwave-intensified enzymatic process produced FPH with the highest oil-binding capacity (16.4 g oil/g FPH). The FPH from the 4 processes were applied in the formulation of deep-fried battered fish and deep-fried fish cakes. The fat uptake of deep-fried battered fish can be reduced significantly from about 7% to about 4.5% by replacing 1% (w/w) batter powder with FPH, and the fat uptake of deep-fried fish cakes can be significantly reduced from about 11% to about 1% by replacing 1% (w/w) fish mince with FPH. Food safety tests of the FPH produced by these processes demonstrated that the maximum proportion of FPH that can be safely used in food formulation is 10%, due to its high content of histamine. This study demonstrates the value of FPH to the food industry and bridges the theoretical studies with the commercial applications of FPH. © 2015 Institute of Food Technologists®
Earthquake prediction using extinct monogenetic volcanoes: A possible new research strategy
NASA Astrophysics Data System (ADS)
Szakács, Alexandru
2011-04-01
Volcanoes are extremely effective transmitters of matter, energy and information from the deep Earth towards its surface. Their capacities as information carriers are, however, far from being fully exploited. Volcanic conduits can in general be viewed as rod-like or sheet-like vertical features with relatively homogeneous composition and structure, crosscutting geological structures of far greater complexity and compositional heterogeneity. Information-carrying signals, such as earthquake precursor signals originating deep below the Earth's surface, are transmitted with much less loss of information through homogeneous vertically extended structures than through the horizontally segmented heterogeneous lithosphere or crust. Volcanic conduits can thus be viewed as upside-down "antennas" or waveguides which can be used as privileged pathways of any possible earthquake precursor signal. In particular, conduits of monogenetic volcanoes are promising transmitters of deep Earth information to be received and decoded at surface monitoring stations, because of the expected more homogeneous nature of their rock-fill compared to polygenetic volcanoes. Among monogenetic volcanoes, those with dominantly effusive activity appear to be the best candidates for privileged earthquake monitoring sites. More specifically, effusive monogenetic volcanic conduits filled with rocks of primitive parental magma composition, indicating direct ascent from sub-lithospheric magma-generating areas, are the most suitable. Further selection criteria may include the age of the volcanism considered and the presence of mantle xenoliths in surface volcanic products, indicating a direct and straightforward link between the deep lithospheric mantle and the surface through the conduit. Innovative earthquake prediction research strategies can be developed on these grounds by considering conduits of selected extinct monogenetic volcanoes and deep trans-crustal fractures as privileged emplacement sites for seismic monitoring stations using an assemblage of physical, chemical and biological sensors devised to detect precursory signals. Earthquake prediction systems can be built up based on the concept of a signal emission-transmission-reception system, in which volcanic conduits and/or deep fractures play the role of the most effective signal transmission paths through the lithosphere. Unique "precursory fingerprints" of individual seismic structures are expected to emerge as an outcome of target-oriented strategic prediction research. Intelligent pattern-recognition systems are to be included for evaluation of the signal assemblages recorded by complex sensor arrays. Such strategies are, however, expected to be limited to intermediate-depth and deep seismic structures. Due to its particular features and geotectonic setting, the Vrancea seismic structure in Romania appears to be an excellent experimental target for prediction research.
NASA Astrophysics Data System (ADS)
Zhang, Yanjie; Sun, Jin; Chen, Chong; Watanabe, Hiromi K.; Feng, Dong; Zhang, Yu; Chiu, Jill M. Y.; Qian, Pei-Yuan; Qiu, Jian-Wen
2017-04-01
Polynoid scale worms (Polynoidae, Annelida) invaded deep-sea chemosynthesis-based ecosystems approximately 60 million years ago, but little is known about their genetic adaptation to the extreme deep-sea environment. In this study, we report the first two transcriptomes of deep-sea polynoids (Branchipolynoe pettiboneae, Lepidonotopodium sp.) and compare them with the transcriptome of a shallow-water polynoid (Harmothoe imbricata). We determined codon and amino acid usage, positively selected genes, highly expressed genes and putative duplicated genes. Transcriptome assembly produced 98,806 to 225,709 contigs in the three species. There were more positively charged amino acids (i.e., histidine and arginine) and fewer negatively charged amino acids (i.e., aspartic acid and glutamic acid) in the deep-sea species. There were 120 genes showing clear evidence of positive selection. Among the 10% most highly expressed genes, there were more hemoglobin genes with high expression levels in both deep-sea species. Duplicated genes related to DNA recombination and metabolism, and to gene expression, were enriched only in the deep-sea species. Deep-sea scale worms adopted two strategies of adaptation to hypoxia in the chemosynthesis-based habitats (i.e., rapid evolution of tetra-domain hemoglobin in Branchipolynoe or high expression of single-domain hemoglobin in Lepidonotopodium sp.).
A Robust Deep Model for Improved Classification of AD/MCI Patients
Li, Feng; Tran, Loc; Thung, Kim-Han; Ji, Shuiwang; Shen, Dinggang; Li, Jiang
2015-01-01
Accurate classification of Alzheimer’s Disease (AD) and its prodromal stage, Mild Cognitive Impairment (MCI), plays a critical role in possibly preventing progression of memory impairment and improving quality of life for AD patients. Among many research tasks, it is of particular interest to identify noninvasive imaging biomarkers for AD diagnosis. In this paper, we present a robust deep learning system to identify different progression stages of AD patients based on MRI and PET scans. We utilized the dropout technique to improve classical deep learning by preventing its weight co-adaptation, which is a typical cause of over-fitting in deep learning. In addition, we incorporated stability selection, an adaptive learning factor, and a multi-task learning strategy into the deep learning framework. We applied the proposed method to the ADNI data set and conducted experiments for AD and MCI conversion diagnosis. Experimental results showed that the dropout technique is very effective in AD diagnosis, improving the classification accuracies by 5.9% on average as compared to the classical deep learning methods. PMID:25955998
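A minimal sketch of the dropout idea the authors credit for the accuracy gain: randomly zeroing hidden units during training so that weights cannot co-adapt. The tiny two-class network, the 0.5 rate and the synthetic "imaging feature" vectors below are assumptions for illustration; they are not the multimodal MRI/PET architecture, stability selection or multi-task components of the actual system.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(7)
# Synthetic stand-ins for per-subject imaging feature vectors and diagnoses.
x = rng.standard_normal((240, 90)).astype("float32")     # 90 toy ROI features
y = rng.integers(0, 2, size=240)                          # 0 = stable, 1 = converter (toy)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(90,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.5),      # randomly drops half of the units per training step
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dropout(0.5),      # applied only during training, not at inference
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5, batch_size=32, validation_split=0.2, verbose=2)
```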
Fusion of shallow and deep features for classification of high-resolution remote sensing images
NASA Astrophysics Data System (ADS)
Gao, Lang; Tian, Tian; Sun, Xiao; Li, Hang
2018-02-01
Effective spectral and spatial pixel description plays a significant role in the classification of high-resolution remote sensing images. Current approaches to pixel-based feature extraction are of two main kinds: one includes the widely used principal component analysis (PCA) and gray-level co-occurrence matrix (GLCM) as representatives of shallow spectral and shape features, and the other refers to deep learning-based methods, which employ deep neural networks and have greatly improved classification accuracy. However, the former traditional features are insufficient to depict the complex distributions in high-resolution images, while deep features demand plenty of samples to train the network; otherwise, overfitting easily occurs when only limited samples are available for training. In view of the above, we propose a GLCM-based convolutional neural network (CNN) approach to extract features and implement classification for high-resolution remote sensing images. The GLCM representation preserves the essential content of the original images while eliminating redundant information and undesired noise. Meanwhile, taking shallow features as the input of the deep network contributes better guidance and interpretability. Given the limited number of samples, strategies such as L2 regularization and dropout are used to prevent overfitting. A fine-tuning strategy is also used in our study to reduce training time and further enhance the generalization performance of the network. Experiments with popular data sets such as the PaviaU data validate that the proposed method improves performance compared to the individual approaches involved.
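A rough Python sketch of the shallow-feature front end: gray-level co-occurrence matrix (GLCM) statistics are computed per image patch with scikit-image and then fed to a small neural network with dropout and L2 regularization. The patch size, the chosen GLCM properties and the tiny dense classifier are illustrative assumptions and are much simpler than the CNN fusion described in the paper (recent scikit-image releases spell the functions graycomatrix/graycoprops; older releases use greycomatrix/greycoprops).

```python
import numpy as np
import tensorflow as tf
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(1)

def glcm_features(patch: np.ndarray) -> np.ndarray:
    """Shallow texture descriptor for one 8-bit grayscale patch."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

# Synthetic 32x32 patches standing in for labelled remote-sensing samples.
patches = rng.integers(0, 256, size=(300, 32, 32), dtype=np.uint8)
labels = rng.integers(0, 2, size=300)

features = np.stack([glcm_features(p) for p in patches])

# Small network on top of the shallow features; dropout and L2 regularization
# are included because the abstract names them as anti-overfitting strategies.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(features.shape[1],)),
    tf.keras.layers.Dense(32, activation="relu",
                          kernel_regularizer=tf.keras.regularizers.l2(1e-3)),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(features, labels, epochs=5, batch_size=32, validation_split=0.2, verbose=2)
```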
ERIC Educational Resources Information Center
Dinsmore, Daniel L.; Alexander, Patricia A.
2012-01-01
The prevailing assumption by some that deep processing promotes stronger learning outcomes while surface processing promotes weaker learning outcomes has been called into question by the inconsistency and ambiguity of results in investigations of the relation between levels of processing and performance. The purpose of this literature review is to…
2010-01-01
Confronting Space Debris: Strategies and Warnings from Comparable Examples Including Deepwater Horizon (Dave Baiocchi). The Deepwater Horizon (DH) was an ultra-deepwater, semisubmersible offshore drilling rig contracted to BP by its owner, Transocean. The rig was capable of...
Building a Values-Informed Mental Model for New Orleans Climate Risk Management.
Bessette, Douglas L; Mayer, Lauren A; Cwik, Bryan; Vezér, Martin; Keller, Klaus; Lempert, Robert J; Tuana, Nancy
2017-10-01
Individuals use values to frame their beliefs and simplify their understanding when confronted with complex and uncertain situations. The high complexity and deep uncertainty involved in climate risk management (CRM) lead to individuals' values likely being coupled to and contributing to their understanding of specific climate risk factors and management strategies. Most mental model approaches, however, which are commonly used to inform our understanding of people's beliefs, ignore values. In response, we developed a "Values-informed Mental Model" research approach, or ViMM, to elicit individuals' values alongside their beliefs and determine which values people use to understand and assess specific climate risk factors and CRM strategies. Our results show that participants consistently used one of three values to frame their understanding of risk factors and CRM strategies in New Orleans: (1) fostering a healthy economy, wealth, and job creation, (2) protecting and promoting healthy ecosystems and biodiversity, and (3) preserving New Orleans' unique culture, traditions, and historically significant neighborhoods. While the first value frame is common in analyses of CRM strategies, the latter two are often ignored, despite their mirroring commonly accepted pillars of sustainability. Other values like distributive justice and fairness were prioritized differently depending on the risk factor or strategy being discussed. These results suggest that the ViMM method could be a critical first step in CRM decision-support processes and may encourage adoption of CRM strategies more in line with stakeholders' values. © 2017 Society for Risk Analysis.
Exploiting broadband seismograms and the mechanism of deep-focus earthquakes
NASA Astrophysics Data System (ADS)
Jiao, Wenjie
1997-09-01
Modern broadband seismic instrumentation has provided enormous opportunities to retrieve information in almost any frequency band of seismic interest. In this thesis, we have investigated the long-period responses of broadband seismometers and the problem of recovering actual ground motion. For the first time, we recovered the static offset for an earthquake from dynamic seismograms. The very-long-period near- and intermediate-field waves from the large 1994 Bolivian deep earthquake (depth = 630 km, Mw = 8.2) and the large 1997 Argentina deep earthquake (depth = 285 km, Mw = 7.1) were successfully recovered from the portable broadband recordings of the BANJO and APVC networks. These waves provide another dynamic window into the seismic source process and may provide unique information to help constrain the source dynamics of deep earthquakes in the future. We have developed a new method to locate global explosion events based on broadband waveform stacking and simulated annealing. This method utilizes the information provided by the full broadband waveforms. Instead of "picking times", the character of the wavelet is used for locating events. The application of this methodology to a Lop Nor nuclear explosion is very successful and suggests a procedure for automatic monitoring. We have discussed the problem of deep earthquakes from the viewpoint of rock mechanics and seismology. The rupture propagation of deep earthquakes requires a slip-weakening process unlike that for shallow events. However, this process is not necessarily the same as the process which triggers the rupture. A model of partial melting due to stress release is developed to account for the slip-weakening process in deep earthquake rupture. The energy required for partial melting in this model is on the same order as the maximum energy required for the slip-weakening process in shallow earthquake rupture. However, verification of this model requires experimental work on the thermodynamic properties of rocks under non-hydrostatic stress. The solution of the deep earthquake problem will require an interdisciplinary study of seismology, high-pressure rock mechanics, and mineralogy.
The effects of quantity and depth of processing on children's time perception.
Arlin, M
1986-08-01
Two experiments were conducted to investigate the effects of quantity and depth of processing on children's time perception. These experiments tested the appropriateness of two adult time-perception models (attentional and storage size) for younger ages. Children were given stimulus sets of equal time which varied by level of processing (deep/shallow) and quantity (list length). In the first experiment, 28 children in Grade 6 reproduced presentation times of various quantities of pictures under deep (living/nonliving categorization) or shallow (repeating label) conditions. Students also compared pairs of durations. In the second experiment, 128 children in Grades K, 2, 4, and 6 reproduced presentation times under similar conditions with three or six pictures and with deep or shallow processing requirements. Deep processing led to decreased estimation of time. Higher quantity led to increased estimation of time. Comparative judgments were influenced by quantity. The interaction between age and depth of processing was significant. Older children were more affected by depth differences than were younger children. Results were interpreted as supporting different aspects of each adult model as explanations of children's time perception. The processing effect supported the attentional model and the quantity effect supported the storage size model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ojczyk, Cindy; Mosiman, Garrett; Huelman, Pat
The development of an alternative method to interior-applied insulation strategies or exterior-applied 'band-aids' such as heat tapes and ice belts may help reduce energy needs of millions of 1-1/2 story homes while reducing the risk of ice dam formation. A potential strategy for energy improvement of the roof is borrowed from new construction best practices: here an 'overcoat' of a continuous air, moisture, and thermal barrier is applied on the outside of the roof structure for improved overall performance. The continuous insulation of this approach facilitates a reduction in thermal bridging, which could further reduce energy consumption and bring existing homes closer to meeting the Building America goals for energy reduction. Research favors an exterior approach to deep energy retrofits and ice dam prevention in existing homes. The greatest amount of research focuses on whole house deep energy retrofits, leaving a void in roof-only applications. The research is also void of data supporting the hygrothermal performance, durability, constructability, and cost of roof-only exterior overcoat strategies. Yet, contractors interviewed for this report indicate an understanding that exterior approaches are most promising for mitigating ice dams and energy loss and are able to sell these strategies to homeowners.
NASA Astrophysics Data System (ADS)
André, Michel; Favali, Paolo; Piatteli, Paolo; Miranda, Jorge; Waldmann, Christoph; Esonet Lido Demonstration Mission Team
2010-05-01
Understanding the link between natural and anthropogenic processes is essential for predicting the magnitude and impact of future changes to the natural balance of the oceans. Deep-sea observatories have the potential to play a key role in the assessment and monitoring of these changes. ESONET is a European Network of Excellence of deep-sea observatories that includes 55 partners belonging to 14 countries. ESONET NoE is providing data on key parameters from the subsurface down to the seafloor at representative locations that transmit them to shore. The strategies of deployment, data sampling, technological development, standardisation and data management are being integrated with projects dealing with spatial and near-surface time series. LIDO (Listening to the Deep Ocean environment) is one of these projects and proposes to establish a first nucleus of a regional network of multidisciplinary seafloor observatories, contributing to the coordination of high-quality research in the ESONET NoE by allowing the real-time, long-term monitoring of geohazards and marine ambient noise in the Mediterranean Sea and the adjacent Atlantic waters. Specific activities address the long-term monitoring of earthquakes and tsunamis and the characterisation of ambient noise, marine mammal sounds and anthropogenic sources. The objective of this demonstration mission will be achieved through the extension of the present capabilities of the observatories working at the ESONET key sites of Eastern Sicily (NEMO-SN1) and the Gulf of Cadiz (GEOSTAR configured for the NEAREST pilot experiment) by installing new sensor equipment related to bioacoustics and geohazards, as well as by implementing international standard methods in data acquisition and management.
Improved process robustness by using closed loop control in deep drawing applications
NASA Astrophysics Data System (ADS)
Barthau, M.; Liewald, M.; Held, Christian
2017-09-01
The production of irregularly shaped deep-drawing parts with high quality requirements, which are common in today's automotive production, permanently challenges production processes. Demanding European regulations on the lightweight construction of passenger car bodies through 2020 have substantially increased the use of high-strength steels for years and are leading to ever bigger challenges in sheet metal part production. The increasingly complex shapes of today's car body shells, driven by modern and future design criteria, intensify the issue further. Metal forming technology tries to meet these challenges with a highly sophisticated layout of deep-drawing dies that considers part quality requirements, process robustness and controlled material flow during the deep or stretch drawing process phase. A new method for controlling material flow using a closed-loop system was developed at the IFU Stuttgart. In contrast to previous approaches, this new method allows control intervention during the deep-drawing stroke. The blank holder force around the outline of the drawn part is used as the control variable. The closed loop is designed as a trajectory follow-up with feed-forward control. The command variable used is the part-wall stress, which is measured with a piezoelectric measuring pin. In this paper the control loop used is described in detail. The experimental tool built for testing the new control approach is explained, together with its features. A method for obtaining the follow-up trajectories from simulation is also presented. Furthermore, experimental results concerning the robustness of the deep-drawing process and the gain in process performance achieved with the developed control loop are shown. Finally, a new procedure for the industrial application of the new deep-drawing control method is presented, using a new kind of active element to influence the local blank holder pressure on the part flange.
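The trajectory follow-up with feed-forward control described above can be sketched as a simple discrete-time loop: a feed-forward blank holder force taken from a reference trajectory plus a PI correction on the measured part-wall stress error. The first-order plant model, the gains, the disturbance and the trajectories below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative reference trajectory of part-wall stress over the drawing stroke
# (e.g. obtained from a forming simulation), sampled at N points.
N = 200
stress_ref = 150.0 + 50.0 * np.sin(np.linspace(0, np.pi, N))   # MPa (toy values)
force_ff = 2.0 * stress_ref                                     # feed-forward force (toy gain)

# Toy first-order plant: part-wall stress responds to blank holder force
# with some lag and an unknown disturbance (e.g. friction variation).
def plant(stress_prev, force, disturbance, alpha=0.2, k=0.5):
    return stress_prev + alpha * (k * force + disturbance - stress_prev)

kp, ki = 0.8, 0.3          # PI gains (illustrative)
stress = 100.0             # initial part-wall stress
integral = 0.0
log = []

for i in range(N):
    error = stress_ref[i] - stress
    integral += error
    # Closed-loop command: feed-forward + PI correction during the stroke.
    force = force_ff[i] + kp * error + ki * integral
    disturbance = 20.0 if i > N // 2 else 0.0        # step disturbance mid-stroke
    stress = plant(stress, force, disturbance)
    log.append((stress_ref[i], stress))

rms_err = np.sqrt(np.mean([(r - s) ** 2 for r, s in log]))
print(f"RMS tracking error of part-wall stress: {rms_err:.2f} MPa")
```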
Study on super-long deep-hole drilling of titanium alloy.
Liu, Zhanfeng; Liu, Yanshu; Han, Xiaolan; Zheng, Wencui
2018-01-01
In this study, the super-long deep-hole drilling of a titanium alloy was investigated. According to material properties of the titanium alloy, an experimental approach was designed to study three issues discovered during the drilling process: the hole-axis deflection, chip morphology, and tool wear. Based on the results of drilling experiments, crucial parameters for the super-long deep-hole drilling of titanium alloys were obtained, and the influences of these parameters on quality of the alloy's machining were also evaluated. Our results suggest that the developed drilling process is an effective method to overcome the challenge of super-long deep-hole drilling on difficult-to-cut materials.
Deep Energy Retrofit Guidance for the Building America Solutions Center
DOE Office of Scientific and Technical Information (OSTI.GOV)
Less, Brennan; Walker, Iain
2015-01-01
The U.S. DOE Building America program has established a research agenda targeting market-relevant strategies to achieve 40% reductions in existing home energy use by 2030. Deep Energy Retrofits (DERs) are part of the strategy to meet and exceed this goal. DERs are projects that create new, valuable assets from existing residences, by bringing homes into alignment with the expectations of the 21st century. Ideally, high energy using, dated homes that are failing to provide adequate modern services to their owners and occupants (e.g., comfortable temperatures, acceptable humidity, clean, healthy), are transformed through comprehensive upgrades to the building envelope, services and miscellaneous loads into next generation high performance homes. These guidance documents provide information to aid in the broader market adoption of DERs.
NASA Astrophysics Data System (ADS)
Cheung, Derek
2015-02-01
For students to be successful in school chemistry, a strong sense of self-efficacy is essential. Chemistry self-efficacy can be defined as students' beliefs about the extent to which they are capable of performing specific chemistry tasks. According to Bandura (Psychol. Rev. 84:191-215, 1977), students acquire information about their level of self-efficacy from four sources: performance accomplishments, vicarious experiences, verbal persuasion, and physiological states. No published studies have investigated how instructional strategies in chemistry lessons can provide students with positive experiences with these four sources of self-efficacy information and how the instructional strategies promote students' chemistry self-efficacy. In this study, questionnaire items were constructed to measure student perceptions about instructional strategies, termed efficacy-enhancing teaching, which can provide positive experiences with the four sources of self-efficacy information. Structural equation modeling was then applied to test a hypothesized mediation model, positing that efficacy-enhancing teaching positively affects students' chemistry self-efficacy through their use of deep learning strategies such as metacognitive control strategies. A total of 590 chemistry students at nine secondary schools in Hong Kong participated in the survey. The mediation model provided a good fit to the student data. Efficacy-enhancing teaching had a direct effect on students' chemistry self-efficacy. Efficacy-enhancing teaching also directly affected students' use of deep learning strategies, which in turn affected students' chemistry self-efficacy. The implications of these findings for developing secondary school students' chemistry self-efficacy are discussed.
A system of automated processing of deep water hydrological information
NASA Technical Reports Server (NTRS)
Romantsov, V. A.; Dyubkin, I. A.; Klyukbin, L. N.
1974-01-01
An automated system for primary and scientific analysis of deep water hydrological information is presented. Primary processing of the data in this system is carried out on a drifting station, which also calculates the parameters of vertical stability of the sea layers, as well as their depths and altitudes. Methods of processing the raw data are described.
Design and performance test of a MEMS vibratory gyroscope with a novel AGC force rebalance control
NASA Astrophysics Data System (ADS)
Sung, Woon-Tahk; Sung, Sangkyung; Lee, Jang Gyu; Kang, Taesam
2007-10-01
In this paper, the development and performance test results of a laterally oscillating MEMS gyroscope using a novel force rebalance control strategy are presented. The micromachined structure and electrodes are fabricated using the deep reactive ion etching (DRIE) and anodic wafer bonding processes. The high quality factor required for the resonance-based sensor is achieved using a vacuum-sealed device package. A systematic design approach for the force rebalance control is applied via a modified automatic gain control (AGC) method. The rebalance control design takes advantage of a novel AGC loop modification, which allows the system's dynamics to be approximated in a simple linear form. Using the proposed AGC modification and the rebalance strategy that maintains a biased oscillation, a number of performance improvements, including bandwidth extension and a widened operating range, were achieved. Finally, the experimental results of the gyroscope's practical application verify the feasibility and performance of the developed sensor.
Self-initiated object-location memory in young and older adults.
Berger-Mandelbaum, Anat; Magen, Hagit
2017-11-20
The present study explored self-initiated object-location memory in ecological contexts, an aspect of memory that is largely absent from the research literature. Young and older adults memorized object-location associations they selected themselves or object-location associations provided to them, and elaborated on the strategy they used when selecting the locations themselves. Retrieval took place 30 min and 1 month after encoding. The results showed an age-related decline in self-initiated and provided object-location memory. Older adults benefited from self-initiation more than young adults when tested after 30 min, while the benefit was equal when tested after 1 month. Furthermore, elaboration enhanced memory only in older adults, and only after 30 min. Both age groups used deep encoding strategies on the majority of the trials, but the percentage was lower in older adults. Overall, the study demonstrated the processes involved in self-initiated object-location memory, which is an essential part of everyday functioning.
NASA Technical Reports Server (NTRS)
Wilson, K.; Parvin, B.; Fugate, R.; Kervin, P.; Zingales, S.
2003-01-01
Future NASA deep space missions will fly advanced high-resolution imaging instruments that will require high-bandwidth links to return the huge data volumes they generate. Optical communications is a key technology for returning these large data volumes from deep space probes. Yet cost-effectively realizing the high-bandwidth potential of the optical link will require deploying ground receivers in diverse locations to provide high link availability. A recent analysis of GOES weather satellite data showed that a network of ground stations located in Hawaii and the Southwest continental US can provide an average of 90% availability for the deep space optical link. JPL and AFRL are exploring the use of large telescopes in Hawaii, California, and Albuquerque to support the Mars Telesat laser communications demonstration. Designed to demonstrate multi-Mbps communications from Mars, the mission will investigate key operational strategies of a future deep space optical communications network.
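As a back-of-the-envelope illustration of why geographically diverse ground stations raise availability, the snippet below combines per-site cloud-free probabilities under the simplifying assumption that site weather is statistically independent (real cloud cover is spatially correlated, which is why the GOES analysis mentioned above is needed; the per-site numbers are made up).

```python
# Hypothetical probabilities that each ground site is cloud-free at a given time.
site_availability = {
    "Hawaii": 0.65,
    "California": 0.70,
    "Albuquerque": 0.60,
}

# Assuming independent weather, the link is down only if every site is clouded out.
p_all_down = 1.0
for p in site_availability.values():
    p_all_down *= (1.0 - p)

network_availability = 1.0 - p_all_down
print(f"single best site: {max(site_availability.values()):.0%}")
print(f"network of {len(site_availability)} sites: {network_availability:.1%}")
```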
Cobain, S L; Hodgson, D M; Peakall, J; Wignall, P B; Cobain, M R D
2018-01-10
Macrofauna is known to inhabit the top few tens of centimetres of marine sediments, with rare burrows up to two metres below the seabed. Here, we provide evidence from deep-water Permian strata for a previously unrecognised habitat at least 8 metres below the sediment-water interface. Infaunal organisms exploited networks of forcibly injected sand below the seabed, forming living traces and reworking sediment. This is the first record showing that sediment injections are responsible for hosting macrofaunal life metres below the contemporaneous seabed. In addition, given the widespread occurrence of thick sandy successions that accumulate in deep-water settings, macrofauna living in the deep biosphere are likely much more prevalent than previously considered. These findings should influence future sampling strategies to better constrain the depth range of infaunal animals living in modern deep-sea sands. One Sentence Summary: The living depth of infaunal macrofauna is shown to reach at least 8 metres in new habitats associated with sand injections.
Atmospheric Science Data Center
2015-03-16
Deep Convective Clouds and Chemistry (DC3) Data and Information: The Deep Convective Clouds and Chemistry (DC3) field campaign is investigating the impact of deep, ... processes, on upper tropospheric (UT) composition and chemistry. The primary science objectives are: To quantify and ...
NASA Astrophysics Data System (ADS)
Hancher, M.
2017-12-01
Recent years have seen promising results from many research teams applying deep learning techniques to geospatial data processing. In that same timeframe, TensorFlow has emerged as the most popular framework for deep learning in general, and Google has assembled petabytes of Earth observation data from a wide variety of sources and made them available in analysis-ready form in the cloud through Google Earth Engine. Nevertheless, developing and applying deep learning to geospatial data at scale has been somewhat cumbersome to date. We present a new set of tools and techniques that simplify this process. Our approach combines the strengths of several underlying tools: TensorFlow for its expressive deep learning framework; Earth Engine for data management, preprocessing, postprocessing, and visualization; and other tools in Google Cloud Platform to train TensorFlow models at scale, perform additional custom parallel data processing, and drive the entire process from a single familiar Python development environment. These tools can be used to easily apply standard deep neural networks, convolutional neural networks, and other custom model architectures to a variety of geospatial data structures. We discuss our experiences applying these and related tools to a range of machine learning problems, including classic problems like cloud detection, building detection, land cover classification, as well as more novel problems like illegal fishing detection. Our improved tools will make it easier for geospatial data scientists to apply modern deep learning techniques to their own problems, and will also make it easier for machine learning researchers to advance the state of the art of those techniques.
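To give a flavour of the kind of model such a pipeline trains, here is a minimal TensorFlow sketch of a small convolutional classifier over multispectral image patches. Everything is synthetic and illustrative: the patch size, band count and labels are placeholders, and the Earth Engine export and Cloud training steps described in the abstract are deliberately omitted rather than guessed at.

```python
import numpy as np
import tensorflow as tf

# Placeholder for patches exported from an Earth observation catalogue:
# 500 patches, 16x16 pixels, 6 spectral bands, with a per-patch land-cover label.
rng = np.random.default_rng(0)
patches = rng.random((500, 16, 16, 6)).astype("float32")
labels = rng.integers(0, 4, size=500)          # 4 toy land-cover classes

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(16, 16, 6)),
    tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
    tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(patches, labels, epochs=3, batch_size=64, validation_split=0.2, verbose=2)
```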
Measure Guideline: Deep Energy Enclosure Retrofit for Zero Energy Ready House Flat Roofs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loomis, H.; Pettit, B.
2015-05-01
This Measure Guideline provides design and construction information for a deep energy enclosure retrofit (DEER) solution of a flat roof assembly. It describes the strategies and procedures for an exterior retrofit of a flat, wood-framed roof with brick masonry exterior walls, using exterior and interior (framing cavity) insulation. The approach supported in this guide could also be adapted for use with flat, wood-framed roofs with wood-framed exterior walls.
Student Engagement: A Principle-Based Concept Analysis.
Bernard, Jean S
2015-08-04
A principle-based concept analysis of student engagement was used to examine the state of the science across disciplines. Four major perspectives of philosophy of science guided analysis and provided a framework for study of interrelationships and integration of conceptual components which then resulted in formulation of a theoretical definition. Findings revealed student engagement as a dynamic reiterative process marked by positive behavioral, cognitive, and affective elements exhibited in pursuit of deep learning. This process is influenced by a broader sociocultural environment bound by contextual preconditions of self-investment, motivation, and a valuing of learning. Outcomes of student engagement include satisfaction, sense of well-being, and personal development. Findings of this analysis prove relevant to nursing education as faculty transition from traditional teaching paradigms, incorporate learner-centered strategies, and adopt innovative pedagogical methodologies. It lends support for curricula reform, development of more accurate evaluative measures, and creation of meaningful teaching-learning environments within the discipline.
A tool for simulating parallel branch-and-bound methods
NASA Astrophysics Data System (ADS)
Golubeva, Yana; Orlov, Yury; Posypkin, Mikhail
2016-01-01
The Branch-and-Bound method is known as one of the most powerful but very resource-consuming global optimization methods. Parallel and distributed computing can efficiently cope with this issue. The major difficulty in the parallel B&B method is the need for dynamic load redistribution. Therefore, the design and study of load balancing algorithms is a separate and very important research topic. This paper presents a tool for simulating the parallel Branch-and-Bound method. The simulator allows one to run load balancing algorithms with various numbers of processors, sizes of the search tree, and characteristics of the supercomputer's interconnect, thereby fostering deep study of load distribution strategies. The process of resolving the optimization problem by the B&B method is replaced by a stochastic branching process. Data exchanges are modeled using the concept of logical time. The user-friendly graphical interface to the simulator provides efficient visualization and convenient performance analysis.
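The core idea of replacing real B&B subproblem evaluation with a stochastic branching process and counting data exchanges in logical time can be sketched in a few lines of Python. The branching distribution, the number of workers and the work-stealing rule below are illustrative assumptions, not the simulator's actual algorithms.

```python
import random
from collections import deque

random.seed(0)
N_WORKERS, MAX_STEPS = 4, 10_000

# Each worker holds a local deque of simulated subproblems (search-tree nodes).
queues = [deque() for _ in range(N_WORKERS)]
queues[0].append("root")            # the whole problem starts on one worker
logical_time, exchanges, expanded = 0, 0, 0

while any(queues) and logical_time < MAX_STEPS:
    logical_time += 1               # one synchronous "tick" of logical time
    for w, q in enumerate(queues):
        if q:
            q.pop()                 # expand one node ...
            expanded += 1
            # ... stochastic branching: 0-2 children, with mean < 1 so the tree dies out.
            for _ in range(random.choices([0, 1, 2], weights=[45, 35, 20])[0]):
                q.append("node")
        else:
            # Simple load balancing: an idle worker steals from the longest queue.
            donor = max(range(N_WORKERS), key=lambda i: len(queues[i]))
            if queues[donor]:
                q.append(queues[donor].popleft())
                exchanges += 1      # one modeled data exchange

print(f"expanded {expanded} nodes in {logical_time} logical steps, "
      f"{exchanges} work transfers")
```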
Zhong, Mei; Niu, Wei; Lu, Zhi John; Sarov, Mihail; Murray, John I.; Janette, Judith; Raha, Debasish; Sheaffer, Karyn L.; Lam, Hugo Y. K.; Preston, Elicia; Slightham, Cindie; Hillier, LaDeana W.; Brock, Trisha; Agarwal, Ashish; Auerbach, Raymond; Hyman, Anthony A.; Gerstein, Mark; Mango, Susan E.; Kim, Stuart K.; Waterston, Robert H.; Reinke, Valerie; Snyder, Michael
2010-01-01
Transcription factors are key components of regulatory networks that control development, as well as the response to environmental stimuli. We have established an experimental pipeline in Caenorhabditis elegans that permits global identification of the binding sites for transcription factors using chromatin immunoprecipitation and deep sequencing. We describe and validate this strategy, and apply it to the transcription factor PHA-4, which plays critical roles in organ development and other cellular processes. We identified thousands of binding sites for PHA-4 during formation of the embryonic pharynx, and also found a role for this factor during the starvation response. Many binding sites were found to shift dramatically between embryos and starved larvae, from developmentally regulated genes to genes involved in metabolism. These results indicate distinct roles for this regulator in two different biological processes and demonstrate the versatility of transcription factors in mediating diverse biological roles. PMID:20174564
Advances in nanosized zeolites
NASA Astrophysics Data System (ADS)
Mintova, Svetlana; Gilson, Jean-Pierre; Valtchev, Valentin
2013-07-01
This review highlights recent developments in the synthesis of nanosized zeolites. The strategies available for their preparation (organic-template-assisted, organic-template-free, and alternative procedures) are discussed. Major breakthroughs achieved by so-called zeolite crystal engineering are addressed, encompassing the mastering and use of the physicochemical properties of the precursor synthesis gel/suspension, optimization of the silicon and aluminium precursor sources, the rational use of organic templates and structure-directing inorganic cations, and careful adjustment of synthesis conditions (temperature, pressure, time, and heating processes from conventional to microwave and sonication). An ongoing broad and deep fundamental understanding of the crystallization process, explaining the influence of all variables of this complex set of reactions, underpins an even more rational design of nanosized zeolites with exceptional properties. Finally, the advantages and limitations of these methods are addressed, with particular attention to their industrial prospects and utilization in existing and advanced applications.
Law, evolution and the brain: applications and open questions.
Jones, Owen D
2004-01-01
This paper discusses several issues at the intersection of law and brain science. It focuses principally on ways in which an improved understanding of how evolutionary processes affect brain function and human behaviour may improve law's ability to regulate behaviour. It explores sample uses of such 'evolutionary analysis in law' and also raises questions about how that analysis might be improved in the future. Among the discussed uses are: (i) clarifying cost-benefit analyses; (ii) providing theoretical foundation and potential predictive power; (iii) assessing comparative effectiveness of legal strategies; and (iv) revealing deep patterns in legal architecture. Throughout, the paper emphasizes the extent to which effective law requires: (i) building effective behavioural models; (ii) integrating life-science perspectives with social-science perspectives; (iii) considering the effects of brain biology on behaviours that law seeks to regulate; and (iv) examining the effects of evolutionary processes on brain design. PMID:15590611
The biogeochemistry of anchialine caves: Progress and possibilities
Pohlman, John W.
2011-01-01
Recent investigations of anchialine caves and sinkholes have identified complex food webs dependent on detrital and, in some cases, chemosynthetically produced organic matter. Chemosynthetic microbes in anchialine systems obtain energy from reduced compounds produced during organic matter degradation (e.g., sulfide, ammonium, and methane), similar to what occurs in deep ocean cold seeps and mud volcanoes, but distinct from dominant processes operating at hydrothermal vents and sulfurous mineral caves where the primary energy source is mantle derived. This review includes case studies from both anchialine and non-anchialine habitats, where evidence for in situ chemosynthetic production of organic matter and its subsequent transfer to higher trophic level metazoans is documented. The energy sources and pathways identified are synthesized to develop conceptual models for elemental cycles and energy cascades that occur within oligotrophic and eutrophic anchialine caves. Strategies and techniques for testing the hypothesis of chemosynthesis as an active process in anchialine caves are also suggested.
Deep processing activates the medial temporal lobe in young but not in old adults.
Daselaar, Sander M; Veltman, Dick J; Rombouts, Serge A R B; Raaijmakers, Jeroen G W; Jonker, Cees
2003-11-01
Age-related impairments in episodic memory have been related to a deficiency in semantic processing, based on the finding that elderly adults typically benefit less than young adults from deep, semantic as opposed to shallow, nonsemantic processing of study items. In the present study, we tested the hypothesis that elderly adults are not able to perform certain cognitive operations under deep processing conditions. We further hypothesised that this inability does not involve regions commonly associated with lexical/semantic retrieval processes, but rather involves a dysfunction of the medial temporal lobe (MTL) memory system. To this end, we used functional MRI on rather extensive groups of young and elderly adults to compare brain activity patterns obtained during a deep (living/nonliving) and a shallow (uppercase/lowercase) classification task. Common activity in relation to semantic classification was observed in regions that have previously been related to semantic retrieval, including mainly left-lateralised activity in the inferior prefrontal, middle temporal, and middle frontal/anterior cingulate gyrus. Although the young adults showed more activity in some of these areas, the finding of mainly overlapping activation patterns during semantic classification supports the idea that lexical/semantic retrieval processes are still intact in elderly adults. This was further supported by the finding that both groups performed similarly on both the deep and shallow classification tasks. Importantly, though, the young adults showed significantly more activity than the elderly adults in the left anterior hippocampus during deep relative to shallow classification. This finding is in line with the idea that age-related impairments in episodic encoding are, at least partly, due to an under-recruitment of the medial temporal lobe memory system.
NASA Astrophysics Data System (ADS)
Bozau, Elke; Hemme, Christina; Sattler, Carl-Diedrich; van Berk, Wolfgang
2015-04-01
Deep formation water can be classified according to depth, temperature, and salinity (e.g., Graf et al. 1966, Kharaka & Hanor 2007). Most of the deep formation waters contain dissolved solids in excess of sea water. The hydrogeochemical development of formation water has been discussed for a long time. It is widely accepted that deep aquifers are influenced by the meteoric cycle and geochemical processes within the crust (e.g., Hebig et al. 2012). Similar hydrogeochemical signatures are found in deep formation waters of all continents and can be explained by general geochemical processes within the deep reservoirs (e.g., Land 1995). Therefore, data of deep formation waters from Western Europe, Russia, and North America are collected and classified by the major water components. The data are used to identify important hydrogeochemical processes (e.g., halite dissolution and albitisation) leading to different compositions of formation water. Two significant water types are identified: Na-Cl water and Na-Ca-Cl water. Based on the collected hydrogeochemical data, development trends are stated for the formation waters, and albitisation is favoured as the main process for calcium enrichment. Furthermore, differences of formation water according to stratigraphical units are shown for deep reservoirs of the North German Basin and the North Sea. References: Graf, D.L., 1982. Chemical osmosis, reverse chemical osmosis, and the origin of subsurface brines. Geochimica Cosmochimica Acta 46, 1431-1448. Hebig, K.H., Ito, N., Scheytt, T., Marui, A., 2012. Review: Deep groundwater research with focus on Germany. Hydrogeology Journal 20, 227-243. Kharaka, Y.K., Hanor, J.S., 2007. Deep fluids in continents: I. Sedimentary Basins. Treatise on Geochemistry 5, 1-48. Land, L.S., 1995. The role of saline formation water in the crustal cycling. Aquatic Geochemistry 1, 137-145. Acknowledgements: The presented data are results of the collaborative research program "gebo" (Geothermal energy and high performance drilling), financed by the Ministry of Science and Culture of the Federal State of Lower Saxony and industry partner Baker Hughes Celle.
Anomalies of rupture velocity in deep earthquakes
NASA Astrophysics Data System (ADS)
Suzuki, M.; Yagi, Y.
2010-12-01
Explaining deep seismicity is a long-standing challenge in earth science. Below 300 km, the occurrence rate of earthquakes remains low down to ~530 km depth, then rises until ~600 km, and finally terminates near 700 km. Given the difficulty of estimating fracture properties and observing the stress field in the mantle transition zone (410-660 km), the seismic source processes of deep earthquakes are the most important information for understanding the distribution of deep seismicity. However, in compilations of seismic source models of deep earthquakes, the source parameters for individual deep earthquakes are quite varied [Frohlich, 2006]. Rupture velocities for deep earthquakes estimated using seismic waveforms range from 0.3 to 0.9Vs, where Vs is the shear wave velocity, a considerably wider range than the velocities for shallow earthquakes. The uncertainty of seismic source models prevents us from determining the main characteristics of the rupture process and understanding the physical mechanisms of deep earthquakes. Recently, the back projection method has been used to derive detailed and stable seismic source images from dense seismic network observations [e.g., Ishii et al., 2005; Walker et al., 2005]. Using this method, we can obtain an image of the seismic source process from the observed data without a priori constraints or discarding parameters. We applied the back projection method to teleseismic P-waveforms of 24 large, deep earthquakes (moment magnitude Mw ≥ 7.0, depth ≥ 300 km) recorded since 1994 by the Data Management Center of the Incorporated Research Institutions for Seismology (IRIS-DMC) and reported in the U.S. Geological Survey (USGS) catalog, and constructed seismic source models of deep earthquakes. By imaging the seismic rupture process for this set of recent deep earthquakes, we found that rupture velocities are less than about 0.6Vs except in the depth range of 530 to 600 km. This is consistent with the depth variation of deep seismicity: it peaks between about 530 and 600 km, where the fast-rupture earthquakes (greater than 0.7Vs) are observed. Similarly, aftershock productivity is particularly low from 300 to 550 km depth and increases markedly at depths greater than 550 km [e.g., Persh and Houston, 2004]. We propose that large fracture surface energy (Gc) values for deep earthquakes generally prevent the acceleration of dynamic rupture propagation and the generation of earthquakes between 300 and 700 km depth, whereas small Gc values in the exceptional depth range promote dynamic rupture propagation and explain the seismicity peak near 600 km.
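For illustration only, here is a minimal sketch of the shift-and-stack idea behind back projection (not the authors' implementation); the grid, travel-time table, and waveforms are synthetic placeholders.

```python
# Hypothetical sketch of shift-and-stack back projection (not the authors' code):
# station traces are aligned on predicted travel times to each candidate grid
# point and summed; grid points that stack coherently image the rupture.
import numpy as np

def back_project(waveforms, dt, travel_times):
    """waveforms: (n_stations, n_samples); dt: sample interval in s;
    travel_times: (n_stations, n_grid) predicted travel times in s
    (relative to the trace start, e.g. from a 1-D velocity model)."""
    n_sta, n_samp = waveforms.shape
    n_grid = travel_times.shape[1]
    stack = np.zeros((n_grid, n_samp))
    for g in range(n_grid):
        for s in range(n_sta):
            shift = int(round(travel_times[s, g] / dt))
            # align the trace on the predicted arrival and accumulate
            stack[g, : n_samp - shift] += waveforms[s, shift:]
    return stack ** 2  # stacked energy as a function of grid point and time

rng = np.random.default_rng(0)
wf = rng.standard_normal((10, 2000))          # 10 stations, 100 s of data at 20 Hz
tt = rng.uniform(0.0, 40.0, size=(10, 25))    # toy relative travel-time table
energy = back_project(wf, dt=0.05, travel_times=tt)
print(energy.shape)                           # (25, 2000)
```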
Bell, Andrew H; Munoz, Douglas P
2008-10-01
Performance in a behavioural task can be influenced by both bottom-up and top-down processes such as stimulus modality and prior probability. Here, we exploited differences in behavioural strategy to explore the role of the intermediate and deep layers of the superior colliculus (dSC) in covert orienting. Two monkeys were trained on a predictive cued-saccade task in which the cue predicted the target's upcoming location with 80% validity. When the delay between cue and target onset was 250 ms, both monkeys showed faster responses to the uncued (Invalid) location. This was associated with a reduced target-aligned response in the dSC on Valid trials for both monkeys and is consistent with a bottom-up (i.e. involuntary) bias. When the delay was increased to 650 ms, one monkey continued to show faster responses to the Invalid location whereas the other monkey showed faster responses to the Valid location, consistent with a top-down (i.e. voluntary) bias. This latter behaviour was correlated with an increase in activity in dSC neurons preceding target onset that was absent in the other monkey. Thus, using the information provided by the cue shifted the emphasis towards top-down processing, while ignoring this information allowed bottom-up processing to continue to dominate. Regardless of the selected strategy, however, neurons in the dSC consistently reflected the current bias between the two processes, emphasizing its role in both the bottom-up and top-down control of orienting behaviour.
ERIC Educational Resources Information Center
Ross, Margaret E.; Green, Samuel B.; Salisbury-Glennon, Jill D.; Tollefson, Nona
2006-01-01
We conducted the present study to investigate whether college students adjust their study strategies to meet the cognitive demands of testing, a metacognitive self-regulatory skill. Participants were randomly assigned to one of the two testing conditions. In one condition we told participants to study for a test that required deep-level cognitive…
The writing approaches of secondary students.
Lavelle, Ellen; Smith, Jennifer; O'Ryan, Leslie
2002-09-01
Research with college students has supported a model of writing approaches that defines the relationship between a writer and writing task along a deep and surface process continuum (Biggs, 1988). Based on that model, Lavelle (1993) developed the Inventory of Processes in College Composition which reflects students' motives and strategies as related to writing outcomes. It is also important to define the approaches of secondary students to better understand writing processes at that level, and development in written composition. This study was designed to define the writing approaches of secondary students by factor analysing students' responses to items regarding writing beliefs and writing strategies, and to compare the secondary approaches to those of college students. A related goal was to explore the relationships of the secondary writing approaches to perceived self-regulatory efficacy for writing (Zimmerman & Bandura, 1994), writing preferences, and writing outcomes. The initial, factor analytic phase involved 398 junior level high school students (11th grade) enrolled in a mandatory language arts class at each of three large Midwestern high schools (USA). Then, 49 junior level students enrolled in two language arts classes participated as subjects in the second phase. Classroom teachers administered the Inventory of Processes in College Composition (Lavelle, 1993), which contained 72 true-or-false items regarding writing beliefs and strategies, during regular class periods. Data were factor analysed and the structure compared to that of college students. In the second phase, the new inventory, Inventory of Processes in Secondary Composition, was administered in conjunction with the Perceived Self-Regulatory Efficacy for Writing Inventory (Zimmerman & Bandura, 1994), and a writing preferences survey. A writing sample and grade in Language Arts classes were obtained and served as outcome variables. The factor structure of secondary writing reflected three process dimensions. The first factor, Elaborative-Expressive, describes a writing strategy based on personal investment and audience concern. The second factor, Planful-Procedural, denotes sticking to a plan, following the rules, and 'preparing' for writing. Achieving-Competitive, the third factor, reflects a 'teacher pleasing' strategy or doing only what needs to be done to get a good grade. Two factors from the college model, Elaborative and Procedural, were replicated, and two were not, Reflective-Revision and Low Self-Efficacy. Regression analyses supported that the processes in writing under a timed condition are different from those used when writing over time, and that students' perceptions of writing self-regulatory efficacy were predictive of writing success under both conditions.
NASA Astrophysics Data System (ADS)
Danovaro, Roberto; Carugati, Laura; Corinaldesi, Cinzia; Gambi, Cristina; Guilini, Katja; Pusceddu, Antonio; Vanreusel, Ann
2013-08-01
The deep sea is the largest biome of the biosphere. Knowledge of the spatial variability of deep-sea biodiversity is one of the main challenges of marine ecology and evolutionary biology. The choice of the observational spatial scale is assumed to play a key role in understanding the processes structuring deep-sea benthic communities, and one of the most typical features of marine biodiversity distribution is the existence of bathymetric gradients. However, analyses of bathymetric biodiversity gradients and of the associated changes in species composition (beta diversity) have typically compared large depth ranges (with intervals of 500 to 1000 or even 2000 m depth between sites). To test whether significant changes in alpha and beta diversity also occur along fine-scale bathymetric gradients (i.e., within depth intervals of a few hundred meters), the variability of deep-sea nematode biodiversity and assemblage composition was investigated along a bathymetric transect (200-1200 m depth) sampled at 200 m intervals. A hierarchical sampling strategy was used for the analysis of nematode species richness, beta diversity, functional (trophic) diversity, and related environmental variables. The results indicate the lack of significant differences in taxonomic and functional diversity across sampling depths, but the presence of high beta diversity at all spatial scales investigated: between cores collected from the same box corer (on average 56%), among deployments at the same depth (58%), and between all sampling depths (62%). Such high beta diversity is influenced by the presence of small-scale patchiness in the deep sea and is also related to the large number of rare or very rare species (typically accounting for >80% of total species richness). Moreover, the number of ubiquitous nematode species across all sampling depths is quite low (ca. 15%). Multiple regression analyses provide evidence that such patterns could be related to the different availability, composition and size spectra of food particles in the sediments. Additionally, our results indicate that, though to a lesser extent, selective predation can influence nematode trophic composition. These findings suggest that a multiple-scale analysis based on a nested sampling design could significantly improve our knowledge of bathymetric patterns of deep-sea biodiversity and its drivers.
How We Get Pictures from Space. NASA Facts (Revised Edition).
ERIC Educational Resources Information Center
Haynes, Robert
This booklet discusses image processing from spacecraft in deep space. The camera system on board the spacecraft, the Deep Space Network (DSN), and the image processing system are described. A table listing photographs taken by unmanned spacecraft from 1959-1977 is provided. (YP)
NASA Astrophysics Data System (ADS)
Hernández-Molina, Francisco Javier; Stow, Dorrik A. V.; Llave, Estefanía; Rebesco, Michele; Ercilla, Gemma; van Rooij, David; Mena, Anxo; Vázquez, Juan-Tomás; Voelker, Antje H. L.
2011-12-01
Deep-water circulation is a critical part of the global conveyor belt that regulates Earth's climate. The bottom (contour)-current component of this circulation is of key significance in shaping the deep seafloor through erosion, transport, and deposition. As a result, there exists a high variety of large-scale erosional and depositional features (drifts) that together form more complex contourite depositional systems on continental slopes and rises as well as in ocean basins, generated by different water masses flowing at different depths and at different speeds either in the same or in opposite directions. Yet, the nature of these deep-water processes and the deposited contourites is still poorly understood in detail. Their ultimate decoding will undoubtedly yield information of fundamental importance to the earth and ocean sciences. The international congress Deep-water Circulation: Processes & Products was held from 16-18 June 2010 in Baiona, Spain, hosted by the University of Vigo. Volume 31(5/6) of Geo-Marine Letters is a special double issue containing 17 selected contributions from the congress, guest edited by F.J. Hernández-Molina, D.A.V. Stow, E. Llave, M. Rebesco, G. Ercilla, D. Van Rooij, A. Mena, J.-T. Vázquez and A.H.L. Voelker. The papers and discussions at the congress and the articles in this special issue provide a truly multidisciplinary perspective of interest to both academic and industrial participants, contributing to the advancement of knowledge on deep-water bottom circulation and related processes, as well as contourite sedimentation. The multidisciplinary contributions (including geomorphology, tectonics, stratigraphy, sedimentology, paleoceanography, physical oceanography, and deep-water ecology) have demonstrated that advances in paleoceanographic reconstructions and our understanding of the ocean's role in the global climate system depend largely on the feedbacks among disciplines. New insights into the link between the biota of deep-water ecosystems and bottom currents confirm the need for this field to be investigated and mapped in detail. Likewise, it is confirmed that deep-water contourites are not only of academic interest but also potential resources of economic value. Cumulatively, both the congress and the present volume serve to demonstrate that the role of bottom currents in shaping the seafloor has to date been generally underestimated, and that our understanding of such systems is still in its infancy. Future research on contourites, using new and more advanced techniques, should focus on a more detailed visualization of water-mass circulation and its variability, in order to decipher the physical processes involved and the associations between drifts and other common bedforms. Moreover, contourite facies models should be better established, including their associations with other deep-water sedimentary environments both in modern and ancient submarine domains. The rapid increase in deep-water exploration and the new deep-water technologies available to the oil industry and academic institutions will undoubtedly lead to spectacular advances in contourite research in terms of processes, morphology, sediment stacking patterns, facies, and their relationships with other deep-marine depositional systems.
How Can We Identify and Communicate the Ecological Value of Deep-Sea Ecosystem Services?
Jobstvogt, Niels; Townsend, Michael; Witte, Ursula; Hanley, Nick
2014-01-01
Submarine canyons are considered biodiversity hotspots which have been identified for their important roles in connecting the deep sea with shallower waters. To date, a huge gap exists between the high importance that scientists associate with deep-sea ecosystem services and the communication of this knowledge to decision makers and to the wider public, who remain largely ignorant of the importance of these services. The connectivity and complexity of marine ecosystems makes knowledge transfer very challenging, and new communication tools are necessary to increase understanding of ecological values beyond the science community. We show how the Ecosystem Principles Approach, a method that explains the importance of ocean processes via easily understandable ecological principles, might overcome this challenge for deep-sea ecosystem services. Scientists were asked to help develop a list of clear and concise ecosystem principles for the functioning of submarine canyons through a Delphi process to facilitate future transfers of ecological knowledge. These ecosystem principles describe ecosystem processes, link such processes to ecosystem services, and provide spatial and temporal information on the connectivity between deep and shallow waters. They also elucidate unique characteristics of submarine canyons. Our Ecosystem Principles Approach was successful in integrating ecological information into the ecosystem services assessment process. It therefore has a high potential to be the next step towards a wider implementation of ecological values in marine planning. We believe that successful communication of ecological knowledge is the key to a wider public support for ocean conservation, and that this endeavour has to be driven by scientists in their own interest as major deep-sea stakeholders. PMID:25055119
deepTools: a flexible platform for exploring deep-sequencing data.
Ramírez, Fidel; Dündar, Friederike; Diehl, Sarah; Grüning, Björn A; Manke, Thomas
2014-07-01
We present a Galaxy-based web server for processing and visualizing deeply sequenced data. The web server's core functionality consists of a suite of newly developed tools, called deepTools, that enable users with little bioinformatic background to explore the results of their sequencing experiments in a standardized setting. Users can upload pre-processed files with continuous data in standard formats and generate heatmaps and summary plots in a straightforward, yet highly customizable manner. In addition, we offer several tools for the analysis of files containing aligned reads and enable efficient and reproducible generation of normalized coverage files. As a modular and open-source platform, deepTools can easily be expanded and customized to future demands and developments. The deepTools web server is freely available at http://deeptools.ie-freiburg.mpg.de and is accompanied by extensive documentation and tutorials aimed at conveying the principles of deep-sequencing data analysis. The web server can be used without registration. deepTools can be installed locally either stand-alone or as part of Galaxy. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
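As a conceptual illustration of what a normalized coverage track is (this is not deepTools code; the genome length, bin size, read length, and reads are synthetic assumptions), the sketch below bins per-base read coverage and rescales it to a 1x genome-wide mean, similar in spirit to the normalization options deepTools provides for coverage files.

```python
# Conceptual sketch only, not deepTools code: build a binned coverage track from
# read start positions and scale it so the genome-wide mean coverage is 1x.
import numpy as np

def normalized_binned_coverage(read_starts, read_len, genome_len, bin_size):
    """Count read bases per fixed-size bin, then rescale to 1x mean coverage."""
    per_base = np.zeros(genome_len)
    for start in read_starts:
        per_base[start:start + read_len] += 1           # pile up each read
    n_bins = genome_len // bin_size
    binned = per_base[: n_bins * bin_size].reshape(n_bins, bin_size).mean(axis=1)
    mean_cov = per_base.mean()
    return binned / mean_cov if mean_cov > 0 else binned

rng = np.random.default_rng(1)
starts = rng.integers(0, 9_900, size=5_000)             # synthetic read start sites
track = normalized_binned_coverage(starts, read_len=100, genome_len=10_000, bin_size=50)
print(len(track), round(track.mean(), 2))               # 200 bins, mean ~1.0
```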
Processing-Induced Electrically Active Defects in Black Silicon Nanowire Devices.
Carapezzi, Stefania; Castaldini, Antonio; Mancarella, Fulvio; Poggi, Antonella; Cavallini, Anna
2016-04-27
Silicon nanowires (Si NWs) are now widely investigated for implementation in advanced energy conversion and storage devices, as well as many other possible applications. Black silicon (BSi) NWs are dry-etched NWs that merge the advantages of low dimensionality with the industrial appeal of deep reactive ion etching (RIE), a well-established technique in microelectronics manufacturing. However, RIE processing can affect the electrical properties of BSi-NWs by introducing deep states into their forbidden gap. This work applies deep level transient spectroscopy (DLTS) to identify electrically active deep levels and the associated defects in dry-etched Si NW arrays. In addition, the successful fitting of DLTS spectra of BSi-NW-based Schottky barrier diodes provides experimental confirmation that the same theoretical framework for the dynamic electronic behavior of deep levels applies in bulk as well as in low-dimensional structures such as NWs, when quantum confinement conditions do not occur. This has been validated for deep levels associated with simple point-like defects as well as for deep levels associated with defects with richer structures, whose dynamic electronic behavior implies a more complex picture.
NASA Astrophysics Data System (ADS)
Crespi, Mattia; Fratarcangeli, Francesca; Mazzoni, Augusto; Nascetti, Andrea; Monsorno, Roberto; Schloegel, Romy; Corsini, Alessandro; Mulas, Marco; Mair, Volkmar
2017-04-01
The Corvara landslide is an active, large-scale, deep-seated and slow-moving earthslide of about 30 Mm3 located in the Dolomites (Italy). It frequently damages a national road and, occasionally, isolated buildings and recreational ski facilities. In this work we present the analysis of data acquired by three dual-frequency GPS receivers in permanent acquisition, installed in the accumulation, track, and source zones of the active portion of the landslide. In particular, two years (2014 and 2015) of data were processed with several approaches and goals: daily time series were produced through Precise Point Positioning and Differential Positioning, using both scientific packages and an automatic online tool based on open-source libraries, specifically developed in order to provide a prototype service. The results achievable with single-frequency (L1) data processing were also investigated, in order to pave the way for the deployment of low-cost GPS receivers for this kind of application. Moreover, daily and sub-daily phenomena were analyzed. Different strategies were investigated in order to describe the kinematics on the basis of the 0.2 Hz data collected by the three permanent receivers. For particular events, the variometric approach, exploiting recent advances of VADASE, was also applied to detect significant movements. Finally, tropospheric parameters were estimated over the whole period in order to contribute to SAR interferometry techniques. For this specific purpose and application, the possibilities of single-frequency use were also assessed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NorthernSTAR
Building science research supports installing exterior (soil side) foundation insulation as the optimal method to enhance the hygrothermal performance of new homes. With exterior foundation insulation, water management strategies are maximized while insulating the basement space and ensuring a more even temperature at the foundation wall. However, such an approach can be very costly and disruptive when applied to an existing home, requiring deep excavation around the entire house. The NorthernSTAR Building America Partnership team implemented an innovative, minimally invasive foundation insulation upgrade technique on an existing home. The approach consisted of using hydrovac excavation technology combined with a liquid insulating foam. The team was able to excavate a continuous 4" wide by 4' to 5' deep trench around the entire house, 128 linear feet, except for one small part under the stoop that was obstructed with concrete debris. The combination pressure washer and vacuum extraction technology also enabled the elimination of the large trenches and soil stockpiles normally produced by backhoe excavation. The resulting trench was filled with liquid insulating foam, which also served as a water-control layer of the assembly. The insulation was brought above grade using a liquid foam/rigid foam hybrid system and terminated at the top of the rim joist. Cost savings over the traditional excavation process ranged from 23% to 50%. The excavationless process could result in even greater savings, since replacement of building structures, exterior features, utility meters, and landscaping would be minimal or non-existent.
Stacked Sparse Autoencoder (SSAE) for Nuclei Detection on Breast Cancer Histopathology Images.
Xu, Jun; Xiang, Lei; Liu, Qingshan; Gilmore, Hannah; Wu, Jianzhong; Tang, Jinghai; Madabhushi, Anant
2016-01-01
Automated nuclear detection is a critical step for a number of computer-assisted pathology image analysis algorithms, such as automated grading of breast cancer tissue specimens. The Nottingham Histologic Score system is highly correlated with the shape and appearance of breast cancer nuclei in histopathological images. However, automated nucleus detection is complicated by 1) the large number of nuclei and the size of high-resolution digitized pathology images, and 2) the variability in size, shape, appearance, and texture of the individual nuclei. Recently there has been interest in the application of "Deep Learning" strategies for classification and analysis of big image data. Histopathology, given its size and complexity, represents an excellent use case for application of deep learning strategies. In this paper, a Stacked Sparse Autoencoder (SSAE), an instance of a deep learning strategy, is presented for efficient nuclei detection on high-resolution histopathological images of breast cancer. The SSAE learns high-level features from pixel intensities alone in order to identify distinguishing features of nuclei. A sliding window operation is applied to each image in order to represent image patches via high-level features obtained via the autoencoder, which are then fed to a classifier that categorizes each image patch as nuclear or non-nuclear. Across a cohort of 500 histopathological images (2200 × 2200) and approximately 3500 manually segmented individual nuclei serving as the ground truth, SSAE was shown to have an improved F-measure of 84.49% and an average area under the precision-recall curve (AveP) of 78.83%. The SSAE approach also outperformed nine other state-of-the-art nuclear detection strategies.
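For orientation, here is a minimal sketch of the SSAE idea under stated assumptions (patch size, layer widths, and the L1 sparsity weight are illustrative, and greedy layer-wise pre-training is collapsed into joint training for brevity): a sparse two-layer encoder is trained to reconstruct raw pixel patches, and a softmax head then labels each sliding-window patch as nuclear or non-nuclear.

```python
# Hedged sketch, not the authors' code: stacked sparse autoencoder features
# from pixel patches, followed by a patch-level nuclear/non-nuclear classifier.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers, regularizers

patch_dim = 34 * 34 * 3                       # flattened RGB patch (assumed size)
sparse = regularizers.l1(1e-4)                # activity penalty encourages sparsity

encoder = keras.Sequential([
    keras.Input(shape=(patch_dim,)),
    layers.Dense(400, activation="sigmoid", activity_regularizer=sparse),
    layers.Dense(225, activation="sigmoid", activity_regularizer=sparse),
])
decoder = keras.Sequential([
    layers.Dense(400, activation="sigmoid"),
    layers.Dense(patch_dim, activation="sigmoid"),
])
autoencoder = keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")

# unsupervised pre-training on (synthetic) unlabeled patches
x_unlabeled = np.random.rand(256, patch_dim).astype("float32")
autoencoder.fit(x_unlabeled, x_unlabeled, epochs=2, batch_size=32, verbose=0)

# supervised head: nuclear vs. non-nuclear patch classification
classifier = keras.Sequential([encoder, layers.Dense(2, activation="softmax")])
classifier.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
x_labeled = np.random.rand(128, patch_dim).astype("float32")
y_labeled = np.random.randint(0, 2, size=128)
classifier.fit(x_labeled, y_labeled, epochs=2, batch_size=32, verbose=0)
```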
Impact of a surgical site infection reduction strategy after colorectal resection.
Connolly, T M; Foppa, C; Kazi, E; Denoya, P I; Bergamaschi, R
2016-09-01
This study was performed to determine the impact of a surgical site infection (SSI) reduction strategy on SSI rates following colorectal resection. American College of Surgeons National Surgical Quality Improvement Program (NSQIP) data from 2006-14 were utilized and supplemented by institutional review board-approved chart review. The primary end-point was superficial and deep incisional SSI. The inclusion criterion was colorectal resection. The SSI reduction strategy consisted of preoperative (blood glucose, bowel preparation, shower, hair removal), intra-operative (prophylactic antibiotics, antimicrobial incisional drape, wound protector, wound closure technique) and postoperative (wound dressing technique) components. The SSI reduction strategy was prospectively implemented and compared with historical controls (pre-SSI strategy arm). Statistical analysis included Pearson's chi-square test and Student's t-test, performed with SPSS software. Of 1018 patients, 379 were in the pre-SSI strategy arm, 311 in the SSI strategy arm and 328 were included to test durability. The study arms were comparable for all measured parameters. Preoperative wound class, operation time, resection type and stoma creation did not differ significantly. The SSI strategy arm demonstrated a significant decrease in overall SSI rates (32.19% vs 18.97%) and superficial SSI rates (23.48% vs 8.04%). Deep SSI and organ space rates did not differ. A review of the patients included to test durability demonstrated continued improvement in overall SSI rates (8.23%). The implementation of an SSI reduction strategy resulted in a 41% decrease in SSI rates following colorectal resection over its initial 3 years; its durability, demonstrated by continuing improvement, was seen over an additional 2 years. Colorectal Disease © 2015 The Association of Coloproctology of Great Britain and Ireland.
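As a rough worked illustration of the kind of between-arm comparison reported above (counts reconstructed from the stated arm sizes and overall SSI rates, roughly 122/379 vs 59/311; this is not the authors' analysis file), a chi-squared test could be run as follows.

```python
# Illustrative only: chi-squared test on approximate overall SSI counts
# reconstructed from the reported percentages (32.19% of 379 ~ 122; 18.97% of 311 ~ 59).
from scipy.stats import chi2_contingency

table = [[122, 379 - 122],   # pre-strategy arm: SSI, no SSI
         [59, 311 - 59]]     # SSI-strategy arm: SSI, no SSI
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```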
NASA Technical Reports Server (NTRS)
1974-01-01
The progress is reported of Deep Space Network (DSN) research in the following areas: (1) flight project support, (2) spacecraft/ground communications, (3) station control and operations technology, (4) network control and processing, and (5) deep space stations. A description of the DSN functions and facilities is included.
NASA Astrophysics Data System (ADS)
Lecun, Yann; Bengio, Yoshua; Hinton, Geoffrey
2015-05-01
Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
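As a toy illustration of the parameter-update idea described here (the data, layer sizes, and learning rate are arbitrary, and this is not any system the abstract refers to), the sketch below trains a two-layer network with the backpropagation of gradients written out by hand.

```python
# Toy example: two layers of representation whose parameters are adjusted by
# backpropagating the loss gradient layer by layer, implemented with NumPy.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))                     # toy inputs
y = (X[:, 0] * X[:, 1] > 0).astype(float)[:, None]    # toy binary target

W1, b1 = rng.standard_normal((3, 16)) * 0.5, np.zeros(16)
W2, b2 = rng.standard_normal((16, 1)) * 0.5, np.zeros(1)
lr = 0.5

for step in range(2000):
    # forward pass through both layers
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    # backward pass: propagate the cross-entropy gradient layer by layer
    grad_out = (p - y) / len(X)                       # d(loss)/d(logit)
    grad_W2, grad_b2 = h.T @ grad_out, grad_out.sum(0)
    grad_h = grad_out @ W2.T * (1.0 - h ** 2)         # through the tanh layer
    grad_W1, grad_b1 = X.T @ grad_h, grad_h.sum(0)
    # gradient-descent update of every internal parameter
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

print("final training accuracy:", ((p > 0.5) == y).mean())
```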
Deep Space Network equipment performance, reliability, and operations management information system
NASA Technical Reports Server (NTRS)
Cooper, T.; Lin, J.; Chatillon, M.
2002-01-01
The Deep Space Mission System (DSMS) Operations Program Office and the Deep Space Network (DSN) facilities utilize the Discrepancy Reporting Management System (DRMS) to collect, process, communicate and manage data discrepancies, equipment resets, and physical equipment status, and to maintain an internal Station Log. A collaborative development effort between JPL and the Canberra Deep Space Communication Complex delivered a system to support DSN Operations.
The Deep Space Network. An instrument for radio navigation of deep space probes
NASA Technical Reports Server (NTRS)
Renzetti, N. A.; Jordan, J. F.; Berman, A. L.; Wackley, J. A.; Yunck, T. P.
1982-01-01
The Deep Space Network (DSN) network configurations used to generate the navigation observables and the basic process of deep space spacecraft navigation, from data generation through flight path determination and correction are described. Special emphasis is placed on the DSN Systems which generate the navigation data: the DSN Tracking and VLBI Systems. In addition, auxiliary navigational support functions are described.
The deep space network, volume 13
NASA Technical Reports Server (NTRS)
1973-01-01
The objectives, functions, and organization of the Deep Space Network are summarized. The deep space instrumentation facility, the ground communications facility, and the network control system are described. Other areas reported include: Helios Mission support, DSN support of the Mariner Mars 1971 extended mission, Mariner Venus/Mercury 1973 mission support, Viking mission support, radio science, tracking and ground-based navigation, network control and data processing, and deep space stations.
Spin, twist and hadron structure in deep inelastic processes
NASA Astrophysics Data System (ADS)
Jaffe, R. L.; Meyer, H.; Piller, G.
These notes provide an introduction to polarization effects in deep inelastic processes in QCD. We emphasize recent work on transverse asymmetries, subdominant effects, and the role of polarization in fragmentation and in purely hadronic processes. After a review of kinematics and some basic tools of short distance analysis, we study the twist, helicity, chirality and transversity dependence of a variety of high energy processes sensitive to the quark and gluon substructure of hadrons.
Attenuation of deep semantic processing during mind wandering: an event-related potential study.
Xu, Judy; Friedman, David; Metcalfe, Janet
2018-03-21
Although much research shows that early sensory and attentional processing is affected by mind wandering, the effect of mind wandering on deep (i.e. semantic) processing is relatively unexplored. To investigate this relation, we recorded event-related potentials as participants studied English-Spanish word pairs, one at a time, while being intermittently probed for whether they were 'on task' or 'mind wandering'. Both perceptual processing, indexed by the P2 component, and deep processing, indexed by a late, sustained slow wave maximal at parietal electrodes, were attenuated during periods preceding participants' mind wandering reports. The pattern when participants were on task, rather than mind wandering, is similar to the subsequent memory or difference in memory effect. These results support previous findings of sensory attenuation during mind wandering, and extend them to a long-duration slow wave by suggesting that the deeper and more sustained levels of processing are also disrupted.
Gait Recognition Based on Convolutional Neural Networks
NASA Astrophysics Data System (ADS)
Sokolova, A.; Konushin, A.
2017-05-01
In this work we investigate the problem of recognizing people by their gait. For this task, we implement a deep learning approach using optical flow as the main source of motion information and combine neural feature extraction with an additional embedding of descriptors for representation improvement. In order to find the best heuristics, we compare several deep neural network architectures and learning and classification strategies. The experiments were conducted on two popular gait recognition datasets, and we investigate their advantages and disadvantages as well as the transferability of the considered methods.
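As a hedged sketch of the kind of architecture such a comparison might include (the layer sizes, frame count, and subject count below are illustrative, not the paper's), a small CNN over stacked optical-flow fields could look like this.

```python
# Illustrative architecture only: a CNN that maps a stack of optical-flow
# fields (2 channels per frame, T frames) to an identity embedding and logits.
import torch
import torch.nn as nn

class GaitFlowNet(nn.Module):
    def __init__(self, n_frames: int = 10, n_subjects: int = 50):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2 * n_frames, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.embed = nn.Linear(64 * 4 * 4, 128)    # gait descriptor / embedding
        self.classify = nn.Linear(128, n_subjects)

    def forward(self, flow_stack):                 # (batch, 2*T, H, W)
        x = self.features(flow_stack)
        x = torch.flatten(x, 1)
        descriptor = self.embed(x)                 # can be compared across videos
        return self.classify(descriptor), descriptor

model = GaitFlowNet()
dummy = torch.randn(4, 20, 64, 64)                # 4 clips, 10 flow frames each
logits, desc = model(dummy)
print(logits.shape, desc.shape)                    # (4, 50) and (4, 128)
```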
Walla, Peter; Greiner, Katharina; Duregger, Cornelia; Deecke, Lüder; Thurner, Stefan
2007-03-02
The effect of personal pronouns such as "ein" (German for "a"), "mein" (German for "my") and "sein" (German for "his") on the processing of associated nouns was investigated using MEG. Three different encoding strategies were provided in order to vary the level of consciousness involved in verbal information processing. A shallow (alphabetic), a deep (semantic) and a very deep (contextual) encoding instruction related to visual word presentation were given to all study participants. After the encoding of pronoun-noun pairs, recognition performance for the nouns only was tested. The number of correctly recognized nouns previously associated with "sein" was significantly lower than the number of correctly recognized nouns previously associated with "ein" in the shallow encoding condition. The same trend was found for "mein"-associated nouns, which were also less accurately recognized compared to "ein"-associated nouns. Magnetic field distributions recorded during the encoding phases revealed two significant effects, one between about 200 and 400 ms after stimulus onset and the other between about 500 and 800 ms. The earlier effect was found over occipito-parietal sensors, whereas the later effect occurred over left frontal sensors. Within both time ranges, brain activation varied significantly as a function of the associated pronoun, independent of depth of word processing. In the respective areas of both time ranges, conditions including personal pronouns ("mein" and "sein") showed higher magnetic field components compared to the control condition without personal pronouns ("ein"). Evidence is shown that early-stage processing is able to distinguish between non-personal and personal information, whereas later-stage processing is able to distinguish between information related to oneself and to another person (self and non-self). Along with other previous reports, our MEG findings support the notion that particular human brain functions involved in processing neurophysiological correlates of self and non-self can be identified.
Bardinet, Eric; Bhattacharjee, Manik; Dormont, Didier; Pidoux, Bernard; Malandain, Grégoire; Schüpbach, Michael; Ayache, Nicholas; Cornu, Philippe; Agid, Yves; Yelnik, Jérôme
2009-02-01
The localization of any given target in the brain has become a challenging issue because of the increased use of deep brain stimulation to treat Parkinson disease, dystonia, and nonmotor diseases (for example, Tourette syndrome, obsessive compulsive disorders, and depression). The aim of this study was to develop an automated method of adapting an atlas of the human basal ganglia to the brains of individual patients. Magnetic resonance images of the brain specimen were obtained before extraction from the skull and histological processing. Adaptation of the atlas to individual patient anatomy was performed by reshaping the atlas MR images to the images obtained in the individual patient using a hierarchical registration applied to a region of interest centered on the basal ganglia, and then applying the reshaping matrix to the atlas surfaces. Results were evaluated by direct visual inspection of the structures visible on MR images and atlas anatomy, by comparison with electrophysiological intraoperative data, and with previous atlas studies in patients with Parkinson disease. The method was both robust and accurate, never failing to provide an anatomically reliable atlas to patient registration. The registration obtained did not exceed a 1-mm mismatch with the electrophysiological signatures in the region of the subthalamic nucleus. This registration method applied to the basal ganglia atlas forms a powerful and reliable method for determining deep brain stimulation targets within the basal ganglia of individual patients.
Tseng, Min-Chen; Chen, Chia-Cheng
2017-06-01
This study investigated the self-regulatory behaviors of arts students, namely memory strategy, goal-setting, self-evaluation, seeking assistance, environmental structuring, learning responsibility, and planning and organizing. We also explored approaches to learning, including the deep approach (DA) and the surface approach (SA), in a comparison between students' professional training and English learning. The participants consisted of 344 arts majors. The Academic Self-Regulation Questionnaire and the Revised Learning Process Questionnaire were adopted to examine students' self-regulatory behaviors and their approaches to learning. The results show a positive and significant correlation in students' self-regulatory behaviors between professional training and English learning: increases in the use of self-regulatory behaviors in professional training were associated with increases in applying self-regulatory behaviors to learning English. Seeking assistance, self-evaluation, and planning and organizing were significant predictors for learning English. In addition, arts students used the deep approach more often than the surface approach in both their professional training and English learning. Between students' self-regulatory behaviors and their approaches to learning, a positive correlation was found for DA, whereas a negative correlation was found for SA. Students with high self-regulation adopted a deep approach, and they applied the surface approach less in professional training and English learning. In addition, an SEM model confirmed that DA had a positive influence, whereas SA had a negative influence, on self-regulatory behaviors.
Optimization of remediation strategies using vadose zone monitoring systems
NASA Astrophysics Data System (ADS)
Dahan, Ofer
2016-04-01
In-situ bioremediation of the vadose zone depends mainly on the ability to change the subsurface hydrological, physical, and chemical conditions in order to enable the development of specific, indigenous, pollutant-degrading bacteria. As such, remediation efficiency depends largely on the ability to implement optimal hydraulic and chemical conditions in deep sections of the vadose zone. These conditions are usually determined in laboratory experiments where parameters such as the chemical composition of the soil water solution, redox potential and water content of the sediment are fully controlled. Usually, implementation of the desired optimal degradation conditions in the deep vadose zone in full-scale field setups is achieved through infiltration of water enriched with chemical additives at the land surface. It is assumed that deep percolation into the vadose zone would create chemical conditions that promote biodegradation of specific compounds. However, application of water with specific chemical conditions near the land surface does not necessarily promote the desired chemical and hydraulic conditions in deep sections of the vadose zone. A vadose-zone monitoring system (VMS) that was recently developed allows continuous monitoring of the hydrological and chemical properties of deep sections of the unsaturated zone. The VMS includes flexible time-domain reflectometry (FTDR) probes, which allow continuous monitoring of the temporal variation of the vadose zone water content, and vadose-zone sampling ports (VSPs), which are designed to allow frequent sampling of the sediment pore water and gas at multiple depths. Implementation of the vadose zone monitoring system at sites that undergo active remediation provides real-time information on the actual chemical and hydrological conditions in the vadose zone as the remediation process progresses. To date, the system has been successfully implemented in several studies on water flow and contaminant transport in the unsaturated zone, including enhanced bioremediation of a contaminated deep vadose zone (40 m depth). Manipulating subsurface conditions for enhanced bioremediation was demonstrated through two remediation projects. One site is characterized by a 20 m deep vadose zone contaminated with gasoline products, and the other by a 40 m deep vadose zone contaminated with perchlorate. In both cases, the temporal variation of the sediment water content, as well as the variations in the vadose zone chemical and isotopic composition, allowed real-time detection of water flow velocities, contaminant transport rates, and the degree of biodegradation. Results and conclusions from each wetting cycle were used to improve the following wetting cycles in order to optimize contaminant degradation conditions while minimizing the leaching of contaminants to the groundwater.
A major QTL controlling deep rooting on rice chromosome 4
Uga, Yusaku; Yamamoto, Eiji; Kanno, Noriko; Kawai, Sawako; Mizubayashi, Tatsumi; Fukuoka, Shuichi
2013-01-01
Drought is the most serious abiotic stress that hinders rice production under rainfed conditions. Breeding for deep rooting is a promising strategy to improve the root system architecture in shallow-rooting rice cultivars to avoid drought stress. We analysed the quantitative trait loci (QTLs) for the ratio of deep rooting (RDR) in three F2 mapping populations derived from crosses between each of three shallow-rooting varieties (‘ARC5955', ‘Pinulupot1', and ‘Tupa729') and a deep-rooting variety, ‘Kinandang Patong'. In total, we detected five RDR QTLs on chromosomes 2, 4, and 6. In all three populations, QTLs on chromosome 4 were found to be located at similar positions; they explained from 32.0% to 56.6% of the total RDR phenotypic variance. This suggests that one or more key genetic factors controlling the root growth angle in rice is located in this region of chromosome 4. PMID:24154623
A shot in the dark: same-sex sexual behaviour in a deep-sea squid.
Hoving, Hendrik J T; Bush, Stephanie L; Robison, Bruce H
2012-04-23
Little is known about the reproductive habits of deep-living squids. Using remotely operated vehicles in the deep waters of the Monterey Submarine Canyon, we have found evidence of mating, i.e. implanted sperm packages, on similar body locations in males and females of the rarely seen mesopelagic squid Octopoteuthis deletron. Equivalent numbers of both sexes were found to have mated, indicating that male squid routinely and indiscriminately mate with both males and females. Most squid species are short-lived, semelparous (i.e. with a single, brief reproductive period) and promiscuous. In the deep, dark habitat where O. deletron lives, potential mates are few and far between. We suggest that same-sex mating behaviour by O. deletron is part of a reproductive strategy that maximizes success by inducing males to indiscriminately and swiftly inseminate every conspecific that they encounter.
DeepStack: Expert-level artificial intelligence in heads-up no-limit poker.
Moravčík, Matej; Schmid, Martin; Burch, Neil; Lisý, Viliam; Morrill, Dustin; Bard, Nolan; Davis, Trevor; Waugh, Kevin; Johanson, Michael; Bowling, Michael
2017-05-05
Artificial intelligence has seen several breakthroughs in recent years, with games often serving as milestones. A common feature of these games is that players have perfect information. Poker, the quintessential game of imperfect information, is a long-standing challenge problem in artificial intelligence. We introduce DeepStack, an algorithm for imperfect-information settings. It combines recursive reasoning to handle information asymmetry, decomposition to focus computation on the relevant decision, and a form of intuition that is automatically learned from self-play using deep learning. In a study involving 44,000 hands of poker, DeepStack defeated, with statistical significance, professional poker players in heads-up no-limit Texas hold'em. The approach is theoretically sound and is shown to produce strategies that are more difficult to exploit than prior approaches. Copyright © 2017, American Association for the Advancement of Science.
Deep venous thrombosis and postthrombotic syndrome: invasive management.
Comerota, A J
2015-03-01
Invasive management of postthrombotic syndrome encompasses the two ends of the deep vein thrombosis spectrum, patients with acute iliofemoral deep vein thrombosis and those with chronic postthrombotic iliofemoral venous obstruction. Of all patients with acute deep vein thrombosis, those with involvement of the iliofemoral segments have the most severe chronic postthrombotic morbidity. Catheter-based techniques now permit percutaneous treatment to eliminate thrombus, restore patency, potentially maintain valvular function, and improve quality of life. Randomized trial data support an initial treatment strategy of thrombus removal. Failure to eliminate acute thrombus from the iliofemoral system leads to chronic postthrombotic obstruction of venous outflow. Debilitating chronic postthrombotic symptoms of the long-standing obstruction of venous outflow can be reduced by restoring unobstructed venous drainage from the profunda femoris vein to the vena cava. © The Author(s) 2015 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
Habitat associations of juvenile Burbot in a tributary of the Kootenai River
Beard, Zachary S.; Quist, Michael C.; Hardy, Ryan S.; Ross, Tyler J.
2017-01-01
Burbot Lota lota in the lower Kootenai River, Idaho, have been the focus of extensive conservation efforts, particularly conservation aquaculture. One of the primary management strategies has been the release of Burbot into small tributaries in the Kootenai River basin, such as Deep Creek. Since 2012, approximately 12,000 juvenile Burbot have been stocked into Deep Creek; however, little is known about the habitat use of stocked Burbot. The objective of this study was to evaluate habitat associations of juvenile Burbot in Deep Creek. Fish and habitat were sampled from 58 reaches of the creek. Regression models suggested that Burbot moved little after stocking and were associated with areas of high mean depth and coarse substrate. This study provides additional knowledge on habitat associations of juvenile Burbot and suggests that managers should consider selecting deep habitats with coarse substrate for stocking locations.
Metsemakers, W-J; Handojo, K; Reynders, P; Sermon, A; Vanderschot, P; Nijs, S
2015-04-01
Despite modern advances in the treatment of tibial shaft fractures, complications including nonunion, malunion, and infection remain relatively frequent. A better understanding of these injuries and their complications could lead to prevention rather than treatment strategies. A retrospective study was performed to identify risk factors for deep infection and compromised fracture healing after intramedullary nailing (IMN) of tibial shaft fractures. Between January 2000 and January 2012, 480 consecutive patients with 486 tibial shaft fractures were enrolled in the study. Statistical analysis was performed to determine predictors of deep infection and compromised fracture healing. Compromised fracture healing was subdivided into delayed union and nonunion. The following independent variables were selected for analysis: age, sex, smoking, obesity, diabetes, American Society of Anaesthesiologists (ASA) classification, polytrauma, fracture type, open fractures, Gustilo type, primary external fixation (EF), time to nailing (TTN) and reaming. As the primary statistical evaluation we performed a univariate analysis, followed by a multiple logistic regression model. Univariate regression analysis revealed similar risk factors for delayed union and nonunion, including fracture type, open fractures and Gustilo type. Factors affecting the occurrence of deep infection in this model were primary EF, a prolonged TTN, open fractures and Gustilo type. Multiple logistic regression analysis revealed polytrauma as the single risk factor for nonunion. With respect to delayed union, no risk factors could be identified. In the same statistical model, deep infection was correlated with primary EF. The purpose of this study was to evaluate risk factors for poor outcome after IMN of tibial shaft fractures. The univariate regression analysis showed that the nature of complications after tibial shaft nailing could be multifactorial. This was not confirmed in the multiple logistic regression model, which only revealed polytrauma and primary EF as risk factors for nonunion and deep infection, respectively. Future strategies should focus on prevention in high-risk populations such as polytrauma patients treated with EF. Copyright © 2014 Elsevier Ltd. All rights reserved.
Péron, J; Dondaine, T
2012-01-01
The subthalamic nucleus deep-brain stimulation Parkinson's disease patient model seems to represent a unique opportunity for studying the functional role of the basal ganglia, and notably the subthalamic nucleus, in human emotional processing. Indeed, in addition to constituting a therapeutic advance for severely disabled Parkinson's disease patients, deep brain stimulation is a technique which selectively modulates the activity of the focal structures targeted by surgery. There is growing evidence of a link between emotional impairments and deep-brain stimulation of the subthalamic nucleus. In this context, according to the definition of emotional processing given in the companion paper available in this issue, the aim of the present review is to provide a synopsis of the studies that investigated the emotional disturbances observed in subthalamic nucleus deep brain stimulation Parkinson's disease patients. This review leads to the conclusion that several emotional components appear to be disrupted after subthalamic nucleus deep brain stimulation in Parkinson's disease: subjective feeling, neurophysiological activation, and motor expression. Finally, after a description of the limitations of this study model, we discuss the functional role of the subthalamic nucleus (and the striato-thalamo-cortical circuits in which it is involved) in emotional processing. It seems reasonable to conclude that the striato-thalamo-cortical circuits are indeed involved in emotional processing and that the subthalamic nucleus plays a central role in the human emotional architecture. Copyright © 2012 Elsevier Masson SAS. All rights reserved.
Xiao, Cao; Choi, Edward; Sun, Jimeng
2018-06-08
To conduct a systematic review of deep learning models for electronic health record (EHR) data, and illustrate various deep learning architectures for analyzing different data sources and their target applications. We also highlight ongoing research and identify open challenges in building deep learning models of EHRs. We searched PubMed and Google Scholar for papers on deep learning studies using EHR data published between January 1, 2010, and January 31, 2018. We summarize them according to these axes: types of analytics tasks, types of deep learning model architectures, special challenges arising from health data and tasks and their potential solutions, as well as evaluation strategies. We surveyed and analyzed multiple aspects of the 98 articles we found and identified the following analytics tasks: disease detection/classification, sequential prediction of clinical events, concept embedding, data augmentation, and EHR data privacy. We then studied how deep architectures were applied to these tasks. We also discussed some special challenges arising from modeling EHR data and reviewed a few popular approaches. Finally, we summarized how performance evaluations were conducted for each task. Despite the early success in using deep learning for health analytics applications, there still exist a number of issues to be addressed. We discuss them in detail including data and label availability, the interpretability and transparency of the model, and ease of deployment.
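As a hedged sketch of one of the analytics tasks listed above, sequential prediction of clinical events (the code vocabulary, sequence length, and data are synthetic placeholders, not any surveyed model), an embedding-plus-LSTM classifier over visit code sequences could be set up as follows.

```python
# Illustrative only: predict a binary future clinical event from a patient's
# padded sequence of integer-coded visits using an embedding + LSTM.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_codes, max_visits = 500, 30            # assumed vocabulary and history length
model = keras.Sequential([
    keras.Input(shape=(max_visits,)),
    layers.Embedding(input_dim=n_codes, output_dim=32, mask_zero=True),
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),    # probability of the target event
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[keras.metrics.AUC()])

x = np.random.randint(1, n_codes, size=(256, max_visits))   # synthetic code sequences
y = np.random.randint(0, 2, size=(256, 1))                   # synthetic labels
model.fit(x, y, epochs=2, batch_size=32, verbose=0)
```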
NASA Astrophysics Data System (ADS)
Singh, Swadesh Kumar; Kumar, D. Ravi
2005-08-01
Hydro-mechanical deep drawing is a process for producing cup-shaped parts with the assistance of a pressurized fluid. In the present work, numerical simulation of the conventional and counter-pressure deep drawing processes was carried out using finite element method-based software. Simulation results were analyzed to study the improvement in drawability obtained with the hydro-mechanical processes. The thickness variations in the drawn cups were analyzed, and the effect of counter pressure and oil gap on the thickness distribution was also studied. Numerical simulations were also used for the die design, which combines both drawing and ironing processes in a single operation. This modification of the die provides high drawability, facilitates smooth material flow, gives a more uniform thickness distribution and corrects the shape distortion.
Deep Extragalactic VIsible Legacy Survey (DEVILS): Motivation, Design and Target Catalogue
NASA Astrophysics Data System (ADS)
Davies, L. J. M.; Robotham, A. S. G.; Driver, S. P.; Lagos, C. P.; Cortese, L.; Mannering, E.; Foster, C.; Lidman, C.; Hashemizadeh, A.; Koushan, S.; O'Toole, S.; Baldry, I. K.; Bilicki, M.; Bland-Hawthorn, J.; Bremer, M. N.; Brown, M. J. I.; Bryant, J. J.; Catinella, B.; Croom, S. M.; Grootes, M. W.; Holwerda, B. W.; Jarvis, M. J.; Maddox, N.; Meyer, M.; Moffett, A. J.; Phillipps, S.; Taylor, E. N.; Windhorst, R. A.; Wolf, C.
2018-06-01
The Deep Extragalactic VIsible Legacy Survey (DEVILS) is a large spectroscopic campaign at the Anglo-Australian Telescope (AAT) aimed at bridging the near and distant Universe by producing the highest completeness survey of galaxies and groups at intermediate redshifts (0.3 < z < 1.0). Our sample consists of ˜60,000 galaxies to Y<21.2 mag, over ˜6 deg2 in three well-studied deep extragalactic fields (Cosmic Origins Survey field, COSMOS, Extended Chandra Deep Field South, ECDFS and the X-ray Multi-Mirror Mission Large-Scale Structure region, XMM-LSS - all Large Synoptic Survey Telescope deep-drill fields). This paper presents the broad experimental design of DEVILS. Our target sample has been selected from deep Visible and Infrared Survey Telescope for Astronomy (VISTA) Y-band imaging (VISTA Deep Extragalactic Observations, VIDEO and UltraVISTA), with photometry measured by PROFOUND. Photometric star/galaxy separation is done on the basis of NIR colours, and has been validated by visual inspection. To maximise our observing efficiency for faint targets we employ a redshift feedback strategy, which continually updates our target lists, feeding back the results from the previous night's observations. We also present an overview of the initial spectroscopic observations undertaken in late 2017 and early 2018.
Depth of Information Processing and Memory for Medical Facts.
ERIC Educational Resources Information Center
Slade, Peter D.; Onion, Carl W. R.
1995-01-01
The current emphasis in medical education is on engaging learners in deep processing of information to achieve better understanding of the subject matter. Traditional approaches aimed for memorization of medical facts; however, a good memory for medical facts is still essential in clinical practice. This study demonstrates that deep information…
Sadeghi, Zahra
2016-09-01
In this paper, I investigate conceptual categories derived from developmental processing in a deep neural network. The similarity matrices of the deep representations at each layer of the neural network are computed and compared with those of the raw representation. While the clusters generated by the raw representation stand at the basic level of abstraction, the conceptual categories obtained from the deep representations show a bottom-up transition procedure. Results demonstrate a developmental course of learning from the specific to the general level of abstraction through the learned layers of representation in a deep belief network. © The Author(s) 2016.
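The layer-wise comparison described above can be sketched concretely: compute a pairwise similarity matrix from the raw inputs, then from each layer's activations, and compare the two structures. This is an illustrative sketch only; the random data, layer sizes, and sigmoid activations are assumptions, not the paper's trained deep belief network.

```python
# Illustrative sketch: similarity matrices at each layer vs. the raw representation.
# Data, layer widths and activations are stand-ins, not the published model.
import numpy as np

def cosine_similarity_matrix(X):
    """Pairwise cosine similarities between the rows of X."""
    norms = np.linalg.norm(X, axis=1, keepdims=True) + 1e-12
    Xn = X / norms
    return Xn @ Xn.T

rng = np.random.default_rng(0)
X = rng.random((100, 784))                  # stand-in for raw input vectors
weights = [rng.standard_normal((784, 256)),
           rng.standard_normal((256, 64))]  # stand-in for learned layer weights

raw_sim = cosine_similarity_matrix(X)
layer_sims, H = [], X
for W in weights:
    H = 1.0 / (1.0 + np.exp(-H @ W))        # sigmoid activations, layer by layer
    layer_sims.append(cosine_similarity_matrix(H))

# How closely does each layer's similarity structure track the raw one?
for i, S in enumerate(layer_sims, start=1):
    corr = np.corrcoef(raw_sim.ravel(), S.ravel())[0, 1]
    print(f"layer {i}: correlation with raw-representation similarities = {corr:.3f}")
```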
The DEEP-South: Scheduling and Data Reduction Software System
NASA Astrophysics Data System (ADS)
Yim, Hong-Suh; Kim, Myung-Jin; Bae, Youngho; Moon, Hong-Kyu; Choi, Young-Jun; Roh, Dong-Goo; the DEEP-South Team
2015-08-01
The DEep Ecliptic Patrol of the Southern sky (DEEP-South), started in October 2012, is currently in test runs with the first Korea Microlensing Telescope Network (KMTNet) 1.6 m wide-field telescope located at CTIO in Chile. While the primary objective of the DEEP-South is physical characterization of small bodies in the Solar System, it is expected to discover a large number of such bodies, many of them previously unknown. An automatic observation planning and data reduction software subsystem called "The DEEP-South Scheduling and Data reduction System" (the DEEP-South SDS) is currently being designed and implemented for observation planning, data reduction and analysis of huge amounts of data with minimum human interaction. The DEEP-South SDS consists of three software subsystems: the DEEP-South Scheduling System (DSS), the Local Data Reduction System (LDR), and the Main Data Reduction System (MDR). The DSS manages observation targets, makes decisions on target priority and observation methods, schedules nightly observations, and archives data using a Database Management System (DBMS). The LDR is designed to detect moving objects from CCD images, while the MDR conducts photometry and reconstructs lightcurves. Based on the analysis made at the LDR and the MDR, the DSS schedules follow-up observations to be conducted at other KMTNet stations. By the end of 2015, we expect the DEEP-South SDS to achieve stable operation. We also plan to improve the SDS to accomplish a finely tuned observation strategy and more efficient data reduction in 2016.
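To make the DSS's target-prioritization step concrete, here is a minimal, purely illustrative sketch of ranking nightly targets by a weighted score. The field names, weights, and thresholds are our assumptions and do not reflect the actual DEEP-South SDS logic.

```python
# Minimal illustrative sketch (not the DEEP-South SDS code): ranking nightly
# targets by a weighted priority score. Fields and weights are assumptions.
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    altitude_deg: float      # altitude at the planned observation time
    vmag: float              # apparent brightness
    nights_since_last: int   # nights since the target was last observed

def priority(t: Target) -> float:
    # Favour well-placed, bright targets that have not been visited recently.
    visibility = max(t.altitude_deg - 30.0, 0.0) / 60.0
    brightness = max(21.0 - t.vmag, 0.0) / 5.0
    staleness = min(t.nights_since_last / 10.0, 1.0)
    return 0.4 * visibility + 0.3 * brightness + 0.3 * staleness

targets = [Target("2015 AB", 55.0, 18.2, 7), Target("2012 XY", 35.0, 20.5, 2)]
for t in sorted(targets, key=priority, reverse=True):
    print(t.name, round(priority(t), 3))
```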
Deep Space Wide Area Search Strategies
NASA Astrophysics Data System (ADS)
Capps, M.; McCafferty, J.
There is an urgent need to expand the space situational awareness (SSA) mission beyond catalog maintenance to providing near real-time indications and warnings of emerging events. While building and maintaining a catalog of space objects is essential to SSA, this does not address the threat of uncatalogued and uncorrelated deep space objects. The Air Force therefore has an interest in transformative technologies to scan the geostationary (GEO) belt for uncorrelated space objects. Traditional ground-based electro-optical sensors are challenged in simultaneously detecting dim objects while covering large areas of the sky using current CCD technology. Time delayed integration (TDI) scanning has the potential to enable significantly larger coverage rates while maintaining sensitivity for detecting near-GEO objects. This paper investigates strategies for employing TDI sensing technology from a ground-based electro-optical telescope to provide tactical indications and warnings of deep space threats. We present results for a notional wide area search TDI sensor that scans the GEO belt from three locations: Maui, New Mexico, and Diego Garcia. Deep space objects in the NASA 2030 debris catalog are propagated over multiple nights as an indicative data set to emulate notional uncatalogued near-GEO orbits which may be encountered by the TDI sensor. Multiple scan patterns are designed and simulated to compare and contrast performance based on 1) efficiency of coverage, 2) number of objects detected, and 3) rate at which detections occur, to enable follow-up observations by other space surveillance network (SSN) sensors. A step-stare approach is also modeled using a dedicated, co-located sensor notionally similar to the Ground-Based Electro-Optical Deep Space Surveillance (GEODSS) tower. Equivalent sensitivities are assumed. This analysis quantifies the relative benefit of TDI scanning for the wide area search mission.
Bors, Eleanor K.; Rowden, Ashley A.; Maas, Elizabeth W.; Clark, Malcolm R.; Shank, Timothy M.
2012-01-01
Patterns of genetic connectivity are increasingly considered in the design of marine protected areas (MPAs) in both shallow and deep water. In the New Zealand Exclusive Economic Zone (EEZ), deep-sea communities at upper bathyal depths (<2000 m) are vulnerable to anthropogenic disturbance from fishing and potential mining operations. Currently, patterns of genetic connectivity among deep-sea populations throughout New Zealand’s EEZ are not well understood. Using the mitochondrial Cytochrome Oxidase I and 16S rRNA genes as genetic markers, this study aimed to elucidate patterns of genetic connectivity among populations of two common benthic invertebrates with contrasting life history strategies. Populations of the squat lobster Munida gracilis and the polychaete Hyalinoecia longibranchiata were sampled from continental slope, seamount, and offshore rise habitats on the Chatham Rise, Hikurangi Margin, and Challenger Plateau. For the polychaete, significant population structure was detected among distinct populations on the Chatham Rise, the Hikurangi Margin, and the Challenger Plateau. Significant genetic differences existed between slope and seamount populations on the Hikurangi Margin, as did evidence of population differentiation between the northeast and southwest parts of the Chatham Rise. In contrast, no significant population structure was detected across the study area for the squat lobster. Patterns of genetic connectivity in Hyalinoecia longibranchiata are likely influenced by a number of factors including current regimes that operate on varying spatial and temporal scales to produce potential barriers to dispersal. The striking difference in population structure between species can be attributed to differences in life history strategies. The results of this study are discussed in the context of existing conservation areas that are intended to manage anthropogenic threats to deep-sea benthic communities in the New Zealand region. PMID:23185341
Ball, Oliver; Robinson, Sarah; Bure, Kim; Brindley, David A; Mccall, David
2018-04-01
Phacilitate held a Special Interest Group workshop event in Edinburgh, UK, in May 2017. The event brought together leading stakeholders in the cell therapy bioprocessing field to identify present and future challenges and propose potential solutions to automation in cell therapy bioprocessing. Here, we review and summarize discussions from the event. Deep biological understanding of a product, its mechanism of action and indication pathogenesis underpin many factors relating to bioprocessing and automation. To fully exploit the opportunities of bioprocess automation, therapeutics developers must closely consider whether an automation strategy is applicable, how to design an 'automatable' bioprocess and how to implement process modifications with minimal disruption. Major decisions around bioprocess automation strategy should involve all relevant stakeholders; communication between technical and business strategy decision-makers is of particular importance. Developers should leverage automation to implement in-process testing, in turn applicable to process optimization, quality assurance (QA)/ quality control (QC), batch failure control, adaptive manufacturing and regulatory demands, but a lack of precedent and technical opportunities can complicate such efforts. Sparse standardization across product characterization, hardware components and software platforms is perceived to complicate efforts to implement automation. The use of advanced algorithmic approaches such as machine learning may have application to bioprocess and supply chain optimization. Automation can substantially de-risk the wider supply chain, including tracking and traceability, cryopreservation and thawing and logistics. The regulatory implications of automation are currently unclear because few hardware options exist and novel solutions require case-by-case validation, but automation can present attractive regulatory incentives. Copyright © 2018 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.
Deep learning methods to guide CT image reconstruction and reduce metal artifacts
NASA Astrophysics Data System (ADS)
Gjesteby, Lars; Yang, Qingsong; Xi, Yan; Zhou, Ye; Zhang, Junping; Wang, Ge
2017-03-01
The rapidly-rising field of machine learning, including deep learning, has inspired applications across many disciplines. In medical imaging, deep learning has been primarily used for image processing and analysis. In this paper, we integrate a convolutional neural network (CNN) into the computed tomography (CT) image reconstruction process. Our first task is to monitor the quality of CT images during iterative reconstruction and decide when to stop the process according to an intelligent numerical observer instead of using a traditional stopping rule, such as a fixed error threshold or a maximum number of iterations. After training on ground truth images, the CNN was successful in guiding an iterative reconstruction process to yield high-quality images. Our second task is to improve a sinogram to correct for artifacts caused by metal objects. A large number of interpolation and normalization-based schemes were introduced for metal artifact reduction (MAR) over the past four decades. The NMAR algorithm is considered a state-of-the-art method, although residual errors often remain in the reconstructed images, especially in cases of multiple metal objects. Here we merge NMAR with deep learning in the projection domain to achieve additional correction in critical image regions. Our results indicate that deep learning can be a viable tool to address CT reconstruction challenges.
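The CNN-guided stopping idea above can be sketched as a small quality observer scored after each iterative update. This is a hedged sketch under our own assumptions (the tiny network, the patience-based stopping criterion, and the dummy update step are illustrative, not the authors' implementation).

```python
# Hedged sketch: a small CNN scores image quality after each iteration and the
# reconstruction stops when the score stops improving. Architecture, stopping
# criterion and the dummy update step are assumptions, not the paper's method.
import torch
import torch.nn as nn

class QualityObserver(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(8, 1)  # scalar quality score

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def reconstruct_with_cnn_stop(update_step, x0, observer, max_iters=50, patience=3):
    """Run iterative updates until the observer's score stops improving."""
    x, best, stale = x0, float("-inf"), 0
    for _ in range(max_iters):
        x = update_step(x)  # one iterative-reconstruction update (assumed given)
        score = observer(x.unsqueeze(0).unsqueeze(0)).item()
        if score > best:
            best, stale = score, 0
        else:
            stale += 1
            if stale >= patience:  # intelligent stop instead of a fixed rule
                break
    return x

# Toy usage with a dummy update step on a 64x64 image.
observer = QualityObserver()
x = reconstruct_with_cnn_stop(lambda im: im * 0.95, torch.rand(64, 64), observer)
```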
NASA Astrophysics Data System (ADS)
Yuan-hui, Li; Gang, Lei; Shi-da, Xu; Da-wei, Wu
2018-07-01
Under high stress and blasting disturbance, the failure of deep rock masses is a complex, dynamic evolutionary process. To reveal the relation between the macroscopic failure of deep rock masses and the spatial-temporal evolution of micro-cracking within them, the initiation, extension, and connection of micro-cracks under blasting disturbance and the deformation and failure mechanism of deep rock masses were studied. The investigation was carried out using the microseismic (MS) monitoring system established in the deep mining area of Ashele Copper Mine (Xinjiang Uygur Autonomous Region, China). The results showed that the failure of the deep rock masses is a dynamic process accompanied by stress release and stress adjustment. It is not only related to the blasting-based mining, but also associated with zones of stress concentration formed due to the mining. In the spatial domain, the concentrated area in the cloud chart of MS event density before failure of the rocks shows basically the same pattern as the damaged rock obtained through scanning of mined-out areas, which indicates that the cloud chart can be used to identify potential risk areas of the rocks. In the time domain, relevant parameters of MS events showed distinct changes before the failure of the rocks: the energy index decreased while the cumulative apparent volume gradually increased, the magnitude distribution of microseismic events decreased rapidly, and the fractal dimension decreased at first and then remained stable. This demonstrates that the changes in relevant MS parameters allow researchers to predict the failure time of the rocks. By analysing the dynamic evolution process of the failure of the deep rock masses, areas at potential risk can be predicted spatially and temporally. The result provides guidance for those involved in the safe production and management of underground engineering and establishes a theoretical basis for the study of the stability of deep rock masses.
Peng, Song; Zhao, Yihuan; Fu, Caixia; Pu, Xuemei; Zhou, Liang; Huang, Yan; Lu, Zhiyun
2018-06-07
A series of blue-emissive 7-(diphenylamino)-4-phenoxycoumarin derivatives bearing -CF3, -OMe, or -N(Me)2 substituents on the phenoxy subunit were synthesized. Although both the -CF3 and -N(Me)2 modifications were found to trigger redshifted fluorescence, the -OMe substitution was demonstrated to exert an unexpected blueshift color-tuning effect toward the deep-blue region. The reason is that the moderate electron-donating -OMe group can endow coumarins with unaltered HOMO but elevated LUMO energy levels. Moreover, the -OMe substitution was found to be beneficial to the thermal stability of these coumarins. Therefore, the trimethoxy-substituted objective compound can act as a high-performance deep-blue organic light-emitting diode (OLED) emitter, and an OLED based on it emits deep-blue light with CIE coordinates of (0.148, 0.084), a maximum luminance of 7800 cd m-2, and a maximum external quantum efficiency of 5.1%. These results not only shed light on the molecular design strategy for high-performance deep-blue OLED emitters through color-tuning, but also demonstrate the potential of coumarin derivatives as deep-blue OLED emitters. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kot, Wing K.; Pegg, Ian L.; Brandys, Marek
One of the primary roles of waste pretreatment at the Hanford Tank Waste Treatment and Immobilization Plant (WTP) is to separate the majority of the radioactive components from the majority of the nonradioactive components in retrieved tank wastes, producing a high level waste (HLW) stream and a low activity waste (LAW) stream. This separation process is a key element in the overall strategy to reduce the volume of HLW that requires vitrification and subsequent disposal in a national deep geological repository for high level nuclear waste. After removal of the radioactive constituents, the LAW stream, which has a much larger volume but smaller fraction of radioactivity than the HLW stream, will be immobilized and disposed of in near surface facilities at the Hanford site.
Bismuth interstitial impurities and the optical properties of GaP1-x-yBixNy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christian, Theresa M.; Beaton, Daniel A.; Perkins, John D.
Two distinctive regimes of behavior are observed from GaP1-x-yBixNy alloys with x < 2.4%, y < 3.4% grown by molecular beam epitaxy. These regimes are correlated with abundant bismuth interstitial impurities that are encouraged or suppressed according to the sample growth temperature, with up to 55% of incorporated bismuth located interstitially. When bismuth interstitials are present, radiative recombination arises at near-band-edge localized states rather than from impurity bands and deep state luminescence. Finally, this change demonstrates a novel strategy for controlling luminescence in isoelectronic semiconductor alloys and is attributed to a disruption of carrier transfer processes.
NASA Astrophysics Data System (ADS)
Lu, Guoping; Wang, Xiao; Li, Fusi; Xu, Fangyiming; Wang, Yanxin; Qi, Shihua; Yuen, David
2017-03-01
This paper investigates the deep fault thermal flow processes in the Xinzhou geothermal field in the Yangjiang region of Guangdong Province. Deep faults channel geothermal energy to the shallow ground, which makes them difficult to study due to their hidden nature. We conducted numerical experiments in order to investigate the physical states of the geothermal water inside the fault zone. We view the deep fault as a fast flow path for thermal water driven up from the deep crust by buoyancy. Temperature measurements at the springs or wells constrain the upper boundary, and the temperature inferred from the Curie temperature interface bounds the bottom. The deepened boundary allows the thermal reservoir to evolve rather than remain at a fixed temperature. The results detail the concept of a thermal reservoir in terms of its formation and heat distribution. The concept also reconciles the discrepancy in reservoir temperatures predicted from quartz and from Na-K-Mg. The downward displacement of the crust increases the pressure at depth and leads to an elevated temperature and a lighter water density. Ultimately, our results are a first step toward numerical studies of deep faults through geothermal water flows; future work needs to extend to cases of supercritical states. This approach is applicable to general deep-fault thermal flows and dissipation paths for seismic energy from the deep crust.
Cell dynamic morphology classification using deep convolutional neural networks.
Li, Heng; Pang, Fengqian; Shi, Yonggang; Liu, Zhiwen
2018-05-15
Cell morphology is often used as a proxy measurement of cell status to understand cell physiology. Hence, interpretation of cell dynamic morphology is a meaningful task in biomedical research. Inspired by the recent success of deep learning, we here explore the application of convolutional neural networks (CNNs) to cell dynamic morphology classification. An innovative strategy for the implementation of CNNs is introduced in this study. Mouse lymphocytes were collected to observe the dynamic morphology, and two datasets were thus set up to investigate the performance of CNNs. To ease the implementation of deep learning, the classification problem was simplified from video data to image data, and was then solved by CNNs in a self-taught manner with the generated image data. CNNs were separately performed in three implementation scenarios and compared with existing methods. Experimental results demonstrated the potential of CNNs in cell dynamic morphology classification, and validated the effectiveness of the proposed strategy. CNNs were successfully applied to the classification problem, and outperformed the existing methods in classification accuracy. For the implementation of CNNs, transfer learning was proved to be a promising scheme. © 2018 International Society for Advancement of Cytometry.
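The transfer-learning scheme mentioned above can be illustrated with a short sketch in which a pretrained backbone is frozen and only a new classification head is trained on the morphology classes. This is an assumption about the general setup, not the authors' exact pipeline; the backbone choice, class count, and data are placeholders, and the torchvision weights argument varies with library version.

```python
# Illustrative transfer-learning sketch (not the authors' pipeline): freeze a
# pretrained CNN backbone and train only a new head on cell-morphology classes.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 4  # hypothetical number of dynamic-morphology classes

# Load a pretrained backbone (weights API shown for torchvision >= 0.13).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False  # freeze the pretrained feature extractor
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)  # new trainable head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One toy training step on random tensors standing in for frames from video.
images = torch.rand(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
loss = criterion(backbone(images), labels)
loss.backward()
optimizer.step()
```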
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wellman, Dawn M.; Triplett, Mark B.; Freshley, Mark D.
DOE-EM, Office of Groundwater and Soil Remediation and DOE Richland, in collaboration with the Hanford site and Pacific Northwest National Laboratory, have established the Deep Vadose Zone Applied Field Research Center (DVZ-AFRC). The DVZ-AFRC leverages DOE investments in basic science from the Office of Science, applied research from the DOE EM Office of Technology Innovation and Development, and site operations (e.g., site contractors [CH2M HILL Plateau Remediation Contractor and Washington River Protection Solutions], DOE-EM RL and ORP) in a collaborative effort to address the complex region of the deep vadose zone. Although the aim, goal, motivation, and contractual obligation of each organization is different, the integration of these activities into the framework of the DVZ-AFRC brings the resources and creativity of many to provide sites with viable alternative remedial strategies to current baseline approaches for persistent contaminants and deep vadose zone contamination. This cooperative strategy removes stovepipes, prevents duplication of effort, maximizes resources, and facilitates development of the scientific foundation needed to make sound and defensible remedial decisions that will successfully meet the target cleanup goals for one of DOE EM's most intractable problems, in a manner that is acceptable to regulators.
A Deep Learning Network Approach to ab initio Protein Secondary Structure Prediction
Spencer, Matt; Eickholt, Jesse; Cheng, Jianlin
2014-01-01
Ab initio protein secondary structure (SS) predictions are utilized to generate tertiary structure predictions, which are increasingly demanded due to the rapid discovery of proteins. Although recent developments have slightly exceeded previous methods of SS prediction, accuracy has stagnated around 80% and many wonder if prediction cannot be advanced beyond this ceiling. Disciplines that have traditionally employed neural networks are experimenting with novel deep learning techniques in attempts to stimulate progress. Since neural networks have historically played an important role in SS prediction, we wanted to determine whether deep learning could contribute to the advancement of this field as well. We developed an SS predictor that makes use of the position-specific scoring matrix generated by PSI-BLAST and deep learning network architectures, which we call DNSS. Graphical processing units and CUDA software optimize the deep network architecture and efficiently train the deep networks. Optimal parameters for the training process were determined, and a workflow comprising three separately trained deep networks was constructed in order to make refined predictions. This deep learning network approach was used to predict SS for a fully independent test data set of 198 proteins, achieving a Q3 accuracy of 80.7% and a Sov accuracy of 74.2%. PMID:25750595
A Deep Learning Network Approach to ab initio Protein Secondary Structure Prediction.
Spencer, Matt; Eickholt, Jesse; Jianlin Cheng
2015-01-01
Ab initio protein secondary structure (SS) predictions are utilized to generate tertiary structure predictions, which are increasingly demanded due to the rapid discovery of proteins. Although recent developments have slightly exceeded previous methods of SS prediction, accuracy has stagnated around 80 percent and many wonder if prediction cannot be advanced beyond this ceiling. Disciplines that have traditionally employed neural networks are experimenting with novel deep learning techniques in attempts to stimulate progress. Since neural networks have historically played an important role in SS prediction, we wanted to determine whether deep learning could contribute to the advancement of this field as well. We developed an SS predictor that makes use of the position-specific scoring matrix generated by PSI-BLAST and deep learning network architectures, which we call DNSS. Graphical processing units and CUDA software optimize the deep network architecture and efficiently train the deep networks. Optimal parameters for the training process were determined, and a workflow comprising three separately trained deep networks was constructed in order to make refined predictions. This deep learning network approach was used to predict SS for a fully independent test dataset of 198 proteins, achieving a Q3 accuracy of 80.7 percent and a Sov accuracy of 74.2 percent.
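The DNSS workflow described above (PSI-BLAST profiles fed to deep networks) can be illustrated with a minimal sketch that slides a window over a PSSM and predicts one of three secondary-structure states per residue. The window size, layer widths, and random profile are our assumptions, not the published DNSS configuration.

```python
# Minimal sketch under stated assumptions (window size and layer widths are ours):
# a feed-forward deep network maps a window of PSI-BLAST PSSM columns to one of
# three secondary-structure states (helix, strand, coil).
import torch
import torch.nn as nn

window = 15  # residues per input window (assumed)
model = nn.Sequential(
    nn.Linear(window * 20, 300), nn.ReLU(),
    nn.Linear(300, 150), nn.ReLU(),
    nn.Linear(150, 3))  # helix / strand / coil logits

def windows_from_pssm(pssm, window=15):
    """Pad the L x 20 PSSM and return one flattened window per residue."""
    pad = window // 2
    padded = torch.cat([torch.zeros(pad, 20), pssm, torch.zeros(pad, 20)])
    return torch.stack([padded[i:i + window].flatten() for i in range(len(pssm))])

pssm = torch.rand(120, 20)            # stand-in for a PSI-BLAST profile
logits = model(windows_from_pssm(pssm, window))
q3_prediction = logits.argmax(dim=1)  # per-residue class used for Q3 scoring
print(q3_prediction.shape)            # torch.Size([120])
```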
Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising.
Zhang, Kai; Zuo, Wangmeng; Chen, Yunjin; Meng, Deyu; Zhang, Lei
2017-07-01
The discriminative model learning for image denoising has recently been attracting considerable attention due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace progress in very deep architectures, learning algorithms, and regularization methods for image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from existing discriminative denoising models, which usually train a specific model for additive white Gaussian noise at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle several general image denoising tasks, such as Gaussian denoising, single image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks, but also be efficiently implemented by benefiting from GPU computing.
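The residual-learning idea described above can be made concrete with a simplified stand-in: the network predicts the noise map, and the denoised output is the noisy input minus that prediction. This is a hedged sketch, not the published DnCNN architecture (depth, feature width, and the toy training step are assumptions).

```python
# Hedged sketch of residual-learning denoising (simplified stand-in, not DnCNN):
# the network learns the noise, and clean = noisy - predicted noise.
import torch
import torch.nn as nn

class TinyResidualDenoiser(nn.Module):
    def __init__(self, channels=1, features=32, depth=5):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features),  # batch normalization aids training
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, noisy):
        residual = self.body(noisy)  # the network predicts the noise map
        return noisy - residual      # residual learning: subtract it from the input

model = TinyResidualDenoiser()
noisy = torch.rand(4, 1, 40, 40)     # patches with unknown noise level
clean_estimate = model(noisy)
loss = nn.functional.mse_loss(clean_estimate, torch.rand(4, 1, 40, 40))  # vs. targets
```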
The Cold Gas History of the Universe as seen by the ngVLA
NASA Astrophysics Data System (ADS)
Riechers, Dominik A.; Carilli, Chris Luke; Casey, Caitlin; da Cunha, Elisabete; Hodge, Jacqueline; Ivison, Rob; Murphy, Eric J.; Narayanan, Desika; Sargent, Mark T.; Scoville, Nicholas; Walter, Fabian
2017-01-01
The Next Generation Very Large Array (ngVLA) will fundamentally advance our understanding of the formation processes that lead to the assembly of galaxies throughout cosmic history. The combination of large bandwidth with unprecedented sensitivity to the critical low-level CO lines over virtually the entire redshift range will open up the opportunity to conduct large-scale, deep cold molecular gas surveys, mapping the fuel for star formation in galaxies over substantial cosmic volumes. Informed by the first efforts with the Karl G. Jansky Very Large Array (COLDz survey) and the Atacama Large (sub)Millimeter Array (ASPECS survey), we here present initial predictions and possible survey strategies for such "molecular deep field" observations with the ngVLA. These investigations will provide a detailed measurement of the volume density of molecular gas in galaxies as a function of redshift, the "cold gas history of the universe". This will crucially complement studies of the neutral gas, star formation and stellar mass histories with large low-frequency arrays, the Large UV/Optical/Infrared Surveyor, and the Origins Space Telescope, providing the means to obtain a comprehensive picture of galaxy evolution through cosmic times.
NASA Astrophysics Data System (ADS)
Neumann, Lars; Ritscher, Allegra; Müller, Gerhard; Hafenbradl, Doris
2009-08-01
For the detection of the precise and unambiguous binding of fragments to a specific binding site on the target protein, we have developed a novel reporter displacement binding assay technology. The application of this technology for the fragment screening as well as the fragment evolution process with a specific modelling based design strategy is demonstrated for inhibitors of the protein kinase p38alpha. In a fragment screening approach seed fragments were identified which were then used to build compounds from the deep-pocket towards the hinge binding area of the protein kinase p38alpha based on a modelling approach. BIRB796 was used as a blueprint for the alignment of the fragments. The fragment evolution of these deep-pocket binding fragments towards the fully optimized inhibitor BIRB796 included the modulation of the residence time as well as the affinity. The goal of our study was to evaluate the robustness and efficiency of our novel fragment screening technology at high fragment concentrations, compare the screening data with biochemical activity data and to demonstrate the evolution of the hit fragments with fast kinetics, into slow kinetic inhibitors in an in silico approach.
Environmental Systems Microbiology of Contaminated Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sayler, Gary; Hazen, Terry C.
Environmental Systems Microbiology is well positioned to move forward in dynamic complex-system analysis, probing new questions and developing new insight into function, robustness, and resilience in response to anthropogenic perturbations. Recent studies have demonstrated that natural bacterial communities can be used as quantitative biosensors in both groundwater and deep ocean water, predicting oil concentration from the Gulf of Mexico Deepwater Horizon spill and from groundwater at nuclear production waste sites (16, 17, 25). Since the first demonstration of catabolic gene expression in soil remediation (34), it has been clear that analysis must extend beyond organismal abundance to the process and function of microbial communities as a whole, using the full suite of omic tools available in the post-genomic era. Metatranscriptomics have been highlighted as a prime vehicle for understanding responses to environmental drivers (35) in complex systems, and with rapidly developing metabolomics, full functional understanding of complex community biogeochemical cycling is an achievable goal. Perhaps more exciting is the dynamic nature of these systems and their complex adaptive strategies, which may lead to new control paradigms and the emergence of new states and functions in the course of a changing environment.
Experimental Attempts for Deep Insertion in Ultrasonically Forced Insertion Process
NASA Astrophysics Data System (ADS)
Ono, Satoshi; Aoyagi, Manabu; Tamura, Hideki; Takano, Takehiro
2011-07-01
In this paper, we describe two approaches for obtaining deep insertion in an ultrasonically forced insertion (USFI) process. One was to correct the inclination of an inserted rod by passively generated bending vibrations. The inclination causes a partial plastic deformation, which decreases the holding power of the processing materials. Two types of horn with grooves for the excitation of bending vibrations were examined. The other was to create differences in the vibration velocity and phase between a rod and a metal plate by damping the vibration of the metal plate with a rubber sheet. As a result, the approaches proposed in this study were confirmed to be effective for obtaining deep insertion.
NASA Astrophysics Data System (ADS)
McEntee, C.; Zurbuchen, T.; Easterling, W. E.; Gallaudet, T.; Werkheiser, W. H.; McEntee, C.; Zurbuchen, T.; Pandya, R.; Manduca, C. A.; Graumlich, L. J.; Snover, A. K.; Klinger, T.
2017-12-01
Now, more than any time in recent memory, scientists are stepping forward, eager to bring science to bear on environmental issues. The time could not be more ripe; we have the best tool ever developed by humankind for understanding cause and consequence: science itself. And we have an impressive tool kit for communicating science honed through decades of engagement. Despite these advances, we face a headwind. Public trust in experts is on the decline. Society's deep polarization means that wading into societal issues brings us uncomfortably close to the deep end of politics. The expertise that is required to tackle the thorniest of environmental problems is not just technical but also requires addressing differing value systems and pervasive issues of inequity. If we have robust science, honorable intentions, and good communication strategies, what's missing? It's all about design thinking, especially 1) empathy with users, 2) a discipline of prototyping, and 3) a tolerance for failure. In this talk, we share lessons in design thinking from the University of Washington's Climate Impacts Group and Washington Ocean Acidification Center, cornerstones of our new environmental institute, EarthLab. Connecting deeply and authentically with the experiences of user communities is at the core of our work. Collaboration is an iterative process centered on prototyping adaptive strategies in partnership with users. Using this approach, the Climate Impacts Group informs decision making ranging from culvert design to the Endangered Species Act, building long-term capacity for adaptation at every stage of the process. In partnership with the shellfish industry, the Washington Ocean Acidification Center pioneers adaptive strategies to sustain shellfish production—and shellfish producers—in a rapidly changing ocean. Finally, we will open the messy can of worms that is tolerance for failure. How can we afford failure in the context of declining public trust and support for science, and at a time when the stakes are so high? Practically speaking, can an assistant professor or soft-money researcher afford failure if he or she doesn't have tenure? Can a small business owner risk investment in a prototype that might fail? But, ultimately, how can we not afford to push the limits of innovation in addressing the pressing issues of the day?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neuhauser, K.
Through discussion of five case studies (test homes), this project evaluates strategies to elevate the performance of existing homes to a level commensurate with best-in-class implementation of high-performance new construction homes. The test homes featured in this research activity participated in the Deep Energy Retrofit (DER) Pilot Program sponsored by the electric and gas utility National Grid in Massachusetts and Rhode Island. Building enclosure retrofit strategies are evaluated for impact on durability and indoor air quality in addition to energy performance. Evaluation of strategies is structured around the critical control functions of water, airflow, vapor flow, and thermal control. The aim of the research project is to develop guidance that could serve as a foundation for wider adoption of high-performance, 'deep' retrofit work. The project will identify risk factors endemic to advanced retrofit in the context of the general building type, configuration, and vintage encountered in the National Grid DER Pilot. Results for the test homes are based on observation and performance testing of recently completed projects. Additional observation would be needed to fully gauge long-term energy performance, durability, and occupant comfort.
Core strength training for patients with chronic low back pain.
Chang, Wen-Dien; Lin, Hung-Yu; Lai, Ping-Tung
2015-03-01
[Purpose] Through core strength training, patients with chronic low back pain can strengthen their deep trunk muscles. However, independent training remains challenging, despite the existence of numerous core strength training strategies. Currently, no standardized system has been established analyzing and comparing the results of core strength training and typical resistance training. Therefore, we conducted a systematic review of the results of previous studies to explore the effectiveness of various core strength training strategies for patients with chronic low back pain. [Methods] We searched for relevant studies using electronic databases. Subsequently, we evaluated their quality by analyzing the reported data. [Results] We compared four methods of evaluating core strength training: trunk balance, stabilization, segmental stabilization, and motor control exercises. According to the results of various scales and evaluation instruments, core strength training is more effective than typical resistance training for alleviating chronic low back pain. [Conclusion] All of the core strength training strategies examined in this study assist in the alleviation of chronic low back pain; however, we recommend focusing on training the deep trunk muscles to alleviate chronic low back pain.
Implementing AORN recommended practices for prevention of deep vein thrombosis.
Van Wicklin, Sharon A
2011-11-01
One to two people per 1,000 are affected by deep vein thrombosis (DVT) or pulmonary embolism in the United States each year. AORN published its new "Recommended practices for prevention of deep vein thrombosis" to guide perioperative RNs in establishing organization-wide protocols for DVT prevention. Strategies for successful implementation of the recommended practices include taking a multidisciplinary approach to protocol development, providing education and guidance for performing preoperative patient assessments and administering DVT prophylaxis, and having appropriate resources and the facility's policy and procedure for DVT prevention readily available in the practice setting. Hospital and ambulatory patient scenarios have been included as examples of appropriate execution of the recommended practices. Copyright © 2011 AORN, Inc. Published by Elsevier Inc. All rights reserved.
Elucidating Small-Scale Animal-Fluid Interactions in the Deep Sea
NASA Astrophysics Data System (ADS)
Katija, K.; Sherman, A.; Graves, D.; Kecy, C. D.; Klimov, D.; Robison, B. H.
2016-02-01
The midwater region of the ocean (below the euphotic zone and above the benthos) is one of the largest ecosystems on our planet, yet remains one of the least explored. Little-known marine organisms that inhabit midwater have developed life strategies that contribute to their evolutionary success, and understanding interactions with their physical, fluid environment will shed light on these strategies. Although significant advances in underwater vehicle technologies have improved access to midwater, small-scale, in situ fluid mechanics measurement methods that seek to quantify the interactions that midwater organisms have with their physical environment are lacking. Here we present DeepPIV, an instrumentation package affixed to remotely operated vehicles that quantifies fluid motions from the surface of the ocean down to 4000 m depths. Utilizing ambient suspended particulate, fluid-structure interactions can be evaluated on a range of marine organisms in midwater and on the benthos. As a proof of concept for DeepPIV, we targeted giant larvaceans (Bathochordaeus stygias) in Monterey Bay that create mucus houses to filter food. Once mucus houses become clogged, they are abandoned by the larvacean, and are left to sink to the ocean bottom; in Monterey Bay, sinking mucus houses contribute to nearly a third of the particulate on the ocean bottom. Little is known about the structure of these mucus houses and the function they play in selectively filtering particles. Using DeepPIV, we reveal the complex structures and flows generated within larvacean mucus houses, which are used to ultimately elucidate how these structures function.
The formation of Greenland Sea Deep Water: double diffusion or deep convection?
NASA Astrophysics Data System (ADS)
Clarke, R. Allyn; Swift, James H.; Reid, Joseph L.; Koltermann, K. Peter
1990-09-01
An examination of the extensive hydrographic data sets collected by C.S.S. Hudson and F.S. Meteor in the Norwegian and Greenland Seas during February-June 1982 reveals property distributions and circulation patterns broadly similar to those seen in earlier data sets. These data sets, however, reveal an even stronger role played by topography, with evidence of separate circulation patterns and separate water masses in each of the deep basins. The high-precision temperature, salinity and oxygen data obtained reveal significant differences in the deep and bottom waters found in the various basins of the Norwegian and Greenland Seas. A comparison of the 1982 data set with earlier sets shows that the renewal of Greenland Sea Deep Water must have taken place sometime over the last decade; however, there is no evidence that deep convective renewal of any of the deep and bottom waters in this region was taking place at the time of the observations. The large-scale density fields, however, do suggest that deep convection to the bottom is most likely to occur in the Greenland Basin due to its deep cyclonic circulation. The hypothesis that Greenland Sea Deep Water (GSDW) is formed through diapycnal mixing processes acting on the warm salty core of Atlantic Water entering the Greenland Sea is examined. θ-S correlations and oxygen concentrations suggest that the salinity maxima in the Greenland Sea are the product of at least two separate mixing processes, not the hypothesized single mixing process leading to GSDW. A simple one-dimensional mixed-layer model with ice growth and decay demonstrates that convective renewal of GSDW would have occurred within the Greenland Sea had the winter been a little more severe. The new GSDW produced would have a salinity only 0.003 lower and an oxygen concentration less than 0.04 ml l-1 higher than that of the water already in the basin. Consequently, detecting whether new deep water has been produced following a winter cooling season could be difficult even with the best of modern accuracy.
An Improved Forwarding of Diverse Events with Mobile Sinks in Underwater Wireless Sensor Networks.
Raza, Waseem; Arshad, Farzana; Ahmed, Imran; Abdul, Wadood; Ghouzali, Sanaa; Niaz, Iftikhar Azim; Javaid, Nadeem
2016-11-04
In this paper, a novel routing strategy to address the energy consumption and delay-sensitivity issues in deep underwater wireless sensor networks is proposed. This strategy is named ESDR: Event Segregation based Delay sensitive Routing. In this strategy, sensed events are segregated on the basis of their criticality and are forwarded to their respective destinations based on forwarding functions. These functions depend on different routing metrics: the Signal Quality Index, Localization-free Signal to Noise Ratio, Energy Cost Function and Depth Dependent Function. The problem of incomparable values of previously defined forwarding functions causes uneven delays in the forwarding process. Hence the forwarding functions are redefined to ensure their comparable values in different depth regions. The packet forwarding strategy is based on the event segregation approach, which forwards one third of the generated events (delay sensitive) to surface sinks and two thirds (normal events) to mobile sinks. The motion of mobile sinks is influenced by the relative distribution of normal nodes. We have also incorporated two different mobility patterns for the mobile sinks: adaptive mobility and uniform mobility. The latter is implemented for collecting the packets generated by the normal nodes. These improvements ensure optimum holding time, uniform delay and in-time reporting of delay-sensitive events. This scheme is compared with existing ones and outperforms them in terms of network lifetime, delay and throughput.
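To illustrate how normalised routing metrics can be combined into a comparable forwarding value and how events might be segregated, here is a rough sketch. The metric names follow the abstract, but the normalisation, weights, and thresholds are our assumptions rather than the published ESDR rules.

```python
# Illustrative sketch only (weights, normalisation and thresholds are assumptions):
# score candidate forwarders with comparable metrics and segregate events by
# criticality before choosing a destination sink.
from dataclasses import dataclass

@dataclass
class Neighbor:
    node_id: int
    signal_quality_index: float  # assumed to lie in 0..1
    snr_db: float
    residual_energy_j: float
    depth_m: float

def forwarding_value(n: Neighbor, max_energy_j=100.0, max_depth_m=3000.0) -> float:
    sqi = n.signal_quality_index
    snr = min(max(n.snr_db, 0.0) / 30.0, 1.0)     # normalise to 0..1
    energy = n.residual_energy_j / max_energy_j   # proxy for the energy cost function
    depth = 1.0 - n.depth_m / max_depth_m         # shallower neighbours score higher
    return 0.3 * sqi + 0.2 * snr + 0.3 * energy + 0.2 * depth

def choose_destination(event_is_delay_sensitive: bool) -> str:
    # Delay-sensitive events go to surface sinks, the rest to mobile sinks.
    return "surface_sink" if event_is_delay_sensitive else "mobile_sink"

neighbors = [Neighbor(1, 0.9, 18.0, 60.0, 1200.0), Neighbor(2, 0.7, 25.0, 90.0, 800.0)]
best = max(neighbors, key=forwarding_value)
print(choose_destination(True), "via node", best.node_id)
```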
Gearing, Robin E; Schwalbe, Craig S; MacKenzie, Michael J; Brewer, Kathryne B; Ibrahim, Rawan W; Olimat, Hmoud S; Al-Makhamreh, Sahar S; Mian, Irfan; Al-Krenawi, Alean
2013-11-01
All too often, efficacious psychosocial evidence-based interventions fail when adapted from one culture to another. International translation requires a deep understanding of the local culture, nuanced differences within a culture, established service practices, and knowledge of obstacles and promoters to treatment implementation. This research investigated the following objectives to better facilitate cultural adaptation and translation of psychosocial and mental health treatments in Arab countries: (1) identify barriers or obstacles; (2) identify promoting strategies; and (3) provide clinical and research recommendations. This systematic review of 22 psychosocial or mental health studies in Middle East Arab countries identified more barriers (68%) than promoters (32%) to effective translation and adaptation of empirically supported psychosocial interventions. Identified barriers include obstacles related to the acceptability of the intervention within the cultural context, community and system difficulties, and problems with clinical engagement processes, whereas identified promoter strategies centre on the importance of partnering and working within the local and cultural context, the need to engage with acceptable and traditional intervention characteristics, and the development of culturally appropriate treatment strategies and techniques. Although Arab cultures across the Middle East are unique, this article provides a series of core clinical and research recommendations to assist effective treatment adaptation and translation within Arab communities in the Middle East.
Recent Advances of Light-Mediated Theranostics
Ai, Xiangzhao; Mu, Jing; Xing, Bengang
2016-01-01
Precision theranostics are in high demand for the effective treatment of various human diseases. Currently, efficient therapy at the targeted disease areas remains challenging because most available drug molecules lack selectivity for the pathological sites. Among different approaches, the light-mediated therapeutic strategy has recently emerged as a promising and powerful tool to precisely control the activation of therapeutic reagents and imaging probes in vitro and in vivo, mostly attributed to its unique properties, including minimal invasiveness and high spatiotemporal resolution. Although it has achieved initial success, conventional strategies for light-mediated theranostics are mostly based on light with short wavelengths (e.g., UV or visible light), which usually suffer from several undesired drawbacks, such as limited tissue penetration depth, unavoidable light absorption/scattering and potential phototoxicity to healthy tissues. Therefore, a near-infrared (NIR) light-mediated approach based on long-wavelength light (700-1000 nm) irradiation, which displays deep tissue penetration, minimized photo-damage and low autofluorescence in living systems, has been proposed over the last decades as an inspiring alternative for precise phototherapeutic applications. Although numerous NIR light-responsive molecules have been proposed for clinical applications, several inherent drawbacks, such as troublesome synthetic procedures, low water solubility and limited accumulation in targeted areas, heavily restrict their applications in deep-tissue therapeutic and imaging studies. Thanks to the properties of several nanomaterials with large extinction coefficients in the NIR region, the construction of multifunctional NIR light-responsive nanoplatforms has become a promising approach for the diagnosis and therapy of deep-seated diseases. In this review, we summarize various light-triggered theranostic strategies and introduce their recent advances in biomedical applications. Moreover, some other promising light-assisted techniques, such as photoacoustic imaging and Cerenkov radiation, are also systematically discussed. Finally, the potential challenges and future perspectives for light-mediated deep-tissue diagnosis and therapeutics are proposed. PMID:27877246
Pandey, Jatin; Singh, Manjari
2016-06-01
Emotional labour involves the management of one's emotions to match the demands of one's role. This emotion display involves either expression alone (surface-level emotional labour) or experience in addition to expression (deep-level emotional labour) of the desired emotions. Emotional labour is required for effective, efficient and successful healthcare service delivery. Burnout associated with emotional labour is an important factor in how satisfied frontline service providers are with their jobs. This empirical study investigates the link between surface- and deep-level emotional labour, burnout and job satisfaction in women community health workers from India. Our results from structural equation modelling of 177 accredited social health activists (ASHAs) indicate a negative relation between surface- and deep-level emotional labour, clearly demarcating them as two different strategies for the performance of emotional labour in a community health care setting. Surface-level emotional labour is associated with higher job satisfaction, and burnout partially mediates this relation. Deep-level emotional labour is associated with lower job satisfaction; burnout fully mediates this relation. A qualitative post hoc analysis based on interviews of 10 ASHAs was done to understand the findings of the quantitative study. Surface-level emotional labour was found to be the more desirable strategy for community health care workers for the effective and efficient performance of their work roles. Our results make a significant contribution to the design, redesign, and improvement of employment practices in community healthcare. This study brings forth the neglected issues of emotions and their implications for these healthcare workers in low- and middle-income countries, who are a vital link delivering healthcare to weaker sections of society. The findings have relevance not merely for the individual providing this service but also for the beneficiary and the organization that facilitates this delivery. Interventions based on demographic, community, national and occupational factors are also presented. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please email: journals.permissions@oup.com.
Deep space communication - Past, present, and future
NASA Technical Reports Server (NTRS)
Posner, E. C.; Stevens, R.
1984-01-01
This paper reviews the progress made in deep space communication from its beginnings until now, describes the development and applications of NASA's Deep Space Network, and indicates directions for the future. Limiting factors in deep space communication are examined using the upcoming Voyager encounter with Uranus, centered on the downlink telemetry from spacecraft to earth, as an example. A link calculation for Voyager at Uranus over Australia is exhibited. Seven basic deep space communication functions are discussed, and technical aspects of spacecraft communication equipment, ground antennas, and ground electronics and processing are considered.
Atlantic-Pacific Asymmetry in Deep Water Formation
NASA Astrophysics Data System (ADS)
Ferreira, David; Cessi, Paola; Coxall, Helen K.; de Boer, Agatha; Dijkstra, Henk A.; Drijfhout, Sybren S.; Eldevik, Tor; Harnik, Nili; McManus, Jerry F.; Marshall, David P.; Nilsson, Johan; Roquet, Fabien; Schneider, Tapio; Wills, Robert C.
2018-05-01
While the Atlantic Ocean is ventilated by high-latitude deep water formation and exhibits a pole-to-pole overturning circulation, the Pacific Ocean does not. This asymmetric global overturning pattern has persisted for the past 2–3 million years, with evidence for different ventilation modes in the deeper past. In the current climate, the Atlantic-Pacific asymmetry occurs because the Atlantic is more saline, enabling deep convection. To what extent the salinity contrast between the two basins is dominated by atmospheric processes (larger net evaporation over the Atlantic) or oceanic processes (salinity transport into the Atlantic) remains an outstanding question. Numerical simulations have provided support for both mechanisms; observations of the present climate support a strong role for atmospheric processes as well as some modulation by oceanic processes. A major avenue for future work is the quantification of the various processes at play to identify which mechanisms are primary in different climate states.
A Modeling Study of Deep Water Renewal in the Red Sea
NASA Astrophysics Data System (ADS)
Yao, F.; Hoteit, I.
2016-02-01
Deep water renewal processes in the Red Sea are examined in this study using a 50-year numerical simulation covering 1952-2001. The deep water in the Red Sea below the thermocline (~200 m) exhibits a near-uniform vertical structure in temperature and salinity, but geochemical tracer distributions, such as 14C and 3He, and dissolved oxygen concentrations indicate that the deep water is renewed on time scales as short as 36 years. The renewal process is accomplished through a deep overturning cell that consists of a southward bottom current and a northward returning current at depths of 400-600 m. Three source regions are proposed for the formation of the deep water: two deep outflows from the Gulfs of Aqaba and Suez and winter deep convection in the northern Red Sea. The MITgcm (MIT general circulation model), which has been used to simulate the shallow overturning circulations in the Red Sea, is configured in this study with increased resolution in the deep water. During the 50 years of simulation, artificial passive tracers added to the model indicate that the deep water in the Red Sea was only episodically renewed during some anomalously cold years; two significant episodes of deep water renewal are reproduced in the winters of 1983 and 1992, in accordance with reported historical hydrographic observations. During these renewal events, deep convection reaching the bottom of the basin occurred, which further facilitated deep sinking of the outflows from the Gulfs of Aqaba and Suez. The ensuing spreading of the newly formed deep water along the bottom caused upward displacements of the thermocline, which may have profound effects on the water exchanges through the Strait of Bab el Mandeb between the Red Sea and the Gulf of Aden and on the functioning of the ecosystem in the Red Sea by changing the vertical distributions of nutrients.
Endogenous System Microbes as Treatment Process ...
Monitoring the efficacy of treatment strategies to remove pathogens in decentralized systems remains a challenge. Evaluating log reduction targets by measuring pathogen levels is hampered by their sporadic and low occurrence rates. Fecal indicator bacteria are used in centralized systems to indicate the presence of fecal pathogens, but are ineffective decentralized treatment process indicators as they generally occur at levels too low to assess log reduction targets. System challenge testing by spiking with high loads of fecal indicator organisms, like MS2 coliphage, has limitations, especially for large systems. Microbes that are endogenous to the decentralized system, occur in high abundances and mimic removal rates of bacterial, viral and/or parasitic protozoan pathogens during treatment could serve as alternative treatment process indicators to verify log reduction targets. To identify abundant microbes in wastewater, the bacterial and viral communities were examined using deep sequencing. Building infrastructure-associated bacteria, like Zoogloea, were observed as dominant members of the bacterial community in graywater. In blackwater, bacteriophage of the order Caudovirales constituted the majority of contiguous sequences from the viral community. This study identifies candidate treatment process indicators in decentralized systems that could be used to verify log removal during treatment. The association of the presence of treatment process indic
DeepSynergy: predicting anti-cancer drug synergy with Deep Learning
Preuer, Kristina; Lewis, Richard P I; Hochreiter, Sepp; Bender, Andreas; Bulusu, Krishna C; Klambauer, Günter
2018-01-01
Abstract Motivation While drug combination therapies are a well-established concept in cancer treatment, identifying novel synergistic combinations is challenging due to the size of combinatorial space. However, computational approaches have emerged as a time- and cost-efficient way to prioritize combinations to test, based on recently available large-scale combination screening data. Recently, Deep Learning has had an impact in many research areas by achieving new state-of-the-art model performance. However, Deep Learning has not yet been applied to drug synergy prediction, which is the approach we present here, termed DeepSynergy. DeepSynergy uses chemical and genomic information as input information, a normalization strategy to account for input data heterogeneity, and conical layers to model drug synergies. Results DeepSynergy was compared to other machine learning methods such as Gradient Boosting Machines, Random Forests, Support Vector Machines and Elastic Nets on the largest publicly available synergy dataset with respect to mean squared error. DeepSynergy significantly outperformed the other methods with an improvement of 7.2% over the second best method at the prediction of novel drug combinations within the space of explored drugs and cell lines. At this task, the mean Pearson correlation coefficient between the measured and the predicted values of DeepSynergy was 0.73. Applying DeepSynergy for classification of these novel drug combinations resulted in a high predictive performance of an AUC of 0.90. Furthermore, we found that all compared methods exhibit low predictive performance when extrapolating to unexplored drugs or cell lines, which we suggest is due to limitations in the size and diversity of the dataset. We envision that DeepSynergy could be a valuable tool for selecting novel synergistic drug combinations. Availability and implementation DeepSynergy is available via www.bioinf.jku.at/software/DeepSynergy. Contact klambauer@bioinf.jku.at Supplementary information Supplementary data are available at Bioinformatics online. PMID:29253077
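As an illustration of the architecture outlined above, the minimal sketch below builds a synergy regressor from concatenated per-drug chemical descriptors and per-cell-line genomic features, with input normalization and fully connected layers whose widths taper toward the output ("conical" layers). All feature dimensions, layer widths, and the synthetic data are illustrative assumptions, not the published DeepSynergy configuration or its training data.

# Minimal sketch of a DeepSynergy-style regressor: normalized drug and cell-line
# features feed a tapering ("conical") multilayer perceptron that predicts a
# continuous synergy score. Sizes and data below are placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n_pairs, n_chem, n_genomic = 2000, 64, 128            # assumed feature sizes
X_drug_a = rng.normal(size=(n_pairs, n_chem))         # descriptors of drug A
X_drug_b = rng.normal(size=(n_pairs, n_chem))         # descriptors of drug B
X_cell = rng.normal(size=(n_pairs, n_genomic))        # cell-line genomic features
y_synergy = rng.normal(size=n_pairs)                  # placeholder synergy scores

X = np.hstack([X_drug_a, X_drug_b, X_cell])
X_tr, X_te, y_tr, y_te = train_test_split(X, y_synergy, random_state=0)

model = make_pipeline(
    StandardScaler(),                                 # input normalization step
    MLPRegressor(hidden_layer_sizes=(256, 128, 64),   # conical: widths shrink
                 max_iter=300, random_state=0),
)
model.fit(X_tr, y_tr)
print("test MSE:", mean_squared_error(y_te, model.predict(X_te)))

With real data, the synthetic arrays would be replaced by actual descriptor matrices and the model evaluated by cross-validation on held-out drug combinations and cell lines.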
Deep Ecology Education: Learning from Its Vaisnava Roots
ERIC Educational Resources Information Center
Haigh, Martin
2006-01-01
Deep ecology arises from the personal intuition that one's self is part of the world's environmental wholeness. This awareness may be constructed upon scientific foundations but it is more commonly thought a spiritual concept. Deep ecology pedagogy emerges from its three-step process of ecological Self-realization. This paper traces the roots of…
Ma, Weixing; Huang, Tinglin; Li, Xuan; Zhou, Zizhen; Li, Yang; Zeng, Kang
2015-01-01
Storm runoff events in the flooding season affect the water quality of reservoirs and increase risks to the water supply, but coping strategies have seldom been reported. The phenomenon of turbid current intrusion, in which water turbidity and anoxic conditions reappear after storm runoff and water quality deteriorates, was observed in the flooding season in the deep, canyon-shaped Heihe Reservoir. The objective of this work was to elucidate the effects of storm runoff on the Heihe Reservoir water quality and find a coping strategy. In this study, an intensive sampling campaign measuring water temperature, dissolved oxygen, turbidity, nutrients, and metals was conducted in the reservoir over a period of two years, and the water-lifting aerators were improved to achieve both single aeration and full-layer mixing and oxygenation functions by using different volumes of gas. The operation of the improved water-lifting aerators mixed the reservoir three months ahead of the natural mixing time, and good water quality was maintained during the induced mixing period, thereby extending the period of good water quality. The results can provide an effective coping strategy to improve the water quality of a source water reservoir and ensure the safety of drinking water. PMID:26184258
NASA Astrophysics Data System (ADS)
Machhammer, M.; Sommitsch, C.
2016-11-01
Research conducted in recent years has shown that heat-treatable Al-Mg-Si alloys (6xxx) have great potential concerning the design of lightweight car bodies. Compared with conventional deep drawing steels, their field of application is limited by a lower formability. In order to minimize the disadvantage of lower drawability, a short-term heat treatment (SHT) can be applied before the forming process. The SHT, conducted in selected areas on the initial blank, leads to a local reduction in strength, aiming to decrease critical stresses during the deep drawing process. For a successful SHT procedure, solid knowledge of the crucial process parameters, such as the design of the SHT layout, the SHT process time, and the maximum SHT temperature, is required. It should also be noted that the storage time between the SHT and the forming processes affects the mechanical properties of the SHT area. In this paper, the effect of diverse SHT process parameters and various storage time-frames on the major and minor strain state of a deep drawn part is discussed by evaluating the forming limit diagram. To achieve short heating times and a homogeneous temperature distribution, a one-sided contact heating tool was used for the heat treatment in this study.
Integrated piezoelectric actuators in deep drawing tools
NASA Astrophysics Data System (ADS)
Neugebauer, R.; Mainda, P.; Drossel, W.-G.; Kerschner, M.; Wolf, K.
2011-04-01
The production of car body panels is subject to process fluctuations, so a produced panel can be within tolerance or defective. To reduce the error rate, an intelligent deep drawing tool was developed at the Fraunhofer Institute for Machine Tools and Forming Technology IWU in cooperation with Audi and Volkswagen. Mechatronic components in a closed-loop control are the main differentiating factor between an intelligent and a conventional deep drawing tool. In combination with sensors for process monitoring, the intelligent tool uses piezoelectric actuators to actuate the deep drawing process. By enabling the use of sensors and actuators at the die, the forming tool transforms into a smart structure. The interface between sensors and actuators is realized with a closed-loop control. This research presents the experimental results with the piezoelectric actuators. For the analysis, a production-oriented forming tool meeting all automotive requirements was used. The actuators employed are monolithic multilayer actuators from the piezo injector system. In order to achieve the required force, the actuators are combined in a cluster. The cluster is redundant and economical. In addition to the detailed assembly structures, this research highlights an intensive analysis with the intelligent deep drawing tool.
The DEEP2 Galaxy Redshift Survey: Design, Observations, Data Reduction, and Redshifts
NASA Technical Reports Server (NTRS)
Newman, Jeffrey A.; Cooper, Michael C.; Davis, Marc; Faber, S. M.; Coil, Alison L; Guhathakurta, Puraga; Koo, David C.; Phillips, Andrew C.; Conroy, Charlie; Dutton, Aaron A.;
2013-01-01
We describe the design and data analysis of the DEEP2 Galaxy Redshift Survey, the densest and largest high-precision redshift survey of galaxies at z approx. 1 completed to date. The survey was designed to conduct a comprehensive census of massive galaxies, their properties, environments, and large-scale structure down to absolute magnitude M_B = -20 at z approx. 1 via approx. 90 nights of observation on the Keck telescope. The survey covers an area of 2.8 sq. deg divided into four separate fields observed to a limiting apparent magnitude of R_AB = 24.1. Objects with z approx. < 0.7 are readily identifiable using BRI photometry and rejected in three of the four DEEP2 fields, allowing galaxies with z > 0.7 to be targeted approx. 2.5 times more efficiently than in a purely magnitude-limited sample. Approximately 60% of eligible targets are chosen for spectroscopy, yielding nearly 53,000 spectra and more than 38,000 reliable redshift measurements. Most of the targets that fail to yield secure redshifts are blue objects that lie beyond z approx. 1.45, where the [O ii] 3727 Ang. doublet lies in the infrared. The DEIMOS 1200 line mm^-1 grating used for the survey delivers high spectral resolution (R approx. 6000), accurate and secure redshifts, and unique internal kinematic information. Extensive ancillary data are available in the DEEP2 fields, particularly in the Extended Groth Strip, which has evolved into one of the richest multiwavelength regions on the sky. This paper is intended as a handbook for users of the DEEP2 Data Release 4, which includes all DEEP2 spectra and redshifts, as well as for the DEEP2 DEIMOS data reduction pipelines. Extensive details are provided on object selection, mask design, biases in target selection and redshift measurements, the spec2d two-dimensional data-reduction pipeline, the spec1d automated redshift pipeline, and the zspec visual redshift verification process, along with examples of instrumental signatures or other artifacts that in some cases remain after data reduction. Redshift errors and catastrophic failure rates are assessed through more than 2000 objects with duplicate observations. Sky subtraction is essentially photon-limited even under bright OH sky lines; we describe the strategies that permitted this, based on high image stability, accurate wavelength solutions, and powerful B-spline modeling methods. We also investigate the impact of targets that appear to be single objects in ground-based targeting imaging but prove to be composite in Hubble Space Telescope data; they constitute several percent of targets at z approx. 1, approaching approx. 5%-10% at z > 1.5. Summary data are given that demonstrate the superiority of DEEP2 over other deep high-precision redshift surveys at z approx. 1 in terms of redshift accuracy, sample number density, and amount of spectral information. We also provide an overview of the scientific highlights of the DEEP2 survey thus far.
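The B-spline sky-modeling idea mentioned above can be illustrated with a toy example: fit a smooth spline to sky-dominated pixels as a function of wavelength, then subtract the evaluated model from an object spectrum. This is only a hedged sketch of the general technique under invented assumptions (wavelength grid, sky model, knot spacing); it is not the spec2d implementation.

# Toy B-spline sky modeling: fit a spline to sky-dominated pixels versus
# wavelength and subtract the model from the object spectrum. Illustrative only.
import numpy as np
from scipy.interpolate import splrep, splev

rng = np.random.default_rng(1)
wave = np.linspace(6500.0, 9000.0, 4000)              # Angstrom grid, assumed
sky_true = 10 + 50 * np.exp(-0.5 * ((wave - 7600) / 20.0) ** 2)   # fake sky feature
sky_obs = sky_true + rng.normal(scale=1.0, size=wave.size)

knots = wave[200:-200:25]                             # interior knots, assumed spacing
tck = splrep(wave, sky_obs, k=3, task=-1, t=knots)    # least-squares cubic B-spline

object_obs = sky_obs + 5.0 * np.exp(-0.5 * ((wave - 8100) / 2.0) ** 2)  # add a line
sky_model = splev(wave, tck)
sky_subtracted = object_obs - sky_model               # residual object spectrum
print("rms residual of sky fit:", np.std(sky_obs - sky_model))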
Kauffmann-Lacroix, C; Albouy-Llaty, M; Migeot, V; Contet-Audonneau, N
2011-09-01
The objective of the survey was to describe the practices of clinical laboratories with respect to cultures in medical mycology. We implemented this project among the members of the French Society for Medical Mycology (SFMM) to evaluate the analytical processes of the mycological examination in our laboratories. This preliminary study should help to suggest future French guidelines. A questionnaire regarding the processing of mycology analyses was sent to the 227 members of the SFMM in 2009. The data covered 21 types of samples, direct microscopic examination with or without staining and the reagents used, the number of culture media, the types of media (Sabouraud, Sabouraud antibiotic, Sabouraud cycloheximide and chromogenic medium), temperature and duration of incubation (days), and the existence of a first result before the end of the incubation period. The analytical processes were compared with those of a laboratory accredited according to EN ISO 15189. Great heterogeneity was observed in the 36 forms from 27 (75%) laboratories belonging to university hospitals, among the 38 existing in France. For deep samples, although two microscopic examinations were expected, only one was usually performed. A more sensitive technique was preferred to the wet mount for some samples. Routine samples are often inoculated on a chromogenic medium. For deep samples, two media are inoculated (a chromogenic medium and Sabouraud with antibiotics). When a single incubation temperature was used, 30°C was chosen. A temperature of 37°C was preferred for samples in which Candida spp. are selected. When two incubation temperatures were used, 27°C and 37°C were preferred. Each biologist can thus compare his or her procedures with those of the other laboratories and with a laboratory that is already accredited. The question is to find the best strategies for each medical mycology specimen; these will aid the process of accreditation according to EN ISO 15189, which now applies to all laboratories in Europe. Copyright © 2011 Elsevier Masson SAS. All rights reserved.
NASA Astrophysics Data System (ADS)
Trevisan, L.; Illangasekare, T. H.; Rodriguez, D.; Sakaki, T.; Cihan, A.; Birkholzer, J. T.; Zhou, Q.
2011-12-01
Geological storage of carbon dioxide in deep geologic formations is being considered as a technical option to reduce greenhouse gas loading to the atmosphere. The processes associated with movement and stable trapping are complex in deep, naturally heterogeneous formations. Three primary mechanisms contribute to trapping: capillary entrapment due to immobilization of the supercritical CO2 within soil pores, dissolution of CO2 in the formation water, and mineralization. Natural heterogeneity in the formation is expected to affect all three mechanisms. A research project is in progress with the primary goal of improving our understanding of capillary and dissolution trapping during the injection and post-injection processes, focusing on formation heterogeneity. It is expected that this improved knowledge will help to develop site characterization methods targeted at obtaining the most critical parameters that capture the heterogeneity, in order to design strategies and schemes that maximize trapping. This research combines experiments at the laboratory scale with multiphase modeling to upscale relevant trapping processes to the field scale. This paper presents the results from a set of experiments that were conducted in intermediate-scale test tanks. Intermediate-scale testing provides an attractive alternative for investigating these processes under controlled conditions in the laboratory. Conducting these types of experiments is highly challenging, as methods have to be developed to extrapolate data from experiments conducted under ambient laboratory conditions to the high-temperature and high-pressure settings of deep geologic formations. We explored the use of a combination of surrogate fluids that have density and viscosity contrasts, solubility, and interfacial tension analogous to those of supercritical CO2-brine in deep formations. The extrapolation approach involves the use of dimensionless numbers such as the capillary number (Ca) and the Bond number (Bo). A set of experiments that captures some of the complexities of the geologic heterogeneity and injection scenarios is planned in a 4.8 m long tank. To test the experimental methods and instrumentation, a set of preliminary experiments was conducted in a smaller tank with dimensions 90 cm x 60 cm. The tank was packed to represent both homogeneous and heterogeneous conditions. Using the surrogate fluids, different injection scenarios were tested. Images of the migration plume showed the critical role that heterogeneity plays in stable entrapment. Destructive sampling at the end of the experiments provided data on the final saturation distributions. Preliminary analysis suggests the entrapment configuration is controlled by the large-scale heterogeneities as well as the pore-scale entrapment mechanisms. The data were used in a modeling analysis that is presented in a companion abstract.
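For reference, the two dimensionless groups invoked above for scaling the surrogate-fluid experiments are commonly defined as Ca = mu*u/sigma (viscous to capillary forces) and Bo = delta_rho*g*L^2/sigma (buoyancy to capillary forces). The short sketch below uses placeholder property values, not the study's measured surrogate-fluid properties.

# Capillary and Bond numbers for scaling surrogate-fluid tank experiments to
# supercritical CO2-brine conditions. Property values are illustrative only.
def capillary_number(mu_pa_s, u_m_s, sigma_n_m):
    """Ca = viscous forces / capillary forces."""
    return mu_pa_s * u_m_s / sigma_n_m

def bond_number(delta_rho_kg_m3, length_m, sigma_n_m, g_m_s2=9.81):
    """Bo = gravitational (buoyancy) forces / capillary forces."""
    return delta_rho_kg_m3 * g_m_s2 * length_m ** 2 / sigma_n_m

# Placeholder values: viscosity (Pa*s), velocity (m/s), interfacial tension (N/m),
# density contrast (kg/m^3), characteristic pore/grain length (m).
mu, u, sigma, drho, L = 1.0e-3, 1.0e-5, 0.03, 200.0, 1.0e-3
print("Ca =", capillary_number(mu, u, sigma))
print("Bo =", bond_number(drho, L, sigma))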
Climate, carbon cycling, and deep-ocean ecosystems.
Smith, K L; Ruhl, H A; Bett, B J; Billett, D S M; Lampitt, R S; Kaufmann, R S
2009-11-17
Climate variation affects surface ocean processes and the production of organic carbon, which ultimately comprises the primary food supply to the deep-sea ecosystems that occupy approximately 60% of the Earth's surface. Warming trends in atmospheric and upper ocean temperatures, attributed to anthropogenic influence, have occurred over the past four decades. Changes in upper ocean temperature influence stratification and can affect the availability of nutrients for phytoplankton production. Global warming has been predicted to intensify stratification and reduce vertical mixing. Research also suggests that such reduced mixing will enhance variability in primary production and carbon export flux to the deep sea. The dependence of deep-sea communities on surface water production has raised important questions about how climate change will affect carbon cycling and deep-ocean ecosystem function. Recently, unprecedented time-series studies conducted over the past two decades in the North Pacific and the North Atlantic at >4,000-m depth have revealed unexpectedly large changes in deep-ocean ecosystems significantly correlated to climate-driven changes in the surface ocean that can impact the global carbon cycle. Climate-driven variation affects oceanic communities from surface waters to the much-overlooked deep sea and will have impacts on the global carbon cycle. Data from these two widely separated areas of the deep ocean provide compelling evidence that changes in climate can readily influence deep-sea processes. However, the limited geographic coverage of these existing time-series studies stresses the importance of developing a more global effort to monitor deep-sea ecosystems under modern conditions of rapidly changing climate.
[Oncologic gynecology and the Internet].
Gizler, Robert; Bielanów, Tomasz; Kulikiewicz, Krzysztof
2002-11-01
The strategy of searching the World Wide Web for medical sites is presented in this article. Both "deep web" and "surface web" resources were searched. The 10 best sites related to gynecologic oncology, in the authors' opinion, are presented.
Development of a Hybrid Deep Drawing Process to Reduce Springback of AHSS
NASA Astrophysics Data System (ADS)
Boskovic, Vladimir; Sommitsch, Christoph; Kicin, Mustafa
2017-09-01
In the future, steel manufacturers will strive for the implementation of Advanced High Strength Steels (AHSS) in the automotive industry to reduce mass and improve structural performance. A key challenge is the definition of optimal and cost-effective processes as well as solutions to introduce complex steel products in cold forming. However, the application of these AHSS often leads to formability problems such as springback. One promising approach to minimize springback is stress relaxation through targeted heating of the material in the radius area after the deep drawing process. In this study, experiments are conducted on a Dual Phase (DP) and a Twinning Induced Plasticity (TWIP) steel to assess process feasibility. This work analyses the influence of various heat treatment temperatures on the springback reduction of deep drawn AHSS.
How School Climate Influences Teachers’ Emotional Exhaustion: The Mediating Role of Emotional Labor
Yao, Xiuping; Yao, Meilin; Zong, Xiaoli; Li, Yulan; Li, Xiying; Guo, Fangfang; Cui, Guanyu
2015-01-01
Currently, in China, improving the quality of teachers’ emotional labor has become an urgent need for most pre-kindergarten through 12th grade (p–12) schools because the new curriculum reform highlights the role of emotion in teaching. A total of 703 primary and high school teachers in Mainland China were investigated regarding their perceptions of school climate, emotional labor strategy and emotional exhaustion via questionnaires. The findings revealed that the teachers’ perceptions of the school climate negatively affected surface acting but positively affected deep acting. Surface acting positively predicted emotional exhaustion, and deep acting had no significant effect on emotional exhaustion. Moreover, emotional labor mediated the relationship between the teachers’ perceptions of the school climate and emotional exhaustion. Programs aimed at improving the school climate and the teachers’ use of appropriate emotional labor strategies should be implemented in schools in Mainland China. PMID:26457713
NASA Astrophysics Data System (ADS)
Shi, Bibo; Hou, Rui; Mazurowski, Maciej A.; Grimm, Lars J.; Ren, Yinhao; Marks, Jeffrey R.; King, Lorraine M.; Maley, Carlo C.; Hwang, E. Shelley; Lo, Joseph Y.
2018-02-01
Purpose: To determine whether domain transfer learning can improve the performance of deep features extracted from digital mammograms using a pre-trained deep convolutional neural network (CNN) in the prediction of occult invasive disease for patients with ductal carcinoma in situ (DCIS) on core needle biopsy. Method: In this study, we collected digital mammography magnification views for 140 patients with DCIS at biopsy, 35 of which were subsequently upstaged to invasive cancer. We utilized a deep CNN model that was pre-trained on two natural image data sets (ImageNet and DTD) and one mammographic data set (INbreast) as the feature extractor, hypothesizing that these data sets are increasingly more similar to our target task and will lead to better representations of deep features to describe DCIS lesions. Through a statistical pooling strategy, three sets of deep features were extracted using the CNNs at different levels of convolutional layers from the lesion areas. A logistic regression classifier was then trained to predict which tumors contain occult invasive disease. The generalization performance was assessed and compared using repeated random sub-sampling validation and receiver operating characteristic (ROC) curve analysis. Result: The best performance of deep features was from the CNN model pre-trained on INbreast, and the proposed classifier using this set of deep features was able to achieve a median classification performance of ROC-AUC equal to 0.75, which is significantly better (p <= 0.05) than the performance of deep features extracted using the ImageNet data set (ROC-AUC = 0.68). Conclusion: Transfer learning is helpful for learning a better representation of deep features, and improves the prediction of occult invasive disease in DCIS.
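The downstream classification step described above can be sketched as follows, assuming the pooled deep features have already been extracted from a pre-trained CNN: a logistic regression classifier is trained on the features and evaluated with repeated random sub-sampling and ROC-AUC. The arrays, dimensions, and split settings are placeholders for illustration, not the study's data or exact protocol.

# Sketch: deep features (assumed precomputed) -> logistic regression ->
# repeated random sub-sampling validation with ROC-AUC. Placeholder data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_lesions, n_deep_features = 140, 512                 # assumed dimensions
X = rng.normal(size=(n_lesions, n_deep_features))     # pooled CNN features
y = (rng.random(n_lesions) < 0.25).astype(int)        # 1 = upstaged to invasive

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
splitter = StratifiedShuffleSplit(n_splits=100, test_size=0.25, random_state=0)

aucs = []
for train_idx, test_idx in splitter.split(X, y):
    clf.fit(X[train_idx], y[train_idx])
    scores = clf.predict_proba(X[test_idx])[:, 1]
    aucs.append(roc_auc_score(y[test_idx], scores))
print("median ROC-AUC over random sub-samples:", np.median(aucs))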
Spitzer Space Telescope Sequencing Operations Software, Strategies, and Lessons Learned
NASA Technical Reports Server (NTRS)
Bliss, David A.
2006-01-01
The Space Infrared Telescope Facility (SIRTF) was launched in August 2003 and renamed the Spitzer Space Telescope in 2004. Two years of observing the universe in the wavelength range from 3 to 180 microns has yielded enormous scientific discoveries. Since this magnificent observatory has a limited lifetime, maximizing science viewing efficiency (i.e., maximizing time spent executing activities directly related to science observations) was the key operational objective. The strategy employed for maximizing science viewing efficiency was to optimize spacecraft flexibility, adaptability, and use of observation time. The selected approach involved implementation of a multi-engine sequencing architecture coupled with nondeterministic spacecraft and science execution times. This approach, though effective, added much complexity to uplink operations and sequence development. The Jet Propulsion Laboratory (JPL) manages Spitzer's operations. As part of the uplink process, Spitzer's Mission Sequence Team (MST) was tasked with processing observatory inputs from the Spitzer Science Center (SSC) into efficiently integrated, constraint-checked, and modeled review and command products which accommodated the complexity of non-deterministic spacecraft and science event executions without increasing operations costs. The MST developed processes and scripts, and participated in the adaptation of multi-mission core software to enable rapid processing of complex sequences. The MST was also tasked with developing a Downlink Keyword File (DKF) which could instruct Deep Space Network (DSN) stations on how and when to configure themselves to receive Spitzer science data. As MST and uplink operations developed, important lessons were learned that should be applied to future missions, especially those missions which employ command-intensive operations via a multi-engine sequence architecture.
The Effects of Interactive Graphics Analogies on Recall of Concepts in Science
1976-08-01
These results suggest that both shallow and deep processing, in the Craik and Lockhart sense, were induced by the postlesson condition, and that students obtained higher scores on a graphics posttest in Experiment III.
Quan, X; Yi, J; Ye, T H; Tian, S Y; Zou, L; Yu, X R; Huang, Y G
2013-04-01
Thirty volunteers randomly received either mild or deep propofol sedation, to assess its effect on explicit and implicit memory. Blood oxygen level-dependent functional magnetic resonance imaging during sedation examined brain activation in response to auditory word stimuli, and a process dissociation procedure was performed 4 h after scanning. Explicit memory formation did not occur in either group. Implicit memories were formed during mild but not deep sedation (p = 0.04). Mild propofol sedation inhibited superior temporal gyrus activation (Z value 4.37, voxel 167). Deep propofol sedation inhibited superior temporal gyrus (Z value 4.25, voxel 351), middle temporal gyrus (Z value 4.39, voxel 351) and inferior parietal lobule (Z value 5.06, voxel 239) activation. Propofol only abolishes implicit memory during deep sedation. The superior temporal gyrus is associated with explicit memory processing, while the formation of both implicit and explicit memories is associated with superior and middle temporal gyri and inferior parietal lobule activation. Anaesthesia © 2013 The Association of Anaesthetists of Great Britain and Ireland.
Ghoneim, Mohamed Tarek; Hussain, Muhammad Mustafa
2017-04-01
A highly manufacturable deep reactive ion etching based process involving a hybrid soft/hard mask process technology shows high aspect ratio complex geometry Lego-like silicon electronics formation enabling free-form (physically flexible, stretchable, and reconfigurable) electronic systems. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
ERIC Educational Resources Information Center
Akyol, Zehra; Garrison, D. Randy
2011-01-01
This paper focuses on deep and meaningful learning approaches and outcomes associated with online and blended communities of inquiry. Applying mixed methodology for the research design, the study used transcript analysis, learning outcomes, perceived learning, satisfaction, and interviews to assess learning processes and outcomes. The findings for…
Implementation of an Antenna Array Signal Processing Breadboard for the Deep Space Network
NASA Technical Reports Server (NTRS)
Navarro, Robert
2006-01-01
The Deep Space Network Large Array will replace/augment 34- and 70-meter antenna assets. The array will mainly be used to support NASA's deep space telemetry, radio science, and navigation requirements. The array project will deploy three complexes, in the western U.S., Australia, and at European longitude, each with 400 12-m downlink antennas, and a DSN central facility at JPL. This facility will remotely conduct all real-time monitoring and control for the network. Signal processing objectives include: providing a means to evaluate the performance of the Breadboard Array's antenna subsystem; designing and building prototype hardware; demonstrating and evaluating proposed signal processing techniques; and gaining experience with various technologies that may be used in the Large Array. Results are summarized.
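One generic array signal-processing technique relevant to the objectives above is coherent (delay-and-sum) combining: each antenna's output is phase-aligned toward the source and the outputs are summed, so the desired signal adds coherently while independent receiver noise averages down. The sketch below uses an invented one-dimensional geometry, a narrowband approximation, and a toy signal model; it does not describe the Breadboard Array design.

# Toy delay-and-sum combining: phase-align each antenna's complex baseband
# samples toward the source, then average. Geometry and signal are invented.
import numpy as np

rng = np.random.default_rng(0)
c = 299_792_458.0                                     # speed of light (m/s)
f_carrier = 8.4e9                                     # X-band carrier, assumed
n_ant, n_samp = 8, 4096
positions = rng.uniform(0.0, 200.0, size=n_ant)       # 1-D baselines (m), assumed
theta = np.deg2rad(30.0)                              # source direction, assumed

delays = positions * np.sin(theta) / c                # geometric delays (s)
t = np.arange(n_samp) / 1e6                           # 1 MHz complex sampling
signal = np.exp(2j * np.pi * 1e3 * t)                 # unit-amplitude baseband tone

# Narrowband approximation: each element sees a carrier-phase-shifted copy of
# the signal plus independent complex receiver noise.
data = np.array([signal * np.exp(-2j * np.pi * f_carrier * d) for d in delays])
data = data + rng.normal(size=data.shape) + 1j * rng.normal(size=data.shape)

weights = np.exp(2j * np.pi * f_carrier * delays)     # undo each element's phase
combined = (weights[:, None] * data).mean(axis=0)

noise_single = np.var(weights[0] * data[0] - signal)  # residual noise, one antenna
noise_combined = np.var(combined - signal)            # residual noise, combined
print("noise power, single antenna:", noise_single)
print("noise power after combining (about 1/8 of the above):", noise_combined)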
Effect of deep pressure input on parasympathetic system in patients with wisdom tooth surgery.
Chen, Hsin-Yung; Yang, Hsiang; Meng, Ling-Fu; Chan, Pei-Ying Sarah; Yang, Chia-Yen; Chen, Hsin-Ming
2016-10-01
Deep pressure input is used to normalize physiological arousal due to stress. Wisdom tooth surgery is an invasive dental procedure with high stress levels, and an alleviation strategy is rarely applied during extraction. In this study, we investigated the effects of deep pressure input on autonomic responses to wisdom tooth extraction in healthy adults. A randomized, controlled, crossover design was used for dental patients who were allocated to experimental and control groups that received treatment with or without deep pressure input, respectively. Autonomic indicators, namely the heart rate (HR), percentage of low-frequency (LF) HR variability (LF-HRV), percentage of high-frequency (HF) HRV (HF-HRV), and LF/HF HRV ratio (LF/HF-HRV), were assessed at the baseline, during wisdom tooth extraction, and in the posttreatment phase. Wisdom tooth extraction caused significant autonomic parameter changes in both groups; however, differential response patterns were observed between the two groups. In particular, deep pressure input in the experimental group was associated with higher HF-HRV and lower LF/HF-HRV during extraction compared with those in the control group. LF/HF-HRV measurement revealed balanced sympathovagal activation in response to deep pressure application. The results suggest that the application of deep pressure alters the response of HF-HRV and facilitates maintaining sympathovagal balance during wisdom tooth extraction. Copyright © 2016. Published by Elsevier B.V.
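For context, the frequency-domain indices reported above (LF%, HF%, LF/HF) are conventionally computed from the RR-interval series by resampling it evenly, estimating a power spectral density, and integrating over the standard LF (0.04-0.15 Hz) and HF (0.15-0.40 Hz) bands. The synthetic RR series and processing settings below are assumptions for illustration, not the study's analysis pipeline.

# Conventional frequency-domain HRV indices from an RR-interval series:
# resample the tachogram evenly, estimate the PSD with Welch's method, and
# integrate over standard bands. Synthetic data, illustrative settings only.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
rr = 0.8 + 0.05 * rng.standard_normal(300)            # RR intervals (s), synthetic
t_beats = np.cumsum(rr)

fs = 4.0                                              # resampling rate (Hz), assumed
t_even = np.arange(t_beats[0], t_beats[-1], 1.0 / fs)
rr_even = np.interp(t_even, t_beats, rr)

freqs, psd = welch(rr_even - rr_even.mean(), fs=fs, nperseg=256)

def band_power(lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[mask], freqs[mask])

lf = band_power(0.04, 0.15)
hf = band_power(0.15, 0.40)
total = band_power(0.0033, 0.40)
print("LF% =", 100 * lf / total, "HF% =", 100 * hf / total, "LF/HF =", lf / hf)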
Zhang, Chao; Jia, Yongzhong; Jing, Yan; Wang, Huaiyou; Hong, Kai
2014-08-01
The infrared spectrum of a deep eutectic solvent of choline chloride and magnesium chloride hexahydrate was measured by FTIR spectroscopy and analyzed with the aid of DFT calculations. The main chemical species and molecular structures in the deep eutectic solvent, namely [MgCl_m(H2O)_(6-m)]^(2-m) and [Ch_xCl_y]^(x+y) complexes, were identified, and the magnesium complex ion that is active during the electrochemical process was determined. The mechanism of the electrochemical process in the deep eutectic solvent of choline chloride and magnesium chloride hexahydrate was explained by combining theoretical calculations and experiments. In addition, based on our results we propose a new system for the dehydration study of magnesium chloride hexahydrate.
NASA Astrophysics Data System (ADS)
Gan, Wen-Cong; Shu, Fu-Wen
The quantum many-body problem, with its exponentially large number of degrees of freedom, can be reduced to a tractable computational form by neural network methods [G. Carleo and M. Troyer, Science 355 (2017) 602, arXiv:1606.02318]. The power of the deep neural network (DNN) based on deep learning is clarified by mapping it to the renormalization group (RG), which may shed light on the holographic principle by identifying a sequence of RG transformations with the AdS geometry. In this paper, we show that any network which reflects the RG process has an intrinsic hyperbolic geometry, and we discuss the structure of entanglement encoded in the graph of the DNN. We find that the entanglement structure of the DNN is of the Ryu-Takayanagi form. Based on these facts, we argue that the emergence of a holographic gravitational theory is related to the deep learning process of the quantum field theory.
NASA Astrophysics Data System (ADS)
Candra, S.; Batan, I. M. L.; Berata, W.; Pramono, A. S.
2017-11-01
This paper presents a mathematical approach to the minimum blank holder force required to prevent wrinkling in the deep drawing of a cylindrical cup. Based on the maximum minor-major strain ratio, the slab method was applied to model the minimum variable blank holder force (VBHF), and the result was compared with an FE simulation. Tin steel sheet of grade T4-CA with a thickness of 0.2 mm was used in this study. The model of minimum VBHF can be used as a simple reference for preventing wrinkling in deep drawing.
Major technological innovations introduced in the large antennas of the Deep Space Network
NASA Technical Reports Server (NTRS)
Imbriale, W. A.
2002-01-01
The NASA Deep Space Network (DSN) is the largest and most sensitive scientific, telecommunications and radio navigation network in the world. Its principal responsibilities are to provide communications, tracking, and science services to most of the world's spacecraft that travel beyond low Earth orbit. The network consists of three Deep Space Communications Complexes. Each of the three complexes consists of multiple large antennas equipped with ultra sensitive receiving systems. A centralized Signal Processing Center (SPC) remotely controls the antennas, generates and transmits spacecraft commands, and receives and processes the spacecraft telemetry.
Process Studies on Laser Welding of Copper with Brilliant Green and Infrared Lasers
NASA Astrophysics Data System (ADS)
Engler, Sebastian; Ramsayer, Reiner; Poprawe, Reinhart
Copper materials are classified as difficult to weld with state-of-the-art lasers. High thermal conductivity in combination with low absorption at room temperature requires high intensities to reach a deep penetration welding process. The low absorption also causes high sensitivity to variations in surface conditions. Green laser radiation shows considerably higher absorption at room temperature, which reduces the threshold intensity for deep penetration welding significantly. The influence of the green wavelength on energy coupling during heat conduction welding and deep penetration welding, as well as its influence on the weld shape, has been investigated.
Cohen, Michael S.; Rissman, Jesse; Suthana, Nanthia A.; Castel, Alan D.; Knowlton, Barbara J.
2014-01-01
A number of prior fMRI studies have focused on the ways in which the midbrain dopaminergic reward system co-activates with hippocampus to potentiate memory for valuable items. However, another means by which people could selectively remember more valuable to-be-remembered items is to be selective in their use of effective but effortful encoding strategies. To broadly examine the neural mechanisms of value on subsequent memory, we used fMRI to examine how differences in brain activity at encoding as a function of value relate to subsequent free recall for words. Each word was preceded by an arbitrarily assigned point value, and participants went through multiple study-test cycles with feedback on their point total at the end of each list, allowing for sculpting of cognitive strategies. We examined the correlation between value-related modulation of brain activity and participants’ selectivity index, a measure of how close participants were to their optimal point total given the number of items recalled. Greater selectivity scores were associated with greater differences in activation of semantic processing regions, including left inferior frontal gyrus and left posterior lateral temporal cortex, during encoding of high-value words relative to low-value words. Although we also observed value-related modulation within midbrain and ventral striatal reward regions, our fronto-temporal findings suggest that strategic engagement of deep semantic processing may be an important mechanism for selectively encoding valuable items. PMID:24683066
Nakhaei, Maryam; Khankeh, Hamid Reza; Masoumi, Gholam Reza; Hosseini, Mohammad Ali; Parsa-Yekta, Zohreh
2016-01-01
Background Since life recovery after disasters is a subjective and multifaceted construct influenced by different factors, and survivors' main concerns and experiences are not clear, the researchers intended to explore this process. Materials and Methods This study was conducted in 2011-2014 based on the grounded theory approach. Participants were selected by purposeful sampling followed by theoretical sampling to achieve conceptual and theoretical saturation. Data were collected through interviews, observation, focus group discussion, and document reviews. Data were analyzed by Strauss and Corbin's (2008) recommended approach. Results Transcribed data from 26 interviews (managers, health care providers, and receivers), field notes, and other documents were analyzed, and 1,652 open codes were identified. The codes were categorized, using constant comparative analysis, into five main categories including reactive exposure, subsiding emotions, need for comprehensive health recovery, improvement of normalization (new normality achievement), and contextual factors. The process of life recovery after disaster was also explored. Conclusions The results provide a deep understanding of participants' experiences after disaster. The path of life recovery after disasters involves participants' striving to achieve a comprehensive health recovery, which starts with the need for all-inclusive health recovery as a main concern; this is the motivator for a responding strategy. This strategy is participatory, and the process is progressive; achievement of a new normality is the final goal, with new development and levels of empowerment. PMID:27703797
Deep Crustal Melting and the Survival of Continental Crust
NASA Astrophysics Data System (ADS)
Whitney, D.; Teyssier, C. P.; Rey, P. F.; Korchinski, M.
2017-12-01
Plate convergence involving continental lithosphere leads to crustal melting, which ultimately stabilizes the crust because it drives rapid upward flow of hot deep crust, followed by rapid cooling at shallow levels. Collision drives partial melting during crustal thickening (at 40-75 km) and/or continental subduction (at 75-100 km). These depths are not typically exceeded by crustal rocks that are exhumed in each setting because partial melting significantly decreases viscosity, facilitating upward flow of deep crust. Results from numerical models and nature indicate that deep crust moves laterally and then vertically, crystallizing at depths as shallow as 2 km. Deep crust flows en masse, without significant segregation of melt into magmatic bodies, over tens of kilometers of vertical transport. This is a major mechanism by which deep crust is exhumed and is therefore a significant process of heat and mass transfer in continental evolution. The result of vertical flow of deep, partially molten crust is a migmatite dome. When lithosphere is under extension or transtension, the deep crust is mobilized by faulting of the brittle upper crust, and the flow of deep crust in migmatite domes traverses nearly the entire thickness of orogenic crust in <10 million years. This cycle of burial, partial melting, rapid ascent, and crystallization/cooling preserves the continents from being recycled into the mantle by convergent tectonic processes over geologic time. Migmatite domes commonly preserve a record of high-T - low-P metamorphism. Domes may also contain rocks or minerals that record high-T - high-P conditions, including high-P metamorphism broadly coeval with host migmatite, evidence for the deep crustal origin of migmatite. There exists a spectrum of domes, from entirely deep-sourced to mixtures of deep and shallow sources. Controlling factors in deep vs. shallow sources are relative densities of crustal layers and rate of extension: fast extension (cm/yr) promotes efficient ascent of deep crust, whereas slow extension (mm/yr) produces significantly less exhumation. Recognition of the importance of migmatite (gneiss) domes as archives of orogenic deep crust is applicable to determining the chemical and physical properties of continental crust, as well as mechanisms and timescales of crustal differentiation.
Integrative and Deep Learning through a Learning Community: A Process View of Self
ERIC Educational Resources Information Center
Mahoney, Sandra; Schamber, Jon
2011-01-01
This study investigated deep learning produced in a community of general education courses. Student speeches on liberal education were analyzed for discovering a grounded theory of ideas about self. The study found that learning communities cultivate deep, integrative learning that makes the value of a liberal education relevant to students.…
ERIC Educational Resources Information Center
Hamm, Simon; Robertson, Ian
2010-01-01
This research tests the proposition that the integration of a multimedia assessment activity into a Diploma of Events Management program promotes a deep learning approach. Firstly, learners' preferences for deep or surface learning were evaluated using the revised two-factor Study Process Questionnaire. Secondly, after completion of an assessment…
Strategies for achieving healthy energy balance among African Americans in the Mississippi Delta.
Parham, Groesbeck P; Scarinci, Isabel C
2007-10-01
Low-income African Americans who live in rural areas of the Deep South are particularly vulnerable to diseases associated with unhealthy energy imbalance. The Centers for Disease Control and Prevention (CDC) has suggested various physical activity strategies to achieve healthy energy balance. Our objective was to conduct formal, open-ended discussions with low-income African Americans in the Mississippi Delta to determine 1) their dietary habits and physical activity levels, 2) their attitudes toward CDC's suggested physical activity strategies, and 3) their suggestions on how to achieve CDC's strategies within their own environment. A qualitative method (focus groups) was used to conduct the study during 2005. Prestudy meetings were held with African American lay health workers to formulate a focus group topic guide, establish inclusion criteria for focus group participants, select meeting sites and times, and determine group segmentation guidelines. Focus groups were divided into two phases. All discussions and focus group meetings were held in community centers within African American neighborhoods in the Mississippi Delta and were led by trained African American moderators. Phase I focus groups identified the following themes: overeating, low self-esteem, low income, lack of physical exercise, unhealthy methods of food preparation, a poor working definition of healthy energy balance, and superficial knowledge of strategies for achieving healthy energy balance. Phase 2 focus groups identified a preference for social support-based strategies for increasing physical activity levels. Energy balance strategies targeting low-income, rural African Americans in the Deep South may be more effective if they emphasize social interaction at the community and family levels and incorporate the concept of community volunteerism.
Orbit determination of highly elliptical Earth orbiters using improved Doppler data-processing modes
NASA Technical Reports Server (NTRS)
Estefan, J. A.
1995-01-01
A navigation error covariance analysis of four highly elliptical Earth orbits is described, with apogee heights ranging from 20,000 to 76,800 km and perigee heights ranging from 1,000 to 5,000 km. This analysis differs from earlier studies in that improved navigation data-processing modes were used to reduce the radio metric data. For this study, X-band (8.4-GHz) Doppler data were assumed to be acquired from two Deep Space Network radio antennas and reconstructed orbit errors propagated over a single day. Doppler measurements were formulated as total-count phase measurements and compared to the traditional formulation of differenced-count frequency measurements. In addition, an enhanced data-filtering strategy was used, which treated the principal ground system calibration errors affecting the data as filter parameters. Results suggest that a 40- to 60-percent accuracy improvement may be achievable over traditional data-processing modes in reconstructed orbit errors, with a substantial reduction in reconstructed velocity errors at perigee. Historically, this has been a regime in which stringent navigation requirements have been difficult to meet by conventional methods.
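For reference, the two Doppler formulations contrasted above differ in the observable that is processed: the traditional differenced-count frequency observable is the accumulated carrier phase differenced over a count interval, whereas the total-count phase formulation retains the accumulated phase itself. In standard textbook notation (an assumption here, not a quotation from the report), with \phi the accumulated Doppler phase and T_c the count time,

\bar{F}(t_k) = \frac{\phi(t_k + T_c) - \phi(t_k)}{T_c},

so the total-count phase mode processes \phi(t_k) directly rather than its finite differences.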
3D active edge silicon sensors: Device processing, yield and QA for the ATLAS-IBL production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Da Vià, Cinzia; Boscardil, Maurizio; Dalla Betta, GianFranco
2013-01-01
3D silicon sensors, where plasma micromachining is used to etch deep narrow apertures in the silicon substrate to form electrodes of PIN junctions, were successfully manufactured in facilities in Europe and the USA. In 2011 the technology underwent a qualification process to establish its maturity for medium-scale production for the construction of a pixel layer for vertex detection, the Insertable B-Layer (IBL), at the CERN-LHC ATLAS experiment. The IBL collaboration, following the recommendation of the review panel, decided to complete the production of planar and 3D sensors and endorsed the proposal to build enough modules for a mixed IBL sensor scenario in which 3D modules, amounting to 25% of the total, populate the forward and backward parts of each stave. The production of planar sensors will also allow coverage of 100% of the IBL, in case that option was required. This paper describes the processing strategy which allowed successful 3D sensor production, some of the Quality Assurance (QA) tests performed during the pre-production phase, and the production yield to date.
Fuel consumption optimization for smart hybrid electric vehicle during a car-following process
NASA Astrophysics Data System (ADS)
Li, Liang; Wang, Xiangyu; Song, Jian
2017-03-01
Hybrid electric vehicles (HEVs) offer large potential to save energy and reduce emissions, and smart vehicles bring great convenience and safety for drivers. By combining these two technologies, vehicles may achieve excellent performance in terms of dynamics, economy, environmental friendliness, safety, and comfort. Hence, a smart hybrid electric vehicle (s-HEV) is selected as the platform in this paper to study a car-following process while optimizing fuel consumption. The whole process is a multi-objective optimization problem whose solution is not just adding an energy management strategy (EMS) to an adaptive cruise control (ACC), but a deep fusion of these two methods. The problem has more constraints, objectives, and system states, which may result in a larger computational burden. Therefore, a novel fuel consumption optimization algorithm based on model predictive control (MPC) is proposed, and search techniques are adopted in the receding-horizon optimization to reduce the computational burden. Simulations are carried out, and the results indicate that the fuel consumption of the proposed method is lower than that of the ACC+EMS method while car-following performance is maintained.
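A minimal receding-horizon sketch of the kind of controller described above follows: at each step, the ego acceleration sequence over a short horizon is optimized against a fuel-use proxy plus gap-keeping and comfort terms, only the first acceleration is applied, and the optimization is repeated. The dynamics, cost weights, fuel proxy, and constant-lead-speed assumption are placeholders, not the paper's MPC formulation.

# Toy MPC-style car-following loop: optimize an acceleration plan over a short
# horizon, apply the first move, re-optimize. All numbers are illustrative.
import numpy as np
from scipy.optimize import minimize

dt, N = 0.5, 10                                       # time step (s), horizon length

def rollout_cost(a_seq, gap, v_ego, v_lead):
    """Cost of an acceleration plan: fuel proxy + gap tracking + comfort."""
    cost = 0.0
    for a in a_seq:
        v_ego = max(0.0, v_ego + a * dt)
        gap += (v_lead - v_ego) * dt                  # lead speed assumed constant
        fuel_proxy = max(a, 0.0) * v_ego              # positive tractive power proxy
        cost += 1.0 * fuel_proxy + 0.5 * (gap - 30.0) ** 2 + 0.1 * a ** 2
    return cost

def mpc_step(gap, v_ego, v_lead):
    res = minimize(lambda a: rollout_cost(a, gap, v_ego, v_lead),
                   x0=np.zeros(N), bounds=[(-3.0, 2.0)] * N, method="L-BFGS-B")
    return res.x[0]                                   # apply only the first action

gap, v_ego, v_lead = 50.0, 18.0, 20.0                 # invented initial conditions
for _ in range(20):
    a0 = mpc_step(gap, v_ego, v_lead)
    v_ego = max(0.0, v_ego + a0 * dt)
    gap += (v_lead - v_ego) * dt
print("final gap (m):", round(gap, 1), "ego speed (m/s):", round(v_ego, 1))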
The giant deep-sea octopus Haliphron atlanticus forages on gelatinous fauna
Hoving, H.J.T.; Haddock, S.H.D.
2017-01-01
Feeding strategies and predator-prey interactions of many deep-sea pelagic organisms are still unknown. This is also true for pelagic cephalopods, some of which are very abundant in oceanic ecosystems and which are known for their elaborate behaviors and central role in many foodwebs. We report on the first observations of the giant deep-sea octopus Haliphron atlanticus with prey. Using remotely operated vehicles, we saw these giant octopods holding medusae in their arms. One of the medusae could be identified as Phacellophora camtschatica (the egg-yolk jelly). Stomach content analysis confirmed predation on cnidarians and gelatinous organisms. The relationship between medusae and H. atlanticus is discussed, also in comparison with other species of the Argonautoidea, all of which have close relationships with gelatinous zooplankton. PMID:28344325
Hydride vapor phase GaN films with reduced density of residual electrons and deep traps
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polyakov, A. Y., E-mail: aypolyakov@gmail.com; Smirnov, N. B.; Govorkov, A. V.
2014-05-14
Electrical properties and deep electron and hole trap spectra are compared for undoped n-GaN films grown by hydride vapor phase epitaxy (HVPE) in the regular process (standard HVPE samples) and in an HVPE process optimized to decrease the concentration of residual donor impurities (improved HVPE samples). It is shown that the residual donor density can be reduced by optimization from ~10^17 cm^-3 to (2-5) × 10^14 cm^-3. The density of deep hole traps and deep electron traps decreases with decreasing donor density, so that the concentration of deep hole traps in the improved samples is reduced to ~5 × 10^13 cm^-3, versus 2.9 × 10^16 cm^-3 in the standard samples, with a similar decrease in the electron trap concentration.
Impacts of Groundwater Constraints on Saudi Arabia's Low-Carbon Electricity Supply Strategy.
Parkinson, Simon C; Djilali, Ned; Krey, Volker; Fricko, Oliver; Johnson, Nils; Khan, Zarrar; Sedraoui, Khaled; Almasoud, Abdulrahman H
2016-02-16
Balancing groundwater depletion, socioeconomic development and food security in Saudi Arabia will require policy that promotes expansion of unconventional freshwater supply options, such as wastewater recycling and desalination. As these processes consume more electricity than conventional freshwater supply technologies, Saudi Arabia's electricity system is vulnerable to groundwater conservation policy. This paper examines strategies for adapting to long-term groundwater constraints in Saudi Arabia's freshwater and electricity supply sectors with an integrated modeling framework. The approach combines electricity and freshwater supply planning models across provinces to provide an improved representation of coupled infrastructure systems. The tool is applied to study the interaction between policy aimed at a complete phase-out of nonrenewable groundwater extraction and concurrent policy aimed at achieving deep reductions in electricity sector carbon emissions. We find that transitioning away from nonrenewable groundwater use by the year 2050 could increase electricity demand by more than 40% relative to 2010 conditions, and require investments similar to strategies aimed at transitioning away from fossil fuels in the electricity sector. Higher electricity demands under groundwater constraints reduce flexibility of supply side options in the electricity sector to limit carbon emissions, making it more expensive to fulfill climate sustainability objectives. The results of this analysis underscore the importance of integrated long-term planning approaches for Saudi Arabia's electricity and freshwater supply systems.
Nutritional strategies of the hydrothermal ecosystem bivalves
NASA Astrophysics Data System (ADS)
Le Pennec, Marcel; Donval, Anne; Herry, Angèle
Studies of deep-sea hydrothermal bivalves have revealed that the species, which are strictly dependent upon the interstitial fluid emissions, derive their food indirectly via symbiotic relationships with chemosynthetic bacteria present in their gill tissues. As the gill plays the main trophic role, structural and ultrastructural modifications occur in the digestive tract. Scanning and transmission electron microscope studies reveal that the digestive systems of species belonging to the genera Calyptogena, Bathymodiolus and Bathypecten show anatomical differences. In Calyptogena, the reduction of several parts of the digestive tract and the stomach contents, which are either empty or full according to the various species examined, indicate that the digestive system is hardly, if at all, functional. In Bathymodiolus, the labial palps are well developed, the stomach is always full of particles, and the two cellular types, digestive and secretory, are present in the digestive gland. All these characteristics indicate that the digestive system is functional. In Bathypecten, the digestive tract is well developed and it seems that it plays the main trophic role. We conclude that the nutritional strategies of the hydrothermal vent bivalves are quite varied. They range from a normal trophic process, through a mixotrophic diet, to one based purely on chemoautotrophic bacteria. The strategy of each species is adapted to and influences its distribution.
Role of bond adaptability in the passivation of colloidal quantum dot solids.
Thon, Susanna M; Ip, Alexander H; Voznyy, Oleksandr; Levina, Larissa; Kemp, Kyle W; Carey, Graham H; Masala, Silvia; Sargent, Edward H
2013-09-24
Colloidal quantum dot (CQD) solids are attractive materials for photovoltaic devices due to their low-cost solution-phase processing, high absorption cross sections, and their band gap tunability via the quantum size effect. Recent advances in CQD solar cell performance have relied on new surface passivation strategies. Specifically, cadmium cation passivation of surface chalcogen sites in PbS CQDs has been shown to contribute to lowered trap state densities and improved photovoltaic performance. Here we deploy a generalized solution-phase passivation strategy as a means to improving CQD surface management. We connect the effects of the choice of metal cation on solution-phase surface passivation, film-phase trap density of states, minority carrier mobility, and photovoltaic power conversion efficiency. We show that trap passivation and midgap density of states determine photovoltaic device performance and are strongly influenced by the choice of metal cation. Supported by density functional theory simulations, we propose a model for the role of cations, a picture wherein metals offering the shallowest electron affinities and the greatest adaptability in surface bonding configurations eliminate both deep and shallow traps effectively even in submonolayer amounts. This work illustrates the importance of materials choice in designing a flexible passivation strategy for optimum CQD device performance.
Developing and evaluating predictive strategies to elucidate the mode of biological activity of environmental chemicals is a major objective of the concerted efforts of the US-EPA's computational toxicology program.
Neurosurgery education: the pursuit of excellence.
Benzel, Edward C
2010-01-01
The pursuit of excellence in education is a noble endeavor. Such has been the object of education and the goal of educators for eons. Neurosurgery education is no different from other domains in this regard. As with any discipline, this pursuit is complex and obligatorily multifaceted. It involves the use of what is often a broad and deep foundation of experience and knowledge. On this foundation, a modern and evolving infrastructure/suprastructure should be developed and nurtured. Once the infrastructure/suprastructure has taken form, a resident education plan can be derived. This plan, once enacted and executed, should be revisited, revised, and re-executed over and over again. One should never become satisfied with the status quo. A continued search for strategies and tools that achieve improvements over prior renditions of the education plan is mandatory if we hope to perpetually upgrade our process of education. Neurosurgical educators should seek criticism, admit mistakes, and modify educational behaviors accordingly. A strategy for achieving these goals regarding the pursuit of excellence in neurosurgical education is described in the pages that follow.
Enhanced orbit determination filter sensitivity analysis: Error budget development
NASA Technical Reports Server (NTRS)
Estefan, J. A.; Burkhart, P. D.
1994-01-01
An error budget analysis is presented that quantifies the effects of different error sources in the orbit determination process when the recently developed enhanced orbit determination filter is used to reduce radio metric data. The enhanced filter strategy differs from more traditional filtering methods in that nearly all of the principal ground system calibration errors affecting the data are represented as filter parameters. Error budget computations were performed for a Mars Observer interplanetary cruise scenario for three cases: one in which only X-band (8.4-GHz) Doppler data were used to determine the spacecraft's orbit, one in which X-band ranging data were used exclusively, and a combined case in which ranging data were used in addition to the Doppler data. In all three cases, the filter model was assumed to be a correct representation of the physical world. Random nongravitational accelerations were found to be the largest source of error contributing to the individual error budgets. Other significant contributors, depending on the data strategy used, were solar-radiation pressure coefficient uncertainty, random Earth-orientation calibration errors, and Deep Space Network (DSN) station location uncertainty.
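As an illustration of the filtering idea only (not the JPL implementation), the sketch below augments a toy Kalman filter's state with a station calibration bias, so the calibration error is estimated alongside the trajectory rather than left as unmodeled noise. All dynamics, noise levels, and measurement values are assumptions made for the example.

```python
# Toy Kalman filter with a calibration bias carried as an estimated state.
import numpy as np

dt = 60.0
# State: [position, velocity, range_bias]; the bias models a ground calibration error.
F = np.array([[1.0, dt, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])           # bias evolves as a slow random walk via Q
H = np.array([[1.0, 0.0, 1.0]])           # measured range = position + calibration bias
Q = np.diag([1e-4, 1e-6, 1e-8])           # process noise, incl. small bias drift
R = np.array([[25.0]])                    # measurement noise variance

x = np.zeros(3)
P = np.diag([1e6, 1e2, 1e2])

def kalman_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(3) - K @ H) @ P
    return x, P

for z in [1000.3, 1060.1, 1121.0]:        # fabricated range measurements
    x, P = kalman_step(x, P, np.array([z]))
print(x)  # estimated position, velocity, and range bias
```

The error-budget exercise in the paper then asks how much each such error source, whether estimated or not, contributes to the final orbit uncertainty.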
Del Giudice, G; Padulano, R; Siciliano, D
2016-01-01
The lack of geometrical and hydraulic information about sewer networks often precludes the adoption of in-depth modeling tools for deriving prioritization strategies for funds management. The present paper describes a novel statistical procedure for defining a prioritization scheme for preventive maintenance strategies based on a small sample of failure data collected by the Sewer Office of the Municipality of Naples (IT). Novel aspects include, among others, treating sewer parameters as continuous statistical variables and accounting for their interdependences. After a statistical analysis of maintenance interventions, the most important available factors affecting the process are selected and their mutual correlations identified. Then, after a Box-Cox transformation of the original variables, a methodology is provided for evaluating a vulnerability map of the sewer network by adopting a joint multivariate normal distribution with different parameter sets. The goodness-of-fit is eventually tested for each distribution by means of a multivariate plotting position. The developed methodology is expected to assist municipal engineers in identifying critical sewers and prioritizing sewer inspections in order to fulfill rehabilitation requirements.
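The sketch below illustrates, on synthetic data and with assumed variable names, the general shape of such a procedure: Box-Cox transform each sewer parameter, fit a joint multivariate normal that retains the correlations, and rank candidate pipes by the fitted density. It is not the authors' code, and the parameters are placeholders.

```python
# Illustrative Box-Cox + joint multivariate normal vulnerability scoring.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic failure records: [pipe age (yr), diameter (mm), depth (m)] - assumptions.
failures = np.column_stack([
    rng.gamma(6.0, 8.0, 200),
    rng.gamma(4.0, 150.0, 200),
    rng.gamma(3.0, 1.2, 200),
])

# Box-Cox each variable (requires strictly positive data); keep the fitted lambdas.
transformed, lambdas = [], []
for col in failures.T:
    t, lam = stats.boxcox(col)
    transformed.append(t)
    lambdas.append(lam)
transformed = np.column_stack(transformed)

# Fit a joint multivariate normal, preserving correlations between parameters.
mu = transformed.mean(axis=0)
cov = np.cov(transformed, rowvar=False)
mvn = stats.multivariate_normal(mean=mu, cov=cov)

def vulnerability_score(pipes):
    """Higher density under the fitted failure distribution -> higher priority."""
    t = np.column_stack([stats.boxcox(pipes[:, i], lmbda=lambdas[i])
                         for i in range(pipes.shape[1])])
    return mvn.pdf(t)

candidate_pipes = np.array([[45.0, 300.0, 2.5], [5.0, 800.0, 1.0]])
print(vulnerability_score(candidate_pipes))
```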
Recent advances in methods for the analysis of protein O-glycosylation at the proteome level.
You, Xin; Qin, Hongqiang; Ye, Mingliang
2018-01-01
O-Glycosylation, which refers to glycosylation of the hydroxyl group of the side chains of Serine/Threonine/Tyrosine residues, is one of the most common post-translational modifications. Compared with N-linked glycosylation, O-glycosylation is less explored because of its complex structure and relatively low abundance. Recently, O-glycosylation has drawn increasing attention for its various functions in many sophisticated biological processes. To obtain a deep understanding of O-glycosylation, many efforts have been devoted to developing effective strategies to analyze the two most abundant types of O-glycosylation, i.e. O-N-acetylgalactosamine and O-N-acetylglucosamine glycosylation. In this review, we summarize the proteomics workflows used to analyze these two types of O-glycosylation. For the large-scale analysis of mucin-type glycosylation, glycan simplification strategies, including the "SimpleCell" technology, are introduced. A variety of enrichment methods, including lectin affinity chromatography, hydrophilic interaction chromatography, hydrazide chemistry, and chemoenzymatic methods, are introduced for the proteomics analysis of O-N-acetylgalactosamine and O-N-acetylglucosamine glycosylation. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Pastur-Romay, Lucas Antón; Cedrón, Francisco; Pazos, Alejandro; Porto-Pazos, Ana Belén
2016-08-11
Over the past decade, Deep Artificial Neural Networks (DNNs) have become the state-of-the-art algorithms in Machine Learning (ML), speech recognition, computer vision, natural language processing and many other tasks. This was made possible by advances in Big Data, Deep Learning (DL) and drastically increased chip processing power, especially in general-purpose graphical processing units (GPGPUs). All this has created a growing interest in making the most of the potential offered by DNNs in almost every field. This work presents an overview of the main DNN architectures and their usefulness in Pharmacology and Bioinformatics. The featured applications are: drug design, virtual screening (VS), Quantitative Structure-Activity Relationship (QSAR) research, protein structure prediction and genomics (and other omics) data mining. The future need for neuromorphic hardware for DNNs is also discussed, and the two most advanced chips are reviewed: IBM TrueNorth and SpiNNaker. In addition, this review points out the importance of considering not only neurons but also glial cells, given the proven importance of astrocytes, a type of glial cell that contributes to information processing in the brain; DNNs and neuromorphic chips should include them as well. Deep Artificial Neuron-Astrocyte Networks (DANAN) could overcome the difficulties in architecture design, learning process and scalability of current ML methods.
Dynamic autoinoculation and the microbial ecology of a deep water hydrocarbon irruption
Valentine, David L.; Mezić, Igor; Maćešić, Senka; Črnjarić-Žic, Nelida; Ivić, Stefan; Hogan, Patrick J.; Fonoberov, Vladimir A.; Loire, Sophie
2012-01-01
The irruption of gas and oil into the Gulf of Mexico during the Deepwater Horizon event fed a deep sea bacterial bloom that consumed hydrocarbons in the affected waters, formed a regional oxygen anomaly, and altered the microbiology of the region. In this work, we develop a coupled physical–metabolic model to assess the impact of mixing processes on these deep ocean bacterial communities and their capacity for hydrocarbon and oxygen use. We find that observed biodegradation patterns are well-described by exponential growth of bacteria from seed populations present at low abundance and that current oscillation and mixing processes played a critical role in distributing hydrocarbons and associated bacterial blooms within the northeast Gulf of Mexico. Mixing processes also accelerated hydrocarbon degradation through an autoinoculation effect, where water masses, in which the hydrocarbon irruption had caused blooms, later returned to the spill site with hydrocarbon-degrading bacteria persisting at elevated abundance. Interestingly, although the initial irruption of hydrocarbons fed successive blooms of different bacterial types, subsequent irruptions promoted consistency in the structure of the bacterial community. These results highlight an impact of mixing and circulation processes on biodegradation activity of bacteria during the Deepwater Horizon event and suggest an important role for mixing processes in the microbial ecology of deep ocean environments. PMID:22233808
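A minimal sketch of the underlying biological mechanism (not the coupled physical-metabolic model from the study) is given below: a small seed population of hydrocarbon degraders grows on a hydrocarbon pulse following Monod kinetics, and autoinoculation is mimicked by feeding the enriched biomass into a second pulse. All parameter values are illustrative assumptions.

```python
# Toy Monod growth on successive hydrocarbon pulses, illustrating autoinoculation.
import numpy as np
from scipy.integrate import solve_ivp

mu_max, Ks, Y = 0.5, 1.0, 0.5   # 1/day, uM, biomass yield - assumed values

def monod(t, y):
    B, S = y                    # biomass and hydrocarbon substrate
    growth = mu_max * S / (Ks + S) * B
    return [growth, -growth / Y]

# First irruption: a tiny seed population meets a large hydrocarbon pulse.
sol1 = solve_ivp(monod, [0, 30], [0.01, 50.0])
B_end = sol1.y[0, -1]

# Second irruption: the returning water mass is already enriched in degraders,
# so the same pulse is consumed much faster (the autoinoculation effect).
sol2 = solve_ivp(monod, [0, 30], [B_end, 50.0])
print(sol1.y[1, -1], sol2.y[1, -1])  # remaining substrate after each pulse
```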
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neuhauser, K.
Through discussion of five case studies (test homes), this project evaluates strategies for elevating the performance of existing homes to a level commensurate with best-in-class implementation of high-performance new-construction homes. The test homes featured in this research activity participated in the Deep Energy Retrofit (DER) Pilot Program sponsored by the electric and gas utility National Grid in Massachusetts and Rhode Island. Building enclosure retrofit strategies are evaluated for their impact on durability and indoor air quality in addition to energy performance.
A Novel Combination of Thermal Ablation and Heat-Inducible Gene Therapy for Breast Cancer Treatment
2009-04-01
High intensity focused ultrasound (HIFU) has been developed as an emerging non-invasive strategy for cancer treatment by thermal ablation of tumor tissue [1, 2]. Through HIFU exposure, acoustic energy is focused into a deep-seated tumor volume and converted into heat.
NASA Astrophysics Data System (ADS)
Grzelak, Katarzyna; Kotwicki, Lech
2016-06-01
Three deep basins in the Baltic Sea were investigated within the framework of the CHEMSEA project (Chemical Munitions Search & Assessment), which aims to evaluate the ecological impact of chemical warfare agents dumped after World War II. Nematode communities, which comprise the most numerous and diverse organisms in the surveyed areas, were investigated as a key group of benthic fauna. One of the most successful nematode species was morphologically identified as Halomonhystera disjuncta (Bastian, 1865). The presence of this species, which is an active coloniser that is highly resistant to disturbed environments, may indicate that the sediments of these disposal sites are characterised by toxic conditions that are unfavourable for other metazoans. Moreover, ovoviviparous reproductive behaviour in which parents carry their brood internally, which is an important adaptation to harsh environmental conditions, was observed for specimens from Gdansk Deep and Gotland Deep. This reproductive strategy, which is uncommon for marine nematodes, has not previously been reported for nematodes from the Baltic Sea sediment.
Age, growth rates, and paleoclimate studies of deep sea corals
Prouty, Nancy G; Roark, E. Brendan; Andrews, Allen; Robinson, Laura; Hill, Tessa; Sherwood, Owen; Williams, Branwen; Guilderson, Thomas P.; Fallon, Stewart
2015-01-01
Deep-water corals are some of the slowest growing, longest-lived skeletal accreting marine organisms. These habitat-forming species support diverse faunal assemblages that include commercially and ecologically important organisms. Therefore, effective management and conservation strategies for deep-sea corals can be informed by precise and accurate age, growth rate, and lifespan characteristics for proper assessment of vulnerability and recovery from perturbations. This is especially true for the small number of commercially valuable, and potentially endangered, species that are part of the black and precious coral fisheries (Tsounis et al. 2010). In addition to evaluating time scales of recovery from disturbance or exploitation, accurate age and growth estimates are essential for understanding the life history and ecology of these habitat-forming corals. Given that longevity is a key factor for population maintenance and fishery sustainability, partly due to limited and complex gene flow among coral populations separated by great distances, accurate age structure for these deep-sea coral communities is essential for proper, long-term resource management.
The effect of aerosol-derived changes in the warm phase on the properties of deep convective clouds
NASA Astrophysics Data System (ADS)
Chen, Qian; Koren, Ilan; Altaratz, Orit; Heiblum, Reuven; Dagan, Guy
2017-04-01
The aerosol impact on deep convective clouds begins with an increased number of cloud droplets in higher aerosol loading environments. This change drives many others, such as enhanced condensational growth and delayed collision-coalescence. Since the warm processes serve as the initial and boundary conditions for the mixed-phase and cold-phase processes in deep clouds, it is highly important to understand the aerosol effect on them. The Weather Research and Forecasting (WRF) model with spectral bin microphysics was used to study a deep convective system over the Marshall Islands during the Kwajalein Experiment (KWAJEX). Three simulations were conducted with aerosol concentrations of 100, 500 and 2000 cm-3, reflecting clean, semipolluted, and polluted conditions. The results of the clean run agreed well with the radar profiles and rain rate observations. The more polluted simulations resulted in larger total cloud mass, larger upper-level cloud fraction, and higher rain rates. Cloud mass increased both below and above the zero-temperature level, indicating more efficient growth processes in both regions. In addition, the polluted runs showed increased upward transport (across the zero level) of liquid water due to both stronger updrafts and larger droplet mobility. In this work we discuss the transport of cloud mass crossing the zero-temperature level (in both directions) in order to gain a process-level understanding of how aerosol effects on the warm processes affect the macro- and micro-properties of deep convective clouds.
Yang, Guang; Yu, Simiao; Dong, Hao; Slabaugh, Greg; Dragotti, Pier Luigi; Ye, Xujiong; Liu, Fangde; Arridge, Simon; Keegan, Jennifer; Guo, Yike; Firmin, David
2018-06-01
Compressed sensing magnetic resonance imaging (CS-MRI) enables fast acquisition, which is highly desirable for numerous clinical applications. Fast acquisition can not only reduce scanning cost and ease patient burden, but also potentially reduce motion artefacts and the effect of contrast washout, thus yielding better image quality. Unlike parallel imaging-based fast MRI, which utilizes multiple coils to simultaneously receive MR signals, CS-MRI breaks the Nyquist-Shannon sampling barrier to reconstruct MRI images from far less raw data. This paper provides a deep learning-based strategy for CS-MRI reconstruction, bridging the substantial gap between conventional non-learning methods, which work only on data from a single image, and methods that exploit prior knowledge from large training data sets. In particular, a novel conditional Generative Adversarial Network-based model (DAGAN) is proposed to reconstruct CS-MRI. In the DAGAN architecture, we have designed a refinement learning method to stabilize our U-Net-based generator, which provides an end-to-end network that reduces aliasing artefacts. To better preserve texture and edges in the reconstruction, we have coupled the adversarial loss with an innovative content loss. In addition, we incorporate frequency-domain information to enforce similarity in both the image and frequency domains. We have performed comprehensive comparison studies with both conventional CS-MRI reconstruction methods and newly investigated deep learning approaches. Compared with these methods, our DAGAN method provides superior reconstruction with preserved perceptual image details. Furthermore, each image is reconstructed in about 5 ms, which is suitable for real-time processing.
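A minimal sketch of the kind of composite generator objective described above, combining an adversarial term, an image-domain content term, and a frequency-domain term, is shown below in PyTorch. The function name, loss weights, and tensor shapes are assumptions for illustration; this is not the published DAGAN code.

```python
# Illustrative composite loss: adversarial + image-domain + frequency-domain terms.
import torch
import torch.nn.functional as F

def dagan_style_generator_loss(recon, target, disc_logits_on_recon,
                               w_adv=0.01, w_img=1.0, w_freq=1.0):
    # Adversarial term: the generator wants the discriminator to label
    # its reconstruction as real (label = 1).
    adv = F.binary_cross_entropy_with_logits(
        disc_logits_on_recon, torch.ones_like(disc_logits_on_recon))

    # Image-domain content term: pixel-wise MSE between reconstruction and target.
    img = F.mse_loss(recon, target)

    # Frequency-domain term: enforce similarity of the 2-D spectra as well.
    freq = F.mse_loss(torch.view_as_real(torch.fft.fft2(recon)),
                      torch.view_as_real(torch.fft.fft2(target)))

    return w_adv * adv + w_img * img + w_freq * freq

# Toy usage with random tensors standing in for a batch of 256x256 MR slices.
recon = torch.rand(2, 1, 256, 256, requires_grad=True)
target = torch.rand(2, 1, 256, 256)
disc_logits = torch.randn(2, 1)
loss = dagan_style_generator_loss(recon, target, disc_logits)
loss.backward()
```

Weighting the adversarial term lightly relative to the content and frequency terms is a common design choice in reconstruction GANs, since the data-fidelity losses anchor the solution while the adversarial loss sharpens perceptual detail.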