Reconfigurable data path processor
NASA Technical Reports Server (NTRS)
Donohoe, Gregory (Inventor)
2005-01-01
A reconfigurable data path processor comprises a plurality of independent processing elements, each of the processing elements advantageously comprising an identical architecture. Each processing element comprises a plurality of data processing means for generating a potential output. Each processing element is also capable of through-putting an input as a potential output with little or no processing. Each processing element comprises a conditional multiplexer having a first conditional multiplexer input, a second conditional multiplexer input and a conditional multiplexer output. A first potential output value is transmitted to the first conditional multiplexer input, and a second potential output value is transmitted to the second conditional multiplexer input. The conditional multiplexer couples either the first conditional multiplexer input or the second conditional multiplexer input to the conditional multiplexer output, according to an output control command. The output control command is generated by processing a set of arithmetic status bits through a logical mask. The conditional multiplexer output is coupled to a first processing element output. A first set of arithmetic status bits is generated according to the processing of a first processable value. A second set of arithmetic status bits may be generated from a second processing operation. An arithmetic-status-bit multiplexer selects the desired set of arithmetic status bits from among the first and second sets of arithmetic status bits. The conditional multiplexer evaluates the selected arithmetic status bits according to a logical mask defining an algorithm for evaluating the arithmetic status bits.
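A minimal software sketch of the selection mechanism described above (not the patented hardware design; the function names, bit assignments, and mask semantics are illustrative assumptions): an output control command is derived by masking arithmetic status bits, and that command selects between two candidate outputs.

```python
# Minimal sketch (not the patented design) of a conditional multiplexer that
# selects between two candidate outputs by applying a logical mask to
# arithmetic status bits. All names and the mask semantics are assumptions.

def conditional_mux(candidate_a, candidate_b, status_bits, mask):
    """Select candidate_b when any masked status bit is set, else candidate_a.

    status_bits and mask are small integers treated as bit fields
    (e.g. zero, carry, negative, overflow flags).
    """
    select_b = (status_bits & mask) != 0   # the "output control command"
    return candidate_b if select_b else candidate_a

# Example: route a saturated value when the (assumed) overflow bit 0b0100 from
# the first operation is set, otherwise pass the raw sum through.
raw_sum, saturated = 300, 255
status = 0b0100          # overflow flag raised by the first operation
print(conditional_mux(raw_sum, saturated, status, mask=0b0100))  # -> 255
```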
Rudolf, Klaus-Dieter; Kus, Sandra; Chung, Kevin C; Johnston, Marie; LeBlanc, Monique; Cieza, Alarcos
2012-01-01
A formal decision-making and consensus process was applied to develop the first version of the International Classification of Functioning, Disability and Health (ICF) Core Sets for Hand Conditions. To convene an international panel to develop the ICF Core Sets for Hand Conditions (HC), preparatory studies were conducted, which included an expert survey, a systematic literature review, a qualitative study and an empirical data collection process involving persons with hand conditions. A consensus conference, attended by 23 healthcare professionals who treat hand conditions and who represented 22 countries, was convened in Switzerland in May 2009. The preparatory studies identified a set of 743 ICF categories at the second, third or fourth hierarchical level. Altogether, 117 chapter-, second-, or third-level categories were included in the comprehensive ICF Core Set for HC. The brief ICF Core Set for HC included a total of 23 chapter- and second-level categories. A formal consensus process integrating evidence and expert opinion based on the ICF led to the formal adoption of the ICF Core Sets for Hand Conditions. The next phase of this ICF project is to conduct a formal validation process to establish their applicability in clinical settings.
A Tool for Conditions Tag Management in ATLAS
NASA Astrophysics Data System (ADS)
Sharmazanashvili, A.; Batiashvili, G.; Gvaberidze, G.; Shekriladze, L.; Formica, A.; Atlas Collaboration
2014-06-01
ATLAS Conditions data include about 2 TB in a relational database and 400 GB of files referenced from the database. Conditions data are entered and retrieved using COOL, the API for accessing data in the LCG Conditions Database infrastructure, and are managed using an ATLAS-customized, Python-based tool set. Conditions data are required for every reconstruction and simulation job, so access to them is crucial for all aspects of ATLAS data taking and analysis, as well as for the preceding tasks that derive optimal corrections for reconstruction. Optimized sets of conditions for processing are achieved through strict version control on those conditions: a process which assigns COOL Tags to sets of conditions and then unifies those conditions over data-taking intervals into a COOL Global Tag. This Global Tag identifies the set of conditions used to process data, so that the underlying conditions can be uniquely identified with 100% reproducibility should the processing be executed again. Understanding shifts in the underlying conditions from one tag to another, and ensuring interval completeness for all detectors for a set of runs to be processed, is a complex task requiring tools beyond the above-mentioned Python utilities. Therefore, a JavaScript/PHP-based utility called the Conditions Tag Browser (CTB) has been developed. The CTB gives detector and conditions experts the possibility to navigate through the different databases and COOL folders; explore the content of given tags and the differences between them, as well as their extent in time; and visualize the content of channels associated with leaf tags. This report describes the structure and the PHP/JavaScript classes and functions of the CTB.
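As a rough illustration of the tag hierarchy described above (not the actual COOL API, and the tag and folder names are invented), a Global Tag can be pictured as a mapping from conditions folders to leaf tags, so that "understanding shifts in the underlying conditions from one tag to another" amounts to diffing two such mappings.

```python
# Illustrative sketch only: real resolution goes through the COOL API, but
# conceptually a Global Tag maps each conditions folder to a leaf tag.
# All tag and folder names below are invented for illustration.

GLOBAL_TAGS = {
    "COND-2012-A": {"/CALO/Calib": "CaloCalib-v3", "/MUON/Align": "MuonAlign-v7"},
    "COND-2012-B": {"/CALO/Calib": "CaloCalib-v4", "/MUON/Align": "MuonAlign-v7"},
}

def diff_global_tags(tag_a, tag_b):
    """Return folders whose leaf tags differ between two Global Tags."""
    a, b = GLOBAL_TAGS[tag_a], GLOBAL_TAGS[tag_b]
    return {f: (a.get(f), b.get(f)) for f in set(a) | set(b) if a.get(f) != b.get(f)}

print(diff_global_tags("COND-2012-A", "COND-2012-B"))
# {'/CALO/Calib': ('CaloCalib-v3', 'CaloCalib-v4')}
```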
Bukachi, Salome A; Onyango-Ouma, Washington; Siso, Jared Maaka; Nyamongo, Isaac K; Mutai, Joseph K; Hurtig, Anna Karin; Olsen, Oystein Evjen; Byskov, Jens
2014-01-01
In resource-poor settings, accountability for reasonableness (A4R) has been identified as an important advance in priority setting that helps to operationalize fair priority setting in specific contexts. The claim that conformance with the four conditions of A4R improves priority-setting decisions is backed by theory, not evidence. This paper describes the healthcare priority-setting processes in Malindi district, Kenya, prior to the implementation of A4R in 2008 and evaluates the process for its conformance with the conditions of A4R. In-depth interviews and focus group discussions with key players in the Malindi district health system, and a review of key policy documents and national guidelines, show that the priority-setting process in the district relies heavily on guidelines from the national level, giving it a more vertical, top-down orientation. Multilateral and donor agencies, national government, budgetary requirements, traditions and local culture influence the process. The four conditions of A4R are present within the priority-setting process, albeit to varying degrees and referred to by different terms. There is an opportunity for A4R to provide a guiding approach within which its four conditions can be strengthened and assessed to establish whether conformance helps improve the priority-setting process. Copyright © 2013 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Decker, Arthur J.
2001-01-01
Artificial neural networks have been used for a number of years to process holography-generated characteristic patterns of vibrating structures. This technology depends critically on the selection and conditioning of the training sets. A scaling operation called folding is discussed for conditioning training sets optimally for training feed-forward neural networks to process characteristic fringe patterns. Folding allows feed-forward nets to be trained easily to detect damage-induced vibration-displacement-distribution changes as small as 10 nm. A specific aerospace application of neural-net processing of characteristic patterns is presented to motivate the conditioning and optimization effort.
Sung, Kyongje
2008-12-01
Participants searched a visual display for a target among distractors. Each of 3 experiments tested a condition proposed to require attention and for which certain models propose a serial search. Serial versus parallel processing was tested by examining effects on response time means and cumulative distribution functions. In 2 conditions, the results suggested parallel rather than serial processing, even though the tasks produced significant set-size effects. Serial processing was produced only in a condition with a difficult discrimination and a very large set-size effect. The results support C. Bundesen's (1990) claim that an extreme set-size effect leads to serial processing. Implications for parallel models of visual selection are discussed.
Decision support for operations and maintenance (DSOM) system
Jarrell, Donald B [Kennewick, WA]; Meador, Richard J [Richland, WA]; Sisk, Daniel R [Richland, WA]; Hatley, Darrel D [Kennewick, WA]; Brown, Daryl R [Richland, WA]; Keibel, Gary R [Richland, WA]; Gowri, Krishnan [Richland, WA]; Reyes-Spindola, Jorge F [Richland, WA]; Adams, Kevin J [San Bruno, CA]; Yates, Kenneth R [Lake Oswego, OR]; Eschbach, Elizabeth J [Fort Collins, CO]; Stratton, Rex C [Richland, WA]
2006-03-21
A method for minimizing the life-cycle cost of processes such as heating a building. The method utilizes sensors to monitor various pieces of equipment used in the process, for example, boilers, turbines, and the like. The method then performs the steps of: identifying a set of optimal operating conditions for the process; identifying and measuring the parameters necessary to characterize the actual operating condition of the process; validating the data generated by measuring those parameters; characterizing the actual condition of the process; identifying an optimal condition corresponding to the actual condition; comparing said optimal condition with the actual condition and identifying variances between the two; drawing, from a set of pre-defined algorithms created using best engineering practices, an explanation of at least one likely source and at least one recommended remedial action for selected variances; and providing said explanation as an output to at least one user.
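The comparison step can be sketched roughly as follows; the parameter names, tolerance, and diagnosis table are invented for illustration and are not part of the patented DSOM system.

```python
# Hedged sketch of the variance-and-explanation step: characterize the actual
# operating condition from validated sensor data, compare it with the optimal
# condition, and look up a pre-defined explanation for any variance.

OPTIMAL = {"boiler_efficiency": 0.85, "stack_temp_C": 180.0}   # assumed values
DIAGNOSES = {
    "boiler_efficiency": ("Possible fouled heat-transfer surfaces", "Schedule tube cleaning"),
    "stack_temp_C": ("Possible excess combustion air", "Retune air/fuel ratio"),
}

def evaluate(actual, tolerance=0.05):
    """Yield (parameter, likely source, recommended action) for out-of-range values."""
    for name, optimal in OPTIMAL.items():
        measured = actual[name]
        if abs(measured - optimal) / abs(optimal) > tolerance:
            source, action = DIAGNOSES[name]
            yield name, source, action

for finding in evaluate({"boiler_efficiency": 0.78, "stack_temp_C": 183.0}):
    print(finding)   # flags the efficiency shortfall with its canned explanation
```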
Automated method for the systematic interpretation of resonance peaks in spectrum data
Damiano, B.; Wood, R.T.
1997-04-22
A method is described for spectral signature interpretation. The method includes the creation of a mathematical model of a system or process. A neural network training set is then developed based upon the mathematical model. The neural network training set is developed by using the mathematical model to generate measurable phenomena of the system or process based upon model input parameters that correspond to the physical condition of the system or process. The neural network training set is then used to adjust internal parameters of a neural network. The physical condition of an actual system or process represented by the mathematical model is then monitored by extracting spectral features from measured spectra of the actual process or system. The spectral features are then input into said neural network to determine the physical condition of the system or process represented by the mathematical model. More specifically, the neural network correlates the spectral features (i.e., measurable phenomena) of the actual process or system with the corresponding model input parameters. The model input parameters relate to specific components of the system or process and, consequently, correspond to the physical condition of the process or system. 1 fig.
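The workflow can be illustrated with a toy stand-in for the mathematical model; the physics, spectral features, and network size below are assumptions for illustration only, not the patented implementation.

```python
# Minimal sketch of the workflow: sweep model input parameters (the physical
# condition) to generate spectra, extract spectral features, and train a small
# neural network to invert features back to parameters.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
freqs = np.linspace(0.0, 10.0, 200)

def toy_model(stiffness):
    """Toy spectrum: a resonance peak whose position shifts with 'stiffness'."""
    peak = 2.0 + 0.5 * stiffness
    return 1.0 / (1.0 + (freqs - peak) ** 2)

def features(spectrum):
    """Simple spectral features: peak location and a crude width/energy proxy."""
    return [freqs[np.argmax(spectrum)], spectrum.mean()]

# Build the training set from the model, then adjust the network's parameters.
params = rng.uniform(0.0, 10.0, 500)
X = np.array([features(toy_model(p)) for p in params])
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0).fit(X, params)

# "Measured" spectrum from the actual system -> inferred physical condition.
measured = toy_model(4.2) + rng.normal(0.0, 0.01, freqs.size)
print(net.predict([features(measured)]))   # should land near 4.2
```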
Automated method for the systematic interpretation of resonance peaks in spectrum data
Damiano, Brian; Wood, Richard T.
1997-01-01
A method for spectral signature interpretation. The method includes the creation of a mathematical model of a system or process. A neural network training set is then developed based upon the mathematical model. The neural network training set is developed by using the mathematical model to generate measurable phenomena of the system or process based upon model input parameters that correspond to the physical condition of the system or process. The neural network training set is then used to adjust internal parameters of a neural network. The physical condition of an actual system or process represented by the mathematical model is then monitored by extracting spectral features from measured spectra of the actual process or system. The spectral features are then input into said neural network to determine the physical condition of the system or process represented by the mathematical model. More specifically, the neural network correlates the spectral features (i.e., measurable phenomena) of the actual process or system with the corresponding model input parameters. The model input parameters relate to specific components of the system or process and, consequently, correspond to the physical condition of the process or system.
Measuring the effect of attention on simple visual search.
Palmer, J; Ames, C T; Lindsey, D T
1993-02-01
Set-size effects in visual search may be due to 1 or more of 3 factors: sensory processes such as lateral masking between stimuli, attentional processes limiting the perception of individual stimuli, or attentional processes affecting the decision rules for combining information from multiple stimuli. These possibilities were evaluated in tasks such as searching for a longer line among shorter lines. To evaluate sensory contributions, display set-size effects were compared with cuing conditions that held sensory phenomena constant. Similar effects for the display and cue manipulations suggested that sensory processes contributed little under the conditions of this experiment. To evaluate the contribution of decision processes, the set-size effects were modeled with signal detection theory. In these models, a decision effect alone was sufficient to predict the set-size effects without any attentional limitation due to perception.
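The kind of signal detection model referred to above can be sketched with an unlimited-capacity, max-rule observer, in which accuracy declines with set size purely because more noise samples can exceed the criterion; the parameter values below are arbitrary.

```python
# Hedged sketch of a decision-only account of set-size effects: each of n items
# yields an independent noisy observation, the observer says "target present"
# if the maximum exceeds a criterion, and accuracy falls with n without any
# perceptual capacity limit.
import numpy as np

rng = np.random.default_rng(1)

def percent_correct(set_size, d_prime=1.5, criterion=1.0, trials=20000):
    """Unlimited-capacity, max-rule observer for yes/no search."""
    # Target-present trials: one item carries the signal (mean shifted by d').
    present = rng.normal(0.0, 1.0, (trials, set_size))
    present[:, 0] += d_prime
    hits = (present.max(axis=1) > criterion).mean()
    # Target-absent trials: all items are noise.
    absent = rng.normal(0.0, 1.0, (trials, set_size))
    correct_rejections = (absent.max(axis=1) <= criterion).mean()
    return 0.5 * (hits + correct_rejections)

for n in (1, 2, 4, 8):
    print(n, round(percent_correct(n), 3))   # accuracy declines as n grows
```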
Ayuso-Mateos, José L; Avila, Carolina C; Anaya, Celia; Cieza, Alarcos; Vieta, Eduard
2013-01-01
The International Classification of Functioning, Disability and Health (ICF) is a tool of the World Health Organization (WHO) designed to be a guide to identify and classify relevant domains of human experience affected by health conditions. The purpose of this article is to describe the process for the development of two Core Sets for bipolar disorder (BD) in the framework of the ICF. The Comprehensive ICF Core Set for BD is intended to be a guide for multidisciplinary assessment of patients diagnosed with this condition, while the Brief ICF Core Set for BD will be useful when rating aspects of a patient's experience for clinical practice or epidemiological studies. An international consensus conference involving a sample of experts with different professional backgrounds was performed using the nominal group technique. Various preparatory studies identified a set of 743 potential ICF categories to be included in the Core Sets. A total of 38 ICF categories were selected for inclusion in the Comprehensive Core Set for BD. A total of 19 ICF categories from the Comprehensive Core Set were chosen as the most significant to constitute the Brief Core Set for BD. The formal consensus process integrating evidence and expert opinion on the ICF led to the formal adoption of the ICF Core Sets for BD. The most important categories included are representative of the characteristics usually associated with BD. The next phase of this ICF project is to conduct a formal validation process to establish their applicability in clinical settings. Implications for Rehabilitation: Bipolar disorder (BD) is a prevalent condition that has a great impact on people who suffer from it, not only on health but also on daily functioning and quality of life. No standard has been defined so far regarding the problems in functioning of persons with BD. The process described in this article defines the set of areas of functioning to be addressed in clinical assessments of persons with BD and establishes the starting point for the development of condition-specific outcome measures.
Neuro-parity pattern recognition system and method
Gross, Kenneth C.; Singer, Ralph M.; Van Alstine, Rollin G.; Wegerich, Stephan W.; Yue, Yong
2000-01-01
A method and system for monitoring a process and determining its condition. Initial data is sensed, a first set of virtual data is produced by applying a system state analysis to the initial data, a second set of virtual data is produced by applying a neural network analysis to the initial data, and a parity space analysis is applied to the first and second sets of virtual data, and also to the initial data, to provide a parity space decision about the condition of the process. A logic test can further be applied to produce a further system decision about the state of the process.
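The combination step can be illustrated loosely as follows; the residual-based decision logic and tolerance are simplifying assumptions, not the patented parity-space method.

```python
# Hedged illustration: two independently derived virtual estimates of a sensed
# variable are compared with the measured value, and the pairwise residuals are
# tested against a tolerance to produce a crude decision about the process.

def parity_decision(measured, virtual_state, virtual_nn, tol=0.5):
    residuals = {
        "meas_vs_state": abs(measured - virtual_state),
        "meas_vs_nn": abs(measured - virtual_nn),
        "state_vs_nn": abs(virtual_state - virtual_nn),
    }
    inconsistent = {k for k, r in residuals.items() if r > tol}
    if not inconsistent:
        return "normal"
    if inconsistent == {"meas_vs_state", "meas_vs_nn"}:
        return "sensor suspect"   # estimators agree; the measurement is the outlier
    return "process change or model disagreement"

print(parity_decision(measured=5.6, virtual_state=5.0, virtual_nn=5.1))
```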
De Loof, Esther; Van Opstal, Filip; Verguts, Tom
2016-04-01
Theories on visual awareness claim that predicted stimuli reach awareness faster than unpredicted ones. In the current study, we disentangle whether prior information about the upcoming stimulus affects visual awareness of stimulus location (i.e., individuation) by modulating processing efficiency or threshold setting. Analogous research on stimulus identification revealed that prior information modulates threshold setting. However, as identification and individuation are two functionally and neurally distinct processes, the mechanisms underlying identification cannot simply be extrapolated directly to individuation. The goal of this study was therefore to investigate how individuation is influenced by prior information about the upcoming stimulus. To do so, a drift diffusion model was fitted to estimate the processing efficiency and threshold setting for predicted versus unpredicted stimuli in a cued individuation paradigm. Participants were asked to locate a picture, following a cue that was congruent, incongruent or neutral with respect to the picture's identity. Pictures were individuated faster in the congruent and neutral condition compared to the incongruent condition. In the diffusion model analysis, the processing efficiency was not significantly different across conditions. However, the threshold setting was significantly higher following an incongruent cue compared to both congruent and neutral cues. Our results indicate that predictive information about the upcoming stimulus influences visual awareness by shifting the threshold for individuation rather than by enhancing processing efficiency. Copyright © 2016 Elsevier Ltd. All rights reserved.
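A minimal simulation of this interpretation, with illustrative parameter values only: holding the drift rate (processing efficiency) fixed while raising the decision threshold lengthens decision times, as reported after incongruent cues.

```python
# Hedged sketch: Euler-Maruyama simulation of a symmetric-boundary diffusion
# process. Only the threshold differs between the two printed conditions.
import numpy as np

rng = np.random.default_rng(2)

def simulate_rt(drift, threshold, dt=0.002, noise=1.0, trials=2000):
    """Mean decision time (s) for a diffusion process with boundaries at +/- threshold."""
    rts = []
    for _ in range(trials):
        x, t = 0.0, 0.0
        while abs(x) < threshold:
            x += drift * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        rts.append(t)
    return np.mean(rts)

print("lower threshold (congruent-like): ", round(simulate_rt(drift=2.0, threshold=1.0), 3))
print("higher threshold (incongruent-like):", round(simulate_rt(drift=2.0, threshold=1.5), 3))
```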
Parallel effects of memory set activation and search on timing and working memory capacity.
Schweickert, Richard; Fortin, Claudette; Xi, Zhuangzhuang; Viau-Quesnel, Charles
2014-01-01
Accurately estimating a time interval is required in everyday activities such as driving or cooking. Estimating time is relatively easy, provided a person attends to it. But a brief shift of attention to another task usually interferes with timing. Most processes carried out concurrently with timing interfere with it. Curiously, some do not. Literature on a few processes suggests a general proposition, the Timing and Complex-Span Hypothesis: a process interferes with concurrent timing if and only if process performance is related to complex span. Complex span is the number of items correctly recalled in order, when each item presented for study is followed by a brief activity. Literature on task switching, visual search, memory search, word generation and mental time travel supports the hypothesis. Previous work found that another process, activation of a memory set in long-term memory, is not related to complex span. If the Timing and Complex-Span Hypothesis is true, activation should not interfere with concurrent timing in dual-task conditions. We tested such activation in single-task memory search conditions and in dual-task conditions where memory search was executed with concurrent timing. In Experiment 1, activating a memory set increased reaction time, with no significant effect on time production. In Experiment 2, set size and memory set activation were manipulated. Activation and set size had a puzzling interaction for time productions, perhaps due to difficult conditions, leading us to use a related but easier task in Experiment 3. In Experiment 3, increasing set size lengthened time production, but memory activation had no significant effect. The results here, and in previous literature, on the whole support the Timing and Complex-Span Hypothesis. The results also support a sequential organization of activation and search of memory. This organization predicts that activation and set size have additive effects on reaction time and multiplicative effects on percent correct, which is what was found.
Developing core outcome measurement sets for clinical trials: OMERACT filter 2.0.
Boers, Maarten; Kirwan, John R; Wells, George; Beaton, Dorcas; Gossec, Laure; d'Agostino, Maria-Antonietta; Conaghan, Philip G; Bingham, Clifton O; Brooks, Peter; Landewé, Robert; March, Lyn; Simon, Lee S; Singh, Jasvinder A; Strand, Vibeke; Tugwell, Peter
2014-07-01
Lack of standardization of outcome measures limits the usefulness of clinical trial evidence to inform health care decisions. This can be addressed by agreeing on a minimum core set of outcome measures per health condition, containing measures relevant to patients and decision makers. Since 1992, the Outcome Measures in Rheumatology (OMERACT) consensus initiative has successfully developed core sets for many rheumatologic conditions, actively involving patients since 2002. Its expanding scope required an explicit formulation of its underlying conceptual framework and process. Literature searches and iterative consensus process (surveys and group meetings) of stakeholders including patients, health professionals, and methodologists within and outside rheumatology. To comprehensively sample patient-centered and intervention-specific outcomes, a framework emerged that comprises three core "Areas," namely Death, Life Impact, and Pathophysiological Manifestations; and one strongly recommended Resource Use. Through literature review and consensus process, core set development for any specific health condition starts by identifying at least one core "Domain" within each of the Areas to formulate the "Core Domain Set." Next, at least one applicable measurement instrument for each core Domain is identified to formulate a "Core Outcome Measurement Set." Each instrument must prove to be truthful (valid), discriminative, and feasible. In 2012, 96% of the voting participants (n=125) at the OMERACT 11 consensus conference endorsed this model and process. The OMERACT Filter 2.0 explicitly describes a comprehensive conceptual framework and a recommended process to develop core outcome measurement sets for rheumatology likely to be useful as a template in other areas of health care. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
Priority-setting and hospital strategic planning: a qualitative case study.
Martin, Douglas; Shulman, Ken; Santiago-Sorrell, Patricia; Singer, Peter
2003-10-01
To describe and evaluate the priority-setting element of a hospital's strategic planning process. Qualitative case study and evaluation against the conditions of 'accountability for reasonableness' of a strategic planning process at a large urban university-affiliated hospital. The hospital's strategic planning process met the conditions of 'accountability for reasonableness' in large part. Specifically: the hospital based its decisions on reasons (both information and criteria) that the participants felt were relevant to the hospital; the number and type of participants were very extensive; the process, decisions and reasons were well communicated throughout the organization, using multiple communication vehicles; and the process included an ethical framework linked to an effort to evaluate and improve the process. However, there were opportunities to improve the process, particularly by giving participants more time to absorb the information relevant to priority-setting decisions, more time to take difficult decisions and some means to appeal or revise decisions. A case study linked to an evaluation using 'accountability for reasonableness' can serve to improve priority-setting in the context of hospital strategic planning.
Developing Organizational Adaptability for Complex Environment
ERIC Educational Resources Information Center
Boylan, Steven A.; Turner, Kenneth A.
2017-01-01
Developing organizations capable of adapting requires leaders to set conditions. Setting conditions normally requires purposeful activities by the leadership to foster and develop leader and individual adaptability, supported by processes and activities that enable adaptive behaviors through the totality of the organization (Goldstein, Hazy, &…
Cleary, Susan; Molyneux, Sassy; English, Mike
2017-01-01
This paper describes and evaluates the budgeting and planning processes in public hospitals in Kenya. We used a qualitative case study approach to examine these processes in two hospitals in Kenya. We collected data by in-depth interviews of national level policy makers, hospital managers, and frontline practitioners in the case study hospitals (n = 72), a review of documents, and non-participant observations within the hospitals over a 7 month period. We applied an evaluative framework that considers both consequentialist and proceduralist conditions as important to the quality of priority-setting processes. The budgeting and planning process in the case study hospitals was characterized by lack of alignment, inadequate role clarity and the use of informal priority-setting criteria. With regard to consequentialist conditions, the hospitals incorporated economic criteria by considering the affordability of alternatives, but rarely considered the equity of allocative decisions. In the first hospital, stakeholders were aware of - and somewhat satisfied with - the budgeting and planning process, while in the second hospital they were not. Decision making in both hospitals did not result in reallocation of resources. With regard to proceduralist conditions, the budgeting and planning process in the first hospital was more inclusive and transparent, with the stakeholders more empowered compared to the second hospital. In both hospitals, decisions were not based on evidence, implementation of decisions was poor and the community was not included. There were no mechanisms for appeals or to ensure that the proceduralist conditions were met in both hospitals. Public hospitals in Kenya could improve their budgeting and planning processes by harmonizing these processes, improving role clarity, using explicit priority-setting criteria, and by incorporating both consequentialist (efficiency, equity, stakeholder satisfaction and understanding, shifted priorities, implementation of decisions), and proceduralist (stakeholder engagement and empowerment, transparency, use of evidence, revisions, enforcement, and incorporating community values) conditions. PMID:27679522
Barasa, Edwine W; Cleary, Susan; Molyneux, Sassy; English, Mike
2017-04-01
This paper describes and evaluates the budgeting and planning processes in public hospitals in Kenya. We used a qualitative case study approach to examine these processes in two hospitals in Kenya. We collected data by in-depth interviews of national level policy makers, hospital managers, and frontline practitioners in the case study hospitals (n = 72), a review of documents, and non-participant observations within the hospitals over a 7 month period. We applied an evaluative framework that considers both consequentialist and proceduralist conditions as important to the quality of priority-setting processes. The budgeting and planning process in the case study hospitals was characterized by lack of alignment, inadequate role clarity and the use of informal priority-setting criteria. With regard to consequentialist conditions, the hospitals incorporated economic criteria by considering the affordability of alternatives, but rarely considered the equity of allocative decisions. In the first hospital, stakeholders were aware of - and somewhat satisfied with - the budgeting and planning process, while in the second hospital they were not. Decision making in both hospitals did not result in reallocation of resources. With regard to proceduralist conditions, the budgeting and planning process in the first hospital was more inclusive and transparent, with the stakeholders more empowered compared to the second hospital. In both hospitals, decisions were not based on evidence, implementation of decisions was poor and the community was not included. There were no mechanisms for appeals or to ensure that the proceduralist conditions were met in both hospitals. Public hospitals in Kenya could improve their budgeting and planning processes by harmonizing these processes, improving role clarity, using explicit priority-setting criteria, and by incorporating both consequentialist (efficiency, equity, stakeholder satisfaction and understanding, shifted priorities, implementation of decisions), and proceduralist (stakeholder engagement and empowerment, transparency, use of evidence, revisions, enforcement, and incorporating community values) conditions. © The Author 2016. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine.
Science-policy processes for transboundary water governance.
Armitage, Derek; de Loë, Rob C; Morris, Michelle; Edwards, Tom W D; Gerlak, Andrea K; Hall, Roland I; Huitema, Dave; Ison, Ray; Livingstone, David; MacDonald, Glen; Mirumachi, Naho; Plummer, Ryan; Wolfe, Brent B
2015-09-01
In this policy perspective, we outline several conditions to support effective science-policy interaction, with a particular emphasis on improving water governance in transboundary basins. Key conditions include (1) recognizing that science is a crucial but bounded input into water resource decision-making processes; (2) establishing conditions for collaboration and shared commitment among actors; (3) understanding that social or group-learning processes linked to science-policy interaction are enhanced through greater collaboration; (4) accepting that the collaborative production of knowledge about hydrological issues and associated socioeconomic change and institutional responses is essential to build legitimate decision-making processes; and (5) engaging boundary organizations and informal networks of scientists, policy makers, and civil society. We elaborate on these conditions with a diverse set of international examples drawn from a synthesis of our collective experiences in assessing the opportunities and constraints (including the role of power relations) related to governance for water in transboundary settings.
Process-based reference conditions: An alternative approach for managed river systems
NASA Astrophysics Data System (ADS)
Grams, P.; Melis, T.; Wright, S.; Schmidt, J.; Topping, D.
2008-12-01
Physical reference conditions, whether based on historic information or the condition of nearby less impaired systems, provide necessary information that contributes to an assessment of stream condition and the nature of channel transformation. In many cases, however, the utility of this traditional 'reference' approach may end at the assessment stage and not be applicable to establishing and implementing restoration goals. Ongoing impacts such as continued existence of an upstream dam or the persistence of invasive vegetation may render restoration based on a physical reference infeasible. In these circumstances, an alternative approach is to identify and describe reference processes in place of physical reference conditions. This is the case for the Colorado River where large dams, a commitment to hydropower production, and legal mandates for regional distribution and off-channel consumption of water greatly reduce the relevance of historical conditions in setting goals for rehabilitation. In this setting, two strategies are available for setting reference conditions. One is maintenance of post-dam sediment mass balance, which attempts to ensure that the channel does not continue to degrade or aggrade and that riverine habitats do not continue to diverge from their historical condition. Post-dam sediment mass balance can be quantified at a reconnaissance or project scale. The second strategy is to define key processes that maintain the native ecosystem. These processes may, or may not, be consistent with maintenance of sediment mass balance, but they may be key to rejuvenation of spawning and rearing habitats, maintenance of historical ranges of temperature and turbidity, maintenance of a sustainable food base for the native aquatic community, or maintaining other riverine resources. Both strategies require careful monitoring of processes (e.g. sediment flux), which may add considerably to the cost and complexity of a monitoring program. An additional challenge in adopting the second strategy is that it is difficult to define when a process is adequately restored, since many ecosystem processes collectively limit recovery of populations of native communities.
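The first strategy reduces to straightforward accounting, illustrated below with placeholder numbers rather than Colorado River measurements.

```python
# Simple illustrative accounting of the sediment mass balance idea: the reach
# neither aggrades nor degrades when sediment supplied from tributaries over an
# interval matches the sediment exported past the downstream gage.

def sediment_mass_balance(tributary_inputs_Mt, exported_Mt):
    """Positive result = net accumulation in the reach; negative = net evacuation."""
    return sum(tributary_inputs_Mt) - exported_Mt

print(sediment_mass_balance(tributary_inputs_Mt=[0.9, 0.4], exported_Mt=1.6))  # -0.3 Mt: net loss
```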
Using lean methodology to improve efficiency of electronic order set maintenance in the hospital.
Idemoto, Lori; Williams, Barbara; Blackmore, Craig
2016-01-01
Order sets, which are series of orders focused on a diagnosis, condition, or treatment, can reinforce best practice, help eliminate outdated practice, and provide clinical guidance. However, order sets require regular updates as evidence and care processes change. We undertook a quality improvement intervention applying lean methodology to create a systematic process for order set review and maintenance. Root cause analysis revealed challenges with unclear prioritization of requests, lack of coordination between teams, and lack of communication between producers and requestors of order sets. In March of 2014, we implemented a systematic, cyclical order set review process, with a set schedule, defined responsibilities for various stakeholders, formal meetings and communication between stakeholders, and transparency of the process. We first identified and deactivated 89 order sets that were infrequently used. Between March and August 2014, 142 order sets went through the new review process. Processing time for order set builds decreased from a mean of 79.6 to 43.2 days (p<.001, CI=22.1, 50.7). Applying lean production principles to the order set review process resulted in significant improvement in processing time and increased quality of orders. As the use of order sets and other forms of clinical decision support increases, regular evidence and process updates become more critical.
2017-01-01
In the field of evaluative conditioning (EC), two opposing theories—propositional single-process theory versus dual-process theory—are currently being discussed in the literature. The present set of experiments test a crucial prediction to adjudicate between these two theories: Dual-process theory postulates that evaluative conditioning can occur without awareness of the contingency between conditioned stimulus (CS) and unconditioned stimulus (US); in contrast, single-process propositional theory postulates that EC requires CS-US contingency awareness. In a set of three studies, we experimentally manipulate contingency awareness by presenting the CSs very briefly, thereby rendering it unlikely to be processed consciously. We address potential issues with previous studies on EC with subliminal or near-threshold CSs that limited their interpretation. Across two experiments, we consistently found an EC effect for CSs presented for 1000 ms and consistently failed to find an EC effect for briefly presented CSs. In a third pre-registered experiment, we again found evidence for an EC effect with CSs presented for 1000 ms, and we found some indication for an EC effect for CSs presented for 20 ms. PMID:28989730
Heycke, Tobias; Aust, Frederik; Stahl, Christoph
2017-09-01
In the field of evaluative conditioning (EC), two opposing theories (propositional single-process theory versus dual-process theory) are currently being discussed in the literature. The present set of experiments test a crucial prediction to adjudicate between these two theories: Dual-process theory postulates that evaluative conditioning can occur without awareness of the contingency between conditioned stimulus (CS) and unconditioned stimulus (US); in contrast, single-process propositional theory postulates that EC requires CS-US contingency awareness. In a set of three studies, we experimentally manipulate contingency awareness by presenting the CSs very briefly, thereby rendering it unlikely to be processed consciously. We address potential issues with previous studies on EC with subliminal or near-threshold CSs that limited their interpretation. Across two experiments, we consistently found an EC effect for CSs presented for 1000 ms and consistently failed to find an EC effect for briefly presented CSs. In a third pre-registered experiment, we again found evidence for an EC effect with CSs presented for 1000 ms, and we found some indication for an EC effect for CSs presented for 20 ms.
Automated defect spatial signature analysis for semiconductor manufacturing process
Tobin, Jr., Kenneth W.; Gleason, Shaun S.; Karnowski, Thomas P.; Sari-Sarraf, Hamed
1999-01-01
An apparatus and method for performing automated defect spatial signature analysis on a data set representing defect coordinates and wafer processing information includes categorizing data from the data set into a plurality of high level categories, classifying the categorized data contained in each high level category into user-labeled signature events, and correlating the categorized, classified signature events to a present or incipient anomalous process condition.
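A rough sketch of the categorize / classify / correlate flow, with toy rules standing in for the real image-analysis algorithms; the category names, thresholds, and signature-to-cause table are invented for illustration.

```python
# Hedged pipeline sketch: split wafer-map defect coordinates into high-level
# categories, assign a crude signature label to each, and map signatures to a
# (hypothetical) anomalous process condition.
import numpy as np

CAUSES = {"scratch": "handling damage", "ring": "spin-coat anomaly", "cluster": "particle source"}

def categorize(defects):
    """Crude high-level categories based on radial position on the wafer."""
    radii = np.hypot(defects[:, 0], defects[:, 1])
    return {"edge": defects[radii > 0.8], "interior": defects[radii <= 0.8]}

def classify(points):
    """Toy signature classifier: elongated groups -> 'scratch', tight ones -> 'cluster'."""
    if len(points) < 5:
        return None
    spread = points.std(axis=0)
    return "scratch" if spread.max() > 5 * spread.min() else "cluster"

def analyze(defects):
    for category, points in categorize(defects).items():
        signature = classify(points)
        if signature:
            yield category, signature, CAUSES[signature]

wafer = np.column_stack([np.linspace(-0.5, 0.5, 30),
                         0.02 * np.random.default_rng(3).normal(size=30)])
print(list(analyze(wafer)))   # an elongated interior group flagged as a scratch
```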
Tom Leuschen; Dale Wade; Paula Seamon
2001-01-01
The success of a fire use program is in large part dependent on a solid foundation set in clear and concise planning. The planning process results in specific goals and measurable objectives for fire application, provides a means of setting priorities, and establishes a mechanism for evaluating and refining the process to meet the desired future condition. It is an...
Mustaqima, Millaty; Yoo, Pilsun; Huang, Wei; Lee, Bo Wha; Liu, Chunli
2015-01-01
We report the preparation of (111) preferentially oriented CoFe2O4 thin films on Pt(111)/TiO2/SiO2/Si substrates using a spin-coating process. The post-annealing conditions and film thickness were varied for the cobalt ferrite (CFO) thin films, and Pt/CFO/Pt structures were prepared to investigate their resistance switching behaviors. Our results showed that resistance switching without a forming process is preferred in order to obtain less fluctuation in the set voltage, which can be regulated directly through the preparation conditions of the CFO thin films. Accordingly, instead of a thicker film, CFO thin films deposited by two spin-coating cycles, with a thickness of about 100 nm, gave stable resistance switching with the most stable set voltage. Since the forming process and the large variation in set voltage have been considered serious obstacles to the practical application of resistance switching in non-volatile memory devices, our results could provide meaningful insights into improving the performance of ferrite-based resistance switching memory devices.
Watershed Health: The Need for a New Perspective
NASA Astrophysics Data System (ADS)
Reeves, G.
2017-12-01
Watershed health is a measure of the condition of the aquatic ecosystem within a watershed and is indicated by a specific set of environmental conditions that provide desired ecological, social, and legal amenities. A watershed is deemed "healthy" if it has these attributes, and the traditional management approach to maintaining or developing a healthy watershed is to create and maintain these specific conditions within the watershed. However, this approach may not be applicable to situations in which processes are complex, non-linear, and poorly understood. The focus on a specific set of conditions comes at the expense of recognizing the ecological processes that create and maintain habitats for aquatic organisms and the ecological context in which they evolved, and may lead to further degradation or compromising of the ecosystems and landscapes of interest. An emerging perspective suggests that aquatic-riparian ecosystems possess a range of processes and attributes that are inherently complex, nonlinear, and dynamic, and that because of the variation in the size and asynchronous nature of disturbance events, conditions will vary over time among watersheds, resulting in a mosaic of biophysical conditions across the landscape. Thus, watershed health may not be a single condition but rather a suite of conditions, similar to how terrestrial ecosystems are viewed, requiring an integrated assessment of a range of ecological conditions and consideration of the intactness of key ecological processes.
Some considerations of two alleged kinds of selective attention.
Keren, G
1976-12-01
The present article deals with selective attention phenomena and elaborates on a stimulus material classification, "stimulus set" versus "response set," proposed by Broadbent (1970, 1971). Stimulus set is defined by some distinct and conspicuous physical properties that are inherent in the stimulus. Response set is characterized by the meaning it conveys, and thus its properties are determined by cognitive processing on the part of the organism. Broadbent's framework is related to Neisser's (1967) distinction between two perceptual-cognitive processes, namely, preattentive control and focal attention. Three experiments are reported. A before-after paradigm was employed in Experiment 1, together with a spatial arrangement manipulation of relevant versus irrelevant stimuli (being grouped or mixed). The results indicated that before-after instruction had a stronger effect under stimulus set than under response set conditions. Spatial arrangement, on the other hand, affected performance under response set but not under stimulus set conditions. These results were interpreted as supporting the idea that stimulus set material, which is handled by preattentive mechanisms, may be processed in parallel, while response set material requires focal attention that is probably serial in nature. Experiment 2 used a search task with different levels of noise elements. Although subjects were not able to avoid completely the processing of noise elements, they had much more control under stimulus set than under response set conditions. Experiment 3 dealt with memory functions and suggests differential levels of perceptual processing depending on the nature of the stimulus material. This extends the memory framework suggested by Craik and Lockhart (1972). The results of these experiments, together with evidence from other behavioral and physiological studies, lend strong support to the proposed theory. At the theoretical level, it is suggested that the distinction between stimulus and response set, and the corresponding one between preattentive mechanisms and focal attention, are on a continuum rather than being an all-or-none classification. Thus, it permits greater cognitive flexibility on the part of the organism, which is reflected through the assumption that both preattentive mechanisms and focal attention may operate simultaneously and differ only in the salience of their functioning. From a methodological point of view, the distinction between stimulus material and organismic processes is emphasized. It is argued that researchers have not given sufficient attention to the properties of the stimulus materials that they have used, and as a consequence have reached unwarranted conclusions, as exemplified by a few studies that are briefly discussed.
Studies of the physical, yield and failure behavior of aliphatic polyketones
NASA Astrophysics Data System (ADS)
Karttunen, Nicole Renee
This thesis describes an investigation into the multiaxial yield and failure behavior of an aliphatic polyketone terpolymer. The behavior is studied as a function of: stress state, strain rate, temperature, and sample processing conditions. Results of this work include: elucidation of the behavior of a recently commercialized polymer, increased understanding of the effects listed above, insight into the effects of processing conditions on the morphology of the polyketone, and a description of yield strength of this material as a function of stress state, temperature, and strain rate. The first portion of work focuses on the behavior of a set of samples that are extruded under "common" processing conditions. Following this reference set of tests, the effect of testing this material at different temperatures is studied. A total of four different temperatures are examined. In addition, the effect of altering strain rate is examined. Testing is performed under pseudo-strain rate control at constant nominal octahedral shear strain rate for each failure envelope. A total of three different rates are studied. An extension of the first portion of work involves modeling the yield envelope. This is done by combining two approaches: continuum level and molecular level. The use of both methods allows the description of the yield envelope as a function of stress state, strain rate and temperature. The second portion of work involves the effects of processing conditions. For this work, additional samples are extruded with different shear and thermal histories than the "standard" material. One set of samples is processed with shear rates higher and lower than the standard. A second set is processed at higher and lower cooling rates than the standard. In order to understand the structural cause for changes in behavior with processing conditions, morphological characterization is performed on these samples. In particular, the effect on spherulitic structure is important. Residual stresses are also determined to be important to the behavior of the samples. Finally, an investigation into the crystalline structure of a family of aliphatic polyketones is performed. The effects of side group concentration and size are described.
Evolution of permafrost landscapes under technogenic impacts
NASA Astrophysics Data System (ADS)
Kerimov, A. G.; Grebenets, V. I.; Streletskiy, D. A.; Shiklomanov, N. I.; Nyland, K. E.
2014-12-01
Economic development of Russian northern regions underlain by permafrost has resulted in a new pattern of geocryological conditions, different from the natural environment. This pattern is characterized by drastic landscape transformations; changes of heat and mass transfer in the permafrost/atmosphere system; and engineering and technical pressure upon the permafrost, leading to alteration of its physical, thermal and mechanical properties. In northern cities this causes an increase of ground temperature and intensification of hazardous cryogenic processes in areas under engineering development, reducing the stability of the geotechnical environment. For example, facility deformations in Norilsk in the last 15 years became much more abundant than those revealed throughout the previous 50 years. The increase in accident risk for facilities (pipelines, industrial enterprises, etc.) enhances the technogenic pressure on permafrost of the territories under development, leading to a new milestone of changes in permafrost, i.e. to the creation of a new set of geocryological conditions. Cryogenic processes within the urban cryolithozone are seldom similar to those under natural conditions: they either occur more intensively or, vice versa, attenuate under technogenic impacts, and new cryogenic processes and phenomena occur that have not previously been typical for a given region. The geographical distribution, evolution and other features of cryogenic processes differ considerably from natural conditions or are entirely unprecedented. Peculiar natural-technogenic geocryological complexes (NTGC) are formed in urban centers, distinguished by the vector of permafrost evolution, the set of cryogenic processes, temperature trends and other characteristics. NTGC types depend on the initial natural setting and on the kind, intensity and duration of technogenic pressure. Our field surveys of permafrost and geological conditions resulted in the mapping of 17 NTGC types in Norilsk, 11 types in the Yamburg gas field, and 32 types along gas and oil pipelines in the north of Western Siberia. NTGC dynamics, which depend on climate change, the scale of the urban system, the set of its elements, the duration of impact upon nature, and the degree of stability of the natural permafrost, are of particular interest.
Effects of set-size and lateral masking in visual search.
Põder, Endel
2004-01-01
In the present research, the roles of lateral masking and central processing limitations in visual search were studied. Two search conditions were used: (1) target differed from distractors by presence/absence of a simple feature; (2) target differed by relative position of the same components only. The number of displayed stimuli (set-size) and the distance between neighbouring stimuli were varied as independently as possible in order to measure the effect of both. The effect of distance between stimuli (lateral masking) was found to be similar in both conditions. The effect of set-size was much larger for relative position stimuli. The results support the view that perception of relative position of stimulus components is limited mainly by the capacity of central processing.
Sensory Over-Responsivity in Adults with Autism Spectrum Conditions
ERIC Educational Resources Information Center
Tavassoli, Teresa; Miller, Lucy J.; Schoen, Sarah A.; Nielsen, Darci M.; Baron-Cohen, Simon
2014-01-01
Anecdotal reports and empirical evidence suggest that sensory processing issues are a key feature of autism spectrum conditions. This study set out to investigate whether adults with autism spectrum conditions report more sensory over-responsivity than adults without autism spectrum conditions. Another goal of the study was to identify whether…
Kursawe, Michael A; Zimmer, Hubert D
2015-06-01
We investigated the impact of perceptual processing demands on visual working memory for coloured complex random polygons during change detection. Processing load was assessed by pupil size (Exp. 1) and additionally by slow wave potentials (Exp. 2). Task difficulty was manipulated by presenting different set sizes (1, 2, 4 items) and by making different features (colour, shape, or both) task-relevant. Memory performance in the colour condition was better than in the shape and both conditions, which did not differ. Pupil dilation and the posterior N1 increased with set size independent of the type of feature. In contrast, slow waves and a posterior P2 component showed set size effects, but only if shape was task-relevant. In the colour condition slow waves did not vary with set size. We suggest that pupil size and the N1 indicate different states of attentional effort corresponding to the number of presented items. In contrast, slow waves reflect processes related to encoding and maintenance strategies. The observation that their potentials vary with the type of feature (simple colour versus complex shape) indicates that perceptual complexity already influences encoding and storage, and not only the comparison of targets with memory entries at the moment of testing. Copyright © 2015 Elsevier B.V. All rights reserved.
Modeling hydrologic controls on sulfur processes in sulfate-impacted wetland and stream sediments
NASA Astrophysics Data System (ADS)
Ng, G.-H. C.; Yourd, A. R.; Johnson, N. W.; Myrbo, A. E.
2017-09-01
Recent studies show that sulfur redox processes in terrestrial settings are more important than previously considered, but much remains uncertain about how these processes respond to dynamic hydrologic conditions in natural field settings. We used field observations from a sulfate-impacted wetland and stream in the mining region of Minnesota (USA) to calibrate a reactive transport model and evaluate sulfur and coupled geochemical processes under contrasting hydrogeochemical scenarios. Simulations of different hydrological conditions showed that flux and chemistry differences between surface water and deeper groundwater strongly control hyporheic zone geochemical profiles. However, model results for the stream channel versus the wetlands indicate sediment organic carbon content to be the more important driver of sulfate reduction rates. A complex nonlinear relationship between sulfate reduction rates and geochemical conditions is apparent from the model's higher sensitivity to sulfate concentrations in settings with higher organic content. Across all scenarios, simulated electron-balance results unexpectedly showed that sulfate reduction dominates iron reduction, which is contrary to the traditional thermodynamic ladder but corroborates recent experimental findings by Hansel et al. (2015) that "cryptic" sulfur cycling could drive sulfate reduction in preference over iron reduction. Following the thermodynamic ladder, our model shows that high surface water sulfate slows methanogenesis in shallow sediments, but field observations suggest that sulfate reduction may not entirely suppress methane. Overall, our results show that sulfate reduction may serve as a major component of, and influence on, terrestrial redox processes, with dynamic hyporheic fluxes controlling sulfate concentrations and reaction rates, especially in high organic content settings.
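One common way to express such a nonlinear dependence, used here purely as an illustration rather than as the calibrated rate law from this study, is a dual-Monod formulation in which the rate saturates in both sulfate and organic carbon; the parameter values below are arbitrary placeholders.

```python
# Illustrative dual-Monod sketch: the sulfate reduction rate rises with both
# sulfate and organic carbon availability and saturates in each, so the rate
# is more sensitive to sulfate where organic carbon is plentiful.

def sulfate_reduction_rate(sulfate_mM, organic_carbon, k_max=1.0, K_so4=0.3, K_oc=0.5):
    return k_max * (sulfate_mM / (K_so4 + sulfate_mM)) * (organic_carbon / (K_oc + organic_carbon))

for oc in (0.2, 2.0):   # low vs high organic carbon (arbitrary units)
    low, high = sulfate_reduction_rate(0.1, oc), sulfate_reduction_rate(1.0, oc)
    print(f"OC={oc}: rate gain from 0.1 to 1.0 mM sulfate = {high - low:.3f}")
```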
Atomic and molecular data for spacecraft re-entry plasmas
NASA Astrophysics Data System (ADS)
Celiberto, R.; Armenise, I.; Cacciatore, M.; Capitelli, M.; Esposito, F.; Gamallo, P.; Janev, R. K.; Laganà, A.; Laporta, V.; Laricchiuta, A.; Lombardi, A.; Rutigliano, M.; Sayós, R.; Tennyson, J.; Wadehra, J. M.
2016-06-01
The modeling of the atmospheric gas interacting with space vehicles under re-entry conditions in planetary exploration missions requires a large set of scattering data for all the elementary processes occurring in the system. A fundamental aspect of re-entry problems is represented by the strong non-equilibrium conditions met in the atmospheric plasma close to the surface of the thermal shield, where numerous interconnected relaxation processes determine the evolution of the gaseous system towards equilibrium conditions. A central role is played by vibrational energy exchanges, so that collisional processes involving vibrationally excited molecules assume particular importance. In the present paper, theoretical calculations of complete sets of vibrationally state-resolved cross sections and rate coefficients are reviewed, focusing on the relevant classes of collisional processes: resonant and non-resonant electron-impact excitation of molecules, atom-diatom and molecule-molecule collisions, as well as gas-surface interactions. In particular, collisional processes involving atomic and molecular species relevant to Earth (N2, O2, NO), Mars (CO2, CO, N2) and Jupiter (H2, He) atmospheres are considered.
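For context, a state-resolved cross section is typically converted into a thermal rate coefficient by averaging over a Maxwellian energy distribution, k(T) = sqrt(8/(pi*mu)) (kB*T)^(-3/2) integral of sigma(E) E exp(-E/kB*T) dE; the sketch below applies this standard relation to a made-up step-function cross section, not to data from the review.

```python
# Maxwellian-averaged rate coefficient from a cross section sigma(E).
import numpy as np

KB = 1.380649e-23    # Boltzmann constant, J/K
ME = 9.1093837e-31   # electron mass, kg (used here as the reduced mass)
EV = 1.602176634e-19 # J per eV

def rate_coefficient(sigma, T, mu=ME, n=4000):
    """Rate coefficient in m^3/s for cross section sigma(E), E in joules."""
    E = np.linspace(1e-3 * KB * T, 40.0 * KB * T, n)
    integrand = sigma(E) * E * np.exp(-E / (KB * T))
    prefactor = np.sqrt(8.0 / (np.pi * mu)) * (KB * T) ** -1.5
    return prefactor * np.sum(integrand) * (E[1] - E[0])   # simple Riemann sum

# Placeholder cross section: zero below a 1 eV threshold, 1e-20 m^2 above it.
sigma_toy = lambda E: np.where(E > 1.0 * EV, 1e-20, 0.0)
print(rate_coefficient(sigma_toy, T=10000.0))   # of order 1e-15 m^3/s
```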
System and method for statistically monitoring and analyzing sensed conditions
Pebay, Philippe P [Livermore, CA]; Brandt, James M [Dublin, CA]; Gentile, Ann C [Dublin, CA]; Marzouk, Youssef M [Oakland, CA]; Hale, Darrian J [San Jose, CA]; Thompson, David C [Livermore, CA]
2011-01-04
A system and method of monitoring and analyzing a plurality of attributes for an alarm condition is disclosed. The attributes are processed and/or unprocessed values of sensed conditions of a collection of a statistically significant number of statistically similar components subjected to varying environmental conditions. The attribute values are used to compute the normal behaviors of some of the attributes and also used to infer parameters of a set of models. Relative probabilities of some attribute values are then computed and used along with the set of models to determine whether an alarm condition is met. The alarm conditions are used to prevent or reduce the impact of impending failure.
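A minimal sketch of the statistical idea: learn "normal behavior" for each attribute from a population of similar components, then flag a component whose current values are improbable under that model. The Gaussian fleet-normal model and z-score threshold are simplifying assumptions, not the patented method.

```python
# Hedged sketch: fleet-wide statistics define normal behavior; an observation
# far outside that behavior raises an alarm before outright failure.
import numpy as np

rng = np.random.default_rng(4)

# Fleet of similar components under varying conditions:
# rows = components, columns = attributes (e.g. temperature, fan speed).
fleet = rng.normal(loc=[70.0, 3000.0], scale=[3.0, 150.0], size=(500, 2))
mean, std = fleet.mean(axis=0), fleet.std(axis=0)

def alarm(observation, z_threshold=4.0):
    """Raise an alarm if any attribute is improbably far from fleet-normal behavior."""
    z = np.abs((np.asarray(observation) - mean) / std)
    return bool((z > z_threshold).any()), z

print(alarm([71.0, 3050.0]))   # healthy component  -> (False, ...)
print(alarm([92.0, 2995.0]))   # overheating component -> (True, ...)
```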
System and method for statistically monitoring and analyzing sensed conditions
Pebay, Philippe P [Livermore, CA]; Brandt, James M [Dublin, CA]; Gentile, Ann C [Dublin, CA]; Marzouk, Youssef M [Oakland, CA]; Hale, Darrian J [San Jose, CA]; Thompson, David C [Livermore, CA]
2011-01-25
A system and method of monitoring and analyzing a plurality of attributes for an alarm condition is disclosed. The attributes are processed and/or unprocessed values of sensed conditions of a collection of a statistically significant number of statistically similar components subjected to varying environmental conditions. The attribute values are used to compute the normal behaviors of some of the attributes and also used to infer parameters of a set of models. Relative probabilities of some attribute values are then computed and used along with the set of models to determine whether an alarm condition is met. The alarm conditions are used to prevent or reduce the impact of impending failure.
System and method for statistically monitoring and analyzing sensed conditions
Pebay, Philippe P [Livermore, CA; Brandt, James M; Gentile, Ann C; Marzouk, Youssef M; Hale, Darrian J; Thompson, David C [Livermore, CA
2010-07-13
A system and method of monitoring and analyzing a plurality of attributes for an alarm condition is disclosed. The attributes are processed and/or unprocessed values of sensed conditions of a collection of a statistically significant number of statistically similar components subjected to varying environmental conditions. The attribute values are used to compute the normal behaviors of some of the attributes and also used to infer parameters of a set of models. Relative probabilities of some attribute values are then computed and used along with the set of models to determine whether an alarm condition is met. The alarm conditions are used to prevent or reduce the impact of impending failure.
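The three patent records above describe the same general workflow: estimate normal behavior from a population of similar components, score new readings against that baseline, and raise an alarm when readings become improbable. The minimal sketch below (assumed Gaussian baseline, placeholder threshold, simulated sensor values, illustrative function names) shows one way that logic could look; it is not the patented method itself.

```python
import numpy as np

def fit_baseline(history):
    """Estimate 'normal behavior' from readings of many similar components.
    history: array of shape (n_components, n_samples)."""
    return history.mean(), history.std(ddof=1)

def relative_log_prob(x, mu, sigma):
    """Log-probability of a reading under a Gaussian baseline model."""
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def alarm(readings, mu, sigma, threshold=-8.0):
    """Flag components whose readings are improbable under the baseline."""
    return [i for i, x in enumerate(readings)
            if relative_log_prob(x, mu, sigma) < threshold]

rng = np.random.default_rng(0)
history = rng.normal(50.0, 2.0, size=(100, 200))    # simulated sensor history
mu, sigma = fit_baseline(history)
current = rng.normal(50.0, 2.0, size=100)
current[7] = 75.0                                    # one component drifting toward failure
print(alarm(current, mu, sigma))                     # index of the drifting component
```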
ECOSYSTEM RESEARCH: THE WESTERN PILOT
The products of this research include tools, monitoring data and assessments. Tools include biological indicators and a process for setting expectation or reference conditions against which to evaluate the indicators. It will also include a prioritized set of indicators on anthr...
Byskov, Jens; Marchal, Bruno; Maluka, Stephen; Zulu, Joseph M; Bukachi, Salome A; Hurtig, Anna-Karin; Blystad, Astrid; Kamuzora, Peter; Michelo, Charles; Nyandieka, Lillian N; Ndawi, Benedict; Bloch, Paul; Olsen, Oystein E
2014-08-20
Priority-setting decisions are based on an important, but not sufficient set of values and thus lead to disagreement on priorities. Accountability for Reasonableness (AFR) is an ethics-based approach to a legitimate and fair priority-setting process that builds upon four conditions: relevance, publicity, appeals, and enforcement, which facilitate agreement on priority-setting decisions and gain support for their implementation. This paper focuses on the assessment of AFR within the project REsponse to ACcountable priority setting for Trust in health systems (REACT). This intervention study applied an action research methodology to assess implementation of AFR in one district in Kenya, Tanzania, and Zambia, respectively. The assessments focused on selected disease, program, and managerial areas. An implementing action research team of core health team members and supporting researchers was formed to implement, and continually assess and improve the application of the four conditions. Researchers evaluated the intervention using qualitative and quantitative data collection and analysis methods. The values underlying the AFR approach were in all three districts well-aligned with general values expressed by both service providers and community representatives. There was some variation in the interpretations and actual use of the AFR in the decision-making processes in the three districts, and its effect ranged from an increase in awareness of the importance of fairness to a broadened engagement of health team members and other stakeholders in priority setting and other decision-making processes. District stakeholders were able to take greater charge of closing the gap between nationally set planning and the local realities and demands of the served communities within the limited resources at hand. This study thus indicates that the operationalization of the four broadly defined and linked conditions is both possible and seems to be responding to an actual demand. This provides arguments for the continued application and further assessment of the potential of AFR in supporting priority-setting and other decision-making processes in health systems to achieve better agreed and more sustainable health improvements linked to a mutual democratic learning with potential wider implications.
2014-01-01
Background Priority-setting decisions are based on an important, but not sufficient set of values and thus lead to disagreement on priorities. Accountability for Reasonableness (AFR) is an ethics-based approach to a legitimate and fair priority-setting process that builds upon four conditions: relevance, publicity, appeals, and enforcement, which facilitate agreement on priority-setting decisions and gain support for their implementation. This paper focuses on the assessment of AFR within the project REsponse to ACcountable priority setting for Trust in health systems (REACT). Methods This intervention study applied an action research methodology to assess implementation of AFR in one district in Kenya, Tanzania, and Zambia, respectively. The assessments focused on selected disease, program, and managerial areas. An implementing action research team of core health team members and supporting researchers was formed to implement, and continually assess and improve the application of the four conditions. Researchers evaluated the intervention using qualitative and quantitative data collection and analysis methods. Results The values underlying the AFR approach were in all three districts well-aligned with general values expressed by both service providers and community representatives. There was some variation in the interpretations and actual use of the AFR in the decision-making processes in the three districts, and its effect ranged from an increase in awareness of the importance of fairness to a broadened engagement of health team members and other stakeholders in priority setting and other decision-making processes. Conclusions District stakeholders were able to take greater charge of closing the gap between nationally set planning and the local realities and demands of the served communities within the limited resources at hand. This study thus indicates that the operationalization of the four broadly defined and linked conditions is both possible and seems to be responding to an actual demand. This provides arguments for the continued application and further assessment of the potential of AFR in supporting priority-setting and other decision-making processes in health systems to achieve better agreed and more sustainable health improvements linked to a mutual democratic learning with potential wider implications. PMID:25142148
Hartley, S E; Stockley, R C
2016-12-01
Collaborative goal setting is an integral component of treatment planning for adults with neuromuscular disorders (NMD). However, due to the unique challenges for these individuals, identifying a process for goal setting that is advantageous for all can be problematic. This study aimed to evaluate collaborative goal setting at a specialist NMD centre, as reported by service users attending physiotherapy. It also aimed to generate discussion about collaborative goal setting and the practice of goal setting in adults with NMD in order to inform future practice. The setting was a specialist community-based NMD centre in the UK; participants were 104 adults with NMD who attended the centre; the design was a cross-sectional survey. Thematic and content analyses of goals set were performed alongside demographic data collection. One hundred and four patients (34 females) with a range of neuromuscular conditions - including Becker, facioscapulohumeral, limb girdle, Duchenne and myotonic muscular dystrophies - completed the survey. Thirty-six respondents (37%) stated that they had set goals with the physiotherapist, whilst 62 (63%) stated that they had not. Respondents' goals were grouped into four themes: symptom management, maintenance, improving physical condition, and learning to live with the condition. Readiness to take part in collaborative goal setting is unique to each individual. Physiotherapists need to be skilful in supporting adults with NMD through the goal-setting process until they are capable of sharing responsibility. Setting personal goals to improve emotional well-being may help individuals develop the confidence to take more control of their situation, hence facilitating skills in self-management. Copyright © 2015 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.
The Introduction of Standardized External Testing in Ukraine: Challenges and Successes
ERIC Educational Resources Information Center
Kovalchuk, Serhiy; Koroliuk, Svitlana
2012-01-01
Standardized external testing (SET) began to be implemented in Ukraine in 2008 as an instrument for combating corruption in higher education and ensuring fair university admission. This article examines the conditions and processes that led to the introduction of SET, overviews its implementation over three years (2008-10), analyzes SET and…
SET: a pupil detection method using sinusoidal approximation
Javadi, Amir-Homayoun; Hakimi, Zahra; Barati, Morteza; Walsh, Vincent; Tcheang, Lili
2015-01-01
Mobile eye-tracking in external environments remains challenging, despite recent advances in eye-tracking software and hardware engineering. Many current methods fail to deal with the vast range of outdoor lighting conditions and the speed at which these can change. This confines experiments to artificial environments where conditions must be tightly controlled. Additionally, the emergence of low-cost eye tracking devices calls for the development of analysis tools that enable non-technical researchers to process the output of their images. We have developed a fast and accurate method (known as “SET”) that is suitable even for natural environments with uncontrolled, dynamic and even extreme lighting conditions. We compared the performance of SET with that of two open-source alternatives by processing two collections of eye images: images of natural outdoor scenes with extreme lighting variations (“Natural”); and images of less challenging indoor scenes (“CASIA-Iris-Thousand”). We show that SET excelled in outdoor conditions and was faster, without significant loss of accuracy, indoors. SET offers a low cost eye-tracking solution, delivering high performance even in challenging outdoor environments. It is offered through an open-source MATLAB toolkit as well as a dynamic-link library (“DLL”), which can be imported into many programming languages including C# and Visual Basic in Windows OS (www.eyegoeyetracker.co.uk). PMID:25914641
Iannone, Maria; Ventre, Maurizio; Formisano, Lucia; Casalino, Laura; Patriarca, Eduardo J; Netti, Paolo A
2015-03-11
The initial conditions for morphogenesis trigger a cascade of events that ultimately dictate structure and functions of tissues and organs. Here we report that surface nanopatterning can control the initial assembly of focal adhesions, hence guiding human mesenchymal stem cells (hMSCs) through the process of self-organization and differentiation. This process self-sustains, leading to the development of macroscopic tissues with molecular profiles and microarchitecture reminiscent of embryonic tendons. Therefore, material surfaces can be in principle engineered to set off the hMSC program toward tissuegenesis in a deterministic manner by providing adequate sets of initial environmental conditions.
Influence of central set on anticipatory and triggered grip-force adjustments
NASA Technical Reports Server (NTRS)
Winstein, C. J.; Horak, F. B.; Fisher, B. E.; Peterson, B. W. (Principal Investigator)
2000-01-01
The effects of predictability of load magnitude on anticipatory and triggered grip-force adjustments were studied as nine normal subjects used a precision grip to lift, hold, and replace an instrumented test object. Experience with a predictable stimulus has been shown to enhance magnitude scaling of triggered postural responses to different amplitudes of perturbations. However, this phenomenon, known as a central-set effect, has not been tested systematically for grip-force responses in the hand. In our study, predictability was manipulated by applying load perturbations of different magnitudes to the test object under conditions in which the upcoming load magnitude was presented repeatedly or under conditions in which the load magnitudes were presented randomly, each with two different pre-load grip conditions (unconstrained and constrained). In constrained conditions, initial grip forces were maintained near the minimum level necessary to prevent pre-loaded object slippage, while in unconstrained conditions, no initial grip force restrictions were imposed. The effect of predictable (blocked) and unpredictable (random) load presentations on scaling of anticipatory and triggered grip responses was tested by comparing the slopes of linear regressions between the imposed load and grip response magnitude. Anticipatory and triggered grip force responses were scaled to load magnitude in all conditions. However, regardless of pre-load grip force constraint, the gains (slopes) of grip responses relative to load magnitudes were greater when the magnitude of the upcoming load was predictable than when the load increase was unpredictable. In addition, a central-set effect was evidenced by the fewer number of drop trials in the predictable relative to unpredictable load conditions. Pre-load grip forces showed the greatest set effects. However, grip responses showed larger set effects, based on prediction, when pre-load grip force was constrained to lower levels. These results suggest that anticipatory processes pertaining to load magnitude permit the response gain of both voluntary and triggered rapid grip force adjustments to be set, at least partially, prior to perturbation onset. Comparison of anticipatory set effects for reactive torque and lower extremity EMG postural responses triggered by surface translation perturbations suggests a more general rule governing anticipatory processes.
PA, JUDY; POSSIN, KATHERINE L.; WILSON, STEPHEN M.; QUITANIA, LOVINGLY C.; KRAMER, JOEL H.; BOXER, ADAM L.; WEINER, MICHAEL W.; JOHNSON, JULENE K.
2010-01-01
There is increasing recognition that set-shifting, a form of cognitive control, is mediated by different neural structures. However, these regions have not yet been carefully identified as many studies do not account for the influence of component processes (e.g., motor speed). We investigated gray matter correlates of set-shifting while controlling for component processes. Using the Design Fluency (DF), Trail Making Test (TMT), and Color Word Interference (CWI) subtests from the Delis-Kaplan Executive Function System (D-KEFS), we investigated the correlation between set-shifting performance and gray matter volume in 160 subjects with neurodegenerative disease, mild cognitive impairment, and healthy older adults using voxel-based morphometry. All three set-shifting tasks correlated with multiple, widespread gray matter regions. After controlling for the component processes, set-shifting performance correlated with focal regions in prefrontal and posterior parietal cortices. We also identified bilateral prefrontal cortex and the right posterior parietal lobe as common sites for set-shifting across the three tasks. There was a high degree of multicollinearity between the set-shifting conditions and the component processes of TMT and CWI, suggesting DF may better isolate set-shifting regions. Overall, these findings highlight the neuroanatomical correlates of set-shifting and the importance of controlling for component processes when investigating complex cognitive tasks. PMID:20374676
Implementing accountability for reasonableness--the case of pharmaceutical reimbursement in Sweden.
Jansson, Sandra
2007-04-01
This paper aims to describe the priority-setting procedure for new original pharmaceuticals practiced by the Swedish Pharmaceutical Benefits Board (LFN), to analyse the outcome of the procedure in terms of decisions and the relative importance of ethical principles, and to examine the reactions of stakeholders. All the 'principally important' decisions made by the LFN during its first 33 months of operation were analysed. The study is theoretically anchored in the theory of fair and legitimate priority-setting procedures by Daniels and Sabin, and is based on public documents, media articles, and semi-structured interviews. Only nine cases resulted in a rejection of a subsidy by the LFN and 15 in a limited or conditional subsidy. Total rejections rather than limitations gave rise to actions by stakeholders. Primarily, the principle of cost-effectiveness was used when limiting/conditioning or totally rejecting a subsidy. This study suggests that implementing a priority-setting process that fulfils the conditions of accountability for reasonableness can result in a priority-setting process which is generally perceived as fair and legitimate by the major stakeholders and may increase social learning in terms of accepting the necessity of priority setting in health care. The principle of cost-effectiveness increased in importance when the demand for openness and transparency increased.
The fetal programming of telomere biology hypothesis: an update
Entringer, Sonja; Buss, Claudia; Wadhwa, Pathik D.
2018-01-01
Research on mechanisms underlying fetal programming of health and disease risk has focused primarily on processes that are specific to cell types, organs or phenotypes of interest. However, the observation that developmental conditions concomitantly influence a diverse set of phenotypes, the majority of which are implicated in age-related disorders, raises the possibility that such developmental conditions may additionally exert effects via a common underlying mechanism that involves cellular/molecular ageing–related processes. In this context, we submit that telomere biology represents a process of particular interest in humans because, firstly, this system represents among the most salient antecedent cellular phenotypes for common age-related disorders; secondly, its initial (newborn) setting appears to be particularly important for its long-term effects; and thirdly, its initial setting appears to be plastic and under developmental regulation. We propose that the effects of suboptimal intrauterine conditions on the initial setting of telomere length and telomerase expression/activity capacity may be mediated by the programming actions of stress-related maternal–placental–fetal oxidative, immune, endocrine and metabolic pathways in a manner that may ultimately accelerate cellular dysfunction, ageing and disease susceptibility over the lifespan. This perspectives paper provides an overview of each of the elements underlying this hypothesis, with an emphasis on recent developments, findings and future directions. This article is part of the theme issue ‘Understanding diversity in telomere dynamics’. PMID:29335381
The fetal programming of telomere biology hypothesis: an update.
Entringer, Sonja; de Punder, Karin; Buss, Claudia; Wadhwa, Pathik D
2018-03-05
Research on mechanisms underlying fetal programming of health and disease risk has focused primarily on processes that are specific to cell types, organs or phenotypes of interest. However, the observation that developmental conditions concomitantly influence a diverse set of phenotypes, the majority of which are implicated in age-related disorders, raises the possibility that such developmental conditions may additionally exert effects via a common underlying mechanism that involves cellular/molecular ageing-related processes. In this context, we submit that telomere biology represents a process of particular interest in humans because, firstly, this system represents among the most salient antecedent cellular phenotypes for common age-related disorders; secondly, its initial (newborn) setting appears to be particularly important for its long-term effects; and thirdly, its initial setting appears to be plastic and under developmental regulation. We propose that the effects of suboptimal intrauterine conditions on the initial setting of telomere length and telomerase expression/activity capacity may be mediated by the programming actions of stress-related maternal-placental-fetal oxidative, immune, endocrine and metabolic pathways in a manner that may ultimately accelerate cellular dysfunction, ageing and disease susceptibility over the lifespan. This perspectives paper provides an overview of each of the elements underlying this hypothesis, with an emphasis on recent developments, findings and future directions.This article is part of the theme issue 'Understanding diversity in telomere dynamics'. © 2018 The Author(s).
Andrés-Toro, B; Girón-Sierra, J M; Fernández-Blanco, P; López-Orozco, J A; Besada-Portas, E
2004-04-01
This paper describes empirical research on the modelling, optimization and supervisory control of beer fermentation. Conditions in the laboratory were made as similar as possible to brewery industry conditions. Since mathematical models that consider realistic industrial conditions were not available, a new mathematical model reflecting industrial conditions was first developed. Batch fermentations are multiobjective dynamic processes that must be guided along optimal paths to obtain good results. The paper describes a direct way to apply a Pareto set approach with multiobjective evolutionary algorithms (MOEAs), and optimal ways to drive these processes were successfully found. Once obtained, the mathematical fermentation model was used to optimize the fermentation process through an intelligent control based on a set of rules.
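As a rough illustration of the Pareto set idea used by such MOEAs, the sketch below extracts the non-dominated candidates from a toy set of fermentation schedules; the objective columns and values are invented for illustration and do not come from the study.

```python
import numpy as np

def pareto_front(objectives):
    """Return indices of non-dominated points (all objectives to be minimized).
    objectives: array of shape (n_candidates, n_objectives)."""
    keep = []
    for i in range(len(objectives)):
        dominated = any(
            np.all(objectives[j] <= objectives[i]) and np.any(objectives[j] < objectives[i])
            for j in range(len(objectives)) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

# Synthetic candidate schedules: [process time (h), residual by-product (a.u.)]
candidates = np.array([[120, 0.8], [150, 0.3], [100, 1.5], [150, 0.9], [130, 0.5]])
print(pareto_front(candidates))   # non-dominated schedules for this toy data
```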
Sensory over-responsivity in adults with autism spectrum conditions.
Tavassoli, Teresa; Miller, Lucy J; Schoen, Sarah A; Nielsen, Darci M; Baron-Cohen, Simon
2014-05-01
Anecdotal reports and empirical evidence suggest that sensory processing issues are a key feature of autism spectrum conditions. This study set out to investigate whether adults with autism spectrum conditions report more sensory over-responsivity than adults without autism spectrum conditions. Another goal of the study was to identify whether autistic traits in adults with and without autism spectrum conditions were associated with sensory over-responsivity. Adults with (n = 221) and without (n = 181) autism spectrum conditions participated in an online survey. The Autism Spectrum Quotient, the Raven Matrices and the Sensory Processing Scale were used to characterize the sample. Adults with autism spectrum conditions reported more sensory over-responsivity than control participants across various sensory domains (visual, auditory, tactile, olfactory, gustatory and proprioceptive). Sensory over-responsivity correlated positively with autistic traits (Autism Spectrum Quotient) at a significant level across groups and within groups. Adults with autism spectrum conditions experience sensory over-responsivity to daily sensory stimuli to a high degree. A positive relationship exists between sensory over-responsivity and autistic traits. Understanding sensory over-responsivity and ways of measuring it in adults with autism spectrum conditions has implications for research and clinical settings.
Asymptotic Equivalence of Probability Measures and Stochastic Processes
NASA Astrophysics Data System (ADS)
Touchette, Hugo
2018-03-01
Let P_n and Q_n be two probability measures representing two different probabilistic models of some system (e.g., an n-particle equilibrium system, a set of random graphs with n vertices, or a stochastic process evolving over a time n) and let M_n be a random variable representing a "macrostate" or "global observable" of that system. We provide sufficient conditions, based on the Radon-Nikodym derivative of P_n and Q_n, for the set of typical values of M_n obtained relative to P_n to be the same as the set of typical values obtained relative to Q_n in the limit n→ ∞. This extends to general probability measures and stochastic processes the well-known thermodynamic-limit equivalence of the microcanonical and canonical ensembles, related mathematically to the asymptotic equivalence of conditional and exponentially-tilted measures. In this more general sense, two probability measures that are asymptotically equivalent predict the same typical or macroscopic properties of the system they are meant to model.
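A compact way to state the flavor of such a condition is sketched below; this is an illustration of the kind of statement involved, not the paper's exact theorem, and the required mode of convergence may be stronger or stated differently there.

```latex
% Sketch: if the normalized log Radon-Nikodym derivative is asymptotically
% negligible, the two models concentrate the macrostate on the same values.
\[
  \frac{1}{n}\,\ln\!\frac{\mathrm{d}P_n}{\mathrm{d}Q_n} \;\longrightarrow\; 0
  \quad (n \to \infty,\ \text{in probability})
  \;\;\Longrightarrow\;\;
  \{\text{typical values of } M_n \text{ under } P_n\}
  \;=\;
  \{\text{typical values of } M_n \text{ under } Q_n\}.
\]
```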
Electrophysiological evidence for parallel and serial processing during visual search.
Luck, S J; Hillyard, S A
1990-12-01
Event-related potentials were recorded from young adults during a visual search task in order to evaluate parallel and serial models of visual processing in the context of Treisman's feature integration theory. Parallel and serial search strategies were produced by the use of feature-present and feature-absent targets, respectively. In the feature-absent condition, the slopes of the functions relating reaction time and latency of the P3 component to set size were essentially identical, indicating that the longer reaction times observed for larger set sizes can be accounted for solely by changes in stimulus identification and classification time, rather than changes in post-perceptual processing stages. In addition, the amplitude of the P3 wave on target-present trials in this condition increased with set size and was greater when the preceding trial contained a target, whereas P3 activity was minimal on target-absent trials. These effects are consistent with the serial self-terminating search model and appear to contradict parallel processing accounts of attention-demanding visual search performance, at least for a subset of search paradigms. Differences in ERP scalp distributions further suggested that different physiological processes are utilized for the detection of feature presence and absence.
Kim, Dongcheol; Rhee, Sehun
2002-01-01
CO(2) welding is a complex process. Weld quality depends on arc stability and on minimizing the effects of disturbances or changes in the operating condition that commonly occur during the welding process. In order to minimize these effects, a controller can be used. In this study, a fuzzy controller was used to stabilize the arc during CO(2) welding. The input variable of the controller was the Mita index, which quantitatively estimates the arc stability influenced by many welding process parameters. Because the welding process is complex, a mathematical model of the Mita index was difficult to derive. Therefore, the parameter settings of the fuzzy controller were determined by performing actual control experiments without using a mathematical model of the controlled process. As a solution, the Taguchi method was used to determine the optimal control parameter settings of the fuzzy controller, making the control performance robust and insensitive to changes in the operating conditions.
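A minimal sketch of a rule-based fuzzy mapping from a normalized arc-stability index (standing in for the Mita index) to a welding-parameter correction is shown below; the membership functions, rule table, and correction levels are assumptions for illustration, not those identified in the study.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b with support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_voltage_correction(stability):
    """Map a normalized arc-stability index (0 = unstable, 1 = stable) to a
    voltage correction using three rules and weighted-average defuzzification."""
    low = tri(stability, -0.5, 0.0, 0.5)     # IF stability is low    THEN raise voltage
    med = tri(stability, 0.2, 0.5, 0.8)      # IF stability is medium THEN hold
    high = tri(stability, 0.5, 1.0, 1.5)     # IF stability is high   THEN lower voltage
    levels = {+1.0: low, 0.0: med, -1.0: high}   # correction levels in volts (illustrative)
    total = sum(levels.values()) or 1.0
    return sum(v * w for v, w in levels.items()) / total

for s in (0.1, 0.5, 0.9):
    print(s, round(fuzzy_voltage_correction(s), 3))
```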
Core outcome sets for research and clinical practice.
Chiarotto, Alessandro; Ostelo, Raymond W; Turk, Dennis C; Buchbinder, Rachelle; Boers, Maarten
This masterclass introduces the topic of core outcome sets, describing rationale and methods for developing them, and providing some examples that are relevant for clinical research and practice. A core outcome set is a minimum consensus-based set of outcomes that should be measured and reported in all clinical trials for a specific health condition and/or intervention. Issues surrounding outcome assessment, such as selective reporting and inconsistency across studies, can be addressed by the development of a core set. As suggested by key initiatives in this field (i.e. OMERACT and COMET), the development requires achieving consensus on: (1) core outcome domains and (2) core outcome measurement instruments. Different methods can be used to reach consensus, including: literature systematic reviews to inform the process, qualitative research with clinicians and patients, group discussions (e.g. nominal group technique), and structured surveys (e.g. Delphi technique). Various stakeholders should be involved in the process, with particular attention to patients. Several COSs have been developed for musculoskeletal conditions including a longstanding one for low back pain, IMMPACT recommendations on outcomes for chronic pain, and OMERACT COSs for hip, knee and hand osteoarthritis. There is a lack of COSs for neurological, geriatric, cardio-respiratory and pediatric conditions, therefore, future research could determine the value of developing COSs for these conditions. Copyright © 2017 Associação Brasileira de Pesquisa e Pós-Graduação em Fisioterapia. Publicado por Elsevier Editora Ltda. All rights reserved.
Uncertainties in predicting debris flow hazards following wildfire [Chapter 19
Kevin D. Hyde; Karin Riley; Cathelijne Stoof
2017-01-01
Wildfire increases the probability of debris flows posing hazardous conditions where values-at-risk exist downstream of burned areas. Conditions and processes leading to postfire debris flows usually follow a general sequence defined here as the postfire debris flow hazard cascade: biophysical setting, fire processes, fire effects, rainfall, debris flow, and values-at-...
Condition assessment of nonlinear processes
Hively, Lee M.; Gailey, Paul C.; Protopopescu, Vladimir A.
2002-01-01
There is presented a reliable technique for measuring condition change in nonlinear data such as brain waves. The nonlinear data is filtered and discretized into windowed data sets. The system dynamics within each data set is represented by a sequence of connected phase-space points, and for each data set a distribution function is derived. New metrics are introduced that evaluate the distance between distribution functions. The metrics are properly renormalized to provide robust and sensitive relative measures of condition change. As an example, these measures can be used on EEG data, to provide timely discrimination between normal, preseizure, seizure, and post-seizure states in epileptic patients. Apparatus utilizing hardware or software to perform the method and provide an indicative output is also disclosed.
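The general workflow described above (delay embedding of windowed data, distribution functions over phase-space points, and a distance between distributions) can be sketched as follows; the embedding parameters, binning, and chi-square-style distance are standard choices assumed here, not the patented renormalized metrics.

```python
import numpy as np

def embed(x, dim=3, lag=2):
    """Time-delay embedding: map a 1-D signal to connected phase-space points."""
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag:i * lag + n] for i in range(dim)])

def distribution(points, edges):
    """Discretize phase-space points into a normalized occupancy histogram."""
    hist, _ = np.histogramdd(points, bins=edges)
    return hist.ravel() / hist.sum()

def condition_change(base, test, edges):
    """Chi-square-like distance between two windowed distribution functions."""
    p, q = distribution(embed(base), edges), distribution(embed(test), edges)
    mask = (p + q) > 0
    return 0.5 * np.sum((p[mask] - q[mask]) ** 2 / (p[mask] + q[mask]))

rng = np.random.default_rng(1)
edges = [np.linspace(-3, 3, 9)] * 3
baseline = np.sin(np.linspace(0, 40 * np.pi, 4000)) + 0.1 * rng.standard_normal(4000)
altered = np.sin(np.linspace(0, 40 * np.pi, 4000)) ** 3 + 0.4 * rng.standard_normal(4000)
print(condition_change(baseline, baseline[::-1], edges))  # small: same dynamics
print(condition_change(baseline, altered, edges))         # larger: changed dynamics
```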
Priority setting in developing countries health care institutions: the case of a Ugandan hospital
Kapiriri, Lydia; Martin, Douglas K
2006-01-01
Background Because the demand for health services outstrips the available resources, priority setting is one of the most difficult issues faced by health policy makers, particularly those in developing countries. However, there is lack of literature that describes and evaluates priority setting in these contexts. The objective of this paper is to describe priority setting in a teaching hospital in Uganda and evaluate the description against an ethical framework for fair priority setting processes – Accountability for Reasonableness. Methods A case study in a 1,500 bed national referral hospital receiving 1,320 out patients per day and an average budget of US$ 13.5 million per year. We reviewed documents and carried out 70 in-depth interviews (14 health planners, 40 doctors, and 16 nurses working at the hospital). Interviews were recorded and transcribed. Data analysis employed the modified thematic approach to describe priority setting, and the description was evaluated using the four conditions of Accountability for Reasonableness: relevance, publicity, revisions and enforcement. Results Senior managers, guided by the hospital strategic plan make the hospital budget allocation decisions. Frontline practitioners expressed lack of knowledge of the process. Relevance: Priority is given according to a cluster of factors including need, emergencies and patient volume. However, surgical departments and departments whose leaders "make a lot of noise" are also prioritized. Publicity: Decisions, but not reasons, are publicized through general meetings and circulars, but this information does not always reach the frontline practitioners. Publicity to the general public was through ad hoc radio programs and to patients who directly ask. Revisions: There were no formal mechanisms for challenging the reasoning. Enforcement: There were no mechanisms to ensure adherence to the four conditions of a fair process. Conclusion Priority setting decisions at this hospital do not satisfy the conditions of fairness. To improve, the hospital should: (i) engage frontline practitioners, (ii) publicize the reasons for decisions both within the hospital and to the general public, and (iii) develop formal mechanisms for challenging the reasoning. In addition, capacity strengthening is required for senior managers who must accept responsibility for ensuring that the above three conditions are met. PMID:17026761
Mavadat, Maryam; Ghasemzadeh-Barvarz, Massoud; Turgeon, Stéphane; Duchesne, Carl; Laroche, Gaétan
2013-12-23
We investigated the effect of various plasma parameters (relative density of atomic N and H, plasma temperature, and vibrational temperature) and process conditions (pressure and H2/(N2 + H2) ratio) on the chemical composition of modified poly(tetrafluoroethylene) (PTFE). The plasma parameters were measured by means of near-infrared (NIR) and UV-visible emission spectroscopy with and without actinometry. The process conditions of the N2-H2 microwave discharges were set at various pressures ranging from 100 to 2000 mTorr and H2/(N2+H2) gas mixture ratios between 0 and 0.4. The surface chemical composition of the modified polymers was determined by X-ray photoelectron spectroscopy (XPS). A mathematical model was constructed using the partial least-squares regression algorithm to correlate the plasma information (process condition and plasma parameters as determined by emission spectroscopy) with the modified surface characteristics. To construct the model, a set of data input variables containing process conditions and plasma parameters were generated, as well as a response matrix containing the surface composition of the polymer. This model was used to predict the composition of PTFE surfaces subjected to N2-H2 plasma treatment. Contrary to what is generally accepted in the literature, the present data demonstrate that hydrogen is not directly involved in the defluorination of the surface but rather produces atomic nitrogen and/or NH radicals that are shown to be at the origin of fluorine atom removal from the polymer surface. The results show that process conditions alone do not suffice in predicting the surface chemical composition and that the plasma characteristics, which cannot be easily correlated with these conditions, should be considered. Process optimization and control would benefit from plasma diagnostics, particularly infrared emission spectroscopy.
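A hedged sketch of the modeling step, pairing process conditions and emission-derived plasma descriptors with XPS surface composition via partial least squares, is given below using synthetic data; the column names, dimensions, and values are placeholders, not the study's data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Columns: pressure, H2/(N2+H2) ratio, N density, H density, Te, Tvib (placeholders)
X = rng.normal(size=(60, 6))
# Columns: surface at.% of F, N, O (synthetic linear mixture plus noise)
Y = X @ rng.normal(size=(6, 3)) + 0.1 * rng.normal(size=(60, 3))

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)
pls = PLSRegression(n_components=4).fit(X_tr, Y_tr)
print("R^2 on held-out treatments:", round(pls.score(X_te, Y_te), 3))
print("Predicted composition of one new treatment:", pls.predict(X_te[:1]).round(2))
```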
[Breast-feeding experience for women workers and students from a public university].
Silva, Isilia Aparecida
2005-01-01
This qualitative research aimed to know the main interfering elements in the breast-feeding process as experienced by professional women and by students, that was carried out with 65 professional women and students from a public university in São Paulo state. The data collection was proceeded by interviews which contents were analyzed according to Taylor and Bogdan and Symbolic Interactionism approaches. Results indicated that the breast-feeding process for these women demonstrated to be conditioned and highlighted by the conditions the women encounter in their domestic, professional, and study settings. The physical setting and the relations among their relatives, superiors and peers exerts a strong influence on their determination to keep on breastfeeding.
42 CFR 410.143 - Requirements for approved accreditation organizations.
Code of Federal Regulations, 2013 CFR
2013-10-01
...) Notice of any proposed changes in its accreditation standards and requirements or evaluation process. If... enforcement of its standards to a set of quality standards (described in § 410.144) and processes when any of the following conditions exist: (i) CMS imposes new requirements or changes its process for approving...
Carrera, Marinete Pinheiro; Carey, Robert J; Cruz Dias, Flávia Regina; dos Santos Sampaio, Maria de Fátima; de Matos, Liana Wermelinger
2013-01-01
Re-exposure to conditioned drug stimuli triggers re-consolidation processes. In the present study post-trial apomorphine treatments were administered in order to interact with the re-consolidation of an apomorphine conditioned/sensitized locomotor response. A low (0.05 mg/kg) and a high (2.0mg/kg) dose were used to inhibit or to enhance dopamine activity, respectively. Initially, groups received 5 daily apomorphine (2.0mg/kg)/vehicle treatments either paired or unpaired to open-field placement. The paired treatments generated a progressive locomotor response. Subsequently, all groups received a 5 min non-drug test for conditioning and a conditioned locomotor response was observed in the paired group. The groups received another apomorphine (2.0mg/kg)/vehicle treatment as a re-induction treatment. At this stage the post-trial protocol was initiated. One set of paired, unpaired and vehicle groups were given a low dose of apomorphine (0.05 mg/kg) post-trial; another set received a high dose of apomorphine (2.0mg/kg) post-trial. The remaining group set received vehicle post-trial. The low dose post-trial treatment eliminated the conditioned and sensitized locomotor response and the high dose post-trial treatment enhanced the conditioned and sensitized locomotor response. The efficacy of the post-trial apomorphine treatments to modify the conditioned and the sensitized response after a brief non-drug exposure to test cues supports the proposition that exteroceptive cues control conditioning and sensitization and that the interoceptive drug cues make little or no associational contribution to apomorphine conditioning and sensitization. In addition, the findings point to the importance of dopamine activation in both the acquisition and re-consolidation of conditioning processes. Copyright © 2012 Elsevier B.V. All rights reserved.
Generated spiral bevel gears: Optimal machine-tool settings and tooth contact analysis
NASA Technical Reports Server (NTRS)
Litvin, F. L.; Tsung, W. J.; Coy, J. J.; Heine, C.
1985-01-01
Geometry and kinematic errors were studied for Gleason generated spiral bevel gears. A new method was devised for choosing optimal machine settings. These settings provide zero kinematic errors and an improved bearing contact. The kinematic errors are a major source of noise and vibration in spiral bevel gears. The improved bearing contact gives improved conditions for lubrication. A computer program for tooth contact analysis was developed, and thereby the new generation process was confirmed. The new process is governed by the requirement that during the generation process there is directional constancy of the common normal of the contacting surfaces for generator and generated surfaces of pinion and gear.
Kamp, Rachelle J A; van Berkel, Henk J M; Popeijus, Herman E; Leppink, Jimmie; Schmidt, Henk G; Dolmans, Diana H J M
2014-03-01
Even though peer process feedback is an often used tool to enhance the effectiveness of collaborative learning environments like PBL, the conditions under which it is best facilitated still need to be investigated. Therefore, this study investigated the effects of individual versus shared reflection and goal setting on students' individual contributions to the group and their academic achievement. In addition, the influence of prior knowledge on the effectiveness of peer feedback was studied. In this pretest-intervention-posttest study 242 first year students were divided into three conditions: condition 1 (individual reflection and goal setting), condition 2 (individual and shared reflection and goal setting), and condition 3 (control group). Results indicated that the quality of individual contributions to the tutorial group did not improve after receiving the peer feedback, nor did it differ between the three conditions. With regard to academic achievement, only males in conditions 1 and 2 showed better academic achievement compared with condition 3. However, there was no difference between both ways of reflection and goal setting with regard to achievement, indicating that both ways are equally effective. Nevertheless, it is still too early to conclude that peer feedback combined with reflection and goal setting is not effective in enhancing students' individual contributions. Students only had a limited number of opportunities to improve their contributions. Therefore, future research should investigate whether an increase in number of tutorial group meetings can enhance the effectiveness of peer feedback. In addition, the effect of quality of reflection and goal setting could be taken into consideration in future research.
NASA Technical Reports Server (NTRS)
Decker, Arthur J. (Inventor)
2006-01-01
An artificial neural network is disclosed that processes holography generated characteristic pattern of vibrating structures along with finite-element models. The present invention provides for a folding operation for conditioning training sets for optimally training forward-neural networks to process characteristic fringe pattern. The folding pattern increases the sensitivity of the feed-forward network for detecting changes in the characteristic pattern The folding routine manipulates input pixels so as to be scaled according to the location in an intensity range rather than the position in the characteristic pattern.
KAMO: towards automated data processing for microcrystals.
Yamashita, Keitaro; Hirata, Kunio; Yamamoto, Masaki
2018-05-01
In protein microcrystallography, radiation damage often hampers complete and high-resolution data collection from a single crystal, even under cryogenic conditions. One promising solution is to collect small wedges of data (5-10°) separately from multiple crystals. The data from these crystals can then be merged into a complete reflection-intensity set. However, data processing of multiple small-wedge data sets is challenging. Here, a new open-source data-processing pipeline, KAMO, which utilizes existing programs, including the XDS and CCP4 packages, has been developed to automate whole data-processing tasks in the case of multiple small-wedge data sets. Firstly, KAMO processes individual data sets and collates those indexed with equivalent unit-cell parameters. The space group is then chosen and any indexing ambiguity is resolved. Finally, clustering is performed, followed by merging with outlier rejections, and a report is subsequently created. Using synthetic and several real-world data sets collected from hundreds of crystals, it was demonstrated that merged structure-factor amplitudes can be obtained in a largely automated manner using KAMO, which greatly facilitated the structure analyses of challenging targets that only produced microcrystals.
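The following is not KAMO's actual interface, but a toy illustration of the collation step it describes: grouping small-wedge data sets by compatible unit-cell parameters with hierarchical clustering before merging. The cell values and tolerance are invented.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical unit-cell parameters (a, b, c in angstroms) indexed from small wedges.
cells = np.array([
    [78.1, 78.3, 37.0],
    [78.4, 78.2, 37.1],
    [77.9, 78.0, 36.9],
    [96.5, 96.7, 45.2],   # a different crystal form slips into the batch
    [78.2, 78.1, 37.0],
])

# Group wedges whose cells agree within a tolerance before merging intensities.
dist = pdist(cells, metric="euclidean")
groups = fcluster(linkage(dist, method="average"), t=2.0, criterion="distance")
print(groups)   # wedges sharing a group label would be merged together
```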
NASA Astrophysics Data System (ADS)
Guo, W. C.; Yang, J. D.; Chen, J. P.; Peng, Z. Y.; Zhang, Y.; Chen, C. C.
2016-11-01
The load rejection test is one of the essential tests carried out before a hydroelectric generating set is formally put into operation. The test aims at inspecting the rationality of the design of the water diversion and power generation system of the hydropower station, the reliability of the equipment of the generating set, and the dynamic characteristics of the hydro-turbine governing system. Proceeding from different accident conditions of the hydroelectric generating set, this paper presents the transient processes of load rejection corresponding to different accident conditions and elaborates the characteristics of different types of load rejection. A numerical simulation method for the different types of load rejection is then established, and an engineering project is calculated to verify the validity of the method. Finally, based on the numerical simulation results, the relationships among the different types of load rejection and their roles in the design of the hydropower station and the operation of the load rejection test are pointed out. The results indicate that load rejection caused by an accident within the hydroelectric generating set is realized by the emergency distributing valve, and it is the basis for optimizing the closing law of the guide vanes and for the calculation of regulation and guarantee. Load rejection caused by an accident outside the hydroelectric generating set is realized by the governor. It is the most efficient measure for inspecting the dynamic characteristics of the hydro-turbine governing system, and the closure rate of the guide vanes set in the governor depends on the optimization result from the former type of load rejection.
Chronic exercise conditioning has been shown to alter basal thermoregulatory processes (change in thermoregulatory set-point) as well as the response to infectious fever. Chlorpyrifos (CHP), an organophosphate pesticide, causes an acute period of hypothermia followed by a delaye...
Finke, Mareike; Barceló, Francisco; Garolera, Maite; Cortiñas, Miriam; Garrido, Gemma; Pajares, Marta; Escera, Carles
2011-07-01
An accurate representation of task-set information is needed for successful goal directed behavior. Recent studies point to disturbances in the early processing stages as plausible causes for task-switching deficits in schizophrenia. A task-cueing protocol was administered to a group of schizophrenic patients and compared with a sample of age-matched healthy controls. Patients responded slower and less accurate compared with controls in all conditions. The concurrent recording of event-related brain potentials to contextual cues and target events revealed abnormalities in the early processing of both cue-locked and target-locked N1 potentials. Abnormally enhanced target-locked P2 amplitudes were observed in schizophrenic patients for task-switch trials only, suggesting disrupted stimulus evaluation and memory retrieval processes. The endogenous P3 potentials discriminated between task conditions but without further differences between groups. These results suggest that the observed impairments in task-switching behavior were not specifically related to anticipatory set-shifting, but derived from a deficit in the implementation of task-set representations at target onset in the presence of irrelevant and conflicting information. Copyright © 2011 Elsevier B.V. All rights reserved.
Centralized drug review processes: are they fair?
Mitton, Craig R; McMahon, Meghan; Morgan, Steve; Gibson, Jennifer
2006-07-01
Numerous countries have implemented centralized drug review processes to assist in making drug coverage decisions. In addition to examining the final recommendations of these bodies, it is also important to ensure fairness in decision making. Accountability for reasonableness is an ethics-based framework for examining the fairness of priority setting processes. The objective of this study was to assess the fairness of four internationally established centralized drug review processes using accountability for reasonableness. Semi-structured telephone interviews were conducted with stakeholders in Canada, New Zealand, Australia and the UK (n=16). Participants were asked to evaluate their country's centralized drug review process against the four conditions of accountability for reasonableness. Each centralized drug review process satisfied at least one of the four ethical conditions, but none satisfied all four conditions. All participants viewed transparency as critical to both the legitimacy and fairness of centralized drug review processes. Additional strides need to be made in each of the four countries under study to improve the fairness of their centralized drug review processes. Ideally, a fair priority setting process should foster constructive stakeholder engagement and enhance the legitimacy of decisions made in assessing pharmaceutical products for funding. As policy makers are under increasing scrutiny in allocating limited resources, fair process should be seen as a critical component of such activity. This study represents the first attempt to conduct an international comparison of the fairness of centralized drug review agencies in the eyes of participating stakeholders.
Effects of and preference for pay for performance: an analogue analysis.
Long, Robert D; Wilder, David A; Betz, Alison; Dutta, Ami
2012-01-01
We examined the effects of 2 payment systems on the rate of check processing and time spent on task by participants in a simulated work setting. Three participants experienced individual pay-for-performance (PFP) without base pay and pay-for-time (PFT) conditions. In the last phase, we asked participants to choose which system they preferred. For all participants, the PFP condition produced higher rates of check processing and more time spent on task than did the PFT condition, but choice of payment system varied both within and across participants.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greene, Patrick T.; Schofield, Samuel P.; Nourgaliev, Robert
2016-06-21
A new mesh smoothing method designed to cluster mesh cells near a dynamically evolving interface is presented. The method is based on weighted condition number mesh relaxation with the weight function being computed from a level set representation of the interface. The weight function is expressed as a Taylor series based discontinuous Galerkin projection, which makes the computation of the derivatives of the weight function needed during the condition number optimization process a trivial matter. For cases when a level set is not available, a fast method for generating a low-order level set from discrete cell-centered fields, such as a volume fraction or index function, is provided. Results show that the low-order level set works equally well for the weight function as the actual level set. Meshes generated for a number of interface geometries are presented, including cases with multiple level sets. Dynamic cases for moving interfaces are presented to demonstrate the method's potential usefulness to arbitrary Lagrangian Eulerian (ALE) methods.
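A minimal sketch of a level-set-derived weight of the kind that could bias condition-number smoothing toward an interface is shown below; the Gaussian form, width, and amplification are assumptions for illustration, since the cited method derives its weight from a discontinuous Galerkin projection of the level set.

```python
import numpy as np

def interface_weight(phi, width=0.05, amplification=9.0):
    """Weight that is largest where the level set phi is near zero (the interface).

    phi: signed-distance-like level set values at mesh vertices.
    The functional form is illustrative, not the paper's formula.
    """
    return 1.0 + amplification * np.exp(-(phi / width) ** 2)

# Vertices of a 1-D mesh with an interface at x = 0.5; weights peak there, so a
# weighted condition-number relaxation would cluster cells near the interface.
x = np.linspace(0.0, 1.0, 11)
phi = x - 0.5
print(np.round(interface_weight(phi), 2))
```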
Dilution: atheoretical burden or just load? A reply to Tsal and Benoni (2010).
Lavie, Nilli; Torralbo, Ana
2010-12-01
Load theory of attention proposes that distractor processing is reduced in tasks with high perceptual load that exhaust attentional capacity within task-relevant processing. In contrast, tasks of low perceptual load leave spare capacity that spills over, resulting in the perception of task-irrelevant, potentially distracting stimuli. Tsal and Benoni (2010) find that distractor response competition effects can be reduced under conditions with a high search set size but low perceptual load (due to a singleton color target). They claim that the usual effect of search set size on distractor processing is not due to attentional load but instead attribute this to lower level visual interference. Here, we propose an account for their findings within load theory. We argue that in tasks of low perceptual load but high set size, an irrelevant distractor competes with the search nontargets for remaining capacity. Thus, distractor processing is reduced under conditions in which the search nontargets receive the spillover of capacity instead of the irrelevant distractor. We report a new experiment testing this prediction. Our new results demonstrate that, when peripheral distractor processing is reduced, it is the search nontargets nearest to the target that are perceived instead. Our findings provide new evidence for the spare capacity spillover hypothesis made by load theory and rule out accounts in terms of lower level visual interference (or mere "dilution") for cases of reduced distractor processing under low load in displays of high set size. We also discuss additional evidence that discounts the viability of Tsal and Benoni's dilution account as an alternative to perceptual load.
Lavie, Nilli; Torralbo, Ana
2010-01-01
Load theory of attention proposes that distractor processing is reduced in tasks with high perceptual load that exhaust attentional capacity within task-relevant processing. In contrast, tasks of low perceptual load leave spare capacity that spills over, resulting in the perception of task-irrelevant, potentially distracting stimuli. Tsal and Benoni (2010) find that distractor response competition effects can be reduced under conditions with a high search set size but low perceptual load (due to a singleton color target). They claim that the usual effect of search set size on distractor processing is not due to attentional load but instead attribute this to lower level visual interference. Here, we propose an account for their findings within load theory. We argue that in tasks of low perceptual load but high set size, an irrelevant distractor competes with the search nontargets for remaining capacity. Thus, distractor processing is reduced under conditions in which the search nontargets receive the spillover of capacity instead of the irrelevant distractor. We report a new experiment testing this prediction. Our new results demonstrate that, when peripheral distractor processing is reduced, it is the search nontargets nearest to the target that are perceived instead. Our findings provide new evidence for the spare capacity spillover hypothesis made by load theory and rule out accounts in terms of lower level visual interference (or mere “dilution”) for cases of reduced distractor processing under low load in displays of high set size. We also discuss additional evidence that discounts the viability of Tsal and Benoni's dilution account as an alternative to perceptual load. PMID:21133554
Tromp, Noor; Prawiranegara, Rozar; Subhan Riparev, Harris; Siregar, Adiatma; Sunjaya, Deni; Baltussen, Rob
2015-04-01
Indonesia has insufficient resources to adequately respond to the HIV/AIDS epidemic, and thus faces a great challenge in prioritizing interventions. In many countries, such priority setting processes are typically ad hoc and not transparent leading to unfair decisions. Here, we evaluated the priority setting process in HIV/AIDS control in West Java province against the four conditions of the accountability for reasonableness (A4R) framework: relevance, publicity, appeals and revision, and enforcement. We reviewed government documents and conducted semi-structured qualitative interviews based on the A4R framework with 22 participants of the 5-year HIV/AIDS strategy development for 2008-13 (West Java province) and 2007-11 (Bandung). We found that criteria for priority setting were used implicitly and that the strategies included a wide range of programmes. Many stakeholders were involved in the process but their contribution could be improved and particularly the public and people living with HIV/AIDS could be better engaged. The use of appeal and publicity mechanisms could be more transparent and formally stated. Public regulations are not yet installed to ensure fair priority setting. To increase fairness in HIV/AIDS priority setting, West Java should make improvements on all four conditions of the A4R framework. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine © The Author 2014; all rights reserved.
Nosofsky, Robert M; Cox, Gregory E; Cao, Rui; Shiffrin, Richard M
2014-11-01
Experiments were conducted to test a modern exemplar-familiarity model on its ability to account for both short-term and long-term probe recognition within the same memory-search paradigm. Also, making connections to the literature on attention and visual search, the model was used to interpret differences in probe-recognition performance across diverse conditions that manipulated relations between targets and foils across trials. Subjects saw lists of from 1 to 16 items followed by a single item recognition probe. In a varied-mapping condition, targets and foils could switch roles across trials; in a consistent-mapping condition, targets and foils never switched roles; and in an all-new condition, on each trial a completely new set of items formed the memory set. In the varied-mapping and all-new conditions, mean correct response times (RTs) and error proportions were curvilinear increasing functions of memory set size, with the RT results closely resembling ones from hybrid visual-memory search experiments reported by Wolfe (2012). In the consistent-mapping condition, new-probe RTs were invariant with set size, whereas old-probe RTs increased slightly with increasing study-test lag. With appropriate choice of psychologically interpretable free parameters, the model accounted well for the complete set of results. The work provides support for the hypothesis that a common set of processes involving exemplar-based familiarity may govern long-term and short-term probe recognition across a wide variety of memory-search conditions. PsycINFO Database Record (c) 2014 APA, all rights reserved.
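A minimal summed-similarity sketch in the spirit of exemplar-familiarity models is given below; the exponential similarity kernel, decision criterion, and random stimuli are generic modeling assumptions for illustration, not the fitted model from this study.

```python
import numpy as np

def familiarity(probe, memory_set, c=2.0):
    """Summed exponential similarity of a probe to all stored exemplars."""
    d = np.linalg.norm(memory_set - probe, axis=1)     # psychological distances
    return np.exp(-c * d).sum()

def old_new_response(probe, memory_set, criterion=0.4):
    """Respond 'old' if summed familiarity exceeds a criterion, else 'new'."""
    return "old" if familiarity(probe, memory_set) > criterion else "new"

rng = np.random.default_rng(3)
memory_set = rng.normal(size=(8, 4))                    # study list of 8 items
old_probe = memory_set[2] + 0.05 * rng.normal(size=4)   # noisy copy of a studied item
new_probe = rng.normal(size=4)                          # unrelated foil
print(old_new_response(old_probe, memory_set), old_new_response(new_probe, memory_set))
```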
An automated design process for short pulse laser driven opacity experiments
Martin, M. E.; London, R. A.; Goluoglu, S.; ...
2017-12-21
Stellar-relevant conditions can be reached by heating a buried layer target with a short pulse laser. Previous design studies of iron buried layer targets found that plasma conditions are dominantly controlled by the laser energy, while the accuracy of the inferred opacity is limited by tamper emission and optical depth effects. In this paper, we developed a process to simultaneously optimize laser and target parameters to meet a variety of design goals. We explored two sets of design cases: a set focused on conditions relevant to the upper radiative zone of the sun (electron temperatures of 200 to 400 eV and densities greater than 1/10 of solid density) and a set focused on reaching temperatures consistent with deep within the radiative zone of the sun (500 to 1000 eV) at a fixed density. We found optimized designs for iron targets and determined that the appropriate dopant, for inferring plasma conditions, depends on the goal temperature: magnesium for up to 300 eV, aluminum for 300 to 500 eV, and sulfur for 500 to 1000 eV. The optimal laser energy and buried layer thickness increase with goal temperature. The accuracy of the inferred opacity is limited to between 11% and 31%, depending on the design. Overall, short pulse laser heated iron experiments reaching stellar-relevant conditions have been designed with consideration of minimizing tamper emission and optical depth effects while meeting plasma condition and x-ray emission goals.
Ono, Daiki; Bamba, Takeshi; Oku, Yuichi; Yonetani, Tsutomu; Fukusaki, Eiichiro
2011-09-01
In this study, we constructed prediction models by metabolic fingerprinting of fresh green tea leaves using Fourier transform near-infrared (FT-NIR) spectroscopy and partial least squares (PLS) regression analysis to objectively optimize the steaming process conditions in green tea manufacture. The steaming process is the most important step for manufacturing high quality green tea products. However, the parameter setting of the steamer is currently determined subjectively by the manufacturer. Therefore, a simple and robust system that can be used to objectively set the steaming process parameters is necessary. We focused on FT-NIR spectroscopy because of its simple operation, quick measurement, and low running costs. After removal of noise in the spectral data by principal component analysis (PCA), PLS regression analysis was performed using spectral information as independent variables, and the steaming parameters set by experienced manufacturers as dependent variables. The prediction models were successfully constructed with satisfactory accuracy. Moreover, the results of the demonstration experiment suggested that the green tea steaming process parameters could be predicted on a larger manufacturing scale. This technique will contribute to improvement of the quality and productivity of green tea because it can objectively optimize the complicated green tea steaming process and will be suitable for practical use in green tea manufacture. Copyright © 2011 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
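To make the workflow above concrete, here is a minimal Python sketch of the same two steps (PCA-based noise removal of the NIR spectra followed by PLS regression against the manufacturer-set steaming parameter) using scikit-learn; the spectra matrix, its dimensions, and the component counts are hypothetical placeholders rather than the authors' data or settings.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
spectra = rng.normal(size=(60, 700))     # 60 leaf samples x 700 NIR wavelengths (assumed shape)
steam_setting = rng.normal(size=60)      # steaming parameter set by experienced manufacturers (stand-in)

# Step 1: noise removal by PCA -- keep the leading components and reconstruct the spectra.
pca = PCA(n_components=10)
denoised = pca.inverse_transform(pca.fit_transform(spectra))

# Step 2: PLS regression with spectra as independent variables and the
# steaming parameter as the dependent variable, assessed by cross-validation.
pls = PLSRegression(n_components=5)
print(cross_val_score(pls, denoised, steam_setting, cv=5, scoring="r2"))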
Setting research priorities by applying the combined approach matrix.
Ghaffar, Abdul
2009-04-01
Priority setting in health research is a dynamic process. Different organizations and institutes have been working in the field of research priority setting for many years. In 1999 the Global Forum for Health Research presented a research priority setting tool called the Combined Approach Matrix or CAM. Since its development, the CAM has been successfully applied to set research priorities for diseases, conditions and programmes at global, regional and national levels. This paper briefly explains the CAM methodology and how it could be applied in different settings, giving examples and describing challenges encountered in the process of setting research priorities and providing recommendations for further work in this field. The construct and design of the CAM is explained along with the different steps needed, including planning and organization of a priority-setting exercise and how it could be applied in different settings. The application of the CAM is described using three examples. The first concerns setting research priorities for a global programme, the second describes application at the country level and the third concerns setting research priorities for diseases. Effective application of the CAM in different and diverse environments proves its utility as a tool for setting research priorities. Potential challenges encountered in the process of research priority setting are discussed and some recommendations for further work in this field are provided.
Bacterial contamination of ex vivo processed PBPC products under clean room conditions.
Ritter, Markus; Schwedler, Joachim; Beyer, Jörg; Movassaghi, Kamran; Mutters, Reinier; Neubauer, Andreas; Schwella, Nimrod
2003-11-01
Patients undergoing high-dose radio- and/or chemotherapy and autologous or allogeneic PBPC transplantation are at high risk for infections owing to profound immunosuppression. In this study, the rate of microbial contamination of ex vivo processed PBPC products was analyzed, comparing preparation under clean room conditions to standard laboratory conditions. After implementation of good manufacturing practice conditions in the two participating institutions, the microbial contamination rate of 366 PBPC harvests from 198 patients was determined under certified clean room conditions (Group A) from 2000 until 2002. To investigate the influence of improved environmental conditions along with other parameters, this set of samples was compared with a historical control set of 1413 PBPC products, harvested from 626 patients (Group B) from 1989 until 2000, which had been processed ex vivo under a clean bench in a regular laboratory room. In Group B microbial contamination was found in 74 PBPC products (5.2%) from 57 patients. In Group A microbial growth was detected in 3 leukapheresis products (0.8%) from 3 patients. After exclusion of PBPC products that were probably contaminated before manipulation, statistical analysis showed a significant difference (χ² = 10.339; p < 0.001). These data suggest an impact of clean room conditions on the bacterial contamination rate of PBPC products. To identify confounding factors, variables such as the technique of leukapheresis, culture methodology, and microbial colonization of central venous catheters were taken into account. Further variables might be identified in subsequent studies.
Wilén, B M; Lumley, D; Mattsson, A; Mino, T
2006-01-01
The effect of rain events on effluent quality dynamics was studied at a full-scale activated sludge wastewater treatment plant whose process incorporates pre-denitrification in activated sludge with post-nitrification in trickling filters. The incoming wastewater flow varies significantly due to a combined sewer system. Changed flow conditions have an impact on the whole treatment process, since the recirculation to the trickling filters is set by the hydraulic limitations of the secondary settlers. Apart from causing different hydraulic conditions in the plant, increased flow due to rain or snow-melting changes the properties of the incoming wastewater, which affects process performance and effluent quality, especially the particle removal efficiency. A comprehensive set of on-line and laboratory data were collected and analysed to assess the impact of rain events on the plant performance.
An Efficient Distributed Compressed Sensing Algorithm for Decentralized Sensor Network.
Liu, Jing; Huang, Kaiyu; Zhang, Guoxian
2017-04-20
We consider the joint sparsity Model 1 (JSM-1) in a decentralized scenario, where a number of sensors are connected through a network and there is no fusion center. A novel algorithm, named distributed compact sensing matrix pursuit (DCSMP), is proposed to exploit the computational and communication capabilities of the sensor nodes. In contrast to conventional distributed compressed sensing algorithms adopting a random sensing matrix, the proposed algorithm focuses on deterministic sensing matrices built directly on the real acquisition systems. The proposed DCSMP algorithm can be divided into two independent parts, the common and innovation support set estimation processes. The goal of the common support set estimation process is to obtain an estimated common support set by fusing the candidate support set information from an individual node and its neighboring nodes. In the following innovation support set estimation process, the measurement vector is projected into a subspace that is perpendicular to the subspace spanned by the columns indexed by the estimated common support set, to remove the impact of the estimated common support set. We can then search the innovation support set using an orthogonal matching pursuit (OMP) algorithm based on the projected measurement vector and projected sensing matrix. In the proposed DCSMP algorithm, the process of estimating the common component/support set is decoupled from that of estimating the innovation component/support set. Thus, an inaccurately estimated common support set will have no impact on estimating the innovation support set. It is proven that, under the condition that the estimated common support set contains the true common support set, the proposed algorithm can find the true innovation support set correctly. Moreover, since the innovation support set estimation process is independent of the common support set estimation process, there is no requirement on the cardinality of either set; thus, the proposed DCSMP algorithm is capable of tackling the unknown sparsity problem successfully.
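As a rough illustration of the innovation-support step described above, the following Python sketch projects the measurement vector and sensing matrix onto the orthogonal complement of the columns indexed by an estimated common support set and then runs a plain OMP on the projected quantities; the array sizes, the random sensing matrix, and the helper names are assumptions for illustration, not the authors' implementation.

import numpy as np

def project_out(A, y, common_support):
    # Projector onto the orthogonal complement of the common-support columns.
    Ac = A[:, sorted(common_support)]
    P = np.eye(A.shape[0]) - Ac @ np.linalg.pinv(Ac)
    return P @ A, P @ y

def omp(A, y, k):
    # Plain orthogonal matching pursuit selecting k atoms.
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    return set(support)

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 100))                 # the paper uses deterministic sensing matrices; random here
x_true = np.zeros(100)
x_true[[3, 17, 58]] = 1.0                      # assumed common support {3, 17} and innovation support {58}
y = A @ x_true
A_proj, y_proj = project_out(A, y, {3, 17})    # remove the impact of the estimated common support
print(omp(A_proj, y_proj, k=1))                # expected to recover the innovation index 58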
Working Conditions: Job Design. Working Paper #4.
ERIC Educational Resources Information Center
Gersten, Russell; And Others
This summary report presents an integration of findings on teachers' perceptions of their working conditions, based on survey and interview data from special educators in six large urban school districts. Emphasis is on perceptions of problems related to job design, the highly interrelated set of structures, systems, and processes intended to…
Code of Federal Regulations, 2010 CFR
2010-10-01
... alteration of existing environmental conditions or creation of a new set of environmental conditions, adverse... material that can be used as a raw material in an industrial process in which it is transformed into a new product replacing the use of a depletable natural resource. (h) Marine Terminal Operator means a person...
Engineering of layered, lipid-encapsulated drug nanoparticles through spray-drying.
Sapra, Mahak; Mayya, Y S; Venkataraman, Chandra
2017-06-01
Drug-containing nanoparticles have been synthesized through the spray-drying of submicron droplet aerosols by using matrix materials such as lipids and biopolymers. Understanding layer formation in composite nanoparticles is essential for the appropriate engineering of particle substructures. The present study developed a droplet-shrinkage model for predicting the solid-phase formation of two non-volatile solutes (a stearic acid lipid and a set of drugs) by considering molecular volume and solubility. Nanoparticle formation was simulated to define the parameter space of material properties and process conditions for the formation of a layered structure with the preferential accumulation of the lipid in the outer layer. Moreover, lipid-drug demarcation diagrams, representing a set of critical values of ratios of solute properties at which the two solutes precipitate simultaneously, were developed. The model was validated through the preparation of stearic acid-isoniazid nanoparticles under controlled processing conditions. The developed model can guide the selection of solvents, lipids, and processing conditions such that drug loading and lipid encapsulation in composite nanoparticles are optimized. Copyright © 2017 Elsevier B.V. All rights reserved.
Mining subspace clusters from DNA microarray data using large itemset techniques.
Chang, Ye-In; Chen, Jiun-Rung; Tsai, Yueh-Chi
2009-05-01
Mining subspace clusters from DNA microarrays could help researchers identify genes that commonly contribute to a disease, where a subspace cluster indicates a subset of genes whose expression levels are similar under a subset of conditions. Since in a DNA microarray the number of genes is far larger than the number of conditions, previously proposed algorithms that compute the maximum dimension sets (MDSs) for any two genes take a long time to mine subspace clusters. In this article, we propose the Large Itemset-Based Clustering (LISC) algorithm for mining subspace clusters. Instead of constructing MDSs for any two genes, we construct only MDSs for any two conditions. Then, we transform the task of finding the maximal possible gene sets into the problem of mining large itemsets from the condition-pair MDSs. Since we are only interested in those subspace clusters with gene sets as large as possible, it is desirable to pay attention to those gene sets which have reasonably large support values in the condition-pair MDSs. From our simulation results, we show that the proposed algorithm needs shorter processing time than previously proposed algorithms that need to construct gene-pair MDSs.
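A minimal Python sketch of the condition-pair idea follows, under strong simplifying assumptions: each condition pair contributes a "transaction" of genes whose expression difference falls below a tolerance (a crude stand-in for the paper's maximum dimension sets), and gene itemsets are kept if they appear in enough transactions. The similarity rule, thresholds, and data are invented for illustration, not the authors' definitions.

from itertools import combinations
import numpy as np

rng = np.random.default_rng(2)
expr = rng.normal(size=(50, 8))    # 50 genes x 8 conditions (invented data)
tol, min_support = 0.3, 3

# One "transaction" per condition pair: the genes whose expression changes
# little between the two conditions (a simplified stand-in for an MDS).
transactions = [frozenset(np.flatnonzero(np.abs(expr[:, i] - expr[:, j]) < tol))
                for i, j in combinations(range(expr.shape[1]), 2)]

# Naive large-itemset pass over gene pairs (a stand-in for a full Apriori run).
counts = {}
for t in transactions:
    for pair in combinations(sorted(t), 2):
        counts[pair] = counts.get(pair, 0) + 1
frequent_pairs = [p for p, c in counts.items() if c >= min_support]
print(len(frequent_pairs), "gene pairs co-occur in at least", min_support, "condition-pair sets")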
Posterior consistency in conditional distribution estimation
Pati, Debdeep; Dunson, David B.; Tokdar, Surya T.
2014-01-01
A wide variety of priors have been proposed for nonparametric Bayesian estimation of conditional distributions, and there is a clear need for theorems providing conditions on the prior for large support, as well as posterior consistency. Estimation of an uncountable collection of conditional distributions across different regions of the predictor space is a challenging problem, which differs in some important ways from density and mean regression estimation problems. Defining various topologies on the space of conditional distributions, we provide sufficient conditions for posterior consistency focusing on a broad class of priors formulated as predictor-dependent mixtures of Gaussian kernels. This theory is illustrated by showing that the conditions are satisfied for a class of generalized stick-breaking process mixtures in which the stick-breaking lengths are monotone, differentiable functions of a continuous stochastic process. We also provide a set of sufficient conditions for the case where stick-breaking lengths are predictor independent, such as those arising from a fixed Dirichlet process prior. PMID:25067858
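For readers unfamiliar with the class of priors referred to, one common form of a predictor-dependent stick-breaking mixture of Gaussian kernels is sketched below in LaTeX; the notation is an assumption for illustration, not a quotation of the authors' model.

f(y \mid x) = \sum_{h=1}^{\infty} \pi_h(x) \, \mathcal{N}\big(y;\, \mu_h(x),\, \sigma_h^2\big),
\qquad
\pi_h(x) = V_h(x) \prod_{\ell < h} \big(1 - V_\ell(x)\big),

where the stick-breaking lengths V_h(x) are monotone, differentiable functions of continuous stochastic processes; taking V_h(x) \equiv V_h, independent of the predictor, corresponds to the predictor-independent case mentioned above, such as a fixed Dirichlet process prior.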
42 CFR 405.853 - Expedited appeals process.
Code of Federal Regulations, 2010 CFR
2010-10-01
....853 Expedited appeals process. (a) Conditions for use of expedited appeals process (EAP). A party may use the EAP set forth in § 405.718 of this chapter to request court review in place of the ALJ hearing... the request for an EAP. (b) Content of the request for EAP. The request for an EAP: (1) Alleges that...
USDA-ARS?s Scientific Manuscript database
Shiga Toxin-Producing Escherichia coli (STEC) are frequently implicated in foodborne illness outbreaks and recalls of ground beef. In this study we determined the High Pressure Processing (HPP) D-10 value (the processing conditions needed to reduce the microbial population by 1 log) of 39 individua...
Introducing priority setting and resource allocation in home and community care programs.
Urquhart, Bonnie; Mitton, Craig; Peacock, Stuart
2008-01-01
To use evidence from research to identify and implement priority setting and resource allocation that incorporates both ethical practices and economic principles. Program budgeting and marginal analysis (PBMA) is based on two key economic principles: opportunity cost (i.e. doing one thing instead of another) and the margin (i.e. resource allocation should result in maximum benefit for available resources). An ethical framework for priority setting and resource allocation known as Accountability for Reasonableness (A4R) focuses on making sure that resource allocations are based on a fair decision-making process. It includes the following four conditions: publicity; relevance; appeals; and enforcement. More recent literature on the topic suggests that a fifth condition, that of empowerment, should be added to the Framework. The 2007-08 operating budget for Home and Community Care, excluding the residential sector, was developed using PBMA and incorporating the A4R conditions. Recommendations developed using PBMA were forwarded to the Executive Committee, approved and implemented for the 2007-08 fiscal year operating budget. In addition there were two projects approved for approximately $200,000. PBMA is an improvement over previous practice. Managers of Home and Community Care are committed to using the process for the 2008-09 fiscal year operating budget and expanding its use to include mental health and addictions services. In addition, managers of public health prevention and promotion services are considering using the process.
Moore, Shirley M.; Schiffman, Rachel; Waldrop-Valverde, Drenna; Redeker, Nancy S.; McCloskey, Donna Jo; Kim, Miyong T.; Heitkemper, Margaret M.; Guthrie, Barbara J.; Dorsey, Susan G.; Docherty, Sharron L.; Barton, Debra; Bailey, Donald E.; Austin, Joan K.; Grady, Patricia
2017-01-01
Purpose: Common data elements (CDEs) are increasingly being used by researchers to promote data sharing across studies. The purposes of this article are to (a) describe the theoretical, conceptual, and definition issues in the development of a set of CDEs for research addressing self-management of chronic conditions; (b) propose an initial set of CDEs and their measures to advance the science of self-management; and (c) recommend implications for future research and dissemination. Design and Methods: Between July 2014 and December 2015 the directors of the National Institute of Nursing Research (NINR)-funded P20 and P30 centers of excellence and NINR staff met in a series of telephone calls and a face-to-face NINR-sponsored meeting to select a set of recommended CDEs to be used in self-management research. A list of potential CDEs was developed from examination of common constructs in current self-management frameworks, as well as identification of variables frequently used in studies conducted in the centers of excellence. Findings: The recommended CDEs include measures of three self-management processes: activation, self-regulation, and self-efficacy for managing chronic conditions, and one measure of a self-management outcome, global health. Conclusions: The self-management of chronic conditions, which encompasses a considerable number of processes, behaviors, and outcomes across a broad range of chronic conditions, presents several challenges in the identification of a parsimonious set of CDEs. This initial list of recommended CDEs for use in self-management research is provisional in that it is expected that over time it will be refined. Comment and recommended revisions are sought from the research and practice communities. Clinical Relevance: The use of CDEs can facilitate generalizability of research findings across diverse population and interventions. PMID:27486851
Moore, Shirley M; Schiffman, Rachel; Waldrop-Valverde, Drenna; Redeker, Nancy S; McCloskey, Donna Jo; Kim, Miyong T; Heitkemper, Margaret M; Guthrie, Barbara J; Dorsey, Susan G; Docherty, Sharron L; Barton, Debra; Bailey, Donald E; Austin, Joan K; Grady, Patricia
2016-09-01
Common data elements (CDEs) are increasingly being used by researchers to promote data sharing across studies. The purposes of this article are to (a) describe the theoretical, conceptual, and definition issues in the development of a set of CDEs for research addressing self-management of chronic conditions; (b) propose an initial set of CDEs and their measures to advance the science of self-management; and (c) recommend implications for future research and dissemination. Between July 2014 and December 2015 the directors of the National Institute of Nursing Research (NINR)-funded P20 and P30 centers of excellence and NINR staff met in a series of telephone calls and a face-to-face NINR-sponsored meeting to select a set of recommended CDEs to be used in self-management research. A list of potential CDEs was developed from examination of common constructs in current self-management frameworks, as well as identification of variables frequently used in studies conducted in the centers of excellence. The recommended CDEs include measures of three self-management processes: activation, self-regulation, and self-efficacy for managing chronic conditions, and one measure of a self-management outcome, global health. The self-management of chronic conditions, which encompasses a considerable number of processes, behaviors, and outcomes across a broad range of chronic conditions, presents several challenges in the identification of a parsimonious set of CDEs. This initial list of recommended CDEs for use in self-management research is provisional in that it is expected that over time it will be refined. Comment and recommended revisions are sought from the research and practice communities. The use of CDEs can facilitate generalizability of research findings across diverse population and interventions. © 2016 Sigma Theta Tau International.
Process-time Optimization of Vacuum Degassing Using a Genetic Alloy Design Approach
Dilner, David; Lu, Qi; Mao, Huahai; Xu, Wei; van der Zwaag, Sybrand; Selleby, Malin
2014-01-01
This paper demonstrates the use of a new model consisting of a genetic algorithm in combination with thermodynamic calculations and analytical process models to minimize the processing time during a vacuum degassing treatment of liquid steel. The model sets multiple simultaneous targets for final S, N, O, Si and Al levels and uses the total slag mass, the slag composition, the steel composition and the start temperature as optimization variables. The predicted optimal conditions agree well with industrial practice. For those conditions leading to the shortest process time the target compositions for S, N and O are reached almost simultaneously. PMID:28788286
Genetic Algorithms Evolve Optimized Transforms for Signal Processing Applications
2005-04-01
coefficient sets describing inverse transforms and matched forward/inverse transform pairs that consistently outperform wavelets for image compression and reconstruction applications under conditions subject to quantization error.
Impact Of The Material Variability On The Stamping Process: Numerical And Analytical Analysis
NASA Astrophysics Data System (ADS)
Ledoux, Yann; Sergent, Alain; Arrieux, Robert
2007-05-01
Finite element simulation is a widely used tool in the deep drawing industry, particularly for the development and validation of new stamping tools, where it reduces the cost and time of tooling design and set-up. One of the main difficulties in obtaining good agreement between the simulation and the real process lies in defining the numerical conditions (mesh, punch travel speed, boundary conditions, …) and the parameters that model the material behavior. Indeed, in the press shop, a change of sheet batch often produces a variation in the formed part geometry, owing to the variability of the material properties between batches. This variability is probably one of the main sources of process deviation at set-up, which is why it is important to study the influence of material data variation on the geometry of a typical stamped part. The chosen geometry is an omega-shaped part, selected for its simplicity and its representativeness in the automotive industry (car body reinforcement); it also exhibits significant springback deviations. An isotropic behaviour law is assumed. The impact of statistical deviations of the three law coefficients characterizing the material and of the friction coefficient around their nominal values is tested. A Gaussian distribution is assumed, and the resulting geometry variation is studied by FE simulation. A second approach is also considered, in which the process variability is represented by a mathematical model: as a function of the variability of the input parameters, an analytical model is defined that yields the variability of the part geometry around the nominal shape. These two approaches make it possible to predict the process capability as a function of the material parameter variability.
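A minimal Python sketch of the second approach described above (propagating Gaussian variability of the three hardening-law coefficients and the friction coefficient through a surrogate model of the part geometry) is given below; the quadratic surrogate and every number in it are hypothetical placeholders, not the paper's fitted model or values.

import numpy as np

rng = np.random.default_rng(3)
nominal = np.array([500.0, 0.25, 0.01, 0.10])   # hardening coefficients K, n, eps0 and friction mu (assumed)
std = 0.03 * nominal                            # assumed 3% standard deviation on each input

def surrogate(p):
    # Hypothetical quadratic response surface for a springback/geometry deviation (degrees).
    d = (p - nominal) / nominal
    return 5.0 + 8.0 * d[0] - 3.0 * d[1] + 1.5 * d[3] + 10.0 * d[0] * d[3]

samples = rng.normal(nominal, std, size=(10_000, 4))    # Gaussian variability of the inputs
deviation = np.apply_along_axis(surrogate, 1, samples)
print(deviation.mean(), deviation.std())                # spread of the part geometry around nominal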
Poor sleep quality predicts deficient emotion information processing over time in early adolescence.
Soffer-Dudek, Nirit; Sadeh, Avi; Dahl, Ronald E; Rosenblat-Stein, Shiran
2011-11-01
There is deepening understanding of the effects of sleep on emotional information processing. Emotion information processing is a key aspect of social competence, which undergoes important maturational and developmental changes in adolescence; however, most research in this area has focused on adults. Our aim was to test the links between sleep and emotion information processing during early adolescence. Sleep and facial information processing were assessed objectively during 3 assessment waves, separated by 1-year lags. Data were obtained in natural environments: sleep was assessed in home settings, and facial information processing was assessed at school. 94 healthy children (53 girls, 41 boys), aged 10 years at Time 1. N/A. Facial information processing was tested under neutral (gender identification) and emotional (emotional expression identification) conditions. Sleep was assessed in home settings using actigraphy for 7 nights at each assessment wave. Waking > 5 min was considered a night awakening. Using multilevel modeling, elevated night awakenings and decreased sleep efficiency significantly predicted poor performance only in the emotional information processing condition (e.g., b = -1.79, SD = 0.52, confidence interval: lower boundary = -2.82, upper boundary = -0.076, t(416.94) = -3.42, P = 0.001). Poor sleep quality is associated with compromised emotional information processing during early adolescence, a sensitive period in socio-emotional development.
Thermodynamical Interactions: Subtleties of Heat and Work Concepts
ERIC Educational Resources Information Center
Anacleto, Joaquim; Anacleto, Joaquim Alberto C.
2008-01-01
This paper focuses on the determination of the final equilibrium state when two ideal gases, isolated from the exterior and starting from preset initial conditions, interact with each other through a piston. Depending on the piston properties, different processes take place and also different sets of equilibrium conditions must be satisfied. Three…
Parenting and the Process of Migration: Possibilities within South Asian Families
ERIC Educational Resources Information Center
Deepak, Anne C.
2005-01-01
The migration experience creates a unique set of challenges for families, which can result in intergenerational conflict and create the conditions for abuse or neglect. Alternatively, families can cope with these challenges in creative and seemingly contradictory ways, thus strengthening family relationships. This article introduces the process of…
ERIC Educational Resources Information Center
De Marco, Allison; De Marco, Molly
2010-01-01
Interest in the effects of neighborhood context on individual wellbeing has increased in recent years. We now know that neighborhood conditions, such as poverty and deprivation, negatively impact residents. However, most of the extant work has taken an urban focus. Less is known about these processes in rural settings. Neighborhood…
The effects of auditive and visual settings on perceived restoration likelihood
Jahncke, Helena; Eriksson, Karolina; Naula, Sanna
2015-01-01
Research has so far paid little attention to how environmental sounds might affect restorative processes. The aim of the present study was to investigate the effects of auditive and visual stimuli on perceived restoration likelihood and attitudes towards varying environmental resting conditions. Assuming a condition of cognitive fatigue, all participants (N = 40) were presented with images of an open plan office and urban nature, each under four sound conditions (nature sound, quiet, broadband noise, office noise). After the presentation of each setting/sound combination, the participants assessed it according to restorative qualities, restoration likelihood and attitude. The results mainly showed predicted effects of the sound manipulations on the perceived restorative qualities of the settings. Further, significant interactions between auditive and visual stimuli were found for all measures. Both nature sounds and quiet more positively influenced evaluations of the nature setting compared to the office setting. When office noise was present, both settings received poor evaluations. The results agree with expectations that nature sounds and quiet areas support restoration, while office noise and broadband noise (e.g. ventilation, traffic noise) do not. The findings illustrate the significance of environmental sound for restorative experience. PMID:25599752
Greene, Patrick T.; Schofield, Samuel P.; Nourgaliev, Robert
2017-01-27
A new mesh smoothing method designed to cluster cells near a dynamically evolving interface is presented. The method is based on weighted condition number mesh relaxation with the weight function computed from a level set representation of the interface. The weight function is expressed as a Taylor series based discontinuous Galerkin projection, which makes the computation of the derivatives of the weight function needed during the condition number optimization process a trivial matter. For cases when a level set is not available, a fast method for generating a low-order level set from discrete cell-centered fields, such as a volume fraction or index function, is provided. Results show that the low-order level set works equally well as the actual level set for mesh smoothing. Meshes generated for a number of interface geometries are presented, including cases with multiple level sets. Lastly, dynamic cases with moving interfaces show the new method is capable of maintaining a desired resolution near the interface with an acceptable number of relaxation iterations per time step, which demonstrates the method's potential to be used as a mesh relaxer for arbitrary Lagrangian Eulerian (ALE) methods.
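As a hedged sketch of the weighting idea (largest weight where the level set is near zero, decaying away from the interface), the following Python snippet builds one plausible weight field from a signed-distance level set; the exponential form and its constants are assumptions, not the paper's weight function.

import numpy as np

def interface_weight(phi, amplitude=9.0, width=0.05):
    # ~1 far from the interface, 1 + amplitude where the level set phi is near zero.
    return 1.0 + amplitude * np.exp(-np.abs(phi) / width)

# Example: a circular interface represented by a signed-distance level set on a unit square grid.
x, y = np.meshgrid(np.linspace(0.0, 1.0, 64), np.linspace(0.0, 1.0, 64))
phi = np.hypot(x - 0.5, y - 0.5) - 0.25
w = interface_weight(phi)
print(w.min(), w.max())    # cells near the circle receive the strongest clustering weight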
Method and system for fault accommodation of machines
NASA Technical Reports Server (NTRS)
Goebel, Kai Frank (Inventor); Subbu, Rajesh Venkat (Inventor); Rausch, Randal Thomas (Inventor); Frederick, Dean Kimball (Inventor)
2011-01-01
A method for multi-objective fault accommodation using predictive modeling is disclosed. The method includes using a simulated machine that simulates a faulted actual machine, and using a simulated controller that simulates an actual controller. A multi-objective optimization process is performed, based on specified control settings for the simulated controller and specified operational scenarios for the simulated machine controlled by the simulated controller, to generate a Pareto frontier-based solution space relating performance of the simulated machine to settings of the simulated controller, including adjustment to the operational scenarios to represent a fault condition of the simulated machine. Control settings of the actual controller are adjusted, represented by the simulated controller, for controlling the actual machine, represented by the simulated machine, in response to a fault condition of the actual machine, based on the Pareto frontier-based solution space, to maximize desirable operational conditions and minimize undesirable operational conditions while operating the actual machine in a region of the solution space defined by the Pareto frontier.
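A minimal Python sketch of the Pareto-frontier step described above, extracting the non-dominated subset from a cloud of candidate controller settings, follows; the two objectives and the data are illustrative placeholders, not the patent's actual machine or controller metrics.

import numpy as np

rng = np.random.default_rng(4)
performance = rng.uniform(size=200)    # objective to maximize (placeholder)
undesirable = rng.uniform(size=200)    # objective to minimize (placeholder)

def pareto_mask(perf, cost):
    # True for points not dominated by any other point (higher perf and lower cost).
    keep = np.ones(len(perf), dtype=bool)
    for i in range(len(perf)):
        keep[i] = not ((perf > perf[i]) & (cost < cost[i])).any()
    return keep

frontier = pareto_mask(performance, undesirable)
print(frontier.sum(), "non-dominated candidate settings out of", len(frontier))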
Heimann, Mikael; Edorsson, Angelica; Sundqvist, Annette; Koch, Felix-Sebastian
2017-01-01
Gergely et al. (2002) reported that children imitated a novel action – illuminating a light-box by using the forehead – after a delay significantly more often if the hands of the experimenter had been visible than if they had been covered. In an attempt to explore these findings we conducted two studies with a total N of 63 children. Both studies investigated deferred imitation of the action in two conditions, with the hands of the experimenter visible or covered, but the settings differed. Study 1 (n = 30; mean age = 16.6 months) was carried out in an unfamiliar environment (a laboratory setting), while Study 2 (n = 33; mean age = 13.3 months) was conducted in familiar surroundings (at home or at day care). The results showed that 50% of the children in Study 1 and 42.4% in Study 2 evidenced deferred imitation, as compared to only 4.9% (n = 2) in the baseline condition. However, in neither study did the children appear to use inferential processes when imitating: we detected no significant differences between the two conditions, hands visible or hands covered. The findings add to the validity of the head touch procedure as a measure of declarative-like memory processes in the pre-verbal child. At the same time, the findings question the robustness of the concept of 'rational imitation': it seems less easy than expected to elicit a response based on rational inferential processes in this age group. PMID:29312055
Goal-setting in clinical medicine.
Bradley, E H; Bogardus, S T; Tinetti, M E; Inouye, S K
1999-07-01
The process of setting goals for medical care in the context of chronic disease has received little attention in the medical literature, despite the importance of goal-setting in the achievement of desired outcomes. Using qualitative research methods, this paper develops a theory of goal-setting in the care of patients with dementia. The theory posits several propositions. First, goals are generated from embedded values but are distinct from values. Goals vary based on specific circumstances and alternatives whereas values are person-specific and relatively stable in the face of changing circumstances. Second, goals are hierarchical in nature, with complex mappings between general and specific goals. Third, there are a number of factors that modify the goal-setting process, by affecting the generation of goals from values or the translation of general goals to specific goals. Modifying factors related to individuals include their degree of risk-taking, perceived self-efficacy, and acceptance of the disease. Disease factors that modify the goal-setting process include the urgency and irreversibility of the medical condition. Pertinent characteristics of the patient-family-clinician interaction include the level of participation, control, and trust among patients, family members, and clinicians. The research suggests that the goal-setting process in clinical medicine is complex, and the potential for disagreements regarding goals substantial. The nature of the goal-setting process suggests that explicit discussion of goals for care may be necessary to promote effective patient-family-clinician communication and adequate care planning.
The New Drug Conditional Approval Process in China: Challenges and Opportunities.
Yao, Xuefang; Ding, Jinxi; Liu, Yingfang; Li, Penghui
2017-05-01
Our aim was to characterize the newly established new drug conditional approval process in China and discuss the challenges and opportunities with respect to new drug research and development and registration. We examined the new approval program through literature review, law analysis, and data analysis. Data were derived from published materials, such as journal articles, government publications, press releases, and news articles, along with statistical data from INSIGHT-China Pharma Databases, the China Food and Drug Administration website, the Center for Drug Evaluation website, the US Food and Drug Administration website, and search results published by Google. Currently, there is a large backlog of New Drug Applications in China, mainly because of the prolonged review time at the China Food and Drug Administration, resulting in a lag in drug approvals. In 2015, the Chinese government implemented the drug review and registration system reform and tackled this issue through various approaches, such as setting up a drug review fee system, adjusting the drug registration classification, and establishing innovative review pathways, including the conditional approval process. In Europe and the United States, programs comparable to the conditional approval program in China have been well developed. The conditional approval program recently established in China is an expedited new drug approval process that is expected to affect new drug development at home and abroad and profoundly influence the public health and the pharmaceutical industry in China. Like any program in its initial stage, the conditional approval program is facing several challenges, including setting up a robust system, formulating new drug clinical research requirements, and improving the regulatory agency's function for drug review and approval. The program is expected to evolve and improve as part of the government mandate of the drug registration system reform. Copyright © 2017 Elsevier HS Journals, Inc. All rights reserved.
On dynamic tumor eradication conditions under combined chemical/anti-angiogenic therapies
NASA Astrophysics Data System (ADS)
Starkov, Konstantin E.
2018-02-01
In this paper, the ultimate dynamics of a five-dimensional cancer tumor growth model at the angiogenesis phase are studied. This model, elaborated by Pinho et al. in 2014, describes interactions between normal, cancer, and endothelial cells under chemotherapy and anti-angiogenic agents in the tumor growth process. The author derives ultimate upper bounds for normal/tumor/endothelial cell concentrations and ultimate upper and lower bounds for chemical/anti-angiogenic concentrations. Global asymptotic tumor clearance conditions are obtained for two versions: the use of chemotherapy alone and the combined application of chemotherapy and anti-angiogenic therapy. These conditions are established as attraction conditions to the maximal invariant set in the tumor-free plane; furthermore, the case in which this set consists only of tumor-free equilibrium points is examined.
Defense Facility Condition: Revised Guidance Needed to Improve Oversight of Assessments and Ratings
2016-06-01
are to implement the standardized process in part by assessing the condition of buildings, pavement, and rail using the same set of software tools ... facility to current standards; costs for labor, equipment, materials, and currency exchange rates overseas; costs for project planning and design ... example, the services are to assess the condition of buildings, pavement, and rail using Sustainment Management System software tools developed by the
A minimization method on the basis of embedding the feasible set and the epigraph
NASA Astrophysics Data System (ADS)
Zabotin, I. Ya; Shulgina, O. N.; Yarullin, R. S.
2016-11-01
We propose a method, belonging to the class of cutting-plane methods, for the conditional minimization of a convex nonsmooth function. When constructing iteration points, the feasible set and the epigraph of the objective function are approximated by polyhedral sets; consequently, the auxiliary problems for constructing iteration points are linear programming problems. During the optimization process it is possible to update the sets that approximate the epigraph; these updates are performed by periodically dropping the cutting planes that form the embedding sets. Convergence of the proposed method is proved, and some realizations of the method are discussed.
Maluka, Stephen; Kamuzora, Peter; San Sebastiån, Miguel; Byskov, Jens; Olsen, Øystein E; Shayo, Elizabeth; Ndawi, Benedict; Hurtig, Anna-Karin
2010-08-01
Priority-setting has become one of the biggest challenges faced by health decision-makers worldwide. Fairness is a key goal of priority-setting and Accountability for Reasonableness has emerged as a guiding framework for fair priority-setting. This paper describes the processes of setting health care priorities in Mbarali district, Tanzania, and evaluates the descriptions against Accountability for Reasonableness. Key informant interviews were conducted with district health managers, local government officials and other stakeholders using a semi-structured interview guide. Relevant documents were also gathered and group priority-setting in the district was observed. The results indicate that, while Tanzania has a decentralized public health care system, the reality of the district level priority-setting process was that it was not nearly as participatory as the official guidelines suggest it should have been. Priority-setting usually occurred in the context of budget cycles and the process was driven by historical allocation. Stakeholders' involvement in the process was minimal. Decisions (but not the reasoning behind them) were publicized through circulars and notice boards, but there were no formal mechanisms in place to ensure that this information reached the public. There were neither formal mechanisms for challenging decisions nor an adequate enforcement mechanism to ensure that decisions were made in a fair and equitable manner. Therefore, priority-setting in Mbarali district did not satisfy all four conditions of Accountability for Reasonableness; namely relevance, publicity, appeals and revision, and enforcement. This paper aims to make two important contributions to this problematic situation. First, it provides empirical analysis of priority-setting at the district level in the contexts of low-income countries. Second, it provides guidance to decision-makers on how to improve fairness, legitimacy, and sustainability of the priority-setting process. (c) 2010 Elsevier Ltd. All rights reserved.
Smith, Sherri L; Pichora-Fuller, M Kathleen; Alexander, Genevieve
The purpose of this study was to develop the Word Auditory Recognition and Recall Measure (WARRM) and to conduct the inaugural evaluation of the performance of younger adults with normal hearing, older adults with normal to near-normal hearing, and older adults with pure-tone hearing loss on the WARRM. The WARRM is a new test designed for concurrently assessing word recognition and auditory working memory performance in adults who may have pure-tone hearing loss. The test consists of 100 monosyllabic words based on widely used speech-recognition test materials. The 100 words are presented in recall set sizes of 2, 3, 4, 5, and 6 items, with 5 trials in each set size. The WARRM yields a word-recognition score and a recall score. The WARRM was administered to all participants in three listener groups under two processing conditions in a mixed model (between-subjects, repeated measures) design. The between-subjects factor was group, with 48 younger listeners with normal audiometric thresholds (younger listeners with normal hearing [YNH]), 48 older listeners with normal thresholds through 3000 Hz (older listeners with normal hearing [ONH]), and 48 older listeners with sensorineural hearing loss (older listeners with hearing loss [OHL]). The within-subjects factor was WARRM processing condition (no additional task or with an alphabet judgment task). The associations between results on the WARRM test and results on a battery of other auditory and memory measures were examined. Word-recognition performance on the WARRM was not affected by processing condition or set size and was near ceiling for the YNH and ONH listeners (99 and 98%, respectively) with both groups performing significantly better than the OHL listeners (83%). The recall results were significantly better for the YNH, ONH, and OHL groups with no processing (93, 84, and 75%, respectively) than with the alphabet processing (86, 77, and 70%). In both processing conditions, recall was best for YNH, followed by ONH, and worst for OHL listeners. WARRM recall scores were significantly correlated with other memory measures. In addition, WARRM recall scores were correlated with results on the Words-In-Noise (WIN) test for the OHL listeners in the no processing condition and for ONH listeners in the alphabet processing condition. Differences in the WIN and recall scores of these groups are consistent with the interpretation that the OHL listeners found listening to be sufficiently demanding to affect recall even in the no processing condition, whereas the ONH group listeners did not find it so demanding until the additional alphabet processing task was added. These findings demonstrate the feasibility of incorporating an auditory memory test into a word-recognition test to obtain measures of both word recognition and working memory simultaneously. The correlation of WARRM recall with scores from other memory measures is evidence of construct validity. The observation of correlations between the WIN thresholds with each of the older groups and recall scores in certain processing conditions suggests that recall depends on listeners' word-recognition abilities in noise in combination with the processing demands of the task. The recall score provides additional information beyond the pure-tone audiogram and word-recognition scores that may help rehabilitative audiologists assess the listening abilities of patients with hearing loss.
Physiological Factors in Adult Learning and Instruction. Research to Practice Series.
ERIC Educational Resources Information Center
Verner, Coolie; Davison, Catherine V.
The physiological condition of the adult learner as related to his learning capability is discussed. The design of the instructional process, the selection of learning tasks, the rate at which instruction occurs, and the nature of the instructional setting may all be modified by the instructor to accommodate the variable physiological conditions of…
Ganni, Venkatarao
2008-08-12
A unique process cycle and apparatus design separates the consumer (cryogenic) load return flow from most of the recycle return flow of a refrigerator and/or liquefier process cycle. The refrigerator and/or liquefier process recycle return flow is recompressed by a multi-stage compressor set and the consumer load return flow is recompressed by an independent consumer load compressor set that maintains a desirable constant suction pressure using a consumer load bypass control valve and the consumer load return pressure control valve that controls the consumer load compressor's suction pressure. The discharge pressure of this consumer load compressor is thereby allowed to float at the intermediate pressure in between the first and second stage recycle compressor sets. Utilizing the unique gas management valve regulation, the unique process cycle and apparatus design in which the consumer load return flow is separate from the recycle return flow, the pressure ratios of each recycle compressor stage and all main pressures associated with the recycle return flow are allowed to vary naturally, thus providing a naturally regulated and balanced floating pressure process cycle that maintains optimal efficiency at design and off-design process cycle capacity and conditions automatically.
Ganni, Venkatarao
2007-10-09
A unique process cycle and apparatus design separates the consumer (cryogenic) load return flow from most of the recycle return flow of a refrigerator and/or liquefier process cycle. The refrigerator and/or liquefier process recycle return flow is recompressed by a multi-stage compressor set and the consumer load return flow is recompressed by an independent consumer load compressor set that maintains a desirable constant suction pressure using a consumer load bypass control valve and the consumer load return pressure control valve that controls the consumer load compressor's suction pressure. The discharge pressure of this consumer load compressor is thereby allowed to float at the intermediate pressure in between the first and second stage recycle compressor sets. Utilizing the unique gas management valve regulation, the unique process cycle and apparatus design in which the consumer load return flow is separate from the recycle return flow, the pressure ratios of each recycle compressor stage and all main pressures associated with the recycle return flow are allowed to vary naturally, thus providing a naturally regulated and balanced floating pressure process cycle that maintains optimal efficiency at design and off-design process cycle capacity and conditions automatically.
Multidatabase Query Processing with Uncertainty in Global Keys and Attribute Values.
ERIC Educational Resources Information Center
Scheuermann, Peter; Li, Wen-Syan; Clifton, Chris
1998-01-01
Presents an approach for dynamic database integration and query processing in the absence of information about attribute correspondences and global IDs. Defines different types of equivalence conditions for the construction of global IDs. Proposes a strategy based on ranked role-sets that makes use of an automated semantic integration procedure…
Teacher Involvement in Curriculum Design: Need for Support to Enhance Teachers' Design Expertise
ERIC Educational Resources Information Center
Huizinga, Tjark; Handelzalts, Adam; Nieveen, Nienke; Voogt, Joke M.
2014-01-01
Teacher involvement in curriculum design has a long tradition. However, although it fosters the implementation of curriculum reforms, teachers encounter various problems while designing, related to the conditions set for the design process, and lack the knowledge and skills needed to enact collaborative design processes. Providing support to enhance…
Reading Rate, Readability and Variations in Task-Induced Processing.
ERIC Educational Resources Information Center
Coke, Esther U.
This study examined the adaptability of reading rate to passage difficulty under different conditions of task-induced processing. Sixteen experimental passages varying in subject matter and ranging from 85 to 171 words were selected from a set of 32 texts rated for comprehensibility. The eight easiest and eight hardest texts were selected. Another…
Teaching Special Education Teachers How to Conduct Functional Analysis in Natural Settings
ERIC Educational Resources Information Center
Erbas, Dilek; Tekin-Iftar, Elif; Yucesoy, Serife
2006-01-01
The effects of a training program used to teach teachers of children with developmental disabilities how to conduct the functional analysis process were examined. Furthermore, teachers' opinions regarding this process were investigated. A multiple probe design across subjects with probe conditions was used. Teacher training was in two phases. In the…
Hydrologic processes in the pinyon-juniper woodlands: A literature review
Peter F. Ffolliott; Gerald J. Gottfried
2012-01-01
Hydrologic processes in the pinyon-juniper woodlands of the western region of the United States are variable because of the inherent interactions among the occurring precipitation regimes, geomorphological settings, and edaphic conditions that characterize the ecosystem. A wide range of past and present land-use practices further complicates comprehensive evaluations...
An Image Processing Algorithm Based On FMAT
NASA Technical Reports Server (NTRS)
Wang, Lui; Pal, Sankar K.
1995-01-01
Information deleted in ways minimizing adverse effects on reconstructed images. New grey-scale generalization of medial axis transformation (MAT), called FMAT (short for Fuzzy MAT) proposed. Formulated by making natural extension to fuzzy-set theory of all definitions and conditions (e.g., characteristic function of disk, subset condition of disk, and redundancy checking) used in defining MAT of crisp set. Does not need image to have any kind of priori segmentation, and allows medial axis (and skeleton) to be fuzzy subset of input image. Resulting FMAT (consisting of maximal fuzzy disks) capable of reconstructing exactly original image.
12 CFR 219.3 - Cost reimbursement.
Code of Federal Regulations, 2010 CFR
2010-01-01
... preparing financial records for shipment. Search and processing costs shall not cover analysis of material... Conditions for Payment set forth in § 219.5 of this part. Copies of photographs, films and other materials...
Tensor products of process matrices with indefinite causal structure
NASA Astrophysics Data System (ADS)
Jia, Ding; Sakharwade, Nitica
2018-03-01
Theories with indefinite causal structure have been studied from both the fundamental perspective of quantum gravity and the practical perspective of information processing. In this paper we point out a restriction in forming tensor products of objects with indefinite causal structure in certain models: there exist both classical and quantum objects the tensor products of which violate the normalization condition of probabilities, if all local operations are allowed. We obtain a necessary and sufficient condition for when such unrestricted tensor products of multipartite objects are (in)valid. This poses a challenge to extending communication theory to indefinite causal structures, as the tensor product is the fundamental ingredient in the asymptotic setting of communication theory. We discuss a few options to evade this issue. In particular, we show that the sequential asymptotic setting does not suffer the violation of normalization.
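For context, the normalization condition at issue can be sketched in the standard process-matrix notation (an assumption about the framework being referred to, not a quotation from the paper): bipartite probabilities are generated as

p(a, b) = \operatorname{Tr}\!\left[ W \left( M^{A}_{a} \otimes M^{B}_{b} \right) \right],
\qquad
\sum_{a,b} p(a, b) = 1 \quad \text{for all local instruments } \{M^{A}_{a}\}, \{M^{B}_{b}\}.

The violation described above is that, for some objects W and W', the composite W \otimes W' can fail this normalization when arbitrary local operations are allowed.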
Dimensional threshold for fracture linkage and hooking
NASA Astrophysics Data System (ADS)
Lamarche, Juliette; Chabani, Arezki; Gauthier, Bertrand D. M.
2018-03-01
Fracture connectivity in rocks depends on spatial properties of the pattern, including length, abundance and orientation. When fractures form a single-strike set, they hardly cross-cut each other and connectivity is limited. Linkage probability increases with increasing fracture abundance and length as small fractures connect to each other to form longer ones. A process for parallel fracture linkage is "hooking", where two converging fracture tips mutually deviate and then converge to connect due to the interaction of their crack-tip stresses. Quantifying the processes and conditions for fracture linkage in single-strike fracture sets is crucial to better predict fluid flow in Naturally Fractured Reservoirs. For 1734 fractures in Permian shales of the Lodève Basin, SE France, we measured geometrical parameters in 2D, characterizing three stages of the hooking process: underlapping, overlapping and linkage. We deciphered the threshold values, shape ratios and limiting conditions for switching from one stage to another. The hook set-up depends on the spacing (S) and fracture length (Lh), with the relation S ≈ 0.15 Lh. Once hooking is initiated, with a fracture deviation length L ≈ 0.4 Lh, the fractures reach the linkage stage only when the spacing is reduced to S ≈ 0.02 Lh and the convergence (C) is < 0.1 L. These conditions apply to multi-scale fractures with a shape ratio L/S = 10 and for fracture curvatures of 10°-20°.
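To make the reported thresholds easier to apply, here is a toy Python classifier that treats the approximate ratios above as hard cutoffs; collapsing the "≈" relations into inequalities in this way is a simplifying assumption for illustration only.

def hooking_stage(S, Lh, L, C):
    # S: spacing, Lh: fracture length, L: deviation length, C: convergence (same length units).
    if S <= 0.02 * Lh and C < 0.1 * L:
        return "linkage"
    if S <= 0.15 * Lh:
        return "overlapping (hook initiated)"
    return "underlapping"

print(hooking_stage(S=0.5, Lh=10.0, L=4.0, C=0.2))    # hypothetical pair -> "overlapping (hook initiated)"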
Ultrasonic monitoring of the setting of silicone elastomeric impression materials.
Kanazawa, Tomoe; Murayama, Ryosuke; Furuichi, Tetsuya; Imai, Arisa; Suda, Shunichi; Kurokawa, Hiroyasu; Takamizawa, Toshiki; Miyazaki, Masashi
2017-01-31
This study used an ultrasonic measurement device to monitor the setting behavior of silicone elastomeric impression materials, and the influence of temperature on setting behavior was determined. The ultrasonic device consisted of a pulser-receiver, transducers, and an oscilloscope. The two-way transit time through the mixing material was divided by two to account for the down-and-back travel path; then it was multiplied by the sonic velocity. Analysis of variance and the Tukey honest significant difference test were used. In the early stages of the setting process, most of the ultrasonic energy was absorbed by the elastomers and the second echoes were relatively weak. As the elastomers hardened, the sonic velocities increased until they plateaued. The changes in sonic velocities varied among the elastomers tested, and were affected by temperature conditions. The ultrasonic method used in this study has considerable potential for determining the setting processes of elastomeric impression materials.
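As a rough illustration of the pulse-echo geometry mentioned above (the down-and-back travel path), the sketch below computes a sonic velocity from a two-way transit time and a known specimen thickness via v = 2d/t; the 4 mm thickness, the transit time and the variable names are assumptions for the example, not quantities reported in the study.

```python
def sonic_velocity(two_way_transit_time_s, thickness_m):
    """Pulse-echo estimate: the wave travels down and back through the
    specimen, so the one-way path uses half of the measured transit time."""
    one_way_time_s = two_way_transit_time_s / 2.0
    return thickness_m / one_way_time_s   # m/s

# Hypothetical reading: 4 mm thick specimen, 8 microseconds down-and-back
print(sonic_velocity(8e-6, 0.004))        # -> 1000.0 m/s
```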
Process Setting through General Linear Model and Response Surface Method
NASA Astrophysics Data System (ADS)
Senjuntichai, Angsumalin
2010-10-01
The objective of this study is to improve the efficiency of the flow-wrap packaging process in the soap industry through the reduction of defectives. At the 95% confidence level, the regression analysis shows that the sealing temperature and the temperatures of the upper and lower crimper are significant factors for the flow-wrap process with respect to the number/percentage of defectives. Twenty-seven experiments were designed and performed according to three levels of each controllable factor. With the general linear model (GLM), the suggested values for the sealing temperature and the temperatures of the upper and lower crimpers are 185, 85 and 85 °C, respectively, while the response surface method (RSM) provides the optimal process conditions at 186, 89 and 88 °C. Because the two methods assume different relationships between the percentage of defectives and the three temperature parameters, their suggested conditions differ slightly. Fortunately, the estimated percentage of defectives of 5.51% under the GLM process condition and the predicted percentage of defectives of 4.62% under the RSM process condition are not significantly different. At the 95% confidence level, however, the percentage of defectives under the RSM condition can be considerably lower, approximately 2.16%, than under the GLM condition, in accordance with its wider variation. Lastly, the percentages of defectives under the conditions suggested by GLM and RSM are reduced by 55.81% and 62.95%, respectively.
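A response surface of the kind used here is typically a second-order polynomial in the three temperatures fitted by least squares and then minimized over the feasible settings. The sketch below shows one way such a surface could be fitted and searched with NumPy; the data rows are invented for illustration (a real analysis would use all 27 runs) and are not the study's experimental data.

```python
import numpy as np

# Hypothetical runs: (sealing T, upper crimper T, lower crimper T, % defectives)
data = np.array([
    [180.0, 80.0, 80.0, 9.1],
    [185.0, 85.0, 85.0, 5.6],
    [190.0, 90.0, 90.0, 7.4],
    [186.0, 89.0, 88.0, 4.8],
    [183.0, 86.0, 84.0, 6.9],
])
X, y = data[:, :3], data[:, 3]

def quad_features(X):
    """Second-order response surface: intercept, linear, squared and interaction terms."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1**2, x2**2, x3**2,
                            x1*x2, x1*x3, x2*x3])

beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

# Coarse grid search for the temperature setting minimizing predicted defectives
grid = np.array([[t1, t2, t3]
                 for t1 in range(180, 191)
                 for t2 in range(80, 91)
                 for t3 in range(80, 91)], dtype=float)
best = grid[np.argmin(quad_features(grid) @ beta)]
print("suggested setting (sealing, upper, lower):", best)
```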
Emotion based attentional priority for storage in visual short-term memory.
Simione, Luca; Calabrese, Lucia; Marucci, Francesco S; Belardinelli, Marta Olivetti; Raffone, Antonino; Maratos, Frances A
2014-01-01
A plethora of research demonstrates that the processing of emotional faces is prioritised over non-emotive stimuli when cognitive resources are limited (this is known as 'emotional superiority'). However, there is debate as to whether competition for processing resources results in emotional superiority per se, or more specifically, threat superiority. Therefore, to investigate prioritisation of emotional stimuli for storage in visual short-term memory (VSTM), we devised an original VSTM report procedure using schematic (angry, happy, neutral) faces in which processing competition was manipulated. In Experiment 1, display exposure time was manipulated to create competition between stimuli. Participants (n = 20) had to recall a probed stimulus from a set size of four under high (150 ms array exposure duration) and low (400 ms array exposure duration) perceptual processing competition. For the high competition condition (i.e. 150 ms exposure), results revealed an emotional superiority effect per se. In Experiment 2 (n = 20), we increased competition by manipulating set size (three versus five stimuli), whilst maintaining a constrained array exposure duration of 150 ms. Here, for the five-stimulus set size (i.e. maximal competition) only threat superiority emerged. These findings demonstrate attentional prioritisation of emotional faces for storage in VSTM. We argue that task demands modulated the availability of processing resources and consequently the relative magnitude of the emotional/threat superiority effect, with only threatening stimuli prioritised for storage in VSTM under more demanding processing conditions. Our results are discussed in light of models and theories of visual selection; they not only combine the two strands of research (i.e. visual selection and emotion) but also highlight that a critical factor in the processing of emotional stimuli is the availability of processing resources, which is further constrained by task demands.
NASA Astrophysics Data System (ADS)
Reyes, J. J.; Adam, J. C.; Tague, C.
2016-12-01
Grasslands play an important role in agricultural production as forage for livestock; they also provide a diverse set of ecosystem services including soil carbon (C) storage. The partitioning of C between above and belowground plant compartments (i.e. allocation) is influenced by both plant characteristics and environmental conditions. The objectives of this study are to 1) develop and evaluate a hybrid C allocation strategy suitable for grasslands, and 2) apply this strategy to examine the importance of various parameters related to biogeochemical cycling, photosynthesis, allocation, and soil water drainage on above and belowground biomass. We include allocation as an important process in quantifying the model parameter uncertainty, which identifies the most influential parameters and what processes may require further refinement. For this, we use the Regional Hydro-ecologic Simulation System, a mechanistic model that simulates coupled water and biogeochemical processes. A Latin hypercube sampling scheme was used to develop parameter sets for calibration and evaluation of allocation strategies, as well as parameter uncertainty analysis. We developed the hybrid allocation strategy to integrate both growth-based and resource-limited allocation mechanisms. When evaluating the new strategy simultaneously for above and belowground biomass, it produced a larger number of less biased parameter sets: 16% more compared to resource-limited and 9% more compared to growth-based. This also demonstrates its flexible application across diverse plant types and environmental conditions. We found that higher parameter importance corresponded to sub- or supra-optimal resource availability (i.e. water, nutrients) and temperature ranges (i.e. too hot or cold). For example, photosynthesis-related parameters were more important at sites warmer than the theoretical optimal growth temperature. Therefore, larger values of parameter importance indicate greater relative sensitivity in adequately representing the relevant process to capture limiting resources or manage atypical environmental conditions. These results may inform future experimental work by focusing efforts on quantifying specific parameters under various environmental conditions or across diverse plant functional types.
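A Latin hypercube sampling scheme of the kind mentioned above can be generated with SciPy's quasi-Monte Carlo module. The parameter names and ranges in the sketch are invented placeholders, not the RHESSys parameters or bounds actually varied in the study.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical parameter ranges (name: (low, high)); not the study's actual bounds
param_bounds = {
    "max_photosynthesis_rate": (5.0, 25.0),
    "allocation_root_fraction": (0.1, 0.6),
    "soil_drainage_coeff":      (0.001, 0.2),
    "leaf_turnover_rate":       (0.1, 1.0),
}

sampler = qmc.LatinHypercube(d=len(param_bounds), seed=42)
unit_samples = sampler.random(n=100)                     # 100 parameter sets in [0, 1)^d
lows  = np.array([b[0] for b in param_bounds.values()])
highs = np.array([b[1] for b in param_bounds.values()])
parameter_sets = qmc.scale(unit_samples, lows, highs)    # rescale to physical ranges

print(parameter_sets.shape)   # (100, 4) -> one row per candidate parameter set
```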
NASA Astrophysics Data System (ADS)
Esposito, C.; Bianchi-Fasani, G.; Martino, S.; Scarascia-Mugnozza, G.
2013-10-01
This paper focuses on a study aimed at defining the role of the geological-structural setting and Quaternary morpho-structural evolution in the onset and development of a deep-seated gravitational slope deformation which affects the western slope of the Mt. Genzana ridge (Central Apennines, Italy). This case history is particularly significant as it comprises several aspects of such gravitational processes both in general terms and with particular reference to the Apennines. In fact: i) the morpho-structural setting is representative of widespread conditions in the Central Apennines; ii) the deforming slope partially evolved into a large rockslide-avalanche; iii) the deformational process provides evidence of an ongoing state of activity; iv) the rockslide-avalanche debris formed a stable natural dam, thus implying significant variations in the morphologic, hydraulic and hydrogeological setting; v) the gravitational deformation as well as the rockslide-avalanche reveal a strong structural control. The main study activities were aimed at defining a detailed geological model of the gravity-driven process by means of geological, structural, geomorphological and geomechanical surveys. As a result, a robust hypothesis about the kinematics of the process was possible, with particular reference to the identification of geological-structural constraints. The process, in fact, involves a specific section of the slope exactly where a dextral transtensional structure is present, thus implying local structural conditions that favor sliding processes: the rock mass is intensively jointed by high-angle discontinuity sets and the bedding attitude is nearly parallel to the slope. Within this frame the gravitational process can be classified as a structurally constrained translational slide, locally evolved into a rockslide-avalanche. The activation of such a deformation can in its turn be related to the Quaternary morphological evolution of the area, which was affected by a significant topographic stress increase, testified by stratigraphic and morphologic evidence.
Street, Helen
2003-09-01
This study explores depression in cancer patients with reference to conditional goal setting (CGS) theory. CGS theory proposes that depressed individuals believe that personal happiness is conditional upon attainment of specific goals (personal CGS). Other individuals may set important goals believing that goal achievement is a necessary prerequisite of social acceptance and approval (social CGS). CGS has been found to contribute to depression in normal populations. 15.2% of the 67 newly diagnosed cancer patients in this study showed clinical levels of depression. A significant relationship was identified between personal CGS, rumination and depression, as predicted in CGS theory. Two months later, 46.7% of patients demonstrated clinical levels of depression. This later experience of depression was significantly related to social CGS. The results suggest CGS involving a misdirected pursuit of happiness is initially associated with depression whereas subsequent experiences of depression are related to a misdirected pursuit of social acceptance. Implications are discussed in terms of understanding the cancer patients' motivations controlling goal setting. It is suggested that successful psychotherapy for depression in cancer patients needs to examine the motivations controlling goal setting in addition to the process of goal pursuit. Copyright 2003 John Wiley & Sons, Ltd.
Cognitive caching promotes flexibility in task switching: evidence from event-related potentials.
Lange, Florian; Seer, Caroline; Müller, Dorothea; Kopp, Bruno
2015-12-08
Time-consuming processes of task-set reconfiguration have been shown to contribute to the costs of switching between cognitive tasks. We describe and probe a novel mechanism serving to reduce the costs of task-set reconfiguration. We propose that when individuals are uncertain about the currently valid task, one task set is activated for execution while other task sets are maintained at a pre-active state in cognitive cache. We tested this idea by assessing an event-related potential (ERP) index of task-set reconfiguration in a three-rule task-switching paradigm involving varying degrees of task uncertainty. In high-uncertainty conditions, two viable tasks were equally likely to be correct whereas in low-uncertainty conditions, one task was more likely than the other. ERP and performance measures indicated substantial costs of task-set reconfiguration when participants were required to switch away from a task that had been likely to be correct. In contrast, task-set-reconfiguration costs were markedly reduced when the previous task set was chosen under high task uncertainty. These results suggest that cognitive caching of alternative task sets adds to human cognitive flexibility under high task uncertainty.
Providing QoS through machine-learning-driven adaptive multimedia applications.
Ruiz, Pedro M; Botía, Juan A; Gómez-Skarmeta, Antonio
2004-06-01
We investigate the optimization of the quality of service (QoS) offered by real-time multimedia adaptive applications through machine learning algorithms. These applications are able to adapt in real time their internal settings (i.e., video sizes, audio and video codecs, among others) to the unpredictably changing capacity of the network. Traditional adaptive applications just select a set of settings to consume less than the available bandwidth. We propose a novel approach in which the selected set of settings is the one which offers a better user-perceived QoS among all those combinations which satisfy the bandwidth restrictions. We use a genetic algorithm to decide when to trigger the adaptation process depending on the network conditions (i.e., loss-rate, jitter, etc.). Additionally, the selection of the new set of settings is done according to a set of rules which model the user-perceived QoS. These rules are learned using the SLIPPER rule induction algorithm over a set of examples extracted from scores provided by real users. We will demonstrate that the proposed approach guarantees a good user-perceived QoS even when the network conditions are constantly changing.
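The selection step described above — among all setting combinations that fit the available bandwidth, pick the one with the best predicted user-perceived QoS — can be sketched as a brute-force search. The codec names, bandwidth figures and scoring function below are invented for illustration and merely stand in for the rules learned with SLIPPER.

```python
from itertools import product

# Hypothetical settings and their bandwidth costs in kbit/s (not the paper's values)
video_codecs = {"h261_qcif": 64, "h261_cif": 128, "mpeg1_cif": 256}
audio_codecs = {"gsm": 13, "g711": 64, "mp2": 128}

def predicted_qos(video, audio):
    """Stand-in for the learned rule set: here we simply prefer
    higher-bandwidth (richer) codecs, with video weighted over audio."""
    return 2.0 * video_codecs[video] + audio_codecs[audio]

def select_settings(available_kbps):
    feasible = [(v, a) for v, a in product(video_codecs, audio_codecs)
                if video_codecs[v] + audio_codecs[a] <= available_kbps]
    if not feasible:
        return None  # nothing fits; the application would have to degrade further
    return max(feasible, key=lambda va: predicted_qos(*va))

print(select_settings(200))   # -> ('h261_cif', 'g711') under a 200 kbit/s budget
```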
45 CFR 1308.8 - Eligibility criteria: Emotional/behavioral disorders.
Code of Federal Regulations, 2010 CFR
2010-10-01
... or emotional functioning in multiple settings. (c) The evaluation process must include a review of the child's regular Head Start physical examination to eliminate the possibility of misdiagnosis due to an underlying physical condition. ...
Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo
2012-12-01
A large-scale design space was constructed using a Bayesian estimation method with a small-scale design of experiments (DoE) and small sets of large-scale manufacturing data without enforcing a large-scale DoE. The small-scale DoE was conducted using various Froude numbers (X(1)) and blending times (X(2)) in the lubricant blending process for theophylline tablets. The response surfaces, design space, and their reliability of the compression rate of the powder mixture (Y(1)), tablet hardness (Y(2)), and dissolution rate (Y(3)) on a small scale were calculated using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. The constant Froude number was applied as a scale-up rule. Three experiments under an optimal condition and two experiments under other conditions were performed on a large scale. The response surfaces on the small scale were corrected to those on a large scale by Bayesian estimation using the large-scale results. Large-scale experiments under three additional sets of conditions showed that the corrected design space was more reliable than that on the small scale, even if there was some discrepancy in the pharmaceutical quality between the manufacturing scales. This approach is useful for setting up a design space in pharmaceutical development when a DoE cannot be performed at a commercial large manufacturing scale.
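The constant-Froude-number scale-up rule mentioned above is commonly written Fr = N²D/g, so keeping Fr constant across scales gives N_large = N_small·√(D_small/D_large). The sketch below applies that relation; the 0.2 m and 0.8 m bowl diameters and the 60 rpm speed are hypothetical values, not the study's equipment settings.

```python
import math

G = 9.81  # m/s^2

def froude_number(rpm, diameter_m):
    n = rpm / 60.0                      # revolutions per second
    return (n ** 2) * diameter_m / G

def scaled_speed(rpm_small, d_small_m, d_large_m):
    """Rotational speed that keeps the Froude number constant on scale-up."""
    return rpm_small * math.sqrt(d_small_m / d_large_m)

# Hypothetical blender bowls: 0.2 m lab scale run at 60 rpm, 0.8 m production scale
rpm_large = scaled_speed(60.0, 0.2, 0.8)
print(round(rpm_large, 1), "rpm")                                # -> 30.0 rpm
print(froude_number(60.0, 0.2), froude_number(rpm_large, 0.8))   # equal Fr values
```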
Breaking difficult news in a newborn setting: Down syndrome as a paradigm.
Dent, Karin M; Carey, John C
2006-08-15
Breaking the difficult news of an unexpected diagnosis to parents in the newborn setting is a common occurrence in genetic counseling. Many clinical geneticists and genetic counselors have had the challenge of delivering a postnatal diagnosis of Down syndrome to parents of newborns. Down syndrome is a common chromosome condition occurring in approximately 1 in 800 live births. Presenting the diagnosis to families must be accomplished in a supportive, positive, caring, and honest manner. However, there are few scientific data and little instruction in training programs on how best to convey this news in an appropriate manner. Several articles in the literature over the last three decades have proposed various guidelines for the so-called informing interview. Discussions of parents' preferences and experiences in receiving this news have also been documented. Few reports, however, have focused on breaking difficult news of the diagnosis of a genetic condition to parents in a newborn setting in the genetics literature. In this paper, we will review the medical literature on delivering difficult news, specifically focused on that regarding the diagnosis of Down syndrome in the newborn setting. We propose a theoretical framework from which the informing interview can be planned and future outcome data can be measured. In this way, researchers of this theme can investigate the process, including the healthcare professionals' delivery of difficult news and make recommendations for continued improvement of the process. Our model can be generalized to breaking difficult news for a variety of other congenital conditions.
Forest conditions and trends in the northern United States
Stephen R. Shifley; Francisco X. Aguilar; Nianfu Song; Susan I. Stewart; David J. Nowak; Dale D. Gormanson; W. Keith Moser; Sherri Wormstead; Eric J. Greenfield
2012-01-01
This section describes current conditions and trends for the 20 Northern States by focusing on selected characteristics associated with forest sustainability. Its format is based upon a set of 64 indicators within 7 broad criteria that the United States and 11 other countries have adopted under the auspices of the Montréal Process Working Group on Criteria and...
Viral Aggregation: Impact on Virus Behavior in the Environment.
Gerba, Charles P; Betancourt, Walter Q
2017-07-05
Aggregates of viruses can have a significant impact on quantification and behavior of viruses in the environment. Viral aggregates may be formed in numerous ways. Viruses may form crystal-like structures and aggregates in the host cell during replication or may form due to changes in environmental conditions after virus particles are released from the host cells. Aggregates tend to form near the isoelectric point of the virus, under the influence of certain salts and salt concentrations in solution, cationic polymers, and suspended organic matter. The conditions under which aggregates form in the environment are highly dependent on the type of virus, the type of salts in solution (cation, anion, monovalent, divalent) and pH. However, virus type greatly influences the conditions under which aggregation/disaggregation will occur, making predictions difficult under any given set of water quality conditions. Most studies have shown that viral aggregates increase the survival of viruses in the environment and resistance to disinfectants, especially with more reactive disinfectants. The presence of viral aggregates may also result in overestimation of removal by filtration processes. Virus aggregation-disaggregation is a complex process and predicting the behavior of any individual virus is difficult under a given set of environmental circumstances without actual experimental data.
Interface conditions of two-shot molded parts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kisslinger, Thomas, E-mail: thomas.kisslinger@pccl.at; Bruckmoser, Katharina, E-mail: katharina.bruckmoser@unileoben.ac.at; Resch, Katharina, E-mail: katharina.resch@unileoben.ac.at
2014-05-15
The focus of this work is on interfaces of two-shot molded parts. It is well known that e.g. material combination, process parameters and contact area structures show significant effects on the bond strength of multi-component injection molded parts. To get information about the bond strength at various process parameter settings and material combinations, a test mold with core back technology was used to produce two-component injection molded tensile test specimens. In the core back process the different materials are injected consecutively, so each component runs through the whole injection molding cycle (two-shot process). Due to these consecutive injection molding processes, a cold interface is generated. This is defined as overmolding of a second melt onto a solidified polymer preform. Strong interest lies in the way the interface conditions change during the adhesion formation between the individual components. Hence the interface conditions were investigated by computed tomography and Raman spectroscopy. By analyzing these conditions the understanding of the adhesion development during multi-component injection molding was improved.
NASA Astrophysics Data System (ADS)
De Lange, Gert J.; Krijgsman, Wout
2014-05-01
The Messinian Salinity Crisis (MSC) is a dramatic event that took place ~ 5.9 Ma ago, and resulted in the deposition of 0.3-3 km thick evaporites at the Mediterranean seafloor. A considerable and long-lasting controversy existed on the modes of their formation. During the CIESM Almeria Workshop a consensus was reached on several aspects. In addition, remaining issues to be solved were identified, such as the observed shallow gypsum versus deep dolostone deposits for the early phase of the MSC. The onset of the MSC is marked by deposition of gypsum/sapropel-like alternations, thought to relate to arid/humid climate conditions. Gypsum precipitation only occurred at marginal settings, while dolomite-containing rocks have been reported from deeper settings. A range of potential explanations have been reported, most of which cannot satisfactorily explain all observations. Biogeochemical processes during the MSC are poorly understood and commonly neglected. These may, however, explain that different deposits formed in shallow versus deep environments without needing exceptional physical boundary conditions for each. We present here a unifying mechanism in which gypsum formation occurs at all shallow water depths but its preservation is mostly limited to shallow sedimentary settings. In contrast, ongoing anoxic organic matter (OM) degradation processes in the deep basin result in the formation of dolomite. Gypsum precipitation in evaporating seawater takes place at 3-7 times seawater concentration; seawater is always largely oversaturated relative to dolomite, but dolomite formation is thought to be inhibited by the presence of dissolved sulphate. Thus the conditions for formation of gypsum exclude those for the formation of dolomite and vice versa. Another process that links the saturation states of gypsum and dolomite is that of OM degradation by sulphate reduction. In stagnant deep water, oxygen is rapidly depleted through OM degradation; then sulphate becomes the main oxidant for OM mineralization, thus reducing the deep-water sulphate content. In addition, considerable amounts of dissolved carbonate are formed. This means that low-sulphate conditions, as for MSC deep water, i.e. unfavorable conditions for gypsum formation, always coincide with anoxic, i.e. oxygen-free, conditions. Thus one would expect a bath-tub rim of gypsum at all shallow depths, but gypsum appears mainly in silled marginal basins. However, a thick package of heavy gypsum on top of more liquid mud in a marginal/slope setting is highly unstable, so any physical disturbance such as tectonic activity or sea-level change would easily lead to downslope transport of such marginal gypsum deposits. The absence of gypsum and the presence of erosional unconformities at the sill-less Mediterranean passive margins are consistent with such a removal mechanism. In addition, large-scale re-sedimentation of gypsum has also been found for deep Messinian settings in the Northern Apennines and Sicily. Only at those marginal settings that were silled have the marginal gypsum deposits been preserved. Including the dynamic biogeochemical processes in the thus far static interpretations of evaporite formation mechanisms can thus account for the paradoxical, isochronous formation of shallow gypsum and deep dolomite during the early MSC (1). (1) De Lange G.J. and Krijgsman W. (2010) Mar. Geol. 275, 273-277.
Clustering Words to Match Conditions: An Algorithm for Stimuli Selection in Factorial Designs
ERIC Educational Resources Information Center
Guasch, Marc; Haro, Juan; Boada, Roger
2017-01-01
With the increasing refinement of language processing models and the new discoveries about which variables can modulate these processes, stimuli selection for experiments with a factorial design is becoming a tough task. Selecting sets of words that differ in one variable, while matching these same words into dozens of other confounding variables…
Housing Seasonal Workers for the Minnesota Processed Vegetable Industry
ERIC Educational Resources Information Center
Ziebarth, Ann
2006-01-01
The place where we live and work is a reflection of a complex set of economic conditions and social relationships. Very little information is available regarding housing for Minnesota's migrant workers. It is estimated that approximately 20,000 people migrate to Minnesota each summer to work in the production and processing of green peas and sweet…
Friction spinning - Twist phenomena and the capability of influencing them
NASA Astrophysics Data System (ADS)
Lossen, Benjamin; Homberg, Werner
2016-10-01
The friction spinning process can be allocated to the incremental forming techniques. The process consists of process elements from both metal spinning and friction welding. The selective combination of process elements from these two processes results in the integration of friction sub-processes in a spinning process. This implies self-induced heat generation with the possibility of manufacturing functionally graded parts from tube and sheets. Compared with conventional spinning processes, this in-process heat treatment permits the extension of existing forming limits and also the production of more complex geometries. Furthermore, the defined adjustment of part properties like strength, grain size/orientation and surface conditions can be achieved through the appropriate process parameter settings and consequently by setting a specific temperature profile in combination with the degree of deformation. The results presented from tube forming start with an investigation into the resulting twist phenomena in flange processing. In this way, the influence of the main parameters, such as rotation speed, feed rate, forming paths and tool friction surface, and their effects on temperature, forces and finally the twist behavior are analyzed. Following this, the significant correlations with the parameters and a new process strategy are set out in order to visualize the possibility of achieving a defined grain texture orientation.
Insights on Information Absorption and Transmission Rates in C2I Settings
1985-09-01
[Fragmentary reference and table text recovered from the report scan: a 1978 report of a Development Activity at Ft Leavenworth, KS; Craik, F.I.M., and Lockhart, R.S., "Levels of processing: A framework for memory research," Journal of Verbal Learning and Verbal Behavior; discussion citing Craik & Lockhart (1972) and Baddeley (1978) on information being relegated to central processing; and system/user table residue listing ambient conditions (luminance level, legibility), information processing (decoding, translating/transcribing), training and experience.]
NASA Astrophysics Data System (ADS)
Kurchatkin, I. V.; Gorshkalev, A. A.; Blagin, E. V.
2017-01-01
This article deals with the developed methods for modelling the working processes in the combustion chamber of an internal combustion engine (ICE). The methods include preparation of a 3-D model of the combustion chamber, set-up of the finite-element mesh, specification of boundary conditions and customization of the solution. The M-14 aircraft radial engine was selected for modelling. A cold-blowdown cycle was carried out in the ANSYS IC Engine software. The obtained data were compared with the results of known calculation methods, and a method for improving the engine's induction port was suggested.
Age, familiarity, and visual processing schemes.
De Haven, D T; Roberts-Gray, C
1978-10-01
In a partial-report task adults and 5-yr.-old children identified stimuli of two types (common objects and familiar common objects) in two representations (black-and-white line drawings or full color photographs). It was hypothesized that familiar items and photographic representation would enhance the children's accuracy. Although both children and adults were more accurate when the stimuli were from the familiar set, children performed poorly in all stimulus conditions. Results suggest that the age difference in this task reflects the "concrete" nature of the perceptual process in children.
Baik, Seong-Yi; Crabtree, Benjamin F; Gonzales, Junius J
2013-11-01
Depression is prevalent in primary care (PC) practices and poses a considerable public health burden in the United States. Despite nearly four decades of efforts to improve depression care quality in PC practices, a gap remains between desired treatment outcomes and the reality of how depression care is delivered. This article presents a real-world PC practice model of depression care, elucidating the processes and their influencing conditions. Grounded theory methodology was used for the data collection and analysis to develop a depression care model. Data were collected from 70 individual interviews (60 to 70 min each), three focus group interviews (n = 24, 2 h each), two surveys per clinician, and investigators' field notes on practice environments. Interviews were audiotaped and transcribed for analysis. Surveys and field notes complemented interview data. Seventy primary care clinicians from 52 PC offices in the Midwest: 28 general internists, 28 family physicians, and 14 nurse practitioners. A depression care model was developed that illustrates how real-world conditions infuse complexity into each step of the depression care process. Depression care in PC settings is mediated through clinicians' interactions with patients, practice, and the local community. A clinician's interactional familiarity ("familiarity capital") was a powerful facilitator for depression care. For the recognition of depression, three previously reported processes and three conditions were confirmed. For the management of depression, 13 processes and 11 conditions were identified. Empowering the patient was a parallel process to the management of depression. The clinician's ability to develop and utilize interactional relationships and resources needed to recognize and treat a person with depression is key to depression care in primary care settings. The interactional context of depression care makes empowering the patient central to depression care delivery.
Uncertainties in Atomic Data and Their Propagation Through Spectral Models. I.
NASA Technical Reports Server (NTRS)
Bautista, M. A.; Fivet, V.; Quinet, P.; Dunn, J.; Gull, T. R.; Kallman, T. R.; Mendoza, C.
2013-01-01
We present a method for computing uncertainties in spectral models, i.e., level populations, line emissivities, and emission line ratios, based upon the propagation of uncertainties originating from atomic data. We provide analytic expressions, in the form of linear sets of algebraic equations, for the coupled uncertainties among all levels. These equations can be solved efficiently for any set of physical conditions and uncertainties in the atomic data. We illustrate our method applied to spectral models of O III and Fe II and discuss the impact of the uncertainties on atomic systems under different physical conditions. As to intrinsic uncertainties in theoretical atomic data, we propose that these uncertainties can be estimated from the dispersion in the results from various independent calculations. This technique provides excellent results for the uncertainties in A-values of forbidden transitions in [Fe II]. Key words: atomic data - atomic processes - line: formation - methods: data analysis - molecular data - molecular processes - techniques: spectroscopic
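As a schematic numerical illustration only — a Monte Carlo perturbation of a toy three-level system, rather than the analytic linear error-propagation equations derived in the paper — the sketch below perturbs an invented rate matrix within an assumed 20% fractional uncertainty and records the resulting spread in the equilibrium level populations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-level system: R[i, j] is the total rate for transitions i -> j (arbitrary units)
R = np.array([[0.0, 2.0, 1.0],
              [0.5, 0.0, 3.0],
              [0.1, 0.4, 0.0]])
frac_err = 0.2 * np.ones_like(R)   # assumed 20% uncertainty on every rate

def equilibrium_populations(R):
    """Solve the statistical-balance equations dn/dt = 0 with sum(n) = 1."""
    A = R.T - np.diag(R.sum(axis=1))    # dn_j/dt = sum_i n_i R_ij - n_j sum_k R_jk
    A[0, :] = 1.0                       # replace one equation by the normalization
    b = np.zeros(len(R)); b[0] = 1.0
    return np.linalg.solve(A, b)

samples = []
for _ in range(2000):
    R_pert = R * (1.0 + frac_err * rng.standard_normal(R.shape))
    samples.append(equilibrium_populations(np.clip(R_pert, 0.0, None)))
samples = np.array(samples)

print("mean populations:", samples.mean(axis=0))
print("1-sigma spread  :", samples.std(axis=0))
```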
Techniques and potential capabilities of multi-resolutional information (knowledge) processing
NASA Technical Reports Server (NTRS)
Meystel, A.
1989-01-01
A concept of nested hierarchical (multi-resolutional, pyramidal) information (knowledge) processing is introduced for a variety of systems including data and/or knowledge bases, vision, control, and manufacturing systems, industrial automated robots, and (self-programmed) autonomous intelligent machines. A set of practical recommendations is presented using a case study of a multiresolutional object representation. It is demonstrated here that any intelligent module transforms (sometimes, irreversibly) the knowledge it deals with, and this transformation affects the subsequent computation processes, e.g., those of decision and control. Several types of knowledge transformation are reviewed. Definite conditions are analyzed, satisfaction of which is required for organization and processing of redundant information (knowledge) in the multi-resolutional systems. Providing a definite degree of redundancy is one of these conditions.
Tesoriero, A.J.; Spruill, T.B.; Eimers, J.L.
2004-01-01
Ground-water chemistry data from coastal plain environments have been examined to determine the geochemical conditions and processes that occur in these areas and assess their implications for aquifer susceptibility. Two distinct geochemical environments were studied to represent a range of conditions: an inner coastal plain setting having more well-drained soils and lower organic carbon (C) content and an outer coastal plain environment that has more poorly drained soils and high organic C content. Higher concentrations of most major ions and dissolved inorganic and organic C in the outer coastal plain setting indicate a greater degree of mineral dissolution and organic matter oxidation. Accordingly, outer coastal plain waters are more reducing than inner coastal plain waters. Low dissolved oxygen (O2) and nitrate (NO3-) concentrations and high iron (Fe) concentrations indicate that ferric iron (Fe(III)) is an important electron acceptor in this setting, while dissolved O2 is the most common terminal electron acceptor in the inner coastal plain setting. The presence of a wide range of redox conditions in the shallow aquifer system examined here underscores the importance of providing a detailed geochemical characterization of ground water when assessing the intrinsic susceptibility of coastal plain settings. The greater prevalence of aerobic conditions in the inner coastal plain setting makes this region more susceptible to contamination by constituents that are more stable under these conditions and is consistent with the significantly (p<0.05) higher concentrations of NO3- found in this setting. Herbicides and their transformation products were frequently detected (36% of wells sampled); however, concentrations were typically low (<0.1 µg/L). Shallow water table depths often found in coastal plain settings may result in an increased risk of the detection of pesticides (e.g., alachlor) that degrade rapidly in the unsaturated zone.
Interest and attention in facial recognition.
Burgess, Melinda C R; Weaver, George E
2003-04-01
When applied to facial recognition, the levels of processing paradigm has yielded consistent results: faces processed in deep conditions are recognized better than faces processed under shallow conditions. However, there are multiple explanations for this occurrence. The own-race advantage in facial recognition, the tendency to recognize faces from one's own race better than faces from another race, is also consistently shown but not clearly explained. This study was designed to test the hypothesis that the levels of processing findings in facial recognition are a result of interest and attention, not differences in processing. This hypothesis was tested for both own and other faces with 105 Caucasian general psychology students. Levels of processing was manipulated as a between-subjects variable; students were asked to answer one of four types of study questions, e.g., "deep" or "shallow" processing questions, while viewing the study faces. Students' recognition of a subset of previously presented Caucasian and African-American faces from a test-set with an equal number of distractor faces was tested. They indicated their interest in and attention to the task. The typical levels of processing effect was observed with better recognition performance in the deep conditions than in the shallow conditions for both own- and other-race faces. The typical own-race advantage was also observed regardless of level of processing condition. For both own- and other-race faces, level of processing explained a significant portion of the recognition variance above and beyond what was explained by interest in and attention to the task.
Lethbridge, Jessica; Watson, Hunna J; Egan, Sarah J; Street, Helen; Nathan, Paula R
2011-08-01
This study examined the role of perfectionism (self-oriented and socially prescribed), shape and weight overvaluation, dichotomous thinking, and conditional goal setting in eating disorder psychopathology. Perfectionism and shape and weight overvaluation have long been implicated in the development and maintenance of eating disorders. A leading evidence-based theory of eating disorders (Fairburn, Cooper & Shafran, 2003) outlines perfectionism as a maintaining mechanism of eating disorder psychopathology and as a proximal risk factor for the development of shape and weight overvaluation. These constructs have been linked to other cognitive processes relevant to eating disorders, specifically, dichotomous thinking and conditional goal setting. Women with DSM-IV eating disorders (N=238) were compared to women in the general community (N=248) and, as hypothesised, scores on measures of these constructs were pronounced in the clinical sample. Hierarchical regression analyses predicting eating disorder psychopathology showed that for both groups, dichotomous thinking and conditional goal setting significantly improved model fit beyond perfectionism and shape and weight overvaluation alone. Self-oriented perfectionism, but not socially prescribed perfectionism, was relevant to eating disorder psychopathology. We discuss the implications for current treatment protocols and early intervention. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.
Gonneaud, Julie; Kalpouzos, Grégoria; Bon, Laetitia; Viader, Fausto; Eustache, Francis; Desgranges, Béatrice
2011-01-01
Prospective memory (PM) is the ability to remember to perform an action at a specific point in the future. Regarded as multidimensional, PM involves several cognitive functions that are known to be impaired in normal aging. In the present study, we set out to investigate the cognitive correlates of PM impairment in normal aging. Manipulating cognitive load, we assessed event- and time-based PM, as well as several cognitive functions, including executive functions, working memory and retrospective episodic memory, in healthy subjects covering the entire adult age range. We found that normal aging was characterized by PM decline in all conditions and that event-based PM was more sensitive to the effects of aging than time-based PM. Whatever the conditions, PM was linked to inhibition and processing speed. However, while event-based PM was mainly mediated by binding and retrospective memory processes, time-based PM was mainly related to inhibition. The only distinction between high- and low-load PM cognitive correlates lies in an additional, but marginal, correlation between updating and the high-load PM condition. The association of distinct cognitive functions, as well as shared mechanisms, with event- and time-based PM confirms that each type of PM relies on a different set of processes. PMID:21678154
A Rule-Based Modeling for the Description of Flexible and Self-healing Business Processes
NASA Astrophysics Data System (ADS)
Boukhebouze, Mohamed; Amghar, Youssef; Benharkat, Aïcha-Nabila; Maamar, Zakaria
In this paper we discuss the importance of ensuring that business processes are at the same time robust and agile. To this end, we consider reviewing the way business processes are managed. For instance, we consider offering a flexible way to model processes so that changes in regulations are handled through some self-healing mechanisms. These changes may raise exceptions at run-time if not properly reflected in these processes. To this end we propose a new rule-based model that adopts ECA rules and is built upon formal tools. The business logic of a process can be summarized as a set of rules that implement an organization's policies. Each business rule is formalized using our ECAPE formalism (Event-Condition-Action-Postcondition-postEvent). This formalism allows a process to be translated into a graph of rules that is analyzed in terms of reliability and flexibility.
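To make the rule-based flavour of the model concrete, the sketch below encodes one hypothetical business rule as an ECA-style structure extended with the post-condition and post-event slots suggested by the ECAPE acronym; the field names and the order-approval example are illustrative guesses, not the authors' formal notation.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class EcapeRule:
    """One business rule: on `event`, if `condition` holds, run `action`;
    `post_condition` is checked afterwards and `post_event` is emitted on success."""
    event: str
    condition: Callable[[dict], bool]
    action: Callable[[dict], None]
    post_condition: Callable[[dict], bool]
    post_event: Optional[str] = None

def fire(rule: EcapeRule, event: str, ctx: dict) -> Optional[str]:
    if event != rule.event or not rule.condition(ctx):
        return None
    rule.action(ctx)
    if not rule.post_condition(ctx):
        # A violated post-condition is where a self-healing path could be triggered
        raise RuntimeError("post-condition violated")
    return rule.post_event

# Hypothetical order-handling rule
approve_order = EcapeRule(
    event="order_received",
    condition=lambda ctx: ctx["amount"] <= ctx["credit_limit"],
    action=lambda ctx: ctx.update(status="approved"),
    post_condition=lambda ctx: ctx["status"] == "approved",
    post_event="order_approved",
)
print(fire(approve_order, "order_received",
           {"amount": 80, "credit_limit": 100, "status": "new"}))  # -> order_approved
```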
Eye movement related brain responses to emotional scenes during free viewing
Simola, Jaana; Torniainen, Jari; Moisala, Mona; Kivikangas, Markus; Krause, Christina M.
2013-01-01
Emotional stimuli are preferentially processed over neutral stimuli. Previous studies, however, disagree on whether emotional stimuli capture attention preattentively or whether the processing advantage is dependent on allocation of attention. The present study investigated attention and emotion processes by measuring brain responses related to eye movement events while 11 participants viewed images selected from the International Affective Picture System (IAPS). Brain responses to emotional stimuli were compared between serial and parallel presentation. An “emotional” set included one image with high positive or negative valence among neutral images. A “neutral” set comprised four neutral images. The participants were asked to indicate which picture—if any—was emotional and to rate that picture on valence and arousal. In the serial condition, the event-related potentials (ERPs) were time-locked to the stimulus onset. In the parallel condition, the ERPs were time-locked to the first eye entry on an image. The eye movement results showed facilitated processing of emotional, especially unpleasant information. The EEG results in both presentation conditions showed that the LPP (“late positive potential”) amplitudes at 400–500 ms were enlarged for the unpleasant and pleasant pictures as compared to neutral pictures. Moreover, the unpleasant scenes elicited stronger responses than pleasant scenes. The ERP results did not support parafoveal emotional processing, although the eye movement results suggested faster attention capture by emotional stimuli. Our findings, thus, suggested that emotional processing depends on overt attentional resources engaged in the processing of emotional content. The results also indicate that brain responses to emotional images can be analyzed time-locked to eye movement events, although the response amplitudes were larger during serial presentation. PMID:23970856
Dufoo-Hurtado, Miguel D.; Huerta-Ocampo, José Á.; Barrera-Pacheco, Alberto; Barba de la Rosa, Ana P.; Mercado-Silva, Edmundo M.
2015-01-01
Low-temperature conditioning of garlic “seed” cloves substitutes the initial climatic requirements of the crop and accelerates the cycle. We have reported that “seed” bulbs from “Coreano” variety conditioned at 5°C for 5 weeks reduces growth and plant weight as well as the crop yields and increases the synthesis of phenolic compounds and anthocyanins. Therefore, this treatment suggests a cold stress. Plant acclimation to stress is associated with deep changes in proteome composition. Since proteins are directly involved in plant stress response, proteomics studies can significantly contribute to unravel the possible relationships between protein abundance and plant stress acclimation. The aim of this work was to study the changes in the protein profiles of garlic “seed” cloves subjected to conditioning at low-temperature using proteomics approach. Two sets of garlic bulbs were used, one set was stored at room temperature (23°C), and the other was conditioned at low temperature (5°C) for 5 weeks. Total soluble proteins were extracted from sprouts of cloves and separated by two-dimensional gel electrophoresis. Protein spots showing statistically significant changes in abundance were analyzed by LC-ESI-MS/MS and identified by database search analysis using the Mascot search engine. The results revealed that low-temperature conditioning of garlic “seed” cloves causes alterations in the accumulation of proteins involved in different physiological processes such as cellular growth, antioxidative/oxidative state, macromolecules transport, protein folding and transcription regulation process. The metabolic pathways affected include protein biosynthesis and quality control system, photosynthesis, photorespiration, energy production, and carbohydrate and nucleotide metabolism. These processes can work cooperatively to establish a new cellular homeostasis that might be related with the physiological and biochemical changes observed in previous studies. PMID:26029231
[Present-day metal-cutting tools and working conditions].
Kondratiuk, V P
1990-01-01
Polyfunctional machine-tools of the processing centre type are characterized by a set of hygienic advantages as compared to universal machine-tools. However, the low degree of mechanization and automation of some auxiliary processes, and constructional defects that decrease the ergonomic characteristics of the tools, increase the labour intensity of multi-machine processing. The article specifies techniques for assessing allowable noise levels and proposes hygienic recommendations, some of which have been introduced into practice.
Exploiting Virtual Synchrony in Distributed Systems
1987-02-01
[Fragmentary abstract and table text recovered from the report scan] ...for distributed systems yield the best performance relative to the level of synchronization guaranteed by the primitive. A programmer could then ... synchronization facility. Semaphores: replicated binary and general semaphores. Monitors: monitor lock, condition variables and signals. Deadlock detection ... We describe applications of a new software abstraction called the virtually synchronous process group. Such a group consists of a set of processes...
Susan J. Alexander; Sonja N. Oswalt; Marla R. Emery
2011-01-01
The United States, in partnership with 11 other countries, participates in the Montreal Process. Each country assesses national progress toward the sustainable management of forest resources by using a set of criteria and indicators agreed on by all member countries. Several indicators focus on nontimber forest products (NTFPs). In the United States, permit and...
Human-Modified Permafrost Complexes in Urbanized Areas of the Russian North
NASA Astrophysics Data System (ADS)
Grebenets, V. I.; Streletskiy, D. A.
2013-12-01
Economic development in permafrost regions is accompanied by modification of natural geocryological conditions. Drastic landscape transformations in urbanized areas on permafrost are characterized by changes of heat and moisture exchange in the permafrost-atmosphere system, and by engineering and technogenic influence upon the frozen ground, leading to alteration of its physical, thermal and mechanical properties. In northern cities this leads to an overall increase of ground temperature relative to undisturbed areas and intensification of hazardous cryogenic processes in areas under engineering development, which together lead to a reduction in the stability of the geotechnical environment. For example, deformations of structures in the Norilsk district, Northern Siberia, in the last 15 years became much more abundant than those revealed throughout the previous 50 years. About 250 large buildings in the local towns were deformed considerably due to deterioration of geocryological conditions, about 100 structures were functioning in an emergency state, and almost 50 nine- and five-storey houses, built in the 1960-80s, have recently been disassembled. Increase in accident risk for various facilities (water and oil pipelines, industrial enterprises, etc.) enhances the technogenic pressure on permafrost, leading to a new milestone of changes in permafrost characteristics, i.e. to creation of 'another reality' of geocryological conditions. Social and natural factors dictate a clustered spatial pattern of industrial development in permafrost regions. Cryogenic processes within urban areas on permafrost are seldom similar to those under natural conditions, as the intensity, duration and extent of the processes change under technogenic impacts. Moreover, new cryogenic processes and phenomena may occur which have not been typical for a given region. This makes mapping and characterization of these processes a difficult task. Peculiar natural-technogenic geocryological complexes (NTGC) are formed in the urban territories, which are characterized by modified permafrost characteristics, by a new set of cryogenic processes, and by modified temperature trends. NTGC classification depends on initial natural settings and on the type, intensity and duration of technogenic pressure. For instance, field reconnaissance of permafrost and geological conditions resulted in characterization of 17 NTGC types in the Norilsk industrial area, 11 types in the Yamburg Gas Condensate Field, Tazovsky Peninsula, and 32 types along gas and oil pipelines in the north of Western Siberia. Of particular interest is the dynamics of NTGCs depending on the scale of the urban system, on the set of its elements and on the duration of technogenic impacts on permafrost. An important aspect is the assessment of climate change impacts on structures and the environment in various permafrost areas.
The influence of speech rate and accent on access and use of semantic information.
Sajin, Stanislav M; Connine, Cynthia M
2017-04-01
Circumstances in which the speech input is presented in sub-optimal conditions generally lead to processing costs affecting spoken word recognition. The current study indicates that some processing demands imposed by listening to difficult speech can be mitigated by feedback from semantic knowledge. A set of lexical decision experiments examined how foreign accented speech and word duration impact access to semantic knowledge in spoken word recognition. Results indicate that when listeners process accented speech, the reliance on semantic information increases. Speech rate was not observed to influence semantic access, except in the setting in which unusually slow accented speech was presented. These findings support interactive activation models of spoken word recognition in which attention is modulated based on speech demands.
Camera Based Closed Loop Control for Partial Penetration Welding of Overlap Joints
NASA Astrophysics Data System (ADS)
Abt, F.; Heider, A.; Weber, R.; Graf, T.; Blug, A.; Carl, D.; Höfler, H.; Nicolosi, L.; Tetzlaff, R.
Welding of overlap joints with partial penetration in automotive applications is a challenging process, since the laser power must be set very precisely to achieve a proper connection between the two joining partners without damaging the backside of the sheet stack. Even minor changes in welding conditions can lead to bad results. To overcome this problem a camera based closed loop control for partial penetration welding of overlap joints was developed. With this closed loop control it is possible to weld such configurations with a stable process result even under changing welding conditions.
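The control idea — adjust the laser power so that a camera-derived penetration signal holds a partial-penetration target — can be sketched as a simple integrating feedback loop. This is a generic illustration with assumed gains, signal scaling and a toy stand-in for the welding process, not the controller the authors implemented.

```python
def closed_loop_power(target, read_signal, set_power,
                      p_start=2000.0, gain=1000.0, steps=100):
    """Integral-type update: raise the laser power while the camera signal is
    below the partial-penetration target, lower it when the signal overshoots."""
    power = p_start
    for _ in range(steps):
        error = target - read_signal(power)
        power += gain * error          # simple integrating controller
        set_power(power)
    return power

# Toy plant standing in for the weld: penetration signal rises with laser power
def fake_camera(power): return 0.5 + (power - 2000.0) / 4000.0
def fake_laser(power):  pass

print(round(closed_loop_power(0.6, fake_camera, fake_laser), 1))  # approaches ~2400 W
```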
Dynamic processes at stress promoters regulate the bimodal expression of HOG response genes
2011-01-01
Osmotic stress triggers the activation of the HOG (high osmolarity glycerol) pathway in Saccharomyces cerevisiae. This signaling cascade culminates in the activation of the MAPK (mitogen-activated protein kinase) Hog1. Quantitative single cell measurements revealed a discrepancy between kinase- and transcriptional activities of Hog1. While kinase activity increases proportionally to stress stimulus, gene expression is inhibited under low stress conditions. Interestingly, a slow stochastic gene activation process is responsible for setting a tunable threshold for gene expression under basal or low stress conditions, which generates a bimodal expression pattern at intermediate stress levels. PMID:22446531
Permitted and forbidden sets in symmetric threshold-linear networks.
Hahnloser, Richard H R; Seung, H Sebastian; Slotine, Jean-Jacques
2003-03-01
The richness and complexity of recurrent cortical circuits is an inexhaustible source of inspiration for thinking about high-level biological computation. In past theoretical studies, constraints on the synaptic connection patterns of threshold-linear networks were found that guaranteed bounded network dynamics, convergence to attractive fixed points, and multistability, all fundamental aspects of cortical information processing. However, these conditions were only sufficient, and it remained unclear which were the minimal (necessary) conditions for convergence and multistability. We show that symmetric threshold-linear networks converge to a set of attractive fixed points if and only if the network matrix is copositive. Furthermore, the set of attractive fixed points is nonconnected (the network is multiattractive) if and only if the network matrix is not positive semidefinite. There are permitted sets of neurons that can be coactive at a stable steady state and forbidden sets that cannot. Permitted sets are clustered in the sense that subsets of permitted sets are permitted and supersets of forbidden sets are forbidden. By viewing permitted sets as memories stored in the synaptic connections, we provide a formulation of long-term memory that is more general than the traditional perspective of fixed-point attractor networks. There is a close correspondence between threshold-linear networks and networks defined by the generalized Lotka-Volterra equations.
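As a hedged numerical illustration, the sketch below integrates a standard symmetric threshold-linear dynamics, dx/dt = -x + [Wx + b]+, for a small invented weight matrix, and shows how a positive-semidefiniteness check could be run with an eigenvalue test. The abstract does not specify whether its "network matrix" corresponds to W or to I - W, and copositivity itself (which is harder to test) is not attempted here.

```python
import numpy as np

def simulate_threshold_linear(W, b, x0, dt=0.01, steps=5000):
    """Euler integration of dx/dt = -x + [W x + b]_+ (rectification at zero)."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x += dt * (-x + np.maximum(W @ x + b, 0.0))
    return x

# Small symmetric weight matrix and input, invented for the example
W = np.array([[ 0.0, -0.6,  0.2],
              [-0.6,  0.0, -0.4],
              [ 0.2, -0.4,  0.0]])
b = np.array([1.0, 0.8, 0.5])

print("steady state:", np.round(simulate_threshold_linear(W, b, x0=[0.1, 0.1, 0.1]), 3))

# Eigenvalue-based positive-semidefiniteness check. M stands in for the paper's
# "network matrix"; substitute I - W here if that is the intended object.
M = W
psd = bool(np.all(np.linalg.eigvalsh(M) >= -1e-12))
print("matrix positive semidefinite:", psd)
```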
The application and use of chemical space mapping to interpret crystallization screening results
Snell, Edward H.; Nagel, Ray M.; Wojtaszcyk, Ann; O’Neill, Hugh; Wolfley, Jennifer L.; Luft, Joseph R.
2008-01-01
Macromolecular crystallization screening is an empirical process. It often begins by setting up experiments with a number of chemically diverse cocktails designed to sample chemical space known to promote crystallization. Where a potential crystal is seen a refined screen is set up, optimizing around that condition. By using an incomplete factorial sampling of chemical space to formulate the cocktails and presenting the results graphically, it is possible to readily identify trends relevant to crystallization, coarsely sample the phase diagram and help guide the optimization process. In this paper, chemical space mapping is applied to both single macromolecules and to a diverse set of macromolecules in order to illustrate how visual information is more readily understood and assimilated than the same information presented textually. PMID:19018100
The use of genetic programming to develop a predictor of swash excursion on sandy beaches
NASA Astrophysics Data System (ADS)
Passarella, Marinella; Goldstein, Evan B.; De Muro, Sandro; Coco, Giovanni
2018-02-01
We use genetic programming (GP), a type of machine learning (ML) approach, to predict the total and infragravity swash excursion using previously published data sets that have been used extensively in swash prediction studies. Three previously published works with a range of new conditions are added to this data set to extend the range of measured swash conditions. Using this newly compiled data set we demonstrate that a ML approach can reduce the prediction errors compared to well-established parameterizations and therefore it may improve coastal hazards assessment (e.g. coastal inundation). Predictors obtained using GP can also be physically sound and replicate the functionality and dependencies of previous published formulas. Overall, we show that ML techniques are capable of both improving predictability (compared to classical regression approaches) and providing physical insight into coastal processes.
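The following sketch shows the flavour of the approach using the gplearn library as a stand-in for the authors' GP implementation; the feature names, the synthetic data and the Stockdon-like target function are assumptions, since the compiled swash data set is not reproduced here.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor  # stand-in GP library (assumption)

rng = np.random.default_rng(0)
n = 500
Hs = rng.uniform(0.5, 4.0, n)       # offshore significant wave height (m), assumed
Tp = rng.uniform(5.0, 15.0, n)      # peak wave period (s), assumed
slope = rng.uniform(0.01, 0.10, n)  # foreshore beach slope (-), assumed
# hypothetical "true" swash excursion, loosely Stockdon-like, plus noise
R = 0.7 * slope * np.sqrt(Hs) * Tp + rng.normal(0.0, 0.05, n)

X = np.column_stack([Hs, Tp, slope])
gp = SymbolicRegressor(population_size=1000, generations=20,
                       function_set=('add', 'sub', 'mul', 'div', 'sqrt'),
                       parsimony_coefficient=0.001, random_state=0)
gp.fit(X, R)
print("best evolved predictor:", gp._program)   # human-readable formula
print("R^2 on training data:", round(gp.score(X, R), 3))
```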
NASA Astrophysics Data System (ADS)
Shtripling, L. O.; Kholkin, E. G.
2018-01-01
The article presents a procedure for determining the basic geometrical setting parameters of an installation for decontaminating oil-contaminated soils by the reagent encapsulation method. The installation, intended for the rapid elimination of the consequences of oil-spill emergencies, is adapted to winter conditions. The thermal energy released by the exothermic chemical neutralization of oil-contaminated soil during decontamination is used to thaw subsequent frozen portions of the soil. Compared with other units, this installation has an important advantage: when necessary (e.g., in winter), the heat released at each stage of the decontamination process is reused, whereas under normal conditions this heat is simply dispersed into the environment. In addition, short-term forced delivery of carbon dioxide to a high concentration directly into the installation at the final stage of decontamination replaces the lengthy formation and hardening of microcapsule shells that would otherwise occur in natural open-air conditions.
Cahyadi, Christine; Heng, Paul Wan Sia; Chan, Lai Wah
2011-03-01
The aim of this study was to identify and optimize the critical process parameters of the newly developed Supercell quasi-continuous coater for optimal tablet coat quality. Design of experiments, aided by multivariate analysis techniques, was used to quantify the effects of various coating process conditions and their interactions on the quality of film-coated tablets. The process parameters varied included batch size, inlet temperature, atomizing pressure, plenum pressure, spray rate and coating level. An initial screening stage was carried out using a 2^(6-1) resolution IV fractional factorial design. Following these preliminary experiments, an optimization study was carried out using the Box-Behnken design. Main response variables measured included drug-loading efficiency, coat thickness variation, and the extent of tablet damage. Apparent optimum conditions were determined by using response surface plots. The process parameters exerted various effects on the different response variables. Hence, trade-offs between individual optima were necessary to obtain the best compromised set of conditions. The adequacy of the optimized process conditions in meeting the combined goals for all responses was indicated by the composite desirability value. By using response surface methodology and optimization, coating conditions which produced coated tablets of high drug-loading efficiency, low incidences of tablet damage and low coat thickness variation were defined. Optimal conditions were found to vary over a large spectrum when different responses were considered. Changes in processing parameters across the design space did not result in drastic changes to coat quality, thereby demonstrating robustness in the Supercell coating process. © 2010 American Association of Pharmaceutical Scientists
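A minimal sketch of the response-surface step only, assuming hypothetical coded factors and responses (not the study's data): a quadratic model is fitted and an apparent optimum located by grid search over the coded design space.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# coded levels (-1, 0, +1) for e.g. spray rate, inlet temperature, atomizing pressure
X = rng.choice([-1.0, 0.0, 1.0], size=(45, 3))
# hypothetical drug-loading efficiency with curvature and one interaction term
y = (95 - 2.0 * X[:, 0]**2 - 1.5 * X[:, 1]**2 + 1.0 * X[:, 0] * X[:, 2]
     + rng.normal(0.0, 0.3, 45))

rsm = make_pipeline(PolynomialFeatures(degree=2, include_bias=False), LinearRegression())
rsm.fit(X, y)

# crude grid search over the coded design space for the apparent optimum
grid = np.array(np.meshgrid(*[np.linspace(-1, 1, 21)] * 3)).reshape(3, -1).T
best = grid[np.argmax(rsm.predict(grid))]
print("apparent optimum (coded factor levels):", np.round(best, 2))
```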
Analysis of the optimal laminated target made up of discrete set of materials
NASA Technical Reports Server (NTRS)
Aptukov, Valery N.; Belousov, Valentin L.
1991-01-01
A new class of problems was analyzed to estimate an optimal structure of laminated targets fabricated from the specified set of homogeneous materials. An approximate description of the perforation process is based on the model of radial hole extension. The problem is solved by using the needle-type variation technique. The desired optimization conditions and quantitative/qualitative estimations of optimal targets were obtained and are discussed using specific examples.
Silane-Pyrolysis Reactor With Nonuniform Heating
NASA Technical Reports Server (NTRS)
Iya, Sridhar K.
1991-01-01
Improved reactor serves as last stage in system processing metallurgical-grade silicon feedstock into silicon powder of ultrahigh purity. Silane pyrolyzed to silicon powder and hydrogen gas via homogeneous decomposition reaction in free space. Features set of individually adjustable electrical heaters and purge flow of hydrogen to improve control of pyrolysis conditions. Power supplied to each heater set in conjunction with flow in reactor to obtain desired distribution of temperature as function of position along reactor.
Vasilopoulos, Terrie; Franz, Carol E; Panizzon, Matthew S; Xian, Hong; Grant, Michael D; Lyons, Michael J; Toomey, Rosemary; Jacobson, Kristen C; Kremen, William S
2012-03-01
To examine how genes and environments contribute to relationships among Trail Making Test (TMT) conditions and the extent to which these conditions have unique genetic and environmental influences. Participants included 1,237 middle-aged male twins from the Vietnam Era Twin Study of Aging. The Delis-Kaplan Executive Function System TMT included visual searching, number and letter sequencing, and set-shifting components. Phenotypic correlations among TMT conditions ranged from 0.29 to 0.60, and genes accounted for the majority (58-84%) of each correlation. Overall heritability ranged from 0.34 to 0.62 across conditions. Phenotypic factor analysis suggested a single factor. In contrast, genetic models revealed a single common genetic factor but also unique genetic influences separate from the common factor. Genetic variance (i.e., heritability) of number and letter sequencing was completely explained by the common genetic factor while unique genetic influences separate from the common factor accounted for 57% and 21% of the heritabilities of visual search and set shifting, respectively. After accounting for general cognitive ability, unique genetic influences accounted for 64% and 31% of those heritabilities. A common genetic factor, most likely representing a combination of speed and sequencing, accounted for most of the correlation among TMT 1-4. Distinct genetic factors, however, accounted for a portion of variance in visual scanning and set shifting. Thus, although traditional phenotypic shared variance analysis techniques suggest only one general factor underlying different neuropsychological functions in nonpatient populations, examining the genetic underpinnings of cognitive processes with twin analysis can uncover more complex etiological processes.
Study on Heat Transfer Agent Models of Transmission Line and Transformer
NASA Astrophysics Data System (ADS)
Wang, B.; Zhang, P. P.
2018-04-01
Using heat transfer simulation to study the dynamic overload capacity of transmission lines and transformers requires a mathematical expression for the heat transfer. However, this expression is a nonlinear differential equation or set of equations for which general solutions are not easily obtained. To address this problem, several temperature change processes arising from different initial conditions are calculated from the differential equation or equation set. New agent (surrogate) models are then developed according to the characteristics of these temperature change processes. The results show that the agent models have high precision and overcome the problem that the original equations cannot be applied directly in some practical engineering settings.
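A lumped-parameter sketch of the general idea, not the paper's equations: a simple thermal balance is solved numerically for an assumed step overload current, and an exponential surrogate ("agent") model is fitted to the numerical solution. All parameter values are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

# assumed lumped thermal balance: C dT/dt = I^2 R - k (T - Ta)
C, R, k, Ta = 3.0e5, 0.05, 600.0, 25.0   # J/K, ohm, W/K, deg C (all assumed)
I = 800.0                                 # step overload current, A (assumed)

def rhs(t, T):
    return [(I**2 * R - k * (T[0] - Ta)) / C]

sol = solve_ivp(rhs, (0.0, 4000.0), [40.0], dense_output=True, max_step=5.0)
t = np.linspace(0.0, 4000.0, 400)
T = sol.sol(t)[0]

def agent(t, Tss, tau):
    # simple exponential surrogate ("agent") model of the temperature rise
    return Tss + (T[0] - Tss) * np.exp(-t / tau)

(Tss, tau), _ = curve_fit(agent, t, T, p0=[T[-1], 500.0])
print(f"agent model: steady state {Tss:.1f} deg C, time constant {tau:.0f} s")
print(f"max abs error vs numerical solution: {np.max(np.abs(agent(t, Tss, tau) - T)):.3f} deg C")
```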
Putting sex education in its place.
Cassell, C
1981-04-01
In order to help reduce fears and anxieties regarding the influence of sex education in a public school setting, school and community sexuality educators need to better articulate the difference between formal and structured sex education and non-formal, informal and incidental sex learning. Sex education is only 1 aspect of the sexual learning process. 2 main points have to be clarified for parents and the general public to set the stage for a new way to view school and community involvement in the sexual learning process: the schools' sexuality education courses constitute only a small portion of the sexual learning process; and sexual learning is not an event for youth only, but a process spanning life. Sex education (the process) connotes an academic setting with specific curricula taught by a trained instructor, but sexual learning relates to environmental, non-formal incidental learning from a multitude of sources. Studies indicate that teenagers receive about 90% of their contraceptive and sexuality information from peers and mass media and that these sources of information are becoming their preferred sources of sex education. What is needed is a way to address and improve the conditions of sexual learning in the community. As home is the ideal environment for primary and positive sexual learning, parents need support in their role as sex educators. Classroom sexuality education curricula in all school settings have a solid place in the process of sexual learning.
Demonstration of UXO-PenDepth for the Estimation of Projectile Penetration Depth
2010-08-01
Effects (JTCG/ME) in August 2001. The accreditation process included verification and validation (V&V) by a subject matter expert (SME) other than... Within UXO-PenDepth, there are three sets of input parameters that are required: impact conditions (Fig. 1a), penetrator properties, and target... properties. The impact conditions that need to be defined are projectile orientation and impact velocity. The algorithm has been evaluated against
Reuse Requirements for Generating Long Term Climate Data Sets
NASA Astrophysics Data System (ADS)
Fleig, A. J.
2007-12-01
Creating long term climate data sets from remotely sensed data requires a specialized form of code reuse. To detect long term trends in a geophysical parameter, such as global ozone amount or mean sea surface temperature, it is essential to be able to differentiate between real changes in the measurement and artifacts related to changes in processing algorithms or instrument characteristics. The ability to rerun the exact algorithm used to produce a given data set many years after the data was originally made is essential to create consistent long term data sets. It is possible to quickly develop a basic algorithm that will convert a perfect instrument measurement into a geophysical parameter value for a well specified set of conditions. However the devil is in the details and it takes a massive effort to develop and verify a processing system to generate high quality global climate data over all necessary conditions. As an example, from 1976 until now, over a hundred man years and eight complete reprocessings have been spent on deriving thirty years of total ozone data from multiple backscattered ultraviolet instruments. To obtain a global data set it is necessary to make numerous assumptions and to handle many special conditions (e.g., "What happens at high solar zenith angles with scattered clouds for snow covered terrain at high altitudes?"). It is easier to determine the precision of a remotely sensed data set than to determine its absolute accuracy. Fortunately if the entire data set is made with a single instrument and a constant algorithm the ability to detect long term trends is primarily determined by the precision of the measurement system rather than its absolute accuracy. However no instrument runs forever and new processing algorithms are developed over time. Introducing the resulting changes can impact the estimate of product precision and reduce the ability to estimate long term trends. Given an extended period of time when both the initial measurement system and the new one provide simultaneous measurements it may be possible to identify differences between the two systems and produce a consistent merged long term data set. Unfortunately this is often not the case. Instead it is necessary to understand the exact details of all the assumptions built into the initial processing system and to evaluate the impact of changes in each of these assumptions and of new features introduced into the next generation processing system. This is not possible without complete understanding of exactly how the original data was produced. While scientific papers and algorithm theoretical basis documents provide substantial details about the concepts they do not provide the necessary detail. Only exact processing codes with all the necessary ancillary data to run them provide the needed information. Since it will be necessary to modify the code for the new instrument it is also necessary to provide all of the tools such as table generation routines and input parameters used to generate the code. This has not been a problem for the people that make the first set of measurements of a given parameter. There was no similar predecessor global data set to match and they know what they assumed in making their measurements. But we are entering an era when it is necessary to consider the next generation. For instance the entire 30 year global ozone data set that started with the Total Ozone Mapping Spectrometer instrument launched in 1978 on the Nimbus 7 spacecraft was produced by a single science team.
Similar measurements will be made well into the middle of the coming century with instruments to be flown on the National Polar Orbiting Environmental Satellite System, but the original science team (unfortunately) will not be there to explain what they did over that period.
Braun, Mischa; Weinrich, Christiane; Finke, Carsten; Ostendorf, Florian; Lehmann, Thomas-Nicolas; Ploner, Christoph J
2011-03-01
Converging evidence from behavioral and imaging studies suggests that within the human medial temporal lobe (MTL) the hippocampal formation may be particularly involved in recognition memory of associative information. However, it is unclear whether the hippocampal formation processes all types of associations or whether there is a specialization for processing of associations involving spatial information. Here, we investigated this issue in six patients with postsurgical lesions of the right MTL affecting the hippocampal formation and in ten healthy controls. Subjects performed a battery of delayed match-to-sample tasks with two delays (900/5,000 ms) and three set sizes. Subjects were requested to remember either single features (colors, locations, shapes, letters) or feature associations (color-location, color-shape, color-letter). In the single-feature conditions, performance of patients did not differ from controls. In the association conditions, a significant delay-dependent deficit in memory of color-location associations was found. This deficit was largely independent of set size. By contrast, performance in the color-shape and color-letter conditions was normal. These findings support the hypothesis that a region within the right MTL, presumably the hippocampal formation, does not equally support all kinds of visual memory but rather has a bias for processing of associations involving spatial information. Recruitment of this region during memory tasks appears to depend both on processing type (associative/nonassociative) and to-be-remembered material (spatial/nonspatial). Copyright © 2010 Wiley-Liss, Inc.
Scheduling Operational-Level Courses of Action
2003-10-01
Process modelling and analysis – process synchronisation techniques. Information and knowledge management – collaborative planning systems – workflow... logistics – some tasks may consume resources. The military user may wish to impose synchronisation constraints among tasks. A military end state can be... effects, – constrained with resource and synchronisation considerations, and – lead to the achievement of conditions set in the end state. The COA is
Managing expectations: cognitive authority and experienced control in complex healthcare processes.
Hunt, Katherine J; May, Carl R
2017-07-05
Balancing the normative expectations of others (accountabilities) against the personal and distributed resources available to meet them (capacity) is a ubiquitous feature of social relations in many settings. This is an important problem in the management of long-term conditions, because of widespread problems of non-adherence to treatment regimens. Using long-term conditions as an example, we set out a middle-range theory of this balancing work. The middle-range theory was constructed in four stages. First, a qualitative elicitation study of men with heart failure was used to develop general propositions about patient and caregiver experience, and about the ways that the organisation and delivery of care affected this. Second, these propositions were developed and confirmed through a systematic review of the qualitative research literature. Third, theoretical propositions and constructs were built, refined and presented as a logic model associated with two main theoretical propositions. Finally, a construct validation exercise was undertaken, in which construct definitions informed reanalysis of a set of systematic reviews of studies of patient and caregiver experiences of heart failure that had been included in an earlier meta-review. Cognitive Authority Theory identifies, characterises and explains negotiation processes in which people manage their relations with the expectations of normative systems - like those encountered in the management of long-term conditions. Here, their cognitive authority is the product of an assessment of competence, trustworthiness and credibility made about a person by other participants in a healthcare process; and their experienced control is a function of the degree to which they successfully manage the external process-specific limiting factors that make it difficult to otherwise perform in their role. Cognitive Authority Theory assists in explaining how participants in complex social processes manage important relational aspects of inequalities in power and expertise. It can play an important part in understanding the dynamics of participation in healthcare processes. It suggests ways in which these burdens may lead to relationally induced non-adherence to treatment regimens and self-care programmes, and points to targets where intervention may reduce these adverse outcomes.
Ellouze, M; Pichaud, M; Bonaiti, C; Coroller, L; Couvert, O; Thuault, D; Vaillant, R
2008-11-30
Time temperature integrators or indicators (TTIs) are effective tools that make continuous monitoring of the time-temperature history of chilled products possible throughout the cold chain. Their correct setting is of critical importance to ensure food quality. The objective of this study was to develop a model to facilitate accurate settings of the CRYOLOG biological TTI, TRACEO. Experimental designs were used to investigate and model the effects of temperature, TTI inoculum size, pH, and water activity on its response time. The modelling process went through several steps addressing growth, acidification and inhibition phenomena in dynamic conditions. The model showed satisfactory results, and validations in industrial conditions gave clear evidence that such a model is a valuable tool, not only to predict accurate response times of TRACEO, but also to propose precise settings to manufacture the appropriate TTI to trace a particular food according to a given time temperature scenario.
Korzynska, Anna; Roszkowiak, Lukasz; Pijanowska, Dorota; Kozlowski, Wojciech; Markiewicz, Tomasz
2014-01-01
The aim of this study is to compare digital images of tissue biopsies captured with an optical microscope using the bright field technique under various light conditions. The range of colour variation in tissue samples immunohistochemically stained with 3,3'-Diaminobenzidine and Haematoxylin is immense and comes from various sources. One of them is an inadequate setting of the camera's white balance with respect to the microscope's light colour temperature. Although this type of error can be easily handled during the image acquisition stage, it can also be eliminated afterwards with colour adjustment algorithms. The examination of the dependence of colour variation on the microscope's light temperature and the camera settings was done as introductory research for the process of automatic colour standardization. Six fields of view with empty space among the tissue samples were selected for analysis. Each field of view was acquired 225 times with various microscope light temperatures and camera white balance settings. Fourteen randomly chosen images were corrected and compared with the reference image by the following methods: Mean Square Error, Structural SIMilarity and visual assessment by a viewer. For two types of backgrounds and two types of objects, the statistical image descriptors (range, median, mean and its standard deviation of chromaticity on the a and b channels of the CIELab colour space, luminance L, and local colour variability for the objects' specific area) were calculated. The results were averaged over the 6 images acquired under the same light conditions and camera settings for each sample. The analysis of the results leads to the following conclusions: (1) images collected with the white balance setting adjusted to the light colour temperature cluster in a certain area of the chromatic space; (2) white balance correction of images collected with camera white balance settings not matched to the light temperature moves the image descriptors into the proper chromatic space, but simultaneously changes the luminance value. The image unification process, in the sense of colour fidelity, can therefore be handled in a separate introductory stage before automatic image analysis.
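The comparison step might look roughly like the following scikit-image/NumPy sketch; the file names are placeholders, and the SSIM channel argument name varies between scikit-image versions.

```python
import numpy as np
from skimage import io, color
from skimage.metrics import structural_similarity

ref = io.imread("reference_field.png")[..., :3] / 255.0   # placeholder file names
img = io.imread("test_field.png")[..., :3] / 255.0

mse = float(np.mean((ref - img) ** 2))
# note: the channel argument is 'channel_axis' in recent scikit-image versions
ssim = structural_similarity(ref, img, channel_axis=-1, data_range=1.0)

lab = color.rgb2lab(img)
for name, ch in (("L", lab[..., 0]), ("a", lab[..., 1]), ("b", lab[..., 2])):
    print(f"{name}: range={np.ptp(ch):.1f} median={np.median(ch):.1f} "
          f"mean={ch.mean():.1f} std={ch.std():.1f}")
print(f"MSE={mse:.5f}  SSIM={ssim:.4f}")
```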
Estimating Soil Hydraulic Parameters using Gradient Based Approach
NASA Astrophysics Data System (ADS)
Rai, P. K.; Tripathi, S.
2017-12-01
The conventional way of estimating the parameters of a differential equation is to minimize the error between the observations and their estimates. The estimates are produced from the forward solution (numerical or analytical) of the differential equation assuming a set of parameters. Parameter estimation using the conventional approach requires high computational cost, setting-up of initial and boundary conditions, and formation of difference equations in case the forward solution is obtained numerically. Gaussian process based approaches like Gaussian Process Ordinary Differential Equation (GPODE) and Adaptive Gradient Matching (AGM) have been developed to estimate the parameters of Ordinary Differential Equations without explicitly solving them. Claims have been made that these approaches can straightforwardly be extended to Partial Differential Equations; however, this has never been demonstrated. This study extends the AGM approach to PDEs and applies it to estimate the parameters of the Richards equation. Unlike the conventional approach, the AGM approach does not require setting-up of initial and boundary conditions explicitly, which is often difficult in real-world applications of the Richards equation. The developed methodology was applied to synthetic soil moisture data. It was seen that the proposed methodology can estimate the soil hydraulic parameters correctly and can be a potential alternative to the conventional method.
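A toy illustration of the gradient-matching idea (not the AGM algorithm itself, and with a simple linear ODE standing in for the Richards equation): a Gaussian process is fitted to noisy observations, the derivative of its mean is obtained numerically, and the parameter is chosen to match the model right-hand side without any forward solve.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
theta_true = 0.8                          # toy model: dy/dt = -theta * y (assumption)
t_obs = np.linspace(0.0, 5.0, 25)
y_obs = np.exp(-theta_true * t_obs) + rng.normal(0.0, 0.01, t_obs.size)

gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(1e-4), normalize_y=True)
gp.fit(t_obs[:, None], y_obs)

t = np.linspace(0.1, 4.9, 200)
y_hat = gp.predict(t[:, None])
dy_hat = np.gradient(y_hat, t)            # derivative of the GP mean, numerically

def mismatch(theta):
    # discrepancy between the GP-implied slope and the model right-hand side
    return float(np.mean((dy_hat - (-theta * y_hat)) ** 2))

res = minimize_scalar(mismatch, bounds=(0.01, 5.0), method="bounded")
print(f"true theta = {theta_true}, gradient-matching estimate = {res.x:.3f}")
```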
Hydrologic Process-oriented Optimization of Electrical Resistivity Tomography
NASA Astrophysics Data System (ADS)
Hinnell, A.; Bechtold, M.; Ferre, T. A.; van der Kruk, J.
2010-12-01
Electrical resistivity tomography (ERT) is commonly used in hydrologic investigations. Advances in joint and coupled hydrogeophysical inversion have enhanced the quantitative use of ERT to construct and condition hydrologic models (i.e. identify hydrologic structure and estimate hydrologic parameters). However the selection of which electrical resistivity data to collect and use is often determined by a combination of data requirements for geophysical analysis, intuition on the part of the hydrogeophysicist and logistical constraints of the laboratory or field site. One of the advantages of coupled hydrogeophysical inversion is the direct link between the hydrologic model and the individual geophysical data used to condition the model. That is, there is no requirement to collect geophysical data suitable for independent geophysical inversion. The geophysical measurements collected can be optimized for estimation of hydrologic model parameters rather than to develop a geophysical model. Using a synthetic model of drip irrigation we evaluate the value of individual resistivity measurements to describe the soil hydraulic properties and then use this information to build a data set optimized for characterizing hydrologic processes. We then compare the information content in the optimized data set with the information content in a data set optimized using a Jacobian sensitivity analysis.
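A schematic version of the Jacobian-based selection idea, with a cheap analytic forward model standing in for the coupled hydrogeophysical simulator and all parameter values assumed: the sensitivity of each candidate measurement to the hydraulic parameters is computed by finite differences, and the most sensitive measurements are retained.

```python
import numpy as np

def forward(params, design):
    # toy forward model: one synthetic "measurement" per electrode configuration
    a, b = params                          # stand-ins for two hydraulic parameters
    return np.exp(-a * design[:, 0]) * (1.0 + b * design[:, 1])

def jacobian(params, design, eps=1e-6):
    p = np.asarray(params, dtype=float)
    base = forward(p, design)
    J = np.empty((design.shape[0], p.size))
    for j in range(p.size):
        dp = p.copy()
        dp[j] += eps
        J[:, j] = (forward(dp, design) - base) / eps
    return J

rng = np.random.default_rng(0)
design = rng.random((50, 2))               # 50 candidate measurement configurations
J = jacobian([1.5, 0.7], design)
score = np.linalg.norm(J, axis=1)          # per-measurement sensitivity score
print("10 most informative measurement indices:", np.argsort(score)[::-1][:10])
```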
Nasiri, Rasoul
2016-01-01
The role of boundary conditions at the interface for both the Boltzmann equation and the set of Navier-Stokes equations has been suggested to be important for the study of multiphase flows such as the evaporation/condensation process, which does not always obey equilibrium conditions. Here we present aspects of transition-state theory (TST) alongside kinetic gas theory (KGT) relevant to the study of quasi-equilibrium interfacial phenomena and equilibrium gas phase processes, respectively. A two-state mathematical model for long-chain hydrocarbons, which have multi-structural specifications, is introduced to clarify how kinetics and thermodynamics affect the evaporation/condensation process at the surface of a fuel droplet and in the liquid and gas phases, and then to show how experimental observations for a number of n-alkanes may be reproduced using a hybrid TST-KGT framework with physically reasonable parameters controlling the interface, gas and liquid phases. The importance of internal activation dynamics at the surface of n-alkane droplets is established during the evaporation/condensation process. PMID:27215897
Mark H. Eisenbies; W. Brian Hughes
2000-01-01
Hydrologic processes are the main determinants of the type of wetland located on a site. Precipitation, groundwater, or flooding interact with soil properties and geomorphic setting to yield a complex matrix of conditions that control groundwater flux, water storage and discharge, water chemistry, biotic productivity, biodiversity, and biogeochemical cycling....
Identifying Opportunities for Vertical Integration of Biochemistry and Clinical Medicine.
Wendelberger, Karen J.; Burke, Rebecca; Haas, Arthur L.; Harenwattananon, Marisa; Simpson, Deborah
1998-01-01
Objectives: Retention of basic science knowledge, as judged by National Board of Medical Examiners' (NBME) data, suffers due to lack of apparent relevance and isolation of instruction from clinical application, especially in biochemistry. However, the literature reveals no systematic process for identifying key biochemical concepts and associated clinical conditions. This study systematically identified difficult biochemical concepts and their common clinical conditions as a critical step towards enhancing relevance and retention of biochemistry. Methods: A multi-step/multiple-stakeholder process was used to: (1) identify important biochemistry concepts; (2) determine students' perceptions of concept difficulty; (3) assess biochemistry faculty, student, and clinical teaching scholars' perceived relevance of identified concepts; and (4) identify associated common clinical conditions for relevant and difficult concepts. Surveys and a modified Delphi process were used to gather data, subsequently analyzed using SPSS for Windows. Results: Sixteen key biochemical concepts were identified. Second year medical students rated 14/16 concepts as extremely difficult while fourth year students rated nine concepts as moderately to extremely difficult. On average, each teaching scholar generated common clinical conditions for 6.2 of the 16 concepts, yielding a set of seven critical concepts and associated clinical conditions. Conclusions: Key stakeholders in the instructional process struggle to identify biochemistry concepts that are critical, difficult to learn and associated with common clinical conditions. However, through a systematic process beginning with identification of concepts and associated clinical conditions, relevance of basic science instruction can be enhanced.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nee, K.; Bryan, S.; Levitskaia, T.
The reliability of chemical processes can be greatly improved by implementing inline monitoring systems. Combining multivariate analysis with non-destructive sensors can enhance the process without interfering with the operation. Here, we present hierarchical models using both principal component analysis and partial least squares analysis developed for different chemical components representative of solvent extraction process streams. A training set of 380 samples and an external validation set of 95 samples were prepared, and near infrared and Raman spectral data as well as conductivity under variable temperature conditions were collected. The results from the models indicate that careful selection of the spectral range is important. By compressing the data through Principal Component Analysis (PCA), we lower the rank of the data set to its most dominant features while maintaining the key principal components to be used in the regression analysis. Within the studied data set, concentrations of five chemical components were modeled: total nitrate (NO3-), total acid (H+), neodymium (Nd3+), sodium (Na+), and ionic strength (I.S.). The best overall model prediction for each of the species studied used a combined data set comprised of complementary techniques including NIR, Raman, and conductivity. Finally, our study shows that chemometric models are powerful but require a significant amount of carefully analyzed data to capture variations in the chemistry.
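The PCA-plus-PLS workflow described above can be sketched as follows with scikit-learn on synthetic "spectra"; the study's NIR, Raman and conductivity data are not reproduced, so all numbers here are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, channels = 380, 200
conc = rng.random((n, 5))                       # 5 component concentrations (assumed)
basis = rng.normal(size=(5, channels))          # hypothetical pure-component spectra
X = conc @ basis + rng.normal(0.0, 0.05, (n, channels))

X_tr, X_te, c_tr, c_te = train_test_split(X, conc, test_size=0.25, random_state=0)

pca = PCA(n_components=10).fit(X_tr)            # compress to dominant features
pls = PLSRegression(n_components=5).fit(pca.transform(X_tr), c_tr)

pred = pls.predict(pca.transform(X_te))
rmsep = np.sqrt(np.mean((pred - c_te) ** 2, axis=0))
print("variance captured by 10 PCs:", round(float(pca.explained_variance_ratio_.sum()), 3))
print("RMSE of prediction per component:", np.round(rmsep, 3))
```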
Priority setting at the micro-, meso- and macro-levels in Canada, Norway and Uganda.
Kapiriri, Lydia; Norheim, Ole Frithjof; Martin, Douglas K
2007-06-01
The objectives of this study were (1) to describe the process of healthcare priority setting in Ontario-Canada, Norway and Uganda at the three levels of decision-making; (2) to evaluate the description using the framework for fair priority setting, accountability for reasonableness; so as to identify lessons of good practices. We carried out case studies involving key informant interviews, with 184 health practitioners and health planners from the macro-level, meso-level and micro-level from Canada-Ontario, Norway and Uganda (selected by virtue of their varying experiences in priority setting). Interviews were audio-recorded, transcribed and analyzed using a modified thematic approach. The descriptions were evaluated against the four conditions of "accountability for reasonableness", relevance, publicity, revisions and enforcement. Areas of adherence to these conditions were identified as lessons of good practices; areas of non-adherence were identified as opportunities for improvement. (i) at the macro-level, in all three countries, cabinet makes most of the macro-level resource allocation decisions and they are influenced by politics, public pressure, and advocacy. Decisions within the ministries of health are based on objective formulae and evidence. International priorities influenced decisions in Uganda. Some priority-setting reasons are publicized through circulars, printed documents and the Internet in Canada and Norway. At the meso-level, hospital priority-setting decisions were made by the hospital managers and were based on national priorities, guidelines, and evidence. Hospital departments that handle emergencies, such as surgery, were prioritized. Some of the reasons are available on the hospital intranet or presented at meetings. Micro-level practitioners considered medical and social worth criteria. These reasons are not publicized. Many practitioners lacked knowledge of the macro- and meso-level priority-setting processes. (ii) Evaluation-relevance: medical evidence and economic criteria were thought to be relevant, but lobbying was thought to be irrelevant. Publicity: all cases lacked clear and effective mechanisms for publicity. REVISIONS: formal mechanisms, following the planning hierarchy, were considered less effective, informal political mechanisms were considered more effective. Canada and Norway had patients' relations officers to deal with patients' dissensions; however, revisions were more difficult in Uganda. Enforcement: leadership for ensuring decision-making fairness was not apparent. The different levels of priority setting in the three countries fulfilled varying conditions of accountability for reasonableness, none satisfied all the four conditions. To improve, decision makers at the three levels in all three cases should engage frontline practitioners, develop more effectively publicized reasons, and develop formal mechanisms for challenging and revising decisions.
A data mining approach to optimize pellets manufacturing process based on a decision tree algorithm.
Ronowicz, Joanna; Thommes, Markus; Kleinebudde, Peter; Krysiński, Jerzy
2015-06-20
The present study is focused on a thorough analysis of cause-effect relationships between pellet formulation characteristics (pellet composition as well as process parameters) and the selected quality attribute of the final product. The quality of the pellets was expressed by their shape, using the aspect ratio value. A data matrix for chemometric analysis consisted of 224 pellet formulations prepared by means of eight different active pharmaceutical ingredients and several various excipients, using different extrusion/spheronization process conditions. The data set contained 14 input variables (both formulation and process variables) and one output variable (pellet aspect ratio). A tree regression algorithm consistent with the Quality by Design concept was applied to obtain deeper understanding and knowledge of the formulation and process parameters affecting the final pellet sphericity. A clear, interpretable set of decision rules was generated. The spheronization speed, spheronization time, number of holes and water content of the extrudate have been recognized as the key factors influencing pellet aspect ratio. The most spherical pellets were achieved by using a large number of holes during extrusion, a high spheronizer speed and a longer time of spheronization. The described data mining approach enhances knowledge about the pelletization process and simultaneously facilitates searching for the optimal process conditions which are necessary to achieve ideal spherical pellets, resulting in good flow characteristics. This data mining approach can be taken into consideration by industrial formulation scientists to support rational decision making in the field of pellet technology. Copyright © 2015 Elsevier B.V. All rights reserved.
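The type of analysis described can be sketched with scikit-learn's regression tree on synthetic data; the variable names and ranges below are assumptions, not the study's data matrix.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
n = 224
speed = rng.uniform(500, 1500, n)          # spheronization speed, rpm (assumed)
time_min = rng.uniform(2, 10, n)           # spheronization time, min (assumed)
water = rng.uniform(20, 50, n)             # water content of extrudate, % (assumed)
holes = rng.integers(100, 1000, n)         # number of holes (assumed)

# hypothetical aspect ratio: closer to 1.0 (spherical) at high speed and long time
aspect_ratio = (1.4 - 2e-4 * speed - 0.02 * time_min
                + rng.normal(0.0, 0.02, n)).clip(1.0, None)

X = np.column_stack([speed, time_min, water, holes])
tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=20).fit(X, aspect_ratio)
print(export_text(tree, feature_names=["speed", "time", "water", "holes"]))
```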
Model coupling intraparticle diffusion/sorption, nonlinear sorption, and biodegradation processes
Karapanagioti, Hrissi K.; Gossard, Chris M.; Strevett, Keith A.; Kolar, Randall L.; Sabatini, David A.
2001-01-01
Diffusion, sorption and biodegradation are key processes impacting the efficiency of natural attenuation. While each process has been studied individually, limited information exists on the kinetic coupling of these processes. In this paper, a model is presented that couples nonlinear and nonequilibrium sorption (intraparticle diffusion) with biodegradation kinetics. Initially, these processes are studied independently (i.e., intraparticle diffusion, nonlinear sorption and biodegradation), with appropriate parameters determined from these independent studies. Then, the coupled processes are studied, with an initial data set used to determine biodegradation constants that were subsequently used to successfully predict the behavior of a second data set. The validated model is then used to conduct a sensitivity analysis, which reveals conditions where biodegradation becomes desorption rate-limited. If the chemical is not pre-equilibrated with the soil prior to the onset of biodegradation, then fast sorption will reduce aqueous concentrations and thus biodegradation rates. Another sensitivity analysis demonstrates the importance of including nonlinear sorption in a coupled diffusion/sorption and biodegradation model. While predictions based on linear sorption isotherms agree well with solution concentrations, for the conditions evaluated this approach overestimates the percentage of contaminant biodegraded by as much as 50%. This research demonstrates that nonlinear sorption should be coupled with diffusion/sorption and biodegradation models in order to accurately predict bioremediation and natural attenuation processes. To our knowledge this study is unique in studying nonlinear sorption coupled with intraparticle diffusion and biodegradation kinetics with natural media.
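A rough batch-system sketch of the coupled processes (kinetic Freundlich-type sorption plus Monod biodegradation in the aqueous phase). The lumped, non-spatial formulation and all parameter values are assumptions for illustration; the paper resolves intraparticle diffusion explicitly.

```python
import numpy as np
from scipy.integrate import solve_ivp

Kf, nf = 2.0, 0.7              # Freundlich coefficient and exponent (assumed)
alpha = 0.05                    # first-order sorption mass-transfer rate, 1/h (assumed)
mu_max, Ks, Y = 0.2, 1.0, 0.4   # Monod parameters (assumed)
soil_to_water = 0.5             # g soil per mL water (assumed)

def rhs(t, z):
    C, S, B = z                                  # aqueous, sorbed, biomass
    sorption = alpha * (Kf * max(C, 0.0) ** nf - S)
    growth = mu_max * B * C / (Ks + C)
    dC = -soil_to_water * sorption - growth / Y  # loss to sorption and biodegradation
    return [dC, sorption, growth]

sol = solve_ivp(rhs, (0.0, 200.0), [10.0, 0.0, 0.05], max_step=0.5)
C_end, S_end, B_end = sol.y[:, -1]
print(f"after 200 h: aqueous {C_end:.2f}, sorbed {S_end:.2f}, biomass {B_end:.3f}")
```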
Reliability modelling and analysis of a multi-state element based on a dynamic Bayesian network
NASA Astrophysics Data System (ADS)
Li, Zhiqiang; Xu, Tingxue; Gu, Junyuan; Dong, Qi; Fu, Linyu
2018-04-01
This paper presents a quantitative reliability modelling and analysis method for multi-state elements based on a combination of the Markov process and a dynamic Bayesian network (DBN), taking perfect repair, imperfect repair and condition-based maintenance (CBM) into consideration. The Markov models of elements without repair and under CBM are established, and an absorbing set is introduced to determine the reliability of the repairable element. According to the state-transition relations between the states determined by the Markov process, a DBN model is built. In addition, its parameters for series and parallel systems, namely, conditional probability tables, can be calculated by referring to the conditional degradation probabilities. Finally, the power of a control unit in a failure model is used as an example. A dynamic fault tree (DFT) is translated into a Bayesian network model, and subsequently extended to a DBN. The results show the state probabilities of an element and of the system without repair, with perfect and imperfect repair, and under CBM; the solutions with an absorbing set, obtained from the differential equations, are plotted and verified. Through forward inference, the reliability of the control unit is determined for different kinds of modes. Finally, weak nodes in the control unit are identified.
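A small numerical sketch of the Markov part of the approach: state probabilities of a three-state element are propagated with and without repair from an assumed generator matrix; the transition rates are not values from the paper.

```python
import numpy as np
from scipy.linalg import expm

lam1, lam2, mu = 0.01, 0.02, 0.05        # degradation and repair rates, 1/h (assumed)

# generator matrices Q (rows sum to zero); state order: good, degraded, failed
Q_norepair = np.array([[-lam1, lam1, 0.0],
                       [0.0, -lam2, lam2],
                       [0.0, 0.0, 0.0]])            # failed state is absorbing
Q_repair = np.array([[-lam1, lam1, 0.0],
                     [mu, -(lam2 + mu), lam2],
                     [0.0, mu, -mu]])

p0 = np.array([1.0, 0.0, 0.0])
for label, Q in (("no repair", Q_norepair), ("imperfect repair", Q_repair)):
    p = p0 @ expm(Q * 1000.0)             # state probabilities at t = 1000 h
    print(f"{label}: P(states) = {np.round(p, 3)}, P(not failed) = {1 - p[2]:.3f}")
```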
Subordination to periodic processes and synchronization
NASA Astrophysics Data System (ADS)
Ascolani, Gianluca; Bologna, Mauro; Grigolini, Paolo
2009-07-01
We study the subordination to a process that is periodic in the natural time scale, and equivalent to a clock with N states. The rationale for this investigation is given by a set of many interacting clocks with N states. The natural time scale representation corresponds to the dynamics of an individual clock with no interaction with the other clocks of this set. We argue that the cooperation among the clocks of this set has the effect of generating a global clock, whose times of sojourn in each of its N states are described by a distribution density with an inverse power law form and power index μ<2. This is equivalent to extending the widely used subordination method from fluctuation-dissipation processes to periodic processes, thereby raising the question of whether special conditions exist of perfect synchronization, signaled by regular oscillations, and especially by oscillations with no damping. We study first the case of a Poisson subordination function. We show that in spite of the random nature of the subordination method the procedure has the effect of creating damped oscillations, whose damping vanishes in the limiting case of N≫1, thereby suggesting a condition of perfect synchronization in this limit. Bateman's mathematical arguments [H. Bateman, Higher Transcendental Functions, vol. III, Robert E. Krieger Publishing Company, Inc., Malabar, FL; copyright 1953 by McGraw-Hill Book Company Inc.] indicate that the condition of perfect synchronization is possible also in the non-Poisson case, with μ<2, although it may lie beyond the range of computer simulation. To make the theoretical predictions accessible to numerical simulation, we use a subordination function whose survival probability is a Mittag-Leffler exponential function. This method prevents us from directly establishing the macroscopic coherence emerging from μ=2, which generates a perfect form of 1/f noise. However, it affords indirect evidence that perfect synchronization signaled by undamped regular oscillations may be produced in this case. Furthermore, we explore a condition characterized by an excellent agreement between theory and numerical simulation, where the long-time region relaxation, with a perfect inverse power law decay, emerging from the subordination to ordinary fluctuation-dissipation processes, is replaced by exponentially damped regular oscillations.
An automated distinction of DICOM images for lung cancer CAD system
NASA Astrophysics Data System (ADS)
Suzuki, H.; Saita, S.; Kubo, M.; Kawata, Y.; Niki, N.; Nishitani, H.; Ohmatsu, H.; Eguchi, K.; Kaneko, M.; Moriyama, N.
2009-02-01
Automated distinction of medical images is an important preprocessing step in Computer-Aided Diagnosis (CAD) systems. CAD systems have been developed using medical image sets with specific scan conditions and body parts. However, varied examinations are performed at medical sites. The specification of an examination is contained in the DICOM textual meta information. Most DICOM textual meta information can be considered reliable; however, the body part information cannot always be considered reliable. In this paper, we describe an automated distinction of DICOM images as a preprocessing step for a lung cancer CAD system. Our approach uses DICOM textual meta information and low cost image processing. Firstly, the textual meta information, such as the scan conditions of the DICOM image, is distinguished. Secondly, the body parts in the DICOM image are identified by image processing. The identification of body parts is based on anatomical structure, which is represented by features of three regions: body tissue, bone, and air. The method is effective for the practical use of a lung cancer CAD system at medical sites.
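A sketch of the two-step distinction, assuming the standard DICOM attributes Modality and BodyPartExamined and a crude HU-based tissue/bone/air split; the thresholds and the chest heuristic are simplifications, not the authors' algorithm.

```python
import numpy as np
import pydicom

def distinguish(path):
    ds = pydicom.dcmread(path)
    # step 1: textual meta information
    modality = getattr(ds, "Modality", "unknown")
    claimed_part = getattr(ds, "BodyPartExamined", "")     # not always reliable
    # step 2: cheap image-based check (CT only): fractions of air, soft tissue, bone
    hu = (ds.pixel_array * float(getattr(ds, "RescaleSlope", 1))
          + float(getattr(ds, "RescaleIntercept", 0)))
    air, bone = float(np.mean(hu < -500)), float(np.mean(hu > 300))
    tissue = 1.0 - air - bone
    looks_like_chest = modality == "CT" and air > 0.2 and tissue > 0.3
    return modality, claimed_part, looks_like_chest

print(distinguish("slice_0001.dcm"))    # placeholder file name
```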
Why pilots are least likely to get good decision making precisely when they need it most
NASA Technical Reports Server (NTRS)
Maher, John W.
1991-01-01
Studies of commercial aircraft incidents and accidents indicate that, in flight conditions not covered by standard operating procedures, as well as when the environment is saturated with information or unmanaged stress, cognitive shortcuts dominate aircrews' decisionmaking processes. Multidisciplinary research on such situations with high-fidelity simulators becomes critically important, as do psychometric tools which examine vigilance, personality resiliency before stressful conditions, and decisional and interpersonal mind-sets.
Permutation flow-shop scheduling problem to optimize a quadratic objective function
NASA Astrophysics Data System (ADS)
Ren, Tao; Zhao, Peng; Zhang, Da; Liu, Bingqian; Yuan, Huawei; Bai, Danyu
2017-09-01
A flow-shop scheduling model enables appropriate sequencing for each job and for processing on a set of machines in compliance with identical processing orders. The objective is to achieve a feasible schedule for optimizing a given criterion. Permutation is a special setting of the model in which the processing order of the jobs on the machines is identical for each subsequent step of processing. This article addresses the permutation flow-shop scheduling problem to minimize the criterion of total weighted quadratic completion time. With a probability hypothesis, the asymptotic optimality of the weighted shortest processing time schedule under a consistency condition (WSPT-CC) is proven for sufficiently large-scale problems. However, the worst case performance ratio of the WSPT-CC schedule is the square of the number of machines in certain situations. A discrete differential evolution algorithm, where a new crossover method with multiple-point insertion is used to improve the final outcome, is presented to obtain high-quality solutions for moderate-scale problems. A sequence-independent lower bound is designed for pruning in a branch-and-bound algorithm for small-scale problems. A set of random experiments demonstrates the performance of the lower bound and the effectiveness of the proposed algorithms.
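A sketch of the ordering rule and the objective discussed above: jobs are sequenced by a weighted-shortest-processing-time key and the total weighted quadratic completion time of the resulting permutation schedule is evaluated. The consistency condition and the asymptotic analysis of the paper are not reproduced, and the choice of total processing time as the WSPT key is an assumption.

```python
import numpy as np

def completion_times(proc, order):
    # proc[i, j] = processing time of job j on machine i; same job order on every machine
    m, n = proc.shape[0], len(order)
    C = np.zeros((m, n))
    for i in range(m):
        for k, j in enumerate(order):
            C[i, k] = max(C[i - 1, k] if i > 0 else 0.0,
                          C[i, k - 1] if k > 0 else 0.0) + proc[i, j]
    return C[-1]                          # completion time of each job on the last machine

rng = np.random.default_rng(0)
machines, jobs = 3, 8
proc = rng.integers(1, 10, size=(machines, jobs)).astype(float)
weights = rng.integers(1, 5, size=jobs).astype(float)

total_proc = proc.sum(axis=0)             # simple WSPT-style key (an assumption)
order = sorted(range(jobs), key=lambda j: total_proc[j] / weights[j])
C_last = completion_times(proc, order)
objective = float(np.sum(weights[order] * C_last ** 2))
print("WSPT order:", order, "| total weighted quadratic completion time:", objective)
```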
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winschel, R. A.; Robbins, G. A.; Burke, F. P.
1986-11-01
Conoco Coal Research Division is characterizing samples of direct coal liquefaction process oils based on a variety of analytical techniques to provide a detailed description of the chemical composition of the oils, to more fully understand the interrelationship of process oil composition and process operations, to aid in plant operation, and to lead to process improvements. The approach taken is to obtain analyses of a large number of well-defined process oils taken during periods of known operating conditions and known process performance. A set of thirty-one process oils from the Hydrocarbon Research, Inc. (HRI) Catalytic Two-Stage Liquefaction (CTSL) bench unit was analyzed to provide information on process performance. The Fourier-transform infrared (FTIR) spectroscopic method for the determination of phenolics in coal liquids was further verified. A set of four tetrahydrofuran-soluble products from Purdue Research Foundation's reactions of coal/potassium/crown ether, analyzed by GC/MS and FTIR, were found to consist primarily of paraffins (excluding contaminants). Characterization data (elemental analyses, ¹H-NMR and phenolic concentrations) were obtained on a set of twenty-seven two-stage liquefaction oils. Two activities were begun but not completed. First, analyses were started on oils from Wilsonville Run 250 (close-coupled ITSL). Also, a carbon isotopic method is being examined for utility in determining the relative proportion of coal and petroleum products in coprocessing oils.
NASA Astrophysics Data System (ADS)
Rohmer, Jeremy; Idier, Deborah; Bulteau, Thomas; Paris, François
2016-04-01
From a risk management perspective, it can be of high interest to identify the critical set of offshore conditions that lead to inundation on key assets for the studied territory (e.g., assembly points, evacuation routes, hospitals, etc.). This inverse approach of risk assessment (Idier et al., NHESS, 2013) can be of primary importance either for the estimation of the coastal flood hazard return period or for constraining the early warning networks based on hydro-meteorological forecast or observations. However, full-process based models for coastal flooding simulation have very large computational time cost (typically of several hours), which often limits the analysis to a few scenarios. Recently, it has been shown that meta-modelling approaches can efficiently handle this difficulty (e.g., Rohmer & Idier, NHESS, 2012). Yet, the full-process based models are expected to present strong non-linearities (non-regularities) or shocks (discontinuities), i.e. dynamics controlled by thresholds. For instance, in case of coastal defense, the dynamics is characterized first by a linear behavior of the waterline position (increase with increasing offshore conditions), as long as there is no overtopping, and then by a very strong increase (as soon as the offshore conditions are energetic enough to lead to wave overtopping, and then overflow). Such behavior might make the training phase of the meta-model very tedious. In the present study, we propose to explore the feasibility of active learning techniques, aka semi-supervised machine learning, to track the set of critical conditions with a reduced number of long-running simulations. The basic idea relies on identifying the simulation scenarios which should both reduce the meta-model error and improve the prediction of the critical contour of interest. To overcome the afore-described difficulty related to non-regularity, we rely on Support Vector Machines, which have shown very high performance for structural reliability assessment. The developments are done on a cross-shore case, using the process-based SWASH model. The related computational time is 10 hours for a single run. The dynamic forcing conditions are parametrized by several factors (storm surge S, significant wave height Hs, dephasing between tide and surge, etc.). In particular, we validated the approach with respect to a reference set of 400 long-running simulations in the domain of (S ; Hs). Our tests showed that the tracking of the critical contour can be achieved with a reasonable number of long-running simulations of a few tens.
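A conceptual sketch of the active-learning loop, with a cheap analytic function standing in for the 10-hour SWASH run and scikit-learn's SVC as the surrogate classifier; the offshore forcing is reduced to (S, Hs) and all numbers are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def flooded(S, Hs):
    # placeholder "simulator": inundation when a combined load exceeds a threshold
    return (1.5 * S + 0.4 * Hs ** 1.5) > 4.0

rng = np.random.default_rng(0)
pool = np.column_stack([rng.uniform(0, 3, 2000),    # storm surge S (m), assumed range
                        rng.uniform(0, 8, 2000)])   # wave height Hs (m), assumed range

idx = list(rng.choice(len(pool), 10, replace=False))   # initial "long runs"
y = [flooded(*pool[i]) for i in idx]
for _ in range(30):                                     # budget of 30 extra expensive runs
    clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(pool[idx], y)
    margin = np.abs(clf.decision_function(pool))
    margin[idx] = np.inf                                # do not re-run known points
    nxt = int(np.argmin(margin))                        # most ambiguous candidate
    idx.append(nxt)
    y.append(flooded(*pool[nxt]))

clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(pool[idx], y)
accuracy = float(np.mean(clf.predict(pool) == [flooded(*p) for p in pool]))
print(f"critical contour learned from {len(idx)} runs, agreement on pool: {accuracy:.3f}")
```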
Investigation of Current Methods to Identify Helicopter Gear Health
NASA Technical Reports Server (NTRS)
Dempsey, Paula J.; Lewicki, David G.; Le, Dy D.
2007-01-01
This paper provides an overview of current vibration methods used to identify the health of helicopter transmission gears. The gears are critical to the transmission system that provides propulsion, lift and maneuvering of the helicopter. This paper reviews techniques used to process vibration data to calculate condition indicators (CIs), guidelines used by the government aviation authorities in developing and certifying the Health and Usage Monitoring System (HUMS), condition and health indicators used in commercial HUMS, and different methods used to set thresholds to detect damage. An initial assessment of a method to set thresholds for vibration-based condition indicators, applied to flight and test rig data by evaluating differences in distributions between comparable transmissions, is also discussed. Gear condition indicator FM4 values are compared on an OH58 helicopter during 14 maneuvers and an OH58 transmission test stand during crack propagation tests. Preliminary results show the distributions between healthy helicopter and rig data are comparable and distributions between healthy and damaged gears show significant differences.
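A compact sketch of how an FM4-type condition indicator can be computed on a synthetic signal: FM4 is the normalized fourth moment (kurtosis) of the "difference signal"; here the regular gear-mesh components are crudely removed by zeroing the strongest spectral lines, which only approximates the usual construction.

```python
import numpy as np

def fm4(difference_signal):
    # normalized fourth statistical moment (kurtosis) of the difference signal
    d = difference_signal - np.mean(difference_signal)
    return len(d) * np.sum(d ** 4) / np.sum(d ** 2) ** 2

def difference_signal(x, n_lines=5):
    # crude stand-in for removing the regular gear-mesh components
    X = np.fft.rfft(x)
    X[np.argsort(np.abs(X))[::-1][:n_lines]] = 0.0
    return np.fft.irfft(X, n=len(x))

t = np.arange(0.0, 1.0, 1.0 / 10_000)
rng = np.random.default_rng(0)
healthy = (np.sin(2 * np.pi * 300 * t) + 0.3 * np.sin(2 * np.pi * 600 * t)
           + 0.05 * rng.normal(size=t.size))
damaged = healthy.copy()
damaged[4000:4010] += 1.5                  # localized fault impulse (synthetic)

print("FM4 healthy:", round(fm4(difference_signal(healthy)), 2))   # approx. 3 for noise
print("FM4 damaged:", round(fm4(difference_signal(damaged)), 2))   # noticeably larger
```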
Preliminary analysis of cold stress responsive proteins in Mesocestoides corti larvae.
Canclini, Lucía; Esteves, Adriana
2007-07-01
Many parasites undergo sudden changes in environmental conditions at some stage during their life cycle. The molecular response to this variation is characterised by a rapid transcriptional activation of a specific set of genes coding for proteins generically known as stress proteins. They also appear to be involved in various biological processes including cell proliferation and differentiation. The platyhelminth parasite Mesocestoides corti (Cestoda) presents important properties as a model organism. Under stress conditions, key molecules involved in metabolic pathways as well as in the growth and differentiation of the parasite can be identified. The 2D protein expression profile of tetrathyridia of M. corti submitted to nutritional starvation and cold stress is described, as well as the recovery pattern. A set of specifically expressed proteins was observed in each experimental condition. Quantitative and qualitative differences and the stress recovery pattern are also reported. This work makes evident the high plasticity and resistance to extreme environmental conditions of these parasites at the molecular level.
Detection of global state predicates
NASA Technical Reports Server (NTRS)
Marzullo, Keith; Neiger, Gil
1991-01-01
The problem addressed here arises in the context of Meta: how can a set of processes monitor the state of a distributed application in a consistent manner? For example, consider the simple distributed application as shown here. Each of the three processes in the application has a light, and the control processes would each like to take an action when some specified subset of the lights are on. The application processes are instrumented with stubs that determine when the process turns its lights on or off. This information is disseminated to the control processes, each of which then determines when its condition of interest is met. Meta is built on top of the ISIS toolkit, and so we first built the sensor dissemination mechanism using atomic broadcast. Atomic broadcast guarantees that all recipients receive the messages in the same order and that this order is consistent with causality. Unfortunately, the control processes are somewhat limited in what they can deduce when they find that their condition of interest holds.
Yang, Chih-Cheng; Liu, Chang-Lun
2016-08-12
Cold forging is often applied in the fastener industry. Wires in coil form are used as semi-finished products for the production of billets. This process usually requires preliminary drawing of the wire coil in order to reduce the diameter of the products. The wire usually has to be annealed to improve its cold formability. The quality of spheroidizing-annealed wire affects the forming quality of screws. In the fastener industry, most companies use a subcritical process for spheroidized annealing. Various parameters affect the spheroidized annealing quality of steel wire, such as the spheroidized annealing temperature, prolonged heating time, furnace cooling time and flow rate of nitrogen (protective atmosphere). These spheroidized annealing parameters affect the quality characteristics of the steel wire, such as tensile strength and hardness. A series of experimental tests on AISI 1022 low carbon steel wire are carried out and the Taguchi method is used to obtain optimum spheroidized annealing conditions to improve the mechanical properties of steel wires for cold forming. The results show that the spheroidized annealing temperature and prolonged heating time have the greatest effect on the mechanical properties of steel wires. A comparison between the results obtained using the optimum spheroidizing conditions and those measured at the original settings shows that the new spheroidizing parameter settings effectively improve the performance measures over their values at the original settings. The results presented in this paper could be used as a reference for wire manufacturers.
Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo
2012-01-01
Design spaces for multiple dose strengths of tablets were constructed using a Bayesian estimation method with one set of design of experiments (DoE) of only the highest dose-strength tablet. The lubricant blending process for theophylline tablets with dose strengths of 100, 50, and 25 mg is used as a model manufacturing process in order to construct design spaces. The DoE was conducted using various Froude numbers (X(1)) and blending times (X(2)) for the theophylline 100-mg tablet. The response surfaces, design space, and their reliability were calculated for the compression rate of the powder mixture (Y(1)), tablet hardness (Y(2)), and dissolution rate (Y(3)) of the 100-mg tablet using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. Three experiments under an optimal condition and two experiments under other conditions were performed using 50- and 25-mg tablets, respectively. The response surfaces of the highest-strength tablet were corrected to those of the lower-strength tablets by Bayesian estimation using the manufacturing data of the lower-strength tablets. Experiments under three additional sets of conditions for the lower-strength tablets showed that the corrected design space made it possible to predict the quality of lower-strength tablets more precisely than the design space of the highest-strength tablet. This approach is useful for constructing design spaces of tablets with multiple strengths.
Phillips, R; Bartholomew, L; Dovey, S; Fryer, G; Miyoshi, T; Green, L
2004-01-01
Background: The epidemiology, risks, and outcomes of errors in primary care are poorly understood. Malpractice claims brought for negligent adverse events offer a useful insight into errors in primary care. Methods: Physician Insurers Association of America malpractice claims data (1985–2000) were analyzed for proportions of negligent claims by primary care specialty, setting, severity, health condition, and attributed cause. We also calculated risks of a claim for condition-specific negligent events relative to the prevalence of those conditions in primary care. Results: Of 49 345 primary care claims, 26 126 (53%) were peer reviewed and 5921 (23%) were assessed as negligent; 68% of claims were for negligent events in outpatient settings. No single condition accounted for more than 5% of all negligent claims, but the underlying causes were more clustered with "diagnosis error" making up one third of claims. The ratios of condition-specific negligent event claims relative to the frequency of those conditions in primary care revealed a significantly disproportionate risk for a number of conditions (for example, appendicitis was 25 times more likely to generate a claim for negligence than breast cancer). Conclusions: Claims data identify conditions and processes where primary health care in the United States is prone to go awry. The burden of severe outcomes and death from malpractice claims made against primary care physicians was greater in primary care outpatient settings than in hospitals. Although these data enhance information about error related negligent events in primary care, particularly when combined with other primary care data, there are many operating limitations. PMID:15069219
Investigation of aerosol indirect effects on simulated flash-flood heavy rainfall over Korea
NASA Astrophysics Data System (ADS)
Lim, Kyo-Sun Sunny; Hong, Song-You
2012-11-01
This study investigates aerosol indirect effects on the development of heavy rainfall near Seoul, South Korea, on 12 July 2006, focusing on precipitation amount. The impact of the aerosol concentration on simulated precipitation is evaluated by varying the initial cloud condensation nuclei (CCN) number concentration in the Weather Research and Forecasting (WRF) Double-Moment 6-class (WDM6) microphysics scheme. The simulations are performed under clean, semi-polluted, and polluted conditions. Detailed analysis of the physical processes that are responsible for surface precipitation, including moisture and cloud microphysical budgets, shows enhanced ice-phase processes to be the primary driver of increased surface precipitation under the semi-polluted condition. Under the polluted condition, suppressed auto-conversion and the enhanced evaporation of rain cause surface precipitation to decrease. To investigate the role of environmental conditions in the precipitation response under different aerosol number concentrations, a set of sensitivity experiments is conducted with a 5 % decrease in relative humidity at the initial time, relative to the base simulations. Results show that the ice-phase processes have little sensitivity to CCN number concentration, compared with the base simulations. Surface precipitation responds differently to CCN number concentration under the lower-humidity initial condition, being greatest under the clean condition, followed by the semi-polluted and polluted conditions.
Does parental anxiety cause biases in the processing of child-relevant threat material?
Cartwright-Hatton, Sam; Abeles, Paul; Dixon, Clare; Holliday, Christine; Hills, Becky
2014-06-01
Anxiety leads to biases in processing personally relevant information. This study set out to examine whether anxious parents also experience biases in processing child-relevant material. Ninety parents were assigned to a control condition or received a social anxiety or child-related anxiety induction. They completed a task examining attentional biases in relation to child-threat words and social-threat words, and a task examining ability to categorize emotion in children's faces and voices. There was a trend indicating group differences in attentional bias towards social-threat words, apparently present only in the social anxiety condition and not in the child anxiety or control conditions. For child-threat words, attentional bias was present in the child anxiety condition, but not the social anxiety or control conditions. In the emotion recognition task, there was no difference between the control and child anxiety conditions, but the social anxiety group was more likely to erroneously label children's faces and voices as sad. Parents' anxious biases may spill over into their child's world. Anxious parents may have attentional biases towards threats in their children's environment. Anxious parents may over-attribute negative emotion to children. © 2013 The British Psychological Society.
Lu, Haifeng; Han, Ting; Zhang, Guangming; Ma, Shanshan; Zhang, Yuanhui; Li, Baoming; Cao, Wei
2018-01-01
Photosynthetic bacteria (PSB) have two sets of metabolic pathways. They can degrade pollutants through the light metabolic pathway under light-anaerobic conditions or the oxygen metabolic pathway under dark-aerobic conditions. Both metabolisms function under natural light-microaerobic conditions, which demand less energy input. This work investigated the characteristics of the PSB wastewater treatment process under that condition. Results showed that PSB had very strong adaptability to chemical oxygen demand (COD) concentration; with an F/M of 5.2-248.5 mg-COD/mg-biomass, the biomass increased three times and COD removal reached above 91.5%. PSB had the advantages of both oxygen metabolism in COD removal and light metabolism in resource recovery under natural light-microaerobic conditions. For pollutant degradation, COD, total organic carbon, nitrogen, and phosphorus removal reached 96.2%, 91.0%, 70.5%, and 92.7%, respectively. For resource recovery, 74.2% of the C in the wastewater was transformed into biomass. In particular, the coexistence of light and oxygen promoted the N recovery ratio to 70.9%, higher than under the other two conditions. Further, 93.7% of the removed N was synthesized into biomass. Finally, CO2 emission was reduced by 62.6% compared with the traditional process. PSB wastewater treatment under this condition is energy-saving, highly effective, and environmentally friendly, and can achieve both pollution control and resource recovery.
Wang, Xun; Sun, Beibei; Liu, Boyang; Fu, Yaping; Zheng, Pan
2017-01-01
Experimental design focuses on describing or explaining the multifactorial interactions that are hypothesized to reflect the variation. The design introduces conditions that may directly affect the variation, where particular conditions are purposely selected for observation. Combinatorial design theory deals with the existence, construction and properties of systems of finite sets whose arrangements satisfy generalized concepts of balance and/or symmetry. In this work, borrowing the concept of "balance" in combinatorial design theory, a novel method for multifactorial bio-chemical experiment design is proposed, where balanced templates from combinatorial design are used to select the conditions for observation. Balanced experimental data that cover all the influencing factors of the experiments can be obtained for further processing, for example as a training set for machine learning models. Finally, software based on the proposed method is developed for designing experiments that cover the influencing factors a specified number of times.
Liu, Chunbo; Pan, Feng; Li, Yun
2016-07-29
Glutamate is of great importance in the food and pharmaceutical industries. There is still a lack of effective statistical approaches for fault diagnosis in the fermentation process of glutamate. To date, the statistical approach based on the generalized additive model (GAM) and bootstrap has not been used for fault diagnosis in fermentation processes, much less the fermentation process of glutamate with small sample sets. A combined approach of GAM and bootstrap was developed for online fault diagnosis in the fermentation process of glutamate with small sample sets. GAM was first used to model the relationship between glutamate production and different fermentation parameters using online data from four normal fermentation experiments of glutamate. The fitted GAM with fermentation time, dissolved oxygen, oxygen uptake rate and carbon dioxide evolution rate captured 99.6% of the variance in glutamate production during the fermentation process. Bootstrap was then used to quantify the uncertainty of the estimated production of glutamate from the fitted GAM using a 95% confidence interval. The proposed approach was then used for online fault diagnosis in abnormal fermentation processes of glutamate, with a fault defined as the estimated production of glutamate falling outside the 95% confidence interval. The online fault diagnosis based on the proposed approach identified not only the start of a fault in the fermentation process, but also its end, when the fermentation conditions returned to normal. The proposed approach required only a small sample set from normal fermentation experiments to establish the model, and then only online recorded data on fermentation parameters for fault diagnosis in the fermentation process of glutamate. The proposed approach based on GAM and bootstrap provides a new and effective way for fault diagnosis in the fermentation process of glutamate with small sample sets.
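To make the approach concrete, the following Python sketch illustrates the fit-then-bootstrap logic summarized above. It is not the authors' implementation: it assumes the pyGAM package for the additive model, a hypothetical ordering of the four fermentation parameters in the data matrix, and a simple case-resampling bootstrap for the 95% interval.

```python
import numpy as np
from pygam import LinearGAM, s  # assumption: pyGAM stands in for the authors' GAM tooling

def fit_bootstrap_gams(X, y, n_boot=200, seed=0):
    """Fit a GAM of glutamate production on fermentation parameters (numpy arrays),
    plus bootstrap refits for uncertainty quantification."""
    rng = np.random.default_rng(seed)
    # columns assumed: time, dissolved O2, O2 uptake rate, CO2 evolution rate
    base = LinearGAM(s(0) + s(1) + s(2) + s(3)).fit(X, y)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))              # case resampling
        boots.append(LinearGAM(s(0) + s(1) + s(2) + s(3)).fit(X[idx], y[idx]))
    return base, boots

def is_fault(x_new, estimated_production, boots, alpha=0.05):
    """Flag a fault when the estimated production falls outside the
    bootstrap 95% interval of the normal-process prediction."""
    preds = np.array([g.predict(np.atleast_2d(x_new))[0] for g in boots])
    lo, hi = np.percentile(preds, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return not (lo <= estimated_production <= hi)
```

In use, `is_fault` would be evaluated on each new online sample of an abnormal batch; contiguous runs of flagged samples then mark the start and end of the fault.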
Levels-of-processing effects on a task of olfactory naming.
Royet, Jean-Pierre; Koenig, Olivier; Paugam-Moisy, Helene; Puzenat, Didier; Chasse, Jean-Luc
2004-02-01
The effects of odor processing were investigated at various analytical levels, from simple sensory analysis to deep or semantic analysis, on a subsequent task of odor naming. Students (106 women, 23.6 +/- 5.5 yr. old; 65 men, 25.1 +/- 7.1 yr. old) were tested. The experimental procedure included two successive sessions, a first session to characterize a set of 30 odors with criteria that used various depths of processing and a second session to name the odors as quickly as possible. Four processing conditions rated the odors using descriptors before naming the odor; the control condition did not rate the odors before naming. The processing conditions were based on lower-level olfactory judgments (superficial processing), higher-level olfactory-gustatory-somesthetic judgments (deep processing), and higher-level nonolfactory judgments (Deep-Control processing, with subjects rating odors with auditory and visual descriptors). One experimental condition successively grouped lower- and higher-level olfactory judgments (Superficial-Deep processing). A naming index, which depended on response accuracy, and the subjects' response times were calculated. Odor naming was modified for 18 out of 30 odorants as a function of the level of processing required. For 94.5% of significant variations, the scores for odor naming were higher following those tasks for which it was hypothesized that the necessary olfactory processing was carried out at a deeper level. Performance in the naming task improved progressively as follows: no rating of odors, then superficial, deep-control, deep, and superficial-deep processing. These data show that the deepest olfactory encoding was later associated with progressively higher performance in naming.
Experimental Studies on Hypersonic Stagnation Point Chemical Environment
2006-02-01
conditions [60]. Having this complete definition, we will focus on the chemical environment produced in the SPR. 3.2 Chemical environment evaluation Flow ... chemistry involves a very large number of processes and microscopic phenomena; they are usually summarized in a set of chemical reactions, with their own
Vasilopoulos, Terrie; Franz, Carol E.; Panizzon, Matthew S.; Xian, Hong; Grant, Michael D.; Lyons, Michael J; Toomey, Rosemary; Jacobson, Kristen C.; Kremen, William S.
2012-01-01
Objective To examine how genes and environments contribute to relationships among Trail Making test conditions and the extent to which these conditions have unique genetic and environmental influences. Method Participants included 1237 middle-aged male twins from the Vietnam Era Twin Study of Aging (VETSA). The Delis-Kaplan Executive Function System Trail Making test included visual searching, number and letter sequencing, and set-shifting components. Results Phenotypic correlations among Trails conditions ranged from 0.29 to 0.60, and genes accounted for the majority (58–84%) of each correlation. Overall heritability ranged from 0.34 to 0.62 across conditions. Phenotypic factor analysis suggested a single factor. In contrast, genetic models revealed a single common genetic factor but also unique genetic influences separate from the common factor. Genetic variance (i.e., heritability) of number and letter sequencing was completely explained by the common genetic factor, while unique genetic influences separate from the common factor accounted for 57% and 21% of the heritabilities of visual search and set-shifting, respectively. After accounting for general cognitive ability, unique genetic influences accounted for 64% and 31% of those heritabilities. Conclusions A common genetic factor, most likely representing a combination of speed and sequencing, accounted for most of the correlation among Trails 1–4. Distinct genetic factors, however, accounted for a portion of the variance in visual scanning and set-shifting. Thus, although traditional phenotypic shared-variance analysis techniques suggest only one general factor underlying different neuropsychological functions in non-patient populations, examining the genetic underpinnings of cognitive processes with twin analysis can uncover more complex etiological processes. PMID:22201299
Featherstone, Cara R; Waterman, Mitch G; Morrison, Catriona M
2012-03-01
Research into similarities between music and language processing is currently experiencing a strong renewed interest. Recent methodological advances have led to neuroimaging studies presenting striking similarities between neural patterns associated with the processing of music and language--notably, in the study of participants' responses to elements that are incongruous with their musical or linguistic context. Responding to a call for greater systematicity by leading researchers in the field of music and language psychology, this article describes the creation, selection, and validation of a set of auditory stimuli in which both congruence and resolution were manipulated in equivalent ways across harmony, rhythm, semantics, and syntax. Three conditions were created by changing the contexts preceding and following musical and linguistic incongruities originally used for effect by authors and composers: Stimuli in the incongruous-resolved condition reproduced the original incongruity and resolution into the same context; stimuli in the incongruous-unresolved condition reproduced the incongruity but continued postincongruity with a new context dictated by the incongruity; and stimuli in the congruous condition presented the same element of interest, but the entire context was adapted to match it so that it was no longer incongruous. The manipulations described in this article rendered unrecognizable the original incongruities from which the stimuli were adapted, while maintaining ecological validity. The norming procedure and validation study resulted in a significant increase in perceived oddity from congruous to incongruous-resolved and from incongruous-resolved to incongruous-unresolved in all four components of music and language, making this set of stimuli a theoretically grounded and empirically validated resource for this growing area of research.
Donohue, Sarah E.; Appelbaum, Lawrence G.; McKay, Cameron C.; Woldorff, Marty G.
2016-01-01
Both stimulus and response conflict can disrupt behavior by slowing response times and decreasing accuracy. Although several neural activations have been associated with conflict processing, it is unclear how specific any of these are to the type of stimulus conflict or the amount of response conflict. Here, we recorded electrical brain activity, while manipulating the type of stimulus conflict in the task (spatial [Flanker] versus semantic [Stroop]) and the amount of response conflict (two versus four response choices). Behaviorally, responses were slower to incongruent versus congruent stimuli across all task and response types, along with overall slowing for higher response-mapping complexity. The earliest incongruency-related neural effect was a short-duration frontally-distributed negativity at ~200 ms that was only present in the Flanker spatial-conflict task. At longer latencies, the classic fronto-central incongruency-related negativity ‘Ninc’ was observed for all conditions, which was larger and ~100 ms longer in duration with more response options. Further, the onset of the motor-related lateralized readiness potential (LRP) was earlier for the two vs. four response sets, indicating that smaller response sets enabled faster motor-response preparation. The late positive complex (LPC) was present in all conditions except the two-response Stroop task, suggesting this late conflict-related activity is not specifically related to task type or response-mapping complexity. Importantly, across tasks and conditions, the LRP onset at or before the conflict-related Ninc, indicating that motor preparation is a rapid, automatic process that interacts with the conflict-detection processes after it has begun. Together, these data highlight how different conflict-related processes operate in parallel and depend on both the cognitive demands of the task and the number of response options. PMID:26827917
Donohue, Sarah E; Appelbaum, Lawrence G; McKay, Cameron C; Woldorff, Marty G
2016-04-01
Both stimulus and response conflict can disrupt behavior by slowing response times and decreasing accuracy. Although several neural activations have been associated with conflict processing, it is unclear how specific any of these are to the type of stimulus conflict or the amount of response conflict. Here, we recorded electrical brain activity, while manipulating the type of stimulus conflict in the task (spatial [Flanker] versus semantic [Stroop]) and the amount of response conflict (two versus four response choices). Behaviorally, responses were slower to incongruent versus congruent stimuli across all task and response types, along with overall slowing for higher response-mapping complexity. The earliest incongruency-related neural effect was a short-duration frontally-distributed negativity at ~200 ms that was only present in the Flanker spatial-conflict task. At longer latencies, the classic fronto-central incongruency-related negativity 'N(inc)' was observed for all conditions, but was larger and ~100 ms longer in duration with more response options. Further, the onset of the motor-related lateralized readiness potential (LRP) was earlier for the two vs. four response sets, indicating that smaller response sets enabled faster motor-response preparation. The late positive complex (LPC) was present in all conditions except the two-response Stroop task, suggesting this late conflict-related activity is not specifically related to task type or response-mapping complexity. Importantly, across tasks and conditions, the LRP onset at or before the conflict-related N(inc), indicating that motor preparation is a rapid, automatic process that interacts with the conflict-detection processes after it has begun. Together, these data highlight how different conflict-related processes operate in parallel and depend on both the cognitive demands of the task and the number of response options. Copyright © 2016 Elsevier Ltd. All rights reserved.
Yotova, Galina I; Tsitouridou, Roxani; Tsakovski, Stefan L; Simeonov, Vasil D
2016-01-01
The present article deals with the assessment of urban air by using monitoring data for 10 different aerosol fractions (0.015-16 μm) collected at a typical urban site in the city of Thessaloniki, Greece. The data set was subjected to multivariate statistical analysis (cluster analysis and principal components analysis) and, additionally, to HYSPLIT back-trajectory modeling in order to better assess the impact of the weather conditions on the pollution sources identified. A specific element of the study is the effort to clarify the role of outliers in the data set. The reason for the appearance of outliers is strongly related to the atmospheric conditions on the particular sampling days, leading to enhanced concentrations of pollutants (secondary emissions, sea sprays, road and soil dust, combustion processes), especially for ultrafine and coarse particles. It is also shown that three major sources affect the urban air quality of the location studied: sea sprays, mineral dust and anthropogenic influences (agricultural activity, combustion processes, and industrial sources). The level of impact is related to a certain extent to the aerosol fraction size. The assessment of the meteorological conditions leads to the definition of four downwind patterns affecting the air quality (Pelagic, Western and Central Europe, Eastern and Northeastern Europe, and Africa and Southern Europe). Thus, the present study offers a complete urban air assessment taking into account the weather conditions, pollution sources and aerosol fractioning.
An ERP investigation of conditional reasoning with emotional and neutral contents.
Blanchette, Isabelle; El-Deredy, Wael
2014-11-01
In two experiments we investigate conditional reasoning using event-related potentials (ERPs). Our goal was to examine the time course of inference making in two conditional forms, one logically valid (Modus Ponens, MP) and one logically invalid (Affirming the Consequent, AC). We focus particularly on the involvement of semantically-based inferential processes potentially marked by modulations of the N400. We also compared reasoning about emotional and neutral contents with separate sets of stimuli of differing linguistic complexity across the two experiments. Both MP and AC modulated the N400 component, suggesting the involvement of a semantically-based inferential mechanism common across different logical forms, content types, and linguistic features of the problems. Emotion did not have an effect on early components, and did not interact with components related to inference making. There was a main effect of emotion in the 800-1050 ms time window, consistent with an effect on sustained attention. The results suggest that conditional reasoning is not a purely formal process but that it importantly implicates semantic processing, and that the effect of emotion on reasoning does not primarily operate through a modulation of early automatic stages of information processing. Copyright © 2014 Elsevier Inc. All rights reserved.
Conditioning from an information processing perspective.
Gallistel, C R.
2003-04-28
The framework provided by Claude Shannon's [Bell Syst. Technol. J. 27 (1948) 623] theory of information leads to a quantitatively oriented reconceptualization of the processes that mediate conditioning. The focus shifts from processes set in motion by individual events to processes sensitive to the information carried by the flow of events. The conception of what properties of the conditioned and unconditioned stimuli are important shifts from the tangible properties to the intangible properties of number, duration, frequency and contingency. In this view, a stimulus becomes a CS if its onset substantially reduces the subject's uncertainty about the time of occurrence of the next US. One way to represent the subject's knowledge of that time of occurrence is by the cumulative probability function, which has two limiting forms: (1) The state of maximal uncertainty (minimal knowledge) is represented by the inverse exponential function for the random rate condition, in which the US is equally likely at any moment. (2) The limit to the subject's attainable certainty is represented by the cumulative normal function, whose momentary expectation is the CS-US latency minus the time elapsed since CS onset. Its standard deviation is the Weber fraction times the CS-US latency.
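The two limiting forms described here can be written compactly. The LaTeX below is a sketch of that formalization using our own symbols (λ for the random US rate, L for the CS-US latency, t for the time elapsed since CS onset, and w for the Weber fraction), not the author's notation.

```latex
% Maximal uncertainty: the US is equally likely at any moment (random rate \lambda)
F(t) \;=\; 1 - e^{-\lambda t}

% Limit of attainable certainty: cumulative normal centred on the CS-US latency L,
% with momentary expectation L - t and scalar (Weber-fraction) timing noise wL
F(t) \;=\; \Phi\!\left(\frac{t - L}{w\,L}\right)
```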
Schlessinger, Daniel I; Iyengar, Sanjana; Yanes, Arianna F; Chiren, Sarah G; Godinez-Puig, Victoria; Chen, Brian R; Kurta, Anastasia O; Schmitt, Jochen; Deckert, Stefanie; Furlan, Karina C; Poon, Emily; Cartee, Todd V; Maher, Ian A; Alam, Murad; Sobanko, Joseph F
2017-07-12
Squamous cell carcinoma (SCC) is a common skin cancer that poses a risk of metastasis. Clinical investigations into SCC treatment are common, but the outcomes reported are highly variable, omitted, or clinically irrelevant. The outcome heterogeneity and reporting bias of these studies leave clinicians unable to accurately compare studies. Core outcome sets (COSs) are an agreed minimum set of outcomes recommended to be measured and reported in all clinical trials of a given condition or disease. Although COSs are under development for several dermatologic conditions, work has yet to be done to identify core outcomes specific to SCC. Outcome extraction for COS generation will occur via four methods: (1) systematic literature review; (2) patient interviews; (3) other published sources; and (4) input from stakeholders in medicine, pharmacy, and other relevant industries. The list of outcomes will be re-evaluated by the Measuring PRiority Outcome Variables via Excellence in Dermatologic surgery (IMPROVED) Steering Committee. Delphi processes will be performed separately by expert clinicians and patients to condense the list of outcomes generated. A consensus meeting with relevant stakeholders will be conducted after the Delphi exercise to further select outcomes, taking into account participant scores. At the end of the meeting, members will vote and decide on a final recommended set of core outcomes. The Core Outcome Measures in Effectiveness Trials (COMET) organization and the Cochrane Skin Group - Core Outcome Set Initiative (CSG-COUSIN) will serve as advisers throughout the COS generation process. Comparison of clinical trials via systematic reviews and meta-analyses is facilitated when investigators study outcomes that are relevant and similar. The aim of this project is to develop a COS for use in future clinical trials.
NASA Astrophysics Data System (ADS)
Štaffenová, Daniela; Rybárik, Ján; Jakubčík, Miroslav
2017-06-01
The aim of experimental research in the area of exterior walls and windows suitable for wooden buildings was to build special pavilion laboratories. These laboratories are ideally isolated from the surrounding environment, airtight and controlled to maintain a constant internal climate. The principle of the experimental research is the measurement and recording of the required physical parameters (e.g. temperature or relative humidity). This is done in the layers of experimental fragment sections in the direction from exterior to interior, as well as in critical places, under stable interior and real exterior climatic conditions. The outputs are evaluations of the experimental structures' behaviour during a specified time period, possibly during the whole year, under stable interior and real exterior boundary conditions. The main aim of this experimental research is the processing of long-term measurements of the experimental structures and their subsequent analysis. The next part of the research consists of collecting measurements obtained with the assistance of the experimental detached weather station, together with their analysis and evaluation, in order to later set up a reference data set for the research locality and compare it with the data sets from the Slovak Hydrometeorological Institute (SHMU) and with localities having similar climatic conditions. Later on, these data sets could lead to recommendations for the design of wooden buildings.
NASA Astrophysics Data System (ADS)
Ujianto, O.; Jollands, M.; Kao, N.
2018-03-01
A comparative study of the effect of an internal mixer on the preparation of high-density polyethylene (HDPE)/clay nanocomposites was carried out. The effects of temperature, rotor speed (rpm), and mixing time, as well as rotor type (Roller and Banbury), on the mechanical properties and morphology of HDPE/clay nanocomposites were studied using a Box-Behnken experimental design. The model was developed for secant modulus and confirmed by morphology analysis using Transmission Electron Microscopy (TEM). The findings suggest that a different mechanism operates in each rotor to improve the mechanical properties: the mechanism in the Roller rotor is medium shear and medium diffusion, while that in the Banbury rotor is high shear and low diffusion. This difference in the mechanism of clay particle dispersion accounts for the different optimum processing conditions for each rotor. The optimum settings for Roller samples are predicted to be around mid temperature, mid speed, and mid mixing time. There is no optimum setting for the Banbury rotor within the processing boundaries; the best settings for the Banbury rotor are low, high, and low (temperature, speed, and mixing time, respectively). The morphology results showed a hybrid composite structure, with some exfoliation and some intercalation. Better mechanical properties correlated with morphologies showing more exfoliation and thinner intercalated particles.
Koshino, Hideya; Minamoto, Takehiro; Ikeda, Takashi; Osaka, Mariko; Otsuka, Yuki; Osaka, Naoyuki
2011-01-01
The anterior prefrontal cortex (PFC) exhibits activation during some cognitive tasks, including episodic memory, reasoning, attention, multitasking, task sets, decision making, mentalizing, and processing of self-referenced information. However, the medial part of the anterior PFC is part of the default mode network (DMN), which shows deactivation during various goal-directed cognitive tasks compared to a resting baseline. One possible factor for this pattern is that activity in the anterior medial PFC (MPFC) is affected by dynamic allocation of attentional resources depending on task demands. We investigated this possibility using an event-related fMRI paradigm with a face working memory task. Sixteen students participated in a single fMRI session. They were asked to form a task set to remember the faces (Face memory condition) or to ignore them (No face memory condition); they were then given a 6-second preparation period before the onset of the face stimuli. During this 6-second period, four single digits were presented one at a time at the center of the display, and participants were asked to add them and to remember the final answer. When participants formed a task set to remember faces, the anterior MPFC exhibited activation during the task preparation period but deactivation during the task execution period within a single trial. The results suggest that the anterior MPFC plays a role in task set formation but is not involved in execution of the face working memory task. Therefore, when attentional resources are allocated to other brain regions during task execution, the anterior MPFC shows deactivation. The results suggest that activation and deactivation in the anterior MPFC are affected by dynamic allocation of processing resources across different phases of processing.
Koshino, Hideya; Minamoto, Takehiro; Ikeda, Takashi; Osaka, Mariko; Otsuka, Yuki; Osaka, Naoyuki
2011-01-01
Background The anterior prefrontal cortex (PFC) exhibits activation during some cognitive tasks, including episodic memory, reasoning, attention, multitasking, task sets, decision making, mentalizing, and processing of self-referenced information. However, the medial part of the anterior PFC is part of the default mode network (DMN), which shows deactivation during various goal-directed cognitive tasks compared to a resting baseline. One possible factor for this pattern is that activity in the anterior medial PFC (MPFC) is affected by dynamic allocation of attentional resources depending on task demands. We investigated this possibility using an event-related fMRI paradigm with a face working memory task. Methodology/Principal Findings Sixteen students participated in a single fMRI session. They were asked to form a task set to remember the faces (Face memory condition) or to ignore them (No face memory condition); they were then given a 6-second preparation period before the onset of the face stimuli. During this 6-second period, four single digits were presented one at a time at the center of the display, and participants were asked to add them and to remember the final answer. When participants formed a task set to remember faces, the anterior MPFC exhibited activation during the task preparation period but deactivation during the task execution period within a single trial. Conclusions/Significance The results suggest that the anterior MPFC plays a role in task set formation but is not involved in execution of the face working memory task. Therefore, when attentional resources are allocated to other brain regions during task execution, the anterior MPFC shows deactivation. The results suggest that activation and deactivation in the anterior MPFC are affected by dynamic allocation of processing resources across different phases of processing. PMID:21829668
A twin study of the genetics of fear conditioning.
Hettema, John M; Annas, Peter; Neale, Michael C; Kendler, Kenneth S; Fredrikson, Mats
2003-07-01
Fear conditioning is a traditional model for the acquisition of fears and phobias. Studies of the genetic architecture of fear conditioning may inform gene-finding strategies for anxiety disorders. The objective of this study was to determine the genetic and environmental sources of individual differences in fear conditioning by means of a twin sample. Classic fear conditioning data were experimentally obtained from 173 same-sex twin pairs (90 monozygotic and 83 dizygotic). Sequences of evolutionary fear-relevant (snakes and spiders) and fear-irrelevant (circles and triangles) pictorial stimuli served as conditioned stimuli paired with a mild electric shock serving as the unconditioned stimulus. The outcome measure was the electrodermal skin conductance response. We applied structural equation modeling methods to the 3 conditioning phases of habituation, acquisition, and extinction to determine the extent to which genetic and environmental factors underlie individual variation in associative and nonassociative learning. All components of the fear conditioning process in humans demonstrated moderate heritability, in the range of 35% to 45%. Best-fitting multivariate models suggest that 2 sets of genes may underlie the trait of fear conditioning: one that most strongly affects nonassociative processes of habituation that also is shared with acquisition and extinction, and a second that appears related to associative fear conditioning processes. In addition, these data provide tentative evidence of differences in heritability based on the fear relevance of the stimuli. Genes represent a significant source of individual variation in the habituation, acquisition, and extinction of fears, and genetic effects specific to fear conditioning are involved.
Conditional Random Fields for Activity Recognition
2008-04-01
final match. The final is never used as a training or hold out set. Table 4.1 lists the roles of the CMDragons’07 robot soccer team. The role of Goalie ...is not included because the goalie never changes roles. The classification task, which we formalize below, is to recognize robot roles from the avail...process and pull out the key information from the sensor data. Furthermore, as conditional models, CRFs do not waste modeling effort on the observations
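As a concrete illustration of the conditional-model point in this excerpt, the sketch below labels a sequence of robot roles with a linear-chain CRF. It assumes the sklearn-crfsuite package and hypothetical per-frame features; it is not the document's own implementation.

```python
import sklearn_crfsuite  # assumption: off-the-shelf linear-chain CRF

def frame_features(frame):
    """Hypothetical per-frame features derived from sensor data."""
    return {
        "speed": frame["speed"],
        "dist_to_ball": frame["dist_to_ball"],
        "in_own_half": frame["x"] < 0.0,
    }

def train_role_crf(games, role_labels):
    """games: list of frame sequences; role_labels: list of matching role sequences."""
    X = [[frame_features(f) for f in game] for game in games]
    y = role_labels
    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
    crf.fit(X, y)  # models p(roles | observations); the observations themselves are not modeled
    return crf
```

Because the CRF conditions on the observations rather than modeling them, the feature functions can freely mix overlapping, correlated sensor-derived quantities.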
Oceanic Whitecaps and Associated, Bubble-Mediated, Air-Sea Exchange Processes
1992-10-01
experiments performed in laboratory conditions using Air-Sea Exchange Monitoring System (A-SEMS). EXPERIMENTAL SET-UP In a first look, the Air-Sea Exchange...Model 225, equipped with a Model 519 plug-in module. Other complementary information on A-SEMS along with results from first tests and calibration...between 9.5°C and 22.4°C within the first 24 hours after transferring the water sample into laboratory conditions. The results show an enhancement of
Extended observability of linear time-invariant systems under recurrent loss of output data
NASA Technical Reports Server (NTRS)
Luck, Rogelio; Ray, Asok; Halevi, Yoram
1989-01-01
Recurrent loss of sensor data in integrated control systems of an advanced aircraft may occur under different operating conditions that include detected frame errors and queue saturation in computer networks, and bad data suppression in signal processing. This paper presents an extension of the concept of observability based on a set of randomly selected nonconsecutive outputs in finite-dimensional, linear, time-invariant systems. Conditions for testing extended observability have been established.
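A minimal numpy sketch of the rank test this notion suggests, under the assumption that observability from a set of nonconsecutive sample instants reduces to full column rank of the correspondingly stacked observability matrix; the system matrices and the sample set below are illustrative only, not taken from the paper.

```python
import numpy as np

def extended_observability_ok(A, C, sample_instants):
    """Check whether the state of x[k+1] = A x[k], y[k] = C x[k] can be
    reconstructed from outputs at the (possibly nonconsecutive) instants given."""
    n = A.shape[0]
    blocks = [C @ np.linalg.matrix_power(A, k) for k in sorted(sample_instants)]
    O = np.vstack(blocks)               # stacked "extended" observability matrix
    return np.linalg.matrix_rank(O) == n

# Illustrative example: a 2-state system observed only at instants 0, 3, and 7
A = np.array([[1.0, 0.1], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
print(extended_observability_ok(A, C, [0, 3, 7]))  # True for this example
```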
Float-zone processing in a weightless environment
NASA Technical Reports Server (NTRS)
Fowle, A. A.; Haggerty, J. S.; Perron, R. R.; Strong, P. F.; Swanson, J. L.
1976-01-01
The results were reported of investigations to: (1) test the validity of analyses which set maximum practical diameters for Si crystals that can be processed by the float zone method in a near weightless environment, (2) determine the convective flow patterns induced in a typical float zone, Si melt under conditions perceived to be advantageous to the crystal growth process using flow visualization techniques applied to a dimensionally scaled model of the Si melt, (3) revise the estimates of the economic impact of space produced Si crystal by the float zone method on the U.S. electronics industry, and (4) devise a rational plan for future work related to crystal growth phenomena wherein low gravity conditions available in a space site can be used to maximum benefit to the U.S. electronics industry.
Toward a functional analysis of the basal ganglia.
Hayes, A E; Davidson, M C; Keele, S W; Rafal, R D
1998-03-01
Parkinson patients were tested in two paradigms to examine the hypothesis that the basal ganglia are involved in the shifting of attentional set. Set shifting means a respecification of the conditions that regulate responding, a process sometimes referred to as an executive process. In one paradigm, upon the appearance of each stimulus, subjects were instructed to respond either to its color or to its shape. In a second paradigm, subjects learned to produce short sequences of three keypresses in response to two arbitrary stimuli. Reaction times were compared for the cases where set either remained the same or changed for two successive stimuli. Parkinson patients were slow to change set compared to controls. Parkinson patients were also less able to filter the competing but irrelevant set than were control subjects. The switching deficit appears to be dopamine based; the magnitude of the shifting deficit was related to the degree to which L-dopa-based medication ameliorated patients' motor symptoms. Moreover, temporary withholding of medication, a so-called off manipulation, increased the time to switch. Using the framework of equilibrium point theory of movement, we discuss how a set switching deficit may also underlie clinical motor disturbances seen in Parkinson's disease.
Neuropsychological impairments on the NEPSY-II among children with FASD.
Rasmussen, Carmen; Tamana, Sukhpreet; Baugh, Lauren; Andrew, Gail; Tough, Suzanne; Zwaigenbaum, Lonnie
2013-01-01
We examined the pattern of neuropsychological impairments of children with FASD (compared to controls) on NEPSY-II measures of attention and executive functioning, language, memory, visuospatial processing, and social perception. Participants included 32 children with FASD and 30 typically developing control children, ranging in age from 6 to 16 years. Children were tested on the following subtests of the NEPSY-II: Attention and Executive Functioning (animal sorting, auditory attention/response set, and inhibition), Language (comprehension of instructions and speeded naming), Memory (memory for names/delayed memory for names), Visual-Spatial Processing (arrows), and Social Perception (theory of mind). Groups were compared using MANOVA. Children with FASD were impaired relative to controls on the following subtests: animal sorting, response set, inhibition (naming and switching conditions), comprehension of instructions, speeded naming, and memory for names total and delayed, but group differences were not significant on auditory attention, inhibition (inhibition condition), arrows, and theory of mind. Among the FASD group, IQ scores were not correlated with performance on the NEPSY-II subtests, and there were no significant differences between those with and without comorbid ADHD. The NEPSY-II is an effective and useful tool for measuring a variety of neuropsychological impairments among children with FASD. Children with FASD displayed a pattern of results with impairments (relative to controls) on measures of executive functioning (set shifting, concept formation, and inhibition), language, and memory, and relative strengths on measures of basic attention, visual spatial processing, and social perception.
Fluid/electrolyte and endocrine changes in space flight
NASA Technical Reports Server (NTRS)
Huntoon, Carolyn Leach
1989-01-01
The primary effects of space flight that influence the endocrine system and fluid and electrolyte regulation are the reduction of hydrostatic gradients, reduction in use and gravitational loading of bone and muscle, and stress. Each of these sets into motion a series of responses that culminates in alteration of some homeostatic set points for the environment of space. Set point alterations are believed to include decreases in venous pressure; red blood cell mass; total body water; plasma volume; and serum sodium, chloride, potassium, and osmolality. Serum calcium and phosphate increase. Hormones such as erythropoietin, atrial natriuretic peptide, aldosterone, cortisol, antidiuretic hormone, and growth hormone are involved in the dynamic processes that bring about the new set points. The inappropriateness of microgravity set points for 1-G conditions contributes to astronaut postflight responses.
Quantification of chemical transport processes from the soil to surface runoff.
Tian, Kun; Huang, Chi-Hua; Wang, Guang-Qian; Fu, Xu-Dong; Parker, Gary
2013-01-01
There is a good conceptual understanding of the processes that govern chemical transport from the soil to surface runoff, but few studies have actually quantified these processes separately. Thus, we designed a laboratory flow cell and experimental procedures to quantify the chemical transport from soil to runoff water in the following individual processes: (i) convection with a vertical hydraulic gradient, (ii) convection via surface flow or the Bernoulli effect, (iii) diffusion, and (iv) soil loss. We applied different vertical hydraulic gradients by setting the flow cell to generate different seepage or drainage conditions. Our data confirmed the general form of the convection-diffusion equation. However, we now have additional quantitative data that describe the contribution of each individual chemical loading process in different surface runoff and soil hydrological conditions. The results of this study will be useful for enhancing our understanding of different geochemical processes in the surface soil mixing zone. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
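For reference, the general form of the convection-diffusion equation mentioned above is commonly written as below, with C the chemical concentration, D the diffusion coefficient, v the water velocity, and S a source/sink term; the notation is generic rather than the authors' own.

```latex
\frac{\partial C}{\partial t}
  \;=\; \nabla \cdot \left( D\,\nabla C \right)
  \;-\; \nabla \cdot \left( \mathbf{v}\,C \right)
  \;+\; S
```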
Management by Objectives: When and How Does it Work?
ERIC Educational Resources Information Center
DeFee, Dallas T.
1977-01-01
According to the author, management by objectives (formal goal setting and review) depends a great deal on the kinds of goals the organization has and the commitment of top management to the process. He discusses its potential advantages and disadvantages, conditions for adopting it, and successful implementation. (JT)
USDA-ARS?s Scientific Manuscript database
Agricultural research increasingly seeks to quantify complex interactions of processes for a wide range of environmental conditions and crop management scenarios, leading to investigation where multiple sets of experimental data are examined using tools such as simulation and regression. The use of ...
Rounds Process: Puts Teachers in Charge of Learning
ERIC Educational Resources Information Center
Troen, Vivian; Boles, Katherine C.
2014-01-01
Most people are familiar with the practice of medical rounds, in which interns and mentoring physicians visit patients in an institutional setting, observe their various conditions, discuss what they observed, and analyze possible treatment options and outcomes. In the medical profession, making these rounds is viewed as a significant and highly…
29 CFR 215.3 - Employees represented by a labor organization.
Code of Federal Regulations, 2010 CFR
2010-07-01
... subrecipients, the Department will refer and process each subrecipient's respective portion of the project in... agreement, the referral will be based on those terms and conditions. (4) The referral procedures set forth... area. (1) For applicants with existing protections the Department's referral will be based on those...
USDA-ARS?s Scientific Manuscript database
Circadian clocks synchronize internal processes with environmental cycles to ensure optimal timing of biological events on daily and seasonal timescales. External light and temperature cues set the core molecular oscillator to local conditions. In Arabidopsis, EARLY FLOWERING 3 (ELF3) is thought to ...
NETWORK DESIGN FACTORS FOR ASSESSING TEMPORAL VARIABILITY IN GROUND-WATER QUALITY
A 1.5-year benchmark data set was collected at biweekly frequency from two sites in shallow sand and gravel deposits in West Central Illinois. One site was near a hog-processing facility and the other represented uncontaminated conditions. Consistent sampling and analytical protoco...
ERIC Educational Resources Information Center
Sommer, Robert
This project was conducted to determine the conditions that make a satisfying study environment in colleges and universities and to relay the findings to those who design and manage educational spaces. The investigation focused upon the process of studying and its relation to environmental setting, and data was primarily gathered through site…
USDA-ARS?s Scientific Manuscript database
The objective of this research was to study the greenhouse gas emission and groundwater pollution potentials of the soils amended with various biochars using different biomass feedstocks and thermal processing conditions. Triplicate sets of small pots were designed; control soil consisting of Histi...
Effects of visual familiarity for words on interhemispheric cooperation for lexical processing.
Yoshizaki, K
2001-12-01
The purpose of this study was to examine the effects of the visual familiarity of words on interhemispheric lexical processing. Words and pseudowords were tachistoscopically presented in the left, right, or bilateral visual fields. Two types of words, Katakana-familiar-type and Hiragana-familiar-type, were used as the word stimuli. The former refers to words which are more frequently written in Katakana script, and the latter refers to words which are written predominantly in Hiragana script. Two conditions were set up in terms of the visual familiarity of a word: in the visually familiar condition, words were presented in their familiar script form, and in the visually unfamiliar condition, words were presented in the less familiar script form. Thirty-two right-handed Japanese students were asked to make lexical decisions. Results showed that a bilateral gain, indicating that performance in the bilateral visual fields was superior to that in a unilateral visual field, was obtained only in the visually familiar condition, not in the visually unfamiliar condition. These results suggest that the visual familiarity of a word influences interhemispheric lexical processing.
Fisher's geometric model predicts the effects of random mutations when tested in the wild.
Stearns, Frank W; Fenster, Charles B
2016-02-01
Fisher's geometric model of adaptation (FGM) has been the conceptual foundation for studies investigating the genetic basis of adaptation since the onset of the neo-Darwinian synthesis. FGM describes adaptation as the movement of a genotype toward a fitness optimum due to beneficial mutations. To date, one prediction of FGM, that the probability of improvement is related to the distance from the optimum, has only been tested in microorganisms under laboratory conditions. There is reason to believe that results might differ under natural conditions, where more mutations likely affect fitness and where environmental variance may obscure the expected pattern. We chemically induced mutations into a set of 19 Arabidopsis thaliana accessions from across the native range of A. thaliana and planted them alongside the premutated founder lines in two habitats in the mid-Atlantic region of the United States under field conditions. We show that FGM is able to predict the outcome of a set of random induced mutations on fitness in a set of A. thaliana accessions grown in the wild: mutations are more likely to be beneficial in relatively less fit genotypes. This finding suggests that FGM is an accurate approximation of the process of adaptation under more realistic ecological conditions. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.
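The prediction tested here, that random mutations are more likely to be beneficial in genotypes farther from the fitness optimum, can be illustrated with a small Monte Carlo sketch of FGM. The dimensionality, mutation size, and sample counts below are arbitrary illustrative choices, not parameters estimated in the study.

```python
import numpy as np

def prob_beneficial(distance, mut_size=0.3, n_dim=10, n_mut=20000, seed=1):
    """Fraction of random mutations of fixed size that move a phenotype
    located 'distance' from the optimum (the origin) closer to it."""
    rng = np.random.default_rng(seed)
    z = np.zeros(n_dim)
    z[0] = distance                                   # current phenotype
    steps = rng.normal(size=(n_mut, n_dim))
    steps *= mut_size / np.linalg.norm(steps, axis=1, keepdims=True)  # random directions, fixed size
    new_dist = np.linalg.norm(z + steps, axis=1)
    return float(np.mean(new_dist < distance))        # beneficial = closer to the optimum

for d in (0.5, 1.0, 2.0, 4.0):                        # larger distance = lower-fitness genotype
    print(d, prob_beneficial(d))
```

Run as is, the printed fraction of beneficial mutations rises with distance from the optimum, which is the qualitative pattern the field experiment confirms.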
3D numerical simulation of transient processes in hydraulic turbines
NASA Astrophysics Data System (ADS)
Cherny, S.; Chirkov, D.; Bannikov, D.; Lapin, V.; Skorospelov, V.; Eshkunova, I.; Avdushenko, A.
2010-08-01
An approach for numerical simulation of 3D hydraulic turbine flows in transient operating regimes is presented. The method is based on a coupled solution of incompressible RANS equations, runner rotation equation, and water hammer equations. The issue of setting appropriate boundary conditions is considered in detail. As an illustration, the simulation results for runaway process are presented. The evolution of vortex structure and its effect on computed runaway traces are analyzed.
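The runner rotation equation coupled to the flow solver is, in its commonly used form, a balance of angular momentum on the rotating assembly; the LaTeX below is a generic sketch (J the polar moment of inertia, ω the angular speed, M_h the hydraulic torque computed from the flow, M_r the resisting generator and friction torque), not necessarily the authors' exact formulation.

```latex
J\,\frac{d\omega}{dt} \;=\; M_{h}(t) \;-\; M_{r}(t)
```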
Mechanism of Cytokinetic Contractile Ring Constriction in Fission Yeast
Stachowiak, Matthew R.; Laplante, Caroline; Chin, Harvey F.; Guirao, Boris; Karatekin, Erdem; Pollard, Thomas D.; O’Shaughnessy, Ben
2014-01-01
SUMMARY Cytokinesis involves constriction of a contractile actomyosin ring. The mechanisms generating ring tension and setting the constriction rate remain unknown, since the organization of the ring is poorly characterized, its tension was rarely measured, and constriction is coupled to other processes. To isolate ring mechanisms we studied fission yeast protoplasts, where constriction occurs without the cell wall. Exploiting the absence of cell wall and actin cortex, we measured ring tension and imaged ring organization, which was dynamic and disordered. Computer simulations based on the amounts and biochemical properties of the key proteins showed that they spontaneously self-organize into a tension-generating bundle. Together with rapid component turnover, the self-organization mechanism continuously reassembles and remodels the constricting ring. Ring constriction depended on cell shape, revealing that the ring operates close to conditions of isometric tension. Thus, the fission yeast ring sets its own tension, but other processes set the constriction rate. PMID:24914559
Mears, Lisa; Stocks, Stuart M; Albaek, Mads O; Cassells, Benny; Sin, Gürkan; Gernaey, Krist V
2017-07-01
A novel model-based control strategy has been developed for filamentous fungal fed-batch fermentation processes. The system of interest is a pilot scale (550 L) filamentous fungus process operating at Novozymes A/S. In such processes, it is desirable to maximize the total product achieved in a batch in a defined process time. In order to achieve this goal, it is important to maximize both the product concentration and the total final mass in the fed-batch system. To this end, we describe the development of a control strategy which aims to achieve maximum tank fill while avoiding oxygen-limited conditions. This requires a two-stage approach: (i) calculation of the tank start fill; and (ii) on-line control in order to maximize fill subject to oxygen transfer limitations. First, a mechanistic model was applied off-line in order to determine the appropriate start fill for processes with four different sets of process operating conditions for the stirrer speed, headspace pressure, and aeration rate. The start fills were tested with eight pilot scale experiments using a reference process operation. An on-line control strategy was then developed, utilizing the mechanistic model, which is recursively updated using on-line measurements. The model was applied in order to predict the current system states, including the biomass concentration, and to simulate the expected future trajectory of the system until a specified end time. In this way, the desired feed rate is updated along the progress of the batch, taking into account the oxygen mass transfer conditions and the expected future trajectory of the mass. The final results show that the target fill was achieved to within 5% below the maximum fill when tested using eight pilot scale batches, and overfilling was avoided. The results were reproducible, unlike the reference experiments, which showed over 10% variation in the final tank fill, including overfilling. The variance of the final tank fill is reduced by over 74%, meaning that it is possible to target the final maximum fill reproducibly. The product concentration achieved at a given set of process conditions was unaffected by the control strategy. Biotechnol. Bioeng. 2017;114: 1459-1468. © 2017 Wiley Periodicals, Inc.
Standardization of domestic frying processes by an engineering approach.
Franke, K; Strijowski, U
2011-05-01
An approach was developed to enable a better standardization of domestic frying of potato products. For this purpose, 5 domestic fryers differing in heating power and oil capacity were used. A well-defined frying process using a highly standardized model product and a broad range of frying conditions was carried out in these fryers, and the development of browning, an important quality parameter, was measured. Product-to-oil ratio, oil temperature, and frying time were varied. Quite different color changes were measured in the different fryers although the same frying process parameters were applied. The specific energy consumption for water evaporation (spECWE) during frying, related to the product amount, was determined for all frying processes to define an engineering parameter for characterizing the frying process. A quasi-linear regression approach was applied to calculate this parameter from frying process settings and fryer properties. The high significance of the regression coefficients and a coefficient of determination close to unity confirmed the suitability of this approach. Based on this regression equation, curves for standard frying conditions (SFC curves) were calculated which describe the frying conditions required to obtain the same level of spECWE in the different domestic fryers. Comparison of browning results from the different fryers operated at conditions near the SFC curves confirmed the applicability of the approach. © 2011 Institute of Food Technologists®
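A regression of the kind described, spECWE as a function of process settings and fryer properties, can be sketched with ordinary least squares. The predictor columns, data values, and units below are hypothetical and do not reproduce the study's regressors or coefficients.

```python
# Illustrative only: fit a linear model of spECWE from frying settings and
# fryer properties, in the spirit of the regression approach described above.
import numpy as np

# columns: product-to-oil ratio, oil temperature [deg C], frying time [s], heating power [kW]
X = np.array([
    [0.05, 170.0, 180.0, 2.0],
    [0.10, 175.0, 240.0, 2.2],
    [0.08, 160.0, 300.0, 1.8],
    [0.12, 180.0, 200.0, 2.5],
    [0.06, 165.0, 260.0, 2.0],
    [0.07, 178.0, 210.0, 2.3],
])
y = np.array([1.9, 2.4, 2.8, 2.1, 2.5, 2.0])   # "measured" spECWE values (made up)

A = np.column_stack([np.ones(len(X)), X])       # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)    # ordinary least squares fit
print("intercept and coefficients:", np.round(coef, 3))

# predict spECWE for a new combination of settings and fryer properties
new_setting = np.array([1.0, 0.09, 172.0, 220.0, 2.1])
print("predicted spECWE:", round(float(new_setting @ coef), 2))
```

Solving the fitted equation for equal spECWE across fryers is what produces curves analogous to the SFC curves described above.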
NASA Astrophysics Data System (ADS)
Thomas, Gregory P.; Anderson, David
2013-06-01
Despite science learning in settings such as science museums being recognized as important and given increasing attention in science education circles, the investigation of parents' and their children's metacognition in such settings is still in its infancy. This is despite an individual's metacognition being acknowledged as an important influence on their learning within and across contexts. This research investigated parents' metacognitive procedural and conditional knowledge, a key element of their metacognition, related to (a) what they knew about how they and their children thought and learned, and (b) whether this metacognitive knowledge influenced their interactions with their children during their interaction with a moderately complex simulation in a science museum. Parents reported metacognitive procedural and conditional knowledge regarding their own and their children's thinking and learning processes. Further, parents were aware that this metacognitive knowledge influenced their interactions with their children, seeing this as appropriate pedagogical action for them within the context of the particular exhibit and its task requirements at the science museum, and for the child involved. These findings have implications for exhibit and activity development within science museum settings.
Segmentation of heterogeneous blob objects through voting and level set formulation
Chang, Hang; Yang, Qing; Parvin, Bahram
2009-01-01
Blob-like structures occur often in nature, where they aid in cueing and the pre-attentive process. These structures often overlap, form perceptual boundaries, and are heterogeneous in shape, size, and intensity. In this paper, voting, Voronoi tessellation, and level set methods are combined to delineate blob-like structures. Voting and subsequent Voronoi tessellation provide the initial condition and the boundary constraints for each blob, while curve evolution through level set formulation provides refined segmentation of each blob within the Voronoi region. The paper concludes with the application of the proposed method to a dataset produced from cell based fluorescence assays and stellar data. PMID:19774202
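The overall pipeline, seeds partitioning the image into Voronoi regions followed by a per-region refinement of each blob, can be sketched with NumPy and SciPy. In the sketch a crude per-region intensity threshold stands in for the level-set curve evolution, and the seeds and synthetic image are made up; this is not the authors' implementation.

```python
# Sketch of the pipeline idea: seed points (e.g. from a voting step) define
# Voronoi regions; each blob is then refined inside its own region. A simple
# per-region threshold replaces the level-set evolution here.
import numpy as np
from scipy.spatial import cKDTree

def segment_blobs(image, seeds):
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    pixels = np.column_stack([yy.ravel(), xx.ravel()])
    # Voronoi tessellation: assign every pixel to its nearest seed
    _, region = cKDTree(seeds).query(pixels)
    region = region.reshape(h, w)
    labels = np.zeros((h, w), dtype=int)
    for k in range(len(seeds)):
        mask = region == k
        vals = image[mask]
        thr = 0.5 * (vals.max() + vals.mean())   # crude stand-in for curve evolution
        labels[mask & (image > thr)] = k + 1
    return labels

# toy example: two overlapping Gaussian blobs
h = w = 64
yy, xx = np.mgrid[0:h, 0:w]
img = np.exp(-((yy - 20)**2 + (xx - 20)**2) / 60.0) + np.exp(-((yy - 44)**2 + (xx - 40)**2) / 90.0)
print(np.unique(segment_blobs(img, seeds=np.array([[20, 20], [44, 40]]))))
```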
Simulations and observations of cloudtop processes
NASA Technical Reports Server (NTRS)
Siems, S. T.; Bretherton, C. S.; Baker, M. B.
1990-01-01
Turbulent entrainment at zero mean shear stratified interfaces has been studied extensively in the laboratory and theoretically for the classical situation in which density is a passive tracer of the mixing and the turbulent motions producing the entrainment are directed toward the interface. It is the purpose of the numerical simulations and data analysis to investigate these processes and, specifically, to focus on the following questions: (1) Can local cooling below cloudtop play an important role in setting up convective circulations within the cloud, and bringing about entrainment; (2) Can Cloudtop Entrainment Instability (CEI) alone lead to runaway entrainment under geophysically realistic conditions; and (3) What are the important mechanisms of entrainment at cloudtop under zero or low mean shear conditions.
Siódmiak, Jacek; Uher, Jan J; Santamaría-Holek, Ivan; Kruszewska, Natalia; Gadomski, Adam
2007-08-01
A superdiffusive random-walk action in the depletion zone around a growing protein crystal is considered. It stands for a dynamic boundary condition of the growth process and competes steadily with a quasistatic, curvature-involving (thermodynamic) free boundary condition, both of them contributing to interpret the (mainly late-stage) growth process in terms of a prototype ion-channeling effect. An overall diffusion function contains quantitative signatures of both boundary conditions mentioned and indicates whether the new phase grows as an orderly phase or a converse scenario occurs. This situation can be treated in a quite versatile way both numerically and analytically, within a generalized Smoluchowski framework. This study can help in (1) elucidating some dynamic puzzles of a complex crystal formation vs biomolecular aggregation, also those concerning ion-channel formation, and (2) seeing how ion-channel-type dynamics of non-Markovian nature may set properly the pace of model (dis)ordered protein aggregation.
Numerical investigation of solid mixing in a fluidized bed coating process
NASA Astrophysics Data System (ADS)
Kenche, Venkatakrishna; Feng, Yuqing; Ying, Danyang; Solnordal, Chris; Lim, Seng; Witt, Peter J.
2013-06-01
Fluidized beds are widely used in many process industries including the food and pharmaceutical sectors. Despite being an intensive research area, there are no design rules or correlations that can be used to quantitatively predict the solid mixing in a specific system for a given set of operating conditions. This paper presents a numerical study of the gas and solid dynamics in a laboratory scale fluidized bed coating process used for food and pharmaceutical industries. An Eulerian-Eulerian model (EEM) with kinetic theory of granular flow is selected as the modeling technique, with the commercial computational fluid dynamics (CFD) software package ANSYS/Fluent being the numerical platform. The flow structure is investigated in terms of the spatial distribution of gas and solid flow. The solid mixing has been evaluated under different operating conditions. It was found that the solid mixing rate in the horizontal direction is similar to that in the vertical direction under the current design and operating conditions. It takes about 5 s to achieve good mixing.
An Excel Workbook for Identifying Redox Processes in Ground Water
Jurgens, Bryant C.; McMahon, Peter B.; Chapelle, Francis H.; Eberts, Sandra M.
2009-01-01
The reduction/oxidation (redox) condition of ground water affects the concentration, transport, and fate of many anthropogenic and natural contaminants. The redox state of a ground-water sample is defined by the dominant type of reduction/oxidation reaction, or redox process, occurring in the sample, as inferred from water-quality data. However, because of the difficulty in defining and applying a systematic redox framework to samples from diverse hydrogeologic settings, many regional water-quality investigations do not attempt to determine the predominant redox process in ground water. Recently, McMahon and Chapelle (2008) devised a redox framework that was applied to a large number of samples from 15 principal aquifer systems in the United States to examine the effect of redox processes on water quality. This framework was expanded by Chapelle and others (in press) to use measured sulfide data to differentiate between iron(III)- and sulfate-reducing conditions. These investigations showed that a systematic approach to characterize redox conditions in ground water could be applied to datasets from diverse hydrogeologic settings using water-quality data routinely collected in regional water-quality investigations. This report describes the Microsoft Excel workbook, RedoxAssignment_McMahon&Chapelle.xls, that assigns the predominant redox process to samples using the framework created by McMahon and Chapelle (2008) and expanded by Chapelle and others (in press). Assignment of redox conditions is based on concentrations of dissolved oxygen (O2), nitrate (NO3-), manganese (Mn2+), iron (Fe2+), sulfate (SO42-), and sulfide (sum of dihydrogen sulfide [aqueous H2S], hydrogen sulfide [HS-], and sulfide [S2-]). The logical arguments for assigning the predominant redox process to each sample are performed by a program written in Microsoft Visual Basic for Applications (VBA). The program is called from buttons on the main worksheet. The number of samples that can be analyzed is only limited by the number of rows in Excel (65,536 for Excel 2003 and XP; and 1,048,576 for Excel 2007), and is therefore appropriate for large datasets.
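The kind of rule-based assignment the workbook's VBA program performs can be sketched as a small function. The threshold values below are placeholders only, not the McMahon and Chapelle (2008) criteria; the report and its references define the actual framework.

```python
# Illustrative sketch only: assign a dominant redox category from water-quality
# concentrations, in the spirit of the workbook's logic. Threshold values are
# placeholders, NOT the published criteria.

def assign_redox(o2, no3, mn, fe, so4, sulfide):
    """Concentrations in mg/L; returns a coarse redox category."""
    if o2 >= 0.5:                       # placeholder threshold
        return "Oxic (O2 reduction)"
    if no3 >= 0.5:
        return "Anoxic (NO3 reduction)"
    if mn >= 0.05:
        return "Anoxic (Mn reduction)"
    if fe >= 0.1:
        # sulfide helps separate Fe(III) reduction from SO4 reduction
        return "Anoxic (mixed Fe(III)/SO4 reduction)" if sulfide > 0.01 else "Anoxic (Fe(III) reduction)"
    if so4 >= 0.5:
        return "Anoxic (SO4 reduction)"
    return "Anoxic (methanogenesis)"

print(assign_redox(o2=0.2, no3=0.1, mn=0.02, fe=0.5, so4=10.0, sulfide=0.005))
```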
NASA Technical Reports Server (NTRS)
Kopasakis, George
1997-01-01
Performance Seeking Control (PSC) attempts to find and control the process at the operating condition that will generate maximum performance. In this paper a nonlinear multivariable PSC methodology will be developed, utilizing the Fuzzy Model Reference Learning Control (FMRLC) and the method of Steepest Descent or Gradient (SDG). This PSC control methodology employs the SDG method to find the operating condition that will generate maximum performance. This operating condition is in turn passed to the FMRLC controller as a set point for the control of the process. The conventional SDG algorithm is modified in this paper in order for convergence to occur monotonically. For the FMRLC control, the conventional fuzzy model reference learning control methodology is utilized, with guidelines generated here for effective tuning of the FMRLC controller.
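The set-point-seeking idea, climbing the gradient of a performance surface and handing the result to an inner-loop controller as its set point, can be sketched numerically. The performance surface, step-size rule, and starting point below are illustrative; this is not the FMRLC controller or the paper's modified SDG algorithm, and the step-halving rule is just one simple way of forcing monotone progress.

```python
# Minimal sketch of set-point seeking: ascend a (made-up) performance surface
# and pass the resulting operating condition to an inner-loop controller.
import numpy as np

def performance(u):
    # concave toy surface with its maximum at (2, -1)
    return -(u[0] - 2.0)**2 - 2.0 * (u[1] + 1.0)**2

def numerical_gradient(f, u, eps=1e-4):
    g = np.zeros_like(u)
    for i in range(len(u)):
        du = np.zeros_like(u)
        du[i] = eps
        g[i] = (f(u + du) - f(u - du)) / (2 * eps)
    return g

def seek_setpoint(f, u0, step=0.1, iters=200):
    u = np.array(u0, dtype=float)
    for _ in range(iters):
        g = numerical_gradient(f, u)
        u_new = u + step * g              # ascend toward higher performance
        if f(u_new) < f(u):
            step *= 0.5                   # shrink the step so progress stays monotone
        else:
            u = u_new
    return u

setpoint = seek_setpoint(performance, u0=[0.0, 0.0])
print("set point passed to the inner-loop controller:", np.round(setpoint, 3))
```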
Time-frequency analysis of pediatric murmurs
NASA Astrophysics Data System (ADS)
Lombardo, Joseph S.; Blodgett, Lisa A.; Rosen, Ron S.; Najmi, Amir-Homayoon; Thompson, W. Reid
1998-05-01
Technology has provided many new tools to assist in the diagnosis of pathologic conditions of the heart. Echocardiography, Ultrafast CT, and MRI are just a few. While these tools are a valuable resource, they are typically too expensive, large, and complex in operation for use in rural, homecare, and physician's office settings. Recent advances in computer performance, miniaturization, and acoustic signal processing have yielded new technologies that, when applied to heart sounds, can provide low-cost screening for pathologic conditions. The short duration and transient nature of these signals require processing techniques that provide high resolution in both time and frequency. Short-time Fourier transforms, Wigner distributions, and wavelet transforms have been applied to signals from hearts with various pathologic conditions. While no single technique provides the ideal solution, the combination of tools provides a good representation of the acoustic features of the pathologies selected.
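As an illustration of one of the time-frequency tools named above, the sketch below computes a short-time Fourier transform of a synthetic transient with SciPy. A recorded heart-sound signal would replace the synthetic burst; the sampling rate and window length are assumptions, and the window length sets the trade-off between time and frequency resolution.

```python
# Short-time Fourier transform of a synthetic transient (illustrative signal).
import numpy as np
from scipy.signal import stft

fs = 4000                                   # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
# synthetic "murmur-like" burst: a decaying 150 Hz tone starting at 0.3 s
x = np.where(t > 0.3, np.exp(-10 * (t - 0.3)) * np.sin(2 * np.pi * 150 * (t - 0.3)), 0.0)
x += 0.01 * np.random.randn(len(t))         # measurement noise

f, tt, Z = stft(x, fs=fs, nperseg=256, noverlap=192)
power = np.abs(Z)**2
peak_f = f[power.max(axis=1).argmax()]      # frequency bin with the strongest transient
print(f"dominant frequency near {peak_f:.0f} Hz")
```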
Schädler, Marc René; Warzybok, Anna; Ewert, Stephan D; Kollmeier, Birger
2016-05-01
A framework for simulating auditory discrimination experiments, based on an approach from Schädler, Warzybok, Hochmuth, and Kollmeier [(2015). Int. J. Audiol. 54, 100-107], which was originally designed to predict speech recognition thresholds, is extended to also predict psychoacoustic thresholds. The proposed framework is used to assess the suitability of different auditory-inspired feature sets for a range of auditory discrimination experiments that included psychoacoustic as well as speech recognition experiments in noise. The considered experiments were 2 kHz tone-in-broadband-noise simultaneous masking depending on the tone length, spectral masking with simultaneously presented tone signals and narrow-band noise maskers, and German Matrix sentence test reception threshold in stationary and modulated noise. The employed feature sets included spectro-temporal Gabor filter bank features, Mel-frequency cepstral coefficients, logarithmically scaled Mel-spectrograms, and the internal representation of the Perception Model from Dau, Kollmeier, and Kohlrausch [(1997). J. Acoust. Soc. Am. 102(5), 2892-2905]. The proposed framework was successfully employed to simulate all experiments with a common parameter set and obtain objective thresholds with fewer assumptions compared to traditional modeling approaches. Depending on the feature set, the simulated reference-free thresholds were found to agree with, and hence to predict, empirical data from the literature. Across-frequency processing was found to be crucial for accurately modeling the lower speech reception thresholds observed in modulated noise conditions compared with stationary noise conditions.
Twelve evidence-based principles for implementing self-management support in primary care.
Battersby, Malcolm; Von Korff, Michael; Schaefer, Judith; Davis, Connie; Ludman, Evette; Greene, Sarah M; Parkerton, Melissa; Wagner, Edward H
2010-12-01
Recommendations to improve self-management support and health outcomes for people with chronic conditions in primary care settings are provided on the basis of expert opinion supported by evidence for practices and processes. Practices and processes that could improve self-management support in primary care were identified through a nominal group process. In a targeted search strategy, reviews and meta-analyses were then identified using terms from a wide range of chronic conditions and behavioral risk factors in combination with Self-Care, Self-Management, and Primary Care. On the basis of these reviews, evidence-based principles for self-management support were developed. The evidence is organized within the framework of the Chronic Care Model. Evidence-based principles in 12 areas were associated with improved patient self-management and/or health outcomes: (1) brief targeted assessment, (2) evidence-based information to guide shared decision-making, (3) use of a nonjudgmental approach, (4) collaborative priority and goal setting, (5) collaborative problem solving, (6) self-management support by diverse providers, (7) self-management interventions delivered in diverse formats, (8) patient self-efficacy, (9) active follow-up, (10) guideline-based case management for selected patients, (11) linkages to evidence-based community programs, and (12) multifaceted interventions. A framework is provided for implementing these principles in three phases of the primary care visit: enhanced previsit assessment, a focused clinical encounter, and expanded postvisit options. There is a growing evidence base for how self-management support for chronic conditions can be integrated into routine health care.
James C. Lynch,; Phillippe Hensel,; Cahoon, Donald R.
2015-01-01
The National Park Service, in response to the growing evidence and awareness of the effects of climate change on federal lands, determined that monitoring wetland elevation change is a top priority in North Atlantic Coastal parks (Stevens et al., 2010). As a result, the NPS Northeast Coastal and Barrier Network (NCBN), in collaboration with colleagues from the U.S. Geological Survey (USGS) and the National Oceanic and Atmospheric Administration (NOAA), has developed a protocol for monitoring wetland elevation change and other processes important for determining the viability of wetland communities. Although focused on North Atlantic Coastal parks, this document is applicable to all coastal and inland wetland regions. Wetlands exist within a narrow range of elevation that is influenced by local hydrologic conditions. For coastal wetlands in particular, local hydrologic conditions may be changing as sea levels continue to rise. As sea level rises, coastal wetland systems may respond by building elevation to maintain favorable hydrologic conditions for their survival. This protocol provides the reader with instructions and guidelines on designing a monitoring plan or study to: A) Quantify elevation change in wetlands with the Surface Elevation Table (SET). B) Understand the processes that influence elevation change, including vertical accretion (SET and Marker Horizon methods). C) Survey the wetland surface and SET mark to a common reference datum to allow for comparing sample stations to each other and to local tidal datums. D) Survey the SET mark to monitor its relative stability. This document is divided into two parts: the main body, which presents an overview of all aspects of monitoring wetland elevation dynamics, and a collection of Standard Operating Procedures (SOPs) that describe in detail how to perform each step of the methodology. Detailed instructions on installation, data collection, data management, and analysis are provided in this report and the associated SOPs. A better understanding of these processes will help to determine the present and future viability of coastal wetlands managed by NPS and can help address measures that will ensure these communities exist into the future.
NASA Astrophysics Data System (ADS)
Muranaka, Noriaki; Date, Kei; Tokumaru, Masataka; Imanishi, Shigeru
In recent years, traffic accidents have occurred more frequently as traffic density has grown. We therefore consider a safe and comfortable transportation system that protects pedestrians, the most vulnerable road users, to be necessary. First, we detect and recognize the pedestrian (the crossing person) by image processing. Next, we inform all drivers turning right or left that a pedestrian is present, using sound, images, and similar cues. By prompting drivers to drive safely in this way, accidents involving pedestrians can be reduced. In this paper, we use a background subtraction method to detect moving objects. In background subtraction, the method of updating the background is important; in the conventional approach, the threshold values for the subtraction processing and the background update were identical. That is, the mixing rate of the input image and the background image in the background update was a fixed value, and fine tuning in response to environmental changes such as the weather was difficult. Therefore, we propose a background-image update method in which estimation mistakes are difficult to amplify. We experiment on and examine five cases: sunshine, cloud, evening, rain, and changing sunlight, excluding night. This technique can set the threshold values for the subtraction processing and the background update processing separately, to suit environmental conditions such as the weather. Therefore, the mixing rate of the input image and the background image in the background update can be tuned freely. Because setting parameters suited to the environmental conditions is important for minimizing the error rate, we also examine the choice of parameter settings.
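The separation of detection and update thresholds described above can be sketched as a running-average background model in which only pixels judged to be background are blended into the model, at a mixing rate that can be adapted to the weather. The threshold and mixing-rate values below are illustrative, not the paper's tuned settings.

```python
# Running-average background subtraction with independent detection and update
# thresholds (illustrative parameter values).
import numpy as np

def update_background(frame, background, detect_thr=30.0, update_thr=15.0, alpha=0.02):
    """frame, background: float grayscale arrays; returns (foreground mask, new background)."""
    diff = np.abs(frame - background)
    foreground = diff > detect_thr                     # subtraction (detection) threshold
    # Only pixels that look like background (small change) are blended in, so a
    # wrongly learned foreground pixel is not amplified over time.
    stable = diff <= update_thr
    new_bg = np.where(stable, (1 - alpha) * background + alpha * frame, background)
    return foreground, new_bg

# toy usage: a pedestrian-sized bright patch appears in the new frame
bg = np.full((120, 160), 100.0)
frame = bg.copy()
frame[40:80, 60:80] = 200.0
mask, bg = update_background(frame, bg, alpha=0.05)    # e.g. a larger alpha for changing sunlight
print("foreground pixels:", int(mask.sum()))
```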
Parametric synthesis of a robust controller on a base of mathematical programming method
NASA Astrophysics Data System (ADS)
Khozhaev, I. V.; Gayvoronskiy, S. A.; Ezangina, T. A.
2018-05-01
This paper is dedicated to deriving sufficient conditions that link root indices of robust control quality with the coefficients of an interval characteristic polynomial, on the basis of a mathematical programming method. On the basis of these conditions, a method was developed for synthesizing PI and PID controllers that provide an aperiodic transient process with an acceptable stability degree and, consequently, an acceptable settling time. The method was applied to the problem of synthesizing a controller for the depth control system of an unmanned underwater vehicle.
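As a rough illustration of the quantity being constrained, the sketch below brute-forces the vertex polynomials of a hypothetical interval characteristic polynomial and reports the worst-case stability degree, the distance of the rightmost root from the imaginary axis. This is only an illustrative screen with made-up coefficient intervals; it is not the paper's mathematical programming conditions, and a vertex check is not, in general, a proof for interval polynomial families.

```python
# Illustrative screen only: worst-case stability degree over the vertex
# polynomials of a hypothetical interval characteristic polynomial.
import itertools
import numpy as np

# s^3 + a2 s^2 + a1 s + a0 with interval coefficients [low, high], listed a3..a0
intervals = [(1.0, 1.0), (4.0, 6.0), (5.0, 8.0), (2.0, 3.0)]

def worst_stability_degree(intervals):
    worst = np.inf
    for coeffs in itertools.product(*intervals):        # all vertex polynomials
        roots = np.roots(coeffs)
        degree = -max(r.real for r in roots)             # distance to the imaginary axis
        worst = min(worst, degree)
    return worst

eta = worst_stability_degree(intervals)
print(f"worst-case stability degree over vertices: {eta:.3f}")
print("acceptable" if eta >= 0.5 else "retune controller")   # 0.5 is an assumed requirement
```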
Giving them a good start: informatics support of newborn screening and clinical care.
Wetter, T; Haschler, I; Ho, S; Hoffmann, G F; Linderkamp, O; Philipp, F; Skonetzki, S
2005-01-01
Newborns are a vulnerable population: exposed to dramatically changing environmental conditions, potentially suffering from impairments that cannot realistically be diagnosed during pregnancy, and at risk that unfavorable conditions escalate fast. We have investigated informatics methods and tools to make screening for congenital diseases, and the containment of critical processes that start in the first days of life, safer and more efficient. This poster presents a set of three methodological approaches that all aim at comprehensive improvement of neonatal care.
Stochastic simulation by image quilting of process-based geological models
NASA Astrophysics Data System (ADS)
Hoffimann, Júlio; Scheidt, Céline; Barfod, Adrian; Caers, Jef
2017-09-01
Process-based modeling offers a way to represent realistic geological heterogeneity in subsurface models. The main limitation lies in conditioning such models to data. Multiple-point geostatistics can use these process-based models as training images and address the data conditioning problem. In this work, we further develop image quilting as a method for 3D stochastic simulation capable of mimicking the realism of process-based geological models with minimal modeling effort (i.e. parameter tuning) while at the same time conditioning them to a variety of data. In particular, we develop a new probabilistic data aggregation method for image quilting that bypasses traditional ad-hoc weighting of auxiliary variables. In addition, we propose a novel criterion for template design in image quilting that generalizes the entropy plot for continuous training images. The criterion is based on the new concept of voxel reuse, a stochastic and quilting-aware function of the training image. We compare our proposed method with other established simulation methods on a set of process-based training images of varying complexity, including a real-case example of stochastic simulation of the buried-valley groundwater system in Denmark.
Improved silicon nitride for advanced heat engines
NASA Technical Reports Server (NTRS)
Yeh, Hun C.; Fang, Ho T.
1987-01-01
The technology base required to fabricate silicon nitride components with the strength, reliability, and reproducibility necessary for actual heat engine applications is presented. Task 2 was set up to develop test bars with high Weibull slope and greater high temperature strength, and to conduct an initial net shape component fabrication evaluation. Screening experiments were performed in Task 7 on advanced materials and processing for input to Task 2. The technical efforts performed in the second year of a 5-yr program are covered. The first iteration of Task 2 was completed as planned. Two half-replicated, fractional factorial (2^5), statistically designed matrix experiments were conducted. These experiments have identified Denka 9FW Si3N4 as an alternate raw material to GTE SN502 Si3N4 for subsequent process evaluation. A detailed statistical analysis was conducted to correlate processing conditions with as-processed test bar properties. One processing condition produced a material with a 97 ksi average room temperature MOR (100 percent of goal) with 13.2 Weibull slope (83 percent of goal); another condition produced 86 ksi (6 percent over baseline) room temperature strength with a Weibull slope of 20 (125 percent of goal).
Fluid Expulsion, Habitability, and the Search for Life on Mars
NASA Technical Reports Server (NTRS)
Oehler, Dorothy Z.; Allen, Carlton C.
2012-01-01
Habitability assessments are critical for identifying settings in which potential biosignatures could exist in quantities large enough to be detected by rovers. Habitability depends on 1) the potential for long-lived liquid water, 2) conditions affording protection from surface processes destructive to organic biomolecules, and 3) a source of renewing nutrients and energy. Of these criteria, the latter is often overlooked. Here we present an analysis of a large "ghost" crater in northern Chryse Planitia [1] that appears to have satisfied each of these requirements, with several processes providing potential sources of nutrient/energy renewal [1-2]. This analysis can serve as a model for identifying other localities that could provide similarly favorable settings in which to seek evidence of life on Mars.
Qualitative simulation for process modeling and control
NASA Technical Reports Server (NTRS)
Dalle Molle, D. T.; Edgar, T. F.
1989-01-01
A qualitative model is developed for a first-order system with a proportional-integral controller without precise knowledge of the process or controller parameters. Simulation of the qualitative model yields all of the solutions to the system equations. In developing the qualitative model, a necessary condition for the occurrence of oscillatory behavior is identified. Initializations that cannot exhibit oscillatory behavior produce a finite set of behaviors. When the phase-space behavior of the oscillatory behavior is properly constrained, these initializations produce an infinite but comprehensible set of asymptotically stable behaviors. While the predictions include all possible behaviors of the real system, a class of spurious behaviors has been identified. When limited numerical information is included in the model, the number of predictions is significantly reduced.
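For contrast with the qualitative treatment, a quantitative counterpart is easy to simulate: a first-order plant under PI control, integrated with Euler steps. The plant and gain values below are illustrative; depending on the gains, the response is monotone or oscillatory, which is the distinction the qualitative analysis enumerates.

```python
# Quantitative counterpart to the qualitative model: first-order plant
# dx/dt = (-x + u)/tau under PI control, Euler integration (illustrative values).

def simulate(kp, ki, tau=1.0, setpoint=1.0, dt=0.01, t_end=10.0):
    x, integral = 0.0, 0.0
    trajectory = []
    for _ in range(int(t_end / dt)):
        error = setpoint - x
        integral += error * dt
        u = kp * error + ki * integral       # PI control law
        x += dt * (-x + u) / tau             # first-order plant
        trajectory.append(x)
    return trajectory

for kp, ki in [(2.0, 0.5), (2.0, 20.0)]:     # low vs high integral gain
    traj = simulate(kp, ki)
    overshoot = max(traj) - 1.0
    print(f"kp={kp}, ki={ki}: overshoot = {overshoot:.3f}")
```

With the low integral gain the response is monotone (no overshoot); with the high gain the closed-loop poles become complex and the response oscillates, the behaviour class the qualitative necessary condition is designed to flag.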
Reliability modelling and analysis of a multi-state element based on a dynamic Bayesian network
Xu, Tingxue; Gu, Junyuan; Dong, Qi; Fu, Linyu
2018-01-01
This paper presents a quantitative reliability modelling and analysis method for multi-state elements based on a combination of the Markov process and a dynamic Bayesian network (DBN), taking perfect repair, imperfect repair and condition-based maintenance (CBM) into consideration. The Markov models of elements without repair and under CBM are established, and an absorbing set is introduced to determine the reliability of the repairable element. According to the state-transition relations between the states determined by the Markov process, a DBN model is built. In addition, its parameters for series and parallel systems, namely, conditional probability tables, can be calculated by referring to the conditional degradation probabilities. Finally, the power of a control unit in a failure model is used as an example. A dynamic fault tree (DFT) is translated into a Bayesian network model, and subsequently extended to a DBN. The results show the state probabilities of an element and of the system without repair, with perfect and imperfect repair, and under CBM, with the absorbing-set case obtained from the differential equations and verified. Through forward inference, the reliability of the control unit is determined under different kinds of modes. Finally, the weak nodes of the control unit are identified. PMID:29765629
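The underlying idea of tracking multi-state probabilities with and without repair can be illustrated with a small discrete-time Markov sketch. The three-state transition probabilities below are hypothetical, and the paper itself works with continuous-time Markov differential equations mapped onto a DBN rather than this simplified discrete form.

```python
# Hypothetical three-state element (good -> degraded -> failed), discrete time:
# evolve the state probabilities with and without a repair transition.
import numpy as np

def evolve(P, steps, p0):
    p = np.array(p0, dtype=float)
    for _ in range(steps):
        p = p @ P
    return p

# states: 0 = good, 1 = degraded, 2 = failed (absorbing when unrepaired)
P_no_repair = np.array([[0.95, 0.04, 0.01],
                        [0.00, 0.90, 0.10],
                        [0.00, 0.00, 1.00]])

P_repair = P_no_repair.copy()
P_repair[2] = [0.60, 0.00, 0.40]          # imperfect repair back to "good"

p0 = [1.0, 0.0, 0.0]
print("no repair, P(failed) after 50 steps:  ", round(evolve(P_no_repair, 50, p0)[2], 3))
print("with repair, P(failed) after 50 steps:", round(evolve(P_repair, 50, p0)[2], 3))
```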
Li, Tianhao; Fu, Qian-Jie
2011-08-01
(1) To investigate whether voice gender discrimination (VGD) could be a useful indicator of the spectral and temporal processing abilities of individual cochlear implant (CI) users; (2) To examine the relationship between VGD and speech recognition with CI when comparable acoustic cues are used for both perception processes. VGD was measured using two talker sets with different inter-gender fundamental frequencies (F(0)), as well as different acoustic CI simulations. Vowel and consonant recognition in quiet and noise were also measured and compared with VGD performance. Eleven postlingually deaf CI users. The results showed that (1) mean VGD performance differed for different stimulus sets, (2) VGD and speech recognition performance varied among individual CI users, and (3) individual VGD performance was significantly correlated with speech recognition performance under certain conditions. VGD measured with selected stimulus sets might be useful for assessing not only pitch-related perception, but also spectral and temporal processing by individual CI users. In addition to improvements in spectral resolution and modulation detection, the improvement in higher modulation frequency discrimination might be particularly important for CI users in noisy environments.
Weights and topology: a study of the effects of graph construction on 3D image segmentation.
Grady, Leo; Jolly, Marie-Pierre
2008-01-01
Graph-based algorithms have become increasingly popular for medical image segmentation. The fundamental process for each of these algorithms is to use the image content to generate a set of weights for the graph and then set conditions for an optimal partition of the graph with respect to these weights. To date, the heuristics used for generating the weighted graphs from image intensities have largely been ignored, while the primary focus of attention has been on the details of providing the partitioning conditions. In this paper we empirically study the effects of graph connectivity and weighting function on the quality of the segmentation results. To control for algorithm-specific effects, we employ both the Graph Cuts and Random Walker algorithms in our experiments.
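A typical weighting function in this family of methods maps intensity differences to edge weights through a Gaussian, with connectivity and the scale parameter as the knobs under study. The sketch below builds such weights for 4-connectivity on a small image; it is a generic illustration, not the specific functions compared in the paper.

```python
# Gaussian intensity weighting for a 4-connected image graph (generic sketch).
import numpy as np

def edge_weights(image, beta=90.0, eps=1e-6):
    """Return an array of (i, j, w) rows for 4-connected pixel pairs (flattened indices)."""
    h, w = image.shape
    idx = np.arange(h * w).reshape(h, w)
    flat = image.ravel()
    edges = []
    for di, dj in [(0, 1), (1, 0)]:                    # right and down neighbours
        a = idx[:h - di, :w - dj].ravel()
        b = idx[di:, dj:].ravel()
        diff = (flat[a] - flat[b]) ** 2
        weight = np.exp(-beta * diff) + eps            # large weight = similar intensity
        edges.append(np.column_stack([a, b, weight]))
    return np.vstack(edges)

img = np.random.rand(4, 4)
E = edge_weights(img)
print(E.shape, "edges; weight range:", E[:, 2].min(), E[:, 2].max())
```

Adding diagonal neighbour offsets to the loop turns the same sketch into an 8-connected graph, which is exactly the kind of topology change whose effect the paper measures.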
Sommers, M S; Kirk, K I; Pisoni, D B
1997-04-01
The purpose of the present studies was to assess the validity of using closed-set response formats to measure two cognitive processes essential for recognizing spoken words: perceptual normalization (the ability to accommodate acoustic-phonetic variability) and lexical discrimination (the ability to isolate words in the mental lexicon). In addition, the experiments were designed to examine the effects of response format on evaluation of these two abilities in normal-hearing (NH), noise-masked normal-hearing (NMNH), and cochlear implant (CI) subject populations. The speech recognition performance of NH, NMNH, and CI listeners was measured using both open- and closed-set response formats under a number of experimental conditions. To assess talker normalization abilities, identification scores for words produced by a single talker were compared with recognition performance for items produced by multiple talkers. To examine lexical discrimination, performance for words that are phonetically similar to many other words (hard words) was compared with scores for items with few phonetically similar competitors (easy words). Open-set word identification for all subjects was significantly poorer when stimuli were produced in lists with multiple talkers compared with conditions in which all of the words were spoken by a single talker. Open-set word recognition also was better for lexically easy compared with lexically hard words. Closed-set tests, in contrast, failed to reveal the effects of either talker variability or lexical difficulty even when the response alternatives provided were systematically selected to maximize confusability with target items. These findings suggest that, although closed-set tests may provide important information for clinical assessment of speech perception, they may not adequately evaluate a number of cognitive processes that are necessary for recognizing spoken words. The parallel results obtained across all subject groups indicate that NH, NMNH, and CI listeners engage similar perceptual operations to identify spoken words. Implications of these findings for the design of new test batteries that can provide comprehensive evaluations of the individual capacities needed for processing spoken language are discussed.
NASA Astrophysics Data System (ADS)
Mia, Mozammel; Al Bashir, Mahmood; Dhar, Nikhil Ranjan
2016-10-01
Hard turning is increasingly employed in machining, lately, to replace time-consuming conventional turning followed by a grinding process. An excessive amount of tool wear in hard turning is one of the main hurdles to be overcome. Many researchers have developed tool wear models, but most of them did so for a particular work-tool-environment combination. No aggregate model has been developed that can be used to predict the amount of principal flank wear for a specific machining time. An empirical model of principal flank wear (VB) has been developed for different hardnesses of workpiece (HRC40, HRC48 and HRC56) while turning by coated carbide insert with different configurations (SNMM and SNMG) under both dry and high pressure coolant conditions. Unlike other developed models, this model includes the use of dummy variables along with the base empirical equation to entail the effect of any changes in the input conditions on the response. The base empirical equation for principal flank wear is formulated adopting the Exponential Associate Function using the experimental results. The coefficient of a dummy variable reflects the shifting of the response from one set of machining conditions to another set of machining conditions, which is determined by simple linear regression. The independent cutting parameters (cutting speed, feed rate, depth of cut) are kept constant while formulating and analyzing this model. The developed model is validated with different sets of machining responses in turning hardened medium carbon steel by coated carbide inserts. For any particular set, the model can be used to predict the amount of principal flank wear for a specific machining time. Since the predicted results exhibit good resemblance with experimental data and the average percentage error is <10%, this model can be used to predict the principal flank wear for the stated conditions.
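A simplified version of the modeling idea, an exponential-association wear curve plus a dummy-variable shift between machining conditions, can be sketched as follows. The data are synthetic, and the functional form VB(t) = a(1 - exp(-t/b)) is one common exponential-association parameterization, not necessarily the exact equation fitted in the study.

```python
# Illustrative sketch (synthetic data): fit an exponential-association wear
# curve to a baseline condition and estimate a dummy-variable shift that moves
# the curve to a second machining condition.
import numpy as np
from scipy.optimize import curve_fit

def wear(t, a, b):
    return a * (1.0 - np.exp(-t / b))

t = np.linspace(1, 40, 12)                                            # machining time, min
vb_base = wear(t, 0.30, 15.0) + np.random.normal(0, 0.005, t.size)    # baseline condition
vb_other = vb_base + 0.06 + np.random.normal(0, 0.005, t.size)        # e.g. a harder workpiece

(a, b), _ = curve_fit(wear, t, vb_base, p0=[0.3, 10.0])
# dummy-variable coefficient = mean shift of the second condition from the base curve
shift = float(np.mean(vb_other - wear(t, a, b)))
print(f"base curve: a={a:.3f} mm, b={b:.1f} min; dummy shift = {shift:.3f} mm")

def predict_vb(t, condition_dummy):        # 0 = baseline, 1 = shifted condition
    return wear(t, a, b) + shift * condition_dummy

print("predicted VB at 30 min, shifted condition:", round(predict_vb(30.0, 1), 3), "mm")
```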
Estimation and Application of Ecological Memory Functions in Time and Space
NASA Astrophysics Data System (ADS)
Itter, M.; Finley, A. O.; Dawson, A.
2017-12-01
A common goal in quantitative ecology is the estimation or prediction of ecological processes as a function of explanatory variables (or covariates). Frequently, the ecological process of interest and associated covariates vary in time, space, or both. Theory indicates many ecological processes exhibit memory to local, past conditions. Despite such theoretical understanding, few methods exist to integrate observations from the recent past or within a local neighborhood as drivers of these processes. We build upon recent methodological advances in ecology and spatial statistics to develop a Bayesian hierarchical framework to estimate so-called ecological memory functions; that is, weight-generating functions that specify the relative importance of local, past covariate observations to ecological processes. Memory functions are estimated using a set of basis functions in time and/or space, allowing for flexible ecological memory based on a reduced set of parameters. Ecological memory functions are entirely data driven under the Bayesian hierarchical framework—no a priori assumptions are made regarding functional forms. Memory function uncertainty follows directly from posterior distributions for model parameters allowing for tractable propagation of error to predictions of ecological processes. We apply the model framework to simulated spatio-temporal datasets generated using memory functions of varying complexity. The framework is also applied to estimate the ecological memory of annual boreal forest growth to local, past water availability. Consistent with ecological understanding of boreal forest growth dynamics, memory to past water availability peaks in the year previous to growth and slowly decays to zero in five to eight years. The Bayesian hierarchical framework has applicability to a broad range of ecosystems and processes allowing for increased understanding of ecosystem responses to local and past conditions and improved prediction of ecological processes.
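The memory-function idea, lag weights expressed through a small set of basis functions and applied to past covariate values, can be sketched directly. The basis choice, coefficients, and simulated covariate below are hypothetical; in the framework described above these quantities are estimated within the Bayesian hierarchical model rather than fixed by hand.

```python
# Hypothetical memory function: lag weights built from a few basis functions,
# normalized to sum to one, and applied to past covariate values.
import numpy as np

def memory_weights(coef, max_lag=8):
    lags = np.arange(max_lag + 1)
    # three Gaussian bumps centred at lags 0, 2 and 5 as a simple basis
    basis = np.stack([np.exp(-0.5 * ((lags - c) / 1.5) ** 2) for c in (0, 2, 5)])
    w = np.maximum(coef @ basis, 0.0)
    return w / w.sum()

def weighted_history(x, weights):
    """Memory-weighted covariate at each time (needs len(weights)-1 lags of history)."""
    L = len(weights) - 1
    return np.array([np.dot(weights, x[t - L:t + 1][::-1]) for t in range(L, len(x))])

water = np.random.gamma(2.0, 1.0, size=40)            # e.g. annual water availability
w = memory_weights(coef=np.array([0.2, 1.0, 0.3]))    # memory peaking at a 2-year lag
print("lag weights:", np.round(w, 3))
print("first few memory-weighted values:", np.round(weighted_history(water, w)[:3], 2))
```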
The Condition for Generous Trust.
Shinya, Obayashi; Yusuke, Inagaki; Hiroki, Takikawa
2016-01-01
Trust has been considered the "cement" of a society and is much studied in sociology and other social sciences. Most studies, however, have neglected one important aspect of trust: it involves an act of forgiving and showing tolerance toward another's failure. In this study, we refer to this concept as "generous trust" and examine the conditions under which generous trust becomes a more viable option when compared to other types of trust. We investigate two settings. First, we introduce two types of uncertainties: uncertainty as to whether trustees have the intention to cooperate, and uncertainty as to whether trustees have enough competence to accomplish the entrusted tasks. Second, we examine the manner in which trust functions in a broader social context, one that involves matching and commitment processes. Since we expect generosity or forgiveness to work differently in the matching and commitment processes, we must differentiate trust strategies into generous trust in the matching process and that in the commitment process. Our analytical strategy is two-fold. First, we analyze the "modified" trust game that incorporates the two types of uncertainties without the matching process. This simplified setting enables us to derive mathematical results using game theory, thereby giving basic insight into the trust mechanism. Second, we investigate socially embedded trust relationships in contexts involving the matching and commitment processes, using agent-based simulation. Results show that uncertainty about partner's intention and competence makes generous trust a viable option. In contrast, too much uncertainty undermines the possibility of generous trust. Furthermore, a strategy that is too generous cannot stand alone. Generosity should be accompanied with moderate punishment. As for socially embedded trust relationships, generosity functions differently in the matching process versus the commitment process. Indeed, these two types of generous trust coexist, and their coexistence enables a society to function well.
NASA Astrophysics Data System (ADS)
Li, Bao-Sheng; Wang, Yuhuang; Proctor, Rupert S. J.; Zhang, Yuexia; Webster, Richard D.; Yang, Song; Song, Baoan; Chi, Yonggui Robin
2016-09-01
Benzyl bromides and related molecules are among the most common substrates in organic synthesis. They are typically used as electrophiles in nucleophilic substitution reactions. These molecules can also be activated via single-electron-transfer (SET) process for radical reactions. Representative recent progress includes α-carbon benzylation of ketones and aldehydes via photoredox catalysis. Here we disclose the generation of (nitro)benzyl radicals via N-heterocyclic carbene (NHC) catalysis under reductive conditions. The radical intermediates generated via NHC catalysis undergo formal 1,2-addition with ketones to eventually afford tertiary alcohol products. The overall process constitutes a formal polarity-inversion of benzyl bromide, allowing a direct coupling of two initially electrophilic carbons. Our study provides a new carbene-catalysed reaction mode that should enable unconventional transformation of (nitro)benzyl bromides under mild organocatalytic conditions.
Opportunity NYC-Family Rewards: An Embedded Child and Family Study of Conditional Cash Transfers
ERIC Educational Resources Information Center
Morris, Pamela; Aber, J. Lawrence; Wolf, Sharon; Berg, Juliette
2011-01-01
This study builds on and informs ecological theory (Bronfenbrenner & Morris, 2006) by focusing on the contextual processes by which individual developmental trajectories can be altered. Ecological theory posits that children are embedded in a nested and interactive set of interrelated contexts beginning with the micro-system (the most…
8 CFR 207.3 - Waivers of inadmissibility.
Code of Federal Regulations, 2010 CFR
2010-01-01
... REFUGEES § 207.3 Waivers of inadmissibility. (a) Authority. Section 207(c)(3) of the Act sets forth grounds... waived in the case of an otherwise qualified refugee and the conditions under which such waivers may be... Refugee for Waiver of Grounds of Inadmissibility, with the Service office processing his or her case. The...
Using Behavioral Interventions to Assist Children with Type 1 Diabetes Manage Blood Glucose Levels
ERIC Educational Resources Information Center
Lasecki, Kim; Olympia, Daniel; Clark, Elaine; Jenson, William; Heathfield, Lora Tuesday
2008-01-01
Treatment and management of chronic disease processes on children occurs across multiple settings, placing demands for consultation and expertise on school personnel, including school psychologists. One such chronic condition in children is type I diabetes. Children with type I insulin dependent diabetes mellitus exhibit high rates of…
ERIC Educational Resources Information Center
Crowder, Kyle; Teachman, Jay
2004-01-01
Persistent effects of childhood living arrangements and family change on adolescent outcomes have often been attributed to differences in socialization and intrafamily processes. We use data from the Panel Study of Income Dynamics to assess an alternative explanation: that neighborhood context and residential mobility represent a central set of…
Fixed point theorems and dissipative processes
NASA Technical Reports Server (NTRS)
Hale, J. K.; Lopes, O.
1972-01-01
The deficiencies of the theories that characterize the maximal compact invariant set of T as asymptotically stable, and of those asserting that some iterate of T has a fixed point, are discussed. It is shown that this fixed point condition is always satisfied for condensing and local dissipative T. Applications are given to a class of neutral functional differential equations.
Conundrums, paradoxes, and surprises: a brave new world of biodiversity conservation
A.E. Lugo
2012-01-01
Anthropogenic activity is altering the global disturbance regime through such processes as urbanization, deforestation, and climate change. These disturbance events alter the environmental conditions under which organisms live and adapt and trigger succession, thus setting the biota in motion in both ecological and evolutionary space. The result is the mixing of...
Assessment of Abdominal Pain in School-Age Children
ERIC Educational Resources Information Center
Zimmermann, Polly Gerber
2003-01-01
Pediatric abdominal pain can be a difficult condition to accurately assess for the nurse to determine whether the child's need is for teaching, treating, or transferring. This article describes the process as well as practical tips to be used by the nurse in the school setting. Distinguishing characteristics and findings, including key physical…
Time Management and Professional Identity of Students of Pedagogical Universities
ERIC Educational Resources Information Center
Lebedeva, Ekaterina V.; Shchipanova, Dina Ye.; Konovalova, Maria E.; Kutyin, Anton O.
2016-01-01
Topicality of the problem under research is stipulated by the necessity of personal characteristics consideration in the process of organization of educational and vocational activities of the future teachers in the conditions of educational medium, which sets high requirements to the students' time competence. The aim of the article is to study…
Conditions in the Reader that Affect His Embodiment of the Text.
ERIC Educational Resources Information Center
Myers, Jeanette S.
Three factors in the reader have a generalized effect on all perception, including reading: competence, purpose, and set. Competence involves applying past learning to new learning through transference, understanding the conventions of different types of texts, and transforming the text through the perceptual process into a new entity. Competent…
Uyeda, Christopher; Tan, Yichen; Fu, Gregory C; Peters, Jonas C
2013-06-26
Building on the known photophysical properties of well-defined copper-carbazolide complexes, we have recently described photoinduced, copper-catalyzed N-arylations and N-alkylations of carbazoles. Until now, there have been no examples of the use of other families of heteroatom nucleophiles in such photoinduced processes. Herein, we report a versatile photoinduced, copper-catalyzed method for coupling aryl thiols with aryl halides, wherein a single set of reaction conditions, using inexpensive CuI as a precatalyst without the need for an added ligand, is effective for a wide range of coupling partners. As far as we are aware, copper-catalyzed C-S cross-couplings at 0 °C have not previously been achieved, which renders our observation of efficient reaction of an unactivated aryl iodide at -40 °C especially striking. Mechanistic investigations are consistent with these photoinduced C-S cross-couplings following a SET/radical pathway for C-X bond cleavage (via a Cu(I)-thiolate), which contrasts with nonphotoinduced, copper-catalyzed processes wherein a concerted mechanism is believed to occur.
Analysis of the decision-making process leading to appendectomy: a grounded theory study.
Larsson, Gerry; Weibull, Henrik; Larsson, Bodil Wilde
2004-11-01
The aim was to develop a theoretical understanding of the decision-making process leading to appendectomy. A qualitative interview study was performed in the grounded theory tradition using the constant comparative method to analyze data. The study setting was one county hospital and two local hospitals in Sweden, where 11 surgeons and 15 surgical nurses were interviewed. A model was developed which suggests that surgeons' decision making regarding appendectomy is formed by the interplay between their medical assessment of the patient's condition and a set of contextual characteristics. The latter consist of three interacting factors: (1) organizational conditions, (2) the professional actors' individual characteristics and interaction, and (3) the personal characteristics of the patient and his or her family or relatives. In case the outcome of medical assessment is ambiguous, the risk evaluation and final decision will be influenced by an interaction of the contextual characteristics. It was concluded that, compared to existing, rational models of decision making, the model presented identified potentially important contextual characteristics and an outline on when they come into play.
Modeling Growth of Nanostructures in Plasmas
NASA Technical Reports Server (NTRS)
Hwang, Helen H.; Bose, Deepak; Govindan, T. R.; Meyyappan, M.
2004-01-01
As semiconductor circuits shrink to critical dimensions (CDs) below 0.1 μm, it is becoming increasingly critical to replace and/or enhance existing technology with nanoscale structures, such as nanowires for interconnects. Nanowires grown in plasmas are strongly dependent on processing conditions, such as gas composition and substrate temperature. Growth occurs at specific sites, or step-edges, with the bulk growth rate of the nanowires determined from the equation of motion of the nucleating crystalline steps. Traditional front-tracking algorithms, such as string-based or level set methods, suffer either from numerical complications in higher spatial dimensions, or from difficulties in incorporating surface-intense physical and chemical phenomena. Phase field models have the robustness of the level set method, combined with the ability to implement surface-specific chemistry that is required to model crystal growth, although they do not necessarily directly solve for the advancing front location. We have adopted a phase field approach and will present results of the adatom density and step-growth location in time as a function of processing conditions, such as temperature and plasma gas composition.
Dynamic design of ecological monitoring networks for non-Gaussian spatio-temporal data
Wikle, C.K.; Royle, J. Andrew
2005-01-01
Many ecological processes exhibit spatial structure that changes over time in a coherent, dynamical fashion. This dynamical component is often ignored in the design of spatial monitoring networks. Furthermore, ecological variables related to processes such as habitat are often non-Gaussian (e.g. Poisson or log-normal). We demonstrate that a simulation-based design approach can be used in settings where the data distribution is from a spatio-temporal exponential family. The key random component in the conditional mean function from this distribution is then a spatio-temporal dynamic process. Given the computational burden of estimating the expected utility of various designs in this setting, we utilize an extended Kalman filter approximation to facilitate implementation. The approach is motivated by, and demonstrated on, the problem of selecting sampling locations to estimate July brood counts in the prairie pothole region of the U.S.
NASA Astrophysics Data System (ADS)
Galewsky, J.
2017-12-01
Understanding the processes that govern the relationships between lower tropospheric stability and low-cloud cover is crucial for improved constraints on low-cloud feedbacks and for improving the parameterizations of low-cloud cover used in climate models. The stable isotopic composition of atmospheric water vapor is a sensitive recorder of the balance of moistening and drying processes that set the humidity of the lower troposphere and may thus provide a useful framework for improving our understanding of low-cloud processes. In-situ measurements of water vapor isotopic composition collected at the NOAA Mauna Loa Observatory in Hawaii, along with twice-daily soundings from Hilo and remote sensing of cloud cover, show a clear inverse relationship between the estimated inversion strength (EIS) and the mixing ratios and water vapor δ values, and a positive relationship between EIS, deuterium excess, and ΔδD, defined as the difference between an observation and a reference Rayleigh distillation curve. These relationships are consistent with reduced moistening and an enhanced upper-tropospheric contribution above the trade inversion under high EIS conditions and stronger moistening under weaker EIS conditions. The cloud fraction, cloud liquid water path, and cloud-top pressure were all found to be higher under low EIS conditions. Inverse modeling of the isotopic data for the highest and lowest terciles of EIS conditions provides quantitative constraints on the cold-point temperatures and mixing fractions that govern the humidity above the trade inversion. The modeling shows that the moistening fraction between moist boundary-layer air and dry middle-tropospheric air is 24±1.5% under low EIS conditions and 6±1.5% under high EIS conditions. A cold-point (last-saturation) temperature of -30 °C can match the observations for both low and high EIS conditions. The isotopic composition of the moistening source as derived from the inversion (-114±10‰) requires moderate fractionation from a pure marine source, indicating a link between inversion strength and moistening of the lower troposphere from the outflow of shallow convection. This approach can be applied in other settings and the results can be used to test parameterizations in climate models.
Political dreams, practical boundaries: the case of the Nursing Minimum Data Set, 1983-1990.
Hobbs, Jennifer
2011-01-01
The initial development of the Nursing Minimum Data Set (NMDS) was analyzed based on archival material from Harriet Werley and Norma Lang, two nurses involved with the project, and American Nurses Association materials. The process of identifying information to be included in the NMDS was contentious. Individual nurses argued on behalf of particular data because of a strong belief in how nursing practice (through information collection) should be structured. Little attention was paid to existing practice conditions that would ultimately determine whether the NMDS would be used.
Subset selective search on the basis of color and preview.
Donk, Mieke
2017-01-01
In the preview paradigm observers are presented with one set of elements (the irrelevant set) followed by the addition of a second set among which the target is presented (the relevant set). Search efficiency in such a preview condition has been demonstrated to be higher than that in a full-baseline condition in which both sets are simultaneously presented, suggesting that a preview of the irrelevant set reduces its influence on the search process. However, numbers of irrelevant and relevant elements are typically not independently manipulated. Moreover, subset selective search also occurs when both sets are presented simultaneously but differ in color. The aim of the present study was to investigate how numbers of irrelevant and relevant elements contribute to preview search in the absence and presence of a color difference between subsets. In two experiments it was demonstrated that a preview reduced the influence of the number of irrelevant elements in the absence but not in the presence of a color difference between subsets. In the presence of a color difference, a preview lowered the effect of the number of relevant elements but only when the target was defined by a unique feature within the relevant set (Experiment 1); when the target was defined by a conjunction of features (Experiment 2), search efficiency as a function of the number of relevant elements was not modulated by a preview. Together the results are in line with the idea that subset selective search is based on different simultaneously operating mechanisms.
NASA Astrophysics Data System (ADS)
Kiss, Gabriella B.; Zagyva, Tamás; Pásztor, Domokos; Zaccarini, Federica
2018-05-01
The Jurassic pillow basalt of the NE Hungarian Szarvaskő Unit is part of an incomplete ophiolitic sequence, formed in a back-arc- or marginal basin of Neotethyan origin. Different, often superimposing hydrothermal processes were studied aiming to characterise them and to discover their relationship with the geotectonic evolution of the region. Closely packed pillow, pillow-fragmented hyaloclastite breccia and transition to peperitic facies of a submarine lava flow were observed. The rocks underwent primary and cooling-related local submarine hydrothermal processes immediately after eruption at ridge setting. Physico-chemical data of this process and volcanic facies analyses revealed distal formation in the submarine lava flow. A superimposing, more extensive fluid circulation system resulted in intense alteration of basalt and in the formation of mostly sulphide-filled cavities. This lower temperature, but larger-scale process was similar to VMS systems and was related to ridge setting. As a peculiarity of the Szarvaskő Unit, locally basalt may be completely altered to a grossular-bearing mineral assemblage formed by rodingitisation s.l. This unique process observed in basalt happened in ridge setting/during spreading, in the absence of known large ultramafic blocks. Epigenetic veins formed also during Alpine regional metamorphism, related to subduction/obduction. The observed hydrothermal minerals represent different steps of the geotectonic evolution of the Szarvaskő Unit, from the ridge setting and spreading till the subduction/obduction. Hence, studying the superimposing alteration mineral assemblages can be a useful tool for reconstructing the tectonic history of an ophiolitic complex. Though the found mineral parageneses are often similar, careful study can help in distinguishing the processes and characterising their P, T, and X conditions.
Oberauer, Klaus
2018-03-12
To function properly, working memory must be rapidly updated. Updating requires the removal of information no longer relevant. I present six experiments designed to explore the boundary conditions and the time course of removal. A condition in which three out of six memory items can be removed was compared to two baseline conditions in which either three or six items were encoded and maintained in working memory. The time for removal was varied. In experiment 1, in the removal condition, a distinct subset of three words was cued to be irrelevant after encoding all six words. With longer removal time, response times in the removal condition approximated those in the set-size 3 baseline, but accuracies stayed at the set-size 6 level. In experiment 2, in which a random subset of three words was cued as irrelevant, there was no evidence for removal. Experiments 3 and 4 showed that when each item is cued as relevant or irrelevant after its encoding, irrelevant items can be removed rapidly and completely. Experiments 5 and 6 showed that complete removal was no longer possible when words had to be processed before being cued as irrelevant. The pattern of findings can be explained by distinguishing two forms of removal: deactivation removes working-memory contents from the set of competitors for retrieval; unbinding contents from their contexts removes them from working memory entirely, so that they also cease to compete for limited capacity. © 2018 New York Academy of Sciences.
Bratzke, Lisa C.; Muehrer, Rebecca J.; Kehl, Karen A.; Lee, Kyoung Suk; Ward, Earlise C.; Kwekkeboom, Kristine L.
2014-01-01
Objectives The purpose of this narrative review was to synthesize current research findings related to self-management, in order to better understand the processes of priority setting and decision-making among adults with multimorbidity. Design A narrative literature review was undertaken, synthesizing findings from published, peer-reviewed empirical studies that addressed priority setting and/or decision-making in self-management of multimorbidity. Data sources A search of PubMed, PsychINFO, CINAHL and SocIndex databases was conducted from database inception through December 2013. Reference lists from selected empirical studies and systematic reviews were evaluated to identify any additional relevant articles. Review methods Full texts of potentially eligible articles were reviewed and selected for inclusion if they described empirical studies that addressed priority setting or decision-making in self-management of multimorbidity among adults. Two independent reviewers read each selected article and extracted relevant data to an evidence table. Processes and factors of multimorbidity self-management were identified and sorted into categories of priority setting, decision-making, and facilitators/barriers. Results Thirteen articles were selected for inclusion; most were qualitative studies describing processes, facilitators, and barriers of multimorbidity self-management. The findings revealed that patients prioritize a dominant chronic illness and re-prioritize over time as conditions and treatments change; that multiple facilitators (e.g. support programs) and barriers (e.g. lack of financial resources) impact individuals’ self-management priority setting and decision-making ability; as do individual beliefs, preferences, and attitudes (e.g., perceived personal control, preferences regarding treatment). Conclusions Health care providers need to be cognizant that individuals with multimorbidity engage in day-to-day priority setting and decision-making among their multiple chronic illnesses and respective treatments. Researchers need to develop and test interventions that support day-to-day priority setting and decision-making and improve health outcomes for individuals with multimorbidity. PMID:25468131
Budak, Gungor; Srivastava, Rajneesh; Janga, Sarath Chandra
2017-06-01
RNA-binding proteins (RBPs) control the regulation of gene expression in eukaryotic genomes at post-transcriptional level by binding to their cognate RNAs. Although several variants of CLIP (crosslinking and immunoprecipitation) protocols are currently available to study the global protein-RNA interaction landscape at single-nucleotide resolution in a cell, currently there are very few tools that can facilitate understanding and dissecting the functional associations of RBPs from the resulting binding maps. Here, we present Seten, a web-based and command line tool, which can identify and compare processes, phenotypes, and diseases associated with RBPs from condition-specific CLIP-seq profiles. Seten uses BED files resulting from most peak calling algorithms, which include scores reflecting the extent of binding of an RBP on the target transcript, to provide both traditional functional enrichment as well as gene set enrichment results for a number of gene set collections including BioCarta, KEGG, Reactome, Gene Ontology (GO), Human Phenotype Ontology (HPO), and MalaCards Disease Ontology for several organisms including fruit fly, human, mouse, rat, worm, and yeast. It also provides an option to dynamically compare the associated gene sets across data sets as bubble charts, to facilitate comparative analysis. Benchmarking of Seten using eCLIP data for IGF2BP1, SRSF7, and PTBP1 against their corresponding CRISPR RNA-seq in K562 cells as well as randomized negative controls, demonstrated that its gene set enrichment method outperforms functional enrichment, with scores significantly contributing to the discovery of true annotations. Comparative performance analysis using these CRISPR control data sets revealed significantly higher precision and comparable recall to that observed using ChIP-Enrich. Seten's web interface currently provides precomputed results for about 200 CLIP-seq data sets and both command line as well as web interfaces can be used to analyze CLIP-seq data sets. We highlight several examples to show the utility of Seten for rapid profiling of various CLIP-seq data sets. Seten is available on http://www.iupui.edu/∼sysbio/seten/. © 2017 Budak et al.; Published by Cold Spring Harbor Laboratory Press for the RNA Society.
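To make the distinction between traditional functional enrichment and score-based gene set enrichment concrete, the sketch below contrasts a hypergeometric over-representation test with a rank-based test on binding scores. It is a generic illustration of the idea, not Seten's actual algorithm; the gene scores, the binding threshold, and the toy gene set are invented for the example.

```python
# Minimal sketch contrasting over-representation analysis (ORA) with a
# score-based gene set test, in the spirit of (but not identical to) Seten.
# The gene scores and the example gene set below are made up for illustration.
from scipy.stats import hypergeom, mannwhitneyu

# Hypothetical RBP binding scores per gene, e.g. aggregated from CLIP-seq peaks.
gene_scores = {"GENE%d" % i: s for i, s in enumerate(
    [9.1, 7.4, 6.8, 5.5, 5.2, 3.3, 2.9, 2.1, 1.8, 1.0, 0.9, 0.4])}
bound_genes = {g for g, s in gene_scores.items() if s >= 3.0}   # "bound" set
universe = set(gene_scores)                                     # background
pathway = {"GENE0", "GENE2", "GENE4", "GENE9"}                  # toy gene set

# 1) Traditional functional enrichment: hypergeometric over-representation test.
k = len(bound_genes & pathway)                 # bound genes in the pathway
p_ora = hypergeom.sf(k - 1, len(universe), len(pathway), len(bound_genes))

# 2) Score-based gene set enrichment: do pathway genes carry higher scores?
in_set = [gene_scores[g] for g in pathway]
out_set = [gene_scores[g] for g in universe - pathway]
_, p_gse = mannwhitneyu(in_set, out_set, alternative="greater")

print(f"ORA p = {p_ora:.3f}, score-based p = {p_gse:.3f}")
```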
NASA Astrophysics Data System (ADS)
Li, H.; Plink-Bjorklund, P.
2017-12-01
Studies (e.g., Jerolmack and Paola, 2010) have suggested that autogenic processes act as a filter for high-frequency environmental signals, and the underlying assumption is that autogenic processes can cause fluctuations in sediment and water discharge that modify or shred the signal. This assumption, however, fails to recognize that autogenic processes and their final products are dynamic and that they can respond to allogenic forcings. We compile a database containing published field studies, physical experiments, and numerical modeling studies, and analyze the data under different boundary conditions. Our analyses suggest different conclusions. Autogenic processes are intrinsic to the sedimentary system, and they possess distinct patterns under steady boundary conditions. Upon changing boundary conditions, the autogenic patterns are also likely to change (depending on the magnitude of the change in the boundary conditions). Therefore, the pattern change provides us with the opportunity to restore the high-frequency signals that may not pass through the transfer zone. Here we present the theoretical basis for using autogenic deposits to infer high-frequency signals as well as modern and ancient field examples, physical experiments, and modeling studies to illustrate the autogenic response to allogenic forcings. The field studies show the potential of using autogenic deposits to restore short-term climatic variability. The experiments demonstrate that autogenic processes in rivers are closely linked to sediment and water discharge. The modeling examples reveal the counteracting effects of some autogenic processes to form a self-organized pattern under a set of specific boundary conditions. We also highlight the limitations and challenges that require further research before high-frequency signals can be restored. Some critical issues include the magnitude of the signals, the effect of the interference between different signals, and the incompleteness of the autogenic deposits.
Setting Healthcare Priorities at the Macro and Meso Levels: A Framework for Evaluation
Barasa, Edwine W.; Molyneux, Sassy; English, Mike; Cleary, Susan
2015-01-01
Background: Priority setting in healthcare is a key determinant of health system performance. However, there is no widely accepted priority setting evaluation framework. We reviewed literature with the aim of developing and proposing a framework for the evaluation of macro and meso level healthcare priority setting practices. Methods: We systematically searched Econlit, PubMed, CINAHL, and EBSCOhost databases and supplemented this with searches in Google Scholar, relevant websites and reference lists of relevant papers. A total of 31 papers on evaluation of priority setting were identified. These were supplemented by broader theoretical literature related to evaluation of priority setting. A conceptual review of selected papers was undertaken. Results: Based on a synthesis of the selected literature, we propose an evaluative framework that requires that priority setting practices at the macro and meso levels of the health system meet the following conditions: (1) Priority setting decisions should incorporate both efficiency and equity considerations as well as the following outcomes; (a) Stakeholder satisfaction, (b) Stakeholder understanding, (c) Shifted priorities (reallocation of resources), and (d) Implementation of decisions. (2) Priority setting processes should also meet the procedural conditions of (a) Stakeholder engagement, (b) Stakeholder empowerment, (c) Transparency, (d) Use of evidence, (e) Revisions, (f) Enforcement, and (g) Being grounded on community values. Conclusion: Available frameworks for the evaluation of priority setting are mostly grounded on procedural requirements, while few have included outcome requirements. There is, however, increasing recognition of the need to incorporate both consequential and procedural considerations in priority setting practices. In this review, we adapt an integrative approach to develop and propose a framework for the evaluation of priority setting practices at the macro and meso levels that draws from these complementary schools of thought. PMID:26673332
Setting Healthcare Priorities at the Macro and Meso Levels: A Framework for Evaluation.
Barasa, Edwine W; Molyneux, Sassy; English, Mike; Cleary, Susan
2015-09-16
Priority setting in healthcare is a key determinant of health system performance. However, there is no widely accepted priority setting evaluation framework. We reviewed literature with the aim of developing and proposing a framework for the evaluation of macro and meso level healthcare priority setting practices. We systematically searched Econlit, PubMed, CINAHL, and EBSCOhost databases and supplemented this with searches in Google Scholar, relevant websites and reference lists of relevant papers. A total of 31 papers on evaluation of priority setting were identified. These were supplemented by broader theoretical literature related to evaluation of priority setting. A conceptual review of selected papers was undertaken. Based on a synthesis of the selected literature, we propose an evaluative framework that requires that priority setting practices at the macro and meso levels of the health system meet the following conditions: (1) Priority setting decisions should incorporate both efficiency and equity considerations as well as the following outcomes; (a) Stakeholder satisfaction, (b) Stakeholder understanding, (c) Shifted priorities (reallocation of resources), and (d) Implementation of decisions. (2) Priority setting processes should also meet the procedural conditions of (a) Stakeholder engagement, (b) Stakeholder empowerment, (c) Transparency, (d) Use of evidence, (e) Revisions, (f) Enforcement, and (g) Being grounded on community values. Available frameworks for the evaluation of priority setting are mostly grounded on procedural requirements, while few have included outcome requirements. There is, however, increasing recognition of the need to incorporate both consequential and procedural considerations in priority setting practices. In this review, we adapt an integrative approach to develop and propose a framework for the evaluation of priority setting practices at the macro and meso levels that draws from these complementary schools of thought. © 2015 by Kerman University of Medical Sciences.
NASA Astrophysics Data System (ADS)
De Lange, G. J.; Krijgsman, W.
2015-12-01
The Messinian Salinity Crisis (MSC) is a dramatic event that took place ~ 5.9 Ma ago, resulting in deposition of 1-3 km thick evaporites at the Mediterranean seafloor. A considerable, long-lasting controversy existed on the modes of their formation, including the observed shallow gypsum versus deep dolostone deposits for the early phase of the MSC. The onset of the MSC is marked by deposition of gypsum/sapropel-like alternations, thought to relate to arid/humid climate conditions at a precessional rhythm. Gypsum precipitation occurred only at marginal settings, and dolomite formation at deeper settings. A range of potential explanations was given, most of which cannot satisfactorily explain all observations. Biogeochemical processes during the MSC are commonly neglected but may explain why different deposits formed in shallow versus deep environments without requiring exceptional physical boundary conditions for each. A unifying mechanism is presented in which gypsum formation occurs at all shallow water depths but its preservation is limited to shallow sedimentary settings. In contrast, ongoing deep-basin anoxic organic matter (OM) degradation processes result in dolomite formation. Gypsum precipitation in evaporating seawater takes place at 3-7 times concentrated seawater; seawater is always oversaturated relative to dolomite but its formation is inhibited by the presence of dissolved sulphate. Thus conditions for the formation of gypsum exclude those for the formation of dolomite and vice versa. Another process linking the saturation states of gypsum and dolomite is that of OM degradation by sulphate reduction. In stagnant deep water, ongoing OM degradation may reduce the sulphate content and enhance the dissolved carbonate content. Such low-sulphate / high-carbonate conditions in MSC deepwater are unfavorable for gypsum preservation and favorable for dolomite formation, and always coincide with anoxic, i.e. oxygen-free, conditions. Including dynamic biogeochemical processes in the thus far static interpretations of evaporite formation mechanisms can thus account for the paradoxical, isochronous formation of shallow gypsum and deep dolomite during the early MSC (1). (1) De Lange G.J. and Krijgsman W. (2010) Mar. Geol. 275, 273-277
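The coupling invoked above between sulphate loss and carbonate gain can be made explicit with the textbook stoichiometry of organic-matter degradation by sulphate reduction; the schematic reaction below is standard background rather than a result of the study.

```latex
% Textbook stoichiometry of organic-matter degradation by sulphate reduction:
% it removes dissolved sulphate (hindering gypsum preservation) and adds
% dissolved carbonate (favouring dolomite formation).
\[
  2\,\mathrm{CH_2O} + \mathrm{SO_4^{2-}} \;\longrightarrow\;
  \mathrm{H_2S} + 2\,\mathrm{HCO_3^{-}}
\]
```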
Karayanidis, Frini; Nicholson, Rebecca; Schall, Ulrich; Meem, Lydia; Fulham, Ross; Michie, Patricia T
2006-10-01
The present study used behavioral and event-related potential (ERP) indices of task-switching to examine whether schizophrenia patients have a specific deficit in anticipatory task-set reconfiguration. Participants switched between univalent tasks in an alternating runs paradigm with blocked response-stimulus interval (RSI) manipulation (150, 300, 600, and 1200ms). Nineteen high functioning people with schizophrenia were compared to controls that were matched for age, gender, education and premorbid IQ estimate. Schizophrenia patients had overall increased RT, but no increase in corrected RT switch cost. In the schizophrenia group, ERPs showed reduced activation of the differential positivity in anticipation of switch trials at the optimal 600ms RSI and reduced activation of the frontal post-stimulus switch negativity at both 600 and 1200ms RSI compared to the control group. Despite no behavioral differences in task switching performance, anticipatory and stimulus-triggered ERP indices of task-switching suggest group differences in the processing of switch and repeat trials, especially at longer RSI conditions that, for control participants, provide an opportunity for anticipatory activation of task-set reconfiguration processes. These results are compatible with impaired implementation of endogenously driven processes in schizophrenia and greater reliance on external task cues, especially at long preparation intervals.
Reversing the similarity effect: The effect of presentation format.
Cataldo, Andrea M; Cohen, Andrew L
2018-06-01
A context effect is a change in preference that occurs when alternatives are added to a choice set. Models of preferential choice that account for context effects largely assume a within-dimension comparison process. It has been shown, however, that the format in which a choice set is presented can influence comparison strategies. That is, a by-alternative or by-dimension grouping of the dimension values encourages within-alternative or within-dimension comparisons, respectively. For example, one classic context effect, the compromise effect, is strengthened by a by-dimension presentation format. Extrapolation from this result suggests that a second context effect, the similarity effect, will actually reverse when stimuli are presented in a by-dimension format. In the current study, we presented participants with a series of apartment choice sets designed to elicit the similarity effect, with either a by-alternative or by-dimension presentation format. Participants in the by-alternative condition demonstrated a standard similarity effect; however, participants in the by-dimension condition demonstrated a strong reverse similarity effect. The present data can be accounted for by Multialternative Decision Field Theory (MDFT) and the Multiattribute Linear Ballistic Accumulator (MLBA), but not Elimination by Aspects (EBA). Indeed, when some weak assumptions of within-dimension processes are met, MDFT and the MLBA predict the reverse similarity effect. These modeling results suggest that the similarity effect is governed by either forgetting and inhibition (MDFT), or attention to positive or negative differences (MLBA). These results demonstrate that flexibility in the comparison process needs to be incorporated into theories of preferential choice. Copyright © 2018 Elsevier B.V. All rights reserved.
Active and Passive Hydrologic Tomographic Surveys: A Revolution in Hydrology (Invited)
NASA Astrophysics Data System (ADS)
Yeh, T. J.
2013-12-01
Mathematical forward or inverse problems of flow through geological media always have unique solutions if the necessary conditions are given. Unique mathematical solutions to forward or inverse modeling of field problems are, however, always uncertain (an infinite number of possibilities) for many reasons. They include non-representativeness of the governing equations, inaccurate necessary conditions, multi-scale heterogeneity, scale discrepancies between observation and model, noise, and others. Conditional stochastic approaches, which derive the unbiased solution and quantify the solution uncertainty, are therefore most appropriate for forward and inverse modeling of hydrological processes. Conditioning using non-redundant data sets reduces uncertainty. In this presentation, we explain non-redundant data sets in cross-hole aquifer tests, and demonstrate that an active hydraulic tomographic survey (using man-made excitations) is a cost-effective approach to collecting the same type of, but non-redundant, data sets for reducing uncertainty in the inverse modeling. We subsequently show that including flux measurements (a non-redundant piece of the data set) collected in the same well setup as in hydraulic tomography improves the estimated hydraulic conductivity field. We finally conclude with examples and propositions regarding how to collect and analyze data intelligently by exploiting natural recurrent events (river stage fluctuations, earthquakes, lightning, etc.) as energy sources for basin-scale passive tomographic surveys. The development of information fusion technologies that integrate traditional point measurements and active/passive hydrogeophysical tomographic surveys, as well as advances in sensor, computing, and information technologies, may ultimately advance our capability of characterizing groundwater basins to achieve resolution far beyond the reach of current science and technology.
Dodich, Alessandra; Cerami, Chiara; Canessa, Nicola; Crespi, Chiara; Iannaccone, Sandro; Marcone, Alessandra; Realmuto, Sabrina; Lettieri, Giada; Perani, Daniela; Cappa, Stefano F
2015-10-01
Theory of Mind (ToM), the process by which an individual imputes mental states to himself and others, is presently considered as a multidimensional cognitive domain, with two main facets (i.e., cognitive and affective ToM) accounting, respectively, for the ability to understand others' intention (intention attribution-IA) and emotions (emotion attribution-EA). Despite the large amount of literature investigating the behavioural and neural bases of mentalizing abilities in neurological conditions, there is still a lack of validated neuropsychological tools specifically designed to assess such skills. Here, we report the normative data of the Story-Based Empathy Task (SET), a non-verbal test developed for the assessment of intention and emotion attribution in the neurodegenerative conditions characterized by the impairment of social-emotional abilities. It is an easy-to-administer task including 18 stimuli, sub-grouped into two experimental conditions assessing, respectively, the ability to infer others' intentions (SET-IA) and emotions (SET-EA), compared to a control condition of causal inference (SET-CI). Normative data were collected in 136 Italian subjects pooled across subgroups homogenous for age (range 20-79 years), sex, and education (at least 5 years). The results show a detrimental effect of age and a beneficial effect of education on both the global score and each subscale, for which we provide correction grids. This new task could be a useful tool to investigate both affective and cognitive aspects of ToM in the course of disorders of socio-emotional behaviour, such as the fronto-temporal dementia spectrum.
NASA Astrophysics Data System (ADS)
Astarita, Antonello; Boccarusso, Luca; Carrino, Luigi; Durante, Massimo; Minutolo, Fabrizio Memola Capece; Squillace, Antonino
2018-05-01
Polycarbonate sheets, 3 mm thick, were successfully friction stir welded in a butt joint configuration. To study the feasibility of the process and the influence of the process parameters, joints were produced under different processing conditions obtained by varying the tool rotational speed and the tool travel speed. Tensile tests were carried out to characterize the joints. Moreover, the forces arising during the process were recorded and carefully studied. The experimental outcomes proved the feasibility of the process when the process parameters are properly set: joints retaining more than 70% of the UTS of the base material were produced. The trend of the forces was described and explained, and the influence of the process parameters on it was also examined.
Attentional Capture by Superimposed Symbology: Boundary Conditions
NASA Technical Reports Server (NTRS)
McCann, Robert S.; Foyle, David C.; Johnston, James C.; Sridhar, Banavar (Technical Monitor)
1995-01-01
We report new results from an ongoing set of experiments in which subjects view a computer-generated display consisting of a set of stationary symbols (e.g., a "HUD") superimposed on a dynamic view of a runway as it appears to the pilot during final approach. Previous work (McCann, Foyle, & Johnston, 1993) has shown that when subjects process a cueing stimulus and then identify a geometric target, performance is slower when the cue appears on the HUD and the target appears on the runway surface, compared to a control condition where the cue and target are both on the runway surface. This "shift cost" was taken as evidence that the HUD captures attention, which then has to be shifted to the "out-the-world" scene before the target can be identified. New experiments show that the shift cost is eliminated when the cue occupies a known, fixed location on the HUD. Implications for the conditions that produce attentional tunneling are discussed.
Ruiz, Daniel; Cerón, Viviana; Molina, Adriana M.; Quiñónes, Martha L.; Jiménez, Mónica M.; Ahumada, Martha; Gutiérrez, Patricia; Osorio, Salua; Mantilla, Gilma; Connor, Stephen J.; Thomson, Madeleine C.
2014-01-01
As part of the Integrated National Adaptation Pilot project and the Integrated Surveillance and Control System, the Colombian National Institute of Health is working on the design and implementation of a Malaria Early Warning System framework, supported by seasonal climate forecasting capabilities, weather and environmental monitoring, and malaria statistical and dynamic models. In this report, we provide an overview of the local ecoepidemiologic settings where four malaria process-based mathematical models are currently being implemented at a municipal level. The description includes general characteristics, malaria situation (predominant type of infection, malaria-positive cases data, malaria incidence, and seasonality), entomologic conditions (primary and secondary vectors, mosquito densities, and feeding frequencies), climatic conditions (climatology and long-term trends), key drivers of epidemic outbreaks, and non-climatic factors (populations at risk, control campaigns, and socioeconomic conditions). Selected pilot sites exhibit different ecoepidemiologic settings that must be taken into account in the development of the integrated surveillance and control system. PMID:24891460
Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.
Brezis, Noam; Bronfman, Zohar Z; Usher, Marius
2015-06-04
We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two.
Clarke, Luka A; Botelho, Hugo M; Sousa, Lisete; Falcao, Andre O; Amaral, Margarida D
2015-11-01
A meta-analysis of 13 independent microarray data sets was performed and gene expression profiles from cystic fibrosis (CF), similar disorders (COPD: chronic obstructive pulmonary disease, IPF: idiopathic pulmonary fibrosis, asthma), environmental conditions (smoking, epithelial injury), related cellular processes (epithelial differentiation/regeneration), and non-respiratory "control" conditions (schizophrenia, dieting), were compared. Similarity among differentially expressed (DE) gene lists was assessed using a permutation test, and a clustergram was constructed, identifying common gene markers. Global gene expression values were standardized using a novel approach, revealing that similarities between independent data sets run deeper than shared DE genes. Correlation of gene expression values identified putative gene regulators of the CF transmembrane conductance regulator (CFTR) gene, of potential therapeutic significance. Our study provides a novel perspective on CF epithelial gene expression in the context of other lung disorders and conditions, and highlights the contribution of differentiation/EMT and injury to gene signatures of respiratory disease. Copyright © 2015 Elsevier Inc. All rights reserved.
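The similarity assessment mentioned above relies on a permutation test; the sketch below shows one common way such a test for the overlap of two differentially expressed gene lists can be set up. The background size, list sizes, and permutation count are illustrative choices, not those of the meta-analysis.

```python
# Minimal sketch of a permutation test for the overlap of two DE gene lists
# drawn from a shared background; sizes and iteration count are illustrative.
import random

def overlap_pvalue(list_a, list_b, background, n_perm=10_000, seed=0):
    """P(random lists of the same sizes overlap at least as much as observed)."""
    rng = random.Random(seed)
    observed = len(set(list_a) & set(list_b))
    bg = list(background)
    hits = 0
    for _ in range(n_perm):
        ra = set(rng.sample(bg, len(list_a)))
        rb = set(rng.sample(bg, len(list_b)))
        if len(ra & rb) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # add-one correction avoids p = 0

# Toy example: 20 000-gene background, two 300-gene DE lists sharing 40 genes.
background = [f"g{i}" for i in range(20_000)]
shared = background[:40]
de_list_1 = shared + background[40:300]
de_list_2 = shared + background[300:560]
print(overlap_pvalue(de_list_1, de_list_2, background))
```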
Amann, Julia; Zanini, Claudia; Rubinelli, Sara
2016-01-01
Background In order to adapt to societal changes, healthcare systems need to switch from a disease orientation to a patient-centered approach. Virtual patient networks are a promising tool to favor this switch and much can be learned from the open and user innovation literature where the involvement of online user communities in the innovation process is well-documented. Objectives The objectives of this study were 1) to describe the use of online communities as a tool to capture and harness innovative ideas of end users or consumers; and 2) to point to the potential value and challenges of these virtual platforms to function as a tool to inform and promote patient-centered care in the context of chronic health conditions. Methods A scoping review was conducted. A total of seven databases were searched for scientific articles published in English between 1995 and 2014. The search strategy was refined through an iterative process. Results A total of 144 studies were included in the review. Studies were coded inductively according to their research focus to identify groupings of papers. The first set of studies focused on the interplay of factors related to user roles, motivations, and behaviors that shape the innovation process within online communities. Studies of the second set examined the role of firms in online user innovation initiatives, identifying different organizational strategies and challenges. The third set of studies focused on the idea selection process and measures of success with respect to online user innovation initiatives. Finally, the findings from the review are presented in the light of the particularities and challenges discussed in current healthcare research. Conclusion The present paper highlights the potential of virtual patient communities to inform and promote patient-centered care, describes the key challenges involved in this process, and makes recommendations on how to address them. PMID:27272912
Amann, Julia; Zanini, Claudia; Rubinelli, Sara
2016-01-01
In order to adapt to societal changes, healthcare systems need to switch from a disease orientation to a patient-centered approach. Virtual patient networks are a promising tool to favor this switch and much can be learned from the open and user innovation literature where the involvement of online user communities in the innovation process is well-documented. The objectives of this study were 1) to describe the use of online communities as a tool to capture and harness innovative ideas of end users or consumers; and 2) to point to the potential value and challenges of these virtual platforms to function as a tool to inform and promote patient-centered care in the context of chronic health conditions. A scoping review was conducted. A total of seven databases were searched for scientific articles published in English between 1995 and 2014. The search strategy was refined through an iterative process. A total of 144 studies were included in the review. Studies were coded inductively according to their research focus to identify groupings of papers. The first set of studies focused on the interplay of factors related to user roles, motivations, and behaviors that shape the innovation process within online communities. Studies of the second set examined the role of firms in online user innovation initiatives, identifying different organizational strategies and challenges. The third set of studies focused on the idea selection process and measures of success with respect to online user innovation initiatives. Finally, the findings from the review are presented in the light of the particularities and challenges discussed in current healthcare research. The present paper highlights the potential of virtual patient communities to inform and promote patient-centered care, describes the key challenges involved in this process, and makes recommendations on how to address them.
NASA Astrophysics Data System (ADS)
Haapasalo, Erkka; Pellonpää, Juha-Pekka
2017-12-01
Various forms of optimality for quantum observables described as normalized positive-operator-valued measures (POVMs) are studied in this paper. We give characterizations for observables that determine the values of the measured quantity with probabilistic certainty or a state of the system before or after the measurement. We investigate observables that are free from noise caused by classical post-processing, mixing, or pre-processing of quantum nature. Especially, a complete characterization of pre-processing and post-processing clean observables is given, and necessary and sufficient conditions are imposed on informationally complete POVMs within the set of pure states. We also discuss joint and sequential measurements of optimal quantum observables.
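For readers less familiar with the terminology, the note below restates the standard definition of a normalized POVM and the probability rule it enters; this is textbook background, not a result of the paper.

```latex
% A POVM on a Hilbert space with outcome set X assigns to each (measurable)
% outcome set A an operator E(A) satisfying
\[
  E(A) \geq 0, \qquad E(X) = I,
\]
% together with countable additivity over disjoint outcome sets.  The
% probability of obtaining an outcome in A when the system is in state \rho is
\[
  p_\rho(A) = \mathrm{tr}\!\left[\rho\, E(A)\right].
\]
```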
Escape rates over potential barriers: variational principles and the Hamilton-Jacobi equation
NASA Astrophysics Data System (ADS)
Cortés, Emilio; Espinosa, Francisco
We describe a rigorous formalism to study some extrema statistics problems, like maximum probability events or escape rate processes, by taking into account that the Hamilton-Jacobi equation completes, in a natural way, the required set of boundary conditions of the Euler-Lagrange equation, for this kind of variational problem. We apply this approach to a one-dimensional stochastic process, driven by colored noise, for a double-parabola potential, where we have one stable and one unstable steady states.
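The interplay between the Euler-Lagrange and Hamilton-Jacobi equations referred to above has the following generic structure; the specific colored-noise Lagrangian and double-parabola potential of the study are not reproduced here.

```latex
% Generic variational structure (illustrative; the paper's Lagrangian for the
% colored-noise, double-parabola problem is not given in the abstract).
% Minimizing the action
\[
  S[x] = \int_{t_0}^{t_1} L\bigl(x,\dot{x}\bigr)\,dt
\]
% yields the Euler--Lagrange equation and the conjugate momentum
\[
  \frac{d}{dt}\frac{\partial L}{\partial \dot{x}}
  - \frac{\partial L}{\partial x} = 0,
  \qquad
  p = \frac{\partial L}{\partial \dot{x}},
\]
% while the Hamilton--Jacobi equation for S(x,t), with H = p\,\dot{x} - L,
\[
  \frac{\partial S}{\partial t}
  + H\!\left(x, \frac{\partial S}{\partial x}\right) = 0,
\]
% fixes p = \partial S/\partial x at the endpoints, which is the sense in which
% it completes the boundary conditions of the variational problem.
```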
Blood Sampling and Preparation Procedures for Proteomic Biomarker Studies of Psychiatric Disorders.
Guest, Paul C; Rahmoune, Hassan
2017-01-01
A major challenge in proteomic biomarker discovery and validation for psychiatric diseases is the inherent biological complexity underlying these conditions. There are also many technical issues which hinder this process such as the lack of standardization in sampling, processing and storage of bio-samples in preclinical and clinical settings. This chapter describes a reproducible procedure for sampling blood serum and plasma that is specifically designed for maximizing data quality output in two-dimensional gel electrophoresis, multiplex immunoassay and mass spectrometry profiling studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1996-03-01
This module covers EPA's Superfund community involvement program, a set of requirements under the National Contingency Plan (NCP) designed to ensure that the public is informed about site conditions and given the opportunity to comment on the proposed remedy of a Superfund site. The NCP serves to uphold the public's right to voice opinions and express concerns about Superfund site activities. EPA must involve communities throughout the Superfund process - particularly at critical decision-making steps in the process.
Detecting letters in continuous text: effects of display size.
Healy, A F; Oliver, W L; McNamara, T P
1987-05-01
In three letter detection experiments, subjects responded to each instance of the letter t in continuous text typed in a standard paragraph, typed with one to four words per line, or shown for a fixed duration on a computer screen either one or four words at a time. In the multiword and the standard paragraph conditions, errors were greatest and latencies longest on the word the when it was correctly spelled. This effect was diminished or reversed in the one-word conditions. These findings support a set of unitization hypotheses about the reading process, according to which subjects do not process the constituent letters of a word once that word has been identified unless no other word is in view.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Susan J. Foulk
Project Objective: The objectives of this study are to develop an accurate and stable on-line sensor system to monitor color and composition on-line in polymer melts, to develop a scheme for using the output to control extruders to eliminate the energy, material and operational costs of off-specification product, and to combine or eliminate some extrusion processes. Background: Polymer extrusion processes are difficult to control because the quality achieved in the final product is complexly affected by the properties of the extruder screw, speed of extrusion, temperature, polymer composition, strength and dispersion properties of additives, and feeder system properties. Extruder systems are engineered to be highly reproducible so that when the correct settings to produce a particular product are found, that product can be reliably produced time after time. However, market conditions often require changes in the final product, different products or grades may be processed in the same equipment, and feed materials vary from lot to lot. All of these changes require empirical adjustment of extruder settings to produce a product meeting specifications. Optical sensor systems that can continuously monitor the composition and color of the extruded polymer could detect process upsets, drift, blending oscillations, and changes in dispersion of additives. Development of an effective control algorithm using the output of the monitor would enable rapid corrections for changes in materials and operating conditions, thereby eliminating most of the scrap and recycle of current processing. This information could be used to identify extruder system issues, diagnose problem sources, and suggest corrective actions in real-time to help keep extruder system settings within the optimum control region. Using these advanced optical sensor systems would give extruder operators real-time feedback from their process. They could reduce the amount of off-spec product produced and significantly reduce energy consumption. Also, because blending and dispersion of additives and components in the final product could be continuously verified, we believe that, in many cases, intermediate compounding steps could be eliminated (saving even more time and energy).
Cognitive switching processes in young people with attention-deficit/hyperactivity disorder.
Oades, Robert D; Christiansen, Hanna
2008-01-01
Patients with attention-deficit/hyperactivity disorder (ADHD) can be slow at switching between stimuli, or between sets of stimuli to control behaviour appropriate to changing situations. We examined clinical and experimental parameters that may influence the speed of such processes measured in the trail-making (TMT) and switch-tasks in cases with ADHD combined type, their non-affected siblings and unrelated healthy controls. The latency for completion of the trail-making task controlling for psychomotor processing (TMT-B-A) was longer for ADHD cases, and correlated with Conners' ratings of symptom severity across all subjects. The effect decreased with age. Switch-task responses to questions of "Which number?" and "How many?" between sets of 1/111 or 3/333 elicited differential increases in latency with condition that affected all groups. But there was evidence for increased symptom-related intra-individual variability among the ADHD cases, and across all subjects. Young siblings showed familiality for some measures of TMT and switch-task performance but these were modest. The potential influences of moderator variables on the efficiency of processing stimulus change rather than the speed of processing are discussed.
Scoma, Alberto; Tóth, Szilvia Z
2017-01-01
Under low O2 concentration (hypoxia) and low light, Chlamydomonas cells can produce H2 gas in nutrient-replete conditions. This process is hindered by the presence of O2, which inactivates the [FeFe]-hydrogenase enzyme responsible for H2 gas production shifting algal cultures back to normal growth. The main pathways accounting for H2 production in hypoxia are not entirely understood, as much as culture conditions setting the optimal redox state in the chloroplast supporting long-lasting H2 production. The reducing power for H2 production can be provided by photosystem II (PSII) and photofermentative processes during which proteins are degraded via yet unknown pathways. In hetero- or mixotrophic conditions, acetate respiration was proposed to indirectly contribute to H2 evolution, although this pathway has not been described in detail. Recently, Jurado-Oller et al. (Biotechnol Biofuels 8: 149, 7) proposed that acetate respiration may substantially support H2 production in nutrient-replete hypoxic conditions. Addition of low amounts of O2 enhanced acetate respiration rate, particularly in the light, resulting in improved H2 production. The authors surmised that acetate oxidation through the glyoxylate pathway generates intermediates such as succinate and malate, which would be in turn oxidized in the chloroplast generating FADH2 and NADH. The latter would enter a PSII-independent pathway at the level of the plastoquinone pool, consistent with the light dependence of H2 production. The authors concluded that the water-splitting activity of PSII has a minor role in H2 evolution in nutrient-replete, mixotrophic cultures under hypoxia. However, their results with the PSII inhibitor DCMU also reveal that O2 or acetate additions promoted acetate respiration over the usually dominant PSII-dependent pathway. The more oxidized state experienced by these cultures in combination with the relatively short experimental time prevented acclimation to hypoxia, thus precluding the PSII-dependent pathway from contributing to H2 production. In Chlamydomonas, continuous H2 gas evolution is expected once low O2 partial pressure and optimal reducing conditions are set. Under nutrient-replete conditions, the electrogenic processes involved in H2 photoproduction may rely on various electron transport pathways. Understanding how physiological conditions select for specific metabolic routes is key to achieve economic viability of this renewable energy source.
Young, Bridget; Bagley, Heather
2016-01-01
This commentary article describes three interactive workshops that explored how patients can contribute to decisions about what outcomes are measured in clinical trials across the world. Outcomes like quality of life, side-effects and pain are used in trials to measure whether a treatment is effective. Here, we outline how research groups are increasingly coming together to develop 'core outcomes sets' for particular conditions. Core outcome sets are lists of agreed outcomes. Their use will help in identifying which treatments are effective by enabling people to compare the findings of different clinical trials in the same condition. Currently, it is often very difficult to make these comparisons because different studies often measure different outcomes. Delegates attending the workshops included patients, clinicians and researchers. They discussed ways of making core outcome set development more meaningful and accessible for patients, and ensuring that they have a genuine say in the development process. This article summarises these discussions and concludes by identifying three distinctive challenges in securing patient input to core outcome set development: the process and objectives can seem far removed from the immediate concerns of patients, difficulties can arise in securing patient input on an international scale, and difficulties can also arise in bringing multiple stakeholder groups together to achieve consensus. While patient participation, involvement and engagement in core outcome set development can draw on lessons from other research areas, these distinctive challenges point to the need for distinctive solutions to enable meaningful patient input to core outcome set development. Background This article describes three workshops that explored how patients can contribute to decisions about what outcomes are measured in clinical trials. People need evidence about what treatments are best for particular health conditions. The strongest evidence comes from systematic reviews comparing outcomes across different studies of treatments for a particular condition. However, it is often difficult to do these comparisons because the different studies-even though they have all investigated the same condition-often measure different outcomes. To tackle this problem, research teams are increasingly coming together to develop core outcome sets (COS) for particular conditions or treatments. The goal is that across the world, all the research teams working on the same condition or treatment will then use the COS in their research. Main body We report on three interactive workshops that explored how patients and the public can contribute to decision making about what outcomes should be included in a COS. About 100 international delegates, including researchers, clinicians and patients, attended the workshops. The workshops were held in the United Kingdom, Italy and Canada as part of the COMET (Core Outcome Measures in Effectiveness Trials) Initiative annual meetings. Patients who had some experience as research advisors, collaborators, partners or co-ordinators facilitated the workshops together with a researcher. Notes made during each workshop informed the preparation of this article. Workshop discussion focussed on ways of making core outcome set development more meaningful and accessible for patients. Delegates wanted patients to have a genuine say, alongside other stakeholders, in what outcomes are included in COS. 
Delegates felt that key to ensuring this is recognising that patient participation in COS development alone is not enough, and that patients will also need to be involved in the design of COS development studies. Conclusion We conclude by pointing to some distinctive challenges in including patients in COS development. While the COS development community can draw on the lessons learnt from other research areas about patient participation, involvement and engagement, the distinctive challenges that arise in COS development point to the need for some distinctive solutions too.
Sibillano, Teresa; Ancona, Antonio; Rizzi, Domenico; Lupo, Valentina; Tricarico, Luigi; Lugarà, Pietro Mario
2010-01-01
The plasma optical radiation emitted during CO2 laser welding of stainless steel samples has been detected with a Si-PIN photodiode and analyzed under different process conditions. The discrete wavelet transform (DWT) has been used to decompose the optical signal into various discrete series of sequences over different frequency bands. The results show that changes of the process settings may yield different signal features in the range of frequencies between 200 Hz and 30 kHz. Potential applications of this method to monitor in real time the laser welding processes are also discussed.
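A minimal sketch of the decomposition step, using PyWavelets, is given below; the wavelet family, decomposition depth, sampling rate, and synthetic test signal are illustrative assumptions rather than the settings used in the study.

```python
# Minimal sketch of decomposing a photodiode signal into frequency bands with
# the discrete wavelet transform (PyWavelets).  Wavelet, level, sampling rate
# and the synthetic signal are illustrative, not the settings of the study.
import numpy as np
import pywt

fs = 100_000                                      # assumed sampling rate [Hz]
t = np.arange(0, 0.1, 1 / fs)
signal = (np.sin(2 * np.pi * 500 * t)             # slow plasma fluctuation
          + 0.3 * np.sin(2 * np.pi * 20_000 * t)  # fast component
          + 0.1 * np.random.default_rng(0).standard_normal(t.size))

level = 6
coeffs = pywt.wavedec(signal, "db4", level=level)  # [cA6, cD6, cD5, ..., cD1]

# Energy per detail band; band cDk roughly covers fs/2**(k+1) .. fs/2**k Hz,
# which is the kind of band-wise feature one could monitor during welding.
for k, d in zip(range(level, 0, -1), coeffs[1:]):
    lo, hi = fs / 2 ** (k + 1), fs / 2 ** k
    print(f"cD{k}: ~{lo:8.0f}-{hi:8.0f} Hz, energy = {np.sum(d ** 2):.2f}")
```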
NASA Astrophysics Data System (ADS)
Staniek, Marcin
2018-05-01
The article discusses a tool for road pavement condition assessment based on linear acceleration signals recorded at high sampling frequency from typical vehicles traversing the road network under real-life traffic conditions. Specific relationships have been established for road pavement condition assessment, including the identification of road sections in poor technical condition. The acquired data have been verified with regard to the repeatability of the estimated road pavement assessment indices. The data make it possible to describe the status of the road network across the area in which users of the system being developed travel. What proves crucial in the assessment process is the scope of the data set, built up from multiple passes over the road network.
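The article's specific assessment indices are not given in the abstract; the sketch below shows one simple acceleration-based roughness measure (segment-wise RMS of high-pass-filtered vertical acceleration) as an illustration of the general approach. The filter settings, segment length, and threshold are assumptions.

```python
# Illustrative (not the article's) road-condition index from vertical
# acceleration: high-pass filter to remove vehicle body motion, then compute
# the RMS per fixed-length segment and flag rough segments.
import numpy as np
from scipy.signal import butter, filtfilt

def segment_rms_index(acc_z, fs, segment_s=1.0, hp_cutoff_hz=1.0):
    """RMS of high-pass-filtered vertical acceleration per time segment."""
    b, a = butter(2, hp_cutoff_hz / (fs / 2), btype="highpass")
    filtered = filtfilt(b, a, acc_z)
    n = int(segment_s * fs)
    segments = filtered[: len(filtered) // n * n].reshape(-1, n)
    return np.sqrt(np.mean(segments ** 2, axis=1))

# Synthetic example: smooth road with a rough stretch in the middle.
fs = 200
rng = np.random.default_rng(1)
acc = 0.05 * rng.standard_normal(fs * 30)
acc[fs * 10 : fs * 15] += 0.6 * rng.standard_normal(fs * 5)
index = segment_rms_index(acc, fs)
print("rough segments (index > 0.3):", np.nonzero(index > 0.3)[0])
```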
ERIC Educational Resources Information Center
Szczesny, Thomas Joseph
2017-01-01
Though much is known about the school environments that increase students' access to opportunity, the process for developing conditions that presage such outcomes remains a pertinent area of study. The reality that widespread school performance has yet to realize the promise of true educational equity, particularly in urban settings, attests to…
ERIC Educational Resources Information Center
Dornan, Tim; Muijtjens, Arno; Graham, Jennifer; Scherpbier, Albert; Boshuizen, Henny
2012-01-01
The drive to quality-manage medical education has created a need for valid measurement instruments. Validity evidence includes the theoretical and contextual origin of items, choice of response processes, internal structure, and interrelationship of a measure's variables. This research set out to explore the validity and potential utility of an…
Reduction in Force: Is Your Board Prepared?
ERIC Educational Resources Information Center
Kudlaty, Frank
One of the conditions set forth in the hiring of the new superintendent of a southeast Texas school district was that a reduction in force would be accomplished. How this process, which usually involves six months of planning time and an additional year to carry out, was accomplished in a period of six months is detailed in this speech. An analysis…
Wildlife preservation and recreational use: Conflicting goals of wildland management
David N. Cole; Richard L. Knight
1991-01-01
Large tracts of wildland in North America have been set aside as wilderness areas and national parks. More than 200 million acres (88 million ha) of such lands have been formally designated in Canada and the United States (Eidsvik 1989). The primary goal of these designations is the preservation of undisturbed natural conditions and processes.
ERIC Educational Resources Information Center
Knight, David; Wadhwa, Anita
2014-01-01
In this article, we tackle the disadvantaging conditions of zero tolerance policies in school settings and advocate using an alternative approach--critical restorative justice through peacemaking circles--to nurture resilience and open opportunity at the school level. In the process, this article builds on theory and qualitative research and…
A transition-metal-free synthesis of arylcarboxyamides from aryl diazonium salts and isocyanides.
Xia, Zhonghua; Zhu, Qiang
2013-08-16
A transition-metal-free carboxyamidation process, using aryl diazonium tetrafluoroborates and isocyanides under mild conditions, has been developed. This novel conversion was initiated by a base and solvent induced aryl radical, followed by radical addition to isocyanide and single electron transfer (SET) oxidation, affording the corresponding arylcarboxyamide upon hydration of the nitrilium intermediate.
Learning objects and training complex machines.
Martins, Edgard
2012-01-01
There are situations in the operation of complex machinery that involve significant pressure: information from instruments must be captured, interpreted and processed, often within seconds. This is the environment in which the pilot operates; flying the aircraft requires a set of operations that culminate in a maneuver consisting of a substantial and binding set of procedures performed by this operator. The pilot has little time to evaluate and act, supported by the aircraft instruments and by external environmental signals captured by the senses, which trigger conditioned actions that, if not executed with due accuracy, result in a deadly mistake. These situations cause a state of tension and unpredictability, especially when there is bad weather and/or no visibility, or when unfavorable wind conditions partially or totally compromise the ability to operate the airplane.
Surface tension determination using liquid sample micromirror property
NASA Astrophysics Data System (ADS)
Hošek, Jan
2007-05-01
This paper presents an application of the adaptive optics principle to surface tension measurement on a small liquid sample. The experimental method devised by Ferguson (1924) is based on measuring the pressure difference across a liquid sample placed in a small-diameter capillary, on the condition that one meniscus of the liquid sample is flat. The planarity, or curvature radius, of the capillary tip meniscus has to be measured and controlled in order to fulfill this condition during the measurement. Two different optical set-ups exploiting the micromirror property of the liquid meniscus are presented and their suitability for meniscus profile determination is compared. The optical measurement of the meniscus radius, the data processing, and the control algorithm for setting the adaptive micromirror profile are also presented. The presented adaptive optics system can be used for focal length control of microsystems based on liquid micromirrors or microlenses, especially those with long focal distances.
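For context, the note below spells out the Young-Laplace relation underlying the flat-meniscus method; the zero-contact-angle step and the neglect of hydrostatic corrections are illustrative simplifications, not necessarily the working equation of the paper.

```latex
% Young--Laplace pressure jump across a meniscus of curvature radius R:
\[
  \Delta p = \frac{2\gamma}{R}.
\]
% If the meniscus at the capillary tip is driven flat (R -> infinity), its
% pressure jump vanishes, so the measured pressure difference is set by the
% other meniscus alone.  For a capillary of inner radius r, complete wetting
% (contact angle ~ 0) and negligible hydrostatic head, this gives
\[
  \gamma \approx \frac{\Delta p \, r}{2},
\]
% which is why the planarity of the tip meniscus must be monitored and
% controlled during the measurement.
```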
Kim, Hui Taek; Ahn, Tae Young; Jang, Jae Hoon; Kim, Kang Hee; Lee, Sung Jae; Jung, Duk Young
2017-03-01
Three-dimensional (3D) computed tomography imaging is now being used to generate 3D models for planning orthopaedic surgery, but the process remains time consuming and expensive. For chronic radial head dislocation, we have designed a graphic overlay approach that employs selected 3D computer images and widely available software to simplify the process of osteotomy site selection. We studied 5 patients (2 traumatic and 3 congenital) with unilateral radial head dislocation. These patients were treated with surgery based on traditional radiographs, but they also had full sets of 3D CT imaging done both before and after their surgery: these 3D CT images form the basis for this study. From the 3D CT images, each patient generated 3 sets of 3D-printed bone models: 2 copies of the preoperative condition, and 1 copy of the postoperative condition. One set of the preoperative models was then actually osteotomized and fixed in the manner suggested by our graphic technique. Arcs of rotation of the 3 sets of 3D-printed bone models were then compared. Arcs of rotation of the 3 groups of bone models were significantly different, with the models osteotomized according to our graphic technique having the widest arcs. For chronic radial head dislocation, our graphic overlay approach simplifies the selection of the osteotomy site(s). Three-dimensional-printed bone models suggest that this approach could improve range of motion of the forearm in actual surgical practice. Level IV-therapeutic study.
The perceptual processing capacity of summary statistics between and within feature dimensions
Attarha, Mouna; Moore, Cathleen M.
2015-01-01
The simultaneous–sequential method was used to test the processing capacity of statistical summary representations both within and between feature dimensions. Sixteen gratings varied with respect to their size and orientation. In Experiment 1, the gratings were equally divided into four separate smaller sets, one of which had a mean size that was larger or smaller than that of the other three sets, and one of which had a mean orientation that was tilted more leftward or rightward. The task was to report the mean size and orientation of the oddball sets. This therefore required four summary representations for size and another four for orientation. The sets were presented at the same time in the simultaneous condition or across two temporal frames in the sequential condition. Experiment 1 showed evidence of a sequential advantage, suggesting that the system may be limited with respect to establishing multiple within-feature summaries. Experiment 2 eliminated the possibility that some aspect of the task, other than averaging, was contributing to this observed limitation. In Experiment 3, the same 16 gratings appeared as one large superset, and therefore the task required only one summary representation for size and another one for orientation. Equal simultaneous–sequential performance indicated that between-feature summaries are capacity free. These findings challenge the view that within-feature summaries drive a global sense of visual continuity across areas of the peripheral visual field, and suggest a shift in focus to seeking an understanding of how between-feature summaries in one area of the environment control behavior. PMID:26360153
Extracting the QCD ΛMS¯ parameter in Drell-Yan process using Collins-Soper-Sterman approach
NASA Astrophysics Data System (ADS)
Taghavi, R.; Mirjalili, A.
2017-03-01
In this work, we directly fit the QCD dimensional transmutation parameter, ΛMS¯, to experimental data for Drell-Yan (DY) observables. For this purpose, we first obtain the evolution of transverse momentum dependent parton distribution functions (TMDPDFs) up to the next-to-next-to-leading logarithm (NNLL) approximation based on the Collins-Soper-Sterman (CSS) formalism. As expected, the TMDPDFs extend to larger values of transverse momentum as the energy scale and the order of approximation increase. We then calculate the cross-section related to the TMDPDFs in the DY process. From a global fit to five sets of experimental data at different low center-of-mass energies and one set at high center-of-mass energy, using CETQ06 parametrizations as our boundary condition, we obtain ΛMS¯ = 221 ± 7(stat) ± 54(theory) MeV, corresponding to the renormalized coupling constant αs(Mz2) = 0.117 ± 0.001(stat) ± 0.004(theory), which is within the acceptable range for this quantity. A goodness of fit of χ2/d.o.f = 1.34 shows that the results for the DY cross-section are in good agreement with the different experimental sets, comprising E288, E605 and R209 at low center-of-mass energies and D0 and CDF data at high center-of-mass energy. Repeating the calculations with HERAPDF parametrizations yields numerical values for the fitted parameters very close to those obtained with the CETQ06 PDF set, indicating that the results are sufficiently stable under variations of the boundary conditions.
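As a reminder of how the two quoted quantities are related, the note below gives the one-loop running-coupling relation between αs and Λ; it is shown for illustration only, since the fit above uses higher-order evolution and the numbers therefore do not map onto this leading-order formula one-to-one.

```latex
% One-loop running coupling (illustration only; the NNLL analysis quoted above
% uses higher-order evolution, so values differ from this leading-order form):
\[
  \alpha_s(Q^2) \;=\; \frac{1}{b_0 \,\ln\!\left(Q^2/\Lambda^2\right)},
  \qquad
  b_0 \;=\; \frac{33 - 2 n_f}{12\pi},
\]
% so quoting \Lambda_{\overline{\mathrm{MS}}} and quoting \alpha_s(M_Z^2) are
% equivalent once the order of the running and the number of active flavours
% n_f are fixed.
```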
Miyake, Yoshie; Okamoto, Yasumasa; Onoda, Keiichi; Shirao, Naoko; Okamoto, Yuri; Otagaki, Yoko; Yamawaki, Shigeto
2010-04-15
Eating disorders (EDs) are associated with abnormalities of body image perception. The aim of the present study was to investigate the functional abnormalities in brain systems during processing of negative words concerning body images in patients with EDs. Brain responses to negative words concerning body images (task condition) and neutral words (control condition) were measured using functional magnetic resonance imaging in 36 patients with EDs (12 with the restricting type anorexia nervosa; AN-R, 12 with the binging-purging type anorexia nervosa; AN-BP, and 12 with bulimia nervosa; BN) and 12 healthy young women. Participants were instructed to select the most negative word from each negative body-image word set and to select the most neutral word from each neutral word set. In the task relative to the control condition, the right amygdala was activated both in patients with AN-R and in patients with AN-BP. The left medial prefrontal cortex (mPFC) was activated both in patients with BN and in patients with AN-BP. It is suggested that these brain activations may be associated with abnormalities of body image perception. Amygdala activation may be involved in fearful emotional processing of negative words concerning body image and strong fears of gaining weight. One possible interpretation of the finding of mPFC activation is that it may reflect an attempt to regulate the emotion invoked by the stimuli. These abnormal brain functions may help provide better accounts of the psychopathological mechanisms underlying EDs. Copyright 2009 Elsevier Inc. All rights reserved.
A novel way of integrating rule-based knowledge into a web ontology language framework.
Gamberger, Dragan; Krstaçić, Goran; Jović, Alan
2013-01-01
Web ontology language (OWL), used in combination with the Protégé visual interface, is a modern standard for the development and maintenance of ontologies and a powerful tool for knowledge representation. In this work, we describe a novel possibility of using OWL also for the conceptualization of knowledge presented by a set of rules. In this approach, rules are represented as a hierarchy of actionable classes with necessary and sufficient conditions defined by the description logic formalism. The advantages are that the set of rules is no longer an unordered set, that concepts defined in descriptive ontologies can be used directly in the bodies of rules, and that Protégé provides an intuitive tool for editing the set of rules. Standard ontology reasoning processes are not applicable in this framework, but experiments conducted on the rule sets have demonstrated that the reasoning problems can be successfully solved.
Vanrie, Jan; Béatse, Erik; Wagemans, Johan; Sunaert, Stefan; Van Hecke, Paul
2002-01-01
It has been proposed that object perception can proceed through different routes, which can be situated on a continuum ranging from complete viewpoint-dependency to complete viewpoint-independency, depending on the objects and the task at hand. Although these different routes have been extensively demonstrated on the behavioral level, the corresponding distinction in the underlying neural substrate has not received the same attention. Our goal was to disentangle, on the behavioral and the neurofunctional level, a process associated with extreme viewpoint-dependency, i.e. mental rotation, and a process associated with extreme viewpoint-independency, i.e. the use of viewpoint-invariant, diagnostic features. Two sets of 3-D block figures were created that either differed in handedness (original versus mirrored) or in the angles joining the block components (orthogonal versus skewed). Behavioral measures on a same-different judgment task were predicted to be dependent on viewpoint in the rotation condition (same versus mirrored), but not in the invariance condition (same angles versus different angles). Six subjects participated in an fMRI experiment while presented with both conditions in alternating blocks. Both reaction times and accuracy confirmed the predicted dissociation between the two conditions. Neurofunctional results indicate that all cortical areas activated in the invariance condition were also activated in the rotation condition. Parietal areas were more activated than occipito-temporal areas in the rotation condition, while this pattern was reversed in the invariance condition. Furthermore, some areas were activated uniquely by the rotation condition, probably reflecting the additional processes apparent in the behavioral response patterns.
Li, Bao-Sheng; Wang, Yuhuang; Proctor, Rupert S. J.; Zhang, Yuexia; Webster, Richard D.; Yang, Song; Song, Baoan; Chi, Yonggui Robin
2016-01-01
Benzyl bromides and related molecules are among the most common substrates in organic synthesis. They are typically used as electrophiles in nucleophilic substitution reactions. These molecules can also be activated via single-electron-transfer (SET) process for radical reactions. Representative recent progress includes α-carbon benzylation of ketones and aldehydes via photoredox catalysis. Here we disclose the generation of (nitro)benzyl radicals via N-heterocyclic carbene (NHC) catalysis under reductive conditions. The radical intermediates generated via NHC catalysis undergo formal 1,2-addition with ketones to eventually afford tertiary alcohol products. The overall process constitutes a formal polarity-inversion of benzyl bromide, allowing a direct coupling of two initially electrophilic carbons. Our study provides a new carbene-catalysed reaction mode that should enable unconventional transformation of (nitro)benzyl bromides under mild organocatalytic conditions. PMID:27671606
Zhang, Shu-Xin; Chai, Xin-Sheng; He, Liang
2016-09-16
This work reports a method for the accurate determination of fiber water-retaining capability at process conditions by headspace gas chromatography (HS-GC). The method is based on HS-GC measurement of water vapor in a set of closed vials, each containing a given amount of pulp with a different amount of added water, ranging from under-saturation to over-saturation. By plotting the equilibrated water vapor signal against the amount of water added to the pulp, two different trend lines can be observed, and the transition between the lines corresponds to the fiber water-retaining capability. The results showed that the HS-GC method has good measurement precision (much better than the reference method) and good accuracy. The present method can also be used for determining pulp fiber water-retaining capability at process temperatures in both laboratory research and mill applications. Copyright © 2016 Elsevier B.V. All rights reserved.
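As a minimal numerical sketch of the plotting procedure described above, the snippet below fits two straight lines to the vapor-signal data and reports their intersection as the transition point; the function name and the brute-force split search are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def breakpoint_from_two_lines(water_added, vapor_signal):
    """Estimate the transition between two linear trends (illustrative sketch).

    The returned water addition, where the two fitted lines intersect, is taken
    here as the fiber water-retaining capability."""
    x = np.asarray(water_added, dtype=float)
    y = np.asarray(vapor_signal, dtype=float)
    best = None
    # Try every split that leaves at least 3 points on each side.
    for k in range(3, len(x) - 2):
        a1, b1 = np.polyfit(x[:k], y[:k], 1)   # left line: y = a1*x + b1
        a2, b2 = np.polyfit(x[k:], y[k:], 1)   # right line: y = a2*x + b2
        sse = (np.sum((y[:k] - (a1 * x[:k] + b1)) ** 2)
               + np.sum((y[k:] - (a2 * x[k:] + b2)) ** 2))
        if best is None or sse < best[0]:
            best = (sse, a1, b1, a2, b2)
    _, a1, b1, a2, b2 = best
    # Intersection of the two lines (slopes differ when a real transition exists).
    return (b2 - b1) / (a1 - a2)
```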
Attention to emotion and non-Western faces: revisiting the facial feedback hypothesis.
Dzokoto, Vivian; Wallace, David S; Peters, Laura; Bentsi-Enchill, Esi
2014-01-01
In a modified replication of Strack, Martin, and Stepper's demonstration of the Facial Feedback Hypothesis (1988), we investigated the effect of attention to emotion on the facial feedback process in a non-western cultural setting. Participants, recruited from two universities in Ghana, West Africa, gave self-reports of their perceived levels of attention to emotion, and then completed cartoon-rating tasks while randomly assigned to smiling, frowning, or neutral conditions. While participants with low Attention to Emotion scores displayed the usual facial feedback effect (rating cartoons as funnier when in the smiling compared to the frowning condition), the effect was not present in individuals with high Attention to Emotion. The findings indicate that (1) the facial feedback process can occur in contexts beyond those in which the phenomenon has previously been studied, and (2) aspects of emotion regulation, such as Attention to Emotion can interfere with the facial feedback process.
Effects of memory load on hemispheric asymmetries of colour memory.
Clapp, Wes; Kirk, Ian J; Hausmann, Markus
2007-03-01
Hemispheric asymmetries in colour perception have been a matter of debate for some time. Recent evidence suggests that lateralisation of colour processing may be largely task specific. Here we investigated hemispheric asymmetries during different types and phases of a delayed colour-matching (recognition) memory task. A total of 11 male and 12 female right-handed participants performed colour-memory tasks. The task involved presentation of a set of colour stimuli (encoding) and a subsequent forced-choice indication, at the retrieval or recognition phase, of which colours in a larger set had previously appeared. The effects of memory load (set size) and of lateralisation at the encoding and retrieval phases were investigated. Overall, the results indicate a right hemisphere advantage in colour processing, which was particularly pronounced in high memory load conditions and was seen in male rather than female participants. The results suggest that verbal (mnemonic) strategies can significantly affect the magnitude of hemispheric asymmetries in a non-verbal task.
Safdari, Reza; Ghazisaeedi, Marjan; Mirzaee, Mahboobeh; Farzi, Jebrail; Goodini, Azadeh
2014-01-01
Dynamic reporting tools, such as dashboards, should be developed to measure emergency department (ED) performance. However, choosing an effective balanced set of performance measures and key performance indicators (KPIs) is a main challenge to accomplish this. The aim of this study was to develop a balanced set of KPIs for use in ED strategic dashboards following an analytic hierarchical process. The study was carried out in 2 phases: constructing ED performance measures based on balanced scorecard perspectives and incorporating them into analytic hierarchical process framework to select the final KPIs. The respondents placed most importance on ED internal processes perspective especially on measures related to timeliness and accessibility of care in ED. Some measures from financial, customer, and learning and growth perspectives were also selected as other top KPIs. Measures of care effectiveness and care safety were placed as the next priorities too. The respondents placed least importance on disease-/condition-specific "time to" measures. The methodology can be presented as a reference model for development of KPIs in various performance related areas based on a consistent and fair approach. Dashboards that are designed based on such a balanced set of KPIs will help to establish comprehensive performance measurements and fair benchmarks and comparisons.
Jacobsen, Thomas; Höfel, Lea
2003-12-01
Descriptive symmetry and evaluative aesthetic judgment processes were compared using identical stimuli in both judgment tasks. Electrophysiological activity was recorded while participants judged novel formal graphic patterns in a trial-by-trial cuing setting using binary responses (symmetric, not symmetric; beautiful, not beautiful). Judgment analyses of a Phase 1 test and main experiment performance resulted in individual models, as well as group models, of the participants' judgment systems. Symmetry showed a strong positive correlation with beautiful judgments and was the most important cue. Descriptive judgments were performed faster than evaluative judgments. The ERPs revealed a phasic, early frontal negativity for the not-beautiful judgments. A sustained posterior negativity was observed in the symmetric condition. All conditions showed late positive potentials (LPPs). Evaluative judgment LPPs revealed a more pronounced right lateralization. It is argued that the present aesthetic judgments engage a two-stage process consisting of early, anterior frontomedian impression formation after 300 msec and right-hemisphere evaluative categorization around 600 msec after onset of the graphic patterns.
A stochastic hybrid systems based framework for modeling dependent failure processes
Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying
2017-01-01
In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system discrete states are triggered by random shocks. The modeling is, then, based on Stochastic Hybrid Systems (SHS), whose state space is comprised of a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations are derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from literature and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability with less computational costs compared to traditional Monte Carlo-based methods. PMID:28231313
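As a rough sketch of how the two named tools convert the conditional moments into reliability estimates (generic symbols, not the paper's notation): for a non-negative damage variable X(t) with failure threshold a,

```latex
R(t) = P\{X(t) < a\} \;\ge\; 1 - \frac{\mathbb{E}[X(t)]}{a} \quad \text{(Markov inequality)},
\qquad
R(t) \;\approx\; \Phi\!\left(\frac{a - \mu_X(t)}{\sigma_X(t)}\right) \quad \text{(FOSM)},
```

where μ_X and σ_X are obtained from the first two conditional moments and Φ is the standard normal distribution function.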
Experimental quantum verification in the presence of temporally correlated noise
NASA Astrophysics Data System (ADS)
Mavadia, S.; Edmunds, C. L.; Hempel, C.; Ball, H.; Roy, F.; Stace, T. M.; Biercuk, M. J.
2018-02-01
Growth in the capabilities of quantum information hardware mandates access to techniques for performance verification that function under realistic laboratory conditions. Here we experimentally characterise the impact of common temporally correlated noise processes on both randomised benchmarking (RB) and gate-set tomography (GST). Our analysis highlights the role of sequence structure in enhancing or suppressing the sensitivity of quantum verification protocols to either slowly or rapidly varying noise, which we treat in the limiting cases of quasi-DC miscalibration and white noise power spectra. We perform experiments with a single trapped 171Yb+ ion-qubit and inject engineered noise (∝ σ_z) to probe protocol performance. Experiments on RB validate predictions that measured fidelities over sequences are described by a gamma distribution varying between approximately Gaussian, and a broad, highly skewed distribution for rapidly and slowly varying noise, respectively. Similarly, we find a strong gate-set dependence of default experimental GST procedures in the presence of correlated errors, leading to significant deviations between estimated and calculated diamond distances in the presence of correlated σ_z errors. Numerical simulations demonstrate that expansion of the gate set to include negative rotations can suppress these discrepancies and increase reported diamond distances by orders of magnitude for the same error processes. Similar effects do not occur for correlated σ_x or σ_y errors or depolarising noise processes, highlighting the impact of the critical interplay of the selected gate set and the gauge optimisation process on the meaning of the reported diamond norm in correlated noise environments.
The Multi-Isotope Process (MIP) Monitor Project: FY13 Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meier, David E.; Coble, Jamie B.; Jordan, David V.
The Multi-Isotope Process (MIP) Monitor provides an efficient approach to monitoring the process conditions in reprocessing facilities in support of the goal of “… (minimization of) the risks of nuclear proliferation and terrorism.” The MIP Monitor measures the distribution of the radioactive isotopes in product and waste streams of a nuclear reprocessing facility. These isotopes are monitored online by gamma spectrometry and compared, in near-real-time, to spectral patterns representing “normal” process conditions using multivariate analysis and pattern recognition algorithms. The combination of multivariate analysis and gamma spectroscopy allows us to detect small changes in the gamma spectrum, which may indicate changes in process conditions. By targeting multiple gamma-emitting indicator isotopes, the MIP Monitor approach is compatible with the use of small, portable, relatively high-resolution gamma detectors that may be easily deployed throughout an existing facility. The automated multivariate analysis can provide a level of data obscurity, giving a built-in information barrier to protect sensitive or proprietary operational data. Proof-of-concept simulations and experiments have been performed in previous years to demonstrate the validity of this tool in a laboratory setting for systems representing aqueous reprocessing facilities. However, pyroprocessing is emerging as an alternative to aqueous reprocessing techniques.
NASA Astrophysics Data System (ADS)
Avian, M.; Bauer, A.; Kellerer-Pirklbauer, A.
2009-04-01
Monitoring periglacial and glacial processes is a crucial task in observing ongoing global warming as a result of climate change in high mountain areas. The project ALPCHANGE - Climate change and impacts in Southern Austrian Alpine Regions (www.alpchange.at) - comprises 4 test sites for terrestrial laserscanning (using the Riegl long range laserscanner LPM-2k): Pasterze glacier (glacial, beginning in 2001, 8 data sets), Gössnitzkees (glacial, beginning in 2000, 9 data sets), Hinteres Langtalkar (periglacial, beginning in 2000, 9 data sets) and Fallbichl (periglacial, beginning in 2008). In September 2008 an airborne LiDAR (Light detection and ranging) campaign was carried out at all test sites for comparison with the terrestrial data. Data acquisition in very remote areas such as Gössnitzkees and Hinteres Langtalkar was affected by partly insufficient power supply, caused by long acquisition times under, e.g., changing weather conditions. At Pasterze glacier, meteorological conditions in particular downgraded the quality of the acquired data: distances are partly measured incorrectly due to varying temperatures in the different air packages covering the glacier. The reason for this is a large difference in elevation of approx. 300 m from the scanner's position to the scanning area, as well as an unfavourable scanning angle of 15 - 45°. We used up to 10 reflective traffic signs placed on the glacier surface, independently positioned with DGPS and geodetic surveys, to validate the measurement data and correct the digital terrain model (DTM). At all test sites it turned out to be crucial that non-moving areas such as bedrock are within the scanning sector. Changing conditions (e.g. scanner horizontation, atmospheric influences) always require independent data for orientation and validation.
Neural activity in the hippocampus predicts individual visual short-term memory capacity.
von Allmen, David Yoh; Wurmitzer, Karoline; Martin, Ernst; Klaver, Peter
2013-07-01
Although the hippocampus had been traditionally thought to be exclusively involved in long-term memory, recent studies raised controversial explanations why hippocampal activity emerged during short-term memory tasks. For example, it has been argued that long-term memory processes might contribute to performance within a short-term memory paradigm when memory capacity has been exceeded. It is still unclear, though, whether neural activity in the hippocampus predicts visual short-term memory (VSTM) performance. To investigate this question, we measured BOLD activity in 21 healthy adults (age range 19-27 yr, nine males) while they performed a match-to-sample task requiring processing of object-location associations (delay period = 900 ms; set size conditions 1, 2, 4, and 6). Based on individual memory capacity (estimated by Cowan's K-formula), two performance groups were formed (high and low performers). Within whole brain analyses, we found a robust main effect of "set size" in the posterior parietal cortex (PPC). In line with a "set size × group" interaction in the hippocampus, a subsequent Finite Impulse Response (FIR) analysis revealed divergent hippocampal activation patterns between performance groups: Low performers (mean capacity = 3.63) elicited increased neural activity at set size two, followed by a drop in activity at set sizes four and six, whereas high performers (mean capacity = 5.19) showed an incremental activity increase with larger set size (maximal activation at set size six). Our data demonstrated that performance-related neural activity in the hippocampus emerged below capacity limit. In conclusion, we suggest that hippocampal activity reflected successful processing of object-location associations in VSTM. Neural activity in the PPC might have been involved in attentional updating. Copyright © 2013 Wiley Periodicals, Inc.
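For reference, Cowan's K in this kind of change-detection design is commonly computed from the set size N, the hit rate H and the false-alarm rate F as

```latex
K = N \,(H - F),
```

so that performance at ceiling for small arrays yields K ≈ N, while K levels off near the individual capacity limit for larger arrays.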
Efflorescence as a source of hydrated sulfate minerals in valley settings on Mars
NASA Astrophysics Data System (ADS)
Szynkiewicz, Anna; Borrok, David M.; Vaniman, David T.
2014-05-01
A distinctive sulfur cycle dominates many geological processes on Mars and hydrated sulfate minerals are found in numerous topographic settings with widespread occurrences on the Martian surface. However, many of the key processes controlling the hydrological transport of sulfur, including sulfur sources, climate and the depositional history that led to precipitation of these minerals, remain unclear. In this paper, we use a model for the formation of sulfate efflorescent salts (Mg-Ca-Na sulfates) in the Rio Puerco watershed of New Mexico, a terrestrial analog site from the semiarid Southwest U.S., to assess the origin and environmental conditions that may have controlled deposition of hydrated sulfates in Valles Marineris on Mars. Our terrestrial geochemical results (δ³⁴S of -36.0 to +11.1‰) show that an ephemeral arid hydrological cycle that mobilizes sulfur present in the bedrock as sulfides, sulfate minerals, and dry/wet atmospheric deposition can lead to widespread surface accumulations of hydrated sulfate efflorescences. Repeating cycles of salt dissolution and reprecipitation appear to be major processes that migrate sulfate efflorescences to sites of surface deposition and ultimately increase the aqueous SO₄²⁻ flux along the watershed (average 41,273 metric tons/yr). We suggest that similar shallow processes may explain the occurrence of hydrated sulfates detected on the scarps and valley floors of Valles Marineris on Mars. Our estimates of salt mass and distribution are in accord with studies that suggest a rather short-lived process of sulfate formation (minimum rough estimate ∼100 to 1000 years) and restriction by prevailing arid conditions on Mars.
Brandes, I; Wunderlich, B; Niehues, C
2011-04-01
The aim of the EVA study was to develop an outpatient education programme for women with endometriosis with a view to permanent transfer into routine care. Implementation of the programme generated several problems and obstacles that are not, or not to this extent, present in the inpatient setting of a rehabilitation clinic. The patient education programme was developed in line with an existing inpatient programme, taking into account the criteria for evaluating such training programmes. Several adjustments to process, structure and content level had to be made to achieve the conditions of the outpatient setting. Since May 2008, 17 training courses took place in various outpatient and acute inpatient settings, and a total of 156 women with diagnosed endometriosis participated. The problems and obstacles that emerged affected similarly the process, structure and content of the training programme. On the structural level, especially problems with availability of rooms, technical equipment and trainers occurred, leading to significant time pressures. The main problem on the procedural level was the recruitment of participants, since--in contrast to the inpatient setting and to disease management programmes--no assignment by physicians or insurers takes place. Furthermore, gainful activity of the participants and the resulting shift of the training beyond the usual working and opening hours are important barriers for implementation. The unavailability of trainers in these settings requires creative solutions. Regarding the contents of the training it has to be taken into consideration that--unlike the inpatient setting--no aftercare intervention and no individual psychological consultation are possible. The training programme has to be designed in such a way that all problems that have occurred could be dealt with appropriately. In summary, the permanent implementation of an outpatient training programme is possible but is more time-consuming than inpatient trainings due to unfavourable conditions concerning recruitment, organization and procedure. It seems that "soft" factors such as motivation, integration into the clinic concept, well-defined acceptance of responsibility and experience in dealing with the disease and with patient groups are the critical success factors. Until now cost carriage by the health insurance funds has not been realized--except for disease management programmes; so there is still a need for action here. © Georg Thieme Verlag KG Stuttgart · New York.
ARIADNE: a Tracking System for Relationships in LHCb Metadata
NASA Astrophysics Data System (ADS)
Shapoval, I.; Clemencic, M.; Cattaneo, M.
2014-06-01
The data processing model of the LHCb experiment implies handling of an evolving set of heterogeneous metadata entities and relationships between them. The entities range from software and database states to architecture specifications and software/data deployment locations. For instance, there is an important relationship between the LHCb Conditions Database (CondDB), which provides versioned, time dependent geometry and conditions data, and the LHCb software, which comprises the data processing applications (used for simulation, high level triggering, reconstruction and analysis of physics data). The evolution of the CondDB and of the LHCb applications is a weakly-homomorphic process. This means that relationships between a CondDB state and an LHCb application state may not be preserved across different database and application generations. These issues may lead to various kinds of problems in LHCb production, varying from unexpected application crashes to incorrect data processing results. In this paper we present Ariadne - a generic metadata relationships tracking system based on the novel NoSQL Neo4j graph database. Its aim is to track and analyze many thousands of evolving relationships for cases such as the one described above, and several others, which would otherwise remain unmanaged and potentially harmful. The highlights of the paper include the system's implementation and management details, the infrastructure needed for running it, security issues, first experience of usage in LHCb production and the potential of the system to be applied to a wider set of LHCb tasks.
Reuter, Martin; Montag, Christian; Peters, Kristina; Kocher, Anne; Kiefer, Markus
2009-01-01
The role of the prefrontal cortex (PFC) in higher cognitive functions - including working memory, conflict resolution, set shifting and semantic processing - has been demonstrated unequivocally. Despite the great heterogeneity among tasks measuring these phenotypes, due in part to the different cognitive sub-processes involved and the specificity of the stimulus material used, there is agreement that all of these tasks recruit an executive control system located in the PFC. On a biochemical level it is known that the dopaminergic system plays an important role in executive control functions. Evidence comes from molecular genetics relating the functional COMT Val158Met polymorphism to working memory and set shifting. In order to determine whether this pattern of findings generalises to linguistic and semantic processing, we investigated the effects of the COMT Val158Met polymorphism on lexical decision making using masked and unmasked versions of the semantic priming paradigm in N = 104 healthy subjects. Although we observed strong priming effects in all conditions (masked priming, unmasked priming with short/long stimulus onset asynchronies (SOAs), direct and indirect priming), COMT was not significantly related to priming, suggesting no reliable influence on semantic processing. However, COMT Val158Met was strongly associated with lexical decision latencies in all priming conditions when considered separately, explaining between 9 and 14.5% of the variance. Therefore, the findings indicate that COMT mainly influences more general executive control functions in the PFC supporting the speed of lexical decisions.
A one dimensional moving bed biofilm reactor model for nitrification of municipal wastewaters.
Barry, Ugo; Choubert, Jean-Marc; Canler, Jean-Pierre; Pétrimaux, Olivier; Héduit, Alain; Lessard, Paul
2017-08-01
This work presents a one-dimensional model of a moving bed bioreactor (MBBR) process designed for the removal of nitrogen from raw wastewaters. A comprehensive experimental strategy was deployed at a semi-industrial pilot-scale plant fed with a municipal wastewater, operated at 10-12 °C and surface loading rates of 1-2 g filtered COD/m²·d and 0.4-0.55 g NH₄-N/m²·d. Data were collected on influent/effluent composition and on measurements of key variables or parameters (biofilm mass and maximal thickness, thickness of the limit liquid layer, maximal nitrification rate, oxygen mass transfer coefficient). Based on time-course variations in these variables, the MBBR model was calibrated at two time-scales and magnitudes of dynamic conditions, i.e., short-term (4 days) calibration under dynamic conditions and long-term (33 days) calibration, and for three types of carriers. A set of parameters suitable for the conditions was proposed, and the calibrated parameter set is able to simulate the time-course change of nitrogen forms in the effluent of the MBBR tanks under the tested operating conditions. Parameters linked to diffusion had a strong influence on how robustly the model is able to reproduce time-course changes in effluent quality. The model was then used to optimize the operation of the MBBR layout. It was shown that the main optimization track consists of limiting the aeration supply without changing the overall performance of the process. Further work will investigate the influence of the hydrodynamic conditions on the thickness of the limit liquid layer and the "apparent" diffusion coefficient in the biofilm parameters.
Imagined futures in living with multiple conditions: Positivity, relationality and hopelessness.
Coyle, Lindsay-Ann; Atkinson, Sarah
2018-02-01
Hope serves as an overarching concept for a range of engagements that demonstrate the benefits of a positive outlook for coping with chronic conditions of ill-health and disability. A dominant engagement through medicine has positioned hope as a desirable attribute and its opposite, hopelessness, as pathological. In this engagement hope is individual, internally located and largely cognitive and able to be learned. Attaining hope reflects a process of coming to terms with the losses associated with long-term conditions and of imagining new meanings and purposes for the future ahead. This process is characterised by a set of linear temporal stages, from loss and denial to acceptance and reappraising the life-course, by an emphasis on the morally desirable exercise of self-care and by a desired outcome that, in the absence of cure, is hope. Through interviews, we aim to unsettle the privileged status given to a positive outlook through examining the expressions, contexts and negotiations of hopelessness of people living with multiple conditions of ill-health and/or disability. These narratives of hopelessness disclose the ways in which realistic imagined possibilities for the future are constrained by external structures of time and function that demand complex negotiations with places, bodies and other people. As a situated and relational narrative, hopelessness draws our attention to the need to rebalance the exclusive attention to individual, internal resources with a renewed attention to contexts and settings. Moreover, hopelessness can be generative for those living with multiple conditions in shaping alternatively framed priorities with respect to their temporal and interpersonal relations. Copyright © 2018 The Authors. Published by Elsevier Ltd.. All rights reserved.
Bread board float zone experiment system for high purity silicon
NASA Technical Reports Server (NTRS)
Kern, E. L.; Gill, G. L., Jr.
1982-01-01
A breadboard float zone experimental system has been established at Westech Systems for use by NASA in the float zone experimental area. A used zoner of suitable size and flexibility was acquired and installed with the necessary utilities. Repairs, alignments and modifications were made to provide for dislocation-free zoning of silicon. The zoner is capable of studying process parameters used in growing silicon in gravity and is flexible enough to allow trying of new features that will test concepts of zoning in microgravity. Characterizing the state-of-the-art molten zones of a growing silicon crystal will establish the data base against which improvements of zoning in gravity or growing in microgravity can be compared. A diameter of 25 mm was chosen as the reference size, since growth in microgravity will be at that diameter or smaller for about the next 6 years. Dislocation-free crystals were grown in the 100 and 111 orientations using a wide set of growth conditions. The zone shape at one set of conditions was measured by simultaneously aluminum doping and freezing the zone, lengthwise slabbing and delineating by etching. The whole set of crystals, grown under various conditions, was slabbed, polished and striation etched, revealing the growth interface shape and the periodic and aperiodic natures of the striations.
Higgins, Paul; Searchfield, Grant; Coad, Gavin
2012-06-01
The aim of this study was to determine which level-dependent hearing aid digital signal-processing strategy (DSP) participants preferred when listening to music and/or performing a speech-in-noise task. Two receiver-in-the-ear hearing aids were compared: one using 32-channel adaptive dynamic range optimization (ADRO) and the other wide dynamic range compression (WDRC) incorporating dual fast (4 channel) and slow (15 channel) processing. The manufacturers' first-fit settings based on participants' audiograms were used in both cases. Results were obtained from 18 participants on a quick speech-in-noise (QuickSIN; Killion, Niquette, Gudmundsen, Revit, & Banerjee, 2004) task and for 3 music listening conditions (classical, jazz, and rock). Participants preferred the quality of music and performed better at the QuickSIN task using the hearing aids with ADRO processing. A potential reason for the better performance of the ADRO hearing aids was less fluctuation in output with change in sound dynamics. ADRO processing has advantages for both music quality and speech recognition in noise over the multichannel WDRC processing that was used in the study. Further evaluations of which DSP aspects contribute to listener preference are required.
Volcke, E I P; van Loosdrecht, M C M; Vanrolleghem, P A
2006-01-01
The combined SHARON-Anammox process for treating wastewater streams with high ammonia load is the focus of this paper. In particular, partial nitritation in the SHARON reactor should be performed to such an extent that a nitrite:ammonium ratio is generated which is optimal for full conversion in an Anammox process. In the simulation studies performed in this contribution, the nitrite:ammonium ratio produced in a SHARON process with fixed volume, as well as its effect on the subsequent Anammox process, is examined for realistic influent conditions and considering both direct and indirect pH effects on the SHARON process. Several possible operating modes for the SHARON reactor, differing in control strategies for O2, pH and the produced nitrite:ammonium ratio and based on regulating the air flow rate and/or acid/base addition, are systematically evaluated. The results are quantified through an operating cost index. Best results are obtained by means of cascade feedback control of the SHARON effluent nitrite:ammonium ratio through setting an O2 set-point that is tracked by adjusting the air flow rate, combined with single loop pH control through acid/base addition.
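The cascade structure described above can be sketched as two nested feedback loops; the discrete PI controllers, gains and limits below are illustrative assumptions, not the controllers or tuning used in the study.

```python
class PI:
    """Minimal discrete PI controller (hypothetical gains, illustrative only)."""
    def __init__(self, kp, ki, dt, out_min, out_max):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        out = self.kp * error + self.ki * self.integral
        return min(max(out, self.out_min), self.out_max)

# Outer loop: nitrite:ammonium ratio error -> dissolved-O2 set-point (mg/L).
ratio_controller = PI(kp=0.5, ki=0.05, dt=1.0, out_min=0.2, out_max=3.0)
# Inner loop: O2 error -> air flow rate (arbitrary units).
oxygen_controller = PI(kp=10.0, ki=1.0, dt=1.0, out_min=0.0, out_max=500.0)

def control_step(ratio_setpoint, measured_ratio, measured_o2):
    """One cascade update: ratio error sets the O2 set-point, O2 error sets air flow."""
    o2_setpoint = ratio_controller.step(ratio_setpoint, measured_ratio)
    air_flow = oxygen_controller.step(o2_setpoint, measured_o2)
    return o2_setpoint, air_flow
```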
Hamann, Hendrik F.; Hwang, Youngdeok; van Kessel, Theodore G.; Khabibrakhmanov, Ildar K.; Muralidhar, Ramachandran
2016-10-18
A method and a system to perform multi-model blending are described. The method includes obtaining one or more sets of predictions of historical conditions, the historical conditions corresponding with a time T that is historical in reference to the current time, and the one or more sets of predictions of the historical conditions being output by one or more models. The method also includes obtaining actual historical conditions, the actual historical conditions being measured conditions at the time T, assembling a training data set by designating the one or more sets of predictions of historical conditions as predictor variables and the actual historical conditions as response variables, and training a machine learning algorithm based on the training data set. The method further includes obtaining a blended model based on the machine learning algorithm.
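A minimal sketch of the described blending step follows, assuming a linear regressor as the machine learning algorithm; all names and numbers are illustrative and not taken from the patent.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def train_blended_model(model_predictions, actual_conditions):
    """Each column of the training matrix is one model's prediction of the
    historical conditions at time T; the response is what was actually measured."""
    X = np.column_stack(model_predictions)   # predictor variables
    y = np.asarray(actual_conditions)        # response variables
    blender = LinearRegression()             # stand-in for any ML algorithm
    blender.fit(X, y)
    return blender

# Usage: blend two hypothetical forecast models of, e.g., temperature.
model_a = [20.1, 21.4, 19.8, 22.0]
model_b = [19.5, 21.0, 20.2, 22.5]
observed = [19.9, 21.2, 20.0, 22.3]
blender = train_blended_model([model_a, model_b], observed)
blended_forecast = blender.predict([[21.0, 20.6]])
```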
Li, Tianhao; Fu, Qian-Jie
2013-01-01
Objectives: (1) To investigate whether voice gender discrimination (VGD) could be a useful indicator of the spectral and temporal processing abilities of individual cochlear implant (CI) users; (2) to examine the relationship between VGD and speech recognition with CI when comparable acoustic cues are used for both perception processes. Design: VGD was measured using two talker sets with different inter-gender fundamental frequencies (F0), as well as different acoustic CI simulations. Vowel and consonant recognition in quiet and noise were also measured and compared with VGD performance. Study sample: Eleven postlingually deaf CI users. Results: The results showed that (1) mean VGD performance differed for different stimulus sets, (2) VGD and speech recognition performance varied among individual CI users, and (3) individual VGD performance was significantly correlated with speech recognition performance under certain conditions. Conclusions: VGD measured with selected stimulus sets might be useful for assessing not only pitch-related perception, but also spectral and temporal processing by individual CI users. In addition to improvements in spectral resolution and modulation detection, the improvement in higher modulation frequency discrimination might be particularly important for CI users in noisy environments. PMID:21696330
Radar Sounding Investigations at the Boundary of Thwaites and Pine Island Glaciers
NASA Astrophysics Data System (ADS)
Schroeder, Dustin; Hilger, Andrew; Paden, John; Corr, Hugh; Blankenship, Donald
2017-04-01
Recent observational and modeling studies have shown that the behavior and stability of both Thwaites Glacier and Pine Island Glacier in the Amundsen Sea Embayment of the West Antarctic Ice Sheet are modulated by a combination of ocean forcing, bed topography, and basal conditions. In terms of future deglaciation scenarios and their ultimate sea level contribution, the configuration, evolution, and ice-dynamical impact of basal conditions in the boundary region between Thwaites Glacier and Pine Island Glacier stand to play a particularly significant role. This region not only separates the two most rapidly changing glaciers in Antarctica, but, as a result, also has the potential to be the site of dynamic and destabilizing interactions between them as either glacier retreats. Despite this potential, little research has focused on characterizing the basal-condition context for modeling current and potential interaction across this boundary. One reason is that, despite relatively dense airborne radar sounding coverage in the area, the data in this region were collected by three different radar systems, and much of the Thwaites / Pine Island boundary lies at the boundary of these data sets. These include the 2004 survey of Thwaites Glacier by the UTIG HiCARS system, the 2004 campaign over Pine Island Glacier by the BAS PASIN system, and the 2011 - 2014 surveys of the Amundsen Sea Embayment by the CReSIS MCoRDS system. This has resulted in distinct sets of observations, collected across a range of frequencies, bandwidths, coherency, and observing geometries. To date, these data have also been processed by different institutions with software, algorithms and approaches that were specifically developed for each radar system. While each produces consistent ice thickness measurements, the character of their bed echoes has yet to be exploited. Here, we present initial results from processing, analyzing, and synthesizing these three distinct data sets to characterize basal conditions for modeling and interpretation across the boundary between Thwaites and Pine Island. These results highlight the significance of conditions beneath the Southwest Tributary of Pine Island, the Eastern Shear Margin of Thwaites, and the Bentley Subglacial Trench for the behavior, evolution, and stability of the Amundsen Sea sector.
Yang, Li-Zhuang; Zhang, Wei; Shi, Bin; Yang, Zhiyu; Wei, Zhengde; Gu, Feng; Zhang, Jing; Cui, Guanbao; Liu, Ying; Zhou, Yifeng; Zhang, Xiaochu; Rao, Hengyi
2014-01-01
Transcranial direct current stimulation (tDCS) is a non-invasive brain stimulation technique that can modulate cortical excitability. Although the clinical value of tDCS has been advocated, the potential of tDCS in cognitive rehabilitation of face processing deficits is less understood. Face processing has been associated with the occipito-temporal cortex (OT). The present study investigated whether face processing in healthy adults can be modulated by applying tDCS over the OT. Experiment 1 investigated whether tDCS can affect N170, a face-sensitive ERP component, with a face orientation judgment task. The N170 in the right hemisphere was reduced in active stimulation conditions compared with the sham stimulation condition for both upright faces and inverted faces. Experiment 2 further demonstrated that tDCS can modulate the composite face effect, a type of holistic processing that reflects the obligatory attention to all parts of a face. The composite face effect was reduced in active stimulation conditions compared with the sham stimulation condition. Additionally, the current polarity did not modulate the effect of tDCS in the two experiments. The present study demonstrates that N170 can be causally manipulated by stimulating the OT with weak currents. Furthermore, our study provides evidence that obligatory attention to all parts of a face can be affected by the commonly used tDCS parameter setting. PMID:25531112
Mei, J.; Dong, P.; Kalnaus, S.; ...
2017-07-21
It has been well established that the fatigue damage process is load-path dependent under non-proportional multi-axial loading conditions. Most studies to date have focused on interpreting S-N based test data by constructing a path-dependent fatigue damage model. Our paper presents a two-parameter mixed-mode fatigue crack growth model which takes into account the dependency of crack growth on both the load path traversed and the maximum effective stress intensity attained in a stress intensity factor plane (e.g., the KI-KIII plane). Furthermore, by taking advantage of a path-dependent maximum range (PDMR) cycle definition (Dong et al., 2010; Wei and Dong, 2010), the two parameters are formulated by introducing a moment of load path (MLP) based equivalent stress intensity factor range (ΔKNP) and a maximum effective stress intensity parameter KMax incorporating an interaction term KI·KIII. To examine the effectiveness of the proposed model, two sets of crack growth rate test data are considered. The first set is obtained as part of this study using 304 stainless steel disk specimens subjected to three combined non-proportional mode I and III loading conditions (i.e., with phase angles of 0°, 90°, and 180°). The second set was obtained by Feng et al. (2007) using 1070 steel disk specimens subjected to similar types of non-proportional mixed-mode conditions. Once the proposed two-parameter non-proportional mixed-mode crack growth model is used, it is shown that a good correlation can be achieved for both sets of crack growth rate test data.
South Platte River Basin - Colorado, Nebraska, and Wyoming
Dennehy, Kevin F.; Litke, David W.; Tate, Cathy M.; Heiny, Janet S.
1993-01-01
The South Platte River Basin was one of 20 study units selected in 1991 for investigation under the U.S. Geological Survey's National Water-Quality Assessment (NAWQA) program. One of the initial tasks undertaken by the study unit team was to review the environmental setting of the basin and assemble ancillary data on natural and anthropogenic factors in the basin. The physical, chemical, and biological quality of the water in the South Platte River Basin is explicitly tied to its environmental setting. The resulting water quality is the product of the natural conditions and human factors that make up the environmental setting of the basin.This description of the environmental setting of the South Platte River Basin and its implications to the water quality will help guide the design of the South Platte NAWQA study. Natural conditions such as physiography, climate, geology, and soils affect the ambient water quality while anthropogenic factors such as water use, population, land use and water-management practices can have a pronounced effect on water quality in the basin. The relative effects of mining, urban, and agricultural land- and water-uses on water-quality constituents are not well understood. The interrelation of the surface-water and ground-water systems and the chemical and biological processes that affect the transport of constituents needs to be addressed. Interactions between biological communities and the water resources also should be considered. The NAWQA program and the South Platte River Basin study will provide information to minimize existing knowledge gaps, so that we may better understand the effect these natural conditions and human factors have on the water-quality conditions in the basin, now and in the future.
Powell, Jane; Letson, Susan; Davidoff, Jules; Valentine, Tim; Greenwood, Richard
2008-04-01
Twenty patients with impairments of face recognition, in the context of a broader pattern of cognitive deficits, were administered three new training procedures derived from contemporary theories of face processing to enhance their learning of new faces: semantic association (being given additional verbal information about the to-be-learned faces); caricaturing (presentation of caricatured versions of the faces during training and veridical versions at recognition testing); and part recognition (focusing patients on distinctive features during the training phase). Using a within-subjects design, each training procedure was applied to a different set of 10 previously unfamiliar faces and entailed six presentations of each face. In a "simple exposure" control procedure (SE), participants were given six presentations of another set of faces using the same basic protocol but with no further elaboration. Order of the four procedures was counterbalanced, and each condition was administered on a different day. A control group of 12 patients with similar levels of face recognition impairment were trained on all four sets of faces under SE conditions. Compared to the SE condition, all three training procedures resulted in more accurate discrimination between the 10 studied faces and 10 distractor faces in a post-training recognition test. This did not reflect any intrinsic lesser memorability of the faces used in the SE condition, as evidenced by the comparable performance across face sets by the control group. At the group level, the three experimental procedures were of similar efficacy, and associated cognitive deficits did not predict which technique would be most beneficial to individual patients; however, there was limited power to detect such associations. Interestingly, a pure prosopagnosic patient who was tested separately showed benefit only from the part recognition technique. Possible mechanisms for the observed effects, and implications for rehabilitation, are discussed.
The effects of quantity and depth of processing on children's time perception.
Arlin, M
1986-08-01
Two experiments were conducted to investigate the effects of quantity and depth of processing on children's time perception. These experiments tested the appropriateness of two adult time-perception models (attentional and storage size) for younger ages. Children were given stimulus sets of equal time which varied by level of processing (deep/shallow) and quantity (list length). In the first experiment, 28 children in Grade 6 reproduced presentation times of various quantities of pictures under deep (living/nonliving categorization) or shallow (repeating label) conditions. Students also compared pairs of durations. In the second experiment, 128 children in Grades K, 2, 4, and 6 reproduced presentation times under similar conditions with three or six pictures and with deep or shallow processing requirements. Deep processing led to decreased estimation of time. Higher quantity led to increased estimation of time. Comparative judgments were influenced by quantity. The interaction between age and depth of processing was significant. Older children were more affected by depth differences than were younger children. Results were interpreted as supporting different aspects of each adult model as explanations of children's time perception. The processing effect supported the attentional model and the quantity effect supported the storage size model.
Fluidized muds: a novel setting for the generation of biosphere diversity through geologic time.
Aller, J Y; Aller, R C; Kemp, P F; Chistoserdov, A Y; Madrid, V M
2010-06-01
Reworked and fluidized fine-grained deposits in energetic settings are a major modern-day feature of river deltas and estuaries. Similar environments were probably settings for microbial evolution on the early Earth. These sedimentary systems act as efficient biogeochemical reactors with high bacterial phylogenetic diversity and functional redundancy. They are temporally rather than spatially structured, with repeated cycling of redox conditions and successive stages of microbial metabolic processes. Intense reworking of the fluidized bed entrains bacteria from varied habitats providing new, diverse genetic materials to contribute to horizontal gene transfer events and the creation of new bacterial ecotypes. These vast mud environments may act as exporters and promoters of biosphere diversity and novel adaptations, potentially on a globally important scale.
Switching and optimizing control for coal flotation process based on a hybrid model
Dong, Zhiyong; Wang, Ranfeng; Fan, Minqiang; Fu, Xiang
2017-01-01
Flotation is an important part of coal preparation, and the flotation column is widely applied as efficient flotation equipment. This process is complex and affected by many factors, with the froth depth and reagent dosage being two of the most important and frequently manipulated variables. This paper proposes a new method of switching and optimizing control for the coal flotation process. A hybrid model is built and evaluated using industrial data. First, wavelet analysis and principal component analysis (PCA) are applied for signal pre-processing. Second, a control model for optimizing the set point of the froth depth is constructed based on fuzzy control, and a control model is designed to optimize the reagent dosages based on expert system. Finally, the least squares-support vector machine (LS-SVM) is used to identify the operating conditions of the flotation process and to select one of the two models (froth depth or reagent dosage) for subsequent operation according to the condition parameters. The hybrid model is developed and evaluated on an industrial coal flotation column and exhibits satisfactory performance. PMID:29040305
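The switching logic can be sketched as a small classifier that routes each operating condition to one of the two sub-controllers; a standard RBF support vector classifier stands in for the LS-SVM named above, and the features and labels are made up.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# condition_features: rows of pre-processed measurements (the wavelet de-noising
# step described in the abstract is omitted here); labels: 0 = adjust the froth
# depth set point, 1 = adjust the reagent dosage. All values are made up.
condition_features = np.array([[1.2, 0.8, 3.1], [1.1, 0.9, 3.0],
                               [2.5, 1.9, 1.2], [2.6, 2.0, 1.1]])
labels = np.array([0, 0, 1, 1])

# PCA mirrors the dimensionality-reduction step; SVC stands in for the LS-SVM.
switcher = make_pipeline(StandardScaler(), PCA(n_components=2),
                         SVC(kernel="rbf"))
switcher.fit(condition_features, labels)

def select_controller(features):
    """Return which sub-controller to run for the current operating condition."""
    return ("froth_depth_fuzzy" if switcher.predict([features])[0] == 0
            else "reagent_dosage_expert")
```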
Development of an Optimization Methodology for the Aluminum Alloy Wheel Casting Process
NASA Astrophysics Data System (ADS)
Duan, Jianglan; Reilly, Carl; Maijer, Daan M.; Cockcroft, Steve L.; Phillion, Andre B.
2015-08-01
An optimization methodology has been developed for the aluminum alloy wheel casting process. The methodology is focused on improving the timing of cooling processes in a die to achieve improved casting quality. This methodology utilizes (1) a casting process model, which was developed within the commercial finite element package ABAQUS™ (ABAQUS is a trademark of Dassault Systèmes); (2) a Python-based results extraction procedure; and (3) a numerical optimization module from the open-source Python library SciPy. To achieve optimal casting quality, a set of constraints has been defined to ensure directional solidification, and an objective function, based on the solidification cooling rates, has been defined to either maximize, or target a specific, cooling rate. The methodology has been applied to a series of casting and die geometries with different cooling system configurations, including a 2-D axisymmetric wheel and die assembly generated from a full-scale prototype wheel. The results show that, with properly defined constraint and objective functions, solidification conditions can be improved and optimal cooling conditions can be achieved, leading to process productivity and product quality improvements.
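A minimal sketch of such an optimization loop is shown below, with the ABAQUS simulation and results extraction replaced by a placeholder function and with illustrative constraint and objective definitions rather than the authors' exact formulations.

```python
import numpy as np
from scipy.optimize import minimize

def run_casting_model(cooling_start_times):
    """Placeholder for the finite-element casting simulation plus results
    extraction; it returns made-up cooling rates and a directional-solidification
    margin purely so the optimization loop can be demonstrated."""
    cooling_rates = 1.0 / (1.0 + np.asarray(cooling_start_times))
    solidification_margin = 5.0 - np.max(cooling_start_times)
    return cooling_rates, solidification_margin

def objective(x):
    rates, _ = run_casting_model(x)
    return -np.mean(rates)            # maximize the mean cooling rate

def directional_constraint(x):
    _, margin = run_casting_model(x)
    return margin                     # must remain >= 0 (directional solidification)

x0 = np.array([2.0, 3.0, 4.0])        # initial cooling-channel start times (s)
result = minimize(objective, x0, method="SLSQP",
                  bounds=[(0.0, 10.0)] * 3,
                  constraints=[{"type": "ineq", "fun": directional_constraint}])
print(result.x)
```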
Wildman, R.A.; Domagalski, Joseph L.; Hering, J.G.
2009-01-01
The relative influences of hydrologic processes and biogeochemistry on the transport and retention of minor solutes were compared in the riverbed of the lower Merced River (California, USA). The subsurface of this reach receives ground water discharge and surface water infiltration due to an altered hydraulic setting resulting from agricultural irrigation. Filtered ground water samples were collected from 30 drive point locations in March, June, and October 2004. Hydrologic processes, described previously, were verified by observations of bromine concentrations; manganese was used to indicate redox conditions. The separate responses of the minor solutes strontium, barium, uranium, and phosphorus to these influences were examined. Correlation and principal component analyses indicate that hydrologic processes dominate the distribution of trace elements in the ground water. Redox conditions appear to be independent of hydrologic processes and account for most of the remaining data variability. With some variability, major processes are consistent in two sampling transects separated by 100 m. Copyright ?? 2009 by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America. All rights reserved.
Bartel, S; Bethge, M; Streibelt, M; Thren, K; Lassahn, C
2010-06-01
In Germany, the introduction of the law on Integrated Health Care (IC) (§ 140a-d SGB V) opened up the possibility of cross-sectoral health care settings and new forms of remuneration, and improved the conditions for closer cooperation between health care providers. However, cross-institutional and interdisciplinary work contexts demand new organizational structures in order to ensure the coordination of different competences, resources and interests. This study aims at identifying factors of successful integrated care settings for total hip and knee arthroplasty. Using the example of an integrated care setting between an orthopaedic hospital and a rehabilitation clinic, it is examined which factors lead to successful implementation of the services and measures designed. A qualitative research design was developed comprising different methods of data assessment (participant observation, guided expert interviews, document analyses), enabling a comprehensive exploration. Overall, data were derived from six consultations with patients, two integrated care information sessions and various documents (17 patient files, information material, patient lists, etc.). First, the different phases of development and implementation of integrated care settings were described. In this context, clearly defined aims, structures and appropriate measures seem to be crucial for an ideal long-term cooperation. Furthermore, the staff perspective on the effects of the IC programme on their daily routines proved an essential basis for process reconstruction. The staff members pointed out four main aspects regarding IC settings, i.e., improved image, increased knowledge, intensity of relationship, and less and more work effort. Against this background, factors of successful IC settings could be derived, such as the need for central coordination, a regular staff information system, as well as accompanying process monitoring. Several key factors of successful integrated care settings in arthroplasty could be identified which provide important clues for shaping future interdisciplinary and cross-sectoral cooperation settings in health care services in general. Georg Thieme Verlag KG Stuttgart New York.
NASA Technical Reports Server (NTRS)
Siegfried, D. E.
1982-01-01
A quartz hollow tube cathode was used to determine the operating conditions within a mercury orificed hollow cathode. Insert temperature profiles, cathode current distributions, plasma properties profile, and internal pressure-mass flow rate results are summarized and used in a phenomenological model which qualitatively describes electron emission and plasma production processes taking place within the cathode. By defining an idealized ion production region within which most of the plasma processes are concentrated, this model is expressed analytically as a simple set of equations which relate cathode dimensions and specifiable operating conditions, such as mass flow rate and discharge current, to such important parameters as emission surface temperature and internal plasma properties. Key aspects of the model are examined.
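The abstract does not reproduce the model equations. As a purely illustrative sketch, the snippet below assumes the common pairing of Richardson-Dushman thermionic emission with an effective emission area to relate discharge current to insert surface temperature; the function names, work function and area values are hypothetical and are not taken from the paper.

```python
import numpy as np

def emission_current_density(T_surface_K, work_function_eV):
    """Richardson-Dushman thermionic emission (A/m^2); a standard relation
    often used in hollow-cathode models, not necessarily the one in the paper."""
    A0 = 1.2e6          # Richardson constant, A m^-2 K^-2
    k_eV = 8.617e-5     # Boltzmann constant, eV/K
    return A0 * T_surface_K**2 * np.exp(-work_function_eV / (k_eV * T_surface_K))

def required_surface_temperature(discharge_current_A, emission_area_m2,
                                 work_function_eV=2.0):
    """Invert the emission relation numerically: find the insert temperature
    that supplies the requested discharge current (hypothetical helper)."""
    target_j = discharge_current_A / emission_area_m2
    temps = np.linspace(1000.0, 2000.0, 10001)
    j = emission_current_density(temps, work_function_eV)
    return float(np.interp(target_j, j, temps))

# Example: 3 A discharge from an assumed 1 cm^2 effective emission area
print(required_surface_temperature(3.0, 1.0e-4))
```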
Redox processes and water quality of selected principal aquifer systems
McMahon, P.B.; Chapelle, F.H.
2008-01-01
Reduction/oxidation (redox) conditions in 15 principal aquifer (PA) systems of the United States, and their impact on several water quality issues, were assessed from a large data base collected by the National Water-Quality Assessment Program of the USGS. The logic of these assessments was based on the observed ecological succession of electron acceptors such as dissolved oxygen, nitrate, and sulfate and threshold concentrations of these substrates needed to support active microbial metabolism. Similarly, the utilization of solid-phase electron acceptors such as Mn(IV) and Fe(III) is indicated by the production of dissolved manganese and iron. An internally consistent set of threshold concentration criteria was developed and applied to a large data set of 1692 water samples from the PAs to assess ambient redox conditions. The indicated redox conditions then were related to the occurrence of selected natural (arsenic) and anthropogenic (nitrate and volatile organic compounds) contaminants in ground water. For the natural and anthropogenic contaminants assessed in this study, considering redox conditions as defined by this framework of redox indicator species and threshold concentrations explained many water quality trends observed at a regional scale. An important finding of this study was that samples indicating mixed redox processes provide information on redox heterogeneity that is useful for assessing common water quality issues. Given the interpretive power of the redox framework and given that it is relatively inexpensive and easy to measure the chemical parameters included in the framework, those parameters should be included in routine water quality monitoring programs whenever possible.
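A threshold-based framework of this kind lends itself to a simple rule-based implementation. The sketch below is only an illustration of the approach; the threshold values and category names are placeholders, not the criteria adopted in the study.

```python
def classify_redox(o2, no3, mn, fe, so4):
    """Assign a redox category from dissolved concentrations (mg/L).
    Thresholds are illustrative placeholders, not the study's criteria.
    A fuller version would also flag mixed redox when several indicators
    are elevated at the same time."""
    if o2 >= 0.5:
        return "oxic"
    if no3 >= 0.5:
        return "nitrate-reducing"
    if mn >= 0.05:
        return "Mn(IV)-reducing"
    if fe >= 0.1:
        return "Fe(III)-reducing"
    if so4 >= 0.5:
        return "sulfate-reducing"
    return "methanogenic"

samples = [
    {"o2": 3.1, "no3": 1.2, "mn": 0.01, "fe": 0.02, "so4": 12.0},
    {"o2": 0.1, "no3": 0.1, "mn": 0.30, "fe": 1.50, "so4": 25.0},
]
for s in samples:
    print(classify_redox(**s))
```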
Schmucki, Reto; de Blois, Sylvie
2009-07-01
Habitat-corridors are assumed to counteract the negative impacts of habitat loss and fragmentation, but their efficiency in doing so depends on the maintenance of ecological processes in corridor conditions. For plants dispersing in linear habitats, one of these critical processes is the maintenance of adequate pollen transfer to insure seed production within the corridor. This study focuses on a common, self-incompatible forest herb, Trillium grandiflorum, to assess plant-pollinator interactions and the influence of spatial processes on plant reproduction in hedgerow corridors compared to forests. First, using pollen supplementation experiments over 2 years, we quantified the extent of pollen limitation in both habitats, testing the prediction of greater limitation in small hedgerow populations than in forests. While pollen limitation of fruit and seed set was common, its magnitude did not differ between habitats. Variations among sites, however, suggested an influence of landscape context on pollination services. Second, we examined the effect of isolation on plant reproduction by monitoring fruit and seed production, as well as pollinator activity and assemblage, in small flower arrays transplanted in hedgerows at increasing distances from forest and from each other. We detected no difference in the proportion of flowers setting fruit or in pollinator activity with isolation, but we observed some differences in pollinator assemblages. Seed set, on the other hand, declined significantly with increasing isolation in the second year of the study, but not in the first year, suggesting altered pollen transfer with distance. Overall, plants in hedgerow corridors and forests benefited from similar pollination services. In this system, plant-pollinator interactions and reproduction seem to be influenced more by variations in resource distribution over years and landscapes than by local habitat conditions.
Application of a statistical emulator to fire emission modeling
Marwan Katurji; Jovanka Nikolic; Shiyuan Zhong; Scott Pratt; Lejiang Yu; Warren E. Heilman
2015-01-01
We have demonstrated the use of an advanced Gaussian-Process (GP) emulator to estimate wildland fire emissions over a wide range of fuel and atmospheric conditions. The Fire Emission Production Simulator, or FEPS, is used to produce an initial set of emissions data that correspond to some selected values in the domain of the input fuel and atmospheric parameters for...
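As a generic illustration of how a Gaussian-process emulator can be trained on a small set of simulator runs and then queried over a wider input domain (FEPS itself is not invoked here; the input variables and the toy response below are hypothetical stand-ins):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical training design: columns = (fuel load [kg/m^2], wind speed [m/s])
X_train = np.array([[0.5, 1.0], [0.5, 5.0], [2.0, 1.0], [2.0, 5.0],
                    [1.0, 3.0], [3.0, 2.0], [3.0, 6.0], [1.5, 4.0]])
# Toy "emissions" response standing in for FEPS output
y_train = 10.0 * X_train[:, 0] + 0.8 * X_train[:, 1] ** 1.5

kernel = ConstantKernel(1.0) * RBF(length_scale=[1.0, 1.0])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X_train, y_train)

# Emulate emissions at conditions not present in the training set
X_new = np.array([[1.2, 2.5], [2.8, 5.5]])
mean, std = gp.predict(X_new, return_std=True)
print(mean, std)   # emulator prediction with an uncertainty estimate
```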
2013-01-01
settings that cover the range of environmental conditions in which the rations are expected to function. These vitally important state-of-the-art ... and the Joint Culinary Center of Excellence on nutritional issues impacting the Warfighter, supports the Surgeon General's responsibilities as the ... Advanced Food Processing Laboratory and Food Pilot Plant for production and testing of food to facilitate state-of-the-art ration development. The
ERIC Educational Resources Information Center
Tahiri, Afredita
2010-01-01
The author discusses transformative learning as a means to explore the centrality of experience, critical reflection, and rational discourse in examining a set of conditions that need to be fulfilled to foster the application of the transformative learning process for professors and students at the University of Prishtina. The author…
The Social Consequences of the Changing Functions of the Rural Family in Post-War Poland.
ERIC Educational Resources Information Center
Kocik, Lucjan
Conducted in four villages situated near Tarnow, a large urban and industrial centre, this study examined the process of change in the functioning of the rural family, as set against the transformation of their general living conditions brought about by the socialist industrialization and urbanization in post-war Poland. Issues studied were:…
Effects of and Preference for Pay for Performance: An Analogue Analysis
ERIC Educational Resources Information Center
Long, Robert D., III; Wilder, David A.; Betz, Alison; Dutta, Ami
2012-01-01
We examined the effects of 2 payment systems on the rate of check processing and time spent on task by participants in a simulated work setting. Three participants experienced individual pay-for-performance (PFP) without base pay and pay-for-time (PFT) conditions. In the last phase, we asked participants to choose which system they preferred. For…
Design of a Slowed-Rotor Compound Helicopter for Future Joint Service Missions
NASA Technical Reports Server (NTRS)
Silva, Christopher; Yeo, Hyeonsoo; Johnson, Wayne R.
2010-01-01
A slowed-rotor compound helicopter has been synthesized using the NASA Design and Analysis of Rotorcraft (NDARC) conceptual design software. An overview of the design process and the capabilities of NDARC are presented. The benefits of trading rotor speed, wing-rotor lift share, and trim strategies are presented for an example set of sizing conditions and missions.
Fernández, Roemi; Salinas, Carlota; Montes, Héctor; Sarria, Javier
2014-01-01
The motivation of this research was to explore the feasibility of detecting and locating fruits from different kinds of crops in natural scenarios. To this end, a unique, modular and easily adaptable multisensory system and a set of associated pre-processing algorithms are proposed. The proposed multisensory rig combines a high resolution colour camera and a multispectral system for the detection of fruits, as well as for the discrimination of the different elements of the plants, and a Time-Of-Flight (TOF) camera that provides fast acquisition of distances enabling the localisation of the targets in the coordinate space. A controlled lighting system completes the set-up, increasing its flexibility for being used in different working conditions. The pre-processing algorithms designed for the proposed multisensory system include a pixel-based classification algorithm that labels areas of interest that belong to fruits and a registration algorithm that combines the results of the aforementioned classification algorithm with the data provided by the TOF camera for the 3D reconstruction of the desired regions. Several experimental tests have been carried out in outdoor conditions in order to validate the capabilities of the proposed system. PMID:25615730
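The abstract does not give the algorithms in detail; the following is a minimal sketch of the two pre-processing steps it describes, a pixel-based classification and a registration of labelled pixels against TOF depth data, using made-up band names, thresholds and camera intrinsics.

```python
import numpy as np

def label_fruit_pixels(nir, red, threshold=0.4):
    """Toy pixel-based classifier: flag pixels whose normalized difference
    index exceeds a threshold (band choice and threshold are placeholders)."""
    ndi = (nir - red) / (nir + red + 1e-9)
    return ndi > threshold

def to_3d(mask, depth, fx=525.0, fy=525.0, cx=320.0, cy=240.0):
    """Back-project labelled pixels to 3D using TOF depth and a pinhole
    camera model with assumed intrinsics."""
    v, u = np.nonzero(mask)
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.column_stack([x, y, z])

nir = np.random.rand(480, 640); red = np.random.rand(480, 640)
depth = np.full((480, 640), 1.5)          # metres, toy TOF frame
points = to_3d(label_fruit_pixels(nir, red), depth)
print(points.shape)                       # (N, 3) fruit candidates in space
```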
Usó-Doménech, Josep-Lluis; Nescolarde-Selva, Josué-Antonio; Lloret-Climent, Miguel; González-Franco, Lucía
2018-03-01
The mathematical submodel ULEX is used to study the dynamic behavior of the green, floral and woody biomass of the main pyrophyte shrub species, the gorse (Ulex parviflorus Pourret), and its relationship with other shrub species typical of a Mediterranean ecosystem. The focus is on the ecological conditions of post-fire growth and on the species' efficacy as a protective cover against erosion processes in the short, medium and long term, both in normal conditions and at the limits of desertification conditions. The model is intended to describe this behavior so that adequate protection, restoration and management measures can be anticipated and applied. Copyright © 2018 Elsevier Inc. All rights reserved.
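The governing equations of ULEX are not given in the abstract; as a purely illustrative stand-in, the sketch below integrates a logistic regrowth model for post-fire shrub biomass under a water-limitation factor. The parameter names and values are hypothetical and are not the model's.

```python
import numpy as np

def regrow_biomass(years=30, dt=0.1, r=0.6, K=25.0, water_stress=1.0):
    """Logistic post-fire regrowth of above-ground biomass (t/ha).
    'water_stress' in (0, 1] scales growth; 1 = normal, <1 = semi-arid limits."""
    n = int(years / dt)
    b = np.empty(n + 1)
    b[0] = 0.1                         # residual biomass just after fire
    for i in range(n):
        b[i + 1] = b[i] + dt * water_stress * r * b[i] * (1.0 - b[i] / K)
    return b

normal = regrow_biomass(water_stress=1.0)
arid = regrow_biomass(water_stress=0.4)   # near-desertification conditions
print(normal[-1], arid[-1])               # long-term protective cover differs
```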
NASA Astrophysics Data System (ADS)
Panaras, G.; Mathioulakis, E.; Belessiotis, V.
2018-01-01
The operation of desiccant air-conditioning systems is characterised by processes implemented on the moist air of the environment; it is, thus, expected to be affected by ambient conditions. The present work aims at quantifying this influence on the basis of an easy-to-implement, steady-state model of the system, presenting an efficiency-factors approach, which has been experimentally validated. The analysis examines the behaviour of the ventilation and the recirculation cycles, which constitute the marginal cases regarding the achieved values of the outside air fraction, given the ambient conditions, the desired regeneration temperature and the efficiency of the involved components. The analysis also accounts for the fact that, in actual operation, a desiccant cycle is subjected to a set of changing ambient conditions. The results provide useful information for the selection of the optimum configuration to the designer of a desiccant air-conditioning system.
Robust interval-based regulation for anaerobic digestion processes.
Alcaraz-González, V; Harmand, J; Rapaport, A; Steyer, J P; González-Alvarez, V; Pelayo-Ortiz, C
2005-01-01
A robust regulation law is applied to the stabilization of a class of biochemical reactors exhibiting partially known highly nonlinear dynamic behavior. An uncertain environment with the presence of unknown inputs is considered. Based on some structural and operational conditions, this regulation law is shown to exponentially stabilize the aforementioned bioreactors around a desired set-point. This approach is experimentally applied and validated on a pilot-scale (1 m3) anaerobic digestion process for the treatment of raw industrial wine distillery wastewater where the objective is the regulation of the chemical oxygen demand (COD) by using the dilution rate as the manipulated variable. Despite large disturbances on the input COD and state and parametric uncertainties, this regulation law gave excellent performances leading the output COD towards its set-point and keeping it inside a pre-specified interval.
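The authors' robust interval-based regulation law is not reproduced here; the sketch below only illustrates the control objective described, driving effluent COD toward a set-point by manipulating the dilution rate, using a toy first-order plant model and a simple clipped integral correction rather than the paper's design.

```python
def simulate(cod_in=15.0, cod_set=3.0, kp=0.5, d_min=0.0, d_max=1.0,
             steps=2000, dt=0.01):
    """Toy chemostat-like COD dynamics: dS/dt = D*(S_in - S) - k*S.
    The dilution rate D (1/d) is the manipulated variable; raising D raises
    the organic load, so the correction acts against the COD error."""
    k = 2.0                     # lumped degradation rate, hypothetical
    s, d = 8.0, 0.3             # initial effluent COD (g/L) and dilution rate
    for _ in range(steps):
        d = min(max(d - kp * (s - cod_set) * dt, d_min), d_max)
        s += dt * (d * (cod_in - s) - k * s)
    return s, d

print(simulate())   # effluent COD should settle near cod_set
```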
Effect of set size, age, and mode of stimulus presentation on information-processing speed.
NASA Technical Reports Server (NTRS)
Norton, J. C.
1972-01-01
First, second, and third grade pupils served as subjects in an experiment designed to show the effect of age, mode of stimulus presentation, and information value on recognition time. Stimuli were presented in picture and printed word form and in groups of 2, 4, and 8. The results of the study indicate that first graders are slower than second and third graders who are nearly equal. There is a gross shift in reaction time as a function of mode of stimulus presentation with increase in age. The first graders take much longer to identify words than pictures, while the reverse is true of the older groups. With regard to set size, a slope appears in the pictures condition in the older groups, while for first graders, a large slope occurs in the words condition and only a much smaller one for pictures.
Levenson, Steven A; Desai, Abhilash K
2017-04-01
Despite much attention including national initiatives, concerns remain about the approaches to managing behavior symptoms and psychiatric conditions across all settings, including in long-term care settings such as nursing homes and assisted living facilities. One key reason why problems persist is because most efforts to "reform" and "correct" the situation have failed to explore or address root causes and instead have promoted inadequate piecemeal "solutions." Further improvement requires jumping off the bandwagon and rethinking the entire issue, including recognizing and applying key concepts of clinical reasoning and the care delivery process to every situation. The huge negative impact of cognitive biases and rote approaches on related clinical problem solving and decision making and patient outcomes also must be addressed. Copyright © 2017 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.
Alant, Erna; Kolatsis, Anna; Lilienfeld, Margi
2010-03-01
An important aspect in AAC concerns the user's ability to locate an aided visual symbol on a communication display in order to facilitate meaningful interaction with partners. Recent studies have suggested that the use of different colored symbols may be influential in the visual search process, and that this, in turn will influence the speed and accuracy of symbol location. This study examined the role of color on rate and accuracy of identifying symbols on an 8-location overlay through the use of 3 color conditions (same, mixed and unique). Sixty typically developing preschool children were exposed to two different sequential exposures (Set 1 and Set 2). Participants searched for a target stimulus (either meaningful symbols or arbitrary forms) in a stimuli array. Findings indicated that the sequential exposures (orderings) impacted both time and accuracy for both types of symbols within specific instances.
The need for sustained and integrated high-resolution mapping of dynamic coastal environments
Stockdon, Hilary F.; Lillycrop, Jeff W.; Howd, Peter A.; Wozencraft, Jennifer M.
2007-01-01
The United States' coastal zone is dynamic, evolving in response to both human activities and natural processes. Protecting coastal resources and populations requires a detailed understanding of the physical setting as well as of the processes driving change. Sustained coastal mapping allows change to be documented and baseline conditions to be established, and, in conjunction with physical process models, future behavior to be predicted. Recent advances in mapping technology, such as hyperspectral imagers and airborne lidars, allow rapid collection of national-scale land use information and high-resolution elevation data. Coastal hazard risk evaluation depends critically on these rich data sets. Coastal elevation data, for example, are a fundamental storm surge model input for predicting flooding locations, and land use maps are the foundation for identifying the most vulnerable populations and resources. A comprehensive national coastal mapping plan, designed to take advantage of recent progress in mapping technology and in data collection, management, and distribution, would provide a wealth of information for studying physical change processes, for managing and protecting coastal resources and communities, and for determining coastal hazard vulnerability.
Direct versus indirect processing changes the influence of color in natural scene categorization.
Otsuka, Sachio; Kawaguchi, Jun
2009-10-01
Using a negative priming (NP) paradigm, we examined whether participants would categorize color and grayscale images of natural scenes that were presented peripherally and were ignored. We focused on (1) attentional resources allocated to natural scenes and (2) direct versus indirect processing of them. We set up low and high attention-load conditions, based on the set size of the searched stimuli in the prime display (one and five). Participants were required to detect and categorize the target objects in natural scenes in a central visual search task, ignoring peripheral natural images in both the prime and probe displays. The results showed that, irrespective of attention load, NP was observed for color scenes but not for grayscale scenes. We did not observe any effect of color information in central visual search, where participants responded directly to natural scenes. These results indicate that, in a situation in which participants indirectly process natural scenes, color information is critical to object categorization, but when the scenes are processed directly, color information does not contribute to categorization.
Fayolle-Guichard, Françoise; Lombard, Vincent; Hébert, Agnès; Coutinho, Pedro M.; Groppi, Alexis; Barre, Aurélien; Henrissat, Bernard
2016-01-01
Cost-effective biofuel production from lignocellulosic biomass depends on efficient degradation of the plant cell wall. One of the major obstacles for the development of a cost-efficient process is the lack of resistance of currently used fungal enzymes to harsh conditions such as high temperature. Adapted, thermophilic microbial communities provide a huge reservoir of potentially interesting lignocellulose-degrading enzymes for improvement of the cellulose hydrolysis step. In order to identify such enzymes, a leaf and wood chip compost was enriched on a mixture of thermo-chemically pretreated wheat straw, poplar and Miscanthus under thermophile conditions, but in two different set-ups. Unexpectedly, metagenome sequencing revealed that incubation of the lignocellulosic substrate with compost as inoculum in a suspension culture resulted in an impoverishment of putative cellulase- and hemicellulase-encoding genes. However, mimicking composting conditions without liquid phase yielded a high number and diversity of glycoside hydrolase genes and an enrichment of genes encoding cellulose binding domains. These identified genes were most closely related to species from Actinobacteria, which seem to constitute important players of lignocellulose degradation under the applied conditions. The study highlights that subtle changes in an enrichment set-up can have an important impact on composition and functions of the microcosm. Composting-like conditions were found to be the most successful method for enrichment in species with high biomass degrading capacity. PMID:27936240
Memory for light as a quantum process.
Lobino, M; Kupchak, C; Figueroa, E; Lvovsky, A I
2009-05-22
We report complete characterization of an optical memory based on electromagnetically induced transparency. We recover the superoperator associated with the memory, under two different working conditions, by means of a quantum process tomography technique that involves storage of coherent states and their characterization upon retrieval. In this way, we can predict the quantum state retrieved from the memory for any input, for example, the squeezed vacuum or the Fock state. We employ the acquired superoperator to verify the nonclassicality benchmark for the storage of a Gaussian distributed set of coherent states.
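As a generic illustration of how a reconstructed process lets one predict the retrieved state for an arbitrary input, the sketch below applies a Kraus-operator representation of a toy loss channel to an input density matrix. The channel and efficiency value are stand-ins, not the reconstructed memory superoperator from the experiment.

```python
import numpy as np

def apply_channel(rho, kraus_ops):
    """rho_out = sum_k K rho K^dagger for a completely positive map."""
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

# Toy stand-in channel: amplitude damping on a qubit-like subspace
eta = 0.7                                   # assumed memory efficiency
K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(eta)]])
K1 = np.array([[0.0, np.sqrt(1.0 - eta)], [0.0, 0.0]])

# Input: superposition state |psi> = (|0> + |1>)/sqrt(2)
psi = np.array([1.0, 1.0]) / np.sqrt(2.0)
rho_in = np.outer(psi, psi.conj())

rho_out = apply_channel(rho_in, [K0, K1])
print(np.round(rho_out, 3), np.trace(rho_out))   # predicted retrieved state
```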
An fMRI study of semantic processing in men with schizophrenia.
Kubicki, M; McCarley, R W; Nestor, P G; Huh, T; Kikinis, R; Shenton, M E; Wible, C G
2003-12-01
As a means toward understanding the neural bases of schizophrenic thought disturbance, we examined brain activation patterns in response to semantically and superficially encoded words in patients with schizophrenia. Nine male schizophrenic and 9 male control subjects were tested in a visual levels of processing (LOP) task first outside the magnet and then during the fMRI scanning procedures (using a different set of words). During the experiments visual words were presented under two conditions. Under the deep, semantic encoding condition, subjects made semantic judgments as to whether the words were abstract or concrete. Under the shallow, nonsemantic encoding condition, subjects made perceptual judgments of the font size (uppercase/lowercase) of the presented words. After performance of the behavioral task, a recognition test was used to assess the depth of processing effect, defined as better performance for semantically encoded words than for perceptually encoded words. For the scanned version only, the words for both conditions were repeated in order to assess repetition-priming effects. Reaction times were assessed in both testing scenarios. Both groups showed the expected depth of processing effect for recognition, and control subjects showed the expected increased activation of the left inferior prefrontal cortex (LIPC) under semantic encoding relative to perceptual encoding conditions as well as repetition priming for semantic conditions only. In contrast, schizophrenics showed similar patterns of fMRI activation regardless of condition. Most striking in relation to controls, patients showed decreased LIFC activation concurrent with increased left superior temporal gyrus activation for semantic encoding versus shallow encoding. Furthermore, schizophrenia subjects did not show the repetition priming effect, either behaviorally or as a decrease in LIPC activity. In patients with schizophrenia, LIFC underactivation and left superior temporal gyrus overactivation for semantically encoded words may reflect a disease-related disruption of a distributed frontal temporal network that is engaged in the representation and processing of meaning of words, text, and discourse and which may underlie schizophrenic thought disturbance.
Wästlund, Erik; Shams, Poja; Otterbring, Tobias
2018-01-01
In visual marketing, the truism that "unseen is unsold" means that products that are not noticed will not be sold. This truism rests on the idea that the consumer choice process is heavily influenced by visual search. However, given that the majority of available products are not seen by consumers, this article examines the role of peripheral vision in guiding attention during the consumer choice process. In two eye-tracking studies, one conducted in a lab facility and the other conducted in a supermarket, the authors investigate the role and limitations of peripheral vision. The results show that peripheral vision is used to direct visual attention when discriminating between target and non-target objects in an eye-tracking laboratory. Target and non-target similarity, as well as visual saliency of non-targets, constitute the boundary conditions for this effect, which generalizes from instruction-based laboratory tasks to preference-based choice tasks in a real supermarket setting. Thus, peripheral vision helps customers to devote a larger share of attention to relevant products during the consumer choice process. Taken together, the results show how the creation of consideration sets (sets of possible choice options) relies on both goal-directed attention and peripheral vision. These results could explain how visually similar packaging positively influences market leaders, while making novel brands almost invisible on supermarket shelves. The findings show that even though unsold products might be unseen, in the sense that they have not been directly observed, they might still have been evaluated and excluded by means of peripheral vision. This article is based on controlled lab experiments as well as a field study conducted in a complex retail environment. Thus, the findings are valid both under controlled and ecologically valid conditions. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
A comparison between atmospheric/humidity and vacuum cyanoacrylate fuming of latent fingermarks.
Farrugia, Kevin J; Fraser, Joanna; Friel, Lauren; Adams, Duncan; Attard-Montalto, Nicola; Deacon, Paul
2015-12-01
A number of pseudo-operational trials were set up to compare the atmospheric/humidity and vacuum cyanoacrylate fuming processes on plastic carrier bags. The fuming processes were compared using two-step cyanoacrylate fuming with basic yellow 40 (BY40) staining and a one-step fluorescent cyanoacrylate fuming, Lumicyano 4%. Preliminary work using planted fingermarks and split depletions were performed to identify the optimum vacuum fuming conditions. The first pseudo-operational trial compared the different fuming conditions (atmospheric/humidity vs. vacuum) for the two-step process where an additional 50% more marks were detected with the atmospheric/humidity process. None of the marks by the vacuum process could be observed visually; however, a significant number of marks were detected by fluorescence after BY40 staining. The second trial repeated the same work in trial 1 using the one-step cyanoacrylate process, Lumicyano at a concentration of 4%. Trial 2 provided comparable results to trial 1 and all the items were then re-treated with Lumicyano 4% at atmospheric/humidity conditions before dyeing with BY40 to provide the sequences of process A (Lumicyano 4% atmospheric-Lumicyano 4% atmospheric-BY40) and process B (Lumicyano 4% vacuum-Lumicyano 4% atmospheric-BY40). The number of marks (visual and fluorescent) was counted after each treatment with a substantial increase in the number of detected marks in the second and third treatments of the process. The increased detection rate after the double Lumicyano process was unexpected and may have important implications. Trial 3 was performed to investigate whether the amount of cyanoacrylate and/or fuming time had an impact on the results observed in trial 2 whereas trial 4 assessed if the double process using conventional cyanoacrylate, rather than Lumicyano 4%, provided an increased detection rate. Trials 3 and 4 confirmed that doubling the amount of Lumicyano 4% cyanoacrylate and fuming time produced a lower detection rate than the double process with Lumicyano 4%. Furthermore, the double process with conventional cyanoacrylate did not provide any benefit. Scanning electron microscopy was also performed to investigate the morphology of the cyanoacrylate polymer under different conditions. The atmospheric/humidity process appears to be superior to the vacuum process for both the two-step and one-step cyanoacrylate fuming, although the two-step process performed better in comparison to the one-step process under vacuum conditions. Nonetheless, the use of vacuum cyanoacrylate fuming may have certain operational advantages and its use does not adversely affect subsequent cyanoacrylate fuming with atmospheric/humidity conditions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Combined comfort model of thermal comfort and air quality on buses in Hong Kong.
Shek, Ka Wing; Chan, Wai Tin
2008-01-25
Air-conditioning settings are important factors in controlling the comfort of passengers on buses. The local bus operators control in-bus air quality and thermal environment by conforming to the prescribed levels stated in published standards. As a result, the settings are merely adjusted to fulfill the standards, rather than to satisfy the passengers' thermal comfort and air quality. Such "standard-oriented" practices are not appropriate; the passengers' preferences and satisfaction should be emphasized instead. Thus a "comfort-oriented" philosophy should be implemented to achieve a comfortable in-bus commuting environment. In this study, the achievement of a comfortable in-bus environment was examined with emphasis on thermal comfort and air quality. Both the measurement of physical parameters and subjective questionnaire surveys were conducted to collect practical in-bus thermal and air parameters data, as well as subjective satisfaction and sensation votes from the passengers. By analyzing the correlation between the objective and subjective data, a combined comfort model was developed. The model helped in evaluating the percentage of dissatisfaction under various combinations of passengers' sensation votes towards thermal comfort and air quality. An effective approach integrating the combined comfort model, hardware and software systems, and the bus air-conditioning system could effectively control the transient in-bus environment. By processing and analyzing the data from the continuous monitoring system with the combined comfort model, air-conditioning setting adjustment commands could be determined and delivered to the hardware. This system adjusted air-conditioning settings depending on real-time commands along the bus journey. Therefore, a comfortable in-bus air quality and thermal environment could be achieved and efficiently maintained along the bus journey despite dynamic outdoor influences. Moreover, this model can help optimize air-conditioning control by striking a beneficial balance between energy conservation and passengers' satisfaction level.
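The combined comfort model itself is empirical and not reproduced in the abstract; the following sketch only illustrates the control idea described, mapping thermal and air-quality sensation votes to a predicted percentage of dissatisfied passengers and nudging the temperature set-point accordingly. The weights, scale and adjustment rule are assumptions.

```python
def percent_dissatisfied(thermal_vote, air_quality_vote,
                         w_thermal=0.6, w_air=0.4):
    """Toy combined dissatisfaction (%) from mean sensation votes on a
    -3..+3 scale; weights are illustrative, not the fitted model."""
    return 100.0 * min(1.0, (w_thermal * abs(thermal_vote)
                             + w_air * abs(air_quality_vote)) / 3.0)

def adjust_setpoint(current_setpoint_c, thermal_vote, air_quality_vote,
                    gain=0.5, limit_pct=20.0):
    """If predicted dissatisfaction exceeds the limit, shift the cooling
    set-point against the sign of the thermal vote (too warm -> cool more)."""
    if percent_dissatisfied(thermal_vote, air_quality_vote) > limit_pct:
        return current_setpoint_c - gain * thermal_vote
    return current_setpoint_c

print(adjust_setpoint(24.0, thermal_vote=1.5, air_quality_vote=0.5))
```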
Kumar, Gulshan; Rattan, Usha Kumari; Singh, Anil Kumar
2016-01-01
Winter dormancy is a well known mechanism adopted by temperate plants to mitigate the chilling temperature of winters. However, acquisition of sufficient chilling during winter dormancy ensures the normal phenological traits in the subsequent growing period. Thus, low temperature appears to play crucial roles in growth and development of temperate plants. Apple, being an important temperate fruit crop, also requires sufficient chilling to release winter dormancy and normal phenological traits, which are often associated with yield and quality of fruits. DNA cytosine methylation is one of the important epigenetic modifications which remarkably affect the gene expression during various developmental and adaptive processes. In the present study, methylation sensitive amplified polymorphism was employed to assess the changes in cytosine methylation during dormancy, active growth and fruit set in apple, under differential chilling conditions. Under high chill conditions, total methylation was decreased from 27.2% in dormant bud to 21.0% in fruit set stage, while no significant reduction was found under low chill conditions. Moreover, the demethylation was found to be decreased, while methylation increased from dormant bud to fruit set stage under low chill as compared to high chill conditions. In addition, RNA-Seq analysis showed high expression of DNA methyltransferases and histone methyltransferases during dormancy and fruit set, and low expression of DNA glycosylases during active growth under low chill conditions, which was in accordance with changes in methylation patterns. The RNA-Seq data of 47 genes associated with MSAP fragments involved in cellular metabolism, stress response, antioxidant system and transcriptional regulation showed correlation between methylation and their expression. Similarly, bisulfite sequencing and qRT-PCR analysis of selected genes also showed correlation between gene body methylation and gene expression. Moreover, significant association between chilling and methylation changes was observed, which suggested that chilling acquisition during dormancy in apple is likely to affect the epigenetic regulation through DNA methylation. PMID:26901339
McCreery, Ryan W; Stelmachowicz, Patricia G
2013-09-01
Understanding speech in acoustically degraded environments can place significant cognitive demands on school-age children who are developing the cognitive and linguistic skills needed to support this process. Previous studies suggest that speech understanding, word learning, and academic performance can be negatively impacted by background noise, but the effect of limited audibility on cognitive processes in children has not been directly studied. The aim of the present study was to evaluate the impact of limited audibility on speech understanding and working memory tasks in school-age children with normal hearing. Seventeen children with normal hearing between 6 and 12 years of age participated in the present study. Repetition of nonword consonant-vowel-consonant stimuli was measured under conditions with combinations of two different signal to noise ratios (SNRs; 3 and 9 dB) and two low-pass filter settings (3.2 and 5.6 kHz). Verbal processing time was calculated based on the time from the onset of the stimulus to the onset of the child's response. Monosyllabic word repetition and recall were also measured in conditions with a full bandwidth and 5.6 kHz low-pass cutoff. Nonword repetition scores decreased as audibility decreased. Verbal processing time increased as audibility decreased, consistent with predictions based on increased listening effort. Although monosyllabic word repetition did not vary between the full bandwidth and 5.6 kHz low-pass filter condition, recall was significantly poorer in the condition with limited bandwidth (low pass at 5.6 kHz). Age and expressive language scores predicted performance on word recall tasks, but did not predict nonword repetition accuracy or verbal processing time. Decreased audibility was associated with reduced accuracy for nonword repetition and increased verbal processing time in children with normal hearing. Deficits in free recall were observed even under conditions where word repetition was not affected. The negative effects of reduced audibility may occur even under conditions where speech repetition is not impacted. Limited stimulus audibility may result in greater cognitive effort for verbal rehearsal in working memory and may limit the availability of cognitive resources to allocate to working memory and other processes.
Multi-Mission Automated Task Invocation Subsystem
NASA Technical Reports Server (NTRS)
Cheng, Cecilia S.; Patel, Rajesh R.; Sayfi, Elias M.; Lee, Hyun H.
2009-01-01
Multi-Mission Automated Task Invocation Subsystem (MATIS) is software that establishes a distributed data-processing framework for automated generation of instrument data products from a spacecraft mission. Each mission may set up a set of MATIS servers for processing its data products. MATIS embodies lessons learned in experience with prior instrument- data-product-generation software. MATIS is an event-driven workflow manager that interprets project-specific, user-defined rules for managing processes. It executes programs in response to specific events under specific conditions according to the rules. Because requirements of different missions are too diverse to be satisfied by one program, MATIS accommodates plug-in programs. MATIS is flexible in that users can control such processing parameters as how many pipelines to run and on which computing machines to run them. MATIS has a fail-safe capability. At each step, MATIS captures and retains pertinent information needed to complete the step and start the next step. In the event of a restart, this information is retrieved so that processing can be resumed appropriately. At this writing, it is planned to develop a graphical user interface (GUI) for monitoring and controlling a product generation engine in MATIS. The GUI would enable users to schedule multiple processes and manage the data products produced in the processes. Although MATIS was initially designed for instrument data product generation,
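MATIS's own rule syntax and APIs are not given here; the sketch below is a hypothetical miniature of the pattern the abstract describes, an event-driven manager that runs plug-in programs when an event satisfies a rule's condition and checkpoints state after each step so that processing can resume after a restart. The file name, rule names and commands are invented.

```python
import json, subprocess, pathlib

CHECKPOINT = pathlib.Path("matis_like_checkpoint.json")   # hypothetical path

# Project-specific, user-defined rules: event name -> (condition, command)
RULES = {
    "raw_file_arrived": (lambda e: e.get("size", 0) > 0,
                         ["echo", "generate-level1-product"]),   # plug-in stand-in
    "level1_done":      (lambda e: e.get("status") == "ok",
                         ["echo", "generate-level2-product"]),
}

def load_state():
    return json.loads(CHECKPOINT.read_text()) if CHECKPOINT.exists() else {"done": []}

def handle(event):
    state = load_state()
    name = event["name"]
    if name in state["done"]:
        return                                # already processed before a restart
    cond, cmd = RULES[name]
    if cond(event):
        subprocess.run(cmd, check=True)       # invoke the plug-in program
    state["done"].append(name)
    CHECKPOINT.write_text(json.dumps(state))  # fail-safe checkpoint for resumption

handle({"name": "raw_file_arrived", "size": 1024})
handle({"name": "level1_done", "status": "ok"})
```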
NASA Astrophysics Data System (ADS)
Hong, X.; Reynolds, C. A.; Doyle, J. D.
2016-12-01
In this study, two sets of monthly forecasts for the period of the Dynamics of the Madden-Julian Oscillation (MJO)/Cooperative Indian Ocean Experiment on Intraseasonal Variability (DYNAMO/CINDY) field campaign in November 2011 are examined. Each set includes three forecasts, the first set from the Navy Global Environmental Model (NAVGEM) and the second set from the Navy's non-hydrostatic Coupled Ocean-Atmosphere Mesoscale Prediction System (COAMPS®1). The three NAVGEM monthly forecasts used sea surface temperature (SST) persisted from the initial time, from the Navy Coupled Ocean Data Assimilation (NCODA) analysis, and from coupled NAVGEM-Hybrid Coordinate Ocean Model (HYCOM) forecasts. NAVGEM can predict the MJO at 20-day lead time using SST from the analysis and from coupled NAVGEM-HYCOM, but cannot predict the MJO using the persisted SST, for which a clear circumnavigating signal is absent. The three NAVGEM monthly forecasts are then applied as lateral boundary conditions for three COAMPS monthly forecasts. The results show that all COAMPS runs, including the run using lateral boundary conditions from the NAVGEM forecast without the MJO signal, can predict the MJO. The vertically integrated moisture anomaly and the 850-hPa wind anomaly in all COAMPS runs indicate strong anomalous equatorial easterlies associated with a Rossby wave prior to MJO initiation. Strong surface heat fluxes and turbulence kinetic energy promote convective instability and trigger anomalous ascending motion, which deepens the moist boundary layer and develops deep convection into the upper troposphere to form the MJO phase. The results suggest that the air-sea interaction process is important for the initiation and development of the MJO. 1COAMPS® is a registered trademark of the Naval Research Laboratory
Britt, Allison E.; Ferrara, Casey; Mirman, Daniel
2016-01-01
Producing a word requires selecting among a set of similar alternatives. When many semantically related items become activated, the difficulty of the selection process is increased. Experiment 1 tested naming of items with either multiple synonymous labels (“Alternate Names,” e.g., gift/present) or closely semantically related but non-equivalent responses (“Near Semantic Neighbors,” e.g., jam/jelly). Picture naming was fastest and most accurate for pictures with only one label (“High Name Agreement”), slower and less accurate in the Alternate Names condition, and slowest and least accurate in the Near Semantic Neighbors condition. These results suggest that selection mechanisms in picture naming operate at two distinct levels of processing: selecting between similar but non-equivalent names requires two selection processes (semantic and lexical), whereas selecting among equivalent names only requires one selection at the lexical level. Experiment 2 examined how these selection mechanisms are affected by normal aging and found that older adults had significantly more difficulty in the Near Semantic Neighbors condition, but not in the Alternate Names condition. This suggests that aging affects semantic processing and selection more strongly than it affects lexical selection. Experiment 3 examined the role of the left inferior frontal gyrus (LIFG) in these selection processes by testing individuals with aphasia secondary to stroke lesions that either affected the LIFG or spared it. Surprisingly, there was no interaction between condition and lesion group: the presence of LIFG damage was not associated with substantively worse naming performance for pictures with multiple acceptable labels. These results are not consistent with a simple view of LIFG as the locus of lexical selection and suggest a more nuanced view of the neural basis of lexical and semantic selection. PMID:27458393
Effort in Multitasking: Local and Global Assessment of Effort.
Kiesel, Andrea; Dignath, David
2017-01-01
When performing multiple tasks in succession, self-organization of task order might be superior compared to external-controlled task schedules, because self-organization allows optimizing processing modes and thus reduces switch costs, and it increases commitment to task goals. However, self-organization is an additional executive control process that is not required if task order is externally specified and as such it is considered as time-consuming and effortful. To compare self-organized and externally controlled task scheduling, we suggest assessing global subjective and objectives measures of effort in addition to local performance measures. In our new experimental approach, we combined characteristics of dual tasking settings and task switching settings and compared local and global measures of effort in a condition with free choice of task sequence and a condition with cued task sequence. In a multi-tasking environment, participants chose the task order while the task requirement of the not-yet-performed task remained the same. This task preview allowed participants to work on the previously non-chosen items in parallel and resulted in faster responses and fewer errors in task switch trials than in task repetition trials. The free-choice group profited more from this task preview than the cued group when considering local performance measures. Nevertheless, the free-choice group invested more effort than the cued group when considering global measures. Thus, self-organization in task scheduling seems to be effortful even in conditions in which it is beneficiary for task processing. In a second experiment, we reduced the possibility of task preview for the not-yet-performed tasks in order to hinder efficient self-organization. Here neither local nor global measures revealed substantial differences between the free-choice and a cued task sequence condition. Based on the results of both experiments, we suggest that global assessment of effort in addition to local performance measures might be a useful tool for multitasking research.
Ruiz, Daniel; Cerón, Viviana; Molina, Adriana M; Quiñónes, Martha L; Jiménez, Mónica M; Ahumada, Martha; Gutiérrez, Patricia; Osorio, Salua; Mantilla, Gilma; Connor, Stephen J; Thomson, Madeleine C
2014-07-01
As part of the Integrated National Adaptation Pilot project and the Integrated Surveillance and Control System, the Colombian National Institute of Health is working on the design and implementation of a Malaria Early Warning System framework, supported by seasonal climate forecasting capabilities, weather and environmental monitoring, and malaria statistical and dynamic models. In this report, we provide an overview of the local ecoepidemiologic settings where four malaria process-based mathematical models are currently being implemented at a municipal level. The description includes general characteristics, malaria situation (predominant type of infection, malaria-positive cases data, malaria incidence, and seasonality), entomologic conditions (primary and secondary vectors, mosquito densities, and feeding frequencies), climatic conditions (climatology and long-term trends), key drivers of epidemic outbreaks, and non-climatic factors (populations at risk, control campaigns, and socioeconomic conditions). Selected pilot sites exhibit different ecoepidemiologic settings that must be taken into account in the development of the integrated surveillance and control system. © The American Society of Tropical Medicine and Hygiene.
NASA Astrophysics Data System (ADS)
Wang, Yuan; Wu, Rongsheng
2001-12-01
A theoretical argument for the so-called suitable spatial boundary condition is developed with the aid of a homotopy framework to demonstrate that the proposed boundary condition ensures that over-specification of boundary conditions, which arises from an adjoint model on a limited area, is no longer an issue, while preserving well-posedness and an optimal character in the boundary setting. The ill-posedness of over-specified spatial boundary conditions is, in a sense, inevitable in an adjoint model, since data assimilation processes have to adapt prescribed observations that are over-specified at the spatial boundaries of the modeling domain. From the viewpoint of pragmatic implementation, the theoretical framework of the proposed condition for spatial boundaries can indeed be reduced to a hybrid formulation of a nudging filter, a radiation condition taking account of ambient forcing, and a Dirichlet-type boundary condition compatible with the observations prescribed in the data assimilation procedure. All of these treatments, no doubt, are very familiar to mesoscale modelers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2009-09-01
The on-phone software captures images from the CMOS camera periodically, stores the pictures, and periodically transmits those images over the cellular network to the server. The cell phone software consists of several modules: CamTest.cpp, CamStarter.cpp, StreamIOHandler.cpp, and covertSmartDevice.cpp. The camera application on the SmartPhone is CamStarter, which is "the" user interface for the camera system. The CamStarter user interface allows a user to start/stop the camera application and transfer files to the server. The CamStarter application interfaces to the CamTest application through registry settings. Both the CamStarter and CamTest applications must be separately deployed on the smartphone to run the camera system application. When a user selects the Start button in CamStarter, CamTest is created as a process. The smartphone begins taking small pictures (CAPTURE mode), analyzing those pictures for certain conditions, and saving those pictures on the smartphone. This process will terminate when the user selects the Stop button. The CamTest code spins off an asynchronous thread, StreamIOHandler, to check for pictures taken by the camera. The received image is then tested by StreamIOHandler to see if it meets certain conditions. If those conditions are met, the CamTest program is notified through the setting of a registry key value and the image is saved in a designated directory in a custom BMP file which includes a header and the image data. When the user selects the Transfer button in the CamStarter user interface, the covertSmartDevice code is created as a process. CovertSmartDevice gets all of the files in a designated directory, opens a socket connection to the server, sends each file, and then terminates.
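The actual implementation is Windows Mobile C++; the following platform-agnostic sketch only mirrors the flow described above: periodic capture, a condition test, local storage, and transfer of the stored files over a socket. The directory, condition, server address and the capture callback are placeholders, not the real application's interfaces.

```python
import glob, os, socket, time

def condition_met(image_bytes):
    """Placeholder for the image test that StreamIOHandler performs."""
    return len(image_bytes) > 0

def capture_loop(capture, out_dir="images", period_s=5.0, frames=3):
    """Periodically capture, test, and store images (CAPTURE-mode stand-in)."""
    os.makedirs(out_dir, exist_ok=True)
    for i in range(frames):
        img = capture()                      # stand-in for the CMOS camera read
        if condition_met(img):
            with open(os.path.join(out_dir, f"frame_{i}.bmp"), "wb") as f:
                f.write(img)                 # custom BMP: header plus image data
        time.sleep(period_s)

def transfer(server=("192.0.2.1", 9000), out_dir="images"):
    """Send every stored image to the server over one socket connection."""
    with socket.create_connection(server) as sock:
        for path in sorted(glob.glob(os.path.join(out_dir, "*.bmp"))):
            with open(path, "rb") as f:
                sock.sendall(f.read())
```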
The Condition for Generous Trust
Shinya, Obayashi; Yusuke, Inagaki; Hiroki, Takikawa
2016-01-01
Trust has been considered the “cement” of a society and is much studied in sociology and other social sciences. Most studies, however, have neglected one important aspect of trust: it involves an act of forgiving and showing tolerance toward another’s failure. In this study, we refer to this concept as “generous trust” and examine the conditions under which generous trust becomes a more viable option when compared to other types of trust. We investigate two settings. First, we introduce two types of uncertainties: uncertainty as to whether trustees have the intention to cooperate, and uncertainty as to whether trustees have enough competence to accomplish the entrusted tasks. Second, we examine the manner in which trust functions in a broader social context, one that involves matching and commitment processes. Since we expect generosity or forgiveness to work differently in the matching and commitment processes, we must differentiate trust strategies into generous trust in the matching process and that in the commitment process. Our analytical strategy is two-fold. First, we analyze the “modified” trust game that incorporates the two types of uncertainties without the matching process. This simplified setting enables us to derive mathematical results using game theory, thereby giving basic insight into the trust mechanism. Second, we investigate socially embedded trust relationships in contexts involving the matching and commitment processes, using agent-based simulation. Results show that uncertainty about partner’s intention and competence makes generous trust a viable option. In contrast, too much uncertainty undermines the possibility of generous trust. Furthermore, a strategy that is too generous cannot stand alone. Generosity should be accompanied with moderate punishment. As for socially embedded trust relationships, generosity functions differently in the matching process versus the commitment process. Indeed, these two types of generous trust coexist, and their coexistence enables a society to function well. PMID:27893759
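The paper's game-theoretic and agent-based models are not specified in the abstract; this is a minimal sketch of the core idea only: a repeated trust game in which a trustee may fail either through intention or through incompetence, and a "generous" truster forgives a limited number of consecutive failures before withdrawing trust. All probabilities and payoffs are hypothetical.

```python
import random

def play(rounds=50, p_honest=0.8, p_competent=0.7, tolerance=2, seed=1):
    """Return the truster's cumulative payoff under generous trust:
    keep trusting until 'tolerance' consecutive failures are observed."""
    rng = random.Random(seed)
    payoff, failures = 0, 0
    for _ in range(rounds):
        if failures >= tolerance:          # trust withdrawn, no further exchange
            continue
        cooperates = rng.random() < p_honest and rng.random() < p_competent
        if cooperates:
            payoff += 3                    # gain from a successful exchange
            failures = 0                   # forgiveness resets the count
        else:
            payoff -= 1                    # loss from a failed exchange
            failures += 1
    return payoff

print(play(tolerance=2), play(tolerance=0))   # generous vs. ungenerous truster
```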
A non-linear dynamical approach to belief revision in cognitive behavioral therapy
Kronemyer, David; Bystritsky, Alexander
2014-01-01
Belief revision is the key change mechanism underlying the psychological intervention known as cognitive behavioral therapy (CBT). It both motivates and reinforces new behavior. In this review we analyze and apply a novel approach to this process based on AGM theory of belief revision, named after its proponents, Carlos Alchourrón, Peter Gärdenfors and David Makinson. AGM is a set-theoretical model. We reconceptualize it as describing a non-linear, dynamical system that occurs within a semantic space, which can be represented as a phase plane comprising all of the brain's attentional, cognitive, affective and physiological resources. Triggering events, such as anxiety-producing or depressing situations in the real world, or their imaginal equivalents, mobilize these assets so they converge on an equilibrium point. A preference function then evaluates and integrates evidentiary data associated with individual beliefs, selecting some of them and comprising them into a belief set, which is a metastable state. Belief sets evolve in time from one metastable state to another. In the phase space, this evolution creates a heteroclinic channel. AGM regulates this process and characterizes the outcome at each equilibrium point. Its objective is to define the necessary and sufficient conditions for belief revision by simultaneously minimizing the set of new beliefs that have to be adopted, and the set of old beliefs that have to be discarded or reformulated. Using AGM, belief revision can be modeled using three (and only three) fundamental syntactical operations performed on belief sets, which are expansion; revision; and contraction. Expansion is like adding a new belief without changing any old ones. Revision is like adding a new belief and changing old, inconsistent ones. Contraction is like changing an old belief without adding any new ones. We provide operationalized examples of this process in action. PMID:24860491
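As a concrete (and deliberately simplified) illustration of the three AGM operations the review describes, the sketch below treats a belief set as a plain Python set of propositional literals, with negation marked by a "~" prefix. Real AGM operates on logically closed theories with a preference ordering, so this is only a toy; the example beliefs are invented.

```python
def negate(p):
    return p[1:] if p.startswith("~") else "~" + p

def expand(beliefs, p):
    """Expansion: add p without removing anything."""
    return beliefs | {p}

def contract(beliefs, p):
    """Contraction: give up p (here, simply discard it)."""
    return beliefs - {p}

def revise(beliefs, p):
    """Revision via the Levi identity: contract by ~p, then expand by p."""
    return expand(contract(beliefs, negate(p)), p)

beliefs = {"dog_is_dangerous", "avoid_dogs"}
beliefs = revise(beliefs, "~dog_is_dangerous")   # new evidence from exposure
beliefs = contract(beliefs, "avoid_dogs")        # give up the avoidance belief
print(beliefs)
```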
NASA Astrophysics Data System (ADS)
Astarita, Antonello; Boccarusso, Luca; Durante, Massimo; Viscusi, Antonio; Sansone, Raffaele; Carrino, Luigi
2018-02-01
The deposition of a metallic coating on hemp-PLA (polylactic acid) laminate through the cold spray technique was studied in this paper. A number of different combinations of the deposition parameters were tested to investigate the feasibility of the process. The feasibility of the process was proved when the processing conditions were properly set. The bonding mechanism between the substrate and the first layer of particles was studied through scanning electron microscope observations, and it was found that the polymeric matrix experiences a huge plastic deformation to accommodate the impinging particles; conversely, a different mechanism was observed when metallic powders impact against a previously deposited metallic layer. The difference between the bonding mechanism and the growth of the coating was also highlighted. Depending on the spraying parameters, four different processing conditions were highlighted and discussed, and as a result the processing window was defined. The mechanical properties of the composite panel before and after the deposition were also investigated. The experiments showed that when properly carried out, the deposition process does not affect the strength of the panel; moreover, no improvements were observed because the contribution of the coating is negligible with respect to that of the reinforcement fibers.
Advanced multivariable control of a turboexpander plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altena, D.; Howard, M.; Bullin, K.
1998-12-31
This paper describes an application of advanced multivariable control on a natural gas plant and compares its performance to the previous conventional feed-back control. This control algorithm utilizes simple models from existing plant data and/or plant tests to hold the process at the desired operating point in the presence of disturbances and changes in operating conditions. The control software is able to accomplish this due to effective handling of process variable interaction, constraint avoidance and feed-forward of measured disturbances. The economic benefit of improved control lies in operating closer to the process constraints while avoiding significant violations. The South Texas facility where this controller was implemented experienced reduced variability in process conditions which increased liquids recovery because the plant was able to operate much closer to the customer specified impurity constraint. An additional benefit of this implementation of multivariable control is the ability to set performance criteria beyond simple setpoints, including process variable constraints, relative variable merit and optimizing use of manipulated variables. The paper also details the control scheme applied to the complex turboexpander process and some of the safety features included to improve reliability.
Optimization Control of the Color-Coating Production Process for Model Uncertainty
He, Dakuo; Wang, Zhengsong; Yang, Le; Mao, Zhizhong
2016-01-01
Optimized control of the color-coating production process (CCPP) aims at reducing production costs and improving economic efficiency while meeting quality requirements. However, because optimization control of the CCPP is hampered by model uncertainty, a strategy that considers model uncertainty is proposed. Previous work has introduced a mechanistic model of CCPP based on process analysis to simulate the actual production process and generate process data. The partial least squares method is then applied to develop predictive models of film thickness and economic efficiency. To manage the model uncertainty, the robust optimization approach is introduced to improve the feasibility of the optimized solution. Iterative learning control is then utilized to further refine the model uncertainty. The constrained film thickness is transformed into one of the tracked targets to overcome the drawback that traditional iterative learning control cannot address constraints. The goal setting of economic efficiency is updated continuously according to the film thickness setting until this reaches its desired value. Finally, fuzzy parameter adjustment is adopted to ensure that the economic efficiency and film thickness converge rapidly to their optimized values under the constraint conditions. The effectiveness of the proposed optimization control strategy is validated by simulation results. PMID:27247563
High throughput electrospinning of high-quality nanofibers via an aluminum disk spinneret
NASA Astrophysics Data System (ADS)
Zheng, Guokuo
In this work, a simple and efficient needleless high throughput electrospinning process using an aluminum disk spinneret with 24 holes is described. Electrospun mats produced by this setup consisted of fine fibers (nano-sized) of the highest quality while the productivity (yield) was many times that obtained from conventional single-needle electrospinning. The goal was to produce scaled-up amounts of nanofibers of the same or better quality than those produced with the single-needle lab setting, under varying concentration, voltage, and working distance. The fiber mats produced were either polymer or ceramic (such as molybdenum trioxide nanofibers). Through experimentation, the optimum process conditions were defined to be an applied voltage of 24 kV and a distance to the collector of 15 cm. More diluted solutions resulted in smaller diameter fibers. Comparing the morphologies of the MoO3 nanofibers produced by both the traditional and the high throughput setups, it was found that they were very similar. Moreover, the nanofiber production rate is nearly 10 times that of traditional needle electrospinning. Thus, the high throughput process has the potential to become an industrial nanomanufacturing process and the materials processed by it may be used as filtration devices, in tissue engineering, and as sensors.
The implementation of a postoperative care process on a neurosurgical unit.
Douglas, Mary; Rowed, Sheila
2005-12-01
The postoperative phase is a critical time for any neurosurgical patient. Historically, certain patients having neurosurgical procedures, such as craniotomies and other more complex surgeries, have been nursed postoperatively in the intensive care unit (ICU) for an overnight stay, prior to transfer to a neurosurgical floor. At the Hospital for Sick Children in Toronto, because of challenges with access to ICU beds and the cancellation of surgeries because of lack of available nurses for the ICU setting, this practice was reexamined. A set of criteria was developed to identify which postoperative patients should come directly to the neurosurgical unit immediately following their anesthetic recovery. The criteria were based on patient diagnosis, preoperative condition, comorbidities, the surgical procedure, intraoperative complications, and postoperative status. A detailed process was then outlined that allowed the most appropriate patients to be selected for this pathway, to ensure patient safety. Included in this process was a postoperative protocol addressing details such as standard physician orders and the levels of monitoring required. Outcomes of this new process include fewer surgical cancellations for patients and families, equally safe or better patient care, and the conservation of limited ICU resources. The program has currently been expanded to include patients who have undergone endovascular therapies.
A Novel Scale Up Model for Prediction of Pharmaceutical Film Coating Process Parameters.
Suzuki, Yasuhiro; Suzuki, Tatsuya; Minami, Hidemi; Terada, Katsuhide
2016-01-01
In the pharmaceutical tablet film coating process, we clarified that a difference in exhaust air relative humidity can be used to detect differences in process parameter values, that the relative humidity of the exhaust air differed under different atmospheric air humidity conditions even though all setting values of the manufacturing process parameters were the same, and that the water content of the tablets was correlated with the exhaust air relative humidity. Based on these experimental data, the exhaust air relative humidity index (EHI), an empirical equation whose functional parameters include the pan coater type, heated air flow rate, spray rate of the coating suspension, saturated water vapor pressure at the heated air temperature, and partial water vapor pressure at atmospheric air pressure, was developed. The predicted values of exhaust air relative humidity using the EHI were in good correlation with the experimental data (correlation coefficient of 0.966) in all datasets. The EHI was verified using the data of seven drug products of different manufacturing scales. The EHI model will support formulation researchers by enabling them to set film coating process parameters when the batch size or pan coater type changes, without the time and expense of further extensive testing.
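The abstract does not give the EHI equation itself, so the sketch below is not the authors' model. It is a generic psychrometric mass balance, included only to illustrate how the quantities listed above (heated air flow, spray rate, saturation vapor pressure, atmospheric water vapor pressure) combine to determine exhaust air relative humidity; the assumption of complete spray evaporation and all parameter names are simplifications of this example.

```python
# Hedged sketch: a generic psychrometric mass balance for estimating the
# exhaust air relative humidity of a pan coater. This is NOT the authors'
# EHI equation; complete evaporation of the sprayed water is assumed.
import math

def saturation_vapor_pressure_kpa(t_celsius: float) -> float:
    """Magnus approximation of the saturation water vapor pressure (kPa)."""
    return 0.6112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

def exhaust_relative_humidity(t_in_c, rh_in, t_out_c, spray_kg_min,
                              dry_air_kg_min, p_total_kpa=101.3):
    """Estimate exhaust RH (0-1), assuming all sprayed water evaporates."""
    e_in = rh_in * saturation_vapor_pressure_kpa(t_in_c)
    w_in = 0.622 * e_in / (p_total_kpa - e_in)      # kg water / kg dry air
    w_out = w_in + spray_kg_min / dry_air_kg_min    # add the evaporated spray
    e_out = w_out * p_total_kpa / (0.622 + w_out)   # exhaust vapor pressure
    return e_out / saturation_vapor_pressure_kpa(t_out_c)

if __name__ == "__main__":
    # Illustrative numbers: 60 C inlet air at 10% RH, 42 C exhaust,
    # 30 g/min of spray water, 70 kg/min of dry air.
    print(round(exhaust_relative_humidity(60, 0.10, 42, 0.030, 70), 3))
```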
Handwritten word preprocessing for database adaptation
NASA Astrophysics Data System (ADS)
Oprean, Cristina; Likforman-Sulem, Laurence; Mokbel, Chafic
2013-01-01
Handwriting recognition systems are typically trained using publicly available databases, where data have been collected in controlled conditions (image resolution, paper background, noise level, etc.). Since this is often not the case in real-world scenarios, classification performance can suffer when novel data are presented to the word recognition system. To overcome this problem, we present in this paper a new approach called database adaptation. It consists of processing one set (training or test) in order to adapt it to the other set (test or training, respectively). Specifically, two kinds of preprocessing, namely stroke thickness normalization and pixel intensity normalization, are considered. The advantage of such an approach is that we can re-use the existing recognition system trained on controlled data. We conduct several experiments with the Rimes 2011 word database and with a real-world database, adapting either the test set or the training set. Results show that training set adaptation achieves better results than test set adaptation, at the cost of a second training stage on the adapted data. Data set adaptation increases accuracy by 2% to 3% in absolute terms over no adaptation.
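As a hedged illustration of the second preprocessing step, pixel intensity normalization, the sketch below rescales a grayscale word image so that its intensity statistics match a target mean and standard deviation estimated from the other data set. This is an assumption-laden stand-in for the paper's exact procedure, not a reproduction of it.

```python
# Minimal sketch of pixel intensity normalization for database adaptation.
# The target statistics stand in for values estimated from the other data
# set (e.g., the controlled training database); names are illustrative.
import numpy as np

def intensity_normalize(img: np.ndarray, target_mean: float,
                        target_std: float) -> np.ndarray:
    """Match the mean/std of a grayscale image (0-255) to target statistics."""
    img = img.astype(np.float64)
    mean, std = img.mean(), img.std() + 1e-8
    out = (img - mean) / std * target_std + target_mean
    return np.clip(out, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    real_world = rng.normal(180, 20, size=(64, 256))      # bright, low-contrast page
    adapted = intensity_normalize(real_world, target_mean=128, target_std=60)
    # Statistics land close to the targets (clipping shifts them slightly).
    print(round(float(adapted.mean()), 1), round(float(adapted.std()), 1))
```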
Light field and water clarity simulation of natural environments in laboratory conditions
NASA Astrophysics Data System (ADS)
Pe'eri, Shachak; Shwaery, Glenn
2012-06-01
Simulation of natural oceanic conditions in a laboratory setting is a challenging task, especially when that environment can be miles away. We present an attempt to replicate the solar radiation expected at different latitudes with varying water clarity conditions up to 30 m in depth using a 2.5 m deep engineering tank at the University of New Hampshire. The goals of the study were: 1) to configure an underwater light source that produced an irradiance spectrum similar to natural daylight with the sun at zenith and at 60° under clear atmospheric conditions, and 2) to monitor water clarity as a function of depth. Irradiance was measured using a spectroradiometer with a cosine receiver to analyze the output spectrum of the submersed lamps as a function of distance. In addition, an underwater reflection method was developed to measure the diffuse attenuation coefficient in real time. Two water clarity types were characterized: clear waters representing deep, open-ocean conditions, and murky waters representing littoral environments. Results showed good correlation between the irradiance measured at 400 nm to 600 nm and the natural daylight spectrum at 3 m from the light source. This can be considered the reference for water-surface conditions. Using these methodologies in a controlled laboratory setting, we are able to replicate illumination and water conditions to study the physical, chemical and biological processes on natural and man-made objects and/or systems in simulated, varied geographic locations and environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Linwen; Université de Sherbrooke, Quebec; François, Raoul, E-mail: raoul.francois@insa-toulouse.fr
2015-01-15
This paper deals with corrosion initiation and propagation in pre-cracked reinforced concrete beams under sustained loading during exposure to a chloride environment. Specimen beams that were cast in 2010 were compared to specimens cast in 1984. The only differences between the two sets of beams were the casting direction in relation to tensile reinforcement and the exposure conditions in the salt-fog chamber. The cracking maps, corrosion maps, chloride profiles, and cross-sectional loss of one group of two beams cast in 2010 were studied and their calculated corrosion rates were compared to that of beams cast in 1984 in order to investigate the factors influencing the natural corrosion process. Experimental results show that, after rapid initiation of corrosion at the crack tip, the corrosion process practically halted and the time elapsing before corrosion resumed depended on the exposure conditions and cover depth.
Leguérinel, I; Couvert, O; Mafart, P
2007-02-28
Environmental conditions of sporulation influence bacterial heat resistance. For different Bacillus species, a linear Bigelow-type relationship between the logarithm of D values determined at constant heating temperature and the temperature of sporulation was observed. The absence of interaction between sporulation and heating temperatures allows this new relationship to be combined with the classical Bigelow model. The parameters zT and zT(spo) of this global model were fitted to different sets of data for different Bacillus species: B. cereus, B. subtilis, B. licheniformis, B. coagulans and B. stearothermophilus. The origin of raw products or the food process conditions before a heat treatment can lead to warm sporulation temperatures and to a dramatic increase in the heat resistance of the generated spores. In this case, provided that the temperature of sporulation can be assessed, this model can be easily implemented to correct F values for the possible increase in the thermal resistance of spores and to ensure sterilisation efficacy.
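One plausible way to write out the combined model described above (not necessarily the paper's exact parameterization) is a Bigelow term in the heating temperature plus a second log-linear term in the sporulation temperature, with no interaction. Here D* is the decimal reduction time at the reference heating temperature T* and reference sporulation temperature T*spo; the reference values are assumptions of this sketch.

```latex
% Combined Bigelow-type model sketched from the description above: D decreases
% log-linearly with heating temperature T and increases log-linearly with
% sporulation temperature T_spo, with no interaction term.
\log_{10} D(T, T_{\mathrm{spo}}) \;=\; \log_{10} D^{*}
  \;-\; \frac{T - T^{*}}{z_T}
  \;+\; \frac{T_{\mathrm{spo}} - T^{*}_{\mathrm{spo}}}{z_{T(\mathrm{spo})}}
```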
BOREAS AES READAC Surface Meteorological Data
NASA Technical Reports Server (NTRS)
Atkinson, G. Barrie; Funk, Barry; Hall, Forrest G. (Editor); Knapp, David E. (Editor)
2000-01-01
Canadian AES personnel collected and processed data related to surface atmospheric meteorological conditions over the BOREAS region. This data set contains 15-minute meteorological data from one READAC meteorology station in Hudson Bay, Saskatchewan. Parameters include day, time, type of report, sky condition, visibility, mean sea level pressure, temperature, dewpoint, wind, altimeter, opacity, minimum and maximum visibility, station pressure, minimum and maximum air temperature, a wind group, precipitation, and precipitation in the last hour. The data were collected non-continuously from 24-May-1994 to 20-Sep-1994. The data are provided in tabular ASCII files, and are classified as AFM-Staff data.
Why do providers contribute to disparities and what can be done about it?
Burgess, Diana J; Fu, Steven S; van Ryn, Michelle
2004-11-01
This paper applies social cognition research to understanding and ameliorating the provider contribution to racial/ethnic disparities in health care. We discuss how fundamental cognitive mechanisms such as automatic, unconscious processes (e.g., stereotyping) can help explain provider bias. Even well-intentioned providers who are motivated to be nonprejudiced may stereotype racial/ethnic minority members, particularly under conditions that diminish cognitive capacity. These conditions, including time pressure, fatigue, and information overload, are frequently found in health care settings. We conclude with implications of the social-cognitive perspective for developing interventions to reduce provider bias.
Bratzke, Lisa C; Muehrer, Rebecca J; Kehl, Karen A; Lee, Kyoung Suk; Ward, Earlise C; Kwekkeboom, Kristine L
2015-03-01
The purpose of this narrative review was to synthesize current research findings related to self-management, in order to better understand the processes of priority setting and decision-making among adults with multimorbidity. A narrative literature review was undertaken, synthesizing findings from published, peer-reviewed empirical studies that addressed priority setting and/or decision-making in self-management of multimorbidity. A search of PubMed, PsychINFO, CINAHL and SocIndex databases was conducted from database inception through December 2013. Reference lists from selected empirical studies and systematic reviews were evaluated to identify any additional relevant articles. The full texts of potentially eligible articles were reviewed, and articles were selected for inclusion if they described empirical studies that addressed priority setting or decision-making in self-management of multimorbidity among adults. Two independent reviewers read each selected article and extracted relevant data to an evidence table. Processes and factors of multimorbidity self-management were identified and sorted into categories of priority setting, decision-making, and facilitators/barriers. Thirteen articles were selected for inclusion; most were qualitative studies describing processes, facilitators, and barriers of multimorbidity self-management. The findings revealed that patients prioritize a dominant chronic illness and re-prioritize over time as conditions and treatments change; that multiple facilitators (e.g. support programs) and barriers (e.g. lack of financial resources) impact individuals' self-management priority setting and decision-making ability; as do individual beliefs, preferences, and attitudes (e.g., perceived personal control, preferences regarding treatment). Health care providers need to be cognizant that individuals with multimorbidity engage in day-to-day priority setting and decision-making among their multiple chronic illnesses and respective treatments. Researchers need to develop and test interventions that support day-to-day priority setting and decision-making and improve health outcomes for individuals with multimorbidity.
Expression Atlas: gene and protein expression across multiple studies and organisms
Tang, Y Amy; Bazant, Wojciech; Burke, Melissa; Fuentes, Alfonso Muñoz-Pomer; George, Nancy; Koskinen, Satu; Mohammed, Suhaib; Geniza, Matthew; Preece, Justin; Jarnuczak, Andrew F; Huber, Wolfgang; Stegle, Oliver; Brazma, Alvis; Petryszak, Robert
2018-01-01
Abstract Expression Atlas (http://www.ebi.ac.uk/gxa) is an added value database that provides information about gene and protein expression in different species and contexts, such as tissue, developmental stage, disease or cell type. The available public and controlled access data sets from different sources are curated and re-analysed using standardized, open source pipelines and made available for queries, download and visualization. As of August 2017, Expression Atlas holds data from 3,126 studies across 33 different species, including 731 from plants. Data from large-scale RNA sequencing studies including Blueprint, PCAWG, ENCODE, GTEx and HipSci can be visualized next to each other. In Expression Atlas, users can query genes or gene-sets of interest and explore their expression across or within species, tissues, developmental stages in a constitutive or differential context, representing the effects of diseases, conditions or experimental interventions. All processed data matrices are available for direct download in tab-delimited format or as R-data. In addition to the web interface, data sets can now be searched and downloaded through the Expression Atlas R package. Novel features and visualizations include the on-the-fly analysis of gene set overlaps and the option to view gene co-expression in experiments investigating constitutive gene expression across tissues or other conditions. PMID:29165655
Assessment of Low Cycle Fatigue Behavior of Powder Metallurgy Alloy U720
NASA Technical Reports Server (NTRS)
Gabb, Timothy P.; Bonacuse, Peter J.; Ghosn, Louis J.; Sweeney, Joseph W.; Chatterjee, Amit; Green, Kenneth A.
2000-01-01
The fatigue lives of modern powder metallurgy disk alloys are influenced by variabilities in alloy microstructure and mechanical properties. These properties can vary as functions of variables in the different steps of materials/component processing: powder atomization, consolidation, extrusion, forging, heat treating, and machining. It is important to understand the relationship between the statistical variations in life and these variables, as well as the change in life distribution due to changes in fatigue loading conditions. The objective of this study was to investigate these relationships in a nickel-base disk superalloy, U720, produced using powder metallurgy processing. Multiple strain-controlled fatigue tests were performed at 538 C (1000 F) at limited sets of test conditions. Analyses were performed to: (1) assess variations of microstructure, mechanical properties, and LCF failure initiation sites as functions of disk processing and loading conditions; and (2) compare mean and minimum fatigue life predictions using different approaches for modeling the data from assorted test conditions. Significant variations in life were observed as functions of the disk processing variables evaluated. However, the lives of all specimens could still be combined and modeled together. The failure initiation sites for tests performed at a strain ratio Rε = εmin/εmax of 0 were different from those in tests at a strain ratio of -1. An approach could still be applied to account for the differences in mean and maximum stresses and strains. This allowed the data in tests of various conditions to be combined for more robust statistical estimates of mean and minimum lives.
Jenson, Susan K.; Domingue, Julia O.
1988-01-01
The first phase of analysis is a conditioning phase that generates three data sets: the original DEM with depressions filled, a data set indicating the flow direction for each cell, and a flow accumulation data set in which each cell receives a value equal to the total number of cells that drain to it. The original DEM and these three derivative data sets can then be processed in a variety of ways to optionally delineate drainage networks, overland paths, watersheds for user-specified locations, sub-watersheds for the major tributaries of a drainage network, or pour point linkages between watersheds. The computer-generated drainage lines and watershed polygons and the pour point linkage information can be transferred to vector-based geographic information systems for further analysis. Comparisons between these computer-generated features and their manually delineated counterparts generally show close agreement, indicating that these software tools will save analyst time spent in manual interpretation and digitizing.
Source-independent full waveform inversion of seismic data
Lee, Ki Ha
2006-02-14
A set of seismic trace data is collected in an input data set that is first Fourier transformed in its entirety into the frequency domain. A normalized wavefield is obtained for each trace of the input data set in the frequency domain. Normalization is done with respect to the frequency response of a reference trace selected from the set of seismic trace data. The normalized wavefield is source independent, complex, and dimensionless. The normalized wavefield is shown to be uniquely defined as the normalized impulse response, provided that a certain condition is met for the source. This property allows construction of the inversion algorithm disclosed herein, without any source or source coupling information. The algorithm minimizes the error between the data normalized wavefield and the model normalized wavefield. The methodology is applicable to any 3-D seismic problem, and damping may be easily included in the process.
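The normalization step described above can be sketched in a few lines: transform every trace to the frequency domain and divide it by the frequency response of a chosen reference trace. The small stabilization constant is an assumption added to avoid division by near-zero spectral values; this is an illustration of the idea, not the patented algorithm.

```python
# Sketch of the source-independent normalization step: each trace spectrum is
# normalized by the frequency response of a reference trace, yielding complex,
# dimensionless normalized wavefields. The epsilon regularization is assumed.
import numpy as np

def normalized_wavefield(traces: np.ndarray, ref_index: int = 0,
                         eps: float = 1e-12) -> np.ndarray:
    """traces: (n_traces, n_samples) real array. Returns one complex
    normalized wavefield per trace."""
    spectra = np.fft.rfft(traces, axis=1)
    ref = spectra[ref_index]
    return spectra * np.conj(ref) / (np.abs(ref) ** 2 + eps)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = rng.standard_normal((5, 512))
    norm = normalized_wavefield(data, ref_index=0)
    # The reference trace normalizes to ~1 at every frequency.
    print(np.allclose(norm[0], 1.0, atol=1e-6))
```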
TOWARDS A MULTI-SCALE AGENT-BASED PROGRAMMING LANGUAGE METHODOLOGY
Somogyi, Endre; Hagar, Amit; Glazier, James A.
2017-01-01
Living tissues are dynamic, heterogeneous compositions of objects, including molecules, cells and extra-cellular materials, which interact via chemical, mechanical and electrical processes and reorganize via transformation, birth, death and migration processes. Current programming languages have difficulty describing the dynamics of tissues because: 1: Dynamic sets of objects participate simultaneously in multiple processes, 2: Processes may be either continuous or discrete, and their activity may be conditional, 3: Objects and processes form complex, heterogeneous relationships and structures, 4: Objects and processes may be hierarchically composed, 5: Processes may create, destroy and transform objects and processes. Some modeling languages support these concepts, but most cannot translate models into executable simulations. We present a new hybrid executable modeling language paradigm, the Continuous Concurrent Object Process Methodology (CCOPM), which naturally expresses tissue models, enabling users to visually create agent-based models of tissues, and also allows computer simulation of these models. PMID:29282379
Chen, Yi-Chuan; Spence, Charles
2017-06-01
The extent to which attention modulates multisensory processing in a top-down fashion is still a subject of debate among researchers. Typically, cognitive psychologists interested in this question have manipulated the participants' attention in terms of single/dual tasking or focal/divided attention between sensory modalities. We suggest an alternative approach, one that builds on the extensive older literature highlighting hemispheric asymmetries in the distribution of spatial attention. Specifically, spatial attention in vision, audition, and touch is typically biased preferentially toward the right hemispace, especially under conditions of high perceptual load. We review the evidence demonstrating such an attentional bias toward the right in extinction patients and healthy adults, along with the evidence of such rightward-biased attention in multisensory experimental settings. We then evaluate those studies that have demonstrated either a more pronounced multisensory effect in right than in left hemispace, or else similar effects in the two hemispaces. The results suggest that the influence of rightward-biased attention is more likely to be observed when the crossmodal signals interact at later stages of information processing and under conditions of higher perceptual load-that is, conditions under which attention is perhaps a compulsory enhancer of information processing. We therefore suggest that the spatial asymmetry in attention may provide a useful signature of top-down attentional modulation in multisensory processing.
Callaway, John C.; Cahoon, Donald R.; Lynch, James C.
2014-01-01
Tidal wetlands are highly sensitive to processes that affect their elevation relative to sea level. The surface elevation table–marker horizon (SET–MH) method has been used to successfully measure these processes, including sediment accretion, changes in relative elevation, and shallow soil processes (subsidence and expansion due to root production). The SET–MH method is capable of measuring changes at very high resolution (±millimeters) and has been used worldwide both in natural wetlands and under experimental conditions. Marker horizons are typically deployed using feldspar over 50- by 50-cm plots, with replicate plots at each sampling location. Plots are sampled using a liquid N2 cryocorer that freezes a small sample, allowing the handling and measurement of soft and easily compressed soils with minimal compaction. The SET instrument is a portable device that is attached to a permanent benchmark to make high-precision measurements of wetland surface elevation. The SET instrument has evolved substantially in recent decades, and the current rod SET (RSET) is widely used. For the RSET, a 15-mm-diameter stainless steel rod is pounded into the ground until substantial resistance is achieved to establish a benchmark. The SET instrument is attached to the benchmark and leveled such that it reoccupies the same reference plane in space, and pins lowered from the instrument repeatedly measure the same point on the soil surface. Changes in the height of the lowered pins reflect changes in the soil surface. Permanent or temporary platforms provide access to SET and MH locations without disturbing the wetland surface.
ERIC Educational Resources Information Center
Canelos, James
An internal cognitive variable--mental imagery representation--was studied using a set of three information-processing strategies under external stimulus visual display conditions for various learning levels. The copy strategy provided verbal and visual dual-coding and required formation of a vivid mental image. The relational strategy combined…
1985-01-01
envisionment) produced by GIZMO. In the envisionment, Is indicates the set of quantity-conditioned individuals that exists during a situation...envisionment step by step. In START, the initial state, GIZMO deduces that heat flow occurs, since there is assumed to be a temperature difference between the...stove. GIZMO implements the basic operations of qualitative process theory, including an envisioner for making predictions and a program for
M.S. Balshi; A.D. McGuire; P. Duffy; M. Flannigan; D.W. Kicklighter; J. Melillo
2009-01-01
We use a gridded data set developed with a multivariate adaptive regression spline approach to determine how area burned varies each year with changing climatic and fuel moisture conditions. We apply the process-based Terrestrial Ecosystem Model to evaluate the role of future fire on the carbon dynamics of boreal North America in the context of changing atmospheric...
Kurt F. Anschuetz
2007-01-01
The underlying premise of this discussion is that people contribute to conditions that warrant the restructuring and reorganization of their interactions with their physical settings, with other members of their communities, and with residents of other communities (see Anschuetz 1998b:31-82). People revise their existing tactics and strategies, or adopt altogether new...
Processes for liquefying carbonaceous feedstocks and related compositions
MacDonnell, Frederick M.; Dennis, Brian H.; Billo, Richard E.; Priest, John W.
2017-02-28
Methods for the conversion of lignites, subbituminous coals and other carbonaceous feedstocks into synthetic oils, including oils with properties similar to light sweet crude oil, using a solvent derived from hydrogenating oil produced by pyrolyzing lignite, are set forth herein. Such methods may be conducted, for example, under mild operating conditions with a low-cost stoichiometric co-reagent and/or a disposable conversion agent.
Automated system function allocation and display format: Task information processing requirements
NASA Technical Reports Server (NTRS)
Czerwinski, Mary P.
1993-01-01
An important consideration when designing the interface to an intelligent system concerns function allocation between the system and the user. The display of information could be held constant, or 'fixed', leaving the user with the task of searching through all of the available information, integrating it, and classifying the data into a known system state. On the other hand, the system, based on its own intelligent diagnosis, could display only relevant information in order to reduce the user's search set. The user would still be left the task of perceiving and integrating the data and classifying it into the appropriate system state. Finally, the system could display the patterns of data. In this scenario, the task of integrating the data is carried out by the system, and the user's information processing load is reduced, leaving only the tasks of perception and classification of the patterns of data. Humans are especially adept at this form of display processing. Although others have examined the relative effectiveness of alphanumeric and graphical display formats, it is interesting to reexamine this issue together with the function allocation problem. Currently, Johnson Space Center is the test site for an intelligent Thermal Control System (TCS), TEXSYS, being tested for use with Space Station Freedom. Expert TCS engineers, as well as novices, were asked to classify several displays of TEXSYS data into various system states (including nominal and anomalous states). Three different display formats were used: fixed, subset, and graphical. The hypothesis tested was that the graphical displays would provide for fewer errors and faster classification times by both experts and novices, regardless of the kind of system state represented within the display. The subset displays were hypothesized to be the second most effective display format/function allocation condition, based on the fact that the search set is reduced in these displays. Both the subset and the graphic display conditions were hypothesized to be processed more efficiently than the fixed display conditions.
NASA Astrophysics Data System (ADS)
Vivet, L.; Joudrier, A.-L.; Bouttemy, M.; Vigneron, J.; Tan, K. L.; Morelle, J. M.; Etcheberry, A.; Chalumeau, L.
2013-06-01
Electroless nickel-high-phosphorus (Ni-P) plating is known for its physical properties. For electronic and mechatronic assembly processes carried out under ambient conditions, the wettability of the Ni-P layer at ambient temperature and in ambient air remains a key point of surface quality investigation. This contribution is devoted to the study of the surface properties of Ni-P films subjected to air plasma treatment. We focus our attention on the evolution of the surface wettability, using the classical sessile drop technique. Interpreting the results with the OWRK model, we extract the polar and disperse surface tension components, from which we deduce the typical evolution of the surface properties with the different treatment settings. By controlling the parameters of the plasma exposure, we are able to change the response of our Ni-P samples from totally hydrophobic to totally hydrophilic behaviour. All intermediate states can be reached by adapting the treatment parameters. It is thus demonstrated that the apparent Ni-P surface properties can be fully tuned and that the surface state can be well characterized by wettability measurements. To deepen our knowledge of the surface modifications induced by the plasma, we performed parallel SEM and XPS analyses, which provide information on the structure and chemical composition of the surface for each set of treatment parameters. Using this double approach, we were able to propose a correlation between the evolution of surface chemical composition and surface wettability, which are completely governed by the plasma treatment conditions. Chemical parameters such as the elimination of carbon contamination, the progressive surface oxidation, and the slight incorporation of nitrogen due to the air plasma interaction are closely associated with the evolution of the wettability properties. A complete engineering route for Ni-P surface preparation has thus been established. The sessile drop method can be considered a very efficient method for qualifying treatments of Ni-P surfaces before performing electronic and mechatronic assembly processes under ambient conditions.
NASA Astrophysics Data System (ADS)
Jun, Jinhyuck; Park, Minwoo; Park, Chanha; Yang, Hyunjo; Yim, Donggyu; Do, Munhoe; Lee, Dongchan; Kim, Taehoon; Choi, Junghoe; Luk-Pat, Gerard; Miloslavsky, Alex
2015-03-01
As the industry pushes to ever more complex illumination schemes to increase resolution for next generation memory and logic circuits, sub-resolution assist feature (SRAF) placement requirements become increasingly severe. Therefore, device manufacturers are evaluating improvements in SRAF placement algorithms which do not sacrifice main feature (MF) patterning capability. There are several well-known methods to generate SRAFs, such as Rule Based Assist Features (RBAF), Model Based Assist Features (MBAF), and hybrid assist features that combine the two approaches. Rule Based Assist Features (RBAF) continue to be deployed, even with the availability of Model Based Assist Features (MBAF) and Inverse Lithography Technology (ILT). Certainly for the 3x nm node, and even at the 2x nm nodes and lower, RBAF is used because it demands less run time and provides better consistency. Since RBAF is needed now and in the future, what is also needed is a faster method to create the AF rule tables. The current method typically involves making masks and printing wafers that contain several experiments, varying the main feature configurations, AF configurations, dose conditions, and defocus conditions; this is a time-consuming and expensive process. In addition, as the technology node shrinks, wafer process changes and source shape redesigns occur more frequently, escalating the cost of rule table creation. Furthermore, as the demand on process margin escalates, there is a greater need for multiple rule tables, each tailored to a specific set of main-feature configurations. Model Assisted Rule Tables (MART) creates a set of test patterns and evaluates the simulated CD at nominal, defocused and off-dose conditions. It also uses lithographic simulation to evaluate the likelihood of AF printing. It then analyzes the simulation data to automatically create AF rule tables. The analysis results display the cost of different AF configurations as the space between a pair of main features grows. In summary, the model-based rule table method makes rule-table creation much easier and faster, lowering the barrier to the creation of more rule tables.
Laboratory investigation on effects of flood intermittency on river delta dynamics
NASA Astrophysics Data System (ADS)
Miller, K. L.; Kim, W.
2015-12-01
In order to simplify the complex hydrological variability of flow conditions, experiments modeling delta evolution are often conducted using a representative "channel-forming" flood flow, and results are then related to field settings using an intermittency factor, defined as the fraction of total time at flood conditions. Although this intermittency factor makes it easier to investigate how variables, such as relative base level and/or sediment supply, affect delta dynamics, little is known about how this generalization to a single flow condition affects delta processes. We conducted a set of laboratory experiments with periodic flow conditions to determine the effects of intermittent discharges on delta evolution. During each experiment, floods with a set water discharge and sediment supply alternate with periods of normal flow, in which the water flux is halved and the sediment discharge is turned off. For each run, the magnitude of the flood is held constant, but the duration is assigned differently, thus varying the intermittency between 1 and 0.2. We find that as the intermittency factor decreases (duration of each flood period decreases), the delta topset has a larger, more elongated area with a shallower slope as a result of reworking on the delta topset during normal flow conditions. During periods of normal flow, the system adjusts towards a new equilibrium state that then in turn acts as the initial condition for the subsequent flood period. Furthermore, the natural delta avulsion cycle becomes obscured by the flood cycles as the flood duration becomes shorter than the autogenic behavior. These results suggest that the adjustment timescale for differing flow conditions is a factor in determining the overall shape of the delta and behavior of the fluviodeltaic channels. We conclude that periods of normal flow, when topset sediment is reworked, may be just as important to delta dynamics as flood periods, when sediment is supplied to the system.
A Method for Applying Fluvial Geomorphology in Support of Catchment-Scale River Restoration Planning
NASA Astrophysics Data System (ADS)
Sear, D.; Newson, M.; Hill, C.; Branson, J.; Old, J.
2005-12-01
Fluvial geomorphology is increasingly used by those responsible for conserving river ecosystems; survey techniques are used to derive conceptual models of the processes and forms that characterise particular systems and locations, with a view to making statements of 'condition' or 'status' and providing fundamental strategies for rehabilitation/restoration. However, there are important scale-related problems in developing catchment-scale restoration plans that inevitably are implemented on a reach-by-reach basis. This paper reports on a watershed-scale methodology for setting geomorphological and physical habitat reference conditions based on a science-based conceptual model of catchment-channel function. Using a case study from the River Nar, a gravel-bed, groundwater-dominated river in the UK with important conservation status, the paper describes the sequence of the methodology: from analysis of available evidence, through field data capture, to development of a conceptual model of catchment-wide fluvial dynamics. Reference conditions were derived from the conceptual model and gathered from the literature for the two main river types found on the River Nar, and compared with the current situation in 76 sub-reaches from source to mouth. Multi-Criteria Analysis (MCA) was used to score the extent of channel departures from 'natural' and to suggest the basis for a progressive restoration strategy for the whole river system. MCA is shown to be a flexible method for setting and communicating decisions that are amenable to stakeholder and public consultation.
Piccini, Ilaria; Araúzo-Bravo, Marcos; Seebohm, Guiscard; Greber, Boris
2016-12-01
Cardiac induction of human embryonic stem cells (hESCs) is a process bearing increasing medical relevance, yet it is poorly understood from a developmental biology perspective. Anticipated technological progress in deriving stably expandable cardiac precursor cells or in advancing cardiac subtype specification protocols will likely require deeper insights into this fascinating system. Recent improvements in controlling hESC differentiation now enable a near-homogeneous induction of the cardiac lineage. This is based on an optimized initial stimulation of mesoderm-inducing signaling pathways such as Activin and/or FGF, BMP, and WNT, followed by WNT inhibition as a secondary requirement. Here, we describe a comprehensive data set based on varying hESC differentiation conditions in a systematic manner and recording high-resolution differentiation time-courses analyzed by genome-wide expression profiling (GEO accession number GSE67154). As a baseline, hESCs were differentiated into cardiomyocytes under optimal conditions. Moreover, in additional time-series, individual signaling factors were withdrawn from the initial stimulation cocktail to reveal their specific roles via comparison to the standard condition. Hence, this data set presents a rich resource for hypothesis generation in studying human cardiac induction, as we reveal numbers of known as well as uncharacterized genes prominently marking distinct intermediate stages in the process. These data will also be useful for identifying putative cardiac master regulators in the human system as well as for characterizing expandable cardiac stem cells.
NASA Astrophysics Data System (ADS)
Unland, N. P.; Cartwright, I.; Cendón, D. I.; Chisari, R.
2014-12-01
Bank exchange processes within 50 m of the Tambo River, southeast Australia, have been investigated through the combined use of 3H and 14C. Groundwater residence times increase towards the Tambo River, which suggests the absence of significant bank storage. Major ion concentrations and δ2H and δ18O values of bank water also indicate that bank infiltration does not significantly impact groundwater chemistry under baseflow and post-flood conditions, suggesting that the gaining nature of the river may be driving the return of bank storage water back into the Tambo River within days of peak flood conditions. The covariance between 3H and 14C indicates the leakage and mixing between old (~17 200 years) groundwater from a semi-confined aquifer and younger groundwater (<100 years) near the river, where confining layers are less prevalent. It is likely that the upward infiltration of deeper groundwater from the semi-confined aquifer during flooding limits bank infiltration. Furthermore, the more saline deeper groundwater likely controls the geochemistry of water in the river bank, minimising the chemical impact that bank infiltration has in this setting. These processes, coupled with the strongly gaining nature of the Tambo River are likely to be the factors reducing the chemical impact of bank storage in this setting. This study illustrates the complex nature of river groundwater interactions and the potential downfall in assuming simple or idealised conditions when conducting hydrogeological studies.
Nuclear sensor signal processing circuit
Kallenbach, Gene A [Bosque Farms, NM; Noda, Frank T [Albuquerque, NM; Mitchell, Dean J [Tijeras, NM; Etzkin, Joshua L [Albuquerque, NM
2007-02-20
An apparatus and method are disclosed for a compact and temperature-insensitive nuclear sensor that can be calibrated with a non-hazardous radioactive sample. The nuclear sensor includes a gamma ray sensor that generates tail pulses from radioactive samples. An analog conditioning circuit conditions the tail-pulse signals from the gamma ray sensor, and a tail-pulse simulator circuit generates a plurality of simulated tail-pulse signals. A computer system processes the tail pulses from the gamma ray sensor and the simulated tail pulses from the tail-pulse simulator circuit. The nuclear sensor is calibrated under the control of the computer. The offset is adjusted using the simulated tail pulses. Since the offset is set to zero or near zero, the sensor gain can be adjusted with a non-hazardous radioactive source such as, for example, naturally occurring radiation and potassium chloride.
A Fluorescent G-quadruplex Sensor for Chemical RNA Copying.
Giurgiu, Constantin; Wright, Tom; O'Flaherty, Derek; Szostak, Jack
2018-06-25
Non-enzymatic RNA replication may have been one of the processes involved in the appearance of life on Earth. Attempts to recreate this process in a laboratory setting have not been successful thus far, highlighting a critical need for finding prebiotic conditions that increase the rate and the yield. Here, we present a highly parallel assay for template directed RNA synthesis that relies on the intrinsic fluorescence of a 2-aminopurine modified G-quadruplex. We demonstrate the application of the assay to examine the combined influence of multiple variables including pH, divalent metal concentrations and ribonucleotide concentrations on the copying of RNA sequences. The assay enables a direct survey of physical and chemical conditions, potentially prebiotic, which could enable the chemical replication of RNA.
Distributed condition monitoring techniques of optical fiber composite power cable in smart grid
NASA Astrophysics Data System (ADS)
Sun, Zhihui; Liu, Yuan; Wang, Chang; Liu, Tongyu
2011-11-01
Optical fiber composite power cable such as optical phase conductor (OPPC) is significant for the development of the smart grid. This paper discusses distributed condition monitoring techniques for OPPC, which adopts embedded single-mode fiber as the sensing medium. By applying optical time domain reflectometry and laser Raman scattering, high-resolution spatial positioning and high-precision distributed temperature measurement are achieved. The OPPC cable condition parameters, including temperature and its location, current-carrying capacity, and the location of fracture and loss, can be monitored online. An experimental OPPC distributed condition monitoring system was set up, and the main parts, including the pulsed fiber laser, weak Raman signal reception, high-speed acquisition and cumulative averaging, temperature demodulation and current-carrying capacity analysis, are introduced. Distributed condition monitoring of OPPC is significant for power transmission management and security.
Support vector machine incremental learning triggered by wrongly predicted samples
NASA Astrophysics Data System (ADS)
Tang, Ting-long; Guan, Qiu; Wu, Yi-rong
2018-05-01
According to the classic Karush-Kuhn-Tucker (KKT) theorem, at every step of incremental support vector machine (SVM) learning, a newly added sample that violates the KKT conditions will become a new support vector (SV) and may migrate old samples between the SV set and the non-support vector (NSV) set, and at the same time the learning model should be updated based on the SVs. However, it is not clear in advance which of the old samples will change between SVs and NSVs. Additionally, the learning model may be updated unnecessarily, which does not greatly increase its accuracy but decreases the training speed. Therefore, how to choose the new SVs from the old sets during the incremental stages, and when to process incremental steps, greatly influences the accuracy and efficiency of incremental SVM learning. In this work, a new algorithm is proposed to select candidate SVs and to use wrongly predicted samples to trigger the incremental processing. Experimental results show that the proposed algorithm can achieve good performance with high efficiency, high speed and good accuracy.
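A minimal sketch of the triggering idea, assuming scikit-learn's SVC and a simplification in which the model is refit from its current support vectors plus the new sample whenever that sample is misclassified; the class and method names are inventions of this example, not the authors' algorithm.

```python
# Sketch of incremental SVM learning triggered by wrongly predicted samples.
# Simplification: whenever a new sample is misclassified, retrain from the
# current support vectors plus the new sample; correctly predicted samples
# are skipped. This is an illustration, not the paper's exact procedure.
import numpy as np
from sklearn.svm import SVC

class TriggeredIncrementalSVM:
    def __init__(self, **svc_kwargs):
        self.svc_kwargs = svc_kwargs
        self.model = None
        self.X = None
        self.y = None

    def _fit(self):
        self.model = SVC(**self.svc_kwargs).fit(self.X, self.y)

    def partial_update(self, x_new, y_new):
        x_new = np.asarray(x_new, dtype=float).reshape(1, -1)
        if self.model is None:
            # Accumulate samples until both classes are seen, then do the first fit.
            self.X = x_new if self.X is None else np.vstack([self.X, x_new])
            self.y = np.array([y_new]) if self.y is None else np.append(self.y, y_new)
            if len(np.unique(self.y)) >= 2:
                self._fit()
            return True
        if self.model.predict(x_new)[0] == y_new:
            return False                       # correctly predicted: no update
        # Trigger: keep only the current support vectors, add the new sample, refit.
        sv_idx = self.model.support_
        self.X = np.vstack([self.X[sv_idx], x_new])
        self.y = np.concatenate([self.y[sv_idx], [y_new]])
        self._fit()
        return True

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    learner = TriggeredIncrementalSVM(kernel="linear", C=1.0)
    updates = sum(learner.partial_update(x, label) for x, label in zip(X, y))
    print("updates performed:", updates)
```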
NASA Astrophysics Data System (ADS)
Suzuki, Izumi; Mikami, Yoshiki; Ohsato, Ario
A technique that acquires documents in the same category as a given short text is introduced. Regarding the given text as a training document, the system marks up the most similar document, or sufficiently similar documents, from the document domain (or the entire Web). The system then adds the marked documents to the training set, learns the set, and repeats this process until no more documents are marked. Imposing a monotone increasing property on the similarity as the system learns enables it to 1) detect the correct stopping point, when no more documents remain to be marked, and 2) decide the threshold value that the classifier uses. In addition, under the condition that the normalization process is limited to dividing term weights by a p-norm of the weights, the linear classifier in which training documents are indexed in a binary manner is the only instance that satisfies the monotone increasing property. The feasibility of the proposed technique was confirmed through an examination of binary similarity and using English and German documents randomly selected from the Web.
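A minimal sketch of the iterative acquisition loop described above, assuming binary term indexing with L2 (p = 2) normalization and a fixed similarity threshold; the threshold handling is a simplification of the paper's monotone-similarity scheme, and all names are assumptions of this example.

```python
# Sketch of the iterative acquisition process: start from a short seed text,
# repeatedly mark documents whose similarity to the training set exceeds a
# threshold, add them to the training set, and stop when nothing new is marked.
import numpy as np

def binary_vector(text: str, vocab: dict) -> np.ndarray:
    """Index a document in a binary manner and normalize by the L2 norm."""
    v = np.zeros(len(vocab))
    for term in set(text.lower().split()):
        if term in vocab:
            v[vocab[term]] = 1.0
    norm = np.linalg.norm(v)          # p-norm with p = 2
    return v / norm if norm > 0 else v

def acquire(seed: str, documents: list, threshold: float = 0.3) -> set:
    vocab = {t: i for i, t in enumerate(
        sorted({w for d in [seed] + documents for w in d.lower().split()}))}
    doc_vecs = [binary_vector(d, vocab) for d in documents]
    training = [binary_vector(seed, vocab)]
    marked = set()
    changed = True
    while changed:                    # repeat until no more documents are marked
        changed = False
        centroid = np.mean(training, axis=0)
        for i, v in enumerate(doc_vecs):
            if i not in marked and float(v @ centroid) >= threshold:
                marked.add(i)
                training.append(v)
                changed = True
    return marked

if __name__ == "__main__":
    docs = ["machine learning for text classification",
            "text classification with linear classifiers",
            "recipes for chocolate cake"]
    print(acquire("short text about text classification", docs))   # {0, 1}
```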
Representations of mechanical assembly sequences
NASA Technical Reports Server (NTRS)
Homem De Mello, Luiz S.; Sanderson, Arthur C.
1991-01-01
Five types of representations for assembly sequences are reviewed: the directed graph of feasible assembly sequences, the AND/OR graph of feasible assembly sequences, the set of establishment conditions, and two types of sets of precedence relationships (precedence relationships between the establishment of one connection between parts and the establishment of another connection, and precedence relationships between the establishment of one connection and states of the assembly process). The mappings of one representation into the others are established. The correctness and completeness of these representations are established. The results presented are needed in the proof of correctness and completeness of algorithms for the generation of mechanical assembly sequences.
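As a hedged illustration of the second representation, the AND/OR graph, the sketch below enumerates every feasible two-way decomposition of every subassembly of a toy three-part product; the feasibility rule is purely illustrative and the function names are assumptions of this example.

```python
# Minimal sketch of an AND/OR graph of feasible assembly sequences for a toy
# three-part product {A, B, C}. Nodes are subassemblies (frozensets of parts);
# each AND-arc links an assembly to a pair of subassemblies whose joining
# produces it. The feasibility predicate here is purely illustrative.
from itertools import combinations

def and_or_graph(parts, feasible):
    """Return {assembly: [(sub1, sub2), ...]} listing every feasible two-way
    decomposition (AND-arc) of every subassembly with two or more parts."""
    graph = {}
    for r in range(2, len(parts) + 1):
        for combo in combinations(sorted(parts), r):
            assembly = frozenset(combo)
            arcs = []
            rest = combo[1:]
            # Fix combo[0] in the left subassembly so each split appears once.
            for k in range(len(rest)):
                for extra in combinations(rest, k):
                    s1 = frozenset((combo[0],) + extra)
                    s2 = assembly - s1
                    if s2 and feasible(s1, s2):
                        arcs.append((s1, s2))
            graph[assembly] = arcs
    return graph

if __name__ == "__main__":
    # Illustrative feasibility rule (an assumption): parts A and C cannot be
    # joined directly to each other as single parts.
    def feasible(s1, s2):
        return {s1, s2} != {frozenset({"A"}), frozenset({"C"})}

    g = and_or_graph({"A", "B", "C"}, feasible)
    for assembly, arcs in g.items():
        print(set(assembly), [(set(a), set(b)) for a, b in arcs])
```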
A Supramodal Neural Network for Speech and Gesture Semantics: An fMRI Study
Weis, Susanne; Kircher, Tilo
2012-01-01
In a natural setting, speech is often accompanied by gestures. Like language, speech-accompanying iconic gestures convey semantic information to some extent. However, whether comprehension of the information contained in the auditory and visual modalities depends on the same or different brain networks is largely unknown. In this fMRI study, we aimed at identifying the cortical areas engaged in supramodal processing of semantic information. BOLD changes were recorded in 18 healthy right-handed male subjects watching video clips showing an actor who either performed speech (S, acoustic) or gestures (G, visual) in more (+) or less (−) meaningful varieties. In the experimental conditions, familiar speech or isolated iconic gestures were presented; during the visual control condition the volunteers watched meaningless gestures (G−), while during the acoustic control condition a foreign language was presented (S−). The conjunction of the visual and acoustic semantic processing revealed activations extending from the left inferior frontal gyrus to the precentral gyrus, and included bilateral posterior temporal regions. We conclude that proclaiming this frontotemporal network the brain's core language system is to take too narrow a view. Our results rather indicate that these regions constitute a supramodal semantic processing network. PMID:23226488
Computing algebraic transfer entropy and coupling directions via transcripts
NASA Astrophysics Data System (ADS)
Amigó, José M.; Monetti, Roberto; Graff, Beata; Graff, Grzegorz
2016-11-01
Most random processes studied in nonlinear time series analysis take values on sets endowed with a group structure, e.g., the real and rational numbers, and the integers. This fact makes it possible to associate with each pair of group elements a third element, called their transcript, which is defined as the product of the second element in the pair times the first one. The transfer entropy of two such processes is called algebraic transfer entropy. It measures the information transferred between two coupled processes whose values belong to a group. In this paper, we show that, subject to one constraint, the algebraic transfer entropy matches the (in general, conditional) mutual information of certain transcripts with one variable less. This property has interesting practical applications, especially to the analysis of short time series. We also derive weak conditions for the 3-dimensional algebraic transfer entropy to yield the same coupling direction as the corresponding mutual information of transcripts. A related issue concerns the use of mutual information of transcripts to determine coupling directions in cases where the conditions just mentioned are not fulfilled. We checked the latter possibility in the lowest dimensional case with numerical simulations and cardiovascular data, and obtained positive results.
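One common convention, assumed here, defines the transcript of a pair (x, y) as y composed with the inverse of x, which for the additive group of integers mod m becomes (y − x) mod m. The sketch below computes transcripts of two coupled symbolic sequences and estimates the mutual information of the transcripts with a simple plug-in estimator; the estimator choice and all names are assumptions of this illustration, not the paper's method.

```python
# Sketch of the transcript idea for sequences taking values in Z_m: the
# transcript of (x, y) is (y - x) mod m under the convention assumed above.
# Mutual information of transcripts is estimated with a plug-in estimator.
import numpy as np

def transcripts(x: np.ndarray, y: np.ndarray, m: int) -> np.ndarray:
    """Element-wise transcript of two sequences with values in Z_m."""
    return (y - x) % m

def mutual_information(a: np.ndarray, b: np.ndarray) -> float:
    """Plug-in mutual information (in bits) of two discrete sequences."""
    joint, pa, pb = {}, {}, {}
    for pair in zip(a.tolist(), b.tolist()):
        joint[pair] = joint.get(pair, 0) + 1
    n = len(a)
    for (va, vb), c in joint.items():
        pa[va] = pa.get(va, 0) + c
        pb[vb] = pb.get(vb, 0) + c
    mi = 0.0
    for (va, vb), c in joint.items():
        pxy = c / n
        mi += pxy * np.log2(pxy / ((pa[va] / n) * (pb[vb] / n)))
    return mi

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    m = 4
    x = rng.integers(0, m, 5000)
    y = (x + rng.integers(0, 2, 5000)) % m     # y is coupled to x
    # Transcripts between successive values of each process.
    tx = transcripts(x[:-1], x[1:], m)
    ty = transcripts(y[:-1], y[1:], m)
    print(round(mutual_information(tx, ty), 3))  # clearly positive for coupled series
```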
Atalay, Nart Bedin; Misirlisoy, Mine
2012-11-01
The item-specific proportion congruence (ISPC) manipulation (Jacoby, Lindsay, & Hessels, 2003) produces larger Stroop interference for mostly congruent items than mostly incongruent items. This effect has been attributed to dynamic control over word-reading processes. However, proportion congruence of an item in the ISPC manipulation is completely confounded with response contingency, suggesting the alternative hypothesis, that the ISPC effect is a result of learning response contingencies (Schmidt & Besner, 2008). The current study asks whether the ISPC effect can be explained by a pure stimulus-response contingency-learning account, or whether other control processes play a role as well, by comparing within- and between-language conditions in a bilingual task. Experiment 1 showed that contingency learning for noncolor words was larger for the within-language than the between-language condition. Experiment 2 revealed significant ISPC effects for both within- and between-language conditions; importantly, the effect was larger in the former. The results of the contingency analyses for Experiment 2 were parallel to that of Experiment 1 and did not show an interaction between contingency and congruency. Put together, these sets of results support the view that contingency-learning processes dominate color-word ISPC effects.
Experimental approach for thermal parameters estimation during glass forming process
NASA Astrophysics Data System (ADS)
Abdulhay, B.; Bourouga, B.; Alzetto, F.; Challita, C.
2016-10-01
In this paper, an experimental device designed and developed to estimate thermal conditions at the glass/piston contact interface is presented. This device is made of two parts: the upper part contains the piston, made of metal, and a heating device to raise the temperature of the piston up to 500 °C. The lower part is composed of a lead crucible and a glass sample. The assembly is provided with a heating system, an induction furnace of 6 kW, for heating the glass up to 950 °C. The developed experimental procedure permitted, in a previously published study, estimation of the thermal contact resistance (TCR) using the inverse technique developed by Beck [1]. The semi-transparent character of the glass has been taken into account by an additional radiative heat flux and an equivalent thermal conductivity. After the set-up tests, reproducibility experiments for a specific contact pressure were carried out with a maximum dispersion that does not exceed 6%. Then, experiments under different conditions for a specific glass forming process regarding the application (packaging, buildings and automobile) were carried out. The objective is to determine experimentally, for each application, the typical conditions capable of minimizing the glass temperature loss during the glass forming process.
Dou, Ming; Zuo, Qiting; Zhang, Jinping; Li, Congying; Li, Guiqiu
2013-09-01
With rapid economic development, the Pearl River Delta (PRD) of China has experienced a series of serious heavy metal pollution events. Considering the complex hydrodynamic and pollutant transport processes, a one-dimensional hydrodynamic model and a heavy metal transport model were developed for the tidal river network of the PRD. Several pollution emergency scenarios were then designed by combining the upper inflow, water quality and lower tide-level boundary conditions. Using this set of models, the temporal and spatial evolution of cadmium (Cd) concentration was simulated. The influence of changes in hydrodynamic conditions on Cd transport in the tidal river network was assessed, and its transport laws were summarized. The results showed the following: flow changes in the tidal river network were influenced remarkably by tidal backwater action, which further influenced the transport process of heavy metals; Cd concentrations in most sections during high tide were far greater than those during middle or low tides; and increased inflows from upper reaches could intensify water pollution in the West River (during high tide) or the North River (during middle or low tides).
A theory of utility conditionals: Paralogical reasoning from decision-theoretic leakage.
Bonnefon, Jean-François
2009-10-01
Many "if p, then q" conditionals have decision-theoretic features, such as antecedents or consequents that relate to the utility functions of various agents. These decision-theoretic features leak into reasoning processes, resulting in various paralogical conclusions. The theory of utility conditionals offers a unified account of the various forms that this phenomenon can take. The theory is built on 2 main components: (1) a representational tool (the utility grid), which summarizes in compact form the decision-theoretic features of a conditional, and (2) a set of folk axioms of decision, which reflect reasoners' beliefs about the way most agents make their decisions. Applying the folk axioms to the utility grid of a conditional allows for the systematic prediction of the paralogical conclusions invited by the utility grid's decision-theoretic features. The theory of utility conditionals significantly extends the scope of current theories of conditional inference and moves reasoning research toward a greater integration with decision-making research.
NASA Astrophysics Data System (ADS)
Richey, J. Elizabeth
Research examining analogical comparison and self-explanation has produced a robust set of findings about learning and transfer supported by each instructional technique. However, it is unclear how the types of knowledge generated through each technique differ, which has important implications for cognitive theory as well as instructional practice. I conducted a pair of experiments to directly compare the effects of instructional prompts supporting self-explanation, analogical comparison, and the study of instructional explanations across a number of fine-grained learning process, motivation, metacognition, and transfer measures. Experiment 1 explored these questions using sequence extrapolation problems, and results showed no differences between self-explanation and analogical comparison support conditions on any measure. Experiment 2 explored the same questions in a science domain. I evaluated condition effects on transfer outcomes; self-reported self-explanation, analogical comparison, and metacognitive processes; and achievement goals. I also examined relations between transfer and self-reported processes and goals. Receiving materials with analogical comparison support and reporting greater levels of analogical comparison were both associated with worse transfer performance, while reporting greater levels of self-explanation was associated with better performance. Learners' self-reports of self-explanation and analogical comparison were not related to condition assignment, suggesting that the questionnaires did not measure the same processes promoted by the intervention, or that individual differences in processing are robust even when learners are instructed to engage in self-explanation or analogical comparison.
Bhattacharya, Sukanta S.; Syed, Khajamohiddin; Shann, Jodi; Yadav, Jagjit S.
2013-01-01
High molecular weight polycyclic aromatic hydrocarbons (HMW-PAHs) such as benzo[a]pyrene (BaP) are resistant to biodegradation in soil. Conventionally, the white rot fungus Phanerochaete chrysosporium has been investigated for HMW-PAH degradation in soil primarily using nutrient-deficient (ligninolytic) conditions, albeit with limited and non-sustainable biodegradation outcomes. In this study, we report the development of an alternative, novel biphasic process initiated under nutrient-sufficient (non-ligninolytic) culture conditions, by employing an advanced experimental design strategy. During the initial nutrient-sufficient non-ligninolytic phase (16 days), the process showed upregulation (3.6- and 22.3-fold, respectively) of two key PAH-oxidizing P450 monooxygenases, pc2 (CYP63A2) and pah4 (CYP5136A3), and formation of a typical P450-hydroxylated metabolite. This, along with abrogation (84.9%) of BaP degradation activity in response to a P450-specific inhibitor, implied a key role of these monooxygenases. The subsequent phase, triggered on continued incubation (to 25 days), switched the process from non-ligninolytic to ligninolytic, resulting in a significantly higher net degradation of BaP (91.6% as against 67.4% in the control nutrient-limited set) with concomitant de novo ligninolytic enzyme expression, making it a biphasic process that yields improved, sustainable bioremediation of PAH-contaminated soil. To our knowledge, this is the first report on the development of such a biphasic process for bioremediation application of a white rot fungus. PMID:24051002
Capitanio, John P.; Abel, Kristina; Mendoza, Sally P.; Blozis, Shelley A.; McChesney, Michael B.; Cole, Steve W.; Mason, William A.
2008-01-01
From the beginning of the AIDS epidemic, stress has been a suspected contributor to the wide variation seen in disease progression, and some evidence supports this idea. Not all individuals respond to a stressor in the same way, however, and little is known about the biological mechanisms by which variations in individuals’ responses to their environment affect disease-relevant immunologic processes. Using the simian immunodeficiency virus/rhesus macaque model of AIDS, we explored how personality (sociability) and genotype (serotonin transporter promoter) independently interact with social context (stable or unstable social conditions) to influence behavioral expression, plasma cortisol concentrations, SIV-specific IgG, and expression of genes associated with Type I interferon early in infection. SIV viral RNA set-point was strongly and negatively correlated with survival as expected. Set-point was also associated with expression of interferon-stimulated genes, with CXCR3 expression, and with SIV-specific IgG titers. Poorer immune responses, in turn, were associated with display of sustained aggression and submission. Personality and genotype acted independently as well as in interaction with social condition to affect behavioral responses. Together, the data support an “interactionist” perspective (Eysenck, 1991) on disease. Given that an important goal of HIV treatment is to maintain viral set-point as low as possible, our data suggest that supplementing anti-retroviral therapy with behavioral or pharmacologic modulation of other aspects of an organism’s functioning might prolong survival, particularly among individuals living under conditions of threat or uncertainty. PMID:17719201
Affect intensity and processing fluency of deterrents.
Holman, Andrei
2013-01-01
The theory of emotional intensity (Brehm, 1999) suggests that the intensity of affective states depends on the magnitude of their current deterrents. Our study investigated the role that fluency--the subjective experience of ease of information processing--plays in modulating emotional intensity in reaction to deterrents. Following an induction phase of good mood, we manipulated both the magnitude of the deterrents (using sets of photographs with pre-tested potential to instigate an emotion incompatible with the pre-existing affective state--pity) and their processing fluency (normal vs. enhanced through subliminal priming). The current affective state and the perception of the deterrents were then measured. In the normal processing conditions, the results revealed the cubic effect predicted by the emotional intensity theory, with the initial affective state being replaced by the one appropriate to the deterrent only in participants exposed to the high-magnitude deterrence. In the enhanced fluency conditions the emotional intensity pattern was drastically altered; moreover, the replacement of the initial affective state occurred at a lower level of deterrence magnitude (moderate instead of high), suggesting that enhanced fluency strengthens the emotional impact of deterrence.
Grasso, Esteban; Gori, Soledad; Paparini, Daniel; Soczewski, Elizabeth; Fernández, Laura; Gallino, Lucila; Salamone, Gabriela; Martinez, Gustavo; Irigoyen, Marcela; Ruhlmann, Claudio; Pérez Leirós, Claudia; Ramhorst, Rosanna
2018-01-15
The decidualization process involves phenotype and functional changes in endometrial cells and the modulation of mediators with immunoregulatory properties, such as the vasoactive intestinal peptide (VIP). We investigated the contribution of VIP to the decidualization program and to immunoregulation throughout the human embryo implantation process. Decidualization of the human endometrial stromal cell line (HESC) with medroxyprogesterone-dibutyryl-cAMP increased the VIP/VPAC-receptor system. In fact, VIP could induce decidualization, increasing differentiation markers (IGFBP1, PRL, the KLF13/KLF9 ratio, CXCL12, CXCL8 and CCL2) and allowing blastocyst-like spheroid (BLS) invasion in an in vitro model of embryo implantation. Focusing on the tolerogenic effects, decidualized cells induced a semi-mature profile on maternal dendritic cells and restrained CD4+ cell recruitment while increasing regulatory T-cell recruitment. Interestingly, conditioned media from developmentally impaired human blastocysts diminished the invasion and regulatory T-cell recruitment in these settings. These observations suggest that VIP contributes to the implantation process by inducing decidualization, allowing BLS invasion and favoring a tolerogenic micro-environment. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Sethuramalingam, Prabhu; Vinayagam, Babu Kupusamy
2016-07-01
A carbon nanotube-mixed grinding wheel is used in the grinding process to analyze the surface characteristics of AISI D2 tool steel. Until now, no work has been carried out using a carbon nanotube-based grinding wheel. A carbon nanotube-based grinding wheel has excellent thermal conductivity and good mechanical properties, which are exploited to improve the surface finish of the workpiece. In the present study, the multi-response optimization of process parameters, namely the surface roughness and metal removal rate of the grinding process with single-wall carbon nanotube (CNT)-mixed cutting fluids, is undertaken using an orthogonal array with grey relational analysis. Experiments are performed with the designated grinding conditions obtained using the L9 orthogonal array. Based on the results of the grey relational analysis, a set of optimum grinding parameters is obtained. The significant machining parameters are identified using the analysis-of-variance approach. An empirical model for the prediction of the output parameters has been developed using regression analysis, and the results are compared for grinding with and without the CNT grinding wheel.
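As an illustration of the grey relational analysis step described above, the following Python sketch computes grey relational coefficients and grades for a hypothetical L9 response table with surface roughness (smaller-the-better) and metal removal rate (larger-the-better). The numbers and the distinguishing coefficient of 0.5 are assumptions, not the paper's measurements.

```python
import numpy as np

# Hypothetical L9 results: column 0 = surface roughness Ra (smaller is better),
# column 1 = metal removal rate MRR (larger is better)
responses = np.array([
    [0.82, 4.1], [0.74, 4.8], [0.69, 5.2],
    [0.88, 3.9], [0.71, 5.0], [0.65, 5.6],
    [0.90, 3.7], [0.77, 4.6], [0.70, 5.3],
])

def normalize(col, larger_is_better):
    lo, hi = col.min(), col.max()
    return (col - lo) / (hi - lo) if larger_is_better else (hi - col) / (hi - lo)

norm = np.column_stack([
    normalize(responses[:, 0], larger_is_better=False),
    normalize(responses[:, 1], larger_is_better=True),
])

# Grey relational coefficients against the ideal (normalized) sequence of ones
zeta = 0.5                          # distinguishing coefficient, conventional value
delta = 1.0 - norm                  # deviation from the ideal
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# Grey relational grade: equal weights over the two responses
grade = grc.mean(axis=1)
print("best L9 trial:", int(np.argmax(grade)) + 1, "grades:", np.round(grade, 3))
```

The trial with the highest grade is taken as the optimum parameter combination; unequal weights can be substituted if one response matters more.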
Bridging groundwater models and decision support with a Bayesian network
Fienen, Michael N.; Masterson, John P.; Plant, Nathaniel G.; Gutierrez, Benjamin T.; Thieler, E. Robert
2013-01-01
Resource managers need to make decisions to plan for future environmental conditions, particularly sea level rise, in the face of substantial uncertainty. Many interacting processes factor into the decisions they face. Advances in process models and the quantification of uncertainty have made models a valuable tool for this purpose. Long simulation runtimes and, often, numerical instability make linking process models impractical in many cases. A method for emulating the important connections between model input and forecasts, while propagating uncertainty, has the potential to provide a bridge between complicated numerical process models and the efficiency and stability needed for decision making. We explore this using a Bayesian network (BN) to emulate a groundwater flow model. We expand on previous approaches to validating a BN by calculating forecasting skill using cross validation of a groundwater model of Assateague Island in Virginia and Maryland, USA. This BN emulation was shown to capture the important groundwater-flow characteristics and uncertainty of the groundwater system because of its connection to island morphology and sea level. Forecast power metrics associated with the validation of multiple alternative BN designs guided the selection of an optimal level of BN complexity. Assateague Island is an ideal test case for exploring a forecasting tool based on current conditions because the unique hydrogeomorphological variability of the island includes a range of settings indicative of past, current, and future conditions. The resulting BN is a valuable tool for exploring the response of groundwater conditions to sea level rise in decision support.
Non-equilibrium plasma kinetics of reacting CO: an improved state to state approach
NASA Astrophysics Data System (ADS)
Pietanza, L. D.; Colonna, G.; Capitelli, M.
2017-12-01
Non-equilibrium plasma kinetics of reacting CO for conditions typically met in microwave discharges have been developed based on the coupling of excited state kinetics and the Boltzmann equation for the electron energy distribution function (EEDF). Particular attention is given to the insertion in the vibrational kinetics of a complete set of electron molecule resonant processes linking the whole vibrational ladder of the CO molecule, as well as to the role of Boudouard reaction, i.e. the process of forming CO2 by two vibrationally excited CO molecules, in shaping the vibrational distribution of CO and promoting reaction channels assisted by vibrational excitation (pure vibrational mechanisms, PVM). PVM mechanisms can become competitive with electron impact dissociation processes (DEM) in the activation of CO. A case study reproducing the conditions of a microwave discharge has been considered following the coupled kinetics also in the post discharge conditions. Results include the evolution of EEDF in discharge and post discharge conditions highlighting the role of superelastic vibrational and electronic collisions in shaping the EEDF. Moreover, PVM rate coefficients and DEM ones are studied as a function of gas temperature, showing a non-Arrhenius behavior, i.e. the rate coefficients increase with decreasing gas temperature as a result of a vibrational-vibrational (V-V) pumping up mechanism able to form plateaux in the vibrational distribution function. The accuracy of the results is discussed in particular in connection to the present knowledge of the activation energy of the Boudouard process.
NASA Astrophysics Data System (ADS)
Boutaghane, A.; Bouhadef, K.; Valensi, F.; Pellerin, S.; Benkedda, Y.
2011-04-01
This paper presents results of a theoretical and experimental investigation of the welding arc in the Gas Tungsten Arc Welding (GTAW) and Gas Metal Arc Welding (GMAW) processes. A theoretical model consisting of the simultaneous resolution of the set of conservation equations for mass, momentum, energy and current, Ohm's law and the Maxwell equations is used to predict temperature and current density distributions in argon welding arcs. A current density profile had to be assumed over the surface of the cathode as a boundary condition in order to make the theoretical calculations possible. In the stationary GTAW process, this assumption leads to fair agreement with experimental results reported in the literature, with maximum arc temperatures of ~21 000 K. In contrast to the GTAW process, in the GMAW process the electrode is consumable and non-thermionic, and a realistic boundary condition for the current density is lacking. To establish this crucial boundary condition, namely the current density at the melting anode electrode, an original method is set up to enable the current density to be determined experimentally. A high-speed camera (3000 images/s) is used to obtain the geometrical dimensions of the welding wire used as the anode. Once the total area of the melting anode covered by the arc plasma is determined, the current density at the anode surface can be calculated. For a 330 A arc, the current density at the melting anode surface is found to be 5 × 10^7 A m^-2 for a 1.2 mm diameter welding electrode.
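Assuming the anode current density is obtained simply as the total arc current divided by the plasma-covered area measured from the high-speed images, the reported figures can be checked quickly:

```latex
J = \frac{I}{S}
\quad\Longrightarrow\quad
S = \frac{I}{J} = \frac{330\ \mathrm{A}}{5\times 10^{7}\ \mathrm{A\,m^{-2}}}
  \approx 6.6\times 10^{-6}\ \mathrm{m^{2}} = 6.6\ \mathrm{mm^{2}} .
```

This is roughly six times the ~1.1 mm^2 cross-section of the 1.2 mm wire, which is plausible for plasma wetting the tapered molten tip of the electrode (this interpretation is ours, not stated in the abstract).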
The Outdoor Atmospheric Simulation Chamber of Orleans-France (HELIOS)
NASA Astrophysics Data System (ADS)
Mellouki, A.; Véronique, D.; Grosselin, B.; Peyroux, F.; Benoit, R.; Ren, Y.; Idir, M.
2016-12-01
Atmospheric simulation chambers are among the most advanced tools for investigating atmospheric processes and deriving the physico-chemical parameters required for air quality and climate models. Recently, the ICARE-CNRS at Orléans (France) has set up a new large outdoor simulation chamber, HELIOS. HELIOS is one of the most advanced simulation chambers in Europe. It is one of the largest outdoor chambers and is especially suited to process studies performed under realistic atmospheric conditions. HELIOS is a large hemispherical outdoor simulation chamber (volume of 90 m3) positioned on the top of the ICARE-CNRS building at Orléans (47°50'18.39N; 1°56'40.03E). The chamber is made of FEP film ensuring more than 90 % solar light transmission. The chamber is protected against severe meteorological conditions by a moveable "box" which contains a series of xenon lamps, enabling experiments to be conducted using artificial light. This special design makes HELIOS a unique platform where experiments can be made using both types of irradiation. HELIOS is dedicated mainly to the investigation of chemical processes under different conditions (sunlight, artificial light and dark). The platform allows the same type of experiments to be conducted under both natural and artificial light irradiation. The available large range of complementary and highly sensitive instruments allows investigation of radical chemistry, gas-phase processes and aerosol formation under realistic conditions. The characteristics of HELIOS will be presented as well as the first series of experimental results obtained so far.
Bostrom, Mathias; O'Keefe, Regis
2009-01-01
Understanding the complex cellular and tissue mechanisms and interactions resulting in periprosthetic osteolysis requires a number of experimental approaches, each of which has its own set of advantages and limitations. In vitro models allow for the isolation of individual cell populations and have furthered our understanding of particle-cell interactions; however, they are limited because they do not mimic the complex tissue environment in which multiple cell interactions occur. In vivo animal models investigate the tissue interactions associated with periprosthetic osteolysis, but the choice of species and whether the implant system is subjected to mechanical load or to unloaded conditions are critical in assessing whether these models can be extrapolated to the clinical condition. Rigid analysis of retrieved tissue from clinical cases of osteolysis offers a different approach to studying the biologic process of osteolysis, but it is limited in that the tissue analyzed represents the end-stage of this process and, thus, may not reflect this process adequately. PMID:18612016
Tirumalesh, K; Shivanna, K; Sriraman, A K; Tyagi, A K
2010-04-01
This paper summarizes the findings of a monitoring study to understand the sources and processes affecting the quality of shallow and deep groundwater near a central air conditioning plant site in the Trombay region, making use of physicochemical and biological analyses. All the measured parameters indicate that the groundwater quality is good and within the permissible limits set by the Indian Bureau of Standards (1990). Shallow groundwater is dominantly of Na-HCO3 type, whereas deep groundwater is of Ca-Mg-HCO3 type. The groundwater chemistry is mainly influenced by dissolution of minerals and base-exchange processes. Higher total dissolved solids in shallow groundwater compared with the deeper groundwater indicate faster circulation of groundwater in the deep zone, preferentially through fissures and fractures, whereas groundwater flow is sluggish in the shallow zone. The characteristic ionic ratio values and the absence of bromide indicate that seawater has no influence on the groundwater system.
Williams, Cory A.; Leib, Kenneth J.
2005-01-01
In 2003, the U.S. Geological Survey, in cooperation with Delta County, initiated a study to characterize streamflow gain-loss in a reach of Terror Creek, in the vicinity of a mine-permit area planned for future coal mining. This report describes the methods of the study and includes results from a comparison of two sets of streamflow measurements using tracer techniques following the constant-rate injection method. Two measurement sets were used to characterize the streamflow gain-loss associated with reservoir-supplemented streamflow conditions and with natural base-flow conditions. A comparison of the measurement sets indicates that the streamflow gain-loss characteristics of the Terror Creek study reach are consistent between the two hydrologic conditions evaluated. A substantial streamflow gain occurs between measurement locations 4 and 5 in both measurement sets, and streamflow is lost between measurement locations 5 and 7 (measurement set 1, measurement location 6 not visited) and 5 and 6 (measurement set 2). A comparison of the measurement sets above and below the mine-permit area (measurement locations 3 and 7) shows a consistent loss of 0.37 and 0.31 cubic foot per second (representing 5- and 12-percent streamflow losses normalized to measurement location 3) for measurement sets 1 and 2, respectively. This indicates that similar streamflow losses occur both during reservoir-supplemented and natural base-flow conditions, with a mean streamflow loss of 0.34 cubic foot per second for measurement sets 1 and 2. Findings from a previous investigation support the observed streamflow loss between measurement locations 3 and 7 in this study. The findings from the previous investigation indicate a streamflow loss of 0.59 cubic foot per second occurs between these measurement locations. Statistical testing of the differences in streamflow between measurement locations 3 and 7 indicates that there is a discernible streamflow loss. The p-value of 0.0236 for the parametric paired t-test indicates that there is a 2.36-percent probability of observing a sample mean difference of 0.34 cubic foot per second if the population mean is zero. The p-value of 0.125 for the nonparametric exact Wilcoxon signed rank test indicates that there is a 12.5-percent probability of observing a sample mean difference this large if the population mean is zero. The similarity in streamflow gain-loss between measurement sets indicates that the process controlling streamflow may be the same between the two hydrologic conditions evaluated. Gains between measurement locations 4 and 5 may be related to hyporheic flow from tributaries that were dry during the study. No other obvious sources of surface water were identified during the investigation. The cause for the observed streamflow loss between measurement locations 5 and 6 is unknown but may be related to mapped local faulting, 100 years of coal mining in the area, and aquifer recharge.
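The two statistical tests mentioned can be reproduced generically with scipy; the paired values below are made up for illustration and are not the USGS measurements.

```python
import numpy as np
from scipy import stats

# Illustrative paired streamflow values (cfs) above (loc. 3) and below (loc. 7)
# the mine-permit area; these are invented numbers, not the report's data.
above = np.array([7.4, 2.6, 5.1, 3.8, 6.2])
below = np.array([7.0, 2.3, 4.8, 3.5, 5.9])

diff = above - below
t_stat, p_t = stats.ttest_rel(above, below)   # parametric paired t-test
w_stat, p_w = stats.wilcoxon(above, below)    # signed-rank test (exact for small n)
print(f"mean loss = {diff.mean():.2f} cfs, t-test p = {p_t:.4f}, Wilcoxon p = {p_w:.4f}")
```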
Optimum processing of mammographic film.
Sprawls, P; Kitts, E L
1996-03-01
Underprocessing of mammographic film can result in reduced contrast and visibility of breast structures and an unnecessary increase in radiation dose to the patient. Underprocessing can be caused by physical factors (low developer temperature, inadequate development time, insufficient developer agitation) or chemical factors (developer not optimized for film type; overdiluted, underreplenished, contaminated, or frequently changed developer). Conventional quality control programs are designed to produce consistent processing but do not address the issue of optimum processing. Optimum processing is defined as the level of processing that produces the film performance characteristics (contrast and sensitivity) specified by the film manufacturer. Optimum processing of mammographic film can be achieved by following a two-step protocol. The first step is to set up the processing conditions according to recommendations from the film and developer chemistry manufacturers. The second step is to verify the processing results by comparing them with sensitometric data provided by the film manufacturer.
An extended transfer operator approach to identify separatrices in open flows
NASA Astrophysics Data System (ADS)
Lünsmann, Benedict; Kantz, Holger
2018-05-01
Vortices of coherent fluid volume are considered to have a substantial impact on transport processes in turbulent media. Yet, due to their Lagrangian nature, detecting these structures is highly nontrivial. In this respect, transfer operator approaches have proven to provide useful tools: approximating a possibly time-dependent flow as a discrete Markov process in space and time, information about coherent structures is contained in the operator's eigenvectors and is usually extracted by employing clustering methods. Here, we propose an extended approach that couples surrounding filaments using "mixing boundary conditions" and focuses on the separation of the inner coherent set and the embedding outer flow. The approach refrains from using unsupervised machine learning techniques such as clustering and uses physical arguments by maximizing a coherence ratio instead. We show that this technique improves the reconstruction of separatrices in stationary open flows and succeeds in finding almost-invariant sets in periodically perturbed flows.
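A minimal Python sketch of the generic transfer-operator (Ulam) step is given below: particle positions at two times are binned into boxes, a row-stochastic transition matrix is built, and its subdominant eigenvectors, which separate almost-invariant sets, are inspected. The flow, grid, and eigenvector analysis are illustrative; the paper's mixing boundary conditions and coherence-ratio maximization are not reproduced here.

```python
import numpy as np

def transition_matrix(x0, y0, x1, y1, nbins=20, extent=(0, 1, 0, 1)):
    """Row-stochastic Ulam matrix from particle positions at two times."""
    xe = np.linspace(extent[0], extent[1], nbins + 1)
    ye = np.linspace(extent[2], extent[3], nbins + 1)
    def box(x, y):
        i = np.clip(np.digitize(x, xe) - 1, 0, nbins - 1)
        j = np.clip(np.digitize(y, ye) - 1, 0, nbins - 1)
        return i * nbins + j
    b0, b1 = box(x0, y0), box(x1, y1)
    P = np.zeros((nbins * nbins, nbins * nbins))
    np.add.at(P, (b0, b1), 1.0)
    rowsum = P.sum(axis=1, keepdims=True)
    return np.divide(P, rowsum, out=np.zeros_like(P), where=rowsum > 0)

# Illustrative advection of tracer particles by a steady rotation plus noise
rng = np.random.default_rng(1)
x0, y0 = rng.random(20000), rng.random(20000)
theta = 0.3
xc, yc = x0 - 0.5, y0 - 0.5
x1 = 0.5 + np.cos(theta) * xc - np.sin(theta) * yc + 0.01 * rng.normal(size=x0.size)
y1 = 0.5 + np.sin(theta) * xc + np.cos(theta) * yc + 0.01 * rng.normal(size=y0.size)

P = transition_matrix(x0, y0, x1, y1)
eigvals, eigvecs = np.linalg.eig(P.T)            # left eigenvectors of P
order = np.argsort(-np.abs(eigvals))
second = np.real(eigvecs[:, order[1]]).reshape(20, 20)  # sign structure marks almost-invariant sets
print("leading |eigenvalues|:", np.round(np.abs(eigvals[order[:3]]), 3))
print("second-eigenvector range:", round(second.min(), 3), round(second.max(), 3))
```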
Study regarding the spline interpolation accuracy of the experimentally acquired data
NASA Astrophysics Data System (ADS)
Oanta, Emil M.; Danisor, Alin; Tamas, Razvan
2016-12-01
Experimental data processing is an issue that must be solved in almost all domains of science. In engineering we usually have a large amount of data, and we try to extract the useful signal that is relevant for the phenomenon under investigation. The criteria used to consider some points more relevant than others may take into account various conditions, which may be either phenomenon-dependent or general. The paper presents some of the ideas and tests regarding the identification of the best set of criteria used to filter the initial set of points in order to extract a subset that best fits the approximated function. If the function has regions where it is either constant or slowly varying, fewer discretization points may be used. This leads to a simpler solution for processing the experimental data while keeping the accuracy within fairly good limits.
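A small Python sketch of this kind of point-filtering experiment is given below. The relevance criterion (a sparse backbone plus points where a spline through the backbone misfits the data) and the test signal are our assumptions, not the authors' criteria.

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 201)
y_true = np.where(x < 4.0, 0.0, np.sin(x - 4.0))   # flat region, then oscillation
y = y_true + 0.01 * rng.normal(size=x.size)        # noisy "experimental" samples

# Illustrative criterion: keep every 20th point as a backbone, then add points
# that a spline through the backbone fails to reproduce within 3 sigma.
backbone = np.zeros(x.size, dtype=bool)
backbone[::20] = True
backbone[-1] = True
coarse = CubicSpline(x[backbone], y[backbone])
keep = backbone | (np.abs(y - coarse(x)) > 0.03)

spline_sub = CubicSpline(x[keep], y[keep])
xt = np.linspace(0.0, 10.0, 1000)
rms = np.sqrt(np.mean((spline_sub(xt) - np.where(xt < 4.0, 0.0, np.sin(xt - 4.0))) ** 2))
print(f"kept {keep.sum()} of {x.size} points, RMS error vs. true signal = {rms:.4f}")
```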
Transcriptome profile of Trichoderma harzianum IOC-3844 induced by sugarcane bagasse.
Horta, Maria Augusta Crivelente; Vicentini, Renato; Delabona, Priscila da Silva; Laborda, Prianda; Crucello, Aline; Freitas, Sindélia; Kuroshu, Reginaldo Massanobu; Polikarpov, Igor; Pradella, José Geraldo da Cruz; Souza, Anete Pereira
2014-01-01
Profiling the transcriptome that underlies biomass degradation by the fungus Trichoderma harzianum allows the identification of gene sequences with potential application in enzymatic hydrolysis processing. In the present study, the transcriptome of T. harzianum IOC-3844 was analyzed using RNA-seq technology. The sequencing generated 14.7 Gbp for downstream analyses. De novo assembly resulted in 32,396 contigs, which were submitted for identification and classified according to their identities. This analysis allowed us to define a principal set of T. harzianum genes that are involved in the degradation of cellulose and hemicellulose and the accessory genes that are involved in the depolymerization of biomass. An additional analysis of expression levels identified a set of carbohydrate-active enzymes that are upregulated under different conditions. The present study provides valuable information for future studies on biomass degradation and contributes to a better understanding of the role of the genes that are involved in this process.
Identifying Personal Goals of Patients With Long Term Condition: A Service Design Thinking Approach.
Lee, Eunji; Gammon, Deede
2017-01-01
Care for patients with long-term conditions is often characterized as fragmented and ineffective, and fails to engage the resources of patients and their families in the care process. Information and communication technology can potentially help bridge the gap between patients' lives and resources and the services provided by professionals. However, little attention has been paid to how to identify and incorporate patients' individual needs, values, preferences and care goals into digitally driven care settings. We conducted a case study, applying a service design thinking approach, in which healthcare professionals and patients participated. Drawing on examples from their own experiences, the participants were able to elaborate personal goals of patients with long-term conditions that could potentially be incorporated into digitally driven care plans.
Unsupervised domain adaptation for early detection of drought stress in hyperspectral images
NASA Astrophysics Data System (ADS)
Schmitter, P.; Steinrücken, J.; Römer, C.; Ballvora, A.; Léon, J.; Rascher, U.; Plümer, L.
2017-09-01
Hyperspectral images can be used to uncover physiological processes in plants if interpreted properly. Machine learning methods such as Support Vector Machines (SVM) and Random Forests have been applied to estimate the development of biomass and to detect and predict plant diseases and drought stress. A basic requirement of machine learning is that training and testing are done in the same domain and with the same distribution. Different genotypes, environmental conditions, illumination and sensors violate this requirement in most practical circumstances. Here, we present an approach that enables the detection of physiological processes by transferring the prior knowledge within an existing model into a related target domain where no label information is available. We propose a two-step transformation of the target features, which enables a direct application of an existing model. The transformation is evaluated by an objective function that includes additional prior knowledge about classification and physiological processes in plants. We applied the approach to three sets of hyperspectral images, which were acquired from different plant species in different environments observed with different sensors. It is shown that a classification model derived on one of the sets delivers satisfactory classification results on the transformed features of the other data sets. Furthermore, in all cases early non-invasive detection of drought stress was possible.
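A hedged Python sketch of the general idea follows, using a generic two-step stand-in (band-wise moment matching followed by CORAL-style covariance alignment) before applying a classifier trained in the source domain. This is not the authors' transformation or objective function, and all data are synthetic.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# Synthetic "hyperspectral" features: labeled source domain, unlabeled target domain
n_bands = 50
X_src = rng.normal(size=(400, n_bands))
y_src = (X_src[:, :5].sum(axis=1) > 0).astype(int)     # 1 = stressed, 0 = control (toy labels)
X_tgt = 1.7 * rng.normal(size=(300, n_bands)) + 0.4    # shifted/scaled sensor response

# Model trained in the source domain only
scaler_src = StandardScaler().fit(X_src)
clf = SVC(kernel="rbf").fit(scaler_src.transform(X_src), y_src)

# Step 1: moment matching -- map target features onto source band-wise statistics
X_t1 = (X_tgt - X_tgt.mean(axis=0)) / X_tgt.std(axis=0)
X_t1 = X_t1 * X_src.std(axis=0) + X_src.mean(axis=0)

# Step 2: align second-order structure (whiten target covariance, re-color with source)
def coral(Xs, Xt, eps=1e-3):
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])
    Ct_inv_sqrt = np.linalg.inv(np.linalg.cholesky(Ct)).T
    Cs_sqrt = np.linalg.cholesky(Cs)
    return (Xt - Xt.mean(axis=0)) @ Ct_inv_sqrt @ Cs_sqrt.T + Xs.mean(axis=0)

X_tgt_adapted = coral(X_src, X_t1)
pred = clf.predict(scaler_src.transform(X_tgt_adapted))
print("fraction of target samples classified as stressed:", pred.mean())
```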
Weighted triangulation adjustment
Anderson, Walter L.
1969-01-01
The variation of coordinates method is employed to perform a weighted least-squares adjustment of horizontal survey networks. Geodetic coordinates are required for each fixed and adjustable station. A preliminary inverse geodetic position computation is made for each observed line. Weights associated with each observation equation for direction, azimuth, and distance are applied in the formation of the normal equations in the least-squares adjustment. The number of normal equations that may be solved is twice the number of new stations and less than 150. When the normal equations are solved, shifts are produced at adjustable stations. Previously computed correction factors are applied to the shifts, and a most probable geodetic position is found for each adjustable station. Final azimuths and distances are computed. These may be written onto magnetic tape for subsequent computation of state plane or grid coordinates. Input consists of punch cards containing project identification, program options, and position and observation information. Results listed include preliminary and final positions, residuals, observation equations, the solution of the normal equations showing magnitudes of shifts, and a plot of each adjusted and fixed station. During processing, data sets containing irrecoverable errors are rejected and the type of error is listed. The computer then resumes processing of additional data sets. Other conditions cause warning errors to be issued, and processing continues with the current data set.
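The core computational step, forming and solving the weighted normal equations for the coordinate shifts, can be sketched as follows; the design matrix, observations, and weights below are toy values, not geodetic observation equations.

```python
import numpy as np

def weighted_adjustment(A, b, w):
    """Solve the weighted normal equations (A^T W A) dx = A^T W b for coordinate shifts."""
    W = np.diag(w)
    N = A.T @ W @ A              # normal matrix
    t = A.T @ W @ b              # right-hand side
    dx = np.linalg.solve(N, t)   # shifts at adjustable stations
    v = A @ dx - b               # residuals
    return dx, v

# Toy linearized observation equations: 4 observations, 2 unknown coordinate shifts
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.7, 0.7],
              [1.0, -1.0]])
b = np.array([0.012, -0.008, 0.004, 0.018])   # observed-minus-computed values (m)
w = np.array([4.0, 4.0, 1.0, 2.0])            # weights (e.g., inverse variances)

dx, v = weighted_adjustment(A, b, w)
print("shifts:", np.round(dx, 4), "residuals:", np.round(v, 4))
```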
APNEA list mode data acquisition and real-time event processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogle, R.A.; Miller, P.; Bramblett, R.L.
1997-11-01
The LMSC Active Passive Neutron Examinations and Assay (APNEA) Data Logger is a VME-based data acquisition system using commercial off-the-shelf hardware with application-specific software. It receives TTL inputs from eighty-eight 3He detector tubes and eight timing signals. Two data sets are generated concurrently for each acquisition session: (1) a List Mode recording of all detector and timing signals, timestamped to 3 microsecond resolution; (2) Event Accumulations generated in real time by counting events into short (tens of microseconds) and long (seconds) time bins following repetitive triggers. List Mode data sets can be post-processed to: (1) determine the optimum time bins for TRU assay of waste drums, (2) analyze a given data set in several ways to match different assay requirements and conditions, and (3) confirm assay results by examining details of the raw data. Data Logger events are processed and timestamped by an array of 15 TMS320C40 DSPs and delivered to an embedded controller (PowerPC 604) for interim disk storage. Three acquisition modes, corresponding to different trigger sources, are provided. A standard network interface to a remote host system (Windows NT or SunOS) provides for system control, status, and transfer of previously acquired data. 6 figs.
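The post-processing described in item (1) amounts to binning timestamped events relative to repetitive triggers; a minimal Python sketch with invented event and trigger streams (not the APNEA data format) is shown below.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative list-mode data: event timestamps (s) and repetitive trigger times (s)
triggers = np.arange(0.0, 10.0, 0.5)                   # 2 Hz interrogation pulses
events = np.sort(rng.uniform(0.0, 10.0, size=50000))   # detector hit times

def accumulate(events, triggers, edges):
    """Histogram event times relative to the most recent trigger into the given bin edges."""
    idx = np.searchsorted(triggers, events, side="right") - 1
    valid = idx >= 0
    dt = events[valid] - triggers[idx[valid]]
    counts, _ = np.histogram(dt, bins=edges)
    return counts

short_edges = np.arange(0.0, 2e-3, 50e-6)   # tens-of-microseconds bins after each pulse
long_edges = np.arange(0.0, 0.5, 0.05)      # longer bins spanning the full trigger period
print("short-bin counts:", accumulate(events, triggers, short_edges)[:5])
print("long-bin counts: ", accumulate(events, triggers, long_edges))
```

Re-running the same accumulation with different bin edges over the stored list-mode file is what allows the optimum bins to be chosen after the fact.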
Quantitative Real-Time PCR using the Thermo Scientific Solaris qPCR Assay
Ogrean, Christy; Jackson, Ben; Covino, James
2010-01-01
The Solaris qPCR Gene Expression Assay is a novel type of primer/probe set, designed to simplify the qPCR process while maintaining the sensitivity and accuracy of the assay. These primer/probe sets are pre-designed to >98% of the human and mouse genomes and feature significant improvements over previously available technologies. These improvements were made possible by virtue of a novel design algorithm developed by Thermo Scientific bioinformatics experts. Several convenient features have been incorporated into the Solaris qPCR Assay to streamline the process of performing quantitative real-time PCR. First, the protocol is similar to commonly employed alternatives, so the methods used during qPCR are likely to be familiar. Second, the master mix is blue, which makes setting up the qPCR reactions easier to track. Third, the thermal cycling conditions are the same for all assays (genes), making it possible to run many samples at a time and reducing the potential for error. Finally, the probe and primer sequence information are provided, simplifying the publication process. Here, we demonstrate how to obtain the appropriate Solaris reagents using the GENEius product search feature found on the ordering web site (www.thermo.com/solaris) and how to use the Solaris reagents for performing qPCR using the standard curve method. PMID:20567213
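The standard curve method mentioned at the end reduces to a linear fit of Ct against log10 template quantity; a short Python sketch with hypothetical dilution-series data (not Solaris-specific values) follows.

```python
import numpy as np

# Illustrative standard curve: 10-fold dilution series and measured Ct values
quantity = np.array([1e6, 1e5, 1e4, 1e3, 1e2])     # template copies per reaction
ct_std = np.array([17.1, 20.5, 23.9, 27.2, 30.6])  # hypothetical Ct values

# Fit Ct = slope * log10(quantity) + intercept
slope, intercept = np.polyfit(np.log10(quantity), ct_std, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0            # ~1.0 corresponds to 100% efficiency

# Quantify unknown samples from their Ct values
ct_unknown = np.array([22.4, 25.8])
copies = 10 ** ((ct_unknown - intercept) / slope)
print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}, copies = {copies.round(0)}")
```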
Assessment of applications of transport models on regional scale solute transport
NASA Astrophysics Data System (ADS)
Guo, Z.; Fogg, G. E.; Henri, C.; Pauloo, R.
2017-12-01
Regional-scale transport models are needed to support the long-term evaluation of groundwater quality and to develop management strategies aimed at preventing serious groundwater degradation. The purpose of this study is to evaluate the capacity of previously developed upscaling approaches to accurately describe the main solute transport processes, including the capture of late-time tails under changing boundary conditions. Advective-dispersive contaminant transport in a 3D heterogeneous domain was simulated and used as a reference solution. Equivalent transport under homogeneous flow conditions was then evaluated by applying the Multi-Rate Mass Transfer (MRMT) model. The random walk particle tracking method was used for both the heterogeneous and homogeneous-MRMT scenarios under steady-state and transient conditions. The results indicate that the MRMT model can capture the tails satisfactorily for a plume transported in an ambient steady-state flow field. However, when boundary conditions change, the mass transfer model calibrated for transport under steady-state conditions cannot accurately reproduce the tailing effect observed for the heterogeneous scenario. The deteriorating impact of transient boundary conditions on the upscaled model is more significant in regions where the flow field is strongly affected, highlighting the limited applicability of the MRMT approach in complex field settings. Accurately simulating mass in both mobile and immobile zones is critical to representing the transport process under transient flow conditions and will be the focus of our future work.
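For reference, the single-rate (dual-domain) special case that MRMT generalizes can be written, for 1D advection-dispersion and in generic notation (not necessarily the study's exact formulation), as

```latex
\theta_m \frac{\partial C_m}{\partial t} + \theta_{im} \frac{\partial C_{im}}{\partial t}
  = \theta_m D \frac{\partial^2 C_m}{\partial x^2} - q \frac{\partial C_m}{\partial x},
\qquad
\theta_{im} \frac{\partial C_{im}}{\partial t} = \alpha \left( C_m - C_{im} \right),
```

where subscripts m and im denote the mobile and immobile domains and alpha is the mass-transfer rate. The multi-rate extension replaces the single immobile zone with a spectrum of zones with rates alpha_j whose distribution is calibrated to reproduce the late-time tails.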
NASA Astrophysics Data System (ADS)
Waldo, N.; Moorberg, C.; Waldrop, M. P.; Turetsky, M. R.; Neumann, R. B.
2015-12-01
Wetlands are the largest natural source of methane to the atmosphere, and play a key role in feedback cycles to climate change. In recognition of this, many researchers are developing process-based models of wetland methane emissions at various scales. In these models, the three key biogeochemical reactions are methane production, methane oxidation, and heterotrophic respiration, and they are modeled using Michaelis-Menten kinetics. The majority of Michaelis-Menten rate constants used in models are based on experiments involving slurries of peat incubated in vials. While these slurries provide a highly controlled setting, they are different from in situ conditions in multiple ways; notably they lack live plants and the centimeter-scale heterogeneities that exist in the field. To determine rate constants in a system more representative of in situ conditions, we extracted peat cores intact from a bog and fen located in the Bonanza Creek Experimental Forest near Fairbanks, Alaska and part of the Alaska Peatland Experiment (APEX) research program. Into those cores we injected water with varying concentrations of methane and oxygen at multiple depths. We used planar oxygen sensors installed on the peat cores to collect high resolution, two dimensional oxygen concentration data during the injections and used oxygen consumption rates under various conditions to calculate rate constants. Results were compared to a similar but smaller set of injection experiments conducted against planar oxygen sensors installed in the bog. Results will inform parametrization of microbial processes in wetland models, improving estimates of methane emissions both under current climate conditions and in the future.
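For context, the Michaelis-Menten forms that these rate constants enter are, in generic notation (a dual-substrate form is often used for methane oxidation; the exact formulation varies between models):

```latex
R = V_{\max}\,\frac{C_S}{K_M + C_S},
\qquad
R_{\mathrm{ox}} = V_{\max}^{\mathrm{ox}}\,
  \frac{[\mathrm{CH_4}]}{K_{\mathrm{CH_4}} + [\mathrm{CH_4}]}\;
  \frac{[\mathrm{O_2}]}{K_{\mathrm{O_2}} + [\mathrm{O_2}]},
```

where C_S is the limiting substrate concentration. The oxygen consumption rates measured with the planar sensors constrain V_max and K_M for the oxygen-dependent terms under conditions closer to the field than vial slurries.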
Voice gender identification by cochlear implant users: The role of spectral and temporal resolution
NASA Astrophysics Data System (ADS)
Fu, Qian-Jie; Chinchilla, Sherol; Nogaki, Geraldine; Galvin, John J.
2005-09-01
The present study explored the relative contributions of spectral and temporal information to voice gender identification by cochlear implant users and normal-hearing subjects. Cochlear implant listeners were tested using their everyday speech processors, while normal-hearing subjects were tested under speech processing conditions that simulated various degrees of spectral resolution, temporal resolution, and spectral mismatch. Voice gender identification was tested for two talker sets. In Talker Set 1, the mean fundamental frequency values of the male and female talkers differed by 100 Hz while in Talker Set 2, the mean values differed by 10 Hz. Cochlear implant listeners achieved higher levels of performance with Talker Set 1, while performance was significantly reduced for Talker Set 2. For normal-hearing listeners, performance was significantly affected by the spectral resolution, for both Talker Sets. With matched speech, temporal cues contributed to voice gender identification only for Talker Set 1 while spectral mismatch significantly reduced performance for both Talker Sets. The performance of cochlear implant listeners was similar to that of normal-hearing subjects listening to 4-8 spectral channels. The results suggest that, because of the reduced spectral resolution, cochlear implant patients may attend strongly to periodicity cues to distinguish voice gender.
Interactive processing of contrastive expressions by Russian children.
Sekerina, Irina A; Trueswell, John C
2012-04-05
Children's ability to interpret color adjective noun phrases (e.g., red butterfly) as contrastive was examined in an eyetracking study with 6-year-old Russian children. Pitch accent placement (on the adjective red , or on the noun butterfly ) was compared within a visual context containing two red referents (a butterfly and a fox) when only one of them had a contrast member (a purple butterfly) or when both had a contrast member (a purple butterfly and a grey fox). Contrastiveness was enhanced by the Russian-specific 'split constituent' construction (e.g., Red put butterfly . . .) in which a contrastive interpretation of the color term requires pitch accent on the adjective, with the nonsplit sentences serving as control. Regardless of the experimental manipulations, children had to wait until hearing the noun (butterfly) to identify the referent, even in splits. This occurred even under conditions for which the prosody and the visual context allow adult listeners to infer the relevant contrast set and anticipate the referent prior to hearing the noun (accent on the adjective in 1-Contrast scenes). Pitch accent on the adjective did facilitate children's referential processing, but only for the nonsplit constituents. Moreover, visual contexts that encouraged the correct contrast set (1-Contrast) only facilitated referential processing after hearing the noun, even in splits. Further analyses showed that children can anticipate the reference like adults but only when the contrast set is made salient by the preceding supportive discourse, that is, when the inference about the intended contrast set is provided by the preceding utterance.
NASA Astrophysics Data System (ADS)
Trevisan, L.; Illangasekare, T. H.; Rodriguez, D.; Sakaki, T.; Cihan, A.; Birkholzer, J. T.; Zhou, Q.
2011-12-01
Geological storage of carbon dioxide in deep geologic formations is being considered as a technical option to reduce greenhouse gas loading to the atmosphere. The processes associated with the movement and stable trapping of CO2 are complex in deep, naturally heterogeneous formations. Three primary mechanisms contribute to trapping: capillary entrapment due to immobilization of supercritical CO2 within soil pores, dissolution of CO2 in the formation water, and mineralization. Natural heterogeneity in the formation is expected to affect all three mechanisms. A research project is in progress with the primary goal of improving our understanding of capillary and dissolution trapping during the injection and post-injection processes, focusing on formation heterogeneity. It is expected that this improved knowledge will help to develop site characterization methods targeted at obtaining the most critical parameters that capture the heterogeneity, in order to design strategies and schemes that maximize trapping. This research combines experiments at the laboratory scale with multiphase modeling to upscale the relevant trapping processes to the field scale. This paper presents results from a set of experiments conducted in intermediate-scale test tanks. Intermediate-scale testing provides an attractive alternative for investigating these processes under controlled conditions in the laboratory. Conducting these types of experiments is highly challenging, as methods have to be developed to extrapolate the data from experiments conducted under ambient laboratory conditions to the high temperature and pressure settings of deep geologic formations. We explored the use of a combination of surrogate fluids that have density and viscosity contrasts, solubility, and interfacial tension analogous to those of supercritical CO2-brine in deep formations. The extrapolation approach involves the use of dimensionless numbers such as the capillary number (Ca) and the Bond number (Bo). A set of experiments that captures some of the complexities of geologic heterogeneity and injection scenarios is planned in a 4.8 m long tank. To test the experimental methods and instrumentation, a set of preliminary experiments was conducted in a smaller tank with dimensions of 90 cm x 60 cm. The tank was packed to represent both homogeneous and heterogeneous conditions. Using the surrogate fluids, different injection scenarios were tested. Images of the migrating plume showed the critical role that heterogeneity plays in stable entrapment. Destructive sampling done at the end of the experiments provided data on the final saturation distributions. Preliminary analysis suggests that the entrapment configuration is controlled by the large-scale heterogeneities as well as the pore-scale entrapment mechanisms. The data were used in a modeling analysis that is presented in a companion abstract.
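For reference, one common convention for the dimensionless groups mentioned (the exact choices of length and velocity scales vary between studies) is

```latex
\mathrm{Ca} = \frac{\mu\, v}{\sigma},
\qquad
\mathrm{Bo} = \frac{\Delta\rho\, g\, d^{2}}{\sigma},
```

where mu is the viscosity of the invading fluid, v a characteristic (Darcy) velocity, sigma the interfacial tension, Delta-rho the density contrast between the fluids, g the gravitational acceleration, and d a characteristic length such as a mean grain or pore diameter. Matching Ca and Bo between the surrogate-fluid experiments and the supercritical CO2-brine system is what justifies the extrapolation to reservoir conditions.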
Developing a policy for delegation of nursing care in the school setting.
Spriggle, Melinda
2009-04-01
School nurses are in a unique position to provide care for students with special health care needs in the school setting. The incidence of chronic conditions and improved technology necessitate care of complex health care needs that had formerly been managed in inpatient settings. Delegation is a tool that may be used by registered nurses to allow unlicensed assistive personnel to perform appropriate nursing tasks and activities while keeping in mind that the registered nurse ultimately retains accountability for the delegation. The legal parameters for nursing delegation are defined by State Nurse Practice Acts, State Board of Nursing guidelines, and Nursing Administrative Rules/Regulations. Delegation becomes more challenging when carried out in a non-health care setting. School administrators may not be aware of legal issues related to delegation of nursing care in the school setting. It is crucial for school nurses to have a working knowledge of the delegation process. Development of a specific delegation policy will ensure that delegation is carried out in a manner providing for safe and appropriate care in the school setting.
The influence of perceptual load on age differences in selective attention.
Maylor, E A; Lavie, N
1998-12-01
The effect of perceptual load on age differences in visual selective attention was examined in 2 studies. In Experiment 1, younger and older adults made speeded choice responses indicating which of 2 target letters was present in a relevant set of letters in the center of the display while they attempted to ignore an irrelevant distractor in the periphery. The perceptual load of relevant processing was manipulated by varying the central set size. When the relevant set size was small, the adverse effect of an incompatible distractor was much greater for the older participants than for the younger ones. However, with larger relevant set sizes, this was no longer the case, with the distractor effect decreasing for older participants at lower levels of perceptual load than for younger ones. In Experiment 2, older adults were tested with the empty locations in the central set either unmarked (as in Experiment 1) or marked by small circles to form a group of 6 items irrespective of set size; the 2 conditions did not differ markedly, ruling out an explanation based entirely on perceptual grouping.
Modified Facile Synthesis for Quantitatively Fluorescent Carbon Dots.
Hou, Xiaofang; Hu, Yin; Wang, Ping; Yang, Liju; Al Awak, Mohamad M; Tang, Yongan; Twara, Fridah K; Qian, Haijun; Sun, Ya-Ping
2017-10-01
A simple yet consequential modification was made to the popular carbonization processing of citric acid - polyethylenimine precursor mixtures to produce carbon dots (CDots). The modification consisted primarily of pushing the carbonization processing a little harder at a higher temperature, such as a hydrothermal processing condition of around 330 °C for 6 hours. The CDots thus produced are comparable in spectroscopic and other properties to those obtained in other, more controlled syntheses, including the deliberate chemical functionalization of preprocessed and selected small carbon nanoparticles, demonstrating the consistency in CDots and reaffirming their general definition as carbon nanoparticles with surface passivation by organic or other species. Equally significant is the finding that the modified processing of citric acid - polyethylenimine precursor mixtures could yield CDots of record-setting fluorescence performance, approaching the upper limit of being quantitatively fluorescent. Thus, the reported work serves as a demonstration not only of the need to select the right processing conditions and of the associated opportunities in one-pot syntheses of CDots, but also of the feasibility of pursuing the preparation of quantitatively fluorescent CDots, which represents an important milestone in the development and understanding of these fluorescent carbon nanomaterials.
Tracking composite material damage evolution using Bayesian filtering and flash thermography data
NASA Astrophysics Data System (ADS)
Gregory, Elizabeth D.; Holland, Steve D.
2016-05-01
We propose a method for tracking the condition of a composite part using Bayesian filtering of flash thermography data over the lifetime of the part. In this demonstration, composite panels were fabricated, impacted to induce subsurface delaminations, and loaded in compression over multiple time steps, causing the delaminations to grow in size. Flash thermography data were collected between each damage event to serve as a time history of the part. The flash thermography indicated some areas of damage but provided little additional information as to the exact nature or depth of the damage. Computed tomography (CT) data were also collected after each damage event and provided a high-resolution volume model of damage that acted as truth. After each cycle, the condition estimate from the flash thermography data and the Bayesian filter was compared to this ground truth. The Bayesian process builds on the lifetime history of flash thermography scans and can give better estimates of material condition than the most recent scan alone, which is common practice in the aerospace industry. Bayesian inference provides probabilistic estimates of damage condition that are updated as each new set of data becomes available. The method was tested on simulated data and then on an experimental data set.
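A minimal Python sketch of a discrete-state recursive Bayesian update of damage condition from successive noisy thermography indications is given below; the damage states, growth (transition) model, and likelihood are invented for illustration and are not the authors' filter.

```python
import numpy as np

# Discrete damage states (e.g., delamination-size classes) and an assumed growth model
states = np.array([0, 1, 2, 3])                     # 0 = pristine ... 3 = severe
transition = np.array([[0.85, 0.15, 0.00, 0.00],    # P(next state | current state) per load cycle
                       [0.00, 0.80, 0.20, 0.00],
                       [0.00, 0.00, 0.85, 0.15],
                       [0.00, 0.00, 0.00, 1.00]])

def likelihood(measurement, noise=0.6):
    """P(thermography indication | state): Gaussian around the true class (illustrative)."""
    return np.exp(-0.5 * ((measurement - states) / noise) ** 2)

belief = np.array([1.0, 0.0, 0.0, 0.0])             # start from a pristine panel
indications = [0.2, 0.9, 1.4, 2.6]                  # noisy damage scores after each load step

for z in indications:
    belief = transition.T @ belief                  # predict: propagate through the growth model
    belief = belief * likelihood(z)                 # update: weight by the new thermography evidence
    belief = belief / belief.sum()                  # normalize
    print("P(state):", np.round(belief, 3), "-> MAP state:", int(states[np.argmax(belief)]))
```

Because the belief carries the whole measurement history forward, a single ambiguous scan shifts the estimate less than it would if interpreted in isolation, which is the point the abstract makes about the lifetime history.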
Environmental and genetic factors that contribute to Escherichia coli K-12 biofilm formation
Prüß, Birgit M.; Verma, Karan; Samanta, Priyankar; Sule, Preeti; Kumar, Sunil; Wu, Jianfei; Christianson, David; Horne, Shelley M.; Stafslien, Shane J.; Wolfe, Alan J.; Denton, Anne
2010-01-01
Biofilms are communities of bacteria whose formation on surfaces requires a large portion of the bacteria's transcriptional network. To identify environmental conditions and the transcriptional regulators that contribute to sensing these conditions, we used a high-throughput approach to monitor biofilm biomass produced by an isogenic set of Escherichia coli K-12 strains grown under combinations of environmental conditions. Of the environmental combinations tested, growth in tryptic soy broth at 37°C supported the most biofilm production. To analyze the complex relationships between the diverse cell surface organelles, transcriptional regulators, and metabolic enzymes represented by the tested mutant set, we used a novel vector-item pattern-mining algorithm. The algorithm related biofilm amounts to the functional annotations of each mutated protein. The pattern with the best statistical significance was the gene ontology term 'pyruvate catabolic process,' which is associated with enzymes of acetate metabolism. Phenotype microarray experiments illustrated that carbon sources that are metabolized to acetyl-coenzyme A, acetyl phosphate, and acetate are particularly supportive of biofilm formation. Scanning electron microscopy revealed structural differences between mutants that lack acetate metabolism enzymes and their parent, and confirmed the quantitative differences. We conclude that acetate metabolism functions as a metabolic sensor, transmitting changes in environmental conditions to biofilm biomass and structure. PMID:20559621
Myths and realities about the recovery of L׳Aquila after the earthquake
Contreras, Diana; Blaschke, Thomas; Kienberger, Stefan; Zeil, Peter
2014-01-01
There is a set of myths linked to the recovery of L'Aquila, such as: the L'Aquila recovery has come to a halt, it is still in an early recovery phase, and there is economic stagnation. The objective of this paper is threefold: (a) to identify and develop a set of spatial indicators for the case of L'Aquila, (b) to test the feasibility of a numerical assessment of these spatial indicators as a method to monitor the progress of a recovery process after an earthquake, and (c) to answer the question of whether the recovery process in L'Aquila stagnates or not. We hypothesize that, after an earthquake, the spatial distribution of expert-defined variables can constitute an index to assess the recovery process more objectively. In this article, we aggregated several indicators of building condition to characterize the physical dimension, and we developed building-use indicators to serve as proxies for the socio-economic dimension, while aiming for transferability of the approach. The methodology of this research entailed six steps: (1) fieldwork, (2) selection of a sampling area, (3) selection of the variables and indicators for the physical and socio-economic dimensions, (4) analysis of the recovery progress using spatial indicators, by comparing the changes in the restricted core area as well as in building use over time; (5) selection and integration of the results through expert weighting; and (6) determining hotspots of recovery in L'Aquila. Eight categories of building condition and twelve categories of building use were identified. Both indicators, building condition and building use, are aggregated into a recovery index. The reconstruction process in the city center of L'Aquila seems to stagnate, which is reflected by the following five variables: the percentage of buildings with ongoing reconstruction, partial reconstruction, projected reconstruction, residential building use, and transport facilities. These five factors were still at low levels within the core area in 2012. Nevertheless, we can conclude that the recovery process in L'Aquila did not come to a halt but is still ongoing, albeit slowly. PMID:26779431
How and what do medical students learn in clerkships? Experience based learning (ExBL).
Dornan, Tim; Tan, Naomi; Boshuizen, Henny; Gick, Rachel; Isba, Rachel; Mann, Karen; Scherpbier, Albert; Spencer, John; Timmins, Elizabeth
2014-12-01
Clerkship education has been called a 'black box' because so little is known about what, how, and under which conditions students learn. Our aim was to develop a blueprint for education in ambulatory and inpatient settings, and in single encounters, traditional rotations, or longitudinal experiences. We identified 548 causal links between conditions, processes, and outcomes of clerkship education in 168 empirical papers published over 7 years and synthesised a theory of how students learn. They do so when they are given affective, pedagogic, and organisational support. Affective support comes from doctors' and many other health workers' interactions with students. Pedagogic support comes from informal interactions and modelling as well as doctors' teaching, supervision, and precepting. Organisational support comes from every tier of a curriculum. Core learning processes of observing, rehearsing, and contributing to authentic clinical activities take place within triadic relationships between students, patients, and practitioners. The phrase 'supported participation in practice' best describes the educational process. Much of the learning that results is too tacit, complex, contextualised, and individual to be defined as a set of competencies. We conclude that clerkship education takes place within relationships between students, patients, and doctors, supported by informal, individual, contextualised, and affective elements of the learned curriculum, alongside formal, standardised elements of the taught and assessed curriculum. This research provides a blueprint for designing and evaluating clerkship curricula as well as helping patients, students, and practitioners collaborate in educating tomorrow's doctors.
Lessons Learned from Participatory Design in Dementia Care: Placing Care Partners at the Centre.
Hendriks, Niels; Slegers, Karin; Wilkinson, Andrea
2017-01-01
In this paper we analyze the participatory design (PD) process of a health information technology (HIT) project. This project, AToM, was situated in dementia care and involved partners from academia, industry and care. The analysis specifically focuses on the role of the care partners in the PD process. We will show that the conditions to enable 'good participatory design' were not fully met, and we present a set of actions to prevent this in future HIT projects. Central to our recommended approach is placing the care partners at the centre of the PD project.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pritychenko, B., E-mail: pritychenko@bnl.go; Mughaghab, S.F.; Sonzogni, A.A.
We have calculated the Maxwellian-averaged cross sections and astrophysical reaction rates of the stellar nucleosynthesis reactions (n, γ), (n, fission), (n, p), (n, α), and (n, 2n) using the ENDF/B-VII.0, JEFF-3.1, JENDL-3.3, and ENDF/B-VI.8 evaluated nuclear reaction data libraries. These four major nuclear reaction libraries were processed under the same conditions for Maxwellian temperatures (kT) ranging from 1 keV to 1 MeV. We compare our current calculations of the s-process nucleosynthesis nuclei with previous data sets and discuss the differences between them and the implications for nuclear astrophysics.
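For clarity, the standard definition of the Maxwellian-averaged cross section at thermal energy kT, evaluated from a library cross section σ(E), is the following textbook relation (added here for reference, not quoted from the abstract):

\[
\langle \sigma \rangle_{kT} \;=\; \frac{2}{\sqrt{\pi}}\,\frac{1}{(kT)^{2}} \int_{0}^{\infty} \sigma(E)\, E \, e^{-E/kT}\, dE ,
\]

with the corresponding astrophysical reaction rate obtained as \( N_{A}\langle \sigma v \rangle = N_{A}\, v_{T}\, \langle \sigma \rangle_{kT} \), where \( v_{T} = \sqrt{2kT/\mu} \) is the thermal velocity and \( \mu \) the reduced mass of the neutron-nucleus system.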
NASA Astrophysics Data System (ADS)
Blokhin, A. M.; Kruglova, E. A.; Semisalov, B. V.
2018-03-01
A hydrodynamical model is used to describe the process of charge transport in semiconductors with a high degree of reliability. It is a set of nonlinear partial differential equations with small parameters and specific conditions at the boundaries of field-effect transistors (FETs), which essentially complicates the search for its stationary solutions. To overcome these difficulties in the case of FETs with elements having different dielectric properties, a fast pseudospectral method has been developed. This method was used for advanced numerical simulation of charge transport in a DG-MOSFET.
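As a minimal illustration of the pseudospectral idea (not the authors' solver for the hydrodynamical model), a Chebyshev collocation differentiation matrix approximates spatial derivatives with spectral accuracy; the toy below differentiates a smooth test function on [-1, 1].

```python
# Toy Chebyshev pseudospectral differentiation (illustrative only; not the
# authors' solver for the semiconductor charge-transport model).
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix D and collocation points x (Trefethen's recipe)."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

D, x = cheb(16)
u = np.exp(x) * np.sin(5 * x)                                 # smooth test function
du_exact = np.exp(x) * (np.sin(5 * x) + 5 * np.cos(5 * x))
print("max error:", np.max(np.abs(D @ u - du_exact)))         # spectrally small
```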
Analysis of dangerous area of single berth oil tanker operations based on CFD
NASA Astrophysics Data System (ADS)
Shi, Lina; Zhu, Faxin; Lu, Jinshu; Wu, Wenfeng; Zhang, Min; Zheng, Hailin
2018-04-01
Taking a single oil tanker berthed and handling liquid cargo as the research object, we analyzed the theory of VOC diffusion during single-berth tanker operations, built a mesh model of VOC diffusion with the Gambit preprocessor, set up the simulation boundary conditions, and used the Fluent software to simulate how the VOC concentration at five detection points changes with time under specific influencing factors. We then delineated the dangerous area of single-berth oil tanker operations from the simulated VOC diffusion, so as to ensure the safe operation of the tanker.
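A rough sense of how a VOC concentration field evolves in time from a point source can be conveyed by a one-dimensional advection-diffusion toy model; this sketch is not the Gambit/Fluent simulation described above, and the wind speed, diffusivity, emission rate, and threshold are assumed values.

```python
# Toy 1-D advection-diffusion of a VOC concentration field (illustrative only;
# the study used a Gambit mesh and the Fluent solver, not this sketch).
import numpy as np

nx, L = 200, 100.0            # grid points, domain length [m]
dx = L / nx
u, D = 1.0, 0.5               # wind speed [m/s], eddy diffusivity [m^2/s] (assumed)
dt = 0.4 * min(dx / u, dx * dx / (2 * D))   # stable explicit time step

c = np.zeros(nx)
source_idx = 10               # hypothetical emission point near the berth
for step in range(2000):
    c[source_idx] += 1.0 * dt                            # constant emission rate (assumed)
    dcdx = (c - np.roll(c, 1)) / dx                      # upwind advection
    d2cdx2 = (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
    c = c + dt * (-u * dcdx + D * d2cdx2)
    c[0] = 0.0                                           # clean inflow boundary

danger = np.where(c > 0.25 * c.max())[0] * dx            # crude "dangerous area" threshold
print(f"Concentration exceeds threshold over {danger.min():.0f}-{danger.max():.0f} m downwind")
```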
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bley, D.C.; Cooper, S.E.; Forester, J.A.
ATHEANA, a second-generation Human Reliability Analysis (HRA) method, integrates advances in psychology with engineering, human factors, and Probabilistic Risk Analysis (PRA) disciplines to provide an HRA quantification process and PRA modeling interface that can accommodate and represent human performance in real nuclear power plant events. The method uses the characteristics of serious accidents identified through retrospective analysis of serious operational events to set priorities in a search process for significant human failure events, unsafe acts, and error-forcing context (unfavorable plant conditions combined with negative performance-shaping factors). ATHEANA has been tested in a demonstration project at an operating pressurized water reactor.
Zurawska Vel Grajewska, Blandyna; Sim, Eun-Jin; Hoenig, Klaus; Herrnberger, Bärbel; Kiefer, Markus
2011-11-03
Cognitive control can be adapted flexibly according to the conflict level in a given situation. In the Eriksen flanker task, interference evoked by flankers is larger in conditions with a higher, rather than a lower, proportion of compatible trials. Such compatibility ratio effects also occur for stimuli presented at two spatial locations, suggesting that different cognitive control settings can be maintained simultaneously. However, the conditions and the neural correlates of this flexible adaptation of cognitive control are only poorly understood. In the present study, we further elucidated the mechanisms underlying the simultaneous maintenance of two cognitive control settings. In behavioral experiments, stimuli were presented centrally above and below fixation, and hence processed by both hemispheres, or lateralized to stimulate the hemispheres differentially. The different compatibility ratios at the two stimulus locations had a differential influence on the flanker effect in both experiments. In an fMRI experiment, blocks with an identical compatibility ratio at two central spatial locations elicited stronger activity in a network of prefrontal and parietal brain areas, which are known to be involved in conflict resolution and cognitive control, as compared with blocks with a different compatibility ratio at the same spatial locations. This demonstrates that the simultaneous maintenance of two conflicting control settings vs. one single setting does not recruit additional neural circuits, suggesting the involvement of a single cognitive control system. Instead, crosstalk between multiple control settings renders adaptation of cognitive control more efficient when only one uniform control setting, rather than two different settings, has to be maintained simultaneously. Copyright © 2011 Elsevier B.V. All rights reserved.
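A minimal sketch of how the flanker effect could be computed separately for each stimulus location (and hence each compatibility-ratio context) from trial-level reaction times; the column names and numbers are hypothetical, not the study's data.

```python
# Hypothetical computation of the flanker effect (incompatible minus compatible
# mean RT) per stimulus location / compatibility-ratio block. Illustrative data.
import pandas as pd

trials = pd.DataFrame({
    "location":      ["upper", "upper", "upper", "lower", "lower", "lower"],
    "compatibility": ["compatible", "incompatible", "compatible",
                      "compatible", "incompatible", "incompatible"],
    "rt_ms":         [430, 495, 440, 450, 540, 535],
})

mean_rt = trials.groupby(["location", "compatibility"])["rt_ms"].mean().unstack()
flanker_effect = mean_rt["incompatible"] - mean_rt["compatible"]
print(flanker_effect)   # a larger effect would be expected at the mostly-compatible location
```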
Grieve, Sharon; Perez, Roberto SGM; Birklein, Frank; Brunner, Florian; Bruehl, Stephen; Harden, R Norman; Packham, Tara; Gobeil, Francois; Haigh, Richard; Holly, Janet; Terkelsen, Astrid; Davies, Lindsay; Lewis, Jennifer; Thomassen, Ilona; Connett, Robyn; Worth, Tina; Vatine, Jean-Jacques; McCabe, Candida S
2017-01-01
Complex Regional Pain Syndrome (CRPS) is a persistent pain condition that remains incompletely understood and challenging to treat. Historically, a wide range of different outcome measures have been used to capture the multidimensional nature of CRPS. This has been a significant limiting factor in the advancement of our understanding of the mechanisms and management of CRPS. In 2013, an international consortium of patients, clinicians, researchers and industry representatives was established to develop and agree on a minimum core set of standardised outcome measures for use in future CRPS clinical research, including but not limited to clinical trials within adult populations. The development of a core measurement set was informed through workshops and supplementary work, using an iterative consensus process. ‘What is the clinical presentation and course of CRPS, and what factors influence it?’ was agreed as the most pertinent research question that our standardised set of patient-reported outcome measures should be selected to answer. The domains encompassing the key concepts necessary to answer the research question were agreed as: pain, disease severity, participation and physical function, emotional and psychological function, self-efficacy, catastrophizing and patient's global impression of change. The final core measurement set included the optimum generic or condition-specific patient-reported questionnaire outcome measures, which captured the essence of each domain, and one clinician-reported outcome measure to capture the degree of severity of CRPS. The next step is to test the feasibility and acceptability of collecting outcome measure data using the core measurement set in the CRPS population internationally. PMID:28178071
Hill-Climbing Theories of Learning
1987-12-01
process continues as long as new instances are encountered. In some cases, a constrained state generator replaces the evaluation function, producing the...instance, our model represents a particular animal (say a cat) as a set of eight cylinders - representing the head, neck, torso, tail, and four legs. The...variety of conditions. Figure 1 summarizes an experiment in which we ’defined’ four classes - cats, dogs, horses, and giraffes - with different amounts of
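Because only fragments of this report's abstract survive, the following generic hill-climbing loop is offered purely to illustrate learning as search guided by an evaluation function; it is not reconstructed from the report itself.

```python
# Generic hill-climbing over a one-dimensional hypothesis space (illustrative
# only; not reconstructed from the cited report).
import random

def evaluate(h):
    """Toy evaluation function: how well hypothesis h fits the target concept."""
    return -(h - 7.3) ** 2          # peak at h = 7.3

def hill_climb(start, step=0.5, iterations=200):
    current = start
    for _ in range(iterations):
        candidate = current + random.choice([-step, step])   # neighbouring state
        if evaluate(candidate) > evaluate(current):           # keep only improvements
            current = candidate
    return current

print(round(hill_climb(start=0.0), 2))   # converges near the peak at 7.3
```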
Littoral Sediment Budget for the Mississippi Sound Barrier Islands
2012-07-01
Sound are driven by longshore transport processes associated with storm and normal wave and current conditions. Although beach erosion and washover...from storm impacts (Figure 1.1). Figure 1.1. High-altitude imagery of the northern Gulf of Mexico between New Orleans, LA and Pensacola, FL...increasing storm damage. A comprehensive evaluation of storm impacts requires analysis of historical shoreline and bathymetry data sets to document the
Spérandio, Mathieu; Pocquet, Mathieu; Guo, Lisha; Ni, Bing-Jie; Vanrolleghem, Peter A; Yuan, Zhiguo
2016-03-01
Five activated sludge models describing N2O production by ammonium-oxidising bacteria (AOB) were compared against four different long-term process data sets. Each model considers one of the two known N2O production pathways in AOB, namely the AOB denitrification pathway and the hydroxylamine oxidation pathway, with specific kinetic expressions. Satisfactory calibration could be obtained in most cases, but none of the models was able to describe all the N2O data obtained in the different systems with a similar parameter set. Variability of the parameters can be attributed to undescribed local concentration heterogeneities, physiological adaptation of micro-organisms, a microbial population switch, or regulation between multiple AOB pathways. This variability could also reflect a dependence of the N2O production pathways on the nitrite (or free nitrous acid, FNA) concentration and other operational conditions in the different systems. This work gives an overview of the potential and the limits of single-AOB-pathway models. By indicating under which conditions each single-pathway model is likely to explain the experimental observations, it will also facilitate future work on models in which the two main N2O pathways active in AOB are represented together.
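The "specific kinetic expressions" mentioned above are typically Monod-type saturation products; the sketch below shows a generic rate expression of that kind, with made-up parameters, and is not the exact kinetics of any of the five compared models.

```python
# Generic Monod-type rate expression of the kind used in AOB N2O pathway models
# (illustrative only; parameter names and values are assumptions).
def n2o_production_rate(q_max, s_nh4, s_o2, s_hno2, K_nh4, K_o2, K_hno2, x_aob):
    """Volumetric N2O production rate with three saturation terms."""
    return (q_max
            * s_nh4 / (K_nh4 + s_nh4)       # ammonium limitation
            * s_o2 / (K_o2 + s_o2)          # oxygen dependence
            * s_hno2 / (K_hno2 + s_hno2)    # nitrite / free nitrous acid dependence
            * x_aob)                         # AOB biomass concentration

# Example with made-up parameter values.
print(n2o_production_rate(q_max=0.02, s_nh4=10, s_o2=1.5, s_hno2=0.001,
                          K_nh4=1.0, K_o2=0.5, K_hno2=0.002, x_aob=500))
```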
Zoffmann, Vibeke; Hörnsten, Åsa; Storbækken, Solveig; Graue, Marit; Rasmussen, Bodil; Wahl, Astrid; Kirkevold, Marit
2016-03-01
Person-centred care [PCC] can engage people in living well with a chronic condition. However, translating PCC into practice is challenging. We aimed to compare the translational potentials of three approaches: motivational interviewing [MI], illness integration support [IIS] and guided self-determination [GSD]. The comparative analysis included eight components: (1) philosophical origin; (2) development in the original clinical setting; (3) theoretical underpinnings; (4) overarching goal and supportive processes; (5) general principles, strategies or tools for engaging people; (6) health care professionals' background and training; (7) fidelity assessment; and (8) reported effects. Although all approaches promoted autonomous motivation, they differed in other ways. Their original settings explain why IIS and GSD strive for life-illness integration, whereas MI focuses on managing ambivalence. IIS and GSD were based on grounded theories, whereas MI was developed intuitively. All three apply processes and strategies to advance professionals' communication skills and engagement; GSD also includes context-specific reflection sheets. All offer training programs; MI and GSD include fidelity tools. Each approach has a primary application: MI, when ambivalence threatens positive change; IIS, when integrating newly diagnosed chronic conditions; and GSD, when problem solving is difficult or deadlocked. Professionals must critically consider the context in their choice of approach. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Deng, Yun; Hajilou, Tarlan; Barnoush, Afrooz
2017-06-01
To evaluate hydrogen (H)-induced embrittlement in iron aluminium intermetallics, especially the one with the stoichiometric composition of 50 at.% Al, a novel in situ micro-cantilever bending test was applied within an environmental scanning electron microscope (ESEM), which provides both full process monitoring and a clean, in situ H-charging condition. Two sets of cantilevers were analysed in this work: one set of un-notched cantilevers, and the other with focused ion beam-milled notches lying on two crystallographic planes: (010) and (110). The cantilevers were tested under two environmental conditions: vacuum (approximately 5 × 10⁻⁴ Pa) and ESEM (450 Pa water vapour). Crack initiation at stress-concentrated locations and propagation to catastrophic failure were observed when cantilevers were tested in the presence of H, while no cracking occurred when tested in vacuum. Both the bending strength of the un-notched beams and the fracture toughness of the notched beams were reduced under H exposure. The hydrogen embrittlement (HE) susceptibility was found to be orientation dependent: the (010) crystallographic plane was more susceptible to HE than the (110) plane. This article is part of the themed issue 'The challenges of hydrogen and metals'.
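As background (textbook beam mechanics, not taken from the article), the bending strength of an un-notched micro-cantilever and a conditional fracture toughness of a notched one are commonly estimated as

\[
\sigma_{b} \;=\; \frac{6\,F\,L}{b\,W^{2}}, \qquad
K_{Q} \;=\; \frac{F\,L}{b\,W^{3/2}}\; f\!\left(\frac{a}{W}\right),
\]

where \(F\) is the load at failure, \(L\) the distance from the notch (or fixed end) to the loading point, \(b\) the cantilever width, \(W\) its thickness, \(a\) the notch depth, and \(f(a/W)\) a dimensionless geometry factor whose form depends on the particular notched-cantilever calibration adopted.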